[PVE-User] Ceph freeze upgrading from Hammer to Jewel - lessons learned

Eneko Lacunza elacunza at binovo.es
Fri Oct 19 12:56:02 CEST 2018


Hi Ronny,

On 19/10/18 at 11:22, Ronny Aasen wrote:
> On 10/19/18 10:05 AM, Eneko Lacunza wrote:
>> Hi all,
>>
>> Yesterday we performed a Ceph upgrade in a 3-node Proxmox 4.4 
>> cluster, from Hammer to Jewel following the procedure in the wiki:
>> https://pve.proxmox.com/wiki/Ceph_Hammer_to_Jewel
>>
>> It went smoothly for the first two nodes, but we hit a serious problem 
>> with the 3rd: when shutting down the OSDs on that node, only one 
>> registered as "down" with the Ceph monitors, although all three OSDs on 
>> that node were effectively down (no process running!).
>>
>> The OSDs couldn't be restarted right away, because we had to chown the 
>> data files/directories first and that took quite long (about 1 hour), so 
>> VMs trying to read from and write to those 2 phantom OSDs simply froze.
>>
>> We had downtime but no data was lost, and we got everything back to 
>> working condition as soon as the chown command finished :)
>>
>> * Lessons learned:
>>
>> I think the procedure described in the wiki could be improved so that it 
>> instructs to stop the Ceph mons and OSDs first, and only after that to 
>> perform apt-get update && apt-get upgrade. This way it's possible to 
>> restart the OSDs if this bug(?) happens or any other problem surfaces, 
>> without going through the long chown work first.
>
> You are probably finished with your upgrade, but I'm answering for 
> googleability.
>
>
> normally with Ceph you upgrade in this order:
> mons
> mgrs
> OSDs
> MDS
> RGW
> clients
>
While this is generally true, it was not specifically advised in the Ceph 
release notes:
https://ceph.com/geen-categorie/v10-2-0-jewel-released/

It says: "There are no major compatibility changes since Infernalis. Simply 
upgrading the daemons on each host and restarting all daemons is sufficient."

So I don't think the wiki entry I referenced is wrong on this point.
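
To make the suggestion concrete, the per-node order I have in mind would be 
roughly this (just a sketch; the exact service commands depend on whether 
the old "service ceph" init script or systemd units manage the daemons on 
your nodes):

   # stop the mon and the OSDs on this node while still on Hammer
   service ceph stop
   # from another node, check that the stopped OSDs really show as down;
   # if they don't (as happened to us), they can simply be started again,
   # since the Hammer packages are still installed
   ceph osd tree
   # only now pull in the Jewel packages
   apt-get update && apt-get upgrade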

> use something like "watch ceph versions" to be 100% certain all 
> servers of a given type are upgraded before starting on the next service.
>
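
Good tip for the archives. Note that on Hammer/Jewel there is no "ceph 
versions" command yet (it only appeared in Luminous); something like this 
can serve the same purpose (a sketch):

   # report the version of every OSD daemon, cluster-wide
   ceph tell osd.* version
   # report a mon's version via its admin socket (run on each mon node;
   # <id> is the mon's name from ceph.conf, e.g. 0, 1 or 2 here)
   ceph daemon mon.<id> version
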
> when ceph nodes have dedicated tasks, this is not a problem. But if 
> you have mons and OSDs on the same machine, you have to take extra 
> care: you need to apt-get upgrade the packages first, without restarting 
> the services (on all mon machines),
> then restart the mon service on all mons so you have a working 
> quorum with all of them on the new version.
>
> then restart the mgr service on all mgrs
>
> only then do you start restarting the OSD services, one at a time.
>
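
For the record, on nodes that run both a mon and OSDs that would look 
roughly like this (a sketch; there is no mgr daemon before Luminous, and 
the restart commands depend on whether the init script or systemd units 
are in use):

   # on every node: install the new packages, daemons keep running
   apt-get update && apt-get upgrade
   # restart one mon at a time, waiting for quorum in "ceph -s" in between
   service ceph restart mon.<id>
   # only when all mons run the new version, restart one OSD at a time
   service ceph restart osd.<id>
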
> if you do the upgrade where you have to change the file ownership, I 
> would recommend restarting as the root user first with the
> "setuser match path = /var/lib/ceph/osd/$cluster-$id"
> config option in ceph.conf,
>
> then when the whole migration is done and the cluster is stable, I 
> would take down one OSD at a time and do the chown part, and remove the 
> config option once all OSDs run as the ceph user.
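
For anyone else doing this, that workaround would look roughly like this 
(a sketch; adjust the section and paths to your setup; the Jewel release 
notes use $type instead of hard-coding osd):

   # in ceph.conf, so the Jewel OSDs keep running as root for now:
   [osd]
        setuser match path = /var/lib/ceph/osd/$cluster-$id

   # later, once the cluster is stable, one OSD at a time (<id> is a
   # placeholder for the OSD number):
   service ceph stop osd.<id>
   chown -R ceph:ceph /var/lib/ceph/osd/ceph-<id>
   service ceph start osd.<id>
   # remove the setuser match path option again when all OSDs run as ceph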

This is all generally good advice, but always look at the release notes. 
That's why we used the wiki procedure, and I'm reporting the issue we found! :-)

Cheers
Eneko

-- 
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943569206
Astigarraga bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
www.binovo.es



