[PVE-User] Ceph freeze upgrading from Hammer to Jewel - lessons learned

Ronny Aasen ronny+pve-user at aasen.cx
Fri Oct 19 11:22:48 CEST 2018

On 10/19/18 10:05 AM, Eneko Lacunza wrote:
> Hi all,
> Yesterday we performed a Ceph upgrade in a 3-node Proxmox 4.4 cluster, 
> from Hammer to Jewel following the procedure in the wiki:
> https://pve.proxmox.com/wiki/Ceph_Hammer_to_Jewel
> It went smoothly for the first two nodes, but we had a serious problem 
> with the 3rd: when shutting down the OSDs on that node, only one 
> registered as "down" in the Ceph monitors, although all three OSDs on 
> that node were effectively down (no process running!).
> The OSDs couldn't be started back up directly, because we had to chown 
> the data files/directories, which took quite long (about 1 hour), so VMs 
> trying to read from and write to those 2 phantom OSDs just froze.
> We had downtime, but no data was lost, and we managed to recover 
> everything back to working condition as soon as the chown command 
> finished :)
> * Lessons learned:
> I think the procedure described in the wiki could be improved so that it 
> instructs you to first stop the Ceph mons and OSDs, and only after that 
> perform apt-get update && apt-get upgrade. This way it's possible to 
> restart the OSDs if this (bug?) or any other problem surfaces, without 
> performing the long chown work.

You are probably finished with your upgrade, but answering for the 
archives anyway.

Normally with Ceph you upgrade the daemons in this order: mons first, 
then mgrs, then OSDs (then MDS/RGW if you have them).

Use something like "watch ceph versions" to be 100% certain all daemons 
of a given type are upgraded before starting on the next service.
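A rough sketch of that check (note: the "ceph versions" subcommand only 
exists on newer releases; on Hammer/Jewel-era clusters you can query 
each daemon instead):

```shell
# On newer releases: summary of running daemon versions, refreshed live
watch -n 5 ceph versions

# On older releases: ask each daemon directly
ceph tell mon.* version
ceph tell osd.* version
```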

When Ceph nodes have dedicated tasks this is not a problem, but if you 
have mons and OSDs on the same machine, you have to take extra care: 
you need to apt-get upgrade the packages first, without restarting the 
services (on all mon machines), and then restart the mon service on all 
mons so you have a working quorum with the new version everywhere.
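A sketch of that mon phase, assuming a Debian-based node with systemd 
units (unit names are illustrative for your release):

```shell
# 1) Upgrade packages on ALL mon nodes first, without restarting daemons
apt-get update && apt-get dist-upgrade

# 2) Only afterwards, restart the mon on each node, one at a time
systemctl restart ceph-mon@$(hostname -s)

# 3) Confirm quorum is re-established before touching the next mon
ceph quorum_status
```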

Then restart the mgr service on all mgrs.

Only then do you start restarting the OSD services, one at a time.
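Sketched out, with "noout" set so the cluster does not start rebalancing 
while each OSD is briefly down (the OSD id 0 here is illustrative):

```shell
ceph osd set noout

systemctl restart ceph-osd@0
# Wait until "ceph -s" shows all PGs active+clean again,
# then move on to the next OSD
ceph -s

# After the last OSD has been restarted:
ceph osd unset noout
```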

If you do an upgrade where you have to change the file ownership, I 
would recommend restarting the OSDs as the root user first, with the
"setuser match path = /var/lib/ceph/osd/$cluster-$id"
config option in ceph.conf.
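As a ceph.conf fragment, that looks something like:

```
[osd]
# Keep OSDs running as root as long as the on-disk files under the
# matched path are still owned by root; remove once everything is
# chowned to the ceph user
setuser match path = /var/lib/ceph/osd/$cluster-$id
```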

Then, when the whole migration is done and the cluster is stable, I 
would take down the OSDs one at a time and do the chown part, and remove 
the config option once all OSDs run as the ceph user.
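Per OSD, that migration is roughly (OSD id 0 and the default data path 
are illustrative):

```shell
systemctl stop ceph-osd@0
chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
systemctl start ceph-osd@0
# Once every OSD runs as the ceph user, drop the
# "setuser match path" line from ceph.conf
```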

kind regards
Ronny Aasen
