[PVE-User] Does proxmox fiddle with the ceph host bucket?

Lindsay Mathieson lindsay.mathieson at gmail.com
Wed Dec 31 00:27:03 CET 2014

As per the subject :)

I was experimenting with setting up an SSD-only pool as a preliminary step to 
setting up a cache tier.

I added the osd to ceph.conf:

    host = vnb

And added it to the crush map with "host=vnb-ssd" to stop it being added to 
the default ruleset:

  ceph osd crush add osd.2 0 host=vnb-ssd root=ssd
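
For reference, the full sequence was roughly the following (a sketch rather 
than an exact transcript - the add-bucket/move steps are the standard way to 
create the ssd root and vnb-ssd host bucket named above):

    # Create a separate root and a per-host SSD bucket in the CRUSH map
    ceph osd crush add-bucket ssd root
    ceph osd crush add-bucket vnb-ssd host
    ceph osd crush move vnb-ssd root=ssd

    # Add the OSD with weight 0 under the new host bucket, outside the
    # default root, so no data gets rebalanced onto it yet
    ceph osd crush add osd.2 0 host=vnb-ssd root=ssd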

That worked, though the OSD list in the GUI changed and stopped showing 
vnb/osd.0 & vng/osd.1.

A pain, but liveable with.

However, within a short while (minutes) osd.2 was moved from "vnb-ssd" back to 
"vnb", which placed it in the default ruleset.

Fortunately it had a weight of zero, otherwise I could have been looking at a 
long recovery.

The same thing happened with osd.3, which I had set up the same way.

I don't believe ceph ever does this on its own - could proxmox be setting the 
ceph host bucket values from ceph.conf?
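
One possibility I haven't confirmed: ceph itself can move an OSD back to 
host=<hostname> when it starts, via the "osd crush update on start" option 
(on by default), so it may not be proxmox at all. If that's the cause, 
something like this in ceph.conf should pin it (a sketch under that 
assumption, not a tested fix):

    [osd]
    # assumed workaround: stop OSDs from rewriting their own CRUSH
    # location on startup
    osd crush update on start = false

    [osd.2]
    host = vnb
    # or instead, pin an explicit location for this OSD:
    osd crush location = root=ssd host=vnb-ssd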
