[PVE-User] Ceph server manageability issue in upgraded PVE 6 Ceph Server

Eneko Lacunza elacunza at binovo.es
Wed Aug 21 14:37:35 CEST 2019


Hi all,

I'm reporting here an issue that I think should be handled somehow by 
Proxmox, maybe with extended migration notes.

Starting point:
- Proxmox 5.4 cluster with Ceph Server. Proxmox nodes have 1 SSD + 3 
HDD. System and Ceph OSD journals (filestore or bluestore db) are on the 
SSD.

This starting point can be achieved with standard Proxmox installation 
and GUI.

After migrating that cluster to Proxmox 6, we adapted the OSDs created 
with ceph-disk (internally, by Proxmox) to ceph-volume, as per 
https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus#Restart_the_OSD_daemon_on_all_nodes

Everything works OK.
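For reference, the adaptation described in that wiki page boils down to 
(from memory, so double-check against the wiki before running) roughly:

```shell
# Scan the existing ceph-disk OSDs and persist their metadata
# (/etc/ceph/osd/*.json) so ceph-volume can take them over
ceph-volume simple scan

# Enable the systemd units so the OSDs are started via ceph-volume
# from now on instead of the old ceph-disk udev machinery
ceph-volume simple activate --all
```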

The issue appears when an OSD disk fails and has to be replaced (or when 
we want to add a new OSD). OSD removal works fine, but creating the 
replacement OSD fails:

- The GUI says the SSD device is in use and has no LVM on it, and 
refuses to continue (sorry, I didn't take a screenshot).
- Trying from the CLI (sdb is the data device, sdd the SSD holding the 
OS and journals):

# pveceph createosd /dev/sdb -db_dev /dev/sdd
device '/dev/sdd' is already in use and has no LVM on it

- Trying from the CLI after manually creating a GPT partition:

# pveceph createosd /dev/sdb -db_dev /dev/sdd4
unable to get device info for '/dev/sdd4' for type db_dev

- Finally, one has to resort to native ceph tooling:

# ceph-volume lvm prepare --data /dev/sdb --block.db /dev/sdd4
[...]
--> ceph-volume lvm prepare successful for: /dev/sdb
# ceph-volume lvm activate --all
[...]
--> ceph-volume lvm activate successful for osd ID: 3

This works. So unless I have missed something along the way, we can no 
longer manage Proxmox Ceph Server OSD replacements/additions with the 
Proxmox GUI or CLI, at least not on the existing SSD used for 
journals/DBs.

I tried to find information on what LVM structures ceph-volume needs on 
a journal disk, but was unable to find any.

Is there a way to manually configure such an SSD (create a partition and 
VG, whatever is needed) so that the Proxmox Ceph tooling can create 
further journals/DBs on it?
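In case it helps the discussion, this is the kind of layout I would 
guess is expected: an LVM PV/VG on the SSD partition, with one LV per 
DB device, which ceph-volume then accepts directly. The VG/LV names and 
sizes below are purely illustrative, not something I found documented:

```shell
# Hypothetical sketch: carve a partition for DB devices and put LVM on it
sgdisk -n 4:0:+60G /dev/sdd      # new partition sdd4 (size is an example)
pvcreate /dev/sdd4
vgcreate ceph-db /dev/sdd4
lvcreate -L 30G -n db-osd3 ceph-db

# Then point ceph-volume at the LV instead of the raw partition
ceph-volume lvm prepare --data /dev/sdb --block.db ceph-db/db-osd3
ceph-volume lvm activate --all
```

Whether pveceph/the GUI would then also recognize and reuse that VG is 
exactly what I'm unsure about.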

Thanks a lot
Eneko

-- 
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943569206
Astigarraga bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
www.binovo.es
