[pve-devel] applied: [PATCH manager v3] fix #4631: ceph: osd: create: add osds-per-device
Thomas Lamprecht
t.lamprecht at proxmox.com
Mon Nov 6 18:27:15 CET 2023
On 23/08/2023 at 11:44, Aaron Lauterer wrote:
> Allow automatically creating multiple OSDs per physical device. The
> main use case is fast NVMe drives that would be bottlenecked by a
> single OSD service.
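>
> As a rough usage sketch (not quoted from the patch; the exact CLI
> syntax is assumed from the parameter name in the subject), creating
> several OSDs on one fast NVMe drive could then look like this:
> ```
> # create 4 OSDs on a single NVMe device (parameter name taken from
> # the patch subject, invocation syntax assumed)
> pveceph osd create /dev/nvme0n1 --osds-per-device 4
> ```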
>
> By using the 'ceph-volume lvm batch' command instead of 'ceph-volume
> lvm create' when placing multiple OSDs on one device, we don't have to
> handle splitting the drive ourselves.
>
> This means, however, that the parameters to specify a DB or WAL device
> won't work, as the 'batch' command doesn't use them. Dedicated DB and
> WAL devices don't make much sense anyway if we place the OSDs on fast
> NVMe drives.
>
> Some other changes to how the command is built were needed as well, as
> the 'batch' command needs the path to the disk as a positional argument,
> not as '--data /dev/sdX'.
> We drop the '--cluster-fsid' parameter because the 'batch' command
> doesn't accept it; the creation then falls back to reading the fsid
> from the ceph.conf file.
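>
> A hedged before/after sketch of the resulting calls (argument details
> as described above, device name and placeholders assumed):
> ```
> # single OSD per device (previous behaviour)
> ceph-volume lvm create --cluster-fsid <fsid> --data /dev/sdX
> # multiple OSDs per device: positional device path after '--',
> # no --cluster-fsid (read from ceph.conf instead)
> ceph-volume lvm batch --osds-per-device <n> -- /dev/sdX
> ```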
>
> Removal of OSDs works as expected without any code changes. As long as
> there are other OSDs on a disk, the VG & PV won't be removed, even if
> 'cleanup' is enabled.
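>
> For reference (command name assumed, not part of this patch), removal
> keeps using the existing path:
> ```
> # destroying one of the OSDs leaves the shared VG/PV in place as long
> # as other OSDs still live on the same disk
> pveceph osd destroy <osdid> --cleanup
> ```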
>
> The '--no-auto' parameter is used to avoid the following deprecation
> warning:
> ```
> --> DEPRECATION NOTICE
> --> You are using the legacy automatic disk sorting behavior
> --> The Pacific release will change the default to --no-auto
> --> passed data devices: 1 physical, 0 LVM
> --> relative data size: 0.3333333333333333
> ```
>
> Signed-off-by: Aaron Lauterer <a.lauterer at proxmox.com>
> ---
>
> changes since v2:
> * removed check for fsid
> * rework ceph-volume call to place the positional devpath parameter
> after '--'
>
> PVE/API2/Ceph/OSD.pm | 35 +++++++++++++++++++++++++++++------
> 1 file changed, 29 insertions(+), 6 deletions(-)
>
>
applied, thanks!