[pve-devel] [PATCH manager v4 1/3] api ceph osd: add OSD index, metadata and lv-info

Alwin Antreich alwin at antreich.com
Fri Dec 9 16:28:23 CET 2022


December 9, 2022 3:05 PM, "Aaron Lauterer" <a.lauterer at proxmox.com> wrote:

> On 12/7/22 18:23, Alwin Antreich wrote:
> 
>> December 7, 2022 2:22 PM, "Aaron Lauterer" <a.lauterer at proxmox.com> wrote:
>>> On 12/7/22 12:15, Alwin Antreich wrote:
>>> 
> 
> [...]
> 
>>>>> 'ceph-volume' is used to gather the infos, except for the creation time
>>>>> of the LV which is retrieved via 'lvs'.
>>>> 
>>>> You could use lvs/vgs directly, the ceph osd relevant infos are in the lv_tags.
>>> 
>>> IIRC, and I looked at it again, mapping the OSD ID to the associated LV/VG would be a manual lookup
>>> via /var/lib/ceph/osd/ceph-X/block which is a symlink to the LV/VG.
>>> So yeah, it would be possible, but I think a bit more fragile should something change (as unlikely as
>>> it is) in comparison to using ceph-volume.
>> 
>> The lv_tags already shows the ID (ceph.osd_id=<id>). And I just see that `ceph-volume lvm list
>> <id>` also exists; that is definitely faster than listing all OSDs.
> 
> Ok I see now what you meant with the lv tags. I'll think about it. Adding the OSD ID to the
> ceph-volume call is definitely a good idea in case we stick with it.
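As a rough illustration of consuming the `ceph-volume lvm list <id> --format json` output mentioned above, here is a minimal sketch; the JSON shape (OSD IDs as top-level keys, each mapping to a list of volumes with a `tags` dict) and the field names are assumptions for illustration, not a guaranteed schema:

```python
import json

# Trimmed sample in the assumed shape of `ceph-volume lvm list 0 --format json`
# (field names are illustrative; verify against a real cluster).
sample = '''
{
  "0": [
    {
      "type": "block",
      "lv_name": "osd-block-abc",
      "vg_name": "ceph-vg",
      "lv_path": "/dev/ceph-vg/osd-block-abc",
      "tags": {"ceph.osd_id": "0", "ceph.cluster_name": "ceph"}
    }
  ]
}
'''

def volumes_for_osd(report: str, osd_id: int):
    """Return (lv_path, vg_name) pairs for one OSD from the JSON report."""
    data = json.loads(report)
    return [(v["lv_path"], v["vg_name"]) for v in data.get(str(osd_id), [])]

print(volumes_for_osd(sample, 0))  # → [('/dev/ceph-vg/osd-block-abc', 'ceph-vg')]
```

Passing the OSD ID to `ceph-volume` keeps the call scoped to one OSD instead of enumerating all of them, which is the speed-up discussed above.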
> 
>>> I don't expect these API endpoints to be run all the time, and am therefore okay if they are a bit
>>> more expensive regarding computation resources.
>> 
>> `lvs -o lv_all,vg_all --reportformat=json`
>> `vgs -o vg_all,pv_all --reportformat=json`
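To sketch how such an `lvs` JSON report could be consumed directly (LVM wraps results in a top-level `report` list; the `ceph.osd_id=<id>` tag is the one mentioned earlier in the thread, the sample values are made up):

```python
import json

# Sample in the shape of `lvs -o lv_name,lv_tags --reportformat=json` output.
sample = '''
{
  "report": [
    {
      "lv": [
        {"lv_name": "osd-block-abc", "lv_tags": "ceph.osd_id=0,ceph.cluster_name=ceph"},
        {"lv_name": "data", "lv_tags": ""}
      ]
    }
  ]
}
'''

def parse_tags(tags: str) -> dict:
    """Split an lv_tags string like 'ceph.osd_id=0,...' into a dict."""
    return dict(t.split("=", 1) for t in tags.split(",") if "=" in t)

def osd_lvs(report: str) -> dict:
    """Map OSD id -> LV name for all LVs carrying a ceph.osd_id tag."""
    out = {}
    for rep in json.loads(report)["report"]:
        for lv in rep.get("lv", []):
            tags = parse_tags(lv["lv_tags"])
            if "ceph.osd_id" in tags:
                out[tags["ceph.osd_id"]] = lv["lv_name"]
    return out

print(osd_lvs(sample))  # → {'0': 'osd-block-abc'}
```

This avoids the manual symlink lookup through /var/lib/ceph/osd/ceph-X/block, at the cost of depending on the tag names staying stable.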
>>>> Why do you want to expose the lv-info?
>>> 
>>> Why not? The LVs are the only thing I found for an OSD that contain some hint to when it was
>>> created. Adding more general infos such as VG and LV for a specific OSD can help users understand
>>> where the actual data is stored. And that without digging even deeper into how things are handled
>>> internally and how it is mapped.
>> 
>> In my experience this data is only useful if you want to handle the OSD on the CLI. Hence my
>> question about the use-case. :)
>> The metadata on the other hand displays all disks, sizes and more of an OSD. Then, for example, you
>> can display DB/WAL devices in the UI and how big the DB partition is.
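As a sketch of that UI idea: pulling the DB device and its size out of `ceph osd metadata <id>` output. The key names used below (`bluefs_dedicated_db`, `bluefs_db_dev_node`, `bluefs_db_size`) are assumptions based on typical BlueStore metadata; verify them against a real cluster before relying on them:

```python
import json

# Trimmed sample of `ceph osd metadata <id>` output; key names are
# assumptions for illustration, not a guaranteed schema.
sample = '''
{
  "id": 0,
  "bluefs_dedicated_db": "1",
  "bluefs_db_dev_node": "/dev/sdb",
  "bluefs_db_size": "32212254720",
  "bluestore_bdev_size": "536870912000"
}
'''

def db_summary(metadata: str) -> str:
    """Human-readable DB device summary for one OSD."""
    md = json.loads(metadata)
    if md.get("bluefs_dedicated_db") != "1":
        return "no dedicated DB device"
    size_gib = int(md["bluefs_db_size"]) / 1024**3
    return f'{md["bluefs_db_dev_node"]} ({size_gib:.0f} GiB)'

print(db_summary(sample))  # → /dev/sdb (30 GiB)
```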
> 
> Did you look at the rest of the patches, or gave it a try on a test cluster? Quite a bit of the
> metadata for each device is shown. With additional infos of the underlying volume. I think it is
> nice, as it can make it a bit easier to know where to look for the correct volumes when CLI
> interaction on a deeper level is needed.

No, I only had a glimpse at the patches so far. :)

It would be nice to have this information at `pveceph osd list`.

> 
> If you see something that should be added as well, let me know :)
+1

Cheers,
Alwin



