[pve-devel] [PATCH manager v4 1/3] api ceph osd: add OSD index, metadata and lv-info
Alwin Antreich
alwin at antreich.com
Wed Dec 7 18:23:58 CET 2022
December 7, 2022 2:22 PM, "Aaron Lauterer" <a.lauterer at proxmox.com> wrote:
> On 12/7/22 12:15, Alwin Antreich wrote:
>
>> Hi,
>>
>> December 6, 2022 4:47 PM, "Aaron Lauterer" <a.lauterer at proxmox.com> wrote:
>>
>>> To get more details for a single OSD, we add two new endpoints:
>>>
>>> * nodes/{node}/ceph/osd/{osdid}/metadata
>>> * nodes/{node}/ceph/osd/{osdid}/lv-info
>> As an idea for a different name for lv-info, `nodes/{node}/ceph/osd/{osdid}/volume`? :)
>
> Could be done, as you would expect to get overall physical volume infos from it, right? So that
> the endpoint won't change once the underlying technology changes?
Yes. It sounds clearer to me, as LV could mean something different. :P
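
For illustration, querying the new endpoints would then look roughly like this (just a sketch; the
parameter name for selecting the volume, --type, is an assumption on my part):

`pvesh get /nodes/<node>/ceph/osd/<osdid>/metadata`
`pvesh get /nodes/<node>/ceph/osd/<osdid>/lv-info --type db`
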
> [...]
>
>>> Possible volumes are:
>>> * block (default value if not provided)
>>> * db
>>> * wal
>>>
>>> 'ceph-volume' is used to gather the infos, except for the creation time
>>> of the LV which is retrieved via 'lvs'.
>> You could use lvs/vgs directly, the ceph osd relevant infos are in the lv_tags.
>
> IIRC, and I looked at it again, mapping the OSD ID to the associated LV/VG would be a manual lookup
> via /var/lib/ceph/osd/ceph-X/block, which is a symlink to the LV/VG.
> So yeah, would be possible, but I think a bit more fragile should something change (as unlikely as
> it is) in comparison to using ceph-volume.
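True, that manual lookup would be something along these lines (the example output is only meant as
an illustration):

`readlink /var/lib/ceph/osd/ceph-<id>/block`

which usually points to something like /dev/ceph-<vg uuid>/osd-block-<lv uuid>.
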
The lv_tags already show the ID (ceph.osd_id=<id>). And I just see that `ceph-volume lvm list
<id>` also exists, which is definitely faster than listing all OSDs.
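Roughly (just a sketch, the jq filter is only an illustration):

`ceph-volume lvm list <id> --format json`
`lvs -o lv_name,vg_name,lv_time,lv_tags --reportformat=json | jq '.report[].lv[] | select(.lv_tags | contains("ceph.osd_id=<id>"))'`

The lv_time column should also give you the creation time in the same call.
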
> I don't expect these API endpoints to be run all the time, and am therefore okay if they are a bit
> more expensive regarding computation resources.
>
>> `lvs -o lv_all,vg_all --reportformat=json`
>> `vgs -o vg_all,pv_all --reportformat=json`
>> Why do you want to expose the lv-info?
>
> Why not? The LVs are the only thing I found for an OSD that contains some hint as to when it was
> created. Adding more general infos such as VG and LV for a specific OSD can help users understand
> where the actual data is stored, without digging even deeper into how things are handled
> internally and how it is mapped.
In my experience this data is only useful if you want to handle the OSD on the CLI. Hence my
question about the use-case. :)
The metadata, on the other hand, displays all disks, sizes and more of an OSD. Then, for example,
you can display the DB/WAL devices in the UI and how big the DB partition is.
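On the CLI that is roughly what you get from the following (take the field selection as a sketch,
the exact field names depend on the setup and Ceph release):

`ceph osd metadata <osdid> | jq '{devices, bluefs_db_devices, bluefs_db_size, bluestore_bdev_size, osd_objectstore}'`

which should be enough to show a separate DB/WAL device and its size in the UI.
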
Cheers,
Alwin