[pve-devel] [PATCH manager v3 1/2] api: ceph: add applications of each pool to the lspools endpoint

Stefan Sterz s.sterz at proxmox.com
Fri Oct 21 09:59:50 CEST 2022


On 10/21/22 09:04, Thomas Lamprecht wrote:
> On 21/10/2022 at 08:57, Stefan Sterz wrote:
>>> out of interest: how expensive is this, did you check the overhead?
>>>
>> do you want a specific metric? in my (admittedly small) test setup
>> (a three-vm cluster with 4 cores and 4 GiB of RAM) it is barely
>> noticeable. the api call takes between 18 and 25 ms in both cases for me.
>>
> 
> I mean, with a handful of pools you probably won't (or really should not)
> see any difference >50 ms, which would make this a hard case to argue.
> 
> Just wondered if many pools (e.g., [10, 100, 1000]) actually increase this
> O(n) or if that falls in the noise of the rados monitor query overhead.
> 
> I don't expect that is significant, just wondered if you already checked
> and got some info on that.

ok, so from what i can tell this is probably O(n), as it iterates once
over all pools, but that info should be in memory and not too bad imo
(and since this is lspools, there are other calls here that are likely
O(n) as well).

however, even if this rados command takes too long, it will time out
after 5 seconds and then no applications will be included in the
response, which is just the previous behavior and imo "safe". a rough
sketch of that fallback is below.
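
to sketch what that fallback could look like, here is a rough python
version (using python-rados instead of the perl bindings pve-manager
actually uses; pulling the applications out of 'osd dump' is just an
assumption for illustration and not necessarily what the patch does):

import json
import rados

def get_pool_applications(cluster: rados.Rados) -> dict:
    # ask the monitor for the osd map; the 5 second timeout mirrors the
    # fallback described above
    cmd = json.dumps({'prefix': 'osd dump', 'format': 'json'})
    try:
        ret, outbuf, _outs = cluster.mon_command(cmd, b'', timeout=5)
    except rados.Error:
        # timed out (or failed otherwise): keep the previous behavior
        # and simply report no applications instead of failing lspools
        return {}
    if ret != 0:
        return {}
    osd_dump = json.loads(outbuf)
    # map pool name -> list of enabled applications
    return {
        pool['pool_name']: list(pool.get('application_metadata', {}))
        for pool in osd_dump.get('pools', [])
    }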

some more detail below:

technically, one may argue this call is in O(n*m*k), with n being the
number of pools, m the number of applications per pool and k the number
of metadata keys per application. but m and k are very likely very
small, if not zero (e.g. for a default rbd pool m is one and k is
zero), so in practice this is more like O(n).

at least if i am reading this right:
https://github.com/smithfarm/ceph/blob/v17.0.0/src/mon/OSDMonitor.cc#L6876-L6956
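
to make the n/m/k nesting concrete, here is a toy python model (not the
actual ceph code behind that link, just the shape of the iteration; the
example application_metadata values mimic what a default rbd pool and a
cephfs data pool would report):

# n pools, each with m applications, each with k metadata keys.
# a default rbd pool looks like {'rbd': {}}, i.e. m == 1 and k == 0.
example_pools = {
    'rbd-pool': {'rbd': {}},                        # m=1, k=0
    'cephfs.data': {'cephfs': {'data': 'cephfs'}},  # m=1, k=1
}

def dump_applications(pools):
    lines = []
    for pool_name, apps in pools.items():        # n iterations
        for app, metadata in apps.items():       # m iterations
            if not metadata:
                lines.append(f'{pool_name}/{app}')
            for key, value in metadata.items():  # k iterations
                lines.append(f'{pool_name}/{app}: {key}={value}')
    return lines

print('\n'.join(dump_applications(example_pools)))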





