[pve-devel] [PATCH storage 1/2] Fix: 1542 - show storage utilization per pool, not per global

Alwin Antreich a.antreich at proxmox.com
Thu Nov 16 10:05:39 CET 2017


On Wed, Nov 15, 2017 at 07:42:57PM +0100, Alexandre DERUMIER wrote:
> >>Wouldn't that be totally confusing if you have several pools with different
> >>replication factor?
>
> Personally, I don't think it'll be confusing.
> Currently you need to do the math manually, and you don't know the replication factor without manually calling the ceph API or commands.
>
> ----- Original message -----
> From: "dietmar" <dietmar at proxmox.com>
> To: "aderumier" <aderumier at odiso.com>, "pve-devel" <pve-devel at pve.proxmox.com>
> Sent: Wednesday, 15 November 2017 17:21:05
> Subject: Re: [pve-devel] [PATCH storage 1/2] Fix: 1542 - show storage utilization per pool, not per global
>
> Wouldn't that be totally confusing if you have several pools with different
> replication factor?
>
> > Could it be possible to use replication factor from pool config to display
> > used and free space ?
> >
> > (/3 if size=3 for example)
> >
> > We didn't do it before, but I think this was because we monitor the full
> > storage,
> > now that it's only for a specific pool, I think it can make sense.
>
> _______________________________________________
> pve-devel mailing list
> pve-devel at pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel

The replication factor only tells you how much raw_space the pool is
using on the cluster. The max_available space is calculated by Ceph itself; it takes the
replication rule (which may cover only part of the cluster) and the other pools residing on the same OSDs into account.

Furthermore, in a degraded state your raw_space_used and max_available shrink, but your actual data_size does not;
the pool's object_copies and objects_degraded counters reflect this. This is why we need the curr_object_copies_rate, see the calculation below.

Taken from Ceph's PGMap.cc:
curr_object_copies_rate = (float)(sum.num_object_copies - sum.num_objects_degraded) / sum.num_object_copies;
used = sum.num_bytes * curr_object_copies_rate;
used /= used + avail;

== Example ==
# ceph pg dump -f json-pretty
    "poolid": 6,
    "stat_sum": {
        "num_bytes": 1098907660,
        "num_object_copies": 789,
        "num_objects_degraded": 0,

# ceph df -f json-pretty
"pools": [
    {
        "name": "default",
        "id": 6,
        "stats": {
            "kb_used": 1073153,
            "bytes_used": 1098907660,
            "max_avail": 52436236773,
            "objects": 263

# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    161G      148G       13678M          8.25
POOLS:
    NAME        ID     USED      %USED     MAX AVAIL     OBJECTS
    default     6      1048M      2.05        50007M         263
    test2       8      5144M      6.42        75010M        1286

# ceph osd pool ls detail
pool 6 'default' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 168 flags hashpspool stripe_width 0
pool 8 'test2' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 177 flags hashpspool stripe_width 0
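As a quick sanity check (a sketch, not code from the patch), plugging the numbers from the example output above into the PGMap.cc formula reproduces the 2.05 %USED that `ceph df` reports for pool 'default':

```python
# Values taken from the example output above (pool 'default', id 6)
num_bytes = 1098907660          # stat_sum.num_bytes from `ceph pg dump`
num_object_copies = 789         # stat_sum.num_object_copies
num_objects_degraded = 0        # stat_sum.num_objects_degraded
max_avail = 52436236773         # stats.max_avail from `ceph df -f json-pretty`

# Same calculation as in Ceph's PGMap.cc
curr_object_copies_rate = (num_object_copies - num_objects_degraded) / num_object_copies
used = num_bytes * curr_object_copies_rate
percent_used = used / (used + max_avail)

print(round(percent_used * 100, 2))  # 2.05, matching %USED in `ceph df`
```

With objects degraded, curr_object_copies_rate drops below 1, which scales the used bytes down to match the reduced raw_space_used and max_avail.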

Luminous (or Kraken, I don't know) introduced the percent_used key. The code falls back to the above calculation only if that key doesn't exist.
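That fallback can be sketched as follows (a minimal illustration, not the actual pve-storage Perl code; the helper name pool_percent_used is hypothetical, percent_used is the key from newer Ceph releases):

```python
def pool_percent_used(pool_stats, stat_sum, max_avail):
    """Prefer the percent_used key provided by newer Ceph releases
    (Luminous); fall back to the PGMap.cc-style calculation otherwise.

    pool_stats -- the pool's 'stats' dict from `ceph df -f json-pretty`
    stat_sum   -- the pool's 'stat_sum' dict from `ceph pg dump -f json-pretty`
    max_avail  -- the pool's max_avail in bytes
    """
    if 'percent_used' in pool_stats:
        return pool_stats['percent_used']
    copies = stat_sum['num_object_copies']
    rate = (copies - stat_sum['num_objects_degraded']) / copies
    used = stat_sum['num_bytes'] * rate
    return used / (used + max_avail)
```

Using the example numbers for pool 'default' above, the fallback branch yields roughly 0.0205 (2.05 %).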
