[PVE-User] Ceph df
Alwin Antreich
alwin at antreich.com
Wed Feb 2 05:43:32 CET 2022
I missed sending this to the list as well. Answers inline.
On February 1, 2022 2:59:59 PM GMT+01:00, "Сергей Цаболов" <tsabolov at t8.ru> wrote:
>Hello Alwin,
>
>In this post
>https://forum.proxmox.com/threads/ceph-octopus-upgrade-notes-think-twice-before-enabling-auto-scale.80105/#post-399654
>
>I read about *setting the target ratio to 1 and calling it a day*; in my
>case I set the target ratio of vm.pool to 1:
>
>ceph osd pool autoscale-status
>POOL                   SIZE    TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  EFFECTIVE RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
>device_health_metrics  22216k  500.0G       2.0   106.4T        0.0092                                 1.0   8                   on
>vm.pool                2734G                3.0   106.4T        0.0753  1.0000        0.8180           1.0   512                 on
>cephfs_data            0                    2.0   106.4T        0.0000  0.2000        0.1636           1.0   128                 on
>cephfs_metadata        27843k  500.0G       2.0   106.4T        0.0092                                 4.0   32                  on
>
>What do you think, do I need to set a target ratio on cephfs_metadata &
>device_health_metrics?
```
TARGET RATIO, if present, is the ratio of storage that the administrator has specified that they expect this pool to consume relative to other pools with target ratios set. If both target size bytes and ratio are specified, the ratio takes precedence.
```
From the ceph docs. [0]
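As a rough sanity check of the EFFECTIVE RATIO column above (this is my assumption about how the autoscaler derives it, not something taken from your output): the capacity already claimed via TARGET SIZE comes off the top, and the remainder is split among the pools that have a target ratio, normalised over their sum:
```
# assumed derivation: the two 500G TARGET SIZEs (at rate 2.0) each take
# roughly 0.0092 of the raw 106.4T; the rest is split 1.0 : 0.2
echo "scale=6; (1 - 0.0092 - 0.0092) * 1.0 / (1.0 + 0.2)" | bc   # ~0.8180 -> vm.pool
echo "scale=6; (1 - 0.0092 - 0.0092) * 0.2 / (1.0 + 0.2)" | bc   # ~0.1636 -> cephfs_data
```
Those come out at the values shown in your table.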
>
>For the cephfs_data pool I set the target ratio to 0.2.
>
>Or does the target ratio on vm.pool need to be more than *1*?
That depends on the kind of usage you're expecting and on the ratios set on the other pools. See above and the docs. [0]
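If you do decide to adjust them, a minimal sketch (pool names taken from your autoscale-status output above, the values are only placeholders):
```
# set/adjust the target ratio on a pool; per the docs quote above, the ratio
# takes precedence over a TARGET SIZE set on the same pool
ceph osd pool set vm.pool target_size_ratio 1.0
ceph osd pool set cephfs_data target_size_ratio 0.2

# then re-check what the autoscaler makes of it
ceph osd pool autoscale-status
```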
Cheers
Alwin
[0] https://docs.ceph.com/en/latest/rados/operations/placement-groups/