[pve-devel] [PATCH docs 1/2] docs: ceph: explain pool options

Dylan Whyte d.whyte at proxmox.com
Fri Jan 15 15:05:58 CET 2021


>     On 15.01.2021 14:17 Alwin Antreich <a.antreich at proxmox.com> wrote:
> 
> 
>     Signed-off-by: Alwin Antreich <a.antreich at proxmox.com>
>     ---
>     pveceph.adoc | 45 ++++++++++++++++++++++++++++++++++++++-------
>     1 file changed, 38 insertions(+), 7 deletions(-)
> 
>     diff --git a/pveceph.adoc b/pveceph.adoc
>     index fd3fded..42dfb02 100644
>     --- a/pveceph.adoc
>     +++ b/pveceph.adoc
>     @@ -466,12 +466,16 @@ WARNING: **Do not set a min_size of 1**. A replicated pool with min_size of 1
>     allows I/O on an object when it has only 1 replica which could lead to data
>     loss, incomplete PGs or unfound objects.
> 
>     -It is advised to calculate the PG number depending on your setup, you can find
>     -the formula and the PG calculator footnote:[PG calculator
>     -https://ceph.com/pgcalc/] online. From Ceph Nautilus onwards it is possible to
>     -increase and decrease the number of PGs later on footnote:[Placement Groups
>     -{cephdocs-url}/rados/operations/placement-groups/].
>     +It is advisable to calculate the PG number depending on your setup. You can
>     +find the formula and the PG calculator footnote:[PG calculator
>     +https://ceph.com/pgcalc/] online. Ceph Nautilus and newer, allow to increase
>     +and decrease the number of PGs footnoteref:[placement_groups,Placement Groups
> 
s/Ceph Nautilus and newer, allow to/Ceph Nautilus and newer allow you to/
or "Ceph Nautilus and newer allow you to change the number of PGs", depending on whether you want "increase and decrease" to be clear or not.

>     +{cephdocs-url}/rados/operations/placement-groups/] later on.
> 
s/later on/after setup/

>     +In addition to manual adjustment, the PG autoscaler
>     +footnoteref:[autoscaler,Automated Scaling
>     +{cephdocs-url}/rados/operations/placement-groups/#automated-scaling] can
>     +automatically scale the PG count for a pool in the background.
> 
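Not a blocker, but it might be worth adding a short example here of how to inspect and adjust this in practice. A small sketch, assuming a Nautilus or newer cluster and a placeholder pool name 'mypool':

    # show current vs. autoscaler-suggested PG counts for all pools
    ceph osd pool autoscale-status

    # manually change the PG count of a pool (possible since Nautilus)
    ceph osd pool set mypool pg_num 64

    # have the autoscaler only warn instead of changing pg_num itself
    ceph osd pool set mypool pg_autoscale_mode warn
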
>     You can create pools through command line or on the GUI on each PVE host under
>     **Ceph -> Pools**.
>     @@ -485,6 +489,34 @@ If you would like to automatically also get a storage definition for your pool,
>     mark the checkbox "Add storages" in the GUI or use the command line option
>     '--add_storages' at pool creation.
> 
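Maybe also add a one-line CLI example to go with this paragraph. Sketch ('mypool' is just a placeholder; option spelling taken from this patch):

    # create a replicated pool and also register it as a PVE storage
    pveceph pool create mypool --add_storages
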
>     +.Base Options
>     +Name:: The name of the pool. It must be unique and can't be changed afterwards.
> 
s/It must/This must/
s/unique and/unique, and/

>     +Size:: The number of replicas per object. Ceph always tries to have that many
> 
s/have that/have this/

>     +copies of an object. Default: `3`.
>     +PG Autoscale Mode:: The automatic PG scaling mode footnoteref:[autoscaler] of
>     +the pool. If set to `warn`, introducing a warning message when a pool
> 
s/introducing/it produces/


>     +is too far away from an optimal PG count. Default: `warn`.
> 
s/is too far away from an optimal/has a suboptimal/

>     +Add as Storage:: Configure a VM and container storage using the new pool.
> 
s/VM and container/VM and/or container/

>     +Default: `true`.
>     +
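For the base options, an example invocation could make the defaults more tangible. Roughly, assuming the pveceph CLI exposes these options under the same names (not double-checked here):

    # 3 replicas, autoscaler in warn mode, plus a storage definition
    pveceph pool create mypool --size 3 --pg_autoscale_mode warn --add_storages
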
>     +.Advanced Options
>     +Min. Size:: The minimum number of replicas per object. Ceph will reject I/O on
>     +the pool if a PG has less than this many replicas. Default: `2`.
>     +Crush Rule:: The rule to use for mapping object placement in the cluster. These
>     +rules define how data is placed within the cluster. See
>     +xref:pve_ceph_device_classes[Ceph CRUSH & device classes] for information on
>     +device-based rules.
>     +# of PGs:: The number of placement groups footnoteref:[placement_groups] that
>     +the pool should have at the beginning. Default: `128`.
>     +Traget Size:: The estimated amount of data expected in the pool. The PG
>     +autoscaler uses that size to estimate the optimal PG count.
> 
s/Traget Size/Target Size/
s/that size/this size/

>     +Target Size Ratio:: The ratio of data that is expected in the pool. The PG
>     +autoscaler uses the ratio relative to other ratio sets. It takes precedence
>     +over the `target size` if both are set.
>     +Min. # of PGs:: The minimal number of placement groups. This setting is used to
> 
s/minimal/minimum/

>     +fine-tune the lower amount of the PG count for that pool. The PG autoscaler
> 
s/lower amount/lower bound/

>     +will not merge PGs below this threshold.
>     +
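Same for the advanced options; the target size ones in particular are easier to grasp with numbers. A sketch using plain Ceph commands (pool name and values are placeholders):

    # tell the autoscaler to expect roughly 1 TiB of data in the pool
    ceph osd pool set mypool target_size_bytes 1099511627776

    # or express the expected share of total capacity as a ratio
    ceph osd pool set mypool target_size_ratio 0.2

    # keep the autoscaler from merging the pool below 32 PGs
    ceph osd pool set mypool pg_num_min 32
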
>     Further information on Ceph pool handling can be found in the Ceph pool
>     operation footnote:[Ceph pool operation
>     {cephdocs-url}/rados/operations/pools/]
>     This creates a CephFS named `'cephfs'' using a pool for its data named
>     `'cephfs_data'' with `128` placement groups and a pool for its metadata named
>     `'cephfs_metadata'' with one quarter of the data pools placement groups (`32`).
>     Check the xref:pve_ceph_pools[{pve} managed Ceph pool chapter] or visit the
>     Ceph documentation for more information regarding a fitting placement group
>     -number (`pg_num`) for your setup footnote:[Ceph Placement Groups
>     -{cephdocs-url}/rados/operations/placement-groups/].
>     +number (`pg_num`) for your setup footnoteref:[placement_groups].
>     Additionally, the `'--add-storage'' parameter will add the CephFS to the {pve}
>     storage configuration after it was created successfully.
> 
s/was/has been/
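
While at it, spelling out the arithmetic might help: if the chapter's example is still the one below (quoting from memory), the data pool gets 128 PGs and the metadata pool 128/4 = 32, which is where the `32` above comes from.

    # data pool: 128 PGs, metadata pool: 128/4 = 32 PGs, storage added
    pveceph fs create --pg_num 128 --add-storage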

>     --
>     2.29.2
> 
> 
> 
>     _______________________________________________
>     pve-devel mailing list
>     pve-devel at lists.proxmox.com
>     https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 


