[pve-devel] [PATCH pve-docs 3/3] fix #3967: add ZFS dRAID documentation

Dominik Csapak d.csapak at proxmox.com
Fri Jun 3 14:34:02 CEST 2022


comments inline

On 6/2/22 13:22, Stefan Hrdlicka wrote:
> add some basic explanation how ZFS dRAID works including
> links to openZFS for more details
> 
> add documentation for two dRAID parameters used in code
> 
> Signed-off-by: Stefan Hrdlicka <s.hrdlicka at proxmox.com>
> ---
>   local-zfs.adoc | 40 +++++++++++++++++++++++++++++++++++++++-
>   1 file changed, 39 insertions(+), 1 deletion(-)
> 
> diff --git a/local-zfs.adoc b/local-zfs.adoc
> index ab0f6ad..8eb681c 100644
> --- a/local-zfs.adoc
> +++ b/local-zfs.adoc
> @@ -32,7 +32,8 @@ management.
>   
>   * Copy-on-write clone
>   
> -* Various raid levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3
> +* Various raid levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2, RAIDZ-3,
> +dRAID, dRAID2, dRAID3
>   
>   * Can use SSD for cache
>   
> @@ -244,6 +245,43 @@ them, unless your environment has specific needs and characteristics where
>   RAIDZ performance characteristics are acceptable.
>   
>   
> +ZFS dRAID
> +~~~~~~~~~
> +
> +In a ZFS dRAID (declustered RAID) the hot spare drive(s) participate in the RAID.
> +Their spare capacity is reserved and used for rebuilding when one drive fails.
> +Depending on the configuration, this can provide faster rebuilding compared to
> +a RAIDZ in case of drive failure. More information can be found in the official
> +OpenZFS documentation. footnote:[OpenZFS dRAID
> +https://openzfs.github.io/openzfs-docs/Basic%20Concepts/dRAID%20Howto.html]
> +
> +NOTE: dRAID is intended for setups with more than 10-15 disks. A RAIDZ
> +setup is usually better for a lower number of disks in most use cases.
> +
> + * `dRAID1` or `dRAID`: requires at least 2 disks, one can fail before data is
> +lost
> + * `dRAID2`: requires at least 3 disks, two can fail before data is lost
> + * `dRAID3`: requires at least 4 disks, three can fail before data is lost
> +
> +
> +Additional information can be found in the manual page:
> +
> +----
> +# man zpoolconcepts
> +----
> +
> +spares and data
> +^^^^^^^^^^^^^^^
> +The number of `spares` tells the system how many disks it should keep ready in
> +case of a disk failure. The default value is 0 `spares`. Without spares,
> +rebuilding won't get any speed benefits.
> +
> +The number of `data` devices specifies the size of a parity group. The default
> +is 8 if the number of `disks - parity - spares >= 8`. A higher number of `data`
> +and parity drives increases the allocation size (e.g. for 4k sectors with
> +default `data`=6 minimum allocation size is 24k) which can affect compression.

i found this block a bit confusing, because among other things, you talk about
'parity groups', but neither this patch nor the official docs mention a 'parity
group', rather a 'redundancy group', so i'd rename that

also i'd spell it out more clearly that the default is `disks - parity - spares`
as long as that is below 8, and clamped at 8 otherwise (by default)

also i'd somehow mention the things from this sentence from the official docs:
 > In general a smaller value of D will increase IOPS, improve the compression
 > ratio, and speed up resilvering at the expense of total usable capacity.
 > Defaults to 8, unless N-P-S is less than 8.
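to make the clamping concrete, here's a minimal sketch (a hypothetical helper
for illustration, not actual ZFS code) of how that default for `data` works:

```python
def default_draid_data(disks, parity, spares):
    # Hypothetical illustration of the zpoolconcepts default for the
    # dRAID `data` parameter: N - P - S, clamped at 8.
    return min(8, disks - parity - spares)

# 12 disks, double parity, 1 spare: N-P-S = 9, so it is clamped to 8
print(default_draid_data(12, 2, 1))  # -> 8

# 6 disks, double parity, 1 spare: N-P-S = 3, below the clamp
print(default_draid_data(6, 2, 1))   # -> 3
```

writing it like this in the docs (default is N-P-S, at most 8) would avoid
the current "default is 8 if ... >= 8" phrasing that leaves the other case
implicit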


> +
> +
>   Bootloader
>   ~~~~~~~~~~
>   





