[pve-devel] [PATCH storage] rbd: add support for erasure coded ec pools

Alwin Antreich alwin at antreich.com
Thu Jan 27 16:41:12 CET 2022

January 27, 2022 12:27 PM, "Aaron Lauterer" <a.lauterer at proxmox.com> wrote:

> Thanks for the hint, as I wasn't aware of it. It will not be considered for PVE managed Ceph
> though, so not really an option here.[0]
> [0] https://git.proxmox.com/?p=pve-storage.git;a=blob;f=PVE/CephConfig.pm;h=c388f025b409c660913c08276376dda0fba2c6c;hb=HEAD#l192

That's where the config db would work.
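For a hyper-converged cluster, a rough sketch of how the config db could carry the option (the client name `client.pve-storage1` and the pool name `ec_data` are made-up examples):

```
# Store the option in Ceph's config db, scoped to the client
# the storage authenticates as (names are hypothetical):
ceph config set client.pve-storage1 rbd_default_data_pool ec_data
```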

> What these approaches have in common is that we spread the config over multiple places and
> cannot set different data pools for different storages.

Yes indeed, it adds to the fragmentation. But since this conf file exists per storage, a data pool
per storage is already possible.
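To illustrate, such a per-storage client config could carry the data pool on its own (the exact file path depends on the PVE setup for external clusters; the pool name is a placeholder):

```
# per-storage Ceph client config (illustrative)
[client]
    rbd default data pool = ec_data
```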

> I'd rather keep the data pool stored in our storage.cfg and apply the parameter where needed. From
> what I can tell, I missed the image clone in this patch, where the data-pool also needs to be
> applied.
> But this way we have the settings for that storage in one place we control and are also able to
> have different EC pools for different storages. Not that I expect it to happen a lot in practice,
> but you never know.
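As a sketch of that approach, a storage.cfg entry could look like this once the patch lands (the `data-pool` option name follows the patch under discussion; storage and pool names are invented):

```
# /etc/pve/storage.cfg (hypothetical entry)
rbd: rbd-ec
    pool rbd_meta          # replicated pool for metadata/omap
    data-pool ec_data      # erasure coded pool for the actual data
    content images,rootdir
```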

That sure is a good place. But let me argue in favor of a separate config file. :)

Wouldn't it make sense to have a parameter for a `client.conf` in the storage definition? Or maybe
a well-known location, like the one that already exists. This would allow not only setting the data
pool but also adjusting client caching, timeouts, debug levels, ... [0] The benefit is mostly for
users who don't have administrative access to their Ceph cluster.
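Such a `client.conf` could then hold more than just the data pool, for example (all values are illustrative; the option names are standard Ceph client settings):

```
[client]
    rbd default data pool = ec_data
    rbd cache = true
    rbd cache size = 67108864    # 64 MiB
    debug rbd = 0/5
```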

Hyper-converged setups can store these settings in the config db. Each storage would need its
own user to keep the settings separate.
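A sketch of that separation via per-storage users in the config db (user and pool names are made up):

```
# one CephX user per storage keeps the settings apart
ceph config set client.storage-a rbd_default_data_pool ec_a
ceph config set client.storage-b rbd_default_data_pool ec_b
```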

Thanks for listening to my 2 cents. ;)


[0] https://docs.ceph.com/en/latest/cephfs/client-config-ref
