[PVE-User] Shared ZFS storage that supports LXC's
Bryan at bryanfields.net
Sun Oct 29 11:16:07 CET 2023
I've been working to migrate my servers to a shared storage from a local-zfs.
My backend is a Linux server with 24 16T SAS drives in ZFS, laid out as follows:
4x (6-disk raidz2) + 3x mirror 4T NVMe + 375G Optane split 32G/312G ZIL/L2ARC
768G of RAM, half of which is dedicated to the ZFS ARC. Everything is connected
to the other servers via a 2x10G LAG with jumbo frames.
I have been using the ZFS over iSCSI backend for my VMs and it works very
well. The only issue is that there's no multipath support, but on a 10G
network I'm not hitting any limits for practical purposes.
I do get a warning in Proxmox when moving a VM onto it or taking a snapshot:
Warning: volblocksize (8192) is much less than the minimum allocation
unit (32768), which wastes at least 75% of space. To reduce wasted space,
use a larger volblocksize (32768 is recommended), fewer dRAID data disks
per group, or smaller sector size (ashift).
I can't find exactly what this warning refers to or how to fix it. Does anyone
have insight into this message?
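For what it's worth, the warning seems to compare the zvol's volblocksize
against the pool's minimum allocation unit. The ZFS over iSCSI storage
definition in /etc/pve/storage.cfg accepts a blocksize option that sets the
volblocksize used for newly created disks; a sketch might look like the
following (the storage ID, portal, target, and pool name are all placeholders,
not taken from the setup above):

```
zfs: shared-zfs
        portal 192.0.2.10
        target iqn.2003-01.org.example.storage:tank
        pool tank
        iscsiprovider LIO
        blocksize 32k
        content images
        sparse 1
```

Note this would only affect newly created or moved disks; existing zvols keep
the volblocksize they were created with.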
With LXCs I've found they don't support this backend storage (and it's not
mentioned in the docs). I assume this is due to them needing a filesystem,
not a block device. My option here would be to run NFS for shared storage,
but that loses the ability to take snapshots (a must-have). LVM would work,
but it can't be shared.
I was thinking it might make sense to do this as a per-LXC NFS mount backed
by ZFS: the PVE node would create a new dataset on the shared storage server
via SSH for that LXC. This is basically how ZFS over iSCSI is handled today,
as I understand it.
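As a rough illustration of that idea (a sketch only; the host name, pool
path, and naming scheme are made up, not an existing PVE feature), a helper
on the PVE node could build the SSH commands to create a per-container
dataset, share it over NFS, and later snapshot it at the ZFS level, much as
the ZFS over iSCSI plugin does for zvols:

```python
import shlex

STORAGE_HOST = "storage.example"  # hypothetical storage server
POOL = "tank/lxc"                 # hypothetical parent dataset

def ssh_cmd(*args: str) -> str:
    """Build an ssh command line to run on the storage server."""
    return shlex.join(["ssh", f"root@{STORAGE_HOST}", *args])

def create_dataset_cmd(vmid: int) -> str:
    """Create an NFS-shared dataset for one container."""
    dataset = f"{POOL}/subvol-{vmid}-disk-0"
    return ssh_cmd("zfs", "create", "-o", "sharenfs=rw", dataset)

def snapshot_cmd(vmid: int, name: str) -> str:
    """Snapshots still work at the ZFS level, unlike a plain NFS export."""
    dataset = f"{POOL}/subvol-{vmid}-disk-0"
    return ssh_cmd("zfs", "snapshot", f"{dataset}@{name}")

print(create_dataset_cmd(101))
print(snapshot_cmd(101, "before-upgrade"))
```

The container would then see the dataset as an ordinary NFS mount point,
while snapshots and clones stay available on the storage server.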
Has anyone solved this? I'd like an option to migrate LXCs from local to
shared storage on a Linux/ZFS server.
727-409-1194 - Voice