[pve-devel] [PATCH pve-container 1/1] Adding a new mount point type named 'zfs' to allow configuring a ZFS dataset as a mount point for LXC containers
Fabian Grünbichler
f.gruenbichler at proxmox.com
Wed May 17 09:50:37 CEST 2023
On May 16, 2023 3:07 pm, Konstantin wrote:
> Hello,
>
> > most tools have ways to exclude certain paths ;)
>
> Yeah - and every time the list of datasets that need to be excluded
> changes, we have to update the exclude options for these tools as
> well. It seems that simply making these datasets invisible to the
> host is simpler, isn't it?
well the idea would be to exclude the whole dataset used by PVE, not
every single volume on it. but I understand this can be cumbersome.
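for example, something along these lines covers every volume in one go
(a sketch only - 'rpool/data' is the default dataset behind the
local-zfs storage, adjust to your pool layout):

    # list all datasets except those under the PVE-owned parent
    zfs list -H -o name | grep -v '^rpool/data\(/\|$\)'

    # similarly, scanners like updatedb only need the parent's single
    # mountpoint added to PRUNEPATHS in /etc/updatedb.conf, e.g.:
    #   PRUNEPATHS="... /rpool/data"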
> > you could "protect" the guest:
>
> I know about this option - but sometimes it isn't applicable. For
> example, I often use the following scenario when I need to upgrade the
> OS of a container: save the configs from the container, destroy the
> container (the dataset with my data isn't destroyed because it's not
> PVE-managed), deploy a new one from an updated template (the dataset
> with my data is simply reattached), restore the configs, and it's
> ready to use. Maybe the following option would be useful if you insist
> on using Proxmox-managed storage - introduce the ability to protect a
> volume? If so, that would probably be an acceptable approach for me.
you can just create a new container, then re-assign your "data" volume
that is managed by PVE to that new container (that feature is even in
the GUI nowadays ;)), then delete the old one. before that, people used
"special" VMIDs to own such volumes, which also works, but is a bit more
brittle (e.g., migration will allocate a new volume owned by the guest,
which would then be cleaned up, so extra care would be needed).
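a rough sketch of that reassignment on the CLI (the VMIDs and the
mountpoint index are placeholders, and the exact option names may
differ between releases - check `pct help move-volume` first):

    # hand the data volume mp0 over from old container 100 to new
    # container 101 (reassignment only, no data is copied)
    pct move-volume 100 mp0 --target-vmid 101 --target-volume mp0

    # after verifying the new container, remove the old one; the
    # volume now belongs to 101 and is left alone
    pct destroy 100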
> > but like I said, it can be implemented more properly as well
>
> In a couple with volume protection capability it could be an option -
> make a possibility for PVE managed ZFS dataset to have a legacy
> mountpoint instead of mandatory mount on host. But as I said - it's the
> only (and working) method which I've found for me and I'm just proposing
> it as starting point for possible improvement in such use cases like
> mine. If you can propose a better solution for that - ok, let's discuss
> in details how it can be done.
adding a protected flag that prevents certain operations is doable - the
question is then: what else, besides explicitly detaching the volume,
should be forbidden? force-restoring over that container? moving the
volume? reassigning it? migrating the container? changing some option of
the mountpoint? destroying the container itself? the semantics are not
100% clear to me, and should not be tailored to one specific use case,
but match as broadly as sensible. but if you think this is worthwhile,
we can also discuss this enhancement further (though to me, it's
entirely orthogonal to the mountpoint issue at hand, other than you
happening to want both of them for your use case ;)).
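just to make the discussion concrete, such a per-mountpoint flag could
look like the hypothetical `protected=1` below (this option does not
exist today; only the container-wide `protection` flag does):

    # /etc/pve/lxc/100.conf - hypothetical per-volume flag, not
    # implemented today
    mp0: local-zfs:subvol-100-disk-1,mp=/data,protected=1

    # what exists today is container-level protection only:
    #   pct set 100 --protection 1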
my gut feeling is still that the root issue is that you have data that
is both too valuable to lose accidentally and, at the same time, not
backed up? because usually when you have backups, you still try to
minimize the potential for accidents, but you accept the fact that you
cannot ever prevent them 100%. this is a time bomb waiting to explode;
no amount of features or workarounds will really help unless the root
problem is addressed. if I misunderstood something, I'd be glad to get
more information to help me understand the issue!
like I said, changing our ZFS mountpoint handling to either default to,
or optionally support, operating without the volume dataset already
being mounted at a specific path by the storage layer sounds okay to
me. there is no need for this on the PVE side itself, so somebody who
wants this feature would need to write the patches and drive the
change; otherwise it will remain a low-priority enhancement request.
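for illustration, the ZFS side of such a change would boil down to
something like this (a sketch only - the dataset name and target path
are placeholders, and nothing in the current storage layer sets this
up):

    # prevent ZFS from auto-mounting the volume dataset on the host
    zfs set mountpoint=legacy rpool/data/subvol-100-disk-1

    # with mountpoint=legacy, the dataset is only mounted when
    # explicitly requested, e.g.:
    mount -t zfs rpool/data/subvol-100-disk-1 /mnt/target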