[pve-devel] [PATCH pve-container 1/1] Adding new mount point type named 'zfs' to let configure a ZFS dataset as mount point for LXC container

Konstantin frank030366 at hotmail.com
Thu May 18 15:00:05 CEST 2023


Hello,

 > you can just create a new container, then re-assign your "data" volume...

Yeah, this way looks acceptable too, but from an "ergonomic" perspective 
the "destroy/deploy" method is sometimes significantly faster. In that 
case I just need to save the configs from the old container, destroy it 
(the data volumes aren't destroyed because they aren't PVE managed), 
then deploy the container again with the same ID/hostname/mount 
points/etc., put the configs back, and that's all.
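
For illustration, a rough sketch of that workflow - the container ID, 
template name and paths below are made-up examples, and the last step 
shows today's bind mount form; with the proposed 'zfs' mount point type 
the dataset wouldn't have to be mounted on the host first:

    # keep a copy of the old config (stored under /etc/pve/lxc/<vmid>.conf)
    cp /etc/pve/lxc/123.conf /root/123.conf.bak
    # destroy the container; the externally managed dataset is untouched
    pct destroy 123
    # redeploy with the same ID and hostname from an updated template
    pct create 123 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
        --hostname myct --rootfs local-zfs:8
    # re-attach the data, here as a plain bind mount of the host path
    pct set 123 --mp0 /tank/appdata,mp=/data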

 > there is no need for this on the PVE side, so somebody that wants 
this feature would need to write the patches and drive the change

Regardless of the implementation approach, this change isn't needed on 
the PVE side - so from that point of view it doesn't matter how the 
feature gets realized. I could probably agree with a 
"non-mountable-to-host" option for a PVE volume, but that way seems 
less flexible and less ergonomic (IMHO, of course :) ). In addition, an 
external (non PVE managed) ZFS mount gives one more possibility: we can 
mount, for example, a ZFS dataset from a pool that isn't even part of 
the PVE storage configuration - just another ZFS pool holding some 
data, with properties tuned for that data (logbias, recordsize, etc.). 
If we're talking about priorities - since I've come with a patch (not 
just a mail with proposals), I'm able to supply and support this change 
(in the usual open-source sense, of course), simply because I'm 
interested in the further growth of your product and in adding more 
interesting and useful features to it.
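
As a concrete (purely illustrative) example of such an externally 
managed dataset - the pool and dataset names are made up, and the 
property values are just one possible tuning:

    # dataset on a pool that is not part of the PVE storage configuration,
    # with properties chosen for its workload
    zfs create -o recordsize=1M -o logbias=throughput tank2/appdata
    # no automatic host mountpoint - the container (or fstab) mounts it
    zfs set mountpoint=legacy tank2/appdata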

Best regards,

Konstantin.



On 17.05.2023 10:50, Fabian Grünbichler wrote:
> On May 16, 2023 3:07 pm, Konstantin wrote:
>> Hello,
>>
>>   > most tools have ways to exclude certain paths ;)
>>
>> Yeah - and every time when this "need to be excluded datasets"
>> list/names changed we need to update exclude options for this tools as
>> well. It seems that just make this datasets not visible to host is
>> simpler, isn't it?
> well the idea would be to exclude the whole dataset used by PVE, not
> every single volume on it. but I understand this can be cumbersome.
>
>>   > you could "protect" the guest:
>>
>> I know about this option - but sometimes it isn't applicable. For
>> example I often use the following scenario when need to upgrade an OS on
>> container: save configs from container, destroy the container (dataset
>> with my data isn't destroyed because it's non PVE), deploy the new one
>> from updated template (dataset with my data just reattached back),
>> restore configs and it's ready to use. Maybe the following option will
>> be useful if you're insist on using Proxmox managed storage - introduce
>> the ability to protect a volume? If so - it probably will be acceptable
>> way for me.
> you can just create a new container, then re-assign your "data" volume
> that is managed by PVE to that new container (that feature is even on
> the GUI nowadays ;)), then delete the old one. before that people used
> "special" VMIDs to own such volumes, which also works, but is a bit more
> brittle (e.g., migration will allocate a new volume owned by the guest,
> and that would then be cleaned up, so extra care would need to be
> applied).
>
>>   > but like I said, it can be implemented more properly as well
>>
>> In a couple with volume protection capability it could be an option -
>> make a possibility for PVE managed ZFS dataset to have a legacy
>> mountpoint instead of mandatory mount on host. But as I said - it's the
>> only (and working) method which I've found for me and I'm just proposing
>> it as starting point for possible improvement in such use cases like
>> mine. If you can propose a better solution for that - ok, let's discuss
>> in details how it can be done.
> adding a protected flag that prevents certain operations is doable - the
> question is then, what else except explicit detaching of the volume
> should be forbidden? force restoring over that container? moving the
> volume? reassigning it? migrating the container? changing some option of
> the mountpoint? destruction of the container itself? the semantics are
> not 100% clear to me, and should not be tailored to one specific use
> case but match as broadly as sensible. but if you think this is
> sensible, we can also discuss this enhancement further (but to me, it's
> entirely orthogonal to the mountpoint issue at hand, other than you
> happening to want both of them for your use case ;)).
>
> my gut feeling is still that the root issue is that you have data
> that is both too valuable to accidentally lose, but at the same time not
> backed up? because usually when you have backups, you still try to
> minimize the potential for accidents, but you accept the fact that you
> cannot ever 100% prevent them. this is a time bomb waiting to explode,
> no amount of features or workarounds will really help unless the root
> problem is addressed. if I misunderstood something, I'd be glad to get
> more information to help me understand the issue!
>
> like I said, changing our ZFS mountpoint handling to either default to,
> or optionally support working without the need to have the volume
> dataset already mounted in a specific path by the storage layer sounds
> okay to me. there is no need for this on the PVE side, so somebody that
> wants this feature would need to write the patches and drive the change,
> otherwise it will be a low-priority enhancement request.
>



