[pve-devel] [PATCH storage] fix #2085: Handle non-default mount point in path() by introducing new mountpoint property

Fabian Grünbichler f.gruenbichler at proxmox.com
Tue Nov 12 13:05:30 CET 2019


On November 12, 2019 12:18 pm, Fabian Ebner wrote:
> On 11/7/19 12:59 PM, Fabian Grünbichler wrote:
>> On November 7, 2019 12:52 pm, Fabian Ebner wrote:
>>> On 11/7/19 9:34 AM, Fabian Grünbichler wrote:
>>>> On November 6, 2019 1:46 pm, Fabian Ebner wrote:
>>>>> A new mountpoint property is added to the schema for ZFSPool storages.
>>>>> When needed for the first time, the current mount point is determined and
>>>>> written to the storage config.
>>>>>
>>>>> Signed-off-by: Fabian Ebner <f.ebner at proxmox.com>
>>>>> ---
>>>>>    PVE/Storage/ZFSPoolPlugin.pm | 25 +++++++++++++++++++++++--
>>>>>    1 file changed, 23 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
>>>>> index 16fb0d6..44c8123 100644
>>>>> --- a/PVE/Storage/ZFSPoolPlugin.pm
>>>>> +++ b/PVE/Storage/ZFSPoolPlugin.pm
>>>>> @@ -32,6 +32,10 @@ sub properties {
>>>>>    	    description => "use sparse volumes",
>>>>>    	    type => 'boolean',
>>>>>    	},
>>>>> +	mountpoint => {
>>>>> +	    description => "mount point",
>>>>> +	    type => 'string', format => 'pve-storage-path',
>>>>> +	},
>>>>>        };
>>>>>    }
>>>>>    
>>>>> @@ -44,6 +48,7 @@ sub options {
>>>>>    	disable => { optional => 1 },
>>>>>    	content => { optional => 1 },
>>>>>    	bwlimit => { optional => 1 },
>>>>> +	mountpoint => { optional => 1 },
>>>>>        };
>>>>>    }
>>>>>    
>>>>> @@ -148,11 +153,27 @@ sub path {
>>>>>        my ($vtype, $name, $vmid) = $class->parse_volname($volname);
>>>>>    
>>>>>        my $path = '';
>>>>> +    my $mountpoint = $scfg->{mountpoint};
>>>>> +
>>>>> +    # automatically write mountpoint to storage config if it is not present
>>>>> +    if (!$mountpoint) {
>>>>> +	$mountpoint = $class->zfs_request($scfg, undef, 'get', '-o', 'value',
>>>>> +					  'mountpoint', '-H', $scfg->{pool});
>>>>> +	chomp($mountpoint);
>>>>> +
>>>>> +	eval {
>>>>> +	    PVE::Storage::lock_storage_config(sub {
>>>>> +		my $cfg = PVE::Storage::config();
>>>>> +		$cfg->{ids}->{$storeid}->{mountpoint} = $mountpoint;
>>>>> +		PVE::Storage::write_config($cfg);
>>>>> +	    }, "update storage config failed");
>>>>> +	};
>>>>> +	warn $@ if $@;
>>>>> +    }
>>>>>    
>>>>>        if ($vtype eq "images") {
>>>>>    	if ($name =~ m/^subvol-/ || $name =~ m/^basevol-/) {
>>>>> -	    # fixme: we currently assume standard mount point?!
>>>>> -	    $path = "/$scfg->{pool}/$name";
>>>>> +	    $path = "$mountpoint/$name";
>>>>>    	} else {
>>>>>    	    $path = "/dev/zvol/$scfg->{pool}/$name";
>>>>>    	}
>>>>
>>>> it would be great if this "get mountpoint property of dataset" (or better:
>>>> even "get arbitrary properties of dataset") helper could be factored out.
>>>> we already have two calls to "zfs get -Hp '...' $dataset" that could
>>>> re-use such a helper.
>>>>
>>>> then we could add a single check in activate_storage (maybe just for the
>>>> branch where we actually just imported the pool?) to verify that the
>>>> stored mountpoint value is correct, and if not update it (or warn about
>>>> the problem). that check could have a very low timeout, and errors could
>>>> be ignored since it is just a double-check.
>>>>
>>>> _______________________________________________
>>>> pve-devel mailing list
>>>> pve-devel at pve.proxmox.com
>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>>>>
>>>
>>> I noticed that the $scfg parameter in zfs_request is never used. Should
>>> I clean that up? Otherwise I'd need to include the $scfg parameter in
>>> the helper function as well, where it also isn't needed.
>> 
>> it's used by ZFS over iSCSI, where the plugin is derived from
>> ZFSPoolPlugin, so it's not safe to remove ;)
>> 
>> 
> 
> 
> I see, thanks. I'll remember to check whether the method is virtual next 
> time.
> 
> There are two problems with doing the mount point configuration in 
> activate_storage.
> First, we might get called by 'create' (the API call triggered by 
> 'pvesm add'), which holds a lock on storage.cfg at that time, but we 
> might also get called by something else without such a lock.
> Second, 'create' will write its own version of the config after 
> calling activate_storage anyway.
> 
> It also doesn't help to only do the mount point configuration inside 
> the branch where we just imported, since the pool/dataset might 
> already be imported when we do 'pvesm add', and the second problem 
> remains either way.
> 
> Now there is plugin->on_add_hook, used by 'create', which would also 
> be a good place to configure the mount point. We could activate the 
> storage there, check for the mount point, and then hijack the scfg 
> that is passed along to it (also not nice), so that it is written by 
> 'create' later. In activate_storage we could still check whether the 
> configured and effective mount points match and warn if they don't.
> 
> I don't see a clean way to add the mount point property 
> automatically (doing it in path() is possible, but we would need to 
> assume that our caller doesn't hold the lock on storage.cfg). Maybe 
> it's better to just warn in activate_storage and leave it to the 
> user to configure the mount point properly? It feels a bit weird for 
> the plugin code to modify the config; is there an instance where it 
> does?

IMHO, checking in the on_add_hook (and dying if appropriate) seems 
like a sane first step. we could also verify (and die on failure) in 
the activate_storage branch where we just did the import (since the 
check is cheap compared to the import). but like you said, that won't 
catch all instances of misconfiguration, since on most systems the 
pools will already have been imported earlier during boot. 
(re-)checking it on every activate_storage seems like overkill to me, 
and would mean an additional zfs_request for almost all volume 
operations (we do a lot of "activate_storage" calls all over the 
place, backed by the assumption that it is mostly a no-op).
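
the generic "get arbitrary properties of a dataset" helper suggested 
earlier can be illustrated at the CLI level. this is only a sketch: 
`simulated_zfs_get` and `get_prop` are made-up names, and the zfs 
output is simulated with printf here, since the real `zfs get -Hp` 
invocation needs an imported pool.

```shell
# Sketch of a generic "get properties of a dataset" lookup. 'zfs get -Hp'
# prints one tab-separated "property<TAB>value" pair per line; no ZFS pool
# is available here, so its output is simulated instead of calling zfs.
simulated_zfs_get() {
    # stands in for: zfs get -Hp -o property,value mountpoint,mounted "$1"
    printf 'mountpoint\t/rpool/data\nmounted\tyes\n'
}

# Extract a single property value from that output.
get_prop() {
    simulated_zfs_get "$1" | awk -F'\t' -v p="$2" '$1 == p { print $2 }'
}

get_prop rpool/data mountpoint   # -> /rpool/data
```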

modifying the storage config on the fly in operations like 
activate_storage or, even worse, path(), does not seem like a good 
idea. allowing users to set it in a straightforward fashion is 
probably enough, since it becomes pretty obvious when storage.cfg and 
the actual properties are out of sync.
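
the "warn when storage.cfg and the actual properties are out of sync" 
check discussed above boils down to a string comparison. a minimal 
sketch, with a hypothetical function name and hypothetical values 
(in the plugin, the effective value would come from 
`zfs get -H -o value mountpoint <pool>`):

```shell
# Warn when the configured mount point (as stored in storage.cfg) differs
# from the effective one (as 'zfs get -H -o value mountpoint <pool>' would
# report it). Both arguments are supplied by the caller in this sketch.
check_mountpoint() {
    configured=$1
    effective=$2
    if [ "$configured" != "$effective" ]; then
        echo "warning: configured mountpoint '$configured' differs from effective '$effective'"
    fi
}

check_mountpoint /mnt/zfs-data /rpool/data   # prints a warning
check_mountpoint /rpool/data /rpool/data     # silent: values match
```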



