[pve-devel] [PATCH v2 qemu-server 1/5] disk reassign: add API endpoint

Aaron Lauterer a.lauterer at proxmox.com
Thu Sep 3 10:30:37 CEST 2020



On 9/3/20 9:46 AM, Fabian Grünbichler wrote:
> On September 1, 2020 2:44 pm, Aaron Lauterer wrote:


[..]

>> +
>> +	my $storecfg = PVE::Storage::config();
>> +
>> +	die "You cannot reassign a disk to the same VM\n"
>> +	    if $vmid eq $target_vmid;
>> +
>> +	my $vmlist = PVE::QemuServer::vzlist();
>> +	die "Both VMs need to be on the same node\n"
>> +	    if !$vmlist->{$vmid}->{exists} || !$vmlist->{$target_vmid}->{exists};
> 
> we could skip this if the target storage is shared, and let the user run
> 'rescan' afterwards?

I would like to keep this as simple as possible for now. Just as a thought: maybe we could trigger the rescan on the other node by calling that API endpoint. We would probably lose the possibility to log the new disk key and volid that way, but at least it would not be a two-step process for the user.

>> +
>> +	return PVE::QemuConfig->lock_config($vmid, sub {
>> +	    my $conf = PVE::QemuConfig->load_config($vmid);
>> +	    PVE::QemuConfig->check_lock($conf);
>> +
>> +	    die "Source VM config checksum missmatch (file change by other user?)\n"
>> +		if $digest && $digest ne $conf->{digest};
> 
> PVE::Tools::assert_if_modified

thx

> 
>> +
>> +	    die "Disk '${disk}' does not exist\n"
>> +		if !exists($conf->{$disk});
> 
> why not defined?

Because I thought exists was enough ;). But yep, defined is better.
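For reference, a minimal plain-Perl sketch of the difference (nothing PVE-specific): exists only checks that the hash slot is present, while defined also rejects an undef value, which is what we want when a config key might be present but empty.

```perl
use strict;
use warnings;

# A hash slot can exist while holding undef -- exists() accepts it,
# defined() does not. Keys and values here are made up for illustration.
my %conf = ( scsi0 => 'local-lvm:vm-100-disk-0', scsi1 => undef );

print exists($conf{scsi1})  ? "exists\n"  : "missing\n";
print defined($conf{scsi1}) ? "defined\n" : "undef\n";
print defined($conf{scsi0}) ? "defined\n" : "undef\n";
```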

> 
>> +
>> +	    my $drive = PVE::QemuServer::parse_drive($disk, $conf->{$disk});
>> +	    die "disk '$disk' has no associated volume\n" if !$drive->{file};
> 
> missing check for whether it's actually volume-based, and not
> pass-through.. what about templates/base volumes/linked
> clones/snapshots?

Passed-through disks will fail later on in the code, but an early check with a nicer error message is probably a good idea. I haven't thought about templates, base volumes and linked clones yet. Same goes for snapshots :/

I guess we won't want to rename a ZFS dataset while snapshots are present? Another concern: renaming the dataset will most likely break replication.

> 
>> +	    die "Cannot reassign a cdrom drive\n" if PVE::QemuServer::drive_is_cdrom($drive, 1);
> 
> "CD drive contents can't be reassigned.\n"

Sounds much better :)

> 
>> +
>> +	    die "Cannot reassign disk while the source VM is running\n"
>> +		if PVE::QemuServer::check_running($vmid) && $disk !~ m/unused[0-9]/;
>> +
>> +	    return PVE::QemuConfig->lock_config($target_vmid, sub {
>> +		my $target_conf = PVE::QemuConfig->load_config($target_vmid);
>> +		PVE::QemuConfig->check_lock($target_conf);
>> +
>> +		die "Target VM config checksum missmatch (file change by other user?)\n"
>> +		    if $target_digest && $target_digest ne $target_conf->{digest};
> 
> same as above.
> 
>> +
>> +		PVE::Cluster::log_msg('info', $authuser, "reassign disk VM $vmid: reassign --disk $disk --target_vmid $target_vmid");
>> +
>> +		my $realcmd = sub {
>> +		    my $new_volid = PVE::Storage::reassign_volume($storecfg, $drive->{file}, $target_vmid);
>> +
>> +		    delete $conf->{$disk};
>> +		    PVE::QemuConfig->write_config($vmid, $conf);
>> +
>> +		    my $key = PVE::QemuConfig->add_unused_volume($target_conf, $new_volid);
> 
> this can fail, now the volume is unreferenced altogether -> maybe wrap
> in an eval and print a more helpful error message including the new
> volid?

Ok

> 
>> +		    PVE::QemuConfig->write_config($target_vmid, $target_conf);
>> +
>> +		    print "reassigned disk to VM ${target_vmid} as '${key}: ${new_volid}'\n";
> 
> or split this up, and
> 
> print 'renamed volume '$old_volid' to '$new_volid'\n";
> 
> after the call to reassign_volume, and print the config changes
> individually after the respective actions?
> 
> print "remove disk '$disk' from VM '$vmid'\n";
> 
> print "added volume '$new_volid' as '$key' to VM '$target_vmid'\n";

Sounds reasonable; being more verbose about what is going on is probably a good idea.

> 
>> +		};
>> +
>> +		return $rpcenv->fork_worker('qmreassign', $vmid, $authuser, $realcmd);
> 
> I am not so happy about
> 
> lock_config(
>    load_config;
>    lock_config(
>      load_config;
>      fork_worker(
>        cfs_lock/write_config
>      )
>    )
> )
> 
> as it tends to accumulate more checks and extra functionality and at
> some point might introduce an inconsistency.. note that the current
> version seems fine (we always lock first then load, we only look at
> config files that belong to a specific node, ...), but maybe we could
> switch to
> 
> # check for early return/nice error
> (conf1, conf2) = load_and_check_sub(id1, id2);
> fork_worker(
>    lock_config(
>      lock_config(
>        # recheck in locked worker context
>        (conf1, conf2) = load_and_check_sub(id1, id2);
>        ...
>      )
>    )
> )
> 
> to make it more fool/future-proof, especially since the checks are rather cheap..

Sounds good.
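For my own notes, a runnable sketch of the proposed ordering with stand-in helpers (the real code would of course use PVE::QemuConfig->lock_config and $rpcenv->fork_worker; the names and stubs below are just placeholders):

```perl
use strict;
use warnings;

# Shared checks, run once unlocked for an early nice error,
# then again inside the locked worker context.
sub load_and_check {
    my ($vmid, $target_vmid) = @_;
    die "cannot reassign to the same VM\n" if $vmid == $target_vmid;
    return ({ vmid => $vmid }, { vmid => $target_vmid });
}

# Stubs standing in for the real locking/worker infrastructure.
sub lock_config { my ($vmid, $code) = @_; return $code->(); }
sub fork_worker { my ($code) = @_; return $code->(); }

sub reassign {
    my ($vmid, $target_vmid) = @_;

    # 1) unlocked pre-check: fail early with a nice error
    load_and_check($vmid, $target_vmid);

    # 2) all real work happens inside the worker, under both locks,
    #    with the checks repeated in locked context
    return fork_worker(sub {
        lock_config($vmid, sub {
            lock_config($target_vmid, sub {
                my ($conf, $target_conf) = load_and_check($vmid, $target_vmid);
                return "reassigned from $conf->{vmid} to $target_conf->{vmid}";
            });
        });
    });
}

print reassign(100, 101), "\n";
```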

> 
>> +	    });
>> +	});
>> +    }});
>> +
>>   1;
>> diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
>> index 91c33f8..d2f59cd 100644
>> --- a/PVE/QemuServer/Drive.pm
>> +++ b/PVE/QemuServer/Drive.pm
>> @@ -383,6 +383,10 @@ sub valid_drive_names {
>>               'efidisk0');
>>   }
>>   
>> +sub valid_drive_names_with_unused {
>> +    return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
>> +}
>> +
>>   sub is_valid_drivename {
>>       my $dev = shift;
>>   
>> -- 
>> 2.20.1
>>
>>
>>
>> _______________________________________________
>> pve-devel mailing list
>> pve-devel at lists.proxmox.com
>> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>>
>>
>>