[pve-devel] [PATCH v4 qemu-server 1/4] disk reassign: add API endpoint

Fabian Grünbichler f.gruenbichler at proxmox.com
Fri Nov 20 17:17:28 CET 2020


On October 2, 2020 10:23 am, Aaron Lauterer wrote:
> The goal of this new API endpoint is to provide an easy way to move a
> disk between VMs, as until now this was only possible with manual
> intervention: either by renaming the VM disk or by manually adding the
> disk's volid to the config of the other VM.
> 
> The latter can easily cause unexpected behavior, such as a disk now
> attached to VM B being deleted when it used to be a disk of VM A. This
> happens because PVE assumes that the VMID in the volname always matches
> the VM the disk is attached to and would thus remove any disk with
> VMID A when VM A is deleted.
> 
> The term `reassign` was chosen as it is not yet used
> for VM disks.
> 
> Signed-off-by: Aaron Lauterer <a.lauterer at proxmox.com>
> ---
> v3 -> v4: nothing
> 
> v2 -> v3:
> * reordered the locking as discussed with fabian [0] to:
>     run checks
>         fork worker
>             lock source config
>                 lock target config
>                     run checks
>                     ...
> 
> * added more checks
>     * will not reassign to or from templates
>     * will not reassign if VM has snapshots present
> * cleanup if disk used to be replicated
> * made task log slightly more verbose
> * integrated general recommendations regarding code
> * renamed `disk` to `drive_key`
> * prepended some vars with `source_` for easier distinction
> 
> v1 -> v2: print config key and volid info at the end of the job so it
> shows up on the CLI and task log
> 
> rfc -> v1:
> * add support to reassign unused disks
> * add support to provide a config digest for the target vm
> * add additional check if disk key is present in config
> * reorder checks a bit
> 
> In order to support unused disks I had to extend
> PVE::QemuServer::Drive::valid_drive_names for the API parameter
> validation.
> 
> Checks are ordered so that cheap tests run first, allowing the call to
> fail early.
> 
> The check if both VMs are present on the node is a bit redundant because
> locking the config files will fail if the VM is not present. But with
> the additional check we can provide a useful error message to the user
> instead of a "Configuration file xyz does not exist" error.
> 
> [0] https://lists.proxmox.com/pipermail/pve-devel/2020-September/044930.html
> 
> 
>  PVE/API2/Qemu.pm        | 156 ++++++++++++++++++++++++++++++++++++++++
>  PVE/QemuServer/Drive.pm |   4 ++
>  2 files changed, 160 insertions(+)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 8da616a..613b257 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -4265,4 +4265,160 @@ __PACKAGE__->register_method({
>  	return PVE::QemuServer::Cloudinit::dump_cloudinit_config($conf, $param->{vmid}, $param->{type});
>      }});
>  
> +__PACKAGE__->register_method({
> +    name => 'reassign_vm_disk',
> +    path => '{vmid}/reassign_disk',
> +    method => 'POST',
> +    protected => 1,
> +    proxyto => 'node',
> +    description => "Reassign a disk to another VM",
> +    permissions => {
> +	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid}, and 'Datastore.Allocate' permissions on the storage.",

and VM.Config.Disk on target_vmid?
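
something like the following in the handler could cover that (a sketch,
using the generic $rpcenv->check helper, since the declarative check
above can only reference request parameters like {vmid} and {storage}):

    $rpcenv->check($authuser, "/vms/${target_vmid}", ['VM.Config.Disk']);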

> +	check => [ 'and',
> +		   ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
> +		   ['perm', '/storage/{storage}', [ 'Datastore.Allocate' ]],
> +	    ],
> +    },
> +    parameters => {
> +        additionalProperties => 0,
> +	properties => {
> +	    node => get_standard_option('pve-node'),
> +	    vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
> +	    target_vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
> +	    drive_key => {
> +	        type => 'string',
> +		description => "The config key of the disk to reassign (for example, ide0 or scsi1).",
> +		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
> +	    },
> +	    digest => {
> +		type => 'string',
> +		description => 'Prevent changes if the current configuration file of the source VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
> +		maxLength => 40,
> +		optional => 1,
> +	    },
> +	    target_digest => {
> +		type => 'string',
> +		description => 'Prevent changes if the current configuration file of the target VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
> +		maxLength => 40,
> +		optional => 1,
> +	    },
> +	},
> +    },
> +    returns => {
> +	type => 'string',
> +	description => "the task ID.",
> +    },
> +    code => sub {
> +	my ($param) = @_;
> +
> +	my $rpcenv = PVE::RPCEnvironment::get();
> +	my $authuser = $rpcenv->get_user();
> +
> +	my $node = extract_param($param, 'node');
> +	my $source_vmid = extract_param($param, 'vmid');
> +	my $target_vmid = extract_param($param, 'target_vmid');
> +	my $source_digest = extract_param($param, 'digest');
> +	my $target_digest = extract_param($param, 'target_digest');
> +	my $drive_key = extract_param($param, 'drive_key');
> +
> +	my $storecfg = PVE::Storage::config();
> +	my $vmlist;
> +	my $drive;
> +	my $source_volid;
> +
> +	die "You cannot reassign a disk to the same VM\n"
"Reassigning disk with same source and target VM not possible. Did you 
mean to move the disk?"

> +	    if $source_vmid eq $target_vmid;
> +
> +	my $load_and_check_configs = sub {
> +	    $vmlist = PVE::QemuServer::vzlist();
> +	    die "Both VMs need to be on the same node\n"
> +		if !$vmlist->{$source_vmid}->{exists} || !$vmlist->{$target_vmid}->{exists};

if we use PVE::Cluster::get_vmlist() here, we could include the nodes as 
well, which might be more informative?
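
rough sketch, assuming the usual ids/node layout returned by
PVE::Cluster::get_vmlist():

    my $vmlist = PVE::Cluster::get_vmlist()->{ids};
    for my $vmid ($source_vmid, $target_vmid) {
        die "VM ${vmid} does not exist in the cluster\n" if !$vmlist->{$vmid};
    }
    die "Both VMs need to be on the same node, but VM ${source_vmid} is on "
        . "'$vmlist->{$source_vmid}->{node}' and VM ${target_vmid} is on "
        . "'$vmlist->{$target_vmid}->{node}'\n"
        if $vmlist->{$source_vmid}->{node} ne $vmlist->{$target_vmid}->{node};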

> +
> +	    my $source_conf = PVE::QemuConfig->load_config($source_vmid);
> +	    PVE::QemuConfig->check_lock($source_conf);
> +	    my $target_conf = PVE::QemuConfig->load_config($target_vmid);
> +	    PVE::QemuConfig->check_lock($target_conf);
> +
> +	    die "Can't reassign disks with templates\n"

disks from/to template

> +		if ($source_conf->{template} || $target_conf->{template});
> +
> +	    if ($source_digest) {
> +		eval { PVE::Tools::assert_if_modified($source_digest, $source_conf->{digest}) };
> +		if (my $err = $@) {
> +		    die "Verification of source VM digest failed: ${err}";

a simple "VM $vmid: " prefix would be enough, the rest is contained in 
$err anyway..
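
i.e. inside the existing error branch simply:

    die "VM ${source_vmid}: ${err}";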

> +		}
> +	    }
> +
> +	    if ($target_digest) {
> +		eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
> +		if (my $err = $@) {
> +		    die "Verification of target VM digest failed: ${err}";

same

> +		}
> +	    }
> +
> +	    die "Disk '${drive_key}' does not exist\n"
> +		if !defined($source_conf->{$drive_key});
> +
> +	    $drive = PVE::QemuServer::parse_drive($drive_key, $source_conf->{$drive_key});
> +	    $source_volid = $drive->{file};
> +	    die "disk '${drive_key}' has no associated volume\n" if !$source_volid;
> +	    die "CD drive contents can't be reassigned\n" if PVE::QemuServer::drive_is_cdrom($drive, 1);

check for non-volume disks missing? it will/should fail in the storage 
layer, but better to catch it here already..
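
e.g. (a sketch, using parse_volume_id with $noerr set to reject
pass-through paths like /dev/sdX):

    die "disk '${drive_key}' is not a managed volume and cannot be reassigned\n"
        if !PVE::Storage::parse_volume_id($source_volid, 1);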

> +
> +	    die "Can't reassign disk used by a snapshot\n"
> +		if PVE::QemuServer::Drive::is_volume_in_use($storecfg, $source_conf, $drive_key, $source_volid);
> +
> +	    my $hasfeature = PVE::Storage::volume_has_feature($storecfg, 'reassign', $source_volid);
> +	    die "Storage does not support the reassignment of this disk\n" if !$hasfeature;

variable only used once for this check, you can just

die ..
  if !PVE::Storage::...
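
i.e., spelled out:

    die "Storage does not support the reassignment of this disk\n"
        if !PVE::Storage::volume_has_feature($storecfg, 'reassign', $source_volid);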

> +
> +	    die "Cannot reassign disk while the source VM is running\n"
> +		if PVE::QemuServer::check_running($source_vmid) && $drive_key !~ m/unused[0-9]/;
> +
> +	    return ($source_conf, $target_conf);
> +	};
> +
> +	my $reassign_func = sub {
> +	    return PVE::QemuConfig->lock_config($source_vmid, sub {
> +		return PVE::QemuConfig->lock_config($target_vmid, sub {
> +		    my ($source_conf, $target_conf) = &$load_and_check_configs();
> +
> +		    PVE::Cluster::log_msg('info', $authuser, "reassign disk VM $source_vmid: reassign --disk ${drive_key} --target_vmid $target_vmid");
> +
> +		    my $new_volid = PVE::Storage::reassign_volume($storecfg, $source_volid, $target_vmid);
> +
> +		    delete $source_conf->{$drive_key};
> +		    PVE::QemuConfig->write_config($source_vmid, $source_conf);
> +		    print "removing disk '${drive_key}' from VM '${source_vmid}'\n";

this message is misleading, as the tense doesn't match the actual state 
of the source VM ;)
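
e.g. swap the two lines so the tense matches (sketch):

    PVE::QemuConfig->write_config($source_vmid, $source_conf);
    print "removed disk '${drive_key}' from VM '${source_vmid}'\n";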

> +
> +		    # remove possible replication snapshots
> +		    my $had_snapshots = 0;
> +		    if (PVE::Storage::volume_has_feature($storecfg, 'replicate', $new_volid)) {
> +			my $snapshots = PVE::Storage::volume_snapshot_list($storecfg, $new_volid);

> +			for my $snap (@$snapshots) {
> +			    next if (substr($snap, 0, 12) ne '__replicate_');
> +
> +			    $had_snapshots = 1;
> +			    PVE::Storage::volume_snapshot_delete($storecfg, $new_volid, $snap);
> +			}
> +			print "Disk '${drive_key}:${source_volid}' was replicated. On the next replication run it will be cleaned up on the replication target.\n"
> +			    if $had_snapshots;
> +		    }

this can fail, so either wrap it in eval, move it below the following 
block, or move it above the removal from the source config. the last 
option is potentially problematic, as we would need to get the 
replication lock then..

also, isn't this basically what PVE::Replication::prepare does?
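
a sketch of the eval variant, reusing the block above:

    eval {
        if (PVE::Storage::volume_has_feature($storecfg, 'replicate', $new_volid)) {
            # only clean up replication snapshots, leave everything else alone
            my $snapshots = PVE::Storage::volume_snapshot_list($storecfg, $new_volid);
            for my $snap (@$snapshots) {
                next if $snap !~ m/^__replicate_/;
                PVE::Storage::volume_snapshot_delete($storecfg, $new_volid, $snap);
            }
        }
    };
    warn "failed to remove replication snapshots of '${new_volid}': $@\n" if $@;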

> +
> +		    my $key;
> +		    eval { $key = PVE::QemuConfig->add_unused_volume($target_conf, $new_volid) };
> +		    if (my $err = $@) {
> +			print "adding moved disk '${new_volid}' to VM '${target_vmid}' config failed.\n";

I thought we are reassigning a disk here ;) might want to mention that 
adding it as an unused disk failed, which is basically only possible if 
there is no free unused slot left. freeing up a slot and rescanning the 
VMID will fix the issue.
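
e.g. (sketch, exact wording up to you):

    print "failed to add '${new_volid}' as an unused disk to VM '${target_vmid}': ${err}"
        . "free up an unused slot and rescan VM ${target_vmid} to pick the volume up again.\n";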

> +			return 0;
> +		    }
> +
> +		    PVE::QemuConfig->write_config($target_vmid, $target_conf);
> +		    print "adding disk to VM '${target_vmid}' as '${key}: ${new_volid}'\n";

again, order is wrong here - if the write_config fails, the print never 
happens. if the write was successful, the tense of the print is wrong.
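
e.g. (sketch):

    PVE::QemuConfig->write_config($target_vmid, $target_conf);
    print "added disk to VM '${target_vmid}' as '${key}: ${new_volid}'\n";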

> +		});
> +	    });
> +	};
> +
> +	&$load_and_check_configs();
> +
> +	return $rpcenv->fork_worker('qmreassign', $source_vmid, $authuser, $reassign_func);
> +    }});
> +
>  1;
> diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
> index 91c33f8..d2f59cd 100644
> --- a/PVE/QemuServer/Drive.pm
> +++ b/PVE/QemuServer/Drive.pm
> @@ -383,6 +383,10 @@ sub valid_drive_names {
>              'efidisk0');
>  }
>  
> +sub valid_drive_names_with_unused {
> +    return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
> +}
> +
>  sub is_valid_drivename {
>      my $dev = shift;
>  
> -- 
> 2.20.1