[pve-devel] [PATCH v7 qemu-server 2/5] disk reassign: add API endpoint

Aaron Lauterer a.lauterer at proxmox.com
Tue Apr 20 18:34:09 CEST 2021


The goal of this new API endpoint is to provide an easy way to move a
disk between VMs. Until now this was only possible with manual
intervention: either by renaming the VM disk on the storage or by
manually adding the disk's volid to the config of the other VM.

The latter can easily cause unexpected behavior, such as a disk
attached to VM B being deleted if it used to be a disk of VM A. This
happens because PVE assumes that the VMID in the volname always matches
the VM the disk is attached to (e.g. a volume named `vm-100-disk-0` is
assumed to belong to VM 100) and would therefore remove any disk with
VMID A when VM A is deleted.

The term `reassign` was chosen as it is not yet used
for VM disks.

Signed-off-by: Aaron Lauterer <a.lauterer at proxmox.com>
---

v6 -> v7:
this was a rather large change:

* added new parameter to specify the target disk config key
    * add check that the target key is free
    * use $update_vm_api to add the disk to the new VM (hotplug if possible)
* renamed parameters and vars to clearly distinguish between source and
  target VMs / disk config keys
* expand description to mention that a rename works only between VMs on
  the same node
* check if target drive type supports all config parameters of the disk
* removed the cluster log call. It was there to emulate the behavior of
  move_disk, but even there it seems to log a very outdated syntax...
* reordered the reassignment procedure
    1. reassign/rename volume
    2. remove from source vm config
    3. update target vm
    4. remove potential old replication snapshots

    This should help reduce the chance that a disk ends up in limbo.
    If the rename/reassign on the storage level fails, no VM config has
    been changed yet. If the replication snapshot removal fails, nothing
    happens to the VMs, but it needs to be cleaned up manually. A
    simplified sketch of this ordering follows at the end of this list.
* fixed parameter for replication snapshot removal (thx @febner for the
  hint)
* changed the worker ID to show which VM & disk is reassigned to which
  (e.g. `100-scsi1>101-scsi2`).
    I tried to find a way that does not interfere with the UPID parser.
    AFAICT this one works okay now. The GUI has a bit of a glitch where
    it replaces - with / in the title of the task's detail view.
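
As referenced above, a rough sketch of the new reassignment ordering
(simplified pseudocode based on the patch below; locking, checks and
most error handling omitted):

    # 1. storage level first: no VM config has been touched yet
    my $new_volid = PVE::Storage::reassign_volume($storecfg, $source_volid, $target_vmid);

    # 2. drop the disk from the source VM config
    delete $source_conf->{$source_drive};
    PVE::QemuConfig->write_config($source_vmid, $source_conf);

    # 3. attach the new volid to the target VM (hotplug if possible)
    &$update_vm_api({ node => $node, vmid => $target_vmid, $target_drive => $drive_string }, 1);

    # 4. best effort: remove leftover replication snapshots
    eval { PVE::Replication::prepare($storecfg, [$new_volid], undef, 1, undef, $logfunc) };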

v5 -> v6:
* guard replication snapshot cleanup
    in addition to the eval, that code is now only run if the volume is
    on a storage with the 'replicate' feature
* add permission check for target vmid
* changed regex to match unused keys better

thx @Fabian for these suggestions/catching problems

v4 -> v5:
* implemented suggestions from Fabian [1]
    * logging before action
    * improving description
    * improving error messages
    * using Replication::prepare to remove replication snapshots
    * check if disk is physical disk using /dev/...

v3 -> v4: nothing

v2 -> v3:
* reordered the locking as discussed with Fabian [0] to:
run checks
    fork worker
	lock source config
	    lock target config
		run checks
		...

* added more checks
    * will not reassign to or from templates
    * will not reassign if VM has snapshots present
* cleanup if disk used to be replicated
* made task log slightly more verbose
* integrated general recommendations regarding code
* renamed `disk` to `drive_key`
* prepended some vars with `source_` for easier distinction

v1 -> v2: print config key and volid info at the end of the job so it
shows up on the CLI and task log

rfc -> v1:
* add support to reassign unused disks
* add support to provide a config digest for the target vm
* add additional check if disk key is present in config
* reorder checks a bit

In order to support unused disks I had to extend
PVE::QemuServer::Drive::valid_drive_names for the API parameter
validation.

Checks are ordered so that cheap tests run first, giving the call the
chance to fail early.

The check that both VMs are present on the node is a bit redundant,
because locking the config files will fail if a VM is not present. But
with the additional check we can provide a useful error message to the
user instead of a "Configuration file xyz does not exist" error.
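
For illustration, once the patch is applied, a call to the new endpoint
could look roughly like this (hypothetical node name and VMIDs; exact
path semantics depend on how the {vmid} placeholder is resolved):

    pvesh create /nodes/pve1/qemu/100/reassign_disk \
        --source-vmid 100 --target-vmid 101 \
        --source-drive scsi1 --target-drive scsi2

The optional --source-digest and --target-digest parameters can be
passed to guard against concurrent config modifications.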

[0] https://lists.proxmox.com/pipermail/pve-devel/2020-September/044930.html
[1] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046030.html
 PVE/API2/Qemu.pm        | 220 ++++++++++++++++++++++++++++++++++++++++
 PVE/QemuServer/Drive.pm |   4 +
 2 files changed, 224 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index c56b609..b90a83b 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
 use PVE::VZDump::Plugin;
 use PVE::DataCenterConfig;
 use PVE::SSHInfo;
+use PVE::Replication;
 
 BEGIN {
     if (!$ENV{PVE_GENERATING_DOCS}) {
@@ -4395,4 +4396,223 @@ __PACKAGE__->register_method({
 	return PVE::QemuServer::Cloudinit::dump_cloudinit_config($conf, $param->{vmid}, $param->{type});
     }});
 
+__PACKAGE__->register_method({
+    name => 'reassign_vm_disk',
+    path => '{vmid}/reassign_disk',
+    method => 'POST',
+    protected => 1,
+    proxyto => 'node',
+    description => "Reassign a disk to another VM on the same node",
+    permissions => {
+	description => "You need 'VM.Config.Disk' permissions on /vms/{vmid} and /vms/{target vmid}, and 'Datastore.Allocate' permissions on the storage.",
+	check => [ 'and',
+		   ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
+		   ['perm', '/storage/{storage}', [ 'Datastore.Allocate' ]],
+	    ],
+    },
+    parameters => {
+        additionalProperties => 0,
+	properties => {
+	    node => get_standard_option('pve-node'),
+	    'source-vmid' => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+	    'target-vmid' => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+	    'source-drive' => {
+	        type => 'string',
+		description => "The config key of the disk to reassign (for example, ide0 or scsi1).",
+		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
+	    },
+	    'target-drive' => {
+	        type => 'string',
+		description => "The config key the disk will be reassigned to (for example, ide0 or scsi1).",
+		enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
+	    },
+	    'source-digest' => {
+		type => 'string',
+		description => 'Prevent changes if the current configuration file of the source VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
+		maxLength => 40,
+		optional => 1,
+	    },
+	    'target-digest' => {
+		type => 'string',
+		description => 'Prevent changes if the current configuration file of the target VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
+		maxLength => 40,
+		optional => 1,
+	    },
+	},
+    },
+    returns => {
+	type => 'string',
+	description => "the task ID.",
+    },
+    code => sub {
+	my ($param) = @_;
+
+	my $rpcenv = PVE::RPCEnvironment::get();
+	my $authuser = $rpcenv->get_user();
+
+	my $node = extract_param($param, 'node');
+	my $source_vmid = extract_param($param, 'source-vmid');
+	my $target_vmid = extract_param($param, 'target-vmid');
+	my $source_digest = extract_param($param, 'source-digest');
+	my $target_digest = extract_param($param, 'target-digest');
+	my $source_drive = extract_param($param, 'source-drive');
+	my $target_drive = extract_param($param, 'target-drive');
+
+	my $storecfg = PVE::Storage::config();
+	my $source_volid;
+
+	$rpcenv->check_vm_perm($authuser, $target_vmid, undef, ['VM.Config.Disk'])
+	    if $authuser ne 'root@pam';
+
+	die "Reassigning a disk to the same VM is not possible. Did you mean to move the disk?\n"
+	    if $source_vmid eq $target_vmid;
+
+	my $load_and_check_configs = sub {
+	    my $vmlist = PVE::Cluster::get_vmlist()->{ids};
+	    die "Both VMs need to be on the same node ($vmlist->{$source_vmid}->{node}) but target VM is on $vmlist->{$target_vmid}->{node}.\n"
+		if $vmlist->{$source_vmid}->{node} ne $vmlist->{$target_vmid}->{node};
+
+	    my $source_conf = PVE::QemuConfig->load_config($source_vmid);
+	    PVE::QemuConfig->check_lock($source_conf);
+	    my $target_conf = PVE::QemuConfig->load_config($target_vmid);
+	    PVE::QemuConfig->check_lock($target_conf);
+
+	    die "Can't reassign disks from or to templates\n"
+		if ($source_conf->{template} || $target_conf->{template});
+
+	    if ($source_digest) {
+		eval { PVE::Tools::assert_if_modified($source_digest, $source_conf->{digest}) };
+		if (my $err = $@) {
+		    die "VM ${source_vmid}: ${err}";
+		}
+	    }
+
+	    if ($target_digest) {
+		eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
+		if (my $err = $@) {
+		    die "VM ${target_vmid}: ${err}";
+		}
+	    }
+
+	    die "Disk '${source_drive}' does not exist\n"
+		if !defined($source_conf->{$source_drive});
+
+	    die "Target disk key '${target_drive}' is already in use\n"
+		if exists $target_conf->{$target_drive};
+
+	    my $drive = PVE::QemuServer::parse_drive(
+		$source_drive,
+		$source_conf->{$source_drive},
+	    );
+	    $source_volid = $drive->{file};
+
+	    die "disk '${source_drive}' has no associated volume\n"
+		if !$source_volid;
+	    die "CD drive contents can't be reassigned\n"
+		if PVE::QemuServer::drive_is_cdrom($drive, 1);
+	    die "Can't reassign physical disk\n" if $drive->{file} =~ m|^/dev/|;
+	    die "Can't reassign disk used by a snapshot\n"
+		if PVE::QemuServer::Drive::is_volume_in_use(
+		    $storecfg,
+		    $source_conf,
+		    $source_drive,
+		    $source_volid,
+		);
+
+	    die "Storage does not support the reassignment of this disk\n"
+		if !PVE::Storage::volume_has_feature(
+		    $storecfg,
+		    'reassign',
+		    $source_volid,
+		);
+
+	    die "Cannot reassign disk while the source VM is running\n"
+		if PVE::QemuServer::check_running($source_vmid)
+		    && $source_drive !~ m/^unused\d+$/;
+
+	    if ($target_drive !~ m/^unused\d+$/ && $target_drive =~ m/^([^\d]+)\d+$/) {
+		my $interface = $1;
+		my $desc = PVE::JSONSchema::get_standard_option("pve-qm-${interface}");
+		eval {
+		    PVE::JSONSchema::parse_property_string(
+			$desc->{format},
+			$source_conf->{$source_drive},
+		    )
+		};
+		if (my $err = $@) {
+		    die "Cannot reassign disk: ${err}";
+		}
+	    }
+
+	    return ($source_conf, $target_conf);
+	};
+
+	my $logfunc = sub {
+	    my ($msg) = @_;
+	    print STDERR "$msg\n";
+	};
+
+	my $reassign_func = sub {
+	    return PVE::QemuConfig->lock_config($source_vmid, sub {
+		return PVE::QemuConfig->lock_config($target_vmid, sub {
+		    my ($source_conf, $target_conf) = &$load_and_check_configs();
+
+		    my $drive_param = PVE::QemuServer::parse_drive(
+			$target_drive,
+			$source_conf->{$source_drive},
+		    );
+
+		    print "reassigning disk '$source_drive'\n";
+		    my $new_volid = PVE::Storage::reassign_volume(
+			$storecfg,
+			$source_volid,
+			$target_vmid,
+		    );
+
+		    $drive_param->{file} = $new_volid;
+
+		    delete $source_conf->{$source_drive};
+		    print "removing disk '${source_drive}' from VM '${source_vmid}' config\n";
+		    PVE::QemuConfig->write_config($source_vmid, $source_conf);
+
+		    my $drive_string = PVE::QemuServer::print_drive($drive_param);
+		    &$update_vm_api(
+			{
+			    node => $node,
+			    vmid => $target_vmid,
+			    digest => $target_digest,
+			    $target_drive => $drive_string,
+			},
+			1,
+		    );
+
+		    # remove possible replication snapshots
+		    if (PVE::Storage::volume_has_feature(
+			    $storecfg,
+			    'replicate',
+			    $source_volid),
+		    ) {
+			eval {
+			    PVE::Replication::prepare(
+				$storecfg,
+				[$new_volid],
+				undef,
+				1,
+				undef,
+				$logfunc,
+			    )
+			};
+			if (my $err = $@) {
+			    print "Failed to remove replication snapshots on reassigned disk: $err";
+			    print "Manual cleanup could be necessary.\n";
+			}
+		    }
+		});
+	    });
+	};
+
+	&$load_and_check_configs();
+
+	return $rpcenv->fork_worker('qmreassign', "${source_vmid}-${source_drive}>${target_vmid}-${target_drive}", $authuser, $reassign_func);
+    }});
+
 1;
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index 9016a43..db0f3c9 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -392,6 +392,10 @@ sub valid_drive_names {
             'efidisk0');
 }
 
+sub valid_drive_names_with_unused {
+    return (valid_drive_names(), map {"unused$_"} (0 .. ($MAX_UNUSED_DISKS -1)));
+}
+
 sub is_valid_drivename {
     my $dev = shift;
 
-- 
2.20.1
