[pve-devel] [RFC qemu-server] clone disk: fix #3970 catch same source and destination

Aaron Lauterer a.lauterer at proxmox.com
Fri Apr 1 17:24:24 CEST 2022


In rare situations, the source and target paths can end up being the same.
For example, when a disk image is copied from one RBD storage to another
one on different Ceph clusters, but both pools have the same name.

In this situation, the clone operation clones the image onto itself, and
one ends up with an empty destination volume.

This patch does not solve the underlying issue, but it is a first step to
avoid potential data loss, for example when the 'delete source' option is
enabled as well.

We also need to delete the newly created image right away, because the
regular cleanup gets confused and tries to remove the source image. That
removal fails and leaves an orphaned image which cannot be removed easily,
because the same underlying root cause (identical paths) falsely triggers
the "Drive::is_volume_in_use" check.

Signed-off-by: Aaron Lauterer <a.lauterer at proxmox.com>
---
We could consider keeping this as a permanent safety check, since AFAICT
there should never be a situation where both paths are legitimately the same.
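As an illustration (not part of the patch itself), the guard boils down to
comparing the two resolved storage paths and bailing out before any data is
copied or the source is deleted. A minimal standalone sketch in Perl, where
assert_distinct_paths is a hypothetical helper name and the example paths
are made up:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical helper mirroring the guard added by this patch: die before
# any copy happens when source and destination resolve to the same path.
sub assert_distinct_paths {
    my ($src_path, $dst_path) = @_;
    die "Source and destination are the same!\n" if $src_path eq $dst_path;
    return 1;
}

# Two RBD pools with the same name on *different* Ceph clusters can still
# resolve to an identical path string, which is exactly the failure mode.
assert_distinct_paths('rbd/pool-a/vm-100-disk-0', 'rbd/pool-b/vm-100-disk-0');

# Same resolved path on both sides: the guard dies instead of cloning.
eval { assert_distinct_paths('rbd/pool/vm-100-disk-0', 'rbd/pool/vm-100-disk-0') };
print "caught: $@" if $@;
```

In the actual patch the paths come from PVE::Storage::path(), and the new
volume is freed before dying so the regular cleanup does not touch the
source image.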

 PVE/QemuServer.pm | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 6c80c0c..a1aa4f2 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -7633,6 +7633,17 @@ sub clone_disk {
 
 	PVE::Storage::activate_volumes($storecfg, [$newvolid]);
 
+	my $src_path = PVE::Storage::path($storecfg, $drive->{file});
+	my $dst_path = PVE::Storage::path($storecfg, $newvolid);
+
+	if ($src_path eq $dst_path) {
+	    # Delete the new volume right away. Doing it later will try to remove the old volume and
+	    # fail. Currently the only known case is RBD. See bugs 3969 and 3970 for more details.
+	    PVE::Storage::vdisk_free($storecfg, $newvolid);
+	    pop @$newvollist;
+	    die "Source and destination are the same!\n";
+	}
+
 	if (drive_is_cloudinit($drive)) {
 	    # when cloning multiple disks (e.g. during clone_vm) it might be the last disk
 	    # if this is the case, we have to complete any block-jobs still there from
@@ -7650,8 +7661,6 @@ sub clone_disk {
 		# the relevant data on the efidisk may be smaller than the source
 		# e.g. on RBD/ZFS, so we use dd to copy only the amount
 		# that is given by the OVMF_VARS.fd
-		my $src_path = PVE::Storage::path($storecfg, $drive->{file});
-		my $dst_path = PVE::Storage::path($storecfg, $newvolid);
 
 		# better for Ceph if block size is not too small, see bug #3324
 		my $bs = 1024*1024;
-- 
2.30.2
