[pve-devel] [RFC qemu-server] clone disk: fix #3970 catch same source and destination

Aaron Lauterer a.lauterer at proxmox.com
Tue Apr 5 09:28:30 CEST 2022



On 4/4/22 17:26, Fabian Grünbichler wrote:
> On April 1, 2022 5:24 pm, Aaron Lauterer wrote:
>> In rare situations, the source and target paths can end up being the
>> same. For example, if the disk image is to be copied from one RBD
>> storage to another one on a different Ceph cluster, but both pools have
>> the same name.
>>
>> In this situation, the clone operation will clone the image onto itself,
>> and one will end up with an empty destination volume.
>>
>> This patch does not solve the underlying issue, but it is a first step to
>> avoid potential data loss, for example when the 'delete source' option
>> is enabled as well.
>>
>> We also need to delete the newly created image right away, because the
>> regular cleanup gets confused and tries to remove the source image. This
>> will fail, and we are left with an orphaned image that cannot be removed
>> easily, because the same underlying root cause (same path) falsely
>> triggers the "Drive::is_volume_in_use" check.
> 
> Isn't this technically - just like for the container case - a problem in
> general, not just for cloning a disk? I haven't tested this in practice,
> but since you already have the reproducing setup ;)
> 
> e.g., given the following:
> - storage A, krbd, cluster A, pool foo
> - storage B, krbd, cluster B, pool foo
> - VM 123, with scsi0: A:vm-123-disk-0 and no volumes on B
> - qm set 123 -scsi1: B:1
> 
> The next free slot on B is 'vm-123-disk-0', which will be allocated. Mapping
> will skip the actual map step, since the RBD path already exists (provided
> scsi0's volume is already activated). The returned path will point to
> the mapped blockdev corresponding to A:vm-123-disk-0, not B:..
> 
> The guest then writes to scsi1, likely corrupting whatever is on scsi0,
> since most things that tend to end up on guest disks are not
> multi-writer-safe (or does something along the way notice it?)
> 
> If the above is the case, it might actually be prudent to just put the
> check from your other patch into RBDPlugin.pm's alloc method (and into
> clone and rename?), since we'd want to block any allocations on affected
> systems.

Tested it, and yep... unfortunately the wrong disk is attached. I am going to implement the check in RBDPlugin.pm.
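
To make that concrete, the direction I have in mind is roughly the
following (sub name, message wording and exact placement are only an
illustrative sketch, not the actual patch):

    # Refuse to allocate an RBD image if the krbd device path it would
    # get is already present on this node, which indicates that an image
    # with the same pool/name from a different Ceph cluster is mapped.
    sub assert_blockdev_path_free {
        my ($pool, $name) = @_;

        # krbd exposes mapped images under /dev/rbd/<pool>/<image>
        my $path = "/dev/rbd/$pool/$name";

        die "refusing to allocate '$name': '$path' already exists, "
            ."most likely an image with the same pool/name from a "
            ."different Ceph cluster is mapped on this node\n"
            if -e $path;
    }

    # called before the actual image creation in alloc_image, e.g.:
    # assert_blockdev_path_free($scfg->{pool}, $name) if $scfg->{krbd};

As you suggested, the same check would probably need to be called from
the clone and rename paths as well.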
