[pve-devel] [PATCH v2 qemu-server 2/3] Avoid collisions of unused disks when doing online migration with --targetstorage

Fabian Grünbichler f.gruenbichler at proxmox.com
Thu Nov 21 16:46:48 CET 2019


On November 4, 2019 11:23 am, Fabian Ebner wrote:
> On 10/31/19 10:19 AM, Thomas Lamprecht wrote:
>> On 10/30/19 10:54 AM, Fabian Ebner wrote:
>>> Doing an online migration with --targetstorage and two unused disks with the
>>> same name on different storages failed, because they would collide on the
>>> target storage. This patch makes sure that we don't use the same name twice.
>>>
>>> Signed-off-by: Fabian Ebner <f.ebner at proxmox.com>
>>> ---
>>>   PVE/QemuMigrate.pm | 8 ++++++--
>>>   1 file changed, 6 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
>>> index 045f3b0..94c5764 100644
>>> --- a/PVE/QemuMigrate.pm
>>> +++ b/PVE/QemuMigrate.pm
>>> @@ -10,6 +10,7 @@ use PVE::INotify;
>>>   use PVE::Tools;
>>>   use PVE::Cluster;
>>>   use PVE::Storage;
>>> +use PVE::Storage::Plugin;
>>>   use PVE::QemuServer;
>>>   use Time::HiRes qw( usleep );
>>>   use PVE::RPCEnvironment;
>>> @@ -466,9 +467,12 @@ sub sync_disks {
>>>   		next if $rep_volumes->{$volid};
>>>   		push @{$self->{volumes}}, $volid;
>>>   
>>> +		my $targetvolname = undef;
>>>   		if (defined($override_targetsid) && $self->{running} && $ref eq 'storage') {
>>> -		    my (undef, $targetvolname) = PVE::Storage::parse_volume_id($volid);
>>> +		    my $scfg = PVE::Storage::storage_config($self->{storecfg}, $targetsid);
>>> +		    $targetvolname = PVE::Storage::Plugin::get_next_vm_diskname($self->{online_unused_volumes}, $targetsid, $vmid, undef, $scfg, 0);
>>>   		    push @{$self->{online_unused_volumes}}, "${targetsid}:${targetvolname}";
>>> +		    $self->log('info', "$volid will become ${targetsid}:${targetvolname} on the target node");
>> 
>> But isn't the above racy? I can only assume that the new volid is
>> OK once I have allocated the image on the target storage, or is it
>> OK because we're protected by the migrate lock anyway?
>> 
> 
> Yes, we are protected by the migrate lock.
> I assume that the target storage does not currently contain any disks 
> with our VM ID. Using get_next_vm_diskname doesn't look at which disks 
> are present on the storage, but uses the list we pass along to determine 
> the next free disk name. If there is a problem with storage_migrate we 
> die anyway, so we can be sure that our online_unused_volumes list is 
> consistent with the disks that land on the storage.
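For illustration, the naming logic described in the quote above can be sketched in Python (the actual helper is PVE::Storage::Plugin::get_next_vm_diskname in Perl, which also handles storage-specific naming schemes and format suffixes; this is a simplified approximation):

```python
import re

def next_vm_diskname(existing_volids, vmid):
    # collect the disk indices already taken in the list we pass along --
    # note that disks actually present on the storage are never consulted
    used = set()
    pattern = re.compile(rf"vm-{vmid}-disk-(\d+)")
    for volid in existing_volids:
        m = pattern.search(volid)
        if m:
            used.add(int(m.group(1)))
    # pick the lowest free index
    n = 0
    while n in used:
        n += 1
    return f"vm-{vmid}-disk-{n}"

# e.g. after the first of two same-named unused disks has been assigned:
next_vm_diskname(["targetsid:vm-100-disk-0"], 100)  # -> "vm-100-disk-1"
```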

That's a pretty big assumption, though: e.g., there might be orphaned 
disks, or replication in progress on that target storage.

IMHO, the right approach would be to teach 'storage_migrate' to allocate 
the disk (on storage X, with size Y, with preferred format Z), and return the 
allocated volid so that we can put it into the (new) config as an unused 
volume. Maybe we need a new helper/wrapper around storage_migrate for 
that?
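A rough sketch of what such a wrapper could look like (all names here are made up for illustration, not existing API):

```python
def migrate_with_allocation(alloc_image, do_migrate, target_sid, vmid, size, fmt):
    # Hypothetical wrapper sketch -- the callbacks stand in for the
    # storage layer's allocation and data-transfer steps.
    # 1. let the target storage allocate the volume (storage X, size Y,
    #    preferred format Z), so it hands back a guaranteed-free name
    target_volid = alloc_image(target_sid, vmid, fmt, size)
    # 2. stream the data into the freshly allocated volume
    do_migrate(target_volid)
    # 3. return the allocated volid so the caller can record it in the
    #    (new) config as an unused volume
    return target_volid
```

With this shape, avoiding collisions with orphaned disks or concurrent replication becomes the storage layer's job at allocation time, instead of something sync_disks has to predict from a locally built list.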

Aaron (CC-ed) is trying to implement similar functionality for 
pve-container at the moment, but there we have the additional complication 
that we also have to differentiate between block/image-based volumes and 
subvols. It would probably be a good idea to discuss the needed 
interfaces/changes ;)

>>>   		}
>>>   
>>>   		my $opts = $self->{opts};
>>> @@ -480,7 +484,7 @@ sub sync_disks {
>>>   		$bwlimit = $bwlimit * 1024 if defined($bwlimit);
>>>   
>>>   		PVE::Storage::storage_migrate($self->{storecfg}, $volid, $self->{ssh_info}, $targetsid,
>>> -					      undef, undef, undef, $bwlimit, $insecure, $with_snapshots);
>>> +					      $targetvolname, undef, undef, $bwlimit, $insecure, $with_snapshots);
>>>   	    }
>>>   	}
>>>       };
>>>
> 
> _______________________________________________
> pve-devel mailing list
> pve-devel at pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 



