[pve-devel] [PATCH storage] Actually use target_volid

Fabian Ebner f.ebner at proxmox.com
Mon Sep 16 09:42:24 CEST 2019


On 9/12/19 11:03 AM, Fabian Ebner wrote:
> On 9/11/19 1:18 PM, Thomas Lamprecht wrote:
>> On 11.09.19 11:46, Fabian Ebner wrote:
>>> Migration with --targetstorage was broken because of this.
>>>
>>> Signed-off-by: Fabian Ebner <f.ebner at proxmox.com>
>>> ---
>>>   PVE/Storage.pm | 4 ++--
>>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/PVE/Storage.pm b/PVE/Storage.pm
>>> index 755eca8..31eeb21 100755
>>> --- a/PVE/Storage.pm
>>> +++ b/PVE/Storage.pm
>>> @@ -586,7 +586,7 @@ sub storage_migrate {
>>>       }
>>>       }
>>>
>>> -    my @formats = volume_transfer_formats($cfg, $volid, $volid, $snapshot, $base_snapshot, $with_snapshots);
>>> +    my @formats = volume_transfer_formats($cfg, $volid, $target_volid, $snapshot, $base_snapshot, $with_snapshots);
>>>       die "cannot migrate from storage type '$scfg->{type}' to '$tcfg->{type}'\n" if !@formats;
>>>       my $format = $formats[0];
>>>
>>> @@ -598,7 +598,7 @@ sub storage_migrate {
>>>
>>>       $with_snapshots = $with_snapshots ? 1 : 0; # sanitize for passing as cli parameter
>>>       my $send = ['pvesm', 'export', $volid, $format, '-', '-with-snapshots', $with_snapshots];
>>> -    my $recv = [@$ssh, '--', 'pvesm', 'import', $volid, $format, $import_fn, '-with-snapshots', $with_snapshots];
>>> +    my $recv = [@$ssh, '--', 'pvesm', 'import', $target_volid, $format, $import_fn, '-with-snapshots', $with_snapshots];
>>>       if (defined($snapshot)) {
>>>       push @$send, '-snapshot', $snapshot
>>>       }
>>>
>> Without testing: looks OK, thanks for catching it.
>>
>> @Wolfgang, could you also take a look at this? It seems like a
>> regression from your change to use only pvesm export/import in
>> commit da72898cc65b2c63c25ac1988b256936f67bd72c.
>
> Tested a bit more and found that this alone won't do it. One problem
> is that the VM config file is not updated, so it still contains the
> old storage->disk association. The second is that two volumes with
> the same name on different source storages, e.g.
> local_storage1/vm-100-disk-0 and local_storage2/vm-100-disk-0, are
> not renamed and will therefore collide on target_storage.
>
>

It turns out that the original patch should be fine after all. The
above-mentioned problems actually *don't* happen when migration is
used as intended.
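
For reference, the code path the patch fixes boils down to the
following. This is an abridged sketch of storage_migrate rather than
the verbatim source (variable names as in the diff above; $format,
$ssh, $import_fn and $with_snapshots are set earlier in the function
and elided here):

    use strict;
    use warnings;
    use PVE::Storage;

    # Abridged sketch, not verbatim PVE/Storage.pm.
    my ($storeid, $volname) = PVE::Storage::parse_volume_id($volid);
    $target_volname = $volname if !defined($target_volname);
    my $target_volid = "${target_storeid}:${target_volname}";

    # Export reads from the source volid; import must write to the
    # target volid - passing $volid in both places was the bug.
    my $send = ['pvesm', 'export', $volid, $format, '-',
        '-with-snapshots', $with_snapshots];
    my $recv = [@$ssh, '--', 'pvesm', 'import', $target_volid, $format,
        $import_fn, '-with-snapshots', $with_snapshots];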

I only encountered them because I ran 'qm migrate 109 node0
--targetstorage myzpool --online 1 --with-local-disks 1' while the VM
was shut down. We don't yet check whether the machine is actually
running when --online is specified, so --targetstorage was still
applied; with an offline machine that doesn't work, which leads to
exactly the problems described above.
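
To illustrate the collision from the quoted message, a hypothetical
sketch (the storage and disk names are made up; parse_volume_id is the
existing helper in PVE::Storage):

    use strict;
    use warnings;
    use PVE::Storage;

    # Two local disks that only differ in their source storage:
    my @local_disks = ('local_storage1:vm-100-disk-0',
        'local_storage2:vm-100-disk-0');

    for my $volid (@local_disks) {
        my (undef, $volname) = PVE::Storage::parse_volume_id($volid);
        # Without allocating a fresh name on the target, both disks map
        # to the same target volid, so the second import collides:
        print "$volid -> myzpool:$volname\n";
    }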

I'll send a patch right away that checks that the VM is actually
running when --online is specified.
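
Roughly what I have in mind, as a minimal sketch (the function name
and exact placement in the migrate API handler are hypothetical;
check_running is the existing helper in PVE::QemuServer):

    use strict;
    use warnings;
    use PVE::QemuServer;

    # Reject --online for VMs that are not actually running, so we
    # never end up in the offline code path with --targetstorage set.
    sub assert_can_migrate_online {
        my ($vmid) = @_;
        die "VM $vmid is not running - cannot migrate with --online\n"
            if !PVE::QemuServer::check_running($vmid);
    }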




