[pve-devel] [PATCH v5 qemu-server 14/19] Update volume IDs in one go
Fabian Grünbichler
f.gruenbichler at proxmox.com
Thu Apr 9 09:53:01 CEST 2020
small nit inline
On April 8, 2020 11:25 am, Fabian Ebner wrote:
> Use 'update_volume_ids' for the live-migrated disks as well.
>
> Signed-off-by: Fabian Ebner <f.ebner at proxmox.com>
> ---
> PVE/QemuMigrate.pm | 26 ++++++++++----------------
> 1 file changed, 10 insertions(+), 16 deletions(-)
>
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
> index 14f3fcb..eb78537 100644
> --- a/PVE/QemuMigrate.pm
> +++ b/PVE/QemuMigrate.pm
> @@ -838,14 +838,20 @@ sub phase2 {
> my $source_drive = PVE::QemuServer::parse_drive($drive, $conf->{$drive});
> my $target_drive = PVE::QemuServer::parse_drive($drive, $target->{drivestr});
>
> - my $source_sid = PVE::Storage::Plugin::parse_volume_id($source_drive->{file});
> - my $target_sid = PVE::Storage::Plugin::parse_volume_id($target_drive->{file});
> + my $source_volid = $source_drive->{file};
> + my $target_volid = $target_drive->{file};
> +
> + my $source_sid = PVE::Storage::Plugin::parse_volume_id($source_volid);
> + my $target_sid = PVE::Storage::Plugin::parse_volume_id($target_volid);
>
> my $bwlimit = PVE::Storage::get_bandwidth_limit('migration', [$source_sid, $target_sid], $opt_bwlimit);
> my $bitmap = $target->{bitmap};
>
> $self->log('info', "$drive: start migration to $nbd_uri");
> PVE::QemuServer::qemu_drive_mirror($vmid, $drive, $nbd_uri, $vmid, undef, $self->{storage_migration_jobs}, 'skip', undef, $bwlimit, $bitmap);
> +
> + $self->{volume_map}->{$source_volid} = $target_volid;
> + $self->log('info', "volume '$source_volid' is '$target_volid' on the target\n");
potential followup: if we do this before attempting to start the mirror
(which might die), it would make the cleanup handling easier. even
better would be allocating each disk individually before calling start -
then we'd also avoid the problem of 'the third of five disks failed to
allocate, but we haven't told you about the first two yet' ;) but for
now, just moving these two lines up by two lines would simplify the last
patch of this series.
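
for illustration, a minimal sketch of the 'for now' variant (same
variable names as in the hunk above; untested, the reordering is the
only change):

    $self->log('info', "$drive: start migration to $nbd_uri");

    # record the mapping (and log it) before the mirror is started, so a
    # die() inside qemu_drive_mirror still leaves volume_map complete
    # for the cleanup paths
    $self->{volume_map}->{$source_volid} = $target_volid;
    $self->log('info', "volume '$source_volid' is '$target_volid' on the target\n");

    PVE::QemuServer::qemu_drive_mirror($vmid, $drive, $nbd_uri, $vmid, undef,
        $self->{storage_migration_jobs}, 'skip', undef, $bwlimit, $bitmap);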
> }
> }
>
> @@ -1114,13 +1120,6 @@ sub phase3_cleanup {
>
> my $tunnel = $self->{tunnel};
>
> - # Needs to happen before printing the drives that were migrated via qemu,
> - # since a new volume on the target might have the same ID as an old volume
> - # on the source and update_volume_ids relies on matching IDs in the config.
> - if ($self->{volume_map}) {
> - PVE::QemuConfig->update_volume_ids($conf, $self->{volume_map});
> - }
> -
> if ($self->{storage_migration}) {
> # finish block-job with block-job-cancel, to disconnect source VM from NBD
> # to avoid it trying to re-establish it. We are in blockjob ready state,
> @@ -1131,16 +1130,11 @@ sub phase3_cleanup {
> eval { PVE::QemuServer::qemu_blockjobs_cancel($vmid, $self->{storage_migration_jobs}) };
> eval { PVE::QemuMigrate::cleanup_remotedisks($self) };
> die "Failed to complete storage migration: $err\n";
> - } else {
> - foreach my $target_drive (keys %{$self->{target_drive}}) {
> - my $drive = PVE::QemuServer::parse_drive($target_drive, $self->{target_drive}->{$target_drive}->{drivestr});
> - $conf->{$target_drive} = PVE::QemuServer::print_drive($drive);
> - }
> }
> }
>
> - # only write after successfully completing the migration
> - if ($self->{volume_map} || $self->{storage_migration}) {
> + if ($self->{volume_map}) {
> + PVE::QemuConfig->update_volume_ids($conf, $self->{volume_map});
> PVE::QemuConfig->write_config($vmid, $conf);
> }
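
for context: $self->{volume_map} is a plain old-volid => new-volid hash
filled in phase2 above, so with a made-up mapping (storage and volume
names are hypothetical) the consolidated block boils down to:

    # e.g. $self->{volume_map} = {
    #     'local-lvm:vm-100-disk-0' => 'target-lvm:vm-100-disk-0',
    # };

    if ($self->{volume_map}) {
        # rewrite all matching volume IDs in $conf in one go, then
        # persist - the config is only written once the migration has
        # actually succeeded
        PVE::QemuConfig->update_volume_ids($conf, $self->{volume_map});
        PVE::QemuConfig->write_config($vmid, $conf);
    }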
>
> --
> 2.20.1
>