[pve-devel] [PATCH 1/5] qemu_drive_mirror : handle multiple jobs
Alexandre DERUMIER
aderumier at odiso.com
Mon Oct 24 15:45:05 CEST 2016
>>This change now means qemu_drive_mirror() can add a job to the job hash
>>and optionally complete _all_ of them. My suggestion was meant more
>>like:
>>
>>qemu_drive_mirror(previous args, new sync arg);
>>
>>when $sync is 1 it finishes that one job, otherwise it returns its name
>>to be later passed to qemu_drive_mirror_monitor().
Well, the job finishes in qemu_drive_mirror_monitor().
The current patch behavior is:
qemu_drive_mirror(previous args): finishes the job if !$skipcomplete (so it doesn't break the current move_disk);
if $skipcomplete is set, it starts the mirror, waits until it reaches 100%, and returns.
This is to avoid having too many jobs running in parallel.
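Roughly, the two call modes look like this (a minimal sketch; the drive names and the second volid are just examples):

    my $jobs = {};

    # start a mirror for drive-scsi0 and leave it running at 100%
    # (block-job-complete is skipped so more mirrors can be added later):
    qemu_drive_mirror($vmid, 'scsi0', $dst_volid, $vmiddst,
                      $is_zero_initialized, $jobs, 1);  # $skipcomplete set

    # start a mirror for drive-scsi1, then complete all jobs in $jobs
    # (the old blocking behavior, as used by move_disk):
    qemu_drive_mirror($vmid, 'scsi1', $dst_volid2, $vmiddst,
                      $is_zero_initialized, $jobs);     # $skipcomplete unset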
Can you send me some sample code of what you have in mind, just to be sure?
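My reading of your suggestion is something like this (rough sketch with the $sync flag you describe; correct me if I'm wrong):

    # $sync = 1: mirror one drive and finish that one job before returning
    qemu_drive_mirror($vmid, $drive, $dst_volid, $vmiddst,
                      $is_zero_initialized, 1);

    # $sync = 0: return the job name for later completion
    my $job = qemu_drive_mirror($vmid, $drive, $dst_volid, $vmiddst,
                                $is_zero_initialized, 0);
    # ... start more mirrors, then later:
    qemu_drive_mirror_monitor($vmid, $vmiddst, { $job => 1 });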
>>Can we just say: "storage migration timed out\n" ?
ok
>>With jobs now possibly still running in the background this check might
>>not be valid in all cases anymore. There might still be other mirrors in
>>the background.
>>It would be better to use the loop below to check whether all the jobs
>>in $jobs are also available in @$stats, and error otherwise.
ok
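Something like this then (untested sketch):

    # verify that every job we started still shows up in query-block-jobs
    my %running = map { $_->{device} => 1 } @$stats;
    foreach my $job (keys %$jobs) {
        die "$job: mirroring job seems to have died. Maybe you have bad sectors?\n"
            if !$running{$job};
    }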
>>(btw., do you think it would make sense to limit the output to the jobs
>>from $jobs?)
What do you mean exactly?
>> + if ($readycounter == @$stats) {
>>If you count all jobs and compare to @$stats you're waiting for all jobs
>>regardless of what's in $jobs.
>>
>>I think that if you pass a job list to this function it should only be
>>monitoring those jobs.
Ok, I'll try to improve that.
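Maybe like this (untested sketch), so that only the jobs from $jobs are counted:

    my $readycounter = 0;
    foreach my $stat (@$stats) {
        # skip block jobs we didn't start ourselves
        next if !defined $jobs->{$stat->{device}};
        die "$stat->{device}: job is not mirroring\n" if $stat->{type} ne "mirror";
        $readycounter++ if $stat->{ready} eq 'true';
    }
    if ($readycounter == scalar(keys %$jobs)) {
        print "all drives are ready\n";
        # complete (or skip) as before
    }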
> + eval { PVE::QemuServer::qemu_blockjobs_cancel($vmid) };
>>Same question: does it make sense to cancel all jobs if you have a $jobs
>>list?
It was more for safety.
But I can compare the running block jobs to the list and only cancel specific jobs.
I don't know if we need to run different kinds of jobs (a backup, for example)
at the same time as mirroring?
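For example (rough sketch; making $jobs optional so existing callers keep working):

    sub qemu_blockjobs_cancel {
        my ($vmid, $jobs) = @_;

        while (1) {
            my $stats = vm_mon_cmd($vmid, "query-block-jobs");
            # if a job list was given, only consider those jobs
            my @cancel = $jobs ? grep { $jobs->{$_->{device}} } @$stats : @$stats;
            last if !@cancel;
            foreach my $stat (@cancel) {
                eval { vm_mon_cmd($vmid, "block-job-cancel", device => $stat->{device}); };
            }
            sleep 1;
        }
    }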
----- Original Message -----
From: "Wolfgang Bumiller" <w.bumiller at proxmox.com>
To: "aderumier" <aderumier at odiso.com>
Cc: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Monday, 24 October 2016 14:03:28
Subject: Re: [pve-devel] [PATCH 1/5] qemu_drive_mirror : handle multiple jobs
This change now means qemu_drive_mirror() can add a job to the job hash
and optionally complete _all_ of them. My suggestion was meant more
like:
qemu_drive_mirror(previous args, new sync arg);
when $sync is 1 it finishes that one job, otherwise it returns its name
to be later passed to qemu_drive_mirror_monitor().
On Fri, Oct 21, 2016 at 05:00:44AM +0200, Alexandre Derumier wrote:
> we can use multiple drive_mirror jobs in parallel.
>
> block-job-complete can be skipped if we want to add more mirror jobs later.
>
> also add support for nbd uri to qemu_drive_mirror
>
> Signed-off-by: Alexandre Derumier <aderumier at odiso.com>
> ---
> PVE/QemuServer.pm | 141 +++++++++++++++++++++++++++++++++++-------------------
> 1 file changed, 93 insertions(+), 48 deletions(-)
>
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 728110f..1fcaf0f 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -5801,91 +5801,135 @@ sub qemu_img_format {
> }
>
> sub qemu_drive_mirror {
> - my ($vmid, $drive, $dst_volid, $vmiddst, $is_zero_initialized) = @_;
> + my ($vmid, $drive, $dst_volid, $vmiddst, $is_zero_initialized, $jobs, $skipcomplete) = @_;
>
> - my $storecfg = PVE::Storage::config();
> - my ($dst_storeid, $dst_volname) = PVE::Storage::parse_volume_id($dst_volid);
> + $jobs = {} if !$jobs;
> +
> + my $qemu_target;
> + my $format;
>
> - my $dst_scfg = PVE::Storage::storage_config($storecfg, $dst_storeid);
> + if($dst_volid =~ /^nbd:/) {
> + $qemu_target = $dst_volid;
> + $format = "nbd";
> + } else {
>
> - my $format = qemu_img_format($dst_scfg, $dst_volname);
> + my $storecfg = PVE::Storage::config();
> + my ($dst_storeid, $dst_volname) = PVE::Storage::parse_volume_id($dst_volid);
>
> - my $dst_path = PVE::Storage::path($storecfg, $dst_volid);
> + my $dst_scfg = PVE::Storage::storage_config($storecfg, $dst_storeid);
>
> - my $qemu_target = $is_zero_initialized ? "zeroinit:$dst_path" : $dst_path;
> + $format = qemu_img_format($dst_scfg, $dst_volname);
> +
> + my $dst_path = PVE::Storage::path($storecfg, $dst_volid);
> +
> + $qemu_target = $is_zero_initialized ? "zeroinit:$dst_path" : $dst_path;
> + }
>
> my $opts = { timeout => 10, device => "drive-$drive", mode => "existing", sync => "full", target => $qemu_target };
> $opts->{format} = $format if $format;
>
> - print "drive mirror is starting (scanning bitmap) : this step can take some minutes/hours, depend of disk size and storage speed\n";
> + print "drive mirror is starting for drive-$drive\n";
>
> - my $finish_job = sub {
> - while (1) {
> - my $stats = vm_mon_cmd($vmid, "query-block-jobs");
> - my $stat = @$stats[0];
> - last if !$stat;
> - sleep 1;
> - }
> - };
> + eval { vm_mon_cmd($vmid, "drive-mirror", %$opts); }; #if a job already run for this device,it's throw an error
> + if (my $err = $@) {
> + eval { PVE::QemuServer::qemu_blockjobs_cancel($vmid) };
> + die "mirroring error: $err";
> + }
> +
> + $jobs->{"drive-$drive"} = 1;
> +
> + qemu_drive_mirror_monitor ($vmid, $vmiddst, $jobs, $skipcomplete);
> +}
> +
> +sub qemu_drive_mirror_monitor {
> + my ($vmid, $vmiddst, $jobs, $skipcomplete) = @_;
>
> eval {
> - vm_mon_cmd($vmid, "drive-mirror", %$opts);
> +
> + my $err_complete = 0;
> +
> while (1) {
> my $stats = vm_mon_cmd($vmid, "query-block-jobs");
> - my $stat = @$stats[0];
> - die "mirroring job seem to have die. Maybe do you have bad sectors?" if !$stat;
> - die "error job is not mirroring" if $stat->{type} ne "mirror";
>
> - my $busy = $stat->{busy};
> - my $ready = $stat->{ready};
> + die "too much retries to complete drive mirroring, migration timeout after $err_complete retries" if $err_complete > 300;
Can we just say: "storage migration timed out\n" ?
> + die "Some mirroring jobs seem to be aborded. Maybe do you have bad sectors?" if @$stats < scalar(keys %$jobs);
With jobs now possibly still running in the background this check might
not be valid in all cases anymore. There might still be other mirrors in
the background.
It would be better to use the loop below to check whether all the jobs
in $jobs are also available in @$stats, and error otherwise.
(btw., do you think it would make sense to limit the output to the jobs
from $jobs?)
>
> - if (my $total = $stat->{len}) {
> - my $transferred = $stat->{offset} || 0;
> - my $remaining = $total - $transferred;
> - my $percent = sprintf "%.2f", ($transferred * 100 / $total);
> + last if @$stats == 0 && scalar(keys %$jobs) == 0; #no more block-job running
> + my $readycounter = 0;
>
> - print "transferred: $transferred bytes remaining: $remaining bytes total: $total bytes progression: $percent % busy: $busy ready: $ready \n";
> - }
> + foreach my $stat (@$stats) {
> + die "error job is not mirroring" if $stat->{type} ne "mirror";
>
> + my $busy = $stat->{busy};
> + my $ready = $stat->{ready};
> + if (my $total = $stat->{len}) {
> + my $transferred = $stat->{offset} || 0;
> + my $remaining = $total - $transferred;
> + my $percent = sprintf "%.2f", ($transferred * 100 / $total);
>
> - if ($stat->{ready} eq 'true') {
> + print "$stat->{device} transferred: $transferred bytes remaining: $remaining bytes total: $total bytes progression: $percent % busy: $busy ready: $ready \n";
> + }
> +
> + $readycounter++ if $stat->{ready} eq 'true';
> + }
>
> - last if $vmiddst != $vmid;
> + if ($readycounter == @$stats) {
If you count all jobs and compare to @$stats you're waiting for all jobs
regardless of what's in $jobs.
I think that if you pass a job list to this function it should only be
monitoring those jobs.
> + print "all drives are ready \n";
> + last if $skipcomplete; #do the complete later
>
> - # try to switch the disk if source and destination are on the same guest
> - eval { vm_mon_cmd($vmid, "block-job-complete", device => "drive-$drive") };
> - if (!$@) {
> - &$finish_job();
> + if ($vmiddst && $vmiddst != $vmid) {
> + # if we clone a disk for a new target vm, we don't switch the disk
> + PVE::QemuServer::qemu_blockjobs_cancel($vmid, $jobs);
> last;
> + } else {
> +
> + foreach my $job (keys %$jobs) {
> + # try to switch the disk if source and destination are on the same guest
> + print "$job : Try to complete block job\n";
> +
> + eval { vm_mon_cmd($vmid, "block-job-complete", device => $job) };
> + if ($@ =~ m/cannot be completed/) {
> + print "$job : block job cannot be complete. Try again \n";
> + $err_complete++;
> + }else {
> + print "$job : complete ok\n";
> + delete $jobs->{$job};
> + }
> + }
> }
> - die $@ if $@ !~ m/cannot be completed/;
> }
> sleep 1;
> }
> -
> -
> };
> my $err = $@;
>
> - my $cancel_job = sub {
> - vm_mon_cmd($vmid, "block-job-cancel", device => "drive-$drive");
> - &$finish_job();
> - };
> -
> if ($err) {
> - eval { &$cancel_job(); };
> + eval { PVE::QemuServer::qemu_blockjobs_cancel($vmid) };
Same question: does it make sense to cancel all jobs if you have a $jobs
list?
> die "mirroring error: $err";
> }
>
> - if ($vmiddst != $vmid) {
> - # if we clone a disk for a new target vm, we don't switch the disk
> - &$cancel_job(); # so we call block-job-cancel
> +}
> +
> +sub qemu_blockjobs_cancel {
> + my ($vmid) = @_;
> +
> + while (1) {
> +
> + print "trying to cancel jobs\n";
> + my $stats = vm_mon_cmd($vmid, "query-block-jobs");
> + last if @$stats == 0;
> + foreach my $stat (@$stats) {
> + my $device = $stat->{device};
> + eval { vm_mon_cmd($vmid, "block-job-cancel", device => $device); };
> + }
> + sleep 1;
> }
> +
> }
>
> sub clone_disk {
> my ($storecfg, $vmid, $running, $drivename, $drive, $snapname,
> - $newvmid, $storage, $format, $full, $newvollist) = @_;
> + $newvmid, $storage, $format, $full, $newvollist, $jobs, $skipcomplete) = @_;
>
> my $newvolid;
>
> @@ -5894,6 +5938,7 @@ sub clone_disk {
> $newvolid = PVE::Storage::vdisk_clone($storecfg, $drive->{file}, $newvmid, $snapname);
> push @$newvollist, $newvolid;
> } else {
> +
> my ($storeid, $volname) = PVE::Storage::parse_volume_id($drive->{file});
> $storeid = $storage if $storage;
>
> @@ -5926,7 +5971,7 @@ sub clone_disk {
> if $drive->{iothread};
> }
>
> - qemu_drive_mirror($vmid, $drivename, $newvolid, $newvmid, $sparseinit);
> + qemu_drive_mirror($vmid, $drivename, $newvolid, $newvmid, $sparseinit, $jobs, $skipcomplete);
> }
> }
>
> --
> 2.1.4
>
> _______________________________________________
> pve-devel mailing list
> pve-devel at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel