[pve-devel] [PATCH 2/5] clone live vm : add support for multiple jobs

Alexandre DERUMIER aderumier at odiso.com
Mon Oct 24 15:49:42 CEST 2016


>>Since clone_disk currently doesn't put jobs in the background anymore, 
>>this patch seems almost superfluous? 
>>AFAIK the way it currently works $skipcomplete just skips the 
>>block-job-complete, but still waits for the job to be done. 

Well, we still have jobs running in the background; the process is now:

start mirroring the first disk until 100%, but do not complete
start mirroring the second disk until 100% (first disk still replicating its delta), but do not complete
start mirroring the last disk until 100% (first && second still replicating their deltas), and complete all jobs.


(In the current Proxmox code, we replicate the first disk - complete, then replicate the second disk - complete,
which is bad if, for example, you have a database with data && logs on separate disks.)
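The ordering described above can be sketched as follows. This is a minimal illustration in Python with hypothetical callback names (start_mirror, wait_ready, complete_all stand in for the QMP drive-mirror / block-job-complete interactions done by clone_disk), not the actual Perl implementation:

```python
def clone_disks(disks, start_mirror, wait_ready, complete_all):
    """Mirror every disk, but only switch over once ALL disks are in sync.

    Each mirror job is started and driven to 100%; earlier jobs keep
    replicating their deltas while later disks catch up. Only after the
    last disk is ready are all jobs completed together, so every disk
    switches over at the same point in time.
    """
    jobs = []
    total = len(disks)
    for i, disk in enumerate(disks, start=1):
        job = start_mirror(disk)   # start the mirror job (QMP drive-mirror)
        wait_ready(job)            # wait until 100%; job keeps syncing deltas
        jobs.append(job)
        if i == total:             # skipcomplete is dropped on the last disk
            complete_all(jobs)     # complete every job in one pass
    return jobs
```

This mirrors the patch's $skipcomplete flag: it stays set for every drive except the last, so intermediate jobs are left open and only the final iteration triggers completion of the whole set.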



----- Original Message -----
From: "Wolfgang Bumiller" <w.bumiller at proxmox.com>
To: "aderumier" <aderumier at odiso.com>
Cc: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Monday, 24 October 2016 14:06:08
Subject: Re: [pve-devel] [PATCH 2/5] clone live vm : add support for multiple jobs

Since clone_disk currently doesn't put jobs in the background anymore, 
this patch seems almost superfluous? 
AFAIK the way it currently works $skipcomplete just skips the 
block-job-complete, but still waits for the job to be done. 

On Fri, Oct 21, 2016 at 05:00:45AM +0200, Alexandre Derumier wrote: 
> Signed-off-by: Alexandre Derumier <aderumier at odiso.com> 
> --- 
> PVE/API2/Qemu.pm | 12 +++++++++++- 
> 1 file changed, 11 insertions(+), 1 deletion(-) 
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm 
> index ad7a0c0..7ec2591 100644 
> --- a/PVE/API2/Qemu.pm 
> +++ b/PVE/API2/Qemu.pm 
> @@ -2398,21 +2398,29 @@ __PACKAGE__->register_method({ 
> my $upid = shift; 
> 
> my $newvollist = []; 
> + my $jobs = {}; 
> 
> eval { 
> local $SIG{INT} = $SIG{TERM} = $SIG{QUIT} = $SIG{HUP} = sub { die "interrupted by signal\n"; }; 
> 
> PVE::Storage::activate_volumes($storecfg, $vollist, $snapname); 
> 
> + my $total_jobs = scalar(keys %{$drives}); 
> + my $i = 1; 
> + my $skipcomplete = 1; 
> + 
> foreach my $opt (keys %$drives) { 
> + 
> my $drive = $drives->{$opt}; 
> + $skipcomplete = undef if $total_jobs == $i; #finish after last drive 
> 
> my $newdrive = PVE::QemuServer::clone_disk($storecfg, $vmid, $running, $opt, $drive, $snapname, 
> - $newid, $storage, $format, $fullclone->{$opt}, $newvollist); 
> + $newid, $storage, $format, $fullclone->{$opt}, $newvollist, $jobs, $skipcomplete); 
> 
> $newconf->{$opt} = PVE::QemuServer::print_drive($vmid, $newdrive); 
> 
> PVE::QemuConfig->write_config($newid, $newconf); 
> + $i++; 
> } 
> 
> delete $newconf->{lock}; 
> @@ -2433,6 +2441,8 @@ __PACKAGE__->register_method({ 
> if (my $err = $@) { 
> unlink $conffile; 
> 
> + eval { PVE::QemuServer::qemu_blockjobs_cancel($vmid) }; 
> + 
> sleep 1; # some storage like rbd need to wait before release volume - really? 
> 
> foreach my $volid (@$newvollist) { 
> -- 
> 2.1.4 
> 
> _______________________________________________ 
> pve-devel mailing list 
> pve-devel at pve.proxmox.com 
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 



