[pve-devel] [PATCH qemu-server] Fix #1242 : clone_disk : call qga fstrim after clone

Alexandre DERUMIER aderumier at odiso.com
Mon May 28 20:49:29 CEST 2018


>>If I understand it right, the fstrim is called on every migration with a 
>>running guest agent. I guess the command is also called if you don't 
>>have discard activated in the VM config, and might only produce an error 
>>message. 

It doesn't produce an error, but indeed, it does nothing in this case.
I can add a check for the discard option to avoid the extra qga call.
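As a rough sketch of such a check in shell terms (the `has_discard` helper and the grep pattern are assumptions for illustration, not existing qemu-server code; the disk line format follows the usual qm config syntax):

```shell
# Hypothetical helper: reads a VM config on stdin and succeeds only if
# at least one disk line enables discard.
has_discard() {
    grep -Eq '^(scsi|virtio|sata|ide)[0-9]+:.*discard=on'
}

# Example: a config with discard enabled on scsi0
printf 'scsi0: local-lvm:vm-100-disk-0,discard=on,size=32G\n' | has_discard \
    && echo "would call guest-fstrim" \
    || echo "skip guest-fstrim"
```

The equivalent Perl check in clone_disk would parse the drive options from $conf before issuing the guest-fstrim monitor command.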



>>IMHO, it might be good to have a config option that sets whether a VM should do an
>>fstrim (e.g. qga-fstrim: 0/1) on migration. This way users actively set it
>>and know that this also has drawbacks on their systems.

Maybe we could add it in datacenter.cfg? Or as a storage.cfg option?
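If it went into datacenter.cfg, it could look something like this (the migration-fstrim key is purely hypothetical, not an existing option):

```
# /etc/pve/datacenter.cfg (hypothetical option)
migration-fstrim: 1
```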


----- Original message -----
From: "Alwin Antreich" <a.antreich at proxmox.com>
To: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Monday, May 28, 2018 19:50:24
Subject: Re: [pve-devel] [PATCH qemu-server] Fix #1242 : clone_disk : call qga fstrim after clone

On Mon, May 28, 2018 at 05:36:50PM +0200, Alexandre Derumier wrote: 
> Some storage like rbd or lvm can't keep thin-provising after a qemu-mirror. 
> 
> Call qga guest-fstrim if qga is available 
> --- 
> PVE/API2/Qemu.pm | 8 ++++++++ 
> PVE/QemuMigrate.pm | 5 +++++ 
> 2 files changed, 13 insertions(+) 
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm 
> index 8d4b10d..86fac9d 100644 
> --- a/PVE/API2/Qemu.pm 
> +++ b/PVE/API2/Qemu.pm 
> @@ -2741,6 +2741,10 @@ __PACKAGE__->register_method({ 
> 
> PVE::QemuConfig->write_config($newid, $newconf); 
> 
> + if ($running && $conf->{agent} && PVE::QemuServer::qga_check_running($vmid)) { 
> + eval { PVE::QemuServer::vm_mon_cmd($vmid, "guest-fstrim"); }; 
> + } 
> + 
> if ($target) { 
> # always deactivate volumes - avoid lvm LVs to be active on several nodes 
> PVE::Storage::deactivate_volumes($storecfg, $vollist, $snapname) if !$running; 
> @@ -2918,6 +2922,10 @@ __PACKAGE__->register_method({ 
> 
> PVE::QemuConfig->write_config($vmid, $conf); 
> 
> + if ($running && $conf->{agent} && PVE::QemuServer::qga_check_running($vmid)) { 
> + eval { PVE::QemuServer::vm_mon_cmd($vmid, "guest-fstrim"); }; 
> + } 
> + 
> eval { 
> # try to deactivate volumes - avoid lvm LVs to be active on several nodes 
> PVE::Storage::deactivate_volumes($storecfg, [ $newdrive->{file} ]) 
> diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm 
> index 27cf7e3..ab2258d 100644 
> --- a/PVE/QemuMigrate.pm 
> +++ b/PVE/QemuMigrate.pm 
> @@ -966,6 +966,11 @@ sub phase3_cleanup { 
> $self->{errors} = 1; 
> } 
> } 
> + 
> + if ($self->{storage_migration} && $conf->{qga} && $self->{running}) { 
> + my $cmd = [@{$self->{rem_ssh}}, 'qm', 'agent','fstrim']; 
> + eval{ PVE::Tools::run_command($cmd, outfunc => sub {}, errfunc => sub {}) }; 
> + } 
> } 
> 
> # close tunnel on successful migration, on error phase2_cleanup closed it 
> -- 
> 2.11.0 
> 
I have some thoughts on your patch. 

If I understand it right, the fstrim is called on every migration with a 
running guest agent. I guess the command is also called if you don't 
have discard activated in the VM config, and might only produce an error 
message. 

Some users also like some of their VMs to be thick provisioned. 

With multiple simultaneous migrations, though, this would multiply the 
IO load on the target system, as the fstrim starts while other VMs are 
still being migrated. I think that might make users unhappy, especially since 
the behaviour would change with your patch. 

IMHO, it might be good to have a config option that sets whether a VM should do an 
fstrim (e.g. qga-fstrim: 0/1) on migration. This way users actively set it 
and know that this also has drawbacks on their systems. 
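In the VM config, the suggested per-VM setting could look like this (the qga-fstrim key is the proposal here, not an existing option; agent is the existing guest agent toggle):

```
# /etc/pve/qemu-server/100.conf (qga-fstrim is hypothetical)
agent: 1
qga-fstrim: 1
```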

Please correct me if I'm wrong. 
My two cents. ;) 
-- 
Cheers, 
Alwin 

_______________________________________________ 
pve-devel mailing list 
pve-devel at pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
