[PVE-User] Storage migration issue with thin provisioning SAN storage

Dhaussy Alexandre ADhaussy at voyages-sncf.com
Mon Oct 3 18:14:21 CEST 2016


I'm currently migrating more than 1000 VMs from VMware to Proxmox, but I'm hitting a major issue with storage migrations.
The path is: VMFS datastores to NFS on the VMware side, then NFS to LVM on Proxmox.

The LVM volumes on Proxmox sit on top of thin-provisioned LUNs (FC SAN).
Thin provisioning works fine for VMs newly created on Proxmox.

But I just discovered that when using qm move_disk to migrate from NFS to LVM, it actually allocates every block of the disk!
That's a huge problem for me and clearly a no-go, as the SAN storage arrays are filling up very quickly.
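
For example, with a move like this (hypothetical VMID, disk and storage names), the target LUN ends up fully allocated on the array even though most of the disk contains only zeroes:

    # Move disk scsi0 of VM 100 from the NFS-backed storage to the LVM
    # storage, deleting the source image once the mirror completes.
    qm move_disk 100 scsi0 san-lvm --delete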

After further investigation in QEMU and Proxmox, I found in the Proxmox code that qemu_drive_mirror is called with these arguments:

(In /usr/share/perl5/PVE/QemuServer.pm)

   5640 sub qemu_drive_mirror {
   5654     my $opts = { timeout => 10, device => "drive-$drive", mode => "existing", sync => "full", target => $qemu_target };
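
For reference, those options (minus timeout, which is consumed on the Proxmox side) end up as a QMP drive-mirror command, roughly like this (illustrative device name and target path):

    { "execute": "drive-mirror",
      "arguments": { "device": "drive-scsi0",
                     "target": "/dev/vg0/vm-100-disk-1",
                     "sync":   "full",
                     "mode":   "existing" } }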

If I'm not wrong, QEMU supports a "detect-zeroes" flag for mirror block targets, but Proxmox does not use it.
Is there any reason why this flag is not enabled during QEMU drive mirroring?
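
For reference, here is the option I mean, as it appears on a plain QEMU command line (illustrative device path; note that detect-zeroes=unmap additionally requires discard=unmap):

    # detect-zeroes=on turns incoming runs of zeroes into efficient
    # zero-write operations; with detect-zeroes=unmap (plus discard=unmap)
    # they can become discards instead of actual writes.
    -drive file=/dev/vg0/vm-100-disk-1,if=virtio,format=raw,cache=none,detect-zeroes=on

If something equivalent could be applied to the mirror target, the zero blocks coming from the NFS image would not have to be physically written to the thin-provisioned LUN.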

