[PVE-User] Storage migration issue with thin provisioning SAN storage
Dhaussy Alexandre
ADhaussy at voyages-sncf.com
Tue Oct 4 19:36:29 CEST 2016
Thanks for pointing this out.
I'll try offline migration and see how it behaves.
On 04/10/2016 at 12:59, Alexandre DERUMIER wrote:
> But if you do the migration offline, it should work. (I don't know if you can stop your VMs during the migration.)
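>
> (For illustration only: the offline case is the same qm move_disk call, just run with the guest shut down first, e.g. something along the lines of "qm move_disk 100 scsi0 san-lvm" - the VM ID, disk key and storage name here are placeholders, not values from this thread.)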
>
> ----- Original Message -----
> From: "aderumier" <aderumier at odiso.com>
> To: "proxmoxve" <pve-user at pve.proxmox.com>
> Sent: Tuesday, 4 October 2016 12:58:32
> Subject: Re: [PVE-User] Storage migration issue with thin provisioning SAN storage
>
> Hi,
> the limitation comes from NFS. (Proxmox correctly uses detect-zeroes, but the NFS protocol has limitations; I think it'll be fixed in NFS 4.2.)
>
> I have the same problem when I migrate from NFS to Ceph.
>
> I'm using discard/trimming to reclaim space after the migration.
>
> ----- Original Message -----
> From: "Dhaussy Alexandre" <ADhaussy at voyages-sncf.com>
> To: "proxmoxve" <pve-user at pve.proxmox.com>
> Sent: Tuesday, 4 October 2016 10:34:07
> Subject: Re: [PVE-User] Storage migration issue with thin provisioning SAN storage
>
> Hello Brian,
>
> Thanks for the tip, it may be my last-chance solution.
>
> Fortunately I kept all the original disk files on an NFS share, so I'm able
> to roll back and re-do the migration... if I manage to make QEMU mirroring
> work with sparse VMDKs.
>
>
> On 03/10/2016 at 21:11, Brian :: wrote:
>> Hi Alexandre,
>>
>> If the guests are Linux you could try using the SCSI driver with discard enabled.
>>
>> Running fstrim -v / may then free the unused space on the underlying FS.
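>>
>> For example (just a sketch - the VM ID, storage and volume names below are placeholders, not values from this thread): the disk can be put on the virtio-scsi controller with discard enabled via something like "qm set 100 -scsihw virtio-scsi-pci -scsi0 san-lvm:vm-100-disk-1,discard=on", after which fstrim -v / inside the guest should hand the freed blocks back to the thin LUN.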
>>
>> I don't use LVM, but this certainly works with other types of storage.
>>
>> On Mon, Oct 3, 2016 at 5:14 PM, Dhaussy Alexandre
>> <ADhaussy at voyages-sncf.com> wrote:
>>> Hello,
>>>
>>> I'm currently migrating more than 1000 VMs from VMware to Proxmox, but I'm hitting a major issue with storage migrations.
>>> In practice I'm migrating from VMFS datastores to NFS on VMware, then from NFS to LVM on Proxmox.
>>>
>>> The LVM volumes on Proxmox sit on top of thin-provisioned (FC SAN) LUNs.
>>> Thin provisioning works fine for newly created VMs on Proxmox.
>>>
>>> But I just discovered that when using qm move_disk to migrate from NFS to LVM, it allocates all blocks of data!
>>> That's a huge problem for me and clearly a no-go, as the SAN storage arrays are filling up very quickly!
>>>
>>> After further investigation in QEMU and Proxmox, I found in the Proxmox code that qemu_drive_mirror is called with these arguments:
>>>
>>> (In /usr/share/perl5/PVE/QemuServer.pm)
>>>
>>> sub qemu_drive_mirror {
>>>     ...
>>>     my $opts = { timeout => 10, device => "drive-$drive", mode => "existing", sync => "full", target => $qemu_target };
>>>
>>> If I'm not wrong, QEMU supports a "detect-zeroes" flag for mirroring block targets, but Proxmox does not use it.
>>> Is there any reason why this flag is not enabled during QEMU drive mirroring?
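>>>
>>> For illustration, a minimal sketch of what such a change could look like - this assumes (unverified) that the QMP drive-mirror command of the QEMU version in use actually accepts such an argument, which would need to be checked against the QMP documentation first:
>>>
>>> sub qemu_drive_mirror {
>>>     ...
>>>     my $opts = {
>>>         timeout => 10,
>>>         device  => "drive-$drive",
>>>         mode    => "existing",
>>>         sync    => "full",
>>>         target  => $qemu_target,
>>>         # assumption: only valid if this QEMU's drive-mirror really exposes such an option
>>>         'detect-zeroes' => 'on',
>>>     };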
>>>
>>> Cheers,
>>> Alexandre.
>
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
More information about the pve-user mailing list