[PVE-User] ceph/rbd to qcow2 - sparse file
Alexandre DERUMIER
aderumier at odiso.com
Tue Oct 20 12:13:30 CEST 2015
Mmm, this is strange, because with the latest qemu version included in Proxmox 4.0, the drive-mirror feature (move disk in Proxmox) should skip zero blocks.
I haven't tested it yet.
http://git.qemu.org/?p=qemu.git;a=commit;h=0fc9f8ea2800b76eaea20a8a3a91fbeeb4bfa81b
"+# @unmap: #optional Whether to try to unmap target sectors where source has
+# only zero. If true, and target unallocated sectors will read as zero,
+# target image sectors will be unmapped; otherwise, zeroes will be
+# written. Both will result in identical contents.
+# Default is true. (Since 2.4)
#"
As a workaround:
- doing the move disk with the VM shut down will produce a sparse file
- if you use virtio-scsi + discard, you can run the fstrim command in your (Linux) guest after the migration; see the example below
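For example, inside the guest after the move disk has finished (assuming its disks are attached with virtio-scsi and discard enabled, and that the guest's util-linux supports the -a option):

# trim all mounted filesystems that support discard, so unused blocks
# are released again on the new storage
fstrim -av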
----- Original Message -----
From: "Fabrizio Cuseo" <f.cuseo at panservice.it>
To: "proxmoxve" <pve-user at pve.proxmox.com>
Sent: Monday, October 19, 2015 22:30:02
Subject: [PVE-User] ceph/rbd to qcow2 - sparse file
Hello.
I have a test cluster (3 hosts) with 20-30 test VMs and Ceph storage.
Last week I planned to upgrade from 3.4 to 4.0, so I moved all the VM disks to a MooseFS storage (qcow2).
Moving from rbd to qcow2 caused all the disks to lose their sparse mode.
I have reinstalled the whole cluster from scratch and am now moving the disks back from qcow2 to rbd, but first I need to convert (with the VM off) every single disk from qcow2 to qcow2, so that each disk image is sparse again.
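The per-disk conversion I am doing by hand looks more or less like this (VMID and paths below are only placeholders):

# offline re-sparsify: by default qemu-img convert skips blocks that
# contain only zeros, so the new qcow2 image is sparse again
qemu-img convert -p -O qcow2 \
    /mnt/pve/moosefs/images/100/vm-100-disk-1.qcow2 \
    /mnt/pve/moosefs/images/100/vm-100-disk-1-sparse.qcow2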
Is there a way to move the disk from the Proxmox GUI without losing the sparse mode? At least with PVE 4.0?
Regards, Fabrizio