[PVE-User] ceph/rbd to qcow2 - sparse file

Alexandre DERUMIER aderumier at odiso.com
Tue Oct 20 12:49:32 CEST 2015


Looking at the rbd block driver, it seems that bdrv_co_write_zeroes is not implemented.

Does rbd -> qcow2 (with Proxmox 4.0) give you a sparse qcow2?

Or is it only qcow2 -> rbd which ends up non-sparse?
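
As a side note, a quick way to check whether a converted qcow2 really came out sparse is to compare its apparent size with what is actually allocated on disk (the image path below is only a placeholder, not taken from this thread):

ls -lh /var/lib/vz/images/100/vm-100-disk-1.qcow2          # apparent file size
du -h /var/lib/vz/images/100/vm-100-disk-1.qcow2           # blocks actually allocated; much smaller if sparse
qemu-img info /var/lib/vz/images/100/vm-100-disk-1.qcow2   # compare "virtual size" with "disk size" (allocated)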


----- Original Message -----
From: "aderumier" <aderumier at odiso.com>
To: "proxmoxve" <pve-user at pve.proxmox.com>
Sent: Tuesday, 20 October 2015 12:18:43
Subject: Re: [PVE-User] ceph/rbd to qcow2 - sparse file

And also this one:

"mirror: Do zero write on target if sectors not allocated" 
http://git.qemu.org/?p=qemu.git;a=blobdiff;f=block/mirror.c;h=8888cea9521fd5fcbc300c054fc8936bdac4f47e;hp=4be06a508233e69040c74fce00d3baac107dbfd8;hb=dcfb3beb5130694b76b57de109619fcbf9c7e5b5;hpb=0fc9f8ea2800b76eaea20a8a3a91fbeeb4bfa81b 


----- Original Message -----
From: "aderumier" <aderumier at odiso.com>
To: "Fabrizio Cuseo" <f.cuseo at panservice.it>, "proxmoxve" <pve-user at pve.proxmox.com>
Sent: Tuesday, 20 October 2015 12:13:30
Subject: Re: [PVE-User] ceph/rbd to qcow2 - sparse file

Mmm, this is strange, because with the latest qemu version included in Proxmox 4.0,

the drive-mirror feature (move disk in Proxmox) should skip zero blocks.

I haven't tested it myself.

http://git.qemu.org/?p=qemu.git;a=commit;h=0fc9f8ea2800b76eaea20a8a3a91fbeeb4bfa81b 

"+# @unmap: #optional Whether to try to unmap target sectors where source has 
+# only zero. If true, and target unallocated sectors will read as zero, 
+# target image sectors will be unmapped; otherwise, zeroes will be 
+# written. Both will result in identical contents. 
+# Default is true. (Since 2.4) 
#" 
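
So in principle, a drive-mirror started over QMP with unmap enabled (the default since qemu 2.4) should already keep the target sparse. A minimal sketch of such a QMP call, assuming a drive named "drive-virtio0" and a local qcow2 target path (both are placeholders, not taken from this thread):

{ "execute": "drive-mirror",
  "arguments": { "device": "drive-virtio0",
                 "target": "/var/lib/vz/images/100/vm-100-disk-1.qcow2",
                 "format": "qcow2",
                 "sync": "full",
                 "unmap": true } }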




As workarounds:

- doing the move disk with the VM shut down will produce a sparse file

- if you use virtio-scsi + discard, you can run the fstrim command inside the guest (Linux guests) after the migration; see the sketch just below.
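
A rough sketch of that second workaround, assuming the disk is attached through virtio-scsi with discard enabled on the disk line of the VM config (e.g. "discard=on"):

# inside the Linux guest, once the move disk has completed:
fstrim -av    # discards unused filesystem blocks; with discard passed through to the
              # storage, this shrinks the allocated size of the qcow2 (or rbd) target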




----- Original Message -----
From: "Fabrizio Cuseo" <f.cuseo at panservice.it>
To: "proxmoxve" <pve-user at pve.proxmox.com>
Sent: Monday, 19 October 2015 22:30:02
Subject: [PVE-User] ceph/rbd to qcow2 - sparse file

Hello.
I have a test cluster (3 hosts) with 20-30 test VMs, and Ceph storage.
Last week I planned to upgrade from 3.4 to 4.0, so I moved all the VM disks to a MooseFS storage (qcow2).

Moving from rbd to qcow2 caused all the disks to lose their sparseness.

I have reinstalled the whole cluster from scratch and I am now moving the disks back from qcow2 to rbd, but first I need to convert (with the VM off) every single disk from qcow2 to qcow2, so that the disk image becomes sparse again.
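
For that offline step, something along these lines should give a sparse image again, since qemu-img detects zeroed clusters during conversion by default (file names are placeholders):

qemu-img convert -p -O qcow2 vm-100-disk-1.qcow2 vm-100-disk-1-sparse.qcow2
qemu-img info vm-100-disk-1-sparse.qcow2    # "disk size" should now be well below the virtual size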

Is there a way to move a disk from the Proxmox GUI without losing the sparse mode, at least with PVE 4.0?

Regards, Fabrizio
_______________________________________________ 
pve-user mailing list 
pve-user at pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 


