[PVE-User] Info on qcow2 clone

Alessandro Briosi ab1 at metalit.com
Wed Oct 22 19:04:26 CEST 2014

Il 22/10/2014 18:45, Paul Gray ha scritto:
> Your definition of "sparse" and my definition of "cruft" are colliding
> here.
> "Sparse" == hardly used filesystem.
> "cruft" == non-zeroed, *unused* sectors on the disk
> Your sparse filesystem likely has a lot of cruft.  The two facets aren't
> mutually exclusive.

Maybe I wasn't clear enough.

If I do an "ls -lsh" of the original file this is the result:
5.1G -rw-r--r-- 1 root root 33G Oct 22 18:48 vm-100-disk-1.qcow2

If I do the same on the cloned disk this is the result:
33G -rw-r--r-- 1 root root 33G Oct 22 18:49 vm-199-disk-1.qcow2

If I do the same on a file copied with "cp --sparse=always" from the 
original disk file (vm-100-disk-1.qcow2) the result is:
4.5G -rw-r--r-- 1 root root 33G Oct 22 18:49 vm-100-disk-2.qcow2
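To make the allocation numbers above easier to interpret, here is a small
self-contained sketch (made-up filenames, GNU coreutils assumed) of why
"cp --sparse=always" shrinks allocation: it scans for runs of zero bytes
and stores them as holes, so the apparent size stays the same while the
allocated blocks (the first column of "ls -ls") drop:

```shell
# Write 1 MiB of real data, then extend to 100 MiB apparent size
# with a hole (no blocks allocated for the tail).
dd if=/dev/urandom of=orig.img bs=1M count=1 status=none
truncate -s 100M orig.img

# --sparse=never writes every byte, filling the hole with literal
# zeroes: the copy allocates the full 100 MiB on disk.
cp --sparse=never orig.img full.img

# --sparse=always detects the zero runs and punches them back out
# as holes: same apparent size, far fewer allocated blocks.
cp --sparse=always full.img sparse.img

# First column (allocated blocks) differs; the size column does not.
ls -ls full.img sparse.img
```

This mirrors the numbers in the listings above: vm-100-disk-1.qcow2 and vm-199-disk-1.qcow2 both report a 33G apparent size, but only the sparse one has a small allocated size.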

So the point is: there is a system inside this disk (CentOS 6, 64-bit) 
which is using something like 3G. There is also some cruft 
(something like 2G, and that's fine).

Still, I'm surprised that when cloning the VM through Proxmox the 
resulting disk is not sparse and uses the whole 33G. So the question was 
whether this is known behaviour (maybe it's deliberate, because NFS and 
other filesystems might have trouble with sparse files).
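A quick way to check whether any given image is actually sparse is to compare its allocated bytes against its apparent size. This is a generic coreutils sketch, not anything Proxmox provides; the helper name is my own, and "stat -c" is GNU/Linux-specific:

```shell
# is_sparse: report whether a file occupies fewer bytes on disk
# than its apparent size (i.e. it contains holes).
is_sparse() {
    local apparent allocated
    apparent=$(stat -c %s "$1")                 # apparent size in bytes
    allocated=$(( $(stat -c %b "$1") * 512 ))   # %b is in 512-byte units
    if [ "$allocated" -lt "$apparent" ]; then
        echo "sparse"
    else
        echo "not sparse"
    fi
}

truncate -s 1G hole.img                          # all hole, no data blocks
dd if=/dev/zero of=dense.img bs=1M count=1 status=none  # zeroes written out

is_sparse hole.img    # should print "sparse" on filesystems with hole support
is_sparse dense.img   # literal zeroes are allocated: "not sparse"
```

Running this against the images above would show vm-100-disk-1.qcow2 as sparse and the Proxmox clone vm-199-disk-1.qcow2 as not sparse.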

If I do a backup and then a restore, the restored file uses the same 
space as the sparsely copied file:
4.5G -rw-r--r-- 1 root root 33G Oct 22 19:02 vm-101-disk-1.qcow2


METAL.it Nord srl
Via Maioliche 137 - 38068 Rovereto (TN) - ITALY
Tel.+39.0464.430130 - Fax +39.0464.437393
e-mail: ab1 at metalit.com
