[PVE-User] sparse and compression
Miguel González
miguel_3_gonzalez at yahoo.es
Mon Dec 11 16:10:29 CET 2017
On 12/11/17 3:47 PM, Andreas Herrmann wrote:
> Hi
>
> On 11.12.2017 14:17, Fabian Grünbichler wrote:
>> On Mon, Dec 11, 2017 at 01:40:28PM +0100, Miguel González wrote:
>>> Why does a virtual disk show as 60G when originally it was 36 GB in raw format?
>>>
>>> NAME                       USED  AVAIL  REFER  MOUNTPOINT
>>> rpool/data/vm-102-disk-1  60.0G  51.3G  20.9G  -
>>
>> wild guess - you are using raidz of some kind? ashift is set to 12 /
>> auto-detected?
>
> No! 'zpool list' will show what is used on disk. zfs list is totally
> transparent to zpool layout. Have a look at 'zpool get all' for the ashift
> setting.
>
> Example for raidz1 (4x 960GB SSDs):
> root at foobar:~# zpool list
> NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> zpool  3.41T   102G  3.31T         -     8%     2%  1.00x  ONLINE  -
>
> root at foobar:~# zfs list
> NAME   USED  AVAIL  REFER  MOUNTPOINT
> zpool  237G  2.17T   140K  /zpool
>
> zpool ALLOC is smaller than zfs USED in this example. Why? Try to understand
> the difference between 'referenced' and 'used'. My volumes aren't sparse but
> discard is used.
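If I understood that right, the ashift itself can be checked with something
like this (assuming my pool is rpool and that ZFS on Linux exposes it as a
pool property):

zpool get ashift rpool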
I have searched around to understand those columns, but I didn't find
anything on the wiki that explains this.
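From the zfs(8) man page it looks like the USED column can be broken down
per dataset; something along these lines (the disk name is just an example)
should show where the space actually goes:

zfs get used,referenced,usedbydataset,usedbysnapshots,usedbyrefreservation,usedbychildren rpool/data/vm-102-disk-1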
This is my zfs list:
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                      207G  61.9G   104K  /rpool
rpool/ROOT                6.10G  61.9G    96K  /rpool/ROOT
rpool/ROOT/pve-1          6.10G  61.9G  6.10G  /
rpool/data                 197G  61.9G    96K  /rpool/data
rpool/data/vm-100-disk-1   108G  61.9G   108G  -
rpool/data/vm-102-disk-1  37.1G  77.9G  21.1G  -
rpool/data/vm-102-disk-2  51.6G  81.8G  31.7G  -
rpool/swap                4.25G  64.9G  1.25G  -
If I run 'zfs get all', these are the relevant lines:
rpool/data/vm-100-disk-1  written            108G
rpool/data/vm-100-disk-1  logicalused        129G
rpool/data/vm-100-disk-1  logicalreferenced  129G
rpool/data/vm-102-disk-1  written            21.1G
rpool/data/vm-102-disk-1  logicalused        27.1G
rpool/data/vm-102-disk-1  logicalreferenced  27.1G
rpool/data/vm-102-disk-2  written            31.7G
rpool/data/vm-102-disk-2  logicalused        36.2G
rpool/data/vm-102-disk-2  logicalreferenced  36.2G
So even though I'm using an 8k blocksize and non-sparse volumes, the written
data is quite close to the real usage inside the guest VMs.
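If I understand it right, the gap between what is written and what shows up
as USED for a non-sparse zvol should mostly be the refreservation plus the
volblocksize/raidz overhead; something like this (again just on one of the
disks) should show it:

zfs get volsize,volblocksize,refreservation,compressratio rpool/data/vm-102-disk-2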
All this comes from the fact that I was running out of space when running
pve-zsync to copy the VM to another node.
I have found out that snapshots were taking up part of the space (about 30 GB).
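For reference, the per-snapshot usage shows up with something like:

zfs list -t snapshot -r rpool/data -o name,used,referenced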
Is there any way to run pve-zsync just once a day, without snapshots piling
up on this machine (maybe running it from the target machine)?
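I was thinking of something along these lines on the target node (IP, VM id,
pool and job name are just placeholders), with --maxsnap 1 to keep only the
latest snapshot, if I read the docs correctly:

pve-zsync create --source 192.168.0.10:102 --dest rpool/backup --name daily --maxsnap 1

As far as I understand, 'create' adds a cron entry on the node where it is
run (in /etc/cron.d/pve-zsync), so the schedule could then be changed there
to once a day.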
Thanks
Miguel