[PVE-User] Moving thin provisioned disks to ZFS converts them to fat disks

Lindsay Mathieson lindsay.mathieson at gmail.com
Fri Aug 21 04:55:34 CEST 2015


OK, as a test I set up the following:

Created 3 10G disks on:
1. zfs storage
2. nfs storage
3. local storage

All disks are empty, never used by the VM.

I then moved the NFS and local disks to ZFS. Running "zfs get
reservation,volsize,usedbydataset zfs_vm/vm-301-disk-x" showed the
following:


Disk 1:
zfs_vm/vm-301-disk-1  reservation    none     default
zfs_vm/vm-301-disk-1  volsize        10G      local
zfs_vm/vm-301-disk-1  usedbydataset  8K       -


Disk 2:
zfs_vm/vm-301-disk-2  reservation    none     default
zfs_vm/vm-301-disk-2  volsize        10G      local
zfs_vm/vm-301-disk-2  usedbydataset  10.0G    -


Disk 3:
zfs_vm/vm-301-disk-4  reservation    none     default
zfs_vm/vm-301-disk-4  volsize        10G      local
zfs_vm/vm-301-disk-4  usedbydataset  10.0G    -



All the disks are thin provisioned (reservation=none), but disks 2 & 3 have
been filled with zeros to their full capacity - presumably an artifact of
the transfer process.

I'll raise a bug for this.
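As a rough illustration (not from the original post), a zvol can be classified as thin or fat by comparing usedbydataset against volsize, using the byte values that "zfs get -Hp" prints. The helper name and the 95% threshold below are assumptions for the sketch:

```shell
#!/bin/sh
# Hypothetical helper: classify a zvol as thin or fat from its
# volsize and usedbydataset values in bytes, as printed by:
#   zfs get -Hp -o value volsize,usedbydataset <dataset>
# A zvol whose data usage is near its full volsize has effectively
# been written end to end, i.e. fat-provisioned.
is_fat() {
    volsize=$1
    used=$2
    # assumption: usage at or above 95% of volsize counts as fat
    [ "$used" -ge $(( volsize * 95 / 100 )) ]
}

# Values from the test above: 10G volsize; 8K vs 10.0G usedbydataset
is_fat 10737418240 8192        || echo "disk-1: thin"
is_fat 10737418240 10737418240 && echo "disk-2: fat"
```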



On 21 August 2015 at 08:28, Michael Rasmussen <mir at miras.org> wrote:

> On Fri, 21 Aug 2015 08:07:08 +1000
> Lindsay Mathieson <lindsay.mathieson at gmail.com> wrote:
>
> >
> > Quick clarification - by the following:
> >
> > *Have you tried using scsi for controller and virtio-scsi for*
> >
> > *implementation?*
> >
> > Did you mean setting "SCSI Controller Type" in "Options" to VIRTIO and
> > the "Bus Type" for the disk to "SCSI"?
> >
> Yes.
>
> >
> > Re the results - it made no difference for me; still fat-populated
> > (using an empty test disk).
> >
> Could it be a result of the chosen filesystem in the client?
> What filesystem is used in the client?
>
> > However - did you try this with compression set to "off"? Because to me,
> > from your results, it looks like what you're seeing is the effect of disk
> > compression, not thin provisioning.
> >
> It couldn't be compression, because when I moved the disk to a raw image
> on NFS the actual size of the image reported by qemu-img info was
> almost the same as what was reported by zfs get written (11GB
> compared to 12.4GB).
>
>
> --
> Hilsen/Regards
> Michael Rasmussen
>
> Get my public GnuPG keys:
> michael <at> rasmussen <dot> cc
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
> mir <at> datanom <dot> net
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
> mir <at> miras <dot> org
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
> --------------------------------------------------------------
> /usr/games/fortune -es says:
> We gotta get out of this place,
> If it's the last thing we ever do.
>                 -- The Animals
>
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>
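Michael's size comparison above can be sketched as a quick check: if compression were hiding the allocation, the bytes reported by "zfs get written" would be far smaller than the raw image size reported by qemu-img info. The helper name and the 80% threshold are illustrative assumptions:

```shell
#!/bin/sh
# Hedged sketch of the comparison above: compression would make the
# zvol's written bytes much smaller than the raw image size, so if
# the two are comparable, compression is not what's at play.
compression_unlikely() {
    raw_bytes=$1       # e.g. from qemu-img info on the raw image
    written_bytes=$2   # e.g. from: zfs get -Hp -o value written <zvol>
    # assumption: written >= 80% of raw means no significant compression
    [ $(( written_bytes * 100 / raw_bytes )) -ge 80 ]
}

# Michael's figures: ~11GB raw image vs ~12.4GB written on ZFS
compression_unlikely 11811160064 13314398617 \
    && echo "sizes comparable: compression unlikely"
```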


-- 
Lindsay

