[PVE-User] Strange ZFS Performance

Lindsay Mathieson lindsay.mathieson at gmail.com
Fri Aug 21 05:31:54 CEST 2015


I believe I managed to recreate the issue today while testing drive
transfers.

I created an empty 10G VM disk on the local SSD drive and moved it (via the
web GUI) to the ZFS storage (reservation = off, compression = off). The
transfer went very quickly, reaching 100% in a few seconds, but then stalled
for a few minutes before the final "OK".
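
For reference, the same move can be done from the CLI - this is just a
sketch, and the VM id (100), disk (virtio0) and target storage name
(local-zfs) are placeholders for my setup:

# move VM 100's virtio0 disk to the ZFS storage
qm move_disk 100 virtio0 local-zfs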

iowait (as shown by iostat) was excessively high - 15-20% - but "zfs list"
was very slow to respond and showed the destination dataset filling up quite
slowly. A ps listing showed a lot of ZFS worker threads (z_wr_iss etc.).
After about 5 minutes it finished and the load returned to normal.
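
In case it helps anyone reproduce this, roughly what I was watching it with
(no special options, just the stock tools):

iostat -x 1                    # per-device utilisation / await
zfs list -o name,used,avail    # watch the destination dataset fill up
ps ax | grep z_wr              # the ZFS write-issue worker threads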

Some of the Google results I read mentioned zero detection being a problem
when compression is off, so I set compression to lz4 and tried again - this
time the transfer completed in under 5 seconds with no load.

I repeated the exercise with compression=off and thin provisioning off - it
took a little longer, about 40 seconds, but with no load to speak of.
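
For the record, the compression changes between runs were just the usual
property get/set (the dataset name below is a placeholder for my pool; thin
provisioning I toggled on the storage definition itself):

zfs get compression rpool/data       # check the current setting
zfs set compression=lz4 rpool/data   # second run: lz4 on
zfs set compression=off rpool/data   # third run: compression off again

# a "thick" zvol carries a refreservation, a thin-provisioned one does not
zfs get refreservation rpool/data/vm-100-disk-1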

So in conclusion - it would seem that a disk move with thin provisioning on
and compression off can impose quite a load on the system, especially if the
source disk is thin provisioned to start with.



On 20 August 2015 at 03:33, Pongrácz István <pongracz.istvan at gmail.com>
wrote:

>
> Hi Lindsay,
>
> Could you send me the following results by private email (the output of
> these commands)?
>
>    - zpool status -v
>    - zpool get all
>    - zfs list
>    - zfs get all <top level of your ZFS filesystem, for example datazfs if
>    your pool is called datazfs> (needed twice: for the system and data pools)
>    - arcstat.py
>
> Questions:
>
>    - did you turn on dedup on your pools?
>    - do you use any non-default zfs/zpool settings?
>    - how is your CPU load in normal and in stressed (problematic) situations?
>    - is it true that the situation depends on uptime? For example, does it
>    usually happen after about 2 weeks?
>    - or can you see any other pattern in when the bad situation happens?
>
> Hints, without too much explanation:
>
>    - you can monitor your pool activity with the following command: *zpool
>    iostat -v 1* (a screenshot would be nice)
>    - do not turn on dedup. If you have already turned it on, recreate your
>    pool from backup without dedup enabled
>    - the default memory usage of ZFS is 50% of the physical RAM. Due to the
>    interaction between the Linux memory management and SPL, the total used
>    memory can end up double the ARC size. In other words, plan your memory
>    allocation so that MEMORY_OF_ALLVM + 2 * ARC size < total physical RAM
>    (see the sketch after this list for capping the ARC)
>    - it is probably much better to turn off swap, since the swap is on ZFS:
>       - any problem on ZFS (performance, for example) will bring your
>       machine down with it
>       - if your system starts to use swap, it is under-provisioned (as far
>       as I can see, this is not the case here)
>    - try to find a pattern in your performance issue
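>
> A minimal sketch of capping the ARC, assuming a 4 GiB limit (the value is
> only an example, adjust it to your own RAM budget):
>
>    # /etc/modprobe.d/zfs.conf
>    options zfs zfs_arc_max=4294967296
>
>    # rebuild the initramfs and reboot so the module picks up the new limit
>    update-initramfs -u
>
> With a 4 GiB ARC and, say, 8 GiB allocated to VMs, the rule of thumb above
> (8 + 2 * 4) asks for more than 16 GiB of physical RAM.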
>
> Best regards,
>
> István
>
> ----------------original message-----------------
> From: "Lindsay Mathieson" <lindsay.mathieson at gmail.com>
> To: "Dietmar Maurer" <dietmar at proxmox.com>
> CC: "ProxMox Users" <pve-user at pve.proxmox.com>
> Date: Mon, 17 Aug 2015 16:03:34 +1000
> ----------------------------------------------------------
>
>
> Think I'll try reinstalling with EXT4 for the boot drive.
>
> On 17 August 2015 at 14:50, Lindsay Mathieson <lindsay.mathieson at gmail.com
> > wrote:
>
>>
>> On 17 August 2015 at 14:43, Dietmar Maurer <dietmar at proxmox.com> wrote:
>>
>>> what kernel and zfs version do you run exactly?
>>
>>
>> Fresh install, updated to latest from the pve-no-subscription repo
>>
>> uname -r
>> 3.10.0-11-pve
>>
>>
>> cat /var/log/dmesg | grep -E 'SPL:|ZFS:'
>> [   17.384858] SPL: Loaded module v0.6.4-358_gaaf6ad2
>> [   17.449584] ZFS: Loaded module v0.6.4.1-1099_g7939064, ZFS pool
>> version 5000, ZFS filesystem version 5
>> [   18.007733] SPL: using hostid 0xa8c00802
>>
>>
>> --
>> Lindsay
>>
>
>
>
> --
> Lindsay
>
>



-- 
Lindsay