[PVE-User] Strange ZFS Performance

Lindsay Mathieson lindsay.mathieson at gmail.com
Thu Aug 20 06:55:01 CEST 2015


Thanks Pongrácz, much appreciated. However, I'm currently rebuilding the
node from scratch - for the nth time :) - and am adding two extra disks.

However, since I switched the boot filesystem to ext4 rather than ZFS, I
have been unable to replicate the problem. Once I finish setting it up
again today I will start stress testing and let you know how it goes.
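
For the stress test I will probably use something like fio - the job below
is only a sketch, with /datazfs standing in for the actual pool mountpoint:

    fio --name=randwrite --filename=/datazfs/fio-test --ioengine=psync \
        --rw=randwrite --bs=4k --size=4G --runtime=60 --time_based \
        --group_reporting

A sustained 4k random-write load like this is the kind of workload that
tends to surface the latency problems I was seeing.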



On 20 August 2015 at 03:33, Pongrácz István <pongracz.istvan at gmail.com>
wrote:

>
> Hi Lindsay,
>
> Could you post me the following results by private email (the outputs of
> these commands; a one-shot collection sketch follows the list)?
>
>    - zpool status -v
>    - zpool get all
>    - zfs list
>    - zfs get all <top level of your ZFS filesystem, e.g. datazfs if your
>    pool is called datazfs> (run twice: once for the system pool and once
>    for the data pool)
>    - arcstat.py
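>
> For example, something like this should capture everything in one go
> (rpool and datazfs are placeholder pool names - substitute your own;
> arcstat.py may live elsewhere on your system):
>
>    {
>        zpool status -v
>        zpool get all
>        zfs list
>        zfs get all rpool      # system pool
>        zfs get all datazfs    # data pool
>        arcstat.py
>    } > /tmp/zfs-report.txt 2>&1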
>
> Questions:
>
>    - did you turn on dedup on your pools?
>    - do you use any non-default zfs/zpool settings?
>    - what is your CPU load in normal and in stressed (problematic)
>    situations?
>    - is it true that the problem depends on uptime? For example, does it
>    usually appear after about 2 weeks?
>    - Or can you see any pattern in when the bad situation happens?
>
> Hints, without too much explanation:
>
>    - you can monitor your pool activity with the following command:
>    zpool iostat -v 1 (a screenshot would be nice)
>    - do not turn on dedup. If you have turned it on -> recreate your pool
>    from backup with dedup disabled
>    - the default memory usage of ZFS is 50% of the physical RAM. Due to
>    the interaction between the Linux memory management and SPL, the total
>    used memory can be double the ARC size. In other words, plan your
>    memory allocation so that MEMORY_OF_ALLVM + 2 * ARC size < total
>    physical RAM (see the worked example after this list).
>    - it is probably much better to turn off swap while the swap is on
>    ZFS:
>       - any problem on ZFS (a performance problem, for example) will
>       bring your computer down
>       - if your system starts to use swap, your system is
>       under-provisioned (as far as I can see, this is not the case here)
>    - try to find a pattern in your performance issue
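>
> As a worked example of the memory rule above (the 32 GB figure is only an
> assumption for illustration): with 32 GB of RAM the default ARC is 16 GB,
> so the rule budgets 2 * 16 = 32 GB for ZFS alone - nothing left for the
> VMs. Capping the ARC at 4 GB leaves 32 - 2 * 4 = 24 GB for the guests.
> The cap is set via the zfs module option zfs_arc_max:
>
>    # /etc/modprobe.d/zfs.conf
>    options zfs zfs_arc_max=4294967296   # 4 GiB in bytes
>
> (then run update-initramfs -u and reboot so the module picks up the new
> value)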
>
> Best regards,
>
> István
>
> ----------------original message-----------------
> From: "Lindsay Mathieson" <lindsay.mathieson at gmail.com>
> To: "Dietmar Maurer" <dietmar at proxmox.com>
> CC: "ProxMox Users" <pve-user at pve.proxmox.com>
> Date: Mon, 17 Aug 2015 16:03:34 +1000
> ----------------------------------------------------------
>
>
> Think I'll try reinstalling with EXT4 for the boot drive.
>
> On 17 August 2015 at 14:50, Lindsay Mathieson <lindsay.mathieson at gmail.com
> > wrote:
>
>>
>> On 17 August 2015 at 14:43, Dietmar Maurer <dietmar at proxmox.com> wrote:
>>
>>> what kernel and zfs version do you run exactly?
>>
>>
>> Fresh install, updated to latest from the pve-no-subscription repo
>>
>> uname -r
>> 3.10.0-11-pve
>>
>>
>> cat /var/log/dmesg | grep -E 'SPL:|ZFS:'
>> [   17.384858] SPL: Loaded module v0.6.4-358_gaaf6ad2
>> [   17.449584] ZFS: Loaded module v0.6.4.1-1099_g7939064, ZFS pool
>> version 5000, ZFS filesystem version 5
>> [   18.007733] SPL: using hostid 0xa8c00802
>>
>>
>> --
>> Lindsay
>>
>
>
>
> --
> Lindsay
> ------------------------------
>
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>
>



-- 
Lindsay

