<p>Hi Lindsay,</p>
<p>Could you send me the following results by private email (the output of these commands)?</p>
<ul>
<li>zpool status -v</li>
<li>zpool get all</li>
<li>zfs list</li>
<li>zfs get all &lt;TOPLEVEL of your zfs filesystem, for example datazfs if your pool is called datazfs&gt; (needed twice: once for the system pool and once for the data pool)</li>
<li>arcstat.py</li>
</ul>
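<p>If it is easier, you can collect all of that into one file and attach it. A minimal sketch, assuming your pools are named rpool and datazfs (replace these with your actual pool names):</p>
<pre>
# run as root; gathers all requested outputs into a single file
{
  zpool status -v
  zpool get all
  zfs list
  zfs get all rpool      # system pool: assumed name, replace with yours
  zfs get all datazfs    # data pool: assumed name, replace with yours
  arcstat.py
} > zfs-debug.txt 2>&1
</pre>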
<p>Questions:</p>
<ul>
<li>did you turn on dedup on your pools? (a quick check is sketched right after these questions)</li>
<li>do you use any non-default zfs/zpool settings?</li>
<li>how is your CPU load in normal and in stressed (problematic) situations?</li>
<li>is it true that the situation depends on uptime? For example, does it usually happen after 2 weeks?</li>
<li>or can you see any other pattern in when the bad situation happens?</li>
</ul>
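<p>For the dedup question, you can check it yourself. A minimal sketch, assuming a pool named datazfs:</p>
<pre>
zfs get -r dedup datazfs        # any "on" in the output means dedup is enabled somewhere
zpool list -o name,dedupratio   # a ratio above 1.00x means deduplicated data already exists
</pre>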
<p>Hints, without too much explanation:</p>
<ul>
<li>you can monitor your pool activity with the following command: <strong>zpool iostat -v 1</strong> (a screenshot would be nice)</li>
<li>do not turn on dedup; if you have already turned it on, recreate your pool from backup with dedup disabled</li>
<li>the default memory usage of ZFS (the ARC) is 50% of the physical RAM. Due to the interaction between the Linux memory management and SPL, the total memory used can be double the ARC size. In other words, plan your memory allocation so that MEMORY_OF_ALL_VMS + 2 * ARC_SIZE &lt; total physical RAM (the first sketch after this list shows how to cap the ARC).</li>
<li>it is probably much better to turn off swap, since your swap is on zfs (the second sketch after this list shows how):
<ul>
<li>any problem on zfs (a performance problem, for example) will drag your whole computer down</li>
<li>if your system starts to use swap, it is under-provisioned (as far as I can see, this is not the case here)</li>
</ul>
</li>
<li>try to find a pattern in your performance issue</li>
</ul>
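<p>Capping the ARC, as mentioned in the memory hint above. A minimal sketch; the 4 GiB value is only an example, size it using the formula:</p>
<pre>
# /etc/modprobe.d/zfs.conf  (takes effect after an initramfs update + reboot)
options zfs zfs_arc_max=4294967296   # 4 GiB in bytes

# or apply on the running system:
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
</pre>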
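<p>And disabling swap, as suggested above. A minimal sketch; back up /etc/fstab before editing it:</p>
<pre>
swapoff -a                                  # stop using swap immediately
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab   # comment out swap entries for the next boot
</pre>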
<p>Best regards,</p>
<p>István</p>
<p>----------------original message----------------- <br />
From: "Lindsay Mathieson" &lt;lindsay.mathieson@gmail.com&gt; <br />
To: "Dietmar Maurer" &lt;dietmar@proxmox.com&gt; <br />
CC: "ProxMox Users" &lt;pve-user@pve.proxmox.com&gt; <br />
Date: Mon, 17 Aug 2015 16:03:34 +1000 <br />
----------------------------------------------------------</p>
<blockquote type="cite"><br />
<div>Think I'll try reinstalling with EXT4 for the boot drive.</div>
<div><br />
<div>On 17 August 2015 at 14:50, Lindsay Mathieson <span>&lt;<a href="mailto:lindsay.mathieson@gmail.com" target="_blank">lindsay.mathieson@gmail.com</a>&gt;</span> wrote:<br />
<blockquote class="gmail_quote c1">
<div>
<div><span><br />
</span>
<div><span>On 17 August 2015 at 14:43, Dietmar Maurer <span>&lt;<a href="mailto:dietmar@proxmox.com" target="_blank">dietmar@proxmox.com</a>&gt;</span> wrote:<br />
</span> <blockquote class="gmail_quote c1">what kernel and zfs version do you run exactly?</blockquote></div>
<br />
</div>
<div>Fresh install, updated to latest from the pve-no-sub repos</div>
<div><br />
uname -r<br />
3.10.0-11-pve<br />
<br />
<br />
cat /var/log/dmesg | grep -E 'SPL:|ZFS:'<br />
[ 17.384858] SPL: Loaded module v0.6.4-358_gaaf6ad2<br />
[ 17.449584] ZFS: Loaded module v0.6.4.1-1099_g7939064, ZFS pool version 5000, ZFS filesystem version 5<br />
[ 18.007733] SPL: using hostid 0xa8c00802<span class="HOEnZb c2"><br />
<br />
<br />
--<br />
</span>
<div><span class="HOEnZb c2">Lindsay</span></div>
</div>
</div>
</blockquote></div>
<br />
<br />
<br />
--<br />
<div>Lindsay</div>
</div>
</blockquote>