<div dir="ltr"><div><div><div><div><div>I believe I managed to recreate the issue today while testing drive transfers.<br><br></div>I created a empty 10G VM Disk on the local SSD drive and moved it (via the webgui) to the ZFS Storage (reservation = off, compression=off). The transfer went very quickly, reaching 100% in a few seconds but then stalled for a few minutes before the final "OK"<br><br></div>iostat was excessively high - 15-20%, but zfslist was very slow and it showed the destination drive filling up quit slowly. a ps showed a lot of z_processess (z_wr_iss etc). After about 5 minutes it finished and load returned to normal.<br><br></div>Some of the google stuff I read mentioned zero detection being a problem when compression is off, so I set compression to lz4 and tried again - this time the transfer completed in < 5 seconds with no load.<br><br></div>I repeated the exercise with compression=off and thin provisioning off - it took a little longer, 40 seconds, with no load to speak off.<br><br></div>So in conclusion - it would seem that disk transfer with thin provisioning on and compression off can impose quite a load on the system, especially if the src disk is thin provisioned to start with.<br><div><div><br><br></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On 20 August 2015 at 03:33, Pongrácz István <span dir="ltr"><<a href="mailto:pongracz.istvan@gmail.com" target="_blank">pongracz.istvan@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><p><br>
Hi Lindsay,</p>
<p>Could you send me the output of the following commands by private email? (A small collection sketch follows the list.)</p>
<ul>
<li>zpool status -v</li>
<li>zpool get all</li>
<li>zfs list</li>
<li>zfs get all <TOP LEVEL of your ZFS filesystem, for example datazfs if your pool is called datazfs> (needed twice: once for the system pool, once for the data pool)</li>
<li>arcstat.py</li>
</ul>
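<p>(A minimal sketch of collecting all of the above into a single file; the pool name "tank" is a placeholder, substitute your own pool name:)</p>
<pre>
# gather the requested diagnostics into one file to send back
{
  zpool status -v
  zpool get all
  zfs list
  zfs get all tank        # run once per pool (system and data)
  arcstat.py
} > zfs-diagnostics.txt
</pre>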
<p>Questions (a sketch of commands for checking the first few follows the list):</p>
<ul>
<li>did you turn on dedup on your pools?</li>
<li>do you use any non-default zfs/zpool settings?</li>
<li>how is your CPU load in the normal and in the stressed (problematic) situation?</li>
<li>is it true that the situation depends on uptime? For example, does it usually happen after about 2 weeks?</li>
<li>or can you see any other pattern in when the bad situation happens?</li>
</ul>
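<p>(A small sketch for checking the first couple of questions; "tank" is again a placeholder pool name:)</p>
<pre>
# is dedup enabled anywhere? (a dedupratio of 1.00x means no deduped data)
zpool get dedupratio tank
zfs get -r dedup tank

# which zfs properties have been changed from their defaults?
zfs get -s local all tank

# cpu load and iowait, in the normal and in the problematic situation
uptime
iostat -x 5
</pre>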
<p>Hints, without too much explanation:</p>
<ul>
<li>you can monitor your pool activity by the following command: <strong>zpool iostat -v 1 </strong>(a screenshot would be nice)</li>
<li>do not turn on dedup. If you have already turned it on, recreate your pool from backup without dedup enabled.</li>
<li>the default memory usage of ZFS (the ARC) is 50% of the physical RAM. Due to the interaction between the Linux memory management and SPL, the total memory actually used can be double the ARC size. In other words, plan your memory allocation so that MEMORY_OF_ALLVM + 2 * ARC size < total physical RAM (a sizing sketch follows this list).</li>
<li>it is probably much better to turn off swap, as the swap is on ZFS:
<ul>
<li>any problem on ZFS (a performance problem, for example) will drag the whole computer down with it</li>
<li>if your system starts to use swap, it was under-provisioned to begin with (as far as I can see, this is not the case here)</li>
</ul>
</li>
<li>try to find a pattern in your performance issue</li>
</ul>
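<p>(A sizing sketch for the memory hint above, with made-up numbers for a host with 64 GB of RAM running roughly 40 GB worth of VMs:)</p>
<pre>
#   MEMORY_OF_ALLVM + 2 * ARC   <  total physical RAM
#      40 GB        + 2 * 10 GB = 60 GB  <  64 GB
# so cap the ARC at 10 GiB by putting this line in /etc/modprobe.d/zfs.conf:
options zfs zfs_arc_max=10737418240

# if the root filesystem is on ZFS, rebuild the initramfs so the option
# is picked up at boot, then reboot:
update-initramfs -u

# and if you decide to stop swapping onto ZFS entirely:
swapoff -a        # plus comment the swap entry out of /etc/fstab
</pre>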
<p>Best regards,</p>
<p>István</p>
<p>----------------original message----------------- <br>
From: "Lindsay Mathieson" <<a href="mailto:lindsay.mathieson@gmail.com" target="_blank">lindsay.mathieson@gmail.com</a>> <br>
To: "Dietmar Maurer" <<a href="mailto:dietmar@proxmox.com" target="_blank">dietmar@proxmox.com</a>> <br>
CC: "ProxMox Users" <<a href="mailto:pve-user@pve.proxmox.com" target="_blank">pve-user@pve.proxmox.com</a>> <br>
Date: Mon, 17 Aug 2015 16:03:34 +1000 <br>
----------------------------------------------------------</p>
<blockquote type="cite"><div><div class="h5"><br>
<div>Think I'll try reinstalling with EXT4 for the boot drive.</div>
<div><br>
<div>On 17 August 2015 at 14:50, Lindsay Mathieson <span><<a href="mailto:lindsay.mathieson@gmail.com" target="_blank">lindsay.mathieson@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote">
<div>
<div><span><br>
</span>
<div><span>On 17 August 2015 at 14:43, Dietmar Maurer <span><<a href="mailto:dietmar@proxmox.com" target="_blank">dietmar@proxmox.com</a>></span> wrote:<br>
</span> <blockquote class="gmail_quote">what kernel and zfs version do you run exactly?</blockquote></div>
<br>
</div>
<div>Fresh install, updated to the latest from the pve-no-subscription repo</div>
<div><br>
uname -r<br>
3.10.0-11-pve<br>
<br>
<br>
cat /var/log/dmesg | grep -E 'SPL:|ZFS:'<br>
[ 17.384858] SPL: Loaded module v0.6.4-358_gaaf6ad2<br>
[ 17.449584] ZFS: Loaded module v0.6.4.1-1099_g7939064, ZFS pool version 5000, ZFS filesystem version 5<br>
[ 18.007733] SPL: using hostid 0xa8c00802<span><br>
<br>
<br>
--<br>
</span>
<div><span>Lindsay</span></div>
</div>
</div>
</blockquote></div>
<br>
<br>
<br>
--<br>
<div>Lindsay</div>
</div>
</div></div>
</blockquote>
<p> </p>
</blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature">Lindsay</div>
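<p>P.S. A rough sketch of the three test runs described at the top of this message. The pool and dataset names (tank, vm-100-disk-1) are made up for illustration, and the disk move itself was done from the Proxmox web GUI rather than on the command line:</p>
<pre>
# run 1: thin provisioned zvol, no compression (transfer stalled, heavy load)
zfs set compression=off tank
zfs get compression,refreservation,volsize tank/vm-100-disk-1

# run 2: the same move, but with lz4 enabled so runs of zeroes are
# detected and never written out (completed in a few seconds)
zfs set compression=lz4 tank

# run 3: compression off, thin provisioning turned off on the storage (~40 s)
zfs set compression=off tank
zfs get refreservation tank/vm-100-disk-1   # a thick zvol shows the full 10G here

# watching the destination fill up during each move
zpool iostat -v 1
zfs list -o name,used,refer
</pre>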
</div>