<div dir="ltr"><br><div class="gmail_extra"><div class="gmail_quote"><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><ul><li>for ZIL, in your config, 1-8GB size more than enoug in any case</li>
<li>L2ARC - it needs ram to keep header information in ARC, probably lower l2arc than actual
<ul>
<li>Example: for ZIL and L2ARC, you should be better with 2 x 60GB SSD:
<ul>
<li>2 x 40GB for L2ARC, striped, let's say sdb2 and sdc2 - total 80GB</li>
<li>2 x 5GB for ZIL, in mirror, let's say sdb1 and sdb2 - total 5GB (mirror)</li>
</ul>
</li>
</ul>
</li>
<li>you should check your ZFS setup in details (compression, atime, ashift, dedup etc.)
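A minimal sketch of that layout, assuming the pool is named "tank" and the
two SSDs are already partitioned as above (adjust the names to your system):

    # mirrored ZIL (SLOG) on the small partitions
    zpool add tank log mirror sdb1 sdc1
    # striped L2ARC on the large partitions (cache devices cannot be mirrored)
    zpool add tank cache sdb2 sdc2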
First of all, I am really thankful for such an informative and helpful email.
As you have discussed so many points, I will have to test them one by one.
However, there is one question which is still confusing me.
The stats that I showed in my "rsync" examples bounce around 65MB/s to 70MB/s
read/write performance, which I think is not bad at all.

So my question is: why does it show good stats when I am sending or receiving
data between my storage and Proxmox, and why does it show delays only inside
the VM?
Let's say "compression, atime, ashift, dedup etc." are all creating problems
in the VM. Then why are they not causing the same problem when copying from
Proxmox to OmniOS or from OmniOS to Proxmox?
> - compression: lz4, atime: off, ashift: 12, dedup: off, blocksize 128k
>   (see the sketch after this list)
> - you should check your raw ZFS performance on the NAS; be careful, it is
>   not as simple as it sounds
> - check your cache hit rates (ARC, L2ARC)
> - check your iostats under load (zpool iostat -v 1)
> - read the manual of the chosen ZFS implementation carefully; seriously,
>   it is a great tool, but it needs some knowledge
> - sign up to a ZFS-specific mailing list to get ZFS-specific help
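A quick way to verify those properties and watch the cache hit rates on the
storage box (pool name "tank" is an example; on illumos the ARC tool may be
named arcstat.pl rather than arcstat):

    # confirm the recommended dataset properties
    zfs get compression,atime,dedup,recordsize tank
    # ashift is fixed at vdev creation time; read it back from the pool config
    zdb -C tank | grep ashift
    # ARC/L2ARC hit rates, one sample per second
    arcstat 1
    # per-vdev load while a VM is generating I/O
    zpool iostat -v tank 1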
> Network:
> - check your NFS setup on the ZFS server (sync vs. async; see the sketch
>   below)
> - check your Proxmox NFS client settings and how you mount the share

Would you please tell me where I can tweak the NFS client settings in Proxmox?
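On the sync vs. async point quoted above: NFS exports issue synchronous
writes by default, which is exactly the load the ZIL absorbs, so it is worth
checking what the dataset is doing. A sketch, assuming a dataset called
tank/vmstore (hypothetical name):

    # current sync policy: standard, always, or disabled
    zfs get sync tank/vmstore
    # for testing only: async NFS is fast but risks data loss on power failure
    # zfs set sync=disabled tank/vmstore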
> Proxmox:
> - try to use writeback cache (see the sketch below)

I tried that, it didn't help.
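For reference, the cache mode is set per disk; a sketch, where the VM ID,
storage name, and volume name are all examples:

    # set writeback cache on an existing virtio disk of VM 100
    qm set 100 -virtio0 vmstore:100/vm-100-disk-1.qcow2,cache=writeback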
> - compare raw and qcow2 format performance, choose the better one (see the
>   sketch below)

OK, I will try this, but I know I ran this test previously and both formats
ended up with the same issue.

A VERY NOTICEABLE THING: it does not happen all the time. For example, my
network graph reaches 25MB/s and holds for 6 to 7 seconds, then drops back
down to 0.90-0.70MB/s, which is far too slow. It draws a zig-zag graph, up
and down, and I don't know why.

On the other hand, when copying from OmniOS to PVE or from PVE to OmniOS, the
bandwidth stats show 70MB/s, and this stays very constant until the file
transfer ends.
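For the raw vs. qcow2 comparison, a rough sketch (paths and sizes are
examples): create one disk in each format, attach each to the test VM in
turn, and run the same dd inside the guest.

    # on the Proxmox host, on the NFS-backed storage
    qemu-img create -f qcow2 /mnt/pve/vmstore/images/100/vm-100-disk-2.qcow2 10G
    qemu-img create -f raw   /mnt/pve/vmstore/images/100/vm-100-disk-3.raw 10G

    # inside the guest, against each attached disk (here /dev/vdb)
    dd if=/dev/zero of=/dev/vdb bs=1M count=2048 conv=fdatasync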
> - install Proxmox into a KVM and check its pveperf - a good indicator
> - you can mount NFS manually and set up Proxmox to use that mount point as
>   a simple directory -> this lets you tune the NFS parameters (see the
>   sketch below)
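A sketch of that manual-mount approach; the server address, export path, and
mount options are starting-point examples, not tested values:

    # mount the export by hand with explicit options
    mount -t nfs -o vers=3,hard,intr,rsize=65536,wsize=65536,noatime \
        192.168.1.10:/tank/vmstore /mnt/manual-nfs
    # register the mount point as a plain directory storage
    pvesm add dir manual-nfs --path /mnt/manual-nfs
    # and measure it
    pveperf /mnt/manual-nfs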
> In KVM:
> - try to use safe delete regularly or always (overwrite deleted files with
>   zeros); see the sketch below
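A minimal sketch of that zeroing step inside the guest (zeroed blocks
compress to almost nothing, so the image stops carrying stale data around):

    # fill the free space with zeros, then remove the filler file
    dd if=/dev/zero of=/zerofile bs=1M
    rm -f /zerofile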
> In general, if you tune one parameter, you may need to change other
> parameters as well; for example, if you use qcow2 as the image format on
> the Proxmox server, the ZFS compression should be zle or off.
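A one-line sketch of that change (the dataset name is an example):

    zfs set compression=zle tank/vmstore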
Can you please share a link where I can get more information on the above
point?

Thanks.

One more question: when we use NFS for hosting a VM, isn't it like we are
sending and receiving data via "scp" or "rsync"? Does a VM hosted on an
external box operate very differently?
One more confusing point which I want to share: with the same settings I am
hosting a VM in VirtualBox, and my VirtualBox VM sometimes hangs while
copying a big file, but the graph does not drop no matter how big the file
is. I mean, the I/O reads/writes from VirtualBox are very constant.