[PVE-User] Proxmox NFS issue

Muhammad Yousuf Khan sirtcp at gmail.com
Wed Nov 6 19:27:47 CET 2013


>
>    - for ZIL, in your config, a 1-8GB size is more than enough in any case
>    - L2ARC - it needs ram to keep header information in ARC, probably
>    lower l2arc than actual
>       - Example: for ZIL and L2ARC, you should be better with 2 x 60GB
>       SSD:
>          - 2 x 40GB for L2ARC, striped, let's say sdb2 and sdc2 - total
>          80GB
>          - 2 x 5GB for ZIL, in mirror, let's say sdb1 and sdc1 - total
>          5GB (mirror)
>         - you should check your ZFS setup in details (compression, atime,
>    ashift, dedup etc.)
>
>
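The RAM cost of L2ARC headers mentioned above can be estimated with quick
shell arithmetic. A minimal sketch, assuming roughly 70 bytes of ARC per
cached record (the exact per-record overhead varies by ZFS release) and
the pool's default 128K recordsize:

```shell
#!/bin/sh
# Rough estimate of ARC memory consumed by L2ARC headers.
# Assumptions: 80GB L2ARC (2 x 40GB striped, as above), 128K records,
# ~70 bytes of header per cached record (varies across ZFS releases).
l2arc_bytes=$((80 * 1024 * 1024 * 1024))
record_bytes=$((128 * 1024))
header_bytes=70
records=$((l2arc_bytes / record_bytes))
overhead=$((records * header_bytes))
echo "${records} records -> $((overhead / 1024 / 1024)) MiB of ARC for headers"
# prints: 655360 records -> 43 MiB of ARC for headers
```

With small VM-style blocks (e.g. 8K) the same 80GB L2ARC would need
sixteen times as much ARC for headers, which is why an oversized L2ARC
can hurt more than it helps.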
First of all, I am really thankful for such an informative and helpful
email. You have discussed so many points that I will have to test them
one by one. However, one question is still confusing me.

The stats that I showed in my "rsync" examples bump around 65MB to 70MB
read/write performance, which I think is not bad at all.

So my question is: why does it show me good stats when I am sending or
receiving data between my storage and Proxmox? Why does it show the
delay only inside the VM?

Let's say "compression, atime, ashift, dedup etc." are all creating a
problem in the VM - then why do they not cause the same problem when
copying from Proxmox to OmniOS or OmniOS to Proxmox?

>
>    -
>       - compression: lz4, atime: off, ashift: 12, dedup: off, blocksize
>       128k
>     - you should check your raw ZFS performance on the nas, be careful,
>    not as simple as it sounds
>    - check your cache hit rates (arc, l2arc),
>    - check your iostats under load (zpool iostat -v 1)
>    - read carefully the manual of the chosen ZFS implementation,
>    seriously, great tool, but needs some knowledge
>    - sign up to a zfs specific mailing list to get ZFS specific help
>
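The checks suggested above could be sketched as follows. The pool and
dataset names ("tank", "tank/nfs") are placeholders, kstat is the
illumos/OmniOS way to read the ARC counters, and all of this needs a
live pool, so it is illustrative only:

```shell
#!/bin/sh
# Sketch only - requires a live pool; "tank"/"tank/nfs" are placeholders.
zfs get compression,atime,dedup,recordsize tank/nfs
zdb -C tank | grep ashift                  # ashift is fixed at pool creation
kstat -n arcstats | grep -E 'hits|misses'  # ARC/L2ARC hit counters (illumos/OmniOS)
zpool iostat -v tank 1 5                   # per-vdev I/O: 1s interval, 5 samples
```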
> Network:
>
>    - check your NFS setup on the ZFS server (sync vs. async)
>    - check your Proxmox nfs client settings, how do you mount
>
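The two checks above can be sketched like this; "tank/nfs" is a
placeholder dataset name, the `zfs get` side runs on the OmniOS server,
and /proc/mounts shows what the Proxmox client actually negotiated:

```shell
#!/bin/sh
# Server side (OmniOS): sync=standard honours NFS sync requests;
# sync=disabled is fast but can lose acknowledged writes on a crash.
# "tank/nfs" is a placeholder dataset name.
command -v zfs >/dev/null 2>&1 && zfs get sync,sharenfs tank/nfs

# Client side (Proxmox): show the options the kernel actually applied
grep -w nfs /proc/mounts || echo "no NFS mounts found"
```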
Would you please tell me where I can tweak the NFS client settings in
Proxmox?



>
>
> Proxmox:
>
>    - try to use writeback cache
>
>
I tried that, it didn't help.

>
>    - compare raw and qcow2 format performance, choose the better one
>
Ok, I will try this, but I know I did this test previously and both
formats ended up with the same issue.

A VERY NOTICEABLE THING IS that it does not happen all the time. For
example, my network graph reaches 25MBps and holds there for 6 to 7
seconds, then drops back down to 0.90-0.70MBps, which is far too slow.
The graph zig-zags up and down and I don't know why.

On the other hand, when copying from OmniOS to PVE or PVE to OmniOS,
the bandwidth stats show 70MBps, and that stays very constant until the
file transfer ends.


>
>    - install proxmox into a kvm and check its pveperf - good indicator
>    - you can mount nfs manually and setup proxmox to use that point as a
>    simple directory -> you can tune nfs parameters
>
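The manual-mount route above might look like this. The hostname
"omnios", the export "/tank/nfs", and the mount options are placeholders
to illustrate the idea, not recommendations:

```shell
#!/bin/sh
# Sketch: mount the export by hand so the NFS options stay tunable,
# then add the mountpoint to Proxmox as plain directory storage.
# "omnios", "/tank/nfs" and the option values are placeholders.
mkdir -p /mnt/omnios-nfs
mount -t nfs -o rw,hard,tcp,rsize=65536,wsize=65536 \
    omnios:/tank/nfs /mnt/omnios-nfs

# Matching directory storage entry in /etc/pve/storage.cfg:
#   dir: omnios-nfs
#       path /mnt/omnios-nfs
#       content images
```

With the storage defined as a plain directory, rsize/wsize, hard vs.
soft, and TCP vs. UDP can all be experimented with from the mount
options alone.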
> In kvm:
>
>    - try to use safe delete regularly or always (overwrite deleted files
>    with 0)
>
> In general, if you tune one parameter, you may need to change other
> parameters as well; for example, if you use qcow2 as the image format on
> the proxmox server, the zfs compression should be zle or off.
>
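That combination could be tried as follows. "tank/nfs" is a placeholder
dataset name; the idea, as stated above, is not to compress twice - zle
only compresses runs of zeros, so it stays cheap under qcow2:

```shell
#!/bin/sh
# Sketch: pair qcow2 images with cheap zero-run-length (zle)
# compression on the backing dataset. "tank/nfs" is a placeholder.
command -v zfs >/dev/null 2>&1 \
    && zfs set compression=zle tank/nfs \
    || echo "zfs tools not available on this host"

# Create same-sized raw and qcow2 images so both formats can be benchmarked
dir=$(mktemp -d)
if command -v qemu-img >/dev/null 2>&1; then
    qemu-img create -f raw   "$dir/test-raw.img" 1G
    qemu-img create -f qcow2 "$dir/test.qcow2"   1G
else
    echo "qemu-img not installed"
fi
rm -rf "$dir"
```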
Can you please share any link where I can get more information on the
above point?

Thanks

One more question: when we use NFS for hosting VMs, isn't it just like
sending and receiving data via "scp" or "rsync"? Does a VM hosted on an
external box operate very differently?

One more confusing point I want to share: with the same settings I am
hosting a VM in VirtualBox, and although that VM sometimes hangs while
copying a big file, it does not pull the graph down no matter how big
the file is - I mean the I/O reads/writes from VirtualBox are very
constant.