[PVE-User] Proxmox NFS issue
Pongrácz István
pongracz.istvan at gmail.com
Wed Nov 6 18:43:06 CET 2013
Hi,
I just read your issue.
Some comments on your ZFS server setup:
for ZIL, in your config, 1-8GB is more than enough in any case
L2ARC - it needs RAM to keep its header information in the ARC, so you should probably use a smaller L2ARC than you have now
Example: for ZIL and L2ARC, you would be better off with 2 x 60GB SSDs:
2 x 40GB for L2ARC, striped, let's say sdb2 and sdc2 - total 80GB
2 x 5GB for ZIL, in mirror, let's say sdb1 and sdc1 - total 5GB (mirror)
you should check your ZFS setup in detail (compression, atime, ashift, dedup etc.)
compression: lz4, atime: off, ashift: 12, dedup: off, blocksize 128k
you should check your raw ZFS performance on the NAS itself - be careful, it is not as simple as it sounds
check your cache hit rates (ARC, L2ARC)
check your iostats under load (zpool iostat -v 1) - see the commands sketched after this list
read the manual of your chosen ZFS implementation carefully - seriously, it is a great tool, but it needs some knowledge
sign up to a ZFS-specific mailing list to get ZFS-specific help
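A minimal sketch of those checks, assuming the pool/dataset names from your df output (tank, tank/VMbase) and the illumos-style tools on omniOS:

  # current dataset settings (note: ashift is fixed at pool creation and cannot be changed later)
  zfs get compression,atime,dedup,recordsize tank/VMbase
  zfs set compression=lz4 tank/VMbase
  zfs set atime=off tank/VMbase

  # ARC hit/miss counters on illumos
  kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses

  # per-vdev I/O while the pool is under load
  zpool iostat -v tank 1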
Network:
check your NFS setup on the ZFS server (sync vs. async)
check your Proxmox NFS client settings - how do you mount it? (see the example after this list)
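For example, a quick way to see both sides (dataset name taken from your df output; nfsstat comes with the Linux NFS client tools):

  # on the ZFS server: are sync writes standard, always or disabled?
  zfs get sync tank/VMbase

  # on the Proxmox node: the NFS mount options actually in effect
  nfsstat -m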
Proxmox:
try to use writeback cache
compare raw and qcow2 format performance, choose the better one
install Proxmox into a KVM guest and check its pveperf - a good indicator
you can mount NFS manually and set up Proxmox to use that mount point as a simple directory -> you can tune the NFS parameters (see the sketch after this list)
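A sketch of the manual-mount approach - the mount options are only a common starting point, and the storage name nfsdir is made up for the example:

  # on the Proxmox node
  mkdir -p /mnt/nfscom
  mount -t nfs -o vers=3,tcp,hard,rsize=65536,wsize=65536 10.x.x.25:/tank/VMbase /mnt/nfscom

  # /etc/pve/storage.cfg - use the mount point as a plain directory storage
  dir: nfsdir
       path /mnt/nfscom
       content images

  # optional: put a disk of VM 1009 on it with writeback cache (adjust bus/volume to yours)
  qm set 1009 --virtio0 nfsdir:1009/vm-1009-disk-1.raw,cache=writeback

  # pveperf accepts a path, so you can point it straight at the mount
  pveperf /mnt/nfscom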
In kvm:
try to use safe delete regularly or always (overwrite deleted files with zeros) - see the example below
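Inside a Linux guest, for example, filling the free space with zeros and deleting the fill file achieves this (a sketch; dd is expected to stop with a "no space left on device" error once the disk is full):

  dd if=/dev/zero of=/zerofile bs=1M
  sync
  rm /zerofile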
In general, if you tune one parameter, you may need to change other parameters as well; for example, if you use qcow2 as the image format on the Proxmox server, the ZFS compression should be zle or off.
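For instance, to follow that advice on the dataset from your df output when it holds qcow2 images:

  zfs set compression=zle tank/VMbase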
In my opinion, your problem at this moment is somewhere in your network/NFS setup; later you will have issues with ZFS under real-world load :)))))))
Bye,
István
----------------original message-----------------
From: "Muhammad Yousuf Khan" sirtcp at gmail.com
To: "pve-user at pve.proxmox.com"
Date: Wed, 6 Nov 2013 20:05:13 +0500
----------------------------------------------------------
>
>I am facing slow read and write performance on our new NAS.
>
>here is the hardware detail.
>
>
>Proxmox :
>
>12GB RAM
>
>Xeon 3.2 (2 processors)
>
>500GB HD for Proxmox and Debian OS.
>
>Remote SAN/NAS:
>
>OS : omniOS
>
>RAM : 12 GB
>
>FS : ZFS
>
>Sharing protocol : NFS
>
>Xeon 3.2 (2 processors)
>
>1 x 60GB SSD for L2ARC
>
>1 x 120GB SSD for ZIL
>
>3 mixed-capacity HDs (800GB, 500GB and 1TB), all with 64MB cache.
>
>ZFS RAID Type : RaidZ
>
>
>when I am inside the VM and try to copy to the network or from the network, I see very slow
>traffic. I am specifically talking about inside the VM.
>
>
>kindly find the attached file to see my read and write performance.
>In the graphs, the red line shows copying data-to-VM and the white line shows copying
>data-to-network.
>
>when I am copying data from the terminal, it shows good speed.
>
>here is some more detail on the mount points (output of my "df -h" command):
>
>10.x.x.25:/tank/VMbase 903G 2.1G 901G 1% /nfscom
>
>
>---------------------------------------------------------------------------
>
>here is some copy test on terminal from the nfs mount point
>
>root@bull:/nfscom/images/1009# rsync --progress vm-1009-disk-1.raw /
>vm-1009-disk-1.raw
> 824311808 7% 70.00MB/s 0:02:18
>
>
>(you can see I am copying a 10GB file from the NFS mount to "/"; this is the same VM image that is
>showing the problem in the attached graphics)
>
>
>now copying the same VM image from "/" to the same NFS mount point:
>
>root@bull:/nfscom/images# rsync --progress /vm-1009-disk-1.raw
>/nfscom/images/
>vm-1009-disk-1.raw
> 607682560 5% 63.71MB/s 0:02:35
>
>you can see my reads and writes work great from the console, but when it comes to the VM, I
>am facing issues inside the VM.
>
>actually, I asked this question a few weeks ago. someone on the forum suggested that I
>buy an SSD for the ZIL, so it took me a while, but I bought two SSDs: 1 for the L2ARC and 1 for the ZIL.
>but I am still standing at the same point.
>
>Can anyone please tell me what mistake I am making here?
>
>
>I even tried FreeNAS with the same ZFS config and I am facing the same issue.
>
>however, when using the same NFS with VirtualBox on Ubuntu 12.x, it does fine.
>
>
>I tried every available HD type and all the cache modes available in Proxmox, such as "no cache",
>"sync" and "write through", on both raw and qcow2 drive types, but nothing helped.
>
>I don't know what I am doing wrong.
>
>please help me out. I am very close to banging my head :).
>
>__________________________________________________
>
>
>_______________________________________________
>pve-user mailing list
>pve-user at pve.proxmox.com
>http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user