[PVE-User] Proxmox NFS issue

Pongrácz István pongracz.istvan at gmail.com
Thu Nov 7 23:01:23 CET 2013


Hi,

Sorry, I have not enough free time.

My comments below in red (an HTML viewer can help).

----------------original message-----------------
From: "Muhammad Yousuf Khan" sirtcp at gmail.com
To: "Pongrácz István"
CC: "pve-user pve.proxmox.com"
Date: Wed, 6 Nov 2013 23:27:47 +0500
----------------------------------------------------------

>>First of all, I am really thankful for such an informative and helpful email; as you have
>>discussed so many points, I will have to test them one by one. However, there is a question which is still
>>confusing me.
>> 
>>
>>The stats I showed in my "rsync" examples bump around 65MB to
>>70MB read/write performance, which I think is not bad at all.
>> 
>>
>>So my question is: why is it showing me good stats when I am sending or receiving data between
>>my storage and Proxmox, and why is it showing delays only inside the VM?
>>
>> 
>>
>>Probably you need to check KVM in various ways: use Linux on it, virtio, etc. Check as many
>>combinations as you can.
>> 
>>
>>Let's say "compression, atime, ashift, dedup, etc." are all creating problems in the VM; then
>>why are they not causing the same problem when copying from Proxmox to OmniOS or OmniOS to
>>Proxmox?
>> 
>>
>>You just narrowed your problem down to the VM; you are on your way :)
>>
>> 
>>
>>Would you please tell me where I can tweak the NFS client settings in Proxmox?
>>
>> 
>>
>>Just issue the command "mount" and check the output for details. You can mount the NFS export
>>manually to a directory and set up this directory in PVE as local storage.
>>
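For illustration, a minimal sketch of that manual-mount approach (the server address, export path, mount point, and storage name below are all hypothetical; adjust to your environment):

```shell
# Hypothetical NFS server 192.168.1.50 exporting /tank/vmstore.
mkdir -p /mnt/nfs-manual

# Mounting manually lets you tune the NFS client options
# (rsize/wsize, tcp, etc.) instead of taking PVE's defaults:
mount -t nfs -o rw,tcp,rsize=32768,wsize=32768 \
    192.168.1.50:/tank/vmstore /mnt/nfs-manual

# Register the mount point in PVE as plain directory storage:
pvesm add dir nfs-manual --path /mnt/nfs-manual --content images
```

To make the mount survive reboots, the same options would go into /etc/fstab.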
>> 
>>
>> 
>>
>>
>>Proxmox:
>>
>> 
>>try to use writeback cache
>>
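As a sketch, the writeback cache can be set per virtual disk with qm set (the VM id 100 and the virtio0 volume spec are placeholders; reuse the spec your own qm config prints):

```shell
# Show the current disk line for the VM first:
qm config 100 | grep virtio0

# Re-attach the same volume with cache=writeback
# (the volume spec is an example -- copy yours from above):
qm set 100 --virtio0 local:100/vm-100-disk-1.qcow2,cache=writeback
```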
>>
>>
>> 
>>
>>I tried that, it didn't help.
>>
>>
>> 
>>compare raw and qcow2 format performance, choose the better one
>>
>>
>>
>>OK, I will try this, but I know I did the test previously and both formats ended up with the same
>>issue.
>> 
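One way to make that comparison repeatable: create an equally sized test image in each format on the same storage and attach each to a test VM in turn (the paths, sizes, and VM id 999 below are examples, not from the original thread):

```shell
# Create equally sized test images in both formats (example paths):
qemu-img create -f raw   /var/lib/vz/images/999/test.raw   4G
qemu-img create -f qcow2 /var/lib/vz/images/999/test.qcow2 4G

# Attach one at a time to a test VM and repeat the same
# in-guest copy test against each:
qm set 999 --virtio1 local:999/test.raw
# ... run the benchmark, then swap in the qcow2 image:
qm set 999 --virtio1 local:999/test.qcow2
```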
>>
>>A VERY NOTICEABLE THING is that it does not happen all the time: e.g., my
>>network graph reaches 25MBps and holds there for 6 to 7 seconds, then again drops to 0.90 to 0.70%,
>>which is much too slow. This draws a zig-zag graph, up and down, and I don't know why.
>> 
>>
>>On the other hand, when copying from Omni to PVE and PVE to Omni, the bandwidth stats show
>>70MBps, and this is very constant until the file transfer ends.
>>
>> 
>>
>>
>> 
>>install Proxmox into a KVM guest and check its pveperf - a good indicator
>> 
>>you can mount NFS manually and set up Proxmox to use that mount point as a simple directory ->
>>you can tune the NFS parameters
>>
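pveperf accepts a path argument, so it can be pointed directly at the storage under test (the mount path below is an example):

```shell
# Baseline: the local root filesystem
pveperf /

# Compare against the NFS-backed directory; a large gap in the
# FSYNCS/SECOND line usually points at the storage, not at KVM.
pveperf /mnt/nfs-manual
```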
>>
>>In kvm:
>>
>> 
>>try to use safe delete regularly or always (overwrite deleted files with zeros)
>>
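A common way to do that zero-overwrite inside a Linux guest is to fill the free space with zeros and then delete the file again (the filename is arbitrary):

```shell
# Inside the guest: write zeros until the disk is full...
dd if=/dev/zero of=/zero.fill bs=1M
# ...flush, then remove the file; the zeroed blocks let a
# compressed or sparse image on the host shrink again.
sync
rm -f /zero.fill
```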
>>
>>In general, if you tune one parameter, you may need to change other parameters as well;
>>for example, if you use qcow2 as the image format on the Proxmox server, the ZFS compression
>>should be zle or off.
>>
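Assuming a ZFS dataset such as tank/vmstore (the name is hypothetical), that would look like:

```shell
# qcow2 data is already packed, so heavy compression mostly burns
# CPU; zle compresses only runs of zeros, which is cheap:
zfs set compression=zle tank/vmstore
# or disable it entirely:
zfs set compression=off tank/vmstore
zfs get compression tank/vmstore   # verify the setting
```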
>>
>>Can you please share a link where I can get more information on the above point?
>>
>> 
>>
>>Just google it; one good start is zfsonlinux.org.
>>
>>
>>Thanks
>>
>> 
>>
>>One more question: when we are using NFS for hosting a VM, isn't it like we are sending and
>>receiving data via "scp" or "rsync"? Does a VM hosted on an external box operate very differently?
>> 
>>
>>One more confusing point which I want to share: with the same settings I am hosting a VM in
>>VirtualBox, and my VirtualBox VM sometimes hangs while copying a big file, but it does not pull down the
>>graph no matter how big the file is; I mean, the IO reads/writes from VirtualBox are very constant.
>> 
>>
>>VirtualBox != KVM. They are different, which means they will act differently. In KVM, you have
>>a lot of parameters to test: change the virtual network/disk type, CPU, etc. Anyway, this
>>also narrows your problem down to the KVM/guest operating system level, so you should
>>focus on making good tests.
>>
>>
>>
>>
>>
>>Bye,
>>
>>István

