[PVE-User] Proxmox + ZFS: Performance issues

Lindsay Mathieson lindsay.mathieson at gmail.com
Tue Apr 26 02:11:39 CEST 2016


On 26 April 2016 at 09:25, Ralf <ralf+pve at ramses-pyramidenbau.de> wrote:

> ... Just a guess: Shouldn't it be possible to degrade my raid, create a
> new degraded raid1 array having the correct ashift size and sending all
> volumes from the old to the new raid?
>


Yes, that should be possible - I've done it myself (rough command sketch below):

- Detach one disk from the existing mirror and create a new pool with the correct ashift.
- Send the data from the old pool to the new pool (zfs send | zfs recv).
- Destroy the old pool.
- Attach the old pool's disk to the new pool's disk as a mirror.

Until the new mirror is set up you will have no redundancy.
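
For concreteness, here is a minimal sketch of those steps, assuming a
hypothetical pool "tank" mirrored on /dev/sda and /dev/sdb - your pool
and device names will differ, and this won't work as-is on the pool
you're booted from:

# break the mirror and create the new pool with the desired ashift
zpool detach tank /dev/sdb
zpool create -o ashift=12 newtank /dev/sdb

# copy everything over via a recursive snapshot
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F newtank

# drop the old pool and re-attach its disk; a resilver will run
zpool destroy tank
zpool attach newtank /dev/sdb /dev/sda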

http://docs.oracle.com/cd/E19253-01/819-5461/6n7ht6qvl/index.html

On reflection I'm not convinced ashift is the cause of your problem -
you'll lose a little storage to block-size overhead, but it shouldn't
affect performance.

TBH, 10GB does not sound like a lot of RAM for Proxmox, 3 VMs and ZFS.

Do you have a RAM limit set for ZFS? I'd suggest 4-6GB. It can be set at
runtime:


echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
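
That value is 4 GiB in bytes. To make the limit persist across reboots,
the same setting can go in a modprobe options file, e.g.
/etc/modprobe.d/zfs.conf:

options zfs zfs_arc_max=4294967296

If your root is on ZFS you may also need to refresh the initramfs for
it to apply at boot.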

And could you post your zfs props?

zfs get all <poolname>
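
If the full listing is too long to post, the properties that usually
matter for performance can be queried directly - note that ashift is a
pool property, so it goes through zpool:

zfs get recordsize,compression,atime,sync,primarycache <poolname>
zpool get ashift <poolname>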


Also, you might be better off taking this to the ZFS on Linux users
list - there are more ZFS experts there.




-- 
Lindsay


