[PVE-User] Proxmox + ZFS: Performance issues

Ralf ralf+pve at ramses-pyramidenbau.de
Tue Apr 26 21:13:51 CEST 2016


Hi,

I think the best thing will be to buy some newer disks...
Thanks for all your help!

Maybe I should then also disable thin provisioning, as it might cause a
lot of random I/O...
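
If I go that route, my understanding is that it boils down to the 'sparse'
flag of the zfspool storage entry, roughly like this (the storage name
'local-zfs' and the dataset are just placeholders):

# cat /etc/pve/storage.cfg
zfspool: local-zfs
        pool rpool/data
        sparse 0

With 'sparse 0' (or the line removed), newly created zvols get a full
refreservation; volumes that already exist could be given one manually,
e.g. zfs set refreservation=32G rpool/data/vm-100-disk-1 (size and volume
name made up).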


On 04/26/16 02:11, Lindsay Mathieson wrote:
> On 26 April 2016 at 09:25, Ralf <ralf+pve at ramses-pyramidenbau.de> wrote:
>
>> ... Just a guess: Shouldn't it be possible to degrade my RAID, create a
>> new degraded RAID-1 array with the correct ashift, and send all
>> volumes from the old array to the new one?
>>
>
> Yes, should be possible - have done it myself.
>
> - Detach one disk from the existing mirror and create a new pool on it.
> - Send the data from the old pool to the new pool (zfs send | zfs recv).
> - Destroy the old pool.
> - Attach the old pool's disk to the new pool as a mirror.
>
> Until the new mirror is set up you will have no redundancy.
>
> http://docs.oracle.com/cd/E19253-01/819-5461/6n7ht6qvl/index.html
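
For reference, a rough command-level sketch of those steps ('tank'/'tank2'
and the /dev/sdX names are placeholders; ashift=12 assumes 4K-sector disks):

# zpool detach tank /dev/sdb                     # drop one disk from the old mirror
# zpool create -o ashift=12 tank2 /dev/sdb       # new single-disk pool with the desired ashift
# zfs snapshot -r tank@migrate
# zfs send -R tank@migrate | zfs recv -F tank2   # copy all datasets/zvols across
# zpool destroy tank
# zpool attach tank2 /dev/sdb /dev/sda           # re-mirror onto the freed disk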
>
> On reflection I'm not convinced ashift is the cause of your problem; you'll
> be losing a little bit of storage to block size, but it shouldn't affect
> performance.
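
To double-check what the existing pool was actually created with, something
like this should show it ('tank' is a placeholder):

# zdb -C tank | grep ashift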
>
> TBH, 10GB does not sound like a lot of RAM for Proxmox, 3 VMs and ZFS.
>
> Do you have a RAM limit set for ZFS? I'd suggest 4-6GB. It can be set at
> runtime:
>
>
> echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
I already set it:

# cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=3221225472

That's 3 GiB.
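
To verify what the module actually uses at runtime (standard ZFS-on-Linux
paths):

# cat /sys/module/zfs/parameters/zfs_arc_max
# grep c_max /proc/spl/kstat/zfs/arcstats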

  Ralf
>
> And could you post your zfs props?
>
> zfs get all <poolname>
>
>
> Also you might be better off taking this to the ZFS on Linux users list. More
> ZFS experts there.