[PVE-User] Proxmox + ZFS: Performance issues

Lindsay Mathieson lindsay.mathieson at gmail.com
Sun Apr 24 23:57:48 CEST 2016


On 25/04/2016 5:03 AM, Ralf wrote:
> Some Western Digital Green stuff:

  There's a big part of your problem: never ever use WD Greens in 
servers, especially with ZFS. They are desktop drives, not suitable for 
24/7 operation. They have power-saving modes which can't be disabled and 
which slow them down, and they don't support ATA TLER (Time-Limited 
Error Recovery), which is necessary for timing out promptly on disk 
errors. A Green will hang forever retrying a bad read/write, which will 
freeze your whole ZFS pool (I've been there). It might be worth scanning 
your dmesg for disk errors.
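
Something like the following will turn up most ATA errors and timeouts. 
/dev/sdX is just a placeholder for each of your pool disks, and smartctl 
comes from the smartmontools package:

    # ATA exceptions, resets and I/O errors in the kernel log
    dmesg | grep -iE 'ata[0-9]+|i/o error|exception|reset'

    # SMART health and attributes for one pool member
    smartctl -H -A /dev/sdX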

But you should really be using NAS-rated drives. I thoroughly recommend 
the Western Digital Reds; they aren't high performance, but they are 
very reliable and fast enough. For better performance (and more $) the 
Hitachi NAS drives are also very good.

I had two WD Blacks start failing on me last weekend, less than a year 
into service in that box, which made for a tense few hours :) And one 
had failed earlier, after six months. They run too hot and aren't up to 
the 24/7 workout they get in a typical VM server. We've replaced them 
all with WD Reds (3TB) now.

What ZFS disk setup do you have? Could you post your "zpool status" output?

Mine is RAID10 with an SSD for log and cache:

zpool status
   pool: tank
  state: ONLINE
   scan: resilvered 1.46T in 11h10m with 0 errors on Fri Apr 22 05:04:00 2016
config:

        NAME                                                   STATE     READ WRITE CKSUM
        tank                                                   ONLINE       0     0     0
          mirror-0                                             ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7XE8CXN           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5EC7AUN           ONLINE       0     0     0
          mirror-1                                             ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N0D46M4N           ONLINE       0     0     0
            ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7DH176P           ONLINE       0     0     0
        logs
          ata-Samsung_SSD_850_PRO_128GB_S24ZNSAG422885X-part1  ONLINE       0     0     0
        cache
          ata-Samsung_SSD_850_PRO_128GB_S24ZNSAG422885X-part2  ONLINE       0     0     0
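
If you're building a pool like this from scratch, the layout can be 
created roughly as follows. This is only a sketch; the disk and SSD 
partition paths are placeholders for your own /dev/disk/by-id names:

    # four disks as two mirrored pairs (RAID10)
    zpool create tank \
        mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 \
        mirror /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4

    # add SSD partitions as log (SLOG) and cache (L2ARC) devices
    zpool add tank log   /dev/disk/by-id/SSD-part1
    zpool add tank cache /dev/disk/by-id/SSD-part2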


I recommend RAID10 for performance and redundancy as well. With two 
mirrored pairs it has twice the write IOPS and four times the read IOPS 
of a single disk, and it outperforms RAID5 or RAID6 (RAIDZ, RAIDZ2):
     - http://www.zfsbuild.com/2010/05/26/zfs-raid-levels/

The SSD log partition (SLOG) speeds up synchronous writes; the cache 
(L2ARC) speeds up reads over time, as it gets populated.

Warning: too large a cache can actually kill performance, because 
indexing the L2ARC uses up ZFS ARC memory.

ZFS needs RAM as well; how much have you allocated to it?
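
On ZFS on Linux (as Proxmox ships it) you can check and cap the ARC 
roughly like this; the 8GB value is only an example, size it to your 
host:

    # current ARC size and ceiling, in bytes
    grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats

    # cap the ARC at 8GB; takes effect after the module is reloaded or a
    # reboot (merge with /etc/modprobe.d/zfs.conf if it already exists)
    echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf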


-- 
Lindsay Mathieson



