[PVE-User] Proxmox + ZFS: Performance issues
Ralf
ralf+pve at ramses-pyramidenbau.de
Mon Apr 25 15:43:29 CEST 2016
Hi,
some more random analysis:
I used atop to debug the problem.
It turned out that the disks are 100% busy and the average I/O time is
beyond all hope. ZFS seems to read/write/arrange things randomly rather
than sequentially.
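For the record, roughly what I used to watch the disks (just a sketch;
the 5-second interval is arbitrary):

  # per-disk utilisation and service times
  iostat -x 5
  # per-vdev bandwidth and IOPS for the pool
  zpool iostat -v rpool 5
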
Nevertheless, I'll buy some new disks for the system. Are SSHDs known to
work well with ZFS? Or would it be better to buy some normal server hard
drives together with an SSD for the ZIL?
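If I go the ZIL route, I assume it would boil down to something like
this (sketch only - the by-id paths are placeholders for whatever SSD I
end up buying):

  # dedicated log (SLOG) device
  zpool add rpool log /dev/disk/by-id/<ssd>-part1
  # optional L2ARC read cache
  zpool add rpool cache /dev/disk/by-id/<ssd>-part2
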
Cheers
Ralf
On 04/25/16 12:37, Ralf wrote:
> Hi,
>
> On 04/24/16 23:57, Lindsay Mathieson wrote:
>> On 25/04/2016 5:03 AM, Ralf wrote:
>>> Some Western Digital Green stuff:
>> There's a big part of your problem: never, ever use WD Greens in
>> servers, especially with ZFS. They are desktop drives, not suitable
> I know, I know, but they were available :-)
>> for 24/7 operation. They have power-saving modes which can't be
>> disabled and which slow them down, and they don't support ATA TLER,
>> which is necessary for promptly timing out on disk errors - Greens
>> will hang forever on a bad read/write, which will freeze your whole
>> ZFS pool (I've been there). It might be worth scanning your dmesg for
>> disk errors.
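>> (A quick way to check TLER/ERC support, sketched from memory and
>> assuming smartmontools is installed:
>>   smartctl -l scterc /dev/sda        # show the current ERC setting
>>   smartctl -l scterc,70,70 /dev/sda  # try to set 7s read/write timeouts
>> on Greens the second command is typically refused.)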
> No errors - I can dump the whole disk without *any* errors, at a
> constant 100 MiB/s. I absolutely understand your argument, and I really
> should buy better disks and migrate the data, but I still don't
> understand that tremendous performance breakdown. 30-50 MiB/s, yes,
> that'd be okay, but 5 MiB/s? Seriously?
>> But you should really be using NAS-rated drives. I thoroughly
>> recommend the Western Digital Reds; they aren't high performance, but
>> they are very reliable and fast enough. For better performance (and
>> more $) the Hitachi NAS drives are also very good.
>>
>> I had two WD Blacks start failing on me last weekend, less than a year
>> in the server, a tense few hours :) And one had failed earlier after 6
>> months. They run too hot and aren't up to the 24/7 workout they get in
>> a typical VM Server. We've replaced them all with WD Red (3TB) now.
>>
>> What ZFS disk setup do you have? Could you post your "zpool status"?
> Sure:
>
>   pool: rpool
>  state: ONLINE
>   scan: resilvered 4.73M in 0h0m with 0 errors on Sat Apr 23 19:21:58 2016
>         <- This happened on purpose after off- and onlining sdb2
> config:
>
>       NAME        STATE     READ WRITE CKSUM
>       rpool       ONLINE       0     0     0
>         mirror-0  ONLINE       0     0     0
>           sdb2    ONLINE       0     0     0
>           sda2    ONLINE       0     0     0
>
>> Mine are RAID10 with a ssd for log and cache.
>>
>> zpool status
>>   pool: tank
>>  state: ONLINE
>>   scan: resilvered 1.46T in 11h10m with 0 errors on Fri Apr 22 05:04:00 2016
>> config:
>>
>>       NAME                                                   STATE     READ WRITE CKSUM
>>       tank                                                   ONLINE       0     0     0
>>         mirror-0                                             ONLINE       0     0     0
>>           ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7XE8CXN           ONLINE       0     0     0
>>           ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N5EC7AUN           ONLINE       0     0     0
>>         mirror-1                                             ONLINE       0     0     0
>>           ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N0D46M4N           ONLINE       0     0     0
>>           ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7DH176P           ONLINE       0     0     0
>>       logs
>>         ata-Samsung_SSD_850_PRO_128GB_S24ZNSAG422885X-part1  ONLINE       0     0     0
>>       cache
>>         ata-Samsung_SSD_850_PRO_128GB_S24ZNSAG422885X-part2  ONLINE       0     0     0
>>
>>
>> I recommend RAID10 for performance and redundancy purposes as well. It
>> has twice the write IOPS and 4 times the read IOPS of a single disk.
>> It outperforms RAID 5 or 6 (RAIDZ, RAIDZ2)
>> - http://www.zfsbuild.com/2010/05/26/zfs-raid-levels/
>>
>> The SSD log partition speeds up writes; the cache speeds up reads
>> (over time, as it gets populated).
>>
>> Warning: Too large a cache can actually kill performance, as it uses
>> up ZFS ARC memory.
>>
>> ZFS needs RAM as well, how much have you allocated to it?
> The machine has 10 GiB of RAM; Proxmox plus three VMs consume ~4 GiB,
> and the rest is left for ZFS. As dedup is deactivated and I use a
> mirror, I think this should be fair enough. The CPU is 8x Xeon E5405.
> So anything that would consume tons of memory (RAIDZ, dedup, ...) is
> disabled.
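> (For reference, how I check what ZFS actually grabs - a rough sketch,
> and the 6 GiB cap below is just an example value:
>   grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats
> and to cap the ARC at e.g. 6 GiB:
>   echo "options zfs zfs_arc_max=6442450944" >> /etc/modprobe.d/zfs.conf
> followed by update-initramfs -u and a reboot.)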
>
> Cheers
> Ralf
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user