[PVE-User] DELL PowerEdge T440/PERC H750 and embarrassing HDD performance...

Eneko Lacunza elacunza at binovo.es
Thu Apr 20 09:11:40 CEST 2023


Hi Marco,

What disk model?

sdc has only one backing HDD, right?
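
(If it's not at hand: something like the following usually shows the model and 
whether the kernel sees the device as rotational. Behind a PERC, smartctl may 
need the megaraid device type; the ,N slot number below is just a placeholder.)

   lsblk -d -o NAME,MODEL,ROTA /dev/sdc
   smartctl -i /dev/sdc        # or: smartctl -i -d megaraid,N /dev/sdX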

84 IOPS is not much, but I don't think you can get much more from an HDD 
with random RW...
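
As a rough back-of-envelope (assuming a 7.2k RPM drive, not your exact model):

   avg seek ~8.5 ms + half rotation ~4.2 ms  =>  ~12.7 ms per random I/O
   1000 ms / ~12.7 ms                        =>  ~75-80 IOPS per spindle

With such a deep queue (4 jobs x iodepth=256) the drive and controller can 
reorder requests and push that a bit higher, but it stays in that ballpark.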

The PERC controller is quite new; maybe the driver in PVE 7 is better optimized...
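
If you want to compare both setups, it may be worth checking which driver and 
version each PVE release loads for the controller (the H750 should be handled 
by megaraid_sas, if I remember correctly):

   modinfo megaraid_sas | grep -i ^version
   dmesg | grep -i -e megaraid -e perc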

Cheers

On 19/4/23 at 19:07, Marco Gaiarin wrote:
> Situation: some small PVE clusters with ZFS, still on PVE 6.
>
> We have a set of PowerEdge T340 with PERC H330 Adapter in JBOD mode, which
> perform decently with HDD disks and ZFS.
>
> We also have a set of DELL PowerEdge T440 with PERC H750, which does NOT have
> a JBOD mode but only a 'Non-RAID' auto-RAID0 mode, and these perform 'indecently'
> on HDD disks, for example:
>
> root@pppve1:~# fio --filename=/dev/sdc --direct=1 --rw=randrw --bs=128k --ioengine=libaio --iodepth=256 --runtime=120 --numjobs=4 --time_based --group_reporting --name=hdd-rw-128
> hdd-rw-128: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=256
> ...
> fio-3.12
> Starting 4 processes
> Jobs: 4 (f=4): [m(4)][0.0%][eta 08d:17h:11m:29s]
> hdd-rw-128: (groupid=0, jobs=4): err= 0: pid=26198: Wed May 18 19:11:04 2022
>    read: IOPS=84, BW=10.5MiB/s (11.0MB/s)(1279MiB/121557msec)
>      slat (usec): min=4, max=303887, avg=23029.19, stdev=61832.29
>      clat (msec): min=1329, max=6673, avg=4737.71, stdev=415.84
>       lat (msec): min=1543, max=6673, avg=4760.74, stdev=420.10
>      clat percentiles (msec):
>       |  1.00th=[ 2802],  5.00th=[ 4329], 10.00th=[ 4463], 20.00th=[ 4530],
>       | 30.00th=[ 4597], 40.00th=[ 4665], 50.00th=[ 4732], 60.00th=[ 4799],
>       | 70.00th=[ 4866], 80.00th=[ 4933], 90.00th=[ 5134], 95.00th=[ 5336],
>       | 99.00th=[ 5805], 99.50th=[ 6007], 99.90th=[ 6342], 99.95th=[ 6409],
>       | 99.99th=[ 6611]
>     bw (  KiB/s): min=  256, max= 5120, per=25.18%, avg=2713.08, stdev=780.45, samples=929
>     iops        : min=    2, max=   40, avg=21.13, stdev= 6.10, samples=929
>    write: IOPS=87, BW=10.9MiB/s (11.5MB/s)(1328MiB/121557msec); 0 zone resets
>      slat (usec): min=9, max=309914, avg=23025.13, stdev=61676.77
>      clat (msec): min=1444, max=13086, avg=6943.12, stdev=2068.26
>       lat (msec): min=1543, max=13086, avg=6966.15, stdev=2069.28
>      clat percentiles (msec):
>       |  1.00th=[ 2769],  5.00th=[ 4597], 10.00th=[ 4799], 20.00th=[ 5067],
>       | 30.00th=[ 5403], 40.00th=[ 5873], 50.00th=[ 6409], 60.00th=[ 7148],
>       | 70.00th=[ 8020], 80.00th=[ 9060], 90.00th=[10134], 95.00th=[10671],
>       | 99.00th=[11610], 99.50th=[11879], 99.90th=[12550], 99.95th=[12550],
>       | 99.99th=[12684]
>     bw (  KiB/s): min=  256, max= 5376, per=24.68%, avg=2762.20, stdev=841.30, samples=926
>     iops        : min=    2, max=   42, avg=21.52, stdev= 6.56, samples=926
>    cpu          : usr=0.05%, sys=0.09%, ctx=2847, majf=0, minf=49
>    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
>       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
>       issued rwts: total=10233,10627,0,0 short=0,0,0,0 dropped=0,0,0,0
>       latency   : target=0, window=0, percentile=100.00%, depth=256
>
> Run status group 0 (all jobs):
>     READ: bw=10.5MiB/s (11.0MB/s), 10.5MiB/s-10.5MiB/s (11.0MB/s-11.0MB/s), io=1279MiB (1341MB), run=121557-121557msec
>    WRITE: bw=10.9MiB/s (11.5MB/s), 10.9MiB/s-10.9MiB/s (11.5MB/s-11.5MB/s), io=1328MiB (1393MB), run=121557-121557msec
>
> Disk stats (read/write):
>    sdc: ios=10282/10601, merge=0/0, ticks=3041312/27373721, in_queue=30373472, util=99.99%
>
> Note in particular the IOPS: very, very slow...
>
>
> Does someone have some hint to share?! Thanks.
>


Eneko Lacunza
Technical Director
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun

https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/



