[PVE-User] Shared storage on NAS speed - LVM(over iSCSI) vs NFS

Mikhail m at plus-plus.su
Wed Jul 19 12:01:45 CEST 2017


On 07/19/2017 12:52 PM, Emmanuel Kasper wrote:
> do not use dd to benchmark storages, use fio
> 
> with a command line like
> 
> fio  --size=9G --bs=64k --rw=write --direct=1 --runtime=60
> --name=64kwrite --group_reporting | grep bw
> 
> inside your mount point
> 
> or use the --filename option to point to a block device
> 
> from this you will get reliable sequential write info


Emmanuel, thanks for the hint!
I just tried benchmarking with fio using your command line. Results are
below, and they look very slow (avg=24888.52 KB/s):

# fio  --size=9G --bs=64k --rw=write --direct=1 --runtime=60
--name=64kwrite --group_reporting
64kwrite: (g=0): rw=write, bs=64K-64K/64K-64K/64K-64K, ioengine=sync, iodepth=1
fio-2.1.11
Starting 1 process
64kwrite: Laying out IO file(s) (1 file(s) / 9216MB)
Jobs: 1 (f=1): [W(1)] [15.4% done] [0KB/1022KB/0KB /s] [0/15/0 iops] [eta 05m:34s]
64kwrite: (groupid=0, jobs=1): err= 0: pid=7841: Wed Jul 19 12:57:15 2017
  write: io=1422.6MB, bw=24231KB/s, iops=378, runt= 60117msec
    clat (usec): min=87, max=293416, avg=2637.70, stdev=14667.15
     lat (usec): min=87, max=293418, avg=2639.85, stdev=14667.17
    clat percentiles (usec):
     |  1.00th=[   87],  5.00th=[   88], 10.00th=[   88], 20.00th=[   89],
     | 30.00th=[  101], 40.00th=[  135], 50.00th=[  195], 60.00th=[  235],
     | 70.00th=[  334], 80.00th=[  414], 90.00th=[  700], 95.00th=[ 8384],
     | 99.00th=[81408], 99.50th=[117248], 99.90th=[193536], 99.95th=[211968],
     | 99.99th=[250880]
    bw (KB  /s): min=  555, max=172928, per=100.00%, avg=24888.52, stdev=34949.10
    lat (usec) : 100=29.27%, 250=32.97%, 500=25.85%, 750=2.35%, 1000=1.41%
    lat (msec) : 2=0.49%, 4=0.37%, 10=3.04%, 20=1.57%, 50=1.22%
    lat (msec) : 100=0.78%, 250=0.67%, 500=0.01%
  cpu          : usr=0.18%, sys=1.34%, ctx=26211, majf=0, minf=8
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=22761/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: io=1422.6MB, aggrb=24231KB/s, minb=24231KB/s, maxb=24231KB/s, mint=60117msec, maxt=60117msec

Disk stats (read/write):
    dm-7: ios=0/22961, merge=0/0, ticks=0/77576, in_queue=77692, util=98.84%, aggrios=2437/28407, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    md0: ios=2437/28407, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=1035/14632, aggrmerge=53/259, aggrticks=4785/68958, aggrin_queue=73796, aggrutil=67.74%
  sda: ios=1782/14834, merge=50/265, ticks=8488/77372, in_queue=85876, util=67.74%
  sdb: ios=1153/14837, merge=50/264, ticks=4460/71308, in_queue=75792, util=63.19%
  sdc: ios=737/14428, merge=57/254, ticks=3924/65828, in_queue=69896, util=56.76%
  sdd: ios=471/14431, merge=55/255, ticks=2268/61324, in_queue=63620, util=54.84%
#

I have also changed the CPU frequency to its maximum of 3.40GHz, but it
looks like this was not the issue.
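
As a next step I could also point fio at the block device directly with
--filename, as you mentioned. Roughly something like the line below (the
device path is just a placeholder for the actual LVM volume, and since this
writes straight to the device it would have to be a scratch volume):

# fio --filename=/dev/mapper/<vg>-<testlv> --size=9G --bs=64k --rw=write \
  --direct=1 --runtime=60 --name=64kwrite-blkdev --group_reporting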

Mikhail.


