[pve-devel] multipath problems

Alexandre DERUMIER aderumier at odiso.com
Wed Aug 22 08:00:32 CEST 2012


Hi Dietmar, I'll try to help you.

What is your hardware config (CPU/RAM/disks)?
What is your RAID level (raid-z, mirrors)?
Do you use SAS or SATA disks?

Some quick performance tuning, through the GUI:
:2000/settings/preferences/?this_form=sys_form
zfs_no_cacheflush = yes
Sys_zfs_vdev_max_pending = 1 for SATA, = 10 for SAS
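
If you prefer to set those outside the GUI, they should map to the standard Illumos ZFS kernel tunables; here is a minimal sketch of /etc/system on the Nexenta appliance (assuming SAS disks; a reboot is needed for it to take effect):

* /etc/system on the NexentaStor box (sketch, applied at next reboot)
set zfs:zfs_nocacheflush = 1
set zfs:zfs_vdev_max_pending = 10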


Also, if you want more sequential bandwidth, you can try a bigger block size for the zvol, like 128K (I set 4K by default in my module).
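
For example (pool and volume names are just placeholders here), a 128K zvol would be created on the Nexenta side like this; note that volblocksize can only be set at creation time, so an existing zvol has to be recreated:

zfs create -V 100G -o volblocksize=128K tank/vm-101-disk-1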

About Nexenta caches:
For reads, Nexenta uses two cache layers: the first layer is RAM (the ARC), the second, optional, layer is SSD (named L2ARC).
For writes, Nexenta can use SSD or NVRAM devices (slog devices) to take the random writes and flush them sequentially to the disks every X seconds.
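
As an illustration (device names are placeholders), an L2ARC SSD and a mirrored slog would be added to an existing pool like this:

zpool add tank cache c1t4d0
zpool add tank log mirror c1t5d0 c1t6d0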



I never use hdparm for testing, so I can't compare, but could you try with fio?

random read:

fio --filename=/dev/dm-20 --direct=1 --rw=randread --bs=4k --size=20G --numjobs=200 --runtime=60 --group_reporting --name=file1


random write:
fio --filename=/dev/dm-20 --direct=1 --rw=randwrite --bs=4k --size=20G --numjobs=200 --runtime=60 --group_reporting --name=file1



sequential read:

fio --filename=/dev/dm-20 --direct=1 --rw=read --bs=1m --size=20G --numjobs=200 --runtime=60 --group_reporting --name=file1 


sequential write:

fio --filename=/dev/dm-20 --direct=1 --rw=write --bs=1m --size=5G --numjobs=200 --runtime=60 --group_reporting --name=file1 


----- Original Message -----

From: "Dietmar Maurer" <dietmar at proxmox.com>
To: pve-devel at pve.proxmox.com
Sent: Wednesday, 22 August 2012 07:28:08
Subject: [pve-devel] multipath problems



Hi all, 

I am just setting up a Nexenta storage (community edition) and trying to use multipath on the Proxmox side:

# multipath -l
mpath1 (3600144f0001e58febc44503399600001) dm-3 NEXENTA,NEXENTASTOR
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=-3 status=active
  |- 5:0:0:0 sdd 8:48 active undef running
  |- 3:0:0:0 sdb 8:16 active undef running
  `- 4:0:0:0 sdc 8:32 active undef running

I assume the above output is OK so far.

The strange thing is that we get very poor performance. When I test a
single iSCSI drive I get:

# hdparm -t /dev/sdc 

/dev/sdc: 
Timing buffered disk reads: 338 MB in 3.00 seconds = 112.48 MB/sec 
root@dell1:~# hdparm -t /dev/sdb

/dev/sdb: 
Timing buffered disk reads: 222 MB in 3.01 seconds = 73.69 MB/sec 
root@dell1:~# hdparm -t /dev/sdd

/dev/sdd: 
Timing buffered disk reads: 262 MB in 3.01 seconds = 87.07 MB/sec 

When I test the multipath drive I get: 

# hdparm -t /dev/mapper/mpath1 

/dev/mapper/mpath1: 
Timing buffered disk reads: 266 MB in 3.02 seconds = 88.04 MB/sec 

So there is no speed gain. I already checked that traffic is routed to different network cards. 
Any idea why it is that slow? 

- Dietmar 





-- 

Alexandre Derumier
Systems and Network Engineer

Phone: 03 20 68 88 85
Fax: 03 20 68 90 88

45 Bvd du Général Leclerc 59100 Roubaix
12 rue Marivaux 75002 Paris


