[PVE-User] Prox 4.x LVM-iscsi-multipath

Tonči Stipičević tonci at suma-informatika.hr
Thu Jul 7 12:52:53 CEST 2016

Hi to all,

During version 3.x I was using FreeNAS 9.x as iSCSI-target shared 
storage with a 2 x GBit multipath (ALUA) connection.

Each host has 2 x GBit NICs and everything was working smoothly and fast.

I could saturate both links up to 95% without any problems, and the 
traffic was always correctly balanced across both links.
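(For reference, the multipath/ALUA topology can be inspected at the OS level like this; the device names in the comment are placeholders, not my actual setup:)

```shell
# Show the multipath maps, the ALUA hardware handler and the path states.
# In the output, something like "mpatha" with two sd* path devices and
# hwhandler='1 alua' would correspond to the 2 x GBit setup described above.
multipath -ll
```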

Very often I must restore 16 VMs (an 800 G archive) onto this FreeNAS 
"storage" (an ordinary PC with 5 disks on the onboard SATA controller, 
ZFS raidz1, ...), and in this multipath/ZFS scenario the result was 
perfect: both hosts were restoring 8 VMs each at the same time to this 
storage, saturating both storage links up to 90%.

But the most important point was that the I/O delay (%) was always 
lower than the CPU usage (%).

But as soon as I switched to Proxmox 4.x, this story is not so nice any more.

Multipath works correctly, but only at the OS level. As soon as we import 
the mpath LVM storage into Proxmox 4.x, everything gets far too slow.

When restoring only one VM at a time, the I/O delay rises up to 40% and 
there is no way to start another restore.

But, very interestingly:

- if I format this iSCSI target partition with a Linux filesystem and 
mount it on the host, and then import it as a "directory" in the 
"storage" section, the result is very good again.
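The steps for that working "directory" variant look roughly like this (the device path, mount point and storage ID are just examples, not my real names):

```shell
# Format the iSCSI-backed partition with a Linux filesystem.
# /dev/mapper/mpatha-part1 is a placeholder for the real mpath partition.
mkfs.ext4 /dev/mapper/mpatha-part1

# Mount it on the host.
mkdir -p /mnt/iscsi-dir
mount /dev/mapper/mpatha-part1 /mnt/iscsi-dir

# Add it to Proxmox as a "directory" storage (the ID "iscsi-dir" is arbitrary).
pvesm add dir iscsi-dir --path /mnt/iscsi-dir --content images,backup
```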

- if I use this ZFS storage as a PV on both hosts, define two LVs, 
format them with a Linux filesystem and mount them on the hosts (so 
each host is connected to one LV on the same PV), then "dd"-ing 65 G 
from each host onto its LV on the common PV shows a write speed of 
122 MB/s each (more than 240 MB/s in total over the multipath links).
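The dd test from each host was along these lines (the VG/LV names are illustrative; only the size matches the description above):

```shell
# Write 65 G of zeros to this host's LV on the shared PV, with direct I/O
# so the page cache is bypassed and the multipath links are really exercised.
# /dev/vgshared/lv-host1 is a placeholder for the real LV.
dd if=/dev/zero of=/dev/vgshared/lv-host1 bs=1M count=66560 oflag=direct
```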

It is only when I import the LVM storage as a common PV into the Proxmox 
hosts that things get very, very slow ...

It seems that the OS handles multipath correctly, but as soon as Proxmox 
starts to manage it, performance drops well below average.
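One way to narrow down where the slowdown appears is to watch the individual iSCSI path devices and the multipath map while Proxmox is restoring, and compare that with the same view during the fast OS-level dd test (the device names are examples):

```shell
# Extended per-device statistics every 2 seconds: utilization, average
# request size and await times. sdb/sdc stand in for the iSCSI path
# devices and dm-2 for the multipath map on a given host.
iostat -x 2 sdb sdc dm-2
```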

So please, any advice on what could be so wrong in my scenario that I 
cannot get the same good results as on the 3.x platform?

Thank you very much in advance and



