[PVE-User] prox storage replication <> iscsi multipath problem

Tonči Stipičević tonci at suma-informatika.hr
Sat Jul 8 00:49:17 CEST 2017


No, sorry, my mistake: the ZFS interconnection goes through another switch, with no VLANs; all hosts (vmbr's) are simply plugged into the HPE switch (8 ports).

The iSCSI multipath goes through 3 separate NICs and 3 separate VLANs on the HP ProCurve 1720, and when there are no ZFS pools I get excellent results with multipath (ALUA):

root@pvesuma01:~# multipath -ll

FNAS04 (36589cfc0000004081c5d751435a19ea7) dm-3 FreeNAS,iSCSI Disk
size=3.0T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
   |- 8:0:0:0 sdd 8:48 active ready running
   |- 7:0:0:0 sde 8:64 active ready running
   `- 9:0:0:0 sdf 8:80 active ready running

root@pvesuma01:~# iscsiadm --mode session
tcp: [1] 10.1.10.4:3260,1 iqn.2005-fn4.org.freenas.ctl:target1 (non-flash)
tcp: [2] 10.3.10.4:3260,3 iqn.2005-fn4.org.freenas.ctl:target1 (non-flash)
tcp: [3] 10.2.10.4:3260,2 iqn.2005-fn4.org.freenas.ctl:target1 (non-flash)
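
(For reference, a minimal /etc/multipath.conf for this kind of setup looks roughly like the sketch below; the WWID and the FNAS04 alias are taken from the output above, everything else is just assumed defaults and not necessarily the exact file in use:)

defaults {
        user_friendly_names     yes
        find_multipaths         no
}
blacklist {
        wwid    .*
}
blacklist_exceptions {
        wwid    "36589cfc0000004081c5d751435a19ea7"
}
multipaths {
        multipath {
                wwid    "36589cfc0000004081c5d751435a19ea7"
                alias   FNAS04
        }
}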

When I restore 3 VMs from all 3 hosts at the same time, from the FreeNAS shared NFS storage onto this iSCSI multipath volume, the multipath link gets saturated at up to 80% and all 3 links are loaded equally (about 800 Mbit/s each). So a 200-Euro PC (1 quad-port NIC, 5 SATA drives on a JBOD controller, because of ZFS) is receiving an average data stream of 2.4 Gbit/s, and so on. So this switch should not be the bottleneck.



Now I have some additional and more precise feedback:

1. No VM on the iSCSI target is running, so there is no switch load at all; the zpools are created and the iSCSI LVM target is still online and visible.

2. I clone one VM to the zpool and everything is still OK:

root@pvesuma01:~# multipath -ll
FNAS04 (36589cfc0000004081c5d751435a19ea7) dm-3 FreeNAS,iSCSI Disk
size=3.0T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
   |- 7:0:0:0 sdd 8:48 active ready running
   |- 8:0:0:0 sde 8:64 active ready running
   `- 9:0:0:0 sdf 8:80 active ready running
root@pvesuma01:~# lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                         8:0    0   1.8T  0 disk
└─sda1                      8:1    0   1.8T  0 part
sdb                         8:16   1 465.8G  0 disk
├─sdb1                      8:17   1  1007K  0 part
├─sdb2                      8:18   1   127M  0 part
└─sdb3                      8:19   1 465.7G  0 part
   ├─pve-swap              253:0    0    15G  0 lvm   [SWAP]
   ├─pve-root              253:1    0    96G  0 lvm   /
   └─pve-data              253:2    0 338.7G  0 lvm   /var/lib/vz
sdc                         8:32   1 465.8G  0 disk
├─sdc1                      8:33   1 465.8G  0 part
└─sdc9                      8:41   1     8M  0 part
sdd                         8:48   0     3T  0 disk
└─FNAS04                  253:3    0     3T  0 mpath
   ├─vg4-vm--9999--disk--1 253:4    0     1G  0 lvm
   ├─vg4-vm--9998--disk--1 253:5    0    10G  0 lvm
   ├─vg4-vm--9997--disk--1 253:6    0    32G  0 lvm
   ├─vg4-vm--8002--disk--1 253:7    0    32G  0 lvm
   ├─vg4-vm--9995--disk--1 253:8    0    10G  0 lvm
   ├─vg4-vm--8001--disk--1 253:9    0    32G  0 lvm
   ├─vg4-vm--9994--disk--1 253:10   0    15G  0 lvm
   ├─vg4-vm--9993--disk--1 253:11   0    32G  0 lvm
   ├─vg4-vm--9996--disk--1 253:12   0    25G  0 lvm
   ├─vg4-vm--9996--disk--2 253:13   0    32G  0 lvm
   ├─vg4-vm--9996--disk--3 253:14   0     5G  0 lvm
   ├─vg4-vm--9991--disk--1 253:15   0    32G  0 lvm
   ├─vg4-vm--9990--disk--1 253:16   0    32G  0 lvm
   ├─vg4-vm--9989--disk--1 253:17   0    32G  0 lvm
   ├─vg4-vm--9988--disk--1 253:18   0    64G  0 lvm
   ├─vg4-vm--9987--disk--1 253:19   0    42G  0 lvm
   ├─vg4-vm--9990--disk--2 253:20   0    32G  0 lvm
   └─vg4-vm--6001--disk--1 253:21   0    15G  0 lvm
sde                         8:64   0     3T  0 disk
└─FNAS04                  253:3    0     3T  0 mpath
   ├─vg4-vm--9999--disk--1 253:4    0     1G  0 lvm
   ├─vg4-vm--9998--disk--1 253:5    0    10G  0 lvm
   ├─vg4-vm--9997--disk--1 253:6    0    32G  0 lvm
   ├─vg4-vm--8002--disk--1 253:7    0    32G  0 lvm
   ├─vg4-vm--9995--disk--1 253:8    0    10G  0 lvm
   ├─vg4-vm--8001--disk--1 253:9    0    32G  0 lvm
   ├─vg4-vm--9994--disk--1 253:10   0    15G  0 lvm
   ├─vg4-vm--9993--disk--1 253:11   0    32G  0 lvm
   ├─vg4-vm--9996--disk--1 253:12   0    25G  0 lvm
   ├─vg4-vm--9996--disk--2 253:13   0    32G  0 lvm
   ├─vg4-vm--9996--disk--3 253:14   0     5G  0 lvm
   ├─vg4-vm--9991--disk--1 253:15   0    32G  0 lvm
   ├─vg4-vm--9990--disk--1 253:16   0    32G  0 lvm
   ├─vg4-vm--9989--disk--1 253:17   0    32G  0 lvm
   ├─vg4-vm--9988--disk--1 253:18   0    64G  0 lvm
   ├─vg4-vm--9987--disk--1 253:19   0    42G  0 lvm
   ├─vg4-vm--9990--disk--2 253:20   0    32G  0 lvm
   └─vg4-vm--6001--disk--1 253:21   0    15G  0 lvm
sdf                         8:80   0     3T  0 disk
└─FNAS04                  253:3    0     3T  0 mpath
   ├─vg4-vm--9999--disk--1 253:4    0     1G  0 lvm
   ├─vg4-vm--9998--disk--1 253:5    0    10G  0 lvm
   ├─vg4-vm--9997--disk--1 253:6    0    32G  0 lvm
   ├─vg4-vm--8002--disk--1 253:7    0    32G  0 lvm
   ├─vg4-vm--9995--disk--1 253:8    0    10G  0 lvm
   ├─vg4-vm--8001--disk--1 253:9    0    32G  0 lvm
   ├─vg4-vm--9994--disk--1 253:10   0    15G  0 lvm
   ├─vg4-vm--9993--disk--1 253:11   0    32G  0 lvm
   ├─vg4-vm--9996--disk--1 253:12   0    25G  0 lvm
   ├─vg4-vm--9996--disk--2 253:13   0    32G  0 lvm
   ├─vg4-vm--9996--disk--3 253:14   0     5G  0 lvm
   ├─vg4-vm--9991--disk--1 253:15   0    32G  0 lvm
   ├─vg4-vm--9990--disk--1 253:16   0    32G  0 lvm
   ├─vg4-vm--9989--disk--1 253:17   0    32G  0 lvm
   ├─vg4-vm--9988--disk--1 253:18   0    64G  0 lvm
   ├─vg4-vm--9987--disk--1 253:19   0    42G  0 lvm
   ├─vg4-vm--9990--disk--2 253:20   0    32G  0 lvm
   └─vg4-vm--6001--disk--1 253:21   0    15G  0 lvm
zd0                       230:0    0    15G  0 disk
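
(For reference, the clone in step 2 was simply a full clone onto the ZFS storage, along these lines; the VM IDs and the storage name are placeholders, not the actual ones:)

qm clone <source-vmid> <new-vmid> --full 1 --storage <zfs-storage-name>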




3. After a reboot, the multipath device for the iSCSI target disappears; the sessions are still logged in, but no multipath map is created:

root@pvesuma01:~# multipath -ll
root@pvesuma01:~# iscsiadm --mode session
tcp: [1] 10.1.10.4:3260,1 iqn.2005-fn4.org.freenas.ctl:target1 (non-flash)
tcp: [2] 10.3.10.4:3260,3 iqn.2005-fn4.org.freenas.ctl:target1 (non-flash)
tcp: [3] 10.2.10.4:3260,2 iqn.2005-fn4.org.freenas.ctl:target1 (non-flash)
root@pvesuma01:~# lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda            8:0    0   1.8T  0 disk
└─sda1         8:1    0   1.8T  0 part
sdb            8:16   1 465.8G  0 disk
├─sdb1         8:17   1  1007K  0 part
├─sdb2         8:18   1   127M  0 part
└─sdb3         8:19   1 465.7G  0 part
   ├─pve-swap 253:0    0    15G  0 lvm  [SWAP]
   ├─pve-root 253:1    0    96G  0 lvm  /
   └─pve-data 253:2    0 338.7G  0 lvm  /var/lib/vz
sdc            8:32   1 465.8G  0 disk
├─sdc1         8:33   1 465.8G  0 part
└─sdc9         8:41   1     8M  0 part
sdd            8:48   0     3T  0 disk
sde            8:64   0     3T  0 disk
sdf            8:80   0     3T  0 disk
zd0          230:0    0    15G  0 disk
├─zd0p1      230:1    0    13G  0 part
├─zd0p2      230:2    0     1K  0 part
└─zd0p5      230:5    0     2G  0 part
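
(The obvious next step is to find out why no map gets created at boot; something along these lines should show it. These are standard multipath-tools / open-iscsi commands, nothing specific to this setup, and the exact unit names may differ slightly depending on the packaging:)

multipath -v3        # verbose run, shows why sdd/sde/sdf are not assembled into a map
multipath -r         # try to reload/recreate the maps by hand
systemctl status multipathd open-iscsi iscsid
journalctl -b -u multipathd -u open-iscsi -u iscsid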


-- 

srdačan pozdrav / best regards

Tonči Stipičević, dipl. ing. Elektr.
direktor / manager

d.o.o. / ltd.

podrška / upravljanje IT sustavima za male i srednje tvrtke
Small & Medium Business IT support / management

Badalićeva 27 / 10000 Zagreb / Hrvatska – Croatia
url: www.suma-informatika.hr
mob: +385 91 1234003
fax: +385 1 5560007



On 07/07/2017 17:17, Dietmar Maurer wrote:
>> Everything is VLAN-separated ... all three multipath links have its own
>> subnets and the link between zfs local storages uses its own
>> VLAN-separated link (actually vmbr1 -> intranet link )
> Usually VLAN separation does not help to prevent network overload. Or do you
> have some special switches which can guarantee minimum transfer rates?
>
> Besides, I cannot see why replication (ssh/zfs) can disturb an iscsi connection.
>
> What error do you get exactly on the iscsi connection?
>



