[PVE-User] PVE-6 and iscsi server

Gilberto Nunes gilberto.nunes32 at gmail.com
Fri Aug 16 13:40:54 CEST 2019


Hi there! It's me again!

I did all the same steps in Proxmox VE 5.4 and everything works as expected!
Is this some bug in Proxmox VE 6?



---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36




On Thu, Aug 15, 2019 at 3:33 PM, Gilberto Nunes
<gilberto.nunes32 at gmail.com> wrote:
>
> More info
>
> pve01:~# iscsiadm -m discovery -t st -p 10.10.10.100
> 10.10.10.100:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
> pve01:~# iscsiadm -m discovery -t st -p 10.10.10.110
> 10.10.10.110:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
> pve01:~# iscsiadm -m node --login
> iscsiadm: default: 1 session requested, but 1 already present.
> iscsiadm: default: 1 session requested, but 1 already present.
> iscsiadm: Could not log into all portals
> pve01:~# iscsiadm -m node --logout
> Logging out of session [sid: 5, target:
> iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> 10.10.10.100,3260]
> Logging out of session [sid: 6, target:
> iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> 10.10.10.110,3260]
> Logout of [sid: 5, target:
> iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> 10.10.10.100,3260] successful.
> Logout of [sid: 6, target:
> iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> 10.10.10.110,3260] successful.
> pve01:~# iscsiadm -m node --login
> Logging in to [iface: default, target:
> iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> 10.10.10.100,3260] (multiple)
> Logging in to [iface: default, target:
> iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> 10.10.10.110,3260] (multiple)
> Login to [iface: default, target:
> iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> 10.10.10.100,3260] successful.
> Login to [iface: default, target:
> iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br, portal:
> 10.10.10.110,3260] successful.
> pve01:~# iscsiadm -m node
> 10.10.10.100:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
> 10.10.10.110:3260,1 iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
> pve01:~# iscsiadm -m session -P 1
> Target: iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br (non-flash)
> Current Portal: 10.10.10.100:3260,1
> Persistent Portal: 10.10.10.100:3260,1
> **********
> Interface:
> **********
> Iface Name: default
> Iface Transport: tcp
> Iface Initiatorname: iqn.1993-08.org.debian:01:3af61619768
> Iface IPaddress: 10.10.10.200
> Iface HWaddress: <empty>
> Iface Netdev: <empty>
> SID: 7
> iSCSI Connection State: LOGGED IN
> iSCSI Session State: LOGGED_IN
> Internal iscsid Session State: NO CHANGE
> Current Portal: 10.10.10.110:3260,1
> Persistent Portal: 10.10.10.110:3260,1
> **********
> Interface:
> **********
> Iface Name: default
> Iface Transport: tcp
> Iface Initiatorname: iqn.1993-08.org.debian:01:3af61619768
> Iface IPaddress: 10.10.10.200
> Iface HWaddress: <empty>
> Iface Netdev: <empty>
> SID: 8
> iSCSI Connection State: LOGGED IN
> iSCSI Session State: LOGGED_IN
> Internal iscsid Session State: NO CHANGE
>
>
>
>
> On Thu, Aug 15, 2019 at 3:28 PM, Gilberto Nunes
> <gilberto.nunes32 at gmail.com> wrote:
> >
> > Hi there...
> >
> > I have two iSCSI servers, which work with ZFS replication and tgt
> > (OviOS Linux, to be precise).
> >
> > On a Windows box, using the MS iSCSI initiator, I am able to set two
> > different IP addresses in order to get redundancy, but how can I
> > achieve that in Proxmox?
> > In /etc/pve/storage.cfg I cannot use two different IP addresses,
> > so I am trying to use multipath instead.
> > Both disks from both iSCSI servers appear in the PVE box, as /dev/sdc and /dev/sdd.
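> > (For reference, a storage.cfg iSCSI entry takes only a single
> > "portal" key, something like the sketch below; the storage ID
> > "ovios-iscsi" is just a made-up example name:
> >
> > iscsi: ovios-iscsi
> >         portal 10.10.10.100
> >         target iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
> >         content none
> >
> > There is no way to list a second portal in that entry, hence the
> > attempt with multipath.)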
> > Here is the multipath.conf file:
> >
> > defaults {
> >         user_friendly_names     yes
> >         polling_interval        2
> >         path_selector           "round-robin 0"
> >         path_grouping_policy    multibus
> >         path_checker            readsector0
> >         rr_min_io               100
> >         failback                immediate
> >         no_path_retry           queue
> > }
> > blacklist {
> >         wwid .*
> > }
> > blacklist_exceptions {
> >         wwid "360000000000000000000000000010001"
> >         property "(ID_SCSI_VPD|ID_WWN|ID_SERIAL)"
> > }
> > multipaths {
> >   multipath {
> >         wwid "360000000000000000000000000010001"
> >         alias mylun
> >   }
> > }
> >
> > I got the wwid from:
> > /lib/udev/scsi_id -g -u -d /dev/sdc
> > /lib/udev/scsi_id -g -u -d /dev/sdd
> >
> > The command multipath -v3 shows this:
> > Aug 15 15:25:30 | set open fds limit to 1048576/1048576
> > Aug 15 15:25:30 | loading //lib/multipath/libchecktur.so checker
> > Aug 15 15:25:30 | checker tur: message table size = 3
> > Aug 15 15:25:30 | loading //lib/multipath/libprioconst.so prioritizer
> > Aug 15 15:25:30 | foreign library "nvme" loaded successfully
> > Aug 15 15:25:30 | sda: udev property ID_SERIAL whitelisted
> > Aug 15 15:25:30 | sda: mask = 0x1f
> > Aug 15 15:25:30 | sda: dev_t = 8:0
> > Aug 15 15:25:30 | sda: size = 41943040
> > Aug 15 15:25:30 | sda: vendor = QEMU
> > Aug 15 15:25:30 | sda: product = QEMU HARDDISK
> > Aug 15 15:25:30 | sda: rev = 2.5+
> > Aug 15 15:25:30 | sda: h:b:t:l = 0:0:1:0
> > Aug 15 15:25:30 | sda: tgt_node_name =
> > Aug 15 15:25:30 | sda: path state = running
> > Aug 15 15:25:30 | sda: 20480 cyl, 64 heads, 32 sectors/track, start at 0
> > Aug 15 15:25:30 | 0:0:1:0: attribute vpd_pg80 not found in sysfs
> > Aug 15 15:25:30 | failed to read sysfs vpd pg80
> > Aug 15 15:25:30 | sda: fail to get serial
> > Aug 15 15:25:30 | sda: get_state
> > Aug 15 15:25:30 | sda: detect_checker = yes (setting: multipath internal)
> > Aug 15 15:25:30 | failed to issue vpd inquiry for pgc9
> > Aug 15 15:25:30 | loading //lib/multipath/libcheckreadsector0.so checker
> > Aug 15 15:25:30 | checker readsector0: message table size = 0
> > Aug 15 15:25:30 | sda: path_checker = readsector0 (setting:
> > multipath.conf defaults/devices section)
> > Aug 15 15:25:30 | sda: checker timeout = 30 s (setting: kernel sysfs)
> > Aug 15 15:25:30 | sda: readsector0 state = up
> > Aug 15 15:25:30 | sda: uid_attribute = ID_SERIAL (setting: multipath internal)
> > Aug 15 15:25:30 | sda: uid = 0QEMU_QEMU_HARDDISK_drive-scsi0-0-1 (udev)
> > Aug 15 15:25:30 | sda: detect_prio = yes (setting: multipath internal)
> > Aug 15 15:25:30 | sda: prio = const (setting: multipath internal)
> > Aug 15 15:25:30 | sda: prio args = "" (setting: multipath internal)
> > Aug 15 15:25:30 | sda: const prio = 1
> > Aug 15 15:25:30 | sr0: udev property ID_SERIAL whitelisted
> > Aug 15 15:25:30 | sr0: device node name blacklisted
> > Aug 15 15:25:30 | sdb: udev property ID_SERIAL whitelisted
> > Aug 15 15:25:30 | sdb: mask = 0x1f
> > Aug 15 15:25:30 | sdb: dev_t = 8:16
> > Aug 15 15:25:30 | sdb: size = 83886080
> > Aug 15 15:25:30 | sdb: vendor = ATA
> > Aug 15 15:25:30 | sdb: product = QEMU HARDDISK
> > Aug 15 15:25:30 | sdb: rev = 2.5+
> > Aug 15 15:25:30 | sdb: h:b:t:l = 3:0:0:0
> > Aug 15 15:25:30 | sdb: tgt_node_name = ata-3.00
> > Aug 15 15:25:30 | sdb: path state = running
> > Aug 15 15:25:30 | sdb: 5221 cyl, 255 heads, 63 sectors/track, start at 0
> > Aug 15 15:25:30 | sdb: serial = QM00005
> > Aug 15 15:25:30 | sdb: get_state
> > Aug 15 15:25:30 | sdb: detect_checker = yes (setting: multipath internal)
> > Aug 15 15:25:30 | failed to issue vpd inquiry for pgc9
> > Aug 15 15:25:30 | sdb: path_checker = readsector0 (setting:
> > multipath.conf defaults/devices section)
> > Aug 15 15:25:30 | sdb: checker timeout = 30 s (setting: kernel sysfs)
> > Aug 15 15:25:30 | sdb: readsector0 state = up
> > Aug 15 15:25:30 | sdb: uid_attribute = ID_SERIAL (setting: multipath internal)
> > Aug 15 15:25:30 | sdb: uid = QEMU_HARDDISK_QM00005 (udev)
> > Aug 15 15:25:30 | sdb: detect_prio = yes (setting: multipath internal)
> > Aug 15 15:25:30 | sdb: prio = const (setting: multipath internal)
> > Aug 15 15:25:30 | sdb: prio args = "" (setting: multipath internal)
> > Aug 15 15:25:30 | sdb: const prio = 1
> > Aug 15 15:25:30 | sdd: udev property ID_SERIAL whitelisted
> > Aug 15 15:25:30 | sdd: mask = 0x1f
> > Aug 15 15:25:30 | sdd: dev_t = 8:48
> > Aug 15 15:25:30 | sdd: size = 10485760
> > Aug 15 15:25:30 | sdd: vendor = OVIOS
> > Aug 15 15:25:30 | sdd: product = OVIOS_LUN
> > Aug 15 15:25:30 | sdd: rev = 0001
> > Aug 15 15:25:30 | sdd: h:b:t:l = 7:0:0:1
> > Aug 15 15:25:30 | sdd: tgt_node_name =
> > iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
> > Aug 15 15:25:30 | sdd: path state = running
> > Aug 15 15:25:30 | sdd: 1018 cyl, 166 heads, 62 sectors/track, start at 0
> > Aug 15 15:25:30 | sdd: serial =                              ovios11
> > Aug 15 15:25:30 | sdd: get_state
> > Aug 15 15:25:30 | sdd: detect_checker = yes (setting: multipath internal)
> > Aug 15 15:25:30 | failed to issue vpd inquiry for pgc9
> > Aug 15 15:25:30 | sdd: path_checker = readsector0 (setting:
> > multipath.conf defaults/devices section)
> > Aug 15 15:25:30 | sdd: checker timeout = 30 s (setting: kernel sysfs)
> > Aug 15 15:25:30 | sdd: readsector0 state = up
> > Aug 15 15:25:30 | sdd: uid_attribute = ID_SERIAL (setting: multipath internal)
> > Aug 15 15:25:30 | sdd: uid = 360000000000000000000000000010001 (udev)
> > Aug 15 15:25:30 | sdd: detect_prio = yes (setting: multipath internal)
> > Aug 15 15:25:30 | sdd: prio = const (setting: multipath internal)
> > Aug 15 15:25:30 | sdd: prio args = "" (setting: multipath internal)
> > Aug 15 15:25:30 | sdd: const prio = 1
> > Aug 15 15:25:30 | sdc: udev property ID_SERIAL whitelisted
> > Aug 15 15:25:30 | sdc: mask = 0x1f
> > Aug 15 15:25:30 | sdc: dev_t = 8:32
> > Aug 15 15:25:30 | sdc: size = 10485760
> > Aug 15 15:25:30 | sdc: vendor = OVIOS
> > Aug 15 15:25:30 | sdc: product = OVIOS_LUN
> > Aug 15 15:25:30 | sdc: rev = 0001
> > Aug 15 15:25:30 | sdc: h:b:t:l = 8:0:0:1
> > Aug 15 15:25:30 | sdc: tgt_node_name =
> > iqn.2012-04.org.ovios:9z8ymb8elezy-homegiba.com.br
> > Aug 15 15:25:30 | sdc: path state = running
> > Aug 15 15:25:30 | sdc: 1018 cyl, 166 heads, 62 sectors/track, start at 0
> > Aug 15 15:25:30 | sdc: serial =                              ovios11
> > Aug 15 15:25:30 | sdc: get_state
> > Aug 15 15:25:30 | sdc: detect_checker = yes (setting: multipath internal)
> > Aug 15 15:25:30 | failed to issue vpd inquiry for pgc9
> > Aug 15 15:25:30 | sdc: path_checker = readsector0 (setting:
> > multipath.conf defaults/devices section)
> > Aug 15 15:25:30 | sdc: checker timeout = 30 s (setting: kernel sysfs)
> > Aug 15 15:25:30 | sdc: readsector0 state = up
> > Aug 15 15:25:30 | sdc: uid_attribute = ID_SERIAL (setting: multipath internal)
> > Aug 15 15:25:30 | sdc: uid = 360000000000000000000000000010001 (udev)
> > Aug 15 15:25:30 | sdc: detect_prio = yes (setting: multipath internal)
> > Aug 15 15:25:30 | sdc: prio = const (setting: multipath internal)
> > Aug 15 15:25:30 | sdc: prio args = "" (setting: multipath internal)
> > Aug 15 15:25:30 | sdc: const prio = 1
> > Aug 15 15:25:30 | loop0: blacklisted, udev property missing
> > Aug 15 15:25:30 | loop1: blacklisted, udev property missing
> > Aug 15 15:25:30 | loop2: blacklisted, udev property missing
> > Aug 15 15:25:30 | loop3: blacklisted, udev property missing
> > Aug 15 15:25:30 | loop4: blacklisted, udev property missing
> > Aug 15 15:25:30 | loop5: blacklisted, udev property missing
> > Aug 15 15:25:30 | loop6: blacklisted, udev property missing
> > Aug 15 15:25:30 | loop7: blacklisted, udev property missing
> > Aug 15 15:25:30 | dm-0: blacklisted, udev property missing
> > Aug 15 15:25:30 | dm-1: blacklisted, udev property missing
> > Aug 15 15:25:30 | dm-2: blacklisted, udev property missing
> > Aug 15 15:25:30 | dm-3: blacklisted, udev property missing
> > Aug 15 15:25:30 | dm-4: blacklisted, udev property missing
> > ===== paths list =====
> > uuid                                hcil    dev dev_t pri dm_st chk_st vend/pr
> > 0QEMU_QEMU_HARDDISK_drive-scsi0-0-1 0:0:1:0 sda 8:0   1   undef undef  QEMU,QE
> > QEMU_HARDDISK_QM00005               3:0:0:0 sdb 8:16  1   undef undef  ATA,QEM
> > 360000000000000000000000000010001   7:0:0:1 sdd 8:48  1   undef undef  OVIOS,O
> > 360000000000000000000000000010001   8:0:0:1 sdc 8:32  1   undef undef  OVIOS,O
> > Aug 15 15:25:30 | libdevmapper version 1.02.155 (2018-12-18)
> > Aug 15 15:25:30 | DM multipath kernel driver v1.13.0
> > Aug 15 15:25:30 | sda: udev property ID_SERIAL whitelisted
> > Aug 15 15:25:30 | sda: wwid 0QEMU_QEMU_HARDDISK_drive-scsi0-0-1 blacklisted
> > Aug 15 15:25:30 | sda: orphan path, blacklisted
> > Aug 15 15:25:30 | const prioritizer refcount 4
> > Aug 15 15:25:30 | sdb: udev property ID_SERIAL whitelisted
> > Aug 15 15:25:30 | sdb: wwid QEMU_HARDDISK_QM00005 blacklisted
> > Aug 15 15:25:30 | sdb: orphan path, blacklisted
> > Aug 15 15:25:30 | const prioritizer refcount 3
> > Aug 15 15:25:30 | sdd: udev property ID_SERIAL whitelisted
> > Aug 15 15:25:30 | sdd: wwid 360000000000000000000000000010001 whitelisted
> > Aug 15 15:25:30 | wwid 360000000000000000000000000010001 not in wwids
> > file, skipping sdd
> > Aug 15 15:25:30 | sdd: orphan path, only one path
> > Aug 15 15:25:30 | const prioritizer refcount 2
> > Aug 15 15:25:30 | sdc: udev property ID_SERIAL whitelisted
> > Aug 15 15:25:30 | sdc: wwid 360000000000000000000000000010001 whitelisted
> > Aug 15 15:25:30 | wwid 360000000000000000000000000010001 not in wwids
> > file, skipping sdc
> > Aug 15 15:25:30 | sdc: orphan path, only one path
> > Aug 15 15:25:30 | const prioritizer refcount 1
> > Aug 15 15:25:30 | unloading const prioritizer
> > Aug 15 15:25:30 | unloading readsector0 checker
> > Aug 15 15:25:30 | unloading tur checker
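> > (Note: the -v3 output above reports "wwid
> > 360000000000000000000000000010001 not in wwids file, skipping sdc"
> > and the same for sdd. I suspect multipath only assembles a map for
> > wwids recorded in /etc/multipath/wwids when find_multipaths is in
> > effect, so presumably something like
> >
> > multipath -a 360000000000000000000000000010001
> >
> > or, in multipath.conf:
> >
> > defaults {
> >         find_multipaths no
> > }
> >
> > followed by restarting multipathd, would make the map appear. I have
> > not verified this yet.)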
> >
> >
> > But the command multipath -ll shows me nothing!
> > Isn't it supposed to show the paths to the servers?
> > Like this:
> >
> > multipath -ll
> >
> > mpath0 (3600144f028f88a0000005037a95d0001) dm-3 NEXENTA,NEXENTASTOR
> > size=64G features='1 queue_if_no_path' hwhandler='0' wp=rw
> > `-+- policy='round-robin 0' prio=2 status=active
> >   |- 5:0:0:0 sdb 8:16 active ready running
> >   `- 6:0:0:0 sdc 8:32 active ready running
> >
> > But instead, I get nothing at all!
> >
> > Where am I going wrong here?
> >
> > Thanks a lot
> >
> >


