[PVE-User] LVM autoactivation failed with multipath over iSCSI
Stefan M. Radman
smr at kmi.com
Wed Jan 15 12:06:31 CET 2020
Hi Nada,
Unfortunately I don't have any first-hand experience with PVE6 yet.
On the PVE5.4 cluster I am working with, I had an issue that looked very similar to yours:
LVM refused to activate iSCSI multipath volumes at boot, causing lvm2-activation-net.service to fail.
This only happened during boot of the host.
Restarting lvm2-activation-net.service after boot activated the volume, with multipath working.
Suspecting a timing/dependency issue specific to my configuration, I took a pragmatic approach and added a custom systemd unit that restarts lvm2-activation-net.service after multipath initialization (see below).
# cat /etc/systemd/system/lvm2-after-multipath.service
[Unit]
Description=LVM2 after Multipath
After=multipathd.service lvm2-activation-net.service

[Service]
Type=oneshot
ExecStart=/bin/systemctl start lvm2-activation-net.service

[Install]
WantedBy=sysinit.target
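To have it run at boot, the new unit still has to be registered and enabled; these are the standard systemd steps, included here for completeness:

# systemctl daemon-reload
# systemctl enable lvm2-after-multipath.service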
Things on PVE6 seem to have changed a bit, but your failed lvm2-pvscan services indicate a similar problem ("failed LVM event activation").
Disable your rc.local workaround and try to restart the two failed services after a reboot.
If that works, you might want to take a similar approach instead of activating the volumes manually.
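For example (unit names taken from your systemctl output; the device numbers may differ after the next reboot):

# systemctl restart lvm2-pvscan@253:7.service lvm2-pvscan@253:8.service
# systemctl --failed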
The masked status of multipath-tools-boot.service is ok. The package is only needed when booting from multipath devices.
The mistake in your multipath.conf seems to be in the multipaths section.
Each multipath device can only have one WWID. For two WWIDs you'll need two multipath subsections.
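Applied to your configuration, the multipaths section would be split like this (a sketch using the two WWIDs from your blacklist_exceptions; aliases omitted):

multipaths {
    multipath {
        wwid "3600c0ff000195f8e7d0a855701000000"
    }
    multipath {
        wwid "3600c0ff000195f8ec3f01d5e01000000"
    }
}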
https://help.ubuntu.com/lts/serverguide/multipath-dm-multipath-config-file.html#multipath-config-multipath
Stefan
On Jan 15, 2020, at 10:55, nada <nada at verdnatura.es> wrote:
On 2020-01-14 19:46, Stefan M. Radman via pve-user wrote:
Hi Nada
What's the output of "systemctl --failed" and "systemctl status lvm2-activation-net.service"?
Stefan
Hi Stefan
thank you for your response!
the output of "systemctl --failed" was complaining about devices from the SAN during boot,
which were activated by rc-local.service after boot
I do NOT have "lvm2-activation-net.service"
and I found the masked status of multipath-tools-boot.service < is this ok ???
but I found a mistake in multipath.conf even though it is running
I am going to reconfigure it and reboot this afternoon
the details are below
Nada
root@mox11:~# systemctl --failed --all
UNIT LOAD ACTIVE SUB DESCRIPTION
lvm2-pvscan@253:7.service loaded failed failed LVM event activation on device 253:7
lvm2-pvscan@253:8.service loaded failed failed LVM event activation on device 253:8
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
2 loaded units listed.
To show all installed unit files use 'systemctl list-unit-files'.
root@mox11:~# dmsetup ls
san2020jan-vm--903--disk--0 (253:18)
santest-santestpool (253:12)
3600c0ff000195f8e7d0a855701000000 (253:7)
pve-data-tpool (253:4)
pve-data_tdata (253:3)
pve-zfs (253:6)
santest-santestpool-tpool (253:11)
santest-santestpool_tdata (253:10)
pve-data_tmeta (253:2)
san2020jan-san2020janpool (253:17)
santest-santestpool_tmeta (253:9)
pve-swap (253:0)
pve-root (253:1)
pve-data (253:5)
3600c0ff000195f8ec3f01d5e01000000 (253:8)
san2020jan-san2020janpool-tpool (253:16)
san2020jan-san2020janpool_tdata (253:15)
san2020jan-san2020janpool_tmeta (253:14)
root@mox11:~# pvs -a
PV VG Fmt Attr PSize PFree
/dev/mapper/3600c0ff000195f8e7d0a855701000000 santest lvm2 a-- <9.31g 292.00m
/dev/mapper/3600c0ff000195f8ec3f01d5e01000000 san2020jan lvm2 a-- <93.13g <2.95g
/dev/mapper/san2020jan-vm--903--disk--0 --- 0 0
/dev/sdb --- 0 0
/dev/sdc2 --- 0 0
/dev/sdc3 pve lvm2 a-- 67.73g 6.97g
root@mox11:~# vgs -a
VG #PV #LV #SN Attr VSize VFree
pve 1 4 0 wz--n- 67.73g 6.97g
san2020jan 1 2 0 wz--n- <93.13g <2.95g
santest 1 1 0 wz--n- <9.31g 292.00m
root@mox11:~# lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve twi-aotz-- 9.98g 0.00 10.61
[data_tdata] pve Twi-ao---- 9.98g
[data_tmeta] pve ewi-ao---- 12.00m
[lvol0_pmspare] pve ewi------- 12.00m
root pve -wi-ao---- 16.75g
swap pve -wi-ao---- 4.00g
zfs pve -wi-ao---- 30.00g
[lvol0_pmspare] san2020jan ewi------- 92.00m
san2020janpool san2020jan twi-aotz-- 90.00g 0.86 10.84
[san2020janpool_tdata] san2020jan Twi-ao---- 90.00g
[san2020janpool_tmeta] san2020jan ewi-ao---- 92.00m
vm-903-disk-0 san2020jan Vwi-aotz-- 2.50g san2020janpool 30.99
[lvol0_pmspare] santest ewi------- 12.00m
santestpool santest twi-aotz-- 9.00g 0.00 10.58
[santestpool_tdata] santest Twi-ao---- 9.00g
[santestpool_tmeta] santest ewi-ao---- 12.00m
root at mox11:~# multipathd -k"show maps"
Jan 15 10:50:02 | /etc/multipath.conf line 24, duplicate keyword: wwid
name sysfs uuid
3600c0ff000195f8e7d0a855701000000 dm-7 3600c0ff000195f8e7d0a855701000000
3600c0ff000195f8ec3f01d5e01000000 dm-8 3600c0ff000195f8ec3f01d5e01000000
root at mox11:~# multipathd -k"show paths"
Jan 15 10:50:07 | /etc/multipath.conf line 24, duplicate keyword: wwid
hcil dev dev_t pri dm_st chk_st dev_st next_check
6:0:0:3 sde 8:64 10 active ready running XX........ 2/8
9:0:0:3 sdn 8:208 10 active ready running XX........ 2/8
7:0:0:3 sdh 8:112 50 active ready running XX........ 2/8
5:0:0:3 sdd 8:48 10 active ready running XXXXXX.... 5/8
11:0:0:3 sdp 8:240 50 active ready running X......... 1/8
10:0:0:3 sdl 8:176 50 active ready running XXXXXXXXXX 8/8
8:0:0:6 sdk 8:160 50 active ready running XXXXXX.... 5/8
8:0:0:3 sdj 8:144 50 active ready running XXXXXXXXXX 8/8
9:0:0:6 sdo 8:224 10 active ready running X......... 1/8
6:0:0:6 sdg 8:96 10 active ready running XXXXXX.... 5/8
5:0:0:6 sdf 8:80 10 active ready running XXXXXX.... 5/8
10:0:0:6 sdm 8:192 50 active ready running XXXXXXX... 6/8
11:0:0:6 sdq 65:0 50 active ready running XXXXXXXX.. 7/8
7:0:0:6 sdi 8:128 50 active ready running XXXXXXX... 6/8
root@mox11:~# cat /etc/multipath.conf
defaults {
polling_interval 2
uid_attribute ID_SERIAL
no_path_retry queue
find_multipaths yes
}
blacklist {
wwid .*
# BE CAREFUL @mox11 blacklist sda,sdb,sdc
devnode "^sd[a-c]"
}
blacklist_exceptions {
# 25G v_multitest
# wwid "3600c0ff000195f8e2172de5d01000000"
# 10G prueba
wwid "3600c0ff000195f8e7d0a855701000000"
# 100G sanmox11
wwid "3600c0ff000195f8ec3f01d5e01000000"
}
multipaths {
multipath {
# wwid "3600c0ff000195f8e2172de5d01000000"
wwid "3600c0ff000195f8e7d0a855701000000"
wwid "3600c0ff000195f8ec3f01d5e01000000"
}
}
devices {
device {
#### the following 6 lines do NOT change
vendor "HP"
product "P2000 G3 FC|P2000G3 FC/iSCSI|P2000 G3 SAS|P2000 G3 iSCSI"
# getuid_callout "/lib/udev/scsi_id -g -u -s /block/%n"
path_grouping_policy "group_by_prio"
prio "alua"
failback "immediate"
no_path_retry 18
####
hardware_handler "0"
path_selector "round-robin 0"
rr_weight uniform
rr_min_io 100
path_checker tur
}
}
root@mox11:~# systemctl status multipath-tools-boot
multipath-tools-boot.service
Loaded: masked (Reason: Unit multipath-tools-boot.service is masked.)
Active: inactive (dead)
root@mox11:~# pveversion -V
proxmox-ve: 6.1-2 (running kernel: 5.3.13-1-pve)
pve-manager: 6.1-5 (running version: 6.1-5/9bf06119)
pve-kernel-5.3: 6.1-1
pve-kernel-helper: 6.1-1
pve-kernel-4.15: 5.4-12
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-4.15.18-24-pve: 4.15.18-52
pve-kernel-4.15.18-21-pve: 4.15.18-48
pve-kernel-4.15.18-11-pve: 4.15.18-34
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-15
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-4
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
_______________________________________________
pve-user mailing list
pve-user at pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user