[PVE-User] LV removed but dmsetup says it still has a mapping for it (how to fix it?)
Germain Maurice
germain.maurice at linkfluence.net
Wed Feb 1 22:28:37 CET 2012
Hello everybody,
I have a problem on my production 6-node cluster (Proxmox 1.9). The master node is "node1".
I created a VM on node1, provisioned it (using dd, kpartx and mkfs.ext4 directly on the LV device) and then tried to migrate the VM.
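For reference, the provisioning was roughly along these lines (the image path is just a placeholder from memory, and the mkfs may have been run on a kpartx-mapped partition rather than the whole LV):

node1:~# dd if=/path/to/image.raw of=/dev/SATA6To/vm-324-disk-1 bs=1M    # copy the raw image onto the LV
node1:~# kpartx -av /dev/SATA6To/vm-324-disk-1                           # map the partitions inside the LV
node1:~# mkfs.ext4 /dev/SATA6To/vm-324-disk-1                            # create the filesystem directly on the LV device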
Once the VM was migrated, the virtual disks were not synchronized between the two nodes (node1 and node2; the problem is actually the same for all the other nodes).
So, after various tests, I created a new VM with another id, and its migration worked.
The problem is that I can't use VM 324 anymore: there is no longer an LV named SATA6To-vm--324--disk--1, yet dmsetup says the mappings still exist:
node2:~# dmsetup info | grep 324
Name: SATA6To-vm--324--disk--1
Name: SAS1.8To-vm--324--disk--3
Name: SAS1.8To-vm--324--disk--2
Name: SAS1.8To-vm--324--disk--1
node2:~# dmsetup status /dev/mapper/SATA6To-vm--324--disk--1
0 20971520 linear
node2:~# lvdisplay -C | grep 324
node2:~#
Same commands on node1:
node1:~# dmsetup info | grep 324
node1:~# dmsetup status /dev/mapper/SATA6To-vm--324--disk--1
dm_task_set_name: Device /dev/mapper/SATA6To-vm--324--disk--1 not found
Command failed
node1:~# lvdisplay -C | grep 324
node1:~#
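Just to show the mismatch more clearly, here is a quick sketch of one way to list, on node2, every mapping that device-mapper still holds but LVM no longer knows about (just an idea, not something I have run on production yet; device-mapper names use the vg-lv scheme with hyphens doubled):

node2:~# dmsetup ls | awk '{print $1}' | sort > /tmp/dm-names
node2:~# lvs --noheadings -o vg_name,lv_name | awk '{gsub(/-/,"--",$1); gsub(/-/,"--",$2); print $1"-"$2}' | sort > /tmp/lv-names
node2:~# comm -23 /tmp/dm-names /tmp/lv-names    # mappings with no backing LV anymore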
Node1:
pve-manager: 1.9-24 (pve-manager/1.9/6542)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.9-47
pve-kernel-2.6.32-4-pve: 2.6.32-33
pve-kernel-2.6.32-6-pve: 2.6.32-47
qemu-server: 1.1-32
pve-firmware: 1.0-14
libpve-storage-perl: 1.0-19
vncterm: 0.9-2
vzctl: 3.0.29-2pve1
vzdump: 1.2-16
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.15.0-1
ksm-control-daemon: 1.0-6
Node2:
pve-manager: 1.8-18 (pve-manager/1.8/6070)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.8-33
pve-kernel-2.6.32-4-pve: 2.6.32-33
qemu-server: 1.1-30
pve-firmware: 1.0-11
libpve-storage-perl: 1.0-17
vncterm: 0.9-2
vzctl: 3.0.27-1pve1
vzdump: 1.2-13
vzprocps: 2.0.11-2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.14.1-1
ksm-control-daemon: 1.0-6
I just discovered that the pve-manager versions have diverged between my nodes…
Anyway, is there a way to clean up /dev/mapper without restarting the nodes? (I think I will have to reboot them anyway, because of the mismatched Proxmox versions across the cluster.)
Or maybe a reboot is the best way to get a proper clean-up of /dev/mapper?
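For example, would something like this be safe to run on node2 while the cluster is live? (Just a guess based on the dmsetup man page, so please correct me if removing the stale mappings by hand is a bad idea.)

node2:~# dmsetup remove SATA6To-vm--324--disk--1
node2:~# dmsetup remove SAS1.8To-vm--324--disk--3
node2:~# dmsetup remove SAS1.8To-vm--324--disk--2
node2:~# dmsetup remove SAS1.8To-vm--324--disk--1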
Thank you in advance for any help you could give me.
Germain