[pve-devel] [PATCH storage 0/2] Fix #2046 and disksize-mismatch with shared LVM

Stoiko Ivanov s.ivanov at proxmox.com
Fri Jan 4 14:06:23 CET 2019


The issue was observed recently and can lead to potential data loss. When using
a shared LVM storage (e.g. over iSCSI) in a clustered setup, only the node
where a guest is active notices the size change upon disk resize
(lvextend/lvreduce).

LVM's metadata eventually gets updated on all nodes (at the latest when
pvestatd runs and lists all LVM volumes, since lvs/vgs update the metadata).
However, the device files (/dev/$vg/$lv) on the nodes where the guest is not
actively running do not reflect the size change.

Steps to reproduce an I/O error:
* create a QEMU guest with a disk backed by a shared LVM storage
* create a filesystem on that disk and fill it to 100%
* resize the disk/filesystem
* put some more data on the filesystem
* migrate the guest to another node
* try reading past the initial disk size
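The resize and migration steps above can be sketched as a shell session. VM ID,
disk name and target node are hypothetical examples; the snippet is written as
a dry run that only prints the commands, so it runs anywhere — drop the `run`
wrapper to actually execute them on a test cluster:

```shell
#!/bin/sh
# Dry-run sketch of the reproduction steps (hypothetical VMID/disk/node names).
VMID=100
run() { printf '+ %s\n' "$*"; }   # replace with plain execution on a real cluster

run qm resize "$VMID" scsi0 +1G   # grow the disk on the node where the guest runs
run qm migrate "$VMID" pve-node2  # move the guest to another node
# On the target node the stale /dev/$vg/$lv still has the old size, so reads
# past the original disk size fail with I/O errors.
```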

The second patch fixes the size mismatch by running `lvchange --refresh`
whenever a volume is activated with LVM, and should fix the critical issue.
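For reference, the manual equivalent of what the patch performs on activation
is a plain LVM refresh, which re-reads the metadata and reloads the local
device-mapper table. VG and LV names below are hypothetical examples, and the
commands are echoed as a dry run — remove the `echo` to run them on an
affected node:

```shell
# Manual equivalent of the patch's activation step (hypothetical VG/LV names).
VG=vg_shared
LV=vm-100-disk-0
echo lvchange --refresh "$VG/$LV"         # re-read metadata, reload the dm table
echo blockdev --getsize64 "/dev/$VG/$LV"  # would then report the new size
```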

The first patch adds a direct implementation of volume_size_info to
LVMPlugin.pm, reading the volume size via `lvs` instead of falling back to
`qemu-img info` from Plugin.pm.
With the second patch applied, this should always yield the same result on the
node where a guest is currently running; however, there can still be a mismatch
when the LV is active on one node (e.g. after a fresh boot) while it gets
resized on another.
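A minimal sketch of reading the size via `lvs` rather than `qemu-img info`:
on a real system the size would come from
`lvs --noheadings --units b --nosuffix -o lv_size $vg/$lv`; here a sample of
that output is hardcoded so the parsing runs stand-alone without an LVM setup:

```shell
#!/bin/sh
# Sample of what `lvs --noheadings --units b --nosuffix -o lv_size vg/lv`
# prints for a 4 GiB LV (hardcoded so this snippet is self-contained).
sample_lvs_output='  4294967296'
# Strip the leading whitespace to get the raw byte count.
size=$(printf '%s\n' "$sample_lvs_output" | awk '{print $1}')
echo "$size"
```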

Stoiko Ivanov (2):
  fix #2046 add volume_size_info to LVMPlugin
  LVM: Add '--refresh' when activating volumes

 PVE/Storage/LVMPlugin.pm | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

-- 
2.11.0
