[PVE-User] Using raw LVM without partitions inside VM

Alexandre DERUMIER aderumier at odiso.com
Sun Apr 12 18:26:07 CEST 2015


I have had a customer with the same problem (raw LVM in the guest + LVM disks on the host for the VMs).

The problem is that the host sees the LVM volumes from the guests, because vgscan/lvscan pick them up inside the guest disks.

The solution was to use filtering in lvm.conf on the host, so that only the host's own devices are scanned.

I don't remember the exact config 'filter = [.....]', sorry
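For reference, a filter of this general shape in the devices section of /etc/lvm/lvm.conf should do it; the device patterns below are only an assumption for illustration and must be adjusted to match the host's actual PV devices:

```
# /etc/lvm/lvm.conf on the Proxmox host -- sketch only.
# Accept the host's own PVs, reject everything else, so that
# vgscan/lvscan never activate LVM metadata found inside guest disks.
# "/dev/sda" and "/dev/mapper/mpath" are example patterns, not the
# required values.
devices {
    filter = [ "a|^/dev/sda|", "a|^/dev/mapper/mpath|", "r|.*|" ]
}
```

After changing the filter, running vgscan on the host should no longer list the guests' volume groups.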

----- Original message -----
From: "Brian Hart" <brianhart at ou.edu>
To: "proxmoxve" <pve-user at pve.proxmox.com>
Sent: Saturday, 11 April 2015 05:17:57
Subject: [PVE-User] Using raw LVM without partitions inside VM

Hello everybody, 
For a long time now I've used raw LVM on disks inside virtual machines without using disk partitions. I create a separate small disk to serve as the "boot" drive and give it a partition; this is formatted and mounted at /boot. Then we create a separate disk to contain everything else in an LVM structure. Outside of Proxmox this is perfectly acceptable as long as you do not need to boot from the device, which we do not, since we create that separate boot device. The partition table would only serve as a way for the BIOS to interact with the disk for boot purposes. The main advantage is that it makes the non-boot sections of the system very fluid, and makes adding or removing space on a live system SO much easier, without having to worry about the restrictions of a partition table. 
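As a sketch, the layout described above could be set up like this inside the guest; the device names /dev/vda and /dev/vdb and the VG/LV names are hypothetical and must be adapted:

```
# Sketch of the scheme above, run inside the guest.
# /dev/vda = small boot disk, /dev/vdb = data disk (hypothetical names).
parted -s /dev/vda mklabel msdos mkpart primary ext4 1MiB 100%
mkfs.ext4 /dev/vda1                 # the one partitioned disk, mounted at /boot

pvcreate /dev/vdb                   # whole second disk as a PV, no partition table
vgcreate vg_data /dev/vdb
lvcreate -n root -l 100%FREE vg_data
```

Because /dev/vdb carries no partition table, growing it later is just a matter of resizing the virtual disk, then pvresize /dev/vdb and lvextend, with no partition boundaries in the way.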

We've been doing this successfully in VMware for a long time, but only today did we attempt it in Proxmox and ran into a serious issue which, long story short, resulted in the loss of a disk. I understand what went wrong and why it happened, and luckily it only happened to a template, so nothing major was lost; we can rebuild it. On Proxmox we use an iSCSI SAN with multipath connections for our backend storage, so we use LVM on the Proxmox host for our VM disks. I know some answers on the forum are to "use partitions", and I understand why that is the answer given, but we do this very intentionally, with a deep understanding of how it would normally work. The reason it doesn't work here is because of how the disks are handled on LVM-backed storage on the host in this case. 

What I am hoping for are alternate suggestions on how we can use raw LVM on disks with Proxmox. Do we need to use a different storage method? Would this same problem exist somehow with qcow2 files, or on ZFS-backed storage (such as ZFS over iSCSI)? It seems like it shouldn't, for the same reasons it doesn't happen on VMware with VMDK files, but I wanted to be sure. If I understand the issue correctly, it happens only because we're doing LVM on a raw block device, so the Proxmox host sees the guest's LVM directly. I would expect something like a qcow2 file to shield it sufficiently, but maybe not ZFS over iSCSI(?); I'm not sure. I'm basically looking for any creative solutions to accomplish what we are trying to do, or any advice that doesn't follow the beaten path of "use partitions". 

Thanks for any feedback or suggestions -- 


pve-user mailing list 
pve-user at pve.proxmox.com 
