[pve-devel] [PATCH docs v7 1/1] qemu: add documentation about cluster device mapping

Dominik Csapak d.csapak at proxmox.com
Fri Jun 16 15:05:42 CEST 2023


explain why someone would want it, how to configure it, and which privileges
are necessary

Signed-off-by: Dominik Csapak <d.csapak at proxmox.com>
---
changes from v6:
* added small note about only one usb device per node per map

 qm-pci-passthrough.adoc |  8 ++++
 qm.adoc                 | 87 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 95 insertions(+)

diff --git a/qm-pci-passthrough.adoc b/qm-pci-passthrough.adoc
index df6cf21..b90a0b9 100644
--- a/qm-pci-passthrough.adoc
+++ b/qm-pci-passthrough.adoc
@@ -400,6 +400,14 @@ Example configuration with an `Intel GVT-g vGPU` (`Intel Skylake 6700k`):
 With this set, {pve} automatically creates such a device on VM start, and
 cleans it up again when the VM stops.
 
+Use in Clusters
+~~~~~~~~~~~~~~~
+
+It is also possible to map devices on a cluster level, so that they can be
+properly used with HA, hardware changes are detected, and non-root users can
+configure them. See xref:resource_mapping[Resource Mapping] for details.
+
 ifdef::wiki[]
 
 See Also
diff --git a/qm.adoc b/qm.adoc
index c6dc652..4e9c8b5 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -753,6 +753,10 @@ if you use a SPICE client which supports it. If you add a SPICE USB port
 to your VM, you can passthrough a USB device from where your SPICE client is,
 directly to the VM (for example an input device or hardware dongle).
 
+It is also possible to map devices on a cluster level, so that they can be
+properly used with HA, hardware changes are detected, and non-root users can
+configure them. See xref:resource_mapping[Resource Mapping] for details.
 
 [[qm_bios_and_uefi]]
 BIOS and UEFI
@@ -1511,6 +1515,89 @@ chosen, the first of:
 3. The first non-shared storage from any VM disk.
 4. The storage `local` as a fallback.
 
+[[resource_mapping]]
+Resource Mapping
+~~~~~~~~~~~~~~~~
+
+When using or referencing local resources (e.g. the address of a PCI device),
+using the raw address or ID is sometimes problematic, for example:
+
+* when using HA, a different device with the same ID or path may exist on the
+  target node, and if one is not careful when assigning such guests to HA
+  groups, the wrong device could be used, breaking configurations.
+
+* changing hardware can change IDs and paths, so one would have to check all
+  assigned devices to see whether the path or ID is still correct.
+
+To handle this better, one can define cluster-wide resource mappings, such that
+a resource has a cluster-unique, user-selected identifier which can correspond
+to different devices on different hosts. With this, HA won't start a guest with
+the wrong device, and hardware changes can be detected.
+
+Creating such a mapping can be done with the {pve} web GUI under `Datacenter`
+in the relevant tab in the `Resource Mappings` category, or on the CLI with
+
+----
+# pvesh create /cluster/mapping/TYPE OPTIONS
+----
+
+Where `TYPE` is the hardware type (currently either `pci` or `usb`) and
+`OPTIONS` are the device mappings and other configuration parameters.
+
+Note that the options must include a `map` property with all identifying
+properties of that hardware, so that it's possible to verify the hardware did
+not change and the correct device is passed through.
+
+For example, to add a PCI device as `device1` with the path `0000:01:00.0`,
+the device ID `0001` and the vendor ID `0002` on the node `node1`, and the
+path `0000:02:00.0` on `node2`, you can add it with:
+
+----
+# pvesh create /cluster/mapping/pci --id device1 \
+ --map node=node1,path=0000:01:00.0,id=0002:0001 \
+ --map node=node2,path=0000:02:00.0,id=0002:0001
+----
+
+You must repeat the `map` parameter for each node where that device should have
+a mapping (note that you can currently only map one USB device per node per
+mapping).
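+
+A USB mapping could look similar, for example (this is only a sketch; the
+vendor/product ID `046d:c52b` and the mapping name are placeholders and have
+to be replaced with the values of the actual device):
+
+----
+# pvesh create /cluster/mapping/usb --id usbdevice1 \
+ --map node=node1,id=046d:c52b
+----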
+
+Using the GUI makes this much easier, as the correct properties are
+automatically picked up and sent to the API.
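+
+The created mappings should then also show up when querying the API, for
+example:
+
+----
+# pvesh get /cluster/mapping/pci
+----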
+
+For PCI devices, it is also possible to provide multiple devices per node by
+specifying multiple `map` properties for that node. If such a mapping is
+assigned to a guest, the first free device will be used when the guest is
+started. The order in which the paths are given is also the order in which
+they are tried, so arbitrary allocation policies can be implemented.
+
+This is useful for devices with SR-IOV, since sometimes it is not important
+which exact virtual function is passed through.
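+
+For example, a mapping that provides two virtual functions of the same card on
+`node1` could look like this (the paths and the ID `8086:154c` are placeholders
+for the actual hardware):
+
+----
+# pvesh create /cluster/mapping/pci --id vf-pool \
+ --map node=node1,path=0000:01:00.1,id=8086:154c \
+ --map node=node1,path=0000:01:00.2,id=8086:154c
+----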
+
+You can assign such a device to a guest either with the GUI or with
+
+----
+# qm set ID -hostpci0 NAME
+----
+
+for PCI devices, or
+
+----
+# qm set ID -usb0 NAME
+----
+
+for USB devices.
+
+Where `ID` is the guest's ID and `NAME` is the chosen name for the created
+mapping. All usual options for passing through the devices are allowed, such
+as `mdev`.
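+
+For instance, selecting a mediated device type on a mapped PCI device might
+look like this (the type `nvidia-63` is only an illustrative placeholder; the
+available types depend on the hardware):
+
+----
+# qm set ID -hostpci0 NAME,mdev=nvidia-63
+----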
+
+To create mappings, `Mapping.Modify` on `/mapping/TYPE/NAME` is necessary
+(where `TYPE` is the device type and `NAME` is the name of the mapping).
+
+To use these mappings, `Mapping.Use` on `/mapping/TYPE/NAME` is necessary (in
+addition to the normal guest privileges to edit the configuration).
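+
+One way to grant these privileges is via a custom role, for example (a sketch;
+the role name `MappingUser` and the user `alice@pve` are placeholders):
+
+----
+# pveum role add MappingUser --privs "Mapping.Use"
+# pveum acl modify /mapping/pci/device1 --users alice@pve --roles MappingUser
+----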
+
 Managing Virtual Machines with `qm`
 ------------------------------------
 
-- 
2.30.2