[pve-devel] [PATCH docs v5 4/5] added vIOMMU documentation

Fiona Ebner f.ebner at proxmox.com
Thu Aug 17 14:41:44 CEST 2023


On 18.01.23 at 14:57, Markus Frank wrote:
> Signed-off-by: Markus Frank <m.frank at proxmox.com>
> ---
>  qm-pci-passthrough.adoc | 25 +++++++++++++++++++++++++
>  1 file changed, 25 insertions(+)
> 
> diff --git a/qm-pci-passthrough.adoc b/qm-pci-passthrough.adoc
> index df6cf21..0db9b06 100644
> --- a/qm-pci-passthrough.adoc
> +++ b/qm-pci-passthrough.adoc
> @@ -400,6 +400,31 @@ Example configuration with an `Intel GVT-g vGPU` (`Intel Skylake 6700k`):
>  With this set, {pve} automatically creates such a device on VM start, and
>  cleans it up again when the VM stops.
>  
> +[[qm_pci_viommu]]
> +vIOMMU
> +~~~~~~
> +
> +vIOMMU enables the option to passthrough pci devices to Level-2 VMs
> +in Level-1 VMs via Nested Virtualisation.

Nit: "PCI" should be capitalized, "level" and "nested virtualization" not.

Instead of "vIOMMU enables the option to", maybe "Using a vIOMMU allows
you to" or "With a vIOMMU you can" would read slightly better IMHO.

> +
> +Host Requirement: Add `intel_iommu=on` or `amd_iommu=on`
> +depending on your CPU to your kernel command line.

Nit: capitalization of "Requirement" here. You could argue it's a title,
but I'm not sure.
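
Maybe also add a pointer to where that parameter goes, for readers who
have not set up passthrough before. A minimal sketch, assuming the
default GRUB setup (hosts booting via systemd-boot use
/etc/kernel/cmdline instead):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

followed by `update-grub` and a reboot.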

> +
> +VM Requirement: For both Intel and AMD CPUs, set `intel_iommu=on`
> +as the kernel parameter in the vIOMMU enabled VM, since qemu-server currently
> +uses the Intel variant. The guest vIOMMU only works with the *q35* machine
> +type and with *kvm* enabled.

A quick sentence on why we use the Intel variant might be good.
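
For context, and this is just my understanding, not something the patch
states: QEMU's emulated vIOMMU is the `intel-iommu` device, which also
works for guests running on AMD hosts, while the `amd-iommu` device is
less complete as far as I know. I would guess qemu-server ends up
generating something along the lines of

    -device intel-iommu,intremap=on,caching-mode=on

on the QEMU command line, but the exact options are an assumption on my
part. Spelling out the reason in the docs would avoid confusion about
setting `intel_iommu=on` inside guests on AMD hosts.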

> +
> +To enable vIOMMU, add `viommu=1` to the machine-parameter in the
> +configuration of the VM that should be able to passthrough pci devices.

Nit: "PCI"

> +
> +----
> +# qm set VMID -machine q35,viommu=1
> +----
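
Maybe also show what ends up in the VM config and how the device itself
is passed to the L1 VM. Roughly (the PCI address is just a made-up
example, the hostpci syntax is the usual one):

    # /etc/pve/qemu-server/<VMID>.conf
    machine: q35,viommu=1
    hostpci0: 0000:01:00.0,pcie=1

Inside the L1 VM the device can then be passed on to the L2 VM in the
same way as on a physical host.
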
> +
> +
> +https://wiki.qemu.org/Features/VT-d
> +
>  ifdef::wiki[]
>  
>  See Also




