[pve-devel] [PATCH docs v4 4/5] added vIOMMU documentation

Wolfgang Bumiller w.bumiller at proxmox.com
Fri Jan 13 11:09:29 CET 2023


On Fri, Nov 25, 2022 at 03:08:56PM +0100, Markus Frank wrote:
> Signed-off-by: Markus Frank <m.frank at proxmox.com>
> ---
>  qm-pci-passthrough.adoc | 25 +++++++++++++++++++++++++
>  1 file changed, 25 insertions(+)
> 
> diff --git a/qm-pci-passthrough.adoc b/qm-pci-passthrough.adoc
> index fa6ba35..7ed4d49 100644
> --- a/qm-pci-passthrough.adoc
> +++ b/qm-pci-passthrough.adoc
> @@ -389,6 +389,31 @@ Example configuration with an `Intel GVT-g vGPU` (`Intel Skylake 6700k`):
>  With this set, {pve} automatically creates such a device on VM start, and
>  cleans it up again when the VM stops.
>  
> +[[qm_pci_viommu]]
> +vIOMMU
> +~~~~~~
> +
> +vIOMMU makes it possible to pass PCI devices through to level-2 VMs
> +inside level-1 VMs via nested virtualisation.
> +
> +Host-Requirement: Set `intel_iommu=on` or `amd_iommu=on` depending on your
> +CPU.

And by "CPU" you mean kernel command line? ;-)
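(Might be worth spelling that out in the docs. A minimal sketch of what
that means on a stock Proxmox VE host, assuming the default boot setup:

    # legacy GRUB: edit /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
    # then apply the change and reboot
    update-grub

    # systemd-boot (e.g. root on ZFS with UEFI): append the parameter
    # to /etc/kernel/cmdline, then
    proxmox-boot-tool refresh

With `amd_iommu=on` instead on AMD hosts, as the text says.)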

> +
> +VM-Requirement: For both Intel and AMD CPUs you will have to set
> +`intel_iommu=on` as a Linux boot parameter in the vIOMMU-enabled VM,
> +because QEMU implements the Intel variant.

^ As mentioned, there does appear to be an amd_iommu device in the
QEMU code, so would the AMD variant work?
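For reference, the two QEMU-level variants would look roughly like this
(a sketch of the raw QEMU arguments only, not of what this patch
generates; the AMD one is untested on my end):

    # Intel vIOMMU: wants the q35 machine type and a split irqchip;
    # caching-mode=on is needed for passthrough inside the nested guest
    -machine q35,kernel-irqchip=split \
    -device intel-iommu,intr_remap=on,caching-mode=on

    # AMD counterpart present in the QEMU tree:
    -machine q35 \
    -device amd-iommu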

In my reply to the code patch I mentioned checking the host arch. But
if you say we can use intel_iommu on AMD as well, I'd say: if both
work, give the user a choice; otherwise we can of course just stick to
the one that works ;-)
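If both do work, the choice could be a simple property on the machine
setting. Purely illustrative, not what this patch currently implements:

    # hypothetical syntax: let the user pick the emulated IOMMU flavour
    qm set <vmid> --machine q35,viommu=intel   # or viommu=amd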
