[pve-devel] [PATCH docs v8 3/7] added shared filesystem doc for virtio-fs

Markus Frank m.frank at proxmox.com
Wed Nov 8 09:52:50 CET 2023


Signed-off-by: Markus Frank <m.frank at proxmox.com>
---
 qm.adoc | 84 +++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 82 insertions(+), 2 deletions(-)

diff --git a/qm.adoc b/qm.adoc
index c4f1024..571c42e 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -996,6 +996,85 @@ recommended to always use a limiter to avoid guests using too many host
 resources. If desired, a value of '0' for `max_bytes` can be used to disable
 all limits.
 
+[[qm_virtiofs]]
+Virtio-fs
+~~~~~~~~~
+
+Virtio-fs is a shared file system that enables sharing a directory between the
+host and guest VMs. It takes advantage of the locality of virtual machines and
+the hypervisor to achieve higher throughput than the 9p remote file system
+protocol.
+
+To use virtio-fs, the https://gitlab.com/virtio-fs/virtiofsd[virtiofsd] daemon
+needs to run in the background. In {pve}, this process is started automatically
+right before QEMU is started.
+
+Linux VMs with kernel >=5.4 support this feature by default.
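+
+To check whether the prerequisites are in place, you can look for the
+virtiofsd processes on the host and for driver support inside a Linux guest.
+The following commands are only an illustrative sketch; the exact output
+depends on your distribution:
+
+----
+# on the host: list running virtiofsd instances
+pgrep -a virtiofsd
+
+# inside a Linux guest: check kernel version and virtio-fs driver support
+uname -r
+grep -w virtiofs /proc/filesystems || modprobe virtiofs
+----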
+
+There is a
+https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system[guide]
+available on how to use virtio-fs in Windows VMs.
+
+Known limitations
+^^^^^^^^^^^^^^^^^
+
+* If virtiofsd crashes, there is no way to recover the share until the VM is
+fully stopped and restarted.
+* If virtiofsd stops responding, access to the shared directory inside the VM
+may hang, similar to an unreachable NFS mount.
+* Memory hotplug does not work in combination with virtio-fs.
+* Windows cannot understand ACLs. Therefore, disable ACLs for Windows VMs,
+otherwise the virtio-fs device will not be visible within the VM.
+
+Add Mapping for Shared Directories
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To add a mapping, either use the API directly with pvesh as described in the
+xref:resource_mapping[Resource Mapping] section,
+or add the mapping to the configuration file `/etc/pve/mapping/dir.cfg`:
+
+----
+some-dir-id
+    map node=node1,path=/mnt/share/,submounts=1
+    map node=node2,path=/mnt/share/
+    xattr 1
+    acl 1
+----
+
+Set `submounts` to `1` when multiple file systems are mounted in a
+shared directory.
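+
+Alternatively, the mapping can be created via the API with `pvesh`. The
+following is only a sketch, assuming the directory mapping endpoint mirrors
+the existing `pci` and `usb` mapping API under `/cluster/mapping/dir` and
+accepts repeated `--map` options:
+
+----
+pvesh create /cluster/mapping/dir --id some-dir-id \
+    --map node=node1,path=/mnt/share/,submounts=1 \
+    --map node=node2,path=/mnt/share/ \
+    --xattr 1 --acl 1
+----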
+
+Add Virtio-fs to a VM
+^^^^^^^^^^^^^^^^^^^^^
+
+To share a directory using virtio-fs, you need to specify the directory ID
+(dirid) that has been configured in the Resource Mapping.
+Additionally, you can set the `cache` option to either `always`, `never`,
+or `auto`, depending on your requirements.
+If you want virtio-fs to honor the `O_DIRECT` flag, you can set the
+`direct-io` parameter to `1`.
+Additionally, it is possible to override the default mapping settings for
+`xattr` and `acl` by setting them to either `1` or `0`.
+
+The `acl` parameter automatically implies `xattr`, that is, it makes no
+difference whether you set `xattr` to `0` if `acl` is set to `1`.
+
+----
+qm set <vmid> -virtiofs0 dirid=<dirid>,cache=always,direct-io=1
+qm set <vmid> -virtiofs1 <dirid>,cache=never,xattr=1
+qm set <vmid> -virtiofs2 <dirid>,acl=1
+----
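+
+After adding the shares, the resulting configuration entries can be inspected
+with `qm config` (a generic sketch, output shortened):
+
+----
+qm config <vmid> | grep virtiofs
+----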
+
+The dirid associated with the path on the current node is also used as the
+mount tag (the name used to mount the device in the guest).
+
+To mount virtio-fs in a guest VM with the Linux kernel virtio-fs driver, run
+the following command inside the guest:
+
+----
+mount -t virtiofs <mount tag> <mount point>
+----
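+
+To mount the share automatically at boot, a generic Linux `/etc/fstab` entry
+like the following can be used inside the guest (a sketch, not specific to
+{pve}):
+
+----
+<mount tag> <mount point> virtiofs defaults 0 0
+----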
+
+For more information on available virtiofsd parameters, see the
+https://gitlab.com/virtio-fs/virtiofsd[GitLab virtiofsd project page].
+
 [[qm_bootorder]]
 Device Boot Order
 ~~~~~~~~~~~~~~~~~
@@ -1603,8 +1682,9 @@ in the relevant tab in the `Resource Mappings` category, or on the cli with
 
 [thumbnail="screenshot/gui-datacenter-mapping-pci-edit.png"]
 
-Where `<type>` is the hardware type (currently either `pci` or `usb`) and
-`<options>` are the device mappings and other configuration parameters.
+Where `<type>` is the hardware type (currently `pci`, `usb`, or
+xref:qm_virtiofs[dir]) and `<options>` are the device mappings and other
+configuration parameters.
 
 Note that the options must include a map property with all identifying
 properties of that hardware, so that it's possible to verify the hardware did
-- 
2.39.2