[pve-devel] [PATCH qemu-server] Add iothread_vq_mapping support for virtio-blk (PVE 8.4)

Dominik Budzowski dbudzowski at alfaline.pl
Thu Jun 26 00:15:34 CEST 2025


Cover letter for a series of patches adding iothread_vq_mapping support to
virtio-blk devices in Proxmox VE 8.4.

This feature was introduced in QEMU 9.0 and allows mapping a virtio-blk
device's virtqueues onto separate I/O threads, which can dramatically
improve raw block I/O throughput.

See discussion and background here:
https://blogs.oracle.com/linux/post/virtioblk-using-iothread-vq-mapping
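
For background, on the QEMU command line the feature takes roughly this
shape (a minimal sketch with illustrative IDs; the `iothread-vq-mapping`
property takes a list, so it can only be set via the JSON form of
`-device`):

  -object iothread,id=iothread0 \
  -object iothread,id=iothread1 \
  -device '{"driver":"virtio-blk-pci","drive":"drive0","num-queues":2,
    "iothread-vq-mapping":[{"iothread":"iothread0"},{"iothread":"iothread1"}]}'

Each virtqueue is then serviced by the iothread it is mapped to, letting
multiple queues make progress in parallel instead of funnelling through a
single event loop.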

Patches:

  1. drive-iothread-vq-pve8.4.patch
     – extend PVE/QemuServer/Drive.pm to expose the `iothread_vq_mapping`
       parameter on `Drive` objects.

  2. qemuserver-iothread-vq-pve8.4.patch
     – update PVE/QemuServer.pm to consume `iothread_vq_mapping`,
       generate the matching `-object iothread,id=…` entries, and emit
       JSON `-device` parameters with separate bus/addr fields (an
       example fragment follows this list).
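
For the sample config shown under Usage below (`iothread_vq_mapping=8`),
the patched code emits a fragment of roughly this shape; the object IDs
and bus/addr values here are illustrative guesses, not taken from the
patches:

  -object iothread,id=iothread-virtio0-0 \
  ...
  -object iothread,id=iothread-virtio0-7 \
  -device '{"driver":"virtio-blk-pci","drive":"drive-virtio0","id":"virtio0",
    "bus":"pci.0","addr":"0xa","num-queues":8,
    "iothread-vq-mapping":[{"iothread":"iothread-virtio0-0"}, ...,
                           {"iothread":"iothread-virtio0-7"}]}'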

Installation:

  cp /usr/share/perl5/PVE/QemuServer/Drive.pm \
     /usr/share/perl5/PVE/QemuServer/Drive.pm.backup
  cp /usr/share/perl5/PVE/QemuServer.pm \
     /usr/share/perl5/PVE/QemuServer.pm.backup
  patch /usr/share/perl5/PVE/QemuServer/Drive.pm \
     < drive-iothread-vq-pve8.4.patch
  patch /usr/share/perl5/PVE/QemuServer.pm \
     < qemuserver-iothread-vq-pve8.4.patch
  systemctl restart pvedaemon pveproxy
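
To verify the result without starting a VM, `qm showcmd` prints the
generated QEMU command line (VMID 100 as an example):

  qm showcmd 100 --pretty | grep -A 1 iothread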

Usage:

  Add `iothread_vq_mapping=<num>` to your disk line in
  `/etc/pve/qemu-server/<VMID>.conf`, where `<num>` (2–16) sets how many
  I/O threads the disk's virtqueues are spread across, for example:

    virtio0: local-lvm:vm-100-disk-0,aio=native,iothread_vq_mapping=8,size=50G
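
  With the patches applied, the same option should also be accepted through
  the CLI (a hedged example reusing the volume from above):

    qm set 100 --virtio0 local-lvm:vm-100-disk-0,aio=native,iothread_vq_mapping=8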

Test Results:

  fio 4k randread against a raw block device:
   - legacy `iothread=1`: ~200k IOPS
   - new `iothread_vq_mapping=8`: ~800k IOPS
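
  The exact fio invocation is not part of this letter; a representative
  command for this workload inside the guest could look like the following
  (assuming the disk shows up as /dev/vda):

    fio --name=randread --filename=/dev/vda --direct=1 --rw=randread \
        --bs=4k --ioengine=io_uring --iodepth=64 --numjobs=8 \
        --runtime=60 --time_based --group_reporting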

Please review this series to bring enhanced virtio-blk performance to PVE 8.4.

--
Dominik Budzowski <dbudzowski at alfaline.pl>

Signed-off-by: Dominik Budzowski <dbudzowski at alfaline.pl>


