[pve-devel] [PATCH qemu] add patch to work around stuck guest IO with iothread and VirtIO block/SCSI
dea
dea at corep.it
Tue Jan 23 14:43:41 CET 2024
Very good news, Fiona!
For quite some time I have been using patchlevel 5 of the pve-qemu
package (the one with the CPU overload issue), because package 4-6
gives me the stuck storage problem, exactly as you describe in your
post.
Many thanks!
On 23/01/24 14:13, Fiona Ebner wrote:
> This essentially repeats commit 6b7c181 ("add patch to work around
> stuck guest IO with iothread and VirtIO block/SCSI") with an added
> fix for the SCSI event virtqueue, which requires special handling.
> This is to avoid the issue [4] that made the revert 2a49e66 ("Revert
> "add patch to work around stuck guest IO with iothread and VirtIO
> block/SCSI"") necessary the first time around.
>
> When using iothread, after commits
> 1665d9326f ("virtio-blk: implement BlockDevOps->drained_begin()")
> 766aa2de0f ("virtio-scsi: implement BlockDevOps->drained_begin()")
> it can happen that polling gets stuck when draining. This would cause
> IO in the guest to get completely stuck.
>
> A workaround for users is stopping and resuming the vCPUs because that
> would also stop and resume the dataplanes which would kick the host
> notifiers.
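
(Side note for anyone hit by this before a fixed package is out: on a
Proxmox VE host the vCPU stop/resume workaround can be applied with the
stock tooling, e.g. for a hypothetical VM ID 100:

    # Pausing and resuming the VM stops and restarts the vCPUs together
    # with the iothread dataplane, which kicks the host notifiers again
    # and lets the stuck IO complete.
    qm suspend 100
    qm resume 100

The 'stop' and 'cont' commands in the QEMU monitor ('qm monitor 100')
should have the same effect.)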
>
> This can happen with block jobs like backup and drive mirror as well
> as with hotplug [2].
>
> There are reports in the community forum that might be about this
> issue [0][1], and there is also one in the enterprise support channel.
>
> As a workaround in the code, just re-enable notifications and kick the
> virt queue after draining. Draining is already costly and rare, so no
> need to worry about a performance penalty here. This was taken from
> the following comment of a QEMU developer [3] (in my debugging,
> I had already found re-enabling notification to work around the issue,
> but also kicking the queue is more complete).
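
In code, both drained_end handlers changed below boil down to the same
pattern; condensed here into one hypothetical helper purely for
illustration (this standalone function does not exist in the patch, the
calls simply mirror the hunks further down):

    /* After draining: make sure guest->host notifications are enabled
     * again (polling may have left them disabled), re-attach the host
     * notifier to the iothread's AioContext and kick the queue once so
     * requests submitted while notifications were off get picked up. */
    static void drained_end_fixup(VirtIODevice *vdev, AioContext *ctx,
                                  uint16_t num_queues)
    {
        for (uint16_t i = 0; i < num_queues; i++) {
            VirtQueue *vq = virtio_get_queue(vdev, i);

            if (!virtio_queue_get_notification(vq)) {
                virtio_queue_set_notification(vq, true);
            }
            virtio_queue_aio_attach_host_notifier(vq, ctx);
            virtio_queue_notify(vdev, i);
        }
    }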
>
> Take special care to attach the SCSI event virtqueue host notifier
> with the _no_poll() variant like in virtio_scsi_dataplane_start().
> This avoids the issue from the first attempted fix where the iothread
> would suddenly loop with 100% CPU usage whenever some guest IO came in
> [4]. This is necessary because of commit 38738f7dbb ("virtio-scsi:
> don't waste CPU polling the event virtqueue"). See [5] for the
> relevant discussion.
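
For reference, the attach logic in virtio_scsi_dataplane_start() that
the SCSI hunk below mirrors looks roughly like this (paraphrased from
memory, not verbatim QEMU code; vs is the VirtIOSCSICommon, s the
VirtIOSCSI instance):

    /* Control and command virtqueues are attached with the polling
     * variant; the event virtqueue is attached without polling because
     * polling it would only waste CPU (commit 38738f7dbb). */
    virtio_queue_aio_attach_host_notifier(vs->ctrl_vq, s->ctx);
    virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq, s->ctx);
    for (uint32_t i = 0; i < vs->conf.num_queues; i++) {
        virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i], s->ctx);
    }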
>
> [0]: https://forum.proxmox.com/threads/137286/
> [1]: https://forum.proxmox.com/threads/137536/
> [2]: https://issues.redhat.com/browse/RHEL-3934
> [3]: https://issues.redhat.com/browse/RHEL-3934?focusedId=23562096&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-23562096
> [4]: https://forum.proxmox.com/threads/138140/
> [5]: https://lore.kernel.org/qemu-devel/bfc7b20c-2144-46e9-acbc-e726276c5a31@proxmox.com/
>
> Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
> ---
> ...work-around-iothread-polling-getting.patch | 87 +++++++++++++++++++
> debian/patches/series | 1 +
> 2 files changed, 88 insertions(+)
> create mode 100644 debian/patches/pve/0046-virtio-blk-scsi-work-around-iothread-polling-getting.patch
>
> diff --git a/debian/patches/pve/0046-virtio-blk-scsi-work-around-iothread-polling-getting.patch b/debian/patches/pve/0046-virtio-blk-scsi-work-around-iothread-polling-getting.patch
> new file mode 100644
> index 0000000..a268eed
> --- /dev/null
> +++ b/debian/patches/pve/0046-virtio-blk-scsi-work-around-iothread-polling-getting.patch
> @@ -0,0 +1,87 @@
> +From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
> +From: Fiona Ebner <f.ebner at proxmox.com>
> +Date: Tue, 23 Jan 2024 13:21:11 +0100
> +Subject: [PATCH] virtio blk/scsi: work around iothread polling getting stuck
> + with drain
> +
> +When using iothread, after commits
> +1665d9326f ("virtio-blk: implement BlockDevOps->drained_begin()")
> +766aa2de0f ("virtio-scsi: implement BlockDevOps->drained_begin()")
> +it can happen that polling gets stuck when draining. This would cause
> +IO in the guest to get completely stuck.
> +
> +A workaround for users is stopping and resuming the vCPUs because that
> +would also stop and resume the dataplanes which would kick the host
> +notifiers.
> +
> +This can happen with block jobs like backup and drive mirror as well
> +as with hotplug [2].
> +
> +There are reports in the community forum that might be about this
> +issue [0][1], and there is also one in the enterprise support channel.
> +
> +As a workaround in the code, just re-enable notifications and kick the
> +virt queue after draining. Draining is already costly and rare, so no
> +need to worry about a performance penalty here. This was taken from
> +the following comment of a QEMU developer [3] (in my debugging,
> +I had already found re-enabling notification to work around the issue,
> +but also kicking the queue is more complete).
> +
> +Take special care to attach the SCSI event virtqueue host notifier
> +with the _no_poll() variant like in virtio_scsi_dataplane_start().
> +This avoids the issue from the first attempted fix where the iothread
> +would suddenly loop with 100% CPU usage whenever some guest IO came in
> +[4]. This is necessary because of commit 38738f7dbb ("virtio-scsi:
> +don't waste CPU polling the event virtqueue"). See [5] for the
> +relevant discussion.
> +
> +[0]: https://forum.proxmox.com/threads/137286/
> +[1]: https://forum.proxmox.com/threads/137536/
> +[2]: https://issues.redhat.com/browse/RHEL-3934
> +[3]: https://issues.redhat.com/browse/RHEL-3934?focusedId=23562096&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-23562096
> +[4]: https://forum.proxmox.com/threads/138140/
> +[5]: https://lore.kernel.org/qemu-devel/bfc7b20c-2144-46e9-acbc-e726276c5a31@proxmox.com/
> +
> +Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
> +---
> + hw/block/virtio-blk.c | 4 ++++
> + hw/scsi/virtio-scsi.c | 10 +++++++++-
> + 2 files changed, 13 insertions(+), 1 deletion(-)
> +
> +diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
> +index 39e7f23fab..d9a655e9b8 100644
> +--- a/hw/block/virtio-blk.c
> ++++ b/hw/block/virtio-blk.c
> +@@ -1536,7 +1536,11 @@ static void virtio_blk_drained_end(void *opaque)
> +
> + for (uint16_t i = 0; i < s->conf.num_queues; i++) {
> + VirtQueue *vq = virtio_get_queue(vdev, i);
> ++ if (!virtio_queue_get_notification(vq)) {
> ++ virtio_queue_set_notification(vq, true);
> ++ }
> + virtio_queue_aio_attach_host_notifier(vq, ctx);
> ++ virtio_queue_notify(vdev, i);
> + }
> + }
> +
> +diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
> +index 45b95ea070..93a292df60 100644
> +--- a/hw/scsi/virtio-scsi.c
> ++++ b/hw/scsi/virtio-scsi.c
> +@@ -1165,7 +1165,15 @@ static void virtio_scsi_drained_end(SCSIBus *bus)
> +
> + for (uint32_t i = 0; i < total_queues; i++) {
> + VirtQueue *vq = virtio_get_queue(vdev, i);
> +- virtio_queue_aio_attach_host_notifier(vq, s->ctx);
> ++ if (!virtio_queue_get_notification(vq)) {
> ++ virtio_queue_set_notification(vq, true);
> ++ }
> ++ if (vq == VIRTIO_SCSI_COMMON(s)->event_vq) {
> ++ virtio_queue_aio_attach_host_notifier_no_poll(vq, s->ctx);
> ++ } else {
> ++ virtio_queue_aio_attach_host_notifier(vq, s->ctx);
> ++ }
> ++ virtio_queue_notify(vdev, i);
> + }
> + }
> +
> diff --git a/debian/patches/series b/debian/patches/series
> index b3da8bb..7dcedcb 100644
> --- a/debian/patches/series
> +++ b/debian/patches/series
> @@ -60,3 +60,4 @@ pve/0042-Revert-block-rbd-implement-bdrv_co_block_status.patch
> pve/0043-alloc-track-fix-deadlock-during-drop.patch
> pve/0044-migration-for-snapshots-hold-the-BQL-during-setup-ca.patch
> pve/0045-savevm-async-don-t-hold-BQL-during-setup.patch
> +pve/0046-virtio-blk-scsi-work-around-iothread-polling-getting.patch