[pve-devel] [PATCH v2 qemu-server] resume: bump timeout for query-status

Fiona Ebner f.ebner at proxmox.com
Thu Jul 25 14:32:26 CEST 2024


As reported in the community forum [0], after migration, the VM might
not immediately be able to respond to QMP commands, which means the VM
could fail to resume and stay in a paused state on the target.

The reason is that activating the block devices in QEMU can take a
bit of time. For example, it might be necessary to invalidate the
caches (for raw devices, a flush might be needed), and the request
alignment and size of the block device need to be queried.

In [0], an external Ceph cluster with krbd is used, and the initial
read to the block device after migration, done to probe the request
alignment, takes a bit over 10 seconds [1]. Use 60 seconds as the new
timeout to be on the safe side for the future.

All callers are inside worker tasks or go through the 'qm' CLI
command, so bumping the timeout beyond 30 seconds (the limit for
synchronous API requests) is fine.
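
Note that 'timeout' is an option for our QMP client and not a QMP
argument, i.e. it is stripped from the parameters before the command
is sent to QEMU. A minimal sketch of that dispatch pattern (the real
plumbing lives in PVE::QemuServer::Monitor and PVE::QMPClient; the
helper name below is purely illustrative):

    # Illustrative only: peel off the client-side 'timeout' option so
    # it is never forwarded to QEMU as a command argument.
    sub illustrative_mon_cmd {
        my ($vmid, $execute, %params) = @_;
        my $timeout = delete $params{timeout};
        my $cmd = { execute => $execute, arguments => \%params };
        # qmp_client_cmd() stands in for the real QMP client call,
        # which uses $timeout to bound how long it waits for a reply.
        return qmp_client_cmd($vmid, $cmd, $timeout);
    }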

[0]: https://forum.proxmox.com/threads/149610/

Signed-off-by: Fiona Ebner <f.ebner at proxmox.com>
---

Changes in v2:
* improve commit message with new findings from the forum thread

 PVE/QemuServer.pm | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index bf59b091..9e840912 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -6461,7 +6461,9 @@ sub vm_resume {
     my ($vmid, $skiplock, $nocheck) = @_;
 
     PVE::QemuConfig->lock_config($vmid, sub {
-	my $res = mon_cmd($vmid, 'query-status');
+	# After migration, the VM might not immediately be able to respond to QMP commands, because
+	# activating the block devices might take a bit of time.
+	my $res = mon_cmd($vmid, 'query-status', timeout => 60);
 	my $resume_cmd = 'cont';
 	my $reset = 0;
 	my $conf;
-- 
2.39.2