[pve-devel] [POC qemu-server] fix 3303: allow "live" upgrade of qemu version

Fabian Ebner f.ebner at proxmox.com
Thu Apr 8 12:33:10 CEST 2021


The code is in a very early state; I'm just sending this to discuss the idea.
I didn't do a whole lot of testing yet, but it does seem to work.

The idea is rather simple:
1. save the state to ramfs
2. stop the VM
3. start the VM loading the state
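
As a rough sketch, the three steps boil down to the following flow. The
qm_* helpers and the statefile path here are placeholders for illustration
only (stubs that just echo what they would do), not the actual helpers from
this series (savevm_monitor, upgrade_qemu, etc.):

```shell
#!/bin/sh
VMID=123
# /run is tmpfs, i.e. RAM-backed, so the state never touches a disk
STATE="/run/qemu-server/$VMID.upgrade-state"

# Stub helpers standing in for the real monitor/start/stop plumbing:
qm_savevm() { echo "savevm $1 -> $2"; }  # 1. save the state to ramfs
qm_stop()   { echo "stop $1"; }          # 2. stop the VM (old binary)
qm_start()  { echo "start $1 from $2"; } # 3. start again, loading the state

qm_savevm "$VMID" "$STATE"
qm_stop   "$VMID"
qm_start  "$VMID" "$STATE"
```

Since the old process is fully gone before the new one starts, there is
never more than one instance per VM ID, at the cost of the downtime being
save time + start time instead of a single migration downtime.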

This approach solves the problem that our stack is (currently) not designed to
have multiple instances with the same VM ID running. To support that, we'd need
to handle config locking, sockets, the PID file, passthrough resources(?), etc.

Another nice feature of this approach is that it doesn't require touching the
vm_start or migration code at all, avoiding further bloat.


Thanks to Fabian G. and Stefan for inspiring this idea:

Fabian G. suggested using the suspend-to-disk + start route if the required
changes to our stack would turn out to be infeasible.

Stefan suggested migrating to a dummy VM (outside our stack) which just holds
the state, and migrating back right away. It turns out the dummy VM is in fact
not even needed ;) If we really care about the smallest possible downtime, that
approach might still be the best, though we'd need to stop the dummy VM while
the backwards migration runs (resulting in two times the migration downtime).
But it has more moving parts and requires some migration/startup changes.


Fabian Ebner (6):
  create vmstate_size helper
  create savevm_monitor helper
  draft of upgrade_qemu function
  draft of qemuupgrade API call
  add timing for testing
  add usleep parameter to savevm_monitor

 PVE/API2/Qemu.pm  |  60 ++++++++++++++++++++++
 PVE/QemuConfig.pm |  10 +---
 PVE/QemuServer.pm | 125 +++++++++++++++++++++++++++++++++++++++-------
 3 files changed, 170 insertions(+), 25 deletions(-)

-- 
2.20.1
