[pve-devel] [PATCH v4 qemu-server 2/2] remote-migration: add target-cpu && target-reboot params

Fiona Ebner f.ebner at proxmox.com
Fri Oct 27 11:19:33 CEST 2023


On 25.10.23 at 18:01, DERUMIER, Alexandre wrote:
>>> Unused disks can just be migrated
>>> offline via storage_migrate(), or? 
> 
> currently, unused disks can't be migrated through the HTTP tunnel for
> remote-migration:
> 
> 2023-10-25 17:51:38 ERROR: error - tunnel command
> '{"format":"raw","migration_snapshot":"1","export_formats":"zfs","allow
> _rename":"1","snapshot":"__migration__","volname":"vm-1112-disk-
> 1","cmd":"disk-import","storage":"targetstorage","with_snapshots":1}'
> failed - failed to handle 'disk-import' command - no matching
> import/export format found for storage 'preprodkvm'
> 2023-10-25 17:51:38 aborting phase 1 - cleanup resources
> tunnel: -> sending command "quit" to remote
> tunnel: <- got reply
> tunnel: CMD channel closed, shutting down
> 2023-10-25 17:51:39 ERROR: migration aborted (duration 00:00:01): error
> - tunnel command
> '{"format":"raw","migration_snapshot":"1","export_formats":"zfs","allow
> _rename":"1","snapshot":"__migration__","volname":"vm-1112-disk-
> 1","cmd":"disk-import","storage":"targetstorage","with_snapshots":1}'
> failed - failed to handle 'disk-import' command - no matching
> import/export format found for storage 'preprodkvm'
> migration aborted
> 

Well, yes, they can. But there needs to be a common import/export format
between the storage types. Which admittedly is a bit limited for certain
storage types, e.g. ZFS only supports ZFS and RBD does not implement
import/export at all yet (because in a single cluster it wasn't needed).


>>> If we want to switch to migrating
>>> disks offline via QEMU instead of our current storage_migrate(),
>>> going
>>> for QEMU storage daemon + NBD seems the most natural to me.
> 
> Yes, I'm more for this solution.
> 
>>> If it's not too complicated to temporarily attach the disks to the
>>> VM,
>>> that can be done too, but is less re-usable (e.g. pure offline
>>> migration
>>> won't benefit from that).
> 
> Not sure whether to attach/detach them temporarily one by one, or to
> attach all devices at once (but that needs enough controller slots).
> 

I think you can attach them to the VM without attaching to a controller
by using QMP blockdev-add, but...

> qemu storage daemon seems to be a less hacky solution ^_^
> 

...sure, this should be nicer and more re-usable.
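A rough sketch of what that could look like, with invented socket paths, node names and export names (the real integration would live in qemu-server): the source side runs qemu-storage-daemon exporting the disk over NBD on a Unix socket, and the target side opens it with a matching -blockdev option.

```python
# Hypothetical sketch only: all paths, node names and export names are
# made-up examples, not anything qemu-server currently generates.

def qsd_export_args(image_path, export_name, socket_path):
    """Build qemu-storage-daemon arguments serving a raw image over NBD."""
    return [
        "qemu-storage-daemon",
        # protocol node (the file itself) and format node (raw) on top
        "--blockdev", f"driver=file,node-name=file0,filename={image_path}",
        "--blockdev", "driver=raw,node-name=fmt0,file=file0",
        # NBD server listening on a Unix socket
        "--nbd-server", f"addr.type=unix,addr.path={socket_path}",
        # export the format node, writable so the target can mirror into it
        "--export", f"type=nbd,id=exp0,node-name=fmt0,name={export_name},writable=on",
    ]

def nbd_client_blockdev(export_name, socket_path):
    """Build the matching client-side -blockdev option value."""
    return (f"driver=nbd,node-name=mig0,"
            f"server.type=unix,server.path={socket_path},export={export_name}")
```

The attraction is that the exact same export mechanism works whether the source VM is running or not, which is what makes it reusable for pure offline migration as well.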

> 
>> but if it works, I think we'll need to add config generation in
>> pve-storage for the different block drivers
>>
>>
>> like:
>>
>> -blockdev driver=file,node-name=file0,filename=vm.img
>>
>> -blockdev driver=rbd,node-name=rbd0,pool=my-pool,image=vm01
>>
> 
>>> What other special cases besides (non-krbd) RBD are there? If it's
>>> just
>>> that, I'd much rather keep the special handling in QEMU itself then
>>> burden all other storage plugins with implementing something specific
>>> to
>>> VMs.
> 
> not sure, maybe glusterfs, .raw (should work for block devices like
> lvm, zfs), .qcow2
> 

There's a whole lot of drivers:
https://qemu.readthedocs.io/en/v8.1.0/interop/qemu-qmp-ref.html#qapidoc-883

But e.g. for NFS, we don't necessarily need it and can just use
qcow2/raw. Currently, with -drive we also just treat it like any other file.
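For the file-based case, the -blockdev/blockdev-add structure is just a format node layered over the file protocol node. A minimal sketch of the blockdev-add arguments for a qcow2 image on such a storage (the path and node names are invented examples):

```python
# Sketch of QMP blockdev-add arguments for a qcow2 image on a file-based
# storage (e.g. an NFS mount). Node names here are arbitrary examples.

def file_qcow2_blockdev(path):
    return {
        "driver": "qcow2",       # format driver on top ...
        "node-name": "fmt0",
        "file": {                # ... of the file protocol driver
            "driver": "file",
            "node-name": "file0",
            "filename": path,
        },
    }
```

So for plain files nothing storage-specific is needed; the special cases are the protocol drivers like rbd that don't go through a filename.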

I'd like to keep the logic for how to construct the -blockdev command
line option (mostly) in qemu-server itself. But I guess we can't avoid
some amount of coupling. Currently, for -drive we have the coupling in
path() which can e.g. return rbd: or gluster: and then QEMU will parse
what driver to use from that path.

Two approaches that make sense to me (no real preference at the moment):

1. Have a storage plugin method which tells qemu-server about the
necessary driver and properties for opening the image. E.g. return the
properties as a hash and then have qemu-server join them together and
then add the generic properties (e.g. aio,node-name) to construct the
full -blockdev option.

2. Do everything in qemu-server and special case for certain storage
types that have a dedicated driver. Still needs to get the info like
pool name from the RBD storage of course, but that should be possible
with existing methods.
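A minimal sketch of what approach 1 could look like, with invented method and property names (nothing here is an existing pve-storage API): the plugin returns only the protocol-specific properties, and qemu-server adds the generic ones and joins everything into the option value.

```python
# Hypothetical sketch of approach 1; function names and the returned
# properties are made up for illustration.

def rbd_plugin_blockdev_props(pool, image):
    """What a (hypothetical) RBD plugin method might return."""
    return {"driver": "rbd", "pool": pool, "image": image}

def build_blockdev_opt(plugin_props, node_name):
    """qemu-server side: add generic properties and join into one string."""
    props = {**plugin_props, "node-name": node_name}
    return ",".join(f"{k}={v}" for k, v in props.items())
```

Approach 2 would instead keep a per-storage-type branch in qemu-server itself that queries the RBD storage config for the pool name and builds the same string.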

Happy to hear other suggestions/opinions.

> 
>>> Or is there a way to use the path from the storage plugin somehow
>>> like
>>> we do at the moment, i.e.
>>> "rbd:rbd/vm-111-disk-
>>> 1:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/rbd.key
>>> ring"?
> 
> I don't think it's possible just like this. I need to do more tests,
> looking at libvirt, because there's not too much doc about it.
> 

Probably they decided to get rid of this magic for the newer -blockdev
variant. I tried to cheat by using driver=file and specifying the
"rbd:"-path as the filename, but it doesn't work :P

