[pve-devel] [PATCH v4 qemu-server 2/2] remote-migration: add target-cpu && target-reboot params
DERUMIER, Alexandre
alexandre.derumier at groupe-cyllene.com
Wed Oct 25 18:01:30 CEST 2023
>
>>Is it required for this series?
For this series, no.
It only focuses on migrating to a remote cluster with a different CPU without
too much downtime.
>> Unused disks can just be migrated
>>offline via storage_migrate(), or?
Currently, unused disks can't be migrated through the http tunnel for
remote-migration:
2023-10-25 17:51:38 ERROR: error - tunnel command '{"format":"raw","migration_snapshot":"1","export_formats":"zfs","allow_rename":"1","snapshot":"__migration__","volname":"vm-1112-disk-1","cmd":"disk-import","storage":"targetstorage","with_snapshots":1}' failed - failed to handle 'disk-import' command - no matching import/export format found for storage 'preprodkvm'
2023-10-25 17:51:38 aborting phase 1 - cleanup resources
tunnel: -> sending command "quit" to remote
tunnel: <- got reply
tunnel: CMD channel closed, shutting down
2023-10-25 17:51:39 ERROR: migration aborted (duration 00:00:01): error - tunnel command '{"format":"raw","migration_snapshot":"1","export_formats":"zfs","allow_rename":"1","snapshot":"__migration__","volname":"vm-1112-disk-1","cmd":"disk-import","storage":"targetstorage","with_snapshots":1}' failed - failed to handle 'disk-import' command - no matching import/export format found for storage 'preprodkvm'
migration aborted
>>If we want to switch to migrating disks offline via QEMU instead of our
>>current storage_migrate(), going for QEMU storage daemon + NBD seems the
>>most natural to me.
Yes, I'm more in favor of this solution.
>>If it's not too complicated to temporarily attach the disks to the VM,
>>that can be done too, but is less re-usable (e.g. pure offline migration
>>won't benefit from that).
Not sure about attaching/detaching them temporarily one by one, or attaching
all the devices at once (but that needs enough free controller slots).
qemu-storage-daemon seems to be the less hacky solution ^_^
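Something like this, maybe (just a rough sketch of the export side; all
paths, socket, node names and export names below are placeholders, nothing
that exists in the code today):

  # on the source, export the unused disk through an NBD server
  qemu-storage-daemon \
    --blockdev driver=file,node-name=disk1,filename=/var/lib/vz/images/1112/vm-1112-disk-1.raw \
    --nbd-server addr.type=unix,addr.path=/run/qsd-1112.sock \
    --export type=nbd,id=export0,node-name=disk1,name=vm-1112-disk-1,writable=off

  # the target side could then pull the image, e.g. with qemu-img
  qemu-img convert -p -O raw \
    "nbd+unix:///vm-1112-disk-1?socket=/run/qsd-1112.sock" \
    /path/to/target/vm-1112-disk-1.raw

How the NBD traffic would then be forwarded through the migration tunnel is
of course still an open question.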
> but if it works, I think we'll need to add config generation in
> pve-storage for the different blockdev drivers
>
> like:
>
> --blockdev driver=file,node-name=file0,filename=vm.img
>
> --blockdev driver=rbd,node-name=rbd0,pool=my-pool,image=vm01
>
>>What other special cases besides (non-krbd) RBD are there? If it's just
>>that, I'd much rather keep the special handling in QEMU itself than burden
>>all other storage plugins with implementing something specific to VMs.
Not sure; maybe glusterfs, .raw (should work for block devices like lvm,
zfs), .qcow2.
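For example (only an illustration of the node layering, not existing code),
a qcow2 file needs a protocol node plus a format node, while a raw volume on
a block storage (lvm, zfs) could use the host_device driver:

  # qcow2 image on a file/dir storage: file protocol node + qcow2 format node
  --blockdev driver=file,node-name=proto0,filename=/var/lib/vz/images/111/vm-111-disk-0.qcow2 \
  --blockdev driver=qcow2,node-name=fmt0,file=proto0

  # raw volume on a block storage: host_device protocol node + raw format node
  --blockdev driver=host_device,node-name=proto1,filename=/dev/zvol/rpool/data/vm-111-disk-1 \
  --blockdev driver=raw,node-name=fmt1,file=proto1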
>>Or is there a way to use the path from the storage plugin somehow like we
>>do at the moment, i.e.
>>"rbd:rbd/vm-111-disk-1:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/rbd.keyring"?
I don't think it's possible just like this. I need to do more testing and
look at how libvirt does it first, as there is not much documentation about
it.
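If I understand it correctly, the pseudo-path would have to be split into
structured options, roughly like this (illustration only; the keyring would
probably have to be passed through a 'secret' object or the ceph.conf
instead):

  # legacy -drive style pseudo-path:
  #   rbd:rbd/vm-111-disk-1:conf=/etc/pve/ceph.conf:id=admin:keyring=...
  # rough -blockdev equivalent:
  --blockdev driver=rbd,node-name=rbd0,pool=rbd,image=vm-111-disk-1,conf=/etc/pve/ceph.conf,user=admin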
> So maybe it'll take a little bit more time.
>
> (Maybe a second patch series later to implement it)
>
>>Yes, I think that makes sense as a dedicated series.