[pve-devel] [PATCH v4 qemu-server 2/2] remote-migration: add target-cpu && target-reboot params
Fiona Ebner
f.ebner at proxmox.com
Wed Oct 25 10:30:48 CEST 2023
On 24.10.23 at 14:20, DERUMIER, Alexandre wrote:
>>> So I think the best way for now is to restart the target vm.
>>>
>>> Sure! Going with that is a much cleaner approach then.
>
> I'll try to send a v5 today with your last comments.
>
> I don't manage yet the unused disks, I need to test with blockdev,
>
Is it required for this series? Unused disks can just be migrated
offline via storage_migrate(), can't they? If we want to switch to migrating
disks offline via QEMU instead of our current storage_migrate(), going
for QEMU storage daemon + NBD seems the most natural to me.
If it's not too complicated to temporarily attach the disks to the VM,
that can be done too, but is less re-usable (e.g. pure offline migration
won't benefit from that).
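For reference, an offline export via the QEMU storage daemon + NBD could look roughly like the following (a sketch only; the image path, socket path, and node/export names are made up for illustration, and the real integration would of course go through qemu-server):

```shell
# Sketch: export a disk image over NBD with qemu-storage-daemon,
# so the target side can mirror from it. All names/paths are hypothetical.
qemu-storage-daemon \
    --blockdev driver=file,node-name=file0,filename=/var/lib/vz/images/111/vm-111-disk-1.raw \
    --nbd-server addr.type=unix,addr.path=/run/qemu-server/111-storage.sock \
    --export type=nbd,id=export0,node-name=file0,writable=on
```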
> but if it works, I think we'll need to add config generation in pve-
> storage for the different block drivers
>
>
> like:
>
> --blockdev driver=file,node-name=file0,filename=vm.img
>
> --blockdev driver=rbd,node-name=rbd0,pool=my-pool,image=vm01
>
What other special cases besides (non-krbd) RBD are there? If it's just
that, I'd much rather keep the special handling in QEMU itself than
burden all other storage plugins with implementing something specific to
VMs.
Or is there a way to use the path from the storage plugin somehow like
we do at the moment, i.e.
"rbd:rbd/vm-111-disk-1:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/rbd.keyring"?
> So maybe it'll take a little bit more time.
>
> (Maybe a second patch series later to implement it)
>
Yes, I think that makes sense as a dedicated series.