[PVE-User] qm remote-migrate

Stefan Lendl s.lendl at proxmox.com
Wed Oct 11 11:37:55 CEST 2023


Fabrizio Cuseo <f.cuseo at panservice.it> writes:

Thanks for providing the details.
I will investigate the situation, and we will consider a solution as
part of our upcoming SDN upgrade.

As a workaround for now, please remove the VLAN tag from the source
VM's network interface and try to migrate again. The target interface
must not have a VLAN tag assigned on the VM (which is why one is not
allowed there), because the VLAN is already configured via SDN.
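
For example, a minimal sketch based on the net0 line from your VM
config below (keeping the model and MAC as they are):

    qm set 4980 --net0 virtio=86:64:73:AB:33:AE,bridge=vmbr1

This drops the "tag=902" part; your --target-bridge mapping to vlan902
then provides the VLAN through the vnet on the target.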

Best regards,
Stefan

> ----- On 11 Oct 2023, at 9:41, Stefan Lendl s.lendl at proxmox.com wrote:
>
>> Fabrizio Cuseo <f.cuseo at panservice.it> writes:
>>
>> Hello Fabrizio,
>>
>> To better understand your issue: the source cluster has a VM on a
>> bridge with a VLAN tag assigned, and the target cluster does not have
>> the same setup but uses SDN (a vnet) without a VLAN tag on the VM.
>
> Yes, that's correct.
>
>
>> After the migration, did you manually change the VM's configuration
>> to match the new setup?
>
> I can't, because remote-migrate returns an error (I cannot specify a VLAN tag on that bridge).
>
>> What SDN configuration are you using on the target cluster?
>> Please send the output of the following:
>>
>> head -n -1 /etc/pve/sdn/*.cfg
>
> Can I send it to you in private? It is full of customers' names :/
>
> But here is part of the files:
>
> ==> /etc/pve/sdn/zones.cfg <==
> vlan: ZonaVLAN
>         bridge vmbr0
>         ipam pve
>
> qinq: VPVT
>         bridge vmbr0
>         tag 929
>         ipam pve
>         vlan-protocol 802.1q
>
>
> ==> /etc/pve/sdn/vnets.cfg <==
> vnet: test100
>         zone FWHous
>         alias Vlan 100 Test 921 qinq
>         tag 100
>
> vnet: vlan902
>         zone ZonaVLAN
>         alias Vlan 902 Private-Vlan
>         tag 902
>
>
>
>
>> What was the exact command you ran to start the remote-migrate process?
>
> qm remote-migrate 4980 4980 'host=172.16.20.41,apitoken=PVEAPIToken=root at pam!remotemigrate=hiddensecret,fingerprint=hiddenfingerprint' --target-bridge vlan902 --target-storage NfsMirror --online
>
>
>
>> Did you notice any suspicious log messages in the source cluster's
>> journal?
>
> Source:
>
> tunnel: -> sending command "version" to remote
> tunnel: <- got reply
> 2023-10-10 18:08:48 local WS tunnel version: 2
> 2023-10-10 18:08:48 remote WS tunnel version: 2
> 2023-10-10 18:08:48 minimum required WS tunnel version: 2
> websocket tunnel started
> 2023-10-10 18:08:48 starting migration of VM 4980 to node 'nodo01-cluster1' (172.16.20.41)
> tunnel: -> sending command "bwlimit" to remote
> tunnel: <- got reply
> 2023-10-10 18:08:49 found local disk 'CephCluster3Copie:vm-4980-disk-0' (attached)
> 2023-10-10 18:08:49 mapped: net0 from vmbr1 to vlan902
> 2023-10-10 18:08:49 Allocating volume for drive 'scsi0' on remote storage 'NfsMirror'..
> tunnel: -> sending command "disk" to remote
> tunnel: <- got reply
> 2023-10-10 18:08:49 volume 'CephCluster3Copie:vm-4980-disk-0' is 'NfsMirror:4980/vm-4980-disk-0.raw' on the target
> tunnel: -> sending command "config" to remote
> tunnel: <- got reply
> tunnel: -> sending command "start" to remote
> tunnel: <- got reply
> 2023-10-10 18:08:50 ERROR: online migrate failure - error - tunnel command '{"start_params":{"forcemachine":"pc-i440fx-8.0+pve0","forcecpu":null,"statefile":"unix","skiplock":1},"cmd":"start","migrate_opts":{"network":null,"nbd":{"scsi0":{"volid":"NfsMirror:4980/vm-4980-disk-0.raw","success":true,"drivestr":"NfsMirror:4980/vm-4980-disk-0.raw,discard=on,format=raw,size=64G"}},"nbd_proto_version":1,"storagemap":{"default":"NfsMirror"},"migratedfrom":"node06-cluster4","type":"websocket","remote_node":"nodo01-cluster1","spice_ticket":null}}' failed - failed to handle 'start' command - start failed: QEMU exited with code 1
> 2023-10-10 18:08:50 aborting phase 2 - cleanup resources
> 2023-10-10 18:08:50 migrate_cancel
> tunnel: -> sending command "stop" to remote
> tunnel: <- got reply
> tunnel: -> sending command "quit" to remote
> tunnel: <- got reply
> 2023-10-10 18:08:51 ERROR: migration finished with problems (duration 00:00:03)
>
> TASK ERROR: migration problems
>
>
>
>
>
> DESTINATION:
>
> mtunnel started
> received command 'version'
> received command 'bwlimit'
> received command 'disk'
> Formatting '/mnt/pve/NfsMirror/images/4980/vm-4980-disk-0.raw', fmt=raw size=68719476736 preallocation=off
> received command 'config'
> update VM 4980: -agent 1 -boot order=scsi0;ide2;net0 -cores 2 -ide2 none,media=cdrom -memory 8192 -name SeafileProTestS3 -net0 e1000=86:64:73:AB:33:AE,bridge=vlan902,tag=902 -numa 1 -ostype l26 -scsi0 NfsMirror:4980/vm-4980-disk-0.raw,discard=on,format=raw,size=64G -scsihw virtio-scsi-pci -smbios1 uuid=39a07e5b-16b5-45a3-aad9-4e3f2b4e87ce -sockets 2
> received command 'start'
> QEMU: vm vlans are not allowed on vnet vlan902 at /usr/share/perl5/PVE/Network/SDN/Zones/Plugin.pm line 228.
> QEMU: kvm: -netdev type=tap,id=net0,ifname=tap4980i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown: network script /var/lib/qemu-server/pve-bridge failed with status 6400
> received command 'stop'
> received command 'quit'
> freeing volume 'NfsMirror:4980/vm-4980-disk-0.raw' as part of cleanup
> disk image '/mnt/pve/NfsMirror/images/4980/vm-4980-disk-0.raw' does not exist
> switching to exit-mode, waiting for client to disconnect
> mtunnel exited
> TASK OK
>
>
> Source VM conf file:
>
> agent: 1
> balloon: 2048
> boot: order=scsi0;ide2;net0
> cores: 2
> ide2: none,media=cdrom
> memory: 4096
> name: SeafileProTestS3
> net0: virtio=86:64:73:AB:33:AE,bridge=vmbr1,tag=902
> numa: 1
> ostype: l26
> scsi0: CephCluster3Copie:vm-4980-disk-0,discard=on,size=64G
> scsihw: virtio-scsi-pci
> smbios1: uuid=39a07e5b-16b5-45a3-aad9-4e3f2b4e87ce
> sockets: 2
> vmgenid: 035cd26d-c74e-405e-9b4d-481f26d9cf5f
>
>
>
>
>> Usually I would ask you to send me the entire journal, but this is not
>> feasible on the mailing list. If necessary, I recommend you open a
>> thread in our community forum and I will take a look there.
>>
>> https://forum.proxmox.com/
>>
>> Best regards,
>> Stefan Lendl
>
>
> Thank you in advance, Fabrizio
>
>
>>
>>> Hello.
>>> I am testing qm remote-migrate with two PVE 8.0.4 clusters.
>>> The source cluster has one bridge with a VLAN ID on every VM; the
>>> destination cluster uses SDN and a different bridge (vnet) without a
>>> VLAN ID.
>>> If I migrate the VM, I need to specify both the bridge and the VLAN
>>> ID, but I have not found an option to do that.
>>>
>>> PS: after the migration, the VM runs on the new cluster without any
>>> problem, but on the source cluster it remains locked and in migration
>>> state, so I need to issue a "qm unlock <vmid>" and then stop/delete it.
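>>>
>>> For example, with the VMID from my test:
>>>
>>>   qm unlock 4980
>>>   qm stop 4980
>>>   qm destroy 4980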
>>>
>>> I know this is an experimental feature, so I am sending my test results.
>>>
>>> Regards, Fabrizio
>>>
>>>
>>> --
>>> ---
>>> Fabrizio Cuseo - mailto:f.cuseo at panservice.it
>>>
>
> --
> ---
> Fabrizio Cuseo - mailto:f.cuseo at panservice.it
> General Management - Panservice InterNetWorking
> Professional Services for Internet and Networking
> Panservice is a member of AIIP - RIPE Local Registry
> Phone: +39 0773 410020 - Fax: +39 0773 470219
> http://www.panservice.it  mailto:info at panservice.it
> National toll-free number: 800 901492



