[PVE-User] qm remote-migrate

DERUMIER, Alexandre alexandre.derumier at groupe-cyllene.com
Wed Oct 11 18:06:45 CEST 2023


Yes, currently there is no mapping for VLANs; it should be easy to add
in the code.

I had a similar case in production, migrating from a VM with a tagged
VLAN to a remote SDN VXLAN.
In the end I first moved to SDN without a VLAN on the source cluster,
then migrated to the target cluster.

But doing it directly in migrate could be better indeed :)
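Roughly, such a mapping would just rewrite each netX line of the VM config during the migration. As an illustration only (a hypothetical helper, not the actual qemu-server code, which is Perl):

```shell
# Hypothetical sketch of the netX rewrite a VLAN-aware mapping in
# qm remote-migrate could perform; NOT the real implementation.
# map_net NETSTR OLD_BRIDGE NEW_BRIDGE: swap the bridge and drop the
# guest VLAN tag, since an SDN vnet carries the tag itself.
map_net() {
    printf '%s\n' "$1" | sed -e "s/bridge=$2/bridge=$3/" -e 's/,tag=[0-9]*//'
}

map_net 'virtio=86:64:73:AB:33:AE,bridge=vmbr1,tag=902' vmbr1 vlan902
# prints: virtio=86:64:73:AB:33:AE,bridge=vlan902
```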




-------- Original message --------
From: Fabrizio Cuseo <f.cuseo at panservice.it>
Reply-To: Fabrizio Cuseo <f.cuseo at panservice.it>, Proxmox VE user
list <pve-user at lists.proxmox.com>
To: Stefan Lendl <s.lendl at proxmox.com>
Cc: pve-user <pve-user at pve.proxmox.com>
Subject: Re: [PVE-User] qm remote-migrate
Date: 11/10/2023 12:14:24



----- On 11 Oct 2023, at 11:37, Stefan Lendl s.lendl at proxmox.com
wrote:

> Fabrizio Cuseo <f.cuseo at panservice.it> writes:
> 
> Thanks for providing the details.
> I will investigate the situation and we will consider a solution for
> our
> upcoming SDN upgrade.
> 
> As a solution for now, please try to remove the VLAN tag from the
> source VM and try to migrate again. The target network interface does
> not require a VLAN tag on the VM (and therefore does not allow one),
> because the tag is already configured via SDN.

Yes, I have done that with a test VM, but I can't do it with production
VMs, because if I remove the VLAN tag, the source VM will stop working.
But I can install and configure SDN on the source cluster (upgrading it
to the latest 8.x), create a VLAN zone and a vnet with that VLAN ID,
change the source bridge to the vnet bridge while removing the VLAN
tag, and then migrate. (I have just tested this and it seems to work.)
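Sketched as commands (VM 4980 and vnet "vlan902" are the names from this thread; the token and fingerprint are placeholders, and DRY_RUN=1, the default, only prints each command so the sequence can be reviewed before touching a production cluster):

```shell
# Dry-run sketch of the workaround: SDN VLAN zone + vnet on the source
# cluster, retag the NIC, then remote-migrate without a guest VLAN tag.
DRY_RUN="${DRY_RUN:-1}"
run() {
    # With DRY_RUN=1, print the command instead of executing it.
    if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

VMID=4980

# 1) On the source cluster (after upgrading to PVE 8.x), create a VLAN
#    zone and a vnet carrying the tag the VM currently uses (902).
run pvesh create /cluster/sdn/zones --type vlan --zone ZonaVLAN --bridge vmbr0
run pvesh create /cluster/sdn/vnets --vnet vlan902 --zone ZonaVLAN --tag 902
run pvesh set /cluster/sdn    # apply the pending SDN configuration

# 2) Move the NIC from the tagged bridge to the vnet, dropping the tag.
run qm set "$VMID" --net0 "virtio=86:64:73:AB:33:AE,bridge=vlan902"

# 3) Migrate; the target vnet no longer needs (or allows) a guest tag.
run qm remote-migrate "$VMID" "$VMID" \
    'host=172.16.20.41,apitoken=PVEAPIToken=root@pam!remotemigrate=hiddensecret,fingerprint=hiddenfingerprint' \
    --target-bridge vlan902 --target-storage NfsMirror --online
```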

Thank you again, Fabrizio 



> 
> Best regards,
> Stefan
> 
> > ----- On 11 Oct 2023, at 9:41, Stefan Lendl s.lendl at proxmox.com
> > wrote:
> > 
> > > Fabrizio Cuseo <f.cuseo at panservice.it> writes:
> > > 
> > > Hello Fabrizio,
> > > 
> > > To better understand your issue: the source cluster has a VM on a
> > > bridge with a VLAN tag assigned, while the target cluster does not
> > > have the same setup but uses SDN (a vnet) without a VLAN.
> > 
> > Yes, that's correct.
> > 
> > 
> > > After migration, did you manually change the VM's configuration to
> > > match the new setup?
> > 
> > I can't, because remote-migrate returns an error (I cannot specify a
> > VLAN tag on that bridge).
> > 
> > > What SDN configuration are you using on the target cluster?
> > > Please send the output of the following:
> > > 
> > > head -n -1 /etc/pve/sdn/*.cfg
> > 
> > Can I send it to you privately? It is full of customers' names :/
> > 
> > But here is part of the files:
> > 
> > ==> /etc/pve/sdn/zones.cfg <==
> > vlan: ZonaVLAN
> >         bridge vmbr0
> >         ipam pve
> > 
> > qinq: VPVT
> >         bridge vmbr0
> >         tag 929
> >         ipam pve
> >         vlan-protocol 802.1q
> > 
> > 
> > ==> /etc/pve/sdn/vnets.cfg <==
> > vnet: test100
> >         zone FWHous
> >         alias Vlan 100 Test 921 qinq
> >         tag 100
> > 
> > vnet: vlan902
> >         zone ZonaVLAN
> >         alias Vlan 902 Private-Vlan
> >         tag 902
> > 
> > 
> > 
> > 
> > > What was the exact command you ran to start the remote-migrate
> > > process?
> > 
> > qm remote-migrate 4980 4980 'host=172.16.20.41,apitoken=PVEAPIToken=root at pam!remotemigrate=hiddensecret,fingerprint=hiddenfingerprint' --target-bridge vlan902 --target-storage NfsMirror --online
> > 
> > 
> > 
> > > Did you notice any suspicious log messages in the source cluster's
> > > journal?
> > 
> > Source:
> > 
> > tunnel: -> sending command "version" to remote
> > tunnel: <- got reply
> > 2023-10-10 18:08:48 local WS tunnel version: 2
> > 2023-10-10 18:08:48 remote WS tunnel version: 2
> > 2023-10-10 18:08:48 minimum required WS tunnel version: 2
> > websocket tunnel started
> > 2023-10-10 18:08:48 starting migration of VM 4980 to node 'nodo01-cluster1' (172.16.20.41)
> > tunnel: -> sending command "bwlimit" to remote
> > tunnel: <- got reply
> > 2023-10-10 18:08:49 found local disk 'CephCluster3Copie:vm-4980-disk-0' (attached)
> > 2023-10-10 18:08:49 mapped: net0 from vmbr1 to vlan902
> > 2023-10-10 18:08:49 Allocating volume for drive 'scsi0' on remote storage 'NfsMirror'..
> > tunnel: -> sending command "disk" to remote
> > tunnel: <- got reply
> > 2023-10-10 18:08:49 volume 'CephCluster3Copie:vm-4980-disk-0' is 'NfsMirror:4980/vm-4980-disk-0.raw' on the target
> > tunnel: -> sending command "config" to remote
> > tunnel: <- got reply
> > tunnel: -> sending command "start" to remote
> > tunnel: <- got reply
> > 2023-10-10 18:08:50 ERROR: online migrate failure - error - tunnel command
> > '{"start_params":{"forcemachine":"pc-i440fx-8.0+pve0","forcecpu":null,"statefile":"unix","skiplock":1},"cmd":"start","migrate_opts":{"network":null,"nbd":{"scsi0":{"volid":"NfsMirror:4980/vm-4980-disk-0.raw","success":true,"drivestr":"NfsMirror:4980/vm-4980-disk-0.raw,discard=on,format=raw,size=64G"}},"nbd_proto_version":1,"storagemap":{"default":"NfsMirror"},"migratedfrom":"node06-cluster4","type":"websocket","remote_node":"nodo01-cluster1","spice_ticket":null}}'
> > failed - failed to handle 'start' command - start failed: QEMU exited with code 1
> > 2023-10-10 18:08:50 aborting phase 2 - cleanup resources
> > 2023-10-10 18:08:50 migrate_cancel
> > tunnel: -> sending command "stop" to remote
> > tunnel: <- got reply
> > tunnel: -> sending command "quit" to remote
> > tunnel: <- got reply
> > 2023-10-10 18:08:51 ERROR: migration finished with problems (duration 00:00:03)
> > 
> > TASK ERROR: migration problems
> > 
> > 
> > 
> > 
> > 
> > DESTINATION:
> > 
> > mtunnel started
> > received command 'version'
> > received command 'bwlimit'
> > received command 'disk'
> > Formatting '/mnt/pve/NfsMirror/images/4980/vm-4980-disk-0.raw', fmt=raw size=68719476736 preallocation=off
> > received command 'config'
> > update VM 4980: -agent 1 -boot order=scsi0;ide2;net0 -cores 2 -ide2 none,media=cdrom -memory 8192 -name SeafileProTestS3 -net0 e1000=86:64:73:AB:33:AE,bridge=vlan902,tag=902 -numa 1 -ostype l26 -scsi0 NfsMirror:4980/vm-4980-disk-0.raw,discard=on,format=raw,size=64G -scsihw virtio-scsi-pci -smbios1 uuid=39a07e5b-16b5-45a3-aad9-4e3f2b4e87ce -sockets 2
> > received command 'start'
> > QEMU: vm vlans are not allowed on vnet vlan902 at /usr/share/perl5/PVE/Network/SDN/Zones/Plugin.pm line 228.
> > QEMU: kvm: -netdev type=tap,id=net0,ifname=tap4980i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown: network script /var/lib/qemu-server/pve-bridge failed with status 6400
> > received command 'stop'
> > received command 'quit'
> > freeing volume 'NfsMirror:4980/vm-4980-disk-0.raw' as part of cleanup
> > disk image '/mnt/pve/NfsMirror/images/4980/vm-4980-disk-0.raw' does not exist
> > switching to exit-mode, waiting for client to disconnect
> > mtunnel exited
> > TASK OK
> > 
> > 
> > Source VM conf file:
> > 
> > agent: 1
> > balloon: 2048
> > boot: order=scsi0;ide2;net0
> > cores: 2
> > ide2: none,media=cdrom
> > memory: 4096
> > name: SeafileProTestS3
> > net0: virtio=86:64:73:AB:33:AE,bridge=vmbr1,tag=902
> > numa: 1
> > ostype: l26
> > scsi0: CephCluster3Copie:vm-4980-disk-0,discard=on,size=64G
> > scsihw: virtio-scsi-pci
> > smbios1: uuid=39a07e5b-16b5-45a3-aad9-4e3f2b4e87ce
> > sockets: 2
> > vmgenid: 035cd26d-c74e-405e-9b4d-481f26d9cf5f
> > 
> > 
> > 
> > 
> > > Usually I would ask you to send me the entire journal, but this is
> > > not feasible on the mailing list. If necessary, I would recommend
> > > you open a thread in our community forum and I will take a look
> > > there.
> > 
> > > https://forum.proxmox.com/
> > > 
> > > Best regards,
> > > Stefan Lendl
> > 
> > 
> > Thank you in advance, Fabrizio
> > 
> > 
> > > 
> > > > Hello.
> > > > I am testing qm remote-migrate with two PVE 8.0.4 clusters.
> > > > The source cluster has one bridge with a VLAN ID on every VM; the
> > > > destination cluster uses SDN and a different bridge (vnet)
> > > > without a VLAN ID.
> > > > If I migrate the VM, I need to specify both the bridge and the
> > > > VLAN ID, but I have not found an option to do so.
> > > > 
> > > > PS: after migration, the VM runs on the new cluster without any
> > > > problem, but on the source cluster it remains locked and in
> > > > migration state, so I need to issue a "qm unlock <vmid>" and then
> > > > stop/delete it.
> > > > 
> > > > I know this is an experimental feature, so I am sending my test
> > > > results.
> > > > 
> > > > Regards, Fabrizio
> > > > 
> > > > 
> > > > --
> > > > ---
> > > > Fabrizio Cuseo - mailto:f.cuseo at panservice.it
> > > > 
> > > > _______________________________________________
> > > > pve-user mailing list
> > > > pve-user at lists.proxmox.com
> > > > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> > 
> > --
> > ---
> > Fabrizio Cuseo - mailto:f.cuseo at panservice.it
> > Direzione Generale - Panservice InterNetWorking
> > Servizi Professionali per Internet ed il Networking
> > Panservice e' associata AIIP - RIPE Local Registry
> > Phone: +39 0773 410020 - Fax: +39 0773 470219
> > http://www.panservice.it - mailto:info at panservice.it
> > Numero verde nazionale: 800 901492
