[PVE-User] Migration error!

Gilberto Nunes gilberto.nunes32 at gmail.com
Fri Aug 25 18:42:20 CEST 2017


I suppose I was running into network issues here!
Now everything is OK!
The only thing that still bothers me is having to append --with-local-disks on the
CLI!
I have already asked the devel folks to add a checkbox to the migration dialog in the
web interface, so the --with-local-disks option can be selected there.
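For reference, the full CLI invocation that works here (same VM ID and nodes as in the log quoted below) is:

  qm migrate 101 prox01 --online --with-local-disks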

Cheers




2017-08-25 12:54 GMT-03:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:

> Well... I did the first replication, removed the job from prox01, and I
> was able to migrate the VM to server prox02 without downtime!
> However, I can't go the reverse way, i.e. migrate the VM from prox02 back to
> prox01...
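Roughly, that sequence on the CLI looks like this (the replication job ID 101-0 is an assumption here; pvesr status shows the real one):

  pvesr status                   # note the JobID of the replication job for VM 101
  pvesr delete 101-0             # assumed job ID; remove the replication job first
  qm migrate 101 prox02 --online --with-local-disks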
>
>
> prox02:~# qm migrate 101 prox01 --online --with-local-disks
> 2017-08-25 12:43:54 starting migration of VM 101 to node 'prox01'
> (10.1.1.10)
> 2017-08-25 12:43:55 found local disk 'local-zfs:vm-101-disk-1' (in current
> VM config)
> 2017-08-25 12:43:55 copying disk images
> 2017-08-25 12:43:55 starting VM 101 on remote node 'prox01'
> 2017-08-25 12:44:02 start remote tunnel
> 2017-08-25 12:44:03 ssh tunnel ver 1
> 2017-08-25 12:44:03 starting storage migration
> 2017-08-25 12:44:03 virtio0: start migration to to nbd:10.1.1.10:60000:exportname=drive-virtio0
> drive mirror is starting for drive-virtio0
> drive-virtio0: transferred: 0 bytes remaining: 5368709120 bytes total: 5368709120 bytes progression: 0.00 % busy: 1 ready: 0
> drive-virtio0: transferred: 143654912 bytes remaining: 5225054208 bytes total: 5368709120 bytes progression: 2.68 % busy: 1 ready: 0
> drive-virtio0: transferred: 286261248 bytes remaining: 5082447872 bytes total: 5368709120 bytes progression: 5.33 % busy: 1 ready: 0
> ...
> ...
> ...
> drive-virtio0: transferred: 3728736256 bytes remaining: 1640300544 bytes total: 5369036800 bytes progression: 69.45 % busy: 1 ready: 0
> drive-virtio0: Cancelling block job
> drive-virtio0: Done.
> 2017-08-25 12:51:46 ERROR: online migrate failure - mirroring error:
> drive-virtio0: mirroring has been cancelled
> 2017-08-25 12:51:46 aborting phase 2 - cleanup resources
> 2017-08-25 12:51:46 migrate_cancel
>
>
> Why not?
>
>
>
> 2017-08-25 11:29 GMT-03:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:
>
>> Unless I turn off the storage replication on the KVM VM!
>> I will try it!
>>
>>
>> Thank you
>>
>> Best regards
>>
>>
>> Gilberto Ferreira
>>
>> Linux IT Consultant | IaaS Proxmox, CloudStack, KVM | Zentyal Server |
>> Zimbra Mail Server
>>
>> (47) 3025-5907
>> (47) 99676-7530
>>
>> Skype: gilberto.nunes36
>>
>>
>> konnectati.com.br <http://www.konnectati.com.br/>
>>
>>
>> https://www.youtube.com/watch?v=dsiTPeNWcSE
>>
>>
>> 2017-08-25 11:14 GMT-03:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:
>>
>>> Yeah!
>>> That explains why storage replication works fine here with a CT!
>>> Thanks for the reminder!
>>>
>>> 2017-08-25 11:11 GMT-03:00 Yannis Milios <yannis.milios at gmail.com>:
>>>
>>>> My understanding is that with pvesr, live migration of a guest VM is not
>>>> supported:
>>>>
>>>> "Virtual guest with active replication cannot currently use online
>>>> migration. Offline migration is supported in general"
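So while a replication job is active on a VM, only the offline form should go through, e.g. with the VM from earlier in this thread:

  qm migrate 100 prox02          # no --online: offline migration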
>>>>
>>>> On Fri, 25 Aug 2017 at 16:48, Fábio Rabelo <fabio at fabiorabelo.wiki.br>
>>>> wrote:
>>>>
>>>> > Sorry... my knowledge does not go beyond this point...
>>>> >
>>>> > I abandoned shared storage years ago because I did not find it trustworthy.
>>>> >
>>>> >
>>>> > Fábio Rabelo
>>>> >
>>>> > 2017-08-25 10:22 GMT-03:00 Gilberto Nunes <gilberto.nunes32 at gmail.com
>>>> >:
>>>> > > According to the design model of Proxmox Storage Replication, there is a
>>>> > > schedule for the sync.
>>>> > > And of course I set up the VM and I have scheduled the sync.
>>>> > > But it is still stuck!
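For what it's worth, the configured jobs with their schedule, plus the last/next sync, can be checked from the shell:

  pvesr list      # configured replication jobs, including the schedule
  pvesr status    # last sync, next sync, duration and failure count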
>>>> > >
>>>> > >
>>>> > >
>>>> > >
>>>> > >
>>>> > > 2017-08-25 10:19 GMT-03:00 Fábio Rabelo <fabio at fabiorabelo.wiki.br
>>>> >:
>>>> > >
>>>> > >> I have never used ZFS on Linux.
>>>> > >>
>>>> > >> But in the Solaris OS family, this replication must be set up beforehand...
>>>> > >>
>>>> > >> Can someone with some experience with ZFS on Linux confirm or deny that?
>>>> > >>
>>>> > >>
>>>> > >> Fábio Rabelo
>>>> > >>
>>>> > >> 2017-08-25 10:11 GMT-03:00 Gilberto Nunes <
>>>> gilberto.nunes32 at gmail.com>:
>>>> > >> > So... one of the premises of ZFS replication is to replicate a local
>>>> > >> > volume to another node.
>>>> > >> > Or am I wrong?
>>>> > >> >
>>>> > >> >
>>>> > >> > Thank you
>>>> > >> >
>>>> > >> > Best regards
>>>> > >> >
>>>> > >> >
>>>> > >> > Gilberto Ferreira
>>>> > >> >
>>>> > >> > Linux IT Consultant | IaaS Proxmox, CloudStack, KVM | Zentyal Server |
>>>> > >> > Zimbra Mail Server
>>>> > >> >
>>>> > >> > (47) 3025-5907
>>>> > >> > (47) 99676-7530
>>>> > >> >
>>>> > >> > Skype: gilberto.nunes36
>>>> > >> >
>>>> > >> >
>>>> > >> > konnectati.com.br <http://www.konnectati.com.br/>
>>>> > >> >
>>>> > >> >
>>>> > >> > https://www.youtube.com/watch?v=dsiTPeNWcSE
>>>> > >> >
>>>> > >> >
>>>> > >> > 2017-08-25 10:07 GMT-03:00 Fábio Rabelo <
>>>> fabio at fabiorabelo.wiki.br>:
>>>> > >> >
>>>> > >> >> this entry:
>>>> > >> >>
>>>> > >> >> 2017-08-25 09:24:44 can't migrate local disk 'stg:vm-100-disk-1': can't live migrate attached local disks without with-local-disks option
>>>> > >> >>
>>>> > >> >> seems to be the culprit.
>>>> > >> >>
>>>> > >> >> Local disk?
>>>> > >> >>
>>>> > >> >> Where is this image stored?
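A quick way to check that on the source node (VM ID and storage name taken from the log above; the exact commands are a sketch):

  qm config 100 | grep disk                  # shows which storage each disk lives on
  pvesm path stg:vm-100-disk-1               # resolves the volume to a local path/device
  grep -A 4 'stg' /etc/pve/storage.cfg       # shows how the 'stg' storage is defined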
>>>> > >> >>
>>>> > >> >>
>>>> > >> >> Fábio Rabelo
>>>> > >> >>
>>>> > >> >> 2017-08-25 9:36 GMT-03:00 Gilberto Nunes <
>>>> gilberto.nunes32 at gmail.com
>>>> > >:
>>>> > >> >> > If I turn off the VM, the migration goes on.
>>>> > >> >> > But offline migration is out of the question!
>>>> > >> >> >
>>>> > >> >> >
>>>> > >> >> >
>>>> > >> >> > 2017-08-25 9:28 GMT-03:00 Gilberto Nunes <
>>>> > gilberto.nunes32 at gmail.com
>>>> > >> >:
>>>> > >> >> >
>>>> > >> >> >> Hi again
>>>> > >> >> >>
>>>> > >> >> >> I tried removing all replication jobs and image files from the target node...
>>>> > >> >> >> I still get a critical error:
>>>> > >> >> >>
>>>> > >> >> >> qm migrate 100 prox02 --online
>>>> > >> >> >> 2017-08-25 09:24:43 starting migration of VM 100 to node 'prox02' (10.1.1.20)
>>>> > >> >> >> 2017-08-25 09:24:44 found local disk 'stg:vm-100-disk-1' (in current VM config)
>>>> > >> >> >> 2017-08-25 09:24:44 can't migrate local disk 'stg:vm-100-disk-1': can't live migrate attached local disks without with-local-disks option
>>>> > >> >> >> 2017-08-25 09:24:44 ERROR: Failed to sync data - can't migrate VM - check log
>>>> > >> >> >> 2017-08-25 09:24:44 aborting phase 1 - cleanup resources
>>>> > >> >> >> 2017-08-25 09:24:44 ERROR: migration aborted (duration 00:00:02): Failed to sync data - can't migrate VM - check log
>>>> > >> >> >> migration aborted
>>>> > >> >> >> prox01:~# qm migrate 100 prox02 --online --with-local-disks
>>>> > >> >> >> 2017-08-25 09:24:58 starting migration of VM 100 to node 'prox02' (10.1.1.20)
>>>> > >> >> >> 2017-08-25 09:24:58 found local disk 'stg:vm-100-disk-1' (in current VM config)
>>>> > >> >> >> 2017-08-25 09:24:58 copying disk images
>>>> > >> >> >> 2017-08-25 09:24:58 ERROR: Failed to sync data - can't live migrate VM with replicated volumes
>>>> > >> >> >> 2017-08-25 09:24:58 aborting phase 1 - cleanup resources
>>>> > >> >> >> 2017-08-25 09:24:58 ERROR: migration aborted (duration 00:00:01): Failed to sync data - can't live migrate VM with replicated volumes
>>>> > >> >> >> migration aborted
>>>> > >> >> >> prox01:~# pvesr status
>>>> > >> >> >> JobID      Enabled    Target        LastSync             NextSync             Duration   FailCount State
>>>> > >> >> >> 100-0      Yes        local/prox02  2017-08-25_09:25:01  2017-08-25_12:00:00  15.200315          0 OK
>>>> > >> >> >>
>>>> > >> >> >> Somebody help me!
>>>> > >> >> >>
>>>> > >> >> >> Cheers
>>>> > >> >> >>
>>>> > >> >> >>
>>>> > >> >> >>
>>>> > >> >> >>
>>>> > >> >> >> 2017-08-24 9:55 GMT-03:00 Gilberto Nunes <
>>>> > gilberto.nunes32 at gmail.com
>>>> > >> >:
>>>> > >> >> >>
>>>> > >> >> >>> Well...
>>>> > >> >> >>> I will try it
>>>> > >> >> >>>
>>>> > >> >> >>> Thanks
>>>> > >> >> >>>
>>>> > >> >> >>>
>>>> > >> >> >>>
>>>> > >> >> >>>
>>>> > >> >> >>> 2017-08-24 4:37 GMT-03:00 Dominik Csapak <
>>>> d.csapak at proxmox.com>:
>>>> > >> >> >>>
>>>> > >> >> >>>> On 08/23/2017 08:50 PM, Gilberto Nunes wrote:
>>>> > >> >> >>>>
>>>> > >> >> >>>>> more info:
>>>> > >> >> >>>>>
>>>> > >> >> >>>>>
>>>> > >> >> >>>>> pvesr status
>>>> > >> >> >>>>> JobID      Enabled    Target        LastSync   NextSync              Duration  FailCount State
>>>> > >> >> >>>>> 100-0      Yes        local/prox01  -          2017-08-23_15:55:04   3.151884          1 command 'set -o pipefail && pvesm export local-zfs:vm-100-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_100-0_1503514204__ | /usr/bin/cstream -t 1024000000 | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=prox01' root@10.1.1.10 -- pvesm import local-zfs:vm-100-disk-1 zfs - -with-snapshots 1' failed: exit code 255
>>>> > >> >> >>>>> 100-1      Yes        local/prox02  -          2017-08-23_15:55:01   3.089044          1 command 'set -o pipefail && pvesm export local-zfs:vm-100-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_100-1_1503514201__ | /usr/bin/cstream -t 1024000000 | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=prox02' root@10.1.1.20 -- pvesm import local-zfs:vm-100-disk-1 zfs - -with-snapshots 1' failed: exit code 255
>>>> > >> >> >>>>>
>>>> > >> >> >>>>>
>>>> > >> >> >>>>>
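Exit code 255 is typically what ssh itself returns when the connection fails, which would fit the network issue mentioned at the top of this thread. The transport can be tested on its own with the same options the replication job uses (host alias and address taken from the failing command above):

  /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=prox01' root@10.1.1.10 /bin/true; echo $?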
>>>> > >> >> >>>> According to this output, no LastSync was ever completed, so I guess the replication never worked, and so the migration will not work either.
>>>> > >> >> >>>>
>>>> > >> >> >>>> I would remove all replication jobs (maybe with -force, via the command line) and delete all images of this VM from all nodes where the VM is *not* located at the moment (AFAICS from prox01 and prox02, as the VM is currently on prox03).
>>>> > >> >> >>>>
>>>> > >> >> >>>> Then add the replication again, wait for it to complete (verify with pvesr status) and try to migrate again.
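Spelled out as commands, that procedure would look roughly like this. The job IDs and volume name come from the pvesr output above; the target node, the schedule and the use of pvesm free are assumptions, a sketch rather than a tested recipe:

  # remove the existing (never-synced) replication jobs
  pvesr delete 100-0 --force
  pvesr delete 100-1 --force

  # on prox01 and prox02 only (the nodes that do NOT currently hold the VM),
  # drop the stale disk images
  pvesm free local-zfs:vm-100-disk-1

  # recreate one replication job (target node and schedule are placeholders)
  pvesr create-local-job 100-0 prox01 --schedule '*/15'

  # wait until LastSync is populated, then retry the migration
  pvesr status
  qm migrate 100 prox01

Note that, per the documentation quoted earlier in the thread, online migration is not supported while a replication job is active, so the last step is shown as an offline migration; alternatively, remove the job first and migrate with --online --with-local-disks.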
>>>> > >> >> >>>>
>>>> > >> >> >>>>
>>>> > >> >> >>>
>>>> > >> >> >>>
>>>> > >> >> >>
>>>> >
>>>> --
>>>> Sent from Gmail Mobile
>>>>
>>>
>>>
>>
>


