[PVE-User] Migration error!

Gilberto Nunes gilberto.nunes32 at gmail.com
Fri Aug 25 16:29:25 CEST 2017


Unless I turn off the storage replication on the KVM VM!
I will try it!
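Something like this, I suppose (job ID 100-0 as shown by pvesr status
further down in the thread):

  pvesr delete 100-0
  qm migrate 100 prox02 --online --with-local-disks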


Thank you

Regards


Gilberto Ferreira

Linux IT Consultant | IaaS Proxmox, CloudStack, KVM | Zentyal Server |
Zimbra Mail Server

(47) 3025-5907
(47) 99676-7530

Skype: gilberto.nunes36


konnectati.com.br <http://www.konnectati.com.br/>


https://www.youtube.com/watch?v=dsiTPeNWcSE


2017-08-25 11:14 GMT-03:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:

> Yeah!
> That explains why storage replication works fine here with a CT!
> Thanks for the reminder!
>
> 2017-08-25 11:11 GMT-03:00 Yannis Milios <yannis.milios at gmail.com>:
>
>> My understanding is that with pvesr, live migration of a guest VM is not
>> supported:
>>
>> "Virtual guest with active replication cannot currently use online
>> migration. Offline migration is supported in general"
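>>
>> So an offline move should work; a minimal example, assuming VM 100 and
>> target prox02 from your logs:
>>
>>   qm shutdown 100
>>   qm migrate 100 prox02
>>   # then start it again on the target node:
>>   qm start 100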
>>
>> On Fri, 25 Aug 2017 at 16:48, Fábio Rabelo <fabio at fabiorabelo.wiki.br>
>> wrote:
>>
>> > Sorry... my knowledge does not go beyond this point...
>> >
>> > I abandoned shared storage years ago because I found it untrustworthy.
>> >
>> >
>> > Fábio Rabelo
>> >
>> > 2017-08-25 10:22 GMT-03:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:
>> > > According to the design of Proxmox Storage Replication, the sync runs
>> > > on a schedule.
>> > > And of course I set up the VM and scheduled the sync.
>> > > But it is still stuck!
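>> > > The job and its schedule do show up, e.g.:
>> > >
>> > >   pvesr list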
>> > >
>> > >
>> > >
>> > >
>> > >
>> > > 2017-08-25 10:19 GMT-03:00 Fábio Rabelo <fabio at fabiorabelo.wiki.br>:
>> > >
>> > >> I have never used ZFS on Linux.
>> > >>
>> > >> But in the Solaris OS family, this replication must be set up
>> > >> beforehand...
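>> > >>
>> > >> (On Solaris that was basically zfs send/receive by hand; a rough
>> > >> sketch, with made-up dataset names:
>> > >>
>> > >>   zfs snapshot tank/vm-100@rep1
>> > >>   zfs send tank/vm-100@rep1 | ssh node2 zfs receive -F tank/vm-100
>> > >> )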
>> > >>
>> > >> Can someone with experience with ZFS on Linux confirm or deny that?
>> > >>
>> > >>
>> > >> Fábio Rabelo
>> > >>
>> > >> 2017-08-25 10:11 GMT-03:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:
>> > >> > So... one of the premises of ZFS storage replication is to replicate
>> > >> > a local volume to another node.
>> > >> > Or am I wrong?
>> > >> >
>> > >> >
>> > >> > Thank you
>> > >> >
>> > >> > Regards
>> > >> >
>> > >> >
>> > >> > Gilberto Ferreira
>> > >> >
>> > >> > Linux IT Consultant | IaaS Proxmox, CloudStack, KVM | Zentyal Server |
>> > >> > Zimbra Mail Server
>> > >> >
>> > >> > (47) 3025-5907
>> > >> > (47) 99676-7530
>> > >> >
>> > >> > Skype: gilberto.nunes36
>> > >> >
>> > >> >
>> > >> > konnectati.com.br <http://www.konnectati.com.br/>
>> > >> >
>> > >> >
>> > >> > https://www.youtube.com/watch?v=dsiTPeNWcSE
>> > >> >
>> > >> >
>> > >> > 2017-08-25 10:07 GMT-03:00 Fábio Rabelo <fabio at fabiorabelo.wiki.br>:
>> > >> >
>> > >> >> This entry:
>> > >> >>
>> > >> >> 2017-08-25 09:24:44 can't migrate local disk 'stg:vm-100-disk-1': can't live migrate attached local disks without with-local-disks option
>> > >> >>
>> > >> >> seems to be the culprit.
>> > >> >>
>> > >> >> A local disk?
>> > >> >>
>> > >> >> Where is this image stored?
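>> > >> >>
>> > >> >> You can check with something like this (assuming VM ID 100):
>> > >> >>
>> > >> >>   qm config 100 | grep -i disk
>> > >> >>   pvesm status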
>> > >> >>
>> > >> >>
>> > >> >> Fábio Rabelo
>> > >> >>
>> > >> >> 2017-08-25 9:36 GMT-03:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:
>> > >> >> > If I turn off the VM, the migration goes through.
>> > >> >> > But offline migration is out of the question!
>> > >> >> >
>> > >> >> >
>> > >> >> >
>> > >> >> > 2017-08-25 9:28 GMT-03:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:
>> > >> >> >
>> > >> >> >> Hi again
>> > >> >> >>
>> > >> >> >> I tried removing all replication jobs and image files from the
>> > >> >> >> target node...
>> > >> >> >> I still get a critical error:
>> > >> >> >>
>> > >> >> >> qm migrate 100 prox02 --online
>> > >> >> >> 2017-08-25 09:24:43 starting migration of VM 100 to node 'prox02' (10.1.1.20)
>> > >> >> >> 2017-08-25 09:24:44 found local disk 'stg:vm-100-disk-1' (in current VM config)
>> > >> >> >> 2017-08-25 09:24:44 can't migrate local disk 'stg:vm-100-disk-1': can't live migrate attached local disks without with-local-disks option
>> > >> >> >> 2017-08-25 09:24:44 ERROR: Failed to sync data - can't migrate VM - check log
>> > >> >> >> 2017-08-25 09:24:44 aborting phase 1 - cleanup resources
>> > >> >> >> 2017-08-25 09:24:44 ERROR: migration aborted (duration 00:00:02): Failed to sync data - can't migrate VM - check log
>> > >> >> >> migration aborted
>> > >> >> >> prox01:~# qm migrate 100 prox02 --online --with-local-disks
>> > >> >> >> 2017-08-25 09:24:58 starting migration of VM 100 to node 'prox02' (10.1.1.20)
>> > >> >> >> 2017-08-25 09:24:58 found local disk 'stg:vm-100-disk-1' (in current VM config)
>> > >> >> >> 2017-08-25 09:24:58 copying disk images
>> > >> >> >> 2017-08-25 09:24:58 ERROR: Failed to sync data - can't live migrate VM with replicated volumes
>> > >> >> >> 2017-08-25 09:24:58 aborting phase 1 - cleanup resources
>> > >> >> >> 2017-08-25 09:24:58 ERROR: migration aborted (duration 00:00:01): Failed to sync data - can't live migrate VM with replicated volumes
>> > >> >> >> migration aborted
>> > >> >> >> prox01:~# pvesr status
>> > >> >> >> JobID      Enabled    Target          LastSync             NextSync             Duration   FailCount  State
>> > >> >> >> 100-0      Yes        local/prox02    2017-08-25_09:25:01  2017-08-25_12:00:00  15.200315          0  OK
>> > >> >> >>
>> > >> >> >> Somebody help me!
>> > >> >> >>
>> > >> >> >> Cheers
>> > >> >> >>
>> > >> >> >>
>> > >> >> >>
>> > >> >> >>
>> > >> >> >> 2017-08-24 9:55 GMT-03:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:
>> > >> >> >>
>> > >> >> >>> Well...
>> > >> >> >>> I will try it
>> > >> >> >>>
>> > >> >> >>> Thanks
>> > >> >> >>>
>> > >> >> >>>
>> > >> >> >>>
>> > >> >> >>>
>> > >> >> >>> 2017-08-24 4:37 GMT-03:00 Dominik Csapak <d.csapak at proxmox.com>:
>> > >> >> >>>
>> > >> >> >>>> On 08/23/2017 08:50 PM, Gilberto Nunes wrote:
>> > >> >> >>>>
>> > >> >> >>>>> more info:
>> > >> >> >>>>>
>> > >> >> >>>>>
>> > >> >> >>>>> pvesr status
>> > >> >> >>>>> JobID      Enabled    Target          LastSync   NextSync             Duration  FailCount  State
>> > >> >> >>>>> 100-0      Yes        local/prox01    -          2017-08-23_15:55:04  3.151884          1  command 'set -o pipefail && pvesm export local-zfs:vm-100-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_100-0_1503514204__ | /usr/bin/cstream -t 1024000000 | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=prox01' root@10.1.1.10 -- pvesm import local-zfs:vm-100-disk-1 zfs - -with-snapshots 1' failed: exit code 255
>> > >> >> >>>>> 100-1      Yes        local/prox02    -          2017-08-23_15:55:01  3.089044          1  command 'set -o pipefail && pvesm export local-zfs:vm-100-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_100-1_1503514201__ | /usr/bin/cstream -t 1024000000 | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=prox02' root@10.1.1.20 -- pvesm import local-zfs:vm-100-disk-1 zfs - -with-snapshots 1' failed: exit code 255
>> > >> >> >>>>>
>> > >> >> >>>>>
>> > >> >> >>>>>
>> > >> >> >>>> According to this output, no LastSync ever completed, so I guess the
>> > >> >> >>>> replication never worked, which means the migration will not work either.
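>> > >> >> >>>>
>> > >> >> >>>> (exit code 255 usually means ssh itself failed; a quick check, using
>> > >> >> >>>> the host from the log above, could be:
>> > >> >> >>>>
>> > >> >> >>>>   ssh -o BatchMode=yes -o HostKeyAlias=prox01 root@10.1.1.10 true; echo $?
>> > >> >> >>>> )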
>> > >> >> >>>>
>> > >> >> >>>> I would remove all replication jobs (maybe with -force, via the command
>> > >> >> >>>> line) and delete all images of this VM from all nodes where the VM is
>> > >> >> >>>> *not* located at the moment (AFAICS from prox01 and prox02, as the VM
>> > >> >> >>>> is currently on prox03).
>> > >> >> >>>>
>> > >> >> >>>> Then add the replication again, wait for it to complete (verify with
>> > >> >> >>>> pvesr status) and try the migration again; a rough command sketch below.
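>> > >> >> >>>>
>> > >> >> >>>> Something like this (job IDs, volume and node names taken from this
>> > >> >> >>>> thread; adapt to your setup):
>> > >> >> >>>>
>> > >> >> >>>>   pvesr delete 100-0 --force
>> > >> >> >>>>   pvesr delete 100-1 --force
>> > >> >> >>>>   # on each node where the VM is *not* running:
>> > >> >> >>>>   pvesm free local-zfs:vm-100-disk-1
>> > >> >> >>>>   # recreate, e.g. replication of VM 100 to prox01 every 15 minutes:
>> > >> >> >>>>   pvesr create-local-job 100-0 prox01 --schedule '*/15'
>> > >> >> >>>>   pvesr status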
>> > >> >> >>>>
>> > >> >> >>>
>> > >> >> >>>
>> > >> >> >>
>> > >> >>
>> > >>
>> >
>> --
>> Sent from Gmail Mobile
>>
>
>



More information about the pve-user mailing list