[PVE-User] Migration error!

Gilberto Nunes gilberto.nunes32@gmail.com
Wed Aug 23 20:50:55 CEST 2017


More info:


pvesr status
JobID      Enabled    Target           LastSync    NextSync               Duration    FailCount    State
100-0      Yes        local/prox01     -           2017-08-23_15:55:04    3.151884    1           command 'set -o pipefail && pvesm export local-zfs:vm-100-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_100-0_1503514204__ | /usr/bin/cstream -t 1024000000 | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=prox01' root@10.1.1.10 -- pvesm import local-zfs:vm-100-disk-1 zfs - -with-snapshots 1' failed: exit code 255
100-1      Yes        local/prox02     -           2017-08-23_15:55:01    3.089044    1           command 'set -o pipefail && pvesm export local-zfs:vm-100-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_100-1_1503514201__ | /usr/bin/cstream -t 1024000000 | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=prox02' root@10.1.1.20 -- pvesm import local-zfs:vm-100-disk-1 zfs - -with-snapshots 1' failed: exit code 255
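
For the record, "exit code 255" is just whatever the ssh leg of the pipeline returned, so the status line above hides the actual error from the remote side. A minimal diagnostic sketch, assuming the node names and addresses from the jobs above (the snapshot name is taken verbatim from the status line and may already have been removed by the failed run's cleanup):

  # Does key-based SSH between the nodes still work?
  ssh -o BatchMode=yes -o HostKeyAlias=prox01 root@10.1.1.10 true; echo $?

  # Re-run the failing export/import pipe by hand to see the receive-side error
  # instead of only the exit code:
  pvesm export local-zfs:vm-100-disk-1 zfs - -with-snapshots 1 \
      -snapshot __replicate_100-0_1503514204__ \
    | ssh -o BatchMode=yes -o HostKeyAlias=prox01 root@10.1.1.10 \
      -- pvesm import local-zfs:vm-100-disk-1 zfs - -with-snapshots 1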



Thank you

Best regards


Gilberto Ferreira

Linux IT Consultant | IaaS Proxmox, CloudStack, KVM | Zentyal Server |
Zimbra Mail Server

(47) 3025-5907
(47) 99676-7530

Skype: gilberto.nunes36


konnectati.com.br


https://www.youtube.com/watch?v=dsiTPeNWcSE


2017-08-23 15:42 GMT-03:00 Gilberto Nunes <gilberto.nunes32@gmail.com>:

> I have just a 3-node cluster.
>
> I ran the first replication to all nodes and waited for it to finish on each one.
>
> I did that replication with the VM turned off.
>
> Right now the VM sits on prox03, my third node.
>
>
> When I try to migrate it, I get the errors below:
>
>
> qm migrate 100 prox01
>
> 2017-08-23 15:39:32 starting migration of VM 100 to node 'prox01' (10.1.1.10)
>
> 2017-08-23 15:39:32 found local disk 'local-zfs:vm-100-disk-1' (in current VM config)
>
> 2017-08-23 15:39:32 copying disk images
>
> 2017-08-23 15:39:32 start replication job
>
> 2017-08-23 15:39:32 guest => VM 100, running => 0
>
> 2017-08-23 15:39:32 volumes => local-zfs:vm-100-disk-1
>
> 2017-08-23 15:39:34 create snapshot '__replicate_100-2_1503513572__' on local-zfs:vm-100-disk-1
>
> 2017-08-23 15:39:34 full sync 'local-zfs:vm-100-disk-1' (__replicate_100-2_1503513572__)
>
> send from @ to rpool/data/vm-100-disk-1@__replicate_100-0_1503513037__ estimated size is 2.20G
>
> send from @__replicate_100-0_1503513037__ to rpool/data/vm-100-disk-1@__replicate_100-1_1503513063__ estimated size is 2K
>
> send from @__replicate_100-1_1503513063__ to rpool/data/vm-100-disk-1@__replicate_100-2_1503513572__ estimated size is 0
>
> total estimated size is 2.20G
>
> TIME        SENT   SNAPSHOT
>
> rpool/data/vm-100-disk-1 name rpool/data/vm-100-disk-1 -
>
> volume 'rpool/data/vm-100-disk-1' already exists
>
> command 'zfs send -Rpv -- rpool/data/vm-100-disk-1@__replicate_100-2_1503513572__' failed: got signal 13
>
> send/receive failed, cleaning up snapshot(s)..
>
> 2017-08-23 15:39:35 delete previous replication snapshot '__replicate_100-2_1503513572__' on local-zfs:vm-100-disk-1
>
> 2017-08-23 15:39:35 end replication job with error: command 'set -o pipefail && pvesm export local-zfs:vm-100-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_100-2_1503513572__ | /usr/bin/cstream -t 512000000 | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=prox01' root@10.1.1.10 -- pvesm import local-zfs:vm-100-disk-1 zfs - -with-snapshots 1' failed: exit code 255
>
> 2017-08-23 15:39:35 ERROR: Failed to sync data - command 'set -o pipefail && pvesm export local-zfs:vm-100-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_100-2_1503513572__ | /usr/bin/cstream -t 512000000 | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=prox01' root@10.1.1.10 -- pvesm import local-zfs:vm-100-disk-1 zfs - -with-snapshots 1' failed: exit code 255
>
> 2017-08-23 15:39:35 aborting phase 1 - cleanup resources
>
> 2017-08-23 15:39:35 ERROR: migration aborted (duration 00:00:03): Failed to sync data - command 'set -o pipefail && pvesm export local-zfs:vm-100-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_100-2_1503513572__ | /usr/bin/cstream -t 512000000 | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=prox01' root@10.1.1.10 -- pvesm import local-zfs:vm-100-disk-1 zfs - -with-snapshots 1' failed: exit code 255
>
> migration aborted
>
>
> So, what is going on?
>
> Is there something I missed?
>
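
The line that matters in the log above is "volume 'rpool/data/vm-100-disk-1' already exists": the target node already has a dataset with that name, presumably left behind by an earlier replication run, so zfs receive refuses the full stream and the sending zfs dies with SIGPIPE (signal 13), which the wrapper then reports as exit code 255. A hedged cleanup sketch, assuming the copy on the target is only a stale replica that holds nothing you still need:

  # On the target node (prox01): inspect the leftover dataset and its snapshots
  zfs list -t all -r rpool/data | grep vm-100-disk-1

  # ONLY if it is a stale replica you can afford to lose, remove it so the
  # next run can start a clean full sync
  zfs destroy -r rpool/data/vm-100-disk-1

  # Back on the source node: retry the job targeting prox01 right away
  pvesr schedule-now 100-0

If the target dataset still shares a replication snapshot with the source, an incremental sync should be possible instead, and destroying it would be the wrong move.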


