[PVE-User] CT replication error
adamw at matrixscience.com
Wed Sep 9 20:11:48 CEST 2020
Yes, all replication jobs use the same shared ZFS storage.
Are you referring to the /var/log/pve/replicate/102-0 file?
It seems to only hold information about the last run.
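For anyone searching the archives later: the same information can be
pulled from the shell. "pvesr status" is a standard command; the log
path simply follows the job ID (102-0 here):

  # list the replication jobs on this node with their last sync result
  pvesr status

  # per-job log, which only keeps the most recent run
  cat /var/log/pve/replicate/102-0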
Anyway, my problem turned out to be that node2 was still holding
zfs-pool/subvol-102-disk-0 from the previous container.
I had deleted the old container from the web GUI before creating a new
one in its place (ID 102).
For some reason node2 still had the old disk. Once I removed it from
the shell on node2, replication started working for CT 102.
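For reference, this is roughly what the check and cleanup looked like
on node2. The pool and dataset names come from the error quoted below;
zfs destroy is the usual way to drop a leftover dataset, assuming no
guest still references it:

  # confirm the stale dataset (and any old __replicate_* snapshots)
  zfs list -t all -r zfs-pool | grep subvol-102-disk-0

  # remove it together with its snapshots
  zfs destroy -r zfs-pool/subvol-102-disk-0

With no common snapshot left, the next replication run starts over
with a full sync, which is expected.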
On 09/09/2020 08:51, Fabian Ebner wrote:
> Could you check the replication log itself? There might be more
> information there. Do the working replications use the same storages
> as the failing one?
> On 03.09.20 at 14:56, Adam Weremczuk wrote:
>> Hi all,
>> I have a dual-host setup running PVE 6.2-6.
>> All containers replicate fine except for 102, which gives the following:
>> Sep 3 13:49:00 node1 systemd: Starting Proxmox VE replication
>> Sep 3 13:49:02 node1 zed: eid=7290 class=history_event
>> Sep 3 13:49:03 node1 pvesr: send/receive failed, cleaning up
>> Sep 3 13:49:03 node1 pvesr: 102-0: got unexpected replication
>> job error - command 'set -o pipefail && pvesm export
>> zfs-pool:subvol-102-disk-0 zfs - -with-snapshots 1 -snapshot
>> __replicate_102-0_1599137341__ | /usr/bin/ssh -e none -o
>> 'BatchMode=yes' -o 'HostKeyAlias=node2' root@192.168.100.2 -- pvesm
>> import zfs-pool:subvol-102-disk-0 zfs - -with-snapshots 1
>> -allow-rename 0' failed: exit code 255
>> Sep 3 13:49:03 node1 zed: eid=7291 class=history_event
>> Any idea what the problem is and how to fix it?
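(Adding this for completeness: exit code 255 from the ssh pipeline
doesn't say much by itself; as it turned out, the real failure was on
the receiving side. Two rough checks I could have run from node1,
reusing the ssh options and addresses quoted above:

  # is the transport itself fine?
  /usr/bin/ssh -e none -o BatchMode=yes -o HostKeyAlias=node2 root@192.168.100.2 -- true

  # does the target dataset name already exist on node2?
  /usr/bin/ssh -o BatchMode=yes -o HostKeyAlias=node2 root@192.168.100.2 -- zfs list -r zfs-pool | grep subvol-102-disk-0

The second one would have revealed the leftover subvol-102-disk-0
straight away.)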