[PVE-User] pve-zsync issues
Miguel González
miguel_3_gonzalez at yahoo.es
Tue Jan 22 23:12:50 CET 2019
Hi,
I have two servers running Proxmox 5.3-6. Both run several VMs, and I
am using pve-zsync to sync two machines from server1 to server2 for
disaster recovery and offline backups.
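For context, the two jobs were created more or less like this (from
memory, so the exact options such as --maxsnap may differ from what is
actually configured):

root@server1:~# pve-zsync create --source 100 --dest server2:rpool/data --name plesk1 --maxsnap 7 --verbose
root@server1:~# pve-zsync create --source 102 --dest server2:rpool/data --name cpanel1 --maxsnap 7 --verbose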
This had been working without issue on the previous pair of Proxmox
servers running 5.1-46. I have just replaced them with two new servers.
I have two jobs: one reports that it has to do a full send again, and
the other one reports a failure. The snapshots on the backup server
show 0B used.
root@server1:~# pve-zsync status
SOURCE  NAME     STATUS
100     plesk1   error
102     cpanel1  ok
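If it helps, as far as I understand pve-zsync keeps the cron entries for
the jobs in /etc/cron.d/pve-zsync and records the last replicated
snapshot per job in /var/lib/pve-zsync/sync_state (paths from memory,
they may differ), so the recorded state can be compared against the
snapshot lists below:

root@server1:~# cat /etc/cron.d/pve-zsync
root@server1:~# cat /var/lib/pve-zsync/sync_state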
root@server2:~# zfs list -t snapshot
NAME                                                       USED  AVAIL  REFER  MOUNTPOINT
rpool/data/vm-100-disk-0@rep_plesk1_2019-01-21_22:30:03      0B      -  20.4G  -
rpool/data/vm-100-disk-1@rep_plesk1_2019-01-21_22:30:03      0B      -  67.3G  -
rpool/data/vm-100-disk-2@rep_plesk1_2019-01-21_22:30:03      0B      -  92.9G  -
rpool/data/vm-102-disk-0@rep_cpanel1_2019-01-22_01:00:01     0B      -  20.0G  -
rpool/data/vm-102-disk-1@rep_cpanel1_2019-01-22_01:00:01     0B      -  60.4G  -
root@server1:~# zfs list -t snapshot
NAME                                                  USED  AVAIL  REFER  MOUNTPOINT
rpool/vm-100-disk-0@rep_plesk1_2019-01-19_22:47:37    597M      -  20.0G  -
rpool/vm-100-disk-0@rep_plesk1_2019-01-20_11:22:21    482M      -  20.1G  -
rpool/vm-100-disk-0@rep_plesk1_2019-01-21_22:05:08    121M      -  20.4G  -
rpool/vm-100-disk-0@rep_plesk1_2019-01-21_22:30:03    117M      -  20.4G  -
rpool/vm-100-disk-1@rep_plesk1_2019-01-19_22:47:37   9.68G      -  67.1G  -
rpool/vm-100-disk-1@rep_plesk1_2019-01-20_11:22:21   9.49G      -  67.2G  -
rpool/vm-100-disk-1@rep_plesk1_2019-01-21_22:30:03   4.84G      -  67.3G  -
rpool/vm-100-disk-2@rep_plesk1_2019-01-19_22:47:37    519M      -  92.9G  -
rpool/vm-100-disk-2@rep_plesk1_2019-01-20_11:22:21    335M      -  92.9G  -
rpool/vm-100-disk-2@rep_plesk1_2019-01-21_22:30:03    517M      -  92.9G  -
rpool/vm-102-disk-0@rep_cpanel1_2019-01-20_01:00:01  1.87G      -  20.1G  -
rpool/vm-102-disk-0@rep_cpanel1_2019-01-21_01:00:04  1.21G      -  20.1G  -
rpool/vm-102-disk-0@rep_cpanel1_2019-01-22_01:00:01  1.25G      -  20.0G  -
rpool/vm-102-disk-1@rep_cpanel1_2019-01-20_01:00:01  4.94G      -  60.5G  -
rpool/vm-102-disk-1@rep_cpanel1_2019-01-21_01:00:04  3.97G      -  60.5G  -
rpool/vm-102-disk-1@rep_cpanel1_2019-01-22_01:00:01  3.31G      -  60.4G  -
The nightly jobs report different things:
cpanel1 VM:
WARN: COMMAND:
ssh root@server2 -- zfs list -rt snapshot -Ho name rpool/data/vm-102-disk-0@rep_cpanel1_2019-01-20_01:00:01
GET ERROR:
cannot open 'rpool/data/vm-102-disk-0@rep_cpanel1_2019-01-20_01:00:01': dataset does not exist
full send of rpool/vm-102-disk-0@rep_cpanel1_2019-01-22_01:00:01 estimated size is 29.7G
total estimated size is 29.7G
TIME       SENT    SNAPSHOT
01:00:03   23.8M   rpool/vm-102-disk-0@rep_cpanel1_2019-01-22_01:00:01
01:00:04   54.3M   rpool/vm-102-disk-0@rep_cpanel1_2019-01-22_01:00:01
01:00:05   84.7M   rpool/vm-102-disk-0@rep_cpanel1_2019-01-22_01:00:01
01:00:06    115M   rpool/vm-102-disk-0@rep_cpanel1_2019-01-22_01:00:01
and it ends up doing a full send of both disks, which I don't understand.
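I assume the full send happens because the last snapshot recorded on
server1 no longer exists on server2, so there is no common snapshot left
for an incremental send. That can be checked by hand with the same
command pve-zsync uses, just without the snapshot suffix:

root@server1:~# zfs list -rt snapshot -Ho name rpool/vm-102-disk-0
root@server1:~# ssh root@server2 -- zfs list -rt snapshot -Ho name rpool/data/vm-102-disk-0

An incremental send is only possible if at least one snapshot name shows
up in both lists.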
plesk1 VM:
WARN: COMMAND:
ssh root@server2 -- zfs list -rt snapshot -Ho name rpool/data/vm-100-disk-0@rep_plesk1_2019-01-19_22:47:37
GET ERROR:
cannot open 'rpool/data/vm-100-disk-0@rep_plesk1_2019-01-19_22:47:37': dataset does not exist
full send of rpool/vm-100-disk-0@rep_plesk1_2019-01-22_01:58:55 estimated size is 28.4G
total estimated size is 28.4G
TIME SENT SNAPSHOT
COMMAND:
zfs send -v -- rpool/vm-100-disk-0@rep_plesk1_2019-01-22_01:58:55 | ssh -o 'BatchMode=yes' root@37.187.154.74 -- zfs recv -F -- rpool/data/vm-100-disk-0
GET ERROR:
cannot receive new filesystem stream: destination has snapshots (eg. rpool/data/vm-100-disk-0)
must destroy them to overwrite it
Job --source 100 --name plesk1 got an ERROR!!!
ERROR Message:
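From the last error my guess (I am not sure this is the right approach)
is that the leftover snapshots on the destination have to be destroyed
before the full receive can go through, something like:

root@server2:~# zfs destroy rpool/data/vm-100-disk-0@rep_plesk1_2019-01-21_22:30:03

repeated for vm-100-disk-1 and vm-100-disk-2, and then let the job run
again. Is that the expected way to recover, or is there a cleaner way to
re-seed the plesk1 job?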