[PVE-User] can't make zfs-zsync working
Jean-Laurent Ivars
jl.ivars at ipgenius.fr
Thu Sep 24 12:44:29 CEST 2015
Dear fellow users,
I just installed a two-node Proxmox cluster from the latest 3.4 ISO, and since I have a subscription I applied all the updates. The pve-zsync function is really great and I would really like to get it working.
If someone has already managed to make it work, could you please tell me how? I followed all the instructions on this page:
https://pve.proxmox.com/wiki/PVE-zsync
Not working for me :(
First, a little bit more about my configuration: I did a full ZFS install and created some ZFS datasets, but I could not add them to the storage configuration with the GUI, so I had to add them directly in storage.cfg:
dir: local
        path /var/lib/vz
        content images,iso,vztmpl,backup,rootdir
        maxfiles 3

zfspool: Disks
        pool rpool/disks
        content images
        sparse

zfspool: BKP_24H
        pool rpool/BKP_24H
        content images
        sparse

zfspool: rpool
        pool rpool
        content images
        sparse
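By the way, maybe pvesm from the command line would have worked too; I imagine something like the following would create the same storage entry as my Disks stanza above, but I have not checked the exact option names on 3.4, so this is just a guess:

pvesm add zfspool Disks --pool rpool/disks --content images --sparse 1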
Here is what my ZFS layout looks like:
NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool                        199G  6,83T    96K  /rpool
rpool/BKP_24H                 96K  6,83T    96K  /rpool/BKP_24H
rpool/ROOT                  73,1G  6,83T    96K  /rpool/ROOT
rpool/ROOT/pve-1            73,1G  6,83T  73,1G  /
rpool/disks                 92,5G  6,83T    96K  /rpool/disks
rpool/disks/vm-100-disk-1   2,40G  6,83T  2,40G  -
rpool/disks/vm-106-disk-1    963M  6,83T   963M  -
rpool/disks/vm-107-disk-1   3,61G  6,83T  3,61G  -
rpool/disks/vm-108-disk-1   9,29G  6,83T  9,29G  -
rpool/disks/vm-110-disk-1   62,9G  6,83T  62,9G  -
rpool/disks/vm-204-disk-1   13,4G  6,83T  13,4G  -
rpool/swap                  33,0G  6,86T    64K  -
and, as an example, the configuration of my test machine (106.conf):
balloon: 256
bootdisk: virtio0
cores: 1
ide0: none,media=cdrom
memory: 1024
name: Deb-Test
net0: virtio=52:D5:C1:5C:3F:61,bridge=vmbr1
ostype: l26
scsihw: virtio-scsi-pci
sockets: 1
virtio0: Disks:vm-106-disk-1,cache=writeback,size=5G
So now, here is the result when I try the command given in the wiki:
COMMAND:
zfs list -r -t snapshot -Ho name, -S creation rpool/vm-106-disk-1
GET ERROR:
cannot open 'rpool/vm-106-disk-1': dataset does not exist
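Just a guess on my part: maybe the command simply needs the full dataset path as it appears in zfs list, something like this (untested, using my actual layout):

zfs list -r -t snapshot -Ho name -S creation rpool/disks/vm-106-disk-1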
Still, I understand the command expects the VM disk to be at the root of the pool, so I moved the disk there and also tried to send it directly to the pool on the other side, but with no more luck:
send from @ to rpool/vm-106-disk-1@rep_default_2015-09-24_12:30:49 estimated size is 1,26G
total estimated size is 1,26G
TIME SENT SNAPSHOT
warning: cannot send 'rpool/vm-106-disk-1@rep_default_2015-09-24_12:30:49': Broken pipe
COMMAND:
zfs send -v rpool/vm-106-disk-1@rep_default_2015-09-24_12:30:49 | zfs recv ouragan:rpool/vm-106-disk-1@rep_default_2015-09-24_12:30:49
GET ERROR:
cannot open 'ouragan:rpool/vm-106-disk-1': dataset does not exist
cannot receive new filesystem stream: dataset does not exist
Now it seems to work from the sender side, but on the receiver side I get the error « dataset does not exist ». Of course, isn't it supposed to be created?
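From what I have read, zfs recv cannot take a remote host:dataset target by itself, so I suppose the manual equivalent would have to pipe through ssh, something like this (just my guess, using my other node ouragan):

zfs send -v rpool/vm-106-disk-1@rep_default_2015-09-24_12:30:49 | ssh root@ouragan zfs recv rpool/vm-106-disk-1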
I am completely new to ZFS, so surely I'm doing something wrong… for example, I don't understand the difference between a volume and a dataset; I searched a lot on the web but nothing helped me understand it clearly, and I suspect that could be the issue.
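If I understood correctly (please correct me if not), a plain dataset is a filesystem while a volume (zvol) is a block device, which is what Proxmox uses for VM disks, e.g. (hypothetical names):

zfs create rpool/somefs         # filesystem dataset, gets a mountpoint
zfs create -V 5G rpool/somevol  # volume (zvol), appears as a block device

But I am not sure how that affects pve-zsync.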
Is there a way to tell the command that the disk is not at the root of the pool?
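Maybe the --source option can take the full dataset path instead of just the VMID? Something like this is what I would try next (guessing the syntax from the wiki, hostname and target pool from my setup, and I am not sure whether a hostname instead of an IP is accepted):

pve-zsync sync --source rpool/disks/vm-106-disk-1 --dest ouragan:rpool/BKP_24H --verbose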
Thank you very much if someone can help me.
P.S. I’m posting on the forum too (not sure which is the best place to ask).
Best regards,
Jean-Laurent Ivars
Responsable Technique | Technical Manager
22, rue Robert - 13007 Marseille
Tel: 09 84 56 64 30 - Mobile: 06.52.60.86.47
Linkedin <http://fr.linkedin.com/in/jlivars/> | Viadeo <http://www.viadeo.com/fr/profile/jean-laurent.ivars> | www.ipgenius.fr <https://www.ipgenius.fr/>