[PVE-User] Fwd: can't make zfs-zsync working
Jean-Laurent Ivars
jl.ivars at ipgenius.fr
Fri Sep 25 17:27:49 CEST 2015
Hi everyone,
Nobody answered my previous mail, maybe for lack of inspiration...
I am continuing my investigation (not giving up). If I run the command according to the wiki:
root@cyclone ~ # pve-zsync sync --source 106 --dest ouragan:rpool/BKP_24H --verbose
COMMAND:
zfs list -r -t snapshot -Ho name, -S creation rpool/vm-106-disk-1
GET ERROR:
cannot open 'rpool/vm-106-disk-1': dataset does not exist
I assume this is because my VM disk is not at the root of the rpool... so I tried specifying the disk I want to sync:
root@cyclone ~ # pve-zsync sync --source rpool/disks/vm-106-disk-1 --dest ouragan:rpool/BKP_24H --verbose
send from @ to rpool/disks/vm-106-disk-1@rep_default_2015-09-25_16:55:51 estimated size is 1,26G
total estimated size is 1,26G
TIME SENT SNAPSHOT
warning: cannot send 'rpool/disks/vm-106-disk-1@rep_default_2015-09-25_16:55:51': Relais brisé (pipe) [broken pipe]
COMMAND:
zfs send -v rpool/disks/vm-106-disk-1@rep_default_2015-09-25_16:55:51 | zfs recv ouragan:rpool/BKP_24H/vm-106-disk-1@rep_default_2015-09-25_16:55:51
GET ERROR:
cannot open 'ouragan:rpool/BKP_24H/vm-106-disk-1': dataset does not exist
cannot receive new filesystem stream: dataset does not exist
It is always the same error from the remote side: dataset does not exist.
However, if I create a snapshot and send it myself, it seems to work:
root@cyclone ~ # zfs send rpool/disks/vm-106-disk-1@25-09-2015_16h58m14s | ssh -p 2223 ouragan zfs receive rpool/BKP_24H/vm-106-disk-1
root@cyclone ~ #
No error... and I can see it from the other side (I even tried to boot from it and it works):
root@ouragan ~ # zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 3,39T 3,63T 96K /rpool
rpool/BKP_24H 964M 3,63T 96K /rpool/BKP_24H
rpool/BKP_24H/vm-106-disk-1 963M 3,63T 963M -
rpool/ROOT 2,37T 3,63T 96K /rpool/ROOT
root@ouragan ~ # zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
rpool/BKP_24H/vm-106-disk-1@25-09-2015_16h58m14s 0 - 963M -
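For reference, I created that snapshot by hand beforehand, with nothing fancier than the standard command:

zfs snapshot rpool/disks/vm-106-disk-1@25-09-2015_16h58m14s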
But I can't do the same thing for a second snapshot:
root@cyclone ~ # zfs send rpool/disks/vm-106-disk-1@25-09-2015_17h03m07s | ssh -p 2223 ouragan zfs receive rpool/BKP_24H/vm-106-disk-1
cannot receive new filesystem stream: destination 'rpool/BKP_24H/vm-106-disk-1' exists
must specify -F to overwrite it
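From what I have read so far, the way to send the following snapshots into an existing dataset is an incremental send with -i, so a home-made script would presumably do something like this (the snapshot names below are just placeholders):

# first snapshot: full send, creates the remote dataset
zfs send rpool/disks/vm-106-disk-1@snapA | ssh -p 2223 ouragan zfs receive rpool/BKP_24H/vm-106-disk-1
# later snapshots: send only the changes since the previous snapshot
zfs send -i rpool/disks/vm-106-disk-1@snapA rpool/disks/vm-106-disk-1@snapB | ssh -p 2223 ouragan zfs receive rpool/BKP_24H/vm-106-disk-1

But keeping track of which snapshot was sent last is exactly the bookkeeping I was hoping pve-zsync would do for me.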
I could try to work out how to send multiple snapshots and end up writing a script myself, but it seems a bit silly not to use the tool that is already provided...
FYI, I changed the default SSH port for security reasons; in case that was the problem, I tried going back to the standard port, but it changes nothing.
I noticed there is a difference between the command used by the pve-zsync script and the command that works (at least for the first snapshot):
pve-zsync:
zfs send -v rpool/disks/vm-106-disk-1@rep_default_2015-09-25_16:55:51 | zfs recv ouragan:rpool/BKP_24H/vm-106-disk-1@rep_default_2015-09-25_16:55:51
my command:
zfs send rpool/disks/vm-106-disk-1@25-09-2015_16h58m14s | ssh -p 2223 ouragan zfs receive rpool/BKP_24H/vm-106-disk-1
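If I spell the difference out, my (possibly wrong) understanding is that the receive has to run on the remote host, so the pipeline would really need to look like:

zfs send -v rpool/disks/vm-106-disk-1@rep_default_2015-09-25_16:55:51 | ssh ouragan zfs recv rpool/BKP_24H/vm-106-disk-1@rep_default_2015-09-25_16:55:51

because, run locally, 'ouragan:rpool/BKP_24H/vm-106-disk-1' is just the name of a dataset that does not exist, which would match the error I get above. Maybe the verbose output simply doesn't show the ssh part and the real problem is elsewhere.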
I tried to have a look at the pve-zsync script, since it seems to be a Perl script, but it didn't help me (I only know bash).
If someone comes by with an idea to help me make progress, I would be very grateful.
Can someone please answer me (just saying hello would be great)? I'm not even sure my messages reach the mailing list (I don't write often).
Best regards,
Jean-Laurent Ivars
Responsable Technique | Technical Manager
22, rue Robert - 13007 Marseille
Tel: 09 84 56 64 30 - Mobile: 06.52.60.86.47
Linkedin <http://fr.linkedin.com/in/jlivars/> | Viadeo <http://www.viadeo.com/fr/profile/jean-laurent.ivars> | www.ipgenius.fr <https://www.ipgenius.fr/>
> Begin forwarded message:
>
> From: Jean-Laurent Ivars <jl.ivars at ipgenius.fr>
> Date: 24 September 2015 12:44:29 UTC+2
> To: "pve-user at pve.proxmox.com" <pve-user at pve.proxmox.com>
> Subject: can't make zfs-zsync working
>
> Dear co-users,
>
> I just installed a two-node Proxmox cluster from the latest 3.4 ISO, and I have a subscription so I applied all the updates. The pve-zsync function is really great and I would really like to get it working.
>
> If someone has already managed to make it work, could you please tell me how? I followed all the instructions on this page:
> https://pve.proxmox.com/wiki/PVE-zsync
>
> Not working for me :(
>
> First, a little more about my configuration: I did a completely ZFS-based install and created some ZFS datasets, but I could not add them to the storage configuration with the GUI, so I had to add them directly in storage.cfg:
>
> dir: local
>         path /var/lib/vz
>         content images,iso,vztmpl,backup,rootdir
>         maxfiles 3
>
> zfspool: Disks
>         pool rpool/disks
>         content images
>         sparse
>
> zfspool: BKP_24H
>         pool rpool/BKP_24H
>         content images
>         sparse
>
> zfspool: rpool
>         pool rpool
>         content images
>         sparse
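>
> For completeness, the two extra datasets I created for this (rpool/disks and rpool/BKP_24H) were made with plain zfs create, roughly:
>
> zfs create rpool/disks
> zfs create rpool/BKP_24H
>
> (the vm-*-disk-* volumes you see below were then created by Proxmox itself when I created the VM disks on those storages)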
>
> Here is what my ZFS layout looks like:
>
> NAME USED AVAIL REFER MOUNTPOINT
> rpool 199G 6,83T 96K /rpool
> rpool/BKP_24H 96K 6,83T 96K /rpool/BKP_24H
> rpool/ROOT 73,1G 6,83T 96K /rpool/ROOT
> rpool/ROOT/pve-1 73,1G 6,83T 73,1G /
> rpool/disks 92,5G 6,83T 96K /rpool/disks
> rpool/disks/vm-100-disk-1 2,40G 6,83T 2,40G -
> rpool/disks/vm-106-disk-1 963M 6,83T 963M -
> rpool/disks/vm-107-disk-1 3,61G 6,83T 3,61G -
> rpool/disks/vm-108-disk-1 9,29G 6,83T 9,29G -
> rpool/disks/vm-110-disk-1 62,9G 6,83T 62,9G -
> rpool/disks/vm-204-disk-1 13,4G 6,83T 13,4G -
> rpool/swap 33,0G 6,86T 64K -
>
> And, as an example, the configuration of my test machine, 106.conf:
>
> balloon: 256
> bootdisk: virtio0
> cores: 1
> ide0: none,media=cdrom
> memory: 1024
> name: Deb-Test
> net0: virtio=52:D5:C1:5C:3F:61,bridge=vmbr1
> ostype: l26
> scsihw: virtio-scsi-pci
> sockets: 1
> virtio0: Disks:vm-106-disk-1,cache=writeback,size=5G
>
> So now, the result when I try the command given in the wiki:
>
> COMMAND:
> zfs list -r -t snapshot -Ho name, -S creation rpool/vm-106-disk-1
> GET ERROR:
> cannot open 'rpool/vm-106-disk-1': dataset does not exist
>
> I understand the command expects the VM disk to be at the root of the pool, so I moved the disk and also tried sending directly to the pool on the other side, but with no more luck:
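>
> (For anyone wondering: moving the zvol itself is just a zfs rename, e.g. zfs rename rpool/disks/vm-106-disk-1 rpool/vm-106-disk-1, plus updating the disk reference in the VM config.)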
>
> send from @ to rpool/vm-106-disk-1@rep_default_2015-09-24_12:30:49 estimated size is 1,26G
> total estimated size is 1,26G
> TIME SENT SNAPSHOT
> warning: cannot send 'rpool/vm-106-disk-1@rep_default_2015-09-24_12:30:49': Relais brisé (pipe)
> COMMAND:
> zfs send -v rpool/vm-106-disk-1@rep_default_2015-09-24_12:30:49 | zfs recv ouragan:rpool/vm-106-disk-1@rep_default_2015-09-24_12:30:49
> GET ERROR:
> cannot open 'ouragan:rpool/vm-106-disk-1': dataset does not exist
> cannot receive new filesystem stream: dataset does not exist
>
>
> Now it seems to work from the sender side, but from the receiver side I get the error « dataset does not exist ». It's supposed to be created automatically, isn't it?
>
> I am completely new to ZFS, so I'm surely doing something wrong... for example, I don't understand the difference between a volume and a dataset; I searched a lot on the web but nothing helped me understand it clearly, and I suspect that could be the issue.
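>
> From what I have gathered so far, the difference is roughly this (the names here are just examples, please correct me if I'm wrong):
>
> zfs create rpool/somedir             # a filesystem dataset: it gets mounted and holds files
> zfs create -V 5G rpool/some-volume   # a volume (zvol): a raw block device, no mountpoint
>
> Both kinds show up in zfs list as "datasets"; the volumes are what Proxmox uses for VM disks, which would explain the "-" in the MOUNTPOINT column for the vm-*-disk-* entries above.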
>
> Is there a way to tell the command that the disk is not at the root of the pool?
>
> Thank you very much if someone can help me.
> P.S. I'm posting in the forum too (not sure which is the best place to ask).
>
> Best regards,
>
>
> Jean-Laurent Ivars
> Responsable Technique | Technical Manager
> 22, rue Robert - 13007 Marseille
> Tel: 09 84 56 64 30 - Mobile: 06.52.60.86.47
> Linkedin <http://fr.linkedin.com/in/jlivars/> | Viadeo <http://www.viadeo.com/fr/profile/jean-laurent.ivars> | www.ipgenius.fr <https://www.ipgenius.fr/>