[PVE-User] VM clone with 2 disks fails on ZFS-iSCSI IET storage
Markus Köberl
markus.koeberl at tugraz.at
Sat Mar 19 06:29:22 CET 2016
On Friday 18 March 2016 18:31:10 Mikhail wrote:
> My ZFS pool runs 4 x 4TB SAS drives in RAID10:
>
> # zpool status
> pool: rpool
> state: ONLINE
> scan: resilvered 1.55G in 0h0m with 0 errors on Mon Mar 7 23:29:50 2016
> config:
>
> NAME          STATE     READ WRITE CKSUM
> rpool         ONLINE       0     0     0
>   mirror-0    ONLINE       0     0     0
>     sda       ONLINE       0     0     0
>     sdb       ONLINE       0     0     0
>   mirror-1    ONLINE       0     0     0
>     sdc       ONLINE       0     0     0
>     sdd       ONLINE       0     0     0
>
> and just now I tried ZFS send/receive on the storage system to copy
> volumes. I was very much surprised that the speed tops out at
> 100MB/s.
Maybe you have forced your pool to use only 4k blocks, see below.
Have you also activated jumbo frames on the network?
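To check both, something along these lines should work on the storage server (the
interface name eth0 is only a placeholder for whatever NIC carries your iSCSI traffic):

# ashift the pool was created with (ashift=12 means 4k sectors)
zdb -C rpool | grep ashift
# current MTU of the storage NIC; jumbo frames would show mtu 9000
ip link show eth0 | grep mtu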
> I just guess this shows how unstable and untested ZFS on Linux is. This
> also backs up the Proxmox wiki page, which does not suggest using ZFS
> over iSCSI on Linux in production.
>
> So I think it is now about time to switch to old-school LVM over iSCSI
> in my case, before I put some real data on this cluster.
>
> Mikhail.
>
> On 03/18/2016 05:59 PM, Mikhail wrote:
> > Hello,
> >
> > I'm running a 3-node cluster with the latest PVE 4.1-1 community edition.
> > My shared storage is ZFS over iSCSI (the ZFS storage server is Debian
> > Jessie with IET).
> >
> > There's a problem cloning a VM that has 2 (or possibly 2 or more) disks
> > attached in this setup. The problem is that one disk gets copied, and
> > then the "Clone" task fails with the message: TASK ERROR: clone failed: File
> > exists. at /usr/share/perl5/PVE/Storage/LunCmd/Iet.pm line 376.
> >
> > There's no problem cloning the same VM if it has only one disk.
> >
> > Here are the steps to reproduce:
> >
> > 1) create VM with 2 disks (both on shared storage)
> > 2) shutdown VM
> > 3) attempt to clone this VM to another VM
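For reference, the same steps can be done with qm on one of the PVE nodes; this is
only a sketch, reusing the VM IDs and the storage name from this thread as examples:

# create a test VM with two disks on the shared storage (sizes in GB)
qm create 8002 --memory 1024 --cores 1 --net0 virtio,bridge=vmbr0
qm set 8002 --virtio0 storage-1:30 --virtio1 storage-1:150
# shut it down and try a full clone to a new VM ID
qm shutdown 8002
qm clone 8002 80123 --full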
> >
> > More details in my case:
> >
> > 1) VM 8002 - source VM to clone, here's output from storage server -
> >
> > root at storage:/etc/iet# zfs list|grep 8002
> > rpool/vm-8002-disk-1 30.5G 6.32T 30.5G -
> > rpool/vm-8002-disk-2 77.8G 6.32T 77.8G -
> >
> > root at storage:/etc/iet# cat /etc/iet/ietd.conf
> > Target iqn.2016-03.eu.myorg:rpool
> > Lun 1 Path=/dev/rpool/vm-1001-disk-2,Type=blockio
> > Lun 2 Path=/dev/rpool/vm-1002-disk-1,Type=blockio
> > Lun 7 Path=/dev/rpool/vm-8003-disk-1,Type=blockio
> > Lun 5 Path=/dev/rpool/vm-101-disk-1,Type=blockio
> > Lun 3 Path=/dev/rpool/vm-8002-disk-1,Type=blockio
> > Lun 6 Path=/dev/rpool/vm-8002-disk-2,Type=blockio
> > Lun 0 Path=/dev/rpool/vm-8201-disk-1,Type=blockio
> > Lun 8 Path=/dev/rpool/vm-8301-disk-1,Type=blockio
> > Lun 9 Path=/dev/rpool/vm-8301-disk-2,Type=blockio
> > Lun 10 Path=/dev/rpool/vm-8302-disk-1,Type=blockio
> > Lun 4 Path=/dev/rpool/vm-8001-disk-1,Type=blockio
> >
> > root at storage:/etc/iet# cat /proc/net/iet/volume
> > tid:1 name:iqn.2016-03.eu.myorg:rpool
> > lun:1 state:0 iotype:blockio iomode:wt blocks:1048576000 blocksize:512
> > path:/dev/rpool/vm-1001-disk-2
> > lun:2 state:0 iotype:blockio iomode:wt blocks:67108864 blocksize:512
> > path:/dev/rpool/vm-1002-disk-1
> > lun:5 state:0 iotype:blockio iomode:wt blocks:62914560 blocksize:512
> > path:/dev/rpool/vm-101-disk-1
> > lun:3 state:0 iotype:blockio iomode:wt blocks:62914560 blocksize:512
> > path:/dev/rpool/vm-8002-disk-1
> > lun:6 state:0 iotype:blockio iomode:wt blocks:314572800 blocksize:512
> > path:/dev/rpool/vm-8002-disk-2
> > lun:0 state:0 iotype:blockio iomode:wt blocks:83886080 blocksize:512
> > path:/dev/rpool/vm-8201-disk-1
> > lun:8 state:0 iotype:blockio iomode:wt blocks:31457280 blocksize:512
> > path:/dev/rpool/vm-8301-disk-1
> > lun:9 state:0 iotype:blockio iomode:wt blocks:104857600 blocksize:512
> > path:/dev/rpool/vm-8301-disk-2
> > lun:10 state:0 iotype:blockio iomode:wt blocks:104857600 blocksize:512
> > path:/dev/rpool/vm-8302-disk-1
> > lun:12 state:0 iotype:blockio iomode:wt blocks:62914560 blocksize:512
> > path:/dev/rpool/vm-8091-disk-1
> > lun:4 state:0 iotype:blockio iomode:wt blocks:62914560 blocksize:512
> > path:/dev/rpool/vm-8001-disk-1
> >
> >
> >
> > VM config:
> >
> > root at pm1:/etc/pve/nodes/pm2/qemu-server# pwd
> > /etc/pve/nodes/pm2/qemu-server
> > root at pm1:/etc/pve/nodes/pm2/qemu-server# cat 8002.conf
> > boot: cdn
> > bootdisk: virtio0
> > cores: 1
> > ide2: isoimages:iso/systemrescuecd-x86-4.7.1.iso,media=cdrom,size=469942K
> > memory: 1024
> > name: rep
> > net0: virtio=32:38:38:39:39:33,bridge=vmbr0,tag=80
> > numa: 0
> > ostype: l26
> > smbios1: uuid=8b0b1ab8-d3e3-48ae-8834-edd0e68a3c0c
> > sockets: 1
> > virtio0: storage-1:vm-8002-disk-1,size=30G
> > virtio1: storage-1:vm-8002-disk-2,size=150G
> >
> > Storage config:
> >
> > root at pm1:/etc/pve/nodes/pm2/qemu-server# cat /etc/pve/storage.cfg
> > dir: local
> >         path /var/lib/vz
> >         maxfiles 0
> >         content vztmpl,rootdir,images,iso
> >
> > nfs: isoimages
> >         path /mnt/pve/isoimages
> >         server 192.168.4.1
> >         export /rpool/shared/isoimages
> >         content iso
> >         options vers=3
> >         maxfiles 1
> >
> > zfs: storage-1
> >         pool rpool
> >         blocksize 4k
Setting blocksize 4k probably creates all volumes with a volblocksize of 4k.
I am using 64k here (produces more fragmentation but is faster).
Run zpool history on the storage server; on mine I see entries like:
2014-07-04.20:11:06 zpool create -f -O atime=off -o ashift=12 -m none nsas35 mirror SASi1-6 SASi2-6 mirror SASi1-7 SASi2-7
...
2016-03-03.14:51:39 zfs create -b 64k -V 20971520k nsas35/vm-129-disk-1
2016-03-03.14:52:42 zfs create -b 64k -V 157286400k nsas35/vm-129-disk-2
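You can also check directly what block size the existing volumes got, e.g. with the
volume names from your zfs list output:

# volblocksize of the two disks of VM 8002
zfs get volblocksize rpool/vm-8002-disk-1 rpool/vm-8002-disk-2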
> >         iscsiprovider iet
> >         portal 192.168.4.1
> >         target iqn.2016-03.eu.myorg:rpool
> >         sparse
> >         nowritecache
I also have neither sparse nor nowritecache in my config; a sketch of my zfs section follows below the quoted config.
> >         content images
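For comparison, the zfs section I use looks roughly like the following; treat it as a
sketch only, with the storage name, portal and target taken from your config above:

zfs: storage-1
        pool rpool
        blocksize 64k
        iscsiprovider iet
        portal 192.168.4.1
        target iqn.2016-03.eu.myorg:rpool
        content images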
Until one month ago I had a Nexenta store (OpenSolaris based) running with a striped mirror using 4 drives.
The 4 drives were the bottleneck. An old Debian-based ZFS on Linux server (stuff lying around) using 8 disks works faster for us.
> > 2) Attempting to clone 8002 to, let's say, 80123. Here's the task output
> > (some % lines cut):
> >
> > create full clone of drive virtio0 (storage-1:vm-8002-disk-1)
> > transferred: 0 bytes remaining: 32212254720 bytes total: 32212254720
> > bytes progression: 0.00 %
> > qemu-img: iSCSI Failure: SENSE KEY:ILLEGAL_REQUEST(5)
> > ASCQ:INVALID_OPERATION_CODE(0x2000)
> > transferred: 322122547 bytes remaining: 31890132173 bytes total:
> > 32212254720 bytes progression: 1.00 %
> > transferred: 32212254720 bytes remaining: 0 bytes total: 32212254720
> > bytes progression: 100.00 %
> > transferred: 32212254720 bytes remaining: 0 bytes total: 32212254720
> > bytes progression: 100.00 %
> > create full clone of drive virtio1 (storage-1:vm-8002-disk-2)
> > TASK ERROR: clone failed: File exists. at
> > /usr/share/perl5/PVE/Storage/LunCmd/Iet.pm line 376.
> >
> > after that, "zfs list" on storage shows there's one volume on ZFS:
> >
> > root at storage:/etc/iet# zfs list|grep 80123
> > rpool/vm-80123-disk-2 64K 6.32T 64K -
> >
> > Obviously it was created by PVE. rpool/vm-80123-disk-2 was removed
> > automatically when the task failed to complete.
> >
> > And here's what I see in /var/log/syslog on storage when this task fails:
> >
> > Mar 18 17:40:19 storage kernel: [840181.448223] zd240: unknown
> > partition table
> > Mar 18 17:40:19 storage ietd: unable to create logical unit 12 in target
> > 1: 17
> > Mar 18 17:40:21 storage kernel: [840182.870132] iscsi_trgt:
> > iscsi_volume_del(319) 1 b
> >
> >
> > So I guess this has something to do with IET.
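One thing that may be worth checking, just a guess from the output above:
/proc/net/iet/volume already lists lun:12 (vm-8091-disk-1), while ietd.conf has no
Lun 12 entry, and the syslog complains about creating logical unit 12 with error 17,
which is EEXIST ("File exists"). Comparing the LUN numbers the kernel target already
has with those in the static config might show the clash:

# LUN numbers currently active in the kernel target
grep -o 'lun:[0-9]*' /proc/net/iet/volume | sort -t: -k2 -n
# LUN numbers in the static config file
grep -o 'Lun [0-9]*' /etc/iet/ietd.conf | sort -k2 -n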
> >
> > regards,
> > Mikhail.
> > _______________________________________________
> > pve-user mailing list
> > pve-user at pve.proxmox.com
> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> >
>
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user