[PVE-User] VM clone with 2 disks fails on ZFS-iSCSI IET storage

Mikhail m at plus-plus.su
Sat Mar 19 09:57:39 CET 2016


On 03/19/2016 08:29 AM, Markus Köberl wrote:
>> and just now I tried ZFS send/receive on the storage system to copy
>> volumes. I was very surprised that the speed tops out at 100MB/s.
> 
> maybe you force your pool to only use 4k blocks, see later.
> have you also activated jumbo frames on the network? 

Yes, I'm using an MTU of 8000 across all systems connected to the storage.
And yes, the blocks were 4k.
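For anyone double-checking the same two settings, both can be verified on the storage host. This is only a sketch; the interface, pool, and volume names are placeholders for whatever your setup actually uses:

```shell
ip link show eth1 | grep -o 'mtu [0-9]*'   # confirm jumbo frames are actually active
zfs get volblocksize rpool/vm-100-disk-1   # per-zvol block size, fixed at creation
zpool get ashift rpool                     # ashift=12 means 4k physical sectors
```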

> 
> Setting blocksize to 4k will likely create all volumes with a maximum blocksize of 4k.
> I am using 64k here (produces more fragmentation but is faster).
> Run zpool history on the storage
> and see entries like:
> 2014-07-04.20:11:06 zpool create -f -O atime=off -o ashift=12 -m none nsas35 mirror SASi1-6 SASi2-6 mirror SASi1-7 SASi2-7
> ...
> 2016-03-03.14:51:39 zfs create -b 64k -V 20971520k nsas35/vm-129-disk-1
> 2016-03-03.14:52:42 zfs create -b 64k -V 157286400k nsas35/vm-129-disk-2
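Worth noting for anyone in the same spot: volblocksize is fixed at creation time, so an existing 4k zvol cannot be switched to 64k in place; the data has to be copied to a newly created volume. A rough sketch (names and sizes are placeholders, and the VM should be stopped first):

```shell
zfs create -b 64k -V 32G rpool/vm-100-disk-1-64k   # new zvol with 64k volblocksize
dd if=/dev/zvol/rpool/vm-100-disk-1 \
   of=/dev/zvol/rpool/vm-100-disk-1-64k bs=1M      # block-copy the old volume
zfs destroy rpool/vm-100-disk-1                    # then swap the names over
zfs rename rpool/vm-100-disk-1-64k rpool/vm-100-disk-1
```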

There's no more history; last night I converted the storage system to MD
RAID10 with LVM. I wish I had more time to run experiments, but my time
budget for this is exhausted, and I need a stable storage system by April.

> 
>>> 	iscsiprovider iet
>>> 	portal 192.168.4.1
>>> 	target iqn.2016-03.eu.myorg:rpool
>>> 	sparse
>>> 	nowritecache
> 
> I also do not have sparse and nowritecache in my config
> 
>>> 	content images
> 
> Until a month ago I had a Nexenta store (OpenSolaris) running a striped mirror across 4 drives.
> The 4 drives were the bottleneck. An old Debian-based zfsonlinux server (stuff lying around) using 8 disks works faster for us.
> 
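On the Proxmox side, I believe the storage definition also accepts a blocksize option, so new zvols would get created with -b 64k automatically. Something along these lines, with portal/target taken from the config quoted above and the storage name and blocksize line being my assumptions:

```
zfs: zfs-iscsi
	iscsiprovider iet
	portal 192.168.4.1
	target iqn.2016-03.eu.myorg:rpool
	blocksize 64k
	content images
```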

So maybe I will have better luck next time, on my next cluster =)

Mikhail.



