[PVE-User] VM clone with 2 disks fails on ZFS-iSCSI IET storage
Mikhail
m at plus-plus.su
Fri Mar 18 21:53:08 CET 2016
Greetings Sebastian,
Thanks for your response.
I did not check exactly how ZFS accesses the disks; there may be a
SAS backplane between ZFS and the disks, which as far as I know is
not ideal for ZFS.
There's plenty of RAM on my storage server - 32GB - and a 10Gbit
connection from the Proxmox nodes to the storage, so I doubt this
could be the issue.
As for the SSD, L2ARC and ZIL suggestions - unfortunately, the hardware
is already in place and sits a couple of country borders away from me,
so there's no way I can add anything to the existing setup. My cluster
needs to be up and running by April, so there's no more time for tests.
Anyway, the decision is made and I'm going back to LVM over iSCSI,
since I cannot use my Intel X550 10Gbit NICs with Solaris.
Thanks everyone for responding!
On 03/18/2016 07:12 PM, Sebastian Oswald wrote:
> Hello Mikhail,
>
> You should _always_ access the zvol block devices via /dev/zvol/... !
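>
> For illustration (the pool, volume and IQN names below are made up,
> not taken from your setup): the entries under /dev/zvol are stable
> udev symlinks to the kernel's zd* devices,
>
>     ls -l /dev/zvol/tank/vm-100-disk-1
>     # usually a symlink like ../../zd16, and the zd number can
>     # change between reboots or pool imports
>
> so if a LUN is ever defined by hand in ietd.conf it should point at
> the symlink, roughly:
>
>     Target iqn.2016-03.com.example:tank.vm-100-disk-1
>         Lun 0 Path=/dev/zvol/tank/vm-100-disk-1,Type=blockio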
>
> I run ZoL on several debian/devuan jessie machines for development, on
> my private PCs and servers, and on backup systems in production, with
> really good performance and reliability so far.
>
> On my "playground" I also use zvols as FC SCSI targets for 2 hosts,
> both running ZFS on top of that zvol. No problems so far with these,
> even when I deliberately tried to break it (because that's why it's
> called a test setup, isn't it?).
>
> I get >800MB/s (just under 7GBit/s) at the raw multipath device on the
> initiator with a 2x4GBit link and an "out of the box" SCST setup, so
> nothing fine-tuned or optimized yet. The pool consists of cheap SATA
> disks ranging from 1-4TB and 2 SATA SSDs for caching, so nothing fancy
> or fast, and even different vdev sizes mixed together (again: a
> testing playground!).
>
> You might want to throw in some SSDs for L2ARC and ZIL on your storage
> host, and maybe some more RAM, as ZFS really thrives on properly sized
> caches and becomes completely unusable if its caches are flushed to
> swap (as with every filesystem or LVM). Also check the (stupidly low)
> default settings for max bandwidth in the VM/container config - these
> have bitten me several times in the past.
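>
> (Rough sketch only - the pool name and device paths below are
> placeholders, not taken from your setup, and log/cache devices are
> best referenced by stable /dev/disk/by-id names rather than sdX
> letters:
>
>     zpool add tank log mirror /dev/disk/by-id/ata-SSD_A-part1 /dev/disk/by-id/ata-SSD_B-part1
>     zpool add tank cache /dev/disk/by-id/ata-SSD_A-part2
>
> By the bandwidth settings I mean the per-disk mbps/mbps_rd/mbps_wr
> options on the disk line of the VM config, e.g.
> "qm set <vmid> --virtio0 <storage>:<volume>,mbps_rd=...".)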
>
> I haven't used Proxmox with ZFS yet; that's on the agenda for the next
> few weeks or so, before/when installing the new (third) Proxmox
> production system. But as long as Proxmox isn't doing anything
> uber-fancy or completely different from the normal ZoL approach, I
> don't expect any gremlins to bite me from the ZFS side of the setup...
>
>
> Regards,
> Sebastian
>
>
> PS: For general ZFS knowledge I can HIGHLY recommend "FreeBSD Mastery:
> ZFS" by Michael W. Lucas. Most of it applies 1:1 to ZoL and it gives a
> very good insight into the inner workings and thinking of ZFS.
>
>