[PVE-User] scenario

Dimitris Beletsiotis dimitris.beletsiotis at gmail.com
Mon Jan 19 22:49:30 CET 2015

Hi there,

Alternatively you can use OmniOS instead of FreeNAS as the shared storage.
In this case you get the advantage of the integrated Proxmox ZFS plugin,
which provides live snapshots and cloning of VMs using ZFS's native
snapshot and clone features.
See http://pve.proxmox.com/wiki/Storage:_ZFS
OmniOS is preferred in this case because of its mature, stable COMSTAR
iSCSI target.
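As a rough sketch, the ZFS-over-iSCSI storage described on that wiki page is defined in /etc/pve/storage.cfg on the Proxmox host. The storage ID, portal address, target IQN, and pool name below are placeholders for illustration, not values from this thread:

```
# /etc/pve/storage.cfg -- hypothetical ZFS-over-iSCSI entry for an
# OmniOS/COMSTAR storage box; all names and addresses are examples.
zfs: omnios-zfs
        portal 192.168.0.10
        target iqn.2010-08.org.illumos:02:target0
        pool tank
        iscsiprovider comstar
        content images
```

With an entry like this, Proxmox creates a zvol per VM disk on the OmniOS pool and snapshots/clones it via ZFS, which is what enables the live-snapshot feature mentioned above.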

Dimitris Beletsiotis

On Mon, Jan 19, 2015 at 11:12 PM, Paul Gapes <Paul.Gapes at wfa.org.nz> wrote:

> I agree with Adam, iSCSI would be the better option, but I'd go RAID10,
> not RAID5 or any of the RAID-Z options.
> And do it without a hardware RAID controller (i.e. let FreeNAS manage
> the disks), as ZFS and hardware RAID don't really play well together.
> RAID-Z (Z2, etc.) will give good read speeds, but a RAID-Z vdev only
> delivers roughly the random-write performance of a single disk, so it
> can end up maxing out at around 80 MB/s.
> So... iSCSI multipath + RAID10 will give you the best performance. I've
> had a small site in production like this for the last two years, running
> faultlessly, and the improvements to FreeNAS's iSCSI target in version
> 9.3 are looking very good.
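The striped-mirror ("RAID10") layout recommended above would be built on the storage box roughly like this; the pool name "tank" and disk names da0..da3 are hypothetical, chosen only for the sketch:

```
# Create a pool striped across two mirrors -- ZFS's RAID10 equivalent.
# Each mirror survives one disk failure; writes stripe across both
# mirrors, avoiding the single-vdev write bottleneck of RAID-Z.
zpool create tank mirror da0 da1 mirror da2 da3

# Verify the vdev layout: both mirror vdevs should appear under "tank".
zpool status tank
```

Adding more mirror pairs to the pool later (`zpool add tank mirror da4 da5`) scales write performance, which RAID-Z vdevs cannot do without adding a whole new vdev of the same width.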
> -----Original Message-----
> From: pve-user [mailto:pve-user-bounces at pve.proxmox.com] On Behalf Of
> Adam Thompson
> Sent: Tuesday, 20 January 2015 7:52 a.m.
> To: XX0001XX YY0001YY; proxmoxve
> Subject: Re: [PVE-User] scenario
> On 2015-01-19 12:37 PM, XX0001XX YY0001YY wrote:
> > Hello,
> >
> > Someone can tell me if my scenario is ok:
> >
> > Two servers :
> >
> > - A Proxmox server with RAID5
> > - A FreeNAS storage server with RAID5.
> >
> > On the Proxmox server I configure an NFS share from FreeNAS (FreeNAS
> > sharing a ZFS volume, a zvol); both servers communicate over 10 Gbps.
> >
> > On Proxmox, there will be at most 30 VMs.
> >
> > Is this good?
> It will work.
> Exporting iSCSI would be better.
> Not using hardware RAID on FreeNAS, and using software RAID-Z, Z2 or Z3,
> would be even better.
> Why do you need RAID5 on the Proxmox server if the VMs are stored on a
> separate system?  If it is your only Proxmox system (i.e. not a cluster)
> then RAID is useful, but otherwise not really necessary.
> Unless you're running OpenVZ containers, in which case forget everything
> I've said here, except that LVM-over-iSCSI might still be better/faster
> than NFS... not 100% certain, I don't use containers.
> If the storage server is dedicated to Proxmox, then iSCSI allocation isn't
> an issue - just allocate 100% of your disk space to the iSCSI volume(s).
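Allocating the disk space to iSCSI as suggested above typically means carving a zvol out of the pool and exporting it through the target. On an OmniOS/COMSTAR box that looks roughly like the sketch below; the pool name, zvol name, and 2T size are made-up placeholders (on FreeNAS the same zvol would instead be attached to an extent in the iSCSI GUI):

```
# Hypothetical example: back an iSCSI LUN with a zvol on OmniOS.
# Pool "tank", zvol "pve-lun0", and the 2T size are placeholders.
zfs create -V 2T tank/pve-lun0

# COMSTAR: register the zvol as a logical unit, expose it to
# initiators, and create an iSCSI target for it.
sbdadm create-lu /dev/zvol/rdsk/tank/pve-lun0
stmfadm add-view <LU-name-printed-by-sbdadm>
itadm create-target
```

The Proxmox side then points at that target and manages LVM (or raw LUNs) on top of it.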
> --
> -Adam Thompson
>   athompso at athompso.net
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user