[PVE-User] Ceph + ZFS?
Lindsay Mathieson
lindsay.mathieson at gmail.com
Thu Jun 2 15:40:11 CEST 2016
On 2/06/2016 11:29 PM, Eneko Lacunza wrote:
> We have it as per defaults in PVE 4.x.
No barriers then - not recommended for Ceph integrity.
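(To be clear, I'm assuming "defaults" here relates to whether sync
writes/flushes are actually honoured on the ZFS side. A quick check,
assuming the pool is called rpool - the name is just a placeholder:

    # show whether synchronous writes are honoured on this pool
    zfs get sync rpool
    # sync=standard honours application flushes; sync=disabled skips
    # them, which is the "no barriers" situation I'd avoid under Ceph

If yours shows sync=standard then ignore me.)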
> We just built a 3-node 6-OSD cluster for 2 Windows VMs and another
> pair of Debian VMs and performance is quite good for the resiliency/HA
> capabilities it offers to us.
I was running a 3-node 9-OSD cluster, but we run 10 Windows VMs per
node and Ceph wasn't coping - the VMs were very sluggish. Not just
IOPS; Ceph also didn't cope well in low-memory situations. Moving to
Gluster helped a lot, but we also switched to 12 disks (4 x 3TB WD Red
per node in ZFS RAID10, i.e. striped mirrors). That plus an SSD
log/cache made a huge difference.
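Roughly this layout per node, if it helps - pool name and device names
below are placeholders, not our actual ones:

    # two striped mirrors of the WD Reds, SSD split for log + cache
    zpool create tank mirror sdb sdc mirror sdd sde
    zpool add tank log sdf1      # SLOG partition on the SSD
    zpool add tank cache sdf2    # L2ARC partition on the SSD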
How much RAM do you have, and what sort of networking? We're just
running 3 x 1G LACP bonded here.
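For reference, the bond is plain 802.3ad in /etc/network/interfaces,
something like the below - interface names and the address are
placeholders rather than our exact config:

    auto bond0
    iface bond0 inet manual
        bond-slaves eth0 eth1 eth2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0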
--
Lindsay Mathieson