[PVE-User] Ceph + ZFS?

Eneko Lacunza elacunza at binovo.es
Thu Jun 2 15:58:50 CEST 2016


On 02/06/16 at 15:40, Lindsay Mathieson wrote:
> On 2/06/2016 11:29 PM, Eneko Lacunza wrote:
>> We have it as per the defaults in PVE 4.x.
>
> No barriers then, not recommended for Ceph integrity.
>
>> We just built a 3-node 6-OSD cluster for 2 Windows VMs and another 
>> pair of Debian VMs, and performance is quite good for the 
>> resiliency/HA capabilities it offers us.
>
> I was running a 3-node 9-OSD cluster, but we run 10 Windows VMs per 
> node and Ceph wasn't coping - the VMs were very sluggish. Not just 
> IOPS; Ceph also didn't cope well in low-memory situations. Moving to 
> Gluster helped a lot, but we also switched to 12 disks (4 * 3TB WD 
> Red in RaidZ10 per node). That plus an SSD log/cache made a huge 
> difference.
>
> How much RAM do you have? What sort of networking? We're just running 
> 3*1G LACP bonded here.
We have 12GB of RAM per node and 3x1G Ethernet for 3 independent VLANs: 
public for the VMs, VM<->Ceph public, and Ceph private.
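
For reference, the Ceph side of that split is what the public/cluster 
network options in ceph.conf express. A minimal sketch, with placeholder 
subnets (ours differ):

    [global]
        public network = 192.168.10.0/24   # client/monitor traffic (VM<->Ceph)
        cluster network = 192.168.20.0/24  # OSD replication and heartbeats

Keeping replication on its own VLAN stops recovery traffic from 
competing with client IO.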

I think it all depends on the concurrent IO your VMs generate; in this 
case the nodes have plenty of CPU/RAM for the 4 VMs.

We have this kind of setup at several clients' sites and we're very 
happy with it. In our office the cluster has 9 OSDs and about 20 VMs 
running. This is mainly development work, so VM usage comes in spikes.
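
On the barriers point above: for ZFS-backed OSDs, one quick sanity 
check is the ZFS 'sync' property, which controls whether synchronous 
writes (and with them flush/barrier semantics) are honored. A minimal 
sketch; 'tank/ceph-osd' is just a placeholder dataset name:

    # 'standard' honors flush requests; 'disabled' skips them, which
    # risks Ceph journal corruption on power loss
    zfs get sync tank/ceph-osd
    zfs set sync=standard tank/ceph-osd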

Cheers
Eneko

-- 
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943493611
       943324914
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es
