[PVE-User] Ceph or Gluster

Alexandre DERUMIER aderumier at odiso.com
Sat Apr 23 17:23:46 CEST 2016


@Lindsay

>>Never came close to maxing out a 1Gbps connection; write throughput and 
>>IOPS are terrible. Read is pretty good though.


>>Well your hardware is rather better than mine :) I'm just using 
>>consumer-grade SSDs for journals, which won't have anywhere near the 
>>performance of NVMe


Please don't use consumer SSDs for the Ceph journal, they are (very) bad at D_SYNC writes.

Please read this blog:

http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
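
The blog above benchmarks exactly this case: sustained 4k synchronous writes at queue depth 1, which is what a journal device sees. A minimal sketch of such a test with fio, assuming a spare device /dev/sdX (a placeholder; this writes raw data to the device and destroys its contents):

    # Sustained 4k sync writes, queue depth 1 -- the Ceph journal pattern.
    # WARNING: /dev/sdX is a placeholder; this wipes whatever is on it.
    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=ceph-journal-test

A decent journal SSD should sustain thousands of IOPS here; many consumer drives drop to a few hundred.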


----- Original Message -----
From: "Lindsay Mathieson" <lindsay.mathieson at gmail.com>
To: "proxmoxve" <pve-user at pve.proxmox.com>
Sent: Friday, 22 April 2016 16:02:19
Subject: Re: [PVE-User] Ceph or Gluster

On 22/04/2016 11:31 PM, Brian :: wrote: 
> 10Gbps or faster at a minimum or you will have pain. Even with 4 
> nodes and 4 spinner disks in each node you will be maxing out a 
> 1Gbps network. 


Can't say I saw that on our cluster. 

- 3 Nodes 
- 3 OSD's per Node 
- SSD journals for each OSD. 
- 2*1G Eth in LACP bond dedicated to Ceph (example bond config below) 
- 1G Admin Net 
- Size = 3 (Replica 3) 
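
For reference, a 2*1G LACP bond like the one above would typically be defined on a Proxmox/Debian node in /etc/network/interfaces roughly as follows (interface names and the address are placeholders, and the switch ports have to be configured for 802.3ad as well):

    # Hypothetical snippet from /etc/network/interfaces
    auto bond0
    iface bond0 inet static
        address 10.10.10.11            # dedicated Ceph network (placeholder)
        netmask 255.255.255.0
        bond-slaves eth2 eth3          # the two 1G NICs (placeholders)
        bond-mode 802.3ad              # LACP
        bond-miimon 100
        bond-xmit-hash-policy layer3+4 # spread flows across both links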

Never came close to maxing out a 1Gbps connection; write throughput and 
IOPS are terrible. Read is pretty good though. 

Currently trialing Gluster 3.7.11 on ZFS bricks, also Replica 3. I'm getting 
triple the throughput and IOPS I was getting with Ceph, and it maxes out the 
2*1G connection. It also seems to cope better with VM I/O spikes, without 
stalling the other VMs. 
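
For context, a replica-3 volume on ZFS-backed bricks is created along these lines (host names, volume name and brick paths are made up for illustration):

    # Hypothetical replica-3 volume, one ZFS dataset per node as the brick
    gluster volume create vmstore replica 3 \
        node1:/tank/gluster/vmstore \
        node2:/tank/gluster/vmstore \
        node3:/tank/gluster/vmstore
    gluster volume start vmstore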

Not convinced it's as robust as Ceph yet; give it a few more weeks. It 
does cope very well with failover and brick heals (using 64MB shards). 
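
Sharding is a per-volume setting; with the hypothetical volume name from the sketch above, 64MB shards would be enabled roughly like this:

    # Enable sharding with 64MB shards (volume name is a placeholder)
    gluster volume set vmstore features.shard on
    gluster volume set vmstore features.shard-block-size 64MB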

-- 
Lindsay Mathieson 



