[PVE-User] Ceph or Gluster

Brian :: bc at iptel.co
Fri Apr 22 23:50:57 CEST 2016


Hi Lindsay,

With NVMe journals on a 3-node, 4-OSD cluster, if I do a quick dd of a
1GB file in a VM I see 2.34Gbps on the storage network straight away,
so if I were only using 1Gbps here the network would be the
bottleneck. If I run the same test in 2 VMs, traffic hits 4.19Gbps on
the storage network.
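
For anyone who wants to see the same thing, watching the dedicated
storage interface with iftop while the dd runs is enough (the
interface name below is just a placeholder, use whichever NIC carries
your Ceph traffic):

    # live throughput on the storage NIC (eth1 is an example name)
    iftop -i eth1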

The throughput inside the VM was 1073741824 bytes (1.1 GB) copied,
3.43556 s, 313 MB/s (replica 3).
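
The test itself was nothing fancy, just a plain dd inside the guest;
roughly the following, though the exact flags and target path are from
memory so treat them as approximate:

    # write a 1GB file with O_DIRECT so the guest page cache
    # doesn't inflate the result
    dd if=/dev/zero of=/root/ddtest.img bs=1M count=1024 oflag=direct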

Would be very interested in hearing more about your Gluster setup - I
don't know anything about it. How many nodes are involved?





On Fri, Apr 22, 2016 at 3:02 PM, Lindsay Mathieson
<lindsay.mathieson at gmail.com> wrote:
> On 22/04/2016 11:31 PM, Brian :: wrote:
>>
>> 10Gbps or faster at a minimum or you will have pain. Even with 4
>> nodes and 4 spinner disks in each node you will be maxing out a
>> 1Gbps network.
>
>
>
> Can't say I saw that on our cluster.
>
> - 3 Nodes
> - 3 OSDs per Node
> - SSD journals for each OSD.
> - 2*1G Eth in LACP Bond dedicated to Ceph
> - 1G Admin Net
> - Size = 3 (Replica 3)
>
> Never came close to maxing out a single 1Gbps connection; write throughput and IOPS
> are terrible. Read is pretty good though.
>
> Currently trialing Gluster 3.7.11 on ZFS bricks, Replica 3 also. Triple the
> throughput and IOPS I was getting with Ceph, maxes out the 2*1G connection,
> and it also seems to deal with VM I/O spikes better, not letting other VMs be
> stalled.
>
> Not convinced it's as robust as Ceph yet; give it a few more weeks. It does
> cope very well with failover and brick heals (using 64MB shards).
>
> --
> Lindsay Mathieson
>
>
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
