[PVE-User] Infiniband as Network backend for Ceph&Proxmox
f.cuseo at panservice.it
Sat Oct 7 17:36:37 CEST 2017
I have a 4-node cluster with PVE 4.4: 2 x 2 Gbit for the management and cluster network, 2 x 2 Gbit for the VM network, and 2 x Infiniband (10 Gbit) in an active/passive bond for the Ceph network (with IP over Infiniband).
With iperf I get about 7-8 Gbit/s. For Ceph I have 6 SATA WD Gold datacenter disks, with the journal on a disk partition (no SSD), and performance is not bad.
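For reference, an active/passive IPoIB bond like the one described above can be sketched in Debian-style /etc/network/interfaces (the format Proxmox uses). The interface names (ib0/ib1), bond name, and addresses below are placeholders, not taken from this post:

```
# Minimal sketch: active-backup bond over two IPoIB interfaces.
# ib0/ib1, bond2 and 10.10.10.0/24 are assumed names, adjust to your setup.
auto bond2
iface bond2 inet static
    address 10.10.10.11
    netmask 255.255.255.0
    bond-slaves ib0 ib1
    bond-mode active-backup
    bond-miimon 100
```

Active-backup is the safe choice here because IPoIB interfaces generally cannot use the LACP bonding modes that Ethernet NICs can.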
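To actually steer Ceph replication traffic onto the IPoIB link, the cluster network is pointed at the Infiniband subnet in ceph.conf. A minimal sketch, assuming the placeholder subnet 10.10.10.0/24 from above and that public traffic also rides the IB network:

```
# /etc/ceph/ceph.conf (fragment) - subnets are assumptions, not from the post
[global]
    public network  = 10.10.10.0/24
    cluster network = 10.10.10.0/24
```

If the public network stays on Ethernet, only "cluster network" needs to reference the IB subnet; OSD-to-OSD replication then uses the faster link while clients keep using the public one.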
----- On 7 Oct 2017, at 17:12, Phil Schwarz infolist at schwarz-fr.net wrote:
> Being able to rebuild a brand new cluster, I wonder about using as
> backend storage a bunch of 4 Mellanox ConnectX DDR cards with a
> Flextronics or Voltaire IB switch.
> 1. Is this setup supported? I found a DRBD doc related to IB, but
> not a Ceph one.
> 2. Should I use IPoIB or RDMA instead? I'm not afraid of performance
> drawbacks with IPoIB (every node has a max of 5 disks). It's a test
> lab/home-use cluster.
> So, apart from being really fun and really cheap (the whole subsystem
> should be under 250€), I wonder about the potential use of such a huge network.
> Does anyone use IB for Ceph & Proxmox with success?
> Best regards
Fabrizio Cuseo - mailto:f.cuseo at panservice.it
General Management - Panservice InterNetWorking
Professional Services for Internet and Networking
Panservice is a member of AIIP - RIPE Local Registry
Phone: +39 0773 410020 - Fax: +39 0773 470219
http://www.panservice.it mailto:info at panservice.it
National toll-free number: 800 901492