[PVE-User] Ceph is the preferred proxmox shared storage?

Eneko Lacunza elacunza at binovo.es
Wed Oct 29 11:01:39 CET 2014

Hi Angel,

On 29/10/14 10:50, Angel Docampo wrote:
>> On 29/10/14 09:25, Angel Docampo wrote:
>>> Bonded interfaces on Linux are active-backup, so you have a 1Gb 
>>> connection on the storage side. Consider upgrading to faster 
>>> ethernet/Fibre Channel/InfiniBand.
>> I don't think this is the case. You can configure many modes on 
>> bonded interfaces; only a few of them are active-backup. For details, 
>> see https://www.kernel.org/doc/Documentation/networking/bonding.txt
> Hi Eneko
> I spoke based on my experience, and I'm not a network expert, so if 
> I'm wrong, please excuse me (and enlighten me! :)). What I meant is I 
> know there are several modes, but none of them can "sum" the bandwidth 
> of both network interfaces (i.e. 1Gb eth + 1Gb eth becoming a virtual 
> 2Gb interface). I think the mode would be balance-rr, but that mode 
> comes with many troubles.
Neither am I ;-)

Usually you need more than one network connection to use the bandwidth 
of the different bonded slaves. If you use just one connection (e.g. an 
scp transfer) then you will only get 1 Gbit max, but if you have 
multiple concurrent network connections then you can use the full 
2 Gbit, using LACP for example.
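As an illustration, on Debian/Proxmox an LACP (802.3ad) bond of two 1 Gbit NICs could be declared in /etc/network/interfaces roughly as follows (the interface names eth0/eth1, the bridge vmbr0, and the addresses are assumptions for the sketch; the switch ports must also be configured for LACP):

```
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode 802.3ad                # LACP
    bond-miimon 100                  # link monitoring interval in ms
    bond-xmit-hash-policy layer3+4   # hash on IP+port so different TCP
                                     # connections can land on different slaves

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

Note the layer3+4 hash policy: each individual connection still hashes to a single slave (so one scp stays at 1 Gbit), but several concurrent connections can spread across both links.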

In a Proxmox-DRBD setup, each VM using DRBD devices will open 
independent connections, so you should be able to use nearly all the 
available bandwidth of a bonded interface.


Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943575997
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)

