[PVE-User] 802.3ad Bonding Proxmox 1.4

Andrew Niemantsverdriet andrew at rocky.edu
Thu Nov 12 13:58:52 CET 2009


Jeff,

I understand what you are saying, more or less. I have a SAN network set
up with a similar bonded configuration, and I am getting 22,000 Mbit/sec
aggregate on the SAN network, granted there are some differences, like
jumbo frames, which affect throughput. Again, this was tested with iperf.

So something is not right in my Proxmox setup. I will do some further
investigation and see what I can come up with. Thanks for
your help.
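
For reference, here is the kind of iperf test I have been running, plus
the multi-stream variant I will try next to rule out the per-flow
hashing Jeff describes (a sketch; 192.0.2.6 is the bridge address from
the config quoted below):

  # on the receiving node
  iperf -s

  # on the sending node: 4 parallel TCP streams for 30 seconds; each
  # stream gets its own source port, so a layer3+4 hash policy could
  # spread them across the bonded links
  iperf -c 192.0.2.6 -P 4 -t 30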

 _
/-\ ndrew



On Wed, Nov 11, 2009 at 4:37 PM, Jeff Saxe <JSaxe at briworks.com> wrote:
> I'm not sure why 943Mb/sec "sucks" -- personally, I'd be pretty pleased to
> get such bandwidth, if I asked for a virtual machine and you hosted it for
> me on your Proxmox server.  :-)
> But seriously, you may not be taking into account precisely what happens
> when you use a layer 2 Ethernet aggregate (EtherChannel, or port channel).
> The accepted standards for layer 2 say that frames must not arrive out of
> the order in which they were transmitted, so a port-channeling device
> runs each frame through some quick hash algorithm (based on source or
> destination MAC, IP, or layer 4 port numbers, or some combination) and
> sends the frame out whichever link of the bundle the hash selects. The
> result is that a single long-running conversation between two endpoints
> (for instance, one long FTP transfer) always chooses the same Ethernet
> port for every frame, so even if you bond together five gigabit Ethernet
> links, one file transfer will go through only one of the five. So a speed
> of 943 Mbit/sec is not surprising -- most likely you are nearly saturating
> just one gigabit port while the others remain idle.
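> (As a toy illustration, this is roughly what the Linux bonding driver's
> default "layer2" policy computes; the MAC bytes here are made up:
>
>   # slave = (last byte of src MAC XOR last byte of dst MAC) mod slaves
>   SRC=0x0a; DST=0x1f; SLAVES=5
>   echo $(( (SRC ^ DST) % SLAVES ))  # same two hosts -> same slave, always
>
> Any two fixed MAC addresses always hash to the same slave, no matter
> how many frames they exchange.)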
> An Ethernet port channel gives you good redundancy, fast failover, easy
> expansion, no need to use a routing protocol, no need to think hard about
> spanning tree, safety from accidentally plugging into wrong ports (when you
> use 802.3ad protocol), etc. But it does not automatically give you high
> bandwidth for focused transmissions. It only gives you high average
> bandwidth in the larger case, where you have many hosts (or several IP
> addresses on the same hosts, or many TCP conversations, again depending on
> the frame distribution algorithm in use on each side of the aggregate).
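> If you want multiple TCP conversations between the same pair of hosts
> to spread across the links, the Linux bonding driver lets you inspect
> and change its distribution policy. A sketch, based on the Debian-style
> config you posted (note that layer3+4 hashing is not strictly 802.3ad
> compliant, since fragmented packets can be reordered):
>
>   # show the policy the bond is using now (the default is layer2)
>   grep -i "hash policy" /proc/net/bonding/bond0
>
>   # in /etc/network/interfaces, under the bond0 stanza, add
>   # something like:
>   #   bond_xmit_hash_policy layer3+4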
> Sorry if that messes up your plans.
> -- Jeff Saxe, Network Engineer
> Blue Ridge InternetWorks, Charlottesville, VA
> 434-817-0707 ext. 2024  /  JSaxe at briworks.com
>
>
>
> On Nov 11, 2009, at 5:13 PM, Andrew Niemantsverdriet wrote:
>
> I just went in and enabled STP, and the bridge is now able to
> communicate. It is slow, though: I still can't see more than 943
> Mbits/sec through the bond0 interface.
>
> # network interface settings
> auto lo
> iface lo inet loopback
>
> iface eth0 inet manual
>
> iface eth1 inet manual
>
> iface eth2 inet manual
>
> iface eth3 inet manual
>
> iface eth4 inet manual
>
> auto eth5
> iface eth5 inet static
>     address 192.168.3.4
>     netmask 255.255.255.0
>
> auto bond0
> iface bond0 inet manual
>     slaves eth0 eth1 eth2 eth3 eth4
>     bond_miimon 100
>     bond_mode 802.3ad
>
> auto vmbr0
> iface vmbr0 inet static
>     address 192.0.2.6
>     netmask 255.255.255.0
>     gateway 192.0.2.1
>     bridge_ports bond0
>     bridge_stp on
>     bridge_fd 0
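>
> For completeness, this is how I reloaded the config and checked the
> bond afterwards (standard Debian commands, nothing Proxmox-specific):
>
>   /etc/init.d/networking restart
>
>   # shows the mode, hash policy, and whether all five slaves joined
>   # the same 802.3ad aggregator
>   cat /proc/net/bonding/bond0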
>
> The switch shows 802.3ad partners, so the aggregation is working;
> however, the speed sucks, although that is better than not working.
>
> Any ideas?
>
>
>



-- 
 _
/-\ ndrew Niemantsverdriet
Academic Computing
(406) 238-7360
Rocky Mountain College
1511 Poly Dr.
Billings MT, 59102


