[PVE-User] Bonding and packetloss

Daniel daniel at linux-nerd.de
Wed Sep 6 13:27:23 CEST 2017

LACP is used for both switches.

My Proxmox servers are using bonding mode 6 (balance-alb), but I get strange bandwidth problems:

target host .... host20
 run 1: 	 42.3 Mbits/sec
 run 2: 	 880 Mbits/sec
 run 3: 	 105 Mbits/sec
 run 4: 	 35.9 Mbits/sec
 run 5: 	 36.1 Mbits/sec
 average ....... 219.86 Mbits/sec
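For reference, a per-run table like the one above can be produced with a small loop. This is a sketch, not the script actually used: it assumes iperf3 is installed, that host20 is reachable, and that the bitrate is the 7th field of iperf3's sender summary line:

```shell
#!/bin/sh
# Sketch: run five iperf3 tests against a target and print per-run
# throughput plus the average. "host20" and the run count are
# assumptions taken from the output above.
target=host20
for i in 1 2 3 4 5; do
    # -f m reports throughput in Mbits/sec; keep the sender summary value
    iperf3 -c "$target" -f m | awk '/sender/ {print $7}'
done | awk '{sum += $1; n++; printf " run %d: \t %s Mbits/sec\n", n, $1}
            END {printf " average ....... %.2f Mbits/sec\n", sum/n}'
```

Averaging the five runs shown above the same way gives the 219.86 Mbits/sec figure, which is how such wildly varying runs can still yield a middling average.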


On 04.09.17 at 15:31, "pve-user on behalf of Mark Schouten" <pve-user-bounces at pve.proxmox.com on behalf of mark at tuxis.nl> wrote:

    You cannot just run LACP across different switches; it needs to be a switch stack.
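    If the two HP switches were stacked (or supported MLAG) so that both host ports belong to one LACP group, the hosts could bond in 802.3ad mode instead. A minimal sketch, reusing the interface names and ifenslave-style option spelling from the config quoted below (not a tested configuration):

    ```
    auto bond0
    iface bond0 inet manual
                    slaves eno1 eno2
                    bond_miimon 100
                    bond_mode 802.3ad   # LACP (mode 4); both links must terminate
                                        # in one LACP group (stack/MLAG), not two
                                        # independent switches
    ```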
    Kind regards,
    Kerio Operator in the Cloud? https://www.kerioindecloud.nl/
    Mark Schouten  | Tuxis Internet Engineering
    KvK: 61527076 | http://www.tuxis.nl/
    T: 0318 200208 | info at tuxis.nl
     From:   Daniel <daniel at linux-nerd.de> 
     To:   PVE User List <pve-user at pve.proxmox.com> 
     Sent:   1-9-2017 22:22 
     Subject:   [PVE-User] Bonding and packetloss 
    Hi there, 
    here is a small overview of my network: 
    2x HP switches, connected to each other with a 4x 1 Gbit LACP trunk – working as expected. 
    Now my problem: I configured all my hosts with bond mode 6 and connected one NIC to switch one and the other to switch two. 
    Sometimes I get packet loss and see a kernel error like this: vmbr0: received packet on bond0 with own address as source address (addr:0c:c4:7a:aa:5c:e4, vlan:0) 
    Some hosts are working pretty well and some have packet loss. 
    After adding some “rules” to a host with loss, the error messages disappear, but some (reduced) loss still remains. 
    Is there any hint as to what could be the matter? When I change to active/passive mode, everything is fine. 
    This is the interfaces config of a host with packet loss: 
    auto lo 
    iface lo inet loopback 
    iface eno1 inet manual 
    iface eno2 inet manual 
    auto bond0 
    iface bond0 inet manual 
                    slaves eno1 eno2 
                    bond_miimon 100 
                    bond_mode 6 
    auto vmbr0 
    iface vmbr0 inet static 
                    bridge_ports bond0 
                    bridge_stp off 
                    bridge_fd 0 
                    bridge_maxage 0 
                    bridge_ageing 0 
                    bridge_maxwait 0 
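    Since active/passive works, for comparison only the bond stanza would change for mode 1 (active-backup). A sketch using the same interface names (bond_primary is optional and an assumption, not taken from the original config):

    ```
    auto bond0
    iface bond0 inet manual
                    slaves eno1 eno2
                    bond_miimon 100
                    bond_mode active-backup   # mode 1: only one slave carries
                                              # traffic, so the bond MAC never
                                              # flaps between the two switches
                    bond_primary eno1
    ```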
    I am absolutely without any clue ☹ I have tested a lot and nothing really helps to solve this problem. 
    pve-user mailing list
    pve-user at pve.proxmox.com
