[pve-devel] Quorum problems with Intel 10 Gb/s NICs and VMs turning off

Cesar Peschiera brain at click.com.py
Sat Dec 20 13:30:02 CET 2014


Hi Alexandre

I put the 192.100.100.51 IP address directly on bond0, but then I have no
network connectivity at all (as if the node were totally isolated).

This was my setup:
-------------------
auto bond0
iface bond0 inet static
 address  192.100.100.51
 netmask  255.255.255.0
 gateway  192.100.100.4
 slaves eth0 eth2
 bond_miimon 100
 bond_mode 802.3ad
 bond_xmit_hash_policy layer2

auto vmbr0
iface vmbr0 inet manual
 bridge_ports bond0
 bridge_stp off
 bridge_fd 0
 post-up echo 0 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
 post-up echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier
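
For reference, these are the kinds of checks I used to confirm it from the node
itself (output omitted; /proc/net/bonding and pvecm are the stock paths/tools on
the node):

cat /proc/net/bonding/bond0   # bond mode, slave link state, LACP partner
ping -c 3 192.100.100.4       # reachability of the gateway
pvecm status                  # cluster membership / quorum on this node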

...... :-(

Any other suggestions?

----- Original Message ----- 
From: "Alexandre DERUMIER" <aderumier at odiso.com>
To: "Cesar Peschiera" <brain at click.com.py>
Cc: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Friday, December 19, 2014 7:59 AM
Subject: Re: [pve-devel] Quorum problems with Intel 10 Gb/s NICs and VMs turning off


Maybe you can try to put the 192.100.100.51 IP address directly on bond0,

to avoid the corosync traffic going through vmbr0.

(I remember some old offloading bugs with 10 GbE NICs and the Linux bridge.)
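
If it is an offloading problem, it may also be worth disabling offloading on the
slave NICs for a test, something like this (flags and interface names to adapt to
your setup):

ethtool -K eth0 tso off gso off gro off
ethtool -K eth2 tso off gso off gro off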


----- Original Message -----
From: "Cesar Peschiera" <brain at click.com.py>
To: "aderumier" <aderumier at odiso.com>
Cc: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Friday, December 19, 2014 11:08:33
Subject: Re: [pve-devel] Quorum problems with Intel 10 Gb/s NICs and VMs turning off

>can you post your /etc/network/interfaces of these 10 Gb/s nodes?

This is my configuration:
Note: the LAN uses 192.100.100.0/24

#Network interfaces
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual
iface eth3 inet manual
iface eth4 inet manual
iface eth5 inet manual
iface eth6 inet manual
iface eth7 inet manual
iface eth8 inet manual
iface eth9 inet manual
iface eth10 inet manual
iface eth11 inet manual

#PVE Cluster and VMs (10 Gb/s NICs):
auto bond0
iface bond0 inet manual
slaves eth0 eth2
bond_miimon 100
bond_mode 802.3ad
bond_xmit_hash_policy layer2

#PVE Cluster and VMs:
auto vmbr0
iface vmbr0 inet static
address 192.100.100.51
netmask 255.255.255.0
gateway 192.100.100.4
bridge_ports bond0
bridge_stp off
bridge_fd 0
post-up echo 0 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
post-up echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier

#A link for DRBD (10 Gb/s NICs):
auto bond401
iface bond401 inet static
address 10.1.1.51
netmask 255.255.255.0
slaves eth1 eth3
bond_miimon 100
bond_mode balance-rr
mtu 9000

#Another link for DRBD (10 Gb/s NICs):
auto bond402
iface bond402 inet static
address 10.2.2.51
netmask 255.255.255.0
slaves eth4 eth6
bond_miimon 100
bond_mode balance-rr
mtu 9000

#Another link for DRBD (10 Gb/s NICs):
auto bond403
iface bond403 inet static
address 10.3.3.51
netmask 255.255.255.0
slaves eth5 eth7
bond_miimon 100
bond_mode balance-rr
mtu 9000

#A link for the NFS backups (1 Gb/s NICs):
auto bond10
iface bond10 inet static
address 10.100.100.51
netmask 255.255.255.0
slaves eth8 eth10
bond_miimon 100
bond_mode balance-rr
#bond_mode active-backup
mtu 9000
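
In case it helps: multicast between the nodes can also be verified with omping
(assuming it is installed on every node; node1 node2 node3 below are placeholders
for the real node names, and the command must be run on all nodes at the same time):

omping -c 600 -i 1 -q node1 node2 node3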



