[pve-devel] Quorum problems with Intel 10 Gb/s NICs and VMs turning off
Alexandre DERUMIER
aderumier at odiso.com
Fri Jan 2 09:40:09 CET 2015
Hi,
>>But as I need the VMs and the PVE host to be accessible from any
>>workstation, the VLAN option isn't useful for me.
Ok
>>And about cluster communication and the VMs: as I don't want the
>>multicast packets to reach the VMs, I believe I can block them for the
>>VMs in two ways:
>>
>>a) Removing the option "post-up echo 0 >
>>/sys/devices/virtual/net/vmbr0/bridge/multicast_snooping" from the NIC
>>configuration of the PVE host, if that gives me stable behaviour.
Yes, indeed you can enable snooping to filter multicast
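For reference, a minimal sketch (assuming the bridge is named vmbr0, as in your configuration) of how to check and re-enable snooping on a running host, without editing /etc/network/interfaces or rebooting:

# check the current snooping state (1 = enabled, 0 = disabled)
cat /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
# re-enable snooping at runtime (1 is the kernel default)
echo 1 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping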
>>b) With the firewall it would be very easy, since I know the source IP
>>address of the cluster communication, but unfortunately the PVE wiki
>>doesn't clearly show how to apply it; i.e. I see the "firewall" tab on the
>>datacenter, the PVE hosts and the network configuration of the VMs, and
>>the wiki says nothing about this. For me, a global configuration that
>>affects all VMs of the cluster would be wonderful, using an IPSet or some
>>other way that is simple to apply.
I think you can create a security group with a rule which blocks the multicast address of your pve cluster.
To get your cluster multicast address:
# pvecm status | grep "Multicast addresses"
Then add this security group to each VM.
(Currently, datacenter rules apply only to the host IN|OUT iptables rules, not to the FORWARD iptables rules which are used by the VMs.)
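As a rough sketch only (the multicast address 239.192.105.237 is a placeholder for whatever "pvecm status" reports, the group name "nomulticast" is arbitrary, and the firewall option has to be enabled on the VM's network device for the rules to take effect), the cluster-wide file /etc/pve/firewall/cluster.fw could contain something like:

[group nomulticast]
IN DROP -dest 239.192.105.237 -p udp # drop corosync multicast before it reaches the guest

and each VM would then reference the group in its /etc/pve/firewall/<vmid>.fw:

[RULES]
GROUP nomulticast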
----- Original Message -----
From: "Cesar Peschiera" <brain at click.com.py>
To: "aderumier" <aderumier at odiso.com>
Cc: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Friday, January 2, 2015 05:10:08
Subject: Re: [pve-devel] Quorum problems with Intel 10 Gb/s NICs and VMs turning off
Hi Alexandre.
Thanks for your reply.
But as I need the VMs and the PVE host to be accessible from any
workstation, the VLAN option isn't useful for me.
Anyway, I am testing with the I/OAT DMA Engine enabled in the hardware BIOS;
after some days with little activity the CMAN cluster is stable. Soon I will
test it with a lot of network activity.
And about cluster communication and the VMs: as I don't want the multicast
packets to reach the VMs, I believe I can block them for the VMs in two
ways:
a) Removing the option "post-up echo 0 >
/sys/devices/virtual/net/vmbr0/bridge/multicast_snooping" from the NIC
configuration of the PVE host, if that gives me stable behaviour.
b) With the firewall it would be very easy, since I know the source IP
address of the cluster communication, but unfortunately the PVE wiki doesn't
clearly show how to apply it; i.e. I see the "firewall" tab on the
datacenter, the PVE hosts and the network configuration of the VMs, and the
wiki says nothing about this. For me, a global configuration that affects
all VMs of the cluster would be wonderful, using an IPSet or some other way
that is simple to apply.
Do you have any idea how to prevent multicast packets from reaching the VMs
in a stable way, and how to apply it?
----- Original Message -----
From: "Alexandre DERUMIER" <aderumier at odiso.com>
To: "Cesar Peschiera" <brain at click.com.py>
Cc: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Wednesday, December 31, 2014 3:33 AM
Subject: Re: [pve-devel] Quorum problems with Intel 10 Gb/s NICs and VMs turning off
Hi Cesar,
I think I totally forgot that we can't add an IP on an interface that is a
slave of a bridge.
Myself, I'm using a tagged VLAN interface for the cluster communication,
something like:
auto bond0
iface bond0 inet manual
    slaves eth0 eth2
    bond_miimon 100
    bond_mode 802.3ad
    bond_xmit_hash_policy layer2

auto bond0.100
iface bond0.100 inet static
    address 192.100.100.50
    netmask 255.255.255.0
    gateway 192.100.100.4

auto vmbr0
iface vmbr0 inet manual
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
    post-up echo 0 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
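One assumption behind this sketch: the host needs 802.1q VLAN support for the bond0.100 sub-interface (the Debian "vlan" package provides the ifupdown hooks, and the 8021q module does the tagging), and the switch ports behind bond0 must carry VLAN 100 tagged. Something like:

apt-get install vlan          # ifupdown hooks for names like bond0.100
modprobe 8021q                # load the 802.1q module now
echo 8021q >> /etc/modules    # ...and on every boot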
----- Original Message -----
From: "Cesar Peschiera" <brain at click.com.py>
To: "aderumier" <aderumier at odiso.com>
Cc: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Wednesday, December 31, 2014 05:01:37
Subject: Re: [pve-devel] Quorum problems with Intel 10 Gb/s NICs and VMs turning off
Hi Alexandre
Today, after a week, a node again lost cluster communication. So I changed
the hardware BIOS configuration to "I/OAT DMA enabled" (which works very
well on other Dell R320 nodes with 1 Gb/s NICs).
Moreover, trying to follow your advice to put the 192.100.100.51 IP address
directly on bond0 and not on vmbr0: when I reboot the node it is totally
isolated, and I see a message saying that vmbr0 is missing an IP address.
The node is also totally isolated when I apply this IP address to vmbr0:
0.0.0.0/255.255.255.255
In practical terms, can you tell me how I can add an IP address to bond0 and
also have a bridge on these same NICs?
- Now, this is my configuration:
auto bond0
iface bond0 inet manual
    slaves eth0 eth2
    bond_miimon 100
    bond_mode 802.3ad
    bond_xmit_hash_policy layer2

auto vmbr0
iface vmbr0 inet static
    address 192.100.100.50
    netmask 255.255.255.0
    gateway 192.100.100.4
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
    post-up echo 0 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
----- Original Message -----
From: "Alexandre DERUMIER" <aderumier at odiso.com>
To: "Cesar Peschiera" <brain at click.com.py>
Cc: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Friday, December 19, 2014 7:59 AM
Subject: Re: [pve-devel] Quorum problems with Intel 10 Gb/s NICs and VMs turning off
Maybe you can try to put the 192.100.100.51 IP address directly on bond0,
to avoid corosync traffic going through vmbr0.
(I remember some old offloading bugs with 10GbE NICs and the Linux bridge.)
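If those old offloading bugs are what you are hitting, one workaround to try (just a sketch; eth0/eth2 are the bond slaves from your config, and which offloads matter depends on the driver) is to switch the offload features off on the physical NICs and see whether the cluster then stays stable:

# turn off the usual offload suspects on both bond0 slaves
ethtool -K eth0 tso off gso off gro off lro off
ethtool -K eth2 tso off gso off gro off lro off

The same commands can go into /etc/network/interfaces as post-up lines under bond0 so they survive a reboot.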
----- Original Message -----
From: "Cesar Peschiera" <brain at click.com.py>
To: "aderumier" <aderumier at odiso.com>
Cc: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Friday, December 19, 2014 11:08:33
Subject: Re: [pve-devel] Quorum problems with Intel 10 Gb/s NICs and VMs turning off
>Can you post your /etc/network/interfaces of these 10 Gb/s nodes?
This is my configuration:
Note: The LAN uses 192.100.100.0/24
#Network interfaces
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual
iface eth3 inet manual
iface eth4 inet manual
iface eth5 inet manual
iface eth6 inet manual
iface eth7 inet manual
iface eth8 inet manual
iface eth9 inet manual
iface eth10 inet manual
iface eth11 inet manual

#PVE Cluster and VMs (NICs are of 10 Gb/s):
auto bond0
iface bond0 inet manual
    slaves eth0 eth2
    bond_miimon 100
    bond_mode 802.3ad
    bond_xmit_hash_policy layer2

#PVE Cluster and VMs:
auto vmbr0
iface vmbr0 inet static
    address 192.100.100.51
    netmask 255.255.255.0
    gateway 192.100.100.4
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
    post-up echo 0 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
    post-up echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier

#A link for DRBD (NICs are of 10 Gb/s):
auto bond401
iface bond401 inet static
    address 10.1.1.51
    netmask 255.255.255.0
    slaves eth1 eth3
    bond_miimon 100
    bond_mode balance-rr
    mtu 9000

#Other link for DRBD (NICs are of 10 Gb/s):
auto bond402
iface bond402 inet static
    address 10.2.2.51
    netmask 255.255.255.0
    slaves eth4 eth6
    bond_miimon 100
    bond_mode balance-rr
    mtu 9000

#Other link for DRBD (NICs are of 10 Gb/s):
auto bond403
iface bond403 inet static
    address 10.3.3.51
    netmask 255.255.255.0
    slaves eth5 eth7
    bond_miimon 100
    bond_mode balance-rr
    mtu 9000

#A link for the NFS-Backups (NICs are of 1 Gb/s):
auto bond10
iface bond10 inet static
    address 10.100.100.51
    netmask 255.255.255.0
    slaves eth8 eth10
    bond_miimon 100
    bond_mode balance-rr
    #bond_mode active-backup
    mtu 9000