[PVE-User] ip address on both bond0 and vmbr0
mj
lists at merit.unu.edu
Tue Mar 23 15:28:17 CET 2021
Hi all,
Thanks for all suggestions! I will try with Bastian's:
> bond0 (slaves enp2...)
> vmbr0 (slave bond0) 192.168.143.10/24
> bond0.10 10.0.0.10/24
as that will also give proper separation of ceph traffic, as indicated
by Dorsy.
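For reference, Bastian's suggestion would look roughly like this in /etc/network/interfaces (a sketch only: the bond options and addresses are taken from my original config; the vlan-raw-device line is my assumption about how the VLAN sub-interface is declared):

```
auto bond0
iface bond0 inet manual
        bond-slaves enp2s0f0 enp2s0f1
        bond-miimon 100
        bond-mode active-backup
        bond-primary enp2s0f0

auto vmbr0
iface vmbr0 inet static
        address 192.168.143.10/24
        gateway 192.168.143.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

# ceph traffic tagged directly on the bond, separate from the bridge
auto bond0.10
iface bond0.10 inet static
        address 10.0.0.10/24
        vlan-raw-device bond0
```

Note that bond0 itself becomes "inet manual" (no address), since it is a bridge port; the ceph IP moves to the VLAN sub-interface instead.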
Also thank you Ronny, for showing your elaborate config!
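(On assigning the VLAN tag in the VM config: the same thing can also be done from the CLI; a sketch, with VMID 100 and net0 as placeholders:)

```
# hypothetical example: attach VM 100's first NIC to vmbr0 with VLAN tag 10
qm set 100 --net0 virtio,bridge=vmbr0,tag=10
```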
MJ
On 23/03/2021 13:02, Ronny Aasen wrote:
> On 23.03.2021 11:42, mj wrote:
>> Hi all,
>>
>> First some info:
>> 10.0.0.0/24 is ceph storage
>> 192.168.143.0/24 is our LAN
>>
>> I am trying to make this /etc/network/interfaces work in pve:
>>
>>> auto enp2s0f0
>>> iface enp2s0f0 inet manual
>>> #mlag1
>>>
>>> auto enp2s0f1
>>> iface enp2s0f1 inet manual
>>> #mlag2
>>>
>>> iface enp0s25 inet manual
>>> #management
>>>
>>> auto bond0
>>> iface bond0 inet static
>>> address 10.0.0.10/24
>>> bond-slaves enp2s0f0 enp2s0f1
>>> bond-miimon 100
>>> bond-mode active-backup
>>> bond-primary enp2s0f0
>>>
>>> auto vmbr0
>>> iface vmbr0 inet static
>>> address 192.168.143.10/24
>>> gateway 192.168.143.1
>>> bridge-ports bond0
>>> bridge-stp off
>>> bridge-fd 0
>>
>> We will connect pve servers to two mlagged arista 40G switches. The
>> 10.0.0.0/24 ceph network will remain local on the two aristas, and
>> 192.168.143.0/24 will be routed to our core switch.
>>
>> The VM IPs are in the LAN 192.168.143.0/24 range, and obviously don't
>> require access to 10.0.0.0/24.
>>
>> We connect the VMs to vmbr0 and assign VLANs to them by configuring a
>> VLAN tag in the proxmox VM config. This works. :-)
>>
>> However, assigning the IP address to bond0 does NOT work. The IP
>> address is ignored. bond0 works, but is IP-less. Adding the IP address
>> manually after boot works, using:
>>> ip addr add 10.0.0.10/24 dev bond0
>>
>> Why is this ip address not assigned to bond0 at boot time?
>>
>> Is it not possible to have an IP on both bond0 and vmbr0, when bond0
>> is also used as a bridge port?
>>
>
>
> No, you cannot have an IP on both the bond and the bridge; while you
> can run two IPs on the bridge, that is a bit ugly.
>
> The way we do it is to run VLANs on the bond, into a VLAN-aware bridge:
>
> auto ens6f0
> iface ens6f0 inet manual
> mtu 9700
>
> auto ens6f1
> iface ens6f1 inet manual
> mtu 9700
>
> auto bond0
> iface bond0 inet manual
> slaves ens6f0 ens6f1
> bond_miimon 100
> bond_mode 1
> bond_xmit_hash_policy layer3+4
> mtu 9700
>
> auto vmbr0
> iface vmbr0 inet manual
> bridge_ports bond0
> bridge_stp off
> bridge_maxage 0
> bridge_ageing 0
> bridge_maxwait 0
> bridge_fd 0
> bridge_vlan_aware yes
> mtu 9700
> up echo 1 > /sys/devices/virtual/net/vmbr0/bridge/multicast_querier
> up echo 0 > /sys/devices/virtual/net/vmbr0/bridge/multicast_snooping
>
> Then define a VLAN interface per subnet:
>
> auto vmbr0.10
> iface vmbr0.10 inet6 static
> address 2001:db8:2323::11
> netmask 64
> gateway 2001:db8:2323::1
> mtu 1500
>
>
> VMs attach to vmbr0 plus the tag for the VLAN they should be in.
>
> good luck
>
> _______________________________________________
> pve-user mailing list
> pve-user at lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>