[PVE-User] openvswitch + bond0 + 2 Fiber interfaces.
Сергей Цаболов
tsabolov at t8.ru
Fri Jan 21 13:28:27 CET 2022
Hello Dimitri,
Thank you for sharing your configuration.
My Proxmox is proxmox-ve: 6.4-1 (running kernel: 5.4.143-1-pve).
I tried replacing "allow-vmbr0" with "auto". I found this link:
https://metadata.ftp-master.debian.org/changelogs/main/o/openvswitch/testing_openvswitch-switch.README.Debian
In section "ex 9: Bond + Bridge + VLAN + MTU" the "allow" form is used.
Either way, nothing breaks: I can switch between "allow" and "auto" just by
commenting out one line, as in the sketch below.
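To make that concrete, here is a minimal sketch of the two header forms for
the bond and the bridge, using my own interface names from the configuration
further below (only the first line of each stanza changes, the rest stays
the same):

--------------
allow-vmbr0 bond0
#auto bond0
iface bond0 inet manual
        ovs_bonds ens1f0np0 ens1f12np0
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_mtu 9000
        ovs_options bond_mode=active-backup

allow-ovs vmbr0
#auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0 inband vlan10
        ovs_mtu 9000
--------------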
Thanks again for sharing, Dimitri.
On 21.01.2022 15:03, Dimitri Alexandris wrote:
> I have had Open vSwitch bonds working fine for years now, albeit on older
> versions of Proxmox (6.4-4 and 5.3-5):
>
> --------------
> auto eno2
> iface eno2 inet manual
>
> auto eno1
> iface eno1 inet manual
>
> allow-vmbr0 ath
> iface ath inet static
> address 10.NN.NN.38/26
> gateway 10.NN.NN.1
> ovs_type OVSIntPort
> ovs_bridge vmbr0
> ovs_options tag=100
> .
> .
> allow-vmbr0 bond0
> iface bond0 inet manual
> ovs_bonds eno1 eno2
> ovs_type OVSBond
> ovs_bridge vmbr0
> ovs_options bond_mode=balance-slb lacp=active
> allow-ovs vmbr0
> iface vmbr0 inet manual
> ovs_type OVSBridge
> ovs_ports bond0 ath lan dmz_vod ampr
> --------
>
> I think now, "allow-vmbr0" and "allow-ovs" are replaced with "auto".
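A quick way to check whether a given openvswitch-switch version has picked
up the "auto" form is to re-apply the configuration and look at the result.
A rough sketch, assuming ifupdown2 is installed:

ifreload -a                   # re-apply /etc/network/interfaces
ovs-vsctl show                # vmbr0 should list bond0 with both member ports
ovs-appctl bond/show bond0    # bond mode and currently active member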
>
> This bond works fine with HP, 3COM, HUAWEI, and MIKROTIK switches.
> Several OVSIntPort VLANS are attached to it.
> I also had 10G bonds (Intel, Supermicro inter-server links), with the same
> result.
>
> I see the only difference from your setup is the bond_mode. The switch
> configuration is also very important and has to match this.
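As far as I understand it (my assumption, not something from Dimitri's
mail): with my bond_mode=active-backup the switch ports need no
link-aggregation configuration at all, while lacp=active as in the example
above only comes up once the two switch ports are configured as an LACP
LAG. The negotiation state can be inspected with:

ovs-appctl lacp/show bond0    # LACP negotiation state (only meaningful with lacp=active)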
>
>
>
>
>
> On Fri, Jan 21, 2022 at 1:23 PM Сергей Цаболов <tsabolov at t8.ru> wrote:
>
>> Hello,
>>
>> I have a PVE cluster and I am thinking of installing Open vSwitch on
>> pve-7 so that I can move and add VMs from other networks and Proxmox
>> clusters.
>>
>> With a plain Linux bridge everything works without problems on the two
>> 10G interfaces ens1f0np0 and ens1f12np0.
>>
>> I installed Open vSwitch following the manual
>> https://pve.proxmox.com/wiki/Open_vSwitch
>>
>> I want to use the 10G fiber interfaces ens1f0np0 and ens1f12np0 in a
>> bond, I think.
>>
>> I tried some settings but it is not working.
>>
>> My setup in /etc/network/interfaces:
>>
>> auto lo
>> iface lo inet loopback
>>
>> auto ens1f12np0
>> iface ens1f12np0 inet manual
>> #Fiber
>>
>> iface idrac inet manual
>>
>> iface eno2 inet manual
>>
>> iface eno3 inet manual
>>
>> iface eno4 inet manual
>>
>> auto ens1f0np0
>> iface ens1f0np0 inet manual
>>
>> iface eno1 inet manual
>>
>> auto inband
>> iface inband inet static
>> address 10.10.29.10/24
>> gateway 10.10.29.250
>> ovs_type OVSIntPort
>> ovs_bridge vmbr0
>> #Proxmox Web Access
>>
>> auto vlan10
>> iface vlan10 inet manual
>> ovs_type OVSIntPort
>> ovs_bridge vmbr0
>> ovs_options tag=10
>> #Network 10
>>
>> auto bond0
>> iface bond0 inet manual
>> ovs_bonds ens1f0np0 ens1f12np0
>> ovs_type OVSBond
>> ovs_bridge vmbr0
>> ovs_mtu 9000
>> ovs_options bond_mode=active-backup
>>
>> auto vmbr0
>> iface vmbr0 inet manual
>> ovs_type OVSBridge
>> ovs_ports bond0 inband vlan10
>> ovs_mtu 9000
>> #inband
>>
>>
>> Can someone tell me whether I have set everything up correctly or not?
>>
>> If someone has an Open vSwitch setup with bonded 10G interfaces, please
>> share the configuration with me.
>>
>> Thanks a lot.
>>
>>
>> Sergey TS
>> Best regards
>>
Sergey TS
Best regards
_______________________________________________
pve-user mailing list
pve-user at lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user