[PVE-User] Problem With Bond & VLANs - Help Please

duluxoz duluxoz at gmail.com
Fri Aug 16 12:42:46 CEST 2024


Hi Stefan,

My apologies, I should have been more precise.

What doesn't work? Most of the interfaces are down and won't come up 
automatically as I expect (not even NIC3), so I have no connectivity 
to the LAN, let alone the rest of the outside world.
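
(In case it's useful, here is a rough sketch of the checks I can run - 
assuming the ifupdown2 stack that Proxmox VE ships with, and the 
interface names shown in the output below:)

~~~
# One-line state summary for every interface
ip -br link show

# Re-apply /etc/network/interfaces via ifupdown2, verbosely
ifreload -a -v

# Bring the bond slaves up by hand to see if bond0 gets a carrier
ip link set eno0 up
ip link set eno1 up
~~~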

Yes, each VLAN should have its own gateway - each VLAN is its own 
subnet, of course.
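
(To be precise about what that means in routing terms: the host still 
gets only one *default* gateway. A hedged iproute2 sketch, where the 
10.0.150.0/24 network is purely hypothetical:)

~~~
# Exactly one default route for the whole host:
ip route add default via 10.0.200.1 dev vmbr0

# Directly connected subnets need no gateway at all; an extra router
# only matters for networks behind it, e.g. a hypothetical one:
ip route add 10.0.150.0/24 via 10.0.100.1 dev bond0.100
~~~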

Results of `ip r`:

~~~
default via 10.0.200.1 dev vmbr0 proto kernel onlink linkdown
10.0.100.0/24 dev bond0.100 proto kernel scope link src 10.0.100.0 linkdown
10.0.200.0/24 dev bond0.200 proto kernel scope link src 10.0.200.0 linkdown
10.0.200.0/24 dev vmbr0 proto kernel scope link src 10.0.200.100 linkdown
~~~
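
(One thing that stands out in that output: the `src` hints on 
bond0.100 and bond0.200 are the network addresses of their subnets, 
which suggests those interfaces were given `10.0.100.0/24` and 
`10.0.200.0/24` as their *addresses* rather than a host address. A 
sketch of the difference, assuming the host should sit at .100 on VLAN 
100 as it does on VLAN 200:)

~~~
# As configured (the network address, not a usable host address):
#   inet 10.0.100.0/24 scope global bond0.100
# A host address within that subnet would look like:
ip addr del 10.0.100.0/24 dev bond0.100
ip addr add 10.0.100.100/24 dev bond0.100
~~~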

Results of `ip a`:

~~~
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
group default qlen 1000
   link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
   inet 127.0.0.1/8 scope host lo
     valid_lft forever preferred_lft forever
   inet6 ::1/128 scope host noprefixroute
     valid_lft forever preferred_lft forever
2: eno0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group 
default qlen 1000
   link/ether 00:1b:21:e4:a6:f4 brd ff:ff:ff:ff:ff:ff
3: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group 
default qlen 1000
   link/ether 00:1b:21:e4:a6:f5 brd ff:ff:ff:ff:ff:ff
4: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group 
default qlen 1000
   link/ether 00:1b:21:e4:a6:f6 brd ff:ff:ff:ff:ff:ff
5: bond0: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc 
noqueue master vmbr0 state DOWN group default qlen 1000
   link/ether 4a:3a:67:59:ac:d3 brd ff:ff:ff:ff:ff:ff
6: bond0.100@bond0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc 
noqueue state LOWERLAYERDOWN group default qlen 1000
   link/ether 4a:3a:67:59:ac:d3 brd ff:ff:ff:ff:ff:ff
   inet 10.0.100.0/24 scope global bond0.100
     valid_lft forever preferred_lft forever
7: bond0.200@bond0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc 
noqueue state LOWERLAYERDOWN group default qlen 1000
   link/ether 4a:3a:67:59:ac:d3 brd ff:ff:ff:ff:ff:ff
   inet 10.0.200.0/24 scope global bond0.200
     valid_lft forever preferred_lft forever
8: vmbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue 
state DOWN group default qlen 1000
   link/ether 4a:3a:67:59:ac:d3 brd ff:ff:ff:ff:ff:ff
   inet 10.0.200.100/24 scope global vmbr0
     valid_lft forever preferred_lft forever
~~~
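
(For completeness, here is one common Proxmox pattern for "LACP bond 
plus tagged management IP", rewritten as a hedged sketch of 
/etc/network/interfaces. The VLAN-aware-bridge layout and putting the 
host address on vmbr0.200 are my assumptions, not necessarily the only 
way to wire this up; NIC3 is left out:)

~~~
auto eno0
iface eno0 inet manual

auto eno1
iface eno1 inet manual

# LACP bond over the two NICs
auto bond0
iface bond0 inet manual
    bond-slaves eno0 eno1
    bond-mode 802.3ad
    bond-miimon 100

# VLAN-aware bridge carrying both VLANs for guests
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 100 200

# The host's own address rides on VLAN 200 of the bridge
auto vmbr0.200
iface vmbr0.200 inet static
    address 10.0.200.100/24
    gateway 10.0.200.1
~~~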

Thanks for taking a look

Cheers

dulux-oz


On 16/8/24 19:53, Stefan Hanreich wrote:
>
> On 8/16/24 09:36, duluxoz wrote:
>> Hi All,
>>
>> Disclaimer: I'm coming from an EL background - this is my first venture
>> into Debian-world  :-)
>>
>> So I'm having an issue getting the NICs, Bond, and VLANs correctly
>> configured on a new Proxmox Node (Old oVirt Node). This worked on the
old oVirt config (albeit with a different set of config files/statements).
>>
>> What I'm trying to achieve:
>>
>>   * Proxmox Node IP Address: 10.0.200.100/24, Tag:VLAN 200
>>   * Gateway: 10.0.200.1
>>   * Bond: NIC1 (eno0) & NIC2 (eno1), 802.3ad
>>   * VLAN bond0.100: 10.0.100.0/24, Gateway 10.0.100.1
>>   * VLAN bond0.200: 10.0.200.0/24, Gateway 10.0.200.1
>>   * NIC3 (eno2): 10.0.300.100/24 - not really relevant, as it's not part
>>     of the Bond, but I've included it to be thorough
> What *exactly* doesn't work?
> Does the configuration not apply? Do you not get any connectivity to
> the internet / to specific networks?
>
>
> First thing that springs to mind is that you cannot configure two
> default gateways. There can only be one default gateway. You can
> configure different gateways for different subnets / interfaces. Or you
> can configure different routing tables for different processes.
>
> Your current configuration specifies three gateways. I assume you want
> to use different gateways for different subnets?
>
>
> What does the output of the following commands look like?
>
> ip a
> ip r
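
(For reference, the "different routing tables" option mentioned above 
would look roughly like this with iproute2 policy routing - a sketch, 
assuming routing table 100 is unused and that traffic sourced from 
10.0.100.0/24 should use that VLAN's router:)

~~~
# Traffic sourced from 10.0.100.x consults its own table, which has
# its own default route via that VLAN's router
ip route add default via 10.0.100.1 dev bond0.100 table 100
ip rule add from 10.0.100.0/24 table 100
~~~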

