[PVE-User] Open vSwitch with multiple VLANs
Adam Thompson
athompso at athompso.net
Fri Jan 23 19:02:08 CET 2015
On 2015-01-23 06:47 AM, Sten Aus wrote:
> Hi guys and girls!
>
> Is anyone using Open vSwitch (OVS) with multiple VLANs? How are your
> VMs configured?
> Let's say that on the Proxmox host I have created bridge *vmbr0* (for
> LAN communication) and *vmbr1* (for Internet). *And there should be 5
> VLANs on vmbr0.*
>
> Do you need to give the VM two network adapters - one connected to
> vmbr0 and the other to vmbr1 - and then, when you configure the VM's
> network interfaces, say that on eth0 (which is connected to vmbr0)
> there are VLANs 10, 20, 30, 40 and 50, for example?
>
> So, for Debian a very simple example would be:
> eth0.10
> eth0.20
> eth0.30
> eth0.40
> eth0.50
>
> Right now I am using a Linux bridge; the VLANs are made on the Proxmox
> host and the VMs get multiple ethernet adapters, resulting in this
> Debian conf:
> eth0
> eth1
> eth2
> eth3
> eth4
> So, all vmbrs (vmbr10, vmbr20, vmbr30, vmbr40 and vmbr50) are
> connected to VLANs on the Proxmox host and are being used in VMs, and
> as far as I know, I can't connect one physical Proxmox host port
> (eth0) to multiple OVS bridges.
> That means I can't leave my VMs' configuration like that, is that right?
>
> Hope someone can share his/her experience on this Open vSwitch topic!
>
> All the best
> Sten
In general, in any virtualized environment, you do not want to pass VLAN
tags directly to the guest OS. You want the host OS, or even the
upstream ethernet switch, to strip the VLAN tags and present them to the
guest (or host) on separate interfaces.
I'm not sure why, but in my experiments, VMware ESXi, KVM, Hyper-V and
VirtualBox *all* show drastically reduced performance when passing VLAN
tags through to the guest. The exception is when using VT-d to allow
the guest to access the NIC hardware directly, which is ironically the
best-performing option... but also the one that requires the most
dedicated hardware.
The typical setup for PVE would be one 802.1Q-tagged ethernet trunk
inbound, attached to one bridge (vmbr0; it doesn't matter whether it's a
traditional bridge or OVS), with multiple VLAN subinterfaces on that
bridge, which then map (up to) 1:1 onto multiple vNICs in the guest VM.
So the host will have vmbr0v5, vmbr0v6, vmbr0v7, etc., and the guest
will see eth0, eth1, eth2, etc.
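For the OVS case, a minimal host-side /etc/network/interfaces sketch
might look something like the following (the interface names are only
examples, and this assumes the tagged trunk arrives on eth0):

    # physical port carrying the 802.1Q trunk, enslaved to the OVS bridge
    allow-vmbr0 eth0
    iface eth0 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr0

    # the OVS bridge itself; guest vNICs attach here with their own tags
    auto vmbr0
    iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports eth0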
When you create the guest, you create multiple NICs, all bound to vmbr0
but each specifying a different VLAN tag.
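From the CLI that would be something along these lines (VMID 100 is
made up, and the exact options may vary by PVE version):

    qm set 100 -net0 virtio,bridge=vmbr0,tag=10
    qm set 100 -net1 virtio,bridge=vmbr0,tag=20
    qm set 100 -net2 virtio,bridge=vmbr0,tag=30

Each "tag=" NIC then shows up untagged inside the guest as eth0, eth1,
eth2, and so on.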
As in Perl, TIMTOWTDI (there is more than one way to do it), so pick
your preferred strategy, but this seems to be both the typical approach
and the best compromise between flexibility, performance and
manageability.
It's not uncommon to see two variants/extensions of this:
a) multiple .1q-tagged trunks inbound to multiple pNICs, attached to
multiple software bridges (typically to relieve congestion on the first
trunk); or
b) multiple ethernet ports grouped into a single .1q-tagged LAG
(usually via LACP, but not always), attached to a single software bridge
(typically to relieve congestion and/or increase redundancy); see the
sketch just below.
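For variant "b" with OVS, a hedged /etc/network/interfaces sketch
(again, the interface names are only examples) might be:

    # two physical ports bonded into one LACP LAG behind the OVS bridge
    allow-vmbr0 bond0
    iface bond0 inet manual
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_bonds eth0 eth1
        ovs_options bond_mode=balance-tcp lacp=active

    auto vmbr0
    iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0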
For advanced scenario "a", best practice would normally be to avoid
having the same VLAN appear on more than one pNIC-&-bridge: it's
possible, but it's needlessly confusing and doesn't really improve
any of the things you might think it does (e.g. performance,
redundancy). To rephrase: each inbound tagged trunk, corresponding pNIC
and corresponding software bridge should only manage mutually-exclusive
(sets of) VLANs.
Another reasonably-common scenario, but IMHO wasteful under most
circumstances, is to avoid using VLANs at all, and have untagged links
inbound to the host on multiple pNICs, each attached to a separate
software bridge, and then each attached to a separate vNIC as
appropriate. It is possible to have the multiple pNICs all terminate on
one software bridge (with internal VLAN tagging), but this is extremely
rare because it defeats the usual purpose of separating them in the
first place (i.e. "security" and/or "throughput"). Technically this is
advanced scenario "a" taken to its logical extreme - only one VLAN per
inbound ethernet connection.
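A minimal sketch of that untagged-per-link layout, using traditional
Linux bridges (interface names are examples), would just be one plain
bridge per physical port:

    auto vmbr0
    iface vmbr0 inet manual
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

    auto vmbr1
    iface vmbr1 inet manual
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0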
As you move into 10gig ethernet, your approach will likely depend on
your NIC's capabilities. Some 10GE "converged" NICs do virtualization
in hardware (usually called "partitioning"), presenting themselves as
multiple pNICs to the host OS. I'm undecided whether I like this
feature or not - it seems highly useful when dealing with e.g. Windows
Server 2003/2008 which don't have native VLAN support, but much less
useful when dealing with a hypervisor with integral support for VLANs.
The other reason for partitioning is QoS: each partition defines its own
bandwidth and QoS parameters at the cNIC, so you can't (for example)
starve one VLAN by flooding the other VLAN with 10 gigs' worth of
traffic. VMware can handle this in software, PVE can't (yet) as far as I
know.
If you're working in an IBM, Cisco or HP blade chassis, you may be
dealing with virtualized or partitioned pNICs/cNICs anyway, even at 1gig
speeds, at which point I would let the external hardware handle the VLAN
tag stripping as much as possible and present the VLANs to the host OS
as multiple pNICs. There's near-zero cost to do it in hardware, but a
small cost to do it in software on the host.
--
-Adam Thompson
athompso at athompso.net