<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">On 2015-01-23 06:47 AM, Sten Aus wrote:<br>
</div>
<blockquote cite="mid:54C242E2.8010408@eenet.ee" type="cite">
Hi guys and girls!<br>
<br>
Is anyone using Open vSwitch (OVS) with multiple VLANs? How are
      your VMs configured?<br>
Let's say that on the Proxmox host I have created bridge <b>vmbr0</b>
      (for LAN communication) and <b>vmbr1</b> (for Internet). <b>And
        there should be 5 VLANs on vmbr0.</b><br>
<br>
Do you need to give the VM two network adapters - one connected to
      vmbr0 and the other to vmbr1 - and then, when you configure the
      VM's network interfaces, say that on eth0 (which is connected
      to vmbr0) there are VLANs 10, 20, 30, 40 and 50, for example?<br>
<br>
So, for Debian a very simple example would be:<br>
eth0.10<br>
eth0.20<br>
eth0.30<br>
eth0.40<br>
eth0.50<br>
<br>
Right now I am using a Linux bridge; the VLANs are created on the
      Proxmox host and the VMs get multiple ethernet adapters,
      resulting in this Debian conf:<br>
eth0<br>
eth1<br>
eth2<br>
eth3<br>
eth4<br>
So, all the vmbrs (vmbr10, vmbr20, vmbr30, vmbr40 and vmbr50) are
      connected to VLANs on the Proxmox host and are used in the VMs,
      and as far as I know I can't connect one physical Proxmox host
      port (eth0) to multiple OVS bridges.<br>
      That means I can't leave my VMs' configuration like that, is
      that right?<br>
<br>
Hope someone can share his/her experience on this Open vSwitch
topic!<br>
<br>
All the best<br>
Sten<br>
</blockquote>
<br>
In general, in any virtualized environment, you do not want to pass
VLAN tags directly to the guest OS. You want the host OS, or even
the upstream ethernet switch, to strip the VLAN tags and present the
traffic to the guest (or host) on separate interfaces.<br>
<br>
I'm not sure why, but in my experiments on VMware ESXi, KVM, Hyper-V
and VirtualBox, *all* of them show drastically reduced performance
when passing VLAN tags through to the guest. The exception is when
using VT-d I/O to allow the guest to directly access NIC hardware,
which is ironically the best-performing option... but also requires
the most dedicated hardware.<br>
<br>
The typical setup for PVE would be one 802.1Q-tagged ethernet trunk
inbound, attached to one bridge (vmbr0; it doesn't matter whether
it's a traditional bridge or OVS), and multiple VLAN subinterfaces
on that bridge, which then map (up to) 1:1 to multiple vNICs in the
guest VM.<br>
<br>
So the host will have vmbr0v5, vmbr0v6, vmbr0v7, etc. and the guest
will see eth0, eth1, eth2, etc.<br>
When you create the guest, you create multiple NICs, all bound to
vmbr0 but each specifying a different VLAN tag.<br>
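<br>
To make that concrete, here's a minimal sketch, assuming the tagged
trunk arrives on eth0 and a traditional Linux bridge on the host
(the interface names, VMID and VLAN IDs are just placeholders):<br>
<pre>
# /etc/network/interfaces on the PVE host (sketch; eth0/vmbr0 are placeholders)
auto vmbr0
iface vmbr0 inet manual
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
</pre>
and one guest NIC per VLAN, all bound to vmbr0, e.g. from the host
shell:<br>
<pre>
# hypothetical VMID 100; one vNIC per VLAN, all on the same bridge
qm set 100 -net0 virtio,bridge=vmbr0,tag=10
qm set 100 -net1 virtio,bridge=vmbr0,tag=20
qm set 100 -net2 virtio,bridge=vmbr0,tag=30
</pre>
PVE then creates the vmbr0v10/vmbr0v20/vmbr0v30 devices for you, and
the guest just sees untagged eth0, eth1, eth2. With OVS, the vmbr0
stanza uses ovs_type OVSBridge / ovs_ports instead, and the tag is
applied directly to the guest's OVS port rather than via extra
vmbr0vX bridges.<br>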
<br>
As in Perl, TIMTOWTDI (there is more than one way to do it), so pick
your preferred strategy, but this seems to be both the typical
approach and the best compromise between flexibility, performance
and manageability.<br>
<br>
It's not uncommon to see two variants/extensions of this:<br>
a) multiple .1q-tagged trunks inbound to multiple pNICs, attached
to multiple software bridges (typically to relieve congestion on the
first trunk); or<br>
b) multiple ethernet ports grouped into a single .1q-tagged LAG
(usually via LACP, but not always), attached to a single software
bridge (typically to relieve congestion and/or increase redundancy);
a host-side sketch of this follows below.<br>
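<br>
For variant "b", the host side might look roughly like this, assuming
an OVS bond of eth0 and eth1 with LACP and the OVS flavour of
/etc/network/interfaces stanzas (the physical switch must be
configured with a matching LAG; names and options are placeholders,
adjust to taste):<br>
<pre>
# /etc/network/interfaces - OVS LACP bond feeding a single bridge (sketch)
allow-vmbr0 bond0
iface bond0 inet manual
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_bonds eth0 eth1
        ovs_options bond_mode=balance-tcp lacp=active

auto vmbr0
iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports bond0
</pre>
The guest side stays exactly the same as above: per-VLAN vNICs bound
to vmbr0, each with its own tag.<br>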
<br>
For advanced scenario "a", best practice would normally be to avoid
having the same VLAN appear on more than one pNIC-and-bridge pair:
it's possible, but needlessly confusing, and it doesn't really
improve any of the things you might think it does (e.g. performance,
redundancy). To rephrase: each inbound tagged trunk, its
corresponding pNIC and its corresponding software bridge should only
carry mutually-exclusive (sets of) VLANs.<br>
<br>
Another reasonably-common scenario, but IMHO wasteful under most
circumstances, is to avoid using VLANs at all, and have untagged
links inbound to the host on multiple pNICs, each attached to a
separate software bridge, and then each attached to a separate vNIC
as appropriate. It is possible to have the multiple pNICs all
terminate on one software bridge (with internal VLAN tagging), but
this is extremely rare because it defeats the usual purpose of
separating them in the first place (i.e. "security" and/or
"throughput"). Technically this is advanced scenario "a" taken to
its logical extreme - only one VLAN per inbound ethernet connection.<br>
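<br>
For completeness, that untagged variant is just one software bridge
per pNIC with no tags anywhere, something like this sketch (again,
placeholder names):<br>
<pre>
# /etc/network/interfaces - one untagged uplink per bridge (placeholder names)
auto vmbr0
iface vmbr0 inet manual
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0
</pre>
Each guest vNIC is then bound to vmbr0 or vmbr1 as appropriate, with
no tag= option at all.<br>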
<br>
As you move into 10gig ethernet, your approach will likely depend on
your NIC's capabilities. Some 10GE "converged" NICs do
virtualization in hardware (usually called "partitioning"),
presenting themselves as multiple pNICs to the host OS. I'm
undecided whether I like this feature or not - it seems highly
useful when dealing with e.g. Windows Server 2003/2008 which don't
have native VLAN support, but much less useful when dealing with a
hypervisor with integral support for VLANs. The other reason for
partitioning is QoS: each partition defines its own bandwidth and
QoS parameters at the c-NIC, so you can't (for example) starve one
VLAN by flooding the other VLAN with 10 gigs' worth of traffic.
VMware can handle this in software, PVE can't (yet) as far as I
know.<br>
<br>
If you're working in an IBM, Cisco or HP blade chassis, you may be
dealing with virtualized or partitioned pNICs/cNICs anyway, even at
1gig speeds, at which point I would let the external hardware handle
the VLAN tag stripping as much as possible, and present them to the
host OS as multiple pNICs. There's near-zero cost to do it in hardware,
but a small cost to do it in software on the host.<br>
<br>
<pre class="moz-signature" cols="72">--
-Adam Thompson
<a class="moz-txt-link-abbreviated" href="mailto:athompso@athompso.net">athompso@athompso.net</a>
</pre>
</body>
</html>