[pve-devel] [PATCH docs 04/10] Rewrite Network

Aaron Lauterer a.lauterer at proxmox.com
Mon Jun 17 15:05:44 CEST 2019


Polished phrasing, tried to make some explanations easier to comprehend,
changed the styling of interfaces in the `Naming Conventions` section

Signed-off-by: Aaron Lauterer <a.lauterer at proxmox.com>
---
 pve-network.adoc | 302 ++++++++++++++++++++++++-----------------------
 1 file changed, 153 insertions(+), 149 deletions(-)

diff --git a/pve-network.adoc b/pve-network.adoc
index b2dae97..4d76d8b 100644
--- a/pve-network.adoc
+++ b/pve-network.adoc
@@ -5,111 +5,114 @@ ifdef::wiki[]
 :pve-toplevel:
 endif::wiki[]
 
-Network configuration can be done either via the GUI, or by manually
-editing the file `/etc/network/interfaces`, which contains the
-whole network configuration. The  `interfaces(5)` manual page contains the
-complete format description. All {pve} tools try hard to keep direct
- user modifications, but using the GUI is still preferable, because it
-protects you from errors.
+The network configuration can be done either via the GUI or by
+manually editing the file `/etc/network/interfaces`. The
+`interfaces(5)` manual page describes the format. All {pve} tools try
+to honor direct modifications of the `interfaces` file, but the GUI is
+the preferred way to change the network configuration, as it helps to
+avoid errors.
 
-Once the network is configured, you can use the Debian traditional tools `ifup`
-and `ifdown` commands to bring interfaces up and down.
+Once the network is configured, the traditional Debian tools `ifup`
+and `ifdown` can be used to bring interfaces up or down.
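+
+For example (assuming the default bridge name `vmbr0`), a single
+interface can be taken down and brought back up like this:
+
+----
+# ifdown vmbr0
+# ifup vmbr0
+----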
 
 NOTE: {pve} does not write changes directly to
-`/etc/network/interfaces`. Instead, we write into a temporary file
-called `/etc/network/interfaces.new`, and commit those changes when
-you reboot the node.
+`/etc/network/interfaces`. Instead, changes are written to the
+temporary file `/etc/network/interfaces.new`, and are committed when
+the node is rebooted.
 
 Naming Conventions
 ~~~~~~~~~~~~~~~~~~
 
-We currently use the following naming conventions for device names:
+The following naming conventions are used for device names:
 
-* Ethernet devices: en*, systemd network interface names. This naming scheme is
- used for new {pve} installations since version 5.0.
+* Ethernet devices: `en*`, following the 'systemd' network interface
+  naming scheme. Used on hosts which were installed with {pve} 5.0 or
+  later.
 
-* Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...) This naming
-scheme is used for {pve} hosts which were installed before the 5.0
-release. When upgrading to 5.0, the names are kept as-is.
+* Ethernet devices: `eth[N]`, where N ≥ 0 (`eth0`, `eth1`, ...). Used
+  on hosts which were installed prior to the {pve} 5.0 release. The
+  names are kept when upgrading to 5.0.
 
-* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)
+* Bridge names: `vmbr[N]`, where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)
 
-* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)
+* Bonds: `bond[N]`, where N ≥ 0 (`bond0`, `bond1`, ...)
 
-* VLANs: Simply add the VLAN number to the device name,
-  separated by a period (`eno1.50`, `bond1.30`)
+* VLANs: The VLAN number is added to the device name, separated by a
+  period (`eno1.50`, `bond1.30`, ...).
 
-This makes it easier to debug networks problems, because the device
-name implies the device type.
+This helps to debug network problems, because the device name implies
+the device type.
 
 Systemd Network Interface Names
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Systemd uses the two character prefix 'en' for Ethernet network
-devices. The next characters depends on the device driver and the fact
-which schema matches first.
+devices. The next characters depend on the device driver and on which
+schema matches first:
 
-* o<index>[n<phys_port_name>|d<dev_port>] — devices on board
+* `o<index>[n<phys_port_name>|d<dev_port>]` — on-board devices
 
-* s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — device by hotplug id
+* `s<slot>[f<function>][n<phys_port_name>|d<dev_port>]` — device by
+  hotplug id
 
-* [P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>] — devices by bus id
+* `[P<domain>]p<bus>s<slot>[f<function>][n<phys_port_name>|d<dev_port>]`
+  — devices by bus id
 
-* x<MAC> — device by MAC address
+* `x<MAC>` — device by MAC address
 
 The most common patterns are:
 
-* eno1 — is the first on board NIC
+* `eno1` — the first on-board NIC
 
-* enp3s0f1 — is the NIC on pcibus 3 slot 0 and use the NIC function 1.
+* `enp3s0f1` — the NIC on PCI bus 3, slot 0, function 1
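+
+To see which interface names are in use on a specific host, the
+standard `ip` tool can be used, for example:
+
+----
+# ip link
+----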
 
 For more information see https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/[Predictable Network Interface Names].
 
 Choosing a network configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Depending on your current network organization and your resources you can
-choose either a bridged, routed, or masquerading networking setup.
+Depending on your current network organization and your resources, you
+can choose a bridged, routed, or masquerading networking setup.
 
 {pve} server in a private LAN, using an external gateway to reach the internet
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-The *Bridged* model makes the most sense in this case, and this is also
-the default mode on new {pve} installations.
-Each of your Guest system will have a virtual interface attached to the
-{pve} bridge. This is similar in effect to having the Guest network card
-directly connected to a new switch on your LAN, the {pve} host playing the role
-of the switch.
+The *Bridged* setup makes the most sense in this situation; it is also
+the default on new {pve} installations. Each guest system gets a
+virtual interface attached to the {pve} bridge. This can be compared
+to each guest being directly connected to a switch in the LAN, with
+the {pve} host acting as the switch.
 
-{pve} server at hosting provider, with public IP ranges for Guests
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+{pve} server at a hosting provider, with public IP ranges for Guests
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-For this setup, you can use either a *Bridged* or *Routed* model, depending on
-what your provider allows.
+Either a *Bridged* or *Routed* setup can be used, depending on what
+the provider allows.
 
 {pve} server at hosting provider, with a single public IP address
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-In that case the only way to get outgoing network accesses for your guest
-systems is to use *Masquerading*. For incoming network access to your guests,
-you will need to configure *Port Forwarding*.
+In this situation, *Masquerading* is the only way to get outgoing
+network access for the guest systems. For incoming network access to
+the guests, *Port Forwarding* needs to be configured.
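+
+As an example (the interface name, addresses, and port numbers are
+placeholders), an `iptables` DNAT rule forwarding TCP port 2222 on the
+public interface `eno1` to SSH on a guest with the IP `10.10.10.2`
+could look like this:
+
+----
+# iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.2:22
+----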
 
-For further flexibility, you can configure
-VLANs (IEEE 802.1q) and network bonding, also known as "link
-aggregation". That way it is possible to build complex and flexible
-virtual networks.
+TIP: VLANs (IEEE 802.1q) and network bonding, also known as link
+aggregation, can be configured to build more complex and flexible
+networks.
 
 Default Configuration using a Bridge
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 [thumbnail="default-network-setup-bridge.svg"]
-Bridges are like physical network switches implemented in software.
-All VMs can share a single bridge, or you can create multiple bridges to
-separate network domains. Each host can have up to 4094 bridges.
+Bridges are like physical network switches, but implemented in
+software. The simplest setup is a single bridge which is shared by all
+virtual machines. It is also possible to split the network into
+multiple domains, with a bridge for each domain. A host can have up to
+4094 bridges.
 
-The installation program creates a single bridge named `vmbr0`, which
-is connected to the first Ethernet card. The corresponding
-configuration in `/etc/network/interfaces` might look like this:
+During the installation of the {pve} host, a single bridge named
+`vmbr0` is created. It is connected to the first Ethernet interface.
+The corresponding configuration in `/etc/network/interfaces` can look
+like this:
 
 ----
 auto lo
@@ -127,31 +130,32 @@ iface vmbr0 inet static
         bridge_fd 0
 ----
 
-Virtual machines behave as if they were directly connected to the
-physical network. The network, in turn, sees each virtual machine as
-having its own MAC, even though there is only one network cable
-connecting all of these VMs to the network.
+Virtual machines using the bridge behave as if they were directly
+connected to the physical network. Each virtual machine has its own
+MAC address, even though there is only one physical cable connecting
+all these virtual machines to the network.
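+
+For example (the VM ID and the NIC model are placeholders), a virtual
+network device using `vmbr0` could be attached to a guest with:
+
+----
+# qm set 100 --net0 virtio,bridge=vmbr0
+----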
 
 Routed Configuration
 ~~~~~~~~~~~~~~~~~~~~
 
-Most hosting providers do not support the above setup. For security
-reasons, they disable networking as soon as they detect multiple MAC
-addresses on a single interface.
+Most hosting providers do not support a bridged setup. For security
+reasons, they disable networking as soon as multiple MAC addresses are
+detected on a single interface.
 
-TIP: Some providers allows you to register additional MACs on their
-management interface. This avoids the problem, but is clumsy to
-configure because you need to register a MAC for each of your VMs.
+TIP: Some providers allow registering additional MAC addresses through
+their management interface. This avoids the problem, but is clumsy to
+configure, as the MAC address of each new virtual machine needs to be
+registered before it is started for the first time.
 
-You can avoid the problem by ``routing'' all traffic via a single
-interface. This makes sure that all network packets use the same MAC
-address.
+One way to work around the problem is to ``route'' all traffic via a
+single interface. This ensures that all network packets come from the
+same MAC address.
 
 [thumbnail="default-network-setup-routed.svg"]
-A common scenario is that you have a public IP (assume `198.51.100.5`
-for this example), and an additional IP block for your VMs
-(`203.0.113.16/29`). We recommend the following setup for such
-situations:
+A common scenario is to have a public IP address (assume
+`198.51.100.5` for this example) and an additional IP block for the
+VMs (`203.0.113.16/29`). The recommended setup for such a situation
+is:
 
 ----
 auto lo
@@ -179,10 +183,14 @@ iface vmbr0 inet static
 Masquerading (NAT) with `iptables`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Masquerading allows guests having only a private IP address to access the
-network by using the host IP address for outgoing traffic. Each outgoing
-packet is rewritten by `iptables` to appear as originating from the host,
-and responses are rewritten accordingly to be routed to the original sender.
+Masquerading, also known as NAT, helps in situations where only one
+public IP address is available. Each outgoing packet is rewritten by
+`iptables` to appear to be originating from the host, and responses
+are rewritten accordingly in order to be routed back to the original
+sender.
+
+The following example configuration assumes a public IP address of
+`198.51.100.5`. The internal network on the {pve} host for the virtual
+machines is `10.10.10.0/24`.
 
 ----
 auto lo
@@ -213,26 +221,28 @@ iface vmbr0 inet static
 Linux Bond
 ~~~~~~~~~~
 
-Bonding (also called NIC teaming or Link Aggregation) is a technique
-for binding multiple NIC's to a single network device.  It is possible
-to achieve different goals, like make the network fault-tolerant,
-increase the performance or both together.
+Bonding (also called NIC teaming or link aggregation) is a technique
+to bind multiple network interfaces into a single logical one.
+The reasons for doing this can be to make the connection
+fault-tolerant, to increase the performance, or both.
+
+Network bonding can be an alternative to faster, more expensive
+network hardware. The Linux kernel has native support for link
+aggregation, as do most switches. Bonding two NICs can double the
+available bandwidth. A physical NIC which is part of a bonded logical
+interface is called a (NIC) slave.
 
-High-speed hardware like Fibre Channel and the associated switching
-hardware can be quite expensive. By doing link aggregation, two NICs
-can appear as one logical interface, resulting in double speed. This
-is a native Linux kernel feature that is supported by most
-switches. If your nodes have multiple Ethernet ports, you can
-distribute your points of failure by running network cables to
-different switches and the bonded connection will failover to one
-cable or the other in case of network trouble.
+If your {pve} host has multiple Ethernet ports, they can be used to
+distribute the points of failure. When the cables are connected to
+different switches, the bonded connection will fail over to one cable
+in case there is a problem with the other.
 
-Aggregated links can improve live-migration delays and improve the
-speed of replication of data between Proxmox VE Cluster nodes.
+Due to the increased bandwidth, bonded connections can reduce
+live-migration delays and improve the speed of replication between
+{pve} cluster nodes.
 
 There are 7 modes for bonding:
 
-* *Round-robin (balance-rr):* Transmit network packets in sequential
+* *Round-robin (balance-rr):* Transmits network packets in sequential
 order from the first available network interface (NIC) slave through
 the last. This mode provides load balancing and fault tolerance.
 
@@ -242,12 +252,12 @@ slave fails. The single logical bonded interface's MAC address is
 externally visible on only one NIC (port) to avoid distortion in the
 network switch. This mode provides fault tolerance.
 
-* *XOR (balance-xor):* Transmit network packets based on [(source MAC
+* *XOR (balance-xor):* Transmits network packets based on [(source MAC
 address XOR'd with destination MAC address) modulo NIC slave
 count]. This selects the same NIC slave for each destination MAC
 address. This mode provides load balancing and fault tolerance.
 
-* *Broadcast (broadcast):* Transmit network packets on all slave
+* *Broadcast (broadcast):* Transmits network packets on all slave
 network interfaces. This mode provides fault tolerance.
 
 * *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
@@ -264,27 +274,26 @@ designated slave network interface. If this receiving slave fails,
 another slave takes over the MAC address of the failed receiving
 slave.
 
-* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus receive
-load balancing (rlb) for IPV4 traffic, and does not require any
-special network switch support. The receive load balancing is achieved
-by ARP negotiation. The bonding driver intercepts the ARP Replies sent
-by the local system on their way out and overwrites the source
-hardware address with the unique hardware address of one of the NIC
-slaves in the single logical bonded interface such that different
+* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus
+receive load balancing (rlb) for IPV4 traffic, and does not require
+any special network switch support. The receive load balancing is
+achieved by ARP negotiation. The bonding driver intercepts the ARP
+Replies sent by the local system on their way out and overwrites the
+source hardware address with the unique hardware address of one of the
+NIC slaves in the single logical bonded interface such that different
 network-peers use different MAC addresses for their network packet
 traffic.
 
-If your switch support the LACP (IEEE 802.3ad) protocol then we recommend using
-the corresponding bonding mode (802.3ad). Otherwise you should generally use the
-active-backup mode. +
+If your switch supports the LACP (IEEE 802.3ad) protocol, it is
+recommended to use the corresponding bonding mode (802.3ad).
+Otherwise, the active-backup mode should generally be used.
 // http://lists.linux-ha.org/pipermail/linux-ha/2013-January/046295.html
-If you intend to run your cluster network on the bonding interfaces, then you
-have to use active-passive mode on the bonding interfaces, other modes are
-unsupported.
+If you intend to run your cluster network on the bonding interfaces,
+you have to use active-backup mode on them; other modes are not
+supported.
 
-The following bond configuration can be used as distributed/shared
-storage network. The benefit would be that you get more speed and the
-network will be fault-tolerant.
+The following bond configuration can be used for a distributed/shared
+storage network. It provides more bandwidth as well as fault tolerance.
 
 .Example: Use bond with fixed IP address
 ----
@@ -312,13 +321,11 @@ iface vmbr0 inet static
         bridge_ports eno1
         bridge_stp off
         bridge_fd 0
-
 ----
 
-
 [thumbnail="default-network-setup-bond.svg"]
-Another possibility it to use the bond directly as bridge port.
-This can be used to make the guest network fault-tolerant.
+It is also possible to use the bond directly as a bridge port. This
+can be used to make the guest network fault-tolerant.
 
 .Example: Use a bond as bridge port
 ----
@@ -351,59 +358,56 @@ iface vmbr0 inet static
 VLAN 802.1Q
 ~~~~~~~~~~~
 
-A virtual LAN (VLAN) is a broadcast domain that is partitioned and
-isolated in the network at layer two.  So it is possible to have
-multiple networks (4096) in a physical network, each independent of
-the other ones.
-
-Each VLAN network is identified by a number often called 'tag'.
-Network packages are then 'tagged' to identify which virtual network
-they belong to.
-
+// I tried to describe the concept in an easy to understand way
+A virtual LAN (VLAN) adds segmentation to the physical LAN. This is
+done by adding a so-called 'tag' to the packets at layer two of the
+OSI model. The 'tag' is a number (1 - 4094) identifying the VLAN.
+The benefit is that several isolated networks can share the same
+physical cabling, as if each were using its own separate network
+cables.
 
 VLAN for Guest Networks
 ^^^^^^^^^^^^^^^^^^^^^^^
 
-{pve} supports this setup out of the box. You can specify the VLAN tag
-when you create a VM. The VLAN tag is part of the guest network
-configuration. The networking layer supports different modes to
-implement VLANs, depending on the bridge configuration:
+{pve} supports VLANs out of the box. A VLAN tag can be specified when
+creating a virtual machine; the VLAN tag is part of the guest network
+configuration (see the example after the following list). Depending on
+the bridge configuration, different modes for VLANs are supported:
 
 * *VLAN awareness on the Linux bridge:*
-In this case, each guest's virtual network card is assigned to a VLAN tag,
-which is transparently supported by the Linux bridge.
-Trunk mode is also possible, but that makes configuration
-in the guest necessary.
+In this case, each guest's virtual network card is assigned a VLAN
+tag, which is transparently supported by the Linux bridge.
+Trunk mode is also possible, but it requires additional configuration
+in the guest.
 
 * *"traditional" VLAN on the Linux bridge:*
-In contrast to the VLAN awareness method, this method is not transparent
-and creates a VLAN device with associated bridge for each VLAN.
-That is, creating a guest on VLAN 5 for example, would create two
-interfaces eno1.5 and vmbr0v5, which would remain until a reboot occurs.
+In contrast to the VLAN aware method, this method is not transparent.
+A VLAN device and an associated bridge are created for each VLAN.
+For example, if VLAN 5 is configured on the physical interface `eno1`,
+the devices `eno1.5` and `vmbr0v5` will be created.
 
 * *Open vSwitch VLAN:*
 This mode uses the OVS VLAN feature.
 
 * *Guest configured VLAN:*
-VLANs are assigned inside the guest. In this case, the setup is
+VLANs are assigned inside the guest. In this case, the setup is
 completely done inside the guest and can not be influenced from the
-outside. The benefit is that you can use more than one VLAN on a
+outside. The benefit is that more than one VLAN can be configured on a
 single virtual NIC.
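+
+As mentioned above, the VLAN tag is part of the guest network
+configuration. For example (the VM ID and NIC model are placeholders),
+a guest NIC on `vmbr0` can be put into VLAN 5 with:
+
+----
+# qm set 100 --net0 virtio,bridge=vmbr0,tag=5
+----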
 
 
 VLAN on the Host
 ^^^^^^^^^^^^^^^^
 
-To allow host communication with an isolated network. It is possible
-to apply VLAN tags to any network device (NIC, Bond, Bridge). In
-general, you should configure the VLAN on the interface with the least
-abstraction layers between itself and the physical NIC.
-
-For example, in a default configuration where you want to place
-the host management address on a separate VLAN.
+It is possible to place the management interface / IP of the {pve}
+host in a VLAN. VLAN tags can be configured for any network device
+(NIC, bond, bridge). In general, the VLAN should be configured on the
+interface with the least abstraction layers between itself and the
+physical NIC.
 
+A common use case is to put the {pve} host management address into a
+separate VLAN, to restrict access and exposure.
 
-.Example: Use VLAN 5 for the {pve} management IP with traditional Linux bridge
+.Example: Use VLAN 5 for the {pve} management IP with a traditional Linux bridge
 ----
 auto lo
 iface lo inet loopback
@@ -429,7 +433,7 @@ iface vmbr0 inet manual
 
 ----
 
-.Example: Use VLAN 5 for the {pve} management IP with VLAN aware Linux bridge
+.Example: Use VLAN 5 for the {pve} management IP with a VLAN aware Linux bridge
 ----
 auto lo
 iface lo inet loopback
@@ -451,10 +455,10 @@ iface vmbr0 inet manual
         bridge_vlan_aware yes
 ----
 
-The next example is the same setup but a bond is used to
-make this network fail-safe.
+In the next example, a bond is used to make the network setup
+fault-tolerant.
 
-.Example: Use VLAN 5 with bond0 for the {pve} management IP with traditional Linux bridge
+.Example: Use VLAN 5 with bond0 for the {pve} management IP with a traditional Linux bridge
 ----
 auto lo
 iface lo inet loopback
-- 
2.20.1




