[pve-devel] [PATCH docs 2/2] Update pvecm documentation for corosync 3

Thomas Lamprecht t.lamprecht at proxmox.com
Tue Jul 9 09:19:43 CEST 2019


On 7/8/19 6:26 PM, Stefan Reiter wrote:
> Parts about multicast and RRP have been removed entirely. Instead, a new
> section 'Corosync Redundancy' has been added explaining the concept of
> links and link priorities.
> 

not bad at all, still some notes inline.

> Signed-off-by: Stefan Reiter <s.reiter at proxmox.com>
> ---
>  pvecm.adoc | 372 +++++++++++++++++++++--------------------------------
>  1 file changed, 147 insertions(+), 225 deletions(-)
> 
> diff --git a/pvecm.adoc b/pvecm.adoc
> index 1c0b9e7..1246111 100644
> --- a/pvecm.adoc
> +++ b/pvecm.adoc
> @@ -56,13 +56,8 @@ Grouping nodes into a cluster has the following advantages:
>  Requirements
>  ------------
>  
> -* All nodes must be in the same network as `corosync` uses IP Multicast
> - to communicate between nodes (also see
> - http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
> - ports 5404 and 5405 for cluster communication.
> -+
> -NOTE: Some switches do not support IP multicast by default and must be
> -manually enabled first.
> +* All nodes must be able to contact each other via UDP ports 5404 and 5405 for
> + corosync to work.
>  
>  * Date and time have to be synchronized.
>  
> @@ -84,6 +79,11 @@ NOTE: While it's possible for {pve} 4.4 and {pve} 5.0 this is not supported as
>  production configuration and should only used temporarily during upgrading the
>  whole cluster from one to another major version.
>  
> +NOTE: Mixing {pve} 6.x and earlier versions is not supported, because of the
> +major corosync upgrade. While possible to run corosync 3 on {pve} 5.4, this
> +configuration is not supported for production environments and should only be
> +used while upgrading a cluster.
> +
>  
>  Preparing Nodes
>  ---------------
> @@ -96,10 +96,12 @@ Currently the cluster creation can either be done on the console (login via
>  `ssh`) or the API, which we have a GUI implementation for (__Datacenter ->
>  Cluster__).
>  
> -While it's often common use to reference all other nodenames in `/etc/hosts`
> -with their IP this is not strictly necessary for a cluster, which normally uses
> -multicast, to work. It maybe useful as you then can connect from one node to
> -the other with SSH through the easier to remember node name.
> +While it's common to reference all nodenames and their IPs in `/etc/hosts` (or
> +make their names resolveable through other means), this is not strictly
> +necessary for a cluster to work. It may be useful however, as you can then
> +connect from one node to the other with SSH via the easier to remember node
> +name. (see also xref:pvecm_corosync_addresses[Link Address Types])
> +
>  
>  [[pvecm_create_cluster]]
>  Create the Cluster
> @@ -113,31 +115,12 @@ node names.
>   hp1# pvecm create CLUSTERNAME
>  ----
>  
> -CAUTION: The cluster name is used to compute the default multicast address.
> -Please use unique cluster names if you run more than one cluster inside your
> -network. To avoid human confusion, it is also recommended to choose different
> -names even if clusters do not share the cluster network.

Maybe move this from a "CAUTION" to a "NOTE" and keep the hint that it still
makes sense to use unique cluster names, to avoid human confusion and because I
have a feeling that there are other assumptions in corosync which depend on that.
Also, _if_ multicast gets integrated into knet we will probably have a similar
issue again, so try to get people into the habit now, even if not 100% required.
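
Maybe something like this as the replacement note (just a rough sketch, wording
up to you):

----
NOTE: It is still recommended to use unique cluster names, even if the clusters
do not share a network, to avoid human confusion and possible future conflicts
(e.g., should multicast support return with knet).
----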

> -
>  To check the state of your cluster use:
>  
>  ----
>   hp1# pvecm status
>  ----
>  
> -Multiple Clusters In Same Network
> -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> -
> -It is possible to create multiple clusters in the same physical or logical
> -network. Each cluster must have a unique name, which is used to generate the
> -cluster's multicast group address. As long as no duplicate cluster names are
> -configured in one network segment, the different clusters won't interfere with
> -each other.
> -
> -If multiple clusters operate in a single network it may be beneficial to setup
> -an IGMP querier and enable IGMP Snooping in said network. This may reduce the
> -load of the network significantly because multicast packets are only delivered
> -to endpoints of the respective member nodes.
> -

It's still possible to create multiple clusters in the same network, so I'd keep
the above and just adapt it to non-multicast for now...

>  
>  [[pvecm_join_node_to_cluster]]
>  Adding Nodes to the Cluster
> @@ -150,7 +133,7 @@ Login via `ssh` to the node you want to add.
>  ----
>  
>  For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
> -An IP address is recommended (see xref:pvecm_corosync_addresses[Ring Address Types]).
> +An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address Types]).

Maybe add a note somewhere that, while the new things are named "Link", the
config still refers to "ringX_addr" for backward compatibility.

>  
>  CAUTION: A new node cannot hold any VMs, because you would get
>  conflicts about identical VM IDs. Also, all existing configuration in
> @@ -173,7 +156,7 @@ Date:             Mon Apr 20 12:30:13 2015
>  Quorum provider:  corosync_votequorum
>  Nodes:            4
>  Node ID:          0x00000001
> -Ring ID:          1928
> +Ring ID:          1/8
>  Quorate:          Yes
>  
>  Votequorum information
> @@ -217,15 +200,15 @@ Adding Nodes With Separated Cluster Network
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>  
>  When adding a node to a cluster with a separated cluster network you need to
> -use the 'ringX_addr' parameters to set the nodes address on those networks:
> +use the 'link0' parameter to set the nodes address on that network:
>  
>  [source,bash]
>  ----
> -pvecm add IP-ADDRESS-CLUSTER -ring0_addr IP-ADDRESS-RING0
> +pvecm add IP-ADDRESS-CLUSTER -link0 LOCAL-IP-ADDRESS-LINK0
>  ----
>  
> -If you want to use the Redundant Ring Protocol you will also want to pass the
> -'ring1_addr' parameter.
> +If you want to use the built-in xref:pvecm_redundancy[redundancy] of the
> +kronosnet transport you can also pass the 'link1' parameter.
>  
>  
>  Remove a Cluster Node
> @@ -283,7 +266,7 @@ Date:             Mon Apr 20 12:44:28 2015
>  Quorum provider:  corosync_votequorum
>  Nodes:            3
>  Node ID:          0x00000001
> -Ring ID:          1992
> +Ring ID:          1/8
>  Quorate:          Yes
>  
>  Votequorum information
> @@ -302,8 +285,8 @@ Membership information
>  0x00000003          1 192.168.15.92
>  ----
>  
> -If, for whatever reason, you want that this server joins the same
> -cluster again, you have to
> +If, for whatever reason, you want this server to join the same cluster again,
> +you have to
>  
>  * reinstall {pve} on it from scratch
>  
> @@ -329,14 +312,14 @@ storage with another cluster, as storage locking doesn't work over cluster
>  boundary. Further, it may also lead to VMID conflicts.
>  
>  Its suggested that you create a new storage where only the node which you want
> -to separate has access. This can be an new export on your NFS or a new Ceph
> +to separate has access. This can be a new export on your NFS or a new Ceph
>  pool, to name a few examples. Its just important that the exact same storage
>  does not gets accessed by multiple clusters. After setting this storage up move
>  all data from the node and its VMs to it. Then you are ready to separate the
>  node from the cluster.
>  
>  WARNING: Ensure all shared resources are cleanly separated! You will run into
> -conflicts and problems else.
> +conflicts and problems otherwise.
>  
>  First stop the corosync and the pve-cluster services on the node:
>  [source,bash]
> @@ -400,6 +383,7 @@ the nodes can still connect to each other with public key authentication. This
>  should be fixed by removing the respective keys from the
>  '/etc/pve/priv/authorized_keys' file.
>  
> +
>  Quorum
>  ------
>  
> @@ -419,12 +403,13 @@ if it loses quorum.
>  
>  NOTE: {pve} assigns a single vote to each node by default.
>  
> +
>  Cluster Network
>  ---------------
>  
>  The cluster network is the core of a cluster. All messages sent over it have to
> -be delivered reliable to all nodes in their respective order. In {pve} this
> -part is done by corosync, an implementation of a high performance low overhead
> +be delivered reliably to all nodes in their respective order. In {pve} this
> +part is done by corosync, an implementation of a high performance, low overhead
>  high availability development toolkit. It serves our decentralized
>  configuration file system (`pmxcfs`).
>  
> @@ -432,75 +417,59 @@ configuration file system (`pmxcfs`).
>  Network Requirements
>  ~~~~~~~~~~~~~~~~~~~~
>  This needs a reliable network with latencies under 2 milliseconds (LAN
> -performance) to work properly. While corosync can also use unicast for
> -communication between nodes its **highly recommended** to have a multicast
> -capable network. The network should not be used heavily by other members,
> -ideally corosync runs on its own network.
> -*never* share it with network where storage communicates too.
> +performance) to work properly. The network should not be used heavily by other
> +members, ideally corosync runs on its own network. Do not use a shared network
> +for corosync and storage (except as a potential low-priority fallback in a
> +xref:pvecm_redundancy[redundant] configuration).
>  
>  Before setting up a cluster it is good practice to check if the network is fit
> -for that purpose.
> +for that purpose. With corosync 3, it is enough to ensure all nodes can reach
> +each other over the interfaces you are planning to use. Using `ping` is enough
> +for a basic test.
>  
> -* Ensure that all nodes are in the same subnet. This must only be true for the
> -  network interfaces used for cluster communication (corosync).
> +If the {pve} firewall is enabled, ACCEPT rules for corosync will automatically
> +be generated - no manual action is required.

"will automatically be generated" vs "will be automatically generated"?

>  
> -* Ensure all nodes can reach each other over those interfaces, using `ping` is
> -  enough for a basic test.
> +NOTE: Corosync used Multicast before version 3.0 (introduced in {pve} 6.0).
> +Modern versions rely on https://kronosnet.org/[Kronosnet] for cluster
> +communication, which uses regular UDP unicast.

"... which, for now, only supports regular UDP unicast."?

>  
> -* Ensure that multicast works in general and a high package rates. This can be
> -  done with the `omping` tool. The final "%loss" number should be < 1%.
> -+
> -[source,bash]
> -----
> -omping -c 10000 -i 0.001 -F -q NODE1-IP NODE2-IP ...
> -----
> -
> -* Ensure that multicast communication works over an extended period of time.
> -  This uncovers problems where IGMP snooping is activated on the network but
> -  no multicast querier is active. This test has a duration of around 10
> -  minutes.
> -+
> -[source,bash]
> -----
> -omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
> -----
> -
> -Your network is not ready for clustering if any of these test fails. Recheck
> -your network configuration. Especially switches are notorious for having
> -multicast disabled by default or IGMP snooping enabled with no IGMP querier
> -active.
> -
> -In smaller cluster its also an option to use unicast if you really cannot get
> -multicast to work.
> +CAUTION: You can still enable Multicast or legacy unicast by setting your
> +transport to `udp` or `udpu` in your xref:pvecm_edit_corosync_conf[corosync.conf],
> +but keep in mind that this will disable all cryptography and redundancy support.
> +This is therefore not recommended.

off-topic: what I generally see as a loss are the omping checks; they could be
used to easily get connection and latency stats from all of the cluster nodes,
which was nice to get a feeling for the whole network...
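
For reference, the dropped long-running check was:

----
omping -c 600 -i 1 -q NODE1-IP NODE2-IP ...
----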



>  
>  Separate Cluster Network
>  ~~~~~~~~~~~~~~~~~~~~~~~~
>  
>  When creating a cluster without any parameters the cluster network is generally
> -shared with the Web UI and the VMs and its traffic. Depending on your setup
> +shared with the Web UI and the VMs and their traffic. Depending on your setup,
>  even storage traffic may get sent over the same network. Its recommended to
>  change that, as corosync is a time critical real time application.
>  
> +NOTE: Setting up corosync links on a different network does not affect other
> +cluster communications (e.g. Web UI, default migration network, etc...).
> +

This note is a bit confusing, IMO. What do you want to say here?

>  Setting Up A New Network
>  ^^^^^^^^^^^^^^^^^^^^^^^^
>  
> -First you have to setup a new network interface. It should be on a physical
> +First you have to set up a new network interface. It should be on a physically
>  separate network. Ensure that your network fulfills the
>  xref:pvecm_cluster_network_requirements[cluster network requirements].
>  
>  Separate On Cluster Creation
>  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>  
> -This is possible through the 'ring0_addr' and 'bindnet0_addr' parameter of
> -the 'pvecm create' command used for creating a new cluster.
> +This is possible via the 'linkX' parameters of the 'pvecm create'
> +command used for creating a new cluster.
>  
> -If you have setup an additional NIC with a static address on 10.10.10.1/25
> -and want to send and receive all cluster communication over this interface
> +If you have set up an additional NIC with a static address on 10.10.10.1/25,
> +and want to send and receive all cluster communication over this interface,
>  you would execute:
>  
>  [source,bash]
>  ----
> -pvecm create test --ring0_addr 10.10.10.1 --bindnet0_addr 10.10.10.0
> +pvecm create test --link0 10.10.10.1
>  ----
>  
>  To check if everything is working properly execute:
> @@ -509,20 +478,20 @@ To check if everything is working properly execute:
>  systemctl status corosync
>  ----
>  
> -Afterwards, proceed as descripted in the section to
> +Afterwards, proceed as described above to
>  xref:pvecm_adding_nodes_with_separated_cluster_network[add nodes with a separated cluster network].
>  
>  [[pvecm_separate_cluster_net_after_creation]]
>  Separate After Cluster Creation
>  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>  
> -You can do this also if you have already created a cluster and want to switch
> +You can do this if you have already created a cluster and want to switch
>  its communication to another network, without rebuilding the whole cluster.
>  This change may lead to short durations of quorum loss in the cluster, as nodes
>  have to restart corosync and come up one after the other on the new network.
>  
>  Check how to xref:pvecm_edit_corosync_conf[edit the corosync.conf file] first.
> -The open it and you should see a file similar to:
> +Then, open it and you should see a file similar to:
>  
>  ----
>  logging {
> @@ -560,33 +529,38 @@ quorum {
>  }
>  
>  totem {
> -  cluster_name: thomas-testcluster
> +  cluster_name: testcluster
>    config_version: 3
> -  ip_version: ipv4
> +  ip_version: ipv4-6
>    secauth: on
>    version: 2
>    interface {
> -    bindnetaddr: 192.168.30.50
> -    ringnumber: 0
> +    linknumber: 0
>    }
>  
>  }
>  ----
>  
> -The first you want to do is add the 'name' properties in the node entries if
> -you do not see them already. Those *must* match the node name.
> +NOTE: `ringX_addr` actually specifies a corosync *link address*, the name "ring"
> +is simply a remnant of older corosync versions that was kept for backwards
> +compatibility.

Ah OK, here's the note I talked about above, great!

> +
> +The first thing you want to do is add the 'name' properties in the node entries
> +if you do not see them already. Those *must* match the node name.

as a note: this is pretty much given now, as long as the cluster was
created with our tooling. I.e., I added this in 2015 with the following series:

https://git.proxmox.com/?p=pve-cluster.git;a=commitdiff;h=14d0000a4fe285d93e9944862a2e8d8a49c5554f
https://git.proxmox.com/?p=pve-cluster.git;a=commitdiff;h=3f24bffff0f0638ee243e20b925b5497c24e93d2
https://git.proxmox.com/?p=pve-cluster.git;a=commitdiff;h=baf39b62529aa905c21a32dd26bab4a75532601e

It sadly was not there directly for 4.0, where clusters needed to be rebuilt
anyway and we would thus be totally safe, but it was added just a few days
after the initial release, so most clusters will have it.

>  
> -Then replace the address from the 'ring0_addr' properties with the new
> -addresses.  You may use plain IP addresses or also hostnames here. If you use
> +Then replace all addresses from the 'ring0_addr' properties of all nodes with
> +the new addresses. You may use plain IP addresses or hostnames here. If you use
>  hostnames ensure that they are resolvable from all nodes. (see also
> -xref:pvecm_corosync_addresses[Ring Address Types])
> +xref:pvecm_corosync_addresses[Link Address Types])
>  
> -In my example I want to switch my cluster communication to the 10.10.10.1/25
> -network. So I replace all 'ring0_addr' respectively. I also set the bindnetaddr
> -in the totem section of the config to an address of the new network. It can be
> -any address from the subnet configured on the new network interface.
> +In this example, we want to switch the cluster communication to the
> +10.10.10.1/25 network. So we replace all 'ring0_addr' respectively.
>  
> -After you increased the 'config_version' property the new configuration file
> +NOTE: The exact same procedure can be used to change other 'ringX_addr' values
> +as well, although we recommend not changing multiple at once, to make it easier
> +to recover if something goes wrong.
> +
> +After we increase the 'config_version' property, the new configuration file

"After that" ?

>  should look like:
>  
>  ----
> @@ -626,26 +600,26 @@ quorum {
>  }
>  
>  totem {
> -  cluster_name: thomas-testcluster
> +  cluster_name: testcluster
>    config_version: 4
> -  ip_version: ipv4
> +  ip_version: ipv4-6
>    secauth: on
>    version: 2
>    interface {
> -    bindnetaddr: 10.10.10.1
> -    ringnumber: 0
> +    linknumber: 0
>    }
>  
>  }
>  ----
>  
> -Now after a final check whether all changed information is correct we save it
> -and see again the xref:pvecm_edit_corosync_conf[edit corosync.conf file] section to
> -learn how to bring it in effect.
> +Then, after a final check whether all changed information is correct, we save it
> +and once again follow the xref:pvecm_edit_corosync_conf[edit corosync.conf file]
> +section to bring it into effect.
>  
> -As our change cannot be enforced live from corosync we have to do an restart.
> +The changes will be applied live, so restarting corosync is not strictly
> +necessary. If you changed other settings as well, or notice corosync
> +complaining, you can optionally trigger a restart using:
>  
> -On a single node execute:

still want to keep the above for the case where a restart is required?

>  [source,bash]
>  ----
>  systemctl restart corosync
> @@ -658,14 +632,12 @@ Now check if everything is fine:
>  systemctl status corosync
>  ----
>  
> -If corosync runs again correct restart corosync also on all other nodes.
> -They will then join the cluster membership one by one on the new network.
> -

and that? 

>  [[pvecm_corosync_addresses]]
>  Corosync addresses
>  ~~~~~~~~~~~~~~~~~~
>  
> -A corosync link or ring address can be specified in two ways:
> +A corosync link address (denoted by 'ringX_addr' in `corosync.conf`) can be


(for backward compatibility denoted by 'ringX_addr'...) ?

> +specified in two ways:
>  
>  * **IPv4/v6 addresses** will be used directly. They are recommended, since they
>  are static and usually not changed carelessly.
> @@ -691,104 +663,72 @@ Nodes that joined the cluster on earlier versions likely still use their
>  unresolved hostname in `corosync.conf`. It might be a good idea to replace
>  them with IPs or a seperate hostname, as mentioned above.
>  
> -[[pvecm_rrp]]
> -Redundant Ring Protocol
> -~~~~~~~~~~~~~~~~~~~~~~~
> -To avoid a single point of failure you should implement counter measurements.
> -This can be on the hardware and operating system level through network bonding.
> -
> -Corosync itself offers also a possibility to add redundancy through the so
> -called 'Redundant Ring Protocol'. This protocol allows running a second totem
> -ring on another network, this network should be physically separated from the
> -other rings network to actually increase availability.
>  
> -RRP On Cluster Creation
> -~~~~~~~~~~~~~~~~~~~~~~~
> +[[pvecm_redundancy]]
> +Corosync Redundancy
> +-------------------
>  
> -The 'pvecm create' command provides the additional parameters 'bindnetX_addr',
> -'ringX_addr' and 'rrp_mode', can be used for RRP configuration.
> +Corosync supports redundant networking via its integrated kronosnet layer by
> +default (it is not supported on the legacy udp/udpu transports). It can be
> +enabled by specifying more than one link address, either via the '--linkX'
> +parameters of `pvecm` (while creating a cluster or adding a new node) or by
> +specifiying more than one 'ringX_addr' in `corosync.conf`.

s/specifiying/specifying/

>  
> -NOTE: See the xref:pvecm_corosync_conf_glossary[glossary] if you do not know what each parameter means.
> +NOTE: To provide useful failover, every link should be on its own
> +physical network connection.
>  
> -So if you have two networks, one on the 10.10.10.1/24 and the other on the
> -10.10.20.1/24 subnet you would execute:
> +Links will be used in order of their number, with the lower number having higher
> +priority. Even if all links are working, only the one with the highest priority
> +will see corosync traffic. Link priorities cannot be mixed, i.e. links with
> +different priorities will not be able to communicate with each other.

Not really true, the order of use is:
First, the link with the lowest "knet_link_priority" setting in the totem's
interface section is used (you can also pass this along to a linkX param, à la
"<address>,priority=10"), and if there are multiple links with the same priority
(or no priorities were set at all) then, yes, the one with the lowest ID of that
set is used.
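
Could maybe also be shown in the docs with a short example, something like
(from memory, not checked against the current schema; addresses and priority
values are just placeholders):

----
# per-link priorities via the pvecm parameter format mentioned above:
pvecm create CLUSTERNAME --link0 10.10.10.1,priority=20 \
  --link1 10.20.20.1,priority=15

# or the equivalent knet setting in corosync.conf:
totem {
  ...
  interface {
    linknumber: 0
    knet_link_priority: 20
  }
  interface {
    linknumber: 1
    knet_link_priority: 15
  }
  ...
}
----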

>  
> -[source,bash]
> -----
> -pvecm create CLUSTERNAME -bindnet0_addr 10.10.10.1 -ring0_addr 10.10.10.1 \
> --bindnet1_addr 10.10.20.1 -ring1_addr 10.10.20.1
> -----
> +Since lower priority (higher number) links will not see traffic unless all
> +higher priorities have failed, it becomes a useful strategy to specify even
> +networks used for other tasks (VMs, storage, etc...) as low-priority links. If
> +worst comes to worst, a higher-latency or more congested connection might be
> +better than no connection at all.

"strategy to specify even..." vs "strategy even to specify..." ?

>  
> -RRP On Existing Clusters
> -~~~~~~~~~~~~~~~~~~~~~~~~
> +Adding Redundant Links To An Existing Cluster
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>  
> -You will take similar steps as described in
> -xref:pvecm_separate_cluster_net_after_creation[separating the cluster network] to
> -enable RRP on an already running cluster. The single difference is, that you
> -will add `ring1` and use it instead of `ring0`.
> +To add a new link to a running configuration, first check how to
> +xref:pvecm_edit_corosync_conf[edit the corosync.conf file].
>  
> -First add a new `interface` subsection in the `totem` section, set its
> -`ringnumber` property to `1`. Set the interfaces `bindnetaddr` property to an
> -address of the subnet you have configured for your new ring.
> -Further set the `rrp_mode` to `passive`, this is the only stable mode.
> +Then, add a new 'ringX_addr' to every node in the `nodelist` section. Make
> +sure that your 'X' is the same for every node you add it to, and that it is
> +unique for each node.
>  
> -Then add to each node entry in the `nodelist` section its new `ring1_addr`
> -property with the nodes additional ring address.
> -
> -So if you have two networks, one on the 10.10.10.1/24 and the other on the
> -10.10.20.1/24 subnet, the final configuration file should look like:
> +Lastly, add a new 'interface', as shown below, to your `totem`
> +section, replacing 'X' with your link number chosen above:
>  
>  ----
>  totem {
> -  cluster_name: tweak
> -  config_version: 9
> -  ip_version: ipv4
> -  rrp_mode: passive
> -  secauth: on
> -  version: 2
> +  ...
>    interface {
> -    bindnetaddr: 10.10.10.1
> -    ringnumber: 0
> -  }
> -  interface {
> -    bindnetaddr: 10.10.20.1
> -    ringnumber: 1
> +    linknumber: X
>    }
>  }

I'd still like to have this example a bit more fleshed out, as it is, IMO, easier
to relate to. IOW, a full totem section, something like:

----
...

totem {
  cluster_name: production-clus
  config_version: 2
  interface {
    linknumber: 0
  }
  interface {
    linknumber: X
  }
  ip_version: ipv4-6
  secauth: on
  version: 2
}
----

Also, the node list is still interesting? The addresses need to be set there,
so why do you remove that here?
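I.e., I'd keep something along the lines of the old example there, just adapted
(addresses are placeholders, as before):

----
nodelist {
  node {
    name: pvecm1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.10.20.1
  }

  node {
    name: pvecm2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.10.10.2
    ring1_addr: 10.10.20.2
  }

  [...] # other cluster nodes here
}
----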

> +----
>  
> -nodelist {
> -  node {
> -    name: pvecm1
> -    nodeid: 1
> -    quorum_votes: 1
> -    ring0_addr: 10.10.10.1
> -    ring1_addr: 10.10.20.1
> -  }
> -
> - node {
> -    name: pvecm2
> -    nodeid: 2
> -    quorum_votes: 1
> -    ring0_addr: 10.10.10.2
> -    ring1_addr: 10.10.20.2
> -  }
> +The new link will be enabled as soon as you follow the last steps to
> +xref:pvecm_edit_corosync_conf[edit the corosync.conf file]. A restart should not
> +be necessary. You can check that corosync loaded the new link using:
>  
> -  [...] # other cluster nodes here
> -}
> +----
> +journalctl -b -u corosync
> +----
>  
> -[...] # other remaining config sections here
> +It might be a good idea to test the new link by temporarily disconnecting the
> +old link on one node and making sure that it doesn't fence itself (since it
> +should be using the new link). You can also check it's status while disconnected
> +using:

Not sure if that's a good test (i.e., to fence oneself) for production systems ^^

>  
> +----
> +pvecm status
>  ----
>  
> -Bring it in effect like described in the
> -xref:pvecm_edit_corosync_conf[edit the corosync.conf file] section.
> -
> -This is a change which cannot take live in effect and needs at least a restart
> -of corosync. Recommended is a restart of the whole cluster.
> +If you see a healthy cluster state, it means that your new link is being used.
>  
> -If you cannot reboot the whole cluster ensure no High Availability services are
> -configured and the stop the corosync service on all nodes. After corosync is
> -stopped on all nodes start it one after the other again.
>  
>  Corosync External Vote Support
>  ------------------------------
> @@ -832,10 +772,8 @@ for Debian based hosts, other Linux distributions should also have a package
>  available through their respective package manager.
>  
>  NOTE: In contrast to corosync itself, a QDevice connects to the cluster over
> -TCP/IP and thus does not need a multicast capable network between itself and
> -the cluster. In fact the daemon may run outside of the LAN and can have
> -longer latencies than 2 ms.
> -
> +TCP/IP. The daemon may even run outside of the clusters LAN and can have longer
> +latencies than 2 ms.
>  
>  Supported Setups
>  ~~~~~~~~~~~~~~~~
> @@ -871,7 +809,6 @@ There are two drawbacks with this:
>  If you understand the drawbacks and implications you can decide yourself if
>  you should use this technology in an odd numbered cluster setup.
>  
> -
>  QDevice-Net Setup
>  ~~~~~~~~~~~~~~~~~
>  
> @@ -923,7 +860,6 @@ Membership information
>  
>  which means the QDevice is set up.
>  
> -
>  Frequently Asked Questions
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~
>  
> @@ -961,15 +897,15 @@ pve# pvecm qdevice remove
>  
>  //Still TODO
>  //^^^^^^^^^^
> -//There ist still stuff to add here
> +//There is still stuff to add here
>  
>  
>  Corosync Configuration
>  ----------------------
>  
> -The `/etc/pve/corosync.conf` file plays a central role in {pve} cluster. It
> -controls the cluster member ship and its network.
> -For reading more about it check the corosync.conf man page:
> +The `/etc/pve/corosync.conf` file plays a central role in a {pve} cluster. It
> +controls the cluster membership and the nodes addresses.

What was wrong with "its network"? It does not control the node's addresses
per se, in the sense that this is not the configuration that determines which
addresses get applied on the node's network. Also, the node address in
/etc/pve/.members is not controlled by it; that one comes from the start of
pmxcfs, where it resolves the local hostname and then uses that (the same
address does not need to be used in corosync.conf, directly or indirectly).

> +For further information about it, check the corosync.conf man page:
>  [source,bash]
>  ----
>  man corosync.conf
> @@ -983,22 +919,22 @@ Here are a few best practice tips for doing this.
>  Edit corosync.conf
>  ~~~~~~~~~~~~~~~~~~
>  
> -Editing the corosync.conf file can be not always straight forward. There are
> -two on each cluster, one in `/etc/pve/corosync.conf` and the other in
> +Editing the corosync.conf file is not always very straightforward. There are
> +two on each cluster node, one in `/etc/pve/corosync.conf` and the other in
>  `/etc/corosync/corosync.conf`. Editing the one in our cluster file system will
>  propagate the changes to the local one, but not vice versa.
>  
>  The configuration will get updated automatically as soon as the file changes.
>  This means changes which can be integrated in a running corosync will take
> -instantly effect. So you should always make a copy and edit that instead, to
> -avoid triggering some unwanted changes by an in between safe.
> +effect instantly. So you should always make a copy and edit that instead, to

s/instantly/immediately/

> +avoid triggering some unwanted changes by an in-between safe.
>  
>  [source,bash]
>  ----
>  cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
>  ----
>  
> -Then open the Config file with your favorite editor, `nano` and `vim.tiny` are
> +Then open the config file with your favorite editor, `nano` and `vim.tiny` are
>  preinstalled on {pve} for example.

semi-related, but probably better written as:

.. preinstalled on any {pve} node, for example.

>  
>  NOTE: Always increment the 'config_version' number on configuration changes,
> @@ -1026,7 +962,7 @@ systemctl status corosync
>  journalctl -b -u corosync
>  ----
>  
> -If the change could applied automatically. If not you may have to restart the
> +If the change could be applied automatically. If not you may have to restart the
>  corosync service via:
>  [source,bash]
>  ----
> @@ -1054,7 +990,6 @@ corosync[1647]:  [SERV  ] Service engine 'corosync_quorum' failed to load for re
>  It means that the hostname you set for corosync 'ringX_addr' in the
>  configuration could not be resolved.
>  
> -
>  Write Configuration When Not Quorate
>  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>  
> @@ -1080,19 +1015,8 @@ Corosync Configuration Glossary
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>  
>  ringX_addr::
> -This names the different ring addresses for the corosync totem rings used for
> -the cluster communication.
> -
> -bindnetaddr::
> -Defines to which interface the ring should bind to. It may be any address of
> -the subnet configured on the interface we want to use. In general its the
> -recommended to just use an address a node uses on this interface.
> -
> -rrp_mode::
> -Specifies the mode of the redundant ring protocol and may be passive, active or
> -none. Note that use of active is highly experimental and not official
> -supported. Passive is the preferred mode, it may double the cluster
> -communication throughput and increases availability.
> +This names the different link addresses for the kronosnet connections between
> +nodes.
>  
>  
>  Cluster Cold Start
> @@ -1127,10 +1051,10 @@ It makes a difference if a Guest is online or offline, or if it has
>  local resources (like a local disk).
>  
>  For Details about Virtual Machine Migration see the
> -xref:qm_migration[QEMU/KVM Migration Chapter]
> +xref:qm_migration[QEMU/KVM Migration Chapter].
>  
>  For Details about Container Migration see the
> -xref:pct_migration[Container Migration Chapter]
> +xref:pct_migration[Container Migration Chapter].
>  
>  Migration Type
>  ~~~~~~~~~~~~~~
> @@ -1155,7 +1079,6 @@ modern systems is lower because they implement AES encryption in
>  hardware. The performance impact is particularly evident in fast
>  networks where you can transfer 10 Gbps or more.
>  
> -
>  Migration Network
>  ~~~~~~~~~~~~~~~~~
>  
> @@ -1175,7 +1098,6 @@ destination node from the network specified in the CIDR form.  To
>  enable this, the network must be specified so that each node has one,
>  but only one IP in the respective network.
>  
> -
>  Example
>  ^^^^^^^
>  
> 





