[pve-devel] [PATCH docs 1/2] Mention GUI for creating a cluster and adding nodes

Aaron Lauterer a.lauterer at proxmox.com
Mon Aug 26 10:44:44 CEST 2019


Some things I would write differently. Mainly simpler, more precise 
language, avoiding clutter and reducing the amount of "you".

On 8/22/19 4:53 PM, Stefan Reiter wrote:
> Signed-off-by: Stefan Reiter <s.reiter at proxmox.com>
> ---
>   pvecm.adoc | 81 ++++++++++++++++++++++++++++++++++++++++--------------
>   1 file changed, 60 insertions(+), 21 deletions(-)
> 
> diff --git a/pvecm.adoc b/pvecm.adoc
> index e986a75..5379c3f 100644
> --- a/pvecm.adoc
> +++ b/pvecm.adoc
> @@ -103,25 +103,33 @@ to the other with SSH via the easier to remember node name (see also
>   xref:pvecm_corosync_addresses[Link Address Types]). Note that we always
>   recommend to reference nodes by their IP addresses in the cluster configuration.
>   
> -
> -[[pvecm_create_cluster]]
>   Create the Cluster
>   ------------------
>   
> -Login via `ssh` to the first {pve} node. Use a unique name for your cluster.
> -This name cannot be changed later. The cluster name follows the same rules as
> -node names.
> +Use a unique name for your cluster. This name cannot be changed later. The
> +cluster name follows the same rules as node names.
> +
> +Create via Web GUI
> +~~~~~~~~~~~~~~~~~~
> +
> +Under __Datacenter -> Cluster__, click on *Create Cluster*. Type your cluster

Enter the cluster name...

> +name and select a network connection from the dropdown to serve as your main
...serve as the main...
> +cluster network (Link 0, default is what the node's hostname resolves to).

cluster network (Link 0). It defaults to the IP address resolved via the 
node's hostname.

> +
> +Optionally, you can select the 'Advanced' check box and choose an additional

To add a second link for fallback purposes, activate the 'Advanced' 
checkbox and select the second network interface (Link 1, see xref:....

> +network interface for fallback purposes (Link 1, see also
> +xref:pvecm_redundancy[Corosync Redundancy]).
> +
> +Create via Command Line
> +~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Login via `ssh` to the first {pve} node and run the following command:
>   
>   ----
>    hp1# pvecm create CLUSTERNAME
>   ----
>   
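
Since the linkX parameters only come up later, maybe it would help to 
show a full create call right here. Cluster name and addresses are of 
course just placeholders:

----
 hp1# pvecm create mycluster -link0 10.10.10.1 -link1 10.20.20.1
----
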
> -NOTE: It is possible to create multiple clusters in the same physical or logical
> -network. Use unique cluster names if you do so. To avoid human confusion, it is
> -also recommended to choose different names even if clusters do not share the
> -cluster network.
> -
> -To check the state of your cluster use:
> +To check the state of your new cluster use:

...of the new cluster...

>   
>   ----
>    hp1# pvecm status
> @@ -131,9 +139,9 @@ Multiple Clusters In Same Network
>   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>   
>   It is possible to create multiple clusters in the same physical or logical
> -network. Each such cluster must have a unique name, this does not only helps
> -admins to distinguish on which cluster they currently operate, it is also
> -required to avoid possible clashes in the cluster communication stack.
> +network. Each such cluster must have a unique name, to not only help admins
Each cluster must have a unique name to avoid possible clashes in the 
cluster communication stack. It also helps admins distinguish the 
cluster they are currently operating on.
> +distinguish which cluster they are currently operating on, but also to avoid
> +possible clashes in the cluster communication stack.
>   
>   While the bandwidth requirement of a corosync cluster is relatively low, the
>   latency of packages and the package per second (PPS) rate is the limiting
> @@ -145,6 +153,39 @@ infrastructure for bigger clusters.
>   Adding Nodes to the Cluster
>   ---------------------------
>   
> +CAUTION: A new node cannot hold any VMs, because you would get
> +conflicts about identical VM IDs. Also, all existing configuration in
> +`/etc/pve` is overwritten when you join a new node to the cluster. To
> +workaround, use `vzdump` to backup and restore to a different VMID after
> +adding the node to the cluster.

A node that is about to be added to the cluster cannot hold any guests. 
All existing configuration in `/etc/pve` is overwritten when joining a 
cluster, and guest IDs could conflict. As a workaround, create a backup 
of the guest (`vzdump`) and restore it to a different ID after the node 
has been added to the cluster.
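
Maybe a short example would make the workaround easier to follow. Guest 
IDs, the node name and the dump path are made up, and the restore 
command assumes a VM (containers would use `pct restore` instead):

----
 # before joining: back up guest 100 on the new node
 node2# vzdump 100
 # after joining: restore under a free ID, e.g. 120
 node2# qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma 120
----
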

> +
> +Add Node via GUI
> +~~~~~~~~~~~~~~~~
> +
> +If you want to use "assisted join", where most parameters will be filled in for
> +you, first login to the web interface on a node already in the cluster. Under
> +__Datacenter -> Cluster__, click on *Join Information* at the top. Click on
> +*Copy Information* or manually copy the string from the 'Information' field.

Login to the web-based interface of an existing cluster node. Under 
__Datacenter -> Cluster__, click the button *Join Information* at the 
top. The easiest way is to click the button *Copy Information*, or to 
copy the content of the text field *Join Information* manually.

> +
> +To add the new node, login to the web interface on the node you want to add.
> +Under __Datacenter -> Cluster__, click on *Join Cluster*. Fill in the
> +'Information' field with the text you copied earlier.
> +
> +For security reasons, the password is not included, so you have to fill that in
> +manually.

For security reasons, the password of the cluster needs to be entered 
manually.

> +
> +NOTE: The Join Information is not necessarily required, you can also uncheck the
> +'Assisted Join' checkbox and fill in the required fields manually.

NOTE: To enter all required data manually, disable the 'Assisted Join' 
checkbox.

> +
> +After clicking on *Join* your node will immediately be added to the cluster.

...on *Join* the node will...

> +You might need to reload the web page, and re-login with the cluster
> +credentials.

no comma?

> +
> +Confirm that your node is visible under __Datacenter -> Cluster__.
> +
> +Add Node via Command Line
> +~~~~~~~~~~~~~~~~~~~~~~~~~
> +
>   Login via `ssh` to the node you want to add.
>   
>   ----
> @@ -154,11 +195,6 @@ Login via `ssh` to the node you want to add.
>   For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
>   An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address Types]).
>   
> -CAUTION: A new node cannot hold any VMs, because you would get
> -conflicts about identical VM IDs. Also, all existing configuration in
> -`/etc/pve` is overwritten when you join a new node to the cluster. To
> -workaround, use `vzdump` to backup and restore to a different VMID after
> -adding the node to the cluster.
>   
>   To check the state of the cluster use:
>   
> @@ -229,6 +265,8 @@ pvecm add IP-ADDRESS-CLUSTER -link0 LOCAL-IP-ADDRESS-LINK0
>   If you want to use the built-in xref:pvecm_redundancy[redundancy] of the
>   kronosnet transport layer, also use the 'link1' parameter.
>   
> +In the GUI you can select the correct interface from the corresponding 'Link 0'
> +and 'Link 1' fields.

You can select the correct interface in the GUI with the 'Link 0' and ...

(Not sure if "with" or "in" would be preferable here)
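
Maybe also add a concrete example with both links here, so the CLI and 
GUI variants line up. Addresses are placeholders:

----
 hp2# pvecm add 192.168.1.10 -link0 10.10.10.2 -link1 10.20.20.2
----
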
>   
>   Remove a Cluster Node
>   ---------------------
> @@ -692,8 +730,9 @@ Corosync Redundancy
>   Corosync supports redundant networking via its integrated kronosnet layer by
>   default (it is not supported on the legacy udp/udpu transports). It can be
>   enabled by specifying more than one link address, either via the '--linkX'
> -parameters of `pvecm` (while creating a cluster or adding a new node) or by
> -specifying more than one 'ringX_addr' in `corosync.conf`.
> +parameters of `pvecm`, in the GUI as **Link 1** (while creating a cluster or
> +adding a new node) or by specifying more than one 'ringX_addr' in
> +`corosync.conf`.
>   
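
A small `corosync.conf` fragment could illustrate the 'ringX_addr' 
variant mentioned here. Node name, id and addresses are made up:

----
 node {
   name: hp1
   nodeid: 1
   quorum_votes: 1
   ring0_addr: 10.10.10.1
   ring1_addr: 10.20.20.1
 }
----
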
>   NOTE: To provide useful failover, every link should be on its own
>   physical network connection.
> 


