[pmg-devel] [PATCH docs] pmgcm: typos, grammar and rephrasing fixups

Aaron Lauterer a.lauterer at proxmox.com
Thu Apr 23 13:39:52 CEST 2020


LGTM

Reviewed-by: Aaron Lauterer <a.lauterer at proxmox.com>

On 4/23/20 10:38 AM, Oguz Bektas wrote:
> Signed-off-by: Oguz Bektas <o.bektas at proxmox.com>
> ---
>   pmgcm.adoc | 45 ++++++++++++++++++++++-----------------------
>   1 file changed, 22 insertions(+), 23 deletions(-)
> 
> diff --git a/pmgcm.adoc b/pmgcm.adoc
> index 76bfcd3..8cfdb31 100644
> --- a/pmgcm.adoc
> +++ b/pmgcm.adoc
> @@ -30,7 +30,7 @@ failures in email systems are just not acceptable. To meet these
>   requirements we developed the Proxmox HA (High Availability) Cluster.
>   
>   The {pmg} HA Cluster consists of a master and several slave nodes
> -(minimum one node). Configuration is done on the master. Configuration
> +(minimum one slave node). Configuration is done on the master. Configuration
>   and data is synchronized to all cluster nodes over a VPN tunnel. This
>   provides the following advantages:
>   
> @@ -43,8 +43,8 @@ provides the following advantages:
>   * high performance
>   
>   We use a unique application level clustering scheme, which provides
> -extremely good performance. Special considerations where taken to make
> -management as easy as possible. Complete Cluster setup is done within
> +extremely good performance. Special considerations were taken to make
> +management as easy as possible. Complete cluster setup is done within

s/Complete/A complete/

>   minutes, and nodes automatically reintegrate after temporary failures
>   without any operator interaction.
>   
> @@ -64,7 +64,7 @@ The HA Cluster can also run in virtualized environments.
>   Subscriptions
>   -------------
>   
> -Each host in a cluster has its own subscription. If you want support
> +Each node in a cluster has its own subscription. If you want support
>   for a cluster, each cluster node needs to have a valid
>   subscription. All nodes must have the same subscription level.
>   
> @@ -79,7 +79,7 @@ second node is used as quarantine host, and only provides the web
>   interface to the user quarantine.
>   
>   The normal mail delivery process looks up DNS Mail Exchange (`MX`)
> -records to determine the destination host. A `MX` record tells the
> +records to determine the destination host. An `MX` record tells the
>   sending system where to deliver mail for a certain domain. It is also
>   possible to have several `MX` records for a single domain, they can have
>   different priorities. For example, our `MX` record looks like that:
> @@ -94,7 +94,7 @@ proxmox.com.            22879   IN      MX      10 mail.proxmox.com.
>   mail.proxmox.com.       22879   IN      A       213.129.239.114
>   ----
>   
> -Please notice that there is one single `MX` record for the Domain
> +Notice that there is a single `MX` record for the domain
>   `proxmox.com`, pointing to `mail.proxmox.com`. The `dig` command
>   automatically puts out the corresponding address record if it
>   exists. In our case it points to `213.129.239.114`. The priority of
> @@ -124,28 +124,28 @@ server (mail.provider.tld) if the primary server (mail.proxmox.com) is
>   not available.
>   
>   NOTE: Any reasonable mail server retries mail delivery if the target
> -server is not available, i.e. {pmg} stores mail and retries delivery
> -for up to one week. So you will not lose mail if your mail server is
> +server is not available, and {pmg} stores mail and retries delivery
> +for up to one week. So you will not lose mails if your mail server is
>   down, even if you run a single server setup.
>   
>   
>   Load balancing with `MX` records
>   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>   
> -Using your ISPs mail server is not always a good idea, because many
> +Using your ISP's mail server is not always a good idea, because many
>   ISPs do not use advanced spam prevention techniques, or do not filter
> -SPAM at all. It is often better to run a second server yourself to
> +spam at all. It is often better to run a second server yourself to
>   avoid lower spam detection rates.
>   
> -Anyways, it’s quite simple to set up a high performance load balanced
> -mail cluster using `MX` records. You just need to define two `MX` records
> +It’s quite simple to set up a high performance load balanced
> +mail cluster using `MX` records. You need to define two `MX` records
>   with the same priority. Here is a complete example to make it clearer.
>   
>   First, you need to have at least 2 working {pmg} servers
>   (mail1.example.com and mail2.example.com) configured as cluster (see
>   section xref:pmg_cluster_administration[Cluster administration]
>   below), each having its own IP address. Let us assume the following
> -addresses (DNS address records):
> +DNS address records:
>   
>   ----
>   mail1.example.com.       22879   IN      A       1.2.3.4
> @@ -154,7 +154,7 @@ mail2.example.com.       22879   IN      A       1.2.3.5
>   
>   It is always a good idea to add reverse lookup entries (PTR
>   records) for those hosts. Many email systems nowadays reject mails
> -from hosts without valid PTR records.  Then you need to define your `MX`
> +from hosts without valid PTR records. Then you need to define your `MX`
>   records:
>   
>   ----
> @@ -162,9 +162,8 @@ example.com.            22879   IN      MX      10 mail1.example.com.
>   example.com.            22879   IN      MX      10 mail2.example.com.
>   ----
>   
> -This is all you need. You will receive mails on both hosts, more or
> -less load-balanced using round-robin scheduling. If one host fails the
> -other one is used.
> +This is all you need. You will receive mails on both hosts, load-balanced using
> +round-robin scheduling. If one host fails the other one is used.
>   
>   
>   Other ways
> @@ -173,7 +172,7 @@ Other ways
>   Multiple address records
>   ^^^^^^^^^^^^^^^^^^^^^^^^
>   
> -Using several DNS `MX` records is sometimes clumsy if you have many
> +Using several DNS `MX` records is sometimes tedious if you have many
>   domains. It is also possible to use one `MX` record per domain, but
>   multiple address records:
>   
> @@ -195,9 +194,9 @@ using DNAT. See your firewall manual for more details.
>   Cluster administration
>   ----------------------
>   
> -Cluster administration can be done on the GUI or using the command
> +Cluster administration can be done in the GUI or by using the command
>   line utility `pmgcm`. The CLI tool is a bit more verbose, so we suggest
> -to use that if you run into problems.
> +to use that if you run into any problems.
>   
>   NOTE: Always setup the IP configuration before adding a node to the
>   cluster. IP address, network mask, gateway address and hostname can’t
> @@ -243,8 +242,8 @@ Adding Cluster Nodes
>   
>   [thumbnail="pmg-gui-cluster-join.png", big=1]
>   
> -When you add a new node to a cluster (using `join`) all data on that node is
> -destroyed. The whole database is initialized with cluster data from
> +When you add a new node to a cluster (using `join`), all data on that node is
> +destroyed. The whole database is initialized with the cluster data from
>   the master.
>   
>   * make sure you have the right IP configuration
> @@ -257,7 +256,7 @@ pmgcm join <master_ip>
>   
>   You need to enter the root password of the master host when asked for
>   a password. When joining a cluster using the GUI, you also need to
> -enter the 'fingerprint' of the master node. You get that information
> +enter the 'fingerprint' of the master node. You can get that information
>   by pressing the `Add` button on the master node.
>   
>   CAUTION: Node initialization deletes all existing databases, stops and
> 


