[pbs-devel] [PATCH proxmox-backup] docs: language fixup

Dylan Whyte d.whyte at proxmox.com
Tue May 17 18:12:42 CEST 2022


This fixup covers every doc patch since my previous language fixup patch.

Note: not much attention was paid to certificate-management, as it's
derived from pmg, which I had touched up not so long ago.

Signed-off-by: Dylan Whyte <d.whyte at proxmox.com>
---
 docs/backup-client.rst          |  10 +--
 docs/certificate-management.rst |   2 +-
 docs/conf.py                    |   2 +-
 docs/introduction.rst           |   2 +-
 docs/local-zfs.rst              |   8 +--
 docs/maintenance.rst            |   4 +-
 docs/managing-remotes.rst       |  23 ++++---
 docs/markdown-primer.rst        |  35 +++++-----
 docs/storage.rst                |  30 ++++----
 docs/sysadmin.rst               |   2 +-
 docs/system-booting.rst         | 118 ++++++++++++++++----------------
 docs/tape-backup.rst            |  44 ++++++------
 docs/terminology.rst            |   6 +-
 docs/traffic-control.rst        |  30 ++++----
 docs/user-management.rst        |  84 +++++++++++------------
 15 files changed, 200 insertions(+), 200 deletions(-)

diff --git a/docs/backup-client.rst b/docs/backup-client.rst
index afed415f..b2419468 100644
--- a/docs/backup-client.rst
+++ b/docs/backup-client.rst
@@ -142,7 +142,7 @@ you want to back up two disks mounted at ``/mnt/disk1`` and ``/mnt/disk2``:
 
 This creates a backup of both disks.
 
-If you want to use a namespace for the backup target you can add the `--ns`
+If you want to use a namespace for the backup target, you can add the `--ns`
 parameter:
 
 .. code-block:: console
@@ -685,17 +685,17 @@ It is also possible to protect single snapshots from being pruned or deleted:
   # proxmox-backup-client snapshot protected update <snapshot> true
 
 This will set the protected flag on the snapshot and prevent pruning or manual
-deletion of this snapshot untilt he flag is removed again with:
+deletion of this snapshot until the flag is removed again with:
 
 .. code-block:: console
 
   # proxmox-backup-client snapshot protected update <snapshot> false
 
-When a group is with a protected snapshot is deleted, only the non-protected
-ones are removed and the group will remain.
+When a group with a protected snapshot is deleted, only the non-protected
+ones are removed, and the rest will remain.
 
 .. note:: This flag will not be synced when using pull or sync jobs. If you
-   want to protect a synced snapshot, you have to manually to this again on
+   want to protect a synced snapshot, you have to do this again manually on
    the target backup server.
 
 .. _client_garbage-collection:
diff --git a/docs/certificate-management.rst b/docs/certificate-management.rst
index 510d68e5..6f7283c8 100644
--- a/docs/certificate-management.rst
+++ b/docs/certificate-management.rst
@@ -18,7 +18,7 @@ configuration, or by using certificates, signed by a trusted certificate authori
 Certificates for the API and SMTP
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-`Proxmox Backup`_ stores it certificate and key in:
+`Proxmox Backup`_ stores its certificate and key in:
 
 -  ``/etc/proxmox-backup/proxy.pem``
 
diff --git a/docs/conf.py b/docs/conf.py
index 2c212dee..749599c0 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -77,7 +77,7 @@ project = 'Proxmox Backup'
 copyright = '2019-2021, Proxmox Server Solutions GmbH'
 author = 'Proxmox Support Team'
 
-# The version info for the project you're documenting, acts as a replacement for
+# The version info for the project you're documenting acts as a replacement for
 # |version| and |release|, also used in various other places throughout the
 # built documents.
 #
diff --git a/docs/introduction.rst b/docs/introduction.rst
index 47b5d606..52da74ec 100644
--- a/docs/introduction.rst
+++ b/docs/introduction.rst
@@ -37,7 +37,7 @@ integrated client.
 A single backup is allowed to contain several archives. For example, when you
 backup a :term:`virtual machine<Virtual machine>`, each disk is stored as a
 separate archive inside that backup. The VM configuration itself is stored as
-an extra file.  This way, it's easy to access and restore only the important
+an extra file. This way, it's easy to access and restore only the important
 parts of the backup, without the need to scan the whole backup.
 
 
diff --git a/docs/local-zfs.rst b/docs/local-zfs.rst
index 7b2d3850..dec9166f 100644
--- a/docs/local-zfs.rst
+++ b/docs/local-zfs.rst
@@ -211,13 +211,13 @@ Usually `grub.cfg` is located in `/boot/grub/grub.cfg`
 Activate e-mail notification
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-ZFS comes with an event daemon ``ZED``, which monitors events generated by the
-ZFS kernel module. The daemon can also send emails on ZFS events like pool
+ZFS comes with an event daemon, ``ZED``, which monitors events generated by the
+ZFS kernel module. The daemon can also send emails upon ZFS events, such as pool
 errors. Newer ZFS packages ship the daemon in a separate package ``zfs-zed``,
 which should already be installed by default in `Proxmox Backup`_.
 
-You can configure the daemon via the file ``/etc/zfs/zed.d/zed.rc`` with your
-favorite editor. The required setting for email notfication is
+You can configure the daemon via the file ``/etc/zfs/zed.d/zed.rc``, using your
+preferred editor. The required setting for email notification is
 ``ZED_EMAIL_ADDR``, which is set to ``root`` by default.
 
 .. code-block:: console
diff --git a/docs/maintenance.rst b/docs/maintenance.rst
index 7ca4a7ac..0767e300 100644
--- a/docs/maintenance.rst
+++ b/docs/maintenance.rst
@@ -192,8 +192,8 @@ following options are available:
 Maintenance Mode
 ----------------
 
-Proxmox Backup Server implements setting the `read-only` and `offline`
-maintenance modes for a datastore.
+Proxmox Backup Server supports setting `read-only` and `offline`
+maintenance modes on a datastore.
 
 Once enabled, depending on the mode, new reads and/or writes to the datastore
 are blocked, allowing an administrator to safely execute maintenance tasks, for
diff --git a/docs/managing-remotes.rst b/docs/managing-remotes.rst
index d9d0facd..23aa3486 100644
--- a/docs/managing-remotes.rst
+++ b/docs/managing-remotes.rst
@@ -98,19 +98,20 @@ the local datastore as well. If the ``owner`` option is not set (defaulting to
 If the ``group-filter`` option is set, only backup groups matching at least one
 of the specified criteria are synced. The available criteria are:
 
-* backup type, for example to only sync groups of the `ct` (Container) type:
+* Backup type, for example, to only sync groups of the `ct` (Container) type:
     .. code-block:: console
 
      # proxmox-backup-manager sync-job update ID --group-filter type:ct
-* full group identifier
+* Full group identifier, to sync a specific backup group:
     .. code-block:: console
 
      # proxmox-backup-manager sync-job update ID --group-filter group:vm/100
-* regular expression matched against the full group identifier
+* Regular expression, matched against the full group identifier:
+    .. code-block:: console
 
-.. todo:: add example for regex
+     # proxmox-backup-manager sync-job update ID --group-filter regex:'^vm/1\d{2,3}$'
 
-The same filter is applied to local groups for handling of the
+The same filter is applied to local groups, for handling of the
 ``remove-vanished`` option.
 
 .. note:: The ``protected`` flag of remote backup snapshots will not be synced.
@@ -118,9 +119,9 @@ The same filter is applied to local groups for handling of the
 Namespace Support
 ^^^^^^^^^^^^^^^^^
 
-Sync jobs can be configured to not only sync datastores, but also sub-sets of
+Sync jobs can be configured to not only sync datastores, but also subsets of
 datastores in the form of namespaces or namespace sub-trees. The following
-parameters influence how namespaces are treated as part of a sync job
+parameters influence how namespaces are treated as part of a sync job's
 execution:
 
 - ``remote-ns``: the remote namespace anchor (default: the root namespace)
@@ -199,10 +200,10 @@ sync job scope but only exist locally are treated as vanished and removed
 Bandwidth Limit
 ^^^^^^^^^^^^^^^
 
-Syncing a datastore to an archive can produce lots of traffic and impact other
-users of the network. So, to avoid network or storage congestion you can limit
-the bandwidth of the sync job by setting the ``rate-in`` option either in the
-web interface or using the ``proxmox-backup-manager`` command-line tool:
+Syncing a datastore to an archive can produce a lot of traffic and impact other
+users of the network. In order to avoid network or storage congestion, you can
+limit the bandwidth of the sync job by setting the ``rate-in`` option either in
+the web interface or using the ``proxmox-backup-manager`` command-line tool:
 
 .. code-block:: console
 
diff --git a/docs/markdown-primer.rst b/docs/markdown-primer.rst
index 01ce1d6d..acdbd9ac 100644
--- a/docs/markdown-primer.rst
+++ b/docs/markdown-primer.rst
@@ -10,18 +10,18 @@ Markdown Primer
   --  John Gruber, https://daringfireball.net/projects/markdown/
 
 
-The Proxmox Backup Server (PBS) web-interface has support for using Markdown to
-rendering rich text formatting in node and virtual guest notes.
+The "Notes" panel of the Proxmox Backup Server web-interface supports
+rendering Markdown text.
 
-PBS supports CommonMark with most extensions of GFM (GitHub Flavoured Markdown),
-like tables or task-lists.
+Proxmox Backup Server supports CommonMark with most extensions of GFM (GitHub
+Flavoured Markdown), like tables or task-lists.
 
 .. _markdown_basics:
 
 Markdown Basics
 ---------------
 
-Note that we only describe the basics here, please search the web for more
+Note that we only describe the basics here. Please search the web for more
 extensive resources, for example on https://www.markdownguide.org/
 
 Headings
@@ -51,7 +51,7 @@ Combinations are also possible, for example:
 Links
 ~~~~~
 
-You can use automatic detection of links, for example,
+You can use automatic detection of links. For example,
 ``https://forum.proxmox.com/`` would transform it into a clickable link.
 
 You can also control the link text, for example:
@@ -76,7 +76,7 @@ Use ``*`` or ``-`` for unordered lists, for example:
   * Item 2b
 
 
-Adding an indentation can be used to created nested lists.
+You can create nested lists by adding indentation.
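+
+For example, a simple two-level list:
+
+.. code-block:: md
+
+  * Top-level item
+    * Nested item
+    * Another nested item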
 
 Ordered Lists
 ^^^^^^^^^^^^^
@@ -94,7 +94,7 @@ NOTE: The integer of ordered lists does not need to be correct, they will be num
 Task Lists
 ^^^^^^^^^^
 
-Task list use a empty box ``[ ]`` for unfinished tasks and a box with an `X` for finished tasks.
+Task lists use an empty box ``[ ]`` for unfinished tasks and a box with an `X` for finished tasks.
 
 For example:
 
@@ -110,7 +110,7 @@ Tables
 ~~~~~~
 
 Tables use the pipe symbol ``|`` to separate columns, and ``-`` to separate the
-table header from the table body, in that separation one can also set the text
+table header from the table body. In that separation, you can also set the text
 alignment, making one column left-, center-, or right-aligned.
 
 
@@ -143,23 +143,24 @@ You can enter block quotes by prefixing a line with ``>``, similar as in plain-t
 Code and Snippets
 ~~~~~~~~~~~~~~~~~
 
-You can use backticks to avoid processing for a few word or paragraphs. That is useful for
-avoiding that a code or configuration hunk gets mistakenly interpreted as markdown.
+You can use backticks to avoid processing a group of words or paragraphs. This
+is useful for preventing a code or configuration hunk from being mistakenly
+interpreted as markdown.
 
-Inline code
+Inline Code
 ^^^^^^^^^^^
 
-Surrounding part of a line with single backticks allows to write code inline,
-for examples:
+Surrounding part of a line with single backticks allows you to write code
+inline, for example:
 
 .. code-block:: md
 
   This hosts IP address is `10.0.0.1`.
 
-Whole blocks of code
-^^^^^^^^^^^^^^^^^^^^
+Entire Blocks of Code
+^^^^^^^^^^^^^^^^^^^^^
 
-For code blocks spanning several lines you can use triple-backticks to start
+For code blocks spanning several lines, you can use triple-backticks to start
 and end such a block, for example:
 
 .. code-block:: md
diff --git a/docs/storage.rst b/docs/storage.rst
index d2a12e8c..ed4e28f2 100644
--- a/docs/storage.rst
+++ b/docs/storage.rst
@@ -261,44 +261,44 @@ categorized by checksum, after a backup operation has been executed.
  276490 drwxr-x--- 1 backup backup 1.1M Jul  8 12:35 .
 
 
-Once you uploaded some backups, or created namespaces, you may see the Backup
-Type (`ct`, `vm`, `host`) and the start of the namespace hierachy (`ns`).
+Once you've uploaded some backups or created namespaces, you may see the backup
+type (`ct`, `vm`, `host`) and the start of the namespace hierarchy (`ns`).
 
 .. _storage_namespaces:
 
 Backup Namespaces
 ~~~~~~~~~~~~~~~~~
 
-A datastore can host many backups as long as the underlying storage is big
-enough and provides the performance required for one's use case.
-But, without any hierarchy or separation its easy to run into naming conflicts,
+A datastore can host many backups, as long as the underlying storage is large
+enough and provides the performance required for a user's use case.
+However, without any hierarchy or separation, it's easy to run into naming conflicts,
 especially when using the same datastore for multiple Proxmox VE instances or
 multiple users.
 
 The backup namespace hierarchy allows you to clearly separate different users
-or backup sources in general, avoiding naming conflicts and providing
+or backup sources in general, avoiding naming conflicts and providing a
 well-organized backup content view.
 
-Each namespace level can host any backup type, CT, VM or Host but also other
-namespaces, up to a depth of 8 level, where the root namespace is the first
+Each namespace level can host any backup type, CT, VM or Host, but also other
+namespaces, up to a depth of 8 levels, where the root namespace is the first
 level.
 
-
 Namespace Permissions
 ^^^^^^^^^^^^^^^^^^^^^
 
 You can make the permission configuration of a datastore more fine-grained by
 setting permissions only on a specific namespace.
 
-To see a datastore you need permission that has at least one of `AUDIT`,
+To view a datastore, you need a permission that has at least an `AUDIT`,
 `MODIFY`, `READ` or `BACKUP` privilege on any namespace it contains.
 
-To create or delete a namespace you require the modify privilege on the parent
-namespace. So, to initially create namespaces you need to have a permission
-with a access role that includes the `MODIFY` privilege on the datastore itself.
+To create or delete a namespace, you require the `MODIFY` privilege on the parent
+namespace. Thus, to initially create namespaces, you need to have a permission
+with an access role that includes the `MODIFY` privilege on the datastore itself.
 
-For backup groups the existing privilege rules still apply, you either need a
-powerful permission or be the owner of the backup group, nothing changed here.
+For backup groups, the existing privilege rules still apply. You either need a
+privileged enough permission or to be the owner of the backup group; nothing
+changed here.
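+
+For example, to let the hypothetical user ``john@pbs`` create and manage backups
+in the namespace ``ns1`` of the datastore ``store1``, an ACL entry along these
+lines could be used (all names here are placeholders):
+
+.. code-block:: console
+
+  # proxmox-backup-manager acl update /datastore/store1/ns1 DatastoreBackup --auth-id john@pbs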
 
 .. todo:: continue
 
diff --git a/docs/sysadmin.rst b/docs/sysadmin.rst
index 7440d201..88bf256f 100644
--- a/docs/sysadmin.rst
+++ b/docs/sysadmin.rst
@@ -16,7 +16,7 @@ repository to roll out all Proxmox related packages. This includes
 updates to some Debian packages when necessary.
 
 We also deliver a specially optimized Linux kernel, based on the Ubuntu
-kernel. That kernel includes drivers for ZFS_.
+kernel. This kernel includes drivers for ZFS_.
 
 The following sections will concentrate on backup related topics. They
 will explain things which are different on `Proxmox Backup`_, or
diff --git a/docs/system-booting.rst b/docs/system-booting.rst
index f8fa0e14..caf46303 100644
--- a/docs/system-booting.rst
+++ b/docs/system-booting.rst
@@ -4,7 +4,7 @@
 Host Bootloader
 ---------------
 
-`Proxmox Backup`_ currently uses one of two bootloaders depending on the disk setup
+`Proxmox Backup`_ currently uses one of two bootloaders, depending on the disk setup
 selected in the installer.
 
 For EFI Systems installed with ZFS as the root filesystem ``systemd-boot`` is
@@ -22,54 +22,54 @@ installation.
 
 The created partitions are:
 
-* a 1 MB BIOS Boot Partition (gdisk type EF02)
+* A 1 MB BIOS Boot Partition (gdisk type EF02)
 
-* a 512 MB EFI System Partition (ESP, gdisk type EF00)
+* A 512 MB EFI System Partition (ESP, gdisk type EF00)
 
-* a third partition spanning the set ``hdsize`` parameter or the remaining space
-  used for the chosen storage type
+* A third partition spanning the configured ``hdsize`` parameter or the
+  remaining space available for the chosen storage type
 
-Systems using ZFS as root filesystem are booted with a kernel and initrd image
+Systems using ZFS as a root filesystem are booted with a kernel and initrd image
 stored on the 512 MB EFI System Partition. For legacy BIOS systems, ``grub`` is
 used, for EFI systems ``systemd-boot`` is used. Both are installed and configured
 to point to the ESPs.
 
 ``grub`` in BIOS mode (``--target i386-pc``) is installed onto the BIOS Boot
-Partition of all selected disks on all systems booted with ``grub`` (These are
-all installs with root on ``ext4`` or ``xfs`` and installs with root on ZFS on
+Partition of all selected disks on all systems booted with ``grub`` (that is,
+all installs with root on ``ext4`` or ``xfs``, and installs with root on ZFS on
 non-EFI systems).
 
 
 .. _systembooting-proxmox-boot-tool:
 
-Synchronizing the content of the ESP with ``proxmox-boot-tool``
+Synchronizing the Content of the ESP with ``proxmox-boot-tool``
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 ``proxmox-boot-tool`` is a utility used to keep the contents of the EFI System
 Partitions properly configured and synchronized. It copies certain kernel
 versions to all ESPs and configures the respective bootloader to boot from
-the ``vfat`` formatted ESPs. In the context of ZFS as root filesystem this means
-that you can use all optional features on your root pool instead of the subset
+the ``vfat`` formatted ESPs. In the context of ZFS as root filesystem, this means
+that you can use all the optional features on your root pool, instead of the subset
 which is also present in the ZFS implementation in ``grub`` or having to create a
-separate small boot-pool (see: `Booting ZFS on root with grub
+small, separate boot-pool (see: `Booting ZFS on root with grub
 <https://github.com/zfsonlinux/zfs/wiki/Debian-Stretch-Root-on-ZFS>`_).
 
-In setups with redundancy all disks are partitioned with an ESP, by the
-installer. This ensures the system boots even if the first boot device fails
+In setups with redundancy, all disks are partitioned with an ESP by the
+installer. This ensures the system boots, even if the first boot device fails
 or if the BIOS can only boot from a particular disk.
 
 The ESPs are not kept mounted during regular operation. This helps to prevent
-filesystem corruption to the ``vfat`` formatted ESPs in case of a system crash,
+filesystem corruption in the ``vfat`` formatted ESPs in case of a system crash,
 and removes the need to manually adapt ``/etc/fstab`` in case the primary boot
 device fails.
 
 ``proxmox-boot-tool`` handles the following tasks:
 
-* formatting and setting up a new partition
-* copying and configuring new kernel images and initrd images to all listed ESPs
-* synchronizing the configuration on kernel upgrades and other maintenance tasks
-* managing the list of kernel versions which are synchronized
-* configuring the boot-loader to boot a particular kernel version (pinning)
+* Formatting and setting up a new partition
+* Copying and configuring new kernel images and initrd images to all listed ESPs
+* Synchronizing the configuration on kernel upgrades and other maintenance tasks
+* Managing the list of kernel versions which are synchronized
+* Configuring the boot-loader to boot a particular kernel version (pinning)
 
 
 You can view the currently configured ESPs and their state by running:
@@ -80,13 +80,13 @@ You can view the currently configured ESPs and their state by running:
 
 .. _systembooting-proxmox-boot-setup:
 
-Setting up a new partition for use as synced ESP
+Setting up a New Partition for use as Synced ESP
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-To format and initialize a partition as synced ESP, e.g., after replacing a
+To format and initialize a partition as synced ESP, for example, after replacing a
 failed vdev in an rpool, ``proxmox-boot-tool`` from ``pve-kernel-helper`` can be used.
 
-WARNING: the ``format`` command will format the ``<partition>``, make sure to pass
+WARNING: the ``format`` command will format the ``<partition>``. Make sure to pass
 in the right device/partition!
 
 For example, to format an empty partition ``/dev/sda2`` as ESP, run the following:
@@ -102,48 +102,48 @@ To setup an existing, unmounted ESP located on ``/dev/sda2`` for inclusion in
 
   # proxmox-boot-tool init /dev/sda2
 
-Afterwards `/etc/kernel/proxmox-boot-uuids`` should contain a new line with the
+Following this, ``/etc/kernel/proxmox-boot-uuids`` should contain a new line with the
 UUID of the newly added partition. The ``init`` command will also automatically
 trigger a refresh of all configured ESPs.
 
 .. _systembooting-proxmox-boot-refresh:
 
-Updating the configuration on all ESPs
+Updating the Configuration on all ESPs
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 To copy and configure all bootable kernels and keep all ESPs listed in
-``/etc/kernel/proxmox-boot-uuids`` in sync you just need to run:
+``/etc/kernel/proxmox-boot-uuids`` in sync, you just need to run:
 
 .. code-block:: console
 
   # proxmox-boot-tool refresh
 
-(The equivalent to running ``update-grub`` systems with ``ext4`` or ``xfs`` on root).
+(Equivalent to running ``update-grub`` on systems with ``ext4`` or ``xfs`` on root).
 
-This is necessary should you make changes to the kernel commandline, or want to
-sync all kernels and initrds.
+This is necessary after making changes to the kernel commandline, or if you want
+to sync all kernels and initrds.
 
 .. NOTE:: Both ``update-initramfs`` and ``apt`` (when necessary) will automatically
    trigger a refresh.
 
-Kernel Versions considered by ``proxmox-boot-tool``
+Kernel Versions Considered by ``proxmox-boot-tool``
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 The following kernel versions are configured by default:
 
-* the currently running kernel
-* the version being newly installed on package updates
-* the two latest already installed kernels
-* the latest version of the second-to-last kernel series (e.g. 5.0, 5.3), if applicable
-* any manually selected kernels
+* The currently running kernel
+* The version being newly installed on package updates
+* The two latest, already installed kernels
+* The latest version of the second-to-last kernel series (e.g. 5.0, 5.3), if applicable
+* Any manually selected kernels
 
-Manually keeping a kernel bootable
+Manually Keeping a Kernel Bootable
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 Should you wish to add a certain kernel and initrd image to the list of
-bootable kernels use ``proxmox-boot-tool kernel add``.
+bootable kernels, use ``proxmox-boot-tool kernel add``.
 
-For example run the following to add the kernel with ABI version ``5.0.15-1-pve``
+For example, run the following to add the kernel with ABI version ``5.0.15-1-pve``
 to the list of kernels to keep installed and synced to all ESPs:
 
 .. code-block:: console
@@ -199,7 +199,7 @@ You will either see the blue box of ``grub`` or the simple black on white
   :alt: systemd-boot screen
 
 Determining the bootloader from a running system might not be 100% accurate. The
-safest way is to run the following command:
+most reliable way is to run the following command:
 
 
 .. code-block:: console
@@ -225,32 +225,30 @@ If the output contains a line similar to the following, ``systemd-boot`` is used
   Boot0006* Linux Boot Manager	[...] File(\EFI\systemd\systemd-bootx64.efi)
 
 
-By running:
+By running the following command, you can find out if ``proxmox-boot-tool`` is
+configured, which is a good indication of how the system is booted:
 
 .. code-block:: console
 
   # proxmox-boot-tool status
 
 
-you can find out if ``proxmox-boot-tool`` is configured, which is a good
-indication of how the system is booted.
-
-
 .. _systembooting-grub:
 
 Grub
 ~~~~
 
-``grub`` has been the de-facto standard for booting Linux systems for many years
+``grub`` has been the de facto standard for booting Linux systems for many years
 and is quite well documented
 (see the `Grub Manual
 <https://www.gnu.org/software/grub/manual/grub/grub.html>`_).
 
 Configuration
 ^^^^^^^^^^^^^
+
 Changes to the ``grub`` configuration are done via the defaults file
-``/etc/default/grub`` or config snippets in ``/etc/default/grub.d``. To regenerate
-the configuration file after a change to the configuration run:
+``/etc/default/grub`` or via config snippets in ``/etc/default/grub.d``. To
+regenerate the configuration file after a change to the configuration, run:
 
 .. code-block:: console
 
@@ -268,7 +266,7 @@ Systemd-boot
 images directly from the EFI Service Partition (ESP) where it is installed.
 The main advantage of directly loading the kernel from the ESP is that it does
 not need to reimplement the drivers for accessing the storage. In `Proxmox
-Backup`_ :ref:`proxmox-boot-tool <systembooting-proxmox-boot-tool>` is used to
+Backup`_, :ref:`proxmox-boot-tool <systembooting-proxmox-boot-tool>` is used to
 keep the configuration on the ESPs synchronized.
 
 .. _systembooting-systemd-boot-config:
@@ -280,7 +278,7 @@ Configuration
 directory of an EFI System Partition (ESP). See the ``loader.conf(5)`` manpage
 for details.
 
-Each bootloader entry is placed in a file of its own in the directory
+Each bootloader entry is placed in a file of its own, in the directory
 ``loader/entries/``
 
 An example entry.conf looks like this (``/`` refers to the root of the ESP):
@@ -310,7 +308,7 @@ The kernel commandline needs to be placed in the variable
 ``update-grub`` appends its content to all ``linux`` entries in
 ``/boot/grub/grub.cfg``.
 
-Systemd-boot
+systemd-boot
 ^^^^^^^^^^^^
 
 The kernel commandline needs to be placed as one line in ``/etc/kernel/cmdline``.
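+
+For example, on a ZFS installation the file could contain a single line similar
+to the following (the exact dataset name depends on the installation):
+
+.. code-block:: console
+
+  root=ZFS=rpool/ROOT/pbs-1 boot=zfs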
@@ -325,18 +323,18 @@ Override the Kernel-Version for next Boot
 
 To select a kernel that is not currently the default kernel, you can either:
 
-* use the boot loader menu that is displayed at the beginning of the boot
+* Use the boot loader menu that is displayed at the beginning of the boot
   process
-* use the ``proxmox-boot-tool`` to ``pin`` the system to a kernel version either
+* Use the ``proxmox-boot-tool`` to ``pin`` the system to a kernel version either
   once or permanently (until pin is reset).
 
 This should help you work around incompatibilities between a newer kernel
 version and the hardware.
 
-.. NOTE:: Such a pin should be removed as soon as possible so that all current
-   security patches of the latest kernel are also applied to the system.
+.. NOTE:: Such a pin should be removed as soon as possible, so that all recent
+   security patches from the latest kernel are also applied to the system.
 
-For example: To permanently select the version ``5.15.30-1-pve`` for booting you
+For example, to permanently select the version ``5.15.30-1-pve`` for booting, you
 would run:
 
 .. code-block:: console
@@ -346,11 +344,11 @@ would run:
 
 .. TIP:: The pinning functionality works for all `Proxmox Backup`_ systems, not only those using
    ``proxmox-boot-tool`` to synchronize the contents of the ESPs, if your system
-   does not use ``proxmox-boot-tool`` for synchronizing you can also skip the
+   does not use ``proxmox-boot-tool`` for synchronizing, you can also skip the
    ``proxmox-boot-tool refresh`` call in the end.
 
 You can also set a kernel version to be booted on the next system boot only.
-This is for example useful to test if an updated kernel has resolved an issue,
+This is useful, for example, to test if an updated kernel has resolved an issue,
 which caused you to ``pin`` a version in the first place:
 
 .. code-block:: console
@@ -358,7 +356,7 @@ which caused you to ``pin`` a version in the first place:
   # proxmox-boot-tool kernel pin 5.15.30-1-pve --next-boot
 
 
-To remove any pinned version configuration use the ``unpin`` subcommand:
+To remove any pinned version configuration, use the ``unpin`` subcommand:
 
 .. code-block:: console
 
@@ -366,9 +364,9 @@ To remove any pinned version configuration use the ``unpin`` subcommand:
 
 While ``unpin`` has a ``--next-boot`` option as well, it is used to clear a pinned
 version set with ``--next-boot``. As that happens already automatically on boot,
-invonking it manually is of little use.
+invoking it manually is of little use.
 
-After setting, or clearing pinned versions you also need to synchronize the
+After setting or clearing pinned versions, you also need to synchronize the
 content and configuration on the ESPs by running the ``refresh`` subcommand.
 
 .. TIP:: You will be prompted to automatically do for  ``proxmox-boot-tool`` managed
diff --git a/docs/tape-backup.rst b/docs/tape-backup.rst
index 8888dacb..194ec345 100644
--- a/docs/tape-backup.rst
+++ b/docs/tape-backup.rst
@@ -521,7 +521,7 @@ a single media pool, so a job only uses tapes from that pool.
 
      .. NOTE:: Retention period starts on the creation time of the next
         media-set or, if that does not exist, when the calendar event
-        triggers the next time after the current media-set start time.
+        next triggers after the current media-set start time.
 
    Additionally, the following events may allocate a new media set:
 
@@ -809,13 +809,13 @@ The following options are available:
 
 --ns  The namespace to backup.
 
-  If you only want to backup a specific namespace. If omitted, the root
-  namespaces is assumed.
+  Used if you only want to backup a specific namespace. If omitted, the root
+  namespace is assumed.
 
 --max-depth  The depth to recurse namespaces.
 
   ``0`` means no recursion at all (only the given namespace). If omitted,
-  all namespaces are recursed (below the the given one).
+  all namespaces are recursed (below the given one).
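+
+For example, a tape backup of only the namespace ``finance`` (and one level of
+child namespaces) on the datastore ``store1``, using the media pool ``mypool``,
+could be started with something along these lines (all names are placeholders):
+
+.. code-block:: console
+
+  # proxmox-tape backup store1 mypool --ns finance --max-depth 1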
 
 
 Restore from Tape
@@ -854,8 +854,8 @@ data disk (datastore):
 Single Snapshot Restore
 ^^^^^^^^^^^^^^^^^^^^^^^
 
-Sometimes it is not necessary to restore a whole media-set, but only some
-specific snapshots from the tape. This can be achieved  with the ``snapshots``
+Sometimes it is not necessary to restore an entire media-set, but only some
+specific snapshots from the tape. This can be achieved with the ``snapshots``
 parameter:
 
 
@@ -868,7 +868,7 @@ parameter:
 This first restores the snapshot to a temporary location, then restores the relevant
 chunk archives, and finally restores the snapshot data to the target datastore.
 
-The ``snapshot`` parameter can be given multiple times, so one can restore
+The ``snapshot`` parameter can be passed multiple times, in order to restore
 multiple snapshots with one restore action.
 
 .. NOTE:: When using the single snapshot restore, the tape must be traversed
@@ -880,7 +880,7 @@ Namespaces
 
 It is also possible to select and map specific namespaces from a media-set
 during a restore. This is possible with the ``namespaces`` parameter.
-The format of the parameter is
+The format for the parameter is:
 
 .. code-block:: console
 
@@ -1043,7 +1043,7 @@ This command does the following:
 Example Setups
 --------------
 
-Here are a few example setups for how to manage media pools and schedules.
+Here are a few example setups for managing media pools and schedules.
 This is not an exhaustive list, and there are many more possible combinations
 of useful settings.
 
@@ -1058,14 +1058,14 @@ Allocation policy:
 Retention policy:
   keep
 
-This setup has the advantage of being easy to manage and is re-using the benefits
-from deduplication as much as possible. But, it's also prone to a failure of
-any single tape, which would render all backups referring to chunks from that
-tape unusable.
+This setup has the advantage of being easy to manage and reuses the benefits
+from deduplication as much as possible. But, it also provides no redundancy,
+meaning a failure of any single tape would render all backups referring to
+chunks from that tape unusable.
 
 If you want to start a new media-set manually, you can set the currently
 writable media of the set either to 'full', or set the location to an
-offsite vault.
+off-site vault.
 
 Weekday Scheme
 ~~~~~~~~~~~~~~
@@ -1081,14 +1081,14 @@ Allocation policy:
 Retention policy:
   overwrite
 
-There should be a (or more) tape-backup jobs for each pool on the corresponding
+There should be one or more tape-backup jobs for each pool on the corresponding
 weekday. This scheme is still very manageable with one media set per weekday,
-and could be easily moved off-site.
+and could be moved off-site easily.
 
 Multiple Pools with Different Policies
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Complex setups are also possible with multiple media pools configured with
+Complex setups are also possible, with multiple media pools configured with
 different allocation and retention policies.
 
 An example would be to have two media pools. The first configured with weekly
@@ -1100,7 +1100,7 @@ Allocation policy:
 Retention policy:
   3 weeks
 
-The second pool configured yearly allocation that does not expire:
+The second pool configured with yearly allocation that does not expire:
 
 Allocation policy:
   yearly
@@ -1108,7 +1108,7 @@ Allocation policy:
 Retention policy:
   keep
 
-In combination with suited prune settings and tape backup schedules, this
-achieves long-term storage of some backups, while keeping the current
-backups on smaller media sets that get expired every three plus the current
-week (~ 4 weeks).
+In combination with fitting prune settings and tape backup schedules, this
+achieves long-term storage of some backups, while keeping the recent
+backups on smaller media sets that expire roughly every 4 weeks (that is, three
+plus the current week).
diff --git a/docs/terminology.rst b/docs/terminology.rst
index 4bc810e2..497357a5 100644
--- a/docs/terminology.rst
+++ b/docs/terminology.rst
@@ -65,11 +65,11 @@ Backup Namespace
 ----------------
 
 Namespaces allow for the reuse of a single chunk store deduplication domain for
-multiple sources, while avoiding naming conflicts and getting more fine-grained
+multiple sources, while avoiding naming conflicts and enabling more fine-grained
 access control.
 
-Essentially they're implemented as simple directory structure and need no
-separate configuration.
+Essentially, they're implemented as a simple directory structure and don't
+require separate configuration.
 
 Backup Type
 -----------
diff --git a/docs/traffic-control.rst b/docs/traffic-control.rst
index b8567eea..ad9b0161 100644
--- a/docs/traffic-control.rst
+++ b/docs/traffic-control.rst
@@ -7,21 +7,21 @@ Traffic Control
   :align: right
   :alt: Add a traffic control limit
 
-Creating and restoring backups can produce lots of traffic and impact other
-users of the network or shared storages.
+Creating and restoring backups can produce a lot of traffic and can impact
+shared storage and other users on the network.
 
-Proxmox Backup Server allows to limit network traffic for clients within
+With Proxmox Backup Server, you can constrain network traffic for clients within
 specified networks using a token bucket filter (TBF).
 
-This allows you to avoid network congestion or to prioritize traffic from
+This allows you to avoid network congestion and prioritize traffic from
 certain hosts.
 
-You can manage the traffic controls either over the web-interface or using the
-``traffic-control`` commandos of the ``proxmox-backup-manager`` command-line
+You can manage the traffic controls either via the web-interface or using the
+``traffic-control`` commands of the ``proxmox-backup-manager`` command-line
 tool.
 
-.. note:: Sync jobs on the server are not affected by its rate-in limits. If
-   you want to limit the incoming traffic that a pull-based sync job
+.. note:: Sync jobs on the server are not affected by the configured rate-in limits.
+   If you want to limit the incoming traffic that a pull-based sync job
    generates, you need to setup a job-specific rate-in limit. See
    :ref:`syncjobs`.
 
@@ -34,11 +34,11 @@ The following command adds a traffic control rule to limit all IPv4 clients
    --rate-in 100MB --rate-out 100MB \
    --comment "Default rate limit (100MB/s) for all clients"
 
-.. note:: To limit both IPv4 and IPv6 network spaces you need to pass two
+.. note:: To limit both IPv4 and IPv6 network spaces, you need to pass two
    network parameters ``::/0`` and ``0.0.0.0/0``.
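+
+For example, a single rule covering both network families could be created
+along these lines (the rule name and rate values are just an illustration):
+
+.. code-block:: console
+
+  # proxmox-backup-manager traffic-control create limit-all \
+     --network 0.0.0.0/0 --network ::/0 \
+     --rate-in 100MB --rate-out 100MB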
 
 It is possible to restrict rules to certain time frames, for example the
-company office hours:
+company's office hours:
 
 .. tip:: You can use SI (base 10: KB, MB, ...) or IEC (base 2: KiB, MiB, ...)
    units.
@@ -49,9 +49,9 @@ company office hours:
    --timeframe "mon..fri 8-12" \
    --timeframe "mon..fri 14:30-18"
 
-If there are more rules, the server uses the rule with the smaller network. For
-example, we can overwrite the setting for our private network (and the server
-itself) with:
+If there are multiple rules, the server chooses the one with the smaller
+network. For example, we can overwrite the setting for our private network (and
+the server itself) with:
 
 .. code-block:: console
 
@@ -63,11 +63,11 @@ itself) with:
 
 .. note:: The behavior is undefined if there are several rules for the same network.
 
-If there are multiple rules that match the same network all of them will be
+If there are multiple rules which match a specific network, they will all be
 applied, which means that the smallest one wins, as it's bucket fills up the
 fastest.
 
-To list the current rules use:
+To list the current rules, use:
 
 .. code-block:: console
 
diff --git a/docs/user-management.rst b/docs/user-management.rst
index 97b410c4..ada22053 100644
--- a/docs/user-management.rst
+++ b/docs/user-management.rst
@@ -159,7 +159,7 @@ Access Control
 By default, new users and API tokens do not have any permissions. Instead you
 need to specify what is allowed and what is not.
 
-Proxmox Backup Server uses a role and path based permission management system.
+Proxmox Backup Server uses a role- and path-based permission management system.
 An entry in the permissions table allows a user, group or token to take on a
 specific role when accessing an 'object' or 'path'. This means that such an
 access rule can be represented as a triple of '(path, user, role)', '(path,
@@ -169,92 +169,92 @@ allowed actions, and the path representing the target of these actions.
 Privileges
 ~~~~~~~~~~
 
-Privileges are the atoms that access roles are made off. They are internally
+Privileges are the building blocks of access roles. They are internally
 used to enforce the actual permission checks in the API.
 
 We currently support the following privileges:
 
 **Sys.Audit**
-  Sys.Audit allows one to know about the system and its status.
+  Sys.Audit allows a user to know about the system and its status.
 
 **Sys.Modify**
-  Sys.Modify allows one to modify system-level configuration and apply updates.
+  Sys.Modify allows a user to modify system-level configuration and apply updates.
 
 **Sys.PowerManagement**
-  Sys.Modify allows one to to poweroff or reboot the system.
+  Sys.PowerManagement allows a user to power off and reboot the system.
 
 **Datastore.Audit**
-  Datastore.Audit allows one to know about a datastore, including reading the
+  Datastore.Audit allows a user to know about a datastore, including reading the
   configuration entry and listing its contents.
 
 **Datastore.Allocate**
-  Datastore.Allocate allows one to create or deleting datastores.
+  Datastore.Allocate allows a user to create or delete datastores.
 
 **Datastore.Modify**
-  Datastore.Modify allows one to modify a datastore and its contents, and to
+  Datastore.Modify allows a user to modify a datastore and its contents, and to
   create or delete namespaces inside a datastore.
 
 **Datastore.Read**
-  Datastore.Read allows one to read arbitrary backup contents, independent of
+  Datastore.Read allows a user to read arbitrary backup contents, independent of
   the backup group owner.
 
 **Datastore.Verify**
   Allows verifying the backup snapshots in a datastore.
 
 **Datastore.Backup**
-  Datastore.Backup allows one create new backup snapshot and gives one also the
+  Datastore.Backup allows a user to create new backup snapshots and also provides the
   privileges of Datastore.Read and Datastore.Verify, but only if the backup
   group is owned by the user or one of its tokens.
 
 **Datastore.Prune**
-  Datastore.Prune allows one to delete snapshots, but additionally requires
-  backup ownership
+  Datastore.Prune allows a user to delete snapshots, but additionally requires
+  backup ownership.
 
 **Permissions.Modify**
-  Permissions.Modify allows one to modifying ACLs
+  Permissions.Modify allows a user to modify ACLs.
 
-  .. note:: One can always configure privileges for their own API tokens, as
-    they will clamped by the users privileges anyway.
+  .. note:: A user can always configure privileges for their own API tokens, as
+    they will be limited by the user's privileges anyway.
 
 **Remote.Audit**
-  Remote.Audit allows one to read the remote and the sync configuration entries
+  Remote.Audit allows a user to read the remote and the sync configuration entries.
 
 **Remote.Modify**
-  Remote.Modify allows one to modify the remote configuration
+  Remote.Modify allows a user to modify the remote configuration.
 
 **Remote.Read**
-  Remote.Read allows one to read data from a configured `Remote`
+  Remote.Read allows a user to read data from a configured `Remote`.
 
 **Sys.Console**
-  Sys.Console allows one to access to the system's console, note that for all
+  Sys.Console allows a user to access the system's console. Note that for all
   but `root at pam` a valid system login is still required.
 
 **Tape.Audit**
-  Tape.Audit allows one to read the configuration and status of tape drives,
-  changers and backups
+  Tape.Audit allows a user to read the configuration and status of tape drives,
+  changers and backups.
 
 **Tape.Modify**
-  Tape.Modify allows one to modify the configuration of tape drives, changers
-  and backups
+  Tape.Modify allows a user to modify the configuration of tape drives, changers
+  and backups.
 
 **Tape.Write**
-  Tape.Write allows one to write to a tape media
+  Tape.Write allows a user to write to tape media.
 
 **Tape.Read**
-  Tape.Read allows one to read tape backup configuration and contents from a
-  tape media
+  Tape.Read allows a user to read tape backup configuration and contents from
+  tape media.
 
 **Realm.Allocate**
-  Realm.Allocate allows one to view, create, modify and delete authentication
-  realms for users
+  Realm.Allocate allows a user to view, create, modify and delete authentication
+  realms for users.
 
 Access Roles
 ~~~~~~~~~~~~
 
 An access role combines one or more privileges into something that can be
-assigned to an user or API token on an object path.
+assigned to a user or API token on an object path.
 
-Currently there are only built-in roles, that means, you cannot create your
+Currently, there are only built-in roles, meaning you cannot create your
 own, custom role.
 
 The following roles exist:
@@ -277,7 +277,7 @@ The following roles exist:
   read the actual data.
 
 **DatastoreReader**
-  Can inspect a datastore's or namespaces content and do restores.
+  Can inspect a datastore's or namespace's content and do restores.
 
 **DatastoreBackup**
   Can backup and restore owned backups.
@@ -295,31 +295,31 @@ The following roles exist:
   Is allowed to read data from a remote.
 
 **TapeAdmin**
-  Can do anything related to tape backup
+  Can do anything related to tape backup.
 
 **TapeAudit**
-  Can view tape related metrics, configuration and status
+  Can view tape-related metrics, configuration and status.
 
 **TapeOperator**
-  Can do tape backup and restore, but cannot change any configuration
+  Can do tape backup and restore, but cannot change any configuration.
 
 **TapeReader**
-  Can read and inspect tape configuration and media content
+  Can read and inspect tape configuration and media content.
 
 Objects and Paths
 ~~~~~~~~~~~~~~~~~
 
-Access permissions are assigned to objects, such as a datastore, a namespace or
+Access permissions are assigned to objects, such as a datastore, namespace or
 some system resources.
 
-We use file system like paths to address these objects. These paths form a
+We use filesystem-like paths to address these objects. These paths form a
 natural tree, and permissions of higher levels (shorter paths) can optionally
 be propagated down within this hierarchy.
 
-Paths can be templated, that means they can refer to the actual id of an
-configuration entry.  When an API call requires permissions on a templated
+Paths can be templated, meaning they can refer to the actual id of a
+configuration entry. When an API call requires permissions on a templated
 path, the path may contain references to parameters of the API call. These
-references are specified in curly braces.
+references are specified in curly brackets.
 
 Some examples are:
 
@@ -329,7 +329,7 @@ Some examples are:
 * `/datastore/{store}/{ns}`: Access to a specific namespace on a specific
   datastore
 * `/remote`: Access to all remote entries
-* `/system/network`: Access to configuring the host network
+* `/system/network`: Access to configure the host network
 * `/tape/`: Access to tape devices, pools and jobs
 * `/access/users`: User administration
 * `/access/openid/{id}`: Administrative access to a specific OpenID Connect realm
@@ -341,7 +341,7 @@ As mentioned earlier, object paths form a file system like tree, and
 permissions can be inherited by objects down that tree through the propagate
 flag, which is set by default. We use the following inheritance rules:
 
-* Permissions for API tokens are always clamped to the one of the user.
+* Permissions for API tokens are always limited to those of the user.
 * Permissions on deeper, more specific levels replace those inherited from an
   upper level.
 
-- 
2.30.2





