[pve-devel] [PATCH docs 1/9] formatting cleanup

Fabian Grünbichler f.gruenbichler at proxmox.com
Tue Sep 27 10:58:50 CEST 2016


reformat most existing 'text' as either `text` or ``text''
reformat most existing "text" as either ``text'' or `text`
reformat 'x' as `x`
harmonize bullet list syntax to use '*' instead of '-'
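
For example, the conversions applied in the hunks below include:

    'ha-manager'   ->  `ha-manager`    monospace for commands, paths, option names and values
    "server"       ->  ``server''      typographic quotes for plain quoted words
    - list item    ->  * list item     top-level bullets (** for nested items)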
---
 datacenter.cfg.adoc          |   6 +--
 ha-manager.adoc              |  84 +++++++++++++++----------------
 index.adoc                   |   6 +--
 local-lvm.adoc               |   4 +-
 local-zfs.adoc               |  53 ++++++++++----------
 pct.adoc                     | 116 ++++++++++++++++++++++---------------------
 pct.conf.adoc                |  10 ++--
 pmxcfs.adoc                  |  72 +++++++++++++--------------
 pve-admin-guide.adoc         |   2 +-
 pve-faq.adoc                 |   8 +--
 pve-firewall.adoc            |  73 ++++++++++++++-------------
 pve-installation.adoc        |   8 +--
 pve-intro.adoc               |   7 ++-
 pve-network.adoc             |  22 ++++----
 pve-package-repos.adoc       |  44 ++++++++--------
 pve-storage-dir.adoc         |  18 +++----
 pve-storage-glusterfs.adoc   |   4 +-
 pve-storage-iscsi.adoc       |   6 +--
 pve-storage-iscsidirect.adoc |   4 +-
 pve-storage-lvm.adoc         |   4 +-
 pve-storage-lvmthin.adoc     |   4 +-
 pve-storage-nfs.adoc         |   6 +--
 pve-storage-rbd.adoc         |   8 +--
 pve-storage-zfspool.adoc     |   2 +-
 pveam.adoc                   |   2 +-
 pvecm.adoc                   |  37 +++++++-------
 pvedaemon.adoc               |   6 +--
 pveproxy.adoc                |  31 ++++++------
 pvesm.adoc                   |  26 +++++-----
 pveum.adoc                   |  41 +++++++--------
 qm.adoc                      |   6 +--
 qm.conf.adoc                 |   6 +--
 qmrestore.adoc               |   8 +--
 spiceproxy.adoc              |   6 +--
 sysadmin.adoc                |   2 +-
 system-software-updates.adoc |   4 +-
 vzdump.adoc                  |  16 +++---
 37 files changed, 386 insertions(+), 376 deletions(-)

diff --git a/datacenter.cfg.adoc b/datacenter.cfg.adoc
index 8028376..624472d 100644
--- a/datacenter.cfg.adoc
+++ b/datacenter.cfg.adoc
@@ -12,7 +12,7 @@ datacenter.cfg - Proxmox VE Datacenter Configuration
 SYNOPSYS
 --------
 
-'/etc/pve/datacenter.cfg'
+`/etc/pve/datacenter.cfg`
 
 
 DESCRIPTION
@@ -25,7 +25,7 @@ Datacenter Configuration
 include::attributes.txt[]
 endif::manvolnum[]
 
-The file '/etc/pve/datacenter.cfg' is a configuration file for
+The file `/etc/pve/datacenter.cfg` is a configuration file for
 {pve}. It contains cluster wide default values used by all nodes.
 
 File Format
@@ -36,7 +36,7 @@ the following format:
 
  OPTION: value
 
-Blank lines in the file are ignored, and lines starting with a '#'
+Blank lines in the file are ignored, and lines starting with a `#`
 character are treated as comments and are also ignored.
 
 
diff --git a/ha-manager.adoc b/ha-manager.adoc
index 5db5b05..eadf60e 100644
--- a/ha-manager.adoc
+++ b/ha-manager.adoc
@@ -57,43 +57,41 @@ sometimes impossible because you cannot modify the software
 yourself. The following solutions works without modifying the
 software:
 
-* Use reliable "server" components
+* Use reliable ``server'' components
 
 NOTE: Computer components with same functionality can have varying
 reliability numbers, depending on the component quality. Most vendors
-sell components with higher reliability as "server" components -
+sell components with higher reliability as ``server'' components -
 usually at higher price.
 
 * Eliminate single point of failure (redundant components)
-
- - use an uninterruptible power supply (UPS)
- - use redundant power supplies on the main boards
- - use ECC-RAM
- - use redundant network hardware
- - use RAID for local storage
- - use distributed, redundant storage for VM data
+** use an uninterruptible power supply (UPS)
+** use redundant power supplies on the main boards
+** use ECC-RAM
+** use redundant network hardware
+** use RAID for local storage
+** use distributed, redundant storage for VM data
 
 * Reduce downtime
-
- - rapidly accessible administrators (24/7)
- - availability of spare parts (other nodes in a {pve} cluster)
- - automatic error detection ('ha-manager')
- - automatic failover ('ha-manager')
+** rapidly accessible administrators (24/7)
+** availability of spare parts (other nodes in a {pve} cluster)
+** automatic error detection (provided by `ha-manager`)
+** automatic failover (provided by `ha-manager`)
 
 Virtualization environments like {pve} make it much easier to reach
-high availability because they remove the "hardware" dependency. They
+high availability because they remove the ``hardware'' dependency. They
 also support to setup and use redundant storage and network
 devices. So if one host fail, you can simply start those services on
 another host within your cluster.
 
-Even better, {pve} provides a software stack called 'ha-manager',
+Even better, {pve} provides a software stack called `ha-manager`,
 which can do that automatically for you. It is able to automatically
 detect errors and do automatic failover.
 
-{pve} 'ha-manager' works like an "automated" administrator. First, you
+{pve} `ha-manager` works like an ``automated'' administrator. First, you
 configure what resources (VMs, containers, ...) it should
-manage. 'ha-manager' then observes correct functionality, and handles
-service failover to another node in case of errors. 'ha-manager' can
+manage. `ha-manager` then observes correct functionality, and handles
+service failover to another node in case of errors. `ha-manager` can
 also handle normal user requests which may start, stop, relocate and
 migrate a service.
 
@@ -105,7 +103,7 @@ costs.
 
 TIP: Increasing availability from 99% to 99.9% is relatively
 simply. But increasing availability from 99.9999% to 99.99999% is very
-hard and costly. 'ha-manager' has typical error detection and failover
+hard and costly. `ha-manager` has typical error detection and failover
 times of about 2 minutes, so you can get no more than 99.999%
 availability.
 
@@ -119,7 +117,7 @@ Requirements
 * hardware redundancy (everywhere)
 
 * hardware watchdog - if not available we fall back to the
-  linux kernel software watchdog ('softdog')
+  linux kernel software watchdog (`softdog`)
 
 * optional hardware fencing devices
 
@@ -127,16 +125,16 @@ Requirements
 Resources
 ---------
 
-We call the primary management unit handled by 'ha-manager' a
-resource. A resource (also called "service") is uniquely
+We call the primary management unit handled by `ha-manager` a
+resource. A resource (also called ``service'') is uniquely
 identified by a service ID (SID), which consists of the resource type
-and an type specific ID, e.g.: 'vm:100'. That example would be a
-resource of type 'vm' (virtual machine) with the ID 100.
+and a type specific ID, e.g.: `vm:100`. That example would be a
+resource of type `vm` (virtual machine) with the ID 100.
 
 For now we have two important resources types - virtual machines and
 containers. One basic idea here is that we can bundle related software
 into such VM or container, so there is no need to compose one big
-service from other services, like it was done with 'rgmanager'. In
+service from other services, like it was done with `rgmanager`. In
 general, a HA enabled resource should not depend on other resources.
 
 
@@ -148,14 +146,14 @@ internals. It describes how the CRM and the LRM work together.
 
 To provide High Availability two daemons run on each node:
 
-'pve-ha-lrm'::
+`pve-ha-lrm`::
 
 The local resource manager (LRM), it controls the services running on
 the local node.
 It reads the requested states for its services from the current manager
 status file and executes the respective commands.
 
-'pve-ha-crm'::
+`pve-ha-crm`::
 
 The cluster resource manager (CRM), it controls the cluster wide
 actions of the services, processes the LRM results and includes the state
@@ -174,7 +172,7 @@ lock.
 Local Resource Manager
 ~~~~~~~~~~~~~~~~~~~~~~
 
-The local resource manager ('pve-ha-lrm') is started as a daemon on
+The local resource manager (`pve-ha-lrm`) is started as a daemon on
 boot and waits until the HA cluster is quorate and thus cluster wide
 locks are working.
 
@@ -187,11 +185,11 @@ It can be in three states:
   and quorum was lost.
 
 After the LRM gets in the active state it reads the manager status
-file in '/etc/pve/ha/manager_status' and determines the commands it
+file in `/etc/pve/ha/manager_status` and determines the commands it
 has to execute for the services it owns.
 For each command a worker gets started, this workers are running in
 parallel and are limited to maximal 4 by default. This default setting
-may be changed through the datacenter configuration key "max_worker".
+may be changed through the datacenter configuration key `max_worker`.
 When finished the worker process gets collected and its result saved for
 the CRM.
 
@@ -201,12 +199,12 @@ The default value of 4 maximal concurrent Workers may be unsuited for
 a specific setup. For example may 4 live migrations happen at the same
 time, which can lead to network congestions with slower networks and/or
 big (memory wise) services. Ensure that also in the worst case no congestion
-happens and lower the "max_worker" value if needed. In the contrary, if you
+happens and lower the `max_worker` value if needed. Conversely, if you
 have a particularly powerful high end setup you may also want to increase it.
 
 Each command requested by the CRM is uniquely identifiable by an UID, when
 the worker finished its result will be processed and written in the LRM
-status file '/etc/pve/nodes/<nodename>/lrm_status'. There the CRM may collect
+status file `/etc/pve/nodes/<nodename>/lrm_status`. There the CRM may collect
 it and let its state machine - respective the commands output - act on it.
 
 The actions on each service between CRM and LRM are normally always synced.
@@ -214,7 +212,7 @@ This means that the CRM requests a state uniquely marked by an UID, the LRM
 then executes this action *one time* and writes back the result, also
 identifiable by the same UID. This is needed so that the LRM does not
 executes an outdated command.
-With the exception of the 'stop' and the 'error' command,
+With the exception of the `stop` and the `error` command,
 those two do not depend on the result produced and are executed
 always in the case of the stopped state and once in the case of
 the error state.
@@ -230,7 +228,7 @@ the same command for the pve-ha-crm on the node which is the current master.
 Cluster Resource Manager
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-The cluster resource manager ('pve-ha-crm') starts on each node and
+The cluster resource manager (`pve-ha-crm`) starts on each node and
 waits there for the manager lock, which can only be held by one node
 at a time.  The node which successfully acquires the manager lock gets
 promoted to the CRM master.
@@ -261,12 +259,12 @@ Configuration
 -------------
 
 The HA stack is well integrated in the Proxmox VE API2. So, for
-example, HA can be configured via 'ha-manager' or the PVE web
+example, HA can be configured via `ha-manager` or the PVE web
 interface, which both provide an easy to use tool.
 
 The resource configuration file can be located at
-'/etc/pve/ha/resources.cfg' and the group configuration file at
-'/etc/pve/ha/groups.cfg'. Use the provided tools to make changes,
+`/etc/pve/ha/resources.cfg` and the group configuration file at
+`/etc/pve/ha/groups.cfg`. Use the provided tools to make changes,
 there shouldn't be any need to edit them manually.
 
 Node Power Status
@@ -347,7 +345,7 @@ Configure Hardware Watchdog
 By default all watchdog modules are blocked for security reasons as they are
 like a loaded gun if not correctly initialized.
 If you have a hardware watchdog available remove its kernel module from the
-blacklist, load it with insmod and restart the 'watchdog-mux' service or reboot
+blacklist, load it with insmod and restart the `watchdog-mux` service or reboot
 the node.
 
 Recover Fenced Services
@@ -449,7 +447,7 @@ Service Operations
 ------------------
 
 This are how the basic user-initiated service operations (via
-'ha-manager') work.
+`ha-manager`) work.
 
 enable::
 
@@ -470,9 +468,9 @@ current state will not be touched.
 
 start/stop::
 
-start and stop commands can be issued to the resource specific tools
-(like 'qm' or 'pct'), they will forward the request to the
-'ha-manager' which then will execute the action and set the resulting
+`start` and `stop` commands can be issued to the resource specific tools
+(like `qm` or `pct`), they will forward the request to the
+`ha-manager` which then will execute the action and set the resulting
 service state (enabled, disabled).
 
 
diff --git a/index.adoc b/index.adoc
index 95c67ab..7154371 100644
--- a/index.adoc
+++ b/index.adoc
@@ -82,9 +82,9 @@ Configuration Options
 [width="100%",options="header"]
 |===========================================================
 | File name |Download link
-| '/etc/pve/datacenter.cfg'          | link:datacenter.cfg.5.html[datacenter.cfg.5]
-| '/etc/pve/qemu-server/<VMID>.conf' | link:qm.conf.5.html[qm.conf.5]
-| '/etc/pve/lxc/<CTID>.conf'         | link:pct.conf.5.html[pct.conf.5]
+| `/etc/pve/datacenter.cfg`          | link:datacenter.cfg.5.html[datacenter.cfg.5]
+| `/etc/pve/qemu-server/<VMID>.conf` | link:qm.conf.5.html[qm.conf.5]
+| `/etc/pve/lxc/<CTID>.conf`         | link:pct.conf.5.html[pct.conf.5]
 |===========================================================
 
 
diff --git a/local-lvm.adoc b/local-lvm.adoc
index cdc0ef1..c493501 100644
--- a/local-lvm.adoc
+++ b/local-lvm.adoc
@@ -6,7 +6,7 @@ Most people install {pve} directly on a local disk. The {pve}
 installation CD offers several options for local disk management, and
 the current default setup uses LVM. The installer let you select a
 single disk for such setup, and uses that disk as physical volume for
-the **V**olume **G**roup (VG) 'pve'. The following output is from a
+the **V**olume **G**roup (VG) `pve`. The following output is from a
 test installation using a small 8GB disk:
 
 ----
@@ -30,7 +30,7 @@ VG:
   swap pve  -wi-ao---- 896.00m     
 ----
 
-root:: Formatted as 'ext4', and contains the operation system.
+root:: Formatted as `ext4`, and contains the operating system.
 
 swap:: Swap partition
 
diff --git a/local-zfs.adoc b/local-zfs.adoc
index ff602ed..a20903f 100644
--- a/local-zfs.adoc
+++ b/local-zfs.adoc
@@ -64,12 +64,12 @@ increase the overall performance significantly.
 IMPORTANT: Do not use ZFS on top of hardware controller which has it's
 own cache management. ZFS needs to directly communicate with disks. An
 HBA adapter is the way to go, or something like LSI controller flashed
-in 'IT' mode.
+in ``IT'' mode.
 
 If you are experimenting with an installation of {pve} inside a VM
-(Nested Virtualization), don't use 'virtio' for disks of that VM,
+(Nested Virtualization), don't use `virtio` for disks of that VM,
 since they are not supported by ZFS. Use IDE or SCSI instead (works
-also with 'virtio' SCSI controller type).
+also with `virtio` SCSI controller type).
 
 
 Installation as root file system
@@ -80,11 +80,11 @@ root file system. You need to select the RAID type at installation
 time:
 
 [horizontal]
-RAID0:: Also called 'striping'. The capacity of such volume is the sum
-of the capacity of all disks. But RAID0 does not add any redundancy,
+RAID0:: Also called ``striping''. The capacity of such volume is the sum
+of the capacities of all disks. But RAID0 does not add any redundancy,
 so the failure of a single drive makes the volume unusable.
 
-RAID1:: Also called mirroring. Data is written identically to all
+RAID1:: Also called ``mirroring''. Data is written identically to all
 disks. This mode requires at least 2 disks with the same size. The
 resulting capacity is that of a single disk.
 
@@ -97,12 +97,12 @@ RAIDZ-2:: A variation on RAID-5, double parity. Requires at least 4 disks.
 RAIDZ-3:: A variation on RAID-5, triple parity. Requires at least 5 disks.
 
 The installer automatically partitions the disks, creates a ZFS pool
-called 'rpool', and installs the root file system on the ZFS subvolume
-'rpool/ROOT/pve-1'.
+called `rpool`, and installs the root file system on the ZFS subvolume
+`rpool/ROOT/pve-1`.
 
-Another subvolume called 'rpool/data' is created to store VM
+Another subvolume called `rpool/data` is created to store VM
 images. In order to use that with the {pve} tools, the installer
-creates the following configuration entry in '/etc/pve/storage.cfg':
+creates the following configuration entry in `/etc/pve/storage.cfg`:
 
 ----
 zfspool: local-zfs
@@ -112,7 +112,7 @@ zfspool: local-zfs
 ----
 
 After installation, you can view your ZFS pool status using the
-'zpool' command:
+`zpool` command:
 
 ----
 # zpool status
@@ -133,7 +133,7 @@ config:
 errors: No known data errors
 ----
 
-The 'zfs' command is used configure and manage your ZFS file
+The `zfs` command is used to configure and manage your ZFS file
 systems. The following command lists all file systems after
 installation:
 
@@ -167,8 +167,8 @@ ZFS Administration
 
 This section gives you some usage examples for common tasks. ZFS
 itself is really powerful and provides many options. The main commands
-to manage ZFS are 'zfs' and 'zpool'. Both commands comes with great
-manual pages, worth to read:
+to manage ZFS are `zfs` and `zpool`. Both commands come with great
+manual pages, which can be read with:
 
 ----
 # man zpool
@@ -177,8 +177,8 @@ manual pages, worth to read:
 
 .Create a new ZPool
 
-To create a new pool, at least one disk is needed. The 'ashift' should
-have the same sector-size (2 power of 'ashift') or larger as the
+To create a new pool, at least one disk is needed. The `ashift` should
+correspond to a sector size (2 to the power of `ashift`) equal to or larger than that of the
 underlying disk.
 
  zpool create -f -o ashift=12 <pool> <device>
@@ -222,7 +222,7 @@ Minimum 4 Disks
 It is possible to use a dedicated cache drive partition to increase
 the performance (use SSD).
 
-As '<device>' it is possible to use more devices, like it's shown in
+As `<device>` it is possible to use more devices, as shown in
 "Create a new pool with RAID*".
 
  zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
@@ -232,7 +232,7 @@ As '<device>' it is possible to use more devices, like it's shown in
 It is possible to use a dedicated cache drive partition to increase
 the performance(SSD).
 
-As '<device>' it is possible to use more devices, like it's shown in
+As `<device>` it is possible to use more devices, as shown in
 "Create a new pool with RAID*".
 
  zpool create -f -o ashift=12 <pool> <device> log <log_device>
@@ -240,7 +240,7 @@ As '<device>' it is possible to use more devices, like it's shown in
 .Add Cache and Log to an existing pool
 
 If you have an pool without cache and log. First partition the SSD in
-2 partition with parted or gdisk
+2 partitions with `parted` or `gdisk`
 
 IMPORTANT: Always use GPT partition tables (gdisk or parted).
 
@@ -262,14 +262,15 @@ ZFS comes with an event daemon, which monitors events generated by the
 ZFS kernel module. The daemon can also send E-Mails on ZFS event like
 pool errors.
 
-To activate the daemon it is necessary to edit /etc/zfs/zed.d/zed.rc with your favored editor, and uncomment the 'ZED_EMAIL_ADDR' setting:
+To activate the daemon it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
+favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:
 
 ZED_EMAIL_ADDR="root"
 
-Please note {pve} forwards mails to 'root' to the email address
+Please note {pve} forwards mails to `root` to the email address
 configured for the root user.
 
-IMPORTANT: the only settings that is required is ZED_EMAIL_ADDR. All
+IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
 other settings are optional.
 
 
@@ -279,7 +280,7 @@ Limit ZFS memory usage
 It is good to use maximal 50 percent (which is the default) of the
 system memory for ZFS ARC to prevent performance shortage of the
 host. Use your preferred editor to change the configuration in
-/etc/modprobe.d/zfs.conf and insert:
+`/etc/modprobe.d/zfs.conf` and insert:
 
  options zfs zfs_arc_max=8589934592
 
@@ -302,16 +303,16 @@ to an external Storage.
 
 We strongly recommend to use enough memory, so that you normally do not
 run into low memory situations. Additionally, you can lower the
-'swappiness' value. A good value for servers is 10:
+``swappiness'' value. A good value for servers is 10:
 
  sysctl -w vm.swappiness=10
 
-To make the swappiness persistence, open '/etc/sysctl.conf' with
+To make the swappiness persistent, open `/etc/sysctl.conf` with
 an editor of your choice and add the following line:
 
  vm.swappiness = 10
 
-.Linux Kernel 'swappiness' parameter values
+.Linux kernel `swappiness` parameter values
 [width="100%",cols="<m,2d",options="header"]
 |===========================================================
 | Value               | Strategy
diff --git a/pct.adoc b/pct.adoc
index 9983ba8..59969aa 100644
--- a/pct.adoc
+++ b/pct.adoc
@@ -101,9 +101,9 @@ unprivileged containers are safe by design.
 Configuration
 -------------
 
-The '/etc/pve/lxc/<CTID>.conf' file stores container configuration,
-where '<CTID>' is the numeric ID of the given container. Like all
-other files stored inside '/etc/pve/', they get automatically
+The `/etc/pve/lxc/<CTID>.conf` file stores container configuration,
+where `<CTID>` is the numeric ID of the given container. Like all
+other files stored inside `/etc/pve/`, they get automatically
 replicated to all other cluster nodes.
 
 NOTE: CTIDs < 100 are reserved for internal purposes, and CTIDs need to be
@@ -121,11 +121,11 @@ rootfs: local:107/vm-107-disk-1.raw,size=7G
 ----
 
 Those configuration files are simple text files, and you can edit them
-using a normal text editor ('vi', 'nano', ...). This is sometimes
+using a normal text editor (`vi`, `nano`, ...). This is sometimes
 useful to do small corrections, but keep in mind that you need to
 restart the container to apply such changes.
 
-For that reason, it is usually better to use the 'pct' command to
+For that reason, it is usually better to use the `pct` command to
 generate and modify those files, or do the whole thing using the GUI.
 Our toolkit is smart enough to instantaneously apply most changes to
 running containers. This feature is called "hot plug", and there is no
@@ -140,7 +140,7 @@ format. Each line has the following format:
  # this is a comment
  OPTION: value
 
-Blank lines in those files are ignored, and lines starting with a '#'
+Blank lines in those files are ignored, and lines starting with a `#`
 character are treated as comments and are also ignored.
 
 It is possible to add low-level, LXC style configuration directly, for
@@ -157,9 +157,9 @@ Those settings are directly passed to the LXC low-level tools.
 Snapshots
 ~~~~~~~~~
 
-When you create a snapshot, 'pct' stores the configuration at snapshot
+When you create a snapshot, `pct` stores the configuration at snapshot
 time into a separate snapshot section within the same configuration
-file. For example, after creating a snapshot called 'testsnapshot',
+file. For example, after creating a snapshot called ``testsnapshot'',
 your configuration file will look like this:
 
 .Container Configuration with Snapshot
@@ -176,10 +176,11 @@ snaptime: 1457170803
 ...
 ----
 
-There are a few snapshot related properties like 'parent' and
-'snaptime'. The 'parent' property is used to store the parent/child
-relationship between snapshots. 'snaptime' is the snapshot creation
-time stamp (unix epoch).
+There are a few snapshot related properties like `parent` and
+`snaptime`. The `parent` property is used to store the parent/child
+relationship between snapshots. `snaptime` is the snapshot creation
+time stamp (Unix epoch).
+
 
 Guest Operating System Configuration
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -226,12 +227,12 @@ simple empty file creatd via:
 
 Most modifications are OS dependent, so they differ between different
 distributions and versions. You can completely disable modifications
-by manually setting the 'ostype' to 'unmanaged'.
+by manually setting the `ostype` to `unmanaged`.
 
 OS type detection is done by testing for certain files inside the
 container:
 
-Ubuntu:: inspect /etc/lsb-release ('DISTRIB_ID=Ubuntu')
+Ubuntu:: inspect /etc/lsb-release (`DISTRIB_ID=Ubuntu`)
 
 Debian:: test /etc/debian_version
 
@@ -245,7 +246,7 @@ Alpine:: test /etc/alpine-release
 
 Gentoo:: test /etc/gentoo-release
 
-NOTE: Container start fails if the configured 'ostype' differs from the auto
+NOTE: Container start fails if the configured `ostype` differs from the auto
 detected type.
 
 Options
@@ -257,16 +258,16 @@ include::pct.conf.5-opts.adoc[]
 Container Images
 ----------------
 
-Container Images, sometimes also referred to as "templates" or
-"appliances", are 'tar' archives which contain everything to run a
+Container images, sometimes also referred to as ``templates'' or
+``appliances'', are `tar` archives which contain everything to run a
 container. You can think of it as a tidy container backup. Like most
-modern container toolkits, 'pct' uses those images when you create a
+modern container toolkits, `pct` uses those images when you create a
 new container, for example:
 
  pct create 999 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz
 
 Proxmox itself ships a set of basic templates for most common
-operating systems, and you can download them using the 'pveam' (short
+operating systems, and you can download them using the `pveam` (short
 for {pve} Appliance Manager) command line utility. You can also
 download https://www.turnkeylinux.org/[TurnKey Linux] containers using
 that tool (or the graphical user interface).
@@ -281,8 +282,8 @@ After that you can view the list of available images using:
 
  pveam available
 
-You can restrict this large list by specifying the 'section' you are
-interested in, for example basic 'system' images:
+You can restrict this large list by specifying the `section` you are
+interested in, for example basic `system` images:
 
 .List available system images
 ----
@@ -299,14 +300,14 @@ system          ubuntu-15.10-standard_15.10-1_amd64.tar.gz
 ----
 
 Before you can use such a template, you need to download them into one
-of your storages. You can simply use storage 'local' for that
+of your storages. You can simply use storage `local` for that
 purpose. For clustered installations, it is preferred to use a shared
 storage so that all nodes can access those images.
 
  pveam download local debian-8.0-standard_8.0-1_amd64.tar.gz
 
 You are now ready to create containers using that image, and you can
-list all downloaded images on storage 'local' with:
+list all downloaded images on storage `local` with:
 
 ----
 # pveam list local
@@ -325,8 +326,8 @@ Container Storage
 
 Traditional containers use a very simple storage model, only allowing
 a single mount point, the root file system. This was further
-restricted to specific file system types like 'ext4' and 'nfs'.
-Additional mounts are often done by user provided scripts. This turend
+restricted to specific file system types like `ext4` and `nfs`.
+Additional mounts are often done by user provided scripts. This turned
 out to be complex and error prone, so we try to avoid that now.
 
 Our new LXC based container model is more flexible regarding
@@ -339,9 +340,9 @@ application.
 
 The second big improvement is that you can use any storage type
 supported by the {pve} storage library. That means that you can store
-your containers on local 'lvmthin' or 'zfs', shared 'iSCSI' storage,
-or even on distributed storage systems like 'ceph'. It also enables us
-to use advanced storage features like snapshots and clones. 'vzdump'
+your containers on local `lvmthin` or `zfs`, shared `iSCSI` storage,
+or even on distributed storage systems like `ceph`. It also enables us
+to use advanced storage features like snapshots and clones. `vzdump`
 can also use the snapshot feature to provide consistent container
 backups.
 
@@ -398,7 +399,7 @@ cannot make snapshots or deal with quotas from inside the container. With
 unprivileged containers you might run into permission problems caused by the
 user mapping and cannot use ACLs.
 
-NOTE: The contents of bind mount points are not backed up when using 'vzdump'.
+NOTE: The contents of bind mount points are not backed up when using `vzdump`.
 
 WARNING: For security reasons, bind mounts should only be established
 using source directories especially reserved for this purpose, e.g., a
@@ -410,8 +411,8 @@ NOTE: The bind mount source path must not contain any symlinks.
 
 For example, to make the directory `/mnt/bindmounts/shared` accessible in the
 container with ID `100` under the path `/shared`, use a configuration line like
-'mp0: /mnt/bindmounts/shared,mp=/shared' in '/etc/pve/lxc/100.conf'.
-Alternatively, use 'pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared' to
+`mp0: /mnt/bindmounts/shared,mp=/shared` in `/etc/pve/lxc/100.conf`.
+Alternatively, use `pct set 100 -mp0 /mnt/bindmounts/shared,mp=/shared` to
 achieve the same result.
 
 
@@ -426,7 +427,7 @@ NOTE: Device mount points should only be used under special circumstances. In
 most cases a storage backed mount point offers the same performance and a lot
 more features.
 
-NOTE: The contents of device mount points are not backed up when using 'vzdump'.
+NOTE: The contents of device mount points are not backed up when using `vzdump`.
 
 
 FUSE mounts
@@ -481,7 +482,7 @@ Container Network
 -----------------
 
 You can configure up to 10 network interfaces for a single
-container. The corresponding options are called 'net0' to 'net9', and
+container. The corresponding options are called `net0` to `net9`, and
 they can contain the following setting:
 
 include::pct-network-opts.adoc[]
@@ -493,27 +494,28 @@ Backup and Restore
 Container Backup
 ~~~~~~~~~~~~~~~~
 
-It is possible to use the 'vzdump' tool for container backup. Please
-refer to the 'vzdump' manual page for details.
+It is possible to use the `vzdump` tool for container backup. Please
+refer to the `vzdump` manual page for details.
+
 
 Restoring Container Backups
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Restoring container backups made with 'vzdump' is possible using the
-'pct restore' command. By default, 'pct restore' will attempt to restore as much
+Restoring container backups made with `vzdump` is possible using the
+`pct restore` command. By default, `pct restore` will attempt to restore as much
 of the backed up container configuration as possible. It is possible to override
 the backed up configuration by manually setting container options on the command
-line (see the 'pct' manual page for details).
+line (see the `pct` manual page for details).
 
-NOTE: 'pvesm extractconfig' can be used to view the backed up configuration
+NOTE: `pvesm extractconfig` can be used to view the backed up configuration
 contained in a vzdump archive.
 
 There are two basic restore modes, only differing by their handling of mount
 points:
 
 
-"Simple" restore mode
-^^^^^^^^^^^^^^^^^^^^^
+``Simple'' Restore Mode
+^^^^^^^^^^^^^^^^^^^^^^^
 
 If neither the `rootfs` parameter nor any of the optional `mpX` parameters
 are explicitly set, the mount point configuration from the backed up
@@ -535,11 +537,11 @@ This simple mode is also used by the container restore operations in the web
 interface.
 
 
-"Advanced" restore mode
-^^^^^^^^^^^^^^^^^^^^^^^
+``Advanced'' Restore Mode
+^^^^^^^^^^^^^^^^^^^^^^^^^
 
 By setting the `rootfs` parameter (and optionally, any combination of `mpX`
-parameters), the 'pct restore' command is automatically switched into an
+parameters), the `pct restore` command is automatically switched into an
 advanced mode. This advanced mode completely ignores the `rootfs` and `mpX`
 configuration options contained in the backup archive, and instead only
 uses the options explicitly provided as parameters.
@@ -553,10 +555,10 @@ individually
 * Restore to device and/or bind mount points (limited to root user)
 
 
-Managing Containers with 'pct'
+Managing Containers with `pct`
 ------------------------------
 
-'pct' is the tool to manage Linux Containers on {pve}. You can create
+`pct` is the tool to manage Linux Containers on {pve}. You can create
 and destroy containers, and control execution (start, stop, migrate,
 ...). You can use pct to set parameters in the associated config file,
 like network configuration or memory limits.
@@ -585,7 +587,7 @@ Display the configuration
 
  pct config 100
 
-Add a network interface called eth0, bridged to the host bridge vmbr0,
+Add a network interface called `eth0`, bridged to the host bridge `vmbr0`,
 set the address and gateway, while it's running
 
  pct set 100 -net0 name=eth0,bridge=vmbr0,ip=192.168.15.147/24,gw=192.168.15.1
@@ -598,7 +600,7 @@ Reduce the memory of the container to 512MB
 Files
 ------
 
-'/etc/pve/lxc/<CTID>.conf'::
+`/etc/pve/lxc/<CTID>.conf`::
 
 Configuration file for the container '<CTID>'.
 
@@ -606,24 +608,24 @@ Configuration file for the container '<CTID>'.
 Container Advantages
 --------------------
 
-- Simple, and fully integrated into {pve}. Setup looks similar to a normal
+* Simple, and fully integrated into {pve}. Setup looks similar to a normal
   VM setup. 
 
-  * Storage (ZFS, LVM, NFS, Ceph, ...)
+** Storage (ZFS, LVM, NFS, Ceph, ...)
 
-  * Network
+** Network
 
-  * Authentification
+** Authentication
 
-  * Cluster
+** Cluster
 
-- Fast: minimal overhead, as fast as bare metal
+* Fast: minimal overhead, as fast as bare metal
 
-- High density (perfect for idle workloads)
+* High density (perfect for idle workloads)
 
-- REST API
+* REST API
 
-- Direct hardware access
+* Direct hardware access
 
 
 Technology Overview
diff --git a/pct.conf.adoc b/pct.conf.adoc
index 0b3d6cb..0c86b44 100644
--- a/pct.conf.adoc
+++ b/pct.conf.adoc
@@ -12,7 +12,7 @@ pct.conf - Proxmox VE Container Configuration
 SYNOPSYS
 --------
 
-'/etc/pve/lxc/<CTID>.conf'
+`/etc/pve/lxc/<CTID>.conf`
 
 
 DESCRIPTION
@@ -25,8 +25,8 @@ Container Configuration
 include::attributes.txt[]
 endif::manvolnum[]
 
-The '/etc/pve/lxc/<CTID>.conf' files stores container configuration,
-where "CTID" is the numeric ID of the given container.
+The `/etc/pve/lxc/<CTID>.conf` file stores container configuration,
+where `CTID` is the numeric ID of the given container.
 
 NOTE: IDs <= 100 are reserved for internal purposes.
 
@@ -39,10 +39,10 @@ the following format:
 
  OPTION: value
 
-Blank lines in the file are ignored, and lines starting with a '#'
+Blank lines in the file are ignored, and lines starting with a `#`
 character are treated as comments and are also ignored.
 
-One can use the 'pct' command to generate and modify those files.
+One can use the `pct` command to generate and modify those files.
 
 It is also possible to add low-level lxc style configuration directly, for
 example:
diff --git a/pmxcfs.adoc b/pmxcfs.adoc
index 33b8e3e..3474d73 100644
--- a/pmxcfs.adoc
+++ b/pmxcfs.adoc
@@ -23,9 +23,9 @@ Proxmox Cluster File System (pmxcfs)
 include::attributes.txt[]
 endif::manvolnum[]
 
-The Proxmox Cluster file system (pmxcfs) is a database-driven file
+The Proxmox Cluster file system (``pmxcfs'') is a database-driven file
 system for storing configuration files, replicated in real time to all
-cluster nodes using corosync. We use this to store all PVE related
+cluster nodes using `corosync`. We use this to store all PVE related
 configuration files.
 
 Although the file system stores all data inside a persistent database
@@ -63,8 +63,8 @@ some feature are simply not implemented, because we do not need them:
 File access rights
 ------------------
 
-All files and directories are owned by user 'root' and have group
-'www-data'. Only root has write permissions, but group 'www-data' can
+All files and directories are owned by user `root` and have group
+`www-data`. Only root has write permissions, but group `www-data` can
 read most files. Files below the following paths:
 
  /etc/pve/priv/
@@ -93,25 +93,25 @@ Files
 
 [width="100%",cols="m,d"]
 |=======
-|corosync.conf  |corosync cluster configuration file (previous to {pve} 4.x this file was called cluster.conf)
-|storage.cfg   |{pve} storage configuration
-|datacenter.cfg   |{pve} datacenter wide configuration (keyboard layout, proxy, ...)
-|user.cfg      |{pve} access control configuration (users/groups/...)
-|domains.cfg   |{pve} Authentication domains 
-|authkey.pub   | public key used by ticket system
-|pve-root-ca.pem | public certificate of cluster CA
-|priv/shadow.cfg  | shadow password file
-|priv/authkey.key | private key used by ticket system
-|priv/pve-root-ca.key | private key of cluster CA
-|nodes/<NAME>/pve-ssl.pem                 | public ssl certificate for web server (signed by cluster CA)
-|nodes/<NAME>/pve-ssl.key            | private ssl key for pve-ssl.pem
-|nodes/<NAME>/pveproxy-ssl.pem       | public ssl certificate (chain) for web server (optional override for pve-ssl.pem)
-|nodes/<NAME>/pveproxy-ssl.key       | private ssl key for pveproxy-ssl.pem (optional)
-|nodes/<NAME>/qemu-server/<VMID>.conf    | VM configuration data for KVM VMs
-|nodes/<NAME>/lxc/<VMID>.conf         | VM configuration data for LXC containers
-|firewall/cluster.fw | Firewall config applied to all nodes
-|firewall/<NAME>.fw  | Firewall config for individual nodes
-|firewall/<VMID>.fw  | Firewall config for VMs and Containers
+|`corosync.conf`                        | Corosync cluster configuration file (prior to {pve} 4.x this file was called `cluster.conf`)
+|`storage.cfg`                          | {pve} storage configuration
+|`datacenter.cfg`                       | {pve} datacenter wide configuration (keyboard layout, proxy, ...)
+|`user.cfg`                             | {pve} access control configuration (users/groups/...)
+|`domains.cfg`                          | {pve} authentication domains
+|`authkey.pub`                          | Public key used by ticket system
+|`pve-root-ca.pem`                      | Public certificate of cluster CA
+|`priv/shadow.cfg`                      | Shadow password file
+|`priv/authkey.key`                     | Private key used by ticket system
+|`priv/pve-root-ca.key`                 | Private key of cluster CA
+|`nodes/<NAME>/pve-ssl.pem`             | Public SSL certificate for web server (signed by cluster CA)
+|`nodes/<NAME>/pve-ssl.key`             | Private SSL key for `pve-ssl.pem`
+|`nodes/<NAME>/pveproxy-ssl.pem`        | Public SSL certificate (chain) for web server (optional override for `pve-ssl.pem`)
+|`nodes/<NAME>/pveproxy-ssl.key`        | Private SSL key for `pveproxy-ssl.pem` (optional)
+|`nodes/<NAME>/qemu-server/<VMID>.conf` | VM configuration data for KVM VMs
+|`nodes/<NAME>/lxc/<VMID>.conf`         | VM configuration data for LXC containers
+|`firewall/cluster.fw`                  | Firewall configuration applied to all nodes
+|`firewall/<NAME>.fw`                   | Firewall configuration for individual nodes
+|`firewall/<VMID>.fw`                   | Firewall configuration for VMs and Containers
 |=======
 
 Symbolic links
@@ -119,9 +119,9 @@ Symbolic links
 
 [width="100%",cols="m,m"]
 |=======
-|local         |nodes/<LOCAL_HOST_NAME>
-|qemu-server   |nodes/<LOCAL_HOST_NAME>/qemu-server/
-|lxc           |nodes/<LOCAL_HOST_NAME>/lxc/
+|`local`         | `nodes/<LOCAL_HOST_NAME>`
+|`qemu-server`   | `nodes/<LOCAL_HOST_NAME>/qemu-server/`
+|`lxc`           | `nodes/<LOCAL_HOST_NAME>/lxc/`
 |=======
 
 Special status files for debugging (JSON)
@@ -129,11 +129,11 @@ Special status files for debugging (JSON)
 
 [width="100%",cols="m,d"]
 |=======
-| .version    |file versions (to detect file modifications)
-| .members    |Info about cluster members
-| .vmlist     |List of all VMs
-| .clusterlog |Cluster log (last 50 entries)
-| .rrd        |RRD data (most recent entries)
+|`.version`    |File versions (to detect file modifications)
+|`.members`    |Info about cluster members
+|`.vmlist`     |List of all VMs
+|`.clusterlog` |Cluster log (last 50 entries)
+|`.rrd`        |RRD data (most recent entries)
 |=======
 
 Enable/Disable debugging
@@ -153,11 +153,11 @@ Recovery
 
 If you have major problems with your Proxmox VE host, e.g. hardware
 issues, it could be helpful to just copy the pmxcfs database file
-/var/lib/pve-cluster/config.db and move it to a new Proxmox VE
+`/var/lib/pve-cluster/config.db` and move it to a new Proxmox VE
 host. On the new host (with nothing running), you need to stop the
-pve-cluster service and replace the config.db file (needed permissions
-0600). Second, adapt '/etc/hostname' and '/etc/hosts' according to the
-lost Proxmox VE host, then reboot and check. (And don´t forget your
+`pve-cluster` service and replace the `config.db` file (needed permissions
+`0600`). Second, adapt `/etc/hostname` and `/etc/hosts` according to the
+lost Proxmox VE host, then reboot and check. (And don't forget your
 VM/CT data)
 
 Remove Cluster configuration
@@ -170,7 +170,7 @@ shared configuration data is destroyed.
 In some cases, you might prefer to put a node back to local mode
 without reinstall, which is described here:
 
-* stop the cluster file system in '/etc/pve/'
+* stop the cluster file system in `/etc/pve/`
 
  # systemctl stop pve-cluster
 
diff --git a/pve-admin-guide.adoc b/pve-admin-guide.adoc
index 1320942..618dfde 100644
--- a/pve-admin-guide.adoc
+++ b/pve-admin-guide.adoc
@@ -103,7 +103,7 @@ include::qm.1-synopsis.adoc[]
 
 :leveloffset: 0
 
-*qmrestore* - Restore QemuServer 'vzdump' Backups
+*qmrestore* - Restore QemuServer `vzdump` Backups
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 :leveloffset: 1
diff --git a/pve-faq.adoc b/pve-faq.adoc
index c412b05..662868b 100644
--- a/pve-faq.adoc
+++ b/pve-faq.adoc
@@ -28,8 +28,8 @@ NOTE: VMs and Containers can be both 32-bit and/or 64-bit.
 
 Does my CPU support virtualization?::
 
-To check if your CPU is virtualization compatible, check for the "vmx"
-or "svm" tag in this command output:
+To check if your CPU is virtualization compatible, check for the `vmx`
+or `svm` tag in this command output:
 +
 ----
 egrep '(vmx|svm)' /proc/cpuinfo
@@ -96,14 +96,14 @@ complete OS inside a container, where you log in as ssh, add users,
 run apache, etc...
 +
 LXD is building on top of LXC to provide a new, better user
-experience. Under the hood, LXD uses LXC through 'liblxc' and its Go
+experience. Under the hood, LXD uses LXC through `liblxc` and its Go
 binding to create and manage the containers. It's basically an
 alternative to LXC's tools and distribution template system with the
 added features that come from being controllable over the network.
 +
 Proxmox Containers also aims at *system virtualization*, and thus uses
 LXC as the basis of its own container offer. The Proxmox Container
-Toolkit is called 'pct', and is tightly coupled with {pve}. That means
+Toolkit is called `pct`, and is tightly coupled with {pve}. That means
 that it is aware of the cluster setup, and it can use the same network
 and storage resources as fully virtualized VMs. You can even use the
 {pve} firewall, create and restore backups, or manage containers using
diff --git a/pve-firewall.adoc b/pve-firewall.adoc
index 5f76f5d..154c907 100644
--- a/pve-firewall.adoc
+++ b/pve-firewall.adoc
@@ -32,7 +32,7 @@ containers. Features like firewall macros, security groups, IP sets
 and aliases helps to make that task easier.
 
 While all configuration is stored on the cluster file system, the
-iptables based firewall runs on each cluster node, and thus provides
+`iptables`-based firewall runs on each cluster node, and thus provides
 full isolation between virtual machines. The distributed nature of
 this system also provides much higher bandwidth than a central
 firewall solution.
@@ -64,17 +64,17 @@ Configuration Files
 
 All firewall related configuration is stored on the proxmox cluster
 file system. So those files are automatically distributed to all
-cluster nodes, and the 'pve-firewall' service updates the underlying
-iptables rules automatically on changes.
+cluster nodes, and the `pve-firewall` service updates the underlying
+`iptables` rules automatically on changes.
 
 You can configure anything using the GUI (i.e. Datacenter -> Firewall,
 or on a Node -> Firewall), or you can edit the configuration files
 directly using your preferred editor.
 
 Firewall configuration files contains sections of key-value
-pairs. Lines beginning with a '#' and blank lines are considered
+pairs. Lines beginning with a `#` and blank lines are considered
 comments. Sections starts with a header line containing the section
-name enclosed in '[' and ']'.
+name enclosed in `[` and `]`.
 
 
 Cluster Wide Setup
@@ -86,25 +86,25 @@ The cluster wide firewall configuration is stored at:
 
 The configuration can contain the following sections:
 
-'[OPTIONS]'::
+`[OPTIONS]`::
 
 This is used to set cluster wide firewall options.
 
 include::pve-firewall-cluster-opts.adoc[]
 
-'[RULES]'::
+`[RULES]`::
 
 This sections contains cluster wide firewall rules for all nodes.
 
-'[IPSET <name>]'::
+`[IPSET <name>]`::
 
 Cluster wide IP set definitions.
 
-'[GROUP <name>]'::
+`[GROUP <name>]`::
 
 Cluster wide security group definitions.
 
-'[ALIASES]'::
+`[ALIASES]`::
 
 Cluster wide Alias definitions.
 
@@ -135,7 +135,7 @@ enabling the firewall. That way you still have access to the host if
 something goes wrong .
 
 To simplify that task, you can instead create an IPSet called
-'management', and add all remote IPs there. This creates all required
+``management'', and add all remote IPs there. This creates all required
 firewall rules to access the GUI from remote.
 
 
@@ -146,17 +146,17 @@ Host related configuration is read from:
 
  /etc/pve/nodes/<nodename>/host.fw
 
-This is useful if you want to overwrite rules from 'cluster.fw'
+This is useful if you want to overwrite rules from `cluster.fw`
 config. You can also increase log verbosity, and set netfilter related
 options. The configuration can contain the following sections:
 
-'[OPTIONS]'::
+`[OPTIONS]`::
 
 This is used to set host related firewall options.
 
 include::pve-firewall-host-opts.adoc[]
 
-'[RULES]'::
+`[RULES]`::
 
 This sections contains host specific firewall rules.
 
@@ -170,21 +170,21 @@ VM firewall configuration is read from:
 
 and contains the following data:
 
-'[OPTIONS]'::
+`[OPTIONS]`::
 
 This is used to set VM/Container related firewall options.
 
 include::pve-firewall-vm-opts.adoc[]
 
-'[RULES]'::
+`[RULES]`::
 
 This sections contains VM/Container firewall rules.
 
-'[IPSET <name>]'::
+`[IPSET <name>]`::
 
 IP set definitions.
 
-'[ALIASES]'::
+`[ALIASES]`::
 
 IP Alias definitions.
 
@@ -194,7 +194,7 @@ Enabling the Firewall for VMs and Containers
 
 Each virtual network device has its own firewall enable flag. So you
 can selectively enable the firewall for each interface. This is
-required in addition to the general firewall 'enable' option.
+required in addition to the general firewall `enable` option.
 
 The firewall requires a special network device setup, so you need to
 restart the VM/container after enabling the firewall on a network
@@ -206,7 +206,8 @@ Firewall Rules
 
 Firewall rules consists of a direction (`IN` or `OUT`) and an
 action (`ACCEPT`, `DENY`, `REJECT`). You can also specify a macro
-name. Macros contain predifined sets of rules and options. Rules can be disabled by prefixing them with '|'.
+name. Macros contain predefined sets of rules and options. Rules can be
+disabled by prefixing them with `|`.
 
 .Firewall rules syntax
 ----
@@ -240,12 +241,13 @@ IN  DROP # drop all incoming packages
 OUT ACCEPT # accept all outgoing packages
 ----
 
+
 Security Groups
 ---------------
 
 A security group is a collection of rules, defined at cluster level, which
 can be used in all VMs' rules. For example you can define a group named
-`webserver` with rules to open the http and https ports.
+``webserver'' with rules to open the 'http' and 'https' ports.
 
 ----
 # /etc/pve/firewall/cluster.fw
@@ -291,7 +293,7 @@ using detected local_network: 192.168.0.0/20
 The firewall automatically sets up rules to allow everything needed
 for cluster communication (corosync, API, SSH) using this alias.
 
-The user can overwrite these values in the cluster.fw alias
+The user can overwrite these values in the `cluster.fw` alias
 section. If you use a single host on a public network, it is better to
 explicitly assign the local IP address
 
@@ -332,7 +334,8 @@ communication. (multicast,ssh,...)
 192.168.2.10/24
 ----
 
-Standard IP set 'blacklist'
+
+Standard IP set `blacklist`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Traffic from these ips is dropped by every host's and VM's firewall.
@@ -345,8 +348,9 @@ Traffic from these ips is dropped by every host's and VM's firewall.
 213.87.123.0/24
 ----
 
+
 [[ipfilter-section]]
-Standard IP set 'ipfilter-net*'
+Standard IP set `ipfilter-net*`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 These filters belong to a VM's network interface and are mainly used to prevent
@@ -378,7 +382,7 @@ The firewall runs two service daemons on each node:
 * pvefw-logger: NFLOG daemon (ulogd replacement).
 * pve-firewall: updates iptables rules
 
-There is also a CLI command named 'pve-firewall', which can be used to
+There is also a CLI command named `pve-firewall`, which can be used to
 start and stop the firewall service:
 
  # pve-firewall start
@@ -403,12 +407,12 @@ How to allow FTP
 ~~~~~~~~~~~~~~~~
 
 FTP is an old style protocol which uses port 21 and several other dynamic ports. So you
-need a rule to accept port 21. In addition, you need to load the 'ip_conntrack_ftp' module.
+need a rule to accept port 21. In addition, you need to load the `ip_conntrack_ftp` module.
 So please run: 
 
  modprobe ip_conntrack_ftp
 
-and add `ip_conntrack_ftp` to '/etc/modules' (so that it works after a reboot) .
+and add `ip_conntrack_ftp` to `/etc/modules` (so that it works after a reboot).
 
 
 Suricata IPS integration
@@ -429,7 +433,7 @@ Install suricata on proxmox host:
 # modprobe nfnetlink_queue  
 ----
 
-Don't forget to add `nfnetlink_queue` to '/etc/modules' for next reboot.
+Don't forget to add `nfnetlink_queue` to `/etc/modules` for next reboot.
 
 Then, enable IPS for a specific VM with:
 
@@ -450,8 +454,9 @@ Available queues are defined in
 NFQUEUE=0
 ----
 
-Avoiding link-local addresses on tap and veth devices
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Avoiding `link-local` Addresses on `tap` and `veth` Devices
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 With IPv6 enabled by default every interface gets a MAC-derived link local
 address. However, most devices on a typical {pve} setup are connected to a
@@ -519,7 +524,7 @@ The firewall contains a few IPv6 specific options. One thing to note is that
 IPv6 does not use the ARP protocol anymore, and instead uses NDP (Neighbor
 Discovery Protocol) which works on IP level and thus needs IP addresses to
 succeed. For this purpose link-local addresses derived from the interface's MAC
-address are used. By default the 'NDP' option is enabled on both host and VM
+address are used. By default the `NDP` option is enabled on both host and VM
 level to allow neighbor discovery (NDP) packets to be sent and received.
 
 Beside neighbor discovery NDP is also used for a couple of other things, like
@@ -528,14 +533,14 @@ autoconfiguration and advertising routers.
 By default VMs are allowed to send out router solicitation messages (to query
 for a router), and to receive router advetisement packets. This allows them to
 use stateless auto configuration. On the other hand VMs cannot advertise
-themselves as routers unless the 'Allow Router Advertisement' (`radv: 1`) option
+themselves as routers unless the ``Allow Router Advertisement'' (`radv: 1`) option
 is set.
 
-As for the link local addresses required for NDP, there's also an 'IP Filter'
+As for the link local addresses required for NDP, there's also an ``IP Filter''
 (`ipfilter: 1`) option which can be enabled which has the same effect as adding
 an `ipfilter-net*` ipset for each of the VM's network interfaces containing the
 corresponding link local addresses.  (See the
-<<ipfilter-section,Standard IP set 'ipfilter-net*'>> section for details.)
+<<ipfilter-section,Standard IP set `ipfilter-net*`>> section for details.)
 
 
 Ports used by Proxmox VE
diff --git a/pve-installation.adoc b/pve-installation.adoc
index 4dd4076..ccb3418 100644
--- a/pve-installation.adoc
+++ b/pve-installation.adoc
@@ -61,14 +61,14 @@ BIOS is unable to read the boot block from the disk.
 
 Test Memory::
 
-Runs 'memtest86+'. This is useful to check if your memory if
+Runs `memtest86+`. This is useful to check if your memory is
 functional and error free.
 
 You normally select *Install Proxmox VE* to start the installation.
 After that you get prompted to select the target hard disk(s). The
 `Options` button lets you select the target file system, which
-defaults to `ext4`. The installer uses LVM if you select 'ext3',
-'ext4' or 'xfs' as file system, and offers additional option to
+defaults to `ext4`. The installer uses LVM if you select `ext3`,
+`ext4` or `xfs` as file system, and offers an additional option to
 restrict LVM space (see <<advanced_lvm_options,below>>)
 
 If you have more than one disk, you can also use ZFS as file system.
@@ -121,7 +121,7 @@ system.
 `maxvz`::
 
 Define the size of the `data` volume, which is mounted at
-'/var/lib/vz'.
+`/var/lib/vz`.
 
 `minfree`::
 
diff --git a/pve-intro.adoc b/pve-intro.adoc
index ecfa169..fab3585 100644
--- a/pve-intro.adoc
+++ b/pve-intro.adoc
@@ -119,7 +119,7 @@ Local storage types supported are:
 Integrated Backup and Restore
 -----------------------------
 
-The integrated backup tool (vzdump) creates consistent snapshots of
+The integrated backup tool (`vzdump`) creates consistent snapshots of
 running Containers and KVM guests. It basically creates an archive of
 the VM or CT data which includes the VM/CT configuration files.
 
@@ -150,11 +150,14 @@ bonding/aggregation are possible. In this way it is possible to build
 complex, flexible virtual networks for the Proxmox VE hosts,
 leveraging the full power of the Linux network stack.
 
+
 Integrated Firewall
 -------------------
 
 The intergrated firewall allows you to filter network packets on
-any VM or Container interface. Common sets of firewall rules can be grouped into 'security groups'.
+any VM or Container interface. Common sets of firewall rules can
+be grouped into ``security groups''.
+
 
 Why Open Source
 ---------------
diff --git a/pve-network.adoc b/pve-network.adoc
index 7221a87..82226a8 100644
--- a/pve-network.adoc
+++ b/pve-network.adoc
@@ -15,14 +15,14 @@ VLANs (IEEE 802.1q) and network bonding, also known as "link
 aggregation". That way it is possible to build complex and flexible
 virtual networks.
 
-Debian traditionally uses the 'ifup' and 'ifdown' commands to
-configure the network. The file '/etc/network/interfaces' contains the
-whole network setup. Please refer to to manual page ('man interfaces')
+Debian traditionally uses the `ifup` and `ifdown` commands to
+configure the network. The file `/etc/network/interfaces` contains the
+whole network setup. Please refer to the manual page (`man interfaces`)
 for a complete format description.
 
 NOTE: {pve} does not write changes directly to
-'/etc/network/interfaces'. Instead, we write into a temporary file
-called '/etc/network/interfaces.new', and commit those changes when
+`/etc/network/interfaces`. Instead, we write into a temporary file
+called `/etc/network/interfaces.new`, and commit those changes when
 you reboot the node.
 
 It is worth mentioning that you can directly edit the configuration
@@ -52,7 +52,7 @@ Default Configuration using a Bridge
 
 The installation program creates a single bridge named `vmbr0`, which
 is connected to the first ethernet card `eth0`. The corresponding
-configuration in '/etc/network/interfaces' looks like this:
+configuration in `/etc/network/interfaces` looks like this:
 
 ----
 auto lo
@@ -87,13 +87,13 @@ TIP: Some providers allows you to register additional MACs on there
 management interface. This avoids the problem, but is clumsy to
 configure because you need to register a MAC for each of your VMs.
 
-You can avoid the problem by "routing" all traffic via a single
+You can avoid the problem by ``routing'' all traffic via a single
 interface. This makes sure that all network packets use the same MAC
 address.
 
-A common scenario is that you have a public IP (assume 192.168.10.2
+A common scenario is that you have a public IP (assume `192.168.10.2`
 for this example), and an additional IP block for your VMs
-(10.10.10.1/255.255.255.0). We recommend the following setup for such
+(`10.10.10.1/255.255.255.0`). We recommend the following setup for such
 situations:
 
 ----
@@ -118,8 +118,8 @@ iface vmbr0 inet static
 ----
 
 
-Masquerading (NAT) with iptables
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Masquerading (NAT) with `iptables`
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 In some cases you may want to use private IPs behind your Proxmox
 host's true IP, and masquerade the traffic using NAT:
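
A minimal sketch of such a setup could look like this (the interface name
`eth0`, which is assumed to carry the host's public IP, and the
`10.10.10.0/24` guest subnet are only placeholders):

----
auto vmbr0
iface vmbr0 inet static
        address  10.10.10.1
        netmask  255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
----

The `post-up` hooks enable IP forwarding and install the masquerading rule
whenever the bridge comes up; the `post-down` hook removes the rule again.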
diff --git a/pve-package-repos.adoc b/pve-package-repos.adoc
index a9b428d..84e2c1c 100644
--- a/pve-package-repos.adoc
+++ b/pve-package-repos.adoc
@@ -5,17 +5,17 @@ include::attributes.txt[]
 All Debian based systems use
 http://en.wikipedia.org/wiki/Advanced_Packaging_Tool[APT] as package
 management tool. The list of repositories is defined in
-'/etc/apt/sources.list' and '.list' files found inside
-'/etc/apt/sources.d/'. Updates can be installed directly using
-'apt-get', or via the GUI.
+`/etc/apt/sources.list` and `.list` files found inside
+`/etc/apt/sources.list.d/`. Updates can be installed directly using
+`apt-get`, or via the GUI.
 
-Apt 'sources.list' files list one package repository per line, with
+Apt `sources.list` files list one package repository per line, with
 the most preferred source listed first. Empty lines are ignored, and a
-'#' character anywhere on a line marks the remainder of that line as a
+`#` character anywhere on a line marks the remainder of that line as a
 comment. The information available from the configured sources is
-acquired by 'apt-get update'.
+acquired by `apt-get update`.
 
-.File '/etc/apt/sources.list'
+.File `/etc/apt/sources.list`
 ----
 deb http://ftp.debian.org/debian jessie main contrib
 
@@ -33,7 +33,7 @@ all {pve} subscription users. It contains the most stable packages,
 and is suitable for production use. The `pve-enterprise` repository is
 enabled by default:
 
-.File '/etc/apt/sources.list.d/pve-enterprise.list'
+.File `/etc/apt/sources.list.d/pve-enterprise.list`
 ----
 deb https://enterprise.proxmox.com/debian jessie pve-enterprise
 ----
@@ -48,7 +48,7 @@ repository. We offer different support levels, and you can find further
 details at http://www.proxmox.com/en/proxmox-ve/pricing.
 
 NOTE: You can disable this repository by commenting out the above line
-using a '#' (at the start of the line). This prevents error messages
+using a `#` (at the start of the line). This prevents error messages
 if you do not have a subscription key. Please configure the
 `pve-no-subscription` repository in that case.
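
A disabled entry would then look like this (shown here purely as an
illustration):

.File `/etc/apt/sources.list.d/pve-enterprise.list` (disabled)
----
# deb https://enterprise.proxmox.com/debian jessie pve-enterprise
----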
 
@@ -61,9 +61,9 @@ this repository. It can be used for testing and non-production
 use. It's not recommended to run on production servers, as these
 packages are not always heavily tested and validated.
 
-We recommend to configure this repository in '/etc/apt/sources.list'.
+We recommend to configure this repository in `/etc/apt/sources.list`.
 
-.File '/etc/apt/sources.list'
+.File `/etc/apt/sources.list`
 ----
 deb http://ftp.debian.org/debian jessie main contrib
 
@@ -82,7 +82,7 @@ deb http://security.debian.org jessie/updates main contrib
 Finally, there is a repository called `pvetest`. This one contains the
 latest packages and is heavily used by developers to test new
 features. As usual, you can configure this using
-'/etc/apt/sources.list' by adding the following line:
+`/etc/apt/sources.list` by adding the following line:
 
 .sources.list entry for `pvetest`
 ----
@@ -96,7 +96,7 @@ for testing new features or bug fixes.
 SecureApt
 ~~~~~~~~~
 
-We use GnuPG to sign the 'Release' files inside those repositories,
+We use GnuPG to sign the `Release` files inside those repositories,
 and APT uses those signatures to verify that all packages are from a
 trusted source.
 
@@ -128,7 +128,7 @@ ifdef::wiki[]
 {pve} 3.x Repositories
 ~~~~~~~~~~~~~~~~~~~~~~
 
-{pve} 3.x is based on Debian 7.x ('wheezy'). Please note that this
+{pve} 3.x is based on Debian 7.x (``wheezy''). Please note that this
 release is out of date, and you should update your
 installation. Nevertheless, we still provide access to those
 repositories at our download servers.
@@ -144,17 +144,17 @@ deb http://download.proxmox.com/debian wheezy pve-no-subscription
 deb http://download.proxmox.com/debian wheezy pvetest
 |===========================================================
 
-NOTE: Apt 'sources.list' configuration files are basically the same as
-in newer 4.x versions - just replace 'jessie' with 'wheezy'.
+NOTE: Apt `sources.list` configuration files are basically the same as
+in newer 4.x versions - just replace `jessie` with `wheezy`.
 
-Outdated: 'stable' Repository 'pve'
+Outdated: `stable` Repository `pve`
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 This repository is a leftover to ease the update to 3.1. It will not
 get any updates after the release of 3.1. Therefore you need to remove
 this repository after you have upgraded to 3.1.
 
-.File '/etc/apt/sources.list'
+.File `/etc/apt/sources.list`
 ----
 deb http://ftp.debian.org/debian wheezy main contrib
 
@@ -169,11 +169,11 @@ deb http://security.debian.org/ wheezy/updates main contrib
 Outdated: {pve} 2.x Repositories
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-{pve} 2.x is based on Debian 6.0 ('squeeze') and outdated. Please
+{pve} 2.x is based on Debian 6.0 (``squeeze'') and outdated. Please
 upgrade to the latest version as soon as possible. In order to use the
-stable 'pve' 2.x repository, check your sources.list:
+stable `pve` 2.x repository, check your `sources.list`:
 
-.File '/etc/apt/sources.list'
+.File `/etc/apt/sources.list`
 ----
 deb http://ftp.debian.org/debian squeeze main contrib
 
@@ -188,7 +188,7 @@ deb http://security.debian.org/ squeeze/updates main contrib
 Outdated: {pve} VE 1.x Repositories
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-{pve} 1.x is based on Debian 5.0 (Lenny) and very outdated. Please
+{pve} 1.x is based on Debian 5.0 (``lenny'') and very outdated. Please
 upgrade to the latest version as soon as possible.
 
 
diff --git a/pve-storage-dir.adoc b/pve-storage-dir.adoc
index 99f61cb..569e463 100644
--- a/pve-storage-dir.adoc
+++ b/pve-storage-dir.adoc
@@ -9,7 +9,7 @@ storage. A directory is a file level storage, so you can store any
 content type like virtual disk images, containers, templates, ISO images
 or backup files.
 
-NOTE: You can mount additional storages via standard linux '/etc/fstab',
+NOTE: You can mount additional storages via the standard Linux `/etc/fstab`,
 and then define a directory storage for that mount point. This way you
 can use any file system supported by Linux.
 
@@ -31,10 +31,10 @@ storage backends.
 [width="100%",cols="d,m",options="header"]
 |===========================================================
 |Content type        |Subdir
-|VM images           |images/<VMID>/
-|ISO images          |template/iso/
-|Container templates |template/cache
-|Backup files        |dump/
+|VM images           |`images/<VMID>/`
+|ISO images          |`template/iso/`
+|Container templates |`template/cache/`
+|Backup files        |`dump/`
 |===========================================================
 
 Configuration
@@ -44,7 +44,7 @@ This backend supports all common storage properties, and adds an
 additional property called `path` to specify the directory. This
 needs to be an absolute file system path.
 
-.Configuration Example ('/etc/pve/storage.cfg')
+.Configuration Example (`/etc/pve/storage.cfg`)
 ----
 dir: backup
         path /mnt/backup
@@ -54,7 +54,7 @@ dir: backup
 
 The above configuration defines a storage pool called `backup`. That pool
 can be used to store up to 7 backups (`maxfiles 7`) per VM. The real
-path for the backup files is '/mnt/backup/dump/...'.
+path for the backup files is `/mnt/backup/dump/...`.
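
For illustration, a roughly equivalent pool could be created on the command
line with something like this (assuming the mount point already exists):

 # pvesm add dir backup --path /mnt/backup --content backup --maxfiles 7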
 
 
 File naming conventions
@@ -70,13 +70,13 @@ This specifies the owner VM.
 
 `<NAME>`::
 
-This can be an arbitrary name (`ascii`) without white spaces. The
+This can be an arbitrary name (`ascii`) without white space. The
 backend uses `disk-[N]` as default, where `[N]` is replaced by an
 integer to make the name unique.
 
 `<FORMAT>`::
 
-Species the image format (`raw|qcow2|vmdk`).
+Specifies the image format (`raw|qcow2|vmdk`).
 
 When you create a VM template, all VM images are renamed to indicate
 that they are now read-only, and can be used as a base image for clones:
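
For example, a hypothetical image of VM 200 would be renamed along these
lines when the VM is converted into a template:

 vm-200-disk-1.raw -> base-200-disk-1.raw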
diff --git a/pve-storage-glusterfs.adoc b/pve-storage-glusterfs.adoc
index 4afcb40..1dfb228 100644
--- a/pve-storage-glusterfs.adoc
+++ b/pve-storage-glusterfs.adoc
@@ -9,7 +9,7 @@ design, runs on commodity hardware, and can provide a highly available
 enterprise storage at low costs. Such a system is capable of scaling to
 several petabytes, and can handle thousands of clients.
 
-NOTE: After a node/brick crash, GlusterFS does a full 'rsync' to make
+NOTE: After a node/brick crash, GlusterFS does a full `rsync` to make
 sure data is consistent. This can take a very long time with large
 files, so this backend is not suitable to store large VM images.
 
@@ -36,7 +36,7 @@ GlusterFS Volume.
 GlusterFS transport: `tcp`, `unix` or `rdma`
 
 
-.Configuration Example ('/etc/pve/storage.cfg')
+.Configuration Example (`/etc/pve/storage.cfg`)
 ----
 glusterfs: Gluster
         server 10.2.3.4
diff --git a/pve-storage-iscsi.adoc b/pve-storage-iscsi.adoc
index 6f700e5..d59a905 100644
--- a/pve-storage-iscsi.adoc
+++ b/pve-storage-iscsi.adoc
@@ -10,13 +10,13 @@ source iSCSI target solutions available,
 e.g. http://www.openmediavault.org/[OpenMediaVault], which is based on
 Debian.
 
-To use this backend, you need to install the 'open-iscsi'
+To use this backend, you need to install the `open-iscsi`
 package. This is a standard Debian package, but it is not installed by
 default to save resources.
 
   # apt-get install open-iscsi
 
-Low-level iscsi management task can be done using the 'iscsiadm' tool.
+Low-level iSCSI management tasks can be done using the `iscsiadm` tool.
 
 
 Configuration
@@ -34,7 +34,7 @@ target::
 iSCSI target.
 
 
-.Configuration Example ('/etc/pve/storage.cfg')
+.Configuration Example (`/etc/pve/storage.cfg`)
 ----
 iscsi: mynas
      portal 10.10.10.1
diff --git a/pve-storage-iscsidirect.adoc b/pve-storage-iscsidirect.adoc
index 2817cdd..4dda04b 100644
--- a/pve-storage-iscsidirect.adoc
+++ b/pve-storage-iscsidirect.adoc
@@ -5,7 +5,7 @@ include::attributes.txt[]
 Storage pool type: `iscsidirect`
 
 This backend provides basically the same functionality as the
-Open-iSCSI backed, but uses a user-level library (package 'libiscsi2')
+Open-iSCSI backend, but uses a user-level library (package `libiscsi2`)
 to implement it.
 
 It should be noted that there are no kernel drivers involved, so this
@@ -19,7 +19,7 @@ Configuration
 The user mode iSCSI backend uses the same configuration options as the
 Open-iSCSI backend.
 
-.Configuration Example ('/etc/pve/storage.cfg')
+.Configuration Example (`/etc/pve/storage.cfg`)
 ----
 iscsidirect: faststore
      portal 10.10.10.1
diff --git a/pve-storage-lvm.adoc b/pve-storage-lvm.adoc
index e4aca9c..12db9dc 100644
--- a/pve-storage-lvm.adoc
+++ b/pve-storage-lvm.adoc
@@ -37,9 +37,9 @@ sure that all data gets erased.
 
 `saferemove_throughput`::
 
-Wipe throughput ('cstream -t' parameter value).
+Wipe throughput (`cstream -t` parameter value).
 
-.Configuration Example ('/etc/pve/storage.cfg')
+.Configuration Example (`/etc/pve/storage.cfg`)
 ----
 lvm: myspace
 	vgname myspace
diff --git a/pve-storage-lvmthin.adoc b/pve-storage-lvmthin.adoc
index 46e54b7..be730cf 100644
--- a/pve-storage-lvmthin.adoc
+++ b/pve-storage-lvmthin.adoc
@@ -10,7 +10,7 @@ called thin-provisioning, because volumes can be much larger than
 physically available space.
 
 You can use the normal LVM command line tools to manage and create LVM
-thin pools (see 'man lvmthin' for details). Assuming you already have
+thin pools (see `man lvmthin` for details). Assuming you already have
 an LVM volume group called `pve`, the following commands create a new
 LVM thin pool (size 100G) called `data`:
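
A minimal sketch of such a command sequence (assuming the volume group `pve`
has enough free space) could be:

----
# lvcreate -L 100G -n data pve
# lvconvert --type thin-pool pve/data
----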
 
@@ -35,7 +35,7 @@ LVM volume group name. This must point to an existing volume group.
 The name of the LVM thin pool.
 
 
-.Configuration Example ('/etc/pve/storage.cfg')
+.Configuration Example (`/etc/pve/storage.cfg`)
 ----
 lvmthin: local-lvm
 	thinpool data
diff --git a/pve-storage-nfs.adoc b/pve-storage-nfs.adoc
index b366b9b..7e08f7f 100644
--- a/pve-storage-nfs.adoc
+++ b/pve-storage-nfs.adoc
@@ -8,7 +8,7 @@ The NFS backend is based on the directory backend, so it shares most
 properties. The directory layout and the file naming conventions are
 the same. The main advantage is that you can directly configure the
 NFS server properties, so the backend can mount the share
-automatically. There is no need to modify '/etc/fstab'. The backend
+automatically. There is no need to modify `/etc/fstab`. The backend
 can also test if the server is online, and provides a method to query
 the server for exported shares.
 
@@ -34,13 +34,13 @@ You can also set NFS mount options:
 
 path::
 
-The local mount point (defaults to '/mnt/pve/`<STORAGE_ID>`/').
+The local mount point (defaults to `/mnt/pve/<STORAGE_ID>/`).
 
 options::
 
 NFS mount options (see `man nfs`).
 
-.Configuration Example ('/etc/pve/storage.cfg')
+.Configuration Example (`/etc/pve/storage.cfg`)
 ----
 nfs: iso-templates
 	path /mnt/pve/iso-templates
diff --git a/pve-storage-rbd.adoc b/pve-storage-rbd.adoc
index d38294b..f8edf85 100644
--- a/pve-storage-rbd.adoc
+++ b/pve-storage-rbd.adoc
@@ -46,7 +46,7 @@ krbd::
 Access rbd through krbd kernel module. This is required if you want to
 use the storage for containers.
 
-.Configuration Example ('/etc/pve/storage.cfg')
+.Configuration Example (`/etc/pve/storage.cfg`)
 ----
 rbd: ceph3
         monhost 10.1.1.20 10.1.1.21 10.1.1.22
@@ -55,15 +55,15 @@ rbd: ceph3
         username admin
 ----
 
-TIP: You can use the 'rbd' utility to do low-level management tasks.
+TIP: You can use the `rbd` utility to do low-level management tasks.
 
 Authentication
 ~~~~~~~~~~~~~~
 
-If you use cephx authentication, you need to copy the keyfile from
+If you use `cephx` authentication, you need to copy the keyfile from
 Ceph to the Proxmox VE host.
 
-Create the directory '/etc/pve/priv/ceph' with
+Create the directory `/etc/pve/priv/ceph` with
 
  mkdir /etc/pve/priv/ceph
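
Then copy the keyring from the Ceph cluster into a file named after the
storage ID, for example (assuming the storage is called `ceph3` as in the
example above, and the keyring is in its default location on the Ceph node):

 scp cephserver:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/ceph3.keyring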
 
diff --git a/pve-storage-zfspool.adoc b/pve-storage-zfspool.adoc
index c27e046..5df1165 100644
--- a/pve-storage-zfspool.adoc
+++ b/pve-storage-zfspool.adoc
@@ -27,7 +27,7 @@ sparse::
 Use ZFS thin-provisioning. A sparse volume is a volume whose
 reservation is not equal to the volume size.
 
-.Configuration Example ('/etc/pve/storage.cfg')
+.Configuration Example (`/etc/pve/storage.cfg`)
 ----
 zfspool: vmdata
         pool tank/vmdata
diff --git a/pveam.adoc b/pveam.adoc
index e62ab49..e503784 100644
--- a/pveam.adoc
+++ b/pveam.adoc
@@ -24,7 +24,7 @@ Container Images
 include::attributes.txt[]
 endif::manvolnum[]
 
-Command line tool to manage container images. See 'man pct' for usage
+Command line tool to manage container images. See `man pct` for usage
 examples.
 
 ifdef::manvolnum[]
diff --git a/pvecm.adoc b/pvecm.adoc
index e6e2058..867c658 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -23,13 +23,13 @@ Cluster Manager
 include::attributes.txt[]
 endif::manvolnum[]
 
-The {PVE} cluster manager 'pvecm' is a tool to create a group of
-physical servers. Such group is called a *cluster*. We use the
+The {PVE} cluster manager `pvecm` is a tool to create a group of
+physical servers. Such a group is called a *cluster*. We use the
 http://www.corosync.org[Corosync Cluster Engine] for reliable group
 communication, and such a cluster can consist of up to 32 physical nodes
 (probably more, dependent on network latency).
 
-'pvecm' can be used to create a new cluster, join nodes to a cluster,
+`pvecm` can be used to create a new cluster, join nodes to a cluster,
 leave the cluster, get status information and do various other cluster
 related tasks. The Proxmox Cluster file system (pmxcfs) is used to
 transparently distribute the cluster configuration to all cluster
@@ -41,9 +41,8 @@ Grouping nodes into a cluster has the following advantages:
 
 * Multi-master clusters: Each node can do all management tasks
 
-* Proxmox Cluster file system (pmxcfs): Database-driven file system
-  for storing configuration files, replicated in real-time on all
-  nodes using corosync.
+* `pmxcfs`: database-driven file system for storing configuration files,
+ replicated in real-time on all nodes using `corosync`.
 
 * Easy migration of Virtual Machines and Containers between physical
   hosts
@@ -56,7 +55,7 @@ Grouping nodes into a cluster has the following advantages:
 Requirements
 ------------
 
-* All nodes must be in the same network as corosync uses IP Multicast
+* All nodes must be in the same network as `corosync` uses IP Multicast
  to communicate between nodes (also see
  http://www.corosync.org[Corosync Cluster Engine]). Corosync uses UDP
  ports 5404 and 5405 for cluster communication.
@@ -87,13 +86,13 @@ installed with the final hostname and IP configuration. Changing the
 hostname and IP is not possible after cluster creation.
 
 Currently the cluster creation has to be done on the console, so you
-need to login via 'ssh'.
+need to login via `ssh`.
 
 Create the Cluster
 ------------------
 
-Login via 'ssh' to the first Proxmox VE node. Use a unique name for
-your cluster. This name cannot be changed later.
+Login via `ssh` to the first {pve} node. Use a unique name for your cluster.
+This name cannot be changed later.
 
  hp1# pvecm create YOUR-CLUSTER-NAME
 
@@ -109,7 +108,7 @@ To check the state of your cluster use:
 Adding Nodes to the Cluster
 ---------------------------
 
-Login via 'ssh' to the node you want to add.
+Login via `ssh` to the node you want to add.
 
  hp2# pvecm add IP-ADDRESS-CLUSTER
 
@@ -117,8 +116,8 @@ For `IP-ADDRESS-CLUSTER` use the IP from an existing cluster node.
 
 CAUTION: A new node cannot hold any VMs, because you would get
 conflicts about identical VM IDs. Also, all existing configuration in
-'/etc/pve' is overwritten when you join a new node to the cluster. To
-workaround, use vzdump to backup and restore to a different VMID after
+`/etc/pve` is overwritten when you join a new node to the cluster. As a
+workaround, use `vzdump` to back up and restore to a different VMID after
 adding the node to the cluster.
 
 To check the state of the cluster:
@@ -181,7 +180,7 @@ not be what you want or need.
 Move all virtual machines from the node. Make sure you have no local
 data or backups you want to keep, or save them accordingly.
 
-Log in to one remaining node via ssh. Issue a 'pvecm nodes' command to
+Log in to one remaining node via ssh. Issue a `pvecm nodes` command to
 identify the node ID:
 
 ----
@@ -230,12 +229,12 @@ Membership information
 ----
 
 Log in to one remaining node via ssh. Issue the delete command (here
-deleting node hp4):
+deleting node `hp4`):
 
  hp1# pvecm delnode hp4
 
 If the operation succeeds no output is returned, just check the node
-list again with 'pvecm nodes' or 'pvecm status'. You should see
+list again with `pvecm nodes` or `pvecm status`. You should see
 something like:
 
 ----
@@ -308,11 +307,11 @@ It is obvious that a cluster is not quorate when all nodes are
 offline. This is a common case after a power failure.
 
 NOTE: It is always a good idea to use an uninterruptible power supply
-('UPS', also called 'battery backup') to avoid this state. Especially if
+(``UPS'', also called ``battery backup'') to avoid this state, especially if
 you want HA.
 
-On node startup, service 'pve-manager' is started and waits for
-quorum. Once quorate, it starts all guests which have the 'onboot'
+On node startup, service `pve-manager` is started and waits for
+quorum. Once quorate, it starts all guests which have the `onboot`
 flag set.
 
 When you turn on nodes, or when power comes back after power failure,
diff --git a/pvedaemon.adoc b/pvedaemon.adoc
index 633d578..e3e5d14 100644
--- a/pvedaemon.adoc
+++ b/pvedaemon.adoc
@@ -24,11 +24,11 @@ ifndef::manvolnum[]
 include::attributes.txt[]
 endif::manvolnum[]
 
-This daemom exposes the whole {pve} API on 127.0.0.1:85. It runs as
-'root' and has permission to do all priviledged operations.
+This daemon exposes the whole {pve} API on `127.0.0.1:85`. It runs as
+`root` and has permission to do all privileged operations.
 
 NOTE: The daemon listens to a local address only, so you cannot access
-it from outside. The 'pveproxy' daemon exposes the API to the outside
+it from outside. The `pveproxy` daemon exposes the API to the outside
 world.
 
 
diff --git a/pveproxy.adoc b/pveproxy.adoc
index f7111a1..b09756d 100644
--- a/pveproxy.adoc
+++ b/pveproxy.adoc
@@ -25,9 +25,9 @@ include::attributes.txt[]
 endif::manvolnum[]
 
 This daemon exposes the whole {pve} API on TCP port 8006 using
-HTTPS. It runs as user 'www-data' and has very limited permissions.
+HTTPS. It runs as user `www-data` and has very limited permissions.
 Operations requiring more permissions are forwarded to the local
-'pvedaemon'.
+`pvedaemon`.
 
 Requests targeted for other nodes are automatically forwarded to those
 nodes. This means that you can manage your whole cluster by connecting
@@ -36,8 +36,8 @@ to a single {pve} node.
 Host based Access Control
 -------------------------
 
-It is possible to configure "apache2" like access control
-lists. Values are read from file '/etc/default/pveproxy'. For example:
+It is possible to configure ``apache2''-like access control
+lists. Values are read from the file `/etc/default/pveproxy`. For example:
 
 ----
 ALLOW_FROM="10.0.0.1-10.0.0.5,192.168.0.0/22"
@@ -46,9 +46,9 @@ POLICY="allow"
 ----
 
 IP addresses can be specified using any syntax understood by `Net::IP`. The
-name 'all' is an alias for '0/0'.
+name `all` is an alias for `0/0`.
 
-The default policy is 'allow'.
+The default policy is `allow`.
 
 [width="100%",options="header"]
 |===========================================================
@@ -63,7 +63,7 @@ The default policy is 'allow'.
 SSL Cipher Suite
 ----------------
 
-You can define the cipher list in '/etc/default/pveproxy', for example
+You can define the cipher list in `/etc/default/pveproxy`, for example
 
  CIPHERS="HIGH:MEDIUM:!aNULL:!MD5"
 
@@ -75,12 +75,12 @@ Diffie-Hellman Parameters
 -------------------------
 
 You can define the used Diffie-Hellman parameters in
-'/etc/default/pveproxy' by setting `DHPARAMS` to the path of a file
+`/etc/default/pveproxy` by setting `DHPARAMS` to the path of a file
 containing DH parameters in PEM format, for example
 
  DHPARAMS="/path/to/dhparams.pem"
 
-If this option is not set, the built-in 'skip2048' parameters will be
+If this option is not set, the built-in `skip2048` parameters will be
 used.
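
A custom parameter file can be generated with OpenSSL, for example (the
output path is only an illustration):

 # openssl dhparam -out /etc/pve/local/dhparams.pem 2048

and then referenced by setting `DHPARAMS` to that path as described above.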
 
 NOTE: DH parameters are only used if a cipher suite utilizing the DH key
@@ -89,20 +89,21 @@ exchange algorithm is negotiated.
 Alternative HTTPS certificate
 -----------------------------
 
-By default, pveproxy uses the certificate '/etc/pve/local/pve-ssl.pem'
-(and private key '/etc/pve/local/pve-ssl.key') for HTTPS connections.
+By default, pveproxy uses the certificate `/etc/pve/local/pve-ssl.pem`
+(and private key `/etc/pve/local/pve-ssl.key`) for HTTPS connections.
 This certificate is signed by the cluster CA certificate, and therefore
 not trusted by browsers and operating systems by default.
 
 In order to use a different certificate and private key for HTTPS,
 store the server certificate and any needed intermediate / CA
-certificates in PEM format in the file '/etc/pve/local/pveproxy-ssl.pem'
+certificates in PEM format in the file `/etc/pve/local/pveproxy-ssl.pem`
 and the associated private key in PEM format without a password in the
-file '/etc/pve/local/pveproxy-ssl.key'.
+file `/etc/pve/local/pveproxy-ssl.key`.
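
For example, assuming the certificate chain and key have already been copied
to the node as `fullchain.pem` and `privkey.pem` (hypothetical file names),
something along these lines should work:

----
# cp fullchain.pem /etc/pve/local/pveproxy-ssl.pem
# cp privkey.pem /etc/pve/local/pveproxy-ssl.key
# systemctl restart pveproxy
----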
 
 WARNING: Do not replace the automatically generated node certificate
-files in '/etc/pve/local/pve-ssl.pem'/'etc/pve/local/pve-ssl.key' or
-the cluster CA files in '/etc/pve/pve-root-ca.pem'/'/etc/pve/priv/pve-root-ca.key'.
+files in `/etc/pve/local/pve-ssl.pem` and `/etc/pve/local/pve-ssl.key` or
+the cluster CA files in `/etc/pve/pve-root-ca.pem` and
+`/etc/pve/priv/pve-root-ca.key`.
 
 ifdef::manvolnum[]
 include::pve-copyright.adoc[]
diff --git a/pvesm.adoc b/pvesm.adoc
index 1e45b67..270fc97 100644
--- a/pvesm.adoc
+++ b/pvesm.adoc
@@ -36,7 +36,7 @@ live-migrate running machines without any downtime, as all nodes in
 the cluster have direct access to VM disk images. There is no need to
 copy VM image data, so live migration is very fast in that case.
 
-The storage library (package 'libpve-storage-perl') uses a flexible
+The storage library (package `libpve-storage-perl`) uses a flexible
 plugin system to provide a common interface to all storage types. This
 can be easily adapted to include further storage types in the future.
 
@@ -81,13 +81,13 @@ snapshots and clones.
 |=========================================================
 
 TIP: It is possible to use LVM on top of an iSCSI storage. That way
-you get a 'shared' LVM storage.
+you get a `shared` LVM storage.
 
 Thin provisioning
 ~~~~~~~~~~~~~~~~~
 
-A number of storages, and the Qemu image format `qcow2`, support _thin
-provisioning_.  With thin provisioning activated, only the blocks that
+A number of storages, and the Qemu image format `qcow2`, support 'thin
+provisioning'.  With thin provisioning activated, only the blocks that
 the guest system actually uses will be written to the storage.
 
 Say for instance you create a VM with a 32GB hard disk, and after
@@ -99,7 +99,7 @@ available storage blocks. You can create large disk images for your
 VMs, and when the need arises, add more disks to your storage without
 resizing the VMs filesystems.
 
-All storage types which have the 'Snapshots' feature also support thin
+All storage types which have the ``Snapshots'' feature also support thin
 provisioning.
 
 CAUTION: If a storage runs full, all guests using volumes on that
@@ -112,12 +112,12 @@ Storage Configuration
 ---------------------
 
 All {pve} related storage configuration is stored within a single text
-file at '/etc/pve/storage.cfg'. As this file is within '/etc/pve/', it
+file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
 gets automatically distributed to all cluster nodes. So all nodes
 share the same storage configuration.
 
 Sharing storage configuration makes perfect sense for shared storage,
-because the same 'shared' storage is accessible from all nodes. But is
+because the same ``shared'' storage is accessible from all nodes. But it is
 also useful for local storage types. In this case such local storage
 is available on all nodes, but it is physically different and can have
 totally different content.
@@ -140,11 +140,11 @@ them come with reasonable default. In that case you can omit the value.
 
 To be more specific, take a look at the default storage configuration
 after installation. It contains one special local storage pool named
-`local`, which refers to the directory '/var/lib/vz' and is always
+`local`, which refers to the directory `/var/lib/vz` and is always
 available. The {pve} installer creates additional storage entries
 depending on the storage type chosen at installation time.
 
-.Default storage configuration ('/etc/pve/storage.cfg')
+.Default storage configuration (`/etc/pve/storage.cfg`)
 ----
 dir: local
 	path /var/lib/vz
@@ -195,7 +195,7 @@ Container templates.
 
 backup:::
 
-Backup files ('vzdump').
+Backup files (`vzdump`).
 
 iso:::
 
@@ -248,7 +248,7 @@ To get the filesystem path for a `<VOLUME_ID>` use:
 Volume Ownership
 ~~~~~~~~~~~~~~~~
 
-There exists an ownership relation for 'image' type volumes. Each such
+There exists an ownership relation for `image` type volumes. Each such
 volume is owned by a VM or Container. For example volume
 `local:230/example-image.raw` is owned by VM 230. Most storage
 backends encode this ownership information into the volume name.
@@ -266,8 +266,8 @@ of those low level operations on the command line. Normally,
 allocation and removal of volumes is done by the VM and Container
 management tools.
 
-Nevertheless, there is a command line tool called 'pvesm' ({pve}
-storage manager), which is able to perform common storage management
+Nevertheless, there is a command line tool called `pvesm` (``{pve}
+Storage Manager''), which is able to perform common storage management
 tasks.
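
A few typical invocations, sketched here using the default `local` storage
and a hypothetical VM 100:

----
# pvesm status                  # show all configured storages and their usage
# pvesm list local              # list volumes on the storage `local`
# pvesm alloc local 100 '' 4G   # allocate a new 4GB volume owned by VM 100
----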
 
 
diff --git a/pveum.adoc b/pveum.adoc
index b307595..f68b243 100644
--- a/pveum.adoc
+++ b/pveum.adoc
@@ -37,7 +37,7 @@ objects (VM´s, storages, nodes, etc.) granular access can be defined.
 Authentication Realms
 ---------------------
 
-Proxmox VE stores all user attributes in '/etc/pve/user.cfg'. So there
+Proxmox VE stores all user attributes in `/etc/pve/user.cfg`. So there
 must be an entry for each user in that file. The password is not
 stored, instead you can configure several realms to verify
 passwords.
@@ -48,9 +48,9 @@ LDAP::
 
 Linux PAM standard authentication::
 
-You need to create the system users first with 'adduser'
-(e.g. adduser heinz) and possibly the group as well. After that you
-can create the user on the GUI!
+You need to create the system users first with `adduser`
+(e.g. `adduser heinz`) and possibly the group as well. After that you
+can create the user on the GUI.
 
 [source,bash]
 ----
@@ -63,7 +63,7 @@ usermod -a -G watchman heinz
 Proxmox VE authentication server::
 
 This is a unix like password store
-('/etc/pve/priv/shadow.cfg'). Password are encrypted using the SHA-256
+(`/etc/pve/priv/shadow.cfg`). Passwords are encrypted using the SHA-256
 hash method. Users are allowed to change passwords.
 
 Terms and Definitions
@@ -76,7 +76,7 @@ A Proxmox VE user name consists of two parts: `<userid>@<realm>`. The
 login screen on the GUI shows them as separate items, but it is
 internally used as a single string.
 
-We store the following attribute for users ('/etc/pve/user.cfg'):
+We store the following attributes for users (`/etc/pve/user.cfg`):
 
 * first name
 * last name
@@ -88,7 +88,7 @@ We store the following attribute for users ('/etc/pve/user.cfg'):
 Superuser
 ^^^^^^^^^
 
-The traditional unix superuser account is called 'root at pam'. All
+The traditional unix superuser account is called `root at pam`. All
 system mails are forwarded to the email assigned to that account.
 
 Groups
@@ -103,8 +103,8 @@ Objects and Paths
 ~~~~~~~~~~~~~~~~~
 
 Access permissions are assigned to objects, such as a virtual machines
-('/vms/\{vmid\}') or a storage ('/storage/\{storeid\}') or a pool of
-resources ('/pool/\{poolname\}'). We use filesystem like paths to
+(`/vms/{vmid}`) or a storage (`/storage/{storeid}`) or a pool of
+resources (`/pool/{poolname}`). We use file system like paths to
 address those objects. Those paths form a natural tree, and
 permissions can be inherited down that hierarchy.
 
@@ -221,7 +221,7 @@ Pools
 ~~~~~
 
 Pools can be used to group a set of virtual machines and data
-stores. You can then simply set permissions on pools ('/pool/\{poolid\}'),
+stores. You can then simply set permissions on pools (`/pool/{poolid}`),
 which are inherited to all pool members. This is a great way to simplify
 access control.
 
@@ -229,8 +229,8 @@ Command Line Tool
 -----------------
 
 Most users will simply use the GUI to manage users. But there is also
-a full featured command line tool called 'pveum' (short for 'Proxmox
-VE User Manager'). I will use that tool in the following
+a full featured command line tool called `pveum` (short for ``**P**roxmox
+**VE** **U**ser **M**anager''). I will use that tool in the following
 examples. Please note that all Proxmox VE command line tools are
 wrappers around the API, so you can also access those functions through
 the REST API.
@@ -302,12 +302,12 @@ Auditors
 You can give read only access to users by assigning the `PVEAuditor`
 role to users or groups.
 
-Example1: Allow user 'joe at pve' to see everything
+Example 1: Allow user `joe at pve` to see everything
 
 [source,bash]
  pveum aclmod / -user joe at pve -role PVEAuditor
 
-Example1: Allow user 'joe at pve' to see all virtual machines
+Example 2: Allow user `joe at pve` to see all virtual machines
 
 [source,bash]
  pveum aclmod /vms -user joe at pve -role PVEAuditor
@@ -315,24 +315,25 @@ Example1: Allow user 'joe at pve' to see all virtual machines
 Delegate User Management
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-If you want to delegate user managenent to user 'joe at pve' you can do
+If you want to delegate user management to user `joe at pve`, you can do
 that with:
 
 [source,bash]
  pveum aclmod /access -user joe at pve -role PVEUserAdmin
 
-User 'joe at pve' can now add and remove users, change passwords and
+User `joe at pve` can now add and remove users, change passwords and
 other user attributes. This is a very powerful role, and you most
 likely want to limit that to selected realms and groups. The following
-example allows 'joe at pve' to modify users within realm 'pve' if they
-are members of group 'customers':
+example allows `joe at pve` to modify users within realm `pve` if they
+are members of group `customers`:
 
 [source,bash]
  pveum aclmod /access/realm/pve -user joe at pve -role PVEUserAdmin
  pveum aclmod /access/groups/customers -user joe at pve -role PVEUserAdmin
 
 NOTE: The user is able to add other users, but only if they are
-members of group 'customers' and within realm 'pve'.
+members of group `customers` and within realm `pve`.
+
 
 Pools
 ~~~~~
@@ -359,7 +360,7 @@ Now we create a new user which is a member of that group
 
 NOTE: The -password parameter will prompt you for a password
 
-I assume we already created a pool called 'dev-pool' on the GUI. So we can now assign permission to that pool:
+I assume we already created a pool called ``dev-pool'' on the GUI. So we can now assign permissions to that pool:
 
 [source,bash]
  pveum aclmod /pool/dev-pool/ -group developers -role PVEAdmin
diff --git a/qm.adoc b/qm.adoc
index 67e5da9..bbafe7c 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -401,7 +401,7 @@ with a press of the ESC button during boot), or you have to choose
 SPICE as the display type.
 
 
-Managing Virtual Machines with 'qm'
+Managing Virtual Machines with `qm`
 ------------------------------------
 
 `qm` is the tool to manage Qemu/KVM virtual machines on {pve}. You can
@@ -437,7 +437,7 @@ All configuration files consists of lines in the form
  PARAMETER: value
 
 Configuration files are stored inside the Proxmox cluster file
-system, and can be accessed at '/etc/pve/qemu-server/<VMID>.conf'.
+system, and can be accessed at `/etc/pve/qemu-server/<VMID>.conf`.
 
 Options
 ~~~~~~~
@@ -448,7 +448,7 @@ include::qm.conf.5-opts.adoc[]
 Locks
 -----
 
-Online migrations and backups ('vzdump') set a lock to prevent incompatible
+Online migrations and backups (`vzdump`) set a lock to prevent incompatible
 concurrent actions on the affected VMs. Sometimes you need to remove such a
 lock manually (e.g., after a power failure).
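
For example, to clear such a stale lock on a hypothetical VM 100:

 # qm unlock 100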
 
diff --git a/qm.conf.adoc b/qm.conf.adoc
index ce5ce15..641e24c 100644
--- a/qm.conf.adoc
+++ b/qm.conf.adoc
@@ -25,7 +25,7 @@ ifndef::manvolnum[]
 include::attributes.txt[]
 endif::manvolnum[]
 
-The '/etc/pve/qemu-server/<VMID>.conf' files stores VM configuration,
+The `/etc/pve/qemu-server/<VMID>.conf` file stores the VM configuration,
 where "VMID" is the numeric ID of the given VM.
 
 NOTE: IDs <= 100 are reserved for internal purposes.
@@ -39,10 +39,10 @@ the following format:
 
  OPTION: value
 
-Blank lines in the file are ignored, and lines starting with a '#'
+Blank lines in the file are ignored, and lines starting with a `#`
 character are treated as comments and are also ignored.
 
-One can use the 'qm' command to generate and modify those files.
+One can use the `qm` command to generate and modify those files.
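
For example, the following sketch (using a hypothetical VM 100) sets an
option in the file above and then prints the resulting configuration:

----
# qm set 100 -memory 2048
# qm config 100
----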
 
 
 Options
diff --git a/qmrestore.adoc b/qmrestore.adoc
index b3fb53a..864a04b 100644
--- a/qmrestore.adoc
+++ b/qmrestore.adoc
@@ -6,7 +6,7 @@ include::attributes.txt[]
 NAME
 ----
 
-qmrestore - Restore QemuServer 'vzdump' Backups
+qmrestore - Restore QemuServer `vzdump` Backups
 
 SYNOPSYS
 --------
@@ -24,9 +24,9 @@ include::attributes.txt[]
 endif::manvolnum[]
 
 
-Restore the QemuServer vzdump backup 'archive' to virtual machine
-'vmid'. Volumes are allocated on the original storage if there is no
-'storage' specified.
+Restore the QemuServer vzdump backup `archive` to virtual machine
+`vmid`. Volumes are allocated on the original storage if there is no
+`storage` specified.
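
For example, to restore a backup of VM 100 as a new VM 101 onto the `local`
storage (the archive name is only an illustration):

 # qmrestore /var/lib/vz/dump/vzdump-qemu-100.vma.lzo 101 -storage local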
 
 ifdef::manvolnum[]
 include::pve-copyright.adoc[]
diff --git a/spiceproxy.adoc b/spiceproxy.adoc
index 33737e7..aeeb6a0 100644
--- a/spiceproxy.adoc
+++ b/spiceproxy.adoc
@@ -32,15 +32,15 @@ machines and container.
 
 This daemon listens on TCP port 3128, and implements an HTTP proxy to
 forward `CONNECT` requests from the SPICE client to the correct {pve}
-VM. It runs as user 'www-data' and has very limited permissions.
+VM. It runs as user `www-data` and has very limited permissions.
 
 
 Host based Access Control
 -------------------------
 
 It is possible to configure "apache2" like access control
-lists. Values are read from file '/etc/default/pveproxy'.
-See 'pveproxy' documentation for details.
+lists. Values are read from file `/etc/default/pveproxy`.
+See `pveproxy` documentation for details.
 
 
 ifdef::manvolnum[]
diff --git a/sysadmin.adoc b/sysadmin.adoc
index 855b895..520da27 100644
--- a/sysadmin.adoc
+++ b/sysadmin.adoc
@@ -57,7 +57,7 @@ Recommended system requirements
 
 * RAM: 8 GB is good, more is better
 
-* Hardware RAID with batteries protected write cache (BBU) or flash
+* Hardware RAID with battery-protected write cache (``BBU'') or flash
  based protection
 
 * Fast hard drives, best results with 15k rpm SAS, Raid10
diff --git a/system-software-updates.adoc b/system-software-updates.adoc
index 5d20daf..78cd3fd 100644
--- a/system-software-updates.adoc
+++ b/system-software-updates.adoc
@@ -4,12 +4,12 @@ include::attributes.txt[]
 
 We provide regular package updates on all repositories. You can
 install those updates using the GUI, or you can directly run the CLI
-command 'apt-get':
+command `apt-get`:
 
  apt-get update
  apt-get dist-upgrade
 
-NOTE: The 'apt' package management system is extremely flexible and
+NOTE: The `apt` package management system is extremely flexible and
 provides countless features - see `man apt-get` or <<Hertzog13>> for
 additional information.
 
diff --git a/vzdump.adoc b/vzdump.adoc
index 5341200..304d868 100644
--- a/vzdump.adoc
+++ b/vzdump.adoc
@@ -80,7 +80,7 @@ This mode provides the lowest operation downtime, at the cost of a
 small inconsistency risk.  It works by performing a Proxmox VE live
 backup, in which data blocks are copied while the VM is running. If the
 guest agent is enabled (`agent: 1`) and running, it calls
-'guest-fsfreeze-freeze' and 'guest-fsfreeze-thaw' to improve
+`guest-fsfreeze-freeze` and `guest-fsfreeze-thaw` to improve
 consistency.
 
 A technical overview of the Proxmox VE live backup for QemuServer can
@@ -122,7 +122,7 @@ snapshot content will be archived in a tar file. Finally, the temporary
 snapshot is deleted again.
 
 NOTE: `snapshot` mode requires that all backed up volumes are on a storage that
-supports snapshots. Using the `backup=no` mountpoint option individual volumes
+supports snapshots. Using the `backup=no` mount point option, individual volumes
 can be excluded from the backup (and thus this requirement).
 
 NOTE: bind and device mountpoints are skipped during backup operations, like
@@ -156,13 +156,13 @@ For details see the corresponding manual pages.
 Configuration
 -------------
 
-Global configuration is stored in '/etc/vzdump.conf'. The file uses a
+Global configuration is stored in `/etc/vzdump.conf`. The file uses a
 simple colon separated key/value format. Each line has the following
 format:
 
  OPTION: value
 
-Blank lines in the file are ignored, and lines starting with a '#'
+Blank lines in the file are ignored, and lines starting with a `#`
 character are treated as comments and are also ignored. Values from
 this file are used as default, and can be overwritten on the command
 line.
@@ -172,7 +172,7 @@ We currently support the following options:
 include::vzdump.conf.5-opts.adoc[]
 
 
-.Example 'vzdump.conf' Configuration
+.Example `vzdump.conf` Configuration
 ----
 tmpdir: /mnt/fast_local_disk
 storage: my_backup_storage
@@ -186,14 +186,14 @@ Hook Scripts
 You can specify a hook script with option `--script`. This script is
 called at various phases of the backup process, with parameters
 accordingly set. You can find an example in the documentation
-directory ('vzdump-hook-script.pl').
+directory (`vzdump-hook-script.pl`).
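
For example, to run a hypothetical hook script for the backup of guest 777:

 # vzdump 777 --script /usr/local/bin/backup-hook.pl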
 
 File Exclusions
 ---------------
 
 NOTE: this option is only available for container backups.
 
-'vzdump' skips the following files by default (disable with the option
+`vzdump` skips the following files by default (disable with the option
 `--stdexcludes 0`)
 
  /tmp/?*
@@ -214,7 +214,7 @@ Examples
 
 Simply dump guest 777 - no snapshot, just archive the guest private area and
 configuration files to the default dump directory (usually
-'/var/lib/vz/dump/').
+`/var/lib/vz/dump/`).
 
  # vzdump 777
 
-- 
2.1.4




