[pve-devel] [PATCH docs 1/4] Rewrite Intro

Aaron Lauterer a.lauterer at proxmox.com
Mon Jun 17 14:56:41 CEST 2019


Polished the phrasing and restructured some parts to improve readability
and comprehension.

Signed-off-by: Aaron Lauterer <a.lauterer at proxmox.com>
---
 pve-intro.adoc | 229 +++++++++++++++++++++++--------------------------
 1 file changed, 108 insertions(+), 121 deletions(-)

diff --git a/pve-intro.adoc b/pve-intro.adoc
index b55889b..3425997 100644
--- a/pve-intro.adoc
+++ b/pve-intro.adoc
@@ -2,16 +2,16 @@ Introduction
 ============
 
 {pve} is a platform to run virtual machines and containers. It is
-based on Debian Linux, and completely open source. For maximum
-flexibility, we implemented two virtualization technologies -
-Kernel-based Virtual Machine (KVM) and container-based virtualization
-(LXC).
+based on https://www.debian.org[Debian] Linux and is completely open
+source. For maximum flexibility, it supports two virtualization
+technologies: Kernel-based Virtual Machine (KVM) and container-based
+virtualization (LXC).
 
-One main design goal was to make administration as easy as
-possible. You can use {pve} on a single node, or assemble a cluster of
-many nodes. All management tasks can be done using our web-based
-management interface, and even a novice user can setup and install
-{pve} within minutes.
+{pve} can run on a single node or as a cluster spanning many nodes.
+Easy administration is one of the main goals of the {pve} project.
+All management tasks can be handled through the web-based interface.
+Together with the user-friendly installer, this enables even novice
+users to install and set up {pve} within minutes.
 
 image::images/pve-software-stack.svg["Proxmox Software Stack",align="center"]
 
@@ -23,87 +23,76 @@ While many people start with a single node, {pve} can scale out to a
 large set of clustered nodes. The cluster stack is fully integrated
 and ships with the default installation.
 
+Web-based Management Interface::
+
+There is no need for a dedicated management server or software tool.
+The easy-to-use web-based management interface offers all controls
+needed to manage a cluster. This includes running backup and restore
+jobs, live migration of virtual machines, and activities triggered by
+high availability.
+
 Unique Multi-Master Design::
 
-The integrated web-based management interface gives you a clean
-overview of all your KVM guests and Linux containers and even of your
-whole cluster. You can easily manage your VMs and containers, storage
-or cluster from the GUI. There is no need to install a separate,
-complex, and pricey management server.
+There is no dedicated master in a cluster. Every node offers the
+web-based interface from which the whole cluster can be managed.
 
 Proxmox Cluster File System (pmxcfs)::
 
-Proxmox VE uses the unique Proxmox Cluster file system (pmxcfs), a
-database-driven file system for storing configuration files. This
-enables you to store the configuration of thousands of virtual
-machines. By using corosync, these files are replicated in real time
-on all cluster nodes. The file system stores all data inside a
-persistent database on disk, nonetheless, a copy of the data resides
-in RAM which provides a maximum storage size of 30MB - more than
-enough for thousands of VMs.
-+
-Proxmox VE is the only virtualization platform using this unique
-cluster file system.
+The Proxmox Cluster file system (pmxcfs) is a database-driven file
+system for storing the configuration files of virtual machines. In
+combination with corosync, these configuration files are replicated
+in real time between all cluster nodes.
 
-Web-based Management Interface::
-
-Proxmox VE is simple to use. Management tasks can be done via the
-included web based management interface - there is no need to install a
-separate management tool or any additional management node with huge
-databases. The multi-master tool allows you to manage your whole
-cluster from any node of your cluster. The central web-based
-management - based on the JavaScript Framework (ExtJS) - empowers
-you to control all functionalities from the GUI and overview history
-and syslogs of each single node. This includes running backup or
-restore jobs, live-migration or HA triggered activities.
+{pve} is the only virtualization platform using this unique
+cluster file system.
 
 Command Line::
 
-For advanced users who are used to the comfort of the Unix shell or
-Windows Powershell, Proxmox VE provides a command line interface to
-manage all the components of your virtual environment. This command
-line interface has intelligent tab completion and full documentation
-in the form of UNIX man pages.
+For advanced users who prefer a console/shell, {pve} provides
+comprehensive command line tools. The command line interface has
+intelligent tab completion and is fully documented with UNIX man
+pages.
 
 REST API::
 
-Proxmox VE uses a RESTful API. We choose JSON as primary data format,
-and the whole API is formally defined using JSON Schema. This enables
-fast and easy integration for third party management tools like custom
-hosting environments.
+To enable quick and easy integration with third-party tools and
+workflows, {pve} offers a RESTful API. It uses JSON as the primary
+data format and is formally defined with JSON Schema.
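As a hypothetical illustration of what a JSON-Schema-defined API enables, the sketch below validates a JSON response against a minimal, hand-rolled schema check. The schema fragment, field names, and validator are invented for this example; they are not the real {pve} API definition or client code.

```python
import json

# Invented fragment in the spirit of a JSON-Schema definition;
# the real {pve} API schema is much larger.
schema = {
    "type": "object",
    "required": ["node", "status"],
    "properties": {
        "node": {"type": "string"},
        "status": {"type": "string"},
        "uptime": {"type": "integer"},
    },
}

# Map schema type names to Python types for the structural check.
TYPE_MAP = {"string": str, "integer": int, "object": dict}

def validate(obj, schema):
    """Minimal structural check: required keys present, types match."""
    if not isinstance(obj, TYPE_MAP[schema["type"]]):
        return False
    for key in schema.get("required", []):
        if key not in obj:
            return False
    for key, sub in schema.get("properties", {}).items():
        if key in obj and not isinstance(obj[key], TYPE_MAP[sub["type"]]):
            return False
    return True

# A response shaped like what such a schema would describe.
response = json.loads('{"node": "pve1", "status": "online", "uptime": 12345}')
print(validate(response, schema))  # True
```

A formally defined schema means third-party tools can perform exactly this kind of check, or even generate client bindings, without guessing at the API's shape.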
 
 Role-based Administration::
 
-You can define granular access for all objects (like VMs, storages,
-nodes, etc.) by using the role based user- and permission
-management. This allows you to define privileges and helps you to
-control access to objects. This concept is also known as access
-control lists: Each permission specifies a subject (a user or group)
-and a role (set of privileges) on a specific path.
+{pve} offers a granular, role-based user and permission management
+system. These access control lists allow specifying a subject (a
+user or group of users) and a role (a set of privileges) on a
+specific path. A path can refer to any object, such as a virtual
+machine, a storage, or a physical node.
 
 Authentication Realms::
 
-Proxmox VE supports multiple authentication sources like Microsoft
+{pve} supports multiple authentication sources like Microsoft
 Active Directory, LDAP, Linux PAM standard authentication or the
-built-in Proxmox VE authentication server.
+built-in {pve} authentication server.
 
 
 Flexible Storage
 ----------------
 
-The Proxmox VE storage model is very flexible. Virtual machine images
-can either be stored on one or several local storages or on shared
-storage like NFS and on SAN. There are no limits, you may configure as
-many storage definitions as you like. You can use all storage
-technologies available for Debian Linux.
+The {pve} storage model is very flexible. Virtual machine images
+can be stored on one storage or spread over multiple storages. A
+storage can either be local to the host or use one of the many
+remote storage technologies available on Debian Linux. A complete
+list of supported storage technologies can be found below.
 
-One major benefit of storing VMs on shared storage is the ability to
-live-migrate running machines without any downtime, as all nodes in
-the cluster have direct access to VM disk images.
+There is no limit to the number of storages that can be defined.
+
+One major benefit of storing VMs on shared storage is the ability to
+live-migrate running machines without any downtime, as all nodes in
+the cluster have direct access to the virtual machine disk images.
 
 We currently support the following Network storage types:
 
-* LVM Group (network backing with iSCSI targets)
+* LVM Group (network storage backed by iSCSI targets)
 * iSCSI target
 * NFS Share
 * CIFS Share
@@ -111,53 +100,52 @@ We currently support the following Network storage types:
 * Directly use iSCSI LUNs
 * GlusterFS
 
-Local storage types supported are:
+Local storage types supported:
 
-* LVM Group (local backing devices like block devices, FC devices, DRBD, etc.)
-* Directory (storage on existing filesystem)
+* LVM Group (locally backed devices like block devices, FC devices,
+  DRBD, etc.)
+* Directory (storage on an existing file system)
 * ZFS
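For illustration, storage definitions are kept in a single configuration file (`/etc/pve/storage.cfg`). A sketch of what two definitions, one local directory and one NFS share, might look like follows; the storage names, paths, and addresses are invented for the example:

```
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

nfs: vm-store
        server 192.168.1.10
        export /tank/vms
        content images
```

Each stanza names a storage type and identifier, followed by its properties and the content types it may hold.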
 
 
 Integrated Backup and Restore
 -----------------------------
 
-The integrated backup tool (`vzdump`) creates consistent snapshots of
-running Containers and KVM guests. It basically creates an archive of
-the VM or CT data which includes the VM/CT configuration files.
+The integrated backup tool (`vzdump`) creates backups of running
+containers (CT) and KVM virtual machines (VM), including their
+configuration. This is done by creating a consistent snapshot of the
+VM or CT.
 
-KVM live backup works for all storage types including VM images on
-NFS, CIFS, iSCSI LUN, Ceph RBD. The new backup format is optimized for storing
-VM backups fast and effective (sparse files, out of order data, minimized I/O).
+KVM live backups work for all storage types, including VM images on
+NFS, CIFS, iSCSI LUN, and Ceph RBD. The new backup format is
+optimized for storing VM backups in a fast and effective way (sparse
+files, out-of-order data, minimized I/O).
 
 
 High Availability Cluster
 -------------------------
 
-A multi-node Proxmox VE HA Cluster enables the definition of highly
-available virtual servers. The Proxmox VE HA Cluster is based on
-proven Linux HA technologies, providing stable and reliable HA
-services.
+When run as a multi-node cluster, the {pve} HA Cluster builds on
+proven Linux HA technologies to provide stable and reliable highly
+available virtual machines.
 
 
 Flexible Networking
 -------------------
 
-Proxmox VE uses a bridged networking model. All VMs can share one
-bridge as if virtual network cables from each guest were all plugged
-into the same switch. For connecting VMs to the outside world, bridges
-are attached to physical network cards and assigned a TCP/IP
-configuration.
-
-For further flexibility, VLANs (IEEE 802.1q) and network
-bonding/aggregation are possible. In this way it is possible to build
-complex, flexible virtual networks for the Proxmox VE hosts,
-leveraging the full power of the Linux network stack.
+{pve} uses a bridged networking model. VMs can be connected to a
+bridge, which acts like a virtual switch. To connect VMs to the
+outside world, these bridges are attached to physical network
+interfaces.
 
+For further flexibility, it is possible to configure VLANs
+(IEEE 802.1q) and network bonding/aggregation. This enables complex,
+flexible virtual networks leveraging the full power of the Linux
+network stack.
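As a sketch of the bridged model, a typical Linux bridge definition on a {pve} host (in `/etc/network/interfaces`, Debian ifupdown syntax) might look like the following; the interface names and addresses here are examples, not defaults:

```
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.2/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```

VM virtual network interfaces are then attached to `vmbr0`, as if plugged into the same physical switch as `eno1`.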
 
 Integrated Firewall
 -------------------
 
-The integrated firewall allows you to filter network packets on
+The integrated firewall allows filtering network packets on
 any VM or Container interface. Common sets of firewall rules can
 be grouped into ``security groups''.
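To illustrate, a security group is defined once and can then be assigned to any VM or container interface. A hypothetical group in the cluster-wide firewall configuration could look like this (the group name is invented; the rule syntax is abbreviated):

```
[group webserver]

IN ACCEPT -p tcp -dport 80
IN ACCEPT -p tcp -dport 443
```

Changing the group's rules then takes effect everywhere the group is assigned, instead of having to edit each interface individually.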
 
@@ -165,8 +153,8 @@ be grouped into ``security groups''.
 Why Open Source
 ---------------
 
-Proxmox VE uses a Linux kernel and is based on the Debian GNU/Linux
-Distribution. The source code of Proxmox VE is released under the
+{pve} uses a Linux kernel and is based on the Debian GNU/Linux
+Distribution. The source code of {pve} is released under the
 http://www.gnu.org/licenses/agpl-3.0.html[GNU Affero General Public
 License, version 3]. This means that you are free to inspect the
 source code at any time or contribute to the project yourself.
@@ -192,54 +180,53 @@ Your benefits with {pve}
 * Fast installation and easy-to-use
 * Web-based management interface
 * REST API
-* Huge active community
+* Large and active community
 * Low administration costs and simple deployment
 
 include::getting-help.adoc[]
 
 
 Project History
 ---------------
 
-The project started in 2007, followed by a first stable version in
-2008. At the time we used OpenVZ for containers, and KVM for virtual
-machines. The clustering features were limited, and the user interface
-was simple (server generated web page).
-
-But we quickly developed new features using the
-http://corosync.github.io/corosync/[Corosync] cluster stack, and the
-introduction of the new Proxmox cluster file system (pmxcfs) was a big
-step forward, because it completely hides the cluster complexity from
-the user. Managing a cluster of 16 nodes is as simple as managing a
-single node.
-
-We also introduced a new REST API, with a complete declarative
-specification written in JSON-Schema. This enabled other people to
-integrate {pve} into their infrastructure, and made it easy to provide
-additional services.
-
-Also, the new REST API made it possible to replace the original user
-interface with a modern HTML5 application using JavaScript. We also
-replaced the old Java based VNC console code with
-https://kanaka.github.io/noVNC/[noVNC]. So you only need a web browser
-to manage your VMs.
+The project started in 2007, followed by the first stable version in
+2008. At the time OpenVZ was used for containers, and KVM
+for virtual machines. The clustering features were limited, and the
+user interface was simple (server generated web page).
+
+Using the http://corosync.github.io/corosync/[Corosync] cluster stack,
+new features were quickly developed. The introduction of the new
+Proxmox cluster file system (pmxcfs) was a big step forward as it
+hides the cluster complexity from the user. Managing a cluster of
+many nodes is as simple as managing a single node.
+
+A new REST API was introduced with a declarative specification written
+in JSON-Schema. This enabled other people to integrate {pve} into
+their infrastructure, and made it easy to provide additional services.
+
+The new REST API made it possible to replace the original user
+interface with a modern HTML5 application using JavaScript.
+To reduce the dependencies needed on the user's computer, the old
+Java-based VNC console was replaced with
+https://kanaka.github.io/noVNC/[noVNC], which runs completely within
+the browser.
 
 The support for various storage types is another big task. Notably,
 {pve} was the first distribution to ship ZFS on Linux by default in
-2014. Another milestone was the ability to run and manage
-http://ceph.com/[Ceph] storage on the hypervisor nodes. Such setups
-are extremely cost effective.
+2014. Another milestone was the ability to run and manage
+http://ceph.com/[Ceph] storage on the hypervisor nodes. Such setups
+are extremely cost-effective.
 
-When we started we were among the first companies providing
+When we started, Proxmox was among the first companies providing
 commercial support for KVM. The KVM project itself continuously
 evolved, and is now a widely used hypervisor. New features arrive
-with each release. We developed the KVM live backup feature, which
-makes it possible to create snapshot backups on any storage type.
+with each release. Proxmox developed the KVM live backup feature,
+which makes it possible to create snapshot backups on any storage
+type.
 
 The most notable change with version 4.0 was the move from OpenVZ to
-https://linuxcontainers.org/[LXC]. Containers are now deeply
-integrated, and they can use the same storage and network features
-as virtual machines.
+https://linuxcontainers.org/[LXC]. Containers are now deeply
+integrated, and can use the same storage and network features as
+KVM-based virtual machines.
 
 include::howto-improve-pve-docs.adoc[]
 include::translation.adoc[]
-- 
2.20.1




