[pve-devel] [PATCH docs] Add storage plugin CephFS to docs

Alwin Antreich a.antreich at proxmox.com
Mon Jun 25 18:51:09 CEST 2018

Signed-off-by: Alwin Antreich <a.antreich at proxmox.com>
 pve-storage-cephfs.adoc | 106 ++++++++++++++++++++++++++++++++++++++++++++++++
 pvesm.adoc              |   3 ++
 2 files changed, 109 insertions(+)
 create mode 100644 pve-storage-cephfs.adoc

diff --git a/pve-storage-cephfs.adoc b/pve-storage-cephfs.adoc
new file mode 100644
index 0000000..59a87b3
--- /dev/null
+++ b/pve-storage-cephfs.adoc
@@ -0,0 +1,106 @@
+Ceph Filesystem (CephFS)
+------------------------
+:title: Storage: CephFS
+
+Storage pool type: `cephfs`
+
+http://ceph.com[Ceph] is a distributed object store and file system designed to
+provide excellent performance, reliability and scalability. CephFS implements a
+POSIX-compliant filesystem, with the following advantages:
+
+* thin provisioning
+* distributed and redundant (striped over multiple OSDs)
+* snapshot capabilities
+* self healing
+* no single point of failure
+* scalable to the exabyte level
+* kernel and user space implementation available
+
+NOTE: For smaller deployments, it is also possible to run Ceph
+services directly on your {pve} nodes. Recent hardware has plenty
+of CPU power and RAM, so running storage services and VMs on the same node
+is possible.
+
+This backend supports the common storage properties `nodes`,
+`disable`, `content`, and the following `cephfs` specific properties:
+
+monhost::
+List of monitor daemon IPs. Optional, only needed if Ceph is not running on the
+PVE cluster.
+
+path::
+The local mount point. Optional, defaults to `/mnt/pve/<STORAGE_ID>/`.
+
+username::
+Ceph user ID. Optional, only needed if Ceph is not running on the PVE cluster.
+
+subdir::
+CephFS subdirectory to mount. Optional, defaults to `/`.
+
+fuse::
+Access CephFS through FUSE, instead of the kernel client. Optional, defaults
+to `0`.
+
+.Configuration Example for an external Ceph cluster (`/etc/pve/storage.cfg`)
+----
+cephfs: cephfs-external
+        monhost
+        path /mnt/pve/cephfs-external
+        content backup
+        username admin
+----
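+
+As an illustration of the optional `subdir` and `fuse` properties, a
+hypothetical storage definition could look as follows (the storage ID
+`cephfs-fuse` and the subdirectory are made up for this sketch):
+
+----
+cephfs: cephfs-fuse
+        path /mnt/pve/cephfs-fuse
+        content backup
+        subdir /backup
+        fuse 1
+----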
+If you use `cephx` authentication, you need to copy the secret from your
+external Ceph cluster to a Proxmox VE host.
+Create the directory `/etc/pve/priv/ceph` with
+
+ mkdir /etc/pve/priv/ceph
+
+Then copy the secret
+
+ scp <cephserver>:/etc/ceph/cephfs.secret /etc/pve/priv/ceph/<STORAGE_ID>.secret
+
+The secret must be named to match your `<STORAGE_ID>`. Copying the
+secret generally requires root privileges. The file must only contain the
+secret key itself, as opposed to the `rbd` backend, which also expects a
+keyring section.
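+
+For illustration, after copying, `/etc/pve/priv/ceph/<STORAGE_ID>.secret`
+holds just the base64-encoded key on a single line (the key below is made up):
+
+----
+AQBSdFhbAAAAABAAqT4cNyHdtLGwXVvWEXAMPLE==
+----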
+If Ceph is installed locally on the PVE cluster, this is done automatically.
+
+Storage Features
+~~~~~~~~~~~~~~~~
+
+The `cephfs` backend is a POSIX-compliant filesystem on top of a Ceph cluster.
+.Storage features for backend `cephfs`
+[width="100%",cols="m,m,3*d",options="header"]
+|==============================================================================
+|Content types     |Image formats  |Shared |Snapshots |Clones
+|vztmpl iso backup |none           |yes    |yes       |no
+|==============================================================================
+
+See Also
+~~~~~~~~
+
+* link:/wiki/Storage[Storage]
diff --git a/pvesm.adoc b/pvesm.adoc
index 1d55d59..06c3e76 100644
--- a/pvesm.adoc
+++ b/pvesm.adoc
@@ -78,6 +78,7 @@ snapshots and clones.
 |iSCSI/kernel   |iscsi       |block |yes   |no       |yes
 |iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
 |Ceph/RBD       |rbd         |block |yes   |yes      |yes
+|Ceph/CephFS    |cephfs      |file  |yes   |yes      |yes
 |Sheepdog       |sheepdog    |block |yes   |yes      |beta
 |ZFS over iSCSI |zfs         |block |yes   |yes      |yes
@@ -405,6 +406,8 @@ include::pve-storage-iscsidirect.adoc[]
