[pve-devel] [PATCH docs 2/3] storage: rbd: cephfs: update authentication section
Aaron Lauterer
a.lauterer at proxmox.com
Wed Jan 26 10:47:54 CET 2022
On 1/24/22 14:48, Fabian Ebner wrote:
> On 26.11.21 at 17:44, Aaron Lauterer wrote:
>> It is not needed anymore to place the keyring/secret file manually in
>> the correct location as it can be done with pvesm and the GUI/API now.
>>
>> Signed-off-by: Aaron Lauterer <a.lauterer at proxmox.com>
>> ---
>>
>> Since both sections share the same footnote, I tried to have them share
>> a single one, using footnote:<id>[here some text] to define it and
>> footnote:<id>[] to reference it, as explained in the asciidoc
>> documentation [0].
>> Unfortunately I did not get it to work, most likely because they are
>> both in separate files?
>> I would rather err on having the same footnote twice than miss it in
>> one place.
>>
>>
>>
>> [0] https://docs.asciidoctor.org/asciidoc/latest/macros/footnote/
>>
>
> Maybe the idea from the "Externalizing a footnote" section of using document attributes works?
Turns out that there are asciidoc and asciidoctor. We use the former, but in
my experience searching the internet surfaces the latter's documentation more
prominently, and the two have slightly different syntax.
With that realization, and using the correct documentation
( https://asciidoc-py.github.io/userguide.html#X92 ), it is working now.
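For reference, the relevant mechanism in that section is the footnoteref
macro: footnoteref:[<id>,<text>] defines the footnote once and
footnoteref:[<id>] references it again. Roughly (the id here is just an
example, not necessarily what will end up in the next revision):

    footnoteref:[ceph-user-mgmt,Ceph user management {cephdocs-url}/rados/operations/user-management/]

and in the other file simply:

    footnoteref:[ceph-user-mgmt]

Since both adoc files are included in the same master document, the reference
resolves to a single footnote.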
>
>> pve-storage-cephfs.adoc | 31 ++++++++++++++++++-------------
>> pve-storage-rbd.adoc | 28 +++++++++++++++++++---------
>> 2 files changed, 37 insertions(+), 22 deletions(-)
>>
>> diff --git a/pve-storage-cephfs.adoc b/pve-storage-cephfs.adoc
>> index c67f089..2437859 100644
>> --- a/pve-storage-cephfs.adoc
>> +++ b/pve-storage-cephfs.adoc
>> @@ -71,31 +71,36 @@ disabled.
>> Authentication
>> ~~~~~~~~~~~~~~
>> -If you use `cephx` authentication, which is enabled by default, you need to copy
>> -the secret from your external Ceph cluster to a Proxmox VE host.
>> +If you use `cephx` authentication, which is enabled by default, you need to provide
>> +the secret from the external Ceph cluster.
>> -Create the directory `/etc/pve/priv/ceph` with
>> +The secret file is expected to be located at
>> - mkdir /etc/pve/priv/ceph
>> + /etc/pve/priv/ceph/<STORAGE_ID>.secret
>> -Then copy the secret
>> +You can copy the secret with
>> - scp cephfs.secret <proxmox>:/etc/pve/priv/ceph/<STORAGE_ID>.secret
>> + scp <external cephserver>:/etc/ceph/cephfs.secret /local/path/to/<STORAGE_ID>.secret
>
> IMHO this is a bit confusing. We tell the user an explicit path where the key should be, and then suggest copying it to some location which might or might not be the same as already mentioned. After reading the next paragraph it might be clearer, but IMHO the structure should be "To add via CLI, do scp + pvesm. To add via GUI, do ...". And/or maybe make it clear that pvesm will put the keyring there?
>
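The intended flow for the CLI is: copy the secret to an arbitrary local path
first, then pass that path to pvesm via the `--keyring` parameter mentioned
further down; pvesm then takes care of placing it at
/etc/pve/priv/ceph/<STORAGE_ID>.secret, which is what the "expected to be
located at" path refers to. As a rough sketch (storage name, monitor address
and content type are placeholders, not part of the patch):

    pvesm add cephfs <STORAGE_ID> --monhost <mon-host> --content backup \
        --keyring /local/path/to/<STORAGE_ID>.secret
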
>> -The secret must be renamed to match your `<STORAGE_ID>`. Copying the
>> -secret generally requires root privileges. The file must only contain the
>> -secret key itself, as opposed to the `rbd` backend which also contains a
>> -`[client.userid]` section.
>> +If you use the `pvesm` CLI tool to configure the external CephFS storage, use the
>> +`--keyring` parameter, which needs to be a path to the secret file that you
>> +copied.
>> +
>> +When configuring an external CephFS storage via the GUI, you can copy and paste
>> +the secret into the appropriate field.
>> +
>> +The secret is only the key itself, as opposed to the `rbd` backend which also
>> +contains a `[client.userid]` section.
>> A secret can be received from the Ceph cluster (as Ceph admin) by issuing the
>> command below, where `userid` is the client ID that has been configured to
>> access the cluster. For further information on Ceph user management, see the
>> -Ceph docs footnote:[Ceph user management
>> -{cephdocs-url}/rados/operations/user-management/].
>> +Ceph docs.footnote:[Ceph user management
>> +{cephdocs-url}/rados/operations/user-management/]
>> ceph auth get-key client.userid > cephfs.secret
>> -If Ceph is installed locally on the PVE cluster, that is, it was set up using
>> +If Ceph is installed locally on the {pve} cluster, that is, it was set up using
>> `pveceph`, this is done automatically.
>> Storage Features
>> diff --git a/pve-storage-rbd.adoc b/pve-storage-rbd.adoc
>> index bbc80e2..1f14b7c 100644
>> --- a/pve-storage-rbd.adoc
>> +++ b/pve-storage-rbd.adoc
>> @@ -69,23 +69,33 @@ TIP: You can use the `rbd` utility to do low-level management tasks.
>> Authentication
>> ~~~~~~~~~~~~~~
>> -If you use `cephx` authentication, you need to copy the keyfile from your
>> -external Ceph cluster to a Proxmox VE host.
>> +If you use `cephx` authentication, which is enabled by default, you need to
>> +provide the keyring from the external Ceph cluster.
>> -Create the directory `/etc/pve/priv/ceph` with
>> +The keyring file is expected to be at
>
> Nit: "to be located at" like above sounds better
>
>> - mkdir /etc/pve/priv/ceph
>> + /etc/pve/priv/ceph/<STORAGE_ID>.keyring
>> -Then copy the keyring
>> +You can copy the keyring with
>> - scp <cephserver>:/etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/<STORAGE_ID>.keyring
>> + scp <external cephserver>:/etc/ceph/ceph.client.admin.keyring /local/path/to/<STORAGE_ID>.keyring
>
> Same as above.
>
>> -The keyring must be named to match your `<STORAGE_ID>`. Copying the
>> -keyring generally requires root privileges.
>> +If you use the `pvesm` CLI tool to configure the external RBD storage, use the
>> +`--keyring` parameter, which needs to be a path to the keyring file that you
>> +copied.
>> -If Ceph is installed locally on the PVE cluster, this is done automatically by
>> +When configuring an external RBD storage via the GUI, you can copy and paste the
>> +keyring into the appropriate field.
>> +
>> +If Ceph is installed locally on the {pve} cluster, this is done automatically by
>> 'pveceph' or in the GUI.
>> +TIP: Creating a keyring with only the needed capabilities is recommended when
>> +connecting to an external cluster. For further information on Ceph user
>> +management, see the Ceph docs.footnote:[Ceph user management
>> +{cephdocs-url}/rados/operations/user-management/]
>> +
>> +
>> Storage Features
>> ~~~~~~~~~~~~~~~~
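
Regarding the TIP about restricted capabilities: on the Ceph side this could
look roughly like the following (client name and pool are just examples, and
the exact caps depend on the Ceph version and setup):

    ceph auth get-or-create client.pve mon 'profile rbd' osd 'profile rbd pool=<pool>' -o /etc/ceph/ceph.client.pve.keyring

The resulting keyring file would then be handed to the PVE side via the
`--keyring` parameter described above, for example:

    pvesm add rbd <STORAGE_ID> --monhost <mon-host> --pool <pool> --content images --username pve --keyring /local/path/to/<STORAGE_ID>.keyring

That way the external cluster only grants what the storage actually needs.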