[pve-devel] [PATCH storage/manager] fix #3616: support multiple ceph filesystems
Aaron Lauterer
a.lauterer at proxmox.com
Wed Oct 20 16:40:05 CEST 2021
On my test cluster I ran into a problem when creating the second or third Ceph FS: the actual mounting and adding to the PVE storage config failed with the following in the task log:
-----
creating data pool 'foobar_data'...
pool foobar_data: applying application = cephfs
pool foobar_data: applying pg_num = 32
creating metadata pool 'foobar_metadata'...
pool foobar_metadata: applying pg_num = 8
configuring new CephFS 'foobar'
Successfully create CephFS 'foobar'
Adding 'foobar' to storage configuration...
TASK ERROR: adding storage for CephFS 'foobar' failed, check log and add manually! create storage failed: mount error: Job failed. See "journalctl -xe" for details.
------
The matching syslog:
------
Oct 20 15:20:04 cephtest1 systemd[1]: Mounting /mnt/pve/foobar...
Oct 20 15:20:04 cephtest1 mount[45484]: mount error: no mds server is up or the cluster is laggy
Oct 20 15:20:04 cephtest1 systemd[1]: mnt-pve-foobar.mount: Mount process exited, code=exited, status=32/n/a
Oct 20 15:20:04 cephtest1 systemd[1]: mnt-pve-foobar.mount: Failed with result 'exit-code'.
Oct 20 15:20:04 cephtest1 systemd[1]: Failed to mount /mnt/pve/foobar.
------
Adding the storage manually right after this worked fine. It seems the new MDS is not always up fast enough when the mount is attempted.
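Until this is handled in the code, a crude workaround would be to wait for an active MDS before adding the storage, e.g. (untested sketch, 'foobar' and the 30s timeout are just placeholders):
-----
fs=foobar
# wait until 'ceph fs status' reports an active MDS rank for this fs
for i in $(seq 1 30); do
    ceph fs status "$fs" 2>/dev/null | grep -q active && break
    sleep 1
done
-----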
Regarding the removal of a Ceph FS, we had an off-list discussion which resulted in the following (I hope I am not forgetting something):
The process needs a few manual steps that are hard to automate:
- disable storage (so pvestatd does not auto mount it again)
- unmount on all nodes
- stop the standby and the active MDS (for this storage)
  (at this point, any still existing mount will hang)
- remove storage cfg and pools
Since at least some of those steps need to be done manually on the CLI, it might not even be worth having a "remove" button in the GUI; a well-documented procedure in the manual, with the actual removal as part of `pveceph`, might be the better option (see the command sketch below).
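Roughly, and assuming a storage/fs named 'foobar', those steps would map to something like this on the CLI (untested sketch; <id> stands for the local MDS instance name, usually the node name):
-----
pvesm set foobar --disable 1               # keep pvestatd from re-mounting it
umount /mnt/pve/foobar                     # on every node that has it mounted
systemctl stop ceph-mds@<id>               # stop standby and active MDS
pvesm remove foobar                        # remove the storage config
ceph fs rm foobar --yes-i-really-mean-it   # remove the filesystem itself
pveceph pool destroy foobar_data
pveceph pool destroy foobar_metadata
-----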
On 10/19/21 11:33, Dominik Csapak wrote:
> this series adds support for multiple cephfs. no single patch fixes the
> bug on its own, so it's not in any commit subject... (feel free to change
> the commit subject when applying if you find one patch most appropriate?)
>
> a user can already create multiple cephfs via 'pveceph' (or manually
> with the ceph tools), but the ui does not support it properly
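
For reference, creating a second fs on the CLI looks something like this ('secondfs' and the pg_num are just example values):
-----
pveceph fs create --name secondfs --pg_num 32 --add-storage
-----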
>
> the storage patch can be applied independently; it only adds a new
> parameter that does nothing if not set.
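
With the new parameter ('fs-name', per patch 7 below) the resulting storage.cfg entry could look like this (hand-written example):
-----
cephfs: foobar
        path /mnt/pve/foobar
        content backup,iso,vztmpl
        fs-name foobar
-----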
>
> manager:
>
> patches 1,2 enable basic gui support for showing correct info
> for multiple cephfs
>
> patches 3,4,5 are mostly preparation for the following patches
> (though 4 enables some additional checks that should not hurt either way)
>
> patch 6 enables additional gui support for multiple fs
>
> patches 7,8 depend on the storage patch
>
> patches 9,10,11 are for actually creating multiple cephfs via the gui,
> so those can be left out if we do not want to support that
>
> ---
> so if we only want to support basic display functionality, we could apply
> only manager 1,2 & maybe 5+6
>
> for being able to configure multiple cephfs on a ceph cluster, we'd need
> storage 1/1 and manager 7,8
>
> sorry that it's so complicated; if wanted, i can ofc reorder the patches
> or send them in multiple series
>
> pve-storage:
>
> Dominik Csapak (1):
> cephfs: add support for multiple ceph filesystems
>
> PVE/Storage/CephFSPlugin.pm | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> pve-manager:
>
> Dominik Csapak (11):
> api: ceph-mds: get mds state when multiple ceph filesystems exist
> ui: ceph: catch missing version for service list
> api: cephfs: refactor {ls,create}_fs
> api: cephfs: more checks on fs create
> ui: ceph/ServiceList: refactor controller out
> ui: ceph/fs: show fs for active mds
> api: cephfs: add 'fs-name' for cephfs storage
> ui: storage/cephfs: make ceph fs selectable
> ui: ceph/fs: allow creating multiple cephfs
> api: cephfs: add destroy cephfs api call
> ui: ceph/fs: allow destroying cephfs
>
> PVE/API2/Ceph/FS.pm | 148 +++++++++--
> PVE/Ceph/Services.pm | 16 +-
> PVE/Ceph/Tools.pm | 51 ++++
> www/manager6/Makefile | 2 +
> www/manager6/Utils.js | 1 +
> www/manager6/ceph/FS.js | 52 +++-
> www/manager6/ceph/ServiceList.js | 313 ++++++++++++-----------
> www/manager6/form/CephFSSelector.js | 42 +++
> www/manager6/storage/CephFSEdit.js | 25 ++
> www/manager6/window/SafeDestroyCephFS.js | 22 ++
> 10 files changed, 492 insertions(+), 180 deletions(-)
> create mode 100644 www/manager6/form/CephFSSelector.js
> create mode 100644 www/manager6/window/SafeDestroyCephFS.js
>