[PVE-User] Trouble Creating CephFS UPDATE

JR Richardson jmr.richardson at gmail.com
Sat Oct 12 14:53:43 CEST 2019


OK folks,

I spent a good amount of time in the lab testing CephFS creation on a 3-node Proxmox cluster, on both 5.4 (latest updates) and 6.0 (latest updates). On both versions, creating a CephFS from the GUI does not work: I get a timeout error, the cephfs_data pool does get created, but the cephfs_metadata pool does not, and the CephFS MDS never becomes active.

I can create the cephfs_metadata pool from the command line OK and it shows up in the GUI. Once cephfs_metadata is created manually, the MDS becomes active and I can mount the CephFS.
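
For reference, the manual pool creation is roughly the following (the pg_num of 32 is just an example, size it for your cluster):

# ceph osd pool create cephfs_metadata 32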

The other problem is mounting the CephFS from another Proxmox cluster within the GUI: the only content types offered are VZDump backups, ISO images, container templates and snippets, not disk images, which is what I really need.
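
For illustration, the CephFS storage entry the GUI creates in /etc/pve/storage.cfg ends up looking something like this (storage ID and monitor address are placeholders), and 'images' is not among the allowed content types:

cephfs: ext-cephfs
        monhost 10.10.10.1
        path /mnt/pve/ext-cephfs
        content backup,iso,vztmpl,snippets
        username admin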

So what I have to do is this:
On the 3-node Ceph cluster, from the PVE GUI:
Install Ceph on 3 nodes
Create 3 monitors
Create 3 MDS
Add the OSDs
Switch to the command line on one of the nodes, create the cephfs_metadata pool (see the example above), then:
# ceph osd pool application enable cephfs_metadata cephfs
# ceph fs new cephfs cephfs_metadata cephfs_data

On the external PVE Cluster command line:
SCP over the ceph keyring to /etc/pve/priv/cephfs.keyring
Edit the keyring file to only have the key, nothing else
# mkdir /mnt/mycephfs
# mount -t ceph [IP of MDS SERVER]:/ /mnt/mycephfs -o name=admin,secretfile=/etc/pve/ceph/cephfs.secret
Switch back to the GUI and add Directory storage on that mount point
This makes the CephFS usable as storage for disk images
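
The resulting entry in /etc/pve/storage.cfg looks roughly like this (the storage ID is arbitrary; 'shared 1' tells PVE the path holds the same content on every node):

dir: cephfs-dir
        path /mnt/mycephfs
        content images,iso
        shared 1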

But this is a bottleneck and I don't think it's the proper way to accomplish this. Sharing a directory file store across a cluster? So I'm still looking for help to get this working correctly.

Thanks.

JR

JR Richardson
Engineering for the Masses
Chasing the Azeotrope

-----Original Message-----
From: JR Richardson <jmr.richardson at gmail.com> 
Sent: Friday, October 11, 2019 7:07 AM
To: Thomas Lamprecht <t.lamprecht at proxmox.com>
Cc: PVE User List <pve-user at pve.proxmox.com>
Subject: Re: [PVE-User] Trouble Creating CephFS

On Fri, Oct 11, 2019 at 5:50 AM Thomas Lamprecht <t.lamprecht at proxmox.com> wrote:
>
> Hi,
>
> On 10/10/19 5:47 PM, JR Richardson wrote:
> > Hi All,
> >
> > I'm testing ceph in the lab. I constructed a 3 node proxmox cluster 
> > with latest 5.4-13 PVE all updates done yesterday and used the 
> > tutorials to create ceph cluster, added monitors on each node, added 
> > 9 OSDs, 3 disks per ceph cluster node, ceph status OK.
>
> Just to be sure: you did all that using the PVE Webinterface? Which 
> tutorials do you mean? Why not with 6.0? Starting out with Nautilus 
> now will save you one major Ceph (and PVE) upgrade.

Yes, all configured through web interface.

Tutorials:
https://pve.proxmox.com/wiki/Manage_Ceph_Services_on_Proxmox_VE_Nodes
https://www.youtube.com/watch?v=jFFLINtNnXs
https://www.youtube.com/watch?v=0t1UiOg6UoE
And a few other videos and howto's from random folks.

Honestly, I did not consider using 6.0; I read some posts about cluster nodes randomly rebooting after upgrading to 6.0, and I use 5.4 in production. I'll redo my lab with 6.0 and see how it goes.

>
> >
> > From the GUI of the ceph cluster, when I go to CephFS, I can create 
> > MDSs, 1 per node OK, but their state is up:standby. When I try to
> > create a CephFS, I get timeout error. But when I check Pools, 
> > 'cephfs_data' was created with 3/2 128 PGs and looks OK, ceph status 
> > health_ok.
>
> Hmm, so no MDSs gets up and ready into the active state...
> Was "cephfs_metadata" also created?
>
> You could check out the
> # ceph fs status
> # ceph mds stat
>
root at cephclust1:~# ceph fs status

+-------------+
| Standby MDS |
+-------------+
|  cephclust3 |
|  cephclust1 |
|  cephclust2 |
+-------------+
MDS version: ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)

root at cephclust1:~# ceph mds stat
, 3 up:standby

I was looking into the MDS and why it was in standby instead of active, but I didn't get far. Could this be the issue? I don't see any cephfs_metadata pool created, only cephfs_data.
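
I verified the pools with something like this (only cephfs_data shows up):

# ceph osd lspools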

>
> >
> > I copied over keyring and I can attach this to an external PVE as 
> > RBD storage but I don't get a path parameter so the ceph storage 
> > will only allow for raw disk images. If I try to attach as CephFS, 
> > the content does allow for Disk Image. I need the ceph cluster to 
> > export the cephfs so I can attach and copy over qcow2 images. I can
> > create new disk and spin up VMs within the ceph storage pool. 
> > Because I can attach and use the ceph pool, I'm guessing it is 
> > considered Block storage, hence the raw disk only creation for VM 
> > HDs. How do I setup the ceph to export the pool as file based?
>
> with either CephFS or, to be technically complete, by creating an FS
> on a Rados Block Device (RBD).
>
> >
> > I came across this bug:
> > https://bugzilla.proxmox.com/show_bug.cgi?id=2108
> >
> > I'm not sure it applies but sounds similar to what I'm seeing. It's
>
> I really think that this exact bug cannot apply to you if you run
> with 5.4... if you did not see any:
>
> >  mon_command failed - error parsing integer value '': Expected 
> > option value to be integer, got ''in"}
>
> errors in the log then it cannot be this bug. Not saying that it
> cannot possibly be a bug, but not this one, IMO.

No log errors like that, so probably not that bug.
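
For what it's worth, I checked with something along these lines (these paths are just where I looked):

# grep -i "mon_command failed" /var/log/syslog /var/log/ceph/*.log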

>
> cheers,
> Thomas
>

Thanks.

JR
--
JR Richardson
Engineering for the Masses
Chasing the Azeotrope



