[PVE-User] Trouble Creating CephFS

JR Richardson jmr.richardson at gmail.com
Fri Oct 11 14:07:13 CEST 2019


On Fri, Oct 11, 2019 at 5:50 AM Thomas Lamprecht
<t.lamprecht at proxmox.com> wrote:
>
> Hi,
>
> On 10/10/19 5:47 PM, JR Richardson wrote:
> > Hi All,
> >
> > I'm testing Ceph in the lab. I constructed a 3-node Proxmox cluster
> > with the latest PVE 5.4-13, all updates done yesterday, and used the
> > tutorials to create the Ceph cluster, added monitors on each node, added
> > 9 OSDs (3 disks per Ceph cluster node); ceph status OK.
>
> Just to be sure: you did all that using the PVE web interface? Which tutorials
> do you mean? Why not with 6.0? Starting out with Nautilus now will save you
> one major Ceph (and PVE) upgrade.

Yes, all configured through web interface.
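
For what it's worth, my understanding is that the rough CLI equivalent
of those web interface steps on 5.4 would be something like this
(pveceph install and createmon on each node, pveceph init only once,
createosd once per disk; the network and the /dev/sdb-/dev/sdd names
are just placeholders for my lab setup):

# pveceph install
# pveceph init --network 10.10.10.0/24
# pveceph createmon
# pveceph createosd /dev/sdb
# pveceph createosd /dev/sdc
# pveceph createosd /dev/sdd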

Tutorials:
https://pve.proxmox.com/wiki/Manage_Ceph_Services_on_Proxmox_VE_Nodes
https://www.youtube.com/watch?v=jFFLINtNnXs
https://www.youtube.com/watch?v=0t1UiOg6UoE
And a few other videos and how-tos from various folks.

Honestly, I did not consider using 6.0; I read some posts about
cluster nodes randomly rebooting after upgrading to 6.0, and I use 5.4 in
production. I'll redo my lab with 6.0 and see how it goes.

>
> >
> > From the GUI of the ceph cluster, when I go to CephFS, I can create
> > MDSs, one per node, OK, but their state is up:standby. When I try to
> > create a CephFS, I get a timeout error. But when I check Pools,
> > 'cephfs_data' was created with 3/2 128 PGs and looks OK, ceph status
> > health_ok.
>
> Hmm, so no MDS gets up and ready into the active state...
> Was "cephfs_metadata" also created?
>
> You could check the output of:
> # ceph fs status
> # ceph mds stat
>
root@cephclust1:~# ceph fs status

+-------------+
| Standby MDS |
+-------------+
|  cephclust3 |
|  cephclust1 |
|  cephclust2 |
+-------------+
MDS version: ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)

root@cephclust1:~# ceph mds stat
, 3 up:standby

I was looking into the MDS and why it was in standby instead of active,
but I didn't get far. Could this be the issue? I don't see a
cephfs_metadata pool created, only cephfs_data.
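
If it helps my debugging, I might just try creating the missing
metadata pool and the filesystem by hand on one of the cluster nodes
and see whether an MDS then goes active; something along these lines
(the pool name is what I'd expect the GUI to use, the PG count is just
a guess on my part):

# ceph osd pool create cephfs_metadata 64
# ceph fs new cephfs cephfs_metadata cephfs_data
# ceph fs status
# ceph mds stat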

>
> >
> > I copied over keyring and I can attach this to an external PVE as RBD
> > storage but I don't get a path parameter so the ceph storage will only
> > allow for raw disk images. If I try to attach as CephFS, the content
> > does allow for Disk Image. I need the ceph cluster to export the
> > cephfs so I can attach and copy over qcow2 images. I can create new
> > disk and spin up VMs within the ceph storage pool. Because I can
> > attach and use the ceph pool, I'm guessing it is considered Block
> > storage, hence the raw disk only creation for VM HDs. How do I setup
> > the ceph to export the pool as file based?
>
> with either CephFS or, to be technically complete, by creating an FS on
> a Rados Block Device (RBD).
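
Understood. Once the CephFS actually gets created, my assumption is that
the storage definitions on the external PVE node would look roughly like
this in /etc/pve/storage.cfg (storage IDs, pool name and monitor IPs are
made up for the example; the keyring goes to
/etc/pve/priv/ceph/<storage-id>.keyring for RBD and the secret to
/etc/pve/priv/ceph/<storage-id>.secret for CephFS):

rbd: ceph-ext-rbd
        monhost 10.10.10.1 10.10.10.2 10.10.10.3
        pool rbd
        content images
        username admin

cephfs: ceph-ext-fs
        monhost 10.10.10.1 10.10.10.2 10.10.10.3
        path /mnt/pve/ceph-ext-fs
        content backup,iso,vztmpl
        username admin
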
>
> >
> > I came across this bug:
> > https://bugzilla.proxmox.com/show_bug.cgi?id=2108
> >
> > I'm not sure it applies but sounds similar to what I'm seeing. It's
>
> I really think that this exact bug cannot apply to you if you run with
> 5.4. If you did not see any:
>
> >  mon_command failed - error parsing integer value '': Expected option value to be integer, got ''in"}
>
> errors in the log, then it cannot be this bug. Not saying that it cannot
> possibly be a bug, but not this one, IMO.

No log errors like that, so probably not that bug.
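
For completeness, these are the places I'm watching for errors while the
CephFS creation times out (default log locations, my node name):

# journalctl -u ceph-mds@cephclust1
# tail -f /var/log/ceph/ceph-mds.cephclust1.log
# ceph -w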

>
> cheers,
> Thomas
>

Thanks.

JR
-- 
JR Richardson
Engineering for the Masses
Chasing the Azeotrope


