[PVE-User] Trouble with Ceph on the latest version of Proxmox
jean-mathieu.chantrein at univ-angers.fr
Mon Sep 11 11:47:44 CEST 2017
I recently deployed Proxmox 5 in cluster mode on 4 nodes. Everything looks good at this level, and I successfully set up a VM on local storage that serves as a gateway.
I want to move this VM to shared storage later, and I want to use Ceph for this.
I deployed the latest version of Ceph (Luminous) as shown in the documentation: https://pve.proxmox.com/wiki/Ceph_Server . I did it on the command line.
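For reference, the steps I followed were roughly the following (a sketch reconstructed from the wiki; the network and device names below are placeholders, not my actual values, and the exact flags may differ slightly between versions):

```sh
# Install the Luminous packages on each node (per the Ceph_Server wiki)
pveceph install --version luminous
# Initialize the Ceph configuration once; 10.10.10.0/24 is a placeholder network
pveceph init --network 10.10.10.0/24
# Create a monitor (run on each node that should be a monitor)
pveceph createmon
# Create an OSD on a dedicated SSD; /dev/sdX is a placeholder device
# (BlueStore is the default backend with Luminous)
pveceph createosd /dev/sdX
# Create the two pools
pveceph createpool ceph-vm
pveceph createpool ceph-lxc
```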
_ My OSDs are created with BlueStore on dedicated SSDs.
_ I created 2 Ceph pools: one for VMs (ceph-vm) and one for containers (ceph-lxc, with krbd).
The health of both my clusters (Proxmox and Ceph) is OK. Nevertheless, my BlueStore SSDs are not visible in the GUI OSD tab ...
_ When I add my ceph-vm pool to my Proxmox storage, I cannot use it; my local storage is no longer visible when I try to create VMs, and the Ceph pools are no longer visible in the GUI even though they are clearly visible on the CLI.
_ When I add my ceph-lxc pool, the same happens, and in addition my cluster becomes strange: in the GUI, my nodes randomly appear offline even though the status of both my Proxmox and Ceph clusters is OK and the nodes are online (also in the GUI Datacenter Summary tab)! My gateway VM also appears inaccessible (grayed out) even though it works perfectly.
_ When I disable the 2 Ceph storages (in the Proxmox storage configuration), everything is all right again.
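In case it helps, the storage definitions I added look roughly like the following in /etc/pve/storage.cfg (a sketch; the monitor addresses are placeholders, not my actual ones):

```
rbd: ceph-vm
        pool ceph-vm
        content images
        monhost 10.10.10.1 10.10.10.2 10.10.10.3
        username admin

rbd: ceph-lxc
        pool ceph-lxc
        content rootdir
        monhost 10.10.10.1 10.10.10.2 10.10.10.3
        username admin
        krbd 1
```

As far as I understand, an RBD storage also needs the Ceph keyring copied to /etc/pve/priv/ceph/<storage-id>.keyring on the Proxmox side; maybe I missed something there?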
This is my first Ceph deployment and I am a novice with Proxmox. I followed the documentation; did I miss something important?
Thanks for any help.