[PVE-User] Help on setting up cloud server!
ronny+pve-user at aasen.cx
Mon Jul 22 12:35:41 CEST 2019
You can use an RBD image for the Nextcloud data as well, but then you must either
limit yourself to a single Nextcloud server, since a plain RBD image provides
no shared data between the servers,
OR use a cluster-aware filesystem on top of the shared RBD image,
something like OCFS2 or GFS2.
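A minimal sketch of that second option, assuming a pool named `nextcloud-pool` and an image named `nextcloud-data` (both names are placeholders) and an OCFS2 cluster stack already configured on each Nextcloud node:

```shell
# on every Nextcloud server: map the same shared RBD image
rbd map nextcloud-pool/nextcloud-data

# on ONE server only: create the cluster-aware filesystem
# (-N 3 reserves slots for three concurrent cluster nodes)
mkfs.ocfs2 -N 3 /dev/rbd/nextcloud-pool/nextcloud-data

# on every server: mount it; OCFS2 coordinates concurrent access
mount -t ocfs2 /dev/rbd/nextcloud-pool/nextcloud-data /var/nextcloud-data
```

Note that for several hosts to map the same image at once, the image should be created without the exclusive-lock feature (or with it disabled).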
In order to mount CephFS on the Nextcloud VM, the VM must be able to
reach the whole public Ceph network, either via a router or by having an
interface in that network.
Then you can mount CephFS on the VM using either the kernel client
or the FUSE client (ceph-fuse).
Generally the FUSE client is newer and supports more features, but the
kernel client tends to have better performance.
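For example (a sketch only; the monitor address, client name, and secret-file path below are placeholders for your own cluster's values):

```shell
# kernel client: uses the in-kernel CephFS driver, usually faster
mount -t ceph 192.0.2.10:6789:/ /mnt/nextcloud-data \
    -o name=nextcloud,secretfile=/etc/ceph/nextcloud.secret

# FUSE client: runs in userspace, typically tracks newer CephFS features
ceph-fuse -n client.nextcloud -m 192.0.2.10:6789 /mnt/nextcloud-data
```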
On 22.07.2019 12:11, System Admin via pve-user wrote:
> Thanks for the help Ronny.
> Yes, VM will be installed on the ceph-vm pool which is RBD storage. For
> storing nextcloud data, I'll mount CephFS, but how do I mount CephFS on the VM?
> (Sorry, that "other disk partition" was meant to be some "mount point" ).
> But it is also possible to use RBD storage, right? Or is CephFS always
> the winner for cloud storage?
> Thank you.
> On 7/22/19 2:00 PM, Ronny Aasen wrote:
>> On 22.07.2019 09:06, System Admin via pve-user wrote:
>>> Hi all,
>>> I'm new to Proxmox & Ceph. I would like to seek your help on setting
>>> up cloud server.
>>> I've three PVE (version 5.5-3) nodes configured with Ceph storage on
>>> hardware RAID 0 (MegaRAID SAS). I couldn't find a way to flash to IT
>>> HBA mode.
>>> Now, I would like to install a CentOS 7 VM on the *ceph-vm* pool and
>>> then configure the NextCloud web application on another disk partition
>>> using a different storage pool. This is where I need help.
>>> Which storage type, RBD or CephFS, is best for the cloud, and how
>>> would I load them on the VM? Your help will be appreciated.
>>> Thank you.
>> Generally on a VM you want your OS/boot disk to be an RBD image, since
>> RBD is made for this purpose.
>> Nextcloud is a bit special, though: if you want to scale your
>> Nextcloud across multiple servers, for high availability or
>> performance reasons, you will need shared storage for the Nextcloud
>> servers in addition to the OS disk for each VM.
>> And for this you can use CephFS and mount it as a regular client from
>> within the VM as the Nextcloud storage area on each Nextcloud VM.
>> I am a bit perplexed by what you mean by "other disk partition", since
>> in Ceph you do not split storage pools by partitions; give Ceph the
>> whole disk as an OSD.
>> You can place different pools on different classes of disk, e.g. if
>> you have fast and slow disks, but multiple pools live on the same disks.
>> good luck