[PVE-User] (Very) basic question regarding PVE Ceph integration
Thomas Lamprecht
t.lamprecht at proxmox.com
Mon Dec 17 13:59:23 CET 2018
On 12/16/18 2:28 PM, Frank Thommen wrote:
> I understand that with the new PVE release PVE hosts (hypervisors) can be used as Ceph servers. But it's not clear to me if (or when) that makes sense. Do I really want to have Ceph MDS/OSD on the same hardware as my hypervisors? Doesn't that a) accumulate multiple POFs on the same hardware and b) occupy computing resources (CPU, RAM), that I'd rather use for my VMs and containers? Wouldn't I rather want to have a separate Ceph cluster?
Some users also run a bit of a mixed approach. For example, six nodes all running
Proxmox VE: three are used for compute (VM/CT) and three for Ceph. You get full
management through one stack but still keep some separation.
You could also do a bit less separation and have nodes which mostly host Ceph
but also run some low-resource CTs/VMs, and others which mostly do compute.
In smaller setups, e.g., with 3 nodes, you can often avoid POFs more easily.
E.g., in this case it's easy to use 10G or 40G NICs with two ports each and set
up a full mesh for the Ceph storage traffic: you get full performance but save a
switch, thus cost and a bit of complexity (one POF less).
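For the curious: the switchless mesh is basically each node wired directly to the
other two, plus host routes. Roughly like this on one node (just a sketch, the NIC
names and addresses are placeholders; the Proxmox wiki has a full write-up under
"Full Mesh Network for Ceph Server"):

# /etc/network/interfaces fragment on node 1 (10.15.15.50), sketch only
auto ens19
iface ens19 inet static
        address 10.15.15.50/24
        # this port is wired directly to node 2
        up ip route add 10.15.15.51/32 dev ens19
        down ip route del 10.15.15.51/32

auto ens20
iface ens20 inet static
        address 10.15.15.50/24
        # this port is wired directly to node 3
        up ip route add 10.15.15.52/32 dev ens20
        down ip route del 10.15.15.52/32

The other two nodes get the mirrored config, and Ceph's public/cluster network is
then pointed at 10.15.15.0/24.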
It often depends on the use case and the desired reliability/availability.
IMO, it's often a balance between single-node resources and total node count.
You need a few nodes to ensure that one or two can die or get temporarily
rendered unusable, but you do not want too many small nodes, as (maintenance)
overhead and complexity increase.
E.g., from an availability POV, it would be better to have 5 normal but powerful
nodes than three 4-socket, >1TB RAM monster nodes. But while a high count of
small nodes would increase the capacity to handle node losses, you pay for it
with added complexity and a fixed cost per node. Further, each node's reserve
for taking over failed-over resources also gets smaller, in absolute terms.
Oh, and for your 3 PVE + 3 Ceph example you mentioned somewhere in this thread:
here each instance can only lose one node, so while theoretically you could
afford two node losses, they need to be the "right" nodes, i.e., they cannot be
from the same instance. If you go with a mixed 5-node cluster you can lose two
nodes too, but there it does not matter which two are affected. This is IMO a
win for such a setup: it handles more scenarios with even one node less.
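If you want to see that quorum argument in numbers, here's a quick throwaway
Python sketch (it only counts majority quorum, as corosync and the Ceph monitors
use it, and ignores OSD replication details; the node names are made up):

from itertools import combinations

def tolerates(n: int) -> int:
    """Maximum node losses a majority-quorum cluster of n nodes survives."""
    return (n - 1) // 2

print("mixed 5-node cluster tolerates", tolerates(5), "arbitrary node losses")  # 2
print("each separate 3-node instance tolerates", tolerates(3), "node loss")     # 1

# 3 PVE + 3 Ceph: check every possible pair of failed nodes.
pve = {"pve1", "pve2", "pve3"}
ceph = {"ceph1", "ceph2", "ceph3"}
survived = sum(
    1
    for failed in combinations(pve | ceph, 2)
    # both instances must keep a majority (2 of 3) of their own nodes
    if len(pve - set(failed)) >= 2 and len(ceph - set(failed)) >= 2
)
total = len(list(combinations(pve | ceph, 2)))
print(f"3 + 3 split survives {survived} of {total} two-node failure scenarios")  # 9 of 15

So the mixed 5-node cluster shrugs off any 2 of its nodes failing, while the
3 + 3 split only survives 9 of the 15 possible two-node failures.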
Anyway, so much rambling from my side...
cheers,
Thomas