[PVE-User] ceph

Leandro Roggerone leandro at tecnetmza.com.ar
Mon Sep 13 13:32:21 CEST 2021


Hi guys, your responses were very useful.
Let's suppose I have my 3 nodes running and forming a cluster.
Please confirm:
a - Can I add the Ceph storage at any time? (see the command sketch below)
b - Should all nodes be running the same PVE version?
c - Should all nodes have 1 or more unused disks, with no hardware RAID,
to be included in Ceph?
Should those disks (c) be exactly the same in capacity, speed, and so on?
What can go wrong if I have 1 Gbps ports instead of 10 Gbps?
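Please also confirm whether this is roughly the sequence I would run to add
Ceph later (just a sketch based on the pveceph tooling; the network and disk
names below are placeholders):

  pveceph install                        # on every node: install the Ceph packages
  pveceph init --network 10.10.10.0/24   # once: define the Ceph network
  pveceph mon create                     # on (at least) three nodes
  pveceph osd create /dev/sdb            # per node, for each unused disk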
Regards.
Leandro



On Wed, 8 Sep 2021 at 19:21, ic (<lists at benappy.com>) wrote:

> Hi there,
>
> > On 8 Sep 2021, at 14:46, Leandro Roggerone <leandro at tecnetmza.com.ar>
> > wrote:
> >
> > I would like to know the benefits that a Ceph storage can bring to my
> > existing cluster.
> > What is an easy / recommended way to implement it?
> > Which hardware should I consider using?
>
> First, HW.
>
> Get two Cisco Nexus 3064PQ switches (they typically go for $600-700 and
> have 48 10G ports each) and two Intel X520-DA2 cards per server.
>
> Hook up each port of the Intel cards to each of the Nexuses, getting full
> redundancy between network cards and switches.
>
> Add 4x 40G DAC cables between the switches; set up 2 as the vPC peer-link
> and 2 as a simple L2 trunk (I can provide more details on why if needed).
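>
> To illustrate the switch side, something along these lines (a rough NX-OS
> sketch, not from a running config; the domain ID, keepalive addresses and
> port numbers are made up):
>
>   feature vpc
>   feature lacp
>   vpc domain 1
>     peer-keepalive destination 10.0.0.2 source 10.0.0.1
>   interface port-channel10
>     switchport mode trunk
>     vpc peer-link
>   interface Ethernet1/49-50
>     description vPC peer-link members (2 of the 4x 40G DACs)
>     channel-group 10 mode active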
>
> Use ports 0 from both NICs for Ceph and ports 1 for VM traffic. This way
> you get 2x10 Gbps for Ceph only and 2x10 Gbps for everything else, and if
> you lose one card or one switch, you still have 10 Gbps for each.
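>
> On the PVE side that split could look something like this in
> /etc/network/interfaces (only a sketch; interface names and addresses are
> assumptions, and active-backup bonding is used so nothing special is
> required on the switch side):
>
>   auto bond0
>   iface bond0 inet static
>       address 10.10.10.11/24
>       bond-slaves enp1s0f0 enp2s0f0   # port 0 of each X520 -> Ceph
>       bond-mode active-backup
>       bond-miimon 100
>
>   auto bond1
>   iface bond1 inet manual
>       bond-slaves enp1s0f1 enp2s0f1   # port 1 of each X520 -> VM traffic
>       bond-mode active-backup
>       bond-miimon 100
>
>   auto vmbr0
>   iface vmbr0 inet static
>       address 192.168.1.11/24
>       gateway 192.168.1.1
>       bridge-ports bond1
>       bridge-stp off
>       bridge-fd 0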
>
> The benefits? With the default configuration, your data lives in 3 places.
> Also, scale-out. You know the expensive stuff, hyperconverged servers
> (Nutanix and such)? You get that with this.
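>
> For example, the default replicated pool keeps size=3 / min_size=2;
> created from the PVE tooling it would look roughly like this (the pool
> name is just an example):
>
>   pveceph pool create vmstore --size 3 --min_size 2 --add_storages 1
>   ceph osd pool get vmstore size    # should report: size: 3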
>
> The performance is wild. I just moved my customers from a Proxmox cluster
> backed by a TrueNAS server (all flash, 4x 10 Gbps) to a 3-node cluster of
> AMD EPYC nodes with Ceph on local SATA SSDs, and the VMs started flying.
>
> Keep your old storage infrastructure, whatever that is, for backups with
> PBS.
>
> YMMV
>
> Regards, ic
>
> _______________________________________________
> pve-user mailing list
> pve-user at lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
>


