elacunza at binovo.es
Mon Sep 13 13:44:22 CEST 2021
On 13/9/21 at 13:32, Leandro Roggerone wrote:
> hi guys, your responses were very useful.
> Let's suppose I have my 3 nodes running and forming a cluster.
> Please confirm:
> a - Can I add the Ceph storage at any time?
> b - Should all nodes be running the same PVE version?
Generally speaking this is advisable. What versions do you have right now?
> c - Should all nodes have 1 or more unused storage devices, with no
> hardware RAID, to be included in Ceph?
It is advisable to have OSDs in at least 3 nodes, yes (some may say 4 is
the recommended minimum).
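Yes, Ceph can be added to an existing cluster at any time. As a rough sketch, assuming PVE 6.x/7.x and an example spare disk /dev/sdb (adjust device names and the network to your environment), the usual steps are:

```shell
# Install the Ceph packages on the node
pveceph install

# Initialize Ceph once for the cluster, pointing at the (example) Ceph network
pveceph init --network 10.10.10.0/24

# Create a monitor on this node (repeat on at least 3 nodes)
pveceph mon create

# Turn the spare, non-RAID disk into an OSD
pveceph osd create /dev/sdb
```

The same steps can also be done from the web GUI under each node's Ceph panel.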
> Should those storage devices (c) be exactly the same in capacity, speed, and so on?
Roughly speaking, Ceph will perform as well as the worst disk configured
for Ceph. If you plan to use SSD disks, use enterprise SSD disks, not
consumer-grade ones.
> What can go wrong if I don't have 10 Gbps but only 1 Gbps ports?
Latency and overall performance of the Ceph storage will be worse/slower. If
you plan on using 1G, consider setting up separate "cluster" ports for Ceph
(1G for VM traffic, 1G for Ceph public, 1G for Ceph cluster/private).
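The public/cluster split mentioned above ends up in ceph.conf; a minimal sketch, with placeholder subnets:

```
[global]
    # Client/VM-facing Ceph traffic (placeholder subnet)
    public_network = 10.10.10.0/24
    # OSD replication and heartbeat traffic (placeholder subnet)
    cluster_network = 10.10.20.0/24
```

With 3x1G this keeps replication bursts from starving client I/O.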
We have clusters with both 10G and 1G (3x1G) networks. All of them work
well, but the 10G network is quite noticeably faster, especially with SSD disks.
> On Wed, 8 Sep 2021 at 19:21, ic (<lists at benappy.com>) wrote:
>> Hi there,
>>> On 8 Sep 2021, at 14:46, Leandro Roggerone <leandro at tecnetmza.com.ar> wrote:
>>> I would like to know the benefits that a Ceph storage can bring to my setup.
>>> What is an easy / recommended way to implement it?
>>> Which hardware should I consider using?
>> First, HW.
>> Get two Cisco Nexus 3064PQ (they typically go for $600-700 for 48 10G
>> ports) and two Intel X520-DA2 per server.
>> Hook up each port of the Intel cards to each of the Nexuses, getting a
>> full redundancy between network cards and switches.
>> Add 4x40G DAC cables between the switches, set up 2 as VPC peer-links, 2 as
>> a simple L2 trunk (can provide more details as to why if needed).
>> Use ports 0 from both NICs for Ceph, and ports 1 for VM traffic. This way you
>> get 2x10 Gbps for Ceph only and 2x10 Gbps for everything else, and if you
>> lose one card or one switch, you still have 10 Gbps for each.
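On the Proxmox side, the port layout described above could look roughly like this in /etc/network/interfaces (interface names, bond mode, and addresses are placeholders; LACP assumes the Nexus VPC side is configured to match):

```
# Port 0 of each X520 (one cable to each Nexus): Ceph only
auto bond0
iface bond0 inet static
    bond-slaves enp1s0f0 enp2s0f0
    bond-mode 802.3ad
    address 10.10.10.11/24

# Port 1 of each X520: everything else, bridged for VMs
auto bond1
iface bond1 inet manual
    bond-slaves enp1s0f1 enp2s0f1
    bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
    bridge-ports bond1
    address 192.0.2.11/24
    gateway 192.0.2.1
```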
>> The benefits? With the default configuration, your data lives in 3 places.
>> Also, you get scale-out. You know the expensive hyperconverged stuff
>> (Nutanix and such)? You get that with this setup.
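The "3 places" comes from the pool's replication size (3 replicas by default). When creating pools from the CLI it can be set explicitly; the pool name and values below are just an example:

```shell
# Keep 3 copies of every object; keep serving I/O while at least 2 exist
pveceph pool create vmpool --size 3 --min_size 2
```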
>> The performance is wild; I just moved my customers from a Proxmox cluster
>> backed by a TrueNAS server (full flash, 4x10Gbps) to a 3-node cluster of
>> AMD EPYC nodes with Ceph on local SATA SSDs, and the VMs started flying.
>> Keep your old storage infrastructure, whatever that is, for backups.
>> Regards, ic
>> pve-user mailing list
>> pve-user at lists.proxmox.com
Technical Director
Binovo IT Human Project
Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun