[PVE-User] Proxmox Ceph with different HDD sizes

Gilberto Nunes gilberto.nunes32 at gmail.com
Thu Aug 30 15:27:56 CEST 2018


Right now Ceph is very slow; the status output is below (a recovery-throttling sketch follows the pg listing).

343510/2089155 objects misplaced (16.443%)
Status: HEALTH_WARN
Monitors: pve-ceph01, pve-ceph02, pve-ceph03, pve-ceph04, pve-ceph05, pve-ceph06
OSDs: 21 total, all up and in (0 down, 0 out)
PGs:
  active+clean: 157
  active+recovery_wait+remapped: 1
  active+remapped+backfill_wait: 82
  active+remapped+backfilling: 2
  active+undersized+degraded+remapped+backfill_wait: 8
Usage: 7.68 TiB of 62.31 TiB
Degraded data redundancy: 21495/2089170 objects degraded (1.029%), 8 pgs
degraded, 8 pgs undersized

pg 21.0 is stuck undersized for 63693.346103, current state
active+undersized+degraded+remapped+backfill_wait, last acting [2,9]
pg 21.2 is stuck undersized for 63693.346973, current state
active+undersized+degraded+remapped+backfill_wait, last acting [2,10]
pg 21.6f is stuck undersized for 62453.277248, current state
active+undersized+degraded+remapped+backfill_wait, last acting [2,5]
pg 21.8b is stuck undersized for 63693.361835, current state
active+undersized+degraded+remapped+backfill_wait, last acting [2,8]
pg 21.c3 is stuck undersized for 63693.321337, current state
active+undersized+degraded+remapped+backfill_wait, last acting [2,9]
pg 21.c5 is stuck undersized for 66587.797684, current state
active+undersized+degraded+remapped+backfill_wait, last acting [2,8]
pg 21.d4 is stuck undersized for 62453.047415, current state
active+undersized+degraded+remapped+backfill_wait, last acting [2,6]
pg 21.e1 is stuck undersized for 62453.276631, current state
active+undersized+degraded+remapped+backfill_wait, last acting [2,5]
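The backfill_wait/backfilling PGs above mean the cluster is still rebalancing, and that recovery traffic competes with client I/O on the same spindles. A rough sketch of throttling it while the rebalance finishes (illustrative values, assuming a Luminous-era cluster as shipped with Proxmox VE 5.x; adjust or revert afterwards):

    # See which OSDs/PGs are involved and how full each OSD is
    ceph health detail
    ceph osd df tree

    # Throttle backfill/recovery so client I/O gets more of the disks.
    # osd_max_backfills already defaults to 1; the recovery sleep is the
    # main knob here, and 0.1s is only an example value.
    ceph tell 'osd.*' injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1 --osd_recovery_sleep 0.1'

    # Watch the misplaced/degraded counters go down
    watch ceph -s

The trade-off is a longer rebalance in exchange for lower client latency while it runs.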




---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36




2018-08-30 10:23 GMT-03:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:

> So, what do you guys think about this HDD distribution? (A quick capacity
> tally follows the list below.)
>
> CEPH-01
> 1x 3 TB
> 1x 2 TB
>
> CEPH-02
> 1x 4 TB
> 1x 3 TB
>
> CEPH-03
> 1x 4 TB
> 1x 3 TB
>
> CEPH-04
> 1x 4 TB
> 1x 3 TB
> 1x 2 TB
>
> CEPH-05
> 1x 8 TB
> 1x 2 TB
>
> CEPH-06
> 1x 3 TB
> 1x 1 TB
> 1x 8 TB
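>
> A quick tally of that layout, just to make the imbalance visible: CEPH-01 ends up
> with 5 TB raw, CEPH-02 and CEPH-03 with 7 TB each, CEPH-04 with 9 TB, CEPH-05 with
> 10 TB and CEPH-06 with 12 TB. With CRUSH weights proportional to disk size, the
> biggest host will hold roughly 2.4x the data of the smallest, and each 8 TB OSD
> will get roughly 8x the PGs (and the I/O) of the 1 TB one. The actual weights,
> utilisation and PG counts can be checked with:
>
>     ceph osd df tree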
>
>
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
> 2018-08-30 9:57 GMT-03:00 Eneko Lacunza <elacunza at binovo.es>:
>
>> On 30/08/18 at 14:37, Mark Schouten wrote:
>>
>>> On Thu, 2018-08-30 at 09:30 -0300, Gilberto Nunes wrote:
>>>
>>>> Any advice to, at least, mitigate the low performance?
>>>>
>>> Balance the number of spinning disks and the total size per server. This is
>>> probably the safest approach.
>>>
>>> I'm not saying that an unbalanced layout will degrade performance; I'm saying
>>> it might potentially cause degraded performance.
>>>
>> Yes, I agree, although that might well overload the biggest disks, too. But it
>> all depends on the space and performance requirements/desires, really :)
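>>
>> As a rough sketch (generic Ceph commands, not tuned for this cluster), the load
>> on the biggest OSDs can be shaped in two ways: lower their CRUSH weight so they
>> store less data (costing usable space), or lower their primary affinity so they
>> serve fewer reads while still storing their full share. With hypothetical OSD IDs:
>>
>>     ceph osd crush reweight osd.5 6.0      # treat an 8 TB disk as ~6 TB
>>     ceph osd primary-affinity osd.11 0.5   # halve its share of primary (read-serving) PGs
>>
>> On older releases, 'mon osd allow primary affinity = true' must be set before
>> primary affinity has any effect.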
>>
>> Cheers
>> Eneko
>>
>> --
>> Zuzendari Teknikoa / Director Técnico
>> Binovo IT Human Project, S.L.
>> Telf. 943569206
>> Astigarraga bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
>> www.binovo.es
>>
>> _______________________________________________
>> pve-user mailing list
>> pve-user at pve.proxmox.com
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>
>


