[PVE-User] Proxmox Ceph with different HDD sizes
Gilberto Nunes
gilberto.nunes32 at gmail.com
Sat Sep 1 16:28:27 CEST 2018
Hi again
In my last message I thought I had figured out what was happening to my
6-server Ceph cluster, but I hadn't at all!
The cluster still had slow performance,
until this morning.
I reweighted the OSDs on the slower disks with this command:
ceph osd crush reweight osd.ID weight
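For example, to lower the weight of one of the slow OSDs (the weight value
below is only illustrative, not the exact one I used):
ceph osd crush reweight osd.0 1.0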
Now everything is ok!
Thanks to the list.
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
2018-08-31 11:08 GMT-03:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:
> Thanks to everyone who replied to my messages.
> Indeed I used
>
> ceph osd primary-affinity <osd-id> <weight>
>
> And we noticed some performance improvement.
>
> What helps here is that we have 6 Proxmox Ceph servers:
>
> ceph01 - HDD with 5 900 rpm
> ceph02 - HDD with 7 200 rpm
> ceph03 - HDD with 7 200 rpm
> ceph04 - HDD with 7 200 rpm
> ceph05 - HDD with 5 900 rpm
> ceph06 - HDD with 5 900 rpm
>
> So what I did was set primary affinity 0 on the HDDs with 5 900 rpm and
> primary affinity 1 on the HDDs with 7 200 rpm.
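>
> For example, for one OSD on a slow host and one on a fast host it was
> something like this (the exact values may have differed a bit):
>
> ceph osd primary-affinity osd.0 0
> ceph osd primary-affinity osd.4 1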
>
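> Here is "ceph osd tree" after the change (the PRI-AFF column shows the
> primary affinity of each OSD):
>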
> ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
> -1 62.31059 root default
> -3 14.55438 host pve-ceph01
> 0 hdd 3.63860 osd.0 up 1.00000 0
> 1 hdd 3.63860 osd.1 up 1.00000 0
> 2 hdd 3.63860 osd.2 up 1.00000 0
> 3 hdd 3.63860 osd.3 up 1.00000 0
> -5 10.91559 host pve-ceph02
> 4 hdd 2.72890 osd.4 up 1.00000 1.00000
> 5 hdd 2.72890 osd.5 up 1.00000 1.00000
> 6 hdd 2.72890 osd.6 up 1.00000 1.00000
> 7 hdd 2.72890 osd.7 up 1.00000 1.00000
> -7 7.27708 host pve-ceph03
> 8 hdd 2.72890 osd.8 up 1.00000 1.00000
> 9 hdd 2.72890 osd.9 up 1.00000 1.00000
> 10 hdd 1.81929 osd.10 up 1.00000 1.00000
> -9 7.27716 host pve-ceph04
> 11 hdd 1.81929 osd.11 up 1.00000 1.00000
> 12 hdd 1.81929 osd.12 up 1.00000 1.00000
> 13 hdd 1.81929 osd.13 up 1.00000 1.00000
> 14 hdd 1.81929 osd.14 up 1.00000 1.00000
> -11 14.55460 host pve-ceph05
> 15 hdd 7.27730 osd.15 up 1.00000 0
> 16 hdd 7.27730 osd.16 up 1.00000 0
> -13 7.73178 host pve-ceph06
> 17 hdd 0.90959 osd.17 up 1.00000 0
> 18 hdd 2.72890 osd.18 up 1.00000 0
> 19 hdd 1.36440 osd.19 up 1.00000 0
> 20 hdd 2.72890 osd.20 up 1.00000 0
>
> That's it! Thanks again.
>
>
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
> 2018-08-30 11:47 GMT-03:00 Phil Schwarz <infolist at schwarz-fr.net>:
>
>> Hope you changed a single disk at a time!
>>
>> Be warned (if not) that moving an OSD from one server to another triggers
>> a rebalancing of almost all of the data stored on it, in order to
>> follow the CRUSH map.
>>
>> For instance, exchanging two OSDs between servers results in a complete
>> rebalance of the two OSDs, as far as I know.
>>
>> 16% of misplaced data may or may not be acceptable depending on your
>> redundancy and throughput needs, but it is not a low value that should
>> be underestimated.
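>>
>> The misplaced percentage can be followed, for instance, with "ceph -s" or
>> "ceph health detail" while the cluster rebalances.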
>>
>> Best regards
>>
>>
>>
>> On 30/08/2018 at 15:27, Gilberto Nunes wrote:
>> > Right now Ceph is very slow
>> >
>> > 343510/2089155 objects misplaced (16.443%)
>> > Status
>> >
>> > HEALTH_WARN
>> > Monitors
>> > pve-ceph01:
>> > pve-ceph02:
>> > pve-ceph03:
>> > pve-ceph04:
>> > pve-ceph05:
>> > pve-ceph06:
>> > OSDs (In / Out)
>> > Up: 21 / 0
>> > Down: 0 / 0
>> > Total: 21
>> > PGs
>> > active+clean: 157
>> > active+recovery_wait+remapped: 1
>> > active+remapped+backfill_wait: 82
>> > active+remapped+backfilling: 2
>> > active+undersized+degraded+remapped+backfill_wait: 8
>> >
>> > Usage
>> > 7.68 TiB of 62.31 TiB
>> > Degraded data redundancy: 21495/2089170 objects degraded (1.029%), 8 pgs
>> > degraded, 8 pgs undersized
>> >
>> > pg 21.0 is stuck undersized for 63693.346103, current state
>> > active+undersized+degraded+remapped+backfill_wait, last acting [2,9]
>> > pg 21.2 is stuck undersized for 63693.346973, current state
>> > active+undersized+degraded+remapped+backfill_wait, last acting [2,10]
>> > pg 21.6f is stuck undersized for 62453.277248, current state
>> > active+undersized+degraded+remapped+backfill_wait, last acting [2,5]
>> > pg 21.8b is stuck undersized for 63693.361835, current state
>> > active+undersized+degraded+remapped+backfill_wait, last acting [2,8]
>> > pg 21.c3 is stuck undersized for 63693.321337, current state
>> > active+undersized+degraded+remapped+backfill_wait, last acting [2,9]
>> > pg 21.c5 is stuck undersized for 66587.797684, current state
>> > active+undersized+degraded+remapped+backfill_wait, last acting [2,8]
>> > pg 21.d4 is stuck undersized for 62453.047415, current state
>> > active+undersized+degraded+remapped+backfill_wait, last acting [2,6]
>> > pg 21.e1 is stuck undersized for 62453.276631, current state
>> > active+undersized+degraded+remapped+backfill_wait, last acting [2,5]
>> >
>> >
>> >
>> >
>> > ---
>> > Gilberto Nunes Ferreira
>> >
>> > (47) 3025-5907
>> > (47) 99676-7530 - Whatsapp / Telegram
>> >
>> > Skype: gilberto.nunes36
>> >
>> >
>> >
>> >
>> > 2018-08-30 10:23 GMT-03:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:
>> >
>> >> So, what do you guys think about this HDD distribution?
>> >>
>> >> CEPH-01
>> >> 1x 3 TB
>> >> 1x 2 TB
>> >>
>> >> CEPH-02
>> >> 1x 4 TB
>> >> 1x 3 TB
>> >>
>> >> CEPH-03
>> >> 1x 4 TB
>> >> 1x 3 TB
>> >>
>> >> CEPH-04
>> >> 1x 4 TB
>> >> 1x 3 TB
>> >> 1x 2 TB
>> >>
>> >> CEPH-05
>> >> 1x 8 TB
>> >> 1x 2 TB
>> >>
>> >> CEPH-06
>> >> 1x 3 TB
>> >> 1x 1 TB
>> >> 1x 8 TB
>> >>
>> >>
>> >> ---
>> >> Gilberto Nunes Ferreira
>> >>
>> >> (47) 3025-5907
>> >> (47) 99676-7530 - Whatsapp / Telegram
>> >>
>> >> Skype: gilberto.nunes36
>> >>
>> >>
>>
>
>