[PVE-User] CEPH: How to remove an OSD without experiencing inactive placement groups

Eneko Lacunza elacunza at binovo.es
Fri Dec 19 12:55:41 CET 2014


Hi Chris,

The problem you reported is quite common in small Ceph clusters.

I suggest tuning the following in /etc/pve/ceph.conf, in the [osd] section:

      osd max backfills = 1
      osd recovery max active = 1

This should make the recovery "slower" and thus help keep VMs 
responsive. Recovery will still be noticeable, though.
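
For reference, a minimal sketch of what the resulting [osd] section could 
look like, plus (as an assumption on my part, if you prefer not to restart 
the OSDs) how the same values can be injected into the running daemons; 
injected values only last until the next restart:

      [osd]
           osd max backfills = 1
           osd recovery max active = 1

      # apply to all running OSDs without a restart (not persistent)
      ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'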

Cheers
Eneko

On 19/12/14 12:36, Chris Murray wrote:
> That would make sense. Thank you Dietmar, I'll give them a try.
>
> Sent from my HTC
>
> ----- Reply message -----
> From: "Dietmar Maurer" <dietmar at proxmox.com>
> To: "pve-user at pve.proxmox.com" <pve-user at pve.proxmox.com>, "Chris Murray" <chrismurray84 at gmail.com>
> Subject: [PVE-User] CEPH: How to remove an OSD without experiencing inactive placement groups
> Date: Fri, Dec 19, 2014 10:48
>
>
>> I understand this is an emerging technology with active development;
>> I just want to check that I'm not missing anything obvious and haven't
>> fundamentally misunderstood how it works. I didn't expect the loss of
>> 1/9 of the devices in the pool to stop IO, especially when every object
>> exists three times.
> This looks like a CRUSH-related problem to me. CRUSH maps sometimes have
> problems with small setups (also see crush tunables). But I suggest
> asking about that on the ceph list.
>
>
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>
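
Regarding the crush tunables Dietmar mentions above, a rough sketch of how 
to inspect and adjust them (assuming a reasonably recent Ceph release; be 
aware that changing tunables triggers some data movement of its own):

      # show the tunables currently in effect
      ceph osd crush show-tunables

      # switch to the recommended profile for the installed release
      ceph osd crush tunables optimal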


-- 
Technical Director
Binovo IT Human Project, S.L.
Telf. 943575997
       943493611
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
www.binovo.es



