[PVE-User] add multiple OSDs to pve cluster

Jeff Palmer jeff at palmerit.net
Tue Mar 21 05:49:52 CET 2017

There are a couple ways you can do this, and in the end, it's entirely up
to you to decide which would work best in your environment.

You can disable backfilling when your cluster needs I/O, you can reduce
the number of backfill threads each OSD can use at a time, you can disable
deep scrubbing during the backfill, and you can change the recovery priority.
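As a rough sketch, the knobs above map onto standard Ceph cluster flags and
OSD options (names as of the Jewel-era releases current at the time; the
values shown are conservative examples, not recommendations):

```shell
# Pause backfill/rebalance while the cluster needs client I/O
ceph osd set nobackfill
ceph osd set norebalance

# Skip deep scrubbing while the backfill runs
ceph osd set nodeep-scrub

# Limit concurrent backfill/recovery work per OSD, and lower the
# priority of recovery ops relative to client ops
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
ceph tell osd.* injectargs '--osd-recovery-op-priority 1'

# When the expansion is done, clear the flags again
ceph osd unset nobackfill
ceph osd unset norebalance
ceph osd unset nodeep-scrub
```

Values injected with `ceph tell ... injectargs` are runtime-only; put them in
ceph.conf if you want them to survive OSD restarts.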

At the end of the day, you will need to decide how to approach this in your
own environment.

I'd probably start by adding one additional failure domain's worth of OSDs,
see what the impact is, and be ready to tune the things I mentioned
above. For example, if your failure domain is at the 'host' level, I'd
consider adding one OSD per host all at once. See what the impact is, adjust
the options, and from there decide whether to add more at once or continue
one at a time.
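On the Proxmox side that one-disk-per-host step can also be done from the
CLI instead of the GUI (a sketch; the device name is an example, and the
`createosd` subcommand is the pveceph name from that era):

```shell
# On each host, create one OSD on the new disk (example device name)
pveceph createosd /dev/sde

# Then watch the rebalance settle before adding the next disk:
# overall health plus per-OSD utilization and weights
ceph -s
ceph osd df tree
```

`ceph -s` will show the misplaced/degraded object counts dropping as the
backfill progresses; wait for HEALTH_OK before the next round.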

Adding small numbers of OSDs at a time means each rebalance is faster,
but in the end you actually move more data in total than a single
rebalance with all OSDs added at once. So there are pros and cons to either
approach.

Not sure if this helped, but maybe the info is useful to you?

On Mar 20, 2017 3:47 PM, "lists" <lists at merit.unu.edu> wrote:

> Hi,
> We would like to expand the ceph storage on our three node pve cluster.
> (from four to eight 4TB disks per server)
> I have physically installed the disks, and they are visible in the proxmox
> gui.
> We assume that the best procedure would be:
> Configure four disks as OSD at the same time (using the pve gui) on pve1,
> wait for the data to be redistributed, and then do the same on pve2 and
> pve3?
> (as an alternative: add one OSD disk at a time. This would perhaps cause
> less data to be moved, but would trigger 4 'expensive and risky' rebuild
> operations per server)
> (I'm asking here, since the docs talk about adding one disk, not about
> multiple disks)
> Since all disks are identical, I don't need to touch anything else,
> right? (disk weight, etc.)
> Or is there even another way to do this best?
> MJ
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
