[PVE-User] confirmation on osd replacement

mj lists at merit.unu.edu
Thu Nov 26 21:31:40 CET 2020


Hi Alejandro,

Thanks for your feedback, much appreciated!

Enjoy your weekend!

MJ

On 11/26/20 4:39 PM, Alejandro Bonilla wrote:
> 
> 
>> On Nov 26, 2020, at 2:54 AM, mj <lists at merit.unu.edu> wrote:
>>
>> Hi,
>>
>> Yes, perhaps I should have given more details :-)
>>
>> On 11/25/20 3:03 PM, Alejandro Bonilla wrote:
>>
>>> Have a look at /etc/fstab for any disk path mounts - since I think Proxmox uses lvm mostly, you shouldn’t see a problem.
>> I will, thanks!
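For the fstab check above, a quick sketch (device names are just examples):

    grep -E '^/dev/sd' /etc/fstab
    findmnt --fstab

Ceph mounts its OSDs itself (via udev/systemd) rather than through fstab, so normally only non-Ceph path mounts would show up here.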
>>
>>> What is the pool replication configuration or ec-profile? How many nodes in the cluster?
>> We're on 3/2 replication, no EC. It's a three-node (small) cluster, 8 filestore OSDs per node, with an SSD journal (wear level 75%).
> 
> If it’s 3 replicas, min 2, then you should be able to clear all drives from a system at once and replace them all to minimize the amount of times the cluster will end up rebalancing.
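A rough sketch of the checks before clearing a whole node, assuming a pool named "rbd" (substitute the real pool name):

    ceph osd pool get rbd size
    ceph osd pool get rbd min_size
    ceph osd set noout     # before taking the node's OSDs down
    ceph osd unset noout   # once the new OSDs are back in and healthy

With size 3 / min_size 2, the other two copies keep the pool writable while one node's OSDs are out.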
> 
>>
>> We will be using Samsung PM833 Bluestore OSDs of the same size
>> (4TB spinners to 3.83TB PM833 SSDs).
>>
>>> Are you planning to remove all disks per server at once or disk by disk?
>> I was planning to:
>> - first add two SSDs to each server, and gradually increase their weight
> 
> Adding two per server to make sure the disk replacement works as expected is a good idea - I don’t think you’ll gain anything with a gradual re-weight.
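If a gradual re-weight were still wanted, one way (just a sketch, the OSD id and weights are examples) is to let new OSDs come up with zero CRUSH weight and raise it in steps:

    # set osd_crush_initial_weight = 0 in ceph.conf before creating the OSDs
    ceph osd crush reweight osd.24 1.0
    ceph osd crush reweight osd.24 3.49   # roughly the full weight of a ~4TB drive

but as noted above, with 3/2 replication the total data movement ends up about the same.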
> 
>> - then, disk-by-disk, replace the 8 (old) spinners with the 6 remaining SSDs
> 
> IF you have two other replicas, then a full system disk replacement should be no trouble - especially after two other SSDs were added and most data was shuffled around.
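For the disk-by-disk part, the per-OSD cycle would look roughly like this on Proxmox (OSD id and device are placeholders, and it's worth waiting for HEALTH_OK between drives):

    ceph osd out 12
    systemctl stop ceph-osd@12
    pveceph osd destroy 12    # optionally with --cleanup to wipe the old disk
    pveceph osd create /dev/sdX
    ceph -s                   # wait for recovery to finish before the next disk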
> 
>>
>>> Will all new drives equal or increase the disk capacity of the cluster?
>> Approx equal yes.
>> The aim is not to increase space.
> 
> There are other reasons why I ask, specifically based on PG count and balancing of the cluster.
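For reference, the PG count and current distribution can be checked with (pool name again an example):

    ceph osd df tree
    ceph osd pool get rbd pg_num
    ceph balancer status

Since total capacity and OSD count stay roughly the same, the existing pg_num most likely doesn't need to change.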
> 
>>
>> MJ
>>
> 



