[PVE-User] Reinstall Proxmox with Ceph storage

Alwin Antreich alwin at antreich.com
Tue Aug 6 15:02:00 CEST 2019


On August 6, 2019 2:46:21 PM GMT+02:00, Gilberto Nunes <gilberto.nunes32 at gmail.com> wrote:
>WOW! This is it??? Geez! So simple.... Thanks a lot
>---
>Gilberto Nunes Ferreira
>
>(47) 3025-5907
>(47) 99676-7530 - Whatsapp / Telegram
>
>Skype: gilberto.nunes36
>
>
>
>
>On Tue, Aug 6, 2019 at 06:48, Alwin Antreich
><a.antreich at proxmox.com> wrote:
>>
>> Hello Gilberto,
>>
>> On Mon, Aug 05, 2019 at 04:21:03PM -0300, Gilberto Nunes wrote:
>> > Hi there...
>> >
>> > Today we have 3 servers running in an HA cluster with Ceph.
>> > All nodes are on Proxmox 5.4.
>> > We have a mix of 3 SAS and 3 SATA disks, but only 2 of the SAS disks
>> > are used in the Ceph storage.
>> > So we would like to reinstall each node onto a 120 GB SSD in order to
>> > use the third SAS disk in the SAS Ceph pool.
>> > We have 2 pools:
>> > SAS - which contains the 2 SAS HDDs
>> > SATA - which contains the 3 SATA HDDs
>> >
>> > Do we need to move the disk images from the SAS pool to the SATA pool?
>> > Or is there any other advice on how to proceed in this case?
>> As you have 3 nodes, you can simply do it one node at a time, assuming
>> you are using size 3 / min_size 2 for your Ceph pools. There is no need
>> to move any images.
>>
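For example, you can verify the replication settings of both pools like
this (pool names SAS and SATA taken from your description, adjust if they
differ):

  # show replication settings per pool
  ceph osd pool get SAS size
  ceph osd pool get SAS min_size
  ceph osd pool get SATA size
  ceph osd pool get SATA min_size

With size 3 / min_size 2 the pools stay available and writable while the
OSDs of one node are down.
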
>> Ceph OSDs are portable, meaning that if you connect (and configure) the
>> newly installed node to the same Ceph cluster, the OSDs should just pop
>> up again.
>>
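Roughly, once the reinstalled node is back in the cluster and has the Ceph
config and keyring again, something along these lines should bring the
OSDs back (assuming they were created with ceph-volume; ceph-disk based
OSDs are normally activated automatically on boot):

  # scan the local disks and activate all OSDs found on them
  ceph-volume lvm activate --all

  # check that the OSDs are up and in again
  ceph osd tree
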
>> First deactivate HA on all nodes. Then you could try to clone the OS
>> disk to the SSD (e.g. with Clonezilla or dd). Or remove the node from
>> the cluster (not from Ceph) and re-install it from scratch. Later on,
>> the old SAS disk can be reused as an additional OSD.
>>
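Just as a rough sketch of the commands involved (the VM ID, node name and
device path below are placeholders only):

  # remove a guest from HA management (repeat for every HA resource)
  ha-manager remove vm:100

  # if you clone: copy the OS disk to the SSD, but only if the SSD is at
  # least as large as the source disk (double-check source and target!)
  dd if=/dev/<old_os_disk> of=/dev/<new_ssd> bs=4M status=progress

  # if you re-install instead: remove the old node from the PVE cluster
  # first (run on a remaining node, with the old node powered off)
  pvecm delnode <nodename>

  # later, add the freed-up SAS disk as an additional OSD
  pveceph createosd /dev/sdX
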
>> --
>> Cheers,
>> Alwin
>>

But don't forget to let the Ceph cluster heal first before you start with the next node. ;)
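
Something like this tells you when it is safe to continue:

  # move on to the next node only when the cluster reports HEALTH_OK
  # and all PGs are active+clean again
  ceph -s
  ceph health detail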


