[PVE-User] Node reinstallation [SOLVED]
Phil Schwarz
infolist at schwarz-fr.net
Sun Apr 1 16:24:10 CEST 2018
On 01/04/2018 at 15:50, Phil Schwarz wrote:
> On 01/04/2018 at 14:43, Phil Schwarz wrote:
>> Hi,
>>
>> Let's assume a working 5-node Proxmox cluster.
>>
>> One node dies because of a system disk failure.
>>
>> There are enough OSDs to be safe with regard to replication.
>>
>> Before running into weird issues, I'd like some advice on the
>> best way to proceed.
>>
>> Step 1. Reinstall and reconfigure the new node (same IP, same SSH
>> keys, passwords, ...).
>>
>> Step 2. pvecm add IP-of-cluster -force
>>
>> Step 3. Let the rebalance settle automatically into a more stable
>> state with the new (same?) crushmap?
>>
>> My questions:
>>
>> - Is step 2 the right approach?
>> - Will the OSDs (which remained unchanged) rejoin the OSD tree
>> correctly, with the same weights, same IDs, and so on?
>> - Is there anything about the whole operation I should be wary of?
>>
>> Thanks in advance.
>>
>> Best regards
> OK, I lost the SSH keys from the dead node...
> Any issue with that (other than setting up better backups...)?
> Thanks
> Best regards
>
>
OK, for the record, here is the full procedure that worked.
After the reinstall:
cat /etc/apt/sources.list.d/ceph.list should show
deb http://download.proxmox.com/debian/ceph-luminous stretch main
cat /etc/apt/sources.list.d/pve-enterprise.list should show
deb http://download.proxmox.com/debian/pve stretch pve-no-subscription
#deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise
apt-get update && apt-get dist-upgrade
Update /etc/hosts
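For example (just a sketch; the hostnames and IPs below are placeholders for your own cluster members):

# /etc/hosts should list every cluster node, e.g.:
192.168.1.11  pve1.example.net pve1
192.168.1.12  pve2.example.net pve2
192.168.1.13  pve3.example.net pve3
192.168.1.14  pve4.example.net pve4
192.168.1.15  pve5.example.net pve5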
Update /etc/network/interfaces
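Something along these lines (again a sketch; the interface name, address and gateway are assumptions to adapt):

auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.11
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0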
Copy the SSH public keys (full mesh between all cluster members)
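One way to build the mesh (node names are placeholders; note that on Proxmox /root/.ssh/authorized_keys is a symlink into /etc/pve/priv/, so once the node has rejoined the cluster, pvecm updatecerts should fix up keys and known_hosts as well):

# push the reinstalled node's key to the other members (and theirs back):
ssh-copy-id root@pve2
ssh-copy-id root@pve3
# ... repeat for the remaining nodes, and in the reverse direction
# after joining, refresh cluster-wide certs and known_hosts:
pvecm updatecerts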
pvecm add IP-of-cluster (no need to force!)
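For example (the IP is a placeholder for any existing cluster member); it is worth checking the result right away:

pvecm add 192.168.1.12   # IP of a node already in the cluster
pvecm status             # verify quorum and that all 5 nodes are listed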
pveceph install --version luminous
ln -s /etc/pve/ceph.conf /etc/ceph/ceph.conf (the symlink was missing after reinstall)
scp root@another_server:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
reboot (so that the OSDs get mounted again)
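After the reboot it is worth verifying that Ceph is happy again, e.g.:

# the OSDs should reappear in the tree with their old IDs and weights:
ceph osd tree
# watch recovery/rebalance until the cluster reports HEALTH_OK:
ceph -s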
Success!
Love Proxmox, Love Ceph.
Have fun.
Best regards