[PVE-User] Question about best way to replace boot disk of a proxmox + ceph node

Christoph Weber Christoph.Weber at xpecto.com
Fri Jul 9 11:43:59 CEST 2021


Hi everybody,

We have one Proxmox node (node3) with Ceph whose boot disk is beginning to fail. In fact, we have already seen corrupted system libraries that caused a kernel panic on boot, until we identified the affected library and replaced it with a working copy.

We see two possible ways:
a) clone the partially defective disk to a new SSD, which would keep all configuration but might also copy corrupted files (see the ddrescue sketch after this list)
b) install a fresh copy of Proxmox 6.4, with two subvariants:
   b1) configure only the same network address and hostname, then join the Proxmox and Ceph clusters once the node has booted
   b2) copy the network configuration and the /etc/ceph folder from the defective node to the new disk before booting, and then join the Proxmox cluster. In this case the question is whether more files have to be copied, e.g. /etc/corosync?
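
For (a), a minimal sketch of how we would attempt the clone with GNU ddrescue (the device names /dev/sdX and /dev/sdY are placeholders for the old and new disk; the new SSD must be at least as large as the old one):

    # first pass: copy everything readable, skip the slow scraping phase
    ddrescue -f -n /dev/sdX /dev/sdY /root/rescue.map
    # second pass: retry the bad areas up to three times, resuming via the map file
    ddrescue -f -r3 /dev/sdX /dev/sdY /root/rescue.map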

Method b1 seems the safest to me, but I'm not 100% sure whether it could cause problems to rejoin node3 to the cluster with the same name and IP address as before.
Would we have to prepare Ceph or Proxmox for this? Remove node3 from Ceph and/or Proxmox before we rejoin it?
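
If removing it first is the way to go, my current (untested) understanding is that it would look roughly like the following; please correct me if any step is wrong or missing:

    # on a healthy cluster member: remove node3 from the Proxmox cluster
    pvecm delnode node3
    # remove node3's monitor from Ceph, if it runs one
    ceph mon remove node3
    # later, on the freshly installed node3: rejoin the Proxmox cluster
    pvecm add <IP-of-an-existing-cluster-node>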

Additional bonus: we have a fresh node (node6) set up without disks; we might move the Ceph OSD disks from node3 to the new node6 before we replace the boot disk.
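
Our assumption is that, after physically moving the disks and giving node6 the Ceph packages and /etc/ceph configuration, the OSDs could be reactivated there with ceph-volume; is that correct?

    # on node6: scan the moved LVM-based OSDs and start them
    ceph-volume lvm activate --all
    # check that the OSDs now show up under node6
    ceph osd tree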

Any opinions/suggestions would be greatly appreciated

-- 
Christoph




