Hello Christoph,
please read the documentation about removing/replacing a node carefully:
you must not reinstall it with the same IP and/or name, as this can
crash your whole cluster!
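Roughly, removing the old node looks like this (a sketch, assuming node3
stays powered off and the remaining nodes keep quorum):

    pvecm nodes           # on a remaining node: node3 should still be listed
    pvecm delnode node3   # remove it from the cluster
    pvecm status          # verify quorum afterwards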
I would use node6 to take over the Ceph part and then reinstall the node
with a new IP and name.
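A rough sketch of the move (assuming plain ceph-volume/LVM OSDs on node3):

    ceph osd set noout                # don't rebalance while the OSDs are down
    systemctl stop ceph-osd.target    # on node3: stop all OSD daemons
    # ...physically move the OSD disks from node3 to node6...
    ceph-volume lvm activate --all    # on node6: detect and start the moved OSDs
    ceph osd unset noout
    ceph -s                           # all OSDs should be back up/in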
For the boot disks we use two or three mirrored ZFS disks, to be safe...
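Replacing a failing mirror member is then routine, roughly like this
(device names and the default PVE partition layout are assumptions here):

    # /dev/sda = healthy mirror disk, /dev/sdb = failing, /dev/sdc = new
    zpool status rpool                         # identify the failing disk
    sgdisk /dev/sda -R /dev/sdc                # copy partition table from the healthy disk
    sgdisk -G /dev/sdc                         # randomize the GUIDs of the copy
    zpool replace rpool /dev/sdb3 /dev/sdc3    # resilver onto the new disk
    proxmox-boot-tool format /dev/sdc2         # make the new disk bootable as well
    proxmox-boot-tool init /dev/sdc2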
Best regards
Ralf
On 09.07.2021 at 11:43, Christoph Weber wrote:
Hi everybody,
we have one Proxmox node (node3) with Ceph whose boot disk is beginning to
fail. (In fact, we have already experienced some corrupted system libraries
that caused a kernel panic on boot, until we were able to identify the
affected library and replace it with a working copy.)
We see two possible ways:
a) clone the partially defective disk to a new SSD, which would keep all
configuration but might also copy defective files (sketch below the list)
b) install a fresh copy of Proxmox 6.4, with two subvariants:
b1) configure only the same network interface address and name, and join the
Proxmox and Ceph cluster once the node has booted up (join command below)
b2) copy the network configuration and the /etc/ceph folder from the defective
node to the new disk before booting, and then join the Proxmox cluster (copy
sketched below). In this case the question is whether there are more files to
copy, such as /etc/corosync?
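For a), GNU ddrescue would be the usual tool, since it skips over bad
sectors and keeps a map of them (a sketch; device names are assumptions,
and the target must be at least as large as the source):

    # /dev/sda = failing boot disk, /dev/sdb = new SSD
    ddrescue -f -n  /dev/sda /dev/sdb rescue.map   # first pass, skip bad areas
    ddrescue -f -r3 /dev/sda /dev/sdb rescue.map   # then retry bad sectors 3 times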
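For b1), the join itself should be the usual one-liner (assuming 10.0.0.1
is one of the existing cluster members):

    # on the freshly installed node
    pvecm add 10.0.0.1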
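For b2), the copy I have in mind would be roughly this, with both disks
mounted on a rescue system (the /mnt/old and /mnt/new paths are assumptions):

    cp -a /mnt/old/etc/network/interfaces /mnt/new/etc/network/interfaces
    cp -a /mnt/old/etc/hosts              /mnt/new/etc/hosts
    cp -a /mnt/old/etc/ceph               /mnt/new/etc/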
Method b1 seems the safest to me, but I'm not 100% sure whether it might be a
problem to join node3 to the cluster again with the same name and IP address
it had before.
Would we have to prepare Ceph or Proxmox for this? Remove node3 from Ceph
and/or Proxmox before we re-join it?
Additional bonus: we have a fresh node (node6) with no disks set up yet; we
might move the Ceph disks from node3 to the new node6 before we replace the
boot disk.
Any opinions/suggestions would be greatly appreciated.
_______________________________________________
pve-user mailing list
[email protected]
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user