Dear Anthony, thank you for your prompt response.
I have fixed the problem.
Since I had already removed all the OSDs from my third node, this time I
removed the ceph-node3 host from my Ceph cluster and then re-added it as a
new cluster node. The method I used:
ceph osd crush remove
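The command above appears truncated in the archive; the usual removal and
re-add sequence for a cephadm-managed host looks roughly like the sketch
below. This is a hedged reconstruction, not the exact commands from the
original mail; the OSD IDs and the host IP are placeholders.

```shell
# For each OSD that lived on ceph-node3 (OSD ID is a placeholder):
ceph osd crush remove osd.<id>   # drop the OSD from the CRUSH map
ceph auth del osd.<id>           # delete its authentication key
ceph osd rm osd.<id>             # remove it from the OSD map

# Remove the host itself from the cephadm inventory ...
ceph orch host rm ceph-node3

# ... then re-add it so cephadm can redeploy daemons on it
ceph orch host add ceph-node3 <host-ip>
```

Note that rebalancing starts as soon as OSDs are removed, so it is worth
checking `ceph -s` between steps before proceeding.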
Hello,
I have a Ceph cluster deployed with the cephadm orchestrator. It consists of
three Dell PowerEdge R730XD servers. The hard drives used as OSDs were
configured as RAID 0. The configuration summary is as follows:
ceph-node1 (mgr, mon)
Public network: 172.16.7.11/22
Cluster