[ceph-users] Re: Unable to add OSD after removing completely

2024-02-13 Thread salam
Thank you for your prompt response, Dear Anthony.

I have fixed the problem.

Since I had already removed all the OSDs from my third node, this time I removed 
ceph-node3 from my Ceph cluster entirely and then re-added it as a new cluster 
node. I followed this procedure:

ceph osd crush remove ceph-node3
ceph orch host drain ceph-node3

After the node had been drained of all its services:

ceph orch host rm ceph-node3 --offline --force
ceph orch apply osd --all-available-devices --unmanaged=false
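As an aside, before running the host removal I believe the drain can be confirmed 
with something like:

ceph orch osd rm status
ceph orch ps ceph-node3

The first should show no pending OSD removals and the second no remaining daemons 
on the node.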

Then I logged into ceph-node3, disabled and removed all Ceph-related services, 
removed all Ceph-related Docker images, and deleted all Ceph-related directories 
in the OS filesystem, notably /etc/ceph, /var/lib/ceph/ and /var/log/ceph.
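Roughly, that cleanup amounted to something like the following sketch (the unit 
names, container ID and image tag below are placeholders and will differ per setup):

# stop and disable any leftover Ceph systemd units
sudo systemctl stop ceph.target
sudo systemctl disable ceph.target
# find and remove leftover Ceph containers and images
sudo docker ps -a | grep ceph
sudo docker rm -f <container-id>
sudo docker rmi quay.io/ceph/ceph:<tag>
# remove the Ceph directories from the OS filesystem
sudo rm -rf /etc/ceph /var/lib/ceph /var/log/ceph
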
Then I went back to ceph-node1, where the cephadm orchestrator is installed, and 
added ceph-node3 to the Ceph cluster again:

ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node3
ceph orch host add ceph-node3 10.10.10.13
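At this point, I believe something like the following confirms the host is back in 
the inventory and that its disks are visible to the orchestrator:

ceph orch host ls
ceph orch device ls ceph-node3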

Thus the node was re-added to the cluster. This time all the non-RAID hard drives 
were automatically added as OSDs, and the cluster is returning to a normal state. 
Currently, the degraded PGs are recovering.
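Recovery progress is easy to watch with the usual status commands, for example:

ceph -s
ceph pg stat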

Thank you

> Anthony D'Atri wrote:
> You probably have the H330 HBA, rebadged LSI.  You can set the “mode” or 
> “personality”
> using storcli / perccli.  You might need to remove the VDs from them too.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Unable to add OSD after removing completely

2024-02-12 Thread salam
Hello,

I have a Ceph cluster created by the cephadm orchestrator. It consists of 3 Dell 
PowerEdge R730XD servers. The hard drives used as OSDs in this cluster were 
configured as RAID 0. The configuration summary is as follows:
ceph-node1 (mgr, mon)
  Public network: 172.16.7.11/22
  Cluster network: 10.10.10.11/24, 10.10.10.14/24
ceph-node2 (mgr, mon)
  Public network: 172.16.7.12/22
  Cluster network: 10.10.10.12/24, 10.10.10.15/24
ceph-node3 (mon)
  Public network: 172.16.7.13/22
  Cluster network: 10.10.10.13/24, 10.10.10.16/24

Recently I removed all OSDs from node3 with the following set of commands:
  sudo ceph osd out osd.3
  sudo systemctl stop ceph@osd.3.service
  sudo ceph osd rm osd.3
  sudo ceph osd crush rm osd.3
  sudo ceph auth del osd.3

After this, I configured all the OSD hard drives as non-RAID from the server 
settings and tried to add them as OSDs again. First I used the following command 
to add them automatically:
  ceph orch apply osd --all-available-devices --unmanaged=false
But this generated the following error in my Ceph GUI console:
  CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s): 
osd.all-available-devices
I am also unable to add the hard drives manually with the following command:
  sudo ceph orch daemon add osd ceph-node3:/dev/sdb
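
From what I understand, more detail on why the devices are being rejected should 
be visible with something like the following (commands as I understand them from 
the cephadm docs):
  ceph orch device ls ceph-node3 --wide
  ceph log last cephadm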

Can anyone please help me with this issue?

I really appreciate any help you can provide.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io