It works again, but first I had to do a stop/start of the OSD from an admin node:
# ceph orch daemon stop osd.2
# ceph orch daemon start osd.2
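To double-check that the OSD really came back, the standard status commands
on the admin node should be enough:
# ceph orch ps
# ceph osd tree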
What an adventure, thanks again so much for your help!
‐‐‐ Original Message ‐‐‐
On Thursday, May 27, 2021 3:37 PM, Eugen Block wrote:
> That file is in the regular filesystem; you can copy it from a different
> osd directory, it's just a minimal ceph.conf.
Nicely spotted about the missing file; it looks like I have the same case, as
you can see from the syslog below:
May 27 15:33:12 ceph1f systemd[1]:
ceph-8d47792c-987d-11eb-9bb6-a5302e00e1fa@osd.2.service: Scheduled restart job,
restart counter is at 1.
May 27 15:33:12 ceph1f systemd[1]: Stopped
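In case it helps someone, the full log of that failing unit should be readable
via journalctl, using the unit name from the syslog above:
# journalctl -u ceph-8d47792c-987d-11eb-9bb6-a5302e00e1fa@osd.2.service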
I managed to remove that wrongly created cluster on the node by running:
sudo cephadm rm-cluster --fsid 91a86f20-8083-40b1-8bf1-fe35fac3d677 --force
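If I understand cephadm correctly, "cephadm ls" on the node should now only
list daemons belonging to the real cluster:
ubuntu@ceph1f:~$ sudo cephadm ls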
So I am getting closer, but the osd.2 service on that node simply does not
want to start, as you can see below:
# ceph orch daemon start osd.2
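The orchestrator's view of that host should show what state the daemon is
stuck in, something like:
# ceph orch ps ceph1f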
I am trying to run "cephadm shell" on that newly installed OSD node, and it
seems that I have now unfortunately created a new cluster ID, as it shows:
ubuntu@ceph1f:~$ sudo cephadm shell
ERROR: Cannot infer an fsid, one must be specified:
['8d47792c-987d-11eb-9bb6-a5302e00e1fa',
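Presumably the fsid just needs to be passed explicitly when cephadm cannot
infer it; with the real cluster's fsid from the list above that would be:
ubuntu@ceph1f:~$ sudo cephadm shell --fsid 8d47792c-987d-11eb-9bb6-a5302e00e1fa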
You are right, I used the FSID of the OSD and not of the cluster in the deploy
command. So now I tried again with the cluster ID as the FSID, but it still
does not work, as you can see below:
ubuntu@ceph1f:~$ sudo cephadm deploy --name osd.2 --fsid
8d47792c-987d-11eb-9bb6-a5302e00e1fa
Deploy daemon osd.2 ...
Hi Eugen,
What a good coincidence ;-)
So I ran "cephadm ceph-volume lvm list" on the OSD node which I re-installed,
and it saw my osd.2 OSD. So far so good, but the following suggested command
does not work, as you can see below:
ubuntu@ceph1f:~$ sudo cephadm deploy --name osd.2 --fsid
That file is in the regular filesystem; you can copy it from a different
osd directory, it's just a minimal ceph.conf. The directory
for the failing osd should now be present after the failed attempts.
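Untested sketch, assuming the default cephadm layout under
/var/lib/ceph/<cluster-fsid>/<daemon>/, with osd.1 standing in for any
healthy OSD on that node:
cp /var/lib/ceph/<cluster-fsid>/osd.1/config /var/lib/ceph/<cluster-fsid>/osd.2/config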
Quoting mabi:
Nicely spotted about the missing file; it looks like I have the same case.
Can you try with both cluster and osd fsid? Something like this:
pacific2:~ # cephadm deploy --name osd.2 --fsid
acbb46d6-bde3-11eb-9cf2-fa163ebb2a74 --osd-fsid
bc241cd4-e284-4c5a-aad2-5744632fc7fc
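If I'm not mistaken, the value for --osd-fsid is the "osd fsid" field that
ceph-volume prints for each OSD:
pacific2:~ # cephadm ceph-volume lvm list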
I tried to reproduce a similar scenario and found a missing config
file in the osd directory.
ubuntu@ceph1f:~$ sudo cephadm deploy --name osd.2 --fsid
91a86f20-8083-40b1-8bf1-fe35fac3d677
Deploy daemon osd.2 ...
Which fsid is it, the cluster's or the OSD's? According to the
'cephadm deploy' help page it should be the cluster fsid.
Quoting mabi:
Hi Eugen,
What a good coincidence ;-)
Hi,
I posted a link to the docs [1], [2] just yesterday ;-)
You should see the respective OSD in the output of 'cephadm
ceph-volume lvm list' on that node. You should then be able to get it
back to cephadm with
cephadm deploy --name osd.x
But I haven't tried this yet myself, so please