Hi Robert
>From: Robert Sander <[email protected]>
>Sent: Friday, March 7, 2025 7:02 AM
>
>For the original issue: Do you have an active (managed) OSD service in
>the orchestrator?
In fact, I do - I hadn't noticed it before. The cluster was configured
by a contractor a while ago and my Ceph expertise still leaves
a lot to be desired.
# ceph orch ls --service-type=osd
NAME          PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
osd.osd_spec         167      10m ago    18M  node-osd5
>If you can please post the output of
>ceph orch ls osd --export
# ceph orch ls osd --export
service_type: osd
service_id: osd_spec
service_name: osd.osd_spec
placement:
  host_pattern: node-osd5
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
  filter_logic: AND
  objectstore: bluestore
>It looks like you did something on the new disk manually while the
>orchestrator was already creating an OSD.
I believe that was the case, indeed. Do you think that if I zap
the disk using the orchestrator with:
ceph orch device zap node-osd1 /dev/sdah
and wait for it to be picked up by the orchestrator, which would then
create the OSD according to the OSD spec, this would work?
My main concern is some half-deployed OSD still lingering,
such as that "device2" entry in the crush map.
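A sketch of how I'd check for those leftovers before zapping (assuming
the hostname and device from above; the exact OSD id to purge, if any,
would come from the first two commands, so that part is hypothetical):

```shell
# 1. Look for stray entries in the CRUSH map (e.g. that "device2" entry):
ceph osd tree

# 2. Check for leftover LVM state on the host itself (run on node-osd1):
cephadm ceph-volume lvm list /dev/sdah

# 3. If an orphaned OSD id turns up, remove it completely first
#    (replace <id> with the actual id found above):
#    ceph osd purge <id> --yes-i-really-mean-it

# 4. Then zap the device so the managed OSD service can redeploy it
#    (some releases require --force for devices with existing data):
ceph orch device zap node-osd1 /dev/sdah --force
```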
Thank you
-
Gustavo
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]