Hi,

We have the same issue on our lab cluster. The only way I found to get the
OSDs under the new specification was to drain, remove, and re-add the host.
The orchestrator was then happy to recreate the OSDs under the correct
specification.
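
For reference, here is a rough sketch of the cycle we used, with host01 as a
placeholder hostname (watch cluster health and rebalancing between steps):

ceph orch host drain host01            # schedule removal of all daemons on the host
ceph orch osd rm status                # wait until the OSD removals have finished
ceph orch host rm host01               # drop the host from the orchestrator
ceph orch host add host01              # re-add it (address as appropriate)
ceph orch host label add host01 osd    # restore the label so the placement matches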

But I do not think this is a good solution for a production cluster. We are
still looking for a smoother way to do it.
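
One thing we have not tried yet, but which should at least preview what the
orchestrator would match without changing anything, is a dry run of the
exported spec (osd_spec.yml is just a scratch file name here):

ceph orch ls osd --export > osd_spec.yml     # dump the current specification
ceph orch apply -i osd_spec.yml --dry-run    # preview matching hosts/devices

Whether re-applying the spec would actually adopt the already-deployed OSDs
is exactly what we do not know.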

Luis Domingues

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐

On Monday, October 4th, 2021 at 10:01 PM, David Orman <orma...@corenode.com> 
wrote:

> We have an older cluster which has been iterated on many times. It's
> always been cephadm deployed, but I am certain the OSD specification
> used has changed over time. I believe at some point, it may have been
> 'rm'd.
>
> So here's our current state:
>
> root@ceph02:/# ceph orch ls osd --export
> service_type: osd
> service_id: osd_spec_foo
> service_name: osd.osd_spec_foo
> placement:
>   label: osd
> spec:
>   data_devices:
>     rotational: 1
>   db_devices:
>     rotational: 0
>   db_slots: 12
>   filter_logic: AND
>   objectstore: bluestore
> ---
>
> service_type: osd
> service_id: unmanaged
> service_name: osd.unmanaged
> placement: {}
> unmanaged: true
> spec:
>   filter_logic: AND
>   objectstore: bluestore
>
> root@ceph02:/# ceph orch ls
> NAME              PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
> crash                    7/7      10m ago    14M  *
> mgr                      5/5      10m ago    7M   label:mgr
> mon                      5/5      10m ago    14M  label:mon
> osd.osd_spec_foo         0/7      -          24m  label:osd
> osd.unmanaged            167/167  10m ago    -    <unmanaged>
>
> The osd_spec_foo spec would normally match these devices, so we're
> curious how we can get these 'managed' under this service specification.
>
> What's the appropriate way to effectively 'adopt' these pre-existing
> OSDs into the service specification that we want them to be managed
> under?
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
