ceph-deploy doesn't support that. You can use ceph-disk or ceph-volume
directly (with basically the same syntax as ceph-deploy), but you can
only explicitly re-use an OSD id if you mark that OSD as destroyed first.
I.e., the proper way to replace an OSD while avoiding unnecessary data
movement is:

ceph osd destroy osd.XX
ceph-volume lvm prepare ... --osd-id XX

Also, check out "ceph osd purge", which removes an OSD in one simple step.
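For reference, the two steps above can be sketched as a full replacement
sequence. This is a sketch, not an exact recipe: the OSD id (10) and the
device path (/dev/sdX) are placeholders, and it assumes a Luminous-or-later
cluster where "ceph osd destroy" and ceph-volume are available.

```shell
OSD_ID=10        # placeholder: id of the OSD being replaced
DEV=/dev/sdX     # placeholder: the new disk

# Mark the OSD as destroyed. This keeps its id and CRUSH position
# reserved so the replacement can take them over without rebalancing:
ceph osd destroy $OSD_ID --yes-i-really-mean-it

# Prepare the new disk, explicitly re-using the old id:
ceph-volume lvm prepare --data $DEV --osd-id $OSD_ID

# Activate it (the OSD fsid is shown by "ceph-volume lvm list"):
ceph-volume lvm activate $OSD_ID <osd-fsid>
```

Because the id and CRUSH weight stay the same, only the PGs that lived on
the replaced disk need to backfill, instead of triggering cluster-wide
data movement.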
Paul
Am Mo., 29. Okt. 2018 um 14:43 Uhr schrieb Jin Mao :
>
> Gents,
>
> My cluster had a gap in the OSD sequence numbers at a certain point. Basically,
> because of a missing "osd auth del/rm" in a previous disk replacement task for
> osd.17, a new osd.34 was created. It did not really bother me until recently,
> when I tried to replace all the smaller disks with bigger ones.
>
> Ceph also seems to pick the next available OSD sequence number. When I
> replaced osd.18, the disk came up online as osd.17. When I did osd.19, it
> became osd.18. This generated more backfill_wait PGs than sticking to the
> original OSD numbers would have.
>
> Using ceph-deploy with version 10.2.3, is there a way to specify the OSD id
> when doing osd activate?
>
> Thank you.
>
> Jin.
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90