Hi,

I intended to run just destroy and re-use the ID as described in the manual,
but it doesn't seem to work. It seems I'm still unable to re-use the ID.

Thanks.
/stwong


From: Paul Emmerich <paul.emmer...@croit.io>
Sent: Friday, July 5, 2019 5:54 PM
To: ST Wong (ITSC) <s...@itsc.cuhk.edu.hk>
Cc: Eugen Block <ebl...@nde.ag>; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-volume failed after replacing disk


On Fri, Jul 5, 2019 at 11:25 AM ST Wong (ITSC) <s...@itsc.cuhk.edu.hk> wrote:
Hi,

Yes, I ran those commands before:

# ceph osd crush remove osd.71
device 'osd.71' does not appear in the crush map
# ceph auth del osd.71
entity osd.71 does not exist

which is probably the reason why you couldn't recycle the OSD ID.

Either run just destroy and re-use the ID, or run purge and don't re-use the ID.
Manually deleting auth and crush entries is no longer needed since purge was 
introduced.
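
A rough sketch of the two paths, re-using the device paths from your earlier
command (adjust to your setup):

To keep the ID, mark it destroyed and recreate it with the same ID:

# ceph osd destroy 71 --yes-i-really-mean-it
# ceph-volume lvm create --bluestore --data /dev/data/lv01 --block.db /dev/db/lv01 --osd-id 71

Or take a fresh ID: purge removes the crush, auth and osdmap entries in one go,
then create without --osd-id:

# ceph osd purge 71 --yes-i-really-mean-it
# ceph-volume lvm create --bluestore --data /dev/data/lv01 --block.db /dev/db/lv01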


Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


Thanks.
/stwong

-----Original Message-----
From: ceph-users <ceph-users-boun...@lists.ceph.com>
On Behalf Of Eugen Block
Sent: Friday, July 5, 2019 4:54 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph-volume failed after replacing disk

Hi,

did you also remove that OSD from crush and also from auth before recreating it?

ceph osd crush remove osd.71
ceph auth del osd.71
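
If you're not sure whether those entries are still present, a quick way to check
(just a suggestion, not part of the documented procedure) is to look for the OSD
in the crush tree and query its auth entry:

# ceph osd tree | grep osd.71
# ceph auth get osd.71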

Regards,
Eugen


Quoting "ST Wong (ITSC)" <s...@itsc.cuhk.edu.hk>:

> Hi all,
>
> We replaced a faulty disk in one of our N OSDs and tried to follow the
> steps under "Replacing an OSD" in
> http://docs.ceph.com/docs/nautilus/rados/operations/add-or-rm-osds/,
> but got an error:
>
> # ceph osd destroy 71 --yes-i-really-mean-it
> # ceph-volume lvm create --bluestore --data /dev/data/lv01 --osd-id 71 --block.db /dev/db/lv01
> Running command: /bin/ceph-authtool --gen-print-key
> Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd tree -f json
> -->  RuntimeError: The osd ID 71 is already in use or does not exist.
>
> "ceph -s" still shows N OSDs.  I then removed it with "ceph osd rm 71".
> Now "ceph -s" shows N-1 OSDs and id 71 no longer appears in "ceph osd
> ls".
>
> However, repeating the ceph-volume command still gives the same error.
> We're running Ceph 14.2.1.  I must have missed some steps.  Would
> anyone please help?  Thanks a lot.
>
> Rgds,
> /stwong



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
