OK, your best bet is to remove OSDs 3, 14, and 16:

ceph auth del osd.3
ceph osd crush rm osd.3
ceph osd rm osd.3

for each of them.  Each OSD you remove may trigger some
data rebalancing, so be ready for that.
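
If it helps, something like this should cover all three in one
go (just a sketch, assuming a bash shell on a node with admin
access; double-check the IDs against your ceph osd tree output
first):

for id in 3 14 16; do
    # drop the OSD's auth key, its CRUSH entry, and the OSD itself
    ceph auth del osd.$id
    ceph osd crush rm osd.$id
    ceph osd rm osd.$id
done

You can watch the recovery/rebalancing progress with ceph -w (or
ceph -s) while things settle.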
-Sam

On Mon, Aug 12, 2013 at 3:01 PM, Jeff Moskow <j...@rtr.com> wrote:
> Sam,
>
>         3, 14 and 16 have been down for a while and I'll eventually replace 
> those drives (I could do it now)
> but didn't want to introduce more variables.
>
>         We are using RBD with Proxmox, so I think the answer about kernel 
> clients is yes....
>
> Jeff
>
> On Mon, Aug 12, 2013 at 02:41:11PM -0700, Samuel Just wrote:
>> Are you using any kernel clients?  Will osds 3,14,16 be coming back?
>> -Sam
>>
>> On Mon, Aug 12, 2013 at 2:26 PM, Jeff Moskow <j...@rtr.com> wrote:
>> > Sam,
>> >
>> >         I've attached both files.
>> >
>> > Thanks!
>> >         Jeff
>> >
>> > On Mon, Aug 12, 2013 at 01:46:57PM -0700, Samuel Just wrote:
>> >> Can you attach the output of ceph osd tree?
>> >>
>> >> Also, can you run
>> >>
>> >> ceph osd getmap -o /tmp/osdmap
>> >>
>> >> and attach /tmp/osdmap?
>> >> -Sam
>> >>
>> >> On Fri, Aug 9, 2013 at 4:28 AM, Jeff Moskow <j...@rtr.com> wrote:
>> >> > Thanks for the suggestion.  I had tried stopping each OSD for 30 
>> >> > seconds,
>> >> > then restarting it, waiting 2 minutes and then doing the next one (all 
>> >> > OSD's
>> >> > eventually restarted).  I tried this twice.
>> >> >
>> >> > --
>> >> >
>> >> > _______________________________________________
>> >> > ceph-users mailing list
>> >> > ceph-users@lists.ceph.com
>> >> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>> > --
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com