Re: [ceph-users] How do you replace an OSD?
>> I have tried to simulate a hard drive (OSD) failure: removed the OSD >> (out+stop), zapped it, and then >> prepared and activated it. It worked, but I ended up with one extra OSD (and >> the old one still showing in the ceph -w output). >> I guess this is not how I am supposed to do it? > It is. You can remove the old entry with 'ceph osd crush rm N' and/or > 'ceph osd rm N', or just leave it there. Thank you. >> Documentation recommends manually editing the configuration, however, there >> are no osd entries in my /etc/ceph/ceph.conf > That's old info; where did you read it so we can adjust the docs? Here you go: http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual (step 4,5) > Thanks! > sage ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Re: [ceph-users] How do you replace an OSD?
On Tue, 13 Aug 2013, Dmitry Postrigan wrote:

> I just got my small Ceph cluster running. I run 6 OSDs on the same server to
> basically replace mdraid.
>
> I have tried to simulate a hard drive (OSD) failure: removed the OSD
> (out+stop), zapped it, and then prepared and activated it. It worked, but I
> ended up with one extra OSD (and the old one still showing in the ceph -w
> output). I guess this is not how I am supposed to do it?

It is. You can remove the old entry with 'ceph osd crush rm N' and/or
'ceph osd rm N', or just leave it there.

> Documentation recommends manually editing the configuration, however, there
> are no osd entries in my /etc/ceph/ceph.conf

That's old info; where did you read it so we can adjust the docs?

Thanks!
sage

> So what would be the best way to replace a failed OSD?
>
> Dmitry
[ceph-users] How do you replace an OSD?
I just got my small Ceph cluster running. I run 6 OSDs on the same server to basically replace mdraid.

I have tried to simulate a hard drive (OSD) failure: removed the OSD (out+stop), zapped it, and then prepared and activated it. It worked, but I ended up with one extra OSD (and the old one still showing in the ceph -w output). I guess this is not how I am supposed to do it?

Documentation recommends manually editing the configuration, however, there are no osd entries in my /etc/ceph/ceph.conf

So what would be the best way to replace a failed OSD?

Dmitry