I just got my small Ceph cluster running. I run 6 OSDs on a single server,
essentially as a replacement for mdraid.

I tried to simulate a hard drive (OSD) failure: I marked the OSD out and
stopped it, zapped the disk, and then prepared and activated it again. That
worked, but I ended up with one extra OSD, and the old one still shows up in
the ceph -w output.
I guess this is not how I am supposed to do it?
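
Roughly what I ran, from memory (osd.5 and /dev/sdf are only stand-ins for
the real ID and device; this is a sysvinit setup and I used ceph-disk on the
node itself):

    # mark the OSD out and stop its daemon
    ceph osd out osd.5
    sudo service ceph stop osd.5

    # wipe the disk and set it up again
    sudo ceph-disk zap /dev/sdf
    sudo ceph-disk prepare /dev/sdf
    sudo ceph-disk activate /dev/sdf1

The prepare/activate step gave the disk a brand-new OSD id instead of reusing
the old one, which is presumably where the extra OSD comes from.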

The documentation recommends manually editing the configuration; however,
there are no osd entries in my /etc/ceph/ceph.conf.
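
For reference, I assume the docs mean per-OSD sections along these lines
(the hostname is just an example); my ceph.conf has nothing like this:

    [osd.0]
        host = storage1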

So what would be the best way to replace a failed OSD?

Dmitry

