I've got Ceph up and running on a 3-node CentOS 6.4 cluster. However,
after I:

a) set the noout flag: ceph osd set noout
b) rebooted one node
c) logged into that node and ran: service ceph start osd.12

but it returned with error message:

/etc/init.d/ceph: osd.12 not found (/etc/ceph/ceph.conf defines ,
/var/lib/ceph defines )

Now, it *IS* true that the /etc/ceph/ceph.conf does NOT make any reference
to osd.12.
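For reference, my understanding is that the sysvinit script only starts
daemons that have a matching section in ceph.conf, something like the
fragment below (the section name is taken from the error message; the
host value is just a hypothetical placeholder, not from my real config):

```ini
; hypothetical fragment of /etc/ceph/ceph.conf -- the sysvinit script
; matches "service ceph start osd.12" against a section like this
[osd.12]
    host = node2   ; placeholder hostname
```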

Am I supposed to *manually* update the master ceph.conf file to add an
entry for each and every OSD (osd.1 through osd.14 in my case) and then copy
the ceph.conf file to each node? I would have thought that when I add a new
osd (using ceph-deploy) that it would *automatically* update either a) the
master ceph.conf file on my admin machine (where ceph-deploy runs) or least
it would update the ceph.conf file on the node that contains the osd that's
being added.

It feels as if ceph-deploy will let you add OSDs, but that this addition
only updates the cluster's runtime ("in-memory") state and never touches
ceph.conf, so that after a server reboot the init script re-reads
ceph.conf and finds nothing. Is this correct? If so, is there a way to
get ceph-deploy to automatically create the necessary entries in
ceph.conf? If so, how? If not, are we supposed to both a) use
ceph-deploy to add OSDs and then b) manually edit ceph.conf? If so,
isn't that dangerous/error-prone?
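If manual editing really is the expected path, the best workaround I can
think of is to generate the sections on the admin node and then push the
file out to every node. A rough sketch (the node names are made up, and
I'm assuming ceph-deploy's "config push" with --overwrite-conf is the
right way to force the copy):

```shell
# Rough sketch of the manual-edit workaround (hostnames hypothetical).
# Generate an [osd.N] section for each of my 14 OSDs and append them
# to the master ceph.conf on the admin node:
for i in $(seq 1 14); do
    printf '[osd.%d]\n' "$i"
done >> ceph.conf

# Then push the updated file to every node, e.g.:
#   ceph-deploy --overwrite-conf config push node1 node2 node3
```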

Or am I missing something fundamental?

(I'm using ceph version 0.72.1).
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com