Matthew,

I think one of the central differences is that mkcephfs read the
ceph.conf file and generated the OSDs from it. It also generated the
fsid and placed it into the cluster map, but didn't modify the
ceph.conf file itself.

By contrast, "ceph-deploy new" generates the fsid, sets the initial
monitor host(s) and their address(es), turns authentication on, sets a
journal size, assumes you'll be using omap for xattrs (i.e., typically
used with ext4), and writes these settings into an initial ceph.conf
file. "ceph-deploy mon create" uses this file when creating the
monitors (i.e., bootstrapping requires creating the monitor keys).

http://ceph.com/docs/master/rados/configuration/mon-config-ref/#bootstrapping-monitors
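To make that concrete, an initial ceph.conf written by "ceph-deploy
new" looks something like the sketch below. The fsid, hostname, and IP
are placeholder values for illustration, not taken from this thread:

```ini
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon initial members = node1
mon host = 192.168.0.1
auth supported = cephx
osd journal size = 1024
filestore xattr use omap = true
```

Note there are no osd sections at all; the monitors and the cluster
maps carry that state.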

You need at least one monitor for a ceph cluster. No monitor, no
cluster. You also need at least two OSDs for peering, heartbeats, etc.

With mkcephfs, the OSD map was generated from the ceph.conf file, and
you had to specify the domain name of each OSD host in your ceph.conf
file. You'd simply mount drives under the default osd data path--one
disk for each OSD, and often one SSD disk or partition for each
journal for added performance. Just as mkcephfs did not put the fsid
and mon initial members into ceph.conf, ceph-deploy doesn't put the
osd configuration into ceph.conf. Personally, I'd rather it be there
for edification purposes. However, one reason to defer to the maps is
that people don't always keep config files updated across the cluster
(that's part of the rationale for ceph-deploy admin and ceph-deploy
config push | pull). For example, changing monitor IP addresses came
up as an issue that wasn't particularly intuitive to end users,
because you can't just change the config file and have the change
propagate to the cluster map. Have a look here:
http://ceph.com/docs/master/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address
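For contrast, here's a hedged sketch of the mkcephfs-era layout, where
the OSDs were enumerated directly in ceph.conf. The hostnames are made
up for illustration, and the paths shown are the usual defaults:

```ini
[osd]
osd data = /var/lib/ceph/osd/ceph-$id
osd journal = /var/lib/ceph/osd/ceph-$id/journal

[osd.0]
host = osd-host-a

[osd.1]
host = osd-host-b
```

With ceph-deploy, none of those [osd.N] sections exist; the daemons
are discovered from the prepared disks and the cluster maps instead.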

You might also want to look at
http://ceph.com/docs/master/architecture/#cluster-map to see the
contents of each component of the cluster map. That's really what's
authoritative for the daemons that get started. However, if you add
osd sections to your ceph.conf file and push them out to the nodes,
the new settings will get picked up. See
http://ceph.com/docs/master/rados/configuration/osd-config-ref/ for
OSD settings, and
http://ceph.com/docs/master/rados/configuration/ceph-conf/ for a
general discussion of the Ceph configuration file. You can also make
runtime changes as discussed in that document.
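For example, to get per-OSD settings like the mount options Matthew
mentioned, you could add something like this to ceph.conf on the admin
node (the option values here are just illustrative) and then push it
out with "ceph-deploy config push {node}":

```ini
[osd]
osd mount options xfs = rw,noatime,inode64
```

The pushed settings take effect when the daemons restart; for settings
that support it, you can also inject changes at runtime (e.g.,
"ceph tell osd.* injectargs ..." as described in the ceph-conf
document linked above).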

As far as mount options are concerned, I believe they have to be
specified at create time. Can someone correct me if I'm wrong?

On Tue, Jul 30, 2013 at 8:46 AM, Alfredo Deza <alfredo.d...@inktank.com> wrote:
> There seems to be a bug in `ceph-deploy mon create {node}` where it doesn't
> create the keyrings at all.
>
> Another problem here is that it is *very* difficult to tell what is
> happening remotely as `ceph-deploy` doesn't really tell you even when you
> have verbose flags.
>
> I really need to have my pull request merged
> (https://github.com/ceph/ceph-deploy/pull/24) so I can start working on
> getting better output (and easier to debug).
>
> I wonder what version of ceph-deploy he is using, too.
>
>
> On Mon, Jul 29, 2013 at 11:32 AM, Ian Colle <ian.co...@inktank.com> wrote:
>>
>> Any ideas?
>>
>> Ian R. Colle
>> Director of Engineering
>> Inktank
>> Cell: +1.303.601.7713
>> Email: i...@inktank.com
>>
>>
>> Delivering the Future of Storage
>>
>>
>>  <http://www.linkedin.com/in/ircolle>
>>
>>  <http://www.twitter.com/ircolle>
>>
>>
>>
>>
>> On 7/29/13 9:56 AM, "Matthew Richardson" <m.richard...@ed.ac.uk> wrote:
>>
>> >I'm currently running test pools using mkcephfs, and am now
>> >investigating deploying using ceph-deploy.  I've hit a couple of
>> >conceptual changes which I can't find any documentation for, and was
>> >wondering if someone here could give me some answers as to how things
>> >now work.
>> >
>> >While ceph-deploy creates an initial ceph.conf, it doesn't update this
>> >when I do things like 'osd create' to add new osd sections.  However
>> >when I restart the ceph service, it picks up the new osds quite happily.
>> > How does it 'know' what osds it should be starting, and with what
>> >configuration?
>> >
>> >Since there are no sections corresponding to these new osds, how do I
>> >go about adding specific configuration for them - such as 'cluster addr'
>> >and then push this new config?  Or is there a way to pass in custom
>> >configuration to the 'osd create' subcommand at the point of osd
>> > creation?
>> >
>> >I have subsequently updated the [osd] section to set
>> >'osd_mount_options_xfs' and done a 'config push' - however the mount
>> >options don't seem to change when I restart the ceph service.  Any clues
>> >why this might be?
>> >
>> >Thanks,
>> >
>> >Matthew
>> >
>> >--
>> >The University of Edinburgh is a charitable body, registered in
>> >Scotland, with registration number SC005336.
>> >
>> >_______________________________________________
>> >ceph-users mailing list
>> >ceph-users@lists.ceph.com
>> >http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>



-- 
John Wilkins
Senior Technical Writer
Inktank
john.wilk...@inktank.com
(415) 425-9599
http://inktank.com
