If the cluster name is specified in /etc/default/ceph, then I don't have any
other reservations about your proposal.

On Wed, Dec 10, 2014 at 1:39 PM, Sage Weil <sw...@redhat.com> wrote:

> On Wed, 10 Dec 2014, Robert LeBlanc wrote:
> > I guess you would have to specify the cluster name in /etc/ceph/ceph.conf?
> > That would be my only concern.
>
> ceph.conf is $cluster.conf (default cluster name is 'ceph').
>
> Unfortunately, under systemd it's not possible to parameterize daemons with
> two variables (cluster and id), so the cluster is fixed per-host in
> /etc/default/ceph (or something like that).  At least that's how the
> current units work.
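>
> For illustration only (a sketch of the pattern, not the actual shipped
> units): the daemon id comes from the systemd instance name and the
> cluster comes from the per-host environment file.
>
>  # /etc/default/ceph (per-host)
>  CLUSTER=ceph
>
>  # ceph-osd@.service (hypothetical fragment)
>  [Service]
>  Environment=CLUSTER=ceph
>  EnvironmentFile=-/etc/default/ceph
>  ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i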
>
> sage
>
> >
> > On Wed, Dec 10, 2014 at 1:28 PM, Sage Weil <sw...@redhat.com> wrote:
> >       On Wed, 10 Dec 2014, Robert LeBlanc wrote:
> >       > I'm a big fan of /etc/*.d/ configs. Basically, if the
> >       > package-maintained /etc/ceph.conf includes all files in
> >       > /etc/ceph.d/, then I can break up the files however I'd like (mon,
> >       > osd, mds, client, one per daemon, etc.). Then when upgrading, I
> >       > don't have to worry about the new packages trying to overwrite my
> >       > conf file. If you have the include order be /var/lib/ceph/,
> >       > /etc/ceph.conf, /etc/ceph.d/, then package maintainers can put in
> >       > things so that it "just works" and can be easily overridden with
> >       > /etc/ceph.d/.
> >       > Just my $0.02.
> >
> >       How about /etc/ceph/ceph.conf.d/* ?  In reality it's
> >       /etc/ceph/$cluster.conf.d/*
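> >
> >       Just to illustrate the layout (the file names are made up):
> >
> >        /etc/ceph/ceph.conf            # package-maintained, includes the .d/*
> >        /etc/ceph/ceph.conf.d/osd.conf
> >        /etc/ceph/ceph.conf.d/mds.conf
> >        /etc/ceph/ceph.conf.d/client.rgw.conf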
> >
> >       sage
> >
> >
> >       >
> >       > On Sat, Dec 6, 2014 at 11:39 AM, Sage Weil <sw...@redhat.com> wrote:
> >       >       Several things are different/annoying about radosgw compared
> >       >       with other Ceph daemons:
> >       >
> >       >       - binary/package are named 'radosgw' instead of 'ceph-rgw'.
> >       >
> >       >       This is cosmetic, but it also makes it fit less well into
> >       >       the new /var/lib/ceph/* view of things.
> >       >
> >       >       - default log location is /var/log/radosgw/$cluster-$name.log
> >       >       instead of /var/log/ceph/$cluster-$rgw.$name.log (or similar).
> >       >
> >       >       - rgw_data default is /var/lib/ceph/radosgw instead of
> >       >       /var/lib/ceph/rgw
> >       >
> >       >       (not sure if 3 letters for consistency is better?)
> >       >
> >       >       - rgw usually authenticates as a client.something user, which
> >       >       means if you do use more standard rgw log names, then you get
> >       >       /var/log/ceph/client.something.$pid.$uniqueid.log.  There is
> >       >       a loose convention that 'something' is 'rgw.hostname' (i.e.,
> >       >       client.rgw.hostname).
> >       >
> >       >       - radosgw has its own separate sysvinit script that
> >       >       enumerates daemons from /etc/ceph/ceph.conf sections that
> >       >       start with client.radosgw.*
> >       >
> >       >       - radosgw upstart script is called radosgw (not ceph-rgw)
> >       >       and enumerates daemons from /var/lib/ceph/radosgw/*
> >       >
> >       >       (totally different than sysvinit!)
> >       >
> >       >       - radosgw instances usually need some stuff in ceph.conf to
> >       >       make them behave properly, which means they need a section
> >       >       in the shared /etc/ceph/ceph.conf.  More work for admins or
> >       >       config management systems.
> >       >
> >       >       - on rpm-based systems we have a separate sysvinit script
> >       >       that is slightly different (mostly because the username is
> >       >       different, apache instead of www-data).
> >       >
> >       >       ---
> >       >
> >       >       There is enough wrong here and little enough convention that
> >       >       my proposal is to essentially start fresh:
> >       >
> >       >       - rename package ceph-rgw.  obsoletes/replaces radosgw.
> >       >
> >       >       - merge rgw start/stop into standard sysvinit script (just
> >       >       another daemon type?).
> >       >
> >       >       - normalize upstart start/stop (just rename radosgw-* to
> >       >       ceph-rgw-*, basically).
> >       >
> >       >       - make upstart use /var/lib/ceph/rgw instead of
> >       >       /var/lib/ceph/radosgw (do this automagically on upgrade?
> >       >       that will at least avoid the upgrade pain for ubuntu users)
> >       >
> >       >       - create new systemd start/stop that are correct the first
> >       >       time around.
> >       >
> >       >       - move log file to /var/log/ceph
> >       >
> >       >
> >       >       The part I'm not sure about is how to handle config.  I
> >       >       would really like the ability to put per-daemon config stuff
> >       >       in /var/lib/ceph/rgw/foo/conf or config, but there is no
> >       >       'include file' function in our ini-file parser (or in other
> >       >       typical ini-file implementations that I can find), and I
> >       >       don't like the idea that the per-daemon config would
> >       >       completely supplant the /etc/ceph/ one.
> >       >
> >       >       Basically, I'd like to get to a point where we can
> >       >       programmatically deploy an rgw without any wonky code that
> >       >       edits the ceph.conf.  Perhaps the way to accomplish this is
> >       >       to have the admin create a generic client.rgw user and a
> >       >       generic [client.rgw] section in the global ceph.conf?
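> >       >
> >       >       For example, something like this (the values are just
> >       >       illustrative):
> >       >
> >       >        [client.rgw]
> >       >        log file = /var/log/ceph/$cluster-$name.$host.log
> >       >        rgw data = /var/lib/ceph/rgw/$cluster-$host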
> >       >
> >       >       If there were an include file function, we could make the
> >       >       per-daemon config file something like
> >       >
> >       >        include /etc/ceph/$cluster.conf
> >       >        [client.rgw.hostname]
> >       >        rgw foo = bar
> >       >        rgw baz = blip
> >       >
> >       >       or whatever.
> >       >
> >       >       Thoughts?  Suggestions?
> >       >
> >       >       sage
> >
> >
> >
> >
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
