Re: [ceph-users] removing cluster name support

2017-11-07 Thread Erik McCormick
On Nov 8, 2017 7:33 AM, "Vasu Kulkarni"  wrote:

On Tue, Nov 7, 2017 at 11:38 AM, Sage Weil  wrote:
> On Tue, 7 Nov 2017, Alfredo Deza wrote:
>> On Tue, Nov 7, 2017 at 7:09 AM, kefu chai  wrote:
>> > On Fri, Jun 9, 2017 at 3:37 AM, Sage Weil  wrote:
>> >> At CDM yesterday we talked about removing the ability to name your
>> >> ceph clusters.  There are a number of hurdles that make it difficult
>> >> to fully get rid of this functionality, not the least of which is
>> >> that some (many?) deployed clusters make use of it.  We decided that
>> >> the most we can do at this point is remove support for it in
>> >> ceph-deploy and ceph-ansible so that no new clusters or deployed
>> >> nodes use it.
>> >>
>> >> The first PR in this effort:
>> >>
>> >> https://github.com/ceph/ceph-deploy/pull/441
>> >
>> > okay, i am closing https://github.com/ceph/ceph/pull/18638 and
>> > http://tracker.ceph.com/issues/3253
>>
>> This brings us to a limbo where we aren't supporting it in some places
>> but we do in some others.
>>
>> It was disabled for ceph-deploy, but ceph-ansible wants to support it
>> still (see:  https://bugzilla.redhat.com/show_bug.cgi?id=1459861 )
>
> I still haven't seen a case where custom cluster names for *daemons* are
> needed.  Only for client-side $cluster.conf info for connecting.
>
>> Sebastien argues that these reasons are strong enough to keep that
>> support in:
>>
>> - Ceph cluster on demand with containers
>
> With kubernetes, the cluster will exist in a cluster namespace, and
> daemons live in containers, so inside the container the cluster will be
> 'ceph'.
>
>> - Distributed compute nodes
>
> ?
>
>> - rbd-mirror integration as part of OSPd
>
> This is the client-side $cluster.conf for connecting to the remote
> cluster.
>
>> - Disaster scenario with OpenStack Cinder in OSPd
>
> Ditto.
>
>> The problem is that, as you can see with the ceph-disk PR just closed,
>> there are still other tools that have to implement the juggling of
>> custom cluster names
>> all over the place and they will hit some corner case where the
>> cluster name was not added and things will fail.
>>
>> Just recently ceph-volume hit one of these places:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1507943
>>
>> Are we going to support custom cluster names? In what
>> context/scenarios are we going to allow it?
>
> It seems like we could drop this support in ceph-volume, unless someone
> can present a compelling reason to keep it?
>
> ...
>
> I'd almost want to go a step further and change
>
> /var/lib/ceph/$type/$cluster-$id/
>
> to
>
>  /var/lib/ceph/$type/$id
+1 for custom name support to be disabled in master/stable ansible
releases. I think the rbd-mirror and OpenStack cases are mostly
configuration issues that could use different conf files to talk to
different clusters.


Agreed on the OpenStack part. I actually changed nothing on that side of
things. The clients still run with a custom config name with no issues.

-Erik


>
> In kubernetes, we're planning on bind mounting the host's
> /var/lib/ceph/$namespace/$type/$id to the container's
> /var/lib/ceph/$type/ceph-$id.  It might be a good time to drop some of the
> awkward path names, though.  Or is it useless churn?
>
> sage
>
>
>
>>
>>
>> >
>> >>
>> >> Background:
>> >>
>> >> The cluster name concept was added to allow multiple clusters to have
>> >> daemons coexist on the same host.  At the time it was a hypothetical
>> >> requirement for a user that never actually made use of it, and the
>> >> support is kludgey:
>> >>
>> >>  - default cluster name is 'ceph'
>> >>  - default config is /etc/ceph/$cluster.conf, so that the normal
>> >> 'ceph.conf' still works
>> >>  - daemon data paths include the cluster name,
>> >>  /var/lib/ceph/osd/$cluster-$id
>> >>which is weird (but mostly people are used to it?)
>> >>  - any cli command you want to touch a non-ceph cluster name
>> >> needs -C $name or --cluster $name passed to it.
>> >>
>> >> Also, as of jewel,
>> >>
>> >>  - systemd only supports a single cluster per host, as defined by
>> >>$CLUSTER in /etc/{sysconfig,default}/ceph
>> >>
>> >> which you'll notice removes support for the original "requirement".
>> >>
>> >> Also note that you can get the same effect by specifying the config path
>> >> explicitly (-c /etc/ceph/foo.conf) along with the various options that
>> >> substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
>> >>
>> >>
>> >> Crap preventing us from removing this entirely:
>> >>
>> >>  - existing daemon directories for existing clusters
>> >>  - various scripts parse the cluster name out of paths
>> >>
>> >>
>> >> Converting an existing cluster "foo" back to "ceph":
>> >>
>> >>  - rename /etc/ceph/foo.conf -> ceph.conf
>> >>  - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
>> >>  - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph
>> >>  - reboot
>> >>
>> >>
>> >> 

Re: [ceph-users] removing cluster name support

2017-11-07 Thread Vasu Kulkarni
On Tue, Nov 7, 2017 at 11:38 AM, Sage Weil  wrote:
> On Tue, 7 Nov 2017, Alfredo Deza wrote:
>> On Tue, Nov 7, 2017 at 7:09 AM, kefu chai  wrote:
>> > On Fri, Jun 9, 2017 at 3:37 AM, Sage Weil  wrote:
>> >> At CDM yesterday we talked about removing the ability to name your ceph
>> >> clusters.  There are a number of hurdles that make it difficult to fully
>> >> get rid of this functionality, not the least of which is that some
>> >> (many?) deployed clusters make use of it.  We decided that the most we can
>> >> do at this point is remove support for it in ceph-deploy and ceph-ansible
>> >> so that no new clusters or deployed nodes use it.
>> >>
>> >> The first PR in this effort:
>> >>
>> >> https://github.com/ceph/ceph-deploy/pull/441
>> >
>> > okay, i am closing https://github.com/ceph/ceph/pull/18638 and
>> > http://tracker.ceph.com/issues/3253
>>
>> This brings us to a limbo where we aren't supporting it in some places
>> but we do in some others.
>>
>> It was disabled for ceph-deploy, but ceph-ansible wants to support it
>> still (see:  https://bugzilla.redhat.com/show_bug.cgi?id=1459861 )
>
> I still haven't seen a case where custom cluster names for *daemons* are
> needed.  Only for client-side $cluster.conf info for connecting.
>
>> Sebastien argues that these reasons are strong enough to keep that support 
>> in:
>>
>> - Ceph cluster on demand with containers
>
> With kubernetes, the cluster will exist in a cluster namespace, and
> daemons live in containers, so inside the container the cluster will be
> 'ceph'.
>
>> - Distributed compute nodes
>
> ?
>
>> - rbd-mirror integration as part of OSPd
>
> This is the client-side $cluster.conf for connecting to the remote
> cluster.
>
>> - Disaster scenario with OpenStack Cinder in OSPd
>
> Ditto.
>
>> The problem is that, as you can see with the ceph-disk PR just closed,
>> there are still other tools that have to implement the juggling of
>> custom cluster names
>> all over the place and they will hit some corner case where the
>> cluster name was not added and things will fail.
>>
>> Just recently ceph-volume hit one of these places:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1507943
>>
>> Are we going to support custom cluster names? In what
>> context/scenarios are we going to allow it?
>
> It seems like we could drop this support in ceph-volume, unless someone
> can present a compelling reason to keep it?
>
> ...
>
> I'd almost want to go a step further and change
>
> /var/lib/ceph/$type/$cluster-$id/
>
> to
>
>  /var/lib/ceph/$type/$id
+1 for custom name support to be disabled in master/stable ansible
releases. I think the rbd-mirror and OpenStack cases are mostly
configuration issues that could use different conf files to talk to
different clusters.
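(For illustration -- a sketch of the rbd-mirror case, with made-up site
names; the remote cluster is reached purely via its conf/keyring pair:)

# on site-a, /etc/ceph/site-b.conf and its keyring describe the peer
rbd mirror pool peer add rbd client.rbd-mirror-peer@site-b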

>
> In kubernetes, we're planning on bind mounting the host's
> /var/lib/ceph/$namespace/$type/$id to the container's
> /var/lib/ceph/$type/ceph-$id.  It might be a good time to drop some of the
> awkward path names, though.  Or is it useless churn?
>
> sage
>
>
>
>>
>>
>> >
>> >>
>> >> Background:
>> >>
>> >> The cluster name concept was added to allow multiple clusters to have
>> >> daemons coexist on the same host.  At the time it was a hypothetical
>> >> requirement for a user that never actually made use of it, and the
>> >> support is kludgey:
>> >>
>> >>  - default cluster name is 'ceph'
>> >>  - default config is /etc/ceph/$cluster.conf, so that the normal
>> >> 'ceph.conf' still works
>> >>  - daemon data paths include the cluster name,
>> >>  /var/lib/ceph/osd/$cluster-$id
>> >>which is weird (but mostly people are used to it?)
>> >>  - any cli command you want to touch a non-ceph cluster name
>> >> needs -C $name or --cluster $name passed to it.
>> >>
>> >> Also, as of jewel,
>> >>
>> >>  - systemd only supports a single cluster per host, as defined by $CLUSTER
>> >> in /etc/{sysconfig,default}/ceph
>> >>
>> >> which you'll notice removes support for the original "requirement".
>> >>
>> >> Also note that you can get the same effect by specifying the config path
>> >> explicitly (-c /etc/ceph/foo.conf) along with the various options that
>> >> substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
>> >>
>> >>
>> >> Crap preventing us from removing this entirely:
>> >>
>> >>  - existing daemon directories for existing clusters
>> >>  - various scripts parse the cluster name out of paths
>> >>
>> >>
>> >> Converting an existing cluster "foo" back to "ceph":
>> >>
>> >>  - rename /etc/ceph/foo.conf -> ceph.conf
>> >>  - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
>> >>  - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph
>> >>  - reboot
>> >>
>> >>
>> >> Questions:
>> >>
>> >>  - Does anybody on the list use a non-default cluster name?
>> >>  - If so, do you have a reason not to switch back to 'ceph'?
>> >>
>> >> Thanks!
>> >> sage
>> >> 

Re: [ceph-users] removing cluster name support

2017-11-07 Thread Sage Weil
On Tue, 7 Nov 2017, Alfredo Deza wrote:
> On Tue, Nov 7, 2017 at 7:09 AM, kefu chai  wrote:
> > On Fri, Jun 9, 2017 at 3:37 AM, Sage Weil  wrote:
> >> At CDM yesterday we talked about removing the ability to name your ceph
> >> clusters.  There are a number of hurdles that make it difficult to fully
> >> get rid of this functionality, not the least of which is that some
> >> (many?) deployed clusters make use of it.  We decided that the most we can
> >> do at this point is remove support for it in ceph-deploy and ceph-ansible
> >> so that no new clusters or deployed nodes use it.
> >>
> >> The first PR in this effort:
> >>
> >> https://github.com/ceph/ceph-deploy/pull/441
> >
> > okay, i am closing https://github.com/ceph/ceph/pull/18638 and
> > http://tracker.ceph.com/issues/3253
> 
> This brings us to a limbo where we aren't supporting it in some places
> but we do in some others.
> 
> It was disabled for ceph-deploy, but ceph-ansible wants to support it
> still (see:  https://bugzilla.redhat.com/show_bug.cgi?id=1459861 )

I still haven't seen a case where custom cluster names for *daemons* are 
needed.  Only for client-side $cluster.conf info for connecting.
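(Concretely, that client-side case is just a second conf/keyring pair; a
sketch, with a made-up cluster name:)

# given /etc/ceph/backup.conf and /etc/ceph/backup.client.admin.keyring
rbd --cluster backup ls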
 
> Sebastien argues that these reasons are strong enough to keep that support in:
> 
> - Ceph cluster on demand with containers

With kubernetes, the cluster will exist in a cluster namespace, and 
daemons live in containers, so inside the container the cluster will be 
'ceph'.

> - Distributed compute nodes

?

> - rbd-mirror integration as part of OSPd

This is the client-side $cluster.conf for connecting to the remote 
cluster.

> - Disaster scenario with OpenStack Cinder in OSPd

Ditto.

> The problem is that, as you can see with the ceph-disk PR just closed,
> there are still other tools that have to implement the juggling of
> custom cluster names
> all over the place and they will hit some corner case where the
> cluster name was not added and things will fail.
> 
> Just recently ceph-volume hit one of these places:
> https://bugzilla.redhat.com/show_bug.cgi?id=1507943
> 
> Are we going to support custom cluster names? In what
> context/scenarios are we going to allow it?

It seems like we could drop this support in ceph-volume, unless someone 
can present a compelling reason to keep it?

...

I'd almost want to go a step further and change

/var/lib/ceph/$type/$cluster-$id/

to

 /var/lib/ceph/$type/$id

In kubernetes, we're planning on bind mounting the host's 
/var/lib/ceph/$namespace/$type/$id to the container's 
/var/lib/ceph/$type/ceph-$id.  It might be a good time to drop some of the 
awkward path names, though.  Or is it useless churn?
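(Roughly this shape, as a plain-docker sketch; the image name and host
paths here are made up:)

docker run -d --name osd-3 \
  -v /var/lib/ceph/mycluster/osd/3:/var/lib/ceph/osd/ceph-3 \
  some-ceph-osd-image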

sage



> 
> 
> >
> >>
> >> Background:
> >>
> >> The cluster name concept was added to allow multiple clusters to have
> >> daemons coexist on the same host.  At the time it was a hypothetical
> >> requirement for a user that never actually made use of it, and the
> >> support is kludgey:
> >>
> >>  - default cluster name is 'ceph'
> >>  - default config is /etc/ceph/$cluster.conf, so that the normal
> >> 'ceph.conf' still works
> >>  - daemon data paths include the cluster name,
> >>  /var/lib/ceph/osd/$cluster-$id
> >>which is weird (but mostly people are used to it?)
> >>  - any cli command you want to touch a non-ceph cluster name
> >> needs -C $name or --cluster $name passed to it.
> >>
> >> Also, as of jewel,
> >>
> >>  - systemd only supports a single cluster per host, as defined by $CLUSTER
> >> in /etc/{sysconfig,default}/ceph
> >>
> >> which you'll notice removes support for the original "requirement".
> >>
> >> Also note that you can get the same effect by specifying the config path
> >> explicitly (-c /etc/ceph/foo.conf) along with the various options that
> >> substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
> >>
> >>
> >> Crap preventing us from removing this entirely:
> >>
> >>  - existing daemon directories for existing clusters
> >>  - various scripts parse the cluster name out of paths
> >>
> >>
> >> Converting an existing cluster "foo" back to "ceph":
> >>
> >>  - rename /etc/ceph/foo.conf -> ceph.conf
> >>  - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
> >>  - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph
> >>  - reboot
> >>
> >>
> >> Questions:
> >>
> >>  - Does anybody on the list use a non-default cluster name?
> >>  - If so, do you have a reason not to switch back to 'ceph'?
> >>
> >> Thanks!
> >> sage
> >
> >
> >
> > --
> > Regards
> > Kefu Chai
> 
> 

Re: [ceph-users] removing cluster name support

2017-11-07 Thread kefu chai
On Fri, Jun 9, 2017 at 3:37 AM, Sage Weil  wrote:
> At CDM yesterday we talked about removing the ability to name your ceph
> clusters.  There are a number of hurdles that make it difficult to fully
> get rid of this functionality, not the least of which is that some
> (many?) deployed clusters make use of it.  We decided that the most we can
> do at this point is remove support for it in ceph-deploy and ceph-ansible
> so that no new clusters or deployed nodes use it.
>
> The first PR in this effort:
>
> https://github.com/ceph/ceph-deploy/pull/441

okay, i am closing https://github.com/ceph/ceph/pull/18638 and
http://tracker.ceph.com/issues/3253

>
> Background:
>
> The cluster name concept was added to allow multiple clusters to have
> daemons coexist on the same host.  At the time it was a hypothetical
> requirement for a user that never actually made use of it, and the
> support is kludgey:
>
>  - default cluster name is 'ceph'
>  - default config is /etc/ceph/$cluster.conf, so that the normal
> 'ceph.conf' still works
>  - daemon data paths include the cluster name,
>  /var/lib/ceph/osd/$cluster-$id
>which is weird (but mostly people are used to it?)
>  - any cli command you want to touch a non-ceph cluster name
> needs -C $name or --cluster $name passed to it.
>
> Also, as of jewel,
>
>  - systemd only supports a single cluster per host, as defined by $CLUSTER
> in /etc/{sysconfig,default}/ceph
>
> which you'll notice removes support for the original "requirement".
>
> Also note that you can get the same effect by specifying the config path
> explicitly (-c /etc/ceph/foo.conf) along with the various options that
> substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
>
>
> Crap preventing us from removing this entirely:
>
>  - existing daemon directories for existing clusters
>  - various scripts parse the cluster name out of paths
>
>
> Converting an existing cluster "foo" back to "ceph":
>
>  - rename /etc/ceph/foo.conf -> ceph.conf
>  - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
>  - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph
>  - reboot
>
>
> Questions:
>
>  - Does anybody on the list use a non-default cluster name?
>  - If so, do you have a reason not to switch back to 'ceph'?
>
> Thanks!
> sage



-- 
Regards
Kefu Chai


Re: [ceph-users] removing cluster name support

2017-11-06 Thread Erik McCormick
On Fri, Jun 9, 2017 at 12:30 PM, Sage Weil  wrote:
> On Fri, 9 Jun 2017, Erik McCormick wrote:
>> On Fri, Jun 9, 2017 at 12:07 PM, Sage Weil  wrote:
>> > On Thu, 8 Jun 2017, Sage Weil wrote:
>> >> Questions:
>> >>
>> >>  - Does anybody on the list use a non-default cluster name?
>> >>  - If so, do you have a reason not to switch back to 'ceph'?
>> >
>> > It sounds like the answer is "yes," but not for daemons. Several users use
>> > it on the client side to connect to multiple clusters from the same host.
>> >
>>
>> I thought some folks said they were running with non-default naming
>> for daemons, but if not, then count me as one who does. This was
>> mainly a relic of the past, where I thought I would be running
>> multiple clusters on one host. Before long I decided it would be a bad
>> idea, but by then the cluster was already in heavy use and I couldn't
>> undo it.
>>
>> I will say that I am not opposed to renaming back to ceph, but it
>> would be great to have a documented process for accomplishing this
>> prior to deprecation. Even going so far as to remove --cluster from
>> deployment tools will leave me unable to add OSDs if I want to upgrade
>> when Luminous is released.
>
> Note that even if the tool doesn't support it, the cluster name is a
> host-local thing, so you can always deploy ceph-named daemons on other
> hosts.
>
> For an existing host, the removal process should be as simple as
>
>  - stop the daemons on the host
>  - rename /etc/ceph/foo.conf -> /etc/ceph/ceph.conf
>  - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-* (this mainly
> matters for non-osds, since the osd dirs will get dynamically created by
> ceph-disk, but renaming will avoid leaving clutter behind)
>  - comment out the CLUSTER= line in /etc/{sysconfig,default}/ceph (if
> you're on jewel)
>  - reboot
>
> If you wouldn't mind being a guinea pig and verifying that this is
> sufficient that would be really helpful!  We'll definitely want to
> document this process.
>
> Thanks!
> sage
>
Sitting here in a room with you reminded me I dropped the ball on
feeding back on the procedure. I did this a couple weeks ago and it
worked fine. I had a few problems with OSDs not wanting to unmount, so
I had to reboot each node along the way. I just used it as an excuse
to run updates.

-Erik
>
>>
>> > Nobody is colocating multiple daemons from different clusters on the same
>> > host.  Some have in the past but stopped.  If they choose to in the
>> > future, they can customize the systemd units themselves.
>> >
>> > The rbd-mirror daemon has a similar requirement to talk to multiple
>> > clusters as a client.
>> >
>> > This makes me conclude our current path is fine:
>> >
>> >  - leave existing --cluster infrastructure in place in the ceph code, but
>> >  - remove support for deploying daemons with custom cluster names from the
>> > deployment tools.
>> >
>> > This neatly avoids the systemd limitations for all but the most
>> > adventuresome admins and avoids the more common case of an admin falling
>> > into the "oh, I can name my cluster? cool! [...] oh, i have to add
>> > --cluster rover to every command? ick!" trap.
>> >
>>
>> Yeah, that was me in 2012. Oops.
>>
>> -Erik
>>
>> > sage


Re: [ceph-users] removing cluster name support

2017-06-11 Thread Peter Maloney
On 06/08/17 21:37, Sage Weil wrote:
> Questions:
>
>  - Does anybody on the list use a non-default cluster name?
>  - If so, do you have a reason not to switch back to 'ceph'?
>
> Thanks!
> sage
Will it still be possible for clients to use multiple clusters?

Also how does this affect rbd mirroring?



Re: [ceph-users] removing cluster name support

2017-06-09 Thread Sage Weil
On Fri, 9 Jun 2017, Dan van der Ster wrote:
> On Fri, Jun 9, 2017 at 5:58 PM, Vasu Kulkarni  wrote:
> > On Fri, Jun 9, 2017 at 6:11 AM, Wes Dillingham
> >  wrote:
> >> Similar to Dan's situation we utilize the --cluster name concept for our
> >> operations. Primarily for "datamover" nodes which do incremental rbd
> >> import/export between distinct clusters. This is entirely coordinated by
> >> utilizing the --cluster option throughout.
> >>
> >> The way we set it up is that all clusters are actually named "ceph" on the
> >> mons and osds etc, but the clients themselves get /etc/ceph/clusterA.conf
> >> and /etc/ceph/clusterB.conf so that we can differentiate. I would like to
> >> see the functionality of clients being able to specify which conf file to
> >> read preserved.
> >
> > ceph.conf along with keyring file can stay in any location, the
> > default location is /etc/ceph but one could use
> > other location for clusterB.conf (
> > http://docs.ceph.com/docs/jewel/rados/configuration/ceph-conf/ ), At
> > least
> > for client which doesn't run any daemon this should be sufficient to
> > make it talk to different clusters.
> 
> So we start with this:
> 
> > ceph --cluster=flax health
> HEALTH_OK
> 
> Then for example do:
> > cd /etc/ceph/
> > mkdir flax
> > cp flax.conf flax/ceph.conf
> > cp flax.client.admin.keyring flax/ceph.client.admin.keyring
> 
> Now this works:
> 
> > ceph --conf=/etc/ceph/flax/ceph.conf --keyring=/etc/ceph/flax/ceph.client.admin.keyring health
> HEALTH_OK
> 
> So --cluster is just convenient shorthand for the CLI.

Yeah, although it's used elsewhere too:

$ grep \$cluster ../src/common/config_opts.h
OPTION(admin_socket, OPT_STR, "$run_dir/$cluster-$name.asok") // default changed by common_preinit()
OPTION(log_file, OPT_STR, "/var/log/ceph/$cluster-$name.log") // default changed by common_preinit()
"default=/var/log/ceph/$cluster.$channel.log cluster=/var/log/ceph/$cluster.log")
"/etc/ceph/$cluster.$name.keyring,/etc/ceph/$cluster.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,"
"/usr/local/etc/ceph/$cluster.$name.keyring,/usr/local/etc/ceph/$cluster.keyring,"
OPTION(mon_data, OPT_STR, "/var/lib/ceph/mon/$cluster-$id")
OPTION(mon_debug_dump_location, OPT_STR, "/var/log/ceph/$cluster-$name.tdump")
OPTION(mds_data, OPT_STR, "/var/lib/ceph/mds/$cluster-$id")
OPTION(osd_data, OPT_STR, "/var/lib/ceph/osd/$cluster-$id")
OPTION(osd_journal, OPT_STR, "/var/lib/ceph/osd/$cluster-$id/journal")
OPTION(rgw_data, OPT_STR, "/var/lib/ceph/radosgw/$cluster-$id")
OPTION(mgr_data, OPT_STR, "/var/lib/ceph/mgr/$cluster-$id") // where to find keyring etc

The only non-daemon ones are admin_socket and log_file, so keep that in mind.
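(So a client of a named cluster can pin those two explicitly; a sketch,
reusing the flax example above -- any config option can be passed as a
--option on the CLI:)

rados -c /etc/ceph/flax.conf \
  --admin-socket /var/run/ceph/flax-client.asok \
  --log-file /var/log/ceph/flax-client.log \
  lspools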

> I guess it won't be the end of the world if you drop it, but would it
> be so costly to keep that working? (CLI only -- no use-case for
> server-side named clusters over here).

But yeah... I don't think we'll change any of this except to make the 
deployment tools' lives easier by not supporting it there.

sage


Re: [ceph-users] removing cluster name support

2017-06-09 Thread Dan van der Ster
On Fri, Jun 9, 2017 at 5:58 PM, Vasu Kulkarni  wrote:
> On Fri, Jun 9, 2017 at 6:11 AM, Wes Dillingham
>  wrote:
>> Similar to Dan's situation we utilize the --cluster name concept for our
>> operations. Primarily for "datamover" nodes which do incremental rbd
>> import/export between distinct clusters. This is entirely coordinated by
>> utilizing the --cluster option throughout.
>>
>> The way we set it up is that all clusters are actually named "ceph" on the
>> mons and osds etc, but the clients themselves get /etc/ceph/clusterA.conf
>> and /etc/ceph/clusterB.conf so that we can differentiate. I would like to
>> see the functionality of clients being able to specify which conf file to
>> read preserved.
>
> The ceph.conf and keyring files can stay in any location; the default
> location is /etc/ceph, but one could use another location for
> clusterB.conf (http://docs.ceph.com/docs/jewel/rados/configuration/ceph-conf/).
> At least for a client which doesn't run any daemon, this should be
> sufficient to make it talk to different clusters.

So we start with this:

> ceph --cluster=flax health
HEALTH_OK

Then for example do:
> cd /etc/ceph/
> mkdir flax
> cp flax.conf flax/ceph.conf
> cp flax.client.admin.keyring flax/ceph.client.admin.keyring

Now this works:

> ceph --conf=/etc/ceph/flax/ceph.conf --keyring=/etc/ceph/flax/ceph.client.admin.keyring health
HEALTH_OK

So --cluster is just convenient shorthand for the CLI.

I guess it won't be the end of the world if you drop it, but would it
be so costly to keep that working? (CLI only -- no use-case for
server-side named clusters over here).

--
Dan


Re: [ceph-users] removing cluster name support

2017-06-09 Thread Sage Weil
On Fri, 9 Jun 2017, Erik McCormick wrote:
> On Fri, Jun 9, 2017 at 12:07 PM, Sage Weil  wrote:
> > On Thu, 8 Jun 2017, Sage Weil wrote:
> >> Questions:
> >>
> >>  - Does anybody on the list use a non-default cluster name?
> >>  - If so, do you have a reason not to switch back to 'ceph'?
> >
> > It sounds like the answer is "yes," but not for daemons. Several users use
> > it on the client side to connect to multiple clusters from the same host.
> >
> 
> I thought some folks said they were running with non-default naming
> for daemons, but if not, then count me as one who does. This was
> mainly a relic of the past, where I thought I would be running
> multiple clusters on one host. Before long I decided it would be a bad
> idea, but by then the cluster was already in heavy use and I couldn't
> undo it.
> 
> I will say that I am not opposed to renaming back to ceph, but it
> would be great to have a documented process for accomplishing this
> prior to deprecation. Even going so far as to remove --cluster from
> deployment tools will leave me unable to add OSDs if I want to upgrade
> when Luminous is released.

Note that even if the tool doesn't support it, the cluster name is a 
host-local thing, so you can always deploy ceph-named daemons on other 
hosts.

For an existing host, the removal process should be as simple as

 - stop the daemons on the host
 - rename /etc/ceph/foo.conf -> /etc/ceph/ceph.conf
 - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-* (this mainly 
matters for non-osds, since the osd dirs will get dynamically created by 
ceph-disk, but renaming will avoid leaving clutter behind)
 - comment out the CLUSTER= line in /etc/{sysconfig,default}/ceph (if 
you're on jewel)
 - reboot
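As a rough shell rendering of those steps (a sketch only, assuming a
cluster named 'foo' and jewel-era systemd packaging):

systemctl stop ceph.target               # stop the daemons on the host
mv /etc/ceph/foo.conf /etc/ceph/ceph.conf
for d in /var/lib/ceph/*/foo-*; do
    mv "$d" "${d%/*}/ceph-${d##*/foo-}"  # foo-$id -> ceph-$id
done
sed -i 's/^CLUSTER=/#&/' /etc/sysconfig/ceph   # or /etc/default/ceph
reboot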

If you wouldn't mind being a guinea pig and verifying that this is 
sufficient that would be really helpful!  We'll definitely want to 
document this process.

Thanks!
sage


> 
> > Nobody is colocating multiple daemons from different clusters on the same
> > host.  Some have in the past but stopped.  If they choose to in the
> > future, they can customize the systemd units themselves.
> >
> > The rbd-mirror daemon has a similar requirement to talk to multiple
> > clusters as a client.
> >
> > This makes me conclude our current path is fine:
> >
> >  - leave existing --cluster infrastructure in place in the ceph code, but
> >  - remove support for deploying daemons with custom cluster names from the
> > deployment tools.
> >
> > This neatly avoids the systemd limitations for all but the most
> > adventuresome admins and avoids the more common case of an admin falling
> > into the "oh, I can name my cluster? cool! [...] oh, i have to add
> > --cluster rover to every command? ick!" trap.
> >
> 
> Yeah, that was me in 2012. Oops.
> 
> -Erik
> 
> > sage


Re: [ceph-users] removing cluster name support

2017-06-09 Thread Erik McCormick
On Fri, Jun 9, 2017 at 12:07 PM, Sage Weil  wrote:
> On Thu, 8 Jun 2017, Sage Weil wrote:
>> Questions:
>>
>>  - Does anybody on the list use a non-default cluster name?
>>  - If so, do you have a reason not to switch back to 'ceph'?
>
> It sounds like the answer is "yes," but not for daemons. Several users use
> it on the client side to connect to multiple clusters from the same host.
>

I thought some folks said they were running with non-default naming
for daemons, but if not, then count me as one who does. This was
mainly a relic of the past, where I thought I would be running
multiple clusters on one host. Before long I decided it would be a bad
idea, but by then the cluster was already in heavy use and I couldn't
undo it.

I will say that I am not opposed to renaming back to ceph, but it
would be great to have a documented process for accomplishing this
prior to deprecation. Even going so far as to remove --cluster from
deployment tools will leave me unable to add OSDs if I want to upgrade
when Luminous is released.

> Nobody is colocating multiple daemons from different clusters on the same
> host.  Some have in the past but stopped.  If they choose to in the
> future, they can customize the systemd units themselves.
>
> The rbd-mirror daemon has a similar requirement to talk to multiple
> clusters as a client.
>
> This makes me conclude our current path is fine:
>
>  - leave existing --cluster infrastructure in place in the ceph code, but
>  - remove support for deploying daemons with custom cluster names from the
> deployment tools.
>
> This neatly avoids the systemd limitations for all but the most
> adventuresome admins and avoids the more common case of an admin falling
> into the "oh, I can name my cluster? cool! [...] oh, i have to add
> --cluster rover to every command? ick!" trap.
>

Yeah, that was me in 2012. Oops.

-Erik

> sage


Re: [ceph-users] removing cluster name support

2017-06-09 Thread Sage Weil
On Thu, 8 Jun 2017, Sage Weil wrote:
> Questions:
> 
>  - Does anybody on the list use a non-default cluster name?
>  - If so, do you have a reason not to switch back to 'ceph'?

It sounds like the answer is "yes," but not for daemons. Several users use 
it on the client side to connect to multiple clusters from the same host.

Nobody is colocating multiple daemons from different clusters on the same 
host.  Some have in the past but stopped.  If they choose to in the 
future, they can customize the systemd units themselves.

The rbd-mirror daemon has a similar requirement to talk to multiple 
clusters as a client.

This makes me conclude our current path is fine:

 - leave existing --cluster infrastructure in place in the ceph code, but
 - remove support for deploying daemons with custom cluster names from the 
deployment tools.

This neatly avoids the systemd limitations for all but the most 
adventuresome admins and avoids the more common case of an admin falling 
into the "oh, I can name my cluster? cool! [...] oh, i have to add 
--cluster rover to every command? ick!" trap.

sage


Re: [ceph-users] removing cluster name support

2017-06-09 Thread Vasu Kulkarni
On Fri, Jun 9, 2017 at 6:11 AM, Wes Dillingham
 wrote:
> Similar to Dan's situation we utilize the --cluster name concept for our
> operations. Primarily for "datamover" nodes which do incremental rbd
> import/export between distinct clusters. This is entirely coordinated by
> utilizing the --cluster option throughout.
>
> The way we set it up is that all clusters are actually named "ceph" on the
> mons and osds etc, but the clients themselves get /etc/ceph/clusterA.conf
> and /etc/ceph/clusterB.conf so that we can differentiate. I would like to
> see the functionality of clients being able to specify which conf file to
> read preserved.

The ceph.conf and keyring files can stay in any location; the default
location is /etc/ceph, but one could use another location for
clusterB.conf (http://docs.ceph.com/docs/jewel/rados/configuration/ceph-conf/).
At least for a client which doesn't run any daemon, this should be
sufficient to make it talk to different clusters.
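(For example -- a sketch, with a made-up non-default location:)

ceph -c /opt/clusters/clusterB.conf \
     -k /opt/clusters/clusterB.client.admin.keyring \
     health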

>
> As a note though we went the route of naming all clusters "ceph" to
> work around difficulties with non-standard naming so this issue does need some
> attention.
It would be nice if you could add the steps to the tracker; they can
then be moved into the docs to help others follow the same procedure to
rename a cluster back to 'ceph'.


>
> On Fri, Jun 9, 2017 at 8:19 AM, Alfredo Deza  wrote:
>>
>> On Thu, Jun 8, 2017 at 3:54 PM, Sage Weil  wrote:
>> > On Thu, 8 Jun 2017, Bassam Tabbara wrote:
>> >> Thanks Sage.
>> >>
>> >> > At CDM yesterday we talked about removing the ability to name your
>> >> > ceph
>> >> > clusters.
>> >>
>> >> Just to be clear, it would still be possible to run multiple ceph
>> >> clusters on the same nodes, right?
>> >
>> > Yes, but you'd need to either (1) use containers (so that different
>> > daemons see a different /etc/ceph/ceph.conf) or (2) modify the systemd
>> > unit files to do... something.
>>
>> In the container case, I need to clarify that ceph-docker deployed
>> with ceph-ansible is not capable of doing this, since
>> the ad-hoc systemd units use the hostname as part of the identifier
>> for the daemon, e.g:
>>
>> systemctl enable ceph-mon@{{ ansible_hostname }}.service
>>
>>
>> >
>> > This is actually no different from Jewel. It's just that currently you
>> > can
>> > run a single cluster on a host (without containers) but call it 'foo'
>> > and
>> > knock yourself out by passing '--cluster foo' every time you invoke the
>> > CLI.
>> >
>> > I'm guessing you're in the (1) case anyway and this doesn't affect you
>> > at
>> > all :)
>> >
>> > sage
>
>
>
>
> --
> Respectfully,
>
> Wes Dillingham
> wes_dilling...@harvard.edu
> Research Computing | Senior CyberInfrastructure Storage Engineer
> Harvard University | 38 Oxford Street, Cambridge, Ma 02138 | Room 102
>
>


Re: [ceph-users] removing cluster name support

2017-06-09 Thread Wes Dillingham
Similar to Dan's situation we utilize the --cluster name concept for our
operations. Primarily for "datamover" nodes which do incremental rbd
import/export between distinct clusters. This is entirely coordinated by
utilizing the --cluster option throughout.

The way we set it up is that all clusters are actually named "ceph" on the
mons and osds etc, but the clients themselves get /etc/ceph/clusterA.conf
and /etc/ceph/clusterB.conf so that we can differentiate. I would like to
see the functionality of clients being able to specify which conf file to
read preserved.
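(Roughly, the datamover pattern -- a sketch using the two conf files
above; pool, image, and snapshot names are made up:)

# full copy from cluster A to cluster B
rbd --cluster clusterA export rbd/vm-disk - \
  | rbd --cluster clusterB import - rbd/vm-disk

# incremental follow-up between two snapshots
rbd --cluster clusterA export-diff --from-snap snap1 rbd/vm-disk@snap2 - \
  | rbd --cluster clusterB import-diff - rbd/vm-disk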

As a note, though, we went the route of naming all clusters "ceph" to
work around difficulties with non-standard naming, so this issue does
need some attention.

On Fri, Jun 9, 2017 at 8:19 AM, Alfredo Deza  wrote:

> On Thu, Jun 8, 2017 at 3:54 PM, Sage Weil  wrote:
> > On Thu, 8 Jun 2017, Bassam Tabbara wrote:
> >> Thanks Sage.
> >>
> >> > At CDM yesterday we talked about removing the ability to name your
> >> > ceph clusters.
> >>
> >> Just to be clear, it would still be possible to run multiple ceph
> >> clusters on the same nodes, right?
> >
> > Yes, but you'd need to either (1) use containers (so that different
> > daemons see a different /etc/ceph/ceph.conf) or (2) modify the systemd
> > unit files to do... something.
>
> In the container case, I need to clarify that ceph-docker deployed
> with ceph-ansible is not capable of doing this, since
> the ad-hoc systemd units use the hostname as part of the identifier
> for the daemon, e.g:
>
> systemctl enable ceph-mon@{{ ansible_hostname }}.service
>
>
> >
> > This is actually no different from Jewel. It's just that currently you can
> > run a single cluster on a host (without containers) but call it 'foo' and
> > knock yourself out by passing '--cluster foo' every time you invoke the
> > CLI.
> >
> > I'm guessing you're in the (1) case anyway and this doesn't affect you at
> > all :)
> >
> > sage



-- 
Respectfully,

Wes Dillingham
wes_dilling...@harvard.edu
Research Computing | Senior CyberInfrastructure Storage Engineer
Harvard University | 38 Oxford Street, Cambridge, Ma 02138 | Room 102


Re: [ceph-users] removing cluster name support

2017-06-09 Thread Alfredo Deza
On Thu, Jun 8, 2017 at 3:54 PM, Sage Weil  wrote:
> On Thu, 8 Jun 2017, Bassam Tabbara wrote:
>> Thanks Sage.
>>
>> > At CDM yesterday we talked about removing the ability to name your ceph
>> > clusters.
>>
>> Just to be clear, it would still be possible to run multiple ceph
>> clusters on the same nodes, right?
>
> Yes, but you'd need to either (1) use containers (so that different
> daemons see a different /etc/ceph/ceph.conf) or (2) modify the systemd
> unit files to do... something.

In the container case, I need to clarify that ceph-docker deployed
with ceph-ansible is not capable of doing this, since
the ad-hoc systemd units use the hostname as part of the identifier
for the daemon, e.g:

systemctl enable ceph-mon@{{ ansible_hostname }}.service


>
> This is actually no different from Jewel. It's just that currently you can
> run a single cluster on a host (without containers) but call it 'foo' and
> knock yourself out by passing '--cluster foo' every time you invoke the
> CLI.
>
> I'm guessing you're in the (1) case anyway and this doesn't affect you at
> all :)
>
> sage


Re: [ceph-users] removing cluster name support

2017-06-09 Thread Tim Serong
On 06/09/2017 06:41 AM, Benjeman Meekhof wrote:
> Hi Sage,
> 
> We did at one time run multiple clusters on our OSD nodes and RGW
> nodes (with Jewel).  We accomplished this by putting code in our
> puppet-ceph module that would create additional systemd units with
> appropriate CLUSTER=name environment settings for clusters not named
> ceph.  IE, if the module were asked to configure OSD for a cluster
> named 'test' it would copy/edit the ceph-osd service to create a
> 'test-osd@.service' unit that would start instances with CLUSTER=test
> so they would point to the right config file, etc   Eventually on the
> RGW side I started doing instance-specific overrides like
> '/etc/systemd/system/ceph-rado...@client.name.d/override.conf' so as
> to avoid replicating the stock systemd unit.
> 
> We gave up on multiple clusters on the OSD nodes because it wasn't
> really that useful to maintain a separate 'test' cluster on the same
> hardware.  We continue to need ability to reference multiple clusters
> for RGW nodes and other clients. For the other example, users of our
> project might have their own Ceph clusters in addition to wanting to
> use ours.
> 
> If the daemon solution in the no-clustername future is to 'modify
> systemd unit files to do something', we're already doing that, so it's
> not a big issue.  However, the current approach of overriding CLUSTER
> in the environment section of systemd files does seem cleaner than
> overriding an exec command to specify a different config file and
> keyring path.  Maybe systemd units could ship with those arguments as
> variables for easy overriding.

systemd units can be templated/parameterized, but with only one
parameter, the instance ID, which we're already using
(ceph-mon@$(hostname), ceph-osd@$ID, etc.)

> 
> thanks,
> Ben
> 
> On Thu, Jun 8, 2017 at 3:37 PM, Sage Weil  wrote:
>> At CDM yesterday we talked about removing the ability to name your ceph
>> clusters.  There are a number of hurdles that make it difficult to fully
>> get rid of this functionality, not the least of which is that some
>> (many?) deployed clusters make use of it.  We decided that the most we can
>> do at this point is remove support for it in ceph-deploy and ceph-ansible
>> so that no new clusters or deployed nodes use it.
>>
>> The first PR in this effort:
>>
>> https://github.com/ceph/ceph-deploy/pull/441
>>
>> Background:
>>
>> The cluster name concept was added to allow multiple clusters to have
>> daemons coexist on the same host.  At the time it was a hypothetical
>> requirement for a user that never actually made use of it, and the
>> support is kludgey:
>>
>>  - default cluster name is 'ceph'
>>  - default config is /etc/ceph/$cluster.conf, so that the normal
>> 'ceph.conf' still works
>>  - daemon data paths include the cluster name,
>>  /var/lib/ceph/osd/$cluster-$id
>>which is weird (but mostly people are used to it?)
>>  - any cli command you want to touch a non-ceph cluster name
>> needs -C $name or --cluster $name passed to it.
>>
>> Also, as of jewel,
>>
>>  - systemd only supports a single cluster per host, as defined by $CLUSTER
>> in /etc/{sysconfig,default}/ceph
>>
>> which you'll notice removes support for the original "requirement".
>>
>> Also note that you can get the same effect by specifying the config path
>> explicitly (-c /etc/ceph/foo.conf) along with the various options that
>> substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
>>
>>
>> Crap preventing us from removing this entirely:
>>
>>  - existing daemon directories for existing clusters
>>  - various scripts parse the cluster name out of paths
>>
>>
>> Converting an existing cluster "foo" back to "ceph":
>>
>>  - rename /etc/ceph/foo.conf -> ceph.conf
>>  - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
>>  - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph
>>  - reboot
>>
>>
>> Questions:
>>
>>  - Does anybody on the list use a non-default cluster name?
>>  - If so, do you have a reason not to switch back to 'ceph'?
>>
>> Thanks!
>> sage


-- 
Tim Serong
Senior Clustering Engineer
SUSE
tser...@suse.com


Re: [ceph-users] removing cluster name support

2017-06-08 Thread Vaibhav Bhembre
We have an internal management service that works at a higher layer
upstream on top of multiple Ceph clusters. It needs a way to
differentiate and connect separately to each of those clusters.
Presently making that distinction is relatively easy since we create
those connections based on /etc/conf/$cluster.conf, where each cluster
name is unique. I am not sure how this will work for us if we move away
from uniquely identifying multiple clusters from a single client.
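(As an illustration of that fan-out -- a sketch, with made-up cluster
names:)

for c in east west; do
    ceph --cluster "$c" status   # picks up the per-cluster conf/keyring
done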

On Thu, Jun 8, 2017 at 3:37 PM, Sage Weil  wrote:
>
> At CDM yesterday we talked about removing the ability to name your ceph
> clusters.  There are a number of hurdles that make it difficult to fully
> get rid of this functionality, not the least of which is that some
> (many?) deployed clusters make use of it.  We decided that the most we can
> do at this point is remove support for it in ceph-deploy and ceph-ansible
> so that no new clusters or deployed nodes use it.
>
> The first PR in this effort:
>
> https://github.com/ceph/ceph-deploy/pull/441
>
> Background:
>
> The cluster name concept was added to allow multiple clusters to have
> daemons coexist on the same host.  At the time it was a hypothetical
> requirement for a user that never actually made use of it, and the
> support is kludgey:
>
>  - default cluster name is 'ceph'
>  - default config is /etc/ceph/$cluster.conf, so that the normal
> 'ceph.conf' still works
>  - daemon data paths include the cluster name,
>  /var/lib/ceph/osd/$cluster-$id
>which is weird (but mostly people are used to it?)
>  - any cli command you want to touch a non-ceph cluster name
> needs -C $name or --cluster $name passed to it.
>
> Also, as of jewel,
>
>  - systemd only supports a single cluster per host, as defined by $CLUSTER
> in /etc/{sysconfig,default}/ceph
>
> which you'll notice removes support for the original "requirement".
>
> Also note that you can get the same effect by specifying the config path
> explicitly (-c /etc/ceph/foo.conf) along with the various options that
> substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
>
>
> Crap preventing us from removing this entirely:
>
>  - existing daemon directories for existing clusters
>  - various scripts parse the cluster name out of paths
>
>
> Converting an existing cluster "foo" back to "ceph":
>
>  - rename /etc/ceph/foo.conf -> ceph.conf
>  - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
>  - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph
>  - reboot
>
>
> Questions:
>
>  - Does anybody on the list use a non-default cluster name?
>  - If so, do you have a reason not to switch back to 'ceph'?
>
> Thanks!
> sage


Re: [ceph-users] removing cluster name support

2017-06-08 Thread mmokhtar
Hi Sage, 

We do use cluster names. We do not use ceph-deploy or ceph-ansible, so in
the short term it is not an issue. We have scripts that call CLI
commands with the --cluster XX parameter; would that still work? What
time frame do you have in mind for removing this?

Cheers /Maged 

On 2017-06-08 21:37, Sage Weil wrote:

> At CDM yesterday we talked about removing the ability to name your ceph 
> clusters.  There are a number of hurdles that make it difficult to fully 
> get rid of this functionality, not the least of which is that some 
> (many?) deployed clusters make use of it.  We decided that the most we can 
> do at this point is remove support for it in ceph-deploy and ceph-ansible 
> so that no new clusters or deployed nodes use it.
> 
> The first PR in this effort:
> 
> https://github.com/ceph/ceph-deploy/pull/441
> 
> Background:
> 
> The cluster name concept was added to allow multiple clusters to have 
> daemons coexist on the same host.  At the time it was a hypothetical 
> requirement for a user that never actually made use of it, and the 
> support is kludgey:
> 
> - default cluster name is 'ceph'
> - default config is /etc/ceph/$cluster.conf, so that the normal 
> 'ceph.conf' still works
> - daemon data paths include the cluster name,
> /var/lib/ceph/osd/$cluster-$id
> which is weird (but mostly people are used to it?)
> - any cli command you want to touch a non-ceph cluster name 
> needs -C $name or --cluster $name passed to it.
> 
> Also, as of jewel,
> 
> - systemd only supports a single cluster per host, as defined by $CLUSTER 
> in /etc/{sysconfig,default}/ceph
> 
> which you'll notice removes support for the original "requirement".
> 
> Also note that you can get the same effect by specifying the config path 
> explicitly (-c /etc/ceph/foo.conf) along with the various options that 
> substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
> 
> Crap preventing us from removing this entirely:
> 
> - existing daemon directories for existing clusters
> - various scripts parse the cluster name out of paths
> 
> Converting an existing cluster "foo" back to "ceph":
> 
> - rename /etc/ceph/foo.conf -> ceph.conf
> - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
> - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph 
> - reboot
> 
> Questions:
> 
> - Does anybody on the list use a non-default cluster name?
> - If so, do you have a reason not to switch back to 'ceph'?
> 
> Thanks!
> sage


Re: [ceph-users] removing cluster name support

2017-06-08 Thread Benjeman Meekhof
Hi Sage,

We did at one time run multiple clusters on our OSD nodes and RGW
nodes (with Jewel).  We accomplished this by putting code in our
puppet-ceph module that would create additional systemd units with
appropriate CLUSTER=name environment settings for clusters not named
ceph.  I.e., if the module were asked to configure OSDs for a cluster
named 'test', it would copy/edit the ceph-osd service to create a
'test-osd@.service' unit that would start instances with CLUSTER=test
so they would point to the right config file, etc.  Eventually on the
RGW side I started doing instance-specific overrides like
'/etc/systemd/system/ceph-rado...@client.name.d/override.conf' so as
to avoid replicating the stock systemd unit.

We gave up on multiple clusters on the OSD nodes because it wasn't
really that useful to maintain a separate 'test' cluster on the same
hardware.  We continue to need ability to reference multiple clusters
for RGW nodes and other clients. For the other example, users of our
project might have their own Ceph clusters in addition to wanting to
use ours.

If the daemon solution in the no-clustername future is to 'modify
systemd unit files to do something', we're already doing that, so it's
not a big issue.  However, the current approach of overriding CLUSTER
in the environment section of systemd files does seem cleaner than
overriding an exec command to specify a different config file and
keyring path.  Maybe systemd units could ship with those arguments as
variables for easy overriding.
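(The same effect, sketched as a systemd drop-in instead of a copied
unit -- assuming a cluster named 'test':)

mkdir -p /etc/systemd/system/ceph-osd@.service.d
cat > /etc/systemd/system/ceph-osd@.service.d/cluster.conf <<'EOF'
[Service]
Environment=CLUSTER=test
EOF
systemctl daemon-reload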

thanks,
Ben

On Thu, Jun 8, 2017 at 3:37 PM, Sage Weil  wrote:
> At CDM yesterday we talked about removing the ability to name your ceph
> clusters.  There are a number of hurdles that make it difficult to fully
> get rid of this functionality, not the least of which is that some
> (many?) deployed clusters make use of it.  We decided that the most we can
> do at this point is remove support for it in ceph-deploy and ceph-ansible
> so that no new clusters or deployed nodes use it.
>
> The first PR in this effort:
>
> https://github.com/ceph/ceph-deploy/pull/441
>
> Background:
>
> The cluster name concept was added to allow multiple clusters to have
> daemons coexist on the same host.  At the time it was a hypothetical
> requirement for a user that never actually made use of it, and the
> support is kludgey:
>
>  - default cluster name is 'ceph'
>  - default config is /etc/ceph/$cluster.conf, so that the normal
> 'ceph.conf' still works
>  - daemon data paths include the cluster name,
>  /var/lib/ceph/osd/$cluster-$id
>which is weird (but mostly people are used to it?)
>  - any cli command you want to touch a non-ceph cluster name
> needs -C $name or --cluster $name passed to it.
>
> Also, as of jewel,
>
>  - systemd only supports a single cluster per host, as defined by $CLUSTER
> in /etc/{sysconfig,default}/ceph
>
> which you'll notice removes support for the original "requirement".
>
> Also note that you can get the same effect by specifying the config path
> explicitly (-c /etc/ceph/foo.conf) along with the various options that
> substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
>
>
> Crap preventing us from removing this entirely:
>
>  - existing daemon directories for existing clusters
>  - various scripts parse the cluster name out of paths
>
>
> Converting an existing cluster "foo" back to "ceph":
>
>  - rename /etc/ceph/foo.conf -> ceph.conf
>  - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
>  - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph
>  - reboot
>
>
> Questions:
>
>  - Does anybody on the list use a non-default cluster name?
>  - If so, do you have a reason not to switch back to 'ceph'?
>
> Thanks!
> sage


Re: [ceph-users] removing cluster name support

2017-06-08 Thread Dan van der Ster
Hi Sage,

We need named clusters on the client side. RBD or CephFS clients, or
monitoring/admin machines all need to be able to access several clusters.

Internally, each cluster is indeed called "ceph", but the clients use
distinct names to differentiate their configs/keyrings.

Cheers, Dan


On Jun 8, 2017 9:37 PM, "Sage Weil"  wrote:

At CDM yesterday we talked about removing the ability to name your ceph
clusters.  There are a number of hurdles that make it difficult to fully
get rid of this functionality, not the least of which is that some
(many?) deployed clusters make use of it.  We decided that the most we can
do at this point is remove support for it in ceph-deploy and ceph-ansible
so that no new clusters or deployed nodes use it.

The first PR in this effort:

https://github.com/ceph/ceph-deploy/pull/441

Background:

The cluster name concept was added to allow multiple clusters to have
daemons coexist on the same host.  At the time it was a hypothetical
requirement for a user that never actually made use of it, and the
support is kludgey:

 - default cluster name is 'ceph'
 - default config is /etc/ceph/$cluster.conf, so that the normal
'ceph.conf' still works
 - daemon data paths include the cluster name,
 /var/lib/ceph/osd/$cluster-$id
   which is weird (but mostly people are used to it?)
 - any cli command you want to touch a non-ceph cluster name
needs -C $name or --cluster $name passed to it.

Also, as of jewel,

 - systemd only supports a single cluster per host, as defined by $CLUSTER
in /etc/{sysconfig,default}/ceph

which you'll notice removes support for the original "requirement".

Also note that you can get the same effect by specifying the config path
explicitly (-c /etc/ceph/foo.conf) along with the various options that
substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
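(Spelled out -- a sketch, assuming a cluster named 'foo' and OSD id 0:)

ceph-osd -c /etc/ceph/foo.conf -i 0 \
    --osd-data /var/lib/ceph/osd/foo-0 \
    --osd-journal /var/lib/ceph/osd/foo-0/journal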


Crap preventing us from removing this entirely:

 - existing daemon directories for existing clusters
 - various scripts parse the cluster name out of paths


Converting an existing cluster "foo" back to "ceph":

 - rename /etc/ceph/foo.conf -> ceph.conf
 - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
 - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph
 - reboot


Questions:

 - Does anybody on the list use a non-default cluster name?
 - If so, do you have a reason not to switch back to 'ceph'?

Thanks!
sage


Re: [ceph-users] removing cluster name support

2017-06-08 Thread Sage Weil
On Thu, 8 Jun 2017, Bassam Tabbara wrote:
> Thanks Sage.
> 
> > At CDM yesterday we talked about removing the ability to name your ceph 
> > clusters. 
> 
> Just to be clear, it would still be possible to run multiple ceph 
> clusters on the same nodes, right?

Yes, but you'd need to either (1) use containers (so that different 
daemons see a different /etc/ceph/ceph.conf) or (2) modify the systemd 
unit files to do... something.  

This is actually no different from Jewel. It's just that currently you can 
run a single cluster on a host (without containers) but call it 'foo' and 
knock yourself out by passing '--cluster foo' every time you invoke the 
CLI.

I'm guessing you're in the (1) case anyway and this doesn't affect you at 
all :)

sage


Re: [ceph-users] removing cluster name support

2017-06-08 Thread Bassam Tabbara
Thanks Sage.

> At CDM yesterday we talked about removing the ability to name your ceph 
> clusters. 


Just to be clear, it would still be possible to run multiple ceph clusters on 
the same nodes, right?

