Re: [openstack-dev] [all] Service Catalog TNG work in Mitaka ... next steps

2016-03-30 Thread Jay Pipes

On 03/29/2016 06:49 PM, Matt Riedemann wrote:

On 3/29/2016 2:30 PM, Sean Dague wrote:

At the Mitaka Summit we had a double session on the Service Catalog,
where we stood, and where we could move forward. Even though the service
catalog isn't used nearly as much as we'd like, it's used in just enough
odd places that every change pulls on a few other threads that are
unexpected. So this is going to be a slow process going forward, but I
do have faith we'll get there.



Thanks for the write up.


Indeed, thanks very much, Sean, it's super-helpful to read these status 
summaries.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Service Catalog TNG work in Mitaka ... next steps

2016-03-29 Thread Matt Riedemann



On 3/29/2016 2:30 PM, Sean Dague wrote:

At the Mitaka Summit we had a double session on the Service Catalog,
where we stood, and where we could move forward. Even though the service
catalog isn't used nearly as much as we'd like, it's used in just enough
odd places that every change pulls on a few other threads that are
unexpected. So this is going to be a slow process going forward, but I
do have faith we'll get there.

Thanks much to Brant, Chris, and Anne for putting in time this cycle to
keep this ball moving forward.

Mitaka did a lot of fact finding.

* public / admin / internal urls - mixed results

The notion of an internal url is used in many deployments because
operators believe it means they won't be charged for data transfer. There is no
definitive semantic meaning to any of these. Many sites just make all of
these the same, and use the network to ensure that internal connections
hit internal interfaces.

Next Steps: this really needs a set of user stories built from what we
currently have. That's where that one is left.

* project_id optional in projects - good progress

One of the issues with many of the things we want to do with the
service catalog is that we've hard-coded project_id into urls in
projects where it is not really semantically meaningful. That
precludes things like an anonymous service catalog.

We decided to demonstrate this on Nova first. That landed as
microversion 2.18. It means that service catalog entries no longer need
project_id to be in the url. There is a patch up for devstack to enable
this - https://review.openstack.org/#/c/233079/ - though a Tempest patch
removing errant tests needs to land first.

The only real snag we found was that Python Routes, combined with
keystone's ability to have the project id not be a uuid (even though it
defaults to one), made it necessary to add a new config option to
handle this going either way.

This is probably easy to replicate on other projects during the next cycle.

Next Steps: get volunteers from additional projects to replicate this.

* service types authority

One of the things we know we need to make progress on is an actual
authority of all the service catalog types which we recognize. We got
agreement to create this repository, and I've got some outstanding
patches to restructure it for starting off the repo -
https://review.openstack.org/#/q/project:openstack/service-types-authority

The thing we discovered here was that even the apparently easy
problems sometimes aren't. The assumption that there is a single URL
which describes the API for a service is one we don't fulfil even for
most of the base services.

This bump in the road is part of what led to some shifted effort onto
the API Reference in RST work (see
http://lists.openstack.org/pipermail/openstack-dev/2016-March/090659.html)

Next Steps: the API doc conversion probably trumps this work, and at
some level is a requirement for it. Once we get the API reference
transition in full swing, this probably becomes top of stack.

* service catalog TNG schema

Brant has done some early work setting up a schema based on the known
knowns, and leaving some holes for the known unknowns until we get a few
of these locked down (types / allowed urls).

Next Steps: review current schema

* Weekly Meetings

We had been meeting weekly in #openstack-meeting-cp up until release
crunch, when most of us got swamped with such things.

I'd like to keep focus on the API doc conversion in the near term, as
there is a mountain to get over in getting the first API converted;
then we can start making the docs more friendly to our users. I think
this means we probably keep the weekly meeting on hiatus until post
Austin, and start it up again the week after we all get back.


Thanks to the folks that helped get us this far. Hopefully we'll start
picking up steam again once we get a bit of this backlog cleared, and
get chugging during the cycle.

-Sean



Thanks for the write up.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Service Catalog TNG work in Mitaka ... next steps

2016-03-29 Thread Sean Dague
At the Mitaka Summit we had a double session on the Service Catalog,
where we stood, and where we could move forward. Even though the service
catalog isn't used nearly as much as we'd like, it's used in just enough
odd places that every change pulls on a few other threads that are
unexpected. So this is going to be a slow process going forward, but I
do have faith we'll get there.

Thanks much to Brant, Chris, and Anne for putting in time this cycle to
keep this ball moving forward.

Mitaka did a lot of fact finding.

* public / admin / internal urls - mixed results

The notion of an internal url is used in many deployments because
operators believe it means they won't be charged for data transfer. There is no
definitive semantic meaning to any of these. Many sites just make all of
these the same, and use the network to ensure that internal connections
hit internal interfaces.

Next Steps: this really needs a set of user stories built from what we
currently have. That's where that one is left.
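
For illustration, a minimal Python sketch of how a client picks among
the three interfaces today via keystoneauth1 (the auth URL and
credentials below are placeholders); on many clouds all three come
back as the same URL:

from keystoneauth1 import session
from keystoneauth1.identity import v3

auth = v3.Password(
    auth_url="https://keystone.example.com/v3",  # placeholder cloud
    username="demo", password="secret", project_name="demo",
    user_domain_id="default", project_domain_id="default",
)
sess = session.Session(auth=auth)

# Ask the catalog for each interface's compute endpoint.
for interface in ("public", "internal", "admin"):
    print(interface, sess.get_endpoint(service_type="compute",
                                       interface=interface))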

* project_id optional in projects - good progress

One of the issues with many of the things we want to do with the
service catalog is that we've hard-coded project_id into urls in
projects where it is not really semantically meaningful. That
precludes things like an anonymous service catalog.

We decided to demonstrate this on Nova first. That landed as
microversion 2.18. It means that service catalog entries no longer need
project_id to be in the url. There is a patch up for devstack to enable
this - https://review.openstack.org/#/c/233079/ - though a Tempest patch
removing errant tests needs to land first.

The only real snag we found was that Python Routes, combined with
keystone's ability to have the project id not be a uuid (even though it
defaults to one), made it necessary to add a new config option to
handle this going either way.

This is probably easy to replicate on other projects during the next cycle.

Next Steps: get volunteers from additional projects to replicate this.
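
To make the change concrete, a rough sketch (placeholder endpoint and
token; assumes the requests library) of a client opting in to
microversion 2.18 against a catalog endpoint with no project_id in it:

import requests

NOVA = "https://cloud.example.com/compute/v2.1"  # note: no /{project_id}
TOKEN = "..."  # a valid keystone token

resp = requests.get(
    NOVA + "/servers",
    headers={
        "X-Auth-Token": TOKEN,
        # Opt in to the microversion that made project_id in the URL optional.
        "X-OpenStack-Nova-API-Version": "2.18",
    },
)
resp.raise_for_status()
for server in resp.json()["servers"]:
    print(server["id"], server["name"])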

* service types authority

One of the things we know we need to make progress on is an actual
authority of all the service catalog types which we recognize. We got
agreement to create this repository, and I've got some outstanding
patches to restructure it for starting off the repo -
https://review.openstack.org/#/q/project:openstack/service-types-authority

The thing we discovered here was that even the apparently easy
problems sometimes aren't. The assumption that there is a single URL
which describes the API for a service is one we don't fulfil even for
most of the base services.

This bump in the road is part of what led to some shifted effort onto
the API Reference in RST work (see
http://lists.openstack.org/pipermail/openstack-dev/2016-March/090659.html)

Next Steps: the API doc conversion probably trumps this work, and at
some level is a requirement for it. Once we get the API reference
transition in full swing, this probably becomes top of stack.

* service catalog TNG schema

Brant has done some early work setting up a schema based on the known
knowns, and leaving some holes for the known unknowns until we get a few
of these locked down (types / allowed urls).

Next Steps: review current schema
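
Purely as an illustration of the shape such a schema might take (this
is a sketch, not Brant's actual proposal), with the known unknowns
left unconstrained:

import jsonschema

CATALOG_ENTRY = {
    "type": "object",
    "properties": {
        # Allowed "type" values would come from the service types
        # authority once that settles - a known unknown, so unconstrained.
        "type": {"type": "string"},
        "endpoints": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "interface": {"enum": ["public", "internal", "admin"]},
                    "region": {"type": "string"},
                    "url": {"type": "string"},
                },
                "required": ["interface", "url"],
            },
        },
    },
    "required": ["type", "endpoints"],
}

jsonschema.validate(
    {"type": "compute",
     "endpoints": [{"interface": "public",
                    "url": "https://cloud.example.com/compute"}]},
    CATALOG_ENTRY)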

* Weekly Meetings

We had been meeting weekly in #openstack-meeting-cp up until release
crunch, when most of us got swamped with such things.

I'd like to keep focus on the API doc conversion in the near term, as
there is a mountain to get over in getting the first API converted;
then we can start making the docs more friendly to our users. I think
this means we probably keep the weekly meeting on hiatus until post
Austin, and start it up again the week after we all get back.


Thanks to the folks that helped get us this far. Hopefully we'll start
picking up steam again once we get a bit of this backlog cleared, and
get chugging during the cycle.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-12 Thread Sean Dague
On 10/09/2015 07:14 PM, Clint Byrum wrote:

> I don't think we're suggesting that we abandon the current one. We don't
> break userspace!
> 
> However, replacing the underpinnings of the current one with the new one,
> and leaving the current one as a compatibility layer _is_ a way to get
> progress on the new one without shafting users. So I think considerable
> consideration should be given to an approach where we limit working on
> the core of the current solution, and replace that core with the new
> solution + compatibility layer.
> 
>> And, as I've definitely discovered through this process, the Service
>> Catalog today has been fluid enough that where it is used, and what
>> people rely on in it, isn't always clear all at once. For instance,
>> tenant_ids in urls are very surface features in Nova (we don't rely on
>> them, we're using the context), don't exist at all in most new services,
>> and are embedded deep in the core of Swift. This is part of what has
>> also required that the service catalog be embedded in the token, which
>> causes token bloat, and has led to other features that try to shrink
>> the catalog by filtering it by what a user is allowed. That in turn
>> ended up being used by Horizon to populate the feature matrix users see.
>>
>> So we're pulling on a thread, and we have to do that really carefully.
>>
>> I think the important thing is to focus on ensuring that what we have
>> in 6 months doesn't break current users / applications, and is
>> incrementally closer to our end game. That's the lens I'm going to
>> keep putting on this one.
>>
> 
> Right, so adopting a new catalog type that we see as the future, and
> making it the backend for the current solution, is the route I'd like
> to work toward. If we get the groundwork laid for that, but we don't
> make any user-visible improvement in 6 months, is that a failure or a win?

I consider it a fail. The issues with the service catalog today aren't
backend issues, they are front end issues: how that data is represented
and consumed. Changes in that representation will require applications
to adapt, so that is the long pole in the tent.

I feel pretty strongly we have to start on the UX, and let backends
match once we get better interaction. That also lets you see whether or
not you are getting any uptake on the new approach before you go and
spend a ton of time retooling the backend.

-Sean


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-11 Thread Robert Collins
On 10 October 2015 at 12:14, Clint Byrum  wrote:

>> I think the important thing is to focus on ensuring that what we have
>> in 6 months doesn't break current users / applications, and is
>> incrementally closer to our end game. That's the lens I'm going to
>> keep putting on this one.
>>
>
> Right, so adopting a new catalog type that we see as the future, and
> making it the backend for the current solution, is the route I'd like
> to work toward. If we get the groundwork laid for that, but we don't
> make any user-visible improvement in 6 months, is that a failure or a win?

I don't think it's either, from this distance. If we guess right about
what we need, the groundwork helps, and \o/.

If we guess wrong, then we now have new groundwork that still needs
hammering on, and it might be better or worse :/.

If we can be assessing the new thing against our known needs at each
step, that might reduce the risk.

But the biggest risk I see is the one Sean already articulated: we
have only a vague idea about how folk are /actually/ using what we
built, and thus it's very hard to predict 'changing X will break
someone'.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Clint Byrum
Excerpts from Sean Dague's message of 2015-10-09 14:00:40 -0700:
> On 10/09/2015 02:52 PM, Jonathan D. Proulx wrote:
> > On Fri, Oct 09, 2015 at 02:17:26PM -0400, Monty Taylor wrote:
> > :On 10/09/2015 01:39 PM, David Stanek wrote:
> > :>
> > :>On Fri, Oct 9, 2015 at 1:28 PM, Jonathan D. Proulx wrote:
> > :>As an operator I'd be happy to use SRV records to define endpoints,
> > :>though multiple regions could make that messy.
> > :>
> > :>would we make subdomains per region or include region name in the
> > :>service name?
> > :>
> > :>_compute-regionone._tcp.example.com 
> > :>-vs-
> > :>_compute._tcp.regionone.example.com 
> > :>
> > :>Also not all operators can control their DNS to this level so it
> > :>couldn't be the only option.
> > :
> > :SO - XMPP does this. The way it works is that if your XMPP provider
> > :has put the appropriate records in DNS, then everything Just Works. If
> > :not, then you, as a consumer, have several pieces of information you
> > :need to provide by hand.
> > :
> > :Of course, there are already several pieces of information you have
> > :to provide by hand to connect to OpenStack, so needing to download a
> > :manifest file or something like that to talk to a cloud in an
> > :environment where the people running a cloud do not have the ability
> > :to add information to DNS (boggles) shouldn't be that terrible.
> > 
> > yes but XMPP requires 2 (maybe 3) SRV records so an equivalent number
> > of local config options is manageable. A cloud with X endpoints and Y
> > regions is significantly more.
> > 
> > Not to say this couldn't be done by packing more stuff into the openrc
> > or equivalent so users don't need to directly enter all that, but that
> > would be a significant change and one I think would be more difficult
> > for smaller operations.
> > 
> > :One could also imagine an in-between option where OpenStack could run
> > :an _optional_ DNS for this purpose - and then the only 'by-hand'
> > :you'd need for clouds with no real DNS is the location of the
> > :discovery DNS.
> > 
> > Yes a special purpose DNS (a la dnsbl) might be preferable to
> > pushing around static configs.
> 
> I do realize lots of people want to go in much more radical directions
> here. I think we have to be really careful about that. The current
> cinder v1 -> v2 transition challenges demonstrate how much inertia there
> is. 3 years of talking about a Tasks API is another instance of it.
> 
> We aren't starting with a blank slate. This is brownfield development.
> There are enough users of this that any shifts need to be done in
> careful steps that produce a new thing similar enough to the old thing
> that people will easily be able to take advantage of it. Which means I
> think deciding to jump off the REST bandwagon for this is currently a
> bridge too far. At least to get anything tangible done in the next 6 to
> 12 months.
> 

I'm 100% in agreement that we can't abandon things that we've created. If
we create a DNS based catalog that is ready for prime time tomorrow,
we will have the REST based catalog for _years_.

> I think getting us a service catalog served over REST that doesn't
> require auth, and doesn't require tenant_ids in urls, gets us someplace
> we could figure out a DNS representation (for those that wanted that).
> But we have to tick / tock this and not change transports and
> representations at the same time.
> 

I don't think we're suggesting that we abandon the current one. We don't
break userspace!

However, replacing the underpinnings of the current one with the new one,
and leaving the current one as a compatibility layer _is_ a way to get
progress on the new one without shafting users. So I think considerable
consideration should be given to an approach where we limit working on
the core of the current solution, and replace that core with the new
solution + compatibility layer.

> And, as I've definitely discovered through this process, the Service
> Catalog today has been fluid enough that where it is used, and what
> people rely on in it, isn't always clear all at once. For instance,
> tenant_ids in urls are very surface features in Nova (we don't rely on
> them, we're using the context), don't exist at all in most new services,
> and are embedded deep in the core of Swift. This is part of what has
> also required that the service catalog be embedded in the token, which
> causes token bloat, and has led to other features that try to shrink
> the catalog by filtering it by what a user is allowed. That in turn
> ended up being used by Horizon to populate the feature matrix users see.
> 
> So we're pulling on a thread, and we have to do that really carefully.
> 
> I think the important thing is to focus on ensuring that what we have
> in 6 months doesn't break current users / applications, and is incrementally closer
> to our end 

Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Shamail Tahir
Well said!

On Fri, Oct 9, 2015 at 5:00 PM, Sean Dague  wrote:

> On 10/09/2015 02:52 PM, Jonathan D. Proulx wrote:
> > On Fri, Oct 09, 2015 at 02:17:26PM -0400, Monty Taylor wrote:
> > :On 10/09/2015 01:39 PM, David Stanek wrote:
> > :>
> > :>On Fri, Oct 9, 2015 at 1:28 PM, Jonathan D. Proulx wrote:
> > :>As an operator I'd be happy to use SRV records to define endpoints,
> > :>though multiple regions could make that messy.
> > :>
> > :>would we make subdomains per region or include region name in the
> > :>service name?
> > :>
> > :>_compute-regionone._tcp.example.com 
> > :>-vs-
> > :>_compute._tcp.regionone.example.com
> > :>
> > :>Also not all operators can control their DNS to this level so it
> > :>couldn't be the only option.
> > :
> > :SO - XMPP does this. The way it works is that if your XMPP provider
> > :has put the appropriate records in DNS, then everything Just Works. If
> > :not, then you, as a consumer, have several pieces of information you
> > :need to provide by hand.
> > :
> > :Of course, there are already several pieces of information you have
> > :to provide by hand to connect to OpenStack, so needing to download a
> > :manifest file or something like that to talk to a cloud in an
> > :environment where the people running a cloud do not have the ability
> > :to add information to DNS (boggles) shouldn't be that terrible.
> >
> > yes but XMPP requires 2 (maybe 3) SRV records so an equivalent number
> > of local config options is manageable. A cloud with X endpoints and Y
> > regions is significantly more.
> >
> > Not to say this couldn't be done by packing more stuff into the openrc
> > or equivalent so users don't need to directly enter all that, but that
> > would be a significant change and one I think would be more difficult
> > for smaller operations.
> >
> > :One could also imagine an in-between option where OpenStack could run
> > :an _optional_ DNS for this purpose - and then the only 'by-hand'
> > :you'd need for clouds with no real DNS is the location of the
> > :discovery DNS.
> >
> > Yes a special purpose DNS (a la dnsbl) might be preferable to
> > pushing around static configs.
>
> I do realize lots of people want to go in much more radical directions
> here. I think we have to be really careful about that. The current
> cinder v1 -> v2 transition challenges demonstrate how much inertia there
> is. 3 years of talking about a Tasks API is another instance of it.

Yep... very valid point.

>
>
> We aren't starting with a blank slate. This is brownfield development.
> There are enough users of this that any shifts need to be done in
> careful steps that produce a new thing similar enough to the old thing
> that people will easily be able to take advantage of it. Which means I
> think deciding to jump off the REST bandwagon for this is currently a
> bridge too far. At least to get anything tangible done in the next 6 to
> 12 months.
>
++ but I think it does make sense to take possible future design
considerations into account.  For example, we shouldn't abandon REST (for
the points you have raised), but if there is interest in possibly using
DNS in the future then we should try to make design choices today that
would allow for that direction.  To further the compatibility
conversation, if/when we do decide to add DNS... we will still need to
support REST for an indefinite amount of time to let people choose their
desired mode of operation over a time window that should be (for the most
part) in their control, given their own pace of adopting changes.

>
> I think getting us a service catalog served over REST that doesn't
> require auth, and doesn't require tenant_ids in urls, gets us someplace
> we could figure out a DNS representation (for those that wanted that).
> But we have to tick / tock this and not change transports and
> representations at the same time.
>
> And, as I've definitely discovered through this process, the Service
> Catalog today has been fluid enough that where it is used, and what
> people rely on in it, isn't always clear all at once. For instance,
> tenant_ids in urls are very surface features in Nova (we don't rely on
> them, we're using the context), don't exist at all in most new services,
> and are embedded deep in the core of Swift. This is part of what has
> also required that the service catalog be embedded in the token, which
> causes token bloat, and has led to other features that try to shrink
> the catalog by filtering it by what a user is allowed. That in turn
> ended up being used by Horizon to populate the feature matrix users see.
>
++

> So we're pulling on a thread, and we have to do that really carefully.
>
> I think the important thing is to focus on ensuring that what we have
> in 6 months doesn't break current users / applications, and is incrementally closer

Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Everett Toews
On Oct 9, 2015, at 9:39 AM, Sean Dague  wrote:
> 
> It looks like some great conversation got going on the service catalog
> standardization spec / discussion at the last cross project meeting.
> Sorry I wasn't there to participate.
> 
> A lot of that ended up in here (which was an etherpad stevemar and I
> started working on the other day) -
> https://etherpad.openstack.org/p/mitaka-service-catalog which is great.
> 
> A couple of things that would make this more useful:
> 
> 1) if you are commenting, please (ircnick) your comments. It's not easy
> to always track down folks later if the comment was not understood.
> 
> 2) please provide a link to code when explaining a point. GitHub supports
> the ability to very nicely link to (and highlight) a range of code by a
> stable object ref. For instance -
> https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132
> 
> That will make comments about X does Y, or Z can't do W, more clear
> because we'll all be looking at the same chunk of code and start to
> build more shared context here. One of the reasons this has been long
> and difficult is that we're missing a lot of that shared context between
> projects. Reassembling that by reading each other's relevant code will
> go a long way to understanding the whole picture.
> 
> 
> Lastly, I think it's pretty clear we probably need a dedicated workgroup
> meeting to keep this ball rolling, come to a reasonable plan that
> doesn't break any existing deployed code, but lets us get to a better
> world in a few cycles. annegentle, stevemar, and I have been pushing on
> that ball so far, however I'd like to know who else is willing to commit
> a chunk of time over this cycle to this. Once we know that we can try to
> figure out when a reasonable weekly meeting point would be.

It's likely you're already aware of it but see

https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Service_Catalog

for many examples of service catalogs from both public and private OpenStack 
clouds.

Everett


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Sean Dague
On 10/09/2015 02:52 PM, Jonathan D. Proulx wrote:
> On Fri, Oct 09, 2015 at 02:17:26PM -0400, Monty Taylor wrote:
> :On 10/09/2015 01:39 PM, David Stanek wrote:
> :>
> :>On Fri, Oct 9, 2015 at 1:28 PM, Jonathan D. Proulx wrote:
> :>As an operator I'd be happy to use SRV records to define endpoints,
> :>though multiple regions could make that messy.
> :>
> :>would we make subdomains per region or include region name in the
> :>service name?
> :>
> :>_compute-regionone._tcp.example.com 
> :>-vs-
> :>_compute._tcp.regionone.example.com 
> :>
> :>Also not all operators can control their DNS to this level so it
> :>couldn't be the only option.
> :
> :SO - XMPP does this. The way it works is that if your XMPP provider
> :has put the appropriate records in DNS, then everything Just Works. If
> :not, then you, as a consumer, have several pieces of information you
> :need to provide by hand.
> :
> :Of course, there are already several pieces of information you have
> :to provide by hand to connect to OpenStack, so needing to download a
> :manifest file or something like that to talk to a cloud in an
> :environment where the people running a cloud do not have the ability
> :to add information to DNS (boggles) shouldn't be that terrible.
> 
> yes but XMPP requires 2 (maybe 3) SRV records so an equivalent number
> of local config options is manageable. A cloud with X endpoints and Y
> regions is significantly more.
> 
> Not to say this couldn't be done by packing more stuff into the openrc
> or equivalent so users don't need to directly enter all that, but that
> would be a significant change and one I think would be more difficult
> for smaller operations.
> 
> :One could also imagine an in-between option where OpenStack could run
> :an _optional_ DNS for this purpose - and then the only 'by-hand'
> :you'd need for clouds with no real DNS is the location of the
> :discovery DNS.
> 
> Yes a special purpose DNS (a la dnsbl) might be preferable to
> pushing around static configs.

I do realize lots of people want to go in much more radical directions
here. I think we have to be really careful about that. The current
cinder v1 -> v2 transition challenges demonstrate how much inertia there
is. 3 years of talking about a Tasks API is another instance of it.

We aren't starting with a blank slate. This is brownfield development.
There are enough users of this that any shifts need to be done in
careful steps that produce a new thing similar enough to the old thing
that people will easily be able to take advantage of it. Which means I
think deciding to jump off the REST bandwagon for this is currently a
bridge too far. At least to get anything tangible done in the next 6 to
12 months.

I think getting us a service catalog served over REST that doesn't
require auth, and doesn't require tenant_ids in urls, gets us someplace
we could figure out a DNS representation (for those that wanted that).
But we have to tick / tock this and not change transports and
representations at the same time.

And, as I've definitely discovered through this process, the Service
Catalog today has been fluid enough that where it is used, and what
people rely on in it, isn't always clear all at once. For instance,
tenant_ids in urls are very surface features in Nova (we don't rely on
them, we're using the context), don't exist at all in most new services,
and are embedded deep in the core of Swift. This is part of what has
also required that the service catalog be embedded in the token, which
causes token bloat, and has led to other features that try to shrink
the catalog by filtering it by what a user is allowed. That in turn
ended up being used by Horizon to populate the feature matrix users see.

So we're pulling on a thread, and we have to do that really carefully.

I think the important thing is to focus on ensuring that what we have
in 6 months doesn't break current users / applications, and is
incrementally closer to our end game. That's the lens I'm going to keep
putting on this one.
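
(As a concrete footnote on the token bloat point, keystone v3 already
exposes two related knobs; a sketch with placeholder URL and
credentials: '?nocatalog' keeps the catalog out of the token response,
and GET /v3/auth/catalog fetches the filtered catalog on demand.)

import requests

KS = "https://keystone.example.com/v3"  # placeholder
body = {"auth": {
    "identity": {"methods": ["password"], "password": {"user": {
        "name": "demo", "domain": {"id": "default"},
        "password": "secret"}}},
    "scope": {"project": {"name": "demo",
                          "domain": {"id": "default"}}}}}

# Get a scoped token *without* the embedded catalog.
r = requests.post(KS + "/auth/tokens?nocatalog", json=body)
token = r.headers["X-Subject-Token"]

# Fetch the (project-filtered) catalog separately when needed.
catalog = requests.get(KS + "/auth/catalog",
                       headers={"X-Auth-Token": token}).json()["catalog"]
for svc in catalog:
    print(svc["type"], [e["url"] for e in svc["endpoints"]])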

-Sean

-- 
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Monty Taylor

On 10/09/2015 11:21 AM, Shamail wrote:




On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:

It looks like some great conversation got going on the service catalog
standardization spec / discussion at the last cross project meeting.
Sorry I wasn't there to participate.


Apologies if this is a question that has already been addressed, but why can't we
just leverage something like consul.io?


It's a good question and there have actually been some discussions about 
leveraging it on the backend. However, even if we did, we'd still need 
keystone to provide the multi-tenancy view on the subject. consul wasn't 
designed (quite correctly I think) to be a user-facing service for 50k 
users.


I think it would be an excellent backend.
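
To give a flavor of what "backend" could mean here, a rough sketch
against consul's HTTP API (the agent address and service values are
made up; keystone would still own the tenant-facing view):

import requests

CONSUL = "http://127.0.0.1:8500"  # local consul agent (placeholder)

# Register a compute endpoint with the local agent.
requests.put(CONSUL + "/v1/agent/service/register", json={
    "Name": "compute",
    "Tags": ["regionone", "public"],
    "Address": "compute.example.com",
    "Port": 8774,
}).raise_for_status()

# Read back every registered instance of the "compute" service.
for svc in requests.get(CONSUL + "/v1/catalog/service/compute").json():
    print(svc["ServiceAddress"] or svc["Address"], svc["ServicePort"])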




A lot of that ended up in here (which was an etherpad stevemar and I
started working on the other day) -
https://etherpad.openstack.org/p/mitaka-service-catalog which is great.

I didn't see anything immediately in the etherpad that couldn't be covered with 
the tool mentioned above.  It is open-source so we could always try to 
contribute there if we need something extra (written in golang though).


A couple of things that would make this more useful:

1) if you are commenting, please (ircnick) your comments. It's not easy
to always track down folks later if the comment was not understood.

2) please provide a link to code when explaining a point. GitHub supports
the ability to very nicely link to (and highlight) a range of code by a
stable object ref. For instance -
https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132

That will make comments about X does Y, or Z can't do W, more clear
because we'll all be looking at the same chunk of code and start to
build more shared context here. One of the reasons this has been long
and difficult is that we're missing a lot of that shared context between
projects. Reassembling that by reading each other's relevant code will
go a long way to understanding the whole picture.


Lastly, I think it's pretty clear we probably need a dedicated workgroup
meeting to keep this ball rolling, come to a reasonable plan that
doesn't break any existing deployed code, but lets us get to a better
world in a few cycles. annegentle, stevemar, and I have been pushing on
that ball so far, however I'd like to know who else is willing to commit
a chunk of time over this cycle to this. Once we know that we can try to
figure out when a reasonable weekly meeting point would be.

Thanks,

-Sean

--
Sean Dague
http://dague.net





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Adam Young

On 10/09/2015 12:28 PM, Monty Taylor wrote:

On 10/09/2015 11:21 AM, Shamail wrote:




On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:

It looks like some great conversation got going on the service catalog
standardization spec / discussion at the last cross project meeting.
Sorry I wasn't there to participate.

Apologies if this is a question that has already been addressed, but why
can't we just leverage something like consul.io?


It's a good question and there have actually been some discussions 
about leveraging it on the backend. However, even if we did, we'd 
still need keystone to provide the multi-tenancy view on the subject. 
consul wasn't designed (quite correctly I think) to be a user-facing 
service for 50k users.


I think it would be an excellent backend.


The better question is, "Why are we not using DNS for the service catalog?"

Right now, we have the aspect of "project filtering of endpoints", which
means that a token does not need to have every endpoint for a specified
service.  If we were to use DNS, how would that map to the existing
functionality?



Can we make better use of regions to help in endpoint filtering/selection?

Do we still need a query to Keystone to play arbiter if there are two 
endpoints assigned for a specific use case to help determine which is 
appropriate?


A lot of that ended up in here (which was an etherpad stevemar and I
started working on the other day) -
https://etherpad.openstack.org/p/mitaka-service-catalog which is great.
I didn't see anything immediately in the etherpad that couldn't be 
covered with the tool mentioned above.  It is open-source so we could 
always try to contribute there if we need something extra (written in 
golang though).


A couple of things that would make this more useful:

1) if you are commenting, please (ircnick) your comments. It's not easy
to always track down folks later if the comment was not understood.

2) please provide a link to code when explaining a point. GitHub supports
the ability to very nicely link to (and highlight) a range of code by a
stable object ref. For instance -
https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132 



That will make comments about X does Y, or Z can't do W, more clear
because we'll all be looking at the same chunk of code and start to
build more shared context here. One of the reasons this has been long
and difficult is that we're missing a lot of that shared context between
projects. Reassembling that by reading each other's relevant code will
go a long way to understanding the whole picture.


Lastly, I think it's pretty clear we probably need a dedicated workgroup
meeting to keep this ball rolling, come to a reasonable plan that
doesn't break any existing deployed code, but lets us get to a better
world in a few cycles. annegentle, stevemar, and I have been pushing on
that ball so far, however I'd like to know who else is willing to commit
a chunk of time over this cycle to this. Once we know that we can try to
figure out when a reasonable weekly meeting point would be.

Thanks,

-Sean

--
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Shamail


> On Oct 9, 2015, at 12:28 PM, Monty Taylor  wrote:
> 
>> On 10/09/2015 11:21 AM, Shamail wrote:
>> 
>> 
>>> On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:
>>> 
>>> It looks like some great conversation got going on the service catalog
>>> standardization spec / discussion at the last cross project meeting.
>>> Sorry I wasn't there to participate.
>> Apologies if this is a question that has already been addressed, but why can't
>> we just leverage something like consul.io?
> 
> It's a good question and there have actually been some discussions about 
> leveraging it on the backend. However, even if we did, we'd still need 
> keystone to provide the multi-tenancy view on the subject. consul wasn't 
> designed (quite correctly I think) to be a user-facing service for 50k users.
> 
> I think it would be an excellent backend.
Thanks, that makes sense.  I agree that it might be a good backend but not the 
overall solution... I was bringing it up to ensure we consider existing options 
(where possible) and spend cycles on the unsolved bits.

I am going to look into the scaling limitations for consul to educate myself.
> 
>> 
>>> A lot of that ended up in here (which was an etherpad stevemar and I
>>> started working on the other day) -
>>> https://etherpad.openstack.org/p/mitaka-service-catalog which is great.
>> I didn't see anything immediately in the etherpad that couldn't be covered 
>> with the tool mentioned above.  It is open-source so we could always try to 
>> contribute there if we need something extra (written in golang though).
>>> 
>>> A couple of things that would make this more useful:
>>> 
>>> 1) if you are commenting, please (ircnick) your comments. It's not easy
>>> to always track down folks later if the comment was not understood.
>>> 
>>> 2) please provide a link to code when explaining a point. GitHub supports
>>> the ability to very nicely link to (and highlight) a range of code by a
>>> stable object ref. For instance -
>>> https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132
>>> 
>>> That will make comments about X does Y, or Z can't do W, more clear
>>> because we'll all be looking at the same chunk of code and start to
>>> build more shared context here. One of the reasons this has been long
>>> and difficult is that we're missing a lot of that shared context between
>>> projects. Reassembling that by reading each other's relevant code will
>>> go a long way to understanding the whole picture.
>>> 
>>> 
>>> Lastly, I think it's pretty clear we probably need a dedicated workgroup
>>> meeting to keep this ball rolling, come to a reasonable plan that
>>> doesn't break any existing deployed code, but lets us get to a better
>>> world in a few cycles. annegentle, stevemar, and I have been pushing on
>>> that ball so far, however I'd like to know who else is willing to commit
>>> a chunk of time over this cycle to this. Once we know that we can try to
>>> figure out when a reasonable weekly meeting point would be.
>>> 
>>> Thanks,
>>> 
>>>-Sean
>>> 
>>> --
>>> Sean Dague
>>> http://dague.net
>>> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Clint Byrum
Excerpts from Adam Young's message of 2015-10-09 09:51:55 -0700:
> On 10/09/2015 12:28 PM, Monty Taylor wrote:
> > On 10/09/2015 11:21 AM, Shamail wrote:
> >>
> >>
> >>> On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:
> >>>
> >>> It looks like some great conversation got going on the service catalog
> >>> standardization spec / discussion at the last cross project meeting.
> >>> Sorry I wasn't there to participate.
> >>>
> >> Apologies if this is a question that has already been addressed, but why
> >> can't we just leverage something like consul.io?
> >
> > It's a good question and there have actually been some discussions 
> > about leveraging it on the backend. However, even if we did, we'd 
> > still need keystone to provide the multi-tenancy view on the subject. 
> > consul wasn't designed (quite correctly I think) to be a user-facing 
> > service for 50k users.
> >
> > I think it would be an excellent backend.
> 
> The better question is, "Why are we not using DNS for the service catalog?"
> 

Agreed, we're using HTTP and JSON for what DNS is supposed to do.

As an aside, consul has a lovely DNS interface.

> Right now, we have the aspect of "project filtering of endpoints", which
> means that a token does not need to have every endpoint for a specified
> service.  If we were to use DNS, how would that map to the existing
> functionality?
> 

There are a number of "how?" answers, but the "what?" question is the
more interesting one. As in, what is the actual point of this
functionality, and what do people want to do per-project?

I think what really ends up happening is you have 99.9% the same
catalogs for the majority of projects, with a few getting back a
different endpoint or two. For that, it seems like you would have two
queries needed in the "discovery" phase:

SRV compute.myprojectid.region1.mycloud.com
SRV compute.region1.mycloud.com

Use the first one you get an answer for. Keystone would simply add
or remove entries for special project<->endpoint mappings. You don't
need Keystone to tell you what your project ID is, so you just make
these queries. When you get a negative answer, respect the TTL and stop
querying for it.

Did I miss a use case with that?
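
A sketch of that lookup order with dnspython (the record names follow
the hypothetical scheme above; a real client would also want to honor
negative-answer TTLs rather than re-querying):

import dns.resolver

def find_endpoint(service, project_id, region, domain):
    candidates = [
        # Project-specific override first, then the region-wide default.
        "{0}.{1}.{2}.{3}".format(service, project_id, region, domain),
        "{0}.{1}.{2}".format(service, region, domain),
    ]
    for name in candidates:
        try:
            answers = dns.resolver.resolve(name, "SRV")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            continue  # negative answer; fall through to the next name
        # Lowest priority wins; prefer higher weight within a priority.
        rr = min(answers, key=lambda r: (r.priority, -r.weight))
        return "{0}:{1}".format(rr.target.to_text().rstrip("."), rr.port)
    return None

print(find_endpoint("compute", "myprojectid", "region1", "mycloud.com"))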

> 
> Can we make better use of regions to help in endpoint filtering/selection?
> 
> Do we still need a query to Keystone to play arbiter if there are two 
> endpoints assigned for a specific use case to help determine which is 
> appropriate?
> 

I'd hope not. If the user is authorized then they should be able
to access the endpoint that they're assigned to. It's confusing to
me sometimes how keystone is thought of as an authorization service,
when it is named "Identity", and primarily performs authentication and
service discovery.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Jonathan D. Proulx
On Fri, Oct 09, 2015 at 01:01:20PM -0400, Shamail wrote:
:> On Oct 9, 2015, at 12:28 PM, Monty Taylor  wrote:
:> 
:>> On 10/09/2015 11:21 AM, Shamail wrote:
:>> 
:>> 
:>>> On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:
:>>> 
:>>> It looks like some great conversation got going on the service catalog
:>>> standardization spec / discussion at the last cross project meeting.
:>>> Sorry I wasn't there to participate.
:>> Apologies if this is a question that has already been addressed, but why can't
:>> we just leverage something like consul.io?
:> 
:> It's a good question and there have actually been some discussions about 
leveraging it on the backend. However, even if we did, we'd still need keystone 
to provide the multi-tenancy view on the subject. consul wasn't designed (quite 
correctly I think) to be a user-facing service for 50k users.
:> 
:> I think it would be an excellent backend.
:Thanks, that makes sense.  I agree that it might be a good backend but not the 
overall solution... I was bringing it up to ensure we consider existing options 
(where possible) and spend cycles on the unsolved bits.

As an operator I'd be happy to use SRV records to define endpoints,
though multiple regions could make that messy.

would we make subdomains per region or include region name in the
service name? 

_compute-regionone._tcp.example.com 
   -vs-
_compute._tcp.regionone.example.com

Also not all operators can control their DNS to this level so it
couldn't be the only option.

Or are you talking about using an internal DNS implementation private
to the OpenStack Deployment?  I'm actually a bit less happy with that
idea.

-Jon
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread David Stanek
On Fri, Oct 9, 2015 at 1:28 PM, Jonathan D. Proulx 
wrote:

> On Fri, Oct 09, 2015 at 01:01:20PM -0400, Shamail wrote:
> :> On Oct 9, 2015, at 12:28 PM, Monty Taylor  wrote:
> :>
> :>> On 10/09/2015 11:21 AM, Shamail wrote:
> :>>
> :>>
> :>>> On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:
> :>>>
> :>>> It looks like some great conversation got going on the service catalog
> :>>> standardization spec / discussion at the last cross project meeting.
> :>>> Sorry I wasn't there to participate.
> :>> Apologies if this is a question that has already been addressed, but why
> can't we just leverage something like consul.io?
> :>
> :> It's a good question and there have actually been some discussions
> about leveraging it on the backend. However, even if we did, we'd still
> need keystone to provide the multi-tenancy view on the subject. consul
> wasn't designed (quite correctly I think) to be a user-facing service for
> 50k users.
> :>
> :> I think it would be an excellent backend.
> :Thanks, that makes sense.  I agree that it might be a good backend but
> not the overall solution... I was bringing it up to ensure we consider
> existing options (where possible) and spend cycles on the unsolved bits.
>
> As an operator I'd be happy to use SRV records to define endpoints,
> though multiple regions could make that messy.
>
> would we make subdomains per region or include region name in the
> service name?
>
> _compute-regionone._tcp.example.com
>-vs-
> _compute._tcp.regionone.example.com
>
> Also not all operators can control their DNS to this level so it
> couldn't be the only option.
>
> Or are you talking about using an internal DNS implementation private
> to the OpenStack Deployment?  I'm actually a bit less happy with that
> idea.
>

I was able to put together an implementation[1] of DNS-SD loosely based on
RFC-6763[2]. It's really a proof of concept, but we've talked so much about
it that I decided to get something working. Although if this seems like a
viable option then there's still much work to be done.

I'd love feedback.

1. https://gist.github.com/dstanek/093f851fdea8ebfd893d
2. https://tools.ietf.org/html/rfc6763
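
For anyone who wants the flavor without opening the gist, a DNS-SD
browse in the RFC 6763 style looks roughly like this (a fresh sketch
with dnspython, not the code from the gist):

import dns.resolver

def browse(service_type, domain):
    """Resolve instances of e.g. '_compute._tcp' under a domain."""
    found = []
    for ptr in dns.resolver.resolve(service_type + "." + domain, "PTR"):
        instance = ptr.target.to_text()
        srv = next(iter(dns.resolver.resolve(instance, "SRV")))
        txt = {}  # RFC 6763 puts key=value metadata in TXT records
        for rr in dns.resolver.resolve(instance, "TXT"):
            for raw in rr.strings:
                key, _, val = raw.decode().partition("=")
                txt[key] = val
        found.append((srv.target.to_text().rstrip("."), srv.port, txt))
    return found

print(browse("_compute._tcp", "example.com"))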

-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Monty Taylor

On 10/09/2015 01:39 PM, David Stanek wrote:


On Fri, Oct 9, 2015 at 1:28 PM, Jonathan D. Proulx wrote:

On Fri, Oct 09, 2015 at 01:01:20PM -0400, Shamail wrote:
:> On Oct 9, 2015, at 12:28 PM, Monty Taylor wrote:
:>
:>> On 10/09/2015 11:21 AM, Shamail wrote:
:>>
:>>
:>>> On Oct 9, 2015, at 10:39 AM, Sean Dague wrote:
:>>>
:>>> It looks like some great conversation got going on the service
catalog
:>>> standardization spec / discussion at the last cross project
meeting.
:>>> Sorry I wasn't there to participate.
:>> Apologies if this is a question that has already been addressed,
but why can't we just leverage something like consul.io?
:>
:> It's a good question and there have actually been some
discussions about leveraging it on the backend. However, even if we
did, we'd still need keystone to provide the multi-tenancy view on
the subject. consul wasn't designed (quite correctly I think) to be
a user-facing service for 50k users.
:>
:> I think it would be an excellent backend.
:Thanks, that makes sense.  I agree that it might be a good backend
but not the overall solution... I was bringing it up to ensure we
consider existing options (where possible) and spend cycles on the
unsolved bits.

As an operator I'd be happy to use SRV records to define endpoints,
though multiple regions could make that messy.

would we make subdomains per region or include region name in the
service name?

_compute-regionone._tcp.example.com 
-vs-
_compute._tcp.regionone.example.com 

Also not all operators can control their DNS to this level so it
couldn't be the only option.


SO - XMPP does this. The way it works is that if your XMPP provider has 
put the appropriate records in DNS, then everything Just Works. If not,
then you, as a consumer, have several pieces of information you need to 
provide by hand.


Of course, there are already several pieces of information you have to 
provide by hand to connect to OpenStack, so needing to download a 
manifest file or something like that to talk to a cloud in an 
environment where the people running a cloud do not have the ability to 
add information to DNS (boggles) shouldn't be that terrible.


One could also imagine an in-between option where OpenStack could run an 
_optional_ DNS for this purpose - and then the only 'by-hand' you'd need 
for clouds with no real DNS is the location of the discovery DNS.



Or are you talking about using an internal DNS implementation private
to the OpenStack Deployment?  I'm actually a bit less happy with that
idea.


I was able to put together an implementation[1] of DNS-SD loosely based
on RFC-6763[2]. It's really a proof of concept, but we've talked so much
about it that I decided to get something working. Although if this seems
like a viable option then there's still much work to be done.

I'd love feedback.

1. https://gist.github.com/dstanek/093f851fdea8ebfd893d
2. https://tools.ietf.org/html/rfc6763

--
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Jonathan D. Proulx
On Fri, Oct 09, 2015 at 02:17:26PM -0400, Monty Taylor wrote:
:On 10/09/2015 01:39 PM, David Stanek wrote:
:>
:>On Fri, Oct 9, 2015 at 1:28 PM, Jonathan D. Proulx wrote:
:>As an operator I'd be happy to use SRV records to define endpoints,
:>though multiple regions could make that messy.
:>
:>would we make subdomains per region or include region name in the
:>service name?
:>
:>_compute-regionone._tcp.example.com 
:>-vs-
:>_compute._tcp.regionone.example.com 
:>
:>Also not all operators can control their DNS to this level so it
:>couldn't be the only option.
:
:SO - XMPP does this. The way it works is that if your XMPP provider
:has put the appropriate records in DNS, then everything Just Works. If
:not, then you, as a consumer, have several pieces of information you
:need to provide by hand.
:
:Of course, there are already several pieces of information you have
:to provide by hand to connect to OpenStack, so needing to download a
:manifest file or something like that to talk to a cloud in an
:environment where the people running a cloud do not have the ability
:to add information to DNS (boggles) shouldn't be that terrible.

yes but XMPP requires 2 (maybe 3) SRV records so an equivalent number
of local config options is manageable. A cloud with X endpoints and Y
regions is significantly more.

Not to say this couldn't be done by packing more stuff into the openrc
or equivalent so users don't need to directly enter all that, but that
would be a significant change and one I think would be more difficult
for smaller operations.

:One could also imagine an in-between option where OpenStack could run
:an _optional_ DNS for this purpose - and then the only 'by-hand'
:you'd need for clouds with no real DNS is the location of the
:discovery DNS.

Yes a special purpose DNS (a la dnsbl) might be preferable to
pushing around static configs.

-Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Sean Dague
It looks like some great conversation got going on the service catalog
standardization spec / discussion at the last cross project meeting.
Sorry I wasn't there to participate.

A lot of that ended up in here (which was an etherpad stevemar and I
started working on the other day) -
https://etherpad.openstack.org/p/mitaka-service-catalog which is great.

A couple of things that would make this more useful:

1) if you are commenting, please (ircnick) your comments. It's not easy
to always track down folks later if the comment was not understood.

2) please provide a link to code when explaining a point. GitHub supports
the ability to very nicely link to (and highlight) a range of code by a
stable object ref. For instance -
https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132

That will make comments about X does Y, or Z can't do W, more clear
because we'll all be looking at the same chunk of code and start to
build more shared context here. One of the reasons this has been long
and difficult is that we're missing a lot of that shared context between
projects. Reassembling that by reading each other's relevant code will
go a long way to understanding the whole picture.


Lastly, I think it's pretty clear we probably need a dedicated workgroup
meeting to keep this ball rolling, come to a reasonable plan that
doesn't break any existing deployed code, but lets us get to a better
world in a few cycles. annegentle, stevemar, and I have been pushing on
that ball so far, however I'd like to know who else is willing to commit
a chunk of time over this cycle to this. Once we know that we can try to
figure out when a reasonable weekly meeting point would be.

Thanks,

-Sean

-- 
Sean Dague
http://dague.net


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Dean Troyer
On Fri, Oct 9, 2015 at 9:39 AM, Sean Dague  wrote:

> Lastly, I think it's pretty clear we probably need a dedicated workgroup
> meeting to keep this ball rolling, come to a reasonable plan that
> doesn't break any existing deployed code, but lets us get to a better
> world in a few cycles. annegentle, stevemar, and I have been pushing on
> that ball so far, however I'd like to know who else is willing to commit
> a chunk of time over this cycle to this. Once we know that we can try to
> figure out when a reasonable weekly meeting point would be.
>

Count me in...

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread David Lyle
I'm in too.

David

On Fri, Oct 9, 2015 at 8:51 AM, Dean Troyer  wrote:
> On Fri, Oct 9, 2015 at 9:39 AM, Sean Dague  wrote:
>>
>> Lastly, I think it's pretty clear we probably need a dedicated workgroup
>> meeting to keep this ball rolling, come to a reasonable plan that
>> doesn't break any existing deployed code, but lets us get to a better
>> world in a few cycles. annegentle, stevemar, and I have been pushing on
>> that ball so far, however I'd like to know who else is willing to commit
>> a chunk of time over this cycle to this. Once we know that we can try to
>> figure out when a reasonable weekly meeting point would be.
>
>
> Count me in...
>
> dt
>
> --
>
> Dean Troyer
> dtro...@gmail.com



Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Monty Taylor

On 10/09/2015 11:07 AM, David Lyle wrote:
> I'm in too.

Yes please.


> On Fri, Oct 9, 2015 at 8:51 AM, Dean Troyer  wrote:
>> On Fri, Oct 9, 2015 at 9:39 AM, Sean Dague  wrote:
>>> Lastly, I think it's pretty clear we probably need a dedicated workgroup
>>> meeting to keep this ball rolling, come to a reasonable plan that
>>> doesn't break any existing deployed code, but lets us get to a better
>>> world in a few cycles. annegentle, stevemar, and I have been pushing on
>>> that ball so far, however I'd like to know who else is willing to commit
>>> a chunk of time over this cycle to this. Once we know that we can try to
>>> figure out when a reasonable weekly meeting point would be.
>>
>> Count me in...
>>
>> dt
>>
>> --
>> Dean Troyer
>> dtro...@gmail.com



Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Shamail


> On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:
> 
> It looks like some great conversation got going on the service catalog
> standardization spec / discussion at the last cross project meeting.
> Sorry I wasn't there to participate.
> 
Apologies if this is a question that has already been addressed, but why can't
we just leverage something like consul.io?

> A lot of that ended up in here (which was an ether pad stevemar and I
> started working on the other day) -
> https://etherpad.openstack.org/p/mitaka-service-catalog which is great.
I didn't see anything immediately in the etherpad that couldn't be covered with
the tool mentioned above. It is open source, so we could always try to
contribute there if we need something extra (it is written in Go, though).
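
To make that concrete, here is a rough sketch of the consul.io model using
its HTTP API and the requests library. The service name, address, port, and
tags below are invented examples, and a local Consul agent on the default
port is assumed; this illustrates the Consul model, not a proposal for how
Keystone's catalog works.

import requests

CONSUL = 'http://127.0.0.1:8500'

# Register a (made-up) compute endpoint with the local Consul agent.
resp = requests.put(CONSUL + '/v1/agent/service/register', json={
    'Name': 'compute',
    'Address': '10.0.0.5',
    'Port': 8774,
    'Tags': ['public', 'regionOne'],
})
resp.raise_for_status()

# Any consumer can then discover it from Consul's catalog.
for svc in requests.get(CONSUL + '/v1/catalog/service/compute').json():
    print(svc['ServiceName'], svc['ServiceAddress'], svc['ServicePort'])
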
> 
> A couple of things that would make this more useful:
> 
> 1) if you are commenting, please (ircnick) your comments. It's not easy
> to always track down folks later if the comment was not understood.
> 
> 2) please provide link to code when explaining a point. Github supports
> the ability to very nicely link to (and highlight) a range of code by a
> stable object ref. For instance -
> https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132
> 
> That will make comments about X does Y, or Z can't do W, more clear
> because we'll all be looking at the same chunk of code and start to
> build more shared context here. One of the reasons this has been long
> and difficult is that we're missing a lot of that shared context between
> projects. Reassembling that by reading each other's relevant code will
> go a long way to understanding the whole picture.
> 
> 
> Lastly, I think it's pretty clear we probably need a dedicated workgroup
> meeting to keep this ball rolling, come to a reasonable plan that
> doesn't break any existing deployed code, but lets us get to a better
> world in a few cycles. annegentle, stevemar, and I have been pushing on
> that ball so far, however I'd like to know who else is willing to commit
> a chunk of time over this cycle to this. Once we know that we can try to
> figure out when a reasonable weekly meeting point would be.
> 
> Thanks,
> 
> -Sean
> 
> -- 
> Sean Dague
> http://dague.net


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Monty Taylor

On 10/09/2015 10:39 AM, Sean Dague wrote:
> It looks like some great conversation got going on the service catalog
> standardization spec / discussion at the last cross project meeting.
> Sorry I wasn't there to participate.


Just so folks know, the collection of existing service catalogs has been 
updated:


https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Service_Catalog

It now includes a new and correct catalog for Rackspace Private (the 
previous entry was just a copy of Rackspace Public) as well as entries 
for every public cloud I have an account on.


Hopefully that is useful information for folks looking at this.
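
If anyone wants to dump their own cloud's catalog to compare against that
page, here is a minimal sketch against the Keystone v3 API with the
requests library; the URL, user, project, and password are placeholders.

import requests

auth = {
    'auth': {
        'identity': {
            'methods': ['password'],
            'password': {'user': {
                'name': 'demo',
                'domain': {'name': 'Default'},
                'password': 'secret',
            }},
        },
        'scope': {'project': {
            'name': 'demo',
            'domain': {'name': 'Default'},
        }},
    },
}
resp = requests.post('https://keystone.example.com/v3/auth/tokens', json=auth)
resp.raise_for_status()

# The v3 token response carries the service catalog in its body.
for entry in resp.json()['token']['catalog']:
    for ep in entry['endpoints']:
        print(entry['type'], ep['interface'], ep['region'], ep['url'])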


> A lot of that ended up in here (which was an ether pad stevemar and I
> started working on the other day) -
> https://etherpad.openstack.org/p/mitaka-service-catalog which is great.
>
> A couple of things that would make this more useful:
>
> 1) if you are commenting, please (ircnick) your comments. It's not easy
> to always track down folks later if the comment was not understood.
>
> 2) please provide link to code when explaining a point. Github supports
> the ability to very nicely link to (and highlight) a range of code by a
> stable object ref. For instance -
> https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132
>
> That will make comments about X does Y, or Z can't do W, more clear
> because we'll all be looking at the same chunk of code and start to
> build more shared context here. One of the reasons this has been long
> and difficult is that we're missing a lot of that shared context between
> projects. Reassembling that by reading each other's relevant code will
> go a long way to understanding the whole picture.
>
>
> Lastly, I think it's pretty clear we probably need a dedicated workgroup
> meeting to keep this ball rolling, come to a reasonable plan that
> doesn't break any existing deployed code, but lets us get to a better
> world in a few cycles. annegentle, stevemar, and I have been pushing on
> that ball so far, however I'd like to know who else is willing to commit
> a chunk of time over this cycle to this. Once we know that we can try to
> figure out when a reasonable weekly meeting point would be.
>
> Thanks,
>
> -Sean



