Re: [openstack-dev] [api] Request Validation - Stoplight

2014-10-17 Thread Sam Harwell
Hi Amit,

Keeping in mind this viewpoint is nothing but my own personal view, my 
recommendation would be to not mandate the use of a particular validation 
framework, but to instead define what kind of validation clients should expect 
the server to perform in general. For example, I would expect a service to 
return an error code and not perform any action if I called "Create server" but 
did not include a request body, but the actual manner in which that error is 
generated within the service does not matter from the client's perspective.

This is not to say the API Working Group wouldn't help you evaluate the 
potential of Stoplight to meet the needs of a service. To the contrary, by 
clearly defining the expectations of a service's responses to requests, you'll 
have a great idea of exactly what to look for in your evaluation, and your 
final decision would be based on objective results.

Thank you,
Sam Harwell

From: Amit Gandhi [mailto:amit.gan...@rackspace.com]
Sent: Friday, October 17, 2014 12:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: r...@ryanpetrello.com
Subject: [openstack-dev] [api] Request Validation - Stoplight

Hi API Working Group

Last night at the OpenStack Meetup in Atlanta, a group of us discussed how 
request validation is being performed across various projects, and how some 
teams are using pecan/WSME, warlock, jsonschema, etc.

Each of these libraries has its own pros and cons.  My understanding is that 
the API working group is in the early stages of looking into these various 
libraries and will likely provide guidance on this in the near future.

I would like to suggest another library to evaluate when making this decision.  
Some of our teams have started to use a library named "Stoplight" [1][2] in our 
projects.  For example, in the Poppy CDN project, we found it worked around 
some of the issues we had with warlock, such as correctly validating nested 
JSON [3].

Stoplight is an input validation framework for Python.  It can be used to 
decorate any function (including routes in pecan or falcon) to validate its 
parameters.
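For anyone who has not seen it, the general pattern is decorator-based 
parameter validation. Here is a minimal sketch of that pattern in plain Python; 
the names below are illustrative only, not Stoplight's actual API (see [4] for 
real examples):

```python
# Illustrative sketch of decorator-based input validation, in the
# spirit of Stoplight; the names here are hypothetical, not its API.
import functools


class ValidationFailed(Exception):
    pass


def validate(**rules):
    """Check each named argument against its rule before the call."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(**kwargs):
            for name, rule in rules.items():
                if not rule(kwargs.get(name)):
                    raise ValidationFailed("invalid value for %r" % name)
            return func(**kwargs)
        return wrapper
    return decorator


def is_nonempty_str(value):
    return isinstance(value, str) and bool(value)


@validate(name=is_nonempty_str)
def create_service(name):
    return {"name": name}
```

With this, create_service(name="poppy") succeeds, while create_service(name="") 
raises ValidationFailed before the route body ever runs.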

Some good examples of how to use Stoplight can be found here [4].

Let us know your thoughts/interest, and we would be happy to discuss further 
whether and how this would be valuable as a library for API request validation 
in OpenStack.


Thanks


Amit Gandhi
Senior Manager - Rackspace



[1] https://pypi.python.org/pypi/stoplight
[2] https://github.com/painterjd/stoplight
[3] 
https://github.com/stackforge/poppy/blob/master/poppy/transport/pecan/controllers/v1/services.py#L108
[4] 
https://github.com/painterjd/stoplight/blob/master/stoplight/tests/test_validation.py#L138

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API recommendation

2014-10-15 Thread Sam Harwell
Hi Kevin,

In an asynchronous environment that may have multiple clients sending commands 
to the same resource in a service, an "operation"-type resource is a 
fundamental prerequisite to creating client applications which report the 
status of ongoing operations. Without this resource, there is no way to tell a 
user whether an operation they attempted succeeded or failed.

Due to the importance of allowing users to see the results of individual 
operations on resources, I would treat the other features potentially provided 
by this type of resource, such as queuing or canceling operations, separately 
from the fundamental status reporting behavior.

Thank you,
Sam Harwell

-Original Message-
From: Kevin L. Mitchell [mailto:kevin.mitch...@rackspace.com] 
Sent: Wednesday, October 15, 2014 10:49 AM
To: openstack-dev
Subject: [openstack-dev] [api] API recommendation

Now that we have an API working group forming, I'd like to kick off some 
discussion over one point I'd really like to see our APIs using (and I'll 
probably drop it in to the repo once that gets fully set up): the difference 
between synchronous and asynchronous operations.  Using nova as an 
example—right now, if you kick off a long-running operation, such as a server 
create or a reboot, you watch the resource itself to determine the status of 
the operation.  What I'd like to propose is that future APIs use a separate 
"operation" resource to track status information on the particular operation.  
For instance, if we were to rebuild the nova API with this idea in mind, 
booting a new server would give you a server handle and an operation handle; 
querying the server resource would give you summary information about the state 
of the server (running, not running) and pending operations, while querying the 
operation would give you detailed information about the status of the 
operation.  As another example, issuing a reboot would give you the operation 
handle; you'd see the operation in a queue on the server resource, but the 
actual state of the operation itself would be listed on that operation.  As a 
side effect, this would allow us (not require,
though) to queue up operations on a resource, and allow us to cancel an 
operation that has not yet been started.
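A toy in-memory sketch of what I have in mind (illustrative only, not actual
nova code):

```python
# Toy sketch of the proposal: an async request returns both a resource
# handle and an operation handle; summary state lives on the resource,
# detailed status on the operation. Names are illustrative.
import itertools
import uuid

_op_ids = itertools.count(1)
operations = {}   # op_id -> detailed operation state
servers = {}      # server_id -> summary state + pending operations

def boot_server(name):
    server_id = str(uuid.uuid4())
    op_id = next(_op_ids)
    operations[op_id] = {"action": "create", "target": server_id,
                         "status": "pending", "detail": None}
    servers[server_id] = {"name": name, "state": "building",
                          "pending_operations": [op_id]}
    return server_id, op_id      # both handles go back to the caller

def complete(op_id, status, detail=None):
    op = operations[op_id]
    op.update(status=status, detail=detail)
    server = servers[op["target"]]
    server["pending_operations"].remove(op_id)
    server["state"] = "running" if status == "succeeded" else "error"

server_id, op_id = boot_server("web-1")
complete(op_id, "succeeded")
```

Querying servers[server_id] gives the summary (running, nothing pending),
while operations[op_id] retains the full record of what happened.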

Thoughts?
--
Kevin L. Mitchell  Rackspace




Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-09 Thread Sam Harwell
Option A can be made usable provided you do the following:


1.   Add an endpoint for determining whether or not the current service 
supports optional feature X.

2.   For each optional feature of the API, clearly document that the 
feature is optional, and name the feature it is part of.

3.   If the optional feature is defined within the core Marconi 
specification, require implementations to return a 501 for affected URIs if the 
feature is not supported (this is in addition to, not in place of, item #1 
above).
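To illustrate item 3, here is a minimal sketch of such a guard in the request 
path (hypothetical names and paths, not Marconi code):

```python
# Hypothetical sketch of item 3: requests that touch an unsupported
# optional feature get HTTP 501 from the dispatch layer; everything
# else proceeds normally. Paths and feature names are made up.
SUPPORTED_FEATURES = {"claims"}   # what this deployment's driver offers

FEATURE_BY_PATH = {
    "/v1/queues/demo/claims": "claims",
    "/v1/queues/demo/subscriptions": "subscriptions",
}

def dispatch(path):
    feature = FEATURE_BY_PATH.get(path)
    if feature is not None and feature not in SUPPORTED_FEATURES:
        return 501, {"error": "feature %r not implemented" % feature}
    return 200, {"ok": True}
```

This complements item 1: the discovery endpoint tells clients up front, and 
the 501 protects the clients that did not ask.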

A description of some key documentation elements I am looking for when a 
service includes optional functionality is listed under the heading “Conceptual 
Grouping” in the following document:
https://github.com/sharwell/openstack.net/wiki/The-JSON-Checklist

Thank you,
Sam Harwell

From: Kurt Griffiths [mailto:kurt.griffi...@rackspace.com]
Sent: Monday, June 09, 2014 2:31 PM
To: OpenStack Dev
Subject: [openstack-dev] [marconi] Reconsidering the unified API model

Folks, this may be a bit of a bombshell, but I think we have been dancing 
around the issue for a while now and we need to address it head on. Let me 
start with some background.

Back when we started designing the Marconi API, we knew that we wanted to 
support several messaging patterns. We could do that using a unified queue 
resource, combining both task distribution and feed semantics. Or we could 
create disjoint resources in the API, or even create two separate services 
altogether, one each for the two semantic groups.

The decision was made to go with a unified API for these reasons:

  *   It would afford hybrid patterns, such as auditing or diagnosing a task 
distribution queue
  *   Once you implement guaranteed delivery for a message feed over HTTP, 
implementing task distribution is a relatively straightforward addition. If you 
want both types of semantics, you don’t necessarily gain anything by 
implementing them separately.
Lately we have been talking about writing drivers for traditional message 
brokers that will not be able to support the message feeds part of the API. 
I’ve started to think that having a huge part of the API that may or may not 
“work”, depending on how Marconi is deployed, is not a good story for users, 
especially in light of the push to make different clouds more interoperable.

Therefore, I think we have a very big decision to make here as a team and a 
community. I see three options right now. I've listed several (by no means 
exhaustive) pros and cons for each, as well as some counterpoints, based on past 
discussions.

Option A. Allow drivers to only implement part of the API

For:

  *   Allows for a wider variety of backends. (counter: may create subtle 
differences in behavior between deployments)
  *   May provide opportunities for tuning deployments for specific workloads
Against:

  *   Makes it hard for users to create applications that work across multiple 
clouds, since critical functionality may or may not be available in a given 
deployment. (counter: how many users need cross-cloud compatibility? Can they 
degrade gracefully?)

Option B. Split the service in two. Different APIs, different services. One 
would be message feeds, while the other would be something akin to Amazon’s SQS.

For:

  *   Same as Option A, plus creates a clean line of functionality for 
deployment (deploy one service or the other, or both, with clear expectations 
of what messaging patterns are supported in any case).
Against:

  *   Removes support for hybrid messaging patterns (counter: how useful are 
such patterns in the first place?)
  *   Operators now have two services to deploy and support, rather than just 
one (counter: can scale them independently, perhaps leading to gains in 
efficiency)

Option C. Require every backend to support the entirety of the API as it now 
stands.

For:

  *   Least disruptive in terms of the current API design and implementation
  *   Affords a wider variety of messaging patterns (counter: YAGNI?)
  *   Reuses code in drivers and API between feed and task distribution 
operations (counter: there may be ways to continue sharing some code if the API 
is split)
Against:

  *   Requires operators to deploy a NoSQL cluster (counter: many operators are 
comfortable with NoSQL today)
  *   Currently requires MongoDB, which is AGPL (counter: a Redis driver is 
under development)
  *   A unified API is hard to tune for performance (counter: Redis driver 
should be able to handle high-throughput use cases, TBD)
I’d love to get everyone’s thoughts on these options; let's brainstorm for a 
bit, then we can home in on the option that makes the most sense. We may need 
to do some POCs or experiments to get enough information to make a good 
decision.

@kgriffs


Re: [openstack-dev] [Neutron] Introducing task oriented workflows

2014-06-03 Thread Sam Harwell
When considering user interfaces, the choice of task and/or status reporting 
methods can have a big impact on the ability to communicate with the user. In 
particular, status properties (e.g. in the manner of compute V2 servers) 
prevent user interfaces from associating the result of an operation with a 
description of an executed action. Even though a REST API is theoretically a 
set of synchronous operations that transform basic data resources, in reality 
users are initiating actions against their account that do not reach their 
final state immediately.

In designing an API, the ability to provide users with relevant information 
about decisions and actions they take is of utmost importance. Since a separate 
task representation (e.g. glance) does support providing users with information 
about the ongoing and final result of specific actions they perform, where a 
status field does not, we will eventually need to use a task representation in 
order to properly support our users.

Also, the specific detail of whether a resource supports more than one 
asynchronous operation concurrently (or supports queueing of task operations) 
is not applicable to this decision. Cloud resources inherently form a 
distributed system, and individual clients are not able to determine which 
status is associated with particular actions. For example, consider the 
following:

1. Client A requests operation X be performed
2. Operation X completes successfully
3. Client B requests operation Y be performed
4. Operation Y results in the resource entering an error state
5. Client A checks the status of the resource

In this scenario, Client A is unable to report to the user which operation 
caused the resource to enter its current error state. If it reports according 
to the information available to it, the user would be under the impression that 
operation X caused the error state, which clearly harms their ability to 
understand the problem(s) encountered and the steps they should take to 
resolve them.
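To make the scenario concrete, here is a toy model of the two reporting styles 
(purely illustrative):

```python
# Toy model of the five-step scenario above. With only a shared status
# field, Client A misattributes the failure; per-operation records
# remove the ambiguity. Purely illustrative.
resource = {"status": "active", "operations": {}}

def perform(client, op_name, fails=False):
    resource["status"] = "error" if fails else "active"
    resource["operations"][op_name] = {
        "client": client,
        "result": "failed" if fails else "succeeded",
    }

perform("A", "X")               # steps 1-2: Client A's operation X succeeds
perform("B", "Y", fails=True)   # steps 3-4: Client B's operation Y fails

# Step 5: Client A checks back. The bare status field implicates no
# particular action, so A would wrongly blame its own operation X...
assert resource["status"] == "error"
# ...while the operation records disambiguate: X succeeded, Y failed.
assert resource["operations"]["X"]["result"] == "succeeded"
assert resource["operations"]["Y"]["result"] == "failed"
```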

Please keep in mind that this message is not about a particular 
implementation, storage mechanism, or the manner in which clients communicate 
with the server. I am simply pointing out that the needs of end users can only 
be properly met by ensuring that particular information is available through 
the API their applications are using. This is (or should be) the primary driver 
for design decisions made during the creation of each API.

Thank you,
Sam Harwell

-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net] 
Sent: Tuesday, June 03, 2014 4:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Introducing task oriented workflows

On 23 May 2014 10:34, Salvatore Orlando  wrote:
> As most of you probably know already, this is one of the topics discussed
> during the Juno summit [1].
> I would like to kick off the discussion in order to move towards a concrete
> design.
>
> Preamble: Considering the meat that's already on the plate for Juno, I'm not
> advocating that whatever comes out of this discussion should be put on the
> Juno roadmap. However, preparation (or yak shaving) activities that should
> be identified as pre-requisite might happen during the Juno time frame
> assuming that they won't interfere with other critical or high priority
> activities.
> This is also a very long post; the TL;DR summary is that I would like to
> explore task-oriented communication with the backend and how it should be
> reflected in the API - gauging how the community feels about this, and
> collecting feedback regarding design, constructs, and related
> tools/techniques/technologies.

Hi, thanks for writing this up.

A few thoughts:

 - if there can be only one task on a resource at a time, you're
essentially forcing all other clients to poll for task completion
before coming back to do *their* change. It's kind of a pathological
edge case of no in-flight-conflicts :).
 - Please please please don't embed polling into the design - use
webhooks or something similar so that each client (be that Nova,
Ironic, Horizon or what-have-you - can get a push response when the
thing they want to happen has happened).
 - I'd think very very carefully about whether you're actually
modelling /tasks/ or whether tasks are the implementation and really
the core issue is modelling the desired vs obtained resource state
 - Ironic has a debate going on right now about very much the same
problem - the latency involved in some API tasks, and whether the API
should return when complete, or when the work is guaranteed to start,
or even immediately and maybe the work isn't guaranteed to start.
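A minimal sketch of what I mean by push rather than poll (illustrative only):

```python
# Illustrative push-style completion: the caller registers a callback
# when creating the task, and the service invokes it on completion,
# instead of the caller polling for state. In a real deployment the
# callback would be an HTTP POST to a registered webhook URL.
notifications = []

class Task:
    def __init__(self, callback):
        self.callback = callback
        self.state = "pending"

    def finish(self, result):
        self.state = "done"
        self.callback(result)   # push the result to the interested party

task = Task(callback=lambda result: notifications.append(result))
task.finish({"port": "created"})
```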

My feeling is that we need to balance ease and correctness of
implementation, ease (and ef

[openstack-dev] [compute] Server parameters for Create Server

2014-06-03 Thread Sam Harwell
I'm having trouble determining which parameters may be included in the Create 
Server request. In particular, I'm interested in the JSON properties which are 
supported by a base installation of Compute V2.

The documentation on the following page is not clear:
http://docs.openstack.org/api/openstack-compute/2/content/POST_createServer__v2__tenant_id__servers_CreateServers.html

The examples in that documentation include properties like max_count and 
min_count that I could not find a description for, and security_groups which 
appears to be a property added by an extension as opposed to being part of the 
base implementation of Create Server.
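For reference, a request body of the sort I am describing looks roughly like 
this; which of these properties belong to the base API and which are 
extension-added is exactly what I cannot determine (values here are made up):

```python
# Illustrative "create server" request body of the kind discussed above.
# The comments mark the properties whose origin (core vs. extension) is
# unclear from the documentation; the values are made up.
create_server_request = {
    "server": {
        "name": "new-server-test",
        "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b",
        "flavorRef": "1",
        "min_count": 1,     # shown in examples, but not described
        "max_count": 1,     # shown in examples, but not described
        "security_groups": [{"name": "default"}],   # extension-added?
    }
}
```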

The separation of every JSON property according to the location where it is 
defined (base OpenStack installation, OpenStack-defined extension, or 
vendor-specific extension) is a key design aspect of the SDK I am working on. 
How can I determine this information from the documentation?

Thank you,
Sam Harwell


Re: [openstack-dev] [OpenStack-Dev] IDE extensions in .gitignore

2014-01-10 Thread Sam Harwell
I believe my comment may have been [slightly] misinterpreted. I was simply 
saying that we shouldn't assume that contributors are allowed to alter their 
global configuration. When deciding on a policy for ignoring files, we should 
be careful to choose a policy that does not prevent those users from 
participating just as easily as users who are able to alter their global 
configuration.

The implication of this is that users who sit down to read a guide about 
getting started with making contributions to OpenStack shouldn't find 
instructions in it like "Add the following lines to ~/.gitignore...".

Sam

-Original Message-
From: Jeremy Stanley [mailto:fu...@yuggoth.org] 
Sent: Friday, January 10, 2014 9:10 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-Dev] IDE extensions in .gitignore

On 2014-01-10 21:57:33 +1300 (+1300), Robert Collins wrote:
> I have *no* aversion to allowing contributors to police things on 
> their own.
[...]

I know you don't. It was stated in the message I was replying to (in context 
you trimmed) that "...the community should not accept or promote any policy 
which suggests a configuration that alters the behavior of systems beyond the 
scope of a local workspace used while working with OpenStack..." I disagree, 
and think we as a collective of individuals should feel free to exchange tips 
and suggestions on configuring our development environments even if they may 
have (potentially positive) implications outside of just work on OpenStack code.

> If we have to review for a trashfile pattern then we have contributors 
> using that. There are more editors than contributors :).
[...]
> I don't understand why you call it polluting. Pollution is toxic.
> What is toxic about the few rules needed to handle common editors?

For me, the ignore list is there so that someone doesn't have to worry about 
accidentally committing *.o files because they ran make and forgot to make 
clean when they were done. I'm less keen on it being used so that developers 
don't need to know that visual studio is leaving project directories all over 
the place.

Anyway I was using the term "polluting" more in reference to accidentally 
committing unwanted files to the repository, and only to a lesser extent 
inserting implementation details of this week's most popular code flosser. How 
do you determine when it's okay to clean up entries in the ever-growing 
.gitignore file (that one person who ran a tool once and added a pattern for it 
has moved on to less messy choices)? A file with operational implications which 
grows in complexity without bounds worries me, even if only in principle.

Anyway, it's not a huge deal. I'm just unlikely to review these sorts of 
additions unless I've really run out of actual improvements to review or bugs 
to fix. (And I already feel bad for wasting time replying to several messages 
on the topic, but I couldn't let the "should not...promote any policy which 
suggests a configuration that alters the behavior of systems" comment go 
unanswered.)
--
Jeremy Stanley



Re: [openstack-dev] [OpenStack-Dev] IDE extensions in .gitignore

2014-01-03 Thread Sam Harwell
OpenStack does not have operational or administrative ownership over the 
computers used by contributors. As such, the community should not accept or 
promote any policy which suggests a configuration that alters the behavior of 
systems beyond the scope of a local workspace used while working with OpenStack 
project(s). Official alterations of a *global* .gitignore are completely 
unacceptable, but if certain files are not to be specified in the .gitignore 
committed with the project then a policy related to modifying the 
$GIT_DIR/info/exclude would be an acceptable alternative.

Thanks,
Sam

-Original Message-
From: John Griffith [mailto:john.griff...@solidfire.com] 
Sent: Tuesday, December 31, 2013 10:46 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [OpenStack-Dev] IDE extensions in .gitignore

Hey Everyone,

I wanted to see where we stand on IDE extensions in .gitignore files.
We seem to have some back and forth: one cycle there's a bug and a patch to add 
things like eclipse, idea, etc., and the next there's a bug and a patch to 
remove them.  I'd like to have some sort of consensus on what we want here.  I 
personally don't have a preference; I would just like to have consistency and 
quit thrashing back and forth.

Anyway, I'd like to see all of the projects agree on this... or even consider 
moving to a global .gitignore.  Thoughts??

John



Re: [openstack-dev] [Keystone][Marconi][Oslo] Discoverable home document for APIs (Was: Re: [Nova][Glance] Support of v1 and v2 glance APIs in Nova)

2013-11-26 Thread Sam Harwell
Nottingham's RFC is exceptionally well documented. I would be strongly against 
Marconi moving from that RFC to any other format unless the alternative was 
equally well documented. If they were equally well documented, then I would be 
neutral on changing it.

More importantly, if a project is providing discoverability for their API and 
recommending the use of that to clients, the primary unit and integration tests 
for the service need to use the discovery mechanism. It doesn't help to provide 
"discoverability" if the implementation doesn't actually provide working links 
in the service descriptor (regardless of the specific format used).
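For those unfamiliar with the format, a home document in Nottingham's draft 
looks roughly like the sketch below, and a "follow your nose" client resolves 
link relations rather than hard-coding paths (the relation names and hrefs 
here are made up):

```python
# Illustrative home document in the shape of draft-nottingham-json-home,
# plus a "follow your nose" lookup. Relation names and paths are made up.
home = {
    "resources": {
        "rel/queues": {"href": "/v1/queues"},
        "rel/message": {
            "href-template": "/v1/queues/{queue_name}/messages/{message_id}",
            "href-vars": {"queue_name": "param/queue_name",
                          "message_id": "param/message_id"},
        },
    }
}

def resolve(home_doc, rel, **variables):
    """Find a URI by link relation instead of a hard-coded path."""
    resource = home_doc["resources"][rel]
    if "href" in resource:
        return resource["href"]
    uri = resource["href-template"]
    for name, value in variables.items():
        uri = uri.replace("{%s}" % name, value)
    return uri
```

A test suite that drives the service exclusively through resolve()-style 
lookups is what verifies the descriptor's links actually work.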

Sam

-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com] 
Sent: Tuesday, November 26, 2013 2:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Jorge Williams
Subject: Re: [openstack-dev] [Keystone][Marconi][Oslo] Discoverable home 
document for APIs (Was: Re: [Nova][Glance] Support of v1 and v2 glance APIs in 
Nova)

On 25/11/13 16:50 -0600, Dolph Mathews wrote:
>
>On Mon, Nov 25, 2013 at 2:41 AM, Flavio Percoco  wrote:
>
>On 25/11/13 09:28 +1000, Jamie Lennox wrote:
>
>So the way we have this in keystone at least is that querying GET /
>will
>return all available API versions and querying /v2.0 for example is a
>similar result with just the v2 endpoint. So you can hard pin a version
>by using the versioned URL.
>
>I spoke to somebody the other day about the discovery process in
>services. The long term goal should be that the service catalog
>contains
>unversioned endpoints and that all clients should do discovery. For
>keystone the review has been underway for a while now:
>https://review.openstack.org/#/c/38414/ the basics of this should be
>able to be moved into OSLO for other projects if required.
>
>
>Did you guys create your own 'home document' language? or did you base
>it on some existing format? Is it documented somewhere? IIRC, there's
>a thread where part of this was discussed, it was related to horizon.
>
>I'm curious to know what you guys did and if you knew about
>JSON-Home[0] when you started working on this.
>
>
>It looks like our multiple choice response might predate Nottingham's 
>proposal, but not by much. In keystone, it's been stable since I joined 
>the project, midway through the diablo cycle (summer 2011). I don't 
>know any more history than that, but I've CC'd Jorge Williams, who probably 
>knows.
>
>I really like Nottingham's approach of adding relational links from the 
>base endpoint, I've been thinking about doing the same for keystone for 
>quite a while.

As crazy as it sounds, have you guys considered migrating to Nottingham's 
approach?

We picked this approach because we didn't want to invent it ourselves and this 
happens to have a well defined RFC as well.

If there's something Nottingham's proposal lacks, I think we could provide 
some feedback and help make it better.

>
>We used json-home for Marconi v1 and we'd want the client to work in a
>'follow your nose' way. Since, I'd prefer OpenStack modules to use the
>same language for this, I'm curious to know why - if so - you
>created your own spec, what are the benefits and if it's documented
>somewhere.
>
>
>Then why didn't Marconi follow the lead of one of the other projects? 
>;)

LOOOL, I knew you were going to say that. I think I knew about you guys having 
something similar, but at some point I must have forgotten about it. That being 
said, the main rationales were:

1) Using something documented and known upstream made more sense
and it also helps get more contributions from the community.
2) We already knew it, which falls back to point 1.


>I completely agree though - standardized version discovery across the 
>ecosystem would be fantastic.

All that being said, I don't think it would be very hard to migrate Marconi to 
something common if we agree that json-home is not good enough for OpenStack. 
Nonetheless, it'd be a shame not to provide feedback to Mark Nottingham about 
it. So far, his approach has been good enough for us - but, you know, Marconi 
is still way too small.

Is keystone's home schema spec documented somewhere?

Cheers,
FF

--
@flaper87
Flavio Percoco


Re: [openstack-dev] Code review study

2013-08-15 Thread Sam Harwell
I like to take a different approach. If my commit message is going to take more 
than a couple lines for people to understand the decisions I made, I go and 
make an issue in the issue tracker before committing locally and then reference 
that issue in the commit message. This helps in a few ways:


1.   If I find a technical or grammatical error in the commit message, it 
can be corrected.

2.   Developers can provide feedback on the subject matter independently of 
the implementation, as well as feedback on the implementation itself.

3.   I like the ability to include formatting and hyperlinks in my 
documentation of the commit.

Sam

From: Christopher Yeoh [mailto:cbky...@gmail.com]
Sent: Thursday, August 15, 2013 7:12 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Code review study


On Thu, Aug 15, 2013 at 11:42 AM, Robert Collins
<robe...@robertcollins.net> wrote:
This may interest data-driven types here.

https://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/
Note specifically the citation of 200-400 lines as the knee of the review 
effectiveness curve: that's lower than I thought - I thought 200 was clearly 
fine - but no.

Very interesting article. One other point which I think is pretty relevant is 
point 4 about getting authors to annotate the code better (and for those who 
haven't read it, they don't mean comments in the code but separately) because 
it results in the authors picking up more bugs before they even submit the code.

So I wonder if it's worth asking people to write more detailed commit logs 
which include some reasoning about why some of the more complex changes were 
done in a certain way, and not just what is implemented or fixed. As it is, 
many of the commit messages are very succinct, so I think it would help on the 
review efficiency side too.

Chris


Re: [openstack-dev] [Keystone] V3 Extensions Discoverability

2013-08-07 Thread Sam Harwell
Please excuse me for being vague with many parts of this reply. Since I'm still 
learning the terminology used throughout this project, I chose to be 
non-specific rather than risk using the wrong name and distract from the points 
I'm trying to make.

From a client perspective, the most important issue in writing a reliable 
application that is truly portable across implementations is ensuring that the 
API defines a way to determine whether or not a provider supports a particular 
optional feature. The precise manner in which that functionality is exposed 
does not matter so much. My only concern with consolidating feature 
discoverability into a single "endpoints" function, where users are expected 
to include standardized endpoints as well as non-standard endpoints 
("extensions"), is the possibility of name collisions. In this case, it helps 
to reserve certain names for use with standardized features (e.g. names 
starting with OS- could be reserved for optional behavior defined in the 
OpenStack specifications, and names starting with {Vendor}- could be reserved 
for optional behavior defined elsewhere).
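The prefix reservation above can be sketched as a simple namespace check on 
advertised feature names (the helpers and the sample names are hypothetical):

```python
# Hypothetical sketch of prefix-reserved feature names: "OS-" reserved
# for optional behavior in the OpenStack specifications, a vendor
# prefix for behavior defined elsewhere. Sample names are made up.
advertised = {"OS-TRUST", "RAX-KSKEY"}

def feature_origin(name):
    """Classify a feature name by its reserved prefix."""
    if name.startswith("OS-"):
        return "openstack"
    vendor, sep, _ = name.partition("-")
    return vendor.lower() if sep else "unknown"

def supports(name):
    """Does this provider advertise the named optional feature?"""
    return name in advertised
```

Because each vendor writes only in its own namespace, a vendor extension can 
never collide with a standardized OS- name.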

On the subject of "incrementing" an API version - this certainly makes sense 
for APIs that are linear. In practice, however, multiple implementations of 
similar features often produce aliased version numbers and/or overlapping 
version ranges, which makes incrementing the version number useless. This can 
be resolved by only using (and incrementing) API version numbers for the 
official, root-level specification. For a named extension, the "owner" of the 
extension acts as the root-level specification for the extension and should be 
the only one incrementing the version number. In cases where an API or 
extension has been altered from its original form, the alteration can be 
presented in a modular form, where the implementation supports the original 
versioned API under its originally published name and version, and offers the 
altered features as an extension with a new name. This allows the alterations 
to the core functionality to be linearly versioned independently from the core 
functionality itself.

Thank you,
Sam Harwell

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Tuesday, August 06, 2013 8:46 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Keystone] V3 Extensions Discoverability

On 08/06/2013 01:19 AM, Jamie Lennox wrote:
> Hi all,
>
> Partially in response to the trusts API review in keystoneclient 
> (https://review.openstack.org/#/c/39899/ ) and my work on keystone API 
> version discoverability (spell-check disagrees but I'm going to assume 
> that's a word - https://review.openstack.org/#/c/38414/ ) I was 
> thinking about how we should be able to know what/if an extension is 
> available. I even made a basic blueprint for how i think it should work:
> https://blueprints.launchpad.net/python-keystoneclient/+spec/keystoneclient-extensions
>  and then realized that GET /extensions is only a V2 API.
>
> Is this intentional? I was starting to make a review to add it to 
> identity-api but is there the intention that extensions should show up 
> within the endpoint APIs? There is no reason it couldn't work that way 
> and DELETE would just fail.

I would hope that extensions would *not* show up in the endpoints API.

Frankly, I'm not a fan of API extensions at all. I think they are silly and 
just promote an inconsistent and fractured user experience. I would highly 
prefer to just have a single API, versioned, with documentation online and in a 
versions/ resource that indicates what was changed, added, and deleted in each 
version.

If some vendor wants to provide some special API resource that naturally 
belongs in a related API -- for instance, trusts in the OpenStack Identity API 
-- then the new resource should simply be added to the one and only Identity 
API, the version of the API incremented, and on we go.

API extensions are more hassle than anything else. Let us promote standards, 
not endless extensibility at the expense of usability.

Best,
-jay
