The way I see it, there isn't a singular OpenStack API (even today there are 
swift, nova, and glance).  OpenStack is a suite of IaaS services, each with its 
own API – so there is a SUITE of standard OS APIs.  And each OS service should strive 
to define the canonical API for automating that particular service.  If I just 
want to run an image repo, I deploy glance.  If my SAN guy can't get storage 
provisioned fast enough, I deploy the OS block storage service (once we have 
it).  And if I want a full cloud suite, I deploy all the services.  They are 
loosely coupled and (ideally) independent building blocks.  Whether one chooses 
to front the different service endpoints with a proxy to unify them or have 
separate service endpoints is purely a deployment decision.  Either way, there 
are no competing OS APIs.  Support for 3rd party APIs (e.g. EC2) is secondary 
IMO, and to some degree, detrimental.  Standards are defined in large part by 
ubiquity.  We want OS to become ubiquitous and we want the OS APIs to become 
the de facto standard.  Supporting additional APIs (or even variations of the same API like 
AMQP per the other thread) doesn't help us here.  I would love to see the 
community rally behind a per-service standard OS REST API that we can own and 
drive.

To that end, the goal as I see it is to launch canonical OpenStack Compute 
(nova) and Image (glance) APIs with Cactus.  In Diablo, we would then work to 
introduce separate network and block storage services with REST APIs as well.  
All APIs would be independently versioned and stable.  I'm ALL for per-language 
OpenStack bindings that implement support for the entire suite of services.
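
As a rough sketch of what such a suite-wide, per-language binding could look 
like in Python (the class names, endpoints, and ports are invented for 
illustration, not an existing library):

    # Hypothetical sketch only: one credential, one suite object, and a separate
    # client per independently versioned service API.
    import json
    import urllib2

    class ServiceClient(object):
        """Minimal REST helper for a single OpenStack service endpoint."""
        def __init__(self, endpoint, token):
            self.endpoint = endpoint.rstrip('/')
            self.token = token

        def get(self, path):
            req = urllib2.Request(self.endpoint + path,
                                  headers={'X-Auth-Token': self.token,
                                           'Accept': 'application/json'})
            return json.load(urllib2.urlopen(req))

    class OpenStackSuite(object):
        """One binding that covers the whole suite of loosely coupled services."""
        def __init__(self, token, compute_url, image_url):
            self.compute = ServiceClient(compute_url, token)  # nova API
            self.images = ServiceClient(image_url, token)     # glance API

    # e.g. suite = OpenStackSuite(token, 'http://nova:8774/v1.1', 'http://glance:9292/v1')
    #      servers = suite.compute.get('/servers')

The point of a structure like that is that the suite is one binding, but each 
service client tracks its own endpoint and version independently.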

Re: extensions, it's actually the technical aspects that are driving it.  There 
is a tension between standards and innovation that needs to be resolved.  In 
addition, we need to be able to support niche functionality (e.g. Rackspace may 
want to support API operations related to managed services) without imposing it 
on everyone.  These problems are not new.  We've seen the same exact thing with 
OpenGL and they have a very successful extension model that has solved this.  
Jorge studied this when he did his PhD and has designed extensions with that in 
mind.  He has a presentation on extensions here, if you haven't seen it: 
http://wiki.openstack.org/JorgeWilliams?action=AttachFile&do=view&target=Extensions.pdf 
I think extensions are critically important and would 
encourage dialog amongst the community to come to a consensus on this.  Per my 
points above, I would prefer to avoid separate APIs for the same service.  
Let's see if we can get behind a per-service API that becomes THE de facto 
standard way for automating that service.
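
To make the extension idea concrete, here is a rough sketch of the kind of 
discovery mechanism that model implies (the alias, namespace, and payload below 
are invented for this example, not a real extension):

    # Sketch: an endpoint advertises its extensions, and a client feature-tests
    # before using any vendor-specific operation.  All values here are made up.
    sample_extensions = {
        "extensions": [
            {"alias": "RAX-MGD",  # invented alias for a managed-services extension
             "name": "Managed Services",
             "namespace": "http://docs.example.com/ext/managed/v1.0",
             "description": "Operations that only apply to managed accounts."}
        ]
    }

    def supports(extensions_doc, alias):
        """True if the endpoint advertises the given extension alias."""
        return any(ext["alias"] == alias for ext in extensions_doc["extensions"])

    if supports(sample_extensions, "RAX-MGD"):
        pass  # safe to call the managed-services operations; otherwise skip them

Clients that don't recognize an alias simply ignore it, which is how the core 
API can stay standard while providers still ship niche functionality.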

Erik

From: Justin Santa Barbara <jus...@fathomdb.com>
Date: Fri, 18 Feb 2011 09:57:12 -0800
To: Paul Voccio <paul.voc...@rackspace.com>
Cc: "openstack@lists.launchpad.net" <openstack@lists.launchpad.net>
Subject: Re: [Openstack] OpenStack Compute API 1.1

> How is the 1.1 api proposal breaking this?

Because if we launch an OpenStack API, the expectation is that this will be the 
OpenStack API :-)

If we support a third-party API (CloudServers or EC2), then people will 
continue to use their existing wrappers (e.g. jclouds).  Once there's an 
OpenStack API, end-users will want to find a library for that, and we 
don't want that to be a poor experience.  To maintain a good experience, we 
either can't break the API, or we need to write and maintain a lot of proxying 
code to maintain compatibility.  We know we're not ready for the first 
commitment, and I don't think we get enough to justify the second.

> I think the proxy would make sense if you wanted to have a single api. Not 
> all service providers will, but I see this as entirely optional, not required 
> to use the services.

But then we have two OpenStack APIs?  Our ultimate end users don't use the API, 
they use a wrapper library.  They want a stable library that works and is kept 
up to date with recent changes, and they don't care what's going on under the 
covers.  Wrapper library authors want (1) a single API and (2) an API that is 
stable and evolves at a reasonable pace; otherwise they'll abandon their wrapper 
or stop updating it.

> The extensions mechanism is the biggest change, iirc.

I'm not a big fan of the extensions idea, because it feels more like a 
reflection of a management goal than a technical decision ("OpenStack is open to 
extensions").  Supporting separate APIs feels like a better way to do that.  I'm 
very open to being corrected here, but I think we need to see code that wants to 
use the extension API and isn't better done as a separate API.  Right now I 
haven't seen any patches, and that makes me uneasy.





On Fri, Feb 18, 2011 at 9:29 AM, Paul Voccio 
<paul.voc...@rackspace.com> wrote:
The specs for 1.0 and 1.1 are pretty close. The extensions mechanism is the 
biggest change, iirc.

I think the proxy would make sense if you wanted to have a single api. Not all 
service providers will, but I see this as entirely optional, not required to use 
the services.

The push to get a completed compute api comes from the desire to move away from 
the ec2 api to something that we can guide, extend and vote on as a community. 
The sooner we do this, the better.

How is the 1.1 api proposal breaking this?

From: Justin Santa Barbara <jus...@fathomdb.com>
Date: Fri, 18 Feb 2011 09:10:19 -0800
To: Paul Voccio <paul.voc...@rackspace.com>
Cc: Jay Pipes <jaypi...@gmail.com>, 
"openstack@lists.launchpad.net" <openstack@lists.launchpad.net>

Subject: Re: [Openstack] OpenStack Compute API 1.1

Jay: The AMQP->REST change was the re-architecting I was referring to, which would 
not be customer-facing (other than likely introducing new bugs).  Spinning off the 
services, if this is visible at the API level, is much more concerning to me.

So Paul, I think the proxy is good because it acknowledges the importance of 
keeping a consistent API.  But - if our API isn't finalized - why push it out 
at all, particularly if we're then going to have the overhead of maintaining 
another translation layer?  For Cactus, let's just support EC2 and/or 
CloudServers 1.0 API compatibility (again a translation layer, but one we 
probably have to support anyway).  Then we can design the right OpenStack API 
at our leisure and meet all of our goals: a stable Cactus and stable APIs.  If 
anyone ends up coding to a Cactus OpenStack API, we shouldn't have them become 
second-class citizens 3 months later.
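
For what it's worth, the kind of translation layer I mean is small; a rough 
sketch (the internal create() call is a placeholder, not nova's actual 
interface):

    # Hypothetical shim: accept a CloudServers-1.0-style create request and map
    # it onto an internal call.  compute_api.create() is a stand-in, not real code.
    def create_server_v10(compute_api, request_body):
        server = request_body['server']
        return compute_api.create(
            name=server['name'],
            image_id=server['imageId'],    # 1.0-style integer image id
            flavor_id=server['flavorId'],
            metadata=server.get('metadata', {}))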

Justin





On Fri, Feb 18, 2011 at 6:31 AM, Paul Voccio 
<paul.voc...@rackspace.com> wrote:
Jay,

I understand Justin's concern that if we move /network and /images and /volume
to their own endpoints, it would be a change for the customer. I think
this could be solved by putting a proxy in front of each endpoint and
routing back to the appropriate service endpoint.

I added another image on the wiki page to describe what I'm trying to say.
http://wiki.openstack.org/api_transition

I think this might not be as bad a transition, since the compute worker would
receive a request for a new compute node and then proxy over to the
admin or public api of the network or volume node to request information.
It would work very similarly to how the queues work now.
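
Roughly what I mean by routing back to the appropriate endpoint, as a sketch 
(the URL prefixes and backend addresses below are just examples, not a real 
deployment layout):

    # Sketch: one public endpoint, requests routed to per-service endpoints by
    # URL prefix.  All prefixes and backend addresses here are invented.
    ROUTES = {
        '/servers': 'http://compute.internal:8774/v1.1',
        '/images':  'http://glance.internal:9292/v1',
        '/volumes': 'http://volume.internal:8776/v1',
    }

    def backend_for(path):
        """Pick the service endpoint that should handle a given request path."""
        for prefix, endpoint in ROUTES.items():
            if path.startswith(prefix):
                return endpoint + path
        raise KeyError('no service registered for %s' % path)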

pvo

On 2/17/11 8:33 PM, "Jay Pipes" <jaypi...@gmail.com> wrote:

>Sorry, I don't view the proposed changes from AMQP to REST as being
>"customer facing API changes". Could you explain? These are internal
>interfaces, no?
>
>-jay
>
>On Thu, Feb 17, 2011 at 8:13 PM, Justin Santa Barbara
><jus...@fathomdb.com> wrote:
>> An API is for life, not just for Cactus.
>> I agree that stability is important.  I don't see how we can claim to
>> deliver 'stability' when the plan is then immediately to destabilize
>> everything with a very disruptive change soon after, including customer
>> facing API changes and massive internal re-architecting.
>>
>>
>> On Thu, Feb 17, 2011 at 4:18 PM, Jay Pipes 
>> <jaypi...@gmail.com> wrote:
>>>
>>> On Thu, Feb 17, 2011 at 6:57 PM, Justin Santa Barbara
>>> <jus...@fathomdb.com> wrote:
>>> > Pulling volumes & images out into separate services (and moving from
>>> > AMQP to
>>> > REST) sounds like a huge breaking change, so if that is indeed the
>>>plan,
>>> > let's do that asap (i.e. Cactus).
>>>
>>> Sorry, I have to disagree with you here, Justin :)  The Cactus release
>>> is supposed to be about stability and the only features going into
>>> Cactus should be to achieve API parity of the OpenStack Compute API
>>> with the Rackspace Cloud Servers API. Doing such a huge change like
>>> moving communication from AMQP to HTTP for volume and network would be
>>> a change that would likely undermine the stability of the Cactus
>>> release severely.
>>>
>>> -jay
>>
>>





