On Feb 18, 2011, at 11:53 AM, Jay Pipes wrote:

I think your points are all valid, Jorge. Not disagreeing with them;
more just outlining that while all services must *publish* a REST
interface, they can still listen and respond on more than one
protocol.

I'm glad we're *mostly* in agreement :-)


So, I agree with you basically, just pointing out that while having a
REST interface is a good standard, it shouldn't be the *only* way that
services can communicate with each other :)


Again, I'm not saying it's the *only* way services should communicate with one
another, especially if there exist protocols that make no sense to replicate in
REST.  That said, I don't like the idea of having to maintain different
protocols otherwise.  I'm not convinced that doing so is necessary; it muddies
the waters about what the true service interface is, it keeps us from consuming
the same dog food we're selling, and I'm afraid it may lead to added work for
service teams.


-jay

On Fri, Feb 18, 2011 at 12:46 PM, Jorge Williams
<jorge.willi...@rackspace.com> wrote:

On Feb 18, 2011, at 10:27 AM, Jay Pipes wrote:

Hi Jorge! Thanks for the detailed response. Comments inline. :)

On Fri, Feb 18, 2011 at 11:02 AM, Jorge Williams
<jorge.willi...@rackspace.com> wrote:
There are lots of advantages:

1) It allows services to be more autonomous, and gives us clearly defined 
service boundaries. Each service can be treated as a black box.

Agreed.

2) All service communication becomes versioned, not just the public API but
also the admin API.  This means looser coupling, which helps us work in
parallel.  So glance can be on 1.2 of their API, but another service that
depends on it (say compute) can continue to consume 1.1 until they're ready to
switch -- we don't have the bottleneck of everyone having to update everything
together.
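
Just to make that concrete, here's a rough sketch of what version pinning might
look like from the consumer side.  The endpoint, paths, and client library
below are assumptions on my part, not anything that exists today:

    import requests  # assuming the requests library; any HTTP client works

    GLANCE = 'http://glance.example.com'  # hypothetical endpoint

    # Compute pins itself to the 1.1 contract by putting the version in the
    # URI; glance is free to expose /v1.2 alongside it without breaking us.
    resp = requests.get(GLANCE + '/v1.1/images/detail',
                        headers={'Accept': 'application/json'})
    resp.raise_for_status()
    images = resp.json()

    # Moving to 1.2 later is a deliberate, one-line change on our side:
    # resp = requests.get(GLANCE + '/v1.2/images/detail', ...)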

Agreed.

3) Also, because things are loosely coupled and there are clearly defined
boundaries, it positions us to add many other services (LBaaS, FWaaS, DBaaS,
DNSaaS, etc.).

Agreed.

4) It also becomes easier to deploy a subset of functionality (you want compute
and image, but not block).

Agreed.

5) Interested developers can get involved in only the services that they care 
about without worrying about other services.

Not quite sure what this has to do with REST vs. AMQP... AMQP is simply
the communication protocol between internal Nova services (network,
compute, and volume) right now. Developers can currently get involved
in the services they want to without messing with the other services.


I'm saying we can even package/deploy/run each service separately.  I suppose
you could also do this with AMQP; I just see fewer roadblocks to doing it with
HTTP.  So for example, AMQP requires a message bus which is external to the
service.  That affects autonomy.  With an HTTP/REST approach, I can simply talk
to the service directly.  I suppose things could be a little different if we
had a queuing service.  But even then, do we really want all of our messages to
go to the queue service first?
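
Here's roughly what I mean by talking to the service directly.  The hostname is
made up and I'm assuming a requests-style HTTP client, but the point is that
there's nothing to stand up between the two parties:

    import requests  # assuming the requests library

    # Talking to the service directly: the only dependency is the service's
    # own (hypothetical) endpoint.
    resp = requests.get('http://glance.example.com/',
                        headers={'Accept': 'application/json'})
    print(resp.status_code)

    # The AMQP equivalent can't deliver a single message until both parties
    # also agree on, deploy, and operate an external broker.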


6) We already have 3 APIs (nova, swift, glance) and we need to do this kind of
integration as it is, so it makes sense for us to standardize on it.

Unless I'm mistaken, we're not talking about APIs. We're talking about
protocols. AMQP vs. HTTP.

What we call APIs are really protocols, so the OpenStack compute API is really
a protocol for talking to compute.  Keep in mind that we use HTTP intimately in
our RESTful protocol: content negotiation, headers, status codes, etc. -- all
of these are part of the API.
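
A quick sketch of what I mean -- the compute endpoint and server id below are
made up, and I'm assuming a requests-style client:

    import requests  # assuming the requests library

    COMPUTE = 'http://compute.example.com/v1.1'  # hypothetical endpoint

    # Content negotiation: the same resource, two representations, driven
    # entirely by the Accept header.
    for accept in ('application/json', 'application/xml'):
        resp = requests.get(COMPUTE + '/servers/42',
                            headers={'Accept': accept})
        # Status codes are part of the contract too: 200 vs. 404 vs. 401
        # means something to every caller.
        if resp.status_code == 200:
            print(accept, '->', resp.headers.get('Content-Type'))
        elif resp.status_code == 404:
            print('no such server')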

Another thing I should note is that I see benefits in keeping the interface to
a service the same regardless of whether it's a user or another service that's
making the call.  This allows us to eat our own dog food.  That is, developers
don't get a separate protocol from the one clients use.  Sure, there may be an
Admin API, but the difference between the Admin API and the Public API is
really defined in terms of security policies by the operator.


We are certainly changing the way we are doing things, but I don't really think
we are throwing away a lot of functionality.  As PVO mentioned, things should
work very similarly to the way they work now.  You still have compute workers
and you may still have an internal queue; the only difference is that
cross-service communication now happens by issuing REST calls.
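
For example, a compute worker that needs image metadata might do something like
the following sketch instead of casting a message across the service boundary
-- the endpoint and helper are hypothetical:

    import requests  # sketch only; endpoint and path are hypothetical

    GLANCE = 'http://glance.example.com/v1.1'

    def get_image_metadata(image_id):
        # The cross-service hop is now a plain, versioned REST call against
        # glance's published API instead of a message on a shared bus.
        resp = requests.get('%s/images/%s' % (GLANCE, image_id),
                            headers={'Accept': 'application/json'})
        resp.raise_for_status()
        return resp.json()

    # Everything inside the compute boundary (workers, internal queue,
    # scheduler) keeps working the way it does today.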

I guess I'm on the fence with this one. I agree that:

* Having clear boundaries between services is A Good Thing
* Having versioning in the interfaces between services is A Good Thing

I'm just not convinced that services shouldn't be able to communicate
on different protocols. REST over HTTP is a fine interface. Serialized
messages over AMQP are similarly a fine interface.

I don't think we're saying you can't use any protocol besides HTTP.  If it
makes sense to use something like AMQP **within your service boundary**, use
it.  One of the nice things about services being autonomous and loosely coupled
is that you have a lot of freedom within your black box.  So if you want to use
AMQP to talk to your compute nodes within your boundary, go for it.
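
For instance, something along these lines inside the boundary -- the queue
name, payload, and the RabbitMQ/pika setup are all assumptions on my part:

    import json

    import pika  # assuming RabbitMQ and the pika client inside the boundary

    # Inside the compute boundary: hand work off to compute nodes over AMQP.
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host='localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='compute_tasks')  # made-up internal queue
    channel.basic_publish(exchange='',
                          routing_key='compute_tasks',
                          body=json.dumps({'action': 'run_instance',
                                           'instance_id': 42}))
    connection.close()

    # None of this leaks outside the boundary: other services and users only
    # ever see the versioned REST API.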

I do think we need to standardize communication *between services*, and
standardizing on REST is not a bad choice.  We learned this lesson the hard way
at Rackspace.  Today we have services that use REST, RMI, XML-RPC, and SOAP.
Because there's so much diversity in the protocols, we have services that
expose multiple protocols to different clients (say RMI and SOAP), and often a
feature will make it into one protocol but never get exposed in the other.
Having to support multiple protocols adds a lot of extra work for the service
team and for teams like the control panel team that need to integrate with all
sorts of services in all sorts of ways.  We've come to the conclusion that
supporting a single protocol is a good thing, and that HTTP/REST is not a bad
choice.

Now there are cases where it does make sense to expose a protocol other than
HTTP/REST -- and that is when there's a native de-facto protocol with
ubiquitous client support.  So for example, if we created a mail service, does
it really make sense for us to redefine IMAP in REST?  I think not :-)  Same
thing for protocols like iSCSI, etc.

The standardization
should occur at the *message* level, not the *protocol* level. REST
over HTTP, combined with the Atom Publishing Protocol, has those
messages already defined. Having standard message definitions that are
sent via AMQP seems to me to be the "missing link" in the
standardization process.

Your argument that the messages should be standardized but that we should be
transport-protocol independent is essentially what led to the development of
SOAP.  Today you could do:

1) SOAP over HTTP
2) SOAP over AMQP
3) SOAP over SMTP
4) SOAP over TCP, etc.

And it works as you propose: the messages are standardized and the transport
protocol doesn't matter.  Unfortunately, a side effect of this is that stuff
that would otherwise be handled by the protocol ends up filtering its way up
into the definition of the messages.  This adds a lot of complexity and it
prevents clients and service providers from taking advantage of the underlying
features of the protocol.  I'd say let's standardize on REST and take advantage
of all of the stuff HTTP has to offer (proxying, caching, SSL, client support,
etc.).
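
Conditional GETs, for example, come along for free.  A hypothetical sketch --
the endpoint is made up and I'm assuming the service sets ETags:

    import requests  # assuming the requests library

    GLANCE = 'http://glance.example.com/v1.1'  # hypothetical endpoint

    # First fetch: remember the ETag, assuming the service hands one back.
    first = requests.get(GLANCE + '/images/detail')
    etag = first.headers.get('ETag')

    # Later fetches become conditional; a 304 means the cached copy is still
    # good and no body crosses the wire.  Any standard HTTP cache or proxy
    # sitting in between gets this behavior for free.
    if etag:
        again = requests.get(GLANCE + '/images/detail',
                             headers={'If-None-Match': etag})
        if again.status_code == 304:
            print('cache still valid; reuse the body from the first response')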

-jOrGe W.





