Jan Algermissen wrote:
> On 12.12.2006, at 21:17, Gregg Wonderly wrote:
> With RPC/WS-* you need to design the interface and the payloads; with 
> REST you only need to design the payloads.

I guess I'm confused, because I thought you also had to have resources to send 
those payloads in and out of?  The design of the right URIs/resources is 
exactly the same as designing the right "interfaces", isn't it?  I am not 
seeing a "smaller" factor in this case.

> Now, if you add Atom to the set of axioms[1] you narrow the space even 
> further as you get a pervasive means to deal with items and collections 
> and how to create and edit items in general (+ a heap of other things I 
> won't  go into here). Effectively with HTTP+URI+Atom you only (luckily 
> so) get to design two remaining things:

The structure of Atom documents, and XML in general, is trivially transported 
in a method call (remote or not) just as easily (if not more easily, 
considering it can already have programmatic structure) as it is transferred 
by an HTTP session.
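
As a sketch of that point, in Java (the names `OrderService` and 
`AtomTransport` are hypothetical, not any standard API): an Atom entry can 
arrive as an already-parsed argument to a local or remote method call, rather 
than as raw bytes on an HTTP stream.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Hypothetical service interface: the Atom entry is handed over with
// programmatic structure already in place, instead of being re-parsed
// out of an HTTP request body by every consumer.
interface OrderService {
    void submit(Document atomEntry);
}

class AtomTransport {
    // Parse an Atom entry once, at the edge; everything downstream gets a DOM.
    static Document parse(String atomXml) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);  // Atom elements live in a namespace
        return f.newDocumentBuilder()
                .parse(new ByteArrayInputStream(
                        atomXml.getBytes(StandardCharsets.UTF_8)));
    }
}
```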

Your outline above contains the same number of things that I would design in 
an interface-based architecture.

> - Atom extensions to enable the processes you have (e.g. purchasing, 
> bidding, billing, auditing, trading...[2]); that is, extensions to 
> the Atom entries that enable them to participate in the processes 
> - state-representing media types for the entities being managed (e.g. 
> invoices).

These standardizations are possible with any media, transport, invocation 
layer, or transfer mechanism.  Where the value will be, in the marketplace, is 
the additional value available from the hyper-media experience.  In places 
where hyper-media is not useful or value-adding, other technologies are going 
to be better choices.  We've been transporting and transferring data between 
machines with protocols of all sorts for ages.  This has always worked out, in 
general, when vendors have not been trying to "isolate" markets.

The value of the web is the information, not "how" it is transferred to my 
browser.  The semantics of the HTTP operations are not "new" concepts.  They 
were implemented to accommodate specific types of media to start with 
(remember HTTP without headers?).  HTML, itself, is the driving force that 
carried it this far.  We had FTP and Gopher for getting "files" for ages.

> Essentially this means you are freed from putting the state transition 
> semantics discussed here over the last two days into the media types 
> (which is arguably a more difficult thing to do).

One of the important things that Steve tried to drive, and that the arguments 
missed this go-round, is the interaction of multiple services that are 
disconnected due to ownership, location, or otherwise.  If I want to find the 
cheapest 2GB SD card on the planet, something has to interact with multiple 
services.  It's these interactions that are logically driven by the client.  
It's these interactions that actually add the value to the SOA proposition.  
There is real software and real coupling involved in finding and projecting 
that value to a consumer.
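
That client-driven orchestration can be sketched directly.  `PriceService` and 
`PriceQuote` are hypothetical names of my own; each implementation could wrap 
an HTTP/Atom endpoint, a Jini proxy, or a SOAP stub behind the same method.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical per-vendor adapter; how it talks to the vendor (HTTP, Jini,
// SOAP, ...) is hidden behind this one method.
interface PriceService {
    Optional<PriceQuote> quote(String product);  // empty if the vendor can't supply it
}

record PriceQuote(String vendor, double price) {}

class CheapestFinder {
    // The client-side logic that adds the value: fan out to every service,
    // tolerate the vendors that fail, and keep the cheapest successful quote.
    static Optional<PriceQuote> cheapest(List<PriceService> services, String product) {
        return services.stream()
                .map(s -> {
                    try {
                        return s.quote(product);
                    } catch (RuntimeException e) {
                        // A dead or timed-out vendor shouldn't sink the search.
                        return Optional.<PriceQuote>empty();
                    }
                })
                .flatMap(Optional::stream)
                .min(Comparator.comparingDouble(PriceQuote::price));
    }
}
```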

> You also get for free Atom's very clean extension model, and you gain 
> radical (IMHO) overall simplification because you end up doing 
> everything the same way everywhere.

Only through standardized interfaces and cooperation can a "generic" 
hypermedia-based client with "no logic" possibly exist.  I agree that we have 
to move in the direction of standardizing data types.  What I'm arguing is 
that the reward is not about simpler or smaller software, yet.  So, just 
saying that it is does not add value to the arguments.

Also, I'm arguing about internal- vs. external-facing interfaces, and about 
the ability to decide to change interface implementations without impacting 
your software architecture, your developers, etc.  My arguments for Jini are 
about programming-logic needs, configurable virtualization of key parts of 
distributed systems, etc.  I am not arguing that there is no value in 
hyper-media or XML schemas.  I am saying that those, by themselves, don't 
solve every programming problem in distributed systems.

Just using hyper-media display as a client interface is something that works 
for many user-based services.  Machine-to-machine conversation is often 
logically different from what hyper-media trivially allows.  For example, what 
happens in a hypermedia exchange when I can't follow the link?  How does a 
client with "no logic" make the right decision?  Idempotency is one of the 
service requirements for making "no logic" clients capable of this, right?  
But when something fails, I have to load something to get back to the "GET" 
screen so that I can see what actually happened when the timeout occurred.  
All of these scenarios need solutions that involve "some logic".  So, how do 
we reach the right architecture?
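
A minimal sketch of that "some logic", in Java (`RetryingClient` is a 
hypothetical name): retry only operations the service has declared idempotent. 
A timed-out non-idempotent request cannot be blindly resent; the caller is 
left to re-GET state to find out what actually happened.

```java
import java.util.concurrent.Callable;

class RetryingClient {
    // Retry on failure only when the operation is idempotent (GET, PUT,
    // DELETE in HTTP terms).  A non-idempotent POST that times out gets a
    // single attempt; the caller must then re-fetch the resource state to
    // learn whether the request actually took effect.
    static <T> T call(Callable<T> op, boolean idempotent, int maxAttempts) throws Exception {
        Exception last = null;
        int attempts = idempotent ? Math.max(1, maxAttempts) : 1;
        for (int i = 0; i < attempts; i++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;  // e.g. a timeout; loop again only if idempotent
            }
        }
        throw last;
    }
}
```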

Gregg Wonderly

