On 02/07/06, Jan Algermissen <[EMAIL PROTECTED]> wrote:
>
>  On Jul 2, 2006, at 6:34 PM, Steve Jones wrote:
>  >
>  >
>  > Ahhh polling, the old running-a-watcher-thread-in-the-consumer trick,
>  > which of course puts more complexity on the caller.
>
>
>  Hmm...having a server in the caller waiting for notifications is less
>  complex than
>  having the caller periodically poll the server? What am I missing?

number of callers > number of services

Implementing once vs. implementing multiple times.
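To make the per-caller cost concrete, here is a minimal sketch of the loop every polling consumer ends up carrying. (Python; the `poll` callable is a stand-in for the HEAD/GET request against the service, and the names are made up — real code would also need error handling, backoff, and thread shutdown.)

```python
import time

def watch(poll, on_change, interval=60, max_polls=1000):
    """Poll `poll()` until its value changes, then fire `on_change`.

    Every consumer has to carry some version of this loop, which is
    the per-caller complexity being discussed.
    """
    last = poll()
    for _ in range(max_polls):
        time.sleep(interval)
        current = poll()
        if current != last:
            on_change(current)
            last = current
    return last
```

With notifications, this loop lives once in the service; with polling, it is re-implemented in every one of the (more numerous) callers.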

>
>  > >
>  > > (WATCH is non standard HTTP; this is no problem if you control the
>  > > whole system (e.g. inside an intranet))
>  >
>  > But doesn't this break the whole principle that HTTP is all you need?
>
>  There is no principle that HTTP is all you need. There is a principle
>  of a uniform interface. Making WATCH part of that is likely to
>  happen for next-gen HTTP.

But your implementation doesn't even use HTTP for the response.

>
>
>  > I'd have expected something like
>  >
>  > PUT
>  > <watch query="lightbulb/state" respond="http://me.com/listener">
>  >
>  > Which would make a call to my server via a PUT statement with the
>  > current state. Either way leaves the big question of who is calling
>  > me when I receive the event.
>
>  PUT's definition does not license that kind of use case.

Eh?  Of course it does: I send a request to eBay via a PUT asking it
to send me an email/SMS/whatever (why not a URI?) when something
changes.

Which bit of PUT prohibits that?
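For what it's worth, the registration itself fits PUT's replace-the-resource semantics: the subscription is just a resource the client creates or replaces at a known URI. A toy sketch of the server side (an in-memory dict standing in for the subscription store; all names are hypothetical):

```python
# Hypothetical handler for PUT /subscriptions/<sub_id>.  The subscription
# is itself a resource, so repeating the same PUT leaves exactly one
# subscription in place -- which is what makes the request idempotent.
subscriptions = {}

def put_subscription(sub_id, body):
    """body is e.g. {"query": "lightbulb/state",
                     "respond": "http://me.com/listener"}"""
    created = sub_id not in subscriptions
    subscriptions[sub_id] = body          # create or replace, PUT-style
    return 201 if created else 200        # 201 Created vs 200 OK
```

The debated part isn't this registration step but the notification path back to `respond`, which is where plain HTTP stops helping.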

>
>  >
>  > >
>  > > Presumably, polling with HEAD doesn't even have the overhead to
>  > > justify the complexity of the notification-based solution (the
>  > > server maintaining the subscribers, the watcher maintaining a
>  > > server to manage the Reply-To URI).
>  >
>  > But the polling approach puts the main overhead on the consumer;
>  > assuming that the number of consumers > the number of publishers,
>  > polling is going to be a poor solution.
>
>  Is it? Maybe in some scenarios, but not in the general case I guess.
>  Any numbers?

How about the average eBay auction: 20 or so people waiting, against
specific rules, for updates, versus those same 20 people polling for
responses every minute.
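Back-of-the-envelope on that scenario. Only the 20 watchers and the once-a-minute polling come from the mail; the auction length and bid count below are assumed numbers for illustration:

```python
# Polling vs. notification request counts for the auction example.
watchers = 20
auction_minutes = 7 * 24 * 60    # assumed: a 7-day auction
bids = 50                        # assumed: 50 state changes (bids)

# One poll per watcher per minute, whether or not anything changed:
polling_requests = watchers * auction_minutes
# One push per watcher per actual change:
notifications = watchers * bids

print(polling_requests, notifications)   # 201600 vs 1000
```

Under those (made-up but plausible) numbers, polling generates two hundred times the traffic, almost all of it reporting "nothing changed".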

>
>
>  >
>  > >
>  > > And, BTW, the architectural style of a system does indeed make a
>  > > difference regarding the assumptions one can make about certain of
>  > > its properties such as performance!, maintainability or
>  > evolvability!!
>  >
>  > I don't disagree, particularly about the last two. Personally I'm
>  > still struggling to see REST as anything beyond yet another A->B comms
>  > implementation.
>
>  Maybe if you'd dig deeper...? Do an example design, post it to
>  rest-discuss, and let's discuss it. Examples are usually the best
>  way to learn.

Being blunt, I've dug pretty deep, designed fairly complex systems,
and not seen anything new here.

>
>
>  > >
>  > > Tell me, can the client component cache the return value of
>  > > srv.GetBulbState()? Are you sure you can re-do the srv.SetBulbState()
>  > > function as often as you like?
>  >
>  > There really isn't any difference in either model in terms of
>  > re-entrance and data latency, neither REST nor SOAP/WSDL address
>  > either of those two problems.
>
>  Hmmm...looking at REST's coverage of caching and HTTP's vast
>  treatment of the subject, I wonder what leads you to that conclusion.

Having read the stuff?  Seriously, what has REST added to caching or
data latency over previous studies?  I might be missing the magic
article, but I haven't seen anything new so far.
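To put something concrete behind the caching claim: what HTTP does specify is revalidation, e.g. a conditional GET with an ETag, where an unchanged resource costs a 304 instead of a full body transfer. A toy sketch of just the control flow (no real HTTP library; names are made up):

```python
def conditional_get(resource, cached_etag=None):
    """Sketch of HTTP's conditional GET.

    `resource` is an (etag, body) pair standing in for the server's
    current state; `cached_etag` is what the client sends in
    If-None-Match.  Matching etags mean the client can reuse its
    cached copy without re-transferring the body.
    """
    etag, body = resource
    if cached_etag == etag:
        return 304, None        # Not Modified: no body on the wire
    return 200, body            # changed (or no cache): full response
```

This doesn't settle the latency argument either way, but it is the mechanism the "free with HTTP" claim is referring to.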

>
>  >
>  > For instance, let's assume that you are using REST to write to compact
>  > flash and you call it 1 billion times; after a certain point it ceases
>  > to be re-entrant, as the compact flash can no longer accept updates.
>  >
>
>  So what? The additional PUTs still do not do anything else beyond the
>  first one.

Err, yes they do: if you keep writing "1" to flash then it overwrites,
updates, and leaves you with less ability to re-write.  So while it
might still read "1", it isn't idempotent, because after the billion
writes you can't then write "0" and have it work.
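A toy model to make the disagreement concrete (the `FlashCell` class below is entirely made up, not any real API): whether the billionth identical PUT costs a physical write depends on whether the store detects no-ops, which is exactly the system-design question being argued over here.

```python
class FlashCell:
    """Toy flash cell with a finite write budget.

    HTTP-level idempotence (the resource ends in the same state) is a
    separate question from whether each request costs a physical write.
    This model skips writes that wouldn't change the stored value, so
    repeated identical PUTs don't consume the budget.
    """
    def __init__(self, max_writes):
        self.value = None
        self.writes_left = max_writes

    def put(self, value):
        if value == self.value:
            return                         # no-op: skip the physical write
        if self.writes_left == 0:
            raise IOError("flash worn out")
        self.writes_left -= 1
        self.value = value
```

Drop the no-op check and the billion identical writes really do wear the cell out, as the mail argues; keep it and the PUTs are free, as the reply assumes. Neither behaviour is dictated by HTTP itself.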

>
>
>  > >
>  > > Both of these for example come for free with HTTP - zero design- and
>  > > zero implementation cost!
>  >
>  > No they don't, see above. HTTP specifies nothing about data latency
>  > or re-entrance, that is purely a systems design element. Unless you
>  > are claiming that by using HTTP you can just keep calling back to the
>  > server... which means you fall into the trap of the eight fallacies
>  > of network computing: http://www.java.net/jag/Fallacies.html
>
>
>  And which one do you mean exactly?

Pretty much all of them?  Or are you saying that these no longer apply
in the REST world?

>
>  And, speaking of "keeping calling back to the server"... search
>  engine crawlers do exactly that, quite successfully and without
>  breaking anything (except when someone implements GET in the wrong
>  way).

Yup, and the bandwidth bill at Google is something I wouldn't like to
see on my AMEX this month.  Their entire model is to call back to the
server; that doesn't mean it's sensible for everyone else.

One size doesn't fit all.


>
>  Jan