John D. Mitchell wrote:
On Thursday 2008.10.02, at 23:00, Vincent Nonnenmacher wrote:
[...]
I'm confused... What are your dashboard clients doing that they need to sustain 3 RPS (requests per second)? I.e., why do you need fast updating for a dashboard app? It's not like this is some sort of critical systems control panel (e.g., for an assembly line, nuclear power plant, medical monitoring, etc.).
If you look at the breakdown (queues, their members, calls being held, active calls coming in, and agent behaviour being watched), you get that number because each piece runs its own timed poll loop, so 3 req/s is for a 'small' call center. Since we handle 'distributed' queues across several servers, you double this when a dashboard monitors two systems. It's true, however, that the payload size grows with the number of agents/queues at the same polling frequency.

There's certainly no reason at all to be making separate requests for each piece of that information. I.e., you can easily compact that down to a single bulk update request or, at the very least, make each request over a single connection.
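Something like this minimal TypeScript sketch is what a single bulk poll could look like (the /dashboard resource and the field names are invented for illustration, not the actual API):

```typescript
// Hypothetical shape of one bulk dashboard response; field names are assumptions.
interface DashboardSnapshot {
  queues: { name: string; waitingCalls: number }[];
  members: { agent: string; status: string }[];
  activeCalls: { from: string; to: string; startedAt: string }[];
}

// One request per tick instead of one per queue/member/call list.
async function pollDashboard(baseUrl: string): Promise<DashboardSnapshot> {
  const resp = await fetch(`${baseUrl}/dashboard`, {
    headers: { Accept: "application/json" },
  });
  if (!resp.ok) {
    throw new Error(`dashboard poll failed: ${resp.status}`);
  }
  return (await resp.json()) as DashboardSnapshot;
}
```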

Yes, I already came to the same conclusion (and, by the way, your note below about the REST abstraction points the same way): minimize the number of requests, as long as a big list chunk can easily be handled as a collection to update the DOM on the client side.

Rob Heittman's example uses the same 'list' technique.

Queuing tricks (like the ones available in jQuery and Dojo) also help without adding too much complexity to the client code. I'm now down to 1 req/s while monitoring two servers and a dozen queues/members. Since the requests are chunked and finally displayed in an aggregated <DIV> tree, receiving a pre-digested structure matches the DOM building nicely, so the event-to-DOM marshalling is quite natural.

In terms of following multiple systems, again I'll recommend, if you really care about scaling this to lots of clients, that you add a "concentrator"/caching tier which coalesces information from, e.g., multiple backend services/systems/etc. for efficient use by the clients. This has so many benefits that it would take work to list them all. So, in terms of your concern about the number of connections/requests growing multiplicatively in the number of clients * systems, this approach can again compact that down into a single client connection to a concentrator/cache server and, e.g., one single bulk update request.
My server does exactly that on the Asterisk servers' side, so there is only one connection for many clients to one of the servers' distributed 'abstraction' (in fact, that is where the 'innovation' really is ;-)).

But this doesn't solve the 'too many/too often' requests problem when using even 'subtle' polling techniques to bridge the asynchronous/synchronous gap and give as close to a 'real time' feeling as possible.

In fact, the only area in telephony where customers want a 'real time' feel is the gap between hearing their phone ring on the desk and seeing who's calling (even in an open-plan office: 'who's calling my buddy's phone?'). If you satisfy that subtle expectation with a quick display reaction, they will excuse a certain amount of latency in further display adjustments (like seeing a phone go from ringing to answered, or a queue member's status being updated somewhere else).

So, in fact, there is no need (as you pointed out so, euh, 'precisely' ;-)) for an asynchronous refresh of ALL the client updates.

BTW, a helpful way to think of the bulk requests is as a higher-level publish/subscribe model (which is basically how Rob's solution for the police works; the subscription is to the "event log"). The bulk update request becomes: "give me every new 'event' since [the last time I asked]".
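A rough sketch of that "give me every new event since X" request, again with an invented /events resource and id scheme:

```typescript
// Hypothetical event-log poll: ask only for events newer than the last one seen.
// The /events resource and the `since` parameter are illustrative assumptions.
interface DashboardEvent {
  id: number;       // monotonically increasing event id assigned by the server
  type: string;     // e.g. "queue-join", "call-answered"
  payload: unknown;
}

let lastSeenId = 0;

async function pollNewEvents(baseUrl: string): Promise<DashboardEvent[]> {
  const resp = await fetch(`${baseUrl}/events?since=${lastSeenId}`);
  if (!resp.ok) {
    return []; // transient error: just skip this tick
  }
  const events = (await resp.json()) as DashboardEvent[];
  if (events.length > 0) {
    lastSeenId = events[events.length - 1].id;
  }
  return events;
}
```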

On the AJAX client side, if you're going to progress beyond the 'fun demo' level, you're going to need to switch to a clean, event-driven model in the client anyway to make it efficient, manageable, etc., and that will actually make it easier for the web designers eventually. As part of that layer, your code bursts the bulk request and fires off the appropriate AJAX events to update the DOM, and the UI catches up as it will.
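For instance, a tiny event bus that bursts the bulk response into per-type events might look like this (the names are illustrative, not any particular framework's API):

```typescript
// Tiny event bus: the bulk response is "burst" into fine-grained events and
// each DOM widget subscribes only to the event types it renders.
type Handler = (payload: unknown) => void;

const subscribers = new Map<string, Handler[]>();

function subscribe(eventType: string, handler: Handler): void {
  const list = subscribers.get(eventType) ?? [];
  list.push(handler);
  subscribers.set(eventType, list);
}

function burst(events: { type: string; payload: unknown }[]): void {
  for (const ev of events) {
    for (const handler of subscribers.get(ev.type) ?? []) {
      handler(ev.payload);
    }
  }
}

// Example: a queue widget only reacts to queue updates.
subscribe("queue-updated", (payload) => {
  // the matching <div> would be updated here; console.log stands in for DOM work
  console.log("queue changed:", payload);
});
```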

no pun intended for the 'fun demo level' indeed ;-)
But I'm convinced that simple, short, clear demo samples help a lot of OSS projects grab end users' attention, since they don't have too steep a learning curve. You're absolutely right, though, that handling most of the communication/processing underneath is the only way to deliver a 'simple/elegant' interface to the web designer.

The problem remains the mismatch: asynchronous handling inside the various servers produces a consolidated abstraction, which is then broken when an AJAX client talks to a Restlet server, because the only 'conforming' way is to pack events into a list, rebuild the abstraction from that chunked transmission, and then re-code an event push inside the client to pretend it's asynchronous in the way events are dispatched to the DOM nodes.

Also, as mentioned previously, given that you're talking about human time scales for the dashboard, polling only once every, e.g., 3 seconds instead of every second is a trivial way to get a 3x reduction in load. Having your AJAX be smart enough to recognize when the user has moved their focus elsewhere and then change the polling rate to, e.g., every 30 seconds or 5 minutes will have a large impact on your total systemic load, since few people actually follow these sorts of "nice to have" dashboards very closely except for short periods of time. And, as Jerome and Rob mentioned, you can use conditional GETs.
Yes, that helps, even if I don't know how to monitor what you call 'user attention'. But I'm also considering an estimate of call activity, so I could gradually slow down refreshes during periods with no phone activity at all. Using a queue for AJAX requests, it will be easy to throttle unnecessary requests, since the server could provide such a running estimate.
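For illustration, a minimal client-side sketch combining an adaptive polling interval with a conditional GET (the /dashboard resource, the ETag usage, and the focus heuristic are assumptions, not the actual implementation):

```typescript
// Adaptive polling plus a conditional GET. The interval stretches when the tab
// loses focus, and the ETag lets the server answer 304 Not Modified cheaply
// when nothing changed.
let etag: string | null = null;

function currentIntervalMs(): number {
  // 3 s while the tab is focused, 30 s otherwise; tune as needed
  return document.hasFocus() ? 3_000 : 30_000;
}

async function pollOnce(baseUrl: string): Promise<void> {
  const headers: Record<string, string> = {};
  if (etag) {
    headers["If-None-Match"] = etag;
  }
  const resp = await fetch(`${baseUrl}/dashboard`, { headers });
  if (resp.status === 304 || !resp.ok) {
    return; // nothing changed, or transient error: skip this tick
  }
  etag = resp.headers.get("ETag");
  const snapshot = await resp.json();
  console.log("render", snapshot); // DOM update would go here
}

function scheduleNextPoll(baseUrl: string): void {
  setTimeout(async () => {
    await pollOnce(baseUrl);
    scheduleNextPoll(baseUrl);
  }, currentIntervalMs());
}
// kick off with, e.g., scheduleNextPoll("/api");
```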

Let me also second Rob's conclusion that, for "dashboard" uses requiring low, predictable latency, these HTTP-based approaches are NOT advised. I.e., if one is writing a control panel for something where it really matters, the game is quite different and requires a different mindset to build.

I know; for such high-end needs (for example a front-desk operator console) I'm planning to go the rich-client way (using Granite Data Services or the like, with Flash/AIR clients). But my goal here is to demonstrate that a simple, light HTTP dashboard is easy to code for an average web designer (without putting the server on its knees because of my 'zealous religion' and naivety ;-)).

But it clearly shows the limits of 'polling'.

Nope, it merely shows some limits of a simplistic approach. A polling approach, even in this scenario, can perform and scale quite well.

Yes indeed, but if you look at the mixed approach you'll find that the two are complementary rather than mutually exclusive.

Look at the results obtained here using such a combo:

http://blogs.webtide.com/gregw/entry/asynchronous_restful_webapplication

The result is that while a fully synchronous REST approach (which puts the load on the number of server-side threads and the time they are held, as with long-request-style techniques) is marginally faster than the asynchronous one, the load on the server with the asynchronous approach is much lighter and the possible requests/s capacity is one or two orders of magnitude higher.

That's why I was wondering in my first question: if a mixed approach is the way to go (and it's not an 'if'), then isn't using GlassFish (or Jetty continuations) too heavyweight for a simple 'asynchronous REST' request?

I feel a bit ashamed, as it sounds like an oxymoron when reading your reaction ;-)

[ As an aside and mild rant, one of the biggest things I hate about the zealous claims made by people pushing their latest fad (aka 'religion') in technical areas is that people see the simple, easy stuff and then believe that their system can magically grow to handle much larger, not so easy stuff without real thought and work. When "it" fails to magically do that, people often get pissed off and then claim that the whole approach is, a priori, bankrupt. ]

That's why, as an old engineer, I don't write a single line of code without seeing that coming from far away, and why I ask first for some wisdom. So thanks for your remarks; my pun filter works very well (and I have no religion).

Doing long-request polling will be far better (and well within easy reach of Restlet), at the cost of a waiting thread on the REST server, but a publish/subscribe mechanism like Bayeux will definitely have a more natural 'impedance match' than this approach.
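For reference, the client half of such a long-poll loop can stay very small (the /events resource and the wait parameter are invented for the sketch; the server is assumed to hold the request open until something happens or a timeout elapses):

```typescript
// Client side of a long-poll loop: each request parks until the server answers,
// and the client immediately re-issues it.
async function longPollLoop(
  baseUrl: string,
  onEvents: (events: unknown[]) => void,
): Promise<void> {
  for (;;) {
    try {
      const resp = await fetch(`${baseUrl}/events?wait=30`);
      if (resp.ok) {
        const events = (await resp.json()) as unknown[];
        if (events.length > 0) {
          onEvents(events);
        }
      }
    } catch {
      // network hiccup: back off briefly before retrying
      await new Promise((resolve) => setTimeout(resolve, 2_000));
    }
  }
}
```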

You don't need Bayeux to do pub/sub, nor do you need Comet to get, e.g., lower latency. Comet ties up connections for very long periods of time and can have its own issues if you're doing this over the 'net (think about what the various intermediate proxies are allowed to do to the connection).
I thought about that, but coming from the SIP RFCs, with long session establishments using dialogs and transactions to build session-like conversations over UDP between peers, I found the Bayeux protocol refreshing, simple, and sufficient to handle this problem.

I should admit it would also let me demonstrate it on my iPhone ;-)
[....]
Having the option of being called by the server (or getting a long-delayed response) will be much easier, in terms of REST syntax and DOM manipulation, for more web designers/programmers.

That's totally false. It's all about having a clean structure that's event-based. All of the ad hoc, simplistically programmatic AJAX approaches fail under their own weight as the system gets to any reasonable level of complexity. That's why the best AJAX frameworks are all about making the event handling faster/easier/cleaner/etc.

That's why jQuery (and to a lesser extent Dojo) makes this simple, but the fact remains that at the async/sync barrier you are rebuilding: abstracted events -> list handling over request/response (REST or not) -> rehashing and distributing events to the final DOM event subscribers.

When you look at the kind of code you are obliged to write, it is much more efficient to have publish/subscribe go from the event generators all the way down to the consumers.

One question (and I admit I would just be posturing if I claimed to be naive here):

Would it be so awful to have two such verbs (PUBLISH/SUBSCRIBE) added to the REST actions? After all, even if HTTP doesn't include them (yet), it is a logically wrong inference to tie ourselves to the actual transport instead of grasping that the protocol pattern is 'just' a request/response one (synchronicity here is only a modality, not the whole point).


As I've tried to point out, making a good, event-driven AJAX UI is actually cleaner, more robust, and more responsive to the users than the ad hoc approaches.

And you're absolutely right here; I guess it's just that for a small fraction of the DOM interactions, a little push would be a very nice touch.


