On Wed, May 12, 2010 at 10:57 AM, Guillaume Nodet <gno...@gmail.com> wrote:
> On Wed, May 12, 2010 at 03:30, Claus Ibsen <claus.ib...@gmail.com> wrote:
>
>> On Wed, May 12, 2010 at 9:25 AM, Guillaume Nodet <gno...@gmail.com> wrote:
>> > On Wed, May 12, 2010 at 00:53, Claus Ibsen <claus.ib...@gmail.com>
>> wrote:
>> >
>> >>
>> >> You can use the ToAsync
>> >> http://camel.apache.org/toasync.html
>> >>
>> >> It leverages the AsyncProcessor API which you know from Camel 1.x.
>> >>
>> >> However it hasn't been fully implemented and expanded to include other
>> >> Camel components than Jetty at the moment.
>> >> We have tickets in JIRA to improve this.
>> >>
>> >>
>> > Sorry if my email was a bit harsh.  Let me explain what I think the
>> > problem is.
>> >
>> > In Camel 1.6.x, we had support for the AsyncProcessor. I know it was not
>> > fully complete, but the jetty consumer was working and initial work had
>> > been done on the jhc component.  Remember at that time, the jetty client
>> > did not exist at all iirc, so jhc was the only asynchronous http client
>> > available.
>> >  It may not have been perfect and I agree there was a need to improve it.
>>
>> Well it was not documented anywhere, and as such it was kept in the dark.
>> It would have been good if you had taken the time to finish what you
>> started, or made sure someone could take over and get it up to an
>> acceptable level.
>>
>>
>>
>> >  But it was fulfilling a need which I think is not covered anymore.
>> > The AsyncProcessor was not intended to be used by users, it was an
>> > implementation detail.  The goal was really the following:
>> >
>> > Let's say I have the following route:
>> > from("http:xxx").something().to("jms:yyy").anotherthing().to("http:zzz")
>> > The purpose of the async API was to make sure this route could scale when
>> > using request-response.
>> >
>> > What was happening (on the jetty consumer side) was that continuations
>> > were leveraged.  The jetty consumer would receive an http request.  The
>> > request would have
>>
>> Adding support for leveraging Jetty continuations is something we can
>> reintroduce in Camel 2.x, but as you say it should be part of a larger
>> game plan.
>>
>>
>> > been processed and ultimately gone to the jms component.  The jms
>> > AsyncProcessor would have sent the jms message and returned false to
>> > indicate the response was not available yet.  When the jms component
>> > would have received the response, the route would have continued because
>> > the asynchronous callback was called.  The same thing would have happened
>> > on the http provider.
>> > This would have saved threads, thereby making the route more scalable.
>> >
>>
>> In Camel 1.x it's only the camel-jhc and camel-spring-integration
>> components which implement and use AsyncProcessor.
>>
>> So the JmsProducer will block and wait for the reply.
>> And so would the HttpProducer, but as you said, at that time JHC was
>> maybe the only async HTTP framework out there.
>>
>
> But the jetty http consumer did call the AsyncProcessor.  As I said, that
> was not really supposed to be used by end users.
>
>
>>
>>
>>
>>
>> > So the goal was to make that happen transparently without the user even
>> > being aware, because in this case the user does not use a producer
>> > template to send an exchange.  It's just a Camel route definition.
>> >
>>
>> I don't see the difference between 1.x and 2.x here. They both define a
>> Camel route.
>>
>>
>> > What happens now (correct me if I'm wrong) is that you need to use
>> > toAsync on a producer, which has the following effect:
>> >  * if the producer implements AsyncProcessor, it will be called
>> > asynchronously using the process(Exchange exchange, AsyncCallback callback)
>> > method
>> >  * if the producer does not support AsyncProcessor, a new thread is
>> > spawned and the process(Exchange exchange) method is called, followed by
>> > the callback
>> > In both cases, the response can't be conveyed back because the above
>> > processing happens in another thread.
>> >
>>
>> It acts the same. They both transfer the Exchange to another thread
>> to continue processing it.
>> The difference is that the former leverages the native async API of the
>> given component in question.
>> The latter will fall back to a simulated mode from camel-core.
>>
>>
>> > So the to() and toAsync() verbs are actually really different. The
>> > toAsync() one will spawn a new thread and continue the processing while
>> > forgetting about any possible response.
>> >
>>
>> Yeah, toAsync needs better feedback to the caller in terms of the response.
>> This is something we have to look into and improve.
>> I have updated the ToAsync wiki page with a notice about this, to make
>> it more public.
>>
>> Also, ToAsync was added to try to facilitate native async support
>> from the various components:
>> - CXF
>> - JBI
>> - Jetty
>> - Apache HTTP Client 4.x
>> - And other HTTP based
>> - Maybe MINA / Netty and others
>>
>> We only got around to doing a single proof of concept with the Jetty
>> component.
>> It's subject to change.
>> If it lies dormant for a long time it will be @deprecated and removed.
>>
>> And maybe in the meantime we come up with better solutions,
>> such as through this discussion.
>>
>>
>> > So putting aside any argument, is there any way to make a simple route
>> > such as
>> > from("jetty:http://localhost:8080/service1")
>> >   .to("jetty:http://localhost:8080/service2")
>> >   .to("jetty:http://localhost:8080/service3");
>> >
>> > scalable in a way that the consumer thread would not be blocked while
>> > waiting for the answer of the web service called, and still make sure
>> > that the answer is conveyed back to the client?
>> >
>>
>> I assume you need Jetty Continuations to have the JettyConsumer not
>> block at all?
>> Or am I wrong?
>>
>> If so, then no, there is no way.
>> Currently the JettyConsumer will at some point block / wait for the
>> reply to be done.
>>
>
> That's precisely what the continuations are used for.   And that was exactly
> what the AsyncProcessor was enabling.
>

Yeah, Jetty continuations have this "park" and "resume" feature, and
Jetty is the only component which had it.
And it's great that it will be standard in Servlet 3.0.
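
Just to illustrate, the park/resume pattern in plain Servlet 3.0 terms would
look roughly like this (only a sketch, not Camel code; the backend call is
simulated with a CompletableFuture purely for illustration):

// Minimal sketch of park/resume with the Servlet 3.0 async API.
import java.io.IOException;
import java.util.concurrent.CompletableFuture;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/service1", asyncSupported = true)
public class ParkAndResumeServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // "park": suspend the request, freeing the container thread
        final AsyncContext ctx = req.startAsync();

        // stand-in for a non-blocking call to the next service in the route
        CompletableFuture
                .supplyAsync(() -> "reply from backend")
                .thenAccept(reply -> {
                    try {
                        // "resume": write the reply and complete the suspended request
                        ctx.getResponse().getWriter().write(reply);
                    } catch (IOException e) {
                        // error handling omitted in this sketch
                    } finally {
                        ctx.complete();
                    }
                });
        // doGet returns immediately; no thread sits blocked waiting for the reply
    }
}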



> I think you still don't understand what I'm trying to achieve.
> What i want is the following diagram:
> http://camel.apache.org/asynchronous-processing.data/simple-async-route.png
> Just replace jhc with the jetty producer.
>
> If you block the thread in any way in this diagram, you can't scale to
> thousands of concurrent http clients, because you would need thousands of
> threads.
> Asynchronous processing of http requests has been available since Jetty 6
> and has now been brought into Servlet 3.0.
>
> That's the way CXF works.  That's also the way ServiceMix works.  It's also
> the way Synapse works.
>


> So please don't tell me this is not possible at all.  It's just not
> possible anymore.
>

Yes, Jetty continuations are not leveraged in Camel 2.x.
It's something which can be implemented again, and there has been a
TODO for it since we created Camel 2.0.

There have been, and are, many other things to look at. And people would
most likely have used SMX or CXF for their highly scalable HTTP based
services.
That said, it's of course something that can be prioritized in Camel as
well, if we add the needed muscle to implement it.



>
>
>>
>> > From an API perspective, using the asyncCallback calls on the producer
>> > template could make sense; the problem is that they just spawn a thread,
>> > send the exchange, block for the answer and call the callback.  That does
>> > not really help scaling from a thread usage perspective.
>> >
>>
>> Where do you mean it blocks? Yeah, the spawned thread processes it as it
>> normally would, and invokes the callback when the reply is ready.
>>
>> The caller, however, is not blocked and can continue doing what it may
>> want to do.
>> It gets a Future handle in case it wants to access the response as well.
>>
>>
> Well, that's a use case that could have been very easily handled by the
> client itself by spawning its own thread.
>

It's not that easy for end users to spawn their own threads and whatnot.
Having a single method is handy, hence the Template pattern.
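
For example, from the end user's point of view it boils down to a single
call like this (a rough sketch; the endpoint URI is only illustrative and
would need the corresponding component on the classpath):

// Rough sketch of the ProducerTemplate async API from the caller's side.
import java.util.concurrent.Future;
import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.impl.DefaultCamelContext;

public class AsyncTemplateExample {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.start();

        ProducerTemplate template = context.createProducerTemplate();

        // returns immediately with a Future; a thread from the template's
        // executor performs the actual request/reply behind the scenes
        Future<Object> future =
                template.asyncRequestBody("jetty:http://localhost:8080/service2", "ping");

        // the caller is free to do other work here ...

        // and can pick up the reply later (this is where it waits, if needed)
        String reply = template.extractFutureBody(future, String.class);
        System.out.println(reply);

        context.stop();
    }
}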



> Again, I'm not talking about the client being blocked or not blocked during
> the processing of the message.  I'm talking about not waiting for a long
> time for a response when we could just park the exchange and process it
> later when the response comes back.
>

Yeah, I got the idea about the park and resume.

I wonder how it would be possible to do that in a more general way in Camel?
Suppose you do not use Jetty as the starting consumer?
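
I guess it would need something along the lines of the old AsyncProcessor
contract on every producer and consumer, roughly like this sketch (the
producer and its reply listener below are made up for illustration, not a
real component):

// Sketch of the general contract that would enable park/resume for any
// component, along the lines of the AsyncProcessor API discussed above.
import org.apache.camel.AsyncCallback;
import org.apache.camel.Exchange;

public class SketchAsyncProducer {

    /**
     * Returns true if the exchange was completed synchronously, or false if
     * it was "parked": the reply is not available yet and the route will be
     * resumed later by invoking the callback.
     */
    public boolean process(final Exchange exchange, final AsyncCallback callback) {
        // send the request without waiting (hypothetical non-blocking send)
        sendRequestSomehow(exchange, new ReplyListener() {
            public void onReply(Object reply) {
                // "resume": populate the exchange and continue the route
                exchange.getOut().setBody(reply);
                callback.done(false); // false = completed asynchronously
            }
        });
        // "park": no thread is blocked while the reply is outstanding
        return false;
    }

    // --- illustrative placeholders ---
    interface ReplyListener { void onReply(Object reply); }

    private void sendRequestSomehow(Exchange exchange, ReplyListener listener) {
        // a real component would register a correlation id and return here
    }
}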





>
>>
>>
>>
>> >
>> >>
>> >>
>> >> > --
>> >> > Cheers,
>> >> > Guillaume Nodet
>> >> > ------------------------
>> >> > Blog: http://gnodet.blogspot.com/
>> >> > ------------------------
>> >> > Open Source SOA
>> >> > http://fusesource.com
>> >> >
>> >>
>> >>
>> >>
>> >> --
>> >> Claus Ibsen
>> >> Apache Camel Committer
>> >>
>> >> Author of Camel in Action: http://www.manning.com/ibsen/
>> >> Open Source Integration: http://fusesource.com
>> >> Blog: http://davsclaus.blogspot.com/
>> >> Twitter: http://twitter.com/davsclaus
>> >>
>> >
>> >
>> >
>> > --
>> > Cheers,
>> > Guillaume Nodet
>> > ------------------------
>> > Blog: http://gnodet.blogspot.com/
>> > ------------------------
>> > Open Source SOA
>> > http://fusesource.com
>> >
>>
>>
>>
>> --
>> Claus Ibsen
>> Apache Camel Committer
>>
>> Author of Camel in Action: http://www.manning.com/ibsen/
>> Open Source Integration: http://fusesource.com
>> Blog: http://davsclaus.blogspot.com/
>> Twitter: http://twitter.com/davsclaus
>>
>
>
>
> --
> Cheers,
> Guillaume Nodet
> ------------------------
> Blog: http://gnodet.blogspot.com/
> ------------------------
> Open Source SOA
> http://fusesource.com
>



--
Claus Ibsen
Apache Camel Committer

Author of Camel in Action: http://www.manning.com/ibsen/
Open Source Integration: http://fusesource.com
Blog: http://davsclaus.blogspot.com/
Twitter: http://twitter.com/davsclaus
