I think it's unfair to test parallel processing with workloads as light as
these. In my production application, we use parallel processing to make
several SOAP or HTTP calls to various services concurrently. Of course,
setting up a thread pool just to log messages is going to perform worse
than the serial version.
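
To give a concrete picture, it's routes like the sketch below - two
independent remote calls per exchange, each taking tens or hundreds of
milliseconds - where parallelProcessing() earns its keep. This is only an
untested sketch; the endpoint URIs are placeholders I made up, not anything
from your test:

from("direct:quote")
    .multicast().parallelProcessing()
        // two hypothetical remote services, each assumed to block for a while
        .to("http4://inventory.example.com/api/stock",
            "http4://pricing.example.com/api/price")
    .end()
    .to("log:quote?level=INFO");

Run serially, that multicast costs roughly the sum of the two call latencies
per exchange; run in parallel, roughly the slower of the two, so the thread
handoff becomes noise by comparison.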

On 11 February 2016 at 17:12, Jan Zankowski <jan.zankow...@gmail.com> wrote:

> Hello,
>
> My application needs to process hundreds of messages per second per CPU.
> The absolute minimum is 200, and the more the better. Obviously, to do
> that, each message must take 5 milliseconds or less on average.
> Additionally, our Camel routes are fairly long and complex - messages go
> through perhaps 10-20 processors/endpoints before they are done.
>
> As a result of all this, the performance of individual Camel components on
> each route is really important to us.
>
> I wanted to ask for performance tips, especially regarding the most
> commonly used constructs. For example, we would be very curious to know
> which of the expression languages is generally the fastest, or whether
> any of the standard EIP elements in Camel (wiretap, splitter, filter,
> etc.) is known to be particularly slow or fast.
>
> I will show two examples of such lesser-known but helpful tips that I
> discovered today (unfortunately only after we had used the underperforming
> variants in our code...):
>
> My test setup:
>
>   System.out.println("start: " + System.currentTimeMillis());
>   for (int i = 0; i < 10000; i++) {
>     producerTemplate.sendBody("direct:test", someObject);
>   }
>   System.out.println("end:   " + System.currentTimeMillis());
>
>   [...]
>
>   from("direct:test").<some Camel construct under
> test>.to("log:test?level=OFF");
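>
>   (For reference, the harness around these fragments is just the usual
>   standalone setup inside a plain main method, roughly like this - the
>   value in the map is arbitrary:)
>
>   CamelContext context = new DefaultCamelContext();
>   context.addRoutes(new RouteBuilder() {
>     @Override
>     public void configure() {
>       from("direct:test")
>         // <some Camel construct under test>
>         .to("log:test?level=OFF");
>     }
>   });
>   context.start();
>   ProducerTemplate producerTemplate = context.createProducerTemplate();
>
>   Map<String, Object> someObject = new HashMap<>();
>   someObject.put("key", "value");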
>
> Using this, I got the following results when someObject is a HashMap with a
> single entry with key "key":
>
> 1. Measure route baseline
>
> from("direct:test").to("log:test?level=OFF");
>
> start: 1455231094057
> end:   1455231094896
> => 0.08ms per route execution
>
> 2. Simple language
>
> from("direct:test").setBody().simple("${body[key]}").to("log:test
> ?level=OFF");
>
> start: 1455230951614
> end:   1455230953302
> => 0.16ms per route execution
>
> 3. SPEL, same operation as in 2.
>
> from("direct:test").setBody().spel("#{request.body.get(\"key\")}").to("
> log:test?level=OFF");
>
> start: 1455230880059
> end:   1455230884194
> => 0.41ms per route execution
>
> Conclusion: simple language is probably generally faster than SPEL.
>
> 4. Multicast, without parallel processing.
>
> from("direct:test").multicast().setBody().simple("${body[key]}").to("
> log:test?level=OFF");
>
> start: 1455231272517
> end:   1455231274849
> => 0.23ms per route execution
>
> 5. Multicast, parallel processing, otherwise same as 4.
>
> from("direct:test
> ").multicast().parallelProcessing().setBody().simple("${body[key]}").to("
> log:test?level=OFF");
>
> start: 1455231408937
> end:   1455231427266
> => 1.83ms per route execution
>
> Conclusion: parallel processing on multicast can really degrade performance
> - contrary to my expectations. On second thought, I imagine the context
> switching it requires takes the blame here.
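>
> (A rough way to see that overhead outside of Camel - not a proper
> benchmark, and the pool size here is arbitrary - would be to hand the same
> trivial lookup to an executor and wait for it, versus doing it inline:)
>
>   ExecutorService pool = Executors.newFixedThreadPool(10);
>   Map<String, String> body = Collections.singletonMap("key", "value");
>
>   long t0 = System.currentTimeMillis();
>   for (int i = 0; i < 10000; i++) {
>     body.get("key");                      // trivial work, done inline
>   }
>   long t1 = System.currentTimeMillis();
>
>   for (int i = 0; i < 10000; i++) {
>     // same work handed to a pool thread, then waited on
>     CompletableFuture.supplyAsync(() -> body.get("key"), pool).join();
>   }
>   long t2 = System.currentTimeMillis();
>
>   System.out.println("inline:   " + (t1 - t0) + " ms");
>   System.out.println("executor: " + (t2 - t1) + " ms");
>   pool.shutdown();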
>
> Any more such simple tips will be very welcome!
>
> Thanks,
> Jan
>



-- 
Matt Sicker <boa...@gmail.com>
