I finished back merging and fixed a few checkstyle errors. The branch is
now ready for the release. I am running a full test overnight, and
assuming all goes well, as it should, I'll start the release builds in
the morning.
Hadrian
On 04/21/2012 08:56 PM, Hadrian Zbarcea wrote:
Ack. I'll start it in the morning. I'll be out of town in the afternoon; if I don't finish it by then, I'll continue in the evening.
Hadrian
On 04/21/2012 09:58 AM, Claus Ibsen wrote:
Hi
We have now got that camel-jaxb bug backported to the 2.8.x branch.
I think it would be a good time to start cutting a new 2.8.5 release.
On Wed, Apr 11, 2012 at 12:06 PM, Christian Müller <christian.muel...@gmail.com> wrote:
I still have to backport the JAXB Marshaller/Unmarshaller improvements. I will do it this evening.
Best,
Christian
Sent from a mobile device
On 04/11/2012 10:59 AM, Claus Ibsen <claus.ib...@gmail.com> wrote:
Hi
I fixed some of the known bugs and backported them to the 2.9 branch, so we will have those fixes in the release.
I added a new 2.9.3 version in JIRA and pushed a few tickets to that. There is one pending ticket I will look at now, but it's not a blocker for cutting a release.
https://issues.apache.org/jira/browse/CAMEL-5157
Also, was there something about a JVM property being added to the JAXB type converters that was copied from Apache CXF? If so, that JVM system property should be named with the org.apache.camel prefix, and we need to document it.
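To illustrate the naming convention being asked for (the property name below is made up for the example and is not a documented Camel option):

    // Illustration only: a JAXB converter option read from a JVM system property
    // under the org.apache.camel prefix. The property name here is hypothetical.
    public final class JaxbConverterOptionSketch {

        // Hypothetical property name, shown only to illustrate the naming convention.
        private static final String PRETTY_PRINT = "org.apache.camel.jaxb.prettyPrint";

        public static boolean prettyPrintEnabled() {
            // Boolean.getBoolean(...) is true only if the JVM system property exists
            // and equals "true", e.g. -Dorg.apache.camel.jaxb.prettyPrint=true
            return Boolean.getBoolean(PRETTY_PRINT);
        }
    }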
On Wed, Apr 11, 2012 at 12:09 AM, Christian Müller <christian.muel...@gmail.com> wrote:
The numbers are really good! Go ahead with the 2.9.2 release.
Best,
Christian
Sent from a mobile device
On 04/10/2012 05:44 PM, Hadrian Zbarcea <hzbar...@gmail.com> wrote:
Christian,
Are those numbers good? Are they blocking the release? I'd like to cut the release sometime this week.
Cheers,
Hadrian
On 04/06/2012 10:58 AM, Christian Müller wrote:
By using a ByteArrayInputStream as payload and a "warmed up" system (I sent 100 messages before the measurement to warm up the system), I got the following results with a payload of 2046 bytes:

testUnmarshallConcurrent() took 2202ms (5196ms by using a string as payload and a cold system)
testUnmarshallFallbackConcurrent() took 1224ms (2761ms by using a string as payload and a cold system)
testMarshallConcurrent() took 875ms
testMarshallFallbackConcurrent() took 999ms
Best,
Christian
On Thu, Apr 5, 2012 at 11:18 PM, Daniel Kulp <dk...@apache.org> wrote:
Great numbers! Huge improvement!
Still wondering why the FallbackTypeConverter is faster than the
JaxbDataFormat...
The UnmarshalProcessor which is invoked in this does a:

    InputStream stream = ExchangeHelper.getMandatoryInBody(exchange, InputStream.class);

since the DataFormat things require a stream. The Fallback stuff can keep the payload as a String and create the reader from that. Thus, part of the time of the JaxbDataFormat tests is constantly converting from String to InputStream.

Probably should update the tests to use a ByteArrayInputStream as the payload or something to make them more comparable.
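That change might look roughly like this (a sketch only, not the actual test; createPayload() is the payload generator quoted further down in this thread, and the ProducerTemplate and endpoint URI are assumed for illustration):

    import java.io.ByteArrayInputStream;
    import org.apache.camel.ProducerTemplate;

    // Sketch of the suggested test change (not the actual committed test):
    // build the XML once, then send a fresh ByteArrayInputStream per message so
    // the JaxbDataFormat route no longer pays for String -> InputStream
    // conversion inside the measured loop. createPayload() and the
    // "direct:unmarshal" endpoint are assumed for illustration.
    void sendPayloads(ProducerTemplate template, int messageCount) throws Exception {
        byte[] payloadBytes = createPayload().getBytes("UTF-8");
        for (int i = 0; i < messageCount; i++) {
            template.sendBody("direct:unmarshal", new ByteArrayInputStream(payloadBytes));
        }
    }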
Dan
On Thursday, April 05, 2012 11:05:46 PM Christian Müller wrote:
I did a few tests and recorded the fastest one:
Length: 2046
============
Before:
testUnmarshallConcurrent() took 14122ms
testUnmarshallFallbackConcurrent() took 8479ms
After:
testUnmarshallConcurrent() took 5196ms
testUnmarshallFallbackConcurrent() took 2761ms

Length: 104
===========
Before:
testUnmarshallConcurrent() took 7281ms
testUnmarshallFallbackConcurrent() took 4815ms
After:
testUnmarshallConcurrent() took 2767ms
testUnmarshallFallbackConcurrent() took 2458ms
Still wondering why the FallbackTypeConverter is faster than the
JaxbDataFormat...
Best,
Christian
On Thu, Apr 5, 2012 at 6:55 PM, Daniel Kulp <dk...@apache.org> wrote:
On Thursday, April 05, 2012 06:46:43 PM Christian Müller wrote:
Hi Dan,
great work! I got the following results:
testUnmarshallConcurrent() took 5898ms
testUnmarshallFallbackConcurrent() took 2477ms
testMarshallFallbackConcurrent() took 1728ms
testMarshallConcurrent() took 1674ms
I'm wondering why 'testUnmarshallConcurrent()' is significantly slower than 'testUnmarshallFallbackConcurrent()'...
testUnmarshallFallbackConcurrent still uses your "old" tiny payload. I only updated testUnmarshallConcurrent to use a much larger payload (I think around 2K). Feel free to play with the payload sizes and such to get some comparisons.
Dan
Best,
Christian
On Thu, Apr 5, 2012 at 4:53 PM, Daniel Kulp <dk...@apache.org> wrote:
Christian,
I just committed some updates to the jaxb component to completely flip all the unmarshalling over to StAX, which removes the need for the pooling and locks and such. Can you give it a quick run on your box and see how the performance looks? I'd also be curious about the difference with the larger XML messages created with the code below.

One note: right now, this REQUIRES Woodstox to be at all threadsafe. I'm going to try and update the StaxConverter to work better with the in-JDK StAX parser. There is a TON of code in CXF I can use; it just depends on whether it's easier to copy the code over or somehow shade it in or something.

Anyway, can you give it a quick run with Woodstox and let me know how it goes?
Dan
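StAX-based unmarshalling along those lines might look roughly like this (a sketch only, not the code committed to camel-jaxb; the JAXBContext is assumed to be created once by the caller):

    import java.io.StringReader;
    import javax.xml.bind.JAXBContext;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamReader;

    // Rough sketch of StAX-based unmarshalling; NOT the code committed to
    // camel-jaxb. The JAXBContext is created once by the caller and reused
    // (it is thread-safe); the Unmarshaller and XMLStreamReader are created
    // per call because they are not. With Woodstox on the classpath,
    // XMLInputFactory.newInstance() picks up its implementation.
    public final class StaxUnmarshalSketch {

        private final JAXBContext context;

        public StaxUnmarshalSketch(JAXBContext context) {
            this.context = context;
        }

        public Object unmarshal(String xml) throws Exception {
            XMLStreamReader reader = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new StringReader(xml));
            try {
                return context.createUnmarshaller().unmarshal(reader);
            } finally {
                reader.close();
            }
        }
    }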
On Wednesday, April 04, 2012 05:54:35 PM Daniel Kulp wrote:
On Wednesday, April 04, 2012 05:21:25 PM Daniel Kulp wrote:
On Wednesday, April 04, 2012 11:09:14 PM Christian Müller wrote:
I just committed the last piece of code for [1], which improves the performance of XML unmarshalling with JAXB. I would like to see this improvement also in Camel 2.9.2 and Camel 2.8.5. At present I'm waiting for the user's response on whether this solution outperforms his custom pooling solution (using commons-pool).

I would also appreciate it if you could have a look at the changes.
I'll take a closer look tomorrow, but my initial look at the code definitely raises concerns in the multi-threaded case. Putting a lock around the unmarshal is really going to cause a performance hit in multi-threaded cases, particularly with larger payloads. I'll play a bit with it tomorrow to see what I can do with it.
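The pattern being questioned is roughly the following (a simplified sketch, not the actual change under review):

    import java.io.StringReader;
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.Unmarshaller;

    // Simplified sketch of the locking pattern the concern is about (not the
    // actual patch): one shared Unmarshaller guarded by a lock. The lock is
    // needed because Unmarshaller is not thread-safe, but it serializes every
    // concurrent unmarshal call, and the time spent holding it grows with the
    // payload size.
    public final class SharedLockedUnmarshalSketch {

        private final Unmarshaller unmarshaller;

        public SharedLockedUnmarshalSketch(JAXBContext context) throws Exception {
            this.unmarshaller = context.createUnmarshaller();
        }

        public synchronized Object unmarshal(String xml) throws Exception {
            return unmarshaller.unmarshal(new StringReader(xml));
        }
    }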
Yea... a very quick test by creating the payload string via:
    // Builds the test payload: a Foo holding fooBarSize Bar entries,
    // marshalled to XML (just over 8K for 200 entries).
    private int fooBarSize = 200;

    public String createPayload() throws Exception {
        Foo foo = new Foo();
        for (int x = 0; x < fooBarSize; x++) {
            Bar bar = new Bar();
            bar.setName("Name: " + x);
            bar.setValue("value: " + x);
            foo.getBarRefs().add(bar);
        }
        Marshaller m = JAXBContext.newInstance(Foo.class, Bar.class).createMarshaller();
        StringWriter writer = new StringWriter();
        m.marshal(foo, writer);
        return writer.toString();
    }
(ends up just over 8K in size)
and then removing the locks and creating a new unmarshaller per request drops the time from 12 seconds to 5.5 seconds on my machine (testUnmarshallConcurrent). That's huge. I'll look a bit more tomorrow.
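The faster variant described here would look roughly like this (a sketch assuming only the JAXBContext is cached; Foo and Bar are the test types from the payload code above, and this is not the committed change):

    import java.io.StringReader;
    import javax.xml.bind.JAXBContext;

    // Sketch of the "no lock, new unmarshaller per request" variant (not the
    // committed change). JAXBContext is thread-safe and expensive, so it is
    // created once and reused; the cheap Unmarshaller is created per call, so
    // concurrent requests no longer queue behind a lock. Foo and Bar are the
    // test types from the payload generator above.
    public final class PerRequestUnmarshalSketch {

        private final JAXBContext context;

        public PerRequestUnmarshalSketch() throws Exception {
            this.context = JAXBContext.newInstance(Foo.class, Bar.class);
        }

        public Object unmarshal(String xml) throws Exception {
            return context.createUnmarshaller().unmarshal(new StringReader(xml));
        }
    }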
Dan
--
Daniel Kulp
dk...@apache.org - http://dankulp.com/blog
Talend Community Coder - http://coders.talend.com
--
Hadrian Zbarcea
Principal Software Architect
Talend, Inc
http://coders.talend.com/
http://camelbot.blogspot.com/
--
Claus Ibsen
-----------------
CamelOne 2012 Conference, May 15-16, 2012: http://camelone.com
FuseSource
Email: cib...@fusesource.com
Web: http://fusesource.com
Twitter: davsclaus, fusenews
Blog: http://davsclaus.blogspot.com/
Author of Camel in Action: http://www.manning.com/ibsen/