Using a ByteArrayInputStream as the payload and a "warmed up" system (I sent
100 messages before the measurement), I got the following results with a
payload of 2046 bytes:

testUnmarshallConcurrent() took 2202ms (5196ms with a String payload and a
cold system)
testUnmarshallFallbackConcurrent() took 1224ms (2761ms with a String payload
and a cold system)

testMarshallConcurrent() took 875ms
testMarshallFallbackConcurrent() took 999ms
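
For reference, the measurement roughly looked like this (a minimal sketch, not
the actual test code: the endpoint name, iteration count, and the Camel
ProducerTemplate setup are assumptions, and createPayload() is the helper from
Dan's earlier mail quoted below):

    // assumes a Camel ProducerTemplate 'template' and Dan's createPayload() helper
    byte[] payloadBytes = createPayload().getBytes("UTF-8");

    // warm up: 100 messages before the measurement
    for (int i = 0; i < 100; i++) {
        template.sendBody("direct:unmarshal", new java.io.ByteArrayInputStream(payloadBytes));
    }

    // measured run: each message is sent as a fresh ByteArrayInputStream
    long start = System.currentTimeMillis();
    for (int i = 0; i < 1000; i++) {
        template.sendBody("direct:unmarshal", new java.io.ByteArrayInputStream(payloadBytes));
    }
    System.out.println("took " + (System.currentTimeMillis() - start) + "ms");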

Best,
Christian

On Thu, Apr 5, 2012 at 11:18 PM, Daniel Kulp <dk...@apache.org> wrote:

>
> Great numbers!   Huge improvement!
>
> > Still wondering why the FallbackTypeConverter is faster than the
> > JaxbDataFormat...
>
> The UnmarshalProcessor which is invoked in this case does a:
>
>        InputStream stream = ExchangeHelper.getMandatoryInBody(exchange, InputStream.class);
>
> since the DataFormat things require a stream.   The Fallback stuff can keep
> the payload as a String and create the reader from that.   Thus, part of the
> time of the JaxbDataFormat tests is constantly converting from String to
> InputStream.
>
> Probably should update the tests to use a ByteArrayInputStream as the
> payload or something to make them more comparable.
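>
> (Roughly, for illustration: a sketch of the per-exchange conversion that a
> String payload forces, versus handing the route a ByteArrayInputStream up
> front; 'payload' and 'template' here are placeholders, not the actual test
> code.)
>
>        // what the type converter effectively has to do on every exchange when the body is a String
>        java.io.InputStream stream = new java.io.ByteArrayInputStream(payload.getBytes("UTF-8"));
>
>        // versus sending the body as a stream to begin with, so no per-exchange conversion is needed
>        template.sendBody("direct:unmarshal", new java.io.ByteArrayInputStream(payload.getBytes("UTF-8")));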
>
>
> Dan
>
>
>
>
> On Thursday, April 05, 2012 11:05:46 PM Christian Müller wrote:
> > I did a few tests and recorded the fastest one:
> >
> > Length: 2046:
> > =============
> > Before:
> > testUnmarshallConcurrent() took 14122ms
> > testUnmarshallFallbackConcurrent() took 8479ms
> >
> > After:
> > testUnmarshallConcurrent() took 5196ms
> > testUnmarshallFallbackConcurrent() took 2761ms
> >
> > Length: 104
> > ===========
> > Before:
> > testUnmarshallConcurrent() took 7281ms
> > testUnmarshallFallbackConcurrent() took 4815ms
> >
> > After:
> > testUnmarshallConcurrent() took 2767ms
> > testUnmarshallFallbackConcurrent() took 2458ms
> >
> > Still wondering why the FallbackTypeConverter is faster than the
> > JaxbDataFormat...
> >
> > Best,
> > Christian
> >
> > On Thu, Apr 5, 2012 at 6:55 PM, Daniel Kulp <dk...@apache.org> wrote:
> > > On Thursday, April 05, 2012 06:46:43 PM Christian Müller wrote:
> > > > Hi Dan,
> > > >
> > > > great work! I got the following results:
> > > >
> > > > testUnmarshallConcurrent() took 5898ms
> > > > testUnmarshallFallbackConcurrent() took 2477ms
> > > > testMarshallFallbackConcurrent() took 1728ms
> > > > testMarshallConcurrent() took 1674ms
> > > >
> > > > I'm wondering why 'testUnmarshallConcurrent()' is significantly slower
> > > > than 'testUnmarshallFallbackConcurrent()'...
> > >
> > > testUnmarshallFallbackConcurrent still uses your "old" tiny payload.  I
> > > only updated testUnmarshallConcurrent to use a much larger payload (I
> > > think around 2K).   Feel free to play with the payload sizes and such to
> > > get some comparisons.
> > >
> > >
> > > Dan
> > >
> > > > Best,
> > > > Christian
> > > >
> > > > > On Thu, Apr 5, 2012 at 4:53 PM, Daniel Kulp <dk...@apache.org> wrote:
> > > > > Christian,
> > > > >
> > > > > I just committed some updates to the jaxb component to completely flip
> > > > > all the unmarshalling over to StAX which completely removes the need
> > > > > for the pooling and locks and such.   Can you give it a quick run on
> > > > > your box and see how the performance looks?   I'd also be curious about
> > > > > the difference with the larger XML messages created with the code below.
> > > > >
> > > > > One note:  right now, this REQUIRES Woodstox to be at all threadsafe.
> > > > > I'm going to try and update the StaxConverter to work better with the
> > > > > in-jdk stax parser.   There is a TON of code in CXF I can use; it just
> > > > > depends on whether it's easier to copy the code over or somehow shade
> > > > > it in or something.
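> > > > >
> > > > > (For reference, the general shape of StAX-based unmarshalling; just a
> > > > > sketch, not the actual jaxb component code, and 'jaxbContext' and
> > > > > 'inputStream' are placeholders:)
> > > > >
> > > > >     // create a StAX reader (Woodstox, if it is on the classpath) and hand it to JAXB;
> > > > >     // a fresh Unmarshaller per call means no pooling or locking is needed around it
> > > > >     XMLInputFactory factory = XMLInputFactory.newInstance();
> > > > >     XMLStreamReader reader = factory.createXMLStreamReader(inputStream);
> > > > >     try {
> > > > >         Unmarshaller unmarshaller = jaxbContext.createUnmarshaller();
> > > > >         Object result = unmarshaller.unmarshal(reader);
> > > > >     } finally {
> > > > >         reader.close();
> > > > >     }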
> > > > >
> > > > > Anyway, can you give it a quick run with Woodstox and let me know how
> > > > > it goes?
> > > > >
> > > > > Dan
> > > > >
> > > > > On Wednesday, April 04, 2012 05:54:35 PM Daniel Kulp wrote:
> > > > > > On Wednesday, April 04, 2012 05:21:25 PM Daniel Kulp wrote:
> > > > > > > On Wednesday, April 04, 2012 11:09:14 PM Christian Müller wrote:
> > > > > > > > I just committed the last piece of code for [1] which improves the
> > > > > > > > performance of XML unmarshalling with JAXB. I would like to see this
> > > > > > > > improvement also in Camel 2.9.2 and Camel 2.8.5. At present I'm waiting
> > > > > > > > for the user's response on whether this solution outperforms his custom
> > > > > > > > pooling solution (using commons-pool).
> > > > > > > > I would also appreciate it if you could have a look at the changes.
> > > > > > >
> > > > > > > I'll take a closer look tomorrow, but my initial look at the code
> > > > > > > definitely raises concerns in the multi-threaded case.   Putting a lock
> > > > > > > around the unmarshal is really going to cause a performance hit in
> > > > > > > multi-threaded cases, particularly with larger payloads.   I'll play a
> > > > > > > bit with it tomorrow to see what I can do with it.
> > > > > >
> > > > > > Yea...  a very quick test by creating the payload string via:
> > > > > >     private int fooBarSize = 200;
> > > > > >
> > > > > >     public String createPayload() throws Exception {
> > > > > >         Foo foo = new Foo();
> > > > > >         for (int x = 0; x < fooBarSize; x++) {
> > > > > >             Bar bar = new Bar();
> > > > > >             bar.setName("Name: " + x);
> > > > > >             bar.setValue("value: " + x);
> > > > > >             foo.getBarRefs().add(bar);
> > > > > >         }
> > > > > >         Marshaller m = JAXBContext.newInstance(Foo.class, Bar.class).createMarshaller();
> > > > > >         StringWriter writer = new StringWriter();
> > > > > >         m.marshal(foo, writer);
> > > > > >         return writer.toString();
> > > > > >     }
> > > > > >
> > > > > > (ends up just over 8K in size)
> > > > > >
> > > > > > and then removing the locks and creating a new unmarshaller per request
> > > > > > drops the time from 12 seconds to 5.5 seconds on my machine
> > > > > > (testUnmarshallConcurrent).   That's huge.   I'll look a bit more
> > > > > > tomorrow.
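> > > > > >
> > > > > > (For illustration only, a minimal sketch of the "unmarshaller per request"
> > > > > > pattern rather than the actual change; the shared, thread-safe piece is the
> > > > > > JAXBContext, while each request gets its own Unmarshaller:)
> > > > > >
> > > > > >     // the JAXBContext is expensive but thread-safe: create it once and share it
> > > > > >     private static final JAXBContext CONTEXT = createContext();
> > > > > >
> > > > > >     private static JAXBContext createContext() {
> > > > > >         try {
> > > > > >             return JAXBContext.newInstance(Foo.class, Bar.class);
> > > > > >         } catch (JAXBException e) {
> > > > > >             throw new ExceptionInInitializerError(e);
> > > > > >         }
> > > > > >     }
> > > > > >
> > > > > >     // Unmarshaller is not thread-safe; a fresh one per request avoids the lock
> > > > > >     public Foo unmarshal(String payload) throws JAXBException {
> > > > > >         Unmarshaller u = CONTEXT.createUnmarshaller();
> > > > > >         return (Foo) u.unmarshal(new StringReader(payload));
> > > > > >     }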
> > > > >
> > > > > > Dan
> > > > >
> > > > > --
> > > > > Daniel Kulp
> > > > > dk...@apache.org - http://dankulp.com/blog
> > > > > Talend Community Coder - http://coders.talend.com
> > >
> > > --
> > > Daniel Kulp
> > > dk...@apache.org - http://dankulp.com/blog
> > > Talend Community Coder - http://coders.talend.com
> --
> Daniel Kulp
> dk...@apache.org - http://dankulp.com/blog
> Talend Community Coder - http://coders.talend.com
>
>
