On Aug 1, 2008, at 11:44 AM, Brice wrote:
OK, thx for the update. Indeed our application will have to handle a big
load, for two reasons:
 first: our application and the server will have to handle a lot of
requests (~50, possibly more) per second
 second: the documents may weigh from 200KB to several MB

I guess we will switch to 2.1.2. As this release is not yet ready (7 issues left), do you have an ETA, and if not, do you think the current code on
trunk is stable enough to go into production?

I wouldn't count on JIRA for this. For patch releases, I tend to do "time based" releases instead of feature/issue based releases. Generally, I try to do a patch release about every 8 weeks or so. Whatever JIRA items have been fixed in those 8 weeks are included. Thus, if you REALLY want something included, submit a patch. :-) Otherwise it might not get done in time and may get pushed out.

2.1.1 was done in mid June, so probably mid August.


And finally, just for my own information: when you talk about streaming, I'm guessing the gzipped payload is streamed to the "xml handler" using StAX
to map the objects with JAXB. Am I correct?


It's mostly for the writing side of things. Normally, the path is basically:
JAXB -> Stax -> HttpOutputStream
While JAXB is writing to the Stax writer, it's streaming it directly onto the wire.

Ian's patch had this cool little feature where messages less than 1K in size (configurable) don't get gzip compressed. However, to determine whether it's under 1K, he basically did: JAXB -> Stax -> Cache (byte[] for the first 64K, file on disk if larger)
then check the cache size and do:
(big) Cache -> GZipOutputStream -> HttpOutputStream
or
(little) Cache -> HttpOutputStream

That's a bit inefficient, as it involves using the disk for the large messages, etc.
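That "buffer then send" flow could be sketched roughly like this (a minimal illustration only; the class and method names here are made up, and the real patch spills the cache to a temp file past 64K, whereas this sketch just takes a byte[]):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

// Hypothetical sketch: the whole message has already been cached,
// and the cache size decides whether gzip is applied.
class CachedGzipSender {
    static void send(byte[] cache, OutputStream http, int threshold) throws IOException {
        if (cache.length > threshold) {
            // big: gzip the entire cached message onto the wire
            GZIPOutputStream gzip = new GZIPOutputStream(http);
            gzip.write(cache);
            gzip.close();
        } else {
            // little: send the cached bytes uncompressed
            http.write(cache);
        }
    }
}
```

The inefficiency Dan describes is visible here: nothing reaches the wire until the entire message has been cached.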

The stuff on trunk sets it up like:

JAXB -> Stax -> 1K buffer (-> GZIPOut if 1k exceeded) -> HttpOutputStream

If the 1K buffer never overflows, it's flushed right out to the HttpOutputStream at the end. If it does overflow, the gzip stream is injected and from there on, all writing is immediately streamed out via gzip. The large buffer/cache of the whole message does not take place.
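The trunk behavior described above could be sketched as an OutputStream that buffers up to the threshold and injects gzip only on overflow (a minimal illustration; the class and names here are invented, not CXF's actual implementation):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

// Hypothetical sketch of the trunk approach: buffer up to a threshold,
// inject the gzip stream only once the threshold is exceeded.
class ThresholdGzipOutputStream extends OutputStream {
    private final OutputStream target;   // e.g. the HttpOutputStream
    private final int threshold;
    private ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private OutputStream gzip;           // non-null once the threshold is exceeded

    ThresholdGzipOutputStream(OutputStream target, int threshold) {
        this.target = target;
        this.threshold = threshold;
    }

    @Override
    public void write(int b) throws IOException {
        if (gzip != null) {
            gzip.write(b);               // already overflowed: stream straight out via gzip
            return;
        }
        buffer.write(b);
        if (buffer.size() > threshold) {
            // Overflow: inject the gzip stream, replay the small buffer through it,
            // then stream everything else directly. No whole-message cache.
            gzip = new GZIPOutputStream(target);
            buffer.writeTo(gzip);
            buffer = null;
        }
    }

    @Override
    public void close() throws IOException {
        if (gzip != null) {
            gzip.close();                // large message: finish the gzip trailer
        } else {
            buffer.writeTo(target);      // small message: flush the plain bytes at the end
        }
    }
}
```

Note the key property: once the buffer overflows, bytes go onto the wire as they are written, so the whole-message cache (and the disk) never comes into play.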

Does that help explain it?


Dan





Thx :)


On Fri, Aug 1, 2008 at 17:19, Daniel Kulp <[EMAIL PROTECTED]> wrote:


On Aug 1, 2008, at 6:38 AM, Ian Roberts wrote:

Brice wrote:

OK thx Google,
I found out that the GZip support is fixed for the 2.1.2 release:
https://issues.apache.org/jira/browse/CXF-1387
However I wonder if this patch is available in 2.0.8.


Not that I'm aware of, but if you're prepared to build CXF from source you should just be able to apply the patch from the JIRA issue directly onto the 2.0.x source code (I originally wrote the patch against the 2.0 branch as
that's what I use).


Keep in mind, what's on trunk is a bit different than your original patch. The original patch was an EXCELLENT starting point, but what's on trunk works quite a bit better for large messages in that it can stream instead of doing a "buffer then send" thing. That's partially why I didn't push it to 2.0.x: the classes used for the threshold management and such aren't available there, so there is quite a bit more work to find everything that
would need to be merged.

---
Daniel Kulp
[EMAIL PROTECTED]
http://www.dankulp.com/blog







--
Bryce




