Sylvain Wallez wrote:

> AFAIK, BucketMaps are used as soon as a component is looked up, and 
> getting a page from the cache shouldn't reduce the number of lookups 
> much, since the pipeline has to be built to get the cache key and validity.

True, but who is 'creating' those new BucketMap$Node instances that are 
later garbage collected?

> <thinking-loudly>
> What could save some lookups is to have more ThreadSafe components, 
> including pipeline components. For example, a generator could 
> theoretically be threadsafe (it has mainly one generate() method), but 
> the fact that setup() and generate() are separate currently prevents this.
> 
> Also we have to consider that component lookup is more costly than 
> instantiating a small object. Knowing this, some transformers and 
> serializers can be thought of as factories of some lightweight content 
> handlers that do the actual job. These transformers and serializers 
> could then also be made ThreadSafe and thus avoid per-request lookup.
> 
> This would require some new interfaces, which should coexist with the 
> old ones to ensure backwards compatibility.
> 
> Thoughts ?
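To make the factory idea concrete, such a component might look roughly 
like this (all names here are hypothetical, just a sketch of the pattern, 
not an existing Cocoon interface):

```java
// Sketch only: the component itself is stateless, so one shared
// (ThreadSafe) instance can serve all requests; per-request state lives
// in a cheap, short-lived handler object instead.
final class UpperCaseSerializer {

    // Factory method: creating this small object per request is cheaper
    // than looking the component up from the container each time.
    RequestHandler newHandler() {
        return new RequestHandler();
    }

    // Lightweight per-request worker that does the actual job.
    static final class RequestHandler {
        private final StringBuilder out = new StringBuilder();

        // Mimics ContentHandler.characters(): receives SAX character data.
        void characters(char[] ch, int start, int len) {
            out.append(ch, start, len);
        }

        String result() {
            return out.toString().toUpperCase();
        }
    }
}
```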

I don't think lookup is that expensive.

It is true that the JVM is optimized for object creation and GC, but 
extensive stress tests run by one of the biggest cell phone companies in 
Europe (I can't tell you which one, sorry) showed significant pauses in 
processing due to GC kicking in.

We have discovered through profiling that each request handled by Cocoon 
generates a large amount of garbage: these are all the payloads of the SAX 
events that must be generated, passed along, and garbage collected at the end.

I've started to think about how we can recycle those objects, but I think 
we are stretching the limit of what we can do inside the JVM, since 
pauses in JVM execution due to GC are probably a problem with the GC 
algorithm rather than with poor resource use on our side.
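The kind of recycling I have in mind could be sketched like this (a 
minimal, hypothetical pool for the per-event character buffers; not an 
actual Cocoon class):

```java
import java.util.ArrayDeque;

// Sketch only: instead of allocating a fresh buffer per SAX event and
// letting the GC reclaim it, the pipeline could borrow and return
// instances from a small pool. Hypothetical helper, not a Cocoon API.
final class CharBufferPool {
    private final ArrayDeque<char[]> free = new ArrayDeque<>();
    private final int bufferSize;

    CharBufferPool(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    // Borrow a buffer, reusing a recycled one when available.
    char[] acquire() {
        char[] buf = free.poll();
        return (buf != null) ? buf : new char[bufferSize];
    }

    // Return the buffer so the next request can reuse it instead of
    // generating garbage.
    void release(char[] buf) {
        if (buf.length == bufferSize) {
            free.push(buf);
        }
    }
}
```

Note that this only shifts the cost around; whether it beats a modern 
generational collector is exactly the open question.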

Personally, I would not go through backward-incompatible changes in 
interfaces just to avoid a few object lookups.

> </thinking-loudly>
> 
>> 3) Catalina seems to be spending 10% of the pipeline time. Having 
>> extensively profiled and carefully optimized a servlet engine (JServ) 
>> I can tell you that this is *WAY* too much. Catalina doesn't seem like 
>> the best choice to run a loaded servlet-based site (contact 
>> [EMAIL PROTECTED] if you want to do something about it: he's working on 
>> Jerry, a super-light servlet engine based on native APR and targeted 
>> especially for Apache 2.0)
> 
> www.betaversion.org has been down for several weeks now...

Don't tell me: my mail went down the drain with it :/ You should mail 
Pier directly if you need more info on that.

> I'm happy to hear that :-) The TreeProcessor was designed to be as fast 
> as possible, even if interpreted : pre-process everything that can be, 
> and pre-lookup components when they're ThreadSafe. Call stacks can be 
> impressive, but each frame performs very few computations.

Yep, profiling confirms that.

>> It's URI matching that is the thing that needs more work 
>> performance-wise.
>>
>> Don't get me wrong: my numbers indicate that URI matching takes from 3% 
>> to 8% of response time. Compared to the rest that's nothing, but since 
>> this is the only thing we are in total control of, this is where we 
>> should concentrate profiling efforts.
> 
> Do you mean the WildcardURIMatcher ?

Yes.

> Is this related to the matching 
> algorithm, or to the number of patterns that are to be tested for a 
> typical request handling ?

Don't know. The profiler adds the time spent in each class and method 
and sums them up. Anyway, that is the class in the org.apache.cocoon.* 
namespace where most time is spent (on average).
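For reference, the kind of matching involved can be illustrated with a 
toy matcher ('*' stays within a path segment, '**' crosses segments). 
This is an illustrative re-implementation of the general technique, not 
Cocoon's actual WildcardURIMatcher code:

```java
// Toy recursive wildcard matcher, sketch only:
//   '*'  matches any run of characters except '/'
//   '**' matches any run of characters including '/'
// Not the actual Cocoon algorithm; just illustrates the problem shape.
final class SimpleWildcard {

    static boolean matches(String pattern, String uri) {
        return match(pattern, 0, uri, 0);
    }

    private static boolean match(String p, int pi, String s, int si) {
        if (pi == p.length()) {
            return si == s.length();          // pattern exhausted: must consume all input
        }
        if (p.startsWith("**", pi)) {
            // '**' may absorb any suffix, '/' included.
            for (int k = si; k <= s.length(); k++) {
                if (match(p, pi + 2, s, k)) return true;
            }
            return false;
        }
        if (p.charAt(pi) == '*') {
            // '*' may absorb any run of characters, but stops at '/'.
            for (int k = si; k <= s.length(); k++) {
                if (match(p, pi + 1, s, k)) return true;
                if (k < s.length() && s.charAt(k) == '/') break;
            }
            return false;
        }
        // Literal character: must match exactly.
        return si < s.length() && p.charAt(pi) == s.charAt(si)
                && match(p, pi + 1, s, si + 1);
    }
}
```

Even this naive version shows why cost can come from either side: the 
recursion depth depends on the pattern, and the sitemap may test many 
patterns per request.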

>> Ok, that's it. Enough for a rainy swiss afternoon.
>>
>> Anyway, Cocoon is pretty optimized for what we could see. So let's be 
>> happy about it.
> 
> Have you compared the respective speeds of 2.0.x and 2.1 on the same 
> application? 

No, but performance tests like that should not be done at this level of 
granularity; they call for external load-stressing tools like JMeter and 
the like.

> It would be interesting to know whether 2.1 performs 
> better than its ancestor.

Yes, most definitely. Anyway, willing to take the challenge?

-- 
Stefano Mazzocchi                               <[EMAIL PROTECTED]>
--------------------------------------------------------------------


