DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG 
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
<http://nagoya.apache.org/bugzilla/show_bug.cgi?id=6463>.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND 
INSERTED IN THE BUG DATABASE.

http://nagoya.apache.org/bugzilla/show_bug.cgi?id=6463

Severe Performance problems (Memory + Time taken) compared to SAXON





------- Additional Comments From [EMAIL PROTECTED]  2002-02-14 20:04 -------
Persistent memory demand is NOT that large in the current CVS code, probably as 
a result of our changes in how storage for RTFs is managed. We can now run your 
test case to completion _without_ requiring the -Xms and -Xmx switches. In fact, 
increasing heap size unnecessarily may actually hurt your performance by 
scattering data all over memory, defeating the processor's cache 
("locality-of-reference" effects)... and you might even find that DECREASING 
the maximum heap from its default value is worth trying, despite the increased 
frequency of GC passes.
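For anyone experimenting with the -Xms/-Xmx advice above, here is a minimal sketch (plain java.lang.Runtime API, nothing Xalan-specific) for checking how much heap the JVM has actually reserved under a given set of switches:

```java
// Prints the JVM's current heap figures. Run it with and without
// explicit heap switches (e.g. "java -Xmx32m HeapCheck" vs plain
// "java HeapCheck") to see what the defaults really give you.
// The 32m value is just an example, not a recommendation.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();     // ceiling (roughly the -Xmx value)
        long total = rt.totalMemory(); // heap currently reserved from the OS
        long free = rt.freeMemory();   // unused portion of the reserved heap
        System.out.println("max   = " + (max / (1024 * 1024)) + " MB");
        System.out.println("total = " + (total / (1024 * 1024)) + " MB");
        System.out.println("used  = " + ((total - free) / (1024 * 1024)) + " MB");
    }
}
```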

However, JProbe reports that we're seeing massive creation and discarding of 
WalkingIterators, FilterExprWalkers, int[] arrays (all at about the same rate, 
which probably means they're being used as a set) and XNumbers (at about twice 
the rate of the others). It's approaching 100MB of just plain storage churn in 
the current run, and it isn't done yet... So one thing to look at is whether all 
of these objects are required, whether their creation/destruction can be made 
less expensive... and whether some of them can be optimized away entirely.
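One possible way to make that creation/destruction cheaper is to recycle instances through a small free list instead of discarding them on every use. The sketch below uses a made-up Node class purely for illustration; it is not one of Xalan's actual WalkingIterator/FilterExprWalker/XNumber types, and whether pooling pays off there would need profiling:

```java
import java.util.ArrayDeque;

// Hypothetical object pool: callers acquire() an instance, use it,
// and release() it back instead of letting it become garbage.
public class NodePool {
    static final class Node {            // stand-in for a churned type
        int value;
        Node init(int v) { value = v; return this; }
    }

    private final ArrayDeque<Node> free = new ArrayDeque<>();

    Node acquire(int v) {
        Node n = free.poll();            // reuse a released instance if any
        return (n == null ? new Node() : n).init(v);
    }

    void release(Node n) {               // caller must not touch n afterwards
        free.push(n);
    }

    public static void main(String[] args) {
        NodePool pool = new NodePool();
        Node a = pool.acquire(1);
        pool.release(a);
        Node b = pool.acquire(2);
        System.out.println(a == b);      // prints "true": same instance, recycled
    }
}
```

The usual caveat applies: a pool trades GC pressure for bookkeeping and stale-reference bugs, so it is only worth it for objects allocated at the rates JProbe is reporting here.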

(Classic JProbe memory-churn plot -- linear growth with time until GC runs, then 
sudden drop-off to almost nothing. A single GC request, manually requested 
partway through merging hsqldb.xconf, reports that it recovered storage from 
over 1.8 million discarded objects, adding up to roughly 1.2MB of heap space. 
You can see why I suggest not giving the JVM more storage than it legitimately 
needs...!)
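The manual-GC measurement can be roughly reproduced with the standard Runtime/System.gc() calls: sample used heap, hint a collection, sample again. Note that System.gc() is only a hint to the VM, so treat the delta as indicative (it can even come out negative), and the int[8] churn loop below is just a stand-in for the iterator/XNumber allocation pattern described above:

```java
// Crude probe of how much storage a GC pass recovers after a burst
// of short-lived allocations.
public class GcProbe {
    static long sink; // keeps the JIT from eliminating the dead allocations

    static long used() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        // Generate garbage: a million small arrays, discarded immediately.
        for (int i = 0; i < 1_000_000; i++) {
            int[] scratch = new int[8];
            scratch[0] = i;
            sink += scratch[0];
        }
        long before = used();
        System.gc();                       // a hint, not a guarantee
        long after = used();
        System.out.println("recovered ~" + ((before - after) / 1024) + " KB");
    }
}
```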
