I'm chasing down a bug in my application where multiple threads were reading and
caching the same filter (same very common term, big index) and caused an Out of
Memory exception when I would expect there to be plenty of memory to spare.
There are a number of layers to this app to investigate (I was using the
XMLQueryParser and the CachedFilter tag too) but CachingWrapperFilter
underpins all this stuff, and I was led to this code in it...
public BitSet bits(IndexReader reader) throws IOException {
  if (cache == null) {
    cache = new WeakHashMap();
  }

  synchronized (cache) {  // check cache
    BitSet cached = (BitSet) cache.get(reader);
    if (cached != null) {
      return cached;
    }
  }

  final BitSet bits = filter.bits(reader);

  synchronized (cache) {  // update cache
    cache.put(reader, bits);
  }

  return bits;
}
The first observation is - why the use of "final" for the variable "bits"?
Would there be any side-effects to this?
Perhaps more worryingly, I can see that multiple threads asking for the same
bitset simultaneously are likely to unnecessarily read the same data from the
same reader (but ultimately only one bitset should end up cached). My app only
had 2 simultaneous threads on the same reader, so I don't see how that accounts
for the large memory bloat I saw. In a high-traffic environment, though, I can
see multiple requests for a popular term getting bottlenecked here, each
creating the same bitset and causing an OOM error. It looks like this
multiple-load scenario could/should be avoided with some careful
synchronisation - a rough sketch of one possibility is below.
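Purely as an illustration (this is not the real Lucene class - the
SingleLoadCachingFilter name and the choice to synchronise the whole method
are my own assumptions), something like the following would guarantee only
one thread ever builds the bits for a given reader, at the cost of
serialising all callers behind one lock:

import java.io.IOException;
import java.util.BitSet;
import java.util.Map;
import java.util.WeakHashMap;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Filter;

// Hypothetical sketch only - not the actual CachingWrapperFilter.
public class SingleLoadCachingFilter extends Filter {

  private final Filter filter;
  private transient Map cache;

  public SingleLoadCachingFilter(Filter filter) {
    this.filter = filter;
  }

  // Holding the lock while filter.bits() runs means a second thread asking
  // for the same (uncached) reader blocks until the first thread has put
  // the BitSet in the cache, so the bits are only ever built once.
  // The obvious downside is that requests on *different* readers are also
  // serialised behind the same lock.
  public synchronized BitSet bits(IndexReader reader) throws IOException {
    if (cache == null) {
      cache = new WeakHashMap();
    }
    BitSet cached = (BitSet) cache.get(reader);
    if (cached == null) {
      cached = filter.bits(reader);
      cache.put(reader, cached);
    }
    return cached;
  }
}

A finer-grained variant could lock per reader rather than on the whole
filter, but I think even this coarse version would have stopped the duplicate
bitsets I suspect I'm seeing.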
Unfortunately I've been unable to reproduce my OOM problem outside of the live
environment, so I can't fully pinpoint my particular issue or the solution just
yet.
Thoughts?
Mark