Hi Mark,

Just out of curiosity, do you know the distribution of set bits in the terms
you have been trying to cache? Maybe this simple tip could help.
If you are lucky, as we were, the terms typically used for filters are good
candidates to sort your index by before indexing (once in a while); then,
with some sort of IntervalDocIdSet, you can reduce memory requirements
dramatically.
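To make the idea concrete: there is no stock IntervalDocIdSet in Lucene, so the class below is a hypothetical standalone sketch. Once the index is sorted so that documents matching a filter term sit in contiguous runs of doc ids, the whole filter collapses to a handful of [start, end) intervals instead of one bit per document.

```java
import java.util.Arrays;

// Hypothetical sketch, not an existing Lucene class: stores a filter as
// sorted, disjoint [start, end) doc-id intervals. Works only when the
// index has been sorted so matching docs are contiguous.
public class IntervalDocIdSet {
    private final int[] starts; // inclusive start of each interval
    private final int[] ends;   // exclusive end of each interval

    public IntervalDocIdSet(int[] starts, int[] ends) {
        this.starts = starts;
        this.ends = ends;
    }

    // True if docId falls inside any interval.
    public boolean contains(int docId) {
        int idx = Arrays.binarySearch(starts, docId);
        if (idx >= 0) return true;          // docId is an interval start
        int prev = -idx - 2;                // last interval starting before docId
        return prev >= 0 && docId < ends[prev];
    }

    // Two ints per interval instead of one bit per document.
    public long sizeInBytes() {
        return 8L * starts.length;
    }

    public static void main(String[] args) {
        // e.g. 14 million matching docs in one contiguous run after sorting:
        IntervalDocIdSet set =
            new IntervalDocIdSet(new int[]{0}, new int[]{14_000_000});
        System.out.println(set.contains(5));          // true
        System.out.println(set.contains(14_000_000)); // false
        System.out.println(set.sizeInBytes());        // 8
    }
}
```

Eight bytes per interval, versus roughly 8MB for a full bitset over a 64-million-document index, is where the dramatic saving comes from when the match set is a small number of runs.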

 

----- Original Message ----
From: markharw00d <[EMAIL PROTECTED]>
To: java-dev@lucene.apache.org
Sent: Tuesday, 19 February, 2008 9:20:02 AM
Subject: Re: Out of memory - CachingWrappperFilter and multiple threads

I now think the main issue here is that a busy JVM gets into trouble 
trying to find large free blocks of memory for large bitsets.
In my index of 64 million documents, ~8MB of contiguous free memory 
must be found for each bitset allocated. The terms I was trying to cache 
had 14 million entries, so the new DocIdSet alternatives to bitsets 
probably fare no better.
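[The arithmetic behind that ~8MB figure: a plain bitset needs one bit per document, so for the 64 million documents mentioned above the backing array alone works out as:

```java
public class BitSetSizing {
    public static void main(String[] args) {
        long numDocs = 64_000_000L;            // documents in the index
        long numWords = (numDocs + 63) / 64;   // 64 bits per long word
        long bytes = numWords * 8;             // one contiguous long[] allocation
        System.out.println(bytes);             // 8000000 -> ~8MB per cached filter
    }
}
```

And because the bits live in a single long[], the JVM must find that whole ~8MB as one contiguous block each time a filter is cached.]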

The JVM (Sun 1.5) doesn't seem to deal with these allocations well. 
Perhaps there's an obscure JVM option I can set to reserve a section of 
RAM for large allocations.
However, I wonder if we should help the JVM out a little here by having 
pre-allocated pools of BitSets/OpenBitSets that can be reserved and 
reused by the application. This would imply a change to filter classes 
so that, instead of constructing BitSets/OpenBitSets directly, they get 
them from a pool.
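[A minimal sketch of the pooling idea, assuming nothing about Lucene's filter internals; the class and method names are illustrative only. The point is that the big long[] arrays backing the bitsets are allocated once at startup, so a busy JVM never has to hunt for a fresh ~8MB contiguous block under load:

```java
import java.util.Arrays;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: a fixed-size pool of pre-allocated bit arrays.
// Filters would acquire() an array, fill it, and release() it when done,
// instead of constructing a new BitSet/OpenBitSet per request.
public class BitSetPool {
    private final BlockingQueue<long[]> pool;

    public BitSetPool(int poolSize, int numDocs) {
        pool = new ArrayBlockingQueue<long[]>(poolSize);
        int numWords = (numDocs + 63) >>> 6;  // one bit per doc, 64 per word
        for (int i = 0; i < poolSize; i++) {
            pool.add(new long[numWords]);     // allocated once, up front
        }
    }

    // Blocks until an array is free; caller must release() it afterwards.
    public long[] acquire() throws InterruptedException {
        long[] bits = pool.take();
        Arrays.fill(bits, 0L);                // clear the previous filter's bits
        return bits;
    }

    public void release(long[] bits) {
        pool.offer(bits);
    }

    public static void main(String[] args) throws InterruptedException {
        BitSetPool pool = new BitSetPool(2, 64_000_000);
        long[] bits = pool.acquire();  // reused memory, no fresh allocation
        // ... set bits for the current filter here ...
        pool.release(bits);
        System.out.println(bits.length); // 1000000
    }
}
```

A blocking queue caps concurrent cached filters at the pool size, which also bounds peak memory; the trade-off is that a thread waits when all arrays are checked out.]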

Thoughts?


---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
