Hi,

>OK so this means it's not a leak, and instead it's just that stuff is
>consuming more RAM than expected.
Or that my test DB is smaller than the production DB, which is indeed the case.

>Hmm -- there are quite a few buffered deletes pending.  It could be we
>are under-accounting for RAM used by buffered deletes.  I'll dig on
>that.
That sounds promising to me, but how does a delete happen here? We are doing a 
complete re-indexing from scratch, so no deletes at all should be happening. 
Are you saying that I remove docs from the index?
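
For clarity, here is the distinction as I understand it - a minimal sketch 
against the Lucene 2.x API (the "id" field name is just an example):

    import org.apache.lucene.document.Document;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.Term;

    // Sketch only; "id" is a made-up unique-key field.
    void indexOne(IndexWriter writer, Document doc, String id) throws Exception {
        // Pure add: no delete is ever buffered.
        writer.addDocument(doc);

        // Update: internally a delete-then-add. The delete Term stays
        // buffered in RAM until the next flush, even on a fresh index.
        writer.updateDocument(new Term("id", id), doc);
    }

As far as I can tell we only ever do plain addDocument() calls.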


>Also, your char[]'s are taking ~30 MB, byte[] ~26MB, which is odd if
>your RAM buffer is 16MB.  Does your app create these?
A fair amount is created by my app - a heap histogram taken without indexing 
shows about 10 MB of char[] allocated by my app:

class [C        132399  9457148

i.e. 132,399 char[] instances totaling ~9.5 MB. A few more could be created 
during indexing.
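
If it helps, I can also log what the writer itself thinks it has buffered and 
compare that against the histogram - a sketch of what I have in mind, assuming 
ramSizeInBytes() is the right call for this:

    import org.apache.lucene.index.IndexWriter;

    // Sketch: compare the writer's self-reported buffer size with the
    // heap histogram, to see how much char[]/byte[] belongs to Lucene.
    void logWriterRam(IndexWriter writer) {
        long bytes = writer.ramSizeInBytes();
        System.out.println("IndexWriter RAM: " + (bytes / (1024 * 1024)) + " MB");
    }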

> Why is it that creating a new IndexWriter lets the indexing run fine 
> with 80MB, but keeping it causes an 
> OutOfMemoryError with a 100MB heap size?
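
To make that concrete, the two variants look roughly like this - a sketch, not 
my exact code; indexAll() stands in for my indexing loop, and I am assuming the 
2.4-style constructor:

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.Directory;

    // Variant A: a fresh writer per run -- fine in an 80 MB heap.
    void recreate(Directory dir, Analyzer analyzer) throws Exception {
        IndexWriter w = new IndexWriter(dir, analyzer, true,
                IndexWriter.MaxFieldLength.UNLIMITED);
        indexAll(w);  // my own loop that feeds every document to the writer
        w.close();
    }

    // Variant B: the same documents through one long-lived, reused writer
    // -- this is the case that dies with a 100 MB heap.
    void reuse(IndexWriter sharedWriter) throws Exception {
        indexAll(sharedWriter);
    }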

Please explain those buffered deletes in a bit more detail.

Thanks,

Stefan
