<[EMAIL PROTECTED]> wrote:
Does each Searchable have its own copy of the Term and TermInfo
arrays? Would the amount in memory then grow with each new
Searchable instance? If so, it might be worthwhile to implement a
singleton MultiSearcher that is closed and re-opened periodically.
What do you think?
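Something like this is what I have in mind (a rough, untested sketch; the
index paths and refresh interval are just placeholders, and a real version
would need to coordinate closing with in-flight searches):

import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MultiSearcher;
import org.apache.lucene.search.Searchable;

public class SharedSearcher {
    // Single shared MultiSearcher so the Term/TermInfo arrays are
    // loaded once, not once per Searchable instance.
    private static MultiSearcher searcher;
    private static long lastOpened = 0;
    private static final long REOPEN_INTERVAL_MS = 60L * 60 * 1000; // assumed: hourly
    private static final String[] INDEX_PATHS = { "/path/to/index1", "/path/to/index2" };

    public static synchronized MultiSearcher get() throws Exception {
        long now = System.currentTimeMillis();
        if (searcher == null || now - lastOpened > REOPEN_INTERVAL_MS) {
            if (searcher != null) {
                searcher.close(); // NOTE: unsafe if searches are still in flight
            }
            Searchable[] subs = new Searchable[INDEX_PATHS.length];
            for (int i = 0; i < INDEX_PATHS.length; i++) {
                subs[i] = new IndexSearcher(INDEX_PATHS[i]);
            }
            searcher = new MultiSearcher(subs);
            lastOpened = now;
        }
        return searcher;
    }
}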
Rich
From: Michael McCandless [EMAIL PROTECTED]
Sent: Monday, March 17, 2008 6:27 PM
To: java-user@lucene.apache.org
Subject: Re: Huge number of Term objects in memory gives OutOfMemory error
You can call IndexReader.setTermInfosIndexDivisor(int) to reduce how
many index terms are loaded into memory. E.g., setting it to 10 will load
1/10th of what's loaded now, but will slow down searches.
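Something like this (a sketch; the path is a placeholder, and the divisor
has to be set before the terms index is first used, otherwise the call
throws IllegalStateException):

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.IndexSearcher;

public class DivisorExample {
    public static void main(String[] args) throws Exception {
        IndexReader reader = IndexReader.open("/path/to/index");
        // Keep only every 10th indexed term in RAM; lookups then seek
        // to the nearest sampled term and scan, so they get slower.
        reader.setTermInfosIndexDivisor(10);
        IndexSearcher searcher = new IndexSearcher(reader);
        // ... run searches ...
        searcher.close();
    }
}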
Also, you should understand why your index has so many terms. E.g.,
use Luke to peek at the terms in your index.
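If you can't run Luke where the index lives, something along these lines
(untested sketch; the path is a placeholder) would count the terms
programmatically:

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.TermEnum;

public class TermCounter {
    public static void main(String[] args) throws Exception {
        IndexReader reader = IndexReader.open("/path/to/index");
        TermEnum terms = reader.terms();
        int count = 0;
        while (terms.next()) {
            count++;
            // terms.term() gives the current Term, e.g. for per-field counts
        }
        System.out.println("total terms: " + count);
        terms.close();
        reader.close();
    }
}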
I'll bet the byte[] arrays are the norms data, one per field. If you have
a lot of fields and do not need the length normalization / boost data for
every field, I'd suggest turning norms off for the fields that don't need
them for scoring. The calculation, as I understand it, is:
1 byte x (# fields with norms enabled) x (# documents in the index)
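So, e.g., 50 fields with norms across 20 million documents would be about
1 GB of byte[] (numbers made up for illustration). A rough sketch of
turning norms off at indexing time (field names are illustrative):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;

public class NoNormsExample {
    public static Document makeDoc(String id, String body) {
        Document doc = new Document();
        // Untokenized identifier: Field.Index.NO_NORMS indexes it
        // without storing a norm byte.
        doc.add(new Field("id", id, Field.Store.YES, Field.Index.NO_NORMS));
        // A tokenized field can also drop its norms via setOmitNorms(true).
        Field bodyField = new Field("body", body, Field.Store.NO, Field.Index.TOKENIZED);
        bodyField.setOmitNorms(true);
        doc.add(bodyField);
        return doc;
    }
}

Note that once any document enables norms for a field, norms get stored
for every document of that field, so I believe you only see the savings
if the field omits norms consistently across the whole index.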