Eric, you said you aren't using any Field.Index.NO_NORMS fields, but 
SegmentReader.ones should only be used if you do use NO_NORMS, so things don't 
add up here.

Otis

----- Original Message ----
From: Yonik Seeley <[EMAIL PROTECTED]>
To: [email protected]
Sent: Monday, December 11, 2006 8:53:15 PM
Subject: Re: SegmentReader using too much memory?

On 12/11/06, Eric Jain <[EMAIL PROTECTED]> wrote:
> I've noticed that after stress-testing my application (uses Lucene 2.0) for
> a while, I have almost 200mb of byte[]s hanging around, the top two
> culprits being:
>
> 24 x SegmentReader.Norm.bytes = 112mb
>   2 x SegmentReader.ones       =  16mb

Each indexed field has a norm array that is the product of its
index-time boost and the length-normalization factor.  If you don't
need either, you can omit the norms (as it looks like you already have
on some fields, given that "ones" is the fake norms array used in place
of the "real" norms).
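To make this concrete: Lucene stores norms as one byte per document per
indexed field, so the memory footprint is easy to estimate.  A rough
back-of-envelope sketch (the function name and the ~4.9M document count
are illustrative guesses, not numbers from the thread):

```python
def norms_memory_bytes(num_norm_arrays, max_doc):
    # Lucene keeps one byte per document for each field's norms,
    # so total norm memory is simply arrays * documents.
    return num_norm_arrays * max_doc

# 24 norm arrays over a ~4.9M-doc segment would come to roughly 112 MB,
# matching the profile above: 24 fields each paying max_doc bytes.
total = norms_memory_bytes(24, 4_900_000)
print(total / (1024 * 1024))  # ~112 MB
```

Omitting norms on fields that need neither boosts nor length
normalization drops that cost to a single shared "ones" array per
reader.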

-Yonik
http://incubator.apache.org/solr Solr, the open-source Lucene search server

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
