Doug,

Thank you for confirming this.

ZJ

Doug Cutting <[EMAIL PROTECTED]> wrote:
John Z wrote:
> We have indexes of around 1 million docs and around 25 searchable fields.
> We noticed that without any searches performed on the indexes, on startup, the
> memory taken up by the searcher is roughly 7 times the .tii file size.
Hi,

We are trying to get the memory footprint of our searchers.
We have indexes of around 1 million docs and around 25 searchable fields.
We noticed that without any searches performed on the indexes, on startup, the memory
taken up by the searcher is roughly 7 times the .tii file size. The .tii file is read
into memory as per the code.
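For what it's worth, one crude way to take this kind of footprint measurement is to snapshot used heap before and after opening the searcher. A minimal sketch in plain Java (no Lucene calls; the byte array is a stand-in for whatever the searcher allocates, and the class and method names are made up for illustration):

```java
public class HeapFootprint {

    // Used-heap snapshot. gc() is only a hint to the JVM, so treat
    // the resulting number as an estimate, not an exact measurement.
    static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        rt.gc();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeapBytes();
        // Stand-in for "open the searcher": allocate ~8 MB and keep it live.
        byte[] ballast = new byte[8 * 1024 * 1024];
        long after = usedHeapBytes();
        System.out.println("approx. footprint: "
                + (after - before) / (1024 * 1024) + " MB");
        if (ballast[0] != 0) System.out.println(); // keep ballast reachable
    }
}
```

Comparing that delta against the on-disk .tii size is how one would arrive at a multiplier like the 7x above.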
I'm not familiar with the new compound file format. Where
can I look to find more information?

-- Mark

-----Original Message-----
From: James Dunn [mailto:[EMAIL PROTECTED]
Sent: Friday, July 02, 2004 01:29 pm
To: Lucene Users List
Subject: Re: Running OutOfMemory while optimizing and searching
Ah yes, I don't think I made that clear enough. From
Mark's original post, I believe he mentioned that he
used separate readers for each simultaneous query.
His other issue was that he was getting an OOM during
an optimize, even when he set the JVM heap to 2GB. He
said his index was about 10.5GB.
> What do your queries look like? The memory required
> for a query can be computed by the following equation:
>
> 1 Byte * Number of fields in your query * Number of
> docs in your index
>
> So if your query searches on all 50 fields of your 3.5
> million document index then each search would take about 175 MB.
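The rule of thumb quoted above is easy to check mechanically. A minimal sketch in plain Java (not Lucene code; the class and method names are made up for illustration):

```java
public class QueryMemoryEstimate {

    // Rule of thumb quoted in the thread:
    // 1 byte * (fields in the query) * (docs in the index),
    // i.e. roughly one byte-per-document array per searched field.
    static long estimateBytes(int fieldsInQuery, long docsInIndex) {
        return 1L * fieldsInQuery * docsInIndex;
    }

    public static void main(String[] args) {
        // 50 fields over a 3.5 million document index:
        System.out.println(estimateBytes(50, 3500000L));  // 175000000, ~175 MB
        // 25 fields over a 1 million document index (the other poster's setup):
        System.out.println(estimateBytes(25, 1000000L));  // 25000000, ~25 MB
    }
}
```

Note this is per concurrent search, which is why several simultaneous searchers multiply the pressure on the heap.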
Wow, I have to say that those sorts of numbers are concerning to me... Now I
know 3.5 million documents is a lot, but still... What would be causing a
query to require and hold that much memory? I could understand that it
surely would be doing a lot of memory work, but why would it need to hold
onto that much memory?
-----Original Message-----
From: James Dunn [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 29, 2004 07:29 pm
To: Lucene Users List
Subject: RE: Running OutOfMemory while optimizing and searching

Mark,

What do your queries look like? The memory required
for a query can be computed by the following equation:

1 Byte * Number of fields in your query * Number of
docs in your index
[...] why mine is throwing OutOfMemory -- not only on the
optimize, but when 3-4 searchers are running, too.

-- Mark

-----Original Message-----
From: Otis Gospodnetic [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 29, 2004 01:02 am
To: Lucene Users List
Subject: Re: Running OutOfMemory while optimizing and searching

Mark,

Tough situation. I hate when things like this happen in production :(.
You don't mention what you are using for the various IndexWriter
parameters. You may be able to get this working by tweaking them (see
http://jakarta.apache.org/lucene/docs/api/org/apache/lucene/index/IndexWriter.html).
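For readers following along, the knobs Otis is referring to look roughly like this. A sketch only, assuming the Lucene 1.4-era API documented at the URL above, where these tuning parameters were public fields on IndexWriter; check the javadoc for your version before copying anything:

```java
// Hypothetical tuning fragment, not a drop-in fix.
IndexWriter writer = new IndexWriter("/path/to/index", analyzer, false);
writer.mergeFactor = 10;     // lower => fewer segments merged at once, less RAM per merge
writer.minMergeDocs = 10;    // docs buffered in RAM before a new segment is flushed
writer.maxMergeDocs = Integer.MAX_VALUE; // cap segment size to bound merge cost
```

Lowering mergeFactor and minMergeDocs trades indexing speed for a smaller peak heap during merges and optimize.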