ing test:
> > > I created the RAM folder on my Red Hat box and copied c. 1Gb of
> > > indexes there.
> > > I expected the queries to run much quicker.
> > > In reality it was even sometimes slower (sic!)
> > >
> > > Lucene has i
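If the goal is to serve an index from memory, one option within the Lucene API itself (rather than an OS-level RAM folder) is to load the index into a RAMDirectory. A minimal sketch, assuming a Lucene release whose RAMDirectory has the copy constructor; the index path is only illustrative:

    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.lucene.store.RAMDirectory;

    public class RamIndexExample {
        public static void main(String[] args) throws Exception {
            // Copy the on-disk index into an in-memory Directory, then search it.
            Directory fsDir = FSDirectory.getDirectory("/path/to/index", false);
            Directory ramDir = new RAMDirectory(fsDir);
            IndexSearcher searcher = new IndexSearcher(ramDir);
            // ... run queries against 'searcher' ...
            searcher.close();
        }
    }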
might find
the slowdown stops after a certain point, especially if you increase
your batch size.
Chuck
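To make the batch-size effect concrete, here is a rough sketch of indexing with one IndexWriter per batch; the batch contents, field name, and the mergeFactor/minMergeDocs values are only illustrative (those two were public tuning fields on IndexWriter in the 1.x API):

    import java.util.Iterator;
    import java.util.List;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;

    public class BatchIndexer {
        // Adds one batch of plain-text documents to the index at 'path'.
        public static void indexBatch(String path, List texts, boolean create) throws Exception {
            IndexWriter writer = new IndexWriter(path, new StandardAnalyzer(), create);
            writer.mergeFactor = 50;     // merge segments less often (illustrative value)
            writer.minMergeDocs = 500;   // buffer more documents in RAM per segment
            for (Iterator it = texts.iterator(); it.hasNext();) {
                Document doc = new Document();
                doc.add(Field.Text("contents", (String) it.next()));
                writer.addDocument(doc);
            }
            writer.close();
        }
    }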
> -----Original Message-----
> From: John Wang [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, November 24, 2004 12:21 PM
> To: Lucene Users List
> Subject: Re: URGENT: He
Thanks Paul!
Using your suggestion, I have changed the update check code to use
only the indexReader:
    try {
        localReader = IndexReader.open(path);
        while (keyIter.hasNext()) {
            key = (String) keyIter.next();
            term = new Term("key", key);
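A self-contained version of that kind of existence check, assuming the test itself is done with IndexReader.docFreq(term); the field name "key" follows the fragment above, everything else is illustrative:

    import java.util.Iterator;
    import java.util.Set;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.Term;

    public class DuplicateKeyCheck {
        // Returns true if any of the given keys is already present in the index at 'path'.
        public static boolean anyKeyExists(String path, Set keys) throws Exception {
            IndexReader localReader = IndexReader.open(path);
            try {
                for (Iterator keyIter = keys.iterator(); keyIter.hasNext();) {
                    Term term = new Term("key", (String) keyIter.next());
                    if (localReader.docFreq(term) > 0) {
                        return true;   // a document with this key already exists
                    }
                }
                return false;
            } finally {
                localReader.close();
            }
        }
    }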
On Wednesday 24 November 2004 00:37, John Wang wrote:
> Hi:
>
> I am trying to index 1M documents, with batches of 500 documents.
>
> Each document has a unique text key, which is added as a
> Field.Keyword(name, value).
>
> For each batch of 500, I need to make sure I am not adding a
>
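For reference, a keyword field stores its value untokenized, so the key can later be looked up as an exact Term. A minimal sketch of building such a document; the field names are illustrative:

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;

    public class KeyedDocumentExample {
        // Builds a document whose unique key is indexed untokenized.
        public static Document makeDocument(String key, String body) {
            Document doc = new Document();
            doc.add(Field.Keyword("key", key));     // exact-match lookup via new Term("key", key)
            doc.add(Field.Text("contents", body));  // tokenized body text
            return doc;
        }
    }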
Thanks Chuck! I missed the call: getIndexOffset.
I am profiling it again to pinpoint where the performance problem is.
-John
On Tue, 23 Nov 2004 16:13:22 -0800, Chuck Williams <[EMAIL PROTECTED]> wrote:
> Are you sure you have a performance problem with
> TermInfosReader.get(Term)? It looks to
Are you sure you have a performance problem with
TermInfosReader.get(Term)? It looks to me like it scans sequentially
only within a small buffer window (of size
SegmentTermEnum.indexInterval) and that it uses binary search otherwise.
See TermInfosReader.getIndexOffset(Term).
Chuck
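Schematically, the lookup described above is a two-level search: a binary search over an in-memory sample of every indexInterval-th term, followed by a sequential scan of at most indexInterval entries. The sketch below only illustrates that idea; it is not the actual TermInfosReader code, and the interval value is assumed:

    import java.util.Arrays;

    public class TwoLevelTermLookup {
        static final int INDEX_INTERVAL = 128;   // assumed sampling interval

        // Finds 'target' in the sorted array 'allTerms' using the sampled 'indexTerms',
        // where indexTerms[i] == allTerms[i * INDEX_INTERVAL]. Returns -1 if absent.
        static int lookup(String[] indexTerms, String[] allTerms, String target) {
            // 1. Binary search the in-memory index of sampled terms.
            int pos = Arrays.binarySearch(indexTerms, target);
            int indexOffset = (pos >= 0) ? pos : -pos - 2;   // last sampled term <= target
            if (indexOffset < 0) return -1;                  // target sorts before all terms
            // 2. Scan sequentially within a single interval of the full term list.
            int start = indexOffset * INDEX_INTERVAL;
            int end = Math.min(start + INDEX_INTERVAL, allTerms.length);
            for (int i = start; i < end; i++) {
                if (allTerms[i].equals(target)) return i;       // found
                if (allTerms[i].compareTo(target) > 0) break;   // passed it; not present
            }
            return -1;
        }
    }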
> -Origi