> > > I did the following test:
> > > I created the RAM folder on my Red Hat box and copied c. 1 GB of
> > > indexes there.
> > > I expected the queries to run much quicker.
> > > In reality it was sometimes even slower (sic!)
> > >
> > > Lucene has i
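For what it's worth, here is a minimal sketch of searching through Lucene's
own RAMDirectory instead of an OS-level RAM folder (this assumes the 1.4-era
RAMDirectory copy constructor; the index path is only illustrative):

    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.store.RAMDirectory;

    // Copy the on-disk index files into an in-memory Directory,
    // then run searches against that copy.
    RAMDirectory ramDir = new RAMDirectory("/path/to/index");
    IndexSearcher searcher = new IndexSearcher(ramDir);
    // ... run queries against searcher as usual ...
    searcher.close();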
You might find the slowdown stops after a certain point, especially if
you increase your batch size.
Chuck
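Purely as an illustration (not code from this thread), a sketch of what
larger batches with tuned merge settings might look like with the Lucene
1.4-era IndexWriter; the numbers are arbitrary starting points:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;

    // Buffer more documents in RAM and merge segments less often,
    // so each batch produces fewer tiny segments on disk.
    IndexWriter writer = new IndexWriter("/path/to/index", new StandardAnalyzer(), false);
    writer.mergeFactor = 50;      // merge segments in larger groups (default 10)
    writer.minMergeDocs = 1000;   // keep more docs in memory before flushing (default 10)
    // ... writer.addDocument(...) calls for the whole batch ...
    writer.close();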
> -----Original Message-----
> From: John Wang [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, November 24, 2004 12:21 PM
> To: Lucene Users List
> Subject: Re: URGENT: Help indexing large document set
Thanks Paul!
Using your suggestion, I have changed the update check code to use
only an IndexReader:
    try {
        localReader = IndexReader.open(path);
        while (keyIter.hasNext()) {
            key = (String) keyIter.next();
            term = new Term("key", key);
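A minimal sketch of how such a check could look end to end, assuming the
duplicate test is IndexReader.docFreq(term) > 0 (the variable names and the
keyIter.remove() call are illustrative, not necessarily the actual code):

    IndexReader localReader = IndexReader.open(path);
    try {
        while (keyIter.hasNext()) {
            String key = (String) keyIter.next();
            Term term = new Term("key", key);
            if (localReader.docFreq(term) > 0) {
                // A document with this key is already indexed;
                // drop it from the batch before calling addDocument().
                keyIter.remove();
            }
        }
    } finally {
        localReader.close();
    }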
On Wednesday 24 November 2004 00:37, John Wang wrote:
> Hi:
>
> I am trying to index 1M documents, with batches of 500 documents.
>
> Each document has a unique text key, which is added as a
> Field.Keyword(name, value).
>
> For each batch of 500, I need to make sure I am not adding a
> document with a key that is already in the current index.
> > From: John Wang [mailto:[EMAIL PROTECTED]
> > Sent: Tuesday, November 23, 2004 3:38 PM
> > To: [EMAIL PROTECTED]
> > Subject: URGENT: Help indexing large document set
> >
> > Hi:
> >
> > I am trying to index 1M documents, with batches of 500 documents.
> -----Original Message-----
> From: John Wang [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, November 23, 2004 3:38 PM
> To: [EMAIL PROTECTED]
> Subject: URGENT: Help indexing large document set
>
> Hi:
>
> I am trying to index 1M documents, with batches of 500 documents.
Hi:
I am trying to index 1M documents, with batches of 500 documents.
Each document has a unique text key, which is added as a
Field.Keyword(name, value).
For each batch of 500, I need to make sure I am not adding a
document with a key that is already in the current index.
To do this
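For reference, a minimal sketch of adding one batch with the unique key
stored via Field.Keyword (MyRecord, batch, and the field names are
hypothetical placeholders, not the poster's actual code):

    import java.util.Iterator;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;

    // Append a batch of records; each document carries its unique key
    // as an untokenized Field.Keyword so it can be matched exactly later.
    IndexWriter writer = new IndexWriter("/path/to/index", new StandardAnalyzer(), false);
    for (Iterator it = batch.iterator(); it.hasNext();) {
        MyRecord rec = (MyRecord) it.next();              // hypothetical record type
        Document doc = new Document();
        doc.add(Field.Keyword("key", rec.getKey()));      // unique key, stored + not analyzed
        doc.add(Field.Text("contents", rec.getText()));   // searchable body text
        writer.addDocument(doc);
    }
    writer.close();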