For a start, I would lower the merge factor quite a bit. A high merge
factor is overrated :) You will build the index faster, but searches
will be slower and an optimize takes much longer. Essentially, the time
you save while indexing is paid back during the optimize anyway, so you
might as well amortize the cost with a lower merge factor.
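For example, something along these lines before you start adding documents
(just a rough sketch against the Lucene 2.3 IndexWriter API -- the directory
path and analyzer are placeholders, and 10 happens to be the default merge
factor):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.FSDirectory;

    public class LowerMergeFactor {
        public static void main(String[] args) throws Exception {
            IndexWriter writer = new IndexWriter(
                    FSDirectory.getDirectory("/path/to/index"),
                    new StandardAnalyzer(), true);
            // Do more merging while indexing so the final optimize has
            // far fewer (and smaller) segments to fold together.
            writer.setMergeFactor(10);
            // ... addDocument() calls go here ...
            writer.optimize();
            writer.close();
        }
    }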
Grant seems to think the numbers are off anyway, so you may have more to
do -- just a suggestion about the merge factor. How much RAM are you
giving your application?
With a machine with 8 cores and 15,000 RPM disks, days does seem a
little ridiculous.
- Mark
Barry Forrest wrote:
Hi,
Thanks for your help.
I'm using Lucene 2.3.
Raw document size is about 138G for 1.5M documents, which is about
250k per document.
IndexWriter settings are MergeFactor 50, MaxMergeDocs 2000,
RAMBufferSizeMB 32, MaxFieldLength Integer.MAX_VALUE.
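(For reference, those settings in code look roughly like the fragment
below; the directory path and analyzer are placeholders, not our actual
setup:)

    IndexWriter writer = new IndexWriter(
            FSDirectory.getDirectory("/path/to/subindex"),
            new StandardAnalyzer(), true);
    writer.setMergeFactor(50);
    writer.setMaxMergeDocs(2000);
    writer.setRAMBufferSizeMB(32);
    writer.setMaxFieldLength(Integer.MAX_VALUE);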
Each document has about 10 short bibliographic fields and 3 longer
content fields and 1 field that contains the entire contents of the
document. The longer content fields are indexed twice - in a stemmed
and an unstemmed form - so there are really about 8 longer content
fields. (Indexing both stemmed and unstemmed versions roughly doubles
the index size compared with indexing the content only once.) About
half of the short bibliographic fields are stored (compressed) in the
index. The longer content fields are not stored, and no term vectors
are stored.
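To make the field layout concrete, each document is built up roughly as
in the fragment below. The field names are made up for illustration, and
stemming is done by whichever analyzer the writer is given (e.g. via
PerFieldAnalyzerWrapper), not by the Field itself:

    Document doc = new Document();
    // Short bibliographic fields: about half are stored compressed.
    doc.add(new Field("title", title,
            Field.Store.COMPRESS, Field.Index.TOKENIZED));
    doc.add(new Field("author", author,
            Field.Store.NO, Field.Index.TOKENIZED));
    // Longer content fields: indexed twice (stemmed and unstemmed),
    // never stored, no term vectors.
    doc.add(new Field("body", bodyText,
            Field.Store.NO, Field.Index.TOKENIZED));
    doc.add(new Field("body_stemmed", bodyText,
            Field.Store.NO, Field.Index.TOKENIZED));
    // One catch-all field with the entire contents of the document.
    doc.add(new Field("contents", allText,
            Field.Store.NO, Field.Index.TOKENIZED));
    writer.addDocument(doc);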
The hardware is quite new and fast: 8 cores, 15,000 RPM disks.
Thanks again
Barry
On Nov 12, 2007 10:41 AM, Grant Ingersoll <[EMAIL PROTECTED]> wrote:
Hmmm, something doesn't sound quite right. You have 10 million docs,
split into 5 or so indexes, right? And each sub-index is 150
gigabytes? How big are your documents?
Can you provide more info about what your Directory and IndexWriter
settings are? What version of Lucene are you using? What are your
Field settings? Are you storing info? What about Term Vectors?
Can you explain more about your documents, etc? 10 million doesn't
sound like it would need to be split up that much, if at all,
depending on your hardware.
The wiki has some excellent resources on improving both indexing and
search speed.
-Grant
On Nov 11, 2007, at 6:16 PM, Barry Forrest wrote:
Hi,
Optimizing my index of 1.5 million documents takes days and days.
I have a collection of 10 million documents that I am trying to index
with Lucene. I've divided the collection into chunks of about 1.5 - 2
million documents each. Indexing 1.5 million documents is fast enough (about
12 hours), but this results in an index directory containing about
35000 files. Optimizing this index takes several days, which is a bit
too long for my purposes. Each sub-index is about 150G.
What can I do to make this process faster?
Thanks for your help,
Barry