ParallelMultiSearcher vs. One Big Index

2005-01-18 Thread Ryan Aslett
 
Okay, so I'm trying to find the sweet spot for how many index segments I
should have.

I have 47 million records of contact data (Name + Address). I used 7
machines to build indexes, which resulted in the following spread of
individual index sizes (in records):

1503000
1500000
1497000
5604750
5379750
1437000
1458000
1446000
1422000
1425000
1425000
1404000
1413000
1404000
4893750
4689750
4519500
4497750
46919250 Total Records
(The faster machines built the bigger indexes)
I also joined all these indexes together into one large 47-million-record
index and ran my query pounder against both data sets: one run using the
ParallelMultiSearcher across the multiple indexes, and one using a normal
IndexSearcher against the large index.
What I found was that for queries with one term (First Name), the large
index beat the multiple indexes hands down (280 queries/sec vs.
170 q/s). But for queries with multiple terms (Address), the multiple
indexes beat out the large index (26 q/s vs. 16 q/s).
Btw, I'm running these on a 2-processor box with 16 GB of RAM.
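
For concreteness, here is a minimal sketch of the two setups under test,
written against the Lucene 1.4-era Searcher/Hits API (the index paths and
the sample query are placeholders, not my real configuration):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.queryParser.QueryParser;
    import org.apache.lucene.search.Hits;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.ParallelMultiSearcher;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.Searchable;
    import org.apache.lucene.search.Searcher;

    public class SearcherComparison {
        public static void main(String[] args) throws Exception {
            // Setup 1: one IndexSearcher over the single merged 47M-record index.
            Searcher single = new IndexSearcher("/data/merged-index");

            // Setup 2: a ParallelMultiSearcher over the per-machine slices.
            // It queries every slice on its own thread and merges the hits.
            String[] slicePaths = { "/data/slice-01", "/data/slice-02" }; // ... 18 total
            Searchable[] slices = new Searchable[slicePaths.length];
            for (int i = 0; i < slicePaths.length; i++) {
                slices[i] = new IndexSearcher(slicePaths[i]);
            }
            Searcher multi = new ParallelMultiSearcher(slices);

            // The query pounder runs queries like this in a timed loop.
            Query q = QueryParser.parse("100 main st", "address", new StandardAnalyzer());
            Hits fromSingle = single.search(q);
            Hits fromMulti = multi.search(q);
            System.out.println(fromSingle.length() + " vs " + fromMulti.length());
        }
    }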

So what I'm trying to determine is whether there is an equation out there
that can help me find the sweet spot for splitting my indexes. Most
queries are going to be multi-term, and the big O of the single-term
search clearly appears to be log n. (I verified with 470 million records:
the single-term search returns at 140 qps, consistent with what I
believe about search algorithms.) The equation I'm missing is the
big O for the union of the result sets that match particular terms. I'm
assuming (haven't looked at the source yet) that Lucene finds all the
documents that match the first term, and all the documents that match
each subsequent term, and then finds the union of all the sets. Is
this correct? Anybody have any ideas on how to iron out an equation for
this?
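
My back-of-the-envelope model, assuming Lucene resolves each term
against the term dictionary and then merges the matching postings lists
(again, an assumption; I haven't confirmed this in the source):

    T(q) \approx \sum_{i=1}^{k} \log V + O\left( \log k \cdot \sum_{i=1}^{k} \mathrm{df}(t_i) \right)

where k is the number of query terms, V is the number of distinct terms
in the index, and df(t_i) is the number of documents containing term t_i
(the log k factor covers a heap-based k-way merge). If that model holds,
the dictionary lookups would explain the log n behavior of single-term
queries, while multi-term cost is dominated by the merge, which grows
linearly with the total postings touched; that linear part is what
splitting the index lets the parallel searcher divide across slices.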

Ryan




addIndexes() Question

2004-12-22 Thread Ryan Aslett
 
Hi there, I'm about to embark on a Lucene project of massive scale
(between 500 million and 2 billion documents). I am currently working
on parallelizing the construction of the index(es).

Rough summary of my plan:
I have many, many physical machines, each with multiple processors, that
I wish to dedicate to the construction of a single index.
I plan on having each machine gather its documents from a central
synchronized source (network, JMS, whatever).
Within each machine I will have multiple threads, each responsible for
constructing an index slice (see the sketch after this summary).

When all machines and all threads are finished, I should have a slew of
index slices that I want to combine to create one index.
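
Here is a sketch of what each slice-building thread would run. The
Analyzer choice and the DocumentSource interface are placeholders for
whatever the central feed ends up being (JMS, a socket, etc.):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.index.IndexWriter;

    // Hypothetical stand-in for the central synchronized document feed.
    interface DocumentSource {
        Document next() throws Exception; // null when the feed is drained
    }

    // One instance per thread; each thread owns its own slice directory,
    // so no two IndexWriters ever contend for the same write lock.
    class SliceBuilder implements Runnable {
        private final String slicePath;
        private final DocumentSource source;

        SliceBuilder(String slicePath, DocumentSource source) {
            this.slicePath = slicePath;
            this.source = source;
        }

        public void run() {
            try {
                // create=true starts a fresh index in this slice's directory
                IndexWriter writer = new IndexWriter(slicePath, new StandardAnalyzer(), true);
                Document doc;
                while ((doc = source.next()) != null) {
                    writer.addDocument(doc);
                }
                writer.optimize(); // compact the slice before the final merge
                writer.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }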

My question is this: will it be more efficient to call
addIndexes(Directory[] dirs) on all the slices at once?

Or might it be better to continually merge small indexes into a larger
index, i.e., once an index slice reaches a particular size, merge it into
the main index and start building a new slice?
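
For reference, the one-shot variant I have in mind looks roughly like
this (Lucene 1.4-era API; the paths come in as arguments and are
placeholders):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class MergeSlices {
        // args[0] = path for the merged index, args[1..n] = slice paths
        public static void main(String[] args) throws Exception {
            Directory[] slices = new Directory[args.length - 1];
            for (int i = 1; i < args.length; i++) {
                slices[i - 1] = FSDirectory.getDirectory(args[i], false); // open existing slice
            }
            // create=true: start the merged index fresh
            IndexWriter writer = new IndexWriter(args[0], new StandardAnalyzer(), true);
            writer.addIndexes(slices); // single merge of every slice
            writer.close();
        }
    }

The incremental variant would instead keep one long-lived IndexWriter on
the main index and call addIndexes() on each slice as it fills up.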

Any help would be appreciated.

Ryan Aslett

