I'm looking for information on the largest document collection that Lucene
has been used to index; the biggest benchmark I've been able to find so far
is 1 million documents.

I'd like to generate some benchmarks for large collections (1-100 million
records) and would like to know whether this is feasible without resorting
to distributed indexes, etc.  It's mostly to construct a performance profile
relating indexing/retrieval time and storage requirements to the number of
documents.
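
For concreteness, below is a rough sketch of the kind of single-JVM
indexing benchmark I have in mind, written against a recent Lucene API.
The class name, the "bench-index" path, and the syntheticText() helper are
just placeholders, not anything from an existing benchmark suite:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

import java.nio.file.Paths;
import java.util.Random;

public class IndexBenchmark {
    public static void main(String[] args) throws Exception {
        int numDocs = Integer.parseInt(args[0]);   // e.g. 1000000

        FSDirectory dir = FSDirectory.open(Paths.get("bench-index"));
        IndexWriter writer = new IndexWriter(dir,
                new IndexWriterConfig(new StandardAnalyzer()));

        long start = System.currentTimeMillis();
        for (int i = 0; i < numDocs; i++) {
            Document doc = new Document();
            // Synthetic body text; a real run should use a realistic corpus.
            doc.add(new TextField("body", syntheticText(i), Field.Store.NO));
            writer.addDocument(doc);
        }
        writer.close();   // flushes and commits the remaining segments
        long elapsedMs = System.currentTimeMillis() - start;

        System.out.println(numDocs + " docs in " + elapsedMs + " ms ("
                + (numDocs * 1000L / Math.max(1, elapsedMs)) + " docs/sec)");
    }

    // Placeholder corpus generator: ~50 pseudo-random "words" per document.
    private static String syntheticText(int seed) {
        Random rnd = new Random(seed);
        StringBuilder sb = new StringBuilder();
        for (int w = 0; w < 50; w++) {
            sb.append("term").append(rnd.nextInt(100000)).append(' ');
        }
        return sb.toString();
    }
}

Repeating the run at each collection size, measuring the index directory on
disk, and timing a fixed batch of queries against an IndexSearcher over the
same index would presumably give the three curves I'm after.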

Thanks.

