Hello,

On Mon, 2011-07-04 at 13:51 +0200, Jame Vaalet wrote:
> What would be the maximum size of a single SOLR index file for resulting in
> optimum search time ?
How do you define optimum? Do you want the fastest possible response time at any cost, or do you have a specific response-time goal? Can you give us more details on your use case? What kind of load are you expecting? What kind of queries do you need to support? Some of the trade-offs depend on whether you are CPU bound or I/O bound.

Assuming a fairly large index, if you *absolutely need* the fastest possible search response time and you can *afford the hardware*, you probably want to shard your index and size your shards so they can all fit in memory (and do some work to make sure the index data is always in memory). If you can't afford that much memory but still need very fast response times, you might want to size your indexes so they all fit on SSDs.

As an example of a use case at the opposite end of the spectrum, here at HathiTrust we have a very low number of queries per second, and we are running an index that totals 6 TB in size, with shards of about 500 GB and average response times of 200 ms (but 99th-percentile times of about 2 seconds).

Tom Burton-West
http://www.hathitrust.org/blogs/large-scale-search
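P.S. If it helps, the "size shards so they fit in memory" idea boils down to simple arithmetic. A rough sketch (the index size and RAM figures below are made-up examples, not our numbers):

```python
import math

# Back-of-the-envelope shard sizing: pick enough shards that each
# shard's index fits in the RAM a node can devote to caching it.
def shards_needed(total_index_gb, usable_ram_gb_per_node):
    """Minimum shard count so each shard fits in one node's cache RAM."""
    return math.ceil(total_index_gb / usable_ram_gb_per_node)

# Hypothetical example: a 1.5 TB index on nodes with ~96 GB usable for cache.
print(shards_needed(1500, 96))  # -> 16 shards of ~94 GB each
```

In practice you would leave headroom for merges and the JVM heap, so the usable-RAM figure is well below the node's physical memory.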