Anybody care to forecast when hardware will catch up with Solr and we can routinely look forward to newbies complaining that they indexed "some" data and after only 10 minutes they hit this weird 2G document count limit?

-- Jack Krupansky

-----Original Message-----
From: Shawn Heisey
Sent: Tuesday, June 3, 2014 3:34 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr maximum Optimal Index Size per Shard

On 6/3/2014 12:54 PM, Jack Krupansky wrote:
> How much free system memory do you have for the OS to cache file
> system data? If your entire index fits in system memory, operations
> will be fast, but as your index grows beyond the space the OS can use
> to cache the data, performance will decline.
>
> But there's no hard limit in Solr per se.

Vineet,

There is only one hard limit in Solr: you can't put more than about 2
billion documents in one shard.  The exact number is 2147483647
(Integer.MAX_VALUE) -- the largest value a Java int can hold.  Because
the internal document count (maxDoc) also includes deleted documents
that haven't yet been merged away, to be absolutely sure that nothing
will have a problem, it would be advisable to stay below 1 billion
documents per shard.
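
If you want to see how close a shard actually is to that ceiling, here
is a minimal SolrJ sketch.  The core URL is an assumption -- point it
at one of your own shards.  It reads the index stats from the Luke
request handler; maxDoc is the count that includes deleted documents
and is the number that presses against Integer.MAX_VALUE:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.util.NamedList;

public class ShardDocCountCheck {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) throws Exception {
        // Assumed core URL -- adjust for your setup.
        HttpSolrServer server =
                new HttpSolrServer("http://localhost:8983/solr/collection1");

        SolrQuery q = new SolrQuery();
        q.setRequestHandler("/admin/luke"); // Luke handler reports index stats
        q.set("numTerms", "0");             // skip per-field term statistics

        QueryResponse rsp = server.query(q);
        NamedList<Object> index =
                (NamedList<Object>) rsp.getResponse().get("index");

        int maxDoc  = (Integer) index.get("maxDoc");  // live + deleted docs
        int numDocs = (Integer) index.get("numDocs"); // live docs only

        // maxDoc must stay under Integer.MAX_VALUE (2147483647).
        System.out.printf("maxDoc=%d numDocs=%d deleted=%d (%.2f%% of limit)%n",
                maxDoc, numDocs, maxDoc - numDocs,
                100.0 * maxDoc / Integer.MAX_VALUE);

        server.shutdown();
    }
}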

Because of Solr's reliance on RAM for the OS disk cache, which Jack has
already mentioned, chances are very good that your shards will have
performance problems long before you reach a billion documents.
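
A rough way to check where you stand on that front is to compare the
on-disk size of the index with the physical RAM on the box.  Here's a
sketch -- the index path is an assumption (use your core's data/index
directory), and the com.sun.management bean is Oracle/OpenJDK-specific:

import java.io.File;
import java.lang.management.ManagementFactory;
import com.sun.management.OperatingSystemMXBean;

public class IndexVsRam {
    // Recursively total the bytes under the index directory.
    static long dirSize(File dir) {
        long total = 0;
        File[] entries = dir.listFiles();
        if (entries == null) return 0;
        for (File f : entries) {
            total += f.isDirectory() ? dirSize(f) : f.length();
        }
        return total;
    }

    public static void main(String[] args) {
        // Assumed index location -- adjust for your installation.
        long indexBytes = dirSize(new File("/var/solr/collection1/data/index"));

        OperatingSystemMXBean os =
                (OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        long ramBytes = os.getTotalPhysicalMemorySize();

        System.out.printf("index = %.1f GB, physical RAM = %.1f GB%n",
                indexBytes / 1e9, ramBytes / 1e9);
        // Once the index grows well beyond what the OS can keep in its
        // page cache, expect query performance to drop.
    }
}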

Thanks,
Shawn
