On 2/6/2014 9:56 AM, samarth s wrote:
> Size of index = 260 GB
> Total Docs = 100mn
> Usual writing speed = 50K per hour
> autoCommit-maxDocs = 400,000
> autoCommit-maxTime = 1,500,000 ms (25 mins)
> merge factor = 10
> 
> M/c memory = 30 GB, Xmx = 20 GB
> Server - Jetty
> OS - Cent OS 6

With 30GB of RAM (is it Amazon EC2, by chance?) and a 20GB heap, you
have only about 10GB of RAM left for caching your Solr index.  If that
server holds all 260GB of index, I am really surprised that the
problems have only shown up recently; I would have expected them from
day one.  Even if it holds only half or a quarter of the index, there
is still a major mismatch between available RAM and index size.
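
If you want to see how much of your RAM the OS can actually devote to
the page cache right now, a quick sketch like the one below will show
you.  It just reads /proc/meminfo and plugs in the 260GB figure from
your numbers, so adjust that if this box holds less:

#!/usr/bin/env python
# Rough check: how much of the index could the OS page cache hold?
# Assumes a Linux /proc/meminfo; the index size is taken from your post.

INDEX_SIZE_GB = 260.0  # from your numbers, adjust if this box holds less

def meminfo_kb(field):
    # Return a /proc/meminfo value in kB, or 0 if the field is missing.
    with open('/proc/meminfo') as f:
        for line in f:
            if line.startswith(field + ':'):
                return int(line.split()[1])
    return 0

total_gb = meminfo_kb('MemTotal') / (1024.0 * 1024.0)
cached_gb = meminfo_kb('Cached') / (1024.0 * 1024.0)

print('Total RAM:   %.1f GB' % total_gb)
print('Page cache:  %.1f GB' % cached_gb)
print('Cache covers %.0f%% of a %.0f GB index'
      % (100.0 * cached_gb / INDEX_SIZE_GB, INDEX_SIZE_GB))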

You either need more memory or you need to reduce the size of your
index.  The indexed portion generally has more impact on performance
than the stored portion, but both matter, especially for indexing and
committing.  With regular disks, it's best to have at least 50% of
your index size available to the OS disk cache, and 100% is better.
For a 260GB index, that means roughly 130GB of RAM for the disk cache
on top of the heap, and ideally the full 260GB.

http://wiki.apache.org/solr/SolrPerformanceProblems#OS_Disk_Cache
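
If you aren't sure how much of that 260GB actually lives on this box,
a quick walk over the index directory will tell you what the disk
cache is up against.  The path below is just a guess, so point it at
your real data directory:

#!/usr/bin/env python
# Rough check: total size of the Solr index files on this server.
import os

INDEX_DIR = '/opt/solr/data/index'  # hypothetical path, adjust for your install

total_bytes = 0
for root, dirs, files in os.walk(INDEX_DIR):
    for name in files:
        path = os.path.join(root, name)
        if os.path.isfile(path):
            total_bytes += os.path.getsize(path)

print('Index on disk: %.1f GB' % (total_bytes / (1024.0 ** 3)))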

If you are already using SSD, you might think there can't be
memory-related performance problems ... but you still need a pretty
significant chunk of disk cache.

https://wiki.apache.org/solr/SolrPerformanceProblems#SSD

Thanks,
Shawn
