[ https://issues.apache.org/jira/browse/SOLR-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488989#comment-16488989 ]

Shawn Heisey commented on SOLR-12381:
-------------------------------------

There are no generic answers regarding the required heap size.  It depends on 
the details of your specific installation, so it is usually impossible for 
anyone to tell you what that number will be.  We can make *guesses* given very 
detailed information about your setup, but the only way to know whether the 
guess is right is to try it.

Recommendations like "half of physical memory" are often completely wrong.  The 
best value is a number that's as large as you need, and no larger.

https://wiki.apache.org/solr/SolrPerformanceProblems#Java_Heap
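
For reference, the heap size is set in solr.in.sh (solr.in.cmd on Windows). 
The variable names below exist in the 6.x startup scripts; the values are 
illustrative examples only, not recommendations:

    # solr.in.sh -- choose ONE of these forms.  Start small, watch GC
    # behavior under real load, and raise the value only if needed.
    SOLR_HEAP="8g"
    # or the equivalent explicit JVM flags:
    # SOLR_JAVA_MEM="-Xms8g -Xmx8g"

The same value can also be passed ad hoc with 'bin/solr start -m 8g'.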

The oom killer script included with Solr is designed to kill Solr when Java 
throws ANY OutOfMemoryError.  There are multiple resource types that can 
result in OOME, not just heap space.  The 'boundary' that was mentioned just 
refers to the amount of heap memory that Solr needs for YOUR installation; it 
is not something configurable.
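
To make the mechanism concrete: bin/solr registers the script with the JVM at 
startup, and the script's only job is to kill the instance outright so it can 
be restarted cleanly instead of limping along after an OOME.  A rough sketch 
(the paths and port are illustrative; the real script varies by version):

    # JVM option added by bin/solr at startup:
    -XX:OnOutOfMemoryError="/opt/solr/bin/oom_solr.sh 8983 /opt/solr/server/logs"

    # oom_solr.sh, approximately: find the Solr JVM on that port,
    # log the event, and kill -9 it.  This is what produces the
    # solr_oom_killer-<port>-<timestamp>.log lines quoted in the
    # issue description below.
    SOLR_PID=$(ps auxww | grep start.jar | grep "jetty.port=8983" | \
        grep -v grep | awk '{print $2}' | sed 1q)
    echo "Running OOM killer script for process $SOLR_PID for Solr on port 8983"
    kill -9 $SOLR_PID
    echo "Killed process $SOLR_PID"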


> facet query causes down replicas
> --------------------------------
>
>                 Key: SOLR-12381
>                 URL: https://issues.apache.org/jira/browse/SOLR-12381
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public(Default Security Level. Issues are Public) 
>    Affects Versions: 6.6.1
>            Reporter: kiarash
>            Priority: Major
>
> Cluster description:
> I have a Solr cluster with 3 nodes (node1, node2, node3).
> Each node has:
> 30 GB of memory
> 3 TB SATA disk
> The cluster holds 5 collections which together contain more than a billion 
> documents.  One of them (the news_archive collection) contains 30 million 
> documents.  It is divided into 3 shards, each of which holds 10 million 
> documents and occupies 100 GB on disk.  Each shard has 3 replicas.
> Each cluster node hosts one replica of each shard; in other words, the 
> nodes are identical, i.e.:
> node1 contains:
> shard1_replica1
> shard2_replica1
> shard3_replica1
> node2 contains:
> shard1_replica2
> shard2_replica2
> shard3_replica2
> node3 contains:
> shard1_replica3
> shard2_replica3
> shard3_replica3
> Problem description:
> When I run a heavy facet query, 
> such as 
> http://Node1IP:xxxx/solr/news_archive/select?q=*:*&fq=pubDate:[2018-1-18T12:06:57Z%20TO%202018-4-18T12:06:57Z]&facet.field=ngram_content&facet=true&facet.mincount=1&facet.limit=2000&rows=0&wt=json,
> the Solr instances on almost all of the nodes are killed by the OOM killer.
> I found the log below in 
> solr/logs/solr_oom_killer-xxxx-2018-05-21_19_17_41.log on each of the Solr 
> instances:
> "Running OOM killer script for process 2766 for Solr on port xxxx
> Killed process 2766"
> It seems that the query is routed to the different nodes of the cluster, 
> and because of the exhaustive memory use the query causes, the Solr 
> instances are killed by the OOM killer.
>  
> Regardless of how memory-demanding the query is, I think the cluster's 
> nodes should be protected from being killed by any read query, for example 
> by limiting the amount of memory any single query can use.
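
A hedged aside on that last point: Solr has no per-query memory cap, and 
faceting on a tokenized ngram field un-inverts the field on the heap, which 
is what makes this query so expensive.  The closest existing request-level 
knobs are standard parameters that bound work indirectly (values below are 
illustrative; note that timeAllowed limits time, not memory):

    http://Node1IP:xxxx/solr/news_archive/select?q=*:*
        &fq=pubDate:[2018-1-18T12:06:57Z%20TO%202018-4-18T12:06:57Z]
        &facet=true&facet.field=ngram_content
        &facet.mincount=1&facet.limit=100
        &rows=0&timeAllowed=30000&wt=json

Cutting facet.limit does not by itself shrink the un-inverted field 
structure; faceting on a docValues-enabled, non-tokenized copy of the field 
is the usual way to take that cost off the heap.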


