[jira] [Comment Edited] (SOLR-12381) facet query causes down replicas

2018-05-23 Thread kiarash (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487582#comment-16487582
 ] 

kiarash edited comment on SOLR-12381 at 5/23/18 4:27 PM:
-

Thank you very much for your consideration.

Could you please explain what the oom_killer boundary means?
 Would you mind providing some details on how I can set SOLR_JAVA_MEM? As 
suggested, I have set it to half of the physical memory (SOLR_JAVA_MEM="-Xms512m 
-Xmx15240m").
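
If I understand correctly, this variable is read from Solr's include script, 
bin/solr.in.sh. A minimal sketch of that setting (the sizes below are 
illustrative for a 30 GB node, not my actual values):

  # bin/solr.in.sh (sketch): pin the JVM heap well below physical RAM,
  # leaving the remainder for the OS page cache that Lucene relies on.
  SOLR_JAVA_MEM="-Xms8g -Xmx8g"

Setting -Xms equal to -Xmx avoids heap resizing under load; the exact size is a 
judgment call against the node's 30 GB of RAM and the other collections it hosts.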

In addition, I wanted to know whether my problem is a bug that will not be fixed 
in version 6.


was (Author: zahirnia):
Thank you very much for your consideration.

Could you please explain what the oom_killer boundary means?
Would you mind providing some details on how I can set SOLR_JAVA_MEM? As 
suggested, I have set it to half of the physical memory (SOLR_JAVA_MEM="-Xms512m 
-Xmx10240m").

In addition, I wanted to know whether my problem is a bug that will not be fixed 
in version 6.

> facet query causes down replicas
> 
>
> Key: SOLR-12381
> URL: https://issues.apache.org/jira/browse/SOLR-12381
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>Affects Versions: 6.6.1
>Reporter: kiarash
>Priority: Major
>
> Cluster description:
> I have a Solr cluster with 3 nodes (node1, node2, node3).
> Each node has:
> 30 GB memory.
> 3 TB SATA Disk
> My cluster holds 5 collections which together contain more than a billion documents.
> I have a collection (news_archive) which contains 30 million documents. This 
> collection is divided into 3 shards, each of which holds 10 million documents 
> and occupies 100 GB on disk. Each shard has 3 replicas.
> Each cluster node hosts one replica of each shard; in fact, the nodes are laid 
> out identically, i.e.:
> node1 contains:
> shard1_replica1
> shard2_replica1
> shard3_replica1
> node2 contains:
> shard1_replica2
> shard2_replica2
> shard3_replica2
> node3 contains:
> shard1_replica3
> shard2_replica3
> shard3_replica3
> Problem description:
> When I run a heavy facet query, such as 
> http://Node1IP:/solr/news_archive/select?q=*:*=pubDate:[2018-1-18T12:06:57Z%20TO%202018-4-18T12:06:57Z]=ngram_content=true=1=2000=0=json,
> the Solr instances on almost all of the nodes are killed by the OOM killer.
> I found the log below in 
> solr/logs/solr_oom_killer--2018-05-21_19_17_41.log on each of the Solr 
> instances:
> "Running OOM killer script for process 2766 for Solr on port 
> Killed process 2766"
> It seems that the query is routed to the different nodes of the cluster, and 
> because of the exhaustive memory use the query causes, the Solr instances are 
> killed by the OOM killer.
>  
> Regardless of how memory-demanding the query is, I think the cluster's nodes 
> should be protected from being killed by any read query, for example by 
> limiting the amount of memory a single query can use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (SOLR-12381) facet query causes down replicas

2018-05-23 Thread kiarash (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16487582#comment-16487582
 ] 

kiarash commented on SOLR-12381:


Thank you very much for your consideration.

Could you please explain what the oom_killer boundary means?
Would you mind providing some details on how I can set SOLR_JAVA_MEM? As 
suggested, I have set it to half of the physical memory (SOLR_JAVA_MEM="-Xms512m 
-Xmx10240m").

In addition, I wanted to know whether my problem is a bug that will not be fixed 
in version 6.

> facet query causes down replicas
> 
>
> Key: SOLR-12381
> URL: https://issues.apache.org/jira/browse/SOLR-12381
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>Affects Versions: 6.6.1
>Reporter: kiarash
>Priority: Major
>
> Cluster description:
> I have a Solr cluster with 3 nodes (node1, node2, node3).
> Each node has:
> 30 GB memory.
> 3 TB SATA Disk
> My cluster holds 5 collections which together contain more than a billion documents.
> I have a collection (news_archive) which contains 30 million documents. This 
> collection is divided into 3 shards, each of which holds 10 million documents 
> and occupies 100 GB on disk. Each shard has 3 replicas.
> Each cluster node hosts one replica of each shard; in fact, the nodes are laid 
> out identically, i.e.:
> node1 contains:
> shard1_replica1
> shard2_replica1
> shard3_replica1
> node2 contains:
> shard1_replica2
> shard2_replica2
> shard3_replica2
> node3 contains:
> shard1_replica3
> shard2_replica3
> shard3_replica3
> Problem description:
> When I run a heavy facet query, such as 
> http://Node1IP:/solr/news_archive/select?q=*:*=pubDate:[2018-1-18T12:06:57Z%20TO%202018-4-18T12:06:57Z]=ngram_content=true=1=2000=0=json,
> the Solr instances on almost all of the nodes are killed by the OOM killer.
> I found the log below in 
> solr/logs/solr_oom_killer--2018-05-21_19_17_41.log on each of the Solr 
> instances:
> "Running OOM killer script for process 2766 for Solr on port 
> Killed process 2766"
> It seems that the query is routed to the different nodes of the cluster, and 
> because of the exhaustive memory use the query causes, the Solr instances are 
> killed by the OOM killer.
>  
> Regardless of how memory-demanding the query is, I think the cluster's nodes 
> should be protected from being killed by any read query, for example by 
> limiting the amount of memory a single query can use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Created] (SOLR-12381) facet query causes down replicas

2018-05-21 Thread kiarash (JIRA)
kiarash created SOLR-12381:
--

 Summary: facet query causes down replicas
 Key: SOLR-12381
 URL: https://issues.apache.org/jira/browse/SOLR-12381
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.6.1
Reporter: kiarash


Cluster description:


I have a Solr cluster with 3 nodes (node1, node2, node3).

Each node has:
30 GB memory.
3 TB SATA Disk

My cluster holds 5 collections which together contain more than a billion documents.

I have a collection (news_archive) which contains 30 million documents. This 
collection is divided into 3 shards, each of which holds 10 million documents and 
occupies 100 GB on disk. Each shard has 3 replicas.

Each cluster node hosts one replica of each shard; in fact, the nodes are laid 
out identically, i.e.:

node1 contains:
shard1_replica1
shard2_replica1
shard3_replica1
node2 contains:
shard1_replica2
shard2_replica2
shard3_replica2
node3 contains:
shard1_replica3
shard2_replica3
shard3_replica3

Problem description:


When I run a heavy facet query, such as 
http://Node1IP:/solr/news_archive/select?q=*:*=pubDate:[2018-1-18T12:06:57Z%20TO%202018-4-18T12:06:57Z]=ngram_content=true=1=2000=0=json,
the Solr instances on almost all of the nodes are killed by the OOM killer.
I found the log below in 
solr/logs/solr_oom_killer--2018-05-21_19_17_41.log on each of the Solr 
instances:

"Running OOM killer script for process 2766 for Solr on port 
Killed process 2766"
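
If I understand correctly, this log is written by Solr's bin/oom_solr.sh, which 
the start script registers as the JVM's OnOutOfMemoryError handler, roughly like 
this (the port and paths below are illustrative, not my actual values):

  # Sketch of the wiring done by bin/solr in 6.x; port and paths are examples only.
  -XX:OnOutOfMemoryError="/opt/solr/bin/oom_solr.sh 8983 /opt/solr/server/logs"
  # oom_solr.sh writes solr_oom_killer-<port>-<timestamp>.log and kill -9s the Solr process.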


It seems that the query is routed to the different nodes of the cluster, and 
because of the exhaustive memory use the query causes, the Solr instances are 
killed by the OOM killer.

 

Regardless of how memory-demanding the query is, I think the cluster's nodes 
should be protected from being killed by any read query, for example by limiting 
the amount of memory a single query can use.
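
For what it's worth, the facet request itself can also be made less memory-hungry 
on the client side; a sketch using standard facet parameters (the port, the 
facet.limit value, and the other parameter values are illustrative, not the ones 
from the query above):

  # Sketch: bound the number of facet buckets and skip document results.
  curl "http://Node1IP:8983/solr/news_archive/select" \
    --data-urlencode "q=*:*" \
    --data-urlencode "fq=pubDate:[2018-01-18T12:06:57Z TO 2018-04-18T12:06:57Z]" \
    --data-urlencode "facet=true" \
    --data-urlencode "facet.field=ngram_content" \
    --data-urlencode "facet.limit=100" \
    --data-urlencode "facet.mincount=1" \
    --data-urlencode "rows=0" \
    --data-urlencode "wt=json"

Faceting on a high-cardinality ngram field stays expensive either way; as far as 
I know, enabling docValues on that field (and reindexing) keeps the per-query 
memory better bounded than the field cache does, but it does not remove the need 
for a server-side limit.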



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
