[jira] [Commented] (SOLR-8492) Add LogisticRegressionQuery and LogitStream

2016-01-05 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15085146#comment-15085146
 ] 

Cao Manh Dat commented on SOLR-8492:


What a wonderful patch. I'm very excited about implementing ML algorithms using 
streaming.

A couple of comments for this patch:
{code}
//wi = alpha(outcome - sigmoid)*wi + xi
double sig = sigmoid(sum(multiply(vals, weights)));
error = outcome - sig;

workingWeights = sum(vals, multiply(error * alpha, weights));

for(int i=0; i<
{code}

> Add LogisticRegressionQuery and LogitStream
> ---
>
> Key: SOLR-8492
> URL: https://issues.apache.org/jira/browse/SOLR-8492
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
> Attachments: SOLR-8492.patch
>
>
> This ticket is to add a new query called a LogisticRegressionQuery (LRQ).
> The LRQ extends AnalyticsQuery 
> (http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html)
>  and returns a DelegatingCollector that implements a Stochastic Gradient 
> Descent (SGD) optimizer for Logistic Regression.
> This ticket also adds the LogitStream which leverages Streaming Expressions 
> to provide iteration over the shards. Each call to LogitStream.read() calls 
> down to the shards and executes the LogisticRegressionQuery. The model data 
> is collected from the shards and the weights are averaged and sent back to 
> the shards with the next iteration. Each call to read() returns a Tuple with 
> the averaged weights and error from the shards. With this approach the 
> LogitStream streams the changing model back to the client after each 
> iteration.
> The LogitStream will return the EOF Tuple when it reaches the defined 
> maxIterations. When sent as a Streaming Expression to the Stream handler this 
> provides parallel iterative behavior. This same approach can be used to 
> implement other parallel iterative algorithms.
> The initial patch has a test which simply tests the mechanics of the 
> iteration. More work will need to be done to ensure the SGD is properly 
> implemented. The distributed approach of the SGD will also need to be 
> reviewed.  
> This implementation is designed for use cases with a small number of features 
> because each feature is its own discrete field.
> An implementation which supports a higher number of features would be 
> possible by packing features into a byte array and storing as binary 
> DocValues.
> This implementation is designed to support a large sample set. With a large 
> number of shards, a sample set into the billions may be possible.
> sample Streaming Expression Syntax:
> {code}
> logit(collection1, features="a,b,c,d,e,f", outcome="x", maxIterations="80")
> {code}
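As a hedged illustration of the two ideas discussed above (the SGD weight update in the first comment, and the ticket's shard-weight averaging), here is a minimal sketch. It is not the patch's actual code; the method and variable names are assumptions, and the update follows the standard logistic-regression rule w_i <- w_i + alpha * (outcome - sigmoid(w.x)) * x_i:

```java
public class SgdLogitSketch {
    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    // One SGD step: w_i <- w_i + alpha * (outcome - sigmoid(w.x)) * x_i
    static double[] step(double[] weights, double[] vals, double outcome, double alpha) {
        double dot = 0.0;
        for (int i = 0; i < weights.length; i++) {
            dot += weights[i] * vals[i];
        }
        double error = outcome - sigmoid(dot);
        double[] next = new double[weights.length];
        for (int i = 0; i < weights.length; i++) {
            next[i] = weights[i] + alpha * error * vals[i];
        }
        return next;
    }

    // Average the per-shard weight vectors, as the ticket describes the
    // LogitStream doing between iterations.
    static double[] averageShardWeights(double[][] shardWeights) {
        double[] avg = new double[shardWeights[0].length];
        for (double[] w : shardWeights) {
            for (int i = 0; i < avg.length; i++) {
                avg[i] += w[i] / shardWeights.length;
            }
        }
        return avg;
    }
}
```

Note that under this standard rule the gradient term multiplies the feature value x_i, not the weight w_i, which is the distinction the comment's formula appears to be probing.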



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8493) SolrHadoopAuthenticationFilter.getZkChroot: java.lang.StringIndexOutOfBoundsException: String index out of range: -1

2016-01-05 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15085140#comment-15085140
 ] 

Ishan Chattopadhyaya edited comment on SOLR-8493 at 1/6/16 7:37 AM:


SolrHadoopAuthenticationFilter is custom code introduced in CDH's version of 
Solr. Can you please check with Cloudera's support? (It is roughly equivalent 
to Solr's KerberosFilter.)


was (Author: ichattopadhyaya):
SolrHadoopAuthenticationFilter is custom code introduced in CDH's version of 
Solr. Can you please check with Cloudera's support?

> SolrHadoopAuthenticationFilter.getZkChroot: 
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
> 
>
> Key: SOLR-8493
> URL: https://issues.apache.org/jira/browse/SOLR-8493
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 4.10.3
>Reporter: zuotingbing
>
> [error info]
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
> at java.lang.String.substring(String.java:1904)
> at 
> org.apache.solr.servlet.SolrHadoopAuthenticationFilter.getZkChroot(SolrHadoopAuthenticationFilter.java:147)
> [source code]:
> SolrHadoopAuthenticationFilter.java
>   private String getZkChroot() {
> String zkHost = System.getProperty("zkHost");
> return zkHost != null?
>   zkHost.substring(zkHost.indexOf("/"), zkHost.length()) : "/solr";
>   }






[jira] [Commented] (SOLR-8493) SolrHadoopAuthenticationFilter.getZkChroot: java.lang.StringIndexOutOfBoundsException: String index out of range: -1

2016-01-05 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15085140#comment-15085140
 ] 

Ishan Chattopadhyaya commented on SOLR-8493:


SolrHadoopAuthenticationFilter is custom code introduced in CDH's version of 
Solr. Can you please check with Cloudera's support?

> SolrHadoopAuthenticationFilter.getZkChroot: 
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
> 
>
> Key: SOLR-8493
> URL: https://issues.apache.org/jira/browse/SOLR-8493
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 4.10.3
>Reporter: zuotingbing
>
> [error info]
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
> at java.lang.String.substring(String.java:1904)
> at 
> org.apache.solr.servlet.SolrHadoopAuthenticationFilter.getZkChroot(SolrHadoopAuthenticationFilter.java:147)
> [source code]:
> SolrHadoopAuthenticationFilter.java
>   private String getZkChroot() {
> String zkHost = System.getProperty("zkHost");
> return zkHost != null?
>   zkHost.substring(zkHost.indexOf("/"), zkHost.length()) : "/solr";
>   }






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_66) - Build # 15454 - Still Failing!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15454/
Java: 32bit/jdk1.8.0_66 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Didn't see all replicas for shard shard1 in c8n_1x2 come up within 3 ms! 
ClusterState: {   "collection1":{ "replicationFactor":"1", "shards":{   
"shard1":{ "range":"8000-", "state":"active",   
  "replicas":{"core_node2":{ "core":"collection1", 
"base_url":"http://127.0.0.1:53250/rk/yy";, 
"node_name":"127.0.0.1:53250_rk%2Fyy", "state":"active",
 "leader":"true"}}},   "shard2":{ "range":"0-7fff", 
"state":"active", "replicas":{   "core_node1":{ 
"core":"collection1", "base_url":"http://127.0.0.1:38418/rk/yy";,
 "node_name":"127.0.0.1:38418_rk%2Fyy", "state":"active",   
  "leader":"true"},   "core_node3":{ 
"core":"collection1", "base_url":"http://127.0.0.1:56474/rk/yy";,
 "node_name":"127.0.0.1:56474_rk%2Fyy", 
"state":"active", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false", 
"autoCreated":"true"},   "control_collection":{ "replicationFactor":"1",
 "shards":{"shard1":{ "range":"8000-7fff", 
"state":"active", "replicas":{"core_node1":{ 
"core":"collection1", "base_url":"http://127.0.0.1:57012/rk/yy";,
 "node_name":"127.0.0.1:57012_rk%2Fyy", "state":"active",   
  "leader":"true", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false", 
"autoCreated":"true"},   "c8n_1x2":{ "replicationFactor":"2", 
"shards":{"shard1":{ "range":"8000-7fff", 
"state":"active", "replicas":{   "core_node1":{ 
"core":"c8n_1x2_shard1_replica1", 
"base_url":"http://127.0.0.1:56474/rk/yy";, 
"node_name":"127.0.0.1:56474_rk%2Fyy", "state":"recovering"},   
"core_node2":{ "core":"c8n_1x2_shard1_replica2", 
"base_url":"http://127.0.0.1:38418/rk/yy";, 
"node_name":"127.0.0.1:38418_rk%2Fyy", "state":"active",
 "leader":"true", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false"},   "collMinRf_1x3":{ 
"replicationFactor":"3", "shards":{"shard1":{ 
"range":"8000-7fff", "state":"active", "replicas":{ 
  "core_node1":{ "core":"collMinRf_1x3_shard1_replica3",
 "base_url":"http://127.0.0.1:57012/rk/yy";, 
"node_name":"127.0.0.1:57012_rk%2Fyy", "state":"active"},   
"core_node2":{ "core":"collMinRf_1x3_shard1_replica2", 
"base_url":"http://127.0.0.1:56474/rk/yy";, 
"node_name":"127.0.0.1:56474_rk%2Fyy", "state":"active"},   
"core_node3":{ "core":"collMinRf_1x3_shard1_replica1", 
"base_url":"http://127.0.0.1:38418/rk/yy";, 
"node_name":"127.0.0.1:38418_rk%2Fyy", "state":"active",
 "leader":"true", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false"}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in c8n_1x2 
come up within 3 ms! ClusterState: {
  "collection1":{
"replicationFactor":"1",
"shards":{
  "shard1":{
"range":"8000-",
"state":"active",
"replicas":{"core_node2":{
"core":"collection1",
"base_url":"http://127.0.0.1:53250/rk/yy";,
"node_name":"127.0.0.1:53250_rk%2Fyy",
"state":"active",
"leader":"true"}}},
  "shard2":{
"range":"0-7fff",
"state":"active",
"replicas":{
  "core_node1":{
"core":"collection1",
"base_url":"http://127.0.0.1:38418/rk/yy";,
"node_name":"127.0.0.1:38418_rk%2Fyy",
"state":"active",
"leader":"true"},
  "core_node3":{
"core":"collection1",
"base_url":"http://127.0.0.1:56474/rk/yy";,
"node_name":"127.0.0.1:56474_rk%2Fyy",
"state":"active",
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"autoCreated":"true"},
  "control_collection":{
"replicationFactor":"1",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{"core_node1":{
"core":"collection1",
"base_url":"http://127.0.0.1:57012/rk/yy";,
"node_name":"127.0.0.1:57012_rk%2Fyy",

[jira] [Created] (SOLR-8493) SolrHadoopAuthenticationFilter.getZkChroot: java.lang.StringIndexOutOfBoundsException: String index out of range: -1

2016-01-05 Thread zuotingbing (JIRA)
zuotingbing created SOLR-8493:
-

 Summary: SolrHadoopAuthenticationFilter.getZkChroot: 
java.lang.StringIndexOutOfBoundsException: String index out of range: -1
 Key: SOLR-8493
 URL: https://issues.apache.org/jira/browse/SOLR-8493
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.10.3
Reporter: zuotingbing


[error info]
java.lang.StringIndexOutOfBoundsException: String index out of range: -1
at java.lang.String.substring(String.java:1904)
at 
org.apache.solr.servlet.SolrHadoopAuthenticationFilter.getZkChroot(SolrHadoopAuthenticationFilter.java:147)

[source code]:
SolrHadoopAuthenticationFilter.java

  private String getZkChroot() {
String zkHost = System.getProperty("zkHost");
return zkHost != null?
  zkHost.substring(zkHost.indexOf("/"), zkHost.length()) : "/solr";
  }
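The exception arises because String.indexOf returns -1 when zkHost contains no "/", and substring(-1, ...) then throws StringIndexOutOfBoundsException. A guarded sketch follows; the "/solr" fallback mirrors the null case above, and this is illustrative only, not the official CDH fix:

```java
public class ZkChrootSketch {
    // Guarded variant: only call substring when a chroot is actually present.
    static String getZkChroot(String zkHost) {
        if (zkHost == null) {
            return "/solr";
        }
        int idx = zkHost.indexOf('/');
        // indexOf returns -1 when zkHost has no chroot path; fall back
        // instead of calling substring with a negative index.
        return idx >= 0 ? zkHost.substring(idx) : "/solr";
    }
}
```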







[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3910 - Still Failing

2016-01-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3910/

All tests passed

Build Log:
[...truncated 9622 lines...]
[javac] Compiling 613 source files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/classes/test
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:103:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent(CoreDescriptor.CORE_CONFIG, 
"solrconfig-tlog.xml");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:104:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.maxBufferedDocs", 
"10");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:105:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.ramBufferSizeMB", 
"100");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:107:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.mergePolicy", 
"org.apache.lucene.index.TieredMergePolicy");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:108:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.mergeScheduler", 
"org.apache.lucene.index.ConcurrentMergeScheduler");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:109:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.directoryFactory", 
(persistIndex ? "solr.StandardDirectoryFactory" : "solr.RAMDirectoryFactory"));
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 6 errors

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:794: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:738: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:59: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build.xml:233:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/common-build.xml:526:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:808:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:822:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:1956:
 Compile failed; see the compiler error output for details.

Total time: 26 minutes 43 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
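The six "cannot find symbol" errors above all stem from Map.putIfAbsent, a default method added to java.util.Map in Java 8, while this 5.x job compiles against Java 7. A Java 7-compatible helper would look like the sketch below (illustrative only, not the committed fix):

```java
import java.util.Map;

public class PutIfAbsentCompat {
    // Java 7 equivalent of Java 8's Map.putIfAbsent default method:
    // insert only when no mapping exists, returning the previous value
    // (or null if the key was absent).
    static <K, V> V putIfAbsent(Map<K, V> map, K key, V value) {
        V existing = map.get(key);
        if (existing == null) {
            map.put(key, value);
        }
        return existing;
    }
}
```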




[jira] [Comment Edited] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15085065#comment-15085065
 ] 

Mark Miller edited comment on SOLR-8453 at 1/6/16 6:45 AM:
---

Okay, I'm starting to think it's a change in consumeAll in - 
https://github.com/eclipse/jetty.project/blame/jetty-9.3.x/jetty-server/src/main/java/org/eclipse/jetty/server/HttpInput.java#L443

I think perhaps that is now returning false in 
https://github.com/eclipse/jetty.project/blob/jetty-9.3.x/jetty-server/src/main/java/org/eclipse/jetty/server/HttpConnection.java#L403

It's looking to me like the server resets the connection because of unconsumed 
content, and previously it must have been properly consuming the extra.
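A hedged sketch of the behavior being described: before finishing with the connection, the server would need to drain any request body the client is still sending, or the leftover bytes cause a reset. This is illustrative only, not Jetty's actual consumeAll implementation:

```java
import java.io.IOException;
import java.io.InputStream;

public class DrainSketch {
    // Read and discard whatever unconsumed content remains on the stream,
    // returning the number of bytes discarded. Skipping this step leaves
    // data in flight, which can surface as a connection reset on the client.
    static long drain(InputStream in) {
        byte[] buf = new byte[8192];
        long discarded = 0;
        int n;
        try {
            while ((n = in.read(buf)) != -1) {
                discarded += n;
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return discarded;
    }
}
```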


was (Author: markrmil...@gmail.com):
Okay, I'm starting to think it's a change in consumeAll in - 
https://github.com/eclipse/jetty.project/blame/jetty-9.3.x/jetty-server/src/main/java/org/eclipse/jetty/server/HttpInput.java

I think perhaps that is now returning false in 
https://github.com/eclipse/jetty.project/blob/jetty-9.3.x/jetty-server/src/main/java/org/eclipse/jetty/server/HttpConnection.java

It's looking to me like the server resets the connection because of unconsumed 
content, and previously it must have been properly consuming the extra.

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453_test.patch, SOLR-8453_test.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.






[jira] [Commented] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15085065#comment-15085065
 ] 

Mark Miller commented on SOLR-8453:
---

Okay, I'm starting to think it's a change in consumeAll in - 
https://github.com/eclipse/jetty.project/blame/jetty-9.3.x/jetty-server/src/main/java/org/eclipse/jetty/server/HttpInput.java

I think perhaps that is now returning false in 
https://github.com/eclipse/jetty.project/blob/jetty-9.3.x/jetty-server/src/main/java/org/eclipse/jetty/server/HttpConnection.java

It's looking to me like the server resets the connection because of unconsumed 
content, and previously it must have been properly consuming the extra.

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453_test.patch, SOLR-8453_test.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.






[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 1067 - Still Failing

2016-01-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/1067/

All tests passed

Build Log:
[...truncated 10091 lines...]
[javac] Compiling 613 source files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/classes/test
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:103:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent(CoreDescriptor.CORE_CONFIG, 
"solrconfig-tlog.xml");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:104:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.maxBufferedDocs", 
"10");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:105:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.ramBufferSizeMB", 
"100");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:107:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.mergePolicy", 
"org.apache.lucene.index.TieredMergePolicy");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:108:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.mergeScheduler", 
"org.apache.lucene.index.ConcurrentMergeScheduler");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:109:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.directoryFactory", 
(persistIndex ? "solr.StandardDirectoryFactory" : "solr.RAMDirectoryFactory"));
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 6 errors

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/build.xml:801: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/build.xml:738: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/build.xml:59: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build.xml:233:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/common-build.xml:526:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/common-build.xml:808:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/common-build.xml:822:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/common-build.xml:1956:
 Compile failed; see the compiler error output for details.

Total time: 133 minutes 9 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




Re: Breaking Java back-compat in Solr

2016-01-05 Thread Anshum Gupta
As I understand, seems like there's reasonable consensus that we will:

1. Provide strong back-compat for SolrJ and REST APIs.
2. Strive to maintain, but not guarantee, *strong* back-compat for Java APIs.

Please correct me if I'm wrong.


On Mon, Jan 4, 2016 at 9:57 PM, Anshum Gupta  wrote:

> Hi,
>
> I was looking at refactoring code in Solr and it gets really tricky and
> confusing in terms of what level of back-compat needs to be maintained.
> Ideally, we should only maintain back-compat at the REST API level. We may
> annotate a few really important Java APIs where we guarantee back-compat
> across minor versions, but we certainly shouldn't be doing that across the
>
> Thoughts?
>
> P.S: I hope this doesn't spin-off into something I fear :)
>
> --
> Anshum Gupta
>



-- 
Anshum Gupta


[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 15157 - Failure!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15157/
Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 9752 lines...]
[javac] Compiling 613 source files to 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/classes/test
[javac] 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:103:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent(CoreDescriptor.CORE_CONFIG, 
"solrconfig-tlog.xml");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:104:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.maxBufferedDocs", 
"10");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:105:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.ramBufferSizeMB", 
"100");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:107:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.mergePolicy", 
"org.apache.lucene.index.TieredMergePolicy");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:108:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.mergeScheduler", 
"org.apache.lucene.index.ConcurrentMergeScheduler");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:109:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.directoryFactory", 
(persistIndex ? "solr.StandardDirectoryFactory" : "solr.RAMDirectoryFactory"));
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 6 errors

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:794: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:738: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:59: The following error 
occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:233: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/common-build.xml:526: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:808: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:822: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:1956: 
Compile failed; see the compiler error output for details.

Total time: 22 minutes 39 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15453 - Still Failing!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15453/
Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseSerialGC 
-XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Didn't see all replicas for shard shard1 in c8n_1x2 come up within 3 ms! 
ClusterState: {   "collection1":{ "replicationFactor":"1", "shards":{   
"shard1":{ "range":"8000-", "state":"active",   
  "replicas":{"core_node2":{ "core":"collection1", 
"base_url":"http://127.0.0.1:44422/ccb";, 
"node_name":"127.0.0.1:44422_ccb", "state":"active", 
"leader":"true"}}},   "shard2":{ "range":"0-7fff", 
"state":"active", "replicas":{   "core_node1":{ 
"core":"collection1", "base_url":"http://127.0.0.1:57216/ccb";,  
   "node_name":"127.0.0.1:57216_ccb", "state":"active", 
"leader":"true"},   "core_node3":{ 
"core":"collection1", "base_url":"http://127.0.0.1:56673/ccb";,  
   "node_name":"127.0.0.1:56673_ccb", "state":"active", 
"router":{"name":"compositeId"}, "maxShardsPerNode":"1", 
"autoAddReplicas":"false", "autoCreated":"true"},   "control_collection":{  
   "replicationFactor":"1", "shards":{"shard1":{ 
"range":"8000-7fff", "state":"active", 
"replicas":{"core_node1":{ "core":"collection1", 
"base_url":"http://127.0.0.1:44293/ccb";, 
"node_name":"127.0.0.1:44293_ccb", "state":"active", 
"leader":"true", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false", 
"autoCreated":"true"},   "c8n_1x2":{ "replicationFactor":"2", 
"shards":{"shard1":{ "range":"8000-7fff", 
"state":"active", "replicas":{   "core_node1":{ 
"core":"c8n_1x2_shard1_replica2", 
"base_url":"http://127.0.0.1:56673/ccb";, 
"node_name":"127.0.0.1:56673_ccb", "state":"recovering"},   
"core_node2":{ "core":"c8n_1x2_shard1_replica1", 
"base_url":"http://127.0.0.1:57216/ccb";, 
"node_name":"127.0.0.1:57216_ccb", "state":"active", 
"leader":"true", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false"},   "collMinRf_1x3":{ 
"replicationFactor":"3", "shards":{"shard1":{ 
"range":"8000-7fff", "state":"active", "replicas":{ 
  "core_node1":{ "core":"collMinRf_1x3_shard1_replica3",
 "base_url":"http://127.0.0.1:44293/ccb";, 
"node_name":"127.0.0.1:44293_ccb", "state":"active"},   
"core_node2":{ "core":"collMinRf_1x3_shard1_replica2", 
"base_url":"http://127.0.0.1:57216/ccb";, 
"node_name":"127.0.0.1:57216_ccb", "state":"active", 
"leader":"true"},   "core_node3":{ 
"core":"collMinRf_1x3_shard1_replica1", 
"base_url":"http://127.0.0.1:44422/ccb";, 
"node_name":"127.0.0.1:44422_ccb", "state":"active", 
"router":{"name":"compositeId"}, "maxShardsPerNode":"1", 
"autoAddReplicas":"false"}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in c8n_1x2 
come up within 3 ms! ClusterState: {
  "collection1":{
"replicationFactor":"1",
"shards":{
  "shard1":{
"range":"8000-",
"state":"active",
"replicas":{"core_node2":{
"core":"collection1",
"base_url":"http://127.0.0.1:44422/ccb";,
"node_name":"127.0.0.1:44422_ccb",
"state":"active",
"leader":"true"}}},
  "shard2":{
"range":"0-7fff",
"state":"active",
"replicas":{
  "core_node1":{
"core":"collection1",
"base_url":"http://127.0.0.1:57216/ccb";,
"node_name":"127.0.0.1:57216_ccb",
"state":"active",
"leader":"true"},
  "core_node3":{
"core":"collection1",
"base_url":"http://127.0.0.1:56673/ccb";,
"node_name":"127.0.0.1:56673_ccb",
"state":"active",
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"autoCreated":"true"},
  "control_collection":{
"replicationFactor":"1",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{"core_node1":{
"core":"collection1",
"base_url":"http://127.0.0.1:44293/ccb";,
"node_name":"127.0.0.1:44293_ccb",
"state":"active",
"leader":"true"}}}

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_66) - Build # 15452 - Failure!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15452/
Java: 32bit/jdk1.8.0_66 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.search.AnalyticsMergeStrategyTest.test

Error Message:
Error from server at http://127.0.0.1:45334//collection1: 
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: 
Scheme 'http' not registered.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:45334//collection1: 
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: 
Scheme 'http' not registered.
at 
__randomizedtesting.SeedInfo.seed([4C93A2BA95E86402:C4C79D603B1409FA]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:562)
at 
org.apache.solr.search.AnalyticsMergeStrategyTest.test(AnalyticsMergeStrategyTest.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Sta

[JENKINS] Lucene-Solr-5.x-Solaris (64bit/jdk1.8.0) - Build # 310 - Failure!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/310/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.security.TestAuthorizationFramework.authorizationFrameworkTest

Error Message:
There are still nodes recoverying - waited for 10 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 10 
seconds
at 
__randomizedtesting.SeedInfo.seed([9470A0F0FCD46C10:DAD3D523ED0F7D00]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:175)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:837)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1393)
at 
org.apache.solr.security.TestAuthorizationFramework.authorizationFrameworkTest(TestAuthorizationFramework.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1

[jira] [Commented] (SOLR-4327) SolrJ code review indicates potential for leaked HttpClient connections

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084569#comment-15084569
 ] 

Mark Miller commented on SOLR-4327:
---

My mistake for not taking the time to really dig into this one. This was a 
mistake to add, though it had no ill effect. I've addressed it in SOLR-8451 and 
added some connection reuse testing.

> SolrJ code review indicates potential for leaked HttpClient connections
> ---
>
> Key: SOLR-4327
> URL: https://issues.apache.org/jira/browse/SOLR-4327
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.0
>Reporter: Karl Wright
>Assignee: Mark Miller
> Fix For: 4.5.1, 4.6, Trunk
>
> Attachments: SOLR-4327.patch, SOLR-4327.patch
>
>
> The SolrJ HttpSolrServer implementation does not seem to handle errors 
> properly and seems capable of leaking HttpClient connections.  See the 
> request() method in org.apache.solr.client.solrj.impl.HttpSolrServer.  The 
> issue is that exceptions thrown from within this method do not necessarily 
> consume the stream when an exception is thrown.  There is a try/finally block 
> which reads (in part):
> {code}
> } finally {
>   if (respBody != null && processor!=null) {
> try {
>   respBody.close();
> } catch (Throwable t) {} // ignore
>   }
> }
> {code}
> But, in order to always guarantee consumption of the stream, it should 
> include:
> {code}
> method.abort();
> {code}
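The pattern the issue description is driving at can be sketched without HttpClient at all: if closing the response body fails, fall back to an abort action (standing in for `method.abort()`) so the underlying connection is released rather than returned to the pool half-read. The class and method names below are illustrative, not SolrJ's API.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

/** Hypothetical sketch of the cleanup pattern described in SOLR-4327. */
public class StreamCleanup {

    /** Returns true if the stream closed normally, false if we had to abort. */
    static boolean closeOrAbort(InputStream respBody, Runnable abort) {
        if (respBody == null) {
            return true; // nothing to clean up
        }
        try {
            respBody.close();
            return true;
        } catch (Throwable t) {
            // close() failed: abort releases the underlying connection
            // instead of returning it to the pool in an unknown state.
            abort.run();
            return false;
        }
    }

    public static void main(String[] args) {
        InputStream ok = new ByteArrayInputStream(new byte[] {1, 2, 3});
        System.out.println(closeOrAbort(ok, () -> {})); // true

        InputStream broken = new InputStream() {
            @Override public int read() { return -1; }
            @Override public void close() throws IOException {
                throw new IOException("connection reset");
            }
        };
        // abort callback fires because close() throws
        System.out.println(closeOrAbort(broken, () -> System.out.println("aborted")));
    }
}
```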



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Updated] (SOLR-8450) Internal HttpClient used in SolrJ is retries requests by default

2016-01-05 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8450:
--
Attachment: SOLR-8450.patch

> Internal HttpClient used in SolrJ is retries requests by default
> 
>
> Key: SOLR-8450
> URL: https://issues.apache.org/jira/browse/SOLR-8450
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java, SolrJ
>Reporter: Shalin Shekhar Mangar
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8450.patch, SOLR-8450.patch
>
>







[jira] [Updated] (SOLR-8451) We should not call method.abort in HttpSolrClient.

2016-01-05 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8451:
--
Attachment: SOLR-8451.patch

> We should not call method.abort in HttpSolrClient.
> --
>
> Key: SOLR-8451
> URL: https://issues.apache.org/jira/browse/SOLR-8451
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8451.patch, SOLR-8451.patch, SOLR-8451.patch, 
> SOLR-8451.patch
>
>







[jira] [Commented] (SOLR-8450) Internal HttpClient used in SolrJ is retries requests by default

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084556#comment-15084556
 ] 

Mark Miller commented on SOLR-8450:
---

Well, in the end, it doesn't seem to be safe to retry on any updates, even if it's 
a single update in the request. How about just retrying on GET?
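The "retry only on GET" policy suggested here can be sketched independently of HttpClient's retry-handler interfaces: after an I/O failure, replay only idempotent GETs, never updates, and cap the attempt count. The class and method names are illustrative, not SolrJ or HttpClient API.

```java
/** Hypothetical sketch of a GET-only retry policy. */
public class GetOnlyRetryPolicy {
    private final int maxRetries;

    GetOnlyRetryPolicy(int maxRetries) { this.maxRetries = maxRetries; }

    boolean shouldRetry(String httpMethod, int executionCount) {
        if (executionCount > maxRetries) {
            return false; // give up after the cap
        }
        // An update may already have been applied server-side before the
        // failure was seen, so a blind resend can duplicate it; a GET is
        // safe to repeat.
        return "GET".equalsIgnoreCase(httpMethod);
    }

    public static void main(String[] args) {
        GetOnlyRetryPolicy policy = new GetOnlyRetryPolicy(3);
        System.out.println(policy.shouldRetry("GET", 1));  // true
        System.out.println(policy.shouldRetry("POST", 1)); // false
        System.out.println(policy.shouldRetry("GET", 4));  // false: over cap
    }
}
```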

> Internal HttpClient used in SolrJ is retries requests by default
> 
>
> Key: SOLR-8450
> URL: https://issues.apache.org/jira/browse/SOLR-8450
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java, SolrJ
>Reporter: Shalin Shekhar Mangar
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8450.patch
>
>







[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 15155 - Failure!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15155/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 9777 lines...]
[javac] Compiling 613 source files to 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/classes/test
[javac] 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:103:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent(CoreDescriptor.CORE_CONFIG, 
"solrconfig-tlog.xml");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:104:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.maxBufferedDocs", 
"10");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:105:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.ramBufferSizeMB", 
"100");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:107:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.mergePolicy", 
"org.apache.lucene.index.TieredMergePolicy");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:108:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.mergeScheduler", 
"org.apache.lucene.index.ConcurrentMergeScheduler");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:109:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.directoryFactory", 
(persistIndex ? "solr.StandardDirectoryFactory" : "solr.RAMDirectoryFactory"));
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 6 errors

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:794: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:738: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:59: The following error 
occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:233: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/common-build.xml:526: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:808: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:822: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:1956: 
Compile failed; see the compiler error output for details.

Total time: 24 minutes 2 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 313 - Still Failing!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/313/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([F02F1A05AA45ED97:1975A13D34DC7D3F]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:755)
at 
org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir(TestArbitraryIndexDir.java:107)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=*[count(//doc)=1]
xml response was: 

00


request was:q=id:2&qt=standard&start=0&rows=20&version=2.2
at org.apache.solr

[jira] [Commented] (SOLR-8450) Internal HttpClient used in SolrJ is retries requests by default

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084520#comment-15084520
 ] 

Mark Miller commented on SOLR-8450:
---

I guess even so, with a non-streaming request where the size is known, we still 
don't want to retry when batching either. I'm not sure how easy that is to detect.

> Internal HttpClient used in SolrJ is retries requests by default
> 
>
> Key: SOLR-8450
> URL: https://issues.apache.org/jira/browse/SOLR-8450
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java, SolrJ
>Reporter: Shalin Shekhar Mangar
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8450.patch
>
>







[jira] [Commented] (SOLR-8489) TestMiniSolrCloudCluster.createCollection to support extra & alternative collectionProperties

2016-01-05 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084514#comment-15084514
 ] 

Steve Rowe commented on SOLR-8489:
--

Compilation on branch_5x is failing while compiling 
{{TestMiniSolrCloudCluster.java}}.  See e.g. 
https://builds.apache.org/job/Lucene-Solr-Maven-5.x/1152/.

{{ant compile-test}} fails for me - {{Map.putIfAbsent()}} was added in Java 8:

{noformat}
common.compile-test:
[javac] Compiling 7 source files to 
/Users/sarowe/svn/lucene/dev/branches/branch_5x/solr/build/solr-core/classes/test
[javac] 
/Users/sarowe/svn/lucene/dev/branches/branch_5x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:103:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent(CoreDescriptor.CORE_CONFIG, 
"solrconfig-tlog.xml");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/Users/sarowe/svn/lucene/dev/branches/branch_5x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:104:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.maxBufferedDocs", 
"10");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/Users/sarowe/svn/lucene/dev/branches/branch_5x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:105:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.ramBufferSizeMB", 
"100");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/Users/sarowe/svn/lucene/dev/branches/branch_5x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:107:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.mergePolicy", 
"org.apache.lucene.index.TieredMergePolicy");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/Users/sarowe/svn/lucene/dev/branches/branch_5x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:108:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.mergeScheduler", 
"org.apache.lucene.index.ConcurrentMergeScheduler");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/Users/sarowe/svn/lucene/dev/branches/branch_5x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:109:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.directoryFactory", 
(persistIndex ? "solr.StandardDirectoryFactory" : "solr.RAMDirectoryFactory"));
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] Note: 
/Users/sarowe/svn/lucene/dev/branches/branch_5x/solr/core/src/test/org/apache/solr/search/mlt/CloudMLTQParserTest.java
 uses unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 6 errors
{noformat}
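Since branch_5x still targets Java 7, the calls above need a pre-Java 8 equivalent of {{Map.putIfAbsent()}}. A hypothetical Java 7-compatible helper (note it ignores the subtlety that Java 8's version also returns the previous value):

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical pre-Java 8 stand-in for Map.putIfAbsent(K, V). */
public class PutIfAbsentCompat {

    static <K, V> void putIfAbsent(Map<K, V> map, K key, V value) {
        if (!map.containsKey(key)) { // only set a default when unset
            map.put(key, value);
        }
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("solr.tests.maxBufferedDocs", "1000");         // caller-supplied
        putIfAbsent(props, "solr.tests.maxBufferedDocs", "10");  // kept as 1000
        putIfAbsent(props, "solr.tests.ramBufferSizeMB", "100"); // default added
        System.out.println(props.get("solr.tests.maxBufferedDocs")); // 1000
        System.out.println(props.get("solr.tests.ramBufferSizeMB")); // 100
    }
}
```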

> TestMiniSolrCloudCluster.createCollection to support extra & alternative 
> collectionProperties
> -
>
> Key: SOLR-8489
> URL: https://issues.apache.org/jira/browse/SOLR-8489
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8489.patch
>
>
> * add optional collectionProperties map arg and use putIfAbsent instead of 
> put with the map
> * move persistIndex i.e. solr.directoryFactory randomisation from the several 
> callers to just-once in createCollection
> These changes are refactors only and intended to *not* change the existing 
> tests' behaviour.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8450) Internal HttpClient used in SolrJ retries requests by default

2016-01-05 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8450:
--
Attachment: SOLR-8450.patch

> Internal HttpClient used in SolrJ retries requests by default
> 
>
> Key: SOLR-8450
> URL: https://issues.apache.org/jira/browse/SOLR-8450
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java, SolrJ
>Reporter: Shalin Shekhar Mangar
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8450.patch
>
>







[jira] [Commented] (SOLR-8450) Internal HttpClient used in SolrJ retries requests by default

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084496#comment-15084496
 ] 

Mark Miller commented on SOLR-8450:
---

This was causing my new connection reuse test in SOLR-8451 to fail on trunk 
(only with the jetty upgrade).

It seems that we were retrying on ConcurrentUpdateSolrClient requests. I had 
expected those retries to fail as non-retriable.

Here is a patch with a subset of changes from SOLR-8451. We can use chunked 
encoding to detect streaming if we start using the content stream sizes in 
HttpSolrClient (which is more efficient anyway?).

> Internal HttpClient used in SolrJ retries requests by default
> 
>
> Key: SOLR-8450
> URL: https://issues.apache.org/jira/browse/SOLR-8450
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java, SolrJ
>Reporter: Shalin Shekhar Mangar
> Fix For: 5.5, Trunk
>
>







[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #1152: POMs out of sync

2016-01-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/1152/

No tests ran.

Build Log:
[...truncated 39672 lines...]
  [mvn] [INFO] -
  [mvn] [INFO] -
  [mvn] [ERROR] COMPILATION ERROR : 
  [mvn] [INFO] -

[...truncated 845 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:817: The 
following error occurred while executing this line:
: Java returned: 1

Total time: 19 minutes 49 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Solr-NightlyTests-5.3 - Build # 5 - Still Failing

2016-01-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.3/5/

2 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded
at 
__randomizedtesting.SeedInfo.seed([667D296926F4C3DA:EE2916B38808AE22]:0)
at java.util.HashMap.entrySet0(HashMap.java:1073)
at java.util.HashMap.entrySet(HashMap.java:1068)
at java.util.AbstractMap.hashCode(AbstractMap.java:492)
at java.util.HashMap.hash(HashMap.java:362)
at java.util.HashMap.put(HashMap.java:492)
at java.util.HashSet.add(HashSet.java:217)
at 
org.apache.solr.cloud.CloudInspectUtil.showDiff(CloudInspectUtil.java:125)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:206)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:167)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:677)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:153)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)


FAILED:  org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=15287, name=collection0, 
state=RUNNABLE, group=TGRP-HdfsCollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=15287, name=collection0, state=RUNNABLE, 
group=TGRP-HdfsCollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:42995: Could not find collection : 
awholynewstresscollection_collection0_0
at __randomizedtesting.SeedInfo.seed([667D296926F4C3DA]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:376)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:328)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1098)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:869)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:805)
at org.apache.solr.client.solrj.SolrClient.reque

[jira] [Updated] (SOLR-8451) We should not call method.abort in HttpSolrClient.

2016-01-05 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8451:
--
Attachment: SOLR-8451.patch

> We should not call method.abort in HttpSolrClient.
> --
>
> Key: SOLR-8451
> URL: https://issues.apache.org/jira/browse/SOLR-8451
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8451.patch, SOLR-8451.patch, SOLR-8451.patch
>
>







[jira] [Commented] (LUCENE-6938) Convert build to work with Git rather than SVN.

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15084077#comment-15084077
 ] 

Mark Miller commented on LUCENE-6938:
-

We don't need to vote yet - that only happens when consensus fails or someone 
wants to force something. We can warn the dev list again to make sure everyone 
is caught up, but no need to force a vote unless someone comes out against. 
There is a very visible discussion and a few JIRA issues that have been in 
progress for a long time now. Once we are ready to go, we can sum things up in 
a new dev thread.

I think in terms of what needs to be covered here, Uwe has detailed it pretty 
well. We want all the targets to work really - or to understand why any target 
does not work. We can wait for Uwe to create a new git validator though - all 
targets still work without that. 'svn' does not really have a very deep imprint 
in our build targets.

I think the main thing left to do in this issue is put the git hash in 
efficiently.

Some other things people are concerned about can get further JIRA issues, but I 
imagine a lot of that (such as python scripts) can be updated as used / needed 
by those that use them.

> Convert build to work with Git rather than SVN.
> ---
>
> Key: LUCENE-6938
> URL: https://issues.apache.org/jira/browse/LUCENE-6938
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: LUCENE-6938.patch
>
>
> We assume an SVN checkout in parts of our build and will need to move to 
> assuming a Git checkout.
> Patches against https://github.com/dweiss/lucene-solr-svn2git from 
> LUCENE-6933.






[jira] [Updated] (SOLR-8451) We should not call method.abort in HttpSolrClient.

2016-01-05 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8451:
--
Attachment: SOLR-8451.patch

Patch with connection reuse test attached. This new test won't work until we 
address the troublesome Jetty upgrade.

> We should not call method.abort in HttpSolrClient.
> --
>
> Key: SOLR-8451
> URL: https://issues.apache.org/jira/browse/SOLR-8451
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8451.patch, SOLR-8451.patch
>
>







[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2995 - Failure!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2995/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at https://127.0.0.1:50628/s_ze/f/awholynewcollection_0: non 
ok status: 500, message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:50628/s_ze/f/awholynewcollection_0: non ok 
status: 500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([F21576DF506C287:877568B75BFAAF7F]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:509)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1754)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:658)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   

[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_66) - Build # 5395 - Still Failing!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5395/
Java: 64bit/jdk1.8.0_66 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ObjectTracker found 4 object(s) that were not released!!! 
[MDCAwareThreadPoolExecutor, MockDirectoryWrapper, SolrCore, 
MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [MDCAwareThreadPoolExecutor, MockDirectoryWrapper, SolrCore, 
MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([8F0DD6D4BB2873CB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:228)
at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=12176, name=searcherExecutor-5625-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=12176, name=searcherExecutor-5625-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __random

[jira] [Commented] (SOLR-8451) We should not call method.abort in HttpSolrClient.

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083852#comment-15083852
 ] 

Mark Miller commented on SOLR-8451:
---

I have a connection reuse test that hits HttpSolrClient, CloudSolrClient, and 
ConcurrentUpdateSolrClient. Once I polish it up a little, I'll commit it with 
this issue.

> We should not call method.abort in HttpSolrClient.
> --
>
> Key: SOLR-8451
> URL: https://issues.apache.org/jira/browse/SOLR-8451
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8451.patch
>
>







[jira] [Updated] (SOLR-8492) Add LogisticRegressionQuery and LogitStream

2016-01-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8492:
-
Description: 
This ticket is to add a new query called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to the shards and executes the LogisticRegressionQuery. The model data is 
collected from the shards and the weights are averaged and sent back to the 
shards with the next iteration. Each call to read() returns a Tuple with the 
averaged weights and error from the shards. With this approach the LogitStream 
streams the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed.

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.

sample Streaming Expression Syntax:

{code}

logit(collection1, features="a,b,c,d,e,f" outcome="x" maxIterations="80")

{code}
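To illustrate the mechanics described above, here is a hedged standalone sketch (not the patch's actual code; the class and method names are invented for illustration) of the SGD weight update the LRQ collector performs per document, plus the per-iteration averaging of shard weights that the LogitStream is described as doing:

```java
// Hypothetical sketch of the per-document SGD update for logistic
// regression and the LogitStream-style averaging of shard weights.
public class LogitSgdSketch {

    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    // One stochastic gradient descent step:
    // w_i <- w_i + alpha * (outcome - sigmoid(w . x)) * x_i
    static double[] sgdStep(double[] weights, double[] features,
                            double outcome, double alpha) {
        double dot = 0.0;
        for (int i = 0; i < weights.length; i++) {
            dot += weights[i] * features[i];
        }
        double error = outcome - sigmoid(dot);
        double[] updated = new double[weights.length];
        for (int i = 0; i < weights.length; i++) {
            updated[i] = weights[i] + alpha * error * features[i];
        }
        return updated;
    }

    // Between iterations the LogitStream averages the weight vectors
    // returned by the shards; a simple element-wise mean:
    static double[] average(double[][] shardWeights) {
        double[] avg = new double[shardWeights[0].length];
        for (double[] w : shardWeights) {
            for (int i = 0; i < avg.length; i++) {
                avg[i] += w[i] / shardWeights.length;
            }
        }
        return avg;
    }
}
```

The averaged vector is what read() would carry back in the Tuple and resend to the shards for the next iteration.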



  was:
This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to the shards and executes the LogisticRegressionQuery. The model data is 
collected from the shards and the weights are averaged and sent back to the 
shards with the next iteration. Each call to read() returns a Tuple with the 
averaged weights and error from the shards. With this approach the LogitStream 
streams the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed.

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.

sample Streaming Expression Syntax:

{code}

logit(collection1, features="a,b,c,d,e,f" outcome="x" maxIterations="80")

{code}




> Add LogisticRegressionQuery and LogitStream
> ---
>
> Key: SOLR-8492
> URL: https://issues.apache.org/jira/browse/SOLR-8492
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
> Attachments: SOLR-8492.patch
>
>
> This ticket is to add a new query called a LogisticRegressionQuery (LRQ).
> The LRQ extends AnalyticsQuery 
> (http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html)
>  and returns a DelegatingCollector that implements a Stochastic Gradient 
> Descent (SGD) optimizer for Logistic Regression.
> This ticket also adds the LogitStream which leverages Streaming Expressions 
> to provide iteration over the shards. Each call to LogitStream.read() calls 
> down to the shards and executes the LogisticRegressionQuery. The model data 
> is collected from the shards and the weights are averaged and sent bac

[jira] [Commented] (SOLR-8489) TestMiniSolrCloudCluster.createCollection to support extra & alternative collectionProperties

2016-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083837#comment-15083837
 ] 

ASF subversion and git services commented on SOLR-8489:
---

Commit 1723170 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1723170 ]

SOLR-8489: TestMiniSolrCloudCluster.createCollection to support extra & 
alternative collectionProperties (merge in revision 1723162 from trunk)

> TestMiniSolrCloudCluster.createCollection to support extra & alternative 
> collectionProperties
> -
>
> Key: SOLR-8489
> URL: https://issues.apache.org/jira/browse/SOLR-8489
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8489.patch
>
>
> * add optional collectionProperties map arg and use putIfAbsent instead of 
> put with the map
> * move persistIndex i.e. solr.directoryFactory randomisation from the several 
> callers to just-once in createCollection
> These changes are refactors only and intended to *not* change the existing 
> tests' behaviour.






[jira] [Resolved] (SOLR-8489) TestMiniSolrCloudCluster.createCollection to support extra & alternative collectionProperties

2016-01-05 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-8489.
---
   Resolution: Fixed
Fix Version/s: Trunk
   5.5

> TestMiniSolrCloudCluster.createCollection to support extra & alternative 
> collectionProperties
> -
>
> Key: SOLR-8489
> URL: https://issues.apache.org/jira/browse/SOLR-8489
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8489.patch
>
>
> * add optional collectionProperties map arg and use putIfAbsent instead of 
> put with the map
> * move persistIndex i.e. solr.directoryFactory randomisation from the several 
> callers to just-once in createCollection
> These changes are refactors only and intended to *not* change the existing 
> tests' behaviour.






[jira] [Updated] (SOLR-8492) Add LogisticRegressionQuery and LogitStream

2016-01-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8492:
-
Description: 
This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to the shards and executes the LogisticRegressionQuery. The model data is 
collected from the shards and the weights are averaged and sent back to the 
shards with the next iteration. Each call to read() returns a Tuple with the 
averaged weights and error from the shards. With this approach the LogitStream 
streams the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed.

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.

sample Streaming Expression Syntax:

{code}

logit(collection1, features="a,b,c,d,e,f" outcome="x" maxIterations="80")

{code}



  was:
This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to the shards and executes the LogisticRegressionQuery. The model data is 
collected from the shards and the weights are averaged and sent back to the 
shards for the next iteration. Each call to read() returns a Tuple with the averaged 
weights and error from the shards. With this approach the LogitStream streams 
the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed.

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.

sample Streaming Expression Syntax:

{code}

logit(collection1, features="a,b,c,d,e,f" outcome="x" maxIterations="80")

{code}




> Add LogisticRegressionQuery and LogitStream
> ---
>
> Key: SOLR-8492
> URL: https://issues.apache.org/jira/browse/SOLR-8492
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
> Attachments: SOLR-8492.patch
>
>
> This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).
> The LRQ extends AnalyticsQuery 
> (http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html)
>  and returns a DelegatingCollector that implements a Stochastic Gradient 
> Descent (SGD) optimizer for Logistic Regression.
> This ticket also adds the LogitStream which leverages Streaming Expressions 
> to provide iteration over the shards. Each call to LogitStream.read() calls 
> down to the shards and executes the LogisticRegressionQuery. The model data 
> is collected from the shards and the weights are averaged and sent back to 
> the shards for the next iteration.

[jira] [Updated] (SOLR-8492) Add LogisticRegressionQuery and LogitStream

2016-01-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8492:
-
Description: 
This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to the shards and executes the LogisticRegressionQuery. The model data is 
collected from the shards and the weights are averaged and sent back to the 
shards for the next iteration. Each call to read() returns a Tuple with the averaged 
weights and error from the shards. With this approach the LogitStream streams 
the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed.

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.

sample Streaming Expression Syntax:

{code}

logit(collection1, features="a,b,c,d,e,f" outcome="x" maxIterations="80")

{code}



  was:
This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to the shards and executes the LogisticRegressionQuery. The model data is 
collected from the shards and the weights are averaged and sent back to the 
shards for the next iteration. Each call to read() returns a Tuple with the averaged 
weights and error from the shards. With this approach the LogitStream streams 
the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed.

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.


> Add LogisticRegressionQuery and LogitStream
> ---
>
> Key: SOLR-8492
> URL: https://issues.apache.org/jira/browse/SOLR-8492
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
> Attachments: SOLR-8492.patch
>
>
> This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).
> The LRQ extends AnalyticsQuery 
> (http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html)
>  and returns a DelegatingCollector that implements a Stochastic Gradient 
> Descent (SGD) optimizer for Logistic Regression.
> This ticket also adds the LogitStream which leverages Streaming Expressions 
> to provide iteration over the shards. Each call to LogitStream.read() calls 
> down to the shards and executes the LogisticRegressionQuery. The model data is 
> collected from the shards and the weights are averaged and sent back to the 
> shards for the next iteration. Each call to read() returns a Tuple with the 
> averaged weights and error from the shards.

[jira] [Updated] (SOLR-8492) Add LogisticRegressionQuery and LogitStream

2016-01-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8492:
-
Description: 
This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to the shards and executes the LogisticRegressionQuery. The model data is 
collected from the shards and the weights are averaged and sent back to the 
shards for the next iteration. Each call to read() returns a Tuple with the averaged 
weights and error from the shards. With this approach the LogitStream streams 
the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed.

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.

  was:
This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to the shards and executes the LogisticRegressionQuery. The model data is collected 
from the shards and the weights are averaged and sent back to the shards for 
the next iteration. Each call to read() returns a Tuple with the averaged 
weights and error from the shards. With this approach the LogitStream streams 
the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed.

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.


> Add LogisticRegressionQuery and LogitStream
> ---
>
> Key: SOLR-8492
> URL: https://issues.apache.org/jira/browse/SOLR-8492
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
> Attachments: SOLR-8492.patch
>
>
> This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).
> The LRQ extends AnalyticsQuery 
> (http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html)
>  and returns a DelegatingCollector that implements a Stochastic Gradient 
> Descent (SGD) optimizer for Logistic Regression.
> This ticket also adds the LogitStream which leverages Streaming Expressions 
> to provide iteration over the shards. Each call to LogitStream.read() calls 
> down to the shards and executes the LogisticRegressionQuery. The model data is 
> collected from the shards and the weights are averaged and sent back to the 
> shards for the next iteration. Each call to read() returns a Tuple with the 
> averaged weights and error from the shards. With this approach the 
> LogitStream streams the changing model back to the client after each 
> iteration.
> The LogitStream will return the EOF Tuple when it reaches the defined 
> maxIterations.

[jira] [Updated] (SOLR-8492) Add LogisticRegressionQuery and LogitStream

2016-01-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8492:
-
Attachment: SOLR-8492.patch

> Add LogisticRegressionQuery and LogitStream
> ---
>
> Key: SOLR-8492
> URL: https://issues.apache.org/jira/browse/SOLR-8492
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
> Attachments: SOLR-8492.patch
>
>
> This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).
> The LRQ extends AnalyticsQuery 
> (http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html)
>  and returns a DelegatingCollector that implements a Stochastic Gradient 
> Descent (SGD) optimizer for Logistic Regression.
> This ticket also adds the LogitStream which leverages Streaming Expressions 
> to provide iteration over the shards. Each call to LogitStream.read() calls 
> down to the shards and executes the LogisticRegressionQuery. The model data is 
> collected from the shards and the weights are averaged and sent back to the 
> shards for the next iteration. Each call to read() returns a Tuple with the 
> averaged weights and error from the shards. With this approach the 
> LogitStream streams the changing model back to the client after each 
> iteration.
> The LogitStream will return the EOF Tuple when it reaches the defined 
> maxIterations. When sent as a Streaming Expression to the Stream handler this 
> provides parallel iterative behavior. This same approach can be used to 
> implement other parallel iterative algorithms.
> The initial patch has a test which simply tests the mechanics of the 
> iteration. More work will need to be done to ensure the SGD is properly 
> implemented. The distributed approach of the SGD will also need to be reviewed.
> This implementation is designed for use cases with a small number of features 
> because each feature is its own discrete field.
> An implementation which supports a higher number of features would be 
> possible by packing features into a byte array and storing as binary 
> DocValues.
> This implementation is designed to support a large sample set. With a large 
> number of shards, a sample set into the billions may be possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8492) Add LogisticRegressionQuery and LogitStream

2016-01-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8492:
-
Description: 
This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to the shards and executes the LogisticRegressionQuery. The model data is collected 
from the shards and the weights are averaged and sent back to the shards for 
the next iteration. Each call to read() returns a Tuple with the averaged 
weights and error from the shards. With this approach the LogitStream streams 
the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed.

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.

  was:
This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to the shards and executes the LogisticRegressionQuery. The model data is collected 
from the shards and the weights are averaged and sent back to the shards for 
the next iteration. Each call to read() returns a Tuple with the averaged 
weights and error from the shards. With this approach the LogitStream streams 
the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed.

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.


> Add LogisticRegressionQuery and LogitStream
> ---
>
> Key: SOLR-8492
> URL: https://issues.apache.org/jira/browse/SOLR-8492
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>
> This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).
> The LRQ extends AnalyticsQuery 
> (http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html)
>  and returns a DelegatingCollector that implements a Stochastic Gradient 
> Descent (SGD) optimizer for Logistic Regression.
> This ticket also adds the LogitStream which leverages Streaming Expressions 
> to provide iteration over the shards. Each call to LogitStream.read() calls 
> down to the shards and executes the LogisticRegressionQuery. The model data is 
> collected from the shards and the weights are averaged and sent back to the 
> shards for the next iteration. Each call to read() returns a Tuple with the 
> averaged weights and error from the shards. With this approach the 
> LogitStream streams the changing model back to the client after each 
> iteration.
> The LogitStream will return the EOF Tuple when it reaches the defined 
> maxIterations.

[jira] [Created] (SOLR-8492) Add LogisticRegressionQuery and LogitStream

2016-01-05 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-8492:


 Summary: Add LogisticRegressionQuery and LogitStream
 Key: SOLR-8492
 URL: https://issues.apache.org/jira/browse/SOLR-8492
 Project: Solr
  Issue Type: New Feature
Reporter: Joel Bernstein


This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to the shards and executes the LogisticRegressionQuery. The model data is collected 
from the shards and the weights are averaged and sent back to the shards for 
the next iteration. Each call to read() returns a Tuple with the averaged 
weights and error from the shards. With this approach the LogitStream streams 
the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed.

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8489) TestMiniSolrCloudCluster.createCollection to support extra & alternative collectionProperties

2016-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083763#comment-15083763
 ] 

ASF subversion and git services commented on SOLR-8489:
---

Commit 1723162 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1723162 ]

SOLR-8489: TestMiniSolrCloudCluster.createCollection to support extra & 
alternative collectionProperties

> TestMiniSolrCloudCluster.createCollection to support extra & alternative 
> collectionProperties
> -
>
> Key: SOLR-8489
> URL: https://issues.apache.org/jira/browse/SOLR-8489
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8489.patch
>
>
> * add optional collectionProperties map arg and use putIfAbsent instead of 
> put with the map
> * move persistIndex i.e. solr.directoryFactory randomisation from the several 
> callers to just-once in createCollection
> These changes are refactors only and intended to *not* change the existing 
> tests' behaviour.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7733) remove/rename "optimize" references in the UI.

2016-01-05 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083736#comment-15083736
 ] 

Shawn Heisey commented on SOLR-7733:


bq. I vote for bringing the button back, but with an educational popup

Sounds good to me.  If the server is in cloud mode and the button is pressed on 
a core, the dialog might want to mention that it will in fact optimize the 
entire collection.  There is no way to disable this -- distrib=false is not 
honored.  I thought we had an issue to have optimize on SolrCloud honor 
distrib=false, but I can't find one.

bq. I have many times missed a Commit button in the core admin and collections 
tabs

That would be interesting.  Since I am reasonably sure that mechanisms are in 
place to ignore a commit operation when the index hasn't actually changed, this 
is probably a safe thing to add, and would be helpful for troubleshooting.


> remove/rename "optimize" references in the UI.
> --
>
> Key: SOLR-7733
> URL: https://issues.apache.org/jira/browse/SOLR-7733
> Project: Solr
>  Issue Type: Improvement
>  Components: UI
>Affects Versions: 5.3, Trunk
>Reporter: Erick Erickson
>Assignee: Upayavira
>Priority: Minor
> Attachments: SOLR-7733.patch
>
>
> Since optimizing indexes is kind of a special circumstance thing, what do we 
> think about removing (or renaming) optimize-related stuff on the core admin 
> and core overview pages? The "optimize" button is already gone from the core 
> admin screen (was this intentional?).
> My personal feeling is that we should remove this entirely as it's too easy 
> to think "Of course I want my index optimized" and "look, this screen says my 
> index isn't optimized, that must mean I should optimize it".
> The core admin screen and the core overview page both have an "optimized" 
> checkmark, I propose just removing it from the "overview" page and on the 
> "core admin" page changing it to "Segment Count #". NOTE: the "overview" page 
> already has a "Segment Count" entry.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8477) Let users choose compression mode in SchemaCodecFactory

2016-01-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-8477:

Attachment: SOLR-8477.patch

Some more tests and docs

> Let users choose compression mode in SchemaCodecFactory
> ---
>
> Key: SOLR-8477
> URL: https://issues.apache.org/jira/browse/SOLR-8477
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-8477.patch, SOLR-8477.patch
>
>
> Expose Lucene's compression mode (LUCENE-5914) via SchemaCodecFactory init 
> argument. By default use current default mode: Mode.BEST_SPEED.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3909 - Still Failing

2016-01-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3909/

4 tests failed.
FAILED:  org.apache.solr.search.stats.TestDistribIDF.testMultiCollectionQuery

Error Message:
Could not create collection. 
Response{responseHeader={status=0,QTime=90423},success={null={responseHeader={status=0,QTime=1450},core=collection1_local_shard2_replica1}},failure={null=org.apache.solr.client.solrj.SolrServerException:Timeout
 occured while waiting response from server at: https://127.0.0.1:46969/solr}}

Stack Trace:
java.lang.AssertionError: Could not create collection. 
Response{responseHeader={status=0,QTime=90423},success={null={responseHeader={status=0,QTime=1450},core=collection1_local_shard2_replica1}},failure={null=org.apache.solr.client.solrj.SolrServerException:Timeout
 occured while waiting response from server at: https://127.0.0.1:46969/solr}}
at 
__randomizedtesting.SeedInfo.seed([359EE2EE6DD218AF:24ED25DFB1A41DD3]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.search.stats.TestDistribIDF.createCollection(TestDistribIDF.java:215)
at 
org.apache.solr.search.stats.TestDistribIDF.createCollection(TestDistribIDF.java:190)
at 
org.apache.solr.search.stats.TestDistribIDF.testMultiCollectionQuery(TestDistribIDF.java:157)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65

[jira] [Created] (SOLR-8491) solr.cmd SOLR_SSL_OPTS is overwritten

2016-01-05 Thread Sam Yi (JIRA)
Sam Yi created SOLR-8491:


 Summary: solr.cmd SOLR_SSL_OPTS is overwritten
 Key: SOLR-8491
 URL: https://issues.apache.org/jira/browse/SOLR-8491
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.2, Trunk
 Environment: Windows
Reporter: Sam Yi


In solr.cmd, the SOLR_SSL_OPTS variable is assigned within a block, and then 
assigned again later in the same block, using {{%SOLR_SSL_OPTS%}} to attempt to 
append to itself.  However, since we're still inside the same block for this 
2nd assignment, {{%SOLR_SSL_OPTS%}} resolves to nothing, so everything in the 
first assignment (the solr.jetty opts) is overwritten.

I was able to work around this by using {{!SOLR_SSL_OPTS!}} instead of 
{{%SOLR_SSL_OPTS%}} in the 2nd assignments (in both the {{IF}} and {{ELSE}} 
blocks), since delayed expansion is enabled.
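The parse-time vs. execution-time expansion difference can be reduced to a 
minimal, self-contained sketch (hypothetical variable names, assuming delayed 
expansion is enabled, as it is in solr.cmd):

{code}
@echo off
setlocal EnableDelayedExpansion
IF "1"=="1" (
  set "OPTS=first"
  REM %OPTS% was expanded when the whole parenthesized block was parsed,
  REM i.e. before OPTS was set, so it is empty here and "first" is lost:
  set "BROKEN=%OPTS% second"
  REM !OPTS! is expanded at execution time, so the earlier assignment survives:
  set "WORKS=!OPTS! second"
)
echo BROKEN=[%BROKEN%]
echo WORKS=[%WORKS%]
{code}

Running this prints an empty prefix for BROKEN but "first second" for WORKS, 
which is exactly the behavior described above for the SOLR_SSL_OPTS block.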

Here's the full block for reference, from commit 
d4e3f50a6f6bc7b96fa6317f028ae26be25c8928, lines 43-55:
{code}IF DEFINED SOLR_SSL_KEY_STORE (
  set "SOLR_JETTY_CONFIG=--module=https"
  set SOLR_URL_SCHEME=https
  set "SCRIPT_ERROR=Solr server directory %SOLR_SERVER_DIR% not found!"
  set "SOLR_SSL_OPTS=-Dsolr.jetty.keystore=%SOLR_SSL_KEY_STORE% 
-Dsolr.jetty.keystore.password=%SOLR_SSL_KEY_STORE_PASSWORD% 
-Dsolr.jetty.truststore=%SOLR_SSL_TRUST_STORE% 
-Dsolr.jetty.truststore.password=%SOLR_SSL_TRUST_STORE_PASSWORD% 
-Dsolr.jetty.ssl.needClientAuth=%SOLR_SSL_NEED_CLIENT_AUTH% 
-Dsolr.jetty.ssl.wantClientAuth=%SOLR_SSL_WANT_CLIENT_AUTH%"
  IF DEFINED SOLR_SSL_CLIENT_KEY_STORE  (
set "SOLR_SSL_OPTS=%SOLR_SSL_OPTS% 
-Djavax.net.ssl.keyStore=%SOLR_SSL_CLIENT_KEY_STORE% 
-Djavax.net.ssl.keyStorePassword=%SOLR_SSL_CLIENT_KEY_STORE_PASSWORD% 
-Djavax.net.ssl.trustStore=%SOLR_SSL_CLIENT_TRUST_STORE% 
-Djavax.net.ssl.trustStorePassword=%SOLR_SSL_CLIENT_TRUST_STORE_PASSWORD%"
  ) ELSE (
set "SOLR_SSL_OPTS=%SOLR_SSL_OPTS% 
-Djavax.net.ssl.keyStore=%SOLR_SSL_KEY_STORE% 
-Djavax.net.ssl.keyStorePassword=%SOLR_SSL_KEY_STORE_PASSWORD% 
-Djavax.net.ssl.trustStore=%SOLR_SSL_TRUST_STORE% 
-Djavax.net.ssl.trustStorePassword=%SOLR_SSL_TRUST_STORE_PASSWORD%"
  )
) ELSE (
  set SOLR_SSL_OPTS=
)
{code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-839) XML Query Parser support

2016-01-05 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-839:
-
Affects Version/s: 5.4

> XML Query Parser support
> 
>
> Key: SOLR-839
> URL: https://issues.apache.org/jira/browse/SOLR-839
> Project: Solr
>  Issue Type: New Feature
>  Components: query parsers
>Affects Versions: 1.3, 5.4, Trunk
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-839-object-parser.patch, SOLR-839.patch, 
> SOLR-839.patch, lucene-xml-query-parser-2.4-dev.jar
>
>
> Lucene contrib includes a query parser that is able to create the 
> full-spectrum of Lucene queries, using an XML data structure.
> This patch adds "xml" query parser support to Solr.






[jira] [Updated] (SOLR-839) XML Query Parser support

2016-01-05 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-839:
-
Affects Version/s: Trunk

> XML Query Parser support
> 
>
> Key: SOLR-839
> URL: https://issues.apache.org/jira/browse/SOLR-839
> Project: Solr
>  Issue Type: New Feature
>  Components: query parsers
>Affects Versions: 1.3, Trunk
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-839-object-parser.patch, SOLR-839.patch, 
> SOLR-839.patch, lucene-xml-query-parser-2.4-dev.jar
>
>
> Lucene contrib includes a query parser that is able to create the 
> full-spectrum of Lucene queries, using an XML data structure.
> This patch adds "xml" query parser support to Solr.






[jira] [Commented] (SOLR-839) XML Query Parser support

2016-01-05 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083664#comment-15083664
 ] 

Christine Poerschke commented on SOLR-839:
--

[~ehatcher] - if you have no objections then I will re-assign this ticket to 
myself with a view towards committing it later this month, to trunk and 
branch_5x.

Everyone - the latest patch builds on previous patches and code blocks in this 
ticket (patch summary above); reviews, comments, suggestions etc. are welcome. 
Thank you.

> XML Query Parser support
> 
>
> Key: SOLR-839
> URL: https://issues.apache.org/jira/browse/SOLR-839
> Project: Solr
>  Issue Type: New Feature
>  Components: query parsers
>Affects Versions: 1.3
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-839-object-parser.patch, SOLR-839.patch, 
> SOLR-839.patch, lucene-xml-query-parser-2.4-dev.jar
>
>
> Lucene contrib includes a query parser that is able to create the 
> full-spectrum of Lucene queries, using an XML data structure.
> This patch adds "xml" query parser support to Solr.






SOLR-5209 in time for 6.0.0 release?

2016-01-05 Thread Christine Poerschke (BLOOMBERG/ LONDON)
Hello Folks,

Would anyone have a little time to review and comment on the latest
  https://issues.apache.org/jira/browse/SOLR-5209
patch, which perhaps simply went unnoticed towards the end of 2015?

Thanks,

Christine

[jira] [Assigned] (SOLR-5209) last replica removal cascades to remove shard from clusterstate

2016-01-05 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke reassigned SOLR-5209:
-

Assignee: Christine Poerschke  (was: Mark Miller)

> last replica removal cascades to remove shard from clusterstate
> ---
>
> Key: SOLR-5209
> URL: https://issues.apache.org/jira/browse/SOLR-5209
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.4
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Blocker
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-5209.patch, SOLR-5209.patch
>
>
> The problem we saw was that unloading the only replica of a shard deleted 
> that shard's info from the clusterstate. Once it was gone, there was no 
> easy way to re-create the shard (other than dropping and re-creating the 
> whole collection's state).
> This seems like a bug?
> Overseer.java around line 600 has a comment and commented out code:
> // TODO TODO TODO!!! if there are no replicas left for the slice, and the 
> slice has no hash range, remove it
> // if (newReplicas.size() == 0 && slice.getRange() == null) {
> // if there are no replicas left for the slice remove it






[jira] [Issue Comment Deleted] (SOLR-8312) Add doc set size and number of buckets metrics

2016-01-05 Thread Michael Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Sun updated SOLR-8312:
--
Comment: was deleted

(was: Here is the patch. It adds two metrics to facet telemetry:
1. inputDocSetSize: the size of the input doc set for each sub-facet.
2. numBuckets: the number of unique buckets. It is the same number as 
numBuckets in the facet query result when the numBuckets param is set to true 
in the query, and it applies to field facets only. The reasons to duplicate it 
in facet telemetry are:
* query users may not turn on numBuckets, but the operations and monitoring 
team may still want to view numBuckets information.
* the operations and monitoring team may not be allowed to view query results.)

> Add doc set size and number of buckets metrics
> --
>
> Key: SOLR-8312
> URL: https://issues.apache.org/jira/browse/SOLR-8312
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
> Attachments: SOLR-8312.patch
>
>
> The doc set size and number of buckets represent the input data size and 
> intermediate data size for each step of faceting. Therefore they are useful 
> metrics to be included in telemetry. 
> The output data size is usually defined by the user and is not too large. 
> Therefore the output data set size is not included.






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_66) - Build # 15448 - Still Failing!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15448/
Java: 32bit/jdk1.8.0_66 -client -XX:+UseSerialGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.util.TestSolrCLIRunExample

Error Message:
ObjectTracker found 4 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, MDCAwareThreadPoolExecutor, 
TransactionLog]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [MockDirectoryWrapper, MockDirectoryWrapper, 
MDCAwareThreadPoolExecutor, TransactionLog]
at __randomizedtesting.SeedInfo.seed([F2A639CF0F0D0A1F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:229)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.util.TestSolrCLIRunExample

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.util.TestSolrCLIRunExample: 
1) Thread[id=2847, name=searcherExecutor-932-thread-1, state=WAITING, 
group=TGRP-TestSolrCLIRunExample] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.util.TestSolrCLIRunExample: 
   1) Thread[id=2847, name=searcherExecutor-932-thread-1, state=WAITING, 
group=TGRP-TestSolrCLIRunExample]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(T

[jira] [Commented] (SOLR-8312) Add doc set size and number of buckets metrics

2016-01-05 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083527#comment-15083527
 ] 

Michael Sun commented on SOLR-8312:
---

Here is the patch. It adds two metrics to facet telemetry:
1. inputDocSetSize: the size of the input doc set for each sub-facet.
2. numBuckets: the number of unique buckets. It is the same number as 
numBuckets in the facet query result when the numBuckets param is set to true 
in the query, and it applies to field facets only. The reasons to duplicate it 
in facet telemetry are:
* query users may not turn on numBuckets, but the operations and monitoring 
team may still want to view numBuckets information.
* the operations and monitoring team may not be allowed to view query results.

> Add doc set size and number of buckets metrics
> --
>
> Key: SOLR-8312
> URL: https://issues.apache.org/jira/browse/SOLR-8312
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
> Attachments: SOLR-8312.patch
>
>
> The doc set size and number of buckets represent the input data size and 
> intermediate data size for each step of faceting. Therefore they are useful 
> metrics to be included in telemetry. 
> The output data size is usually defined by the user and is not too large. 
> Therefore the output data set size is not included.






[jira] [Commented] (SOLR-8312) Add doc set size and number of buckets metrics

2016-01-05 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083525#comment-15083525
 ] 

Michael Sun commented on SOLR-8312:
---

Here is the patch. It adds two metrics to facet telemetry:
1. inputDocSetSize: the size of the input doc set for each sub-facet.
2. numBuckets: the number of unique buckets. It is the same number as 
numBuckets in the facet query result when the numBuckets param is set to true 
in the query, and it applies to field facets only. The reasons to duplicate it 
in facet telemetry are:
* query users may not turn on numBuckets, but the operations and monitoring 
team may still want to view numBuckets information.
* the operations and monitoring team may not be allowed to view query results.



> Add doc set size and number of buckets metrics
> --
>
> Key: SOLR-8312
> URL: https://issues.apache.org/jira/browse/SOLR-8312
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
> Attachments: SOLR-8312.patch
>
>
> The doc set size and number of buckets represent the input data size and 
> intermediate data size for each step of faceting. Therefore they are useful 
> metrics to be included in telemetry. 
> The output data size is usually defined by the user and is not too large. 
> Therefore the output data set size is not included.






[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 903 - Still Failing

2016-01-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/903/

No tests ran.

Build Log:
[...truncated 2230 lines...]
ERROR: Connection was broken: java.io.IOException: Unexpected termination of 
the channel
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:50)
Caused by: java.io.EOFException
at 
java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2325)
at 
java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2794)
at 
java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:801)
at java.io.ObjectInputStream.&lt;init&gt;(ObjectInputStream.java:299)
at 
hudson.remoting.ObjectInputStreamEx.&lt;init&gt;(ObjectInputStreamEx.java:40)
at 
hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)

Build step 'Invoke Ant' marked build as failure
ERROR: Publisher 'Archive the artifacts' failed: no workspace for 
Lucene-Solr-NightlyTests-trunk #903
ERROR: Publisher 'Publish JUnit test result report' failed: no workspace for 
Lucene-Solr-NightlyTests-trunk #903
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
ERROR: lucene is offline; cannot locate latest1.8
ERROR: lucene is offline; cannot locate latest1.8




[jira] [Commented] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2016-01-05 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083534#comment-15083534
 ] 

Yonik Seeley commented on SOLR-8453:


This test currently passes on Solr 5x.

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453_test.patch, SOLR-8453_test.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.






[jira] [Updated] (SOLR-8312) Add doc set size and number of buckets metrics

2016-01-05 Thread Michael Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Sun updated SOLR-8312:
--
Attachment: SOLR-8312.patch

> Add doc set size and number of buckets metrics
> --
>
> Key: SOLR-8312
> URL: https://issues.apache.org/jira/browse/SOLR-8312
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
> Attachments: SOLR-8312.patch
>
>
> The doc set size and number of buckets represent the input data size and 
> intermediate data size for each step of faceting. Therefore they are useful 
> metrics to be included in telemetry. 
> The output data size is usually defined by the user and is not too large. 
> Therefore the output data set size is not included.






[jira] [Updated] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2016-01-05 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8453:
---
Attachment: SOLR-8453_test.patch

Here's an updated test that dedups the exception summary and cranks up the 
number of client threads, just to see what type of errors we can get.

{code}
10714 ERROR (TEST-TestSolrJErrorHandling.testWithXml-seed#[CDCE136AF9E0FF01]) [ 
   ] o.a.s.c.s.TestSolrJErrorHandling EXCEPTION LIST:
98) 
SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
2) 
SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Protocol
 wrong type for socket)
{code}

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453_test.patch, SOLR-8453_test.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.






[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 312 - Failure!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/312/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node2:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:49463/ji/cz","node_name":"127.0.0.1:49463_ji%2Fcz","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={   "replicationFactor":"3",   
"shards":{"shard1":{   "range":"8000-7fff",   "state":"active", 
  "replicas":{ "core_node1":{   "state":"down",   
"base_url":"http://127.0.0.1:60198/ji/cz";,   
"core":"c8n_1x3_lf_shard1_replica2",   
"node_name":"127.0.0.1:60198_ji%2Fcz"}, "core_node2":{   
"core":"c8n_1x3_lf_shard1_replica1",   
"base_url":"http://127.0.0.1:49463/ji/cz";,   
"node_name":"127.0.0.1:49463_ji%2Fcz",   "state":"active",   
"leader":"true"}, "core_node3":{   
"core":"c8n_1x3_lf_shard1_replica3",   
"base_url":"http://127.0.0.1:53584/ji/cz";,   
"node_name":"127.0.0.1:53584_ji%2Fcz",   "state":"down",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node2:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:49463/ji/cz","node_name":"127.0.0.1:49463_ji%2Fcz","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "state":"down",
  "base_url":"http://127.0.0.1:60198/ji/cz";,
  "core":"c8n_1x3_lf_shard1_replica2",
  "node_name":"127.0.0.1:60198_ji%2Fcz"},
"core_node2":{
  "core":"c8n_1x3_lf_shard1_replica1",
  "base_url":"http://127.0.0.1:49463/ji/cz";,
  "node_name":"127.0.0.1:49463_ji%2Fcz",
  "state":"active",
  "leader":"true"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica3",
  "base_url":"http://127.0.0.1:53584/ji/cz";,
  "node_name":"127.0.0.1:53584_ji%2Fcz",
  "state":"down",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([3EB058554FF8D2FF:B6E4678FE104BF07]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:171)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:56)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.ran

[jira] [Updated] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2016-01-05 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8453:
---
Attachment: SOLR-8453_test.patch

Here's a test with normal solrj clients that reproduces HTTP level exceptions.  
It uses multiple threads and large request sizes.

Example exception summary (from 10 clients):
{code}
3567 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->SocketException(Connection reset)
3569 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
3569 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
3570 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
3571 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
3571 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
3571 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
3572 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
3572 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
3572 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
3573 INFO  (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.SolrTestCaseJ4 ###Ending testWithBinary
{code}
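The CHAIN summaries above can be produced by walking an exception's cause chain; the following is a minimal, illustrative sketch of that idea, not the actual TestSolrJErrorHandling code.

```java
// Sketch of building a "CHAIN:->A->B(message)" summary by walking an
// exception's cause chain, similar in spirit to the log lines above.
// Illustrative only -- not the actual TestSolrJErrorHandling code.
public class ChainSummary {
    static String chain(Throwable t) {
        StringBuilder sb = new StringBuilder("CHAIN:");
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            sb.append("->").append(cur.getClass().getSimpleName());
            if (cur.getCause() == null && cur.getMessage() != null) {
                // Include the message only for the root cause, as in the logs.
                sb.append('(').append(cur.getMessage()).append(')');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Throwable root = new java.net.SocketException("Connection reset");
        Throwable wrapped = new RuntimeException("request failed", root);
        System.out.println(chain(wrapped));
    }
}
```

Running this prints one summary line per failure, which makes it easy to group the many distinct client failures by their root cause.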

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453_test.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15447 - Still Failing!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15447/
Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseG1GC -XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Didn't see all replicas for shard shard1 in c8n_1x2 come up within 3 ms! 
ClusterState: {   "collection1":{ "replicationFactor":"1", "shards":{   
"shard1":{ "range":"8000-", "state":"active",   
  "replicas":{"core_node2":{ "core":"collection1", 
"base_url":"http://127.0.0.1:59248/gdw/df";, 
"node_name":"127.0.0.1:59248_gdw%2Fdf", "state":"active",   
  "leader":"true"}}},   "shard2":{ "range":"0-7fff", 
"state":"active", "replicas":{   "core_node1":{ 
"core":"collection1", "base_url":"http://127.0.0.1:45491/gdw/df";,   
  "node_name":"127.0.0.1:45491_gdw%2Fdf", "state":"active", 
"leader":"true"},   "core_node3":{ 
"core":"collection1", "base_url":"http://127.0.0.1:49897/gdw/df";,   
  "node_name":"127.0.0.1:49897_gdw%2Fdf", 
"state":"active", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false", 
"autoCreated":"true"},   "control_collection":{ "replicationFactor":"1",
 "shards":{"shard1":{ "range":"8000-7fff", 
"state":"active", "replicas":{"core_node1":{ 
"core":"collection1", "base_url":"http://127.0.0.1:41623/gdw/df";,   
  "node_name":"127.0.0.1:41623_gdw%2Fdf", "state":"active", 
"leader":"true", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false", 
"autoCreated":"true"},   "c8n_1x2":{ "replicationFactor":"2", 
"shards":{"shard1":{ "range":"8000-7fff", 
"state":"active", "replicas":{   "core_node1":{ 
"core":"c8n_1x2_shard1_replica1", 
"base_url":"http://127.0.0.1:41623/gdw/df";, 
"node_name":"127.0.0.1:41623_gdw%2Fdf", "state":"active",   
  "leader":"true"},   "core_node2":{ 
"core":"c8n_1x2_shard1_replica2", 
"base_url":"http://127.0.0.1:49897/gdw/df";, 
"node_name":"127.0.0.1:49897_gdw%2Fdf", "state":"recovering",   
  "router":{"name":"compositeId"}, "maxShardsPerNode":"1", 
"autoAddReplicas":"false"},   "collMinRf_1x3":{ "replicationFactor":"3",
 "shards":{"shard1":{ "range":"8000-7fff", 
"state":"active", "replicas":{   "core_node1":{ 
"core":"collMinRf_1x3_shard1_replica3", 
"base_url":"http://127.0.0.1:41623/gdw/df";, 
"node_name":"127.0.0.1:41623_gdw%2Fdf", "state":"active",   
  "leader":"true"},   "core_node2":{ 
"core":"collMinRf_1x3_shard1_replica2", 
"base_url":"http://127.0.0.1:49897/gdw/df";, 
"node_name":"127.0.0.1:49897_gdw%2Fdf", "state":"active"},  
 "core_node3":{ "core":"collMinRf_1x3_shard1_replica1", 
"base_url":"http://127.0.0.1:59248/gdw/df";, 
"node_name":"127.0.0.1:59248_gdw%2Fdf", "state":"active", 
"router":{"name":"compositeId"}, "maxShardsPerNode":"1", 
"autoAddReplicas":"false"}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in c8n_1x2 
come up within 3 ms! ClusterState: {
  "collection1":{
"replicationFactor":"1",
"shards":{
  "shard1":{
"range":"8000-",
"state":"active",
"replicas":{"core_node2":{
"core":"collection1",
"base_url":"http://127.0.0.1:59248/gdw/df";,
"node_name":"127.0.0.1:59248_gdw%2Fdf",
"state":"active",
"leader":"true"}}},
  "shard2":{
"range":"0-7fff",
"state":"active",
"replicas":{
  "core_node1":{
"core":"collection1",
"base_url":"http://127.0.0.1:45491/gdw/df";,
"node_name":"127.0.0.1:45491_gdw%2Fdf",
"state":"active",
"leader":"true"},
  "core_node3":{
"core":"collection1",
"base_url":"http://127.0.0.1:49897/gdw/df";,
"node_name":"127.0.0.1:49897_gdw%2Fdf",
"state":"active",
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"autoCreated":"true"},
  "control_collection":{
"replicationFactor":"1",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{"core_node1":{
"core":"collection1",
"base_url":"http://127.0.0.1:41623/gdw/df";,

[jira] [Commented] (SOLR-8475) Some refactoring to SolrIndexSearcher

2016-01-05 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083345#comment-15083345
 ] 

Shai Erera commented on SOLR-8475:
--

It looks fine, though I really think it's overkill :). Let's see if we reach 
a consensus on that issue on the dev list, and if not, I'll try your 
approach.

> Some refactoring to SolrIndexSearcher
> -
>
> Key: SOLR-8475
> URL: https://issues.apache.org/jira/browse/SOLR-8475
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8475.patch, SOLR-8475.patch, SOLR-8475.patch, 
> SOLR-8475.patch, SOLR-8475.patch
>
>
> While reviewing {{SolrIndexSearcher}}, I started to correct a thing here and 
> there, and eventually it led to these changes:
> * Moving {{QueryCommand}} and {{QueryResult}} to their own classes.
> * Moving FilterImpl into a private static class (was package-private and 
> defined in the same .java file, but separate class).
> * Some code formatting, imports organizing and minor log changes.
> * Removed fieldNames (handled the TODO in the code)
> * Got rid of usage of deprecated classes such as {{LegacyNumericUtils}} and 
> {{Legacy-*-Field}}.
> I wish we'd cut down the size of this file much more (it's 2500 lines now), 
> but I've decided to stop here so that the patch is manageable. I would like 
> to explore further refactorings afterwards, e.g. extracting cache management 
> code to an outer class (but keep {{SolrIndexSearcher}}'s API the same, if 
> possible).
> If you have additional ideas of more cleanups / simplifications, I'd be glad 
> to do them.






[JENKINS] Lucene-Solr-SmokeRelease-5.3 - Build # 7 - Still Failing

2016-01-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.3/7/

No tests ran.

Build Log:
[...truncated 53068 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist
 [copy] Copying 461 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (13.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.3.2-src.tgz...
   [smoker] 28.5 MB in 0.04 sec (725.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.3.2.tgz...
   [smoker] 65.7 MB in 0.08 sec (778.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.3.2.zip...
   [smoker] 75.9 MB in 0.10 sec (769.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.3.2.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.3.2.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.3.2-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Releases that don't seem to be tested:
   [smoker]   5.4.0
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1449, in <module>
   [smoker] main()
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1394, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1432, in smokeTest
   [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
version, svnRevision, version, testArgs, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 583, in unpackAndVerify
   [smoker] verifyUnpacked(java, project, artifact, unpackPath, 
svnRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 762, in verifyUnpacked
   [smoker] confirmAllReleasesAreTestedForBackCompat(unpackPath)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1387, in confirmAllReleasesAreTestedForBackCompat
   [smoker] raise RuntimeError('some releases are not tested by 
TestBackwardsCompatibility?')
   [smoker] RuntimeError: some releases are not tested by 
TestBackwardsCom

[jira] [Commented] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083272#comment-15083272
 ] 

Mark Miller commented on SOLR-8453:
---

bq. On trunk right now, you have to drop that poll to 12ms or less on my 
machine to get the test to pass.

And on 5x, it seems Jetty is not sensitive to the length of the poll (at least 
up to 30 seconds).

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.






[jira] [Commented] (SOLR-8470) Make TTL of PKIAuthenticationPlugin's tokens configurable through a system property

2016-01-05 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083218#comment-15083218
 ] 

Noble Paul commented on SOLR-8470:
--

This is because of the ZK session timeout. Maybe you need to use a higher timeout. 
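Making a hardcoded TTL configurable via a system property could look like the minimal sketch below. The property name {{pkiauth.ttl}} and the 5000ms default are assumptions for illustration; the actual patch may use different names.

```java
// Minimal sketch: make a hardcoded TTL configurable via a system property.
// The property name "pkiauth.ttl" and the 5000 ms default are assumed for
// illustration only; the actual patch may differ.
public class TtlConfig {
    static final long DEFAULT_TTL_MS = 5000L;

    static long resolveTtlMs() {
        // Long.getLong returns the default when the property is unset
        // or cannot be parsed as a long.
        return Long.getLong("pkiauth.ttl", DEFAULT_TTL_MS);
    }

    public static void main(String[] args) {
        System.out.println(resolveTtlMs());   // default when unset
        System.setProperty("pkiauth.ttl", "15000");
        System.out.println(resolveTtlMs());   // overridden value
    }
}
```

Users hitting timeouts could then start Solr with e.g. {{-Dpkiauth.ttl=15000}} instead of recompiling.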



> Make TTL of PKIAuthenticationPlugin's tokens configurable through a system 
> property
> ---
>
> Key: SOLR-8470
> URL: https://issues.apache.org/jira/browse/SOLR-8470
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.3.2, 5.5, Trunk
>
> Attachments: SOLR-8470.patch
>
>
> Currently the PKIAuthenticationPlugin hardcodes the TTL to 5000ms. There 
> are users who have experienced timeouts. Make this configurable.






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_66) - Build # 15446 - Failure!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15446/
Java: 64bit/jdk1.8.0_66 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.lucene.replicator.http.HttpReplicatorTest.testBasic

Error Message:
Connection reset

Stack Trace:
java.net.SocketException: Connection reset
at 
__randomizedtesting.SeedInfo.seed([9C0609EC27F49304:37FC14F9F828152A]:0)
at java.net.SocketInputStream.read(SocketInputStream.java:209)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:139)
at 
org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:155)
at 
org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:284)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:165)
at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:167)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:271)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.lucene.replicator.http.HttpClientBase.executeGET(HttpClientBase.java:159)
at 
org.apache.lucene.replicator.http.HttpReplicator.checkForUpdate(HttpReplicator.java:51)
at 
org.apache.lucene.replicator.ReplicationClient.doUpdate(ReplicationClient.java:196)
at 
org.apache.lucene.replicator.ReplicationClient.updateNow(ReplicationClient.java:402)
at 
org.apache.lucene.replicator.http.HttpReplicatorTest.testBasic(HttpReplicatorTest.java:122)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Sta

[jira] [Commented] (SOLR-8475) Some refactoring to SolrIndexSearcher

2016-01-05 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083191#comment-15083191
 ] 

Christine Poerschke commented on SOLR-8475:
---

bq. If it's possible to leave deprecated inner classes extending the extracted 
classes, then existing user code should work just fine. I haven't attempted to 
do this, but I think that should work.

SOLR-8490 (created as a sub-task of this ticket) is my attempt at this for 
{{QueryCommand}} only. Having the deprecated inner class extend the extracted 
class of the same name was a little tricky, but an interim helper class seems 
to work, though perhaps there is a more proper alternative.
{code}
+++ b/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java
...
-  public static class QueryCommand {
+  @Deprecated
+  public static class QueryCommand extends QueryCommandAdapter {

+++ b/solr/core/src/java/org/apache/solr/search/QueryCommandAdapter.java
...
+@Deprecated
+public class QueryCommandAdapter extends QueryCommand {

+++ b/solr/core/src/java/org/apache/solr/search/QueryCommand.java
...
+public class QueryCommand {
{code}
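The shape of that arrangement can be sketched in plain Java as follows. The class names mirror the patch, but the bodies are hypothetical stand-ins, not the real Solr sources.

```java
// Sketch of the back-compat arrangement: the extracted top-level class holds
// the implementation, an interim adapter bridges (the inner class cannot write
// "extends QueryCommand" directly -- that simple name would resolve to the
// inner class itself), and the deprecated inner class keeps the old
// SolrIndexSearcher.QueryCommand name working. Bodies are illustrative
// stand-ins, not the actual Solr code.
class QueryCommand {                        // new extracted top-level class
    private int rows;
    public QueryCommand setRows(int rows) { this.rows = rows; return this; }
    public int getRows() { return rows; }
}

@Deprecated
class QueryCommandAdapter extends QueryCommand { }   // interim helper

class SolrIndexSearcher {
    @Deprecated
    public static class QueryCommand extends QueryCommandAdapter { }
}

public class BackCompatSketch {
    public static void main(String[] args) {
        // Old-style user code still compiles and behaves like the new class.
        SolrIndexSearcher.QueryCommand cmd = new SolrIndexSearcher.QueryCommand();
        cmd.setRows(10);
        System.out.println(cmd.getRows());
        System.out.println(cmd instanceof QueryCommand); // it "is a" new QueryCommand
    }
}
```

The adapter's only job is to give the deprecated inner class a supertype name that doesn't collide with its own.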

> Some refactoring to SolrIndexSearcher
> -
>
> Key: SOLR-8475
> URL: https://issues.apache.org/jira/browse/SOLR-8475
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8475.patch, SOLR-8475.patch, SOLR-8475.patch, 
> SOLR-8475.patch, SOLR-8475.patch
>
>
> While reviewing {{SolrIndexSearcher}}, I started to correct a thing here and 
> there, and eventually it led to these changes:
> * Moving {{QueryCommand}} and {{QueryResult}} to their own classes.
> * Moving FilterImpl into a private static class (was package-private and 
> defined in the same .java file, but separate class).
> * Some code formatting, imports organizing and minor log changes.
> * Removed fieldNames (handled the TODO in the code)
> * Got rid of usage of deprecated classes such as {{LegacyNumericUtils}} and 
> {{Legacy-*-Field}}.
> I wish we'd cut down the size of this file much more (it's 2500 lines now), 
> but I've decided to stop here so that the patch is manageable. I would like 
> to explore further refactorings afterwards, e.g. extracting cache management 
> code to an outer class (but keep {{SolrIndexSearcher}}'s API the same, if 
> possible).
> If you have additional ideas of more cleanups / simplifications, I'd be glad 
> to do them.






[jira] [Updated] (SOLR-8490) factor out a QueryCommand (super) class from SolrIndexSearcher.QueryCommand

2016-01-05 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8490:
--
Attachment: SOLR-8490-part2.patch
SOLR-8490-part1.patch
SOLR-8490-part0.patch

> factor out a QueryCommand (super) class from SolrIndexSearcher.QueryCommand
> ---
>
> Key: SOLR-8490
> URL: https://issues.apache.org/jira/browse/SOLR-8490
> Project: Solr
>  Issue Type: Sub-task
>  Components: search
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8490-part0.patch, SOLR-8490-part1.patch, 
> SOLR-8490-part2.patch
>
>
> part 0 (for trunk and branch_5x) - preparation:
>  * two minor changes in {{QueryComponent.java}} and 
> {{SolrIndexSearcher.java}} to simplify the subsequent actual changes
> part 1 (for trunk and branch_5x) - factor out a {{QueryCommand}} (super) 
> class from {{SolrIndexSearcher.QueryCommand}}:
> * for back-compat reasons {{SolrIndexSearcher.QueryCommand}} inherits from 
> the factored out class
> * for private variables and methods use {{QueryCommand}} instead of 
> {{SolrIndexSearcher.QueryCommand}}
> * public methods and constructors taking {{SolrIndexSearcher.QueryCommand}} 
> args marked @Deprecated and equivalents with {{QueryCommand}} arg created
> part 2 (for trunk only) - remove deprecated 
> {{SolrIndexSearcher.QueryCommand}} class:
> * affected/changed public or protected methods:
> ** {{ResponseBuilder.getQueryCommand()}}
> ** {{SolrIndexSearcher.search(QueryResult qr, QueryCommand cmd)}}
> ** {{SolrIndexSearcher.sortDocSet(QueryResult qr, QueryCommand cmd)}}






[jira] [Created] (SOLR-8490) factor out a QueryCommand (super) class from SolrIndexSearcher.QueryCommand

2016-01-05 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-8490:
-

 Summary: factor out a QueryCommand (super) class from 
SolrIndexSearcher.QueryCommand
 Key: SOLR-8490
 URL: https://issues.apache.org/jira/browse/SOLR-8490
 Project: Solr
  Issue Type: Sub-task
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor


part 0 (for trunk and branch_5x) - preparation:
 * two minor changes in {{QueryComponent.java}} and {{SolrIndexSearcher.java}} 
to simplify the subsequent actual changes

part 1 (for trunk and branch_5x) - factor out a {{QueryCommand}} (super) class 
from {{SolrIndexSearcher.QueryCommand}}:
* for back-compat reasons {{SolrIndexSearcher.QueryCommand}} inherits from the 
factored out class
* for private variables and methods use {{QueryCommand}} instead of 
{{SolrIndexSearcher.QueryCommand}}
* public methods and constructors taking {{SolrIndexSearcher.QueryCommand}} 
args marked @Deprecated and equivalents with {{QueryCommand}} arg created

part 2 (for trunk only) - remove deprecated {{SolrIndexSearcher.QueryCommand}} 
class:
* affected/changed public or protected methods:
** {{ResponseBuilder.getQueryCommand()}}
** {{SolrIndexSearcher.search(QueryResult qr, QueryCommand cmd)}}
** {{SolrIndexSearcher.sortDocSet(QueryResult qr, QueryCommand cmd)}}







[jira] [Commented] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083159#comment-15083159
 ] 

Mark Miller commented on SOLR-8453:
---

bq. If you remove the 250ms poll that happens ... with the client kind of 
racing the server.

On trunk right now, you have to drop that poll to 12ms or less on my machine to 
get the test to pass.

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.






[jira] [Commented] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083141#comment-15083141
 ] 

Mark Miller commented on SOLR-8453:
---

I think it's more related to using HttpClient than HTTP. We see random 
connection resets in many tests that go away with this patch, but looking at 
the one consistently failing test (SolrExampleStreamingTest#testUpdateField), 
we seem to hit the problem when HttpClient is cleaning up and closing the 
output stream, which flushes a buffer.

{code}
java.net.SocketException: Connection reset
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at 
org.apache.http.impl.io.AbstractSessionOutputBuffer.flushBuffer(AbstractSessionOutputBuffer.java:159)
at 
org.apache.http.impl.io.AbstractSessionOutputBuffer.flush(AbstractSessionOutputBuffer.java:166)
at 
org.apache.http.impl.io.ChunkedOutputStream.close(ChunkedOutputStream.java:205)
at 
org.apache.http.impl.entity.EntitySerializer.serialize(EntitySerializer.java:118)
at 
org.apache.http.impl.AbstractHttpClientConnection.sendRequestEntity(AbstractHttpClientConnection.java:265)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.sendRequestEntity(ManagedClientConnectionImpl.java:203)
at 
org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:237)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:122)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:280)
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:161)
{code}

It may also be that it only happens with this chunked encoding. On close, it 
tries to write a 'closing chunk' and then flush: 
https://github.com/apache/httpcore/blob/4.0.x/httpcore/src/main/java/org/apache/http/impl/io/ChunkedOutputStream.java

If there is a problem here we get the connection reset.
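The failure mode can be illustrated with a small sketch (this is not HttpClient's actual code, just the wire format it has to produce): chunked transfer encoding frames each write as a hex size, CRLF, the payload, and CRLF, and `close()` must still emit the zero-length closing chunk `0\r\n\r\n` and flush. If the server has already reset the connection, writing those final bytes is where the `SocketException` surfaces.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of chunked transfer-encoding framing; not
// HttpClient's implementation.
public class ChunkedFraming {

    static final byte[] CRLF = "\r\n".getBytes(StandardCharsets.US_ASCII);

    // One data chunk: hex size, CRLF, payload, CRLF.
    static byte[] chunk(byte[] data) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.writeBytes(Integer.toHexString(data.length).getBytes(StandardCharsets.US_ASCII));
        out.writeBytes(CRLF);
        out.writeBytes(data);
        out.writeBytes(CRLF);
        return out.toByteArray();
    }

    // The 'closing chunk' written on close(): a zero-length chunk. Flushing
    // these five bytes against an already-reset connection is where the
    // "Connection reset" exception would show up.
    static byte[] closingChunk() {
        return chunk(new byte[0]);
    }

    public static void main(String[] args) {
        assert new String(chunk("hello".getBytes(StandardCharsets.US_ASCII)),
                StandardCharsets.US_ASCII).equals("5\r\nhello\r\n");
        assert new String(closingChunk(), StandardCharsets.US_ASCII).equals("0\r\n\r\n");
    }
}
```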

It does actually seem like a bit of a race to me and I'm not sure how to 
address that yet (other than this patch). If you remove the 250ms poll that 
happens in 
ConcurrentUpdateSolrClient->sendUpdateStream->EntityTemplate->writeTo, it seems 
to go away. But that would indicate our connection management is a bit fragile, 
with the client kind of racing the server.

Still playing around to try and find other potential fixes.

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was 

[jira] [Commented] (SOLR-8470) Make TTL of PKIAuthenticationPlugin's tokens configurable through a system property

2016-01-05 Thread Nirmala Venkatraman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083111#comment-15083111
 ] 

Nirmala Venkatraman commented on SOLR-8470:
---

After applying the TTL patch and setting it to 60 sec, one of the nodes hit this 
error. The most likely culprit is slightly longer GC pauses. Do you think we 
should set autoReplicaFailoverWorkLoopDelay to a value greater than the default 
of 10 sec?

2016-01-04 23:05:37.205 ERROR 
(OverseerHdfsCoreFailoverThread-239245611805900804-sgdsolar7.swg.usma.ibm.com:8984_solr-n_000133)
 [   ] o.a.s.c.OverseerAutoReplicaFailoverThread 
OverseerAutoReplicaFailoverThread had an error in its thread work 
loop.:org.apache.solr.common.SolrException: Error reading cluster properties
at 
org.apache.solr.common.cloud.ZkStateReader.getClusterProps(ZkStateReader.java:732)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.doWork(OverseerAutoReplicaFailoverThread.java:152)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:131)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1040)
at 
org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:311)
at 
org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:308)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
at 
org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:308)
at 
org.apache.solr.common.cloud.ZkStateReader.getClusterProps(ZkStateReader.java:725)
... 3 more

2016-01-04 23:05:37.218 ERROR (OverseerExitThread) [   ] o.a.s.c.Overseer could 
not read the data
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = 
Session expired for /overseer_elect/leader
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at 
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:345)
at 
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:342)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
at 
org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:342)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:300)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.access$300(Overseer.java:87)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater$2.run(Overseer.java:261)
2016-01-04 23:05:37.206 ERROR (qtp829053325-487) [c:collection33 s:shard1 
r:core_node2 x:collection33_shard1_replica1] o.a.s.c.SolrCore 
org.apache.solr.common.SolrException: Cannot talk to ZooKeeper - Updates are 
disabled.


> Make TTL of PKIAuthenticationPlugin's tokens configurable through a system 
> property
> ---
>
> Key: SOLR-8470
> URL: https://issues.apache.org/jira/browse/SOLR-8470
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.3.2, 5.5, Trunk
>
> Attachments: SOLR-8470.patch
>
>
> Currently the PKIAuthenticationPlugin has hardcoded the ttl to 5000ms. There 
> are users who have experienced timeouts. Make this configurable






[jira] [Commented] (SOLR-8470) Make TTL of PKIAuthenticationPlugin's tokens configurable through a system property

2016-01-05 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083084#comment-15083084
 ] 

Noble Paul commented on SOLR-8470:
--

[~nirmalav] Thanks a lot

> Make TTL of PKIAuthenticationPlugin's tokens configurable through a system 
> property
> ---
>
> Key: SOLR-8470
> URL: https://issues.apache.org/jira/browse/SOLR-8470
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.3.2, 5.5, Trunk
>
> Attachments: SOLR-8470.patch
>
>
> Currently the PKIAuthenticationPlugin has hardcoded the ttl to 5000ms. There 
> are users who have experienced timeouts. Make this configurable






[jira] [Closed] (SOLR-8435) Long update times Solr 5.3.1

2016-01-05 Thread Kenny Knecht (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenny Knecht closed SOLR-8435.
--
Resolution: Fixed

This issue seemed to be caused by slower disks in our second setup, but the 
different behaviour between 5.2.1 and 5.3.1 led us to believe it was actually 
a bug. Sorry for bothering you with it!

> Long update times Solr 5.3.1
> 
>
> Key: SOLR-8435
> URL: https://issues.apache.org/jira/browse/SOLR-8435
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 5.3.1
> Environment: Ubuntu server 128Gb
>Reporter: Kenny Knecht
> Fix For: 5.2.1
>
>
> We have 2 128GB ubuntu servers in solr cloud config. We update by curling 
> json files of 20,000 documents. In 5.2.1 this consistently takes between 19 
> and 24 seconds. In 5.3.1 most times this takes 20s but in about 20% of the 
> files this takes much longer: up to 500s! Which files seems to be quite 
> random. Is this a known bug? any workaround? fixed in 5.4?






[jira] [Commented] (SOLR-8470) Make TTL of PKIAuthenticationPlugin's tokens configurable through a system property

2016-01-05 Thread Nirmala Venkatraman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083075#comment-15083075
 ] 

Nirmala Venkatraman commented on SOLR-8470:
---

I applied Noble's patch for pkiauth.ttl (SOLR-8470), set the ttl parameter to 
60 sec (the default is 5 sec), and ran another batch of indexing load. The good 
news is that I didn't hit any of the 401 exceptions seen in SOLR-8422, but one 
of the nodes, sgdsolar7, went into recovery with a ZK session expiration in 
/overseer/elect. 
So I think this is a good fix for 5.3.2
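For reference, assuming the patch exposes the TTL through Solr's usual system-property convention (the property name `pkiauth.ttl` comes from this thread; treating the value as milliseconds is an assumption, not confirmed by the patch), the 60-second setting described above would look roughly like:

```shell
# Hypothetical startup flags: pkiauth.ttl is the property named in this
# thread, but the exact unit handling depends on the committed patch.
bin/solr start -cloud -z zk1:2181 -Dpkiauth.ttl=60000
```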

> Make TTL of PKIAuthenticationPlugin's tokens configurable through a system 
> property
> ---
>
> Key: SOLR-8470
> URL: https://issues.apache.org/jira/browse/SOLR-8470
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.3.2, 5.5, Trunk
>
> Attachments: SOLR-8470.patch
>
>
> Currently the PKIAuthenticationPlugin has hardcoded the ttl to 5000ms. There 
> are users who have experienced timeouts. Make this configurable






[jira] [Commented] (SOLR-8422) Basic Authentication plugin is not working correctly in solrcloud

2016-01-05 Thread Nirmala Venkatraman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083072#comment-15083072
 ] 

Nirmala Venkatraman commented on SOLR-8422:
---

I applied Noble's patch for pkiauth.ttl (SOLR-8470), set the ttl parameter to 
60 sec (the default is 5 sec), and ran another batch of indexing load. The good 
news is that I didn't hit any of the 401 exceptions, but one of the nodes, 
sgdsolar7, went into recovery with a ZK session expiration in /overseer/elect. 

> Basic Authentication plugin is not working correctly in solrcloud
> -
>
> Key: SOLR-8422
> URL: https://issues.apache.org/jira/browse/SOLR-8422
> Project: Solr
>  Issue Type: Bug
>  Components: Authentication
>Affects Versions: 5.3.1
> Environment: Solrcloud
>Reporter: Nirmala Venkatraman
>Assignee: Noble Paul
> Fix For: 5.3.2, 5.5, Trunk
>
> Attachments: SOLR-8422.patch
>
>
> I am seeing a problem with basic auth on Solr 5.3.1. We have a 5-node 
> solrcloud with basic auth configured on sgdsolar1/2/3/4/7, listening on port 
> 8984. We have 64 collections, each having 2 replicas distributed across the 
> 5 servers in the solr cloud. A sample screen shot of the collection/shard 
> locations is shown below:-
> Step 1 - Our solr indexing tool sends a request to any one of the solr 
> servers in the solrcloud, and the request lands on a server which doesn't 
> have the collection.
> Here is the request sent by the indexing tool to sgdsolar1, which includes 
> the correct BasicAuth credentials
> Step 2 - Now sgdsolar1 routes the request to sgdsolar2, which has 
> collection1, but no basic auth header is being passed. 
> As a result sgdsolar2 throws a 401 error back to the source server sgdsolar1 
> and all the way back to the solr indexing tool
> 9.32.182.53 - - [15/Dec/2015:00:45:18 +] "GET 
> /solr/collection1/get?_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&fl=unid,sequence,folderunid&wt=xml&rows=10
>  HTTP/1.1" 401 366
> 2015-12-15 00:45:18.112 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] 
> o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal. 
> failed permission 
> org.apache.solr.security.RuleBasedAuthorizationPlugin$Permission@5ebe8fca
> 2015-12-15 00:45:18.113 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] o.a.s.s.HttpSolrCall 
> USER_REQUIRED auth header null context : userPrincipal: [null] type: [READ], 
> collections: [collection1,], Path: [/get] path : /get params 
> :fl=unid,sequence,folderunid&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&rows=10&wt=xml&_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!
> Step 3 - In another solrcloud, if the indexing tool sends the solr get 
> request to the server that has collection1, I see basic authentication 
> working as expected.
> I double-checked and see that both sgdsolar1/sgdsolar2 servers have the 
> patched solr-core and solr-solrj jar files under the solr-webapp folder that 
> were provided via earlier patches that Anshum/Noble worked on:-
> SOLR-8167 fixes the POST issue 
> SOLR-8326  fixing PKIAuthenticationPlugin.
> SOLR-8355




[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_66) - Build # 5394 - Still Failing!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5394/
Java: 32bit/jdk1.8.0_66 -server -XX:+UseG1GC

8 tests failed.
FAILED:  org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test

Error Message:
partialResults were expected expected: but was:

Stack Trace:
java.lang.AssertionError: partialResults were expected expected: but 
was:
at 
__randomizedtesting.SeedInfo.seed([AB6E62193E7C069E:233A5DC390806B66]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.assertPartialResults(CloudExitableDirectoryReaderTest.java:102)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTimeoutTests(CloudExitableDirectoryReaderTest.java:73)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test(CloudExitableDirectoryReaderTest.java:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util

[jira] [Commented] (SOLR-7733) remove/rename "optimize" references in the UI.

2016-01-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15083041#comment-15083041
 ] 

Jan Høydahl commented on SOLR-7733:
---

The current UI (both classic and Angular) still has a green "Optimized" 
checkmark, which seems to always stay green (both on overview page and on core 
admin page). Should we get rid of them?

Also, the Angular UI removes the "Optimize" button from the Core Admin page. I 
vote for bringing the button back, but with an educational popup:

{panel:title=Are you sure?|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}This will read and write the *entire index*, merging all documents into one segment, and can be very expensive{panel}

Related: I have many times missed a {{Commit}} button in the core admin and 
collections tabs. What do you think?

> remove/rename "optimize" references in the UI.
> --
>
> Key: SOLR-7733
> URL: https://issues.apache.org/jira/browse/SOLR-7733
> Project: Solr
>  Issue Type: Improvement
>  Components: UI
>Affects Versions: 5.3, Trunk
>Reporter: Erick Erickson
>Assignee: Upayavira
>Priority: Minor
> Attachments: SOLR-7733.patch
>
>
> Since optimizing indexes is kind of a special circumstance thing, what do we 
> think about removing (or renaming) optimize-related stuff on the core admin 
> and core overview pages? The "optimize" button is already gone from the core 
> admin screen (was this intentional?).
> My personal feeling is that we should remove this entirely as it's too easy 
> to think "Of course I want my index optimized" and "look, this screen says my 
> index isn't optimized, that must mean I should optimize it".
> The core admin screen and the core overview page both have an "optimized" 
> checkmark, I propose just removing it from the "overview" page and on the 
> "core admin" page changing it to "Segment Count #". NOTE: the "overview" page 
> already has a "Segment Count" entry.






[jira] [Updated] (SOLR-3141) Deprecate OPTIMIZE command in Solr

2016-01-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-3141:
--
Attachment: SOLR-3141.patch

Attached a slightly modified patch. Will commit to trunk only tomorrow if there 
are no objections.

Unless someone feels inclined to implement more code changes for this issue, 
I'll rename and close this JIRA after committing the log patch.

> Deprecate OPTIMIZE command in Solr
> --
>
> Key: SOLR-3141
> URL: https://issues.apache.org/jira/browse/SOLR-3141
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 3.5
>Reporter: Jan Høydahl
>  Labels: force, optimize
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-3141.patch, SOLR-3141.patch, SOLR-3141.patch
>
>
> Background: LUCENE-3454 renames optimize() as forceMerge(). Please read that 
> issue first.
> Now that optimize() is rarely necessary anymore, and renamed in Lucene APIs, 
> what should be done with Solr's ancient optimize command?






[jira] [Assigned] (SOLR-3141) Deprecate OPTIMIZE command in Solr

2016-01-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-3141:
-

Assignee: Jan Høydahl

> Deprecate OPTIMIZE command in Solr
> --
>
> Key: SOLR-3141
> URL: https://issues.apache.org/jira/browse/SOLR-3141
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 3.5
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: force, optimize
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-3141.patch, SOLR-3141.patch, SOLR-3141.patch
>
>
> Background: LUCENE-3454 renames optimize() as forceMerge(). Please read that 
> issue first.
> Now that optimize() is rarely necessary anymore, and renamed in Lucene APIs, 
> what should be done with Solr's ancient optimize command?






[jira] [Updated] (SOLR-8489) TestMiniSolrCloudCluster.createCollection to support extra & alternative collectionProperties

2016-01-05 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8489:
--
Attachment: SOLR-8489.patch

> TestMiniSolrCloudCluster.createCollection to support extra & alternative 
> collectionProperties
> -
>
> Key: SOLR-8489
> URL: https://issues.apache.org/jira/browse/SOLR-8489
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8489.patch
>
>
> * add optional collectionProperties map arg and use putIfAbsent instead of 
> put with the map
> * move persistIndex i.e. solr.directoryFactory randomisation from the several 
> callers to just-once in createCollection
> These changes are refactors only and intended to *not* change the existing 
> tests' behaviour.






[jira] [Created] (SOLR-8489) TestMiniSolrCloudCluster.createCollection to support extra & alternative collectionProperties

2016-01-05 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-8489:
-

 Summary: TestMiniSolrCloudCluster.createCollection to support 
extra & alternative collectionProperties
 Key: SOLR-8489
 URL: https://issues.apache.org/jira/browse/SOLR-8489
 Project: Solr
  Issue Type: Test
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor


* add optional collectionProperties map arg and use putIfAbsent instead of put 
with the map
* move persistIndex i.e. solr.directoryFactory randomisation from the several 
callers to just-once in createCollection

These changes are refactors only and intended to *not* change the existing 
tests' behaviour.
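The putIfAbsent change can be sketched as follows (the class, method, and property names here are illustrative assumptions, not the actual TestMiniSolrCloudCluster code): caller-supplied properties are copied first, then defaults are filled in only for keys the caller did not set, so tests that pass explicit values keep their behaviour.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the putIfAbsent pattern described above; names
// are hypothetical, not the actual test code.
public class CollectionPropsSketch {

    static Map<String, String> withDefaults(Map<String, String> supplied) {
        Map<String, String> props = new HashMap<>(supplied);
        // putIfAbsent keeps any value the caller already provided and only
        // fills in the default when the key is missing.
        props.putIfAbsent("solr.directoryFactory", "solr.RAMDirectoryFactory");
        return props;
    }

    public static void main(String[] args) {
        Map<String, String> explicit = new HashMap<>();
        explicit.put("solr.directoryFactory", "solr.StandardDirectoryFactory");
        // Explicit value wins; a missing key falls back to the default.
        assert withDefaults(explicit).get("solr.directoryFactory")
                .equals("solr.StandardDirectoryFactory");
        assert withDefaults(new HashMap<>()).get("solr.directoryFactory")
                .equals("solr.RAMDirectoryFactory");
    }
}
```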







[jira] [Updated] (SOLR-8488) Add support for leading wildcards to ComplexPhraseQParserPlugin

2016-01-05 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-8488:

Attachment: SOLR-8488.patch

Simple patch with a test case.

> Add support for leading wildcards to ComplexPhraseQParserPlugin
> ---
>
> Key: SOLR-8488
> URL: https://issues.apache.org/jira/browse/SOLR-8488
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8488.patch
>
>
> It would be useful to support leading wildcards in phrase searches as well. 
> Currently we support this query -
> {code}{!complexphrase inOrder=true}name:"Jo* Smith"{code}
> It would be useful to support a query like -
> {code}{!complexphrase inOrder=true\}name:"*Jo* Smith"{code}






[jira] [Created] (SOLR-8488) Add support for leading wildcards to ComplexPhraseQParserPlugin

2016-01-05 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-8488:
---

 Summary: Add support for leading wildcards to 
ComplexPhraseQParserPlugin
 Key: SOLR-8488
 URL: https://issues.apache.org/jira/browse/SOLR-8488
 Project: Solr
  Issue Type: Improvement
Reporter: Varun Thacker
Priority: Minor
 Fix For: 5.5, Trunk


It would be useful to support leading wildcards in phrase searches as well. 

Currently we support this query -

{code}{!complexphrase inOrder=true}name:"Jo* Smith"{code}

It would be useful to be support a query like -

{code}!complexphrase inOrder=true\}name:"*Jo* Smith"{code}








[jira] [Updated] (SOLR-8488) Add support for leading wildcards to ComplexPhraseQParserPlugin

2016-01-05 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-8488:

Description: 
It would be useful to support leading wildcards in phrase searches as well. 

Currently we support this query -

{code}{!complexphrase inOrder=true}name:"Jo* Smith"{code}

It would be useful to support a query like -

{code}{!complexphrase inOrder=true\}name:"*Jo* Smith"{code}



  was:
It would be useful to support leading wildcards in phrase searches as well. 

Currently we support this query -

{code}{!complexphrase inOrder=true}name:"Jo* Smith"{code}

It would be useful to be support a query like -

{code}!complexphrase inOrder=true\}name:"*Jo* Smith"{code}




> Add support for leading wildcards to ComplexPhraseQParserPlugin
> ---
>
> Key: SOLR-8488
> URL: https://issues.apache.org/jira/browse/SOLR-8488
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Priority: Minor
> Fix For: 5.5, Trunk
>
>
> It would be useful to support leading wildcards in phrase searches as well. 
> Currently we support this query -
> {code}{!complexphrase inOrder=true}name:"Jo* Smith"{code}
> It would be useful to support a query like -
> {code}{!complexphrase inOrder=true\}name:"*Jo* Smith"{code}






[jira] [Resolved] (SOLR-8483) tweak open-exchange-rates.json test-file to avoid OpenExchangeRatesOrgProvider.java warnings

2016-01-05 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-8483.
---
   Resolution: Fixed
Fix Version/s: Trunk
   5.5

> tweak open-exchange-rates.json test-file to avoid 
> OpenExchangeRatesOrgProvider.java warnings
> 
>
> Key: SOLR-8483
> URL: https://issues.apache.org/jira/browse/SOLR-8483
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8483.patch
>
>
> Tweak the {{open-exchange-rates.json}} test file so that 
> {{OpenExchangeRatesOrgProvider}} does not emit {{'Unknown key IMPORTANT 
> NOTE'}} and {{'Expected key, got STRING'}} warnings which can be confusing 
> when investigating unrelated test failures.






[jira] [Commented] (LUCENE-6938) Convert build to work with Git rather than SVN.

2016-01-05 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15082984#comment-15082984
 ] 

Dawid Weiss commented on LUCENE-6938:
-

Can we specify the commands (build/precommit checks) that need to "work" with 
a git clone, so that we can go through them in an orderly way and know where we 
are in the migration process? It'd be good to have this done and then vote and 
move development over to git. My candidates would be:

* {{ant clean test}}
* {{ant jar}}
* {{ant validate precommit}}

Then there are follow-ups:

* Maven POMs (scm defs)
* README and other help files referring to SVN
* various python scripts under {{dev-tools/scripts}} invoke SVN
* Jenkins CI job definitions, etc.


> Convert build to work with Git rather than SVN.
> ---
>
> Key: LUCENE-6938
> URL: https://issues.apache.org/jira/browse/LUCENE-6938
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: LUCENE-6938.patch
>
>
> We assume an SVN checkout in parts of our build and will need to move to 
> assuming a Git checkout.
> Patches against https://github.com/dweiss/lucene-solr-svn2git from 
> LUCENE-6933.






[jira] [Commented] (SOLR-8483) tweak open-exchange-rates.json test-file to avoid OpenExchangeRatesOrgProvider.java warnings

2016-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15082981#comment-15082981
 ] 

ASF subversion and git services commented on SOLR-8483:
---

Commit 1723057 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1723057 ]

SOLR-8483: relocate 'IMPORTANT NOTE' in open-exchange-rates.json test-file to 
avoid OpenExchangeRatesOrgProvider.java warnings. (merge in revision 1723040 
from trunk)

> tweak open-exchange-rates.json test-file to avoid 
> OpenExchangeRatesOrgProvider.java warnings
> 
>
> Key: SOLR-8483
> URL: https://issues.apache.org/jira/browse/SOLR-8483
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8483.patch
>
>
> Tweak the {{open-exchange-rates.json}} test file so that 
> {{OpenExchangeRatesOrgProvider}} does not emit {{'Unknown key IMPORTANT 
> NOTE'}} and {{'Expected key, got STRING'}} warnings which can be confusing 
> when investigating unrelated test failures.






[jira] [Updated] (SOLR-8485) SelectStream only works with all lowercase field names and doesn't handle quoted selected fields

2016-01-05 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-8485:
--
Attachment: SOLR-8485.patch

Patch fixes all issues. All relevant tests pass.

> SelectStream only works with all lowercase field names and doesn't handle 
> quoted selected fields
> 
>
> Key: SOLR-8485
> URL: https://issues.apache.org/jira/browse/SOLR-8485
> Project: Solr
>  Issue Type: Bug
>Reporter: Dennis Gove
>Priority: Minor
>  Labels: streaming
> Attachments: SOLR-8485.patch, SOLR-8485.patch
>
>
> Three issues exist if one creates a SelectStream with an expression.
> {code}
> select(
>   search(collection1, fl="personId_i,rating_f", q="rating_f:*", 
> sort="personId_i asc"),
>   personId_i as personId,
>   rating_f as rating
> )
> {code}
> "personId_i as personId" will be parsed as "personid_i as personid"
> 1. The incoming tuple will contain a field "personId_i" but the selection 
> will be looking for a field "personid_i". This field won't be found in the 
> incoming tuple (notice the case difference) and as such no field personId 
> will exist in the outgoing tuple.
> 2. If (1) wasn't an issue, the outgoing tuple would have a field 
> "personid" and not the expected "personId" (notice the case difference). This 
> can lead to other down-the-road issues.
> 3. Also, if one were to quote the selected fields such as in
> {code}
> select(
>   search(collection1, fl="personId_i,rating_f", q="rating_f:*", 
> sort="personId_i asc"),
>   "personId_i as personId",
>   "rating_f as rating"
> )
> {code}
> then the quotes would be included in the field name. Wrapping quotes should 
> be handled properly such that they are removed from the parameters before 
> they are parsed.
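The two fixes the description asks for (preserve field-name case, strip wrapping quotes before parsing) can be sketched in plain Java. This is a hypothetical illustration, not Solr's actual expression parser; the class and method names are invented.

```java
// Hypothetical sketch (not Solr's actual parser): parse a
// "<field> as <alias>" selection parameter while (1) preserving the
// original case of field and alias and (2) stripping one pair of
// wrapping quotes if present.
public class SelectionParamSketch {

    static String[] parseSelection(String param) {
        String trimmed = param.trim();
        // Strip wrapping quotes, e.g. "\"personId_i as personId\"".
        if (trimmed.length() >= 2 && trimmed.startsWith("\"") && trimmed.endsWith("\"")) {
            trimmed = trimmed.substring(1, trimmed.length() - 1);
        }
        // Match the "as" keyword case-insensitively, but do NOT lowercase
        // the field or alias themselves -- that is the bug being fixed.
        String[] parts = trimmed.split("(?i)\\s+as\\s+");
        return parts.length == 2 ? parts : new String[] { trimmed, trimmed };
    }

    public static void main(String[] args) {
        String[] p = parseSelection("\"personId_i as personId\"");
        System.out.println(p[0] + " -> " + p[1]); // prints: personId_i -> personId
    }
}
```

With this approach the incoming tuple's "personId_i" is found as-is, and the outgoing tuple carries the expected "personId".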






[jira] [Commented] (SOLR-8475) Some refactoring to SolrIndexSearcher

2016-01-05 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15082968#comment-15082968
 ] 

Christine Poerschke commented on SOLR-8475:
---

bq. If it's possible to leave deprecated inner classes extending the extracted 
classes, then existing user code should work just fine. I haven't attempted to 
do this, but I think that should work.

I am in the process of attempting this for {{QueryCommand}} only (since my 
unrelated SOLR-8482 change also concerns that class), hoping to post patch(es) 
later today.
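The backward-compatibility pattern under discussion can be sketched as follows. All names here are illustrative stand-ins, not the actual SolrIndexSearcher code: the logic moves to an extracted top-level class, and a deprecated inner class extending it keeps existing user code compiling.

```java
// Hypothetical sketch of the compatibility shim being attempted for
// QueryCommand: extract the class, leave a deprecated subclass behind.
public class SolrIndexSearcherSketch {

    /** Stand-in for the extracted top-level QueryCommand class. */
    static class ExtractedQueryCommand {
        String query;
        ExtractedQueryCommand setQuery(String q) { this.query = q; return this; }
    }

    /** Deprecated shim: code that referenced the old inner class keeps
     *  compiling, while the logic now lives in the extracted class. */
    @Deprecated
    static class QueryCommand extends ExtractedQueryCommand {}

    public static void main(String[] args) {
        QueryCommand cmd = new QueryCommand(); // old-style usage still works
        cmd.setQuery("*:*");
        System.out.println(cmd.query); // prints: *:*
    }
}
```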

> Some refactoring to SolrIndexSearcher
> -
>
> Key: SOLR-8475
> URL: https://issues.apache.org/jira/browse/SOLR-8475
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8475.patch, SOLR-8475.patch, SOLR-8475.patch, 
> SOLR-8475.patch, SOLR-8475.patch
>
>
> While reviewing {{SolrIndexSearcher}}, I started to correct a thing here and 
> there, and eventually it led to these changes:
> * Moving {{QueryCommand}} and {{QueryResult}} to their own classes.
> * Moving FilterImpl into a private static class (was package-private and 
> defined in the same .java file, but separate class).
> * Some code formatting, imports organizing and minor log changes.
> * Removed fieldNames (handled the TODO in the code)
> * Got rid of usage of deprecated classes such as {{LegacyNumericUtils}} and 
> {{Legacy-*-Field}}.
> I wish we'd cut down the size of this file much more (it's 2500 lines now), 
> but I've decided to stop here so that the patch is manageable. I would like 
> to explore further refactorings afterwards, e.g. extracting cache management 
> code to an outer class (but keep {{SolrIndexSearcher}}'s API the same, if 
> possible).
> If you have additional ideas of more cleanups / simplifications, I'd be glad 
> to do them.






[jira] [Closed] (SOLR-3135) New binary request/response format using Avro

2016-01-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-3135.
-
Resolution: Won't Fix

Not a real user need - we have other binary formats that people are happy with.

> New binary request/response format using Avro
> -
>
> Key: SOLR-3135
> URL: https://issues.apache.org/jira/browse/SOLR-3135
> Project: Solr
>  Issue Type: New Feature
>  Components: Response Writers, search
>Reporter: Jan Høydahl
>  Labels: Avro, RequestHandler, ResponseWriter, serialization
>
> Solr does not have a binary request/response format which can be supported by 
> any client/programming language. The JavaBin format is Java only and is also 
> not standards based.
> The proposal (spinoff from SOLR-1535 and SOLR-2204) is to investigate 
> creation of an [Apache Avro|http://avro.apache.org/] based serialization 
> format. First goal is to produce Avro 
> [Schemas|http://avro.apache.org/docs/current/#schemas] for Request and 
> Response and then provide {{AvroRequestHandler}} and {{AvroResponseWriter}}. 
> Secondary goal is to use it for replication.






[jira] [Closed] (SOLR-5612) Document ScandinavianNormalizationFilter and ScandinavianFoldingFilter in ref guide

2016-01-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-5612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-5612.
-
   Resolution: Fixed
Fix Version/s: (was: 4.9)
   (was: Trunk)

Done way back

> Document ScandinavianNormalizationFilter and ScandinavianFoldingFilter in ref 
> guide
> ---
>
> Key: SOLR-5612
> URL: https://issues.apache.org/jira/browse/SOLR-5612
> Project: Solr
>  Issue Type: Task
>  Components: documentation
>Reporter: Jan Høydahl
>Priority: Minor
>
> Add to 
> https://cwiki.apache.org/confluence/display/solr/Language+Analysis#LanguageAnalysis-Language-SpecificFactories
> See LUCENE-5013 as well as:
> http://lucene.apache.org/core/4_6_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianNormalizationFilterFactory.html
> http://lucene.apache.org/core/4_6_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianNormalizationFilter.html
> http://lucene.apache.org/core/4_6_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianFoldingFilterFactory.html
> http://lucene.apache.org/core/4_6_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ScandinavianFoldingFilter.html






[jira] [Closed] (SOLR-3654) Add some tests using Tomcat as servlet container

2016-01-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-3654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-3654.
-
Resolution: Won't Fix

Not relevant anymore

> Add some tests using Tomcat as servlet container
> 
>
> Key: SOLR-3654
> URL: https://issues.apache.org/jira/browse/SOLR-3654
> Project: Solr
>  Issue Type: Task
>  Components: Build
> Environment: Tomcat
>Reporter: Jan Høydahl
>  Labels: Tomcat
>
> All tests use Jetty, we should add some tests for at least one other servlet 
> container (Tomcat). Ref discussion at http://search-lucene.com/m/6mo9Y1WZaWR1






[jira] [Closed] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2016-01-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein closed SOLR-7535.

Resolution: Fixed

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, 
> SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, 
> SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.






[jira] [Commented] (SOLR-8487) Add CommitStream to Streaming API and Streaming Expressions

2016-01-05 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15082950#comment-15082950
 ] 

Joel Bernstein commented on SOLR-8487:
--

Closed this issue by mistake and then re-opened.

> Add CommitStream to Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-8487
> URL: https://issues.apache.org/jira/browse/SOLR-8487
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: Trunk
>Reporter: Jason Gerlowski
>Priority: Minor
> Fix For: Trunk
>
>
> (Paraphrased from Joel's idea/suggestions in the comments of SOLR-7535).
> With SOLR-7535, users can now index documents/tuples using an UpdateStream.  
> However, there's no way currently using the Streaming API to force a commit 
> on the collection that received these updates.
> The purpose of this ticket is to add a CommitStream, which can be used to 
> trigger commit(s) on a given collection.
> The proposed usage/behavior would look a little bit like:
> {{commit(collection, parallel(update(search())))}}
> Note that...
> 1.) CommitStream has a positional collection parameter, to indicate which 
> collection to commit on. (Alternatively, it could recurse through 
> {{children()}} nodes until it finds the UpdateStream, and then retrieve the 
> collection from the UpdateStream).
> 2.) CommitStream forwards all tuples received by an underlying, wrapped 
> stream.
> 3.) CommitStream commits when the underlying stream emits its EOF tuple. 
> (Alternatively, it could commit every X tuples, based on a parameter).
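Behaviors (2) and (3) above can be sketched with a toy stream wrapper in plain Java. Tuple, TupleStream, and the commit Runnable are stand-ins, not Solr's actual streaming classes: every tuple is forwarded unchanged, and the commit fires exactly once, when the wrapped stream emits its EOF tuple.

```java
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch of the proposed CommitStream behavior.
public class CommitStreamSketch {

    /** Stand-in for a streaming tuple; only the EOF flag matters here. */
    static final class Tuple {
        final String value;
        final boolean eof;
        Tuple(String value, boolean eof) { this.value = value; this.eof = eof; }
        boolean isEof() { return eof; }
    }

    /** Stand-in for Solr's TupleStream: one read() call per tuple. */
    interface TupleStream { Tuple read(); }

    /** Forwards every tuple from the wrapped stream; commits on EOF. */
    static final class CommitStream implements TupleStream {
        private final TupleStream inner;
        private final Runnable commit; // e.g. a commit on the target collection

        CommitStream(TupleStream inner, Runnable commit) {
            this.inner = inner;
            this.commit = commit;
        }

        @Override
        public Tuple read() {
            Tuple t = inner.read();
            if (t.isEof()) {
                commit.run(); // fires once, when the wrapped stream is exhausted
            }
            return t; // all tuples, including EOF, are forwarded unchanged
        }
    }

    public static void main(String[] args) {
        Iterator<Tuple> tuples = List.of(
                new Tuple("a", false),
                new Tuple("b", false),
                new Tuple("EOF", true)).iterator();
        CommitStream cs = new CommitStream(tuples::next, () -> System.out.println("commit!"));
        Tuple t;
        do {
            t = cs.read();
            System.out.println(t.value);
        } while (!t.isEof());
        // prints: a, b, commit!, EOF
    }
}
```

The commit-every-X-tuples alternative would only change the condition inside read() from an EOF check to a counter.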






[jira] [Closed] (SOLR-2370) Let some UpdateProcessors be default without explicitly configuring them

2016-01-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-2370.
-
Resolution: Implemented

Already implemented in other issues

> Let some UpdateProcessors be default without explicitly configuring them
> 
>
> Key: SOLR-2370
> URL: https://issues.apache.org/jira/browse/SOLR-2370
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Reporter: Jan Høydahl
>  Labels: UpdateProcessor, UpdateProcessorChain
>
> Problem:
> Today the user needs to make sure that crucial UpdateProcessors like the Log- 
> and Run UpdateProcessors are present when creating a new 
> UpdateRequestProcessorChain. This is error prone, and when introducing a new 
> core UpdateProcessor, like in SOLR-2358, all existing users need to insert 
> the changes into all their pipelines.
> A custom-made pipeline should not need to care about distributed indexing, 
> logging or anything else, and should be as slim as possible.






[jira] [Reopened] (SOLR-8487) Add CommitStream to Streaming API and Streaming Expressions

2016-01-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reopened SOLR-8487:
--

> Add CommitStream to Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-8487
> URL: https://issues.apache.org/jira/browse/SOLR-8487
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: Trunk
>Reporter: Jason Gerlowski
>Priority: Minor
> Fix For: Trunk
>
>
> (Paraphrased from Joel's idea/suggestions in the comments of SOLR-7535).
> With SOLR-7535, users can now index documents/tuples using an UpdateStream.  
> However, there's no way currently using the Streaming API to force a commit 
> on the collection that received these updates.
> The purpose of this ticket is to add a CommitStream, which can be used to 
> trigger commit(s) on a given collection.
> The proposed usage/behavior would look a little bit like:
> {{commit(collection, parallel(update(search())))}}
> Note that...
> 1.) CommitStream has a positional collection parameter, to indicate which 
> collection to commit on. (Alternatively, it could recurse through 
> {{children()}} nodes until it finds the UpdateStream, and then retrieve the 
> collection from the UpdateStream).
> 2.) CommitStream forwards all tuples received by an underlying, wrapped 
> stream.
> 3.) CommitStream commits when the underlying stream emits its EOF tuple. 
> (Alternatively, it could commit every X tuples, based on a parameter).






[jira] [Closed] (SOLR-8487) Add CommitStream to Streaming API and Streaming Expressions

2016-01-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein closed SOLR-8487.

Resolution: Fixed

> Add CommitStream to Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-8487
> URL: https://issues.apache.org/jira/browse/SOLR-8487
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: Trunk
>Reporter: Jason Gerlowski
>Priority: Minor
> Fix For: Trunk
>
>
> (Paraphrased from Joel's idea/suggestions in the comments of SOLR-7535).
> With SOLR-7535, users can now index documents/tuples using an UpdateStream.  
> However, there's no way currently using the Streaming API to force a commit 
> on the collection that received these updates.
> The purpose of this ticket is to add a CommitStream, which can be used to 
> trigger commit(s) on a given collection.
> The proposed usage/behavior would look a little bit like:
> {{commit(collection, parallel(update(search())))}}
> Note that...
> 1.) CommitStream has a positional collection parameter, to indicate which 
> collection to commit on. (Alternatively, it could recurse through 
> {{children()}} nodes until it finds the UpdateStream, and then retrieve the 
> collection from the UpdateStream).
> 2.) CommitStream forwards all tuples received by an underlying, wrapped 
> stream.
> 3.) CommitStream commits when the underlying stream emits its EOF tuple. 
> (Alternatively, it could commit every X tuples, based on a parameter).






[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_66) - Build # 5524 - Failure!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5524/
Java: 32bit/jdk1.8.0_66 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
[index.20160106004517175, index.20160106004518438, index.properties, 
replication.properties] expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: [index.20160106004517175, index.20160106004518438, 
index.properties, replication.properties] expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([E9A85FB0F2D22DA7:32035F76F7FA4414]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:820)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:787)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomiz
