[JENKINS] Lucene-Solr-repro - Build # 3418 - Unstable

2019-07-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/3418/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/141/consoleText

[repro] Revision: cb1b86b80abe5418e4118991be7bb39e4840fe5f

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=CdcrReplicationHandlerTest 
-Dtests.method=testReplicationWithBufferedUpdates -Dtests.seed=C970837E2C03C48E 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=lv-LV -Dtests.timezone=America/Mendoza -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=HdfsAutoAddReplicasIntegrationTest 
-Dtests.method=testSimple -Dtests.seed=C970837E2C03C48E -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=he -Dtests.timezone=Asia/Dushanbe -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
dd4813d5b82d7e983a3541be54cb9b0e04f246ce
[repro] git fetch
[repro] git checkout cb1b86b80abe5418e4118991be7bb39e4840fe5f

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro] solr/core
[repro]   HdfsAutoAddReplicasIntegrationTest
[repro]   CdcrReplicationHandlerTest
[repro] ant compile-test

[...truncated 3577 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.HdfsAutoAddReplicasIntegrationTest|*.CdcrReplicationHandlerTest"
 -Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=C970837E2C03C48E -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=he -Dtests.timezone=Asia/Dushanbe -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 5721 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest
[repro]   2/5 failed: 
org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest
[repro] git checkout dd4813d5b82d7e983a3541be54cb9b0e04f246ce

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-8.x-Windows (64bit/jdk-12.0.1) - Build # 351 - Still Unstable!

2019-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/351/
Java: 64bit/jdk-12.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

7 tests failed.
FAILED:  
org.apache.solr.cloud.LegacyCloudClusterPropTest.testCreateCollectionSwitchLegacyCloud

Error Message:
IOException occurred when talking to server at: https://127.0.0.1:59187/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occurred when 
talking to server at: https://127.0.0.1:59187/solr
at 
__randomizedtesting.SeedInfo.seed([2DB539A03D146340:FCB2CB25991BE872]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:670)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.LegacyCloudClusterPropTest.createAndTest(LegacyCloudClusterPropTest.java:87)
at 
org.apache.solr.cloud.LegacyCloudClusterPropTest.testCreateCollectionSwitchLegacyCloud(LegacyCloudClusterPropTest.java:79)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[JENKINS] Lucene-Solr-BadApples-NightlyTests-8.x - Build # 24 - Unstable

2019-07-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-8.x/24/

1 tests failed.
FAILED:  org.apache.solr.cloud.SharedFSAutoReplicaFailoverTest.test

Error Message:
Expected numSlices=5 numReplicas=1 but found 
DocCollection(solrj_collection4//collections/solrj_collection4/state.json/26)={ 
  "pullReplicas":"0",   "replicationFactor":"1",   "shards":{ "shard1":{
   "range":"8000-b332",   "state":"active",   "replicas":{}},   
  "shard2":{   "range":"b333-e665",   "state":"active",   
"replicas":{"core_node5":{   
"dataDir":"hdfs://lucene2-us-west.apache.org:37041/solr_hdfs_home/solrj_collection4/core_node5/data/",
   "base_url":"http://127.0.0.1:33923;,   
"node_name":"127.0.0.1:33923_",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://lucene2-us-west.apache.org:37041/solr_hdfs_home/solrj_collection4/core_node5/data/tlog",
   "core":"solrj_collection4_shard2_replica_n2",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}}}, "shard3":{   "range":"e666-1998",   
"state":"active",   "replicas":{"core_node7":{   
"dataDir":"hdfs://lucene2-us-west.apache.org:37041/solr_hdfs_home/solrj_collection4/core_node7/data/",
   "base_url":"http://127.0.0.1:37578;,   
"node_name":"127.0.0.1:37578_",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://lucene2-us-west.apache.org:37041/solr_hdfs_home/solrj_collection4/core_node7/data/tlog",
   "core":"solrj_collection4_shard3_replica_n4",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}}}, "shard4":{   "range":"1999-4ccb",   
"state":"active",   "replicas":{"core_node9":{   
"dataDir":"hdfs://lucene2-us-west.apache.org:37041/solr_hdfs_home/solrj_collection4/core_node9/data/",
   "base_url":"http://127.0.0.1:37578;,   
"node_name":"127.0.0.1:37578_",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://lucene2-us-west.apache.org:37041/solr_hdfs_home/solrj_collection4/core_node9/data/tlog",
   "core":"solrj_collection4_shard4_replica_n6",   
"shared_storage":"true",   "state":"active",   
"leader":"true"}}}, "shard5":{   "range":"4ccc-7fff",   
"state":"active",   "replicas":{"core_node10":{   
"dataDir":"hdfs://lucene2-us-west.apache.org:37041/solr_hdfs_home/solrj_collection4/core_node10/data/",
   "base_url":"http://127.0.0.1:33923;,   
"node_name":"127.0.0.1:33923_",   "type":"NRT",   
"force_set_state":"false",   
"ulogDir":"hdfs://lucene2-us-west.apache.org:37041/solr_hdfs_home/solrj_collection4/core_node10/data/tlog",
   "core":"solrj_collection4_shard5_replica_n8",   
"shared_storage":"true",   "state":"active",   
"leader":"true",   "router":{ "field":"text", 
"name":"compositeId"},   "maxShardsPerNode":"5",   "autoAddReplicas":"true",   
"nrtReplicas":"1",   "tlogReplicas":"0"} with /live_nodes: [127.0.0.1:33923_, 
127.0.0.1:33953_, 127.0.0.1:36262_, 127.0.0.1:37578_, 127.0.0.1:44299_]

Stack Trace:
java.lang.AssertionError: Expected numSlices=5 numReplicas=1 but found 
DocCollection(solrj_collection4//collections/solrj_collection4/state.json/26)={
  "pullReplicas":"0",
  "replicationFactor":"1",
  "shards":{
"shard1":{
  "range":"8000-b332",
  "state":"active",
  "replicas":{}},
"shard2":{
  "range":"b333-e665",
  "state":"active",
  "replicas":{"core_node5":{
  
"dataDir":"hdfs://lucene2-us-west.apache.org:37041/solr_hdfs_home/solrj_collection4/core_node5/data/",
  "base_url":"http://127.0.0.1:33923;,
  "node_name":"127.0.0.1:33923_",
  "type":"NRT",
  "force_set_state":"false",
  
"ulogDir":"hdfs://lucene2-us-west.apache.org:37041/solr_hdfs_home/solrj_collection4/core_node5/data/tlog",
  "core":"solrj_collection4_shard2_replica_n2",
  "shared_storage":"true",
  "state":"active",
  "leader":"true"}}},
"shard3":{
  "range":"e666-1998",
  "state":"active",
  "replicas":{"core_node7":{
  
"dataDir":"hdfs://lucene2-us-west.apache.org:37041/solr_hdfs_home/solrj_collection4/core_node7/data/",
  "base_url":"http://127.0.0.1:37578;,
  "node_name":"127.0.0.1:37578_",
  "type":"NRT",
  "force_set_state":"false",
  
"ulogDir":"hdfs://lucene2-us-west.apache.org:37041/solr_hdfs_home/solrj_collection4/core_node7/data/tlog",
  "core":"solrj_collection4_shard3_replica_n4",
  "shared_storage":"true",
  "state":"active",
  "leader":"true"}}},
"shard4":{
  "range":"1999-4ccb",
  "state":"active",

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-12.0.1) - Build # 24351 - Unstable!

2019-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24351/
Java: 64bit/jdk-12.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.AliasIntegrationTest.testClusterStateProviderAPI

Error Message:
{} expected:<2> but was:<0>

Stack Trace:
java.lang.AssertionError: {} expected:<2> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([5D372FFE86BA67B8:42E0B3D2F5B19EF3]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.solr.cloud.AliasIntegrationTest.testClusterStateProviderAPI(AliasIntegrationTest.java:303)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)




Build Log:
[...truncated 14162 lines...]
   [junit4] Suite: org.apache.solr.cloud.AliasIntegrationTest
   [junit4]   2> Creating dataDir: 

[JENKINS] Lucene-Solr-8.x-Solaris (64bit/jdk1.8.0) - Build # 219 - Still Unstable!

2019-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Solaris/219/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.basicTest

Error Message:
Timeout waiting for replica win the election Timeout waiting to see state for 
collection=basicTest 
:DocCollection(basicTest//collections/basicTest/state.json/8)={   
"pullReplicas":"0",   "replicationFactor":"1",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node2":{   "core":"basicTest_shard1_replica_n1",   
"base_url":"http://127.0.0.1:50809/solr;,   
"node_name":"127.0.0.1:50809_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node4":{   "core":"basicTest_shard1_replica_n3",   
"base_url":"http://127.0.0.1:46957/solr;,   
"node_name":"127.0.0.1:46957_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"1",   "tlogReplicas":"0"} Live 
Nodes: [127.0.0.1:46957_solr, 127.0.0.1:50809_solr, 127.0.0.1:53345_solr] Last 
available state: DocCollection(basicTest//collections/basicTest/state.json/8)={ 
  "pullReplicas":"0",   "replicationFactor":"1",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node2":{   "core":"basicTest_shard1_replica_n1",   
"base_url":"http://127.0.0.1:50809/solr;,   
"node_name":"127.0.0.1:50809_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node4":{   "core":"basicTest_shard1_replica_n3",   
"base_url":"http://127.0.0.1:46957/solr;,   
"node_name":"127.0.0.1:46957_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"1",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Timeout waiting for replica win the election
Timeout waiting to see state for collection=basicTest 
:DocCollection(basicTest//collections/basicTest/state.json/8)={
  "pullReplicas":"0",
  "replicationFactor":"1",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node2":{
  "core":"basicTest_shard1_replica_n1",
  "base_url":"http://127.0.0.1:50809/solr;,
  "node_name":"127.0.0.1:50809_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true"},
"core_node4":{
  "core":"basicTest_shard1_replica_n3",
  "base_url":"http://127.0.0.1:46957/solr;,
  "node_name":"127.0.0.1:46957_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"1",
  "tlogReplicas":"0"}
Live Nodes: [127.0.0.1:46957_solr, 127.0.0.1:50809_solr, 127.0.0.1:53345_solr]
Last available state: 
DocCollection(basicTest//collections/basicTest/state.json/8)={
  "pullReplicas":"0",
  "replicationFactor":"1",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node2":{
  "core":"basicTest_shard1_replica_n1",
  "base_url":"http://127.0.0.1:50809/solr;,
  "node_name":"127.0.0.1:50809_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true"},
"core_node4":{
  "core":"basicTest_shard1_replica_n3",
  "base_url":"http://127.0.0.1:46957/solr;,
  "node_name":"127.0.0.1:46957_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"1",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([21240472C62F9E44:D3D01310828A9377]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:310)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:288)
at 
org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.basicTest(LeaderVoteWaitTimeoutTest.java:140)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at 

Re: ReleaseWizard tool

2019-07-05 Thread Jan Høydahl
Go for it. For me it was a very interesting experience, and I will likely do it 
again at some point!

Jan Høydahl

> On 5 Jul 2019, at 21:00, David Smiley wrote:
> 
> Nice Jan!  Maybe I'll be an RM one day, now that there's a nice tool to help 
> :-)
> 
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley
> 
> 
>> On Thu, Jul 4, 2019 at 2:53 PM Jan Høydahl  wrote:
>> I wrote an article at LinkedIN pulse about the release process and the tool:
>> https://www.linkedin.com/pulse/releasing-lucene-just-61-steps-jan-høydahl/
>> 
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>> 
>>> On 11 Jun 2019, at 10:46, Jan Høydahl wrote:
>>> 
>>> I have now pushed the ReleaseWizard tool in 
>>> https://issues.apache.org/jira/browse/LUCENE-8852
>>> Appreciate all kinds of feedback!
>>> 
>>> --
>>> Jan Høydahl, search solution architect
>>> Cominvent AS - www.cominvent.com
>>> 
 On 1 Jun 2019, at 20:26, Jan Høydahl wrote:
 
 As I said, I’ll start a thread about this, please reply to that instead of 
 continuing discussion in this thread which is about releaseWizard :)
 
 Jan Høydahl
 
> On 1 Jun 2019, at 15:53, Michael Sokolov wrote:
> 
> I'm not sure what the proper way to use fix version is. Suppose you back 
> port a fix to multiple branches? Should fixVersion list all of them? Just 
> pick one?
> 
>> On Wed, May 29, 2019, 6:00 PM Jan Høydahl  wrote:
>> My releaseWizard tool is getting more complete as the 7.7.2 release 
>> progresses. Will share the code just after I complete all steps.
>> 
>> I tested releasedocmaker and it digs up all the JIRA issues marked as 
>> RESOLVED for the version and creates two files.
>> CHANGELOG.md simply lists all issues under headings IMPROVEMENTS, BUG 
>> FIXES etc
>> One problem I found with how the CHANGELOG works is that it adds all 
>> issues having the version in fixVersion, even if the feature
>> was already released in an earlier version. That is because of the way 
>> we use JIRA fixVersion, adding both e.g. "master (9.0)" and "8.2"
>> at the same time, even if we know that 8.2 is the version the feature 
>> will be released in. If we stop always adding "master" to fixVersion
>> but strive to keep it a list of the versions the feature/bugfix is FIRST 
>> introduced in, then this tool will do the correct job.
>> 
>> RELEASENOTES.md lists "...new developer and user-facing 
>> incompatibilities, important issues, features, and major improvements.".
>> And if we enable the JIRA field "Release Notes" (we don't have it now), 
>> the content of that field will be used in the release notes instead of 
>> the JIRA description.
>> You can select any issue to surface in RELEASENOTES by adding a certain 
>> label, by default "backward-incompatible".
>> 
>> I think it could be a welcome addition to our flow. We can't expect the 
>> output from the tool to be used as-is, sometimes a major feature spans 
>> multiple
>> JIRAs etc, but it could be a good starting point, and would shift the 
>> burden of documenting important and breaking changes from release-time 
>> to commit-time,
>> if we as committers manage to adjust our routines. We could even have a 
>> weekly job that runs the releasedocmaker and sends the output to dev@ 
>> list for active branches, to keep focus.
>>  
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>> 
>>> On 17 May 2019, at 13:45, Jan Høydahl wrote:
>>> 
>>> Yes, I thought we could use 
>>> https://yetus.apache.org/documentation/0.10.0/releasedocmaker/ to 
>>> generate the draft, and this could be wired into the releaseWizard tool.
>>> 
>>> --
>>> Jan Høydahl, search solution architect
>>> Cominvent AS - www.cominvent.com
>>> 
 On 17 May 2019, at 06:40, Ishan Chattopadhyaya wrote:
 
 Much needed. Thanks for working on it.
 
 Here's an idea I was thinking about yesterday: the most tedious step 
 is to generate release highlights. We should have a JIRA field 
 "release highlight" which, when populated, will have the text that 
 will be featured in the announce mail and on the website in news. That 
 way, generating those mails can be semi/fully automated.
 
 Alternatively, this field can just be a Boolean check box and title of 
 the Jira can be used as highlight. This will force the committer to 
 keep meaningful titles.
 
> On Thu, 16 May, 2019, 10:58 PM Jan Høydahl,  
> wrote:
> Just a heads-up that as part of my releasing 7.7.2 effort I'm also 
> hacking on
> a releaseWizard script to replace the ReleaseTodo wiki page. It will 
> act as a
> checklist 

[JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 140 - Still Failing

2019-07-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/140/

No tests ran.

Build Log:
[...truncated 24989 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2587 links (2117 relative) to 3396 anchors in 259 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/solr-8.2.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings 

[JENKINS] Lucene-Solr-8.x-MacOSX (64bit/jdk-12.0.1) - Build # 225 - Unstable!

2019-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-MacOSX/225/
Java: 64bit/jdk-12.0.1 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionWithTlogReplicasTest.test

Error Message:
Timeout waiting for state down of replica core_node4, current state recovering 
expected: but was:

Stack Trace:
java.lang.AssertionError: Timeout waiting for state down of replica core_node4, 
current state recovering expected: but was:
at 
__randomizedtesting.SeedInfo.seed([18F683E1CEDB306D:90A2BC3B60275D95]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:118)
at 
org.apache.solr.cloud.HttpPartitionTest.waitForState(HttpPartitionTest.java:326)
at 
org.apache.solr.cloud.HttpPartitionTest.testDoRecoveryOnRestart(HttpPartitionTest.java:184)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:132)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

Re: ReleaseWizard tool

2019-07-05 Thread David Smiley
Nice Jan!  Maybe I'll be an RM one day, now that there's a nice tool to
help :-)

~ David Smiley
Apache Lucene/Solr Search Developer
http://www.linkedin.com/in/davidwsmiley


On Thu, Jul 4, 2019 at 2:53 PM Jan Høydahl  wrote:

> I wrote an article at LinkedIN pulse about the release process and the
> tool:
> https://www.linkedin.com/pulse/releasing-lucene-just-61-steps-jan-høydahl/
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> On 11 Jun 2019, at 10:46, Jan Høydahl wrote:
>
> I have now pushed the ReleaseWizard tool in
> https://issues.apache.org/jira/browse/LUCENE-8852
> Appreciate all kinds of feedback!
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> On 1 Jun 2019, at 20:26, Jan Høydahl wrote:
>
> As I said, I’ll start a thread about this, please reply to that instead of
> continuing discussion in this thread which is about releaseWizard :)
>
> Jan Høydahl
>
> On 1 Jun 2019, at 15:53, Michael Sokolov wrote:
>
> I'm not sure what the proper way to use fix version is. Suppose you back
> port a fix to multiple branches? Should fixVersion list all of them? Just
> pick one?
>
> On Wed, May 29, 2019, 6:00 PM Jan Høydahl  wrote:
>
>> My releaseWizard tool is getting more complete as the 7.7.2 release
>> progresses. Will share the code just after I complete all steps.
>>
>> I tested releasedocmaker and it digs up all the JIRA issues marked as
>> RESOLVED for the version and creates two files.
>> CHANGELOG.md simply lists all issues under headings IMPROVEMENTS, BUG
>> FIXES etc
>> One problem I found with how the CHANGELOG works is that it adds all
>> issues having the version in fixVersion, even if the feature
>> was already released in an earlier version. That is because of the way we
>> use JIRA fixVersion, adding both e.g. "master (9.0)" and "8.2"
>> at the same time, even if we know that 8.2 is the version the feature
>> will be released in. If we stop always adding "master" to fixVersion
>> but strive to keep it a list of the versions the feature/bugfix is FIRST
>> introduced in, then this tool will do the correct job.
>>
>> RELEASENOTES.md lists "...new developer and user-facing
>> incompatibilities, important issues, features, and major improvements.".
>> And if we enable the JIRA field "Release Notes" (we don't have it now),
>> the content of that field will be used in the release notes instead of the
>> JIRA description.
>> You can select any issue to surface in RELEASENOTES by adding a certain
>> label, by default "backward-incompatible".
>>
>> I think it could be a welcome addition to our flow. We can't expect the
>> output from the tool to be used as-is, sometimes a major feature spans
>> multiple
>> JIRAs etc, but it could be a good starting point, and would shift the
>> burden of documenting important and breaking changes from release-time to
>> commit-time,
>> if we as committers manage to adjust our routines. We could even have a
>> weekly job that runs the releasedocmaker and sends the output to dev@
>> list for active branches, to keep focus.
>>
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>>
>> On 17 May 2019, at 13:45, Jan Høydahl wrote:
>>
>> Yes, I thought we could use
>> https://yetus.apache.org/documentation/0.10.0/releasedocmaker/ to
>> generate the draft, and this could be wired into the releaseWizard tool.
>>
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>>
>> On 17 May 2019, at 06:40, Ishan Chattopadhyaya <ichattopadhy...@gmail.com> wrote:
>>
>> Much needed. Thanks for working on it.
>>
>> Here's an idea I was thinking about yesterday: the most tedious step is
>> to generate release highlights. We should have a JIRA field "release
>> highlight" which, when populated, will have the text that will be featured
>> in the announce mail and on the website in news. That way, generating those
>> mails can be semi/fully automated.
>>
>> Alternatively, this field can just be a Boolean check box and title of
>> the Jira can be used as highlight. This will force the committer to keep
>> meaningful titles.
>>
>> On Thu, 16 May, 2019, 10:58 PM Jan Høydahl, 
>> wrote:
>>
>>> Just a heads-up that as part of my releasing 7.7.2 effort I'm also
>>> hacking on
>>> a releaseWizard script to replace the ReleaseTodo wiki page. It will act
>>> as a
>>> checklist where you see tasks that need to be done (different for
>>> major/minor/bug)
>>> and mark those completed. It will also run all the commands for you and
>>> preserve
>>> the logs, generate e-mail templates with all versions, dates etc in
>>> place, handle
>>> voting rules and counting etc. It will also generate an asciidoc + HTML
>>> page that
>>> gives a nice overview of the whole thing :)
>>>
>>> Here's a teaser:
>>>
>>> https://asciinema.org/a/246656
>>>
>>>
>>>   ┌──────────────────────────────────┐
>>>   │  Releasing Lucene/Solr 7.7.2 RC1 │
>>>   └──────────────────────────────────┘
>>> 

[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-07-05 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16879449#comment-16879449
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit 6d89abb4bd9899bd46829f0048f9e9bcf9c7f08f in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6d89abb ]

SOLR-13105: Add transformations


> A visual guide to Solr Math Expressions and Streaming Expressions
> -
>
> Key: SOLR-13105
> URL: https://issues.apache.org/jira/browse/SOLR-13105
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: Screen Shot 2019-01-14 at 10.56.32 AM.png, Screen Shot 
> 2019-02-21 at 2.14.43 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-04 at 7.47.57 PM.png, Screen Shot 2019-03-13 at 10.47.47 
> AM.png, Screen Shot 2019-03-30 at 6.17.04 PM.png
>
>
> Visualization is now a fundamental element of Solr Streaming Expressions and 
> Math Expressions. This ticket will create a visual guide to Solr Math 
> Expressions and Solr Streaming Expressions that includes *Apache Zeppelin* 
> visualization examples.
> It will also cover using the JDBC expression to *analyze* and *visualize* 
> results from any JDBC compliant data source.
> Intro from the guide:
> {code:java}
> Streaming Expressions exposes the capabilities of Solr Cloud as composable 
> functions. These functions provide a system for searching, transforming, 
> analyzing and visualizing data stored in Solr Cloud collections.
> At a high level there are four main capabilities that will be explored in the 
> documentation:
> * Searching, sampling and aggregating results from Solr.
> * Transforming result sets after they are retrieved from Solr.
> * Analyzing and modeling result sets using probability and statistics and 
> machine learning libraries.
> * Visualizing result sets, aggregations and statistical models of the data.
> {code}
>  
> A few sample visualizations are attached to the ticket.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13507) Remove support for "addr" parameter from the "/solr/admin/zookeeper" endpoint.

2019-07-05 Thread Anshum Gupta (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-13507.
-
Resolution: Fixed

> Remove support for "addr" parameter from the "/solr/admin/zookeeper" endpoint.
> --
>
> Key: SOLR-13507
> URL: https://issues.apache.org/jira/browse/SOLR-13507
> Project: Solr
>  Issue Type: Improvement
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: 8.2
>
> Attachments: SOLR-13507.02.patch, SOLR-13507.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The addr parameter isn't needed and it should be removed from the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13507) Remove support for "addr" parameter from the "/solr/admin/zookeeper" endpoint.

2019-07-05 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16879432#comment-16879432
 ] 

ASF subversion and git services commented on SOLR-13507:


Commit 5d3a84fcd0f3d4bded24e6db0c78bbdcba6f3b2a in lucene-solr's branch 
refs/heads/branch_8x from Anshum Gupta
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5d3a84f ]

SOLR-13507: Remove support for addr parameter from the /solr/admin/zookeeper 
endpoint. (#759) (#766)



> Remove support for "addr" parameter from the "/solr/admin/zookeeper" endpoint.
> --
>
> Key: SOLR-13507
> URL: https://issues.apache.org/jira/browse/SOLR-13507
> Project: Solr
>  Issue Type: Improvement
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: 8.2
>
> Attachments: SOLR-13507.02.patch, SOLR-13507.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The addr parameter isn't needed and it should be removed from the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] anshumg merged pull request #766: SOLR-13507: Remove support for addr parameter from the /solr/admin/zookeeper endpoint. (#759)

2019-07-05 Thread GitBox
anshumg merged pull request #766: SOLR-13507: Remove support for addr parameter 
from the /solr/admin/zookeeper endpoint. (#759)
URL: https://github.com/apache/lucene-solr/pull/766
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] anshumg opened a new pull request #766: SOLR-13507: Remove support for addr parameter from the /solr/admin/zookeeper endpoint. (#759)

2019-07-05 Thread GitBox
anshumg opened a new pull request #766: SOLR-13507: Remove support for addr 
parameter from the /solr/admin/zookeeper endpoint. (#759)
URL: https://github.com/apache/lucene-solr/pull/766
 
 
   SOLR-13507: Remove support for addr parameter from the /solr/admin/zookeeper 
endpoint. (#759)
   
   back-port from master 
https://github.com/anshumg/lucene-solr/commit/b7090d9c25ba430442628b0dc77c7c700cb35b33
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] tokee commented on issue #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-05 Thread GitBox
tokee commented on issue #754: LUCENE-8875: Introduce Optimized Collector For 
Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#issuecomment-508818049
 
 
   I don't know if I have much to add to 
https://sbdevel.wordpress.com/2015/10/05/speeding-up-core-search/ that Atri 
linked to on the JIRA: Yes, it's definitely possible to do tricks with large 
result sets. Especially with the simple "just sort on score" case, where the 
really large win in my book is less GC pressure: using a single `long[]` to 
hold the structure instead of a gazillion small objects.
   
   Here's the but: I haven't pursued it further as we (Royal Danish Library) 
have little use for it. Being able to handle large result sets in a single 
shard does not help much with multi-shard setups, where the merging node is 
likely to blow up. An iterative approach, such as `cursorMark` or Solr's 
`export`, is less prone to surprises and scales indefinitely. That being said, 
I won't stand in the way of building a fine foot blowing gun - for some use 
cases it would be a great win.
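
   For readers who want to see the shape of that trick, here is a minimal, 
self-contained sketch (my own illustration, not code from the linked post or 
from this PR; the class and variable names are made up): each hit packs its 
score bits and doc id into one long, so a large result set can be collected 
and sorted with a single primitive array instead of per-hit objects.

```java
import java.util.Arrays;

// Sketch only: pack (score, docId) into one long per hit. Assumes scores are
// non-negative floats, whose raw int bits sort in the same order as the values.
public class PackedHitsSketch {

  static long pack(float score, int docId) {
    // high 32 bits = score bits, low 32 bits = doc id
    return (((long) Float.floatToIntBits(score)) << 32) | (docId & 0xFFFFFFFFL);
  }

  public static void main(String[] args) {
    long[] hits = { pack(0.3f, 7), pack(2.1f, 42), pack(0.9f, 3) };
    Arrays.sort(hits);                            // ascending by score
    for (int i = hits.length - 1; i >= 0; i--) {  // walk backwards for best-first
      float score = Float.intBitsToFloat((int) (hits[i] >>> 32));
      int docId = (int) hits[i];
      System.out.println("doc=" + docId + " score=" + score);
    }
  }
}
```

   Sorting the packed array ascending and iterating backwards yields hits in 
best-score-first order; ties fall back to the doc id stored in the low 32 bits.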


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-13507) Remove support for "addr" parameter from the "/solr/admin/zookeeper" endpoint.

2019-07-05 Thread Anshum Gupta (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta reopened SOLR-13507:
-

Accidentally closed this one before merging into 8x.

> Remove support for "addr" parameter from the "/solr/admin/zookeeper" endpoint.
> --
>
> Key: SOLR-13507
> URL: https://issues.apache.org/jira/browse/SOLR-13507
> Project: Solr
>  Issue Type: Improvement
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: 8.2
>
> Attachments: SOLR-13507.02.patch, SOLR-13507.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The addr parameter isn't needed and it should be removed from the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13507) Remove support for "addr" parameter from the "/solr/admin/zookeeper" endpoint.

2019-07-05 Thread Anshum Gupta (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-13507:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Remove support for "addr" parameter from the "/solr/admin/zookeeper" endpoint.
> --
>
> Key: SOLR-13507
> URL: https://issues.apache.org/jira/browse/SOLR-13507
> Project: Solr
>  Issue Type: Improvement
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: 8.2
>
> Attachments: SOLR-13507.02.patch, SOLR-13507.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The addr parameter isn't needed and it should be removed from the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4312) Index format to store position length per position

2019-07-05 Thread Michael Gibney (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-4312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16879425#comment-16879425
 ] 

Michael Gibney commented on LUCENE-4312:


Following up on discussion at Berlin Buzzwords with [~mikemccand], [~sokolov], 
[~simonw], and [~romseygeek]:

A lot of useful context (for, e.g., synonym generation, etc.) is available at 
index time that is not available at query time. Leveraging this context can 
result in index-time TokenStream manipulations that produce token graphs. Since 
position length is not indexed, it is impossible at query time to reconstruct 
index-time TokenStream "graph" structure.

Indexed position length is a prerequisite for any use case that calls for:
1. index-time graph TokenStreams
2. precise/accurate proximity query (via spans, intervals, etc.)

Could we discuss adding first-class support for this structural "position 
length" information?

Updating PostingsEnum to include endPosition() -- returning {{position+1}} by 
default -- would be a meaningful first step. This would facilitate the 
development of query implementations without requiring an API fork, and would 
signal an intention to move in the direction of supporting index-time token 
graphs.

Beyond that, I'm optimistic that codecs could be enhanced to index position 
length without introducing much additional overhead (I'd guess that position 
length for the common case of linear/non-graph index-time token streams could 
compress quite well).
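
To make the endPosition() suggestion above concrete, here is a purely 
hypothetical sketch (this is not Lucene's actual PostingsEnum API; the class 
and method names are invented for illustration): a default of 
startPosition() + 1 preserves today's single-position semantics for flat token 
streams, while a graph-aware codec could override it with the indexed position 
length.

{code:java}
import java.io.IOException;

/**
 * Hypothetical illustration only -- not Lucene's PostingsEnum.
 * A default endPosition() of startPosition() + 1 means "position length 1",
 * so existing flat postings behave exactly as before; a codec that indexes
 * position length would override it.
 */
abstract class PositionLengthPostingsSketch {

  /** Advance to the next position and return it (mirrors the existing contract). */
  abstract int nextPosition() throws IOException;

  /** Start position of the current token (assumed helper for this sketch). */
  abstract int startPosition();

  /** Exclusive end position of the current token. */
  int endPosition() throws IOException {
    return startPosition() + 1;
  }
}
{code}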

> Index format to store position length per position
> --
>
> Key: LUCENE-4312
> URL: https://issues.apache.org/jira/browse/LUCENE-4312
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 6.0
>Reporter: Gang Luo
>Priority: Minor
>  Labels: Suggestion
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Mike McCandless said: TokenStreams are actually graphs.
> The indexer ignores PositionLengthAttribute. We need to change the index format (and 
> Codec APIs) to store an additional int position length per position.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-11.0.3) - Build # 5241 - Failure!

2019-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/5241/
Java: 64bit/jdk-11.0.3 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 64091 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1643274873
 [ecj-lint] Compiling 1280 source files to 
/var/folders/qg/h2dfw5s161s51l2bn79mrb7rgn/T/ecj1643274873
 [ecj-lint] Processing annotations
 [ecj-lint] Annotations processed
 [ecj-lint] Processing annotations
 [ecj-lint] No elements to process
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/Users/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java
 (at line 219)
 [ecj-lint] return (NamedList) new 
JavaBinCodec(resolver).unmarshal(in);
 [ecj-lint]^^
 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java
 (at line 788)
 [ecj-lint] throw new UnsupportedOperationException("must add at least 1 
node first");
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'queryRequest' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java
 (at line 794)
 [ecj-lint] throw new UnsupportedOperationException("must add at least 1 
node first");
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'queryRequest' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 19)
 [ecj-lint] import javax.naming.Context;
 [ecj-lint]
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 20)
 [ecj-lint] import javax.naming.InitialContext;
 [ecj-lint]^^^
 [ecj-lint] The type javax.naming.InitialContext is not accessible
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 21)
 [ecj-lint] import javax.naming.NamingException;
 [ecj-lint]
 [ecj-lint] The type javax.naming.NamingException is not accessible
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 22)
 [ecj-lint] import javax.naming.NoInitialContextException;
 [ecj-lint]^^
 [ecj-lint] The type javax.naming.NoInitialContextException is not accessible
 [ecj-lint] --
 [ecj-lint] 8. ERROR in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 776)
 [ecj-lint] Context c = new InitialContext();
 [ecj-lint] ^^^
 [ecj-lint] Context cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 9. ERROR in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 776)
 [ecj-lint] Context c = new InitialContext();
 [ecj-lint] ^^
 [ecj-lint] InitialContext cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 10. ERROR in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 779)
 [ecj-lint] } catch (NoInitialContextException e) {
 [ecj-lint]  ^
 [ecj-lint] NoInitialContextException cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 11. ERROR in 
/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java
 (at line 781)
 [ecj-lint] } catch (NamingException e) {
 [ecj-lint]  ^^^
 [ecj-lint] NamingException cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 12. WARNING in 

[jira] [Commented] (SOLR-13609) Ability to know when an expunge has finished

2019-07-05 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16879412#comment-16879412
 ] 

Christine Poerschke commented on SOLR-13609:


Some of the 
[https://lucene.apache.org/solr/guide/8_1/metrics-reporting.html#index-merge-metrics]
 metrics look like they might also provide the kind of visibility you are 
looking for, though presumably the metrics would not differentiate between any 
merges that you explicitly initiated and any merges that are occurring 
'naturally'.
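
As a rough illustration, a minimal Java 11+ sketch that polls the Metrics API for 
merge-related metrics (the host/port and the group/prefix parameter values here 
are assumptions based on the page linked above; adjust them for your install):

{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MergeMetricsPoll {
  public static void main(String[] args) throws Exception {
    // Assumed endpoint and filters; see the index-merge-metrics section linked above.
    String url = "http://localhost:8983/solr/admin/metrics?group=core&prefix=INDEX.merge&wt=json";
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
    HttpResponse<String> rsp = client.send(req, HttpResponse.BodyHandlers.ofString());
    // Compare consecutive snapshots to see whether merge activity has settled.
    System.out.println(rsp.body());
  }
}
{code}

Comparing a snapshot taken before the nightly expunge with one taken afterwards 
would at least show whether merge activity has settled, with the caveat above 
that it cannot tell which merge was which.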

> Ability to know when an expunge has finished
> 
>
> Key: SOLR-13609
> URL: https://issues.apache.org/jira/browse/SOLR-13609
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4
>Reporter: Richard
>Priority: Major
>
> At the company I work for, we do nightly expunges to clear down deleted docs 
> _(providing the threshold is above 5% in our case)_.
> Whilst this has been okay for us, we want the ability to know when an expunge 
> has completed. At the moment we do some calculations to estimate how long it 
> would take. 
> It would be nice if there were a way to see when an expunge has completed. 
> This could either be by assigning an async id to the call, or any other means 
> of having visibility.
> I started to look into this issue, but saw that the underlying call for 
> expunging starts to use the Lucene side of the code base, so I thought I was 
> digging too deep, so any advice on this issue would be much appreciated _(as 
> I'm trying to contribute more to OSS)_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-12.0.1) - Build # 24349 - Unstable!

2019-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24349/
Java: 64bit/jdk-12.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

13 tests failed.
FAILED:  org.apache.solr.cloud.rule.RulesTest.doIntegrationTest

Error Message:
Timeout occurred while waiting response from server at: 
https://127.0.0.1:33971/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occurred while 
waiting response from server at: https://127.0.0.1:33971/solr
at 
__randomizedtesting.SeedInfo.seed([475150478A00D4E1:A26217C6967426E3]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:667)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.deleteAllCollections(MiniSolrCloudCluster.java:547)
at 
org.apache.solr.cloud.rule.RulesTest.removeCollections(RulesTest.java:65)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-13609) Ability to know when an expunge has finished

2019-07-05 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16879399#comment-16879399
 ] 

Erick Erickson commented on SOLR-13609:
---

Richard:

I usually recommend discussing this kind of thing on the user's list (or the dev 
list if you're code-diving) before raising a JIRA, mostly because you get more 
eyes on it faster.

That aside, are you sure you need to expungeDeletes? It's actually rarely 
necessary. What I'm suggesting is that this may be useless work. However, if 
you've backed yourself into a corner, see: 
https://lucidworks.com/post/segment-merging-deleted-documents-optimize-may-bad/

That blog is about optimize, but expungeDeletes has the same issue of creating 
potentially very large  segments.

Much of this behavior has changed in Solr 7.5+; the link above points to another 
blog about that.

All that aside, to answer your question: I don't know of a way to ask "what is 
the current state of merging" simply. But you could take a snapshot of the 
segments before you start by:

http://solr:port/core/admin/segments

In your case you should see a bunch of segments disappear. In particular, given 
the above, you should see one very large segment be replaced by another very 
large segment when it's done. Admittedly, this is indirect evidence, and it'd be 
interesting if we could have an async optimize. And be a little careful: if 
you're indexing at the same time you'll see segments come and go due to 
background merging.
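
As a concrete (hypothetical) way to take that snapshot, a minimal Java 11+ sketch 
that fetches the segments listing for a core; the core name, host and port are 
placeholders:

{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SegmentsSnapshot {
  public static void main(String[] args) throws Exception {
    // Run once before the expungeDeletes commit and once after, then diff the output.
    String url = "http://localhost:8983/solr/mycore/admin/segments?wt=json";
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
    HttpResponse<String> rsp = client.send(req, HttpResponse.BodyHandlers.ofString());
    System.out.println(rsp.body());
  }
}
{code}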

That said, since we discourage optimizing as much as we do, I doubt there'll  
be a lot of interest in adding it unless you want to make a patch.

> Ability to know when an expunge has finished
> 
>
> Key: SOLR-13609
> URL: https://issues.apache.org/jira/browse/SOLR-13609
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4
>Reporter: Richard
>Priority: Major
>
> At the company I work for, we do nightly expunges to clear down deleted docs 
> _(providing the threshold is above 5% in our case)_.
> Whilst this has been okay for us, we want the ability to know when an expunge 
> has completed. At the moment we do some calculations to estimate how long it 
> would take. 
> It would be nice if there were a way to see when an expunge has completed. 
> This could either be by assigning an async id to the call, or any other means 
> of having visibility.
> I started to look into this issue, but saw that the underlying call for 
> expunging starts to use the Lucene side of the code base, so I thought I was 
> digging too deep, so any advice on this issue would be much appreciated _(as 
> I'm trying to contribute more to OSS)_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release PyLucene 8.1.1 (rc2)

2019-07-05 Thread Petrus Hyvönen
Hi,

I would like to encourage PMC members to vote, please?

For me +1 for release (user, not PMC)

Best Regards
/Petrus


On Sun, Jun 23, 2019 at 7:50 PM Aric Coady  wrote:

> +1.  rc builds available:
>
> - docker pull coady/pylucene:rc
> - brew install --devel coady/tap/pylucene
>
> > On Jun 22, 2019, at 5:17 PM, Andi Vajda  wrote:
> >
> >
> > The PyLucene 8.1.1 (rc2) release tracking the recent release of
> > Apache Lucene 8.1.1 is ready.
> >
> > A release candidate is available from:
> >  https://dist.apache.org/repos/dist/dev/lucene/pylucene/8.1.1-rc2/
> >
> > PyLucene 8.1.1 is built with JCC 3.6, included in these release
> artifacts.
> >
> > JCC 3.6 supports Python 3.3+ (in addition to Python 2.3+).
> > PyLucene may be built with Python 2 or Python 3.
> >
> > Please vote to release these artifacts as PyLucene 8.1.1.
> > Anyone interested in this release can and should vote !
> >
> > Thanks !
> >
> > Andi..
> >
> > ps: the KEYS file for PyLucene release signing is at:
> > https://dist.apache.org/repos/dist/release/lucene/pylucene/KEYS
> > https://dist.apache.org/repos/dist/dev/lucene/pylucene/KEYS
> >
> > pps: here is my +1
>
>

-- 
_
Petrus Hyvönen, Uppsala, Sweden
Mobile Phone/SMS:+46 73 803 19 00


[jira] [Commented] (SOLR-9095) ReRanker should gracefully handle sorts without score

2019-07-05 Thread Alessandro Benedetti (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16879398#comment-16879398
 ] 

Alessandro Benedetti commented on SOLR-9095:


Brilliant, thank you very much!

 

> ReRanker should gracefully handle sorts without score
> -
>
> Key: SOLR-9095
> URL: https://issues.apache.org/jira/browse/SOLR-9095
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.10.4
> Environment: Solr 4.10.4 
> CentOS 6.5 64 bit
> Java 1.8.0_51 
>Reporter: Andrea Gazzarini
>Priority: Minor
>  Labels: re-ranking
>
> I have a Solr 4.10.4 instance with a RequestHandler that has a re-ranking 
> query configured like this:
> {code:title=solrconfig.xml|borderStyle=solid}
> 
> dismax
> ...
> {!boost b=someFunction() v=$q}
> {!rerank reRankQuery=$rqq reRankDocs=60 
> reRankWeight=1.2}
> score desc
> 
> {code}
> Everything is working until the client sends a sort params that doesn't 
> include the score field. So if for example the request contains "sort=price 
> asc" then a NullPointerException is thrown:
> {code}
> 09:46:08,548 ERROR [org.apache.solr.core.SolrCore] 
> java.lang.NullPointerException
> [INFO] [talledLocalContainer] at 
> org.apache.lucene.search.TopFieldCollector$OneComparatorScoringMaxScoreCollector.collect(TopFieldCollector.java:291)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.ReRankQParserPlugin$ReRankCollector.collect(ReRankQParserPlugin.java:263)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.SolrIndexSearcher.sortDocSet(SolrIndexSearcher.java:1999)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1423)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> {code}
> The only way to avoid this exception is to explicitly add the "score desc" 
> value to the incoming field; that is  
> {code}
> ?q=...&sort=price asc, score desc 
> {code}
> In this way I get no exception. I said "explicitly" because adding an 
> "appends" section in my handler
> {code}
> 
> score desc
> 
> {code}
> Even if I don't know whether that could solve my problem, in practice it is 
> completely ignored (i.e. I'm still getting the NPE above).
> However, when I explicitly add "sort=price asc, score desc", as consequence 
> of the re-ranking, the top 60 results, although I said to Solr "order by 
> price", are still shuffled and that's not what I want.
> So, at the end, the issue is about the following two points: 
> 1. the NullPointerException above 
> 2.  a way to disable the re-ranking (automatically or not)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13240) UTILIZENODE action results in an exception

2019-07-05 Thread Christine Poerschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-13240:
---
Component/s: AutoScaling
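
For readers puzzled by the error quoted below: "Comparison method violates its 
general contract!" is what Java's TimSort throws when a Comparator is 
inconsistent (not antisymmetric or not transitive). A minimal, generic 
illustration, entirely unrelated to the actual Solr comparator:

{code:java}
import java.util.Arrays;
import java.util.Random;

public class BrokenComparatorDemo {
  public static void main(String[] args) {
    Integer[] data = new Random(42).ints(50_000, 0, 10).boxed().toArray(Integer[]::new);
    // This comparator never returns 0, so compare(a, b) and compare(b, a) are both -1
    // for equal values. With this many duplicates TimSort usually detects the
    // inconsistency and throws IllegalArgumentException with the message above.
    Arrays.sort(data, (a, b) -> a <= b ? -1 : 1);
  }
}
{code}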

> UTILIZENODE action results in an exception
> --
>
> Key: SOLR-13240
> URL: https://issues.apache.org/jira/browse/SOLR-13240
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling
>Affects Versions: 7.6
>Reporter: Hendrik Haddorp
>Priority: Major
> Attachments: SOLR-13240.patch, solr-solrj-7.5.0.jar
>
>
> When I invoke the UTILIZENODE action the REST call fails like this after it 
> moved a few replicas:
> {
>   "responseHeader":{
> "status":500,
> "QTime":40220},
>   "Operation utilizenode caused 
> exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException:
>  Comparison method violates its general contract!",
>   "exception":{
> "msg":"Comparison method violates its general contract!",
> "rspCode":-1},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Comparison method violates its general contract!",
> "trace":"org.apache.solr.common.SolrException: Comparison method violates 
> its general contract!\n\tat 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:274)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:246)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:531)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)\n\tat 
> org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)\n\tat 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)\n\tat
>  
> 

[GitHub] [lucene-solr] atris commented on issue #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-05 Thread GitBox
atris commented on issue #754: LUCENE-8875: Introduce Optimized Collector For 
Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#issuecomment-508804369
 
 
   
   > Actually I don't think we need a growable priority queue. For such large 
number of hits it'd be probably more efficient to collect hits in an ArrayList 
first and only turn it into a PQ once there are `numHits` hits?
   
   Would that mean collecting all hits in the ArrayList, building a heap out of 
them iteratively once we exhaust documents and then calling top() numHits times?
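
   A rough, self-contained sketch of that buffer-then-heapify idea (illustrative 
only: the Hit class and all names are invented here, this is not the PR's 
LargeNumHitsTopDocsCollector):

{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

final class Hit {            // stand-in for a scored hit; not Lucene's ScoreDoc
  final int doc;
  final float score;
  Hit(int doc, float score) { this.doc = doc; this.score = score; }
}

final class BufferThenHeapTopN {
  private final int numHits;
  private final List<Hit> buffer = new ArrayList<>();
  private PriorityQueue<Hit> pq;   // min-heap on score, created lazily

  BufferThenHeapTopN(int numHits) { this.numHits = numHits; }

  void collect(Hit hit) {
    if (pq == null) {
      // Plain append while we have fewer than numHits hits: no sentinel values,
      // no per-hit heap maintenance.
      buffer.add(hit);
      if (buffer.size() == numHits) {
        pq = new PriorityQueue<>(numHits, Comparator.comparingDouble((Hit h) -> h.score));
        pq.addAll(buffer);         // a real implementation could heapify in O(n) instead
        buffer.clear();
      }
    } else if (hit.score > pq.peek().score) {
      pq.poll();                   // standard bounded top-N update once the queue exists
      pq.offer(hit);
    }
  }

  List<Hit> topDocs() {
    List<Hit> out = new ArrayList<>();
    if (pq == null) {
      out.addAll(buffer);
    } else {
      out.addAll(pq);
    }
    out.sort(Comparator.comparingDouble((Hit h) -> h.score).reversed());
    return out;
  }
}
{code}

   In this sketch the pre-heap phase is a plain append per hit, and log(numHits) 
work only starts once the queue exists and a hit is actually competitive.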
   
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13240) UTILIZENODE action results in an exception

2019-07-05 Thread Christine Poerschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-13240:
---
Status: Patch Available  (was: Open)

> UTILIZENODE action results in an exception
> --
>
> Key: SOLR-13240
> URL: https://issues.apache.org/jira/browse/SOLR-13240
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 7.6
>Reporter: Hendrik Haddorp
>Priority: Major
> Attachments: SOLR-13240.patch, solr-solrj-7.5.0.jar
>
>
> When I invoke the UTILIZENODE action the REST call fails like this after it 
> moved a few replicas:
> {
>   "responseHeader":{
> "status":500,
> "QTime":40220},
>   "Operation utilizenode caused 
> exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException:
>  Comparison method violates its general contract!",
>   "exception":{
> "msg":"Comparison method violates its general contract!",
> "rspCode":-1},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Comparison method violates its general contract!",
> "trace":"org.apache.solr.common.SolrException: Comparison method violates 
> its general contract!\n\tat 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:274)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:246)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:531)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)\n\tat 
> org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)\n\tat 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)\n\tat
>  
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)\n\tat
>  
> 

[jira] [Created] (SOLR-13611) deprecated vs. replacement elements

2019-07-05 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-13611:
--

 Summary: deprecated  vs. replacement 
 elements
 Key: SOLR-13611
 URL: https://issues.apache.org/jira/browse/SOLR-13611
 Project: Solr
  Issue Type: Task
Reporter: Christine Poerschke


In 
[SolrXmlConfig.fromConfig|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.0/solr/core/src/java/org/apache/solr/core/SolrXmlConfig.java#L73-L97]
 the {{deprecatedUpdateConfig}} naming and usage suggests that some 
{{}} elements e.g. {{distribUpdateSoTimeout}} are deprecated in 
favour of a {{}} element with the same name.

The 
[solr.xml|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.0/solr/server/solr/solr.xml]
 included in our releases and the 
[documentation|https://lucene.apache.org/solr/guide/8_1/format-of-solr-xml.html]
 still use what would then appear to be the deprecated one of the two choices, 
and as far as I can tell no warnings about this are logged on Solr startup.

This ticket here is to start a pathway via which support for the deprecated 
choice would eventually be removed e.g. future 8.x releases could WARN if the 
deprecated choice is used and then in 9.x releases support would be removed and 
an exception thrown if the deprecated choice is still encountered in a 
{{solr.xml}} file.
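
A minimal sketch of that warn-then-throw pathway, using entirely hypothetical 
names since the concrete element names are the subject of this ticket:

{code:java}
import java.util.logging.Logger;

public class DeprecatedSolrXmlCheck {
  private static final Logger log = Logger.getLogger(DeprecatedSolrXmlCheck.class.getName());

  // Hypothetical check: warn about the deprecated solr.xml location in 8.x,
  // fail hard once support is removed in 9.x.
  static void check(boolean deprecatedLocationUsed, int solrMajorVersion) {
    if (!deprecatedLocationUsed) {
      return;
    }
    if (solrMajorVersion < 9) {
      log.warning("solr.xml uses a deprecated location for this setting; please move it to the replacement element.");
    } else {
      throw new IllegalStateException("This setting is no longer supported in its old solr.xml location; use the replacement element.");
    }
  }
}
{code}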



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13606) DateTimeFormatter Exception on Create Core

2019-07-05 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16879390#comment-16879390
 ] 

Erick Erickson commented on SOLR-13606:
---

Thanks for letting us know and _especially_ putting the resolution in the Jira 
for posterity. That may save someone else from going down that rabbit-hole.

And I fully sympathize with beating your head against a wall all day only to 
wake up the next day thinking "I should have checked that yesterday" ;)

 

> DateTimeFormatter Exception on Create Core
> --
>
> Key: SOLR-13606
> URL: https://issues.apache.org/jira/browse/SOLR-13606
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 8.1.1
> Environment: Red Hat 8.0
> Java 11
> Solr 8.1.1
>Reporter: Joseph Krauss
>Priority: Critical
>  Labels: newbie
>
> I have a fresh install of RH 8.0 with Java 11 JDK and I've run into an issue 
> with Solr 8.0.0 and 8.1.1 when attempting to create a core. I'm guessing 
> here, but the error appears to be an issue with the date format. From what 
> I've read Java date parser is expecting a period between seconds and 
> milliseconds? Hopefully, there's something simple I overlooked when I 
> configured the environment for solr. 
> Caused by: java.time.format.DateTimeParseException: Text 
> '2019-07-03T20:00:{color:#FF}00.050Z{color}'
> Oracle Corporation OpenJDK 64-Bit Server VM 11.0.3 11.0.3+7-LTS
> org.apache.solr.common.SolrException: Error CREATEing SolrCore 'testarms': 
> Unable to create core [testarms] Caused by: null
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1187)
>   at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
>   at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
>   at 
> org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
>   at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>   at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:796)
>   at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:762)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:522)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:397)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:343)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at org.eclipse.jetty.server.Server.handle(Server.java:502)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
>   at 
> 

[GitHub] [lucene-solr] jpountz commented on issue #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-05 Thread GitBox
jpountz commented on issue #754: LUCENE-8875: Introduce Optimized Collector For 
Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#issuecomment-508796547
 
 
   > That would require a different PriorityQueue implementation
   
   Actually I don't think we need a growable priority queue. For such large 
number of hits it'd be probably more efficient to collect hits in an ArrayList 
first and only turn it into a PQ once there are `numHits` hits?
   
   Pinging @tokee since this is a topic he already spent time thinking about. :)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9095) ReRanker should gracefully handle sorts without score

2019-07-05 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16879367#comment-16879367
 ] 

Munendra S N commented on SOLR-9095:


This is a duplicate of SOLR-9094 (missed linking it before).

> ReRanker should gracefully handle sorts without score
> -
>
> Key: SOLR-9095
> URL: https://issues.apache.org/jira/browse/SOLR-9095
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.10.4
> Environment: Solr 4.10.4 
> CentOS 6.5 64 bit
> Java 1.8.0_51 
>Reporter: Andrea Gazzarini
>Priority: Minor
>  Labels: re-ranking
>
> I have a Solr 4.10.4 instance with a RequestHandler that has a re-ranking 
> query configured like this:
> {code:title=solrconfig.xml|borderStyle=solid}
> 
> dismax
> ...
> {!boost b=someFunction() v=$q}
> {!rerank reRankQuery=$rqq reRankDocs=60 
> reRankWeight=1.2}
> score desc
> 
> {code}
> Everything is working until the client sends a sort params that doesn't 
> include the score field. So if for example the request contains "sort=price 
> asc" then a NullPointerException is thrown:
> {code}
> 09:46:08,548 ERROR [org.apache.solr.core.SolrCore] 
> java.lang.NullPointerException
> [INFO] [talledLocalContainer] at 
> org.apache.lucene.search.TopFieldCollector$OneComparatorScoringMaxScoreCollector.collect(TopFieldCollector.java:291)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.ReRankQParserPlugin$ReRankCollector.collect(ReRankQParserPlugin.java:263)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.SolrIndexSearcher.sortDocSet(SolrIndexSearcher.java:1999)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1423)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> {code}
> The only way to avoid this exception is to explicitly add the "score desc" 
> value to the incoming field; that is  
> {code}
> ?q=...&sort=price asc, score desc 
> {code}
> In this way I get no exception. I said "explicitly" because adding an 
> "appends" section in my handler
> {code}
> 
> score desc
> 
> {code}
> Even if I don't know whether that could solve my problem, in practice it is 
> completely ignored (i.e. I'm still getting the NPE above).
> However, when I explicitly add "sort=price asc, score desc", as consequence 
> of the re-ranking, the top 60 results, although I said to Solr "order by 
> price", are still shuffled and that's not what I want.
> So, at the end, the issue is about the following two points: 
> 1. the NullPointerException above 
> 2.  a way to disable the re-ranking (automatically or not)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] atris commented on a change in pull request #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-05 Thread GitBox
atris commented on a change in pull request #754: LUCENE-8875: Introduce 
Optimized Collector For Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#discussion_r300714990
 
 

 ##
 File path: 
lucene/sandbox/src/java/org/apache/lucene/search/LargeNumHitsTopDocsCollector.java
 ##
 @@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.lucene.search;
+
+import java.io.IOException;
+
+import org.apache.lucene.index.LeafReaderContext;
+
+/**
+ * Optimized collector for large number of hits. This collector does
+ * not prepopulate the priority queue with sentinel values to avoid
+ * undue costs
+ */
+public class LargeNumHitsTopDocsCollector extends TopScoreDocCollector {
+
+  LargeNumHitsTopDocsCollector(int numHits, int totalHitsThreshold) {
 
 Review comment:
   +1, will fix


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] atris commented on issue #754: LUCENE-8875: Introduce Optimized Collector For Large Number Of Hits

2019-07-05 Thread GitBox
atris commented on issue #754: LUCENE-8875: Introduce Optimized Collector For 
Large Number Of Hits
URL: https://github.com/apache/lucene-solr/pull/754#issuecomment-508784416
 
 
   > Not prepopulating the hit queue is only one part of the problem, we would 
also need to not allocate `numHits` slots in the priority queue right away?
   >
   That would require a different PriorityQueue implementation, since the 
default implementation always allocates numHits slots and sets maxSize to 
numHits. We would need an implementation that only allocates two slots 
initially, but has the ability to "resize" as needed. WDYT?
   
   > I'd rather like that we don't do any change to TopDocsCollector and not 
try to extend it, in my opinion those collectors are solving a very different 
problem and would need to evolve independently.
   
   My main objective in using TopScoreDocsCollector was to reuse as much code as 
I could -- but I agree with your point. Will change the implementation and keep 
TopScoreDocsCollector pristine.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13606) DateTimeFormatter Exception on Create Core

2019-07-05 Thread Joseph Krauss (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Krauss resolved SOLR-13606.
--
Resolution: Fixed

The Java installed on my RHEL was the JDK and I needed to install the JRE to 
resolve the issue. I started down a rabbit hole when I got the date-time format 
error message. Fortunately, after a night's rest and fresh eyes I quickly 
narrowed down my issue.
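
As a quick sanity check, the timestamp from the error message (with the JIRA 
colour markup stripped) parses fine with plain java.time on a healthy JVM, which 
is consistent with the format itself not being the problem:

{code:java}
import java.time.Instant;

public class TimestampCheck {
  public static void main(String[] args) {
    // ISO-8601 instant with fractional seconds; Instant.parse uses DateTimeFormatter.ISO_INSTANT.
    Instant t = Instant.parse("2019-07-03T20:00:00.050Z");
    System.out.println(t);   // 2019-07-03T20:00:00.050Z
  }
}
{code}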

> DateTimeFormatter Exception on Create Core
> --
>
> Key: SOLR-13606
> URL: https://issues.apache.org/jira/browse/SOLR-13606
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 8.1.1
> Environment: Red Hat 8.0
> Java 11
> Solr 8.1.1
>Reporter: Joseph Krauss
>Priority: Critical
>  Labels: newbie
>
> I have a fresh install of RH 8.0 with Java 11 JDK and I've run into an issue 
> with Solr 8.0.0 and 8.1.1 when attempting to create a core. I'm guessing 
> here, but the error appears to be an issue with the date format. From what 
> I've read Java date parser is expecting a period between seconds and 
> milliseconds? Hopefully, there's something simple I overlooked when I 
> configured the environment for solr. 
> Caused by: java.time.format.DateTimeParseException: Text 
> '2019-07-03T20:00:{color:#FF}00.050Z{color}'
> Oracle Corporation OpenJDK 64-Bit Server VM 11.0.3 11.0.3+7-LTS
> org.apache.solr.common.SolrException: Error CREATEing SolrCore 'testarms': 
> Unable to create core [testarms] Caused by: null
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1187)
>   at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
>   at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
>   at 
> org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
>   at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>   at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:796)
>   at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:762)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:522)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:397)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:343)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at org.eclipse.jetty.server.Server.handle(Server.java:502)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
>   at 
> 

[jira] [Comment Edited] (SOLR-9095) ReRanker should gracefully handle sorts without score

2019-07-05 Thread Alessandro Benedetti (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16879335#comment-16879335
 ] 

Alessandro Benedetti edited comment on SOLR-9095 at 7/5/19 2:47 PM:


[~munendrasn] , you resolved as duplicate, but duplicate of what?
 Can you please update the Jira ticket with link to the duplicate on closure?
 Thanks,


was (Author: alessandro.benedetti):
[~munendrasn] , tou resolved as duplicate, but duplicate of what?
Can you please update the Jira ticket with link to the duplicate on closure?
Thanks,

> ReRanker should gracefully handle sorts without score
> -
>
> Key: SOLR-9095
> URL: https://issues.apache.org/jira/browse/SOLR-9095
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.10.4
> Environment: Solr 4.10.4 
> CentOS 6.5 64 bit
> Java 1.8.0_51 
>Reporter: Andrea Gazzarini
>Priority: Minor
>  Labels: re-ranking
>
> I have a Solr 4.10.4 instance with a RequestHandler that has a re-ranking 
> query configured like this:
> {code:title=solrconfig.xml|borderStyle=solid}
> 
> dismax
> ...
> {!boost b=someFunction() v=$q}
> {!rerank reRankQuery=$rqq reRankDocs=60 
> reRankWeight=1.2}
> score desc
> 
> {code}
> Everything is working until the client sends a sort params that doesn't 
> include the score field. So if for example the request contains "sort=price 
> asc" then a NullPointerException is thrown:
> {code}
> 09:46:08,548 ERROR [org.apache.solr.core.SolrCore] 
> java.lang.NullPointerException
> [INFO] [talledLocalContainer] at 
> org.apache.lucene.search.TopFieldCollector$OneComparatorScoringMaxScoreCollector.collect(TopFieldCollector.java:291)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.ReRankQParserPlugin$ReRankCollector.collect(ReRankQParserPlugin.java:263)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.SolrIndexSearcher.sortDocSet(SolrIndexSearcher.java:1999)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1423)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> {code}
> The only way to avoid this exception is to explicitly add the "score desc" 
> value to the incoming field; that is  
> {code}
> ?q=...=price asc, score desc 
> {code}
> In this way I get no exception. I said "explicitly" because adding an 
> "appends" section in my handler
> {code}
> 
> score desc
> 
> {code}
> Even I don't know if that could solve my problem, in practice it is 
> completely ignoring (i.e. I'm still getting the NPE above).
> However, when I explicitly add "sort=price asc, score desc", as consequence 
> of the re-ranking, the top 60 results, although I said to Solr "order by 
> price", are still shuffled and that's not what I want.
> So, at the end, the issue is about the following two points: 
> 1. the NullPointerException above 
> 2.  a way to disable the re-ranking (automatically or not)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13606) DateTimeFormatter Exception on Create Core

2019-07-05 Thread Joseph Krauss (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16879338#comment-16879338
 ] 

Joseph Krauss commented on SOLR-13606:
--

The error message led me to believe there was a date-time format issue when in 
fact I had installed the JDK version of Java when it should have been the JRE. I 
followed the instructions on some website, which of course provided the 
instructions for the JDK. The issue is fixed and the lesson learned is to 
double-check the system requirements. ;)

> DateTimeFormatter Exception on Create Core
> --
>
> Key: SOLR-13606
> URL: https://issues.apache.org/jira/browse/SOLR-13606
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 8.1.1
> Environment: Red Hat 8.0
> Java 11
> Solr 8.1.1
>Reporter: Joseph Krauss
>Priority: Critical
>  Labels: newbie
>
> I have a fresh install of RH 8.0 with Java 11 JDK and I've run into an issue 
> with Solr 8.0.0 and 8.1.1 when attempting to create a core. I'm guessing 
> here, but the error appears to be an issue with the date format. From what 
> I've read Java date parser is expecting a period between seconds and 
> milliseconds? Hopefully, there's something simple I overlooked when I 
> configured the environment for solr. 
> Caused by: java.time.format.DateTimeParseException: Text 
> '2019-07-03T20:00:{color:#FF}00.050Z{color}'
> Oracle Corporation OpenJDK 64-Bit Server VM 11.0.3 11.0.3+7-LTS
> org.apache.solr.common.SolrException: Error CREATEing SolrCore 'testarms': 
> Unable to create core [testarms] Caused by: null
>   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1187)
>   at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
>   at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
>   at 
> org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
>   at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>   at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:796)
>   at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:762)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:522)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:397)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:343)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at org.eclipse.jetty.server.Server.handle(Server.java:502)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
>   at 
> 

[jira] [Commented] (SOLR-7830) topdocs facet function

2019-07-05 Thread Alessandro Benedetti (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16879337#comment-16879337
 ] 

Alessandro Benedetti commented on SOLR-7830:


Any update on this issue? It is a very interesting feature; it is a shame it 
didn't make it to master after so many years!
Is there anything we could do to help?

> topdocs facet function
> --
>
> Key: SOLR-7830
> URL: https://issues.apache.org/jira/browse/SOLR-7830
> Project: Solr
>  Issue Type: New Feature
>  Components: Facet Module
>Reporter: Yonik Seeley
>Priority: Major
> Attachments: ALT-SOLR-7830.patch, SOLR-7830.patch, SOLR-7830.patch
>
>
> A topdocs() facet function would return the top N documents per facet bucket.
> This would be a big step toward unifying grouping and the new facet module.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9095) ReRanker should gracefully handle sorts without score

2019-07-05 Thread Alessandro Benedetti (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16879335#comment-16879335
 ] 

Alessandro Benedetti commented on SOLR-9095:


[~munendrasn] , tou resolved as duplicate, but duplicate of what?
Can you please update the Jira ticket with link to the duplicate on closure?
Thanks,

> ReRanker should gracefully handle sorts without score
> -
>
> Key: SOLR-9095
> URL: https://issues.apache.org/jira/browse/SOLR-9095
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.10.4
> Environment: Solr 4.10.4 
> CentOS 6.5 64 bit
> Java 1.8.0_51 
>Reporter: Andrea Gazzarini
>Priority: Minor
>  Labels: re-ranking
>
> I have a Solr 4.10.4 instance with a RequestHandler that has a re-ranking 
> query configured like this:
> {code:title=solrconfig.xml|borderStyle=solid}
> 
> dismax
> ...
> {!boost b=someFunction() v=$q}
> {!rerank reRankQuery=$rqq reRankDocs=60 
> reRankWeight=1.2}
> score desc
> 
> {code}
> Everything is working until the client sends a sort params that doesn't 
> include the score field. So if for example the request contains "sort=price 
> asc" then a NullPointerException is thrown:
> {code}
> 09:46:08,548 ERROR [org.apache.solr.core.SolrCore] 
> java.lang.NullPointerException
> [INFO] [talledLocalContainer] at 
> org.apache.lucene.search.TopFieldCollector$OneComparatorScoringMaxScoreCollector.collect(TopFieldCollector.java:291)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.ReRankQParserPlugin$ReRankCollector.collect(ReRankQParserPlugin.java:263)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.SolrIndexSearcher.sortDocSet(SolrIndexSearcher.java:1999)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1423)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> {code}
> The only way to avoid this exception is to explicitly add the "score desc" 
> value to the incoming sort; that is 
> {code}
> ?q=...&sort=price asc, score desc 
> {code}
> In this way I get no exception. I said "explicitly" because adding an 
> "appends" section in my handler
> {code}
> <lst name="appends">
>   <str name="sort">score desc</str>
> </lst>
> {code}
> (even though I don't know whether that would solve my problem) is in practice 
> completely ignored (i.e. I'm still getting the NPE above).
> However, when I explicitly add "sort=price asc, score desc", as a consequence 
> of the re-ranking the top 60 results are still shuffled, even though I told 
> Solr to order by price, and that's not what I want.
> So, in the end, the issue is about the following two points: 
> 1. the NullPointerException above 
> 2. a way to disable the re-ranking (automatically or not)
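A minimal sketch of the workaround described above, assuming a collection named 
"mycollection" and a "price" field (both placeholders), with the rerank 
parameters coming from the handler defaults as in the snippet at the top:
{code}
# keeping "score desc" in the sort gives the rerank collector scores to work with
curl -G 'http://localhost:8983/solr/mycollection/select' \
  --data-urlencode 'q=some query' \
  --data-urlencode 'sort=price asc, score desc'
{code}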



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13576) factor out a TopGroupsShardResponseProcessor.fillResultIds method

2019-07-05 Thread Diego Ceccarelli (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16879333#comment-16879333
 ] 

Diego Ceccarelli commented on SOLR-13576:
-

LGTM, thanks [~cpoerschke]

> factor out a TopGroupsShardResponseProcessor.fillResultIds method
> -
>
> Key: SOLR-13576
> URL: https://issues.apache.org/jira/browse/SOLR-13576
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: master (9.0), 8.2
>
> Attachments: SOLR-13576.patch
>
>
> The {{TopGroupsShardResponseProcessor.process}} method e.g. 
> [#L54-L215|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.1/solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/TopGroupsShardResponseProcessor.java#L54-L215]
>  does quite a few things and factoring out a {{fillResultIds}} (or similarly 
> named) method for the logically distinct 
> [#L192-L214|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.1/solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/TopGroupsShardResponseProcessor.java#L192-L214]
>  portion could help with code comprehension.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Windows (64bit/jdk-12.0.1) - Build # 350 - Still Unstable!

2019-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/350/
Java: 64bit/jdk-12.0.1 -XX:+UseCompressedOops -XX:+UseParallelGC

20 tests failed.
FAILED:  
org.apache.solr.cloud.LegacyCloudClusterPropTest.testCreateCollectionSwitchLegacyCloud

Error Message:
IOException occurred when talking to server at: https://127.0.0.1:61776/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occurred when 
talking to server at: https://127.0.0.1:61776/solr
at 
__randomizedtesting.SeedInfo.seed([E79629DCE76881B9:3691DB5943670A8B]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:670)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.LegacyCloudClusterPropTest.createAndTest(LegacyCloudClusterPropTest.java:87)
at 
org.apache.solr.cloud.LegacyCloudClusterPropTest.testCreateCollectionSwitchLegacyCloud(LegacyCloudClusterPropTest.java:79)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Resolved] (SOLR-8558) Referencing an invalid query parser results in cryptic NPE

2019-07-05 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-8558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N resolved SOLR-8558.

   Resolution: Fixed
 Assignee: Munendra S N  (was: Shawn Heisey)
Fix Version/s: 8.2

> Referencing an invalid query parser results in cryptic NPE
> --
>
> Key: SOLR-8558
> URL: https://issues.apache.org/jira/browse/SOLR-8558
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 5.4
>Reporter: Shawn Heisey
>Assignee: Munendra S N
>Priority: Minor
> Fix For: 8.2
>
> Attachments: SOLR-8558.patch
>
>
> When an invalid query parser name is used with the defType parameter or in 
> localparams, Solr logs an extremely unhelpful NPE (this stacktrace obtained 
> from the solr-user mailing list):
> {code}
> java.lang.NullPointerException
> at org.apache.solr.search.QParser.getParser(QParser.java:315)
> at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:159)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:254)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> This should be improved so the user will know what actually went wrong.
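For reference, a request of the shape below (collection and parser name are 
placeholders) is enough to hit this code path; on the affected versions it 
produced the cryptic NPE above instead of an error naming the unknown parser:
{code}
# "nosuchparser" is deliberately not a registered query parser
curl 'http://localhost:8983/solr/mycollection/select?q=foo&defType=nosuchparser'
{code}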



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk-11.0.3) - Build # 823 - Still Unstable!

2019-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/823/
Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.AliasIntegrationTest.testClusterStateProviderAPI

Error Message:
[testClusterStateProviderAPI] expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: [testClusterStateProviderAPI] expected:<2> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([9C27A3BD9BA3E785:83F03F91E8A81ECE]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.solr.cloud.AliasIntegrationTest.testClusterStateProviderAPI(AliasIntegrationTest.java:299)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSnapshotCloudManager.testSimulatorFromSnapshot

Error Message:
expected:<[/, /aliases.json, /autoscaling, /autoscaling.json, 

[jira] [Resolved] (SOLR-5052) eDisMax Field Aliasing behaving oddly when invalid field is present

2019-07-05 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N resolved SOLR-5052.

   Resolution: Duplicate
Fix Version/s: (was: 6.0)
   (was: 4.9)

Resolving this in favour of SOLR-6376 (the discussion is happening in the other one).

> eDisMax Field Aliasing behaving oddly when invalid field is present
> ---
>
> Key: SOLR-5052
> URL: https://issues.apache.org/jira/browse/SOLR-5052
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 4.3.1
> Environment: AWS / Ubuntu
>Reporter: Trey Grainger
>Priority: Minor
>  Labels: alias, edismax, parser, query
>
> Field Aliasing for the eDisMax query parser behaves in a very odd manner if 
> an invalid field is specified in any of the aliases.  Essentially, instead of 
> throwing an exception on an invalid alias, it breaks all of the other aliased 
> fields such that they will only handle the first term correctly.  Take the 
> following example:
> /select?defType=edismax&f.who.qf=personLastName_t^30 
> personFirstName_t^10&f.what.qf=itemName_t 
> companyName_t^5&f.where.qf=cityName_t^10 INVALIDFIELDNAME^20 countryName_t^35 
> postalCodeName_t^30&q=who:(trey grainger) what:(solr) where:(atlanta, 
> ga)&debugQuery=true&df=text
> The terms "trey", "solr" and "atlanta" correctly search across the aliased 
> fields, but the terms "grainger" and "ga" are incorrectly being searched 
> across the default field ("text").  Here is parsed query from the debug:
> 
> 
> who:(trey grainger) what:(solr) where:(decatur, ga)
> 
> 
> who:(trey grainger) what:(solr) where:(decatur, ga)
> 
> 
> (+(DisjunctionMaxQuery((personFirstName_t:trey^10.0 | 
> personLastName_t:trey^30.0)) DisjunctionMaxQuery((text:grainger)) 
> DisjunctionMaxQuery((itemName_t:solr | companyName_t:solr^5.0)) 
> DisjunctionMaxQuery((postalCodeName_t:decatur^30.0 | 
> countryName_t:decatur^35.0 | cityName_t:decatur^10.0)) 
> DisjunctionMaxQuery((text:ga/no_coord
> 
> 
> +((personFirstName_t:trey^10.0 | personLastName_t:trey^30.0) (text:grainger) 
> (itemName_t:solr | companyName_t:solr^5.0) (postalCodeName_t:decatur^30.0 | 
> countryName_t:decatur^35.0 | cityName_t:decatur^10.0) (text:ga))
> 
> I think the presence of an invalid field in a qf parameter should throw an 
> exception (or throw the invalid field away in that alias), but it shouldn't 
> break the aliases for other fields.  
> For the record, if there are no invalid fields in any of the aliases, all of 
> the aliases work.  If there is one invalid field in any of the aliases, all 
> of the aliases act oddly like this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13196) NullPointerException in o.a.solr.search.facet.FacetFieldProcessor.findTopSlots()

2019-07-05 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N resolved SOLR-13196.
-
Resolution: Duplicate

> NullPointerException in 
> o.a.solr.search.facet.FacetFieldProcessor.findTopSlots()   
> ---
>
> Key: SOLR-13196
> URL: https://issues.apache.org/jira/browse/SOLR-13196
> Project: Solr
>  Issue Type: Bug
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '<solr></solr>' > /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Cesar Rodriguez
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/select?facet.sort=asc=on=2=x=2=genre
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.search.facet.FacetFieldProcessor.lambda$findTopSlots$1(FacetFieldProcessor.java:325)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessor$1.lessThan(FacetFieldProcessor.java:331)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessor$1.lessThan(FacetFieldProcessor.java:329)
>   at org.apache.lucene.util.PriorityQueue.upHeap(PriorityQueue.java:254)
>   at org.apache.lucene.util.PriorityQueue.add(PriorityQueue.java:131)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessor.findTopSlots(FacetFieldProcessor.java:363)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessorByArray.calcFacets(FacetFieldProcessorByArray.java:114)
>   at 
> org.apache.solr.search.facet.FacetFieldProcessorByArray.process(FacetFieldProcessorByArray.java:62)
>   at 
> org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:401)
>   at 
> org.apache.solr.search.facet.FacetProcessor.processSubs(FacetProcessor.java:472)
>   at 
> org.apache.solr.search.facet.FacetProcessor.fillBucket(FacetProcessor.java:429)
>   at 
> org.apache.solr.search.facet.FacetQueryProcessor.process(FacetQuery.java:64)
>   at 
> org.apache.solr.search.facet.FacetRequest.process(FacetRequest.java:401)
>   at 
> org.apache.solr.search.facet.FacetModule.process(FacetModule.java:139)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
> [...]
> {noformat}
> Removing any parameter in the URL makes the NPE disappear, so all URL 
> parameters seem to be involved.
> Method {{org.apache.solr.search.facet.FacetFieldProcessor.findTopSlots()}}, 
> at line 316 retrieves the field {{sortAcc}}, which is null. This null pointer 
> is subsequently used in a lambda expression, line 325 (and 320!). I guess the 
> problem is that some input validation is missing?
> We found this bug using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/?utm_source=solr-br]. Find more 
> information on this [fuzz testing 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results?utm_source=solr-br].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (SOLR-9095) ReRanker should gracefully handle sorts without score

2019-07-05 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N resolved SOLR-9095.

Resolution: Duplicate

> ReRanker should gracefully handle sorts without score
> -
>
> Key: SOLR-9095
> URL: https://issues.apache.org/jira/browse/SOLR-9095
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.10.4
> Environment: Solr 4.10.4 
> CentOS 6.5 64 bit
> Java 1.8.0_51 
>Reporter: Andrea Gazzarini
>Priority: Minor
>  Labels: re-ranking
>
> I have a Solr 4.10.4 instance with a RequestHandler that has a re-ranking 
> query configured like this:
> {code:title=solrconfig.xml|borderStyle=solid}
> <str name="defType">dismax</str>
> ...
> <str name="rqq">{!boost b=someFunction() v=$q}</str>
> <str name="rq">{!rerank reRankQuery=$rqq reRankDocs=60 reRankWeight=1.2}</str>
> <str name="sort">score desc</str>
> {code}
> Everything works until the client sends a sort param that doesn't include 
> the score field. So if, for example, the request contains "sort=price asc", 
> then a NullPointerException is thrown:
> {code}
> 09:46:08,548 ERROR [org.apache.solr.core.SolrCore] 
> java.lang.NullPointerException
> [INFO] [talledLocalContainer] at 
> org.apache.lucene.search.TopFieldCollector$OneComparatorScoringMaxScoreCollector.collect(TopFieldCollector.java:291)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.ReRankQParserPlugin$ReRankCollector.collect(ReRankQParserPlugin.java:263)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.SolrIndexSearcher.sortDocSet(SolrIndexSearcher.java:1999)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1423)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:514)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:484)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> [INFO] [talledLocalContainer] at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> {code}
> The only way to avoid this exception is to explicitly add the "score desc" 
> value to the incoming sort; that is 
> {code}
> ?q=...&sort=price asc, score desc 
> {code}
> In this way I get no exception. I said "explicitly" because adding an 
> "appends" section in my handler
> {code}
> <lst name="appends">
>   <str name="sort">score desc</str>
> </lst>
> {code}
> (even though I don't know whether that would solve my problem) is in practice 
> completely ignored (i.e. I'm still getting the NPE above).
> However, when I explicitly add "sort=price asc, score desc", as a consequence 
> of the re-ranking the top 60 results are still shuffled, even though I told 
> Solr to order by price, and that's not what I want.
> So, in the end, the issue is about the following two points: 
> 1. the NullPointerException above 
> 2. a way to disable the re-ranking (automatically or not)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10258) Facet functions emit date fields as ticks

2019-07-05 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N resolved SOLR-10258.
-
   Resolution: Fixed
Fix Version/s: 7.1

> Facet functions emit date fields as ticks
> -
>
> Key: SOLR-10258
> URL: https://issues.apache.org/jira/browse/SOLR-10258
> Project: Solr
>  Issue Type: Bug
>  Components: Facet Module
>Affects Versions: 5.5.2
>Reporter: Chris Eldredge
>Priority: Minor
> Fix For: 7.1
>
>
> When invoking a facet function in the JSON Facet API, TrieDateField is 
> coerced into a numeric value and the result of the function is emitted as 
> numeric instead of being converted back into a formatted date.
> Example:
> curl 
> http://localhost:8983/solr/query?q=*:*&json.facet={most_recent:'max(modified)'}
> Produces (in part):
> "facets":{
>   "count":38304,
>   "most_recent":1.489012400831E12}}
> The "most_recent" attribute would be more useful if it were converted back to 
> an ISO-8601 formatted date.
> There was a thread discussing this issue in 2016: 
> http://lucene.472066.n3.nabble.com/min-max-on-date-fields-using-JSON-facets-td4288736.html#a4288781
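For what it's worth, the raw value in the example above is epoch milliseconds, 
so it can be converted by hand while the output stays numeric; a sketch using 
GNU date on the value from the example:
{code}
date -u -d @1489012400.831 '+%Y-%m-%dT%H:%M:%S.%3NZ'
# prints 2017-03-08T22:33:20.831Z
{code}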



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] jpountz opened a new pull request #765: LUCENE-8150: Remove references to `segments.gen`.

2019-07-05 Thread GitBox
jpountz opened a new pull request #765: LUCENE-8150: Remove references to 
`segments.gen`.
URL: https://github.com/apache/lucene-solr/pull/765
 
 
   This file hasn't been used since 4.0, so I tried to limit references to
   `segments.gen` to the minimum required to get the right exception when
   opening an index that is too old.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10313) Facet aggregation functions on date fields return numbers

2019-07-05 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N resolved SOLR-10313.
-
   Resolution: Fixed
Fix Version/s: 7.1

> Facet aggregation functions on date fields return numbers
> -
>
> Key: SOLR-10313
> URL: https://issues.apache.org/jira/browse/SOLR-10313
> Project: Solr
>  Issue Type: Bug
>  Components: faceting
>Reporter: Maxime Darçot
>Priority: Minor
> Fix For: 7.1
>
>
> When you use the JSON faceting API and you want to use aggregate functions on 
> date fields, you get numbers instead.
> e.g. 
> {quote}
> json.facet= {
> "test": {"type": "query", "q": "\*:\*", "facet": {"first": "min(date)", 
> "last": "max(date)"}}
> }
> {quote}
> Where _date_ has a date type, you'll get
> bq. {"count":15, "first":1.361525185E12, "last":1.387552939E12}
> It'd be nice to get the results in a standard TZ format.
> Someone already described this issue here:
> http://lucene.472066.n3.nabble.com/min-max-on-date-fields-using-JSON-facets-td4288736.html
> But I haven't found any JIRA ticket linked to this problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8878) Provide alternative sorting utility from SortField other than FieldComparator

2019-07-05 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16879274#comment-16879274
 ] 

Michael McCandless commented on LUCENE-8878:


{quote}I believe you are talking about Scorer#setMinCompetitiveScore, ie. 
changing the FieldComparator API to only track the bottom bucket as opposed to 
every bucket? If this is the case I agree that it sounds like a good idea to 
explore.
{quote}
Ahh, yes, that ;)  +1

> Provide alternative sorting utility from SortField other than FieldComparator
> -
>
> Key: LUCENE-8878
> URL: https://issues.apache.org/jira/browse/LUCENE-8878
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 8.1.1
>Reporter: Tony Xu
>Priority: Major
>
> The `FieldComparator` has many responsibilities and users get all of them at 
> once. At high level the main functionalities of `FieldComparator` are
>  * Provide LeafFieldComparator
>  * Allocate storage for requested number of hits
>  * Read the values from DocValues/Custom source etc.
>  * Compare two values
> There are three major areas for improvement
>  # The logic of reading values and storing them is coupled.
>  # Users need to specify the size in order to create a `FieldComparator`, but 
> sometimes the size is unknown upfront.
>  # From `FieldComparator`'s API, one can't reason about thread-safety, so it 
> is not suitable for concurrent search.
>  E.g. can two concurrent threads use the same `FieldComparator` to call 
> `getLeafComparator` for two different segments they are working on? In fact, 
> almost all existing implementations of `FieldComparator` are not thread-safe.
> The proposal is to enhance `SortField` with two APIs
>  # {color:#14892c}int compare(Object v1, Object v2){color} – this is to 
> compare two values from different docs for this field
>  # {color:#14892c}ValueAccessor newValueAccessor(LeafReaderContext 
> leaf){color} – This encapsulate the logic for obtaining the right 
> implementation in order to read the field values.
>  `ValueAccessor` should be accessed in a similar way as `DocValues` to 
> provide the sort value for a document in an advance & read fashion.
> With this API, hopefully we can reduce the memory usage compared to 
> `FieldComparator`, because users currently store the sort values, or at least 
> the slot number, besides the storage allocated by `FieldComparator` itself. 
> Ideally, only one copy of the values should be stored.
> The proposed API is also friendlier to concurrent search since it provides a 
> `ValueAccessor` per leaf. Although the same `ValueAccessor` can't be shared 
> when more than one thread works on the same leaf, at least each thread can 
> initialize its own `ValueAccessor`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 3422 - Unstable

2019-07-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3422/

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.sim.TestSimLargeCluster.testAddNode

Error Message:
did not finish processing all events in time: started=1, finished=0

Stack Trace:
java.lang.AssertionError: did not finish processing all events in time: 
started=1, finished=0
at 
__randomizedtesting.SeedInfo.seed([5A733CF35F891903:FD9C215090C4961B]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimLargeCluster.testAddNode(TestSimLargeCluster.java:341)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)




Build Log:
[...truncated 12905 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.sim.TestSimLargeCluster
   [junit4]   2> 96608 INFO  
(SUITE-TestSimLargeCluster-seed#[5A733CF35F891903]-worker) [ ] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
  

[JENKINS] Lucene-Solr-SmokeRelease-8.1 - Build # 52 - Still Failing

2019-07-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.1/52/

No tests ran.

Build Log:
[...truncated 23880 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2570 links (2103 relative) to 3374 anchors in 253 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/solr/package/solr-8.1.2.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: 

[JENKINS] Lucene-Solr-NightlyTests-8.x - Build # 141 - Still Unstable

2019-07-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/141/

2 tests failed.
FAILED:  
org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest.testReplicationWithBufferedUpdates

Error Message:
Timeout while trying to assert number of documents @ source_collection

Stack Trace:
java.lang.AssertionError: Timeout while trying to assert number of documents @ 
source_collection
at 
__randomizedtesting.SeedInfo.seed([C970837E2C03C48E:1A79D36069905819]:0)
at 
org.apache.solr.cloud.cdcr.BaseCdcrDistributedZkTest.assertNumDocs(BaseCdcrDistributedZkTest.java:277)
at 
org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest.testReplicationWithBufferedUpdates(CdcrReplicationHandlerTest.java:233)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-12.0.1) - Build # 8036 - Unstable!

2019-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/8036/
Java: 64bit/jdk-12.0.1 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
org.apache.solr.cloud.TestCloudSearcherWarming.testRepFactor1LeaderStartup

Error Message:
IOException occurred when talking to server at: https://127.0.0.1:61149/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occurred when 
talking to server at: https://127.0.0.1:61149/solr
at 
__randomizedtesting.SeedInfo.seed([BBF11879524757D9:6CD955A143A71E37]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:670)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.deleteAllCollections(MiniSolrCloudCluster.java:547)
at 
org.apache.solr.cloud.TestCloudSearcherWarming.tearDown(TestCloudSearcherWarming.java:79)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  

[jira] [Created] (SOLR-13610) document more solr.xml elements

2019-07-05 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-13610:
--

 Summary: document more solr.xml elements
 Key: SOLR-13610
 URL: https://issues.apache.org/jira/browse/SOLR-13610
 Project: Solr
  Issue Type: Wish
Reporter: Christine Poerschke


It appears that not all of the elements used in 
[SolrXmlConfig.loadUpdateConfig|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.0/solr/core/src/java/org/apache/solr/core/SolrXmlConfig.java#L292-L351]
 are currently documented on the [Format of 
solr.xml|https://lucene.apache.org/solr/guide/8_1/format-of-solr-xml.html] page, 
e.g. the following four elements are currently undocumented:
* maxUpdateConnections
* maxUpdateConnectionsPerHost
* metricNameStrategy
* maxRecoveryThreads

This ticket here is to update 
https://github.com/apache/lucene-solr/blob/master/solr/solr-ref-guide/src/format-of-solr-xml.adoc
 to document more elements.
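A quick way to double-check these element names before writing them up, 
assuming a local lucene-solr checkout (path as in the link above):
{code}
grep -n 'maxUpdateConnections\|metricNameStrategy\|maxRecoveryThreads' \
  solr/core/src/java/org/apache/solr/core/SolrXmlConfig.java
{code}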



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13609) Ability to know when an expunge has finished

2019-07-05 Thread Richard (JIRA)
Richard created SOLR-13609:
--

 Summary: Ability to know when an expunge has finished
 Key: SOLR-13609
 URL: https://issues.apache.org/jira/browse/SOLR-13609
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.4
Reporter: Richard


At the company I work for, we do nightly expunges to clear down deleted docs 
_(provided the threshold is above 5% in our case)_.

Whilst this has been okay for us, we want the ability to know when an expunge 
has completed. At the moment we do some calculations to estimate how long it 
would take. 

It would be nice if there were a way to see when an expunge has completed. This 
could either be by assigning an async id to the call, or by any other means of 
having visibility.
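For context, an explicit expunge is issued with a request along the lines of 
the sketch below (the collection name is a placeholder); there is currently no 
async id attached to such a call that could be polled for completion:
{code}
curl 'http://localhost:8983/solr/mycollection/update?commit=true&expungeDeletes=true'
{code}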

I started to look into this issue, but saw that the underlying call for 
expunging goes down into the Lucene side of the code base, so I thought I was 
digging too deep; any advice on this issue would be much appreciated _(as I'm 
trying to contribute more to OSS)_.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Linux (32bit/jdk1.8.0_201) - Build # 822 - Unstable!

2019-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/822/
Java: 32bit/jdk1.8.0_201 -server -XX:+UseSerialGC

6 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.document.TestFeatureSort

Error Message:
1 thread leaked from SUITE scope at org.apache.lucene.document.TestFeatureSort: 
1) Thread[id=14, name=LuceneTestCase-1-thread-1, state=WAITING, 
group=TGRP-TestFeatureSort] at sun.misc.Unsafe.park(Native Method)  
   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.lucene.document.TestFeatureSort: 
   1) Thread[id=14, name=LuceneTestCase-1-thread-1, state=WAITING, 
group=TGRP-TestFeatureSort]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([C66B8BE487ED594E]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.document.TestFeatureSort

Error Message:
There are still zombie threads that couldn't be terminated:1) Thread[id=14, 
name=LuceneTestCase-1-thread-1, state=WAITING, group=TGRP-TestFeatureSort]  
   at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=14, name=LuceneTestCase-1-thread-1, state=WAITING, 
group=TGRP-TestFeatureSort]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([C66B8BE487ED594E]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.document.TestFeatureSort

Error Message:
1 thread leaked from SUITE scope at org.apache.lucene.document.TestFeatureSort: 
1) Thread[id=1829, name=LuceneTestCase-141-thread-1, state=WAITING, 
group=TGRP-TestFeatureSort] at sun.misc.Unsafe.park(Native Method)  
   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at 

[JENKINS] Lucene-Solr-8.x-Solaris (64bit/jdk1.8.0) - Build # 218 - Unstable!

2019-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Solaris/218/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testEstimatedIndexSize

Error Message:
subNumDocs=0 should be less than subMaxDoc=0 due to link split

Stack Trace:
java.lang.AssertionError: subNumDocs=0 should be less than subMaxDoc=0 due to 
link split
at 
__randomizedtesting.SeedInfo.seed([7D8F5DD4E10D79D9:1DCB08E53064EBD7]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testEstimatedIndexSize(IndexSizeTriggerTest.java:1151)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds

Error Message:
failed to create testMixedBounds_collection Live Nodes: [127.0.0.1:40799_solr, 

[jira] [Commented] (LUCENE-8860) LatLonShapeBoundingBoxQuery could make more decisions on inner nodes

2019-07-05 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16879116#comment-16879116
 ] 

Adrien Grand commented on LUCENE-8860:
--

I made the issue about box queries, but that would actually work for polygons 
too.

> LatLonShapeBoundingBoxQuery could make more decisions on inner nodes
> 
>
> Key: LUCENE-8860
> URL: https://issues.apache.org/jira/browse/LUCENE-8860
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> Currently LatLonShapeBoundingBoxQuery with the INTERSECTS relation only 
> returns CELL_INSIDE_QUERY if the query contains ALL minimum bounding 
> rectangles of the indexed triangles.
> I think we could return CELL_INSIDE_QUERY if the box contains any of the 
> edges of all MBRs of indexed triangles, since triangles are guaranteed to 
> touch all edges of their MBR by definition. In some cases this would help 
> avoid decoding triangles and running costly point-in-triangle computations.
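
A minimal sketch (not the actual Lucene implementation, all names made up) of 
the per-MBR test behind the idea quoted above: since every triangle touches all 
four edges of its MBR, a query box that fully contains at least one MBR edge 
must intersect the triangle; a cell could then be reported as CELL_INSIDE_QUERY 
for INTERSECTS once this holds for every MBR in the cell.

{code:java}
/** Hypothetical sketch of the per-MBR edge-containment test, not Lucene code. */
class MbrEdgeContainmentSketch {
  static boolean queryBoxContainsAnMbrEdge(
      double qMinLat, double qMaxLat, double qMinLon, double qMaxLon,
      double tMinLat, double tMaxLat, double tMinLon, double tMaxLon) {
    boolean lonSpanInside = qMinLon <= tMinLon && tMaxLon <= qMaxLon;
    boolean latSpanInside = qMinLat <= tMinLat && tMaxLat <= qMaxLat;
    // bottom or top edge of the triangle's MBR lies fully inside the query box
    if (lonSpanInside && ((qMinLat <= tMinLat && tMinLat <= qMaxLat)
                       || (qMinLat <= tMaxLat && tMaxLat <= qMaxLat))) {
      return true;
    }
    // left or right edge of the triangle's MBR lies fully inside the query box
    return latSpanInside && ((qMinLon <= tMinLon && tMinLon <= qMaxLon)
                          || (qMinLon <= tMaxLon && tMaxLon <= qMaxLon));
  }
}
{code}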



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13608) Incremental backup for Solr

2019-07-05 Thread Cao Manh Dat (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-13608:

Description: 
Currently every call to the backup API requires backing up the whole index with 
a different backupName. This is very costly and nearly useless in the case of 
large, frequently changing indexes.

Since Lucene index files are write-once and also contain checksum information, 
we can rely on these properties to support incremental backup -- only uploading 
files that are not already present in the repository.

The design for this issue will be like this:
* Adding another parameter named {{incremental}} to the backup API.
* Adding new methods to {{BackupRepository}}, like compute checksum, delete 
files, etc.
* {{SnapShooter}} will skip uploading a file from local storage if the file in 
the repository matches in checksum and length.
* Segments_N will be copied last to guarantee that even if the backup process 
gets interrupted in the middle, the old backup can still be used.
* We only keep the last {{IndexCommit}}; therefore, after uploading Segments_N 
successfully, any file not needed for the last {{IndexCommit}} will be deleted. 
We will try to improve this situation in another issue.
* Any files in ZK will be re-uploaded
** The ZK files corresponding to the first backup will be stored in the same 
location as today (to maintain backward compatibility)
** On subsequent backups ZK files will be stored in folder {{gen-ith}}



  was:
Currently every call to the backup API requires backing up the whole index with 
a different backupName. This is very costly and nearly useless in the case of 
large, frequently changing indexes.

Since Lucene index files are write-once and also contain checksum information, 
we can rely on these properties to support incremental backup -- only uploading 
files that are not already present in the repository.

The design for this issue will be like this:
* Adding another parameter named {{incremental}} to the backup API.
* Adding new methods to {{BackupRepository}}, like compute checksum, delete 
files, etc.
* {{UnsupportedOperationException}} on methods).
* {{SnapShooter}} will skip uploading a file from local storage if the file in 
the repository matches in checksum and length.
* Segments_N will be copied last to guarantee that even if the backup process 
gets interrupted in the middle, the old backup can still be used.
* We only keep the last {{IndexCommit}}; therefore, after uploading Segments_N 
successfully, any file not needed for the last {{IndexCommit}} will be deleted. 
We will try to improve this situation in another issue.
* Any files in ZK will be re-uploaded
** The ZK files corresponding to the first backup will be stored in the same 
location as today (to maintain backward compatibility)
** On subsequent backups ZK files will be stored in folder {{gen-ith}}




> Incremental backup for Solr
> ---
>
> Key: SOLR-13608
> URL: https://issues.apache.org/jira/browse/SOLR-13608
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently every call to the backup API requires backing up the whole index 
> with a different backupName. This is very costly and nearly useless in the 
> case of large, frequently changing indexes.
> Since Lucene index files are write-once and also contain checksum information, 
> we can rely on these properties to support incremental backup -- only 
> uploading files that are not already present in the repository.
> The design for this issue will be like this:
> * Adding another parameter named {{incremental}} to the backup API.
> * Adding new methods to {{BackupRepository}}, like compute checksum, delete 
> files, etc.
> * {{SnapShooter}} will skip uploading a file from local storage if the file 
> in the repository matches in checksum and length.
> * Segments_N will be copied last to guarantee that even if the backup process 
> gets interrupted in the middle, the old backup can still be used.
> * We only keep the last {{IndexCommit}}; therefore, after uploading Segments_N 
> successfully, any file not needed for the last {{IndexCommit}} will be 
> deleted. We will try to improve this situation in another issue.
> * Any files in ZK will be re-uploaded
> ** The ZK files corresponding to the first backup will be stored in the same 
> location as today (to maintain backward compatibility)
> ** On subsequent backups ZK files will be stored in folder {{gen-ith}}
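
To illustrate the skip-unchanged-files part of the design quoted above, here is 
a minimal sketch. The type and method names ({{FileMeta}}, {{Uploader}}, 
{{backup}}) are hypothetical and are not the actual SnapShooter/BackupRepository 
API:

{code:java}
import java.util.List;
import java.util.Map;
import java.util.Objects;

/** Hypothetical sketch of incremental upload; not the actual Solr backup API. */
class IncrementalBackupSketch {

  static final class FileMeta {
    final String name;
    final long length;
    final String checksum;
    FileMeta(String name, long length, String checksum) {
      this.name = name; this.length = length; this.checksum = checksum;
    }
  }

  interface Uploader { void upload(FileMeta file); }

  static void backup(List<FileMeta> indexFiles, FileMeta segmentsN,
                     Map<String, FileMeta> filesAlreadyInRepo, Uploader uploader) {
    for (FileMeta file : indexFiles) {
      FileMeta existing = filesAlreadyInRepo.get(file.name);
      // upload only files not already present with the same length and checksum;
      // Lucene index files are write-once, so a matching name, length and
      // checksum means the file is unchanged
      if (existing == null
          || existing.length != file.length
          || !Objects.equals(existing.checksum, file.checksum)) {
        uploader.upload(file);
      }
    }
    // segments_N is uploaded last: if the backup is interrupted before this
    // point, the previous backup remains usable
    uploader.upload(segmentsN);
  }
}
{code}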



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-12.0.1) - Build # 24347 - Still Unstable!

2019-07-05 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24347/
Java: 64bit/jdk-12.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

14 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplicaLegacy

Error Message:
Timeout occurred while waiting response from server at: 
https://127.0.0.1:36815/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occurred while 
waiting response from server at: https://127.0.0.1:36815/solr
at 
__randomizedtesting.SeedInfo.seed([1D3FE89FBE7F0AC9:6425F58B284A9621]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:667)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:384)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplicaLegacy(DeleteReplicaTest.java:264)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-13583) Impossible to delete a collection with the same name as an existing alias

2019-07-05 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16879084#comment-16879084
 ] 

ASF subversion and git services commented on SOLR-13583:


Commit 1b6553cb31178f7afd8c6e17cd84506c64ccb994 in lucene-solr's branch 
refs/heads/branch_8_1 from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1b6553c ]

SOLR-13583: Return 400 Bad Request instead of 500 Server Error when a complex
alias is found but a simple alias was expected.


> Impossible to delete a collection with the same name as an existing alias
> -
>
> Key: SOLR-13583
> URL: https://issues.apache.org/jira/browse/SOLR-13583
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.1, 8.1.1
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Blocker
> Fix For: 8.1.2
>
> Attachments: SOLR-13583.patch, SOLR-13583.patch
>
>
> SOLR-13262 changed the behavior of most collection admin commands so that 
> they always resolve aliases by default. In most cases this is desirable 
> behavior, but it also prevents executing commands on the collections that have 
> the same name as an existing alias (which usually points to a different 
> collection).
> This behavior also breaks the REINDEXCOLLECTION command with 
> {{removeSource=true}}, which can also lead to data loss.
> This issue can be resolved by adding either an opt-in or opt-out flag to the 
> collection admin commands that specifies whether the command should attempt 
> resolving the provided name as an alias first. From the point of view of ease 
> of use this could be an opt-out option, from the point of view of data safety 
> this could be an opt-in option.
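
For illustration only, a sketch of what an opt-out flag could look like on the 
Collections API DELETE call via SolrJ's {{GenericSolrRequest}}. The parameter 
name {{followAliases}} and the collection name are invented here for the sake of 
the example; the actual flag (if any) is what this issue is meant to decide:

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class DeleteCollectionOptOutSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("action", "DELETE");
      params.set("name", "logs");           // collection that shares its name with an alias
      params.set("followAliases", "false"); // hypothetical opt-out flag, name invented here
      new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/collections", params).process(client);
    }
  }
}
{code}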



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13583) Impossible to delete a collection with the same name as an existing alias

2019-07-05 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16879058#comment-16879058
 ] 

ASF subversion and git services commented on SOLR-13583:


Commit e616ed49a6688c011e4854afc509dd13a7222b6f in lucene-solr's branch 
refs/heads/branch_8x from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e616ed4 ]

SOLR-13583: Return 400 Bad Request instead of 500 Server Error when a complex
alias is found but a simple alias was expected.


> Impossible to delete a collection with the same name as an existing alias
> -
>
> Key: SOLR-13583
> URL: https://issues.apache.org/jira/browse/SOLR-13583
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.1, 8.1.1
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Blocker
> Fix For: 8.1.2
>
> Attachments: SOLR-13583.patch, SOLR-13583.patch
>
>
> SOLR-13262 changed the behavior of most collection admin commands so that 
> they always resolve aliases by default. In most cases this is desirable 
> behavior, but it also prevents executing commands on the collections that have 
> the same name as an existing alias (which usually points to a different 
> collection).
> This behavior also breaks the REINDEXCOLLECTION command with 
> {{removeSource=true}}, which can also lead to data loss.
> This issue can be resolved by adding either an opt-in or opt-out flag to the 
> collection admin commands that specifies whether the command should attempt 
> resolving the provided name as an alias first. From the point of view of ease 
> of use this could be an opt-out option, from the point of view of data safety 
> this could be an opt-in option.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] romseygeek closed pull request #744: LUCENE-8856: Promote intervals queries from sandbox to queries module

2019-07-05 Thread GitBox
romseygeek closed pull request #744: LUCENE-8856: Promote intervals queries 
from sandbox to queries module
URL: https://github.com/apache/lucene-solr/pull/744
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13583) Impossible to delete a collection with the same name as an existing alias

2019-07-05 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16879037#comment-16879037
 ] 

ASF subversion and git services commented on SOLR-13583:


Commit dd4813d5b82d7e983a3541be54cb9b0e04f246ce in lucene-solr's branch 
refs/heads/master from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=dd4813d ]

SOLR-13583: Return 400 Bad Request instead of 500 Server Error when a complex
alias is found but a simple alias was expected.


> Impossible to delete a collection with the same name as an existing alias
> -
>
> Key: SOLR-13583
> URL: https://issues.apache.org/jira/browse/SOLR-13583
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.1, 8.1.1
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Blocker
> Fix For: 8.1.2
>
> Attachments: SOLR-13583.patch, SOLR-13583.patch
>
>
> SOLR-13262 changed the behavior of most collection admin commands so that 
> they always resolve aliases by default. In most cases this is desirable 
> behavior, but it also prevents executing commands on the collections that have 
> the same name as an existing alias (which usually points to a different 
> collection).
> This behavior also breaks the REINDEXCOLLECTION command with 
> {{removeSource=true}}, which can also lead to data loss.
> This issue can be resolved by adding either an opt-in or opt-out flag to the 
> collection admin commands that specifies whether the command should attempt 
> resolving the provided name as an alias first. From the point of view of ease 
> of use this could be an opt-out option, from the point of view of data safety 
> this could be an opt-in option.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8803) Provide a FieldComparator to allow sorting by a feature from a FeatureField

2019-07-05 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16879030#comment-16879030
 ] 

ASF subversion and git services commented on LUCENE-8803:
-

Commit eff574f8b376f5a69a15f195b24cdc04b6b3 in lucene-solr's branch 
refs/heads/master from Adrien Grand
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=eff574f ]

LUCENE-8803: Ensure doc ID order is preserved in tests.


> Provide a FieldComparator to allow sorting by a feature from a FeatureField
> ---
>
> Key: LUCENE-8803
> URL: https://issues.apache.org/jira/browse/LUCENE-8803
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Colin Goodheart-Smithe
>Priority: Major
> Fix For: master (9.0), 8.2
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> It would be useful to be able to sort search hits by the value of a feature 
> from a feature field (e.g. pagerank). A FieldComparatorSource implementation 
> that enables this would create a convenient generic way to sort using values 
> from feature fields.
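
A minimal sketch of the intended usage, assuming the SortField factory added by 
this issue is exposed as {{FeatureField#newFeatureSort}} (check the committed 
patch for the final API); the field and feature names below are made up:

{code:java}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.FeatureField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class FeatureSortSketch {
  public static void main(String[] args) throws Exception {
    try (Directory dir = new ByteBuffersDirectory();
         IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
      Document doc = new Document();
      // index a "pagerank" feature value for this document
      doc.add(new FeatureField("features", "pagerank", 42.5f));
      writer.addDocument(doc);
      writer.commit();
      try (DirectoryReader reader = DirectoryReader.open(dir)) {
        IndexSearcher searcher = new IndexSearcher(reader);
        // the factory method name is an assumption based on this issue
        Sort byPagerank = new Sort(FeatureField.newFeatureSort("features", "pagerank"));
        TopDocs hits = searcher.search(new MatchAllDocsQuery(), 10, byPagerank);
        System.out.println("hits: " + hits.totalHits);
      }
    }
  }
}
{code}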



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8803) Provide a FieldComparator to allow sorting by a feature from a FeatureField

2019-07-05 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16879029#comment-16879029
 ] 

ASF subversion and git services commented on LUCENE-8803:
-

Commit a0a16043522baddf42d6fc26a844b2b087d0ca5f in lucene-solr's branch 
refs/heads/branch_8x from Adrien Grand
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a0a1604 ]

LUCENE-8803: Ensure doc ID order is preserved in tests.


> Provide a FieldComparator to allow sorting by a feature from a FeatureField
> ---
>
> Key: LUCENE-8803
> URL: https://issues.apache.org/jira/browse/LUCENE-8803
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Colin Goodheart-Smithe
>Priority: Major
> Fix For: master (9.0), 8.2
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> It would be useful to be able to sort search hits by the value of a feature 
> from a feature field (e.g. pagerank). A FieldComparatorSource implementation 
> that enables this would create a convenient generic way to sort using values 
> from feature fields.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] iverase commented on a change in pull request #762: LUCENE-8903: Add LatLonShape point query

2019-07-05 Thread GitBox
iverase commented on a change in pull request #762: LUCENE-8903: Add 
LatLonShape point query
URL: https://github.com/apache/lucene-solr/pull/762#discussion_r300545922
 
 

 ##
 File path: lucene/sandbox/src/java/org/apache/lucene/document/LatLonShape.java
 ##
 @@ -94,9 +94,18 @@ private LatLonShape() {
 return new Field[] {new LatLonTriangle(fieldName, lat, lon, lat, lon, lat, 
lon)};
   }
 
+  /** create a query to find all indexed shapes that comply the {@link 
QueryRelation} with the provided point
+   **/
+  public static Query newPointQuery(String field, QueryRelation queryRelation, 
double lat, double lon) {
 
 Review comment:
   Not sure, I would keep it like that for two reasons:
   
   - It keeps things consistent with all the other queries in LatLonShape.
   
   - You can build queries like "give me all my shapes that do not contain this 
point". For WITHIN it becomes a term query matching all indexed points whose 
encoded value is equal to the encoded value of the query.
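
   For context, a usage sketch of the method under review; the signature is 
taken from the diff above and assumes this pull request has been applied, while 
the field name and coordinates are made up:

```java
import org.apache.lucene.document.LatLonShape;
import org.apache.lucene.search.Query;

public class PointQuerySketch {
  public static void main(String[] args) {
    // hypothetical field name and coordinates; newPointQuery comes from this PR
    Query q = LatLonShape.newPointQuery("shape", LatLonShape.QueryRelation.INTERSECTS,
        40.7128, -74.0060);
    System.out.println(q);
  }
}
```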
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: 8.1.2 bug fix release

2019-07-05 Thread Adrien Grand
Agreed with Shalin, we might want to focus on 8.2 at this point.

On Fri, Jul 5, 2019 at 8:38 AM Shalin Shekhar Mangar
 wrote:
>
> Thanks Dat.
>
> I don't think we should release a broken version without a fix for 
> SOLR-13413. A workaround for SOLR-13413 exists (forcing http1.1 for 
> inter-node requests) but we don't test that configuration anymore in Solr, so 
> I am hesitant to suggest it.
>
> I think that either we agree to upgrade jetty to 9.4.19 in this point release 
> or we scrap it altogether and focus on 8.2.
>
> On Thu, Jul 4, 2019 at 4:54 PM Đạt Cao Mạnh  wrote:
>>
>> Thanks Uwe!
>>
>> Hi guys, Ishan,
>> When I tried to build the RC1 for branch_8_1, I did see this failure on 
>> test HttpPartitionWithTlogReplicasTest
>>
>> 215685 ERROR 
>> (updateExecutor-537-thread-1-processing-x:collDoRecoveryOnRestart_shard1_replica_t1
>>  r:core_node3 null n:127.0.0.1:55000_t_ayt%2Fs c:collDoRecoveryOnRestart 
>> s:shard1) [n:127.0.0.1:55000_t_ayt%2Fs c:collDoRecoveryOnRestart s:shard1 
>> r:core_node3 x:collDoRecoveryOnRestart_shard1_replica_t1] 
>> o.a.s.u.ErrorReportingConcurrentUpdateSolrClient Error when calling 
>> SolrCmdDistributor$Req: cmd=add{,id=(null)}; node=StdNode: 
>> http://127.0.0.1:54997/t_ayt/s/collDoRecoveryOnRestart_shard1_replica_t2/ to 
>> http://127.0.0.1:54997/t_ayt/s/collDoRecoveryOnRestart_shard1_replica_t2/
>>   => java.io.IOException: java.net.ConnectException: Connection 
>> refused
>> at 
>> org.eclipse.jetty.client.util.DeferredContentProvider.flush(DeferredContentProvider.java:193)
>> java.io.IOException: java.net.ConnectException: Connection refused
>> at 
>> org.eclipse.jetty.client.util.DeferredContentProvider.flush(DeferredContentProvider.java:193)
>>  ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
>> at 
>> org.eclipse.jetty.client.util.OutputStreamContentProvider$DeferredOutputStream.flush(OutputStreamContentProvider.java:152)
>>  ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
>> at 
>> org.eclipse.jetty.client.util.OutputStreamContentProvider$DeferredOutputStream.write(OutputStreamContentProvider.java:146)
>>  ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
>> at 
>> org.apache.solr.common.util.FastOutputStream.flush(FastOutputStream.java:216)
>>  ~[java/:?]
>> at 
>> org.apache.solr.common.util.FastOutputStream.flushBuffer(FastOutputStream.java:209)
>>  ~[java/:?]
>> at org.apache.solr.common.util.JavaBinCodec.marshal(JavaBinCodec.java:169) 
>> ~[java/:?]
>> at 
>> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.marshal(JavaBinUpdateRequestCodec.java:102)
>>  ~[java/:?]
>> at 
>> org.apache.solr.client.solrj.impl.BinaryRequestWriter.write(BinaryRequestWriter.java:83)
>>  ~[java/:?]
>> at 
>> org.apache.solr.client.solrj.impl.Http2SolrClient.send(Http2SolrClient.java:337)
>>  ~[java/:?]
>> at 
>> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.sendUpdateStream(ConcurrentUpdateHttp2SolrClient.java:231)
>>  ~[java/:?]
>> at 
>> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.run(ConcurrentUpdateHttp2SolrClient.java:176)
>>  ~[java/:?]
>> at 
>> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:181)
>>  ~[metrics-core-4.0.5.jar:4.0.5]
>> at 
>> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
>>  ~[java/:?]
>> at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>  ~[?:1.8.0_191]
>> at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>  ~[?:1.8.0_191]
>> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
>> Suppressed: java.io.IOException: java.net.ConnectException: Connection 
>> refused
>> at 
>> org.eclipse.jetty.client.util.DeferredContentProvider.flush(DeferredContentProvider.java:193)
>>  ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
>> at 
>> org.eclipse.jetty.client.util.OutputStreamContentProvider$DeferredOutputStream.flush(OutputStreamContentProvider.java:152)
>>  ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
>> at 
>> org.eclipse.jetty.client.util.OutputStreamContentProvider$DeferredOutputStream.write(OutputStreamContentProvider.java:146)
>>  ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
>> at 
>> org.apache.solr.common.util.FastOutputStream.flush(FastOutputStream.java:216)
>>  ~[java/:?]
>> at 
>> org.apache.solr.common.util.FastOutputStream.flushBuffer(FastOutputStream.java:209)
>>  ~[java/:?]
>> at org.apache.solr.common.util.JavaBinCodec.close(JavaBinCodec.java:1261) 
>> ~[java/:?]
>> at 
>> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.marshal(JavaBinUpdateRequestCodec.java:103)
>>  ~[java/:?]
>> at 
>> org.apache.solr.client.solrj.impl.BinaryRequestWriter.write(BinaryRequestWriter.java:83)
>>  ~[java/:?]
>> at 
>> org.apache.solr.client.solrj.impl.Http2SolrClient.send(Http2SolrClient.java:337)
>>  ~[java/:?]
>> at 
>> 

Re: 8.1.2 bug fix release

2019-07-05 Thread Shalin Shekhar Mangar
Thanks Dat.

I don't think we should release a broken version without a fix for
SOLR-13413. A workaround for SOLR-13413 exists (forcing http1.1 for
inter-node requests) but we don't test that configuration anymore in Solr,
so I am hesitant to suggest it.

I think that either we agree to upgrade jetty to 9.4.19 in this point
release or we scrap it altogether and focus on 8.2.

On Thu, Jul 4, 2019 at 4:54 PM Đạt Cao Mạnh  wrote:

> Thanks Uwe!
>
> Hi guys, Ishan,
> When I tried to build the RC1 for branch_8_1, I did see this failure on
> test HttpPartitionWithTlogReplicasTest
>
> 215685 ERROR
> (updateExecutor-537-thread-1-processing-x:collDoRecoveryOnRestart_shard1_replica_t1
> r:core_node3 null n:127.0.0.1:55000_t_ayt%2Fs c:collDoRecoveryOnRestart
> s:shard1) [n:127.0.0.1:55000_t_ayt%2Fs c:collDoRecoveryOnRestart s:shard1
> r:core_node3 x:collDoRecoveryOnRestart_shard1_replica_t1]
> o.a.s.u.ErrorReportingConcurrentUpdateSolrClient Error when calling
> SolrCmdDistributor$Req: cmd=add{,id=(null)}; node=StdNode:
> http://127.0.0.1:54997/t_ayt/s/collDoRecoveryOnRestart_shard1_replica_t2/
> to
> http://127.0.0.1:54997/t_ayt/s/collDoRecoveryOnRestart_shard1_replica_t2/
>   => java.io.IOException: java.net.ConnectException: Connection
> refused
> at
> org.eclipse.jetty.client.util.DeferredContentProvider.flush(DeferredContentProvider.java:193)
> java.io.IOException: java.net.ConnectException: Connection refused
> at
> org.eclipse.jetty.client.util.DeferredContentProvider.flush(DeferredContentProvider.java:193)
> ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
> at
> org.eclipse.jetty.client.util.OutputStreamContentProvider$DeferredOutputStream.flush(OutputStreamContentProvider.java:152)
> ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
> at
> org.eclipse.jetty.client.util.OutputStreamContentProvider$DeferredOutputStream.write(OutputStreamContentProvider.java:146)
> ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
> at
> org.apache.solr.common.util.FastOutputStream.flush(FastOutputStream.java:216)
> ~[java/:?]
> at
> org.apache.solr.common.util.FastOutputStream.flushBuffer(FastOutputStream.java:209)
> ~[java/:?]
> at org.apache.solr.common.util.JavaBinCodec.marshal(JavaBinCodec.java:169)
> ~[java/:?]
> at
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.marshal(JavaBinUpdateRequestCodec.java:102)
> ~[java/:?]
> at
> org.apache.solr.client.solrj.impl.BinaryRequestWriter.write(BinaryRequestWriter.java:83)
> ~[java/:?]
> at
> org.apache.solr.client.solrj.impl.Http2SolrClient.send(Http2SolrClient.java:337)
> ~[java/:?]
> at
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.sendUpdateStream(ConcurrentUpdateHttp2SolrClient.java:231)
> ~[java/:?]
> at
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.run(ConcurrentUpdateHttp2SolrClient.java:176)
> ~[java/:?]
> at
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:181)
> ~[metrics-core-4.0.5.jar:4.0.5]
> at
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
> ~[java/:?]
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> ~[?:1.8.0_191]
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> ~[?:1.8.0_191]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
> Suppressed: java.io.IOException: java.net.ConnectException: Connection
> refused
> at
> org.eclipse.jetty.client.util.DeferredContentProvider.flush(DeferredContentProvider.java:193)
> ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
> at
> org.eclipse.jetty.client.util.OutputStreamContentProvider$DeferredOutputStream.flush(OutputStreamContentProvider.java:152)
> ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
> at
> org.eclipse.jetty.client.util.OutputStreamContentProvider$DeferredOutputStream.write(OutputStreamContentProvider.java:146)
> ~[jetty-client-9.4.14.v20181114.jar:9.4.14.v20181114]
> at
> org.apache.solr.common.util.FastOutputStream.flush(FastOutputStream.java:216)
> ~[java/:?]
> at
> org.apache.solr.common.util.FastOutputStream.flushBuffer(FastOutputStream.java:209)
> ~[java/:?]
> at org.apache.solr.common.util.JavaBinCodec.close(JavaBinCodec.java:1261)
> ~[java/:?]
> at
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.marshal(JavaBinUpdateRequestCodec.java:103)
> ~[java/:?]
> at
> org.apache.solr.client.solrj.impl.BinaryRequestWriter.write(BinaryRequestWriter.java:83)
> ~[java/:?]
> at
> org.apache.solr.client.solrj.impl.Http2SolrClient.send(Http2SolrClient.java:337)
> ~[java/:?]
> at
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.sendUpdateStream(ConcurrentUpdateHttp2SolrClient.java:231)
> ~[java/:?]
> at
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.run(ConcurrentUpdateHttp2SolrClient.java:176)
> ~[java/:?]
> at
> 

[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 405 - Still Failing

2019-07-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/405/

1 tests failed.
FAILED:  
org.apache.solr.schema.TestUseDocValuesAsStored.testDuplicateMultiValued

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([616067DC51E2DFFA:8FBD73FC9F532946]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:947)
at 
org.apache.solr.schema.TestUseDocValuesAsStored.doTest(TestUseDocValuesAsStored.java:367)
at 
org.apache.solr.schema.TestUseDocValuesAsStored.testDuplicateMultiValued(TestUseDocValuesAsStored.java:165)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//arr[@name='test_ss_dv']/str[.='X']
xml response was: