[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 983 - Failure

2018-03-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/983/

No tests ran.

Build Log:
[...truncated 30102 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 230 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] Java 9 JAVA_HOME=/home/jenkins/tools/java/latest1.9
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (41.1 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 30.3 MB in 0.03 sec (883.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 73.4 MB in 0.08 sec (970.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 83.9 MB in 0.09 sec (959.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6261 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6261 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6261 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6261 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.badapples=false 
-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 214 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 9 and testArgs='-Dtests.badapples=false 
-Dtests.slow=false'...
   [smoker] 
   [smoker] command "export JAVA_HOME="/home/jenkins/tools/java/latest1.9" 
PATH="/home/jenkins/tools/java/latest1.9/bin:$PATH" 
JAVACMD="/home/jenkins/tools/java/latest1.9/bin/java"; ant clean test 
-Dtests.badapples=false -Dtests.slow=false" failed:
   [smoker] Buildfile: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/lucene-8.0.0/build.xml
   [smoker] 
   [smoker] clean:
   [smoker][delete] Deleting directory 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/lucene-8.0.0/build
   [smoker] 
   [smoker] ivy-availability-check:
   [smoker] [loadresource] Do not set property disallowed.ivy.jars.list as its 
length is 0.
   [smoker] 
   [smoker] -ivy-fail-disallowed-ivy-version:
   [smoker] 
   [smoker] ivy-fail:
   [smoker] 
   [smoker] ivy-configure:
   [smoker] [ivy:configure] :: Apache Ivy 2.4.0 - 20141213170938 :: 
http://ant.apache.org/ivy/ ::
   [smoker] [ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/lucene-8.0.0/top-level-ivy-settings.xml
   [smoker] 
   [smoker] -clover.load:
   [smoker] 
   [smoker] resolve-groovy:
   [smoker] [ivy:cachepath] :: resolving dependencies :: 
org.codehaus.groovy#groovy-all-caller;working
   [smoker] [ivy:cachepath] confs: [default]
   [smoker] [ivy:cachepath] found org.codehaus.groovy#groovy-all;2.4.13 in 
public
   [smoker] [ivy:cachepath] :: resolution report :: resolve 764ms :: artifacts 
dl 14ms
   [smoker] 
   [smoker] 	---------------------------------------------------------------------
   [smoker] 	|                  |            modules            ||   artifacts   |
   [smoker] 	|       conf       | number| search|dwnlded|evicted|| number|dwnlded|
   [smoker] 	---------------------------------------------------------------------
   [smoker] 	|      default     |   1   |   0   |   0   |   0   ||   1   |   0   |
   [smoker] 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.1) - Build # 7233 - Still Unstable!

2018-03-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7233/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

8 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:57821/_/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:57821/_/collection1
at 
__randomizedtesting.SeedInfo.seed([5E2F0CC4F873B6F4:D67B331E568FDB0C]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1105)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:885)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:818)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:484)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.commit(AbstractFullDistribZkTestBase.java:1591)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:212)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 17 - Still Unstable

2018-03-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/17/

2 tests failed.
FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 8 in https://127.0.0.1:38767/solr

Stack Trace:
java.lang.AssertionError: Can not find doc 8 in https://127.0.0.1:38767/solr
at 
__randomizedtesting.SeedInfo.seed([C92F345F45EBA322:8DF4DF368BB6985]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:889)
at 
org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:603)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest.testMultipleThreads

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception 

[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 511 - Unstable!

2018-03-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/511/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

13 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
timed out waiting for collection1 startAt time to exceed: Thu Mar 22 03:19:50 
GMT 2018

Stack Trace:
java.lang.AssertionError: timed out waiting for collection1 startAt time to 
exceed: Thu Mar 22 03:19:50 GMT 2018
at 
__randomizedtesting.SeedInfo.seed([137489AE4ABCC63F:C8DF89684F94AF8C]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1572)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:893)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchWithMasterUrl

Error 

[jira] [Updated] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-21 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12087:

Attachment: SOLR-12087.patch

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: SOLR-12087.patch, SOLR-12087.patch, 
> SOLR-12087.test.patch, Screen Shot 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en masse; the 
> result is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas then causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since a move is an add and 
> a delete.
> Some more information regarding this issue: when the MOVEREPLICA command is 
> issued, the new replica is created successfully, but the replica to be deleted 
> fails to be removed from state.json (the core is deleted, though), and we see 
> two logs spammed:
>  # The node containing the leader replica continually (every second) 
> attempts to initiate recovery on the replica and fails to do so because the 
> core does not exist. As a result it continually publishes a down state for 
> the replica to ZooKeeper.
>  # The deleted replica's node spams that it cannot locate the core because it's 
> been deleted.
> During this period of time we see an increase in ZK network traffic 
> overall, until the replica is finally deleted (by spamming DELETEREPLICA on the 
> shard until it's removed from the state).
> My guess is there are two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted has trouble being deleted from state.json 
> in ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.
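The reporter's workaround of repeatedly issuing DELETEREPLICA until the replica disappears from the cluster state amounts to a retry-until-converged loop. A minimal, self-contained sketch of that pattern follows; the in-memory "cluster state" and `deleteReplica` stub are hypothetical stand-ins for illustration, not the real SolrCloud Collections API:

```java
import java.util.HashSet;
import java.util.Set;

/** Sketch of the "retry DELETEREPLICA until it is gone from state" workaround
 *  described above. The cluster state here is an in-memory stand-in. */
public class DeleteReplicaRetry {

    /** Stand-in for cluster state: replica names currently registered. */
    static final Set<String> clusterState = new HashSet<>();

    /** Stand-in for a DELETEREPLICA call; simulates the bug by failing to
     *  update cluster state for the first few attempts. */
    static int failuresLeft = 2;

    static boolean deleteReplica(String replica) {
        if (failuresLeft > 0) {
            failuresLeft--;        // core deleted, but state.json not updated
            return false;
        }
        return clusterState.remove(replica);
    }

    /** Retry until the replica is no longer in cluster state, or give up. */
    static boolean deleteUntilGone(String replica, int maxAttempts) {
        for (int i = 0; i < maxAttempts; i++) {
            deleteReplica(replica);
            if (!clusterState.contains(replica)) return true;
        }
        return !clusterState.contains(replica);
    }

    public static void main(String[] args) {
        clusterState.add("core_node4");
        System.out.println(deleteUntilGone("core_node4", 5)); // expect: true
    }
}
```

In the real cluster the same loop shows up as the "spamming DELETEREPLICA" the reporter describes; the proper fix is for the delete to converge on the first call.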



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-21 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat reassigned SOLR-12087:
---

Assignee: Cao Manh Dat




[jira] [Updated] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-21 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12087:

Attachment: (was: SOLR-12087.patch)




[jira] [Commented] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-21 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408962#comment-16408962
 ] 

Cao Manh Dat commented on SOLR-12087:
-

Uploaded a patch that:
 * Follows Varun's hints above.
 * Skips the LIR process if the replica no longer exists.
 * If the LIR thread has already created a LIR znode (because the leader's 
cluster-state view is lagging), removes that znode in the final step.
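The second bullet is essentially a guard clause: before initiating leader-initiated recovery (LIR), check whether the target replica still exists in cluster state. A hypothetical sketch of that guard, with an in-memory registry standing in for Solr's actual internals (these names are illustrative, not Solr's API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch of "skip LIR when the replica has been deleted". The replica
 *  registry is an in-memory stand-in for the cluster state. */
public class LirGuard {

    /** Stand-in registry: replica name -> state. */
    static final Map<String, String> replicas = new ConcurrentHashMap<>();

    /** Returns true if recovery was initiated, false if skipped. */
    static boolean maybeStartRecovery(String replicaName) {
        if (!replicas.containsKey(replicaName)) {
            // Replica was deleted: publishing "down" and retrying recovery
            // forever would only spam logs and ZooKeeper, so skip LIR.
            return false;
        }
        replicas.put(replicaName, "recovering");
        return true;
    }

    public static void main(String[] args) {
        replicas.put("core_node1", "down");
        System.out.println(maybeStartRecovery("core_node1"));   // existing replica
        System.out.println(maybeStartRecovery("core_node_gone")); // deleted replica
    }
}
```

Without such a check, the leader keeps publishing a down state for a core that can never come back, which matches the log spam described in the issue.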




[jira] [Updated] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-21 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12087:

Attachment: SOLR-12087.patch

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: SOLR-12087.patch, SOLR-12087.patch, 
> SOLR-12087.test.patch, Screen Shot 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en masse; the 
> result is that the data is deleted but the replicas aren't removed from the 
> cluster state. Attempting to delete the downed replicas then fails because 
> the core no longer exists.
> This also occurs when trying to move replicas, since a move is an add plus a 
> delete.
> Some more information regarding this issue: when the MOVEREPLICA command is 
> issued, the new replica is created successfully, but the replica to be 
> deleted fails to be removed from state.json (the core is deleted, though), 
> and we see two logs spammed:
>  # The node containing the leader replica continually (every second) 
> attempts to initiate recovery on the replica and fails to do so because the 
> core does not exist. As a result, it continually publishes a down state for 
> the replica to ZooKeeper.
>  # The deleted replica's node spams that it cannot locate the core because 
> it has been deleted.
> During this period we see an increase in ZK network traffic overall, until 
> the replica is finally deleted (by spamming DELETEREPLICA on the shard until 
> it's removed from the state).
> My guess is there are two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted has trouble being deleted from state.json in 
> ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.
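Point 1 of the description implies a missing guard in the retry path. A minimal sketch under assumed names (this is not Solr's actual recovery code, just an illustration of the fix shape):

```java
import java.util.Set;

// Hypothetical sketch (not Solr's actual recovery code) of the guard implied
// by point 1 above: before the leader requests recovery, check that the
// replica's core still exists in the cluster state. Without such a check the
// leader retries every second and keeps publishing "down" for a deleted core.
public class RecoveryLoopGuard {

    /** Returns true only if requesting recovery for coreName makes sense. */
    public static boolean shouldRequestRecovery(String coreName, Set<String> coresInClusterState) {
        return coresInClusterState.contains(coreName);
    }
}
```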



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-21 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408948#comment-16408948
 ] 

Cao Manh Dat commented on SOLR-12087:
-

Hi Varun, all your points are valid. Thanks!

Several points from the log you mentioned are worth noting:
 * The repeated {{ZkShardTerms failed to save terms, version is not a match, 
retrying}} entries: this is a ZK performance problem; all threads will try to 
fetch the latest term from ZK -> SOLR-12135
 * The {{DistributedUpdateProcessor Core core_node4 belonging to 
deleteReplicaOnIndexing shard1, does not have error'd node}} entry -> SOLR-12073

 

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: SOLR-12087.patch, SOLR-12087.test.patch, Screen Shot 
> 2018-03-16 at 11.50.32 AM.png
>






[jira] [Created] (SOLR-12135) ZK performance problem when multiple update threads of the leader try to update the term

2018-03-21 Thread Cao Manh Dat (JIRA)
Cao Manh Dat created SOLR-12135:
---

 Summary: ZK performance problem when multiple update threads of 
the leader try to update the term
 Key: SOLR-12135
 URL: https://issues.apache.org/jira/browse/SOLR-12135
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.3
Reporter: Cao Manh Dat
Assignee: Cao Manh Dat
 Fix For: 7.4


From SOLR-12087: When multiple update threads fail to send updates to a 
replica, all the threads will try to fetch the latest shard term from ZK, 
leading to high load between the leader and ZK. We should introduce a locking 
mechanism so that only one fetch is made in this case.
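The locking idea can be sketched as follows (a hypothetical class, not the actual SOLR-12135 patch): a double-checked refresh under a lock collapses N concurrent fetch attempts into a single ZK read.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical "single-flight" sketch of the proposed fix: threads that lose
// the race to refresh the cached shard term reuse the winner's result
// instead of each hitting ZK.
public class SingleFlightTermsFetcher {
    private final AtomicInteger fetchCount = new AtomicInteger();
    private volatile int cachedVersion = -1;
    private final Object refreshLock = new Object();

    // Simulated expensive ZK read; returns the next term version.
    private int readTermsFromZk() {
        fetchCount.incrementAndGet();
        return cachedVersion + 1;
    }

    /** Refresh the cached terms unless another thread already has. */
    public int ensureFresh(int expectedVersion) {
        if (cachedVersion > expectedVersion) {
            return cachedVersion; // another thread already refreshed
        }
        synchronized (refreshLock) {
            if (cachedVersion <= expectedVersion) { // re-check under the lock
                cachedVersion = readTermsFromZk();
            }
            return cachedVersion;
        }
    }

    public int fetches() {
        return fetchCount.get();
    }
}
```

With this shape, a burst of threads that all observed the same stale version results in exactly one ZK fetch.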






[JENKINS] Lucene-Solr-Tests-7.x - Build # 522 - Still Unstable

2018-03-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/522/

1 tests failed.
FAILED:  org.apache.solr.handler.TestSolrConfigHandlerCloud.test

Error Message:
Timeout occured while waiting response from server at: https://127.0.0.1:34649

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:34649
at 
__randomizedtesting.SeedInfo.seed([87D848EACC45C8DA:F8C773062B9A522]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1105)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:885)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:818)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:322)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1750 - Unstable!

2018-03-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1750/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateAndDeleteCollection

Error Message:
Error from server at http://127.0.0.1:62457/solr: Could not find collection : 
solrj_test

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:62457/solr: Could not find collection : 
solrj_test
at 
__randomizedtesting.SeedInfo.seed([C57DE4376BCA5EB8:C2A8E57C4C5E3BB5]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1105)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:885)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:818)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateAndDeleteCollection(CollectionsAPISolrJTest.java:99)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 19 - Still Unstable

2018-03-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/19/

6 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testEventQueue

Error Message:
action wasn't interrupted

Stack Trace:
java.lang.AssertionError: action wasn't interrupted
at 
__randomizedtesting.SeedInfo.seed([DEF89269F503DD1:C45ACB889637FB24]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testEventQueue(TriggerIntegrationTest.java:752)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testNodeLostTrigger

Error Message:
Error from server at https://127.0.0.1:44760/solr: delete the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:44760/solr: delete the 

To all contributors: How to get your patch automatically validated

2018-03-21 Thread Steve Rowe
To enable automatic validation of a patch attached to a Jira issue, switch the 
issue's status to "Patch Available" by clicking on the "Submit Patch" button 
near the top of the page.

This will enqueue an ASF Jenkins job (PreCommit-LUCENE-Build[1] or 
PreCommit-SOLR-Build[2]) to run various quality checks on the patch and post a 
validation report as a comment (by "Lucene/Solr QA") on the issue.

Expect a delay of 12 hours or so before the patch validation job actually runs. 

Note that in order for an attached patch to trigger validation, its name must 
conform to the naming rules outlined here: 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames/

(FYI: I’ve added a version of these instructions to Lucene’s and Solr’s 
HowToContribute wiki pages.[3][4])

--
Steve
www.lucidworks.com

[1] https://builds.apache.org/job/PreCommit-LUCENE-Build/
[2] https://builds.apache.org/job/PreCommit-SOLR-Build/
[3] https://wiki.apache.org/lucene-java/HowToContribute#Contributing_your_work
[4] https://wiki.apache.org/solr/HowToContribute#Contributing_your_work





[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1508 - Still Unstable

2018-03-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1508/

8 tests failed.
FAILED:  org.apache.solr.cloud.RestartWhileUpdatingTest.test

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([2512321D98A10DD9]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.RestartWhileUpdatingTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([2512321D98A10DD9]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SyncSliceTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [SolrZkClient] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.common.cloud.SolrZkClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:185)  
at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:120)  at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:110)  at 
org.apache.solr.common.cloud.ZkStateReader.<init>(ZkStateReader.java:286)  at 
org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider.connect(ZkClientClusterStateProvider.java:155)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:399)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:828)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:818)
  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)  at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)  at 
org.apache.solr.client.solrj.impl.SolrClientCloudManager.request(SolrClientCloudManager.java:115)
  at 
org.apache.solr.cloud.autoscaling.SystemLogListener.onEvent(SystemLogListener.java:122)
  at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers$TriggerListeners.fireListeners(ScheduledTriggers.java:772)
  at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers$TriggerListeners.fireListeners(ScheduledTriggers.java:742)
  at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$add$4(ScheduledTriggers.java:281)
  at 
org.apache.solr.cloud.autoscaling.NodeLostTrigger.run(NodeLostTrigger.java:159) 
 at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers$TriggerWrapper.run(ScheduledTriggers.java:575)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [SolrZkClient]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.common.cloud.SolrZkClient
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:185)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:120)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:110)
at 
org.apache.solr.common.cloud.ZkStateReader.<init>(ZkStateReader.java:286)
at 
org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider.connect(ZkClientClusterStateProvider.java:155)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:399)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:828)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:818)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.impl.SolrClientCloudManager.request(SolrClientCloudManager.java:115)
at 
org.apache.solr.cloud.autoscaling.SystemLogListener.onEvent(SystemLogListener.java:122)
at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers$TriggerListeners.fireListeners(ScheduledTriggers.java:772)
at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers$TriggerListeners.fireListeners(ScheduledTriggers.java:742)
at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$add$4(ScheduledTriggers.java:281)

[jira] [Commented] (SOLR-12125) Get terms in solr not working

2018-03-21 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408620#comment-16408620
 ] 

Erick Erickson commented on SOLR-12125:
---

So far you've told us nothing useful. Please respond on the user's list with 
the _exact_ query you're using.

> Get terms in solr not working
> -
>
> Key: SOLR-12125
> URL: https://issues.apache.org/jira/browse/SOLR-12125
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>Reporter: adam rag
>Priority: Major
>
> To get the top words in my Apache Solr instance, I am using the "terms" 
> query. When I try it on 100 million documents, the data is fetched after a 
> few minutes, but if the data is 300 million documents, Solr does not 
> respond. My server memory is 100 GB.






[jira] [Commented] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-21 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408606#comment-16408606
 ] 

Varun Thacker commented on SOLR-12087:
--

Hi Dat,

Great catch!

 

A couple of minor comments about the patch:
 * The log.warn in ReplicaMutator: should we just remove it? A user going 
through the logs will be confused when he sees it, and there is no action to 
be taken anyway. Maybe it could be a DEBUG log entry?
 * In DeleteReplicaTest, can we change {{e.printStackTrace();}} to be written 
out with the logger?
 * Just curious: why is there a {{Thread.sleep(2000);}} wait in the test 
code?
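The second bullet can be sketched like this (a hypothetical test helper, not the actual DeleteReplicaTest change; Solr itself logs via SLF4J, but java.util.logging is used here to keep the example self-contained):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical helper illustrating the suggestion above: route the exception
// through a logger instead of e.printStackTrace(), so the stack trace lands
// in the test's log output with a message attached.
public class LoggedFailure {
    private static final Logger log = Logger.getLogger(LoggedFailure.class.getName());

    public static String describe(Exception e) {
        // Instead of e.printStackTrace():
        log.log(Level.WARNING, "Request failed, will retry", e);
        return e.getMessage();
    }
}
```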

 

 

For reference, I ran the test patch on master (which has the new LIR code) 
and all 10 runs passed.

A few things worth noting were these log lines:
{code:java}
7471 INFO (qtp1388245618-49) [n:127.0.0.1:56141_solr ] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/cores 
params={deleteInstanceDir=true=deleteReplicaOnIndexing_shard1_replica_n1=/admin/cores=true=UNLOAD=javabin=2=true}
 status=0 QTime=85
7559 INFO (qtp1216387063-400) [n:127.0.0.1:56142_solr c:deleteReplicaOnIndexing 
s:shard1 r:core_node4 x:deleteReplicaOnIndexing_shard1_replica_n2] 
o.a.s.c.ZkShardTerms Successful update of terms at 
/collections/deleteReplicaOnIndexing/terms/shard1 to 
Terms{values={core_node4=1}, version=3}
7559 INFO (qtp1216387063-289) [n:127.0.0.1:56142_solr c:deleteReplicaOnIndexing 
s:shard1 r:core_node4 x:deleteReplicaOnIndexing_shard1_replica_n2] 
o.a.s.c.ZkShardTerms Failed to save terms, version is not a match, retrying
7559 INFO (qtp1216387063-57) [n:127.0.0.1:56142_solr c:deleteReplicaOnIndexing 
s:shard1 r:core_node4 x:deleteReplicaOnIndexing_shard1_replica_n2] 
o.a.s.c.ZkShardTerms Failed to save terms, version is not a match, retrying
7560 INFO (qtp1216387063-319) [n:127.0.0.1:56142_solr c:deleteReplicaOnIndexing 
s:shard1 r:core_node4 x:deleteReplicaOnIndexing_shard1_replica_n2] 
o.a.s.c.ZkShardTerms Failed to save terms, version is not a match, retrying
7560 INFO (qtp1216387063-476) [n:127.0.0.1:56142_solr c:deleteReplicaOnIndexing 
s:shard1 r:core_node4 x:deleteReplicaOnIndexing_shard1_replica_n2] 
o.a.s.c.ZkShardTerms Failed to save terms, version is not a match, retrying
7559 INFO (qtp1216387063-423) [n:127.0.0.1:56142_solr c:deleteReplicaOnIndexing 
s:shard1 r:core_node4 x:deleteReplicaOnIndexing_shard1_replica_n2] 
o.a.s.c.ZkShardTerms Failed to save terms, version is not a match, retrying
7561 INFO (qtp1216387063-444) [n:127.0.0.1:56142_solr c:deleteReplicaOnIndexing 
s:shard1 r:core_node4 x:deleteReplicaOnIndexing_shard1_replica_n2] 
o.a.s.c.ZkShardTerms Failed to save terms, version is not a match, retrying
7560 INFO (qtp1216387063-325) [n:127.0.0.1:56142_solr c:deleteReplicaOnIndexing 
s:shard1 r:core_node4 x:deleteReplicaOnIndexing_shard1_replica_n2] 
o.a.s.c.ZkShardTerms Failed to save terms, version is not a match, retrying
7563 INFO (qtp1216387063-324) [n:127.0.0.1:56142_solr c:deleteReplicaOnIndexing 
s:shard1 r:core_node4 x:deleteReplicaOnIndexing_shard1_replica_n2] 
o.a.s.c.ZkShardTerms Failed to save terms, version is not a match, retrying
7561 INFO (qtp1216387063-321) [n:127.0.0.1:56142_solr c:deleteReplicaOnIndexing 
s:shard1 r:core_node4 x:deleteReplicaOnIndexing_shard1_replica_n2] 
o.a.s.c.ZkShardTerms Failed to save terms, version is not a match, retrying



7705 ERROR 
(updateExecutor-16-thread-94-processing-https:127.0.0.1:56141//solr//deleteReplicaOnIndexing_shard1_replica_n1
 x:deleteReplicaOnIndexing_shard1_replica_n2 r:core_node4 
n:127.0.0.1:56142_solr s:shard1 c:deleteReplicaOnIndexing) 
[n:127.0.0.1:56142_solr c:deleteReplicaOnIndexing s:shard1 r:core_node4 
x:deleteReplicaOnIndexing_shard1_replica_n2] 
o.a.s.u.ErrorReportingConcurrentUpdateSolrClient error
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
https://127.0.0.1:56141/solr/deleteReplicaOnIndexing_shard1_replica_n1: Can not 
find: /solr/deleteReplicaOnIndexing_shard1_replica_n1/update



request: 
https://127.0.0.1:56141/solr/deleteReplicaOnIndexing_shard1_replica_n1/update?update.distrib=FROMLEADER=https%3A%2F%2F127.0.0.1%3A56142%2Fsolr%2FdeleteReplicaOnIndexing_shard1_replica_n2%2F=javabin=2

...

7751 WARN  (qtp1216387063-326) [n:127.0.0.1:56142_solr 
c:deleteReplicaOnIndexing s:shard1 r:core_node4 
x:deleteReplicaOnIndexing_shard1_replica_n2] 
o.a.s.u.p.DistributedUpdateProcessor Core core_node4 belonging to 
deleteReplicaOnIndexing shard1, does not have error'd node 
https://127.0.0.1:56141/solr/deleteReplicaOnIndexing_shard1_replica_n1/ as a 
replica. No request recovery command will be sent!{code}


[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_162) - Build # 21678 - Still Unstable!

2018-03-21 Thread Policeman Jenkins Server
Error processing tokens: Error while parsing action 
'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at input 
position (line 79, pos 4):
)"}
   ^

java.lang.OutOfMemoryError: Java heap space


[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.1) - Build # 513 - Still Unstable!

2018-03-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/513/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

6 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.sim.TestNodeLostTrigger.testTrigger

Error Message:
[127.0.0.1:10016_solr] doesn't contain 127.0.0.1:10018_solr

Stack Trace:
java.lang.AssertionError: [127.0.0.1:10016_solr] doesn't contain 
127.0.0.1:10018_solr
at 
__randomizedtesting.SeedInfo.seed([C5E1F482539F0EC7:A62AC200CA507DEA]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.sim.TestNodeLostTrigger.testTrigger(TestNodeLostTrigger.java:116)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
found:2[index.20180321172203314, index.20180321172203554, index.properties, 
replication.properties, snapshot_metadata]

Stack Trace:

[jira] [Commented] (SOLR-12134) validate links to javadocs in ref-guide & hook all ref-guide validation into top level documentation/precommit

2018-03-21 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408441#comment-16408441
 ] 

Hoss Man commented on SOLR-12134:
-

fair point – i was using build-pdf to ensure we could catch any errors that 
might *only* show up when building the PDF, but the more i think about it the 
less i can remember a time when that happened that wasn't directly related to 
working on the ref-guide build process itself ... 
{{bare-bones-html-validation}} will catch 99% of the "content mistakes" in the 
ref-guide, so that's probably good enough for precommit.

> validate links to javadocs in ref-guide & hook all ref-guide validation into 
> top level documentation/precommit
> --
>
> Key: SOLR-12134
> URL: https://issues.apache.org/jira/browse/SOLR-12134
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-12134.patch, 
> nocommit.SOLR-12134.sample-failures.patch
>
>
> We've seen a couple problems come up recently where the ref-guide had broken 
> links to javadocs.
> In some cases these are because people made typos in java classnames / 
> pathnames while editing the docs - but in other cases the problems were that 
> the docs were correct at one point, but then later the class was 
> moved/renamed/removed, or had its access level downgraded from public to 
> private (after deprecation)
> I've worked up a patch with some ideas to help us catch these types of 
> mistakes - and in general to hook the "bare-bones HTML" validation (which 
> does not require jekyll or any non-ivy managed external dependencies) into 
> {{ant precommit}}
> Details to follow in comment/patch...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (SOLR-12134) validate links to javadocs in ref-guide & hook all ref-guide validation into top level documentation/precommit

2018-03-21 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408414#comment-16408414
 ] 

Steve Rowe commented on SOLR-12134:
---

bq. {{ant precommit  # now builds & validates barebones html ref-guide 
after building javadocs}}

{{precommit}} depends on {{documentation-lint}} under solr, which depends on 
{{documentation}}, which with your patch builds the ref guide PDF, in addition 
to the barebones html ref-guide.

On my 2012 macbook pro, {{ant build-pdf}} took 4 minutes 54 seconds, whereas 
{{ant bare-bones-html-validation}} took 40 seconds.  Adding 4 minutes to {{ant 
precommit}} may not go over well with people who already think it takes too 
long?

> validate links to javadocs in ref-guide & hook all ref-guide validation into 
> top level documentation/precommit
> --
>
> Key: SOLR-12134
> URL: https://issues.apache.org/jira/browse/SOLR-12134
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-12134.patch, 
> nocommit.SOLR-12134.sample-failures.patch
>
>
> We've seen a couple problems come up recently where the ref-guide had broken 
> links to javadocs.
> In some cases these are because people made typos in java classnames / 
> pathnames while editing the docs - but in other cases the problems were that 
> the docs were correct at one point, but then later the class was 
> moved/renamed/removed, or had its access level downgraded from public to 
> private (after deprecation)
> I've worked up a patch with some ideas to help us catch these types of 
> mistakes - and in general to hook the "bare-bones HTML" validation (which 
> does not require jekyll or any non-ivy managed external dependencies) into 
> {{ant precommit}}
> Details to follow in comment/patch...






[jira] [Commented] (SOLR-12134) validate links to javadocs in ref-guide & hook all ref-guide validation into top level documentation/precommit

2018-03-21 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408408#comment-16408408
 ] 

Steve Rowe commented on SOLR-12134:
---

bq. let's see if the autopatch submission review will like my patch

Be patient... so far it's taking 10-14 hours for enqueued patch validation jobs 
to run.

> validate links to javadocs in ref-guide & hook all ref-guide validation into 
> top level documentation/precommit
> --
>
> Key: SOLR-12134
> URL: https://issues.apache.org/jira/browse/SOLR-12134
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-12134.patch, 
> nocommit.SOLR-12134.sample-failures.patch
>
>
> We've seen a couple problems come up recently where the ref-guide had broken 
> links to javadocs.
> In some cases these are because people made typos in java classnames / 
> pathnames while editing the docs - but in other cases the problems were that 
> the docs were correct at one point, but then later the class was 
> moved/renamed/removed, or had its access level downgraded from public to 
> private (after deprecation)
> I've worked up a patch with some ideas to help us catch these types of 
> mistakes - and in general to hook the "bare-bones HTML" validation (which 
> does not require jekyll or any non-ivy managed external dependencies) into 
> {{ant precommit}}
> Details to follow in comment/patch...






[jira] [Comment Edited] (SOLR-12134) validate links to javadocs in ref-guide & hook all ref-guide validation into top level documentation/precommit

2018-03-21 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408244#comment-16408244
 ] 

Hoss Man edited comment on SOLR-12134 at 3/21/18 6:03 PM:
--

I think we should make 2 broad changes to the build, which i've been testing 
out in a patch...
 # add an optional sys prop when building the ref guide to use 
{{local.javadocs}} paths in the html versions of the guide
 ** when using {{local.javadocs}} the CheckLinksAndAnchors code will also 
validate that any javadoc file mentioned in the ref-guide exists locally – so 
you have to have built all javadocs to use it
 ** The PDF will never use local javadocs paths, always absolute urls on 
lucene.apache.org
 # update {{cd solr; ant documentation}} to also build the pdf (and 
bare-bones-html) versions of the guide using this {{local.javadocs}} option
 ** This will ensure that both {{ant precommit}} and any jenkins job that 
builds documentation will fail if someone breaks the ref-guide (either 
directly, or by changing javadocs in a way that will break the ref-guide)

Things that won't change...
 * if you run {{cd solr/solr-ref-guide; ant }} it should still work 
exactly as before
 ** intra ref-guide links will still be validated
 ** links to javadocs will still use absolute URLs...
 *** you won't know if they are broken – but on the flip side you can rapidly 
rebuild the ref-guide w/o needing to build all lucene/solr javadocs
 * the release process for either lucene/solr or the ref-guide stays the same

 ** In general this moves us closer to having a unified release process, but 
it's not all the way there
 ** Notable: the "official builds" of the ref-guide (both in PDF and the hosted 
HTML) will still use absolute URLs to link to javadocs


Some examples...
{code:java}
cd solr/solr-ref-guide
ant build-site   # no change from existing behavior
ant bare-bones-html-validation   # no change from existing behavior
ant build-pdf# no change from existing behavior

ant build-site -Dlocal.javadocs=true  # local jekyll build will now link to local
                                      # javadoc files and build will fail if any/all
                                      # javadoc links don't exist
{code}
{code:java}
cd solr
ant documentation  # now builds PDF & validates barebones html ref-guide
   # (after building javadocs) to ensure no fatal PDF
   # errors or broken links 
{code}
{code:java}
ant precommit  # now builds & validates barebones html ref-guide after 
building javadocs
{code}

The attached patch makes this all work (including a fix to an existing link in 
{{solr-tutorial.adoc}} which currently depends on some weird rewrite rule 
behavior to work). If you also apply the 
"nocommit.SOLR-12134.sample-failures.patch" file you can see how some various 
examples of problems will affect things like {{cd solr/solr-ref-guide; ant}} vs 
{{cd solr; ant documentation}} (Of course: to test {{ant precommit}} you'll 
have to remove the 'nocommit' test from that patch, since it will cause 
precommit to fail fast before it even tries to build documentation)

 

(*EDIT* – based on an offline question from steve, i updated one comment in the 
examples above to clarify that {{ant documentation}} will actually build the 
pdf _and_ validate the links by building & verifying the bare-bones html ... 
that's a preexisting dependency in {{build-pdf}})


was (Author: hossman):
I think we should make 2 broad changes to the build, which i've been testing 
out in a patch...
 # add an optional sys prop when building the ref guide to use 
{{local.javadocs}} paths in the html versions of the guide
 ** when using {{local.javadocs}} the CheckLinksAndAnchors code will also 
validate that any javadoc file mentioned in the ref-guide exists locally – so 
you have to have built all javadocs to use it
 ** The PDF will never use local javadocs paths, always absolute urls on 
lucene.apache.org
 # update {{cd solr; ant documentation}} to also build the pdf (and 
bare-bones-html) versions of the guide using this {{local.javadocs}} option
 ** This will ensure that both {{ant precommit}} and any jenkins job that 
builds documentation will fail if someone breaks the ref-guide (either 
directly, or by changing javadocs in a way that will break the ref-guide)

Things that won't change...
 * if you run {{cd solr/solr-ref-guide; ant }} it should still work 
exactly as before
 ** intra ref-guide links will still be validated
 ** links to javadocs will still use absolute URLs...
 *** you won't know if they are broken – but on the flip side you can rapidly 
rebuild the ref-guide w/o needing to build all lucene/solr javadocs
 * the release process for either lucene/solr or the ref-guide stays the same

 ** In general this moves us closer to having a unified release process, but 
it's not 

[JENKINS] Lucene-Solr-NightlyTests-7.3 - Build # 6 - Failure

2018-03-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.3/6/

1 tests failed.
FAILED:  org.apache.lucene.search.TestInetAddressRangeQueries.testRandomBig

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at 
__randomizedtesting.SeedInfo.seed([8DE77F4FF83EEB13:AB002C069679793]:0)
at 
org.apache.lucene.util.bkd.HeapPointWriter.writePackedValue(HeapPointWriter.java:107)
at 
org.apache.lucene.util.bkd.HeapPointWriter.append(HeapPointWriter.java:128)
at org.apache.lucene.util.bkd.PointReader.split(PointReader.java:68)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1791)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1805)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1805)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1805)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1805)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1805)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1805)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1805)
at org.apache.lucene.util.bkd.BKDWriter.finish(BKDWriter.java:1008)
at 
org.apache.lucene.index.RandomCodec$1$1.writeField(RandomCodec.java:141)
at 
org.apache.lucene.codecs.PointsWriter.mergeOneField(PointsWriter.java:62)
at org.apache.lucene.codecs.PointsWriter.merge(PointsWriter.java:186)
at 
org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:144)
at 
org.apache.lucene.codecs.asserting.AssertingPointsFormat$AssertingPointsWriter.merge(AssertingPointsFormat.java:142)
at 
org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:187)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:136)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4443)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4083)
at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2247)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5098)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1732)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1464)
at 
org.apache.lucene.search.BaseRangeFieldQueryTestCase.verify(BaseRangeFieldQueryTestCase.java:190)
at 
org.apache.lucene.search.BaseRangeFieldQueryTestCase.doTestRandom(BaseRangeFieldQueryTestCase.java:160)
at 
org.apache.lucene.search.BaseRangeFieldQueryTestCase.testRandomBig(BaseRangeFieldQueryTestCase.java:75)
at 
org.apache.lucene.search.TestInetAddressRangeQueries.testRandomBig(TestInetAddressRangeQueries.java:81)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)




Build Log:
[...truncated 8945 lines...]
   [junit4] Suite: org.apache.lucene.search.TestInetAddressRangeQueries
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestInetAddressRangeQueries -Dtests.method=testRandomBig 
-Dtests.seed=8DE77F4FF83EEB13 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.3/test-data/enwiki.random.lines.txt
 -Dtests.locale=pt-PT -Dtests.timezone=America/Santo_Domingo 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   99.8s J1 | TestInetAddressRangeQueries.testRandomBig <<<
   [junit4]> Throwable #1: java.lang.OutOfMemoryError: Java heap space
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([8DE77F4FF83EEB13:AB002C069679793]:0)
   [junit4]>at 
org.apache.lucene.util.bkd.HeapPointWriter.writePackedValue(HeapPointWriter.java:107)
   [junit4]>at 
org.apache.lucene.util.bkd.HeapPointWriter.append(HeapPointWriter.java:128)
   [junit4]>at 
org.apache.lucene.util.bkd.PointReader.split(PointReader.java:68)
   [junit4]>at 
org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1791)
   [junit4]>at 
org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1805)
   [junit4]>at 
org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1805)
   [junit4]>at 
org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1805)
   [junit4]>at 
org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1805)
   [junit4]>at 
org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1805)
   [junit4]>at 

[jira] [Commented] (LUCENE-8202) Add a FixedShingleFilter

2018-03-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408314#comment-16408314
 ] 

ASF subversion and git services commented on LUCENE-8202:
-

Commit 2c4b78c43fe2e30ef748af34a1daa174d66e29cc in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2c4b78c ]

LUCENE-8202: Add checks for shingle size


> Add a FixedShingleFilter
> 
>
> Key: LUCENE-8202
> URL: https://issues.apache.org/jira/browse/LUCENE-8202
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8202.patch, LUCENE-8202.patch, LUCENE-8202.patch
>
>
> In LUCENE-3475 I tried to make a ShingleGraphFilter that could accept and 
> emit arbitrary graphs, while duplicating all the functionality of the 
> existing ShingleFilter.  This ends up being extremely hairy, and doesn't play 
> well with query parsers.
> I'd like to step back and try and create a simpler shingle filter that can be 
> used for index-time phrase tokenization only.  It will have a single fixed 
> shingle size, can deal with single-token synonyms, and won't emit unigrams.
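For illustration only, a minimal sketch (plain Python, not the Lucene `FixedShingleFilter` API) of what fixed-size shingling means: every output token is exactly `size` consecutive input tokens joined together, and no unigrams are emitted. The helper name `fixed_shingles` is hypothetical.

```python
def fixed_shingles(tokens, size=2, sep=" "):
    """Return only shingles of exactly `size` consecutive tokens (no unigrams)."""
    return [sep.join(tokens[i:i + size]) for i in range(len(tokens) - size + 1)]

print(fixed_shingles(["please", "divide", "this", "sentence"], size=2))
# → ['please divide', 'divide this', 'this sentence']
```

Because the shingle size is fixed, there is no need to handle variable-length paths through a token graph, which is what made the earlier ShingleGraphFilter attempt so hairy.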






[jira] [Commented] (LUCENE-8202) Add a FixedShingleFilter

2018-03-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408313#comment-16408313
 ] 

ASF subversion and git services commented on LUCENE-8202:
-

Commit bd6cf168e0e129fa22545a7f614b2b146bd5f202 in lucene-solr's branch 
refs/heads/branch_7x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bd6cf16 ]

LUCENE-8202: Add checks for shingle size


> Add a FixedShingleFilter
> 
>
> Key: LUCENE-8202
> URL: https://issues.apache.org/jira/browse/LUCENE-8202
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8202.patch, LUCENE-8202.patch, LUCENE-8202.patch
>
>
> In LUCENE-3475 I tried to make a ShingleGraphFilter that could accept and 
> emit arbitrary graphs, while duplicating all the functionality of the 
> existing ShingleFilter.  This ends up being extremely hairy, and doesn't play 
> well with query parsers.
> I'd like to step back and try and create a simpler shingle filter that can be 
> used for index-time phrase tokenization only.  It will have a single fixed 
> shingle size, can deal with single-token synonyms, and won't emit unigrams.






[jira] [Commented] (LUCENE-8213) offload caching to a dedicated threadpool

2018-03-21 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408260#comment-16408260
 ] 

Adrien Grand commented on LUCENE-8213:
--

You can still create an IndexSearcher with a single slice and/or wrap the cache 
to use a custom threadpool instead of the one from IndexSearcher, this is fine. 
I'd just like to make common use-cases, such as trading throughput for latency, 
simple.

> offload caching to a dedicated threadpool
> -
>
> Key: LUCENE-8213
> URL: https://issues.apache.org/jira/browse/LUCENE-8213
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/query/scoring
>Affects Versions: 7.2.1
>Reporter: Amir Hadadi
>Priority: Minor
>  Labels: performance
>
> IndexOrDocValuesQuery allows to combine non selective range queries with a 
> selective lead iterator in an optimized way. However, the range query at some 
> point gets cached by a querying thread in LRUQueryCache, which negates the 
> optimization of IndexOrDocValuesQuery for that specific query.
> It would be nice to see a caching implementation that offloads to a different 
> thread pool, so that queries involving IndexOrDocValuesQuery would have 
> consistent performance characteristics.
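As a rough sketch of the idea (plain Python with assumed names, not the actual LRUQueryCache internals): on a miss the querying thread answers the query inline, while populating the cache entry is handed off to a dedicated pool, so query latency never includes the cache-population cost.

```python
from concurrent.futures import ThreadPoolExecutor

class OffloadedCache:
    """Toy cache: lookups are synchronous; population happens on a
    dedicated pool so the querying thread never pays the caching cost."""
    def __init__(self, compute, workers=2):
        self._compute = compute
        self._store = {}
        self._pool = ThreadPoolExecutor(max_workers=workers)

    def get(self, key):
        if key in self._store:
            return self._store[key]          # cache hit, served immediately
        # Miss: schedule population in the background, answer inline.
        self._pool.submit(self._populate, key)
        return self._compute(key)

    def _populate(self, key):
        self._store[key] = self._compute(key)

cache = OffloadedCache(lambda q: q.upper())
print(cache.get("price:[10 TO 20]"))  # → PRICE:[10 TO 20], computed inline
```

The trade-off is the same one discussed above: the first query does duplicate work (it computes the answer itself while the pool builds the cache entry), in exchange for consistent latency on the query path.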






[jira] [Updated] (SOLR-12134) validate links to javadocs in ref-guide & hook all ref-guide validation into top level documentation/precommit

2018-03-21 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-12134:

Attachment: nocommit.SOLR-12134.sample-failures.patch
SOLR-12134.patch

> validate links to javadocs in ref-guide & hook all ref-guide validation into 
> top level documentation/precommit
> --
>
> Key: SOLR-12134
> URL: https://issues.apache.org/jira/browse/SOLR-12134
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-12134.patch, 
> nocommit.SOLR-12134.sample-failures.patch
>
>
> We've seen a couple problems come up recently where the ref-guide had broken 
> links to javadocs.
> In some cases these are because people made typos in java classnames / 
> pathnames while editing the docs - but in other cases the problems were that 
> the docs were correct at one point, but then later the class was 
> moved/renamed/removed, or had its access level downgraded from public to 
> private (after deprecation)
> I've worked up a patch with some ideas to help us catch these types of 
> mistakes - and in general to hook the "bare-bones HTML" validation (which 
> does not require jekyll or any non-ivy managed external dependencies) into 
> {{ant precommit}}
> Details to follow in comment/patch...






[jira] [Commented] (SOLR-12134) validate links to javadocs in ref-guide & hook all ref-guide validation into top level documentation/precommit

2018-03-21 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408244#comment-16408244
 ] 

Hoss Man commented on SOLR-12134:
-

I think we should make 2 broad changes to the build, which i've been testing 
out in a patch...
 # add an optional sys prop when building the ref guide to use 
{{local.javadocs}} paths in the html versions of the guide
 ** when using {{local.javadocs}} the CheckLinksAndAnchors code will also 
validate that any javadoc file mentioned in the ref-guide exists locally – so 
you have to have built all javadocs to use it
 ** The PDF will never use local javadocs paths, always absolute urls on 
lucene.apache.org
 # update {{cd solr; ant documentation}} to also build the pdf (and 
bare-bones-html) versions of the guide using this {{local.javadocs}} option
 ** This will ensure that both {{ant precommit}} and any jenkins job that 
builds documentation will fail if someone breaks the ref-guide (either 
directly, or by changing javadocs in a way that will break the ref-guide)

Things that won't change...
 * if you run {{cd solr/solr-ref-guide; ant }} it should still work 
exactly as before
 ** intra ref-guide links will still be validated
 ** links to javadocs will still use absolute URLs...
 *** you won't know if they are broken – but on the flip side you can rapidly 
rebuild the ref-guide w/o needing to build all lucene/solr javadocs
 * the release process for either lucene/solr or the ref-guide stays the same

 ** In general this moves us closer to having a unified release process, but 
it's not all the way there
 ** Notable: the "official builds" of the ref-guide (both in PDF and the hosted 
HTML) will still use absolute URLs to link to javadocs


Some examples...
{code:java}
cd solr/solr-ref-guide
ant build-site   # no change from existing behavior
ant bare-bones-html-validation   # no change from existing behavior
ant build-pdf# no change from existing behavior

ant build-site -Dlocal.javadocs=true  # local jekyll build will now link to local
                                      # javadoc files and build will fail if any/all
                                      # javadoc links don't exist
{code}
{code:java}
cd solr
ant documentation  # now builds & validates barebones html ref-guide after 
building javadocs
{code}
{code:java}
ant precommit  # now builds & validates barebones html ref-guide after 
building javadocs
{code}

The attached patch makes this all work (including a fix to an existing link in 
{{solr-tutorial.adoc}} which currently depends on some weird rewrite rule 
behavior to work). If you also apply the 
"nocommit.SOLR-12134.sample-failures.patch" file you can see how some various 
examples of problems will affect things like {{cd solr/solr-ref-guide; ant}} vs 
{{cd solr; ant documentation}} (Of course: to test {{ant precommit}} you'll 
have to remove the 'nocommit' test from that patch, since it will cause 
precommit to fail fast before it even tries to build documentation)

> validate links to javadocs in ref-guide & hook all ref-guide validation into 
> top level documentation/precommit
> --
>
> Key: SOLR-12134
> URL: https://issues.apache.org/jira/browse/SOLR-12134
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
>
> We've seen a couple problems come up recently where the ref-guide had broken 
> links to javadocs.
> In some cases these are because people made typos in java classnames / 
> pathnames while editing the docs - but in other cases the problems were that 
> the docs were correct at one point, but then later the class was 
> moved/renamed/removed, or had its access level downgraded from public to 
> private (after deprecation)
> I've worked up a patch with some ideas to help us catch these types of 
> mistakes - and in general to hook the "bare-bones HTML" validation (which 
> does not require jekyll or any non-ivy managed external dependencies) into 
> {{ant precommit}}
> Details to follow in comment/patch...






[jira] [Created] (SOLR-12134) validate links to javadocs in ref-guide & hook all ref-guide validation into top level documentation/precommit

2018-03-21 Thread Hoss Man (JIRA)
Hoss Man created SOLR-12134:
---

 Summary: validate links to javadocs in ref-guide & hook all 
ref-guide validation into top level documentation/precommit
 Key: SOLR-12134
 URL: https://issues.apache.org/jira/browse/SOLR-12134
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
Reporter: Hoss Man
Assignee: Hoss Man


We've seen a couple problems come up recently where the ref-guide had broken 
links to javadocs.

In some cases these are because people made typos in java classnames / 
pathnames while editing the docs - but in other cases the problems were that 
the docs were correct at one point, but then later the class was 
moved/renamed/removed, or had its access level downgraded from public to 
private (after deprecation)

I've worked up a patch with some ideas to help us catch these types of mistakes 
- and in general to hook the "bare-bones HTML" validation (which does not 
require jekyll or any non-ivy managed external dependencies) into {{ant 
precommit}}

Details to follow in comment/patch...
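The kind of check described above can be sketched roughly as follows. This is a hypothetical illustration only (all class, method, and path names here are invented, and the real patch hooks into the ref-guide's bare-bones HTML validation rather than a standalone tool): extract javadoc hrefs from rendered HTML and flag any whose target class is not in the set of currently published public classes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: find javadoc links in bare-bones HTML whose target
// class is no longer published (moved, renamed, removed, or made non-public).
public class JavadocLinkCheck {
    // Matches href="...apidocs.../SomeClass.html"; group 1 is the full href,
    // group 2 is the bare class name.
    private static final Pattern HREF =
        Pattern.compile("href=\"([^\"]*apidocs[^\"]*?([A-Za-z0-9_]+)\\.html)\"");

    /** Returns the hrefs whose class name is not in the published set. */
    public static List<String> brokenLinks(String html, Set<String> publicClasses) {
        List<String> broken = new ArrayList<>();
        Matcher m = HREF.matcher(html);
        while (m.find()) {
            if (!publicClasses.contains(m.group(2))) {
                broken.add(m.group(1));
            }
        }
        return broken;
    }

    public static void main(String[] args) {
        String html = "<a href=\"apidocs/org/apache/solr/core/SolrCore.html\">ok</a>"
            + "<a href=\"apidocs/org/apache/solr/core/Gone.html\">stale</a>";
        System.out.println(brokenLinks(html, Set.of("SolrCore")));
    }
}
```

A precommit hook would run a check like this over every generated ref-guide page and fail the build on any non-empty result.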






[jira] [Commented] (SOLR-10734) Multithreaded test/support for AtomicURP broken

2018-03-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16408196#comment-16408196
 ] 

ASF subversion and git services commented on SOLR-10734:


Commit eb8a5a882f879a51389b5d43f74f3aceac9e68c9 in lucene-solr's branch 
refs/heads/branch_7_3 from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=eb8a5a8 ]

SOLR-10734: Make this test an AwaitsFix again, it fails nearly all the time


> Multithreaded test/support for AtomicURP broken
> ---
>
> Key: SOLR-10734
> URL: https://issues.apache.org/jira/browse/SOLR-10734
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-10734.patch, SOLR-10734.patch, SOLR-10734.patch, 
> Screen Shot 2017-05-31 at 4.50.23 PM.png, log-snippet, testMaster_2500, 
> testResults7_10, testResultsMaster_10
>
>
> The multithreaded test doesn't actually start the threads, but only invokes 
> run() directly. Hence, the join afterwards doesn't do anything.
> {code}
> diff --git 
> a/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java
>  
> b/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java
> index f3f833d..10b7770 100644
> --- 
> a/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java
> +++ 
> b/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java
> @@ -238,7 +238,7 @@ public class AtomicUpdateProcessorFactoryTest extends 
> SolrTestCaseJ4 {
>}
>  }
>};
> -  t.run();
> +  t.run(); // red flag, shouldn't this be t.start?
>threads.add(t);
>finalCount += index; //int_i
>  }
> {code}
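The underlying bug flagged in the diff above: `Thread.run()` is an ordinary method call that executes the body on the calling thread, so the test never forked any threads and the later `join()` returned immediately. A self-contained demonstration (class and method names invented for illustration):

```java
// Demonstrates why calling Thread.run() does not start a new thread,
// while Thread.start() does.
public class RunVsStart {
    /** Executes a task via run() or start() and reports whether it ran on a
     *  thread other than the caller's. */
    public static boolean ranOnNewThread(boolean useStart) {
        final String caller = Thread.currentThread().getName();
        final String[] ranOn = new String[1];
        Thread t = new Thread(() -> ranOn[0] = Thread.currentThread().getName());
        if (useStart) {
            t.start();  // schedules the body on a fresh thread
        } else {
            t.run();    // ordinary method call: the body runs inline on the caller
        }
        try {
            t.join();   // a no-op if the thread was never started
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return !caller.equals(ranOn[0]);
    }

    public static void main(String[] args) {
        System.out.println(ranOnNewThread(false)); // false: run() stayed on the caller
        System.out.println(ranOnNewThread(true));  // true: start() forked a thread
    }
}
```

This is why switching `t.run()` to `t.start()` (with a real `join()` afterwards) makes the test actually multithreaded.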






[jira] [Comment Edited] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-21 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16408182#comment-16408182
 ] 

Cao Manh Dat edited comment on SOLR-12087 at 3/21/18 4:28 PM:
--

Uploaded a draft patch for fixing the problem. The idea is that when the leader 
publishes the state of a replica as DOWN in old LIR, if the replica cannot be 
found in the current {{states.json}}, it does nothing.
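The guard being described can be sketched as below. Note this is a simplified, hypothetical model (the names are illustrative; the actual patch operates on Solr's leader-initiated-recovery code path, not a plain map): before publishing DOWN, check whether the replica still exists in the current {{states.json}} view, and skip the publish if it is gone.

```java
import java.util.Map;

// Hypothetical sketch of the LIR guard: don't publish DOWN for a replica
// that no longer appears in the current states.json, since doing so would
// resurrect an already-deleted replica in the cluster state.
public class LirDownGuard {
    /** Returns the state to publish, or null to do nothing. */
    public static String stateToPublish(Map<String, String> statesJsonReplicas,
                                        String replicaName) {
        if (!statesJsonReplicas.containsKey(replicaName)) {
            return null; // replica was deleted: skip the DOWN publish
        }
        return "down";
    }
}
```

The interesting case is the null branch: without it, the leader keeps re-publishing DOWN for a core that no longer exists, which is exactly the log spam reported in this issue.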


was (Author: caomanhdat):
Uploaded a draft patch for fixing the old LIR problem. The idea is that when 
the leader publishes the state of a replica as DOWN in old LIR, if the replica 
cannot be found in the current {{states.json}}, it does nothing.

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: SOLR-12087.patch, SOLR-12087.test.patch, Screen Shot 
> 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en masse; the 
> result is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually (read every second) 
> attempts to initiate recovery on the replica and fails to do so because the 
> core does not exist. As a result it continually publishes a down state for 
> the replica to zookeeper.
>  # The deleted replica node spams that it cannot locate the core because it's 
> been deleted.
> During this period of time, we see an increase in ZK network connectivity 
> overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
> shard until it's removed from the state)
> My guess is there are two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted is having trouble being deleted from state.json 
> in ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.






[jira] [Comment Edited] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-21 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16408143#comment-16408143
 ] 

Cao Manh Dat edited comment on SOLR-12087 at 3/21/18 4:26 PM:
--

I uploaded a test patch which simulates the above case; the test failed 50% of 
the time on my PC (both on master and branch_7_2).


was (Author: caomanhdat):
I uploaded a test patch which simulates the above case; the test failed 100% of 
the time on my PC (both on master and branch_7_2).

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: SOLR-12087.patch, SOLR-12087.test.patch, Screen Shot 
> 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en masse; the 
> result is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually (read every second) 
> attempts to initiate recovery on the replica and fails to do so because the 
> core does not exist. As a result it continually publishes a down state for 
> the replica to zookeeper.
>  # The deleted replica node spams that it cannot locate the core because it's 
> been deleted.
> During this period of time, we see an increase in ZK network connectivity 
> overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
> shard until it's removed from the state)
> My guess is there are two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted is having trouble being deleted from state.json 
> in ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.






[jira] [Commented] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-21 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16408182#comment-16408182
 ] 

Cao Manh Dat commented on SOLR-12087:
-

Uploaded a draft patch for fixing the old LIR problem. The idea is that when 
the leader publishes the state of a replica as DOWN in old LIR, if the replica 
cannot be found in the current {{states.json}}, it does nothing.

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: SOLR-12087.patch, SOLR-12087.test.patch, Screen Shot 
> 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en masse; the 
> result is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually (read every second) 
> attempts to initiate recovery on the replica and fails to do so because the 
> core does not exist. As a result it continually publishes a down state for 
> the replica to zookeeper.
>  # The deleted replica node spams that it cannot locate the core because it's 
> been deleted.
> During this period of time, we see an increase in ZK network connectivity 
> overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
> shard until it's removed from the state)
> My guess is there are two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted is having trouble being deleted from state.json 
> in ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_162) - Build # 21677 - Unstable!

2018-03-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21677/
Java: 64bit/jdk1.8.0_162 -XX:+UseCompressedOops -XX:+UseParallelGC

8 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.RecoveryZkTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([AA1C7753F0B62AA5]:0)


FAILED:  org.apache.solr.cloud.RecoveryZkTest.test

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([AA1C7753F0B62AA5]:0)


FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
timed out waiting for collection1 startAt time to exceed: Wed Mar 21 16:20:20 
GMT 2018

Stack Trace:
java.lang.AssertionError: timed out waiting for collection1 startAt time to 
exceed: Wed Mar 21 16:20:20 GMT 2018
at 
__randomizedtesting.SeedInfo.seed([AA1C7753F0B62AA5:71B77795F59E4316]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1572)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:893)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Updated] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-21 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12087:

Attachment: SOLR-12087.patch

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: SOLR-12087.patch, SOLR-12087.test.patch, Screen Shot 
> 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en masse; the 
> result is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually (read every second) 
> attempts to initiate recovery on the replica and fails to do so because the 
> core does not exist. As a result it continually publishes a down state for 
> the replica to zookeeper.
>  # The deleted replica node spams that it cannot locate the core because it's 
> been deleted.
> During this period of time, we see an increase in ZK network connectivity 
> overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
> shard until it's removed from the state)
> My guess is there are two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted is having trouble being deleted from state.json 
> in ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.






[jira] [Commented] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-21 Thread Jerry Bao (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16408177#comment-16408177
 ] 

Jerry Bao commented on SOLR-12087:
--

[~caomanhdat] That sounds exactly like the case I'm running into. I can't 
verify that I saw the logs you say I should see, but I definitely saw the 
leader logs you mentioned.
{quote}You wrote that

Attempting to delete the downed replicas causes failures because the core does 
not exist anymore.
{quote}
Sorry, I should have been clearer here: it causes failures, but not failures 
that block the deletion of the replica; the replica does eventually get 
deleted.
{quote}Make sure that on the 2nd call of DeleteReplica (for removing zombie 
replica), parameters are correct because the name of the replica may get 
changed, ie: from core_node3 to core_node4.
{quote}
I wrote a small script to find all downed replicas and issue a delete command 
against them, which does take into account the name change.
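Such a cleanup script might look roughly like the sketch below. This is hypothetical (the names are invented; a real script would read CLUSTERSTATUS output and issue the resulting HTTP calls): walk a collection → shard → replica → state view of the cluster state and build a DELETEREPLICA call for every replica reported as "down", keying on the current core_node name so renames are accounted for.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: collect DELETEREPLICA commands for every replica
// whose state is "down" in a (collection -> shard -> replica -> state)
// view of the cluster state.
public class DownedReplicaSweep {
    public static List<String> deleteCommands(
            Map<String, Map<String, Map<String, String>>> clusterState) {
        List<String> cmds = new ArrayList<>();
        clusterState.forEach((collection, shards) ->
            shards.forEach((shard, replicas) ->
                replicas.forEach((replica, state) -> {
                    if ("down".equals(state)) {
                        cmds.add("/admin/collections?action=DELETEREPLICA"
                            + "&collection=" + collection
                            + "&shard=" + shard
                            + "&replica=" + replica);
                    }
                })));
        return cmds;
    }
}
```

Because the replica name is taken from the live cluster state rather than hard-coded, a rename from, say, core_node3 to core_node4 is picked up automatically.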

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: SOLR-12087.test.patch, Screen Shot 2018-03-16 at 
> 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en masse; the 
> result is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually (read every second) 
> attempts to initiate recovery on the replica and fails to do so because the 
> core does not exist. As a result it continually publishes a down state for 
> the replica to zookeeper.
>  # The deleted replica node spams that it cannot locate the core because it's 
> been deleted.
> During this period of time, we see an increase in ZK network connectivity 
> overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
> shard until it's removed from the state)
> My guess is there are two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted is having trouble being deleted from state.json 
> in ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.






[jira] [Commented] (SOLR-11601) geodist fails for some fields when field is in parenthesis instead of sfield param

2018-03-21 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16408160#comment-16408160
 ] 

Amrit Sarkar commented on SOLR-11601:
-

[~dsmiley], thank you for your feedback, as always. I see. Now I think I may 
have written a number of tests using that flawed technique even though 
{{assertQEx}} was available :) Thank you for pointing that out; I will take 
care in the future.

Updated the tests. Please note that the error which still gets emitted is
{{"sort param could not be parsed as a query, and is not a field that exists in 
the index: geodist(b4_location__geo_si,47.36667,8.55)"}}
and the Solr logs will point to the actual problem.
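For readers unfamiliar with the pattern being discussed: the general shape of an {{assertQEx}}-style check is sketched below. This is a generic illustration, not Solr's actual {{assertQEx}} (which operates on Solr requests); the point is that the helper fails if no exception is thrown or if the message is wrong, instead of a bare try/catch that can silently pass.

```java
// Generic sketch of an "assert this call fails with this message" helper,
// the shape of check that assertQEx provides for Solr queries.
public class ExpectThrows {
    /** Runs the call, asserting it throws a RuntimeException whose message
     *  contains the expected fragment; returns the message for further checks. */
    public static String expectError(Runnable call, String expectedFragment) {
        try {
            call.run();
        } catch (RuntimeException e) {
            String msg = String.valueOf(e.getMessage());
            if (msg.contains(expectedFragment)) {
                return msg;
            }
            throw new AssertionError("unexpected message: " + msg);
        }
        // Reaching here means the call succeeded when it should have failed.
        throw new AssertionError("expected an exception, but none was thrown");
    }
}
```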

> geodist fails for some fields when field is in parenthesis instead of sfield 
> param
> --
>
> Key: SOLR-11601
> URL: https://issues.apache.org/jira/browse/SOLR-11601
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: spatial
>Affects Versions: 6.6
>Reporter: Clemens Wyss
>Assignee: David Smiley
>Priority: Minor
> Attachments: SOLR-11601.patch, SOLR-11601.patch, SOLR-11601.patch
>
>
> I'm switching my schemas from the deprecated solr.LatLonType to 
> solr.LatLonPointSpatialField.
> Now my sort query (which used to work with solr.LatLonType):
> *sort=geodist(b4_location__geo_si,47.36667,8.55) asc*
> raises the error
> {color:red}*"sort param could not be parsed as a query, and is not a field 
> that exists in the index: geodist(b4_location__geo_si,47.36667,8.55)"*{color}
> Invoking the sort using the syntax 
> {color:#14892c}sfield=b4_location__geo_si&pt=47.36667,8.55&sort=geodist() asc
> works as expected though...{color}






[jira] [Updated] (SOLR-11601) geodist fails for some fields when field is in parenthesis instead of sfield param

2018-03-21 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11601:

Attachment: SOLR-11601.patch

> geodist fails for some fields when field is in parenthesis instead of sfield 
> param
> --
>
> Key: SOLR-11601
> URL: https://issues.apache.org/jira/browse/SOLR-11601
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: spatial
>Affects Versions: 6.6
>Reporter: Clemens Wyss
>Assignee: David Smiley
>Priority: Minor
> Attachments: SOLR-11601.patch, SOLR-11601.patch, SOLR-11601.patch
>
>
> I'm switching my schemas from the deprecated solr.LatLonType to 
> solr.LatLonPointSpatialField.
> Now my sort query (which used to work with solr.LatLonType):
> *sort=geodist(b4_location__geo_si,47.36667,8.55) asc*
> raises the error
> {color:red}*"sort param could not be parsed as a query, and is not a field 
> that exists in the index: geodist(b4_location__geo_si,47.36667,8.55)"*{color}
> Invoking the sort using the syntax 
> {color:#14892c}sfield=b4_location__geo_si&pt=47.36667,8.55&sort=geodist() asc
> works as expected though...{color}






[jira] [Commented] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-21 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16408143#comment-16408143
 ] 

Cao Manh Dat commented on SOLR-12087:
-

I uploaded a test patch which simulates the above case; the test failed 100% of 
the time on my PC (both on master and branch_7_2).

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: SOLR-12087.test.patch, Screen Shot 2018-03-16 at 
> 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en masse; the 
> result is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually (read every second) 
> attempts to initiate recovery on the replica and fails to do so because the 
> core does not exist. As a result it continually publishes a down state for 
> the replica to zookeeper.
>  # The deleted replica node spams that it cannot locate the core because it's 
> been deleted.
> During this period of time, we see an increase in ZK network connectivity 
> overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
> shard until it's removed from the state)
> My guess is there are two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted is having trouble being deleted from state.json 
> in ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.






[jira] [Updated] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-21 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12087:

Attachment: SOLR-12087.test.patch

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: SOLR-12087.test.patch, Screen Shot 2018-03-16 at 
> 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en masse; the 
> result is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually (read every second) 
> attempts to initiate recovery on the replica and fails to do so because the 
> core does not exist. As a result it continually publishes a down state for 
> the replica to zookeeper.
>  # The deleted replica node spams that it cannot locate the core because it's 
> been deleted.
> During this period of time, we see an increase in ZK network connectivity 
> overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
> shard until it's removed from the state)
> My guess is there are two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted is having trouble being deleted from state.json 
> in ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.






Re: Lucene/Solr 7.3

2018-03-21 Thread Alan Woodward
FYI I’ve started building a release candidate.

I’ve updated the build script on 7.3 to allow building with ant 1.10; if this 
doesn’t produce any problems then I’ll forward-port it to 7x and master.

> On 21 Mar 2018, at 02:37, Đạt Cao Mạnh wrote:
> 
> Hi Alan, 
> 
> I committed the fix as well as resolved the issue.
> 
> Thanks!
> 
> On Tue, Mar 20, 2018 at 9:27 PM Alan Woodward wrote:
> OK, thanks. Let me know when it’s in.
> 
> 
>> On 20 Mar 2018, at 14:07, Đạt Cao Mạnh wrote:
>> 
>> Hi  Alan, guys,
>> 
>> I found a blocker issue, SOLR-12129. I've already uploaded a patch and am 
>> beasting the tests; if the result is good I will commit and notify you guys!
>> 
>> Thanks!
>> 
>> On Tue, Mar 20, 2018 at 2:37 AM Alan Woodward wrote:
>> Go ahead!
>> 
>> 
>>> On 19 Mar 2018, at 18:33, Andrzej Białecki wrote:
>>> 
>>> Alan,
>>> 
>>> I would like to commit the change in SOLR-11407 
>>> (78d592d2fdfc64c227fc1bcb8fafa3d806fbb384) to branch_7_3. This fixes the 
>>> logic that waits for replica recovery and provides more details about any 
>>> failures.
>>> 
 On 17 Mar 2018, at 13:01, Alan Woodward > wrote:
 
 I’d like to build the RC on Monday, but it depends on SOLR-12070.  I can 
 help debugging that if need be.
 
 +1 to backport your fixes
 
> On 17 Mar 2018, at 01:42, Varun Thacker  > wrote:
> 
> I was going through the blockers for 7.3 and only SOLR-12070 came up. Is 
> the fix complete for this Andrzej?
> 
> @Alan : When do you plan on cutting an RC ? I committed SOLR-12083 
> yesterday and SOLR-12063 today to master/branch_7x. Both are important 
> fixes for CDCR so if you are okay I can backport it to the release branch
> 
> On Fri, Mar 16, 2018 at 4:58 PM, Đạt Cao Mạnh  > wrote:
> Hi guys, Alan
> 
> I committed the fix for SOLR-12110 to branch_7_3
> 
> Thanks!
> 
> On Fri, Mar 16, 2018 at 5:43 PM Đạt Cao Mạnh  > wrote:
> Hi Alan,
> 
> Sure the issue is marked as Blocker for 7.3.
> 
> On Fri, Mar 16, 2018 at 3:12 PM Alan Woodward  > wrote:
> Thanks Đạt, could you mark the issue as a Blocker and let me know when 
> it’s been resolved?
> 
>> On 16 Mar 2018, at 02:05, Đạt Cao Mạnh > > wrote:
>> 
>> Hi guys, Alan,
>> 
>> I found a blocker issue, SOLR-12110, while investigating a test failure. 
>> I've already uploaded a patch and am beasting the tests; if the result is 
>> good I will commit soon.
>> 
>> Thanks!
>>  
>> On Tue, Mar 13, 2018 at 7:49 PM Alan Woodward > > wrote:
>> Just realised that I don’t have an ASF Jenkins account - Uwe or Steve, 
>> can you give me a hand setting up the 7.3 Jenkins jobs?
>> 
>> Thanks, Alan
>> 
>> 
>>> On 12 Mar 2018, at 09:32, Alan Woodward >> > wrote:
>>> 
>>> I’ve created the 7.3 release branch.  I’ll leave 24 hours for bug-fixes 
>>> and doc patches and then create a release candidate.
>>> 
>>> We’re now in feature-freeze for 7.3, so please bear in mind the 
>>> following:
>>> - No new features may be committed to the branch.
>>> - Documentation patches, build patches and serious bug fixes may be 
>>> committed to the branch. However, you should submit all patches you want 
>>> to commit to Jira first to give others the chance to review and possibly 
>>> vote against the patch. Keep in mind that it is our main intention to 
>>> keep the branch as stable as possible.
>>> - All patches that are intended for the branch should first be committed 
>>> to the unstable branch, merged into the stable branch, and then into the 
>>> current release branch.
>>> - Normal unstable and stable branch development may continue as usual. 
>>> However, if you plan to commit a big change to the unstable branch while 
>>> the branch feature freeze is in effect, think twice: can't the addition 
>>> wait a couple more days? Merges of bug fixes into the branch may become 
>>> more difficult.
>>> - Only Jira issues with Fix version "7.3" and priority "Blocker" will 
>>> delay a release candidate build.
>>> 
>>> 
 On 9 Mar 2018, at 16:43, Alan Woodward 

[jira] [Commented] (SOLR-10912) Adding automatic patch validation

2018-03-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408092#comment-16408092
 ] 

ASF subversion and git services commented on SOLR-10912:


Commit c6ef6b67b10c74e9e427860873320f0d77b3fb3b in lucene-solr's branch 
refs/heads/branch_7x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c6ef6b6 ]

SOLR-10912: fix routing of Solr non-contrib build output dirs (e.g. solr/core 
-> ../build/solr-core; previously -> ../build/core)


> Adding automatic patch validation
> -
>
> Key: SOLR-10912
> URL: https://issues.apache.org/jira/browse/SOLR-10912
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Mano Kovacs
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-10912.ok-patch-in-core.patch, SOLR-10912.patch, 
> SOLR-10912.patch, SOLR-10912.sample-patch.patch, 
> SOLR-10912.solj-contrib-facet-error.patch
>
>
> Proposing the introduction of automated patch validation, similar to what 
> Hadoop and other Apache projects are using (see link). This would ensure 
> that every patch passes a certain set of criteria before getting approved. 
> It would save time for developers (faster feedback loop), save time for 
> committers (fewer steps to do manually), and would increase quality.
> Hadoop is currently using Apache Yetus to run validations, which seems to be 
> a good starting direction. This jira could serve as the board for discussing 
> the preferred solution.






[jira] [Commented] (SOLR-10912) Adding automatic patch validation

2018-03-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408093#comment-16408093
 ] 

ASF subversion and git services commented on SOLR-10912:


Commit 51a6bec48d0bf6d9b972c69ba87a12ac44f485e4 in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=51a6bec ]

SOLR-10912: fix routing of Solr non-contrib build output dirs (e.g. solr/core 
-> ../build/solr-core; previously -> ../build/core)


> Adding automatic patch validation
> -
>
> Key: SOLR-10912
> URL: https://issues.apache.org/jira/browse/SOLR-10912
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Mano Kovacs
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-10912.ok-patch-in-core.patch, SOLR-10912.patch, 
> SOLR-10912.patch, SOLR-10912.sample-patch.patch, 
> SOLR-10912.solj-contrib-facet-error.patch
>
>
> Proposing the introduction of automated patch validation, similar to what 
> Hadoop and other Apache projects are using (see link). This would ensure 
> that every patch passes a certain set of criteria before getting approved. 
> It would save time for developers (faster feedback loop), save time for 
> committers (fewer steps to do manually), and would increase quality.
> Hadoop is currently using Apache Yetus to run validations, which seems to be 
> a good starting direction. This jira could serve as the board for discussing 
> the preferred solution.






[jira] [Commented] (SOLR-11601) geodist fails for some fields when field is in parenthesis instead of sfield param

2018-03-21 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408073#comment-16408073
 ] 

Steve Rowe commented on SOLR-11601:
---

bq. BTW Steve Rowe Thanks for working on patch validation  Some work to do 
still: "core in the patch failed." is confusing and has a typo.  Core->code.

This is not a typo: "core" is the (short) name of the module.  Unfortunately 
Yetus reduces module paths like ({{solr/core}}, {{lucene/analysis/icu}}) to 
({{core}}, {{icu}}).

bq. Even then, the failing test here is not one modified by this patch; it's 
some other test.

Agreed, though fortunately this is getting better as the project makes inroads 
on reducing test flakiness.


> geodist fails for some fields when field is in parenthesis instead of sfield 
> param
> --
>
> Key: SOLR-11601
> URL: https://issues.apache.org/jira/browse/SOLR-11601
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Affects Versions: 6.6
>Reporter: Clemens Wyss
>Assignee: David Smiley
>Priority: Minor
> Attachments: SOLR-11601.patch, SOLR-11601.patch
>
>
> I'm switching my schemas from the deprecated solr.LatLonType to 
> solr.LatLonPointSpatialField.
> Now my sort query (which used to work with solr.LatLonType):
> *sort=geodist(b4_location__geo_si,47.36667,8.55) asc*
> raises the error
> {color:red}*"sort param could not be parsed as a query, and is not a field 
> that exists in the index: geodist(b4_location__geo_si,47.36667,8.55)"*{color}
> Invoking the sort using the syntax 
> {color:#14892c}sfield=b4_location__geo_si&pt=47.36667,8.55&sort=geodist() asc
> works as expected though...{color}
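For context, geodist() returns the great-circle distance between a document's point and the query point. A plain-Java haversine sketch of what is being sorted on (this is only an illustration, not Solr's actual implementation):

```java
/** Haversine great-circle distance in kilometers -- a sketch of what
 *  geodist() computes; not Solr's actual implementation. */
public class GeodistSketch {
    static final double EARTH_MEAN_RADIUS_KM = 6371.0087714;

    static double geodist(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                   * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_MEAN_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // distance from the query point used in the sort above to central Zurich
        System.out.println(geodist(47.36667, 8.55, 47.3769, 8.5417));
    }
}
```

Whether the function is invoked as geodist(field,lat,lon) or via sfield/pt params, the computed distance is the same; the bug under discussion is only about how the sort param is parsed.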






[jira] [Resolved] (LUCENE-8202) Add a FixedShingleFilter

2018-03-21 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-8202.
---
   Resolution: Fixed
Fix Version/s: 7.4

> Add a FixedShingleFilter
> 
>
> Key: LUCENE-8202
> URL: https://issues.apache.org/jira/browse/LUCENE-8202
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8202.patch, LUCENE-8202.patch, LUCENE-8202.patch
>
>
> In LUCENE-3475 I tried to make a ShingleGraphFilter that could accept and 
> emit arbitrary graphs, while duplicating all the functionality of the 
> existing ShingleFilter.  This ends up being extremely hairy, and doesn't play 
> well with query parsers.
> I'd like to step back and try and create a simpler shingle filter that can be 
> used for index-time phrase tokenization only.  It will have a single fixed 
> shingle size, can deal with single-token synonyms, and won't emit unigrams.
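As a rough illustration of the intended behavior (plain Java over a flat token list, not the real Lucene TokenFilter API, which operates on TokenStreams with attributes and synonym graphs), a fixed-size shingler that emits no unigrams might look like:

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch only -- not Lucene's FixedShingleFilter. */
public class FixedShingleSketch {

    /** Emit only shingles of exactly {@code size} tokens; no unigrams. */
    public static List<String> shingles(List<String> tokens, int size) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i + size <= tokens.size(); i++) {
            out.add(String.join(" ", tokens.subList(i, i + size)));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(shingles(List.of("please", "divide", "this", "sentence"), 2));
    }
}
```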






[jira] [Commented] (LUCENE-8202) Add a FixedShingleFilter

2018-03-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408056#comment-16408056
 ] 

ASF subversion and git services commented on LUCENE-8202:
-

Commit fac84c01c84b3693a8c1251ae77f349c38497e06 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fac84c0 ]

LUCENE-8202: Add FixedShingleFilter


> Add a FixedShingleFilter
> 
>
> Key: LUCENE-8202
> URL: https://issues.apache.org/jira/browse/LUCENE-8202
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8202.patch, LUCENE-8202.patch, LUCENE-8202.patch
>
>
> In LUCENE-3475 I tried to make a ShingleGraphFilter that could accept and 
> emit arbitrary graphs, while duplicating all the functionality of the 
> existing ShingleFilter.  This ends up being extremely hairy, and doesn't play 
> well with query parsers.
> I'd like to step back and try and create a simpler shingle filter that can be 
> used for index-time phrase tokenization only.  It will have a single fixed 
> shingle size, can deal with single-token synonyms, and won't emit unigrams.






[jira] [Commented] (LUCENE-8202) Add a FixedShingleFilter

2018-03-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408055#comment-16408055
 ] 

ASF subversion and git services commented on LUCENE-8202:
-

Commit 230a77ce38ebe6294a06aebf23d85b68223b6ec2 in lucene-solr's branch 
refs/heads/branch_7x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=230a77c ]

LUCENE-8202: Add FixedShingleFilter


> Add a FixedShingleFilter
> 
>
> Key: LUCENE-8202
> URL: https://issues.apache.org/jira/browse/LUCENE-8202
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8202.patch, LUCENE-8202.patch, LUCENE-8202.patch
>
>
> In LUCENE-3475 I tried to make a ShingleGraphFilter that could accept and 
> emit arbitrary graphs, while duplicating all the functionality of the 
> existing ShingleFilter.  This ends up being extremely hairy, and doesn't play 
> well with query parsers.
> I'd like to step back and try and create a simpler shingle filter that can be 
> used for index-time phrase tokenization only.  It will have a single fixed 
> shingle size, can deal with single-token synonyms, and won't emit unigrams.






[jira] [Commented] (SOLR-11601) geodist fails for some fields when field is in parenthesis instead of sfield param

2018-03-21 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408038#comment-16408038
 ] 

David Smiley commented on SOLR-11601:
-

BTW [~steve_rowe] Thanks for working on patch validation :-)  Some work to do 
still: "core in the patch failed." is confusing and has a typo.  Core->code.  
Even then, the failing test here is _not_ one modified by this patch; it's some 
other test.  

[~sarkaramr...@gmail.com] please update the test to use {{assertQEx}} or if you 
prefer some similar test infrastructure facility for testing exceptions

> geodist fails for some fields when field is in parenthesis instead of sfield 
> param
> --
>
> Key: SOLR-11601
> URL: https://issues.apache.org/jira/browse/SOLR-11601
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Affects Versions: 6.6
>Reporter: Clemens Wyss
>Assignee: David Smiley
>Priority: Minor
> Attachments: SOLR-11601.patch, SOLR-11601.patch
>
>
> I'm switching my schemas from the deprecated solr.LatLonType to 
> solr.LatLonPointSpatialField.
> Now my sort query (which used to work with solr.LatLonType):
> *sort=geodist(b4_location__geo_si,47.36667,8.55) asc*
> raises the error
> {color:red}*"sort param could not be parsed as a query, and is not a field 
> that exists in the index: geodist(b4_location__geo_si,47.36667,8.55)"*{color}
> Invoking the sort using the syntax 
> {color:#14892c}sfield=b4_location__geo_si&pt=47.36667,8.55&sort=geodist() asc
> works as expected though...{color}






[jira] [Commented] (SOLR-11913) SolrParams ought to implement Iterable<Map.Entry<String,String[]>>

2018-03-21 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408001#comment-16408001
 ] 

David Smiley commented on SOLR-11913:
-

1. When uploading updated patches, please use an identical filename.  JIRA 
tracks revisions provided that the file name hasn't changed.
2. I suspect your latest patch won't compile.  I see SolrParams now implements 
Iterable (great), but in doing so you must provide an implementation of the 
{{iterator}} method, which you didn't.  Though you did provide a method 
{{getMapEntryIteretor}} which should be renamed to {{iterator}}.
3. 
bq. But there are other classes too which extend SolrParams and need to be 
modified.

_Only_ subclasses of SolrParams that can offer a more efficient implementation 
should override the default implementation you are adding to SolrParams.  
You'll hopefully realize this as you progress.

> SolrParams ought to implement Iterable<Map.Entry<String,String[]>>
> --
>
> Key: SOLR-11913
> URL: https://issues.apache.org/jira/browse/SOLR-11913
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Minor
>  Labels: newdev
> Attachments: SOLR-11913.patch, SOLR-11913_v2.patch
>
>
> SolrParams ought to implement {{Iterable<Map.Entry<String,String[]>>}} so 
> that it's easier to iterate on it, either using Java 5 for-each style or 
> Java 8 streams.  The implementation on ModifiableSolrParams can delegate 
> through to the underlying LinkedHashMap entry set.  The default impl can 
> produce a Map.Entry with a getValue that calls through to getParams.
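A minimal plain-Java sketch of this design (class and method names here are illustrative, not SolrJ's actual API): a base class provides a default iterator() built lazily from the parameter names, and a map-backed subclass overrides it to delegate straight to its LinkedHashMap entry set.

```java
import java.util.AbstractMap;
import java.util.Arrays;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

/** Hypothetical sketch -- names are illustrative, not SolrJ's actual API. */
public class ParamsSketch {

    /** Base class: default iterator() lazily wraps the parameter names,
     *  with getValue() calling through to getParams(). */
    static abstract class ParamsBase implements Iterable<Map.Entry<String, String[]>> {
        abstract Iterator<String> getParameterNamesIterator();
        abstract String[] getParams(String name);

        @Override
        public Iterator<Map.Entry<String, String[]>> iterator() {
            final Iterator<String> names = getParameterNamesIterator();
            return new Iterator<Map.Entry<String, String[]>>() {
                public boolean hasNext() { return names.hasNext(); }
                public Map.Entry<String, String[]> next() {
                    String name = names.next();
                    return new AbstractMap.SimpleImmutableEntry<>(name, getParams(name));
                }
            };
        }
    }

    /** Map-backed subclass: a more efficient override delegating straight
     *  to the LinkedHashMap entry set, as the description suggests. */
    static class MapBackedParams extends ParamsBase {
        private final LinkedHashMap<String, String[]> map = new LinkedHashMap<>();
        void set(String name, String... values) { map.put(name, values); }
        Iterator<String> getParameterNamesIterator() { return map.keySet().iterator(); }
        String[] getParams(String name) { return map.get(name); }

        @Override
        public Iterator<Map.Entry<String, String[]>> iterator() {
            return map.entrySet().iterator();
        }
    }

    public static void main(String[] args) {
        MapBackedParams params = new MapBackedParams();
        params.set("q", "*:*");
        params.set("fl", "id", "name");
        for (Map.Entry<String, String[]> e : params) {  // Java 5 for-each style
            System.out.println(e.getKey() + "=" + Arrays.toString(e.getValue()));
        }
    }
}
```

Only subclasses that can do better than the name-by-name default (like the map-backed one here) need to override iterator(), which is the point made in the review comments above.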






[jira] [Updated] (SOLR-8273) deprecate implicitly uninverted fields, force people to either use docValues, or be explicit that they want query time uninversion

2018-03-21 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-8273:

Component/s: Schema and Analysis

> deprecate implicitly uninverted fields, force people to either use docValues, 
> or be explicit that they want query time uninversion
> --
>
> Key: SOLR-8273
> URL: https://issues.apache.org/jira/browse/SOLR-8273
> Project: Solr
>  Issue Type: Improvement
>  Components: Schema and Analysis
>Reporter: Hoss Man
>Priority: Major
>
> once upon a time, there was nothing we could do to *stop* people from using 
> the FieldCache - even if they didn't realize they were using it.
> Then DocValues was added - and now people have a choice: they can set 
> {{docValues=true}} on a field/fieldtype and know that when they do 
> functions/sorting/faceting on that field, it won't require a big hunk of ram 
> and a big stall everytime a reader was reopened.  But it's easy to overlook 
> when clients might be doing something that required the FieldCache w/o 
> realizing it -- and there is no way to stop them, because Solr automatically 
> uses UninvertingReader under the covers and automatically allows every field 
> to be uninverted in this way.
> we should change that.
> 
> Straw man proposal...
> * introduce a new boolean fieldType/field property {{uninvertable}}
> * all existing FieldType classes should default to {{uninvertable==false}}
> * a field or fieldType that contains {{indexed="false" uninvertable="true"}} 
> should be an error.
> * the Schema {{version}} value should be incremented, such that any Schema 
> with an older version is treated as if every field with {{docValues==false}} 
> has an implict {{uninvertable="true"}} on it.
> * the Map passed to UninvertedReader should now only list items that have an 
> effective value of {{uninvertable==true}}
> * sample schemas should be updated to use docValues on any field where the 
> examples using those schemas suggest using those fields in that way (ie: 
> sorting, faceting, etc...)
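A hypothetical schema fragment showing what the straw-man proposal might look like (the {{uninvertable}} attribute shown here is the proposal itself and does not exist in released Solr; field names are made up):

```xml
<!-- Hypothetical sketch of the straw-man proposal; "uninvertable" is the
     proposed, not-yet-existing property. -->
<!-- preferred: docValues for sorting/faceting/functions -->
<field name="popularity" type="pint" indexed="true" docValues="true"/>
<!-- explicit opt-in to query-time uninversion via FieldCache -->
<field name="category" type="string" indexed="true" uninvertable="true"/>
<!-- would be a schema error under the proposal: cannot uninvert an
     unindexed field -->
<field name="bad" type="string" indexed="false" uninvertable="true"/>
```

Under the proposal, schemas with an older version attribute would behave as if every non-docValues field carried an implicit uninvertable="true", preserving back-compat.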






[jira] [Created] (SOLR-12133) TriggerIntegrationTest fails too easily.

2018-03-21 Thread Mark Miller (JIRA)
Mark Miller created SOLR-12133:
--

 Summary: TriggerIntegrationTest fails too easily.
 Key: SOLR-12133
 URL: https://issues.apache.org/jira/browse/SOLR-12133
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mark Miller









[jira] [Commented] (SOLR-12130) Investigate why CdcrReplicationDistributedZkTest is slow

2018-03-21 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407929#comment-16407929
 ] 

Amrit Sarkar commented on SOLR-12130:
-

Tests total time: *30 minutes* approx.

Following is the approx time taken by each test-method in the testclass:
||test method||time in ms||time||
|testBatchBoundaries()|120651|2min|
|testReplicationStartStop()|240897|4min|
|testReplicationAfterLeaderChange()|231003|4min|
|testUpdateLogSynchronisation()|121818|2min|
|testDeleteCreateSourceCollection()|37309|36sec|
|testReplicationAfterRestart()|300130|5min|
|testBatchAddsWithDelete()|4143|4sec|
|testOps()|5048|5sec|
|testResilienceWithDeleteByQueryOnTarget()|122769|2min|
|testTargetCollectionNotAvailable()|5860|6sec|
|testBufferOnNonLeader()|33203|33sec|

total: roughly *20 minutes, 23 seconds* (1,222,831 ms).

In the test class, a 2x2 collection is created for both source and target for 
each test method, meaning we are creating 8 cores for every test method: 4 
for the source, 4 for the target.

So roughly *9 minutes 37 seconds* are consumed in preparation (getting 
collections up and running) and destruction (purging cores and their data) 
across all the test methods.

The next thing to do is to understand what each test is doing and whether we 
can optimise / avoid it.

> Investigate why CdcrReplicationDistributedZkTest is slow
> 
>
> Key: SOLR-12130
> URL: https://issues.apache.org/jira/browse/SOLR-12130
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
>
> CdcrReplicationDistributedZkTest seems to be a very slow test and probably 
> why it was marked nightly in the first place?
> Investigate why the test is so slow and see if we can speed it up 






[jira] [Created] (SOLR-12132) TestTriggerIntegration fails too easily.

2018-03-21 Thread Mark Miller (JIRA)
Mark Miller created SOLR-12132:
--

 Summary: TestTriggerIntegration fails too easily.
 Key: SOLR-12132
 URL: https://issues.apache.org/jira/browse/SOLR-12132
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mark Miller









[jira] [Commented] (LUCENE-8213) offload caching to a dedicated threadpool

2018-03-21 Thread Amir Hadadi (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407861#comment-16407861
 ] 

Amir Hadadi commented on LUCENE-8213:
-

Indeed, both are about trading throughput for latency.

However, there is a quantitative difference: in parallel segment querying you 
would slice your index into e.g. 5 slices on each and every query, whereas 
async caching would happen only when caching is needed, and even then only 
when the ratio between the caching cost and the lead query cost is big enough 
to justify async execution.

I would expect the additional async tasks triggered by async caching to be 
100x fewer than the tasks from parallel segment querying.

Coupling these features together would mean that someone who is not willing 
to pay the overhead of parallel segment querying would not be able to use 
async caching.

> offload caching to a dedicated threadpool
> -
>
> Key: LUCENE-8213
> URL: https://issues.apache.org/jira/browse/LUCENE-8213
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/query/scoring
>Affects Versions: 7.2.1
>Reporter: Amir Hadadi
>Priority: Minor
>  Labels: performance
>
> IndexOrDocValuesQuery allows to combine non selective range queries with a 
> selective lead iterator in an optimized way. However, the range query at some 
> point gets cached by a querying thread in LRUQueryCache, which negates the 
> optimization of IndexOrDocValuesQuery for that specific query.
> It would be nice to see a caching implementation that offloads to a different 
> thread pool, so that queries involving IndexOrDocValuesQuery would have 
> consistent performance characteristics.
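A plain-Java sketch of the offloading idea (this is not Lucene's LRUQueryCache API; names and the cache shape are illustrative): the query thread answers immediately via the cheap lead path, while the expensive cache population runs on a dedicated pool.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Illustrative sketch of offloading cache population to a dedicated
 *  thread pool -- not Lucene's actual LRUQueryCache. */
public class AsyncCacheSketch {
    private final ConcurrentHashMap<String, Long> cache = new ConcurrentHashMap<>();
    private final ExecutorService cachePool = Executors.newSingleThreadExecutor();

    long search(String query) {
        Long cached = cache.get(query);
        if (cached != null) return cached;              // cache hit: cheap path
        long result = evaluateWithLeadIterator(query);  // answer the query now
        // offload the expensive cache build, keeping query latency consistent
        cachePool.submit(() -> cache.putIfAbsent(query, buildCacheEntry(query)));
        return result;
    }

    // stand-ins for the cheap lead-iterator evaluation and the expensive
    // full evaluation that would normally populate the cache
    long evaluateWithLeadIterator(String query) { return query.length(); }
    long buildCacheEntry(String query) { return query.length(); }

    void shutdown() { cachePool.shutdown(); }

    public static void main(String[] args) {
        AsyncCacheSketch c = new AsyncCacheSketch();
        System.out.println(c.search("lucene"));
        c.shutdown();
    }
}
```

The pool is only hit when a cacheable query misses, which is why the extra task volume should be far lower than slicing every query across segments.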






[jira] [Resolved] (SOLR-12116) Autoscaling suggests to move a replica that does not exist (all numbers)

2018-03-21 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-12116.
--
Resolution: Duplicate

This was reported (by me) in SOLR-12023 and fixed by Noble in SOLR-12031. The 
fix will be released in 7.3

> Autoscaling suggests to move a replica that does not exist (all numbers)
> 
>
> Key: SOLR-12116
> URL: https://issues.apache.org/jira/browse/SOLR-12116
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: autoscaling.json, diagnostics.json, solr_instances, 
> suggestions.json
>
>
> Attaching suggestions, diagnostics, autoscaling settings, and the 
> solr_instances AZ's. One of the operations suggested is impossible:
> {code:java}
> {
>   "type": "violation",
>   "violation": {
>     "node": "solr-0a7207d791bd08d4e:8983_solr",
>     "tagKey": "null",
>     "violation": {"node": "4", "delta": 1},
>     "clause": {"cores": "<4", "node": "#ANY"}
>   },
>   "operation": {
>     "method": "POST",
>     "path": "/c/r_posts",
>     "command": {
>       "move-replica": {
>         "targetNode": "solr-0f0e86f34298f7e79:8983_solr",
>         "inPlaceMove": "true",
>         "replica": "2151000"
>       }
>     }
>   }
> }
> {code}






[JENKINS] Lucene-Solr-Tests-7.3 - Build # 23 - Unstable

2018-03-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.3/23/

3 tests failed.
FAILED:  org.apache.solr.cloud.TestRebalanceLeaders.test

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:45142/sqd/l

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:45142/sqd/l
at 
__randomizedtesting.SeedInfo.seed([C3E174C30991EF8A:4BB54B19A76D8272]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1105)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:885)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:818)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1677)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1704)
at 
org.apache.solr.cloud.TestRebalanceLeaders.test(TestRebalanceLeaders.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[JENKINS] Lucene-Solr-7.3-Windows (64bit/jdk-9.0.1) - Build # 12 - Unstable!

2018-03-21 Thread Policeman Jenkins Server
Error processing tokens: Error while parsing action 
'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at input 
position (line 79, pos 4):
)"}
   ^

java.lang.OutOfMemoryError: Java heap space


[jira] [Commented] (SOLR-11913) SolrParams ought to implement Iterable<Map.Entry<String,String[]>>

2018-03-21 Thread Tapan Vaishnav (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407783#comment-16407783
 ] 

Tapan Vaishnav commented on SOLR-11913:
---

[~dsmiley] 
Thanks for your review. 
I have fixed the pointed out changes and attached as SOLR-11913_v2.patch. 
Please, have a look whenever you get time.

> The key part as referenced in the description – having SolrParams implement 
> Iterable wasn't done.
I thought we only had to implement the method, not use the _implements_ 
keyword.

> Why did you create SolrParams.getMapEntry? You could inline it to do an 
> anonymous inner class
It wasn't creating any unnecessary new objects, and I thought we might reuse it 
in the future, but this has been fixed now.

> Please override this for ModifiableSolrParams to return a more optimal 
> implementation.
I have overridden _iterator()_ from the Iterable interface for 
ModifiableSolrParams. There are other classes that extend SolrParams and need 
to be modified too; I will do that after the next review.
 

> SolrParams ought to implement Iterable<Map.Entry<String,String[]>>
> --
>
> Key: SOLR-11913
> URL: https://issues.apache.org/jira/browse/SOLR-11913
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Minor
>  Labels: newdev
> Attachments: SOLR-11913.patch, SOLR-11913_v2.patch
>
>
> SolrJ ought to implement {{Iterable<Map.Entry<String,String[]>>}} so that 
> it's easier to iterate on it, either using Java 5 for-each style, or Java 8 
> streams.  The implementation on ModifiableSolrParams can delegate through to 
> the underlying LinkedHashMap entry set.  The default impl can produce a 
> Map.Entry with a getValue that calls through to getParams.  
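The shape being proposed can be sketched with a toy class (invented names, not the actual SolrJ API): the `iterator()` simply delegates to a backing LinkedHashMap's entry set, which is what the ticket suggests for ModifiableSolrParams, and callers get Java 5 for-each iteration for free.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of the proposal (not the real SolrJ classes): a params
// object whose iterator() delegates to the backing LinkedHashMap's
// entry set, as suggested for ModifiableSolrParams.
public class ToyParams implements Iterable<Map.Entry<String, String[]>> {
    private final Map<String, String[]> vals = new LinkedHashMap<>();

    public ToyParams set(String name, String... v) {
        vals.put(name, v);
        return this;
    }

    @Override
    public Iterator<Map.Entry<String, String[]>> iterator() {
        return vals.entrySet().iterator(); // delegate to the entry set
    }

    // Demonstrates Java 5 for-each iteration over the params.
    public static String dump(ToyParams p) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String[]> e : p) {
            sb.append(e.getKey()).append('=')
              .append(String.join(",", e.getValue())).append(';');
        }
        return sb.toString();
    }
}
```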



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11913) SolrParams ought to implement Iterable<Map.Entry<String,String[]>>

2018-03-21 Thread Tapan Vaishnav (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tapan Vaishnav updated SOLR-11913:
--
Attachment: SOLR-11913_v2.patch

> SolrParams ought to implement Iterable<Map.Entry<String,String[]>>
> --
>
> Key: SOLR-11913
> URL: https://issues.apache.org/jira/browse/SOLR-11913
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Minor
>  Labels: newdev
> Attachments: SOLR-11913.patch, SOLR-11913_v2.patch
>
>
> SolrJ ought to implement {{Iterable<Map.Entry<String,String[]>>}} so that 
> it's easier to iterate on it, either using Java 5 for-each style, or Java 8 
> streams.  The implementation on ModifiableSolrParams can delegate through to 
> the underlying LinkedHashMap entry set.  The default impl can produce a 
> Map.Entry with a getValue that calls through to getParams.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Release Announcement: General Availability of JDK 10

2018-03-21 Thread Rory O'Donnell

Thanks Uwe!


On 21/03/2018 10:49, Uwe Schindler wrote:


Thanks Rory,

I am currently in a meeting, but I will update the Jenkins servers 
this weekend. I will also add the JDK 11 preview builds.


Uwe

-

Uwe Schindler

Achterdiek 19, D-28357 Bremen

http://www.thetaphi.de 

eMail: u...@thetaphi.de

*From:*Rory O'Donnell 
*Sent:* Wednesday, March 21, 2018 11:28 AM
*To:* dawid.we...@cs.put.poznan.pl; Uwe Schindler 
*Cc:* rory.odonn...@oracle.com; Balchandra Vaidya 
; Dalibor Topic 
; Muneer Kolarkunnu 
; dev@lucene.apache.org

*Subject:* Release Announcement: General Availability of JDK 10

Hi Uwe & Dawid,

A number of items to share with you today :

*1) JDK 10 General Availability *

JDK 10, the first release produced under the six-month rapid-cadence 
release model [1][2], is now Generally Available.
We've identified no P1 bugs since we promoted build 46 almost two 
weeks ago, so that is the official GA release, ready for production use.

GPL'd binaries from Oracle are available here: http://jdk.java.net/10

This release includes twelve features:

  * 286: Local-Variable Type Inference 
  * 296: Consolidate the JDK Forest into a Single Repository

  * 304: Garbage-Collector Interface 
  * 307: Parallel Full GC for G1 
  * 310: Application Class-Data Sharing 
  * 312: Thread-Local Handshakes 
  * 313: Remove the Native-Header Generation Tool (javah)

  * 314: Additional Unicode Language-Tag Extensions

  * 316: Heap Allocation on Alternative Memory Devices

  * 317: Experimental Java-Based JIT Compiler

  * 319: Root Certificates 
  * 322: Time-Based Release Versioning 


*2) JDK 11 EA build 5, under both the GPL and Oracle EA licenses, are 
now available at http://jdk.java.net/11 .*


  * Schedule, status & features

  o http://openjdk.java.net/projects/jdk/11/

  * Release Notes:

  o http://jdk.java.net/11/release-notes

  * Summary of changes

  o https://download.java.net/java/early_access/jdk11/5/jdk-11+5.html

*3) The Z Garbage Collector Project, early access builds available : *

The first EA binary from The Z Garbage Collector Project, also 
known as ZGC, is now available. ZGC is a scalable low latency garbage 
collector. For information on how to enable and use ZGC, please see 
the project wiki.


  * Project page: http://openjdk.java.net/projects/zgc/
  * Wiki: https://wiki.openjdk.java.net/display/zgc/Main

*4) Quality Outreach Report for March 2018 is available*

  * 
https://wiki.openjdk.java.net/display/quality/Quality+Outreach+report+March+2018

*5) Java Client Roadmap Update*

  * We posted a blog [3] and related white paper [4] detailing our
plans for the Java Client.

Rgds,Rory

[1] https://mreinhold.org/blog/forward-faster
[2] 
http://mail.openjdk.java.net/pipermail/discuss/2017-September/004281.html
[3] Blog: 
https://blogs.oracle.com/java-platform-group/the-future-of-javafx-and-other-java-client-roadmap-updates
[4] Whitepaper: 
http://www.oracle.com/technetwork/java/javase/javaclientroadmapupdate2018mar-4414431.pdf



--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland





[JENKINS] Lucene-Solr-7.3-Linux (32bit/jdk1.8.0_162) - Build # 43 - Unstable!

2018-03-21 Thread Policeman Jenkins Server
[message body unavailable: the mail archive failed to render it (parser error 
followed by java.lang.OutOfMemoryError)]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_144) - Build # 7232 - Still Unstable!

2018-03-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7232/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
SimTimeSource:50.0 time diff=42869200

Stack Trace:
java.lang.AssertionError: SimTimeSource:50.0 time diff=42869200
at 
__randomizedtesting.SeedInfo.seed([F88BF72C7688292C:C0E78409E2588B6A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.util.TestTimeSource.doTestEpochTime(TestTimeSource.java:48)
at 
org.apache.solr.common.util.TestTimeSource.testEpochTime(TestTimeSource.java:32)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
SimTimeSource:50.0 time diff=13557200

Stack Trace:
java.lang.AssertionError: SimTimeSource:50.0 time 

[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 179 - Still Failing

2018-03-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/179/

1 tests failed.
FAILED:  org.apache.solr.cloud.LIROnShardRestartTest.testAllReplicasInLIR

Error Message:
Timeout waiting for active replicas null Live Nodes: [127.0.0.1:34680_solr, 
127.0.0.1:57575_solr] Last available state: 
DocCollection(allReplicasInLIR//collections/allReplicasInLIR/state.json/18)={   
"pullReplicas":"0",   "replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"allReplicasInLIR_shard1_replica_n1",   
"base_url":"https://127.0.0.1:57575/solr",   
"node_name":"127.0.0.1:57575_solr",   "state":"down",   
"type":"NRT"}, "core_node5":{   
"core":"allReplicasInLIR_shard1_replica_n2",   
"base_url":"https://127.0.0.1:51082/solr",   
"node_name":"127.0.0.1:51082_solr",   "state":"down",   
"type":"NRT"}, "core_node6":{   
"core":"allReplicasInLIR_shard1_replica_n4",   
"base_url":"https://127.0.0.1:34680/solr",   
"node_name":"127.0.0.1:34680_solr",   "state":"down",   
"type":"NRT",   "router":{"name":"compositeId"},   "maxShardsPerNode":"1",  
 "autoAddReplicas":"false",   "nrtReplicas":"3",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Timeout waiting for active replicas
null
Live Nodes: [127.0.0.1:34680_solr, 127.0.0.1:57575_solr]
Last available state: 
DocCollection(allReplicasInLIR//collections/allReplicasInLIR/state.json/18)={
  "pullReplicas":"0",
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"allReplicasInLIR_shard1_replica_n1",
"base_url":"https://127.0.0.1:57575/solr",
  "node_name":"127.0.0.1:57575_solr",
  "state":"down",
  "type":"NRT"},
"core_node5":{
  "core":"allReplicasInLIR_shard1_replica_n2",
"base_url":"https://127.0.0.1:51082/solr",
  "node_name":"127.0.0.1:51082_solr",
  "state":"down",
  "type":"NRT"},
"core_node6":{
  "core":"allReplicasInLIR_shard1_replica_n4",
"base_url":"https://127.0.0.1:34680/solr",
  "node_name":"127.0.0.1:34680_solr",
  "state":"down",
  "type":"NRT",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"3",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([FE84FFC1AA062054:A41CC507D48647B3]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:269)
at 
org.apache.solr.cloud.LIROnShardRestartTest.testAllReplicasInLIR(LIROnShardRestartTest.java:156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 

RE: Release Announcement: General Availability of JDK 10

2018-03-21 Thread Uwe Schindler
Thanks Rory,

 

I am currently in a meeting, but I will update the Jenkins servers this 
weekend. I will also add the JDK 11 preview builds.

 

Uwe

 

-

Uwe Schindler

Achterdiek 19, D-28357 Bremen

http://www.thetaphi.de  

eMail: u...@thetaphi.de

 

From: Rory O'Donnell  
Sent: Wednesday, March 21, 2018 11:28 AM
To: dawid.we...@cs.put.poznan.pl; Uwe Schindler 
Cc: rory.odonn...@oracle.com; Balchandra Vaidya ; 
Dalibor Topic ; Muneer Kolarkunnu 
; dev@lucene.apache.org
Subject: Release Announcement: General Availability of JDK 10

 

Hi Uwe & Dawid, 

A number of items to share with you today :

1) JDK 10 General Availability 

JDK 10, the first release produced under the six-month rapid-cadence release 
model [1][2], is now Generally Available. 
We've identified no P1 bugs since we promoted build 46 almost two weeks ago, so 
that is the official GA release, ready for production use. 
GPL'd binaries from Oracle are available here: http://jdk.java.net/10

This release includes twelve features: 

*   286: Local-Variable Type Inference  
*   296: Consolidate the JDK Forest into a Single Repository 
 
*   304: Garbage-Collector Interface  
*   307: Parallel Full GC for G1  
*   310: Application Class-Data Sharing  
*   312: Thread-Local Handshakes  
*   313: Remove the Native-Header Generation Tool (javah) 
 
*   314: Additional Unicode Language-Tag Extensions 
 
*   316: Heap Allocation on Alternative Memory Devices 
 
*   317: Experimental Java-Based JIT Compiler 
 
*   319: Root Certificates  
*   322: Time-Based Release Versioning  


2) JDK 11 EA build 5, under both the GPL and Oracle EA licenses, are now 
available at http://jdk.java.net/11 .

*   Schedule, status & features

*   http://openjdk.java.net/projects/jdk/11/

*   Release Notes:

*   http://jdk.java.net/11/release-notes

*   Summary of changes 

*   https://download.java.net/java/early_access/jdk11/5/jdk-11+5.html

3) The Z Garbage Collector Project, early access builds available : 

The first EA binary from The Z Garbage Collector Project, also known as 
ZGC, is now available. ZGC is a scalable low latency garbage collector. For 
information on how to enable and use ZGC, please see the project wiki.

*   Project page: http://openjdk.java.net/projects/zgc/ 
*   Wiki: https://wiki.openjdk.java.net/display/zgc/Main

4) Quality Outreach Report for March 2018 is available

*   
https://wiki.openjdk.java.net/display/quality/Quality+Outreach+report+March+2018

5) Java Client Roadmap Update

*   We posted a blog [3] and related white paper [4] detailing our plans 
for the Java Client.

Rgds,Rory

[1] https://mreinhold.org/blog/forward-faster 
[2] http://mail.openjdk.java.net/pipermail/discuss/2017-September/004281.html 
[3] Blog: 
https://blogs.oracle.com/java-platform-group/the-future-of-javafx-and-other-java-client-roadmap-updates
[4] Whitepaper: 
http://www.oracle.com/technetwork/java/javase/javaclientroadmapupdate2018mar-4414431.pdf




-- 
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland 


Release Announcement: General Availability of JDK 10

2018-03-21 Thread Rory O'Donnell

Hi Uwe & Dawid,

A number of items to share with you today :

*1) JDK 10 General Availability *

JDK 10, the first release produced under the six-month rapid-cadence 
release model [1][2], is now Generally Available.
We've identified no P1 bugs since we promoted build 46 almost two weeks 
ago, so that is the official GA release, ready for production use.

GPL'd binaries from Oracle are available here: http://jdk.java.net/10

This release includes twelve features:

 * 286: Local-Variable Type Inference 
 * 296: Consolidate the JDK Forest into a Single Repository
   
 * 304: Garbage-Collector Interface 
 * 307: Parallel Full GC for G1 
 * 310: Application Class-Data Sharing 
 * 312: Thread-Local Handshakes 
 * 313: Remove the Native-Header Generation Tool (javah)
   
 * 314: Additional Unicode Language-Tag Extensions
   
 * 316: Heap Allocation on Alternative Memory Devices
   
 * 317: Experimental Java-Based JIT Compiler
   
 * 319: Root Certificates 
 * 322: Time-Based Release Versioning 


*2) JDK 11 EA build 5, under both the GPL and Oracle EA licenses, are 
now available at **http://jdk.java.net/11**.*


 * Schedule, status & features
 o http://openjdk.java.net/projects/jdk/11/
 * Release Notes:
 o http://jdk.java.net/11/release-notes
 * Summary of changes
 o https://download.java.net/java/early_access/jdk11/5/jdk-11+5.html

*3) The Z Garbage Collector Project, early access builds available : *

The first EA binary from The Z Garbage Collector Project, also 
known as ZGC, is now available. ZGC is a scalable low latency garbage 
collector. For information on how to enable and use ZGC, please see the 
project wiki.


 * Project page: http://openjdk.java.net/projects/zgc/
 * Wiki: https://wiki.openjdk.java.net/display/zgc/Main

*4) Quality Outreach Report for March 2018 is available*

 * 
https://wiki.openjdk.java.net/display/quality/Quality+Outreach+report+March+2018

*5) Java Client Roadmap Update*

 * We posted a blog [3] and related white paper [4] detailing our plans
   for the Java Client.

Rgds,Rory

[1] https://mreinhold.org/blog/forward-faster
[2] 
http://mail.openjdk.java.net/pipermail/discuss/2017-September/004281.html
[3] Blog: 
https://blogs.oracle.com/java-platform-group/the-future-of-javafx-and-other-java-client-roadmap-updates
[4] Whitepaper: 
http://www.oracle.com/technetwork/java/javase/javaclientroadmapupdate2018mar-4414431.pdf


--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland



[jira] [Commented] (LUCENE-8202) Add a FixedShingleFilter

2018-03-21 Thread Jim Ferenczi (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407723#comment-16407723
 ] 

Jim Ferenczi commented on LUCENE-8202:
--

+1, thanks Alan. 

> Add a FixedShingleFilter
> 
>
> Key: LUCENE-8202
> URL: https://issues.apache.org/jira/browse/LUCENE-8202
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8202.patch, LUCENE-8202.patch, LUCENE-8202.patch
>
>
> In LUCENE-3475 I tried to make a ShingleGraphFilter that could accept and 
> emit arbitrary graphs, while duplicating all the functionality of the 
> existing ShingleFilter.  This ends up being extremely hairy, and doesn't play 
> well with query parsers.
> I'd like to step back and try and create a simpler shingle filter that can be 
> used for index-time phrase tokenization only.  It will have a single fixed 
> shingle size, can deal with single-token synonyms, and won't emit unigrams.
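For intuition, fixed-size shingling can be sketched outside the Lucene TokenStream API (this toy `shingle` helper is illustrative only, not the patch's actual FixedShingleFilter): every window of exactly `size` consecutive tokens becomes one shingle, and no unigrams are emitted.

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of fixed-size shingling (not the Lucene analysis
// API): slide a window of exactly `size` tokens over the stream and
// join each window into one shingle. Unigrams are never emitted.
public class FixedShingles {
    public static List<String> shingle(List<String> tokens, int size) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i + size <= tokens.size(); i++) {
            out.add(String.join(" ", tokens.subList(i, i + size)));
        }
        return out;
    }
}
```

With `size = 2`, the tokens `please divide this sentence` yield the bigrams `please divide`, `divide this`, `this sentence` and nothing else.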



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8202) Add a FixedShingleFilter

2018-03-21 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407715#comment-16407715
 ] 

Adrien Grand commented on LUCENE-8202:
--

+1

> Add a FixedShingleFilter
> 
>
> Key: LUCENE-8202
> URL: https://issues.apache.org/jira/browse/LUCENE-8202
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8202.patch, LUCENE-8202.patch, LUCENE-8202.patch
>
>
> In LUCENE-3475 I tried to make a ShingleGraphFilter that could accept and 
> emit arbitrary graphs, while duplicating all the functionality of the 
> existing ShingleFilter.  This ends up being extremely hairy, and doesn't play 
> well with query parsers.
> I'd like to step back and try and create a simpler shingle filter that can be 
> used for index-time phrase tokenization only.  It will have a single fixed 
> shingle size, can deal with single-token synonyms, and won't emit unigrams.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8202) Add a FixedShingleFilter

2018-03-21 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407711#comment-16407711
 ] 

Alan Woodward commented on LUCENE-8202:
---

Updated patch using CannedTokenStream for tests.  Will commit shortly.

> Add a FixedShingleFilter
> 
>
> Key: LUCENE-8202
> URL: https://issues.apache.org/jira/browse/LUCENE-8202
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8202.patch, LUCENE-8202.patch, LUCENE-8202.patch
>
>
> In LUCENE-3475 I tried to make a ShingleGraphFilter that could accept and 
> emit arbitrary graphs, while duplicating all the functionality of the 
> existing ShingleFilter.  This ends up being extremely hairy, and doesn't play 
> well with query parsers.
> I'd like to step back and try and create a simpler shingle filter that can be 
> used for index-time phrase tokenization only.  It will have a single fixed 
> shingle size, can deal with single-token synonyms, and won't emit unigrams.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8202) Add a FixedShingleFilter

2018-03-21 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-8202:
--
Attachment: LUCENE-8202.patch

> Add a FixedShingleFilter
> 
>
> Key: LUCENE-8202
> URL: https://issues.apache.org/jira/browse/LUCENE-8202
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8202.patch, LUCENE-8202.patch, LUCENE-8202.patch
>
>
> In LUCENE-3475 I tried to make a ShingleGraphFilter that could accept and 
> emit arbitrary graphs, while duplicating all the functionality of the 
> existing ShingleFilter.  This ends up being extremely hairy, and doesn't play 
> well with query parsers.
> I'd like to step back and try and create a simpler shingle filter that can be 
> used for index-time phrase tokenization only.  It will have a single fixed 
> shingle size, can deal with single-token synonyms, and won't emit unigrams.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8202) Add a FixedShingleFilter

2018-03-21 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407674#comment-16407674
 ] 

Adrien Grand commented on LUCENE-8202:
--

bq. The inner loop just continues the outer loop, so it should be linear.

I had read too quickly, thanks for clarifying.

Your reasoning about the dedicated factory also makes sense to me.

The latest patch looks good to me, I'd just like testing to be done on canned 
token streams rather than using other analysis components like the whitespace 
tokenizer or the synonym token filter. Other than that +1 to push.

> Add a FixedShingleFilter
> 
>
> Key: LUCENE-8202
> URL: https://issues.apache.org/jira/browse/LUCENE-8202
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8202.patch, LUCENE-8202.patch
>
>
> In LUCENE-3475 I tried to make a ShingleGraphFilter that could accept and 
> emit arbitrary graphs, while duplicating all the functionality of the 
> existing ShingleFilter.  This ends up being extremely hairy, and doesn't play 
> well with query parsers.
> I'd like to step back and try and create a simpler shingle filter that can be 
> used for index-time phrase tokenization only.  It will have a single fixed 
> shingle size, can deal with single-token synonyms, and won't emit unigrams.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8217) Remove IndexFileDeleter#decRefWhileHandlingExceptions

2018-03-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407639#comment-16407639
 ] 

ASF subversion and git services commented on LUCENE-8217:
-

Commit 2539578cb14091e5736ea57deb796e6e43c2739b in lucene-solr's branch 
refs/heads/branch_7x from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2539578 ]

LUCENE-8217: Remove IndexFileDeleter#decRefWhileHandlingExceptions

This method is a duplicate of IDF#decRef(...) and hides exceptions
from the caller. This change removes this method and replaces it with
its counterpart, which escalates the exception.


>  Remove IndexFileDeleter#decRefWhileHandlingExceptions
> --
>
> Key: LUCENE-8217
> URL: https://issues.apache.org/jira/browse/LUCENE-8217
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Simon Willnauer
>Priority: Major
> Attachments: LUCENE-8217.patch
>
>
> This method is a duplicate of IDF#decRef(...) and hides exceptions from the 
> caller. This change removes this method and replaces it with its counterpart, 
> which escalates the exception.
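The contrast the change describes can be shown in toy form (invented names, not Lucene's actual IndexFileDeleter): the removed style swallows the failure so the caller never learns about it, while the kept style lets the exception escalate.

```java
// Toy contrast (invented names, not Lucene's IndexFileDeleter): one
// variant hides the failure from the caller, the other escalates it.
public class RefSketch {

    // Stand-in for the real ref-count work, which may fail.
    static void decRef() {
        throw new RuntimeException("delete failed");
    }

    // The removed style: the exception is caught and swallowed, so the
    // caller only sees a boolean and loses the failure details.
    public static boolean swallowingCall() {
        try {
            decRef();
            return true;
        } catch (RuntimeException e) {
            return false; // exception hidden from the caller
        }
    }

    // The kept style: decRef() propagates, and the caller decides what
    // to do with the failure.
    public static String escalatingCall() {
        try {
            decRef(); // escalates to this caller
            return "ok";
        } catch (RuntimeException e) {
            return e.getMessage(); // failure details are visible here
        }
    }
}
```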



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8217) Remove IndexFileDeleter#decRefWhileHandlingExceptions

2018-03-21 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-8217.
-
Resolution: Fixed

fixed, thanks [~dweiss]

>  Remove IndexFileDeleter#decRefWhileHandlingExceptions
> --
>
> Key: LUCENE-8217
> URL: https://issues.apache.org/jira/browse/LUCENE-8217
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Simon Willnauer
>Priority: Major
> Attachments: LUCENE-8217.patch
>
>
> This method is a duplicate of IDF#decRef(...) and hides exceptions from the 
> caller. This change removes this method and replaces it with its counterpart, 
> which escalates the exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8217) Remove IndexFileDeleter#decRefWhileHandlingExceptions

2018-03-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407637#comment-16407637
 ] 

ASF subversion and git services commented on LUCENE-8217:
-

Commit d4e69c5cd868d0f5b71da0f4b23c2cd61d1b0ea0 in lucene-solr's branch 
refs/heads/master from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d4e69c5 ]

LUCENE-8217: Remove IndexFileDeleter#decRefWhileHandlingExceptions

This method is a duplicate of IDF#decRef(...) and hides exceptions
from the caller. This change removes this method and replaces it with
its counterpart, which escalates the exception.


>  Remove IndexFileDeleter#decRefWhileHandlingExceptions
> --
>
> Key: LUCENE-8217
> URL: https://issues.apache.org/jira/browse/LUCENE-8217
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Simon Willnauer
>Priority: Major
> Attachments: LUCENE-8217.patch
>
>
> This method is a duplicate of IDF#decRef(...) and hides exceptions from the 
> caller. This change removes this method and replaces it with its counterpart, 
> which escalates the exception.






[jira] [Commented] (LUCENE-8213) offload caching to a dedicated threadpool

2018-03-21 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407632#comment-16407632
 ] 

Adrien Grand commented on LUCENE-8213:
--

Actually I think we want this: both parallel segment querying and async 
caching are about trading throughput for latency.

> offload caching to a dedicated threadpool
> -
>
> Key: LUCENE-8213
> URL: https://issues.apache.org/jira/browse/LUCENE-8213
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/query/scoring
>Affects Versions: 7.2.1
>Reporter: Amir Hadadi
>Priority: Minor
>  Labels: performance
>
> IndexOrDocValuesQuery allows combining non-selective range queries with a 
> selective lead iterator in an optimized way. However, the range query at some 
> point gets cached by a querying thread in LRUQueryCache, which negates the 
> optimization of IndexOrDocValuesQuery for that specific query.
> It would be nice to see a caching implementation that offloads to a different 
> thread pool, so that queries involving IndexOrDocValuesQuery would have 
> consistent performance characteristics.
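[Editor's note] The idea proposed above — having a dedicated pool populate the cache so the querying thread doesn't pay that cost — can be sketched like this. The class and map here are hypothetical, not the actual LRUQueryCache API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AsyncCacheSketch {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    // dedicated pool: caching work never blocks the querying thread
    private final ExecutorService cachePool = Executors.newSingleThreadExecutor();

    String query(String q) {
        String cached = cache.get(q);
        if (cached != null) return cached;
        String result = "result-for-" + q;        // pretend query execution
        // offload cache population; the querying thread returns immediately
        cachePool.submit(() -> cache.put(q, result));
        return result;
    }

    public static void main(String[] args) throws Exception {
        AsyncCacheSketch c = new AsyncCacheSketch();
        String first = c.query("lucene");
        c.cachePool.shutdown();
        c.cachePool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(first + " cached=" + c.cache.containsKey("lucene"));
    }
}
```

The first query still runs at full speed (e.g. via the IndexOrDocValuesQuery fast path), while the cache entry appears shortly afterwards from the background pool.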






[JENKINS] Lucene-Solr-Tests-7.x - Build # 521 - Still Unstable

2018-03-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/521/

4 tests failed.
FAILED:  org.apache.solr.analytics.facet.ValueFacetTest.meanTest

Error Message:
java.lang.RuntimeException: org.apache.solr.client.solrj.SolrServerException: 
No live SolrServers available to handle this 
request:[http://127.0.0.1:37327/solr/collection1, 
http://127.0.0.1:44040/solr/collection1]

Stack Trace:
java.lang.RuntimeException: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:37327/solr/collection1, 
http://127.0.0.1:44040/solr/collection1]
at 
__randomizedtesting.SeedInfo.seed([A3EB4FF7DD019AFA:A96D9F52E3F31585]:0)
at 
org.apache.solr.analytics.facet.SolrAnalyticsFacetTestCase.testGrouping(SolrAnalyticsFacetTestCase.java:77)
at 
org.apache.solr.analytics.facet.ValueFacetTest.meanTest(ValueFacetTest.java:338)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

Re: [JENKINS] Lucene-Solr-BadApples-NightlyTests-7.x - Build # 3 - Still Unstable

2018-03-21 Thread Simon Willnauer
I pushed a fix for the Lucene issue. It was caused by some changes in
LUCENE-8212.

On Wed, Mar 21, 2018 at 2:50 AM, Apache Jenkins Server
 wrote:
> Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-7.x/3/
>
> 10 tests failed.
> FAILED:  org.apache.lucene.index.TestIndexWriterOnVMError.testUnknownError
>
> Error Message:
> MockDirectoryWrapper: cannot close: there are still 1 open files: 
> {_0_Asserting_0.dvm=1}
>
> Stack Trace:
> java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are 
> still 1 open files: {_0_Asserting_0.dvm=1}
> at 
> __randomizedtesting.SeedInfo.seed([C1B6EA2337948F2A:2AE035A33513191A]:0)
> at 
> org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:841)
> at 
> org.apache.lucene.index.TestIndexWriterOnVMError.doTest(TestIndexWriterOnVMError.java:89)
> at 
> org.apache.lucene.index.TestIndexWriterOnVMError.testUnknownError(TestIndexWriterOnVMError.java:258)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: unclosed IndexOutput: 
> _0_Asserting_0.dvm
> at 
> 

[jira] [Commented] (LUCENE-8212) Never swallow Exceptions in IndexWriter and DocumentsWriter

2018-03-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407607#comment-16407607
 ] 

ASF subversion and git services commented on LUCENE-8212:
-

Commit f664896d1fff951bb50aae414b043f97bb9159b8 in lucene-solr's branch 
refs/heads/master from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f664896 ]

LUCENE-8212: Ensure all closeables are closed even if a VMError is thrown


>  Never swallow Exceptions in IndexWriter and DocumentsWriter
> 
>
> Key: LUCENE-8212
> URL: https://issues.apache.org/jira/browse/LUCENE-8212
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8212.patch, LUCENE-8212.patch
>
>
>  IndexWriter as well as DocumentsWriter caught Throwable and ignored it. This 
> is mainly a relic from pre-Java 7, where exceptions didn't have the needed API 
> to suppress other exceptions. This change handles exceptions correctly: the 
> original exception is rethrown and all other exceptions are added as 
> suppressed.
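[Editor's note] The "rethrow the original, attach the rest as suppressed" pattern described above relies on Java 7's Throwable.addSuppressed. A minimal sketch (not the actual IndexWriter code):

```java
import java.util.Arrays;
import java.util.List;

public class CloseAllSketch {
    // Close everything; rethrow the first failure, never swallow the rest.
    static void closeAll(List<AutoCloseable> closeables) throws Exception {
        Exception first = null;
        for (AutoCloseable c : closeables) {
            try {
                c.close();
            } catch (Exception e) {
                if (first == null) first = e;      // remember original
                else first.addSuppressed(e);       // attach later failures
            }
        }
        if (first != null) throw first;            // escalate, don't ignore
    }

    public static void main(String[] args) {
        AutoCloseable ok = () -> {};
        AutoCloseable bad1 = () -> { throw new Exception("bad1"); };
        AutoCloseable bad2 = () -> { throw new Exception("bad2"); };
        try {
            closeAll(Arrays.asList(ok, bad1, bad2));
        } catch (Exception e) {
            System.out.println(e.getMessage() + " suppressed="
                               + e.getSuppressed().length);
        }
    }
}
```

Before Java 7 the second failure had nowhere to go, which is why the old code caught Throwable and dropped it; with suppression, no closeable is skipped and no exception is lost.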






[jira] [Commented] (LUCENE-8212) Never swallow Exceptions in IndexWriter and DocumentsWriter

2018-03-21 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407608#comment-16407608
 ] 

ASF subversion and git services commented on LUCENE-8212:
-

Commit af33bc8c3bbf15f8b56e9af0033b897c034176e6 in lucene-solr's branch 
refs/heads/branch_7x from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=af33bc8 ]

LUCENE-8212: Ensure all closeables are closed even if a VMError is thrown


>  Never swallow Exceptions in IndexWriter and DocumentsWriter
> 
>
> Key: LUCENE-8212
> URL: https://issues.apache.org/jira/browse/LUCENE-8212
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8212.patch, LUCENE-8212.patch
>
>
>  IndexWriter as well as DocumentsWriter caught Throwable and ignored it. This 
> is mainly a relic from pre-Java 7, where exceptions didn't have the needed API 
> to suppress other exceptions. This change handles exceptions correctly: the 
> original exception is rethrown and all other exceptions are added as 
> suppressed.






[jira] [Comment Edited] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-21 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407442#comment-16407442
 ] 

Cao Manh Dat edited comment on SOLR-12087 at 3/21/18 8:23 AM:
--

I'm not sure, but this could be a race condition between DeleteReplica and updates:
 * Many updates are coming to the leader
 * The leader forwards these updates to replicaA
 * DeleteReplica is called for replicaA
 * Several updates sent to replicaA fail (because replicaA is closed)
 * The entry for replicaA is removed from {{states.json}}
 * The leader puts replicaA into LIR by publishing replicaA's state as DOWN, which 
adds the entry for replicaA back to {{states.json}}

[~jerry.bao] If this is your case, there should be log lines like these:
 * On the replica node at time t : log.info(logid+" CLOSING SolrCore " + this);
 * On the leader node at time t+delta : log.warn("Leader is publishing core={} 
coreNodeName ={} state={} on behalf of un-reachable replica {}",
replicaCoreName, replicaCoreNodeName, Replica.State.DOWN.toString(), 
replicaUrl);

You wrote that
{quote}Attempting to delete the downed replicas causes failures because the 
core does not exist anymore.
{quote}
But if the case I described above is correct, you will still be able to 
delete the replica from the cluster state even when the replica does not exist on 
the node. Make sure that on the 2nd call of DeleteReplica (for removing the zombie 
replica) the parameters are correct, because the name of the replica may have 
changed, e.g. from core_node3 to core_node4.

[~ausathya] Your log seems related to a REQUEST_RECOVERY call, not a DELETE call, 
right?
 
 
BTW: In theory, SOLR-11702 would solve this problem (by not calling the old LIR 
code). Unfortunately, because of backward compatibility we still run through the 
old LIR path, since we cannot distinguish whether the replica was removed or has 
simply not registered its term (i.e. had not been upgraded to 7.3).


was (Author: caomanhdat):
I'm not sure, but this could be a race condition between DeleteReplica and updates:
 * Many updates are coming to the leader
 * The leader forwards these updates to replicaA
 * DeleteReplica is called for replicaA
 * Several updates sent to replicaA fail (because replicaA is closed)
 * The entry for replicaA is removed from {{states.json}}
 * The leader puts replicaA into LIR by publishing replicaA's state as DOWN, which 
adds the entry for replicaA back to {{states.json}}

[~jerry.bao] If this is your case, there should be log lines like these:
 * On the replica node at time t : log.info(logid+" CLOSING SolrCore " + this);
 * On the leader node at time t+delta : log.warn("Leader is publishing core={} 
coreNodeName ={} state={} on behalf of un-reachable replica {}",
 replicaCoreName, replicaCoreNodeName, Replica.State.DOWN.toString(), 
replicaUrl);

BTW: The above case is fixed by SOLR-11702

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: Screen Shot 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en masse; the 
> result is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually (read every second) 
> attempts to initiate recovery on the replica and fails to do so because the 
> core does not exist. As a result it continually publishes a down state for 
> the replica to zookeeper.
>  # The deleted replica node spams that it cannot locate the core because it's 
> been deleted.
> During this period of time, we see an increase in ZK network connectivity 
> overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
> shard until it's removed from the state).
> My guess is there are two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be 

[jira] [Issue Comment Deleted] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-21 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12087:

Comment: was deleted

(was: [~ausathya] Your log seems related to a REQUEST_RECOVERY call, not a DELETE 
call, right?)

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: Screen Shot 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en masse; the 
> result is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually (read every second) 
> attempts to initiate recovery on the replica and fails to do so because the 
> core does not exist. As a result it continually publishes a down state for 
> the replica to zookeeper.
>  # The deleted replica node spams that it cannot locate the core because it's 
> been deleted.
> During this period of time, we see an increase in ZK network connectivity 
> overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
> shard until it's removed from the state).
> My guess is there are two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted is having trouble being deleted from state.json 
> in ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.






[jira] [Issue Comment Deleted] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-21 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12087:

Comment: was deleted

(was: [~jerry.bao] You wrote that

{quote}

Attempting to delete the downed replicas causes failures because the core does 
not exist anymore.

{quote}

But if the case I described above is correct, you will still be able to 
delete the replica from the cluster state even when the replica does not exist on 
the node. Make sure that on the 2nd call of DeleteReplica (for removing the zombie 
replica) the parameters are correct, because the name of the replica may have 
changed, e.g. from core_node3 to core_node4.)

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: Screen Shot 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en masse; the 
> result is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually (read every second) 
> attempts to initiate recovery on the replica and fails to do so because the 
> core does not exist. As a result it continually publishes a down state for 
> the replica to zookeeper.
>  # The deleted replica node spams that it cannot locate the core because it's 
> been deleted.
> During this period of time, we see an increase in ZK network connectivity 
> overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
> shard until it's removed from the state).
> My guess is there are two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted is having trouble being deleted from state.json 
> in ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 1568 - Unstable!

2018-03-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1568/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
Expected to find shardAddress in the up shard info: 
{error=org.apache.solr.client.solrj.SolrServerException: Time allowed to handle 
this request exceeded,trace=org.apache.solr.client.solrj.SolrServerException: 
Time allowed to handle this request exceeded  at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:460)
  at 
org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(HttpShardHandlerFactory.java:273)
  at 
org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:175)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)  at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
  at java.base/java.lang.Thread.run(Thread.java:844) ,time=2}

Stack Trace:
java.lang.AssertionError: Expected to find shardAddress in the up shard info: 
{error=org.apache.solr.client.solrj.SolrServerException: Time allowed to handle 
this request exceeded,trace=org.apache.solr.client.solrj.SolrServerException: 
Time allowed to handle this request exceeded
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:460)
at 
org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(HttpShardHandlerFactory.java:273)
at 
org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:175)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.base/java.lang.Thread.run(Thread.java:844)
,time=2}
at 
__randomizedtesting.SeedInfo.seed([BA3E1CFA45D25DB9:326A2320EB2E3041]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.TestDistributedSearch.comparePartialResponses(TestDistributedSearch.java:1191)
at 
org.apache.solr.TestDistributedSearch.queryPartialResults(TestDistributedSearch.java:1132)
at 
org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:992)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1019)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 

[jira] [Commented] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-21 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407579#comment-16407579
 ] 

Cao Manh Dat commented on SOLR-12087:
-

[~jerry.bao] You wrote that

{quote}

Attempting to delete the downed replicas causes failures because the core does 
not exist anymore.

{quote}

But if the case I described above is correct, you will still be able to 
delete the replica from the cluster state even when the replica does not exist on 
the node. Make sure that on the 2nd call of DeleteReplica (for removing the zombie 
replica) the parameters are correct, because the name of the replica may have 
changed, e.g. from core_node3 to core_node4.

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: Screen Shot 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en masse; the 
> result is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually (read every second) 
> attempts to initiate recovery on the replica and fails to do so because the 
> core does not exist. As a result it continually publishes a down state for 
> the replica to zookeeper.
>  # The deleted replica node spams that it cannot locate the core because it's 
> been deleted.
> During this period of time, we see an increase in ZK network connectivity 
> overall, until the replica is finally deleted (by spamming DELETEREPLICA on 
> the shard until it's removed from the state).
> My guess is that there are two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted is having trouble being deleted from state.json 
> in ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.
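Since the reported failure is triggered by MOVEREPLICA (internally an add followed by a delete), a minimal reproduction sketch with hypothetical collection, replica, and node names might look like:

```shell
# Hypothetical names; MOVEREPLICA adds a replica on targetNode and then
# deletes the source replica -- the delete is the step that fails here.
SOLR_HOST="localhost:8983"
COLLECTION="mycoll"
REPLICA="core_node3"
TARGET_NODE="otherhost:8983_solr"
URL="http://${SOLR_HOST}/solr/admin/collections?action=MOVEREPLICA&collection=${COLLECTION}&replica=${REPLICA}&targetNode=${TARGET_NODE}"
echo "$URL"
# curl "$URL"   # issue the move against a live cluster
```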



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-21 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407571#comment-16407571
 ] 

Cao Manh Dat commented on SOLR-12087:
-

[~ausathya] Your log seems related to the REQUEST_RECOVERY call, not the 
DELETE call, right?
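For reference, REQUEST_RECOVERY traffic of the kind described above goes through the CoreAdmin API; a hedged sketch of such a call (host and core names are hypothetical) is:

```shell
# The leader repeatedly issues calls of this shape against the node that
# should host the replica; in this bug they fail because the core is gone.
SOLR_HOST="localhost:8983"
CORE="mycoll_shard1_replica_n1"   # hypothetical core name
URL="http://${SOLR_HOST}/solr/admin/cores?action=REQUESTRECOVERY&core=${CORE}"
echo "$URL"
# curl "$URL"
```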







[jira] [Commented] (SOLR-11601) geodist fails for some fields when field is in parenthesis instead of sfield param

2018-03-21 Thread Lucene/Solr QA (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407520#comment-16407520
 ] 

Lucene/Solr QA commented on SOLR-11601:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 25m  1s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-11601 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12914932/SOLR-11601.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 
21:10:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / d2ef38d |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on April 8 2014 |
| Default Java | 1.8.0_144 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/8/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/8/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/8/console 
|
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> geodist fails for some fields when field is in parenthesis instead of sfield 
> param
> --
>
> Key: SOLR-11601
> URL: https://issues.apache.org/jira/browse/SOLR-11601
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Affects Versions: 6.6
>Reporter: Clemens Wyss
>Assignee: David Smiley
>Priority: Minor
> Attachments: SOLR-11601.patch, SOLR-11601.patch
>
>
> I'm switching my schemas from the deprecated solr.LatLonType to 
> solr.LatLonPointSpatialField.
> Now my sort query (which used to work with solr.LatLonType):
> *sort=geodist(b4_location__geo_si,47.36667,8.55) asc*
> raises the error
> {color:red}*"sort param could not be parsed as a query, and is not a field 
> that exists in the index: geodist(b4_location__geo_si,47.36667,8.55)"*{color}
> Invoking sort using the syntax 
> {color:#14892c}sfield=b4_location__geo_si&pt=47.36667,8.55&sort=geodist() asc
> works as expected, though.{color}
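A sketch of the two sort forms as query-parameter strings (the field name and coordinates are the reporter's; everything else is illustrative):

```shell
# Failing form: the field is passed inline in the geodist() function call.
SORT_INLINE='geodist(b4_location__geo_si,47.36667,8.55) asc'
# Working form: field and point supplied via the sfield/pt parameters,
# with geodist() taking no arguments (space URL-encoded as %20).
SORT_PARAMS='sfield=b4_location__geo_si&pt=47.36667,8.55&sort=geodist()%20asc'
echo "q=*:*&sort=${SORT_INLINE}"
echo "q=*:*&${SORT_PARAMS}"
```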






[JENKINS] Lucene-Solr-repro - Build # 304 - Unstable

2018-03-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/304/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/18/consoleText

[repro] Revision: 7735f2c911c8c3fb7cc02f9ab9c8dbe4fb70f9cd

[repro] Repro line:  ant test  -Dtestcase=AtomicUpdateProcessorFactoryTest 
-Dtests.method=testMultipleThreads -Dtests.seed=8E8B8A1416CF0E60 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=ar-KW -Dtests.timezone=America/Indiana/Vincennes 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
d2ef38d7845d967e259483bbf4a22cf8abec1309
[repro] git fetch
[repro] git checkout 7735f2c911c8c3fb7cc02f9ab9c8dbe4fb70f9cd

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   AtomicUpdateProcessorFactoryTest
[repro] ant compile-test

[...truncated 3310 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.AtomicUpdateProcessorFactoryTest" -Dtests.showOutput=onerror  
-Dtests.seed=8E8B8A1416CF0E60 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=ar-KW 
-Dtests.timezone=America/Indiana/Vincennes -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 267 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: 
org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest
[repro] git checkout d2ef38d7845d967e259483bbf4a22cf8abec1309

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]
