Re: [JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 1 - Failure

2017-07-04 Thread Anshum Gupta
I guess we need to bump the version here to 7.1 as we have a 7.0 branch out.

On Tue, Jul 4, 2017 at 9:57 PM Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/1/
>
> No tests ran.
>
> Build Log:
> [...truncated 25711 lines...]
> prepare-release-no-sign:
> [mkdir] Created dir:
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist
>  [copy] Copying 476 files to
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/lucene
>  [copy] Copying 215 files to
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr
>[smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
>[smoker] NOTE: output encoding is UTF-8
>[smoker]
>[smoker] Load release URL
> "file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/"...
>[smoker]
>[smoker] Test Lucene...
>[smoker]   test basics...
>[smoker]   get KEYS
>[smoker] 0.2 MB in 0.01 sec (25.3 MB/sec)
>[smoker]   check changes HTML...
>[smoker]   download lucene-7.0.0-src.tgz...
>[smoker] 29.5 MB in 0.03 sec (879.7 MB/sec)
>[smoker] verify md5/sha1 digests
>[smoker]   download lucene-7.0.0.tgz...
>[smoker] 69.0 MB in 0.08 sec (908.2 MB/sec)
>[smoker] verify md5/sha1 digests
>[smoker]   download lucene-7.0.0.zip...
>[smoker] 79.3 MB in 0.09 sec (881.0 MB/sec)
>[smoker] verify md5/sha1 digests
>[smoker]   unpack lucene-7.0.0.tgz...
>[smoker] verify JAR metadata/identity/no javax.* or java.*
> classes...
>[smoker] test demo with 1.8...
>[smoker]   got 6165 hits for query "lucene"
>[smoker] checkindex with 1.8...
>[smoker] check Lucene's javadoc JAR
>[smoker]   unpack lucene-7.0.0.zip...
>[smoker] verify JAR metadata/identity/no javax.* or java.*
> classes...
>[smoker] test demo with 1.8...
>[smoker]   got 6165 hits for query "lucene"
>[smoker] checkindex with 1.8...
>[smoker] check Lucene's javadoc JAR
>[smoker]   unpack lucene-7.0.0-src.tgz...
>[smoker] make sure no JARs/WARs in src dist...
>[smoker] run "ant validate"
>[smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
>[smoker] test demo with 1.8...
>[smoker]   got 213 hits for query "lucene"
>[smoker] checkindex with 1.8...
>[smoker] generate javadocs w/ Java 8...
>[smoker]
>[smoker] Crawl/parse...
>[smoker]
>[smoker] Verify...
>[smoker]   confirm all releases have coverage in
> TestBackwardsCompatibility
>[smoker] find all past Lucene releases...
>[smoker] run TestBackwardsCompatibility..
>[smoker] success!
>[smoker]
>[smoker] Test Solr...
>[smoker]   test basics...
>[smoker]   get KEYS
>[smoker] 0.2 MB in 0.00 sec (113.9 MB/sec)
>[smoker]   check changes HTML...
>[smoker] Traceback (most recent call last):
>[smoker]   File
> "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py",
> line 1484, in 
>[smoker] main()
>[smoker]   File
> "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py",
> line 1428, in main
>[smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir,
> c.is_signed, ' '.join(c.test_args))
>[smoker]   File
> "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py",
> line 1470, in smokeTest
>[smoker] checkSigs('solr', solrPath, version, tmpDir, isSigned)
>[smoker]   File
> "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py",
> line 370, in checkSigs
>[smoker] testChanges(project, version, changesURL)
>[smoker]   File
> "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py",
> line 418, in testChanges
>[smoker] checkChangesContent(s, version, changesURL, project, True)
>[smoker]   File
> "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py",
> line 475, in checkChangesContent
>[smoker] raise RuntimeError('Future release %s is greater than %s
> in %s' % (release, version, name))
>[smoker] RuntimeError: Future release 7.1.0 is greater than 7.0.0 in
> file:///x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr/changes/Changes.html
>
> BUILD FAILED
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/build.xml:606:
> exec returned: 1
>
> Total time: 38 minutes 1 second
> Build step 'Invoke Ant' marked build as failure
> Email was triggered for: Failure - 

[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 1 - Failure

2017-07-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/1/

No tests ran.

Build Log:
[...truncated 25711 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (25.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 29.5 MB in 0.03 sec (879.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 69.0 MB in 0.08 sec (908.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 79.3 MB in 0.09 sec (881.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6165 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6165 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (113.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py",
 line 1484, in 
   [smoker] main()
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py",
 line 1428, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py",
 line 1470, in smokeTest
   [smoker] checkSigs('solr', solrPath, version, tmpDir, isSigned)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py",
 line 370, in checkSigs
   [smoker] testChanges(project, version, changesURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py",
 line 418, in testChanges
   [smoker] checkChangesContent(s, version, changesURL, project, True)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py",
 line 475, in checkChangesContent
   [smoker] raise RuntimeError('Future release %s is greater than %s in %s' 
% (release, version, name))
   [smoker] RuntimeError: Future release 7.1.0 is greater than 7.0.0 in 
file:///x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr/changes/Changes.html

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/build.xml:606: 
exec returned: 1

Total time: 38 minutes 1 second
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS-EA] Lucene-Solr-7.x-Linux (32bit/jdk-9-ea+175) - Build # 7 - Failure!

2017-07-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/7/
Java: 32bit/jdk-9-ea+175 -client -XX:+UseG1GC --illegal-access=deny

2 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest.test

Error Message:
Timed out waiting for replica core_node5 (1499228299714) to replicate from 
leader core_node1 (1499228327052)

Stack Trace:
java.lang.AssertionError: Timed out waiting for replica core_node5 
(1499228299714) to replicate from leader core_node1 (1499228327052)
at 
__randomizedtesting.SeedInfo.seed([1F333B2ED96D5800:976704F4779135F8]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForReplicationFromReplicas(AbstractFullDistribZkTestBase.java:2133)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest.test(ChaosMonkeySafeLeaderWithPullReplicasTest.java:209)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 1 - Unstable

2017-07-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/1/

3 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test

Error Message:
The Monkey ran for over 45 seconds and no jetties were stopped - this is worth 
investigating!

Stack Trace:
java.lang.AssertionError: The Monkey ran for over 45 seconds and no jetties 
were stopped - this is worth investigating!
at 
__randomizedtesting.SeedInfo.seed([F025D098B63D7EEE:7871EF4218C11316]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.ChaosMonkey.stopTheMonkey(ChaosMonkey.java:587)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test(ChaosMonkeySafeLeaderTest.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-10983) Fix DOWNNODE -> queue-work explosion

2017-07-04 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074225#comment-16074225
 ] 

Scott Blum commented on SOLR-10983:
---

Thanks!  Will do

> Fix DOWNNODE -> queue-work explosion
> 
>
> Key: SOLR-10983
> URL: https://issues.apache.org/jira/browse/SOLR-10983
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
> Attachments: SOLR-10983.patch
>
>
> Every DOWNNODE command enqueues N copies of itself into queue-work, where N 
> is the number of collections affected by the DOWNNODE.
> This rarely matters in practice, because queue-work gets immediately dumped; 
> however, if anything throws an exception (such as ZK bad version), we don't 
> clear queue-work. Then the next time through the loop we run the expensive 
> DOWNNODE command potentially hundreds of times.
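
The failure mode in that description can be sketched in a few lines of Java. This is purely illustrative pseudocode, not the actual Overseer implementation; every name below (queueWork, writeQueuedCommands, and so on) is hypothetical.

{code:java}
// Purely illustrative sketch of the problem described above; names are
// hypothetical, not the real Solr Overseer code.
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

class DownNodeExplosionSketch {
  static final Queue<String> queueWork = new ArrayDeque<>();

  // One copy of the same DOWNNODE message is enqueued per affected collection,
  // so a node hosting N collections produces N identical work items.
  static void enqueueDownNode(String downNodeMsg, List<String> collections) {
    for (String collection : collections) {
      queueWork.offer(downNodeMsg);
    }
  }

  static void loopIteration(List<String> collections) {
    enqueueDownNode("DOWNNODE:node1", collections);
    try {
      writeQueuedCommands();   // may throw, e.g. on a ZK bad-version error
      queueWork.clear();       // only reached on success...
    } catch (RuntimeException e) {
      // ...so on failure the N copies survive and the expensive DOWNNODE
      // work is repeated on the next pass through the loop.
    }
  }

  static void writeQueuedCommands() { /* flush queued commands to ZooKeeper */ }
}
{code}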






[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 3 - Still Unstable!

2017-07-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/3/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.lucene.index.TestMixedDocValuesUpdates.testManyReopensAndFields

Error Message:
invalid binary value for doc=0, field=f2, reader=_f(7.0.0):c68 expected:<5> but 
was:<4>

Stack Trace:
java.lang.AssertionError: invalid binary value for doc=0, field=f2, 
reader=_f(7.0.0):c68 expected:<5> but was:<4>
at 
__randomizedtesting.SeedInfo.seed([CFF4D78D701ACBC9:F908B5A2F1EFA8D5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.lucene.index.TestMixedDocValuesUpdates.testManyReopensAndFields(TestMixedDocValuesUpdates.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 1498 lines...]
   [junit4] Suite: org.apache.lucene.index.TestMixedDocValuesUpdates
   [junit4] IGNOR/A 0.00s J1 | TestMixedDocValuesUpdates.testTonsOfUpdates
   [junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestMixedDocValuesUpdates 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_131) - Build # 20061 - Still Unstable!

2017-07-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20061/
Java: 64bit/jdk1.8.0_131 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.test

Error Message:
Could not find collection : movereplicatest_coll

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : 
movereplicatest_coll
at 
__randomizedtesting.SeedInfo.seed([41EB0B5C781BBEEC:C9BF3486D6E7D314]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:194)
at 
org.apache.solr.cloud.MoveReplicaTest.getRandomReplica(MoveReplicaTest.java:185)
at org.apache.solr.cloud.MoveReplicaTest.test(MoveReplicaTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.schema.TestUseDocValuesAsStored.testRandomSingleAndMultiValued

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during 

[jira] [Commented] (SOLR-10983) Fix DOWNNODE -> queue-work explosion

2017-07-04 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074176#comment-16074176
 ] 

Shalin Shekhar Mangar commented on SOLR-10983:
--

On second thought, creating a batch enqueue command is not so straightforward, 
and the callback is called once per enqueue as per the contract of 
ZkWriteCallback, so it is technically not a bug. I am fine with your solution 
as it exists. +1 to commit. Please make sure it is backported to branch_7x 
and branch_7_0 so that it makes it into the 7.0 release.

> Fix DOWNNODE -> queue-work explosion
> 
>
> Key: SOLR-10983
> URL: https://issues.apache.org/jira/browse/SOLR-10983
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
> Attachments: SOLR-10983.patch
>
>
> Every DOWNNODE command enqueues N copies of itself into queue-work, where N 
> is the number of collections affected by the DOWNNODE.
> This rarely matters in practice, because queue-work gets immediately dumped; 
> however, if anything throws an exception (such as ZK bad version), we don't 
> clear queue-work. Then the next time through the loop we run the expensive 
> DOWNNODE command potentially hundreds of times.






[jira] [Commented] (SOLR-10983) Fix DOWNNODE -> queue-work explosion

2017-07-04 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074173#comment-16074173
 ] 

Shalin Shekhar Mangar commented on SOLR-10983:
--

Nice catch!

Your patch solves another problem -- today, if an exception happens, we run 
through the items in the work-queue and then the last item from the 
state-update-queue (the one during which the exception happened), so we run 
the same item twice.

Considering that DOWNNODE is the only command that enqueues multiple 
ZkWriteCommands, I think we should add a method to ZkStateWriter which calls 
enqueue only once for the entire batch. That and your patch solve all the 
problems nicely, i.e.:
# DOWNNODE creating multiple work-queue items
# Exceptions not clearing the work queue
# Overseer executing the same item twice, from the work queue and the state 
update queue, on an exception
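
A minimal Java sketch of the batch-enqueue idea floated above (and reconsidered in the follow-up comment). The class, method, and callback shapes here are assumptions for illustration only, not the actual Solr ZkStateWriter API.

{code:java}
// Hypothetical sketch only; ZkWriteCommand, ZkWriteCallback and enqueueBatch
// as shown here are illustrative, not the real Solr ZkStateWriter API.
import java.util.ArrayList;
import java.util.List;

class BatchingStateWriterSketch {
  interface ZkWriteCommand {}                    // stand-in for the real command type
  interface ZkWriteCallback { void onEnqueue(); }

  private final List<List<ZkWriteCommand>> workQueue = new ArrayList<>();

  // Enqueue every command produced by one message as a single work item,
  // so a DOWNNODE affecting N collections yields one entry instead of N,
  // and the callback fires once per batch rather than once per command.
  void enqueueBatch(List<ZkWriteCommand> commands, ZkWriteCallback callback) {
    workQueue.add(new ArrayList<>(commands));
    callback.onEnqueue();
  }
}
{code}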

> Fix DOWNNODE -> queue-work explosion
> 
>
> Key: SOLR-10983
> URL: https://issues.apache.org/jira/browse/SOLR-10983
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
> Attachments: SOLR-10983.patch
>
>
> Every DOWNNODE command enqueues N copies of itself into queue-work, where N 
> is the number of collections affected by the DOWNNODE.
> This rarely matters in practice, because queue-work gets immediately dumped; 
> however, if anything throws an exception (such as ZK bad version), we don't 
> clear queue-work. Then the next time through the loop we run the expensive 
> DOWNNODE command potentially hundreds of times.






[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-9-ea+175) - Build # 6 - Still Unstable!

2017-07-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/6/
Java: 64bit/jdk-9-ea+175 -XX:-UseCompressedOops -XX:+UseG1GC 
--illegal-access=deny

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest

Error Message:
9 threads leaked from SUITE scope at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest: 1) Thread[id=18420, 
name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@9/java.lang.Thread.run(Thread.java:844)2) 
Thread[id=18553, name=zkCallback-3901-thread-2, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@9/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1085)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 at java.base@9/java.lang.Thread.run(Thread.java:844)3) 
Thread[id=18554, name=zkCallback-3901-thread-3, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@9/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1085)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 at java.base@9/java.lang.Thread.run(Thread.java:844)4) 
Thread[id=18423, name=zkCallback-3901-thread-1, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@9/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1085)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 at java.base@9/java.lang.Thread.run(Thread.java:844)5) 
Thread[id=18422, 
name=TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[AB6FAE1FCDA76A4F]-EventThread,
 state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)   
  at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062)
 at 
java.base@9/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
 at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)6) 
Thread[id=18580, name=zkCallback-3901-thread-4, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@9/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 

[jira] [Commented] (SOLR-10123) Analytics Component 2.0

2017-07-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074149#comment-16074149
 ] 

ASF GitHub Bot commented on SOLR-10123:
---

Github user dennisgove commented on the issue:

https://github.com/apache/lucene-solr/pull/215
  
Where are point fields specifically handled, or do they not need to be 
specifically handled like other Trie fields?


> Analytics Component 2.0
> ---
>
> Key: SOLR-10123
> URL: https://issues.apache.org/jira/browse/SOLR-10123
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Houston Putman
>  Labels: features
> Attachments: SOLR-10123.patch, SOLR-10123.patch, SOLR-10123.patch
>
>
> A completely redesigned Analytics Component, introducing the following 
> features:
> * Support for distributed collections
> * New JSON request language, and response format that fits JSON better.
> * Faceting over mapping functions in addition to fields (Value Faceting)
> * PivotFaceting with ValueFacets
> * More advanced facet sorting
> * Support for PointField types
> * Expressions over multi-valued fields
> * New types of mapping functions
> ** Logical
> ** Conditional
> ** Comparison
> * Concurrent request execution
> * Custom user functions, defined within the request
> Fully backwards compatible with the original Analytics Component with the 
> following exceptions:
> * All fields used must have doc-values enabled
> * Expression results can no longer be used when defining Range and Query 
> facets
> * The reverse(string) mapping function is no longer a native function






[GitHub] lucene-solr issue #215: SOLR-10123: Fix to better support numeric PointField...

2017-07-04 Thread dennisgove
Github user dennisgove commented on the issue:

https://github.com/apache/lucene-solr/pull/215
  
Where are point fields specifically handled, or do they not need to be 
specifically handled like other Trie fields?





[jira] [Commented] (SOLR-10456) timeout-related setters should be deprecated in favor of SolrClientBuilder methods

2017-07-04 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074143#comment-16074143
 ] 

Jason Gerlowski commented on SOLR-10456:


No problem Anshum; thanks for the review.

I've created SOLR-11004 to address the duplication issue you mentioned above. 
I have a patch up for review over there; if you'd like to see it merged, it's 
ready for whoever would like to take a look.

> timeout-related setters should be deprecated in favor of SolrClientBuilder 
> methods
> --
>
> Key: SOLR-10456
> URL: https://issues.apache.org/jira/browse/SOLR-10456
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: Jason Gerlowski
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 7.0
>
> Attachments: SOLR-10456.patch, SOLR-10456.patch, SOLR-10456.patch, 
> SOLR-10456.patch
>
>
> Now that builders are in place for {{SolrClients}}, the setters used in each 
> {{SolrClient}} can be deprecated, and their functionality moved over to the 
> Builders. This change brings a few benefits:
> - unifies {{SolrClient}} configuration under the new Builders. It'll be nice 
> to have all the knobs and levers used to tweak {{SolrClient}}s available in 
> a single place (the Builders).
> - reduces {{SolrClient}} thread-safety concerns. Currently, clients are 
> mutable. Using some {{SolrClient}} setters can result in erratic and "trappy" 
> behavior when the clients are used across multiple threads.
> This subtask endeavors to change this behavior for the 
> {{setConnectionTimeout}} and {{setSoTimeout}} setters on all {{SolrClient}} 
> implementations.
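
A short usage sketch of the intent described above, assuming the 7.x SolrJ builder methods named elsewhere in this thread (withConnectionTimeout, withSocketTimeout); exact constructors and signatures may differ slightly.

{code:java}
import org.apache.solr.client.solrj.impl.HttpSolrClient;

// Sketch of builder-first configuration (assuming the 7.x builder methods
// named in this thread; exact signatures may differ slightly).
public class BuilderTimeoutSketch {
  public static void main(String[] args) {
    HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/collection1")
            .withConnectionTimeout(5000)   // ms; replaces the deprecated setConnectionTimeout
            .withSocketTimeout(30000)      // ms; replaces the deprecated setSoTimeout
            .build();
    // Timeouts are fixed at build time, so the client can be shared across
    // threads without anyone mutating them underneath other callers.
  }
}
{code}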






[jira] [Comment Edited] (SOLR-11004) Consolidate SolrClient Builder code in abstract parent class

2017-07-04 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074140#comment-16074140
 ] 

Jason Gerlowski edited comment on SOLR-11004 at 7/5/17 12:57 AM:
-

Patch attached; a few notes:

- all SolrClient Builders now extend {{SolrClientBuilder}}
- was able to move 4 setters: {{withHttpClient}}, {{withResponseParser}}, 
{{withConnectionTimeout}}, and {{withSocketTimeout}}.  This number will grow 
pretty quickly though as the SolrClient setters gain Builder equivalent methods 
(see SOLR-8975) 
- A naive implementation would have the SolrClientBuilder setters return a 
SolrClientBuilder reference.  This introduces limitations on the order that 
setters can be called in.  This appears to be a well documented problem when 
creating Builders.  So I implemented the solution detailed 
[here|https://stackoverflow.com/questions/17164375/subclassing-a-java-builder-class],
 which involves using generics to allow SolrClientBuilder to return a reference 
typed as the concrete class.

Tests and precommit pass.


was (Author: gerlowskija):
Patch attached; a few notes:

- all SolrClient Builders now extend {{SolrClientBuilder}}
- was able to move 4 setters: {{withHttpClient}}, {{withResponseParser}}, 
{{withConnectionTimeout}}, and {{withSocketTimeout}}.  This number will grow 
pretty quickly though as the SolrClient setters gain Builder equivalent methods 
(see SOLR-8975) 
- A naive implementation would have the SolrClientBuilder setters return a 
SolrClientBuilder reference.  This would cause problems if a Builder method 
chain attempted to call a subclass method after calling a SolrClientBuilder 
method.  This appears to be a well understood problem when creating Builders.  
So I implemented the solution detailed 
[here|https://stackoverflow.com/questions/17164375/subclassing-a-java-builder-class],
 which involves using generics to allow SolrClientBuilder to return a reference 
typed as the concrete class.

> Consolidate SolrClient Builder code in abstract parent class
> 
>
> Key: SOLR-11004
> URL: https://issues.apache.org/jira/browse/SOLR-11004
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: master (8.0)
>Reporter: Jason Gerlowski
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: SOLR-11004.patch
>
>
> As [~anshumg] pointed out in SOLR-10456, the Builder code for each SolrClient 
> has a lot of duplication in it.
> For example, each SolrClient allows configuration of the connection timeout: 
> all 4 builders have a field to store this value, all 4 builders have a 
> {{withConnectionTimeout}} method to set this value, and all 4 builders have 
> very similar Javadocs documenting what this value can be used for.
> The same can be said for 5 or 6 other properties common to most/all 
> SolrClients.
> This duplication could be removed by creating an abstract SolrClientBuilder 
> class, which each of the specific Builders extends.






[jira] [Updated] (SOLR-11004) Consolidate SolrClient Builder code in abstract parent class

2017-07-04 Thread Jason Gerlowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski updated SOLR-11004:
---
Attachment: SOLR-11004.patch

Patch attached; a few notes:

- all SolrClient Builders now extend {{SolrClientBuilder}}
- was able to move 4 setters: {{withHttpClient}}, {{withResponseParser}}, 
{{withConnectionTimeout}}, and {{withSocketTimeout}}.  This number will grow 
pretty quickly though as the SolrClient setters gain Builder equivalent methods 
(see SOLR-8975) 
- A naive implementation would have the SolrClientBuilder setters return a 
SolrClientBuilder reference.  This would cause problems if a Builder method 
chain attempted to call a subclass method after calling a SolrClientBuilder 
method.  This appears to be a well understood problem when creating Builders.  
So I implemented the solution detailed 
[here|https://stackoverflow.com/questions/17164375/subclassing-a-java-builder-class],
 which involves using generics to allow SolrClientBuilder to return a reference 
typed as the concrete class.
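
A minimal, self-contained sketch of the generics approach described above; the class and method names below are illustrative, not the actual SolrJ builder hierarchy.

{code:java}
// Illustrative names only; this is not the actual SolrJ builder hierarchy.
abstract class ClientBuilderSketch<B extends ClientBuilderSketch<B>> {
  protected int connectionTimeoutMillis = 15000;
  protected int socketTimeoutMillis = 120000;

  // Shared setters return the *concrete* builder type via self(), so
  // subclass-specific methods can still be chained after them in any order.
  public B withConnectionTimeout(int millis) {
    this.connectionTimeoutMillis = millis;
    return self();
  }

  public B withSocketTimeout(int millis) {
    this.socketTimeoutMillis = millis;
    return self();
  }

  protected abstract B self();
}

final class HttpClientBuilderSketch extends ClientBuilderSketch<HttpClientBuilderSketch> {
  private String baseUrl;

  public HttpClientBuilderSketch withBaseUrl(String url) {
    this.baseUrl = url;
    return this;
  }

  @Override
  protected HttpClientBuilderSketch self() {
    return this;
  }
}

class BuilderChainDemo {
  public static void main(String[] args) {
    // A subclass method chains cleanly after the shared timeout setters:
    HttpClientBuilderSketch builder = new HttpClientBuilderSketch()
        .withConnectionTimeout(5000)
        .withSocketTimeout(30000)
        .withBaseUrl("http://localhost:8983/solr");
  }
}
{code}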

> Consolidate SolrClient Builder code in abstract parent class
> 
>
> Key: SOLR-11004
> URL: https://issues.apache.org/jira/browse/SOLR-11004
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: master (8.0)
>Reporter: Jason Gerlowski
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: SOLR-11004.patch
>
>
> As [~anshumg] pointed out in SOLR-10456, the Builder code for each SolrClient 
> has a lot of duplication in it.
> For example, each SolrClient allows configuration of the connection timeout: 
> all 4 builders have a field to store this value, all 4 builders have a 
> {{withConnectionTimeout}} method to set this value, and all 4 builders have 
> very similar Javadocs documenting what this value can be used for.
> The same can be said for 5 or 6 other properties common to most/all 
> SolrClients.
> This duplication could be removed by creating an abstract SolrClientBuilder 
> class, which each of the specific Builders extends.






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_131) - Build # 20060 - Still Unstable!

2017-07-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20060/
Java: 64bit/jdk1.8.0_131 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.CdcrBootstrapTest

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [SolrCore, 
MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.(SolrCore.java:1019)  at 
org.apache.solr.core.SolrCore.reload(SolrCore.java:636)  at 
org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1202)  at 
org.apache.solr.handler.IndexFetcher.lambda$reloadCore$0(IndexFetcher.java:900) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:349)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:709)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:934)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:843)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:969)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:904)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:91)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
  at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:745)  
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:726)  
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:507)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:378)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:322)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) 
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:395)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) 
 at org.eclipse.jetty.server.Server.handle(Server.java:534)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)  at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) 
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
  at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
  at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
  at org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 

[JENKINS] Lucene-Solr-Tests-master - Build # 1942 - Still Unstable

2017-07-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1942/

3 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:37057/f_/vk

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:37057/f_/vk
at 
__randomizedtesting.SeedInfo.seed([B15E4BE1A36CFDAB:390A743B0D909053]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:637)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:252)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1667)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1694)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:254)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-9-ea+175) - Build # 5 - Still Unstable!

2017-07-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/5/
Java: 64bit/jdk-9-ea+175 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 
--illegal-access=deny

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest

Error Message:
9 threads leaked from SUITE scope at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest: 1) 
Thread[id=10809, name=zkCallback-2629-thread-3, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@9/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1085)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 at java.base@9/java.lang.Thread.run(Thread.java:844)2) 
Thread[id=10462, name=zkCallback-2629-thread-1, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@9/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1085)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 at java.base@9/java.lang.Thread.run(Thread.java:844)3) 
Thread[id=10872, name=zkCallback-2629-thread-5, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@9/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1085)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 at java.base@9/java.lang.Thread.run(Thread.java:844)4) 
Thread[id=10888, name=zkCallback-2629-thread-6, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@9/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1085)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 at java.base@9/java.lang.Thread.run(Thread.java:844)5) 
Thread[id=10461, 
name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[63BE87287437BEC3]-EventThread,
 state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest]
 at java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)   
  at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062)
 at 
java.base@9/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
 

[jira] [Commented] (LUCENE-7882) Maybe expression compiler should cache recently compiled expressions?

2017-07-04 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074108#comment-16074108
 ] 

Michael McCandless commented on LUCENE-7882:


I ran my searcher instance with {{-XX:+PrintSafepointStatistics 
-XX:+PrintCodeCacheOnCompilation -XX:+PrintCompilation}} and I see the 
CodeCache getting close to full over time, e.g.:

{noformat}
CodeCache: size=245760Kb used=22Kb max_used=230003Kb free=15760Kb
[GC (Allocation Failure)  2872668K->1304388K(16581312K), 0.1226819 secs]
10708156 764218   4   
org.apache.lucene.expressions.js.JavascriptCompiler$CompiledExpression::evaluate
 (21 bytes)
10708156 764217   4   
org.apache.lucene.expressions.js.JavascriptCompiler$CompiledExpression::evaluate
 (21 bytes)
10708162 764219   4   
org.apache.lucene.expressions.js.JavascriptCompiler$CompiledExpression::evaluate
 (21 bytes)
CodeCache: size=245760Kb used=230003Kb max_used=230007Kb free=15756Kb
CodeCache: size=245760Kb used=230007Kb max_used=230011Kb free=15752Kb
CodeCache: size=245760Kb used=230011Kb max_used=230015Kb free=15748Kb
10708178 764220   4   
org.apache.lucene.expressions.js.JavascriptCompiler$CompiledExpression::evaluate
 (21 bytes)
CodeCache: size=245760Kb used=230015Kb max_used=230020Kb free=15744Kb
10708192 764221   4   
org.apache.lucene.expressions.js.JavascriptCompiler$CompiledExpression::evaluate
 (21 bytes)
{noformat}

And then periodically I see tons and tons of lines like this at once:

{noformat}
11108619 689344   4   
org.apache.lucene.expressions.js.JavascriptCompiler$CompiledExpression::evaluate
 (21 bytes)   made not entrant
11108619 689541   4   
org.apache.lucene.expressions.js.JavascriptCompiler$CompiledExpression::evaluate
 (21 bytes)   made not entrant
11108619 689540   4   
org.apache.lucene.expressions.js.JavascriptCompiler$CompiledExpression::evaluate
 (21 bytes)   made not entrant
11108619 689543   4   
org.apache.lucene.expressions.js.JavascriptCompiler$CompiledExpression::evaluate
 (21 bytes)   made not entrant
{noformat}

And also {{made zombie}}:

{noformat}
11236528 748217   4  (method)   made zombie
11236528 748210   4  (method)   made zombie
11236528 748211   4  (method)   made zombie
11236528 748207   4  (method)   made zombie
11236528 748206   4  (method)   made zombie
11236528 748203   4  (method)   made zombie
11236528 748200   4  (method)   made zombie
11236528 748198   4  (method)   made zombie
11236528 748196   4  (method)   made zombie
{noformat}

I think net/net Java is just working hard to clean up all the one-off compiled 
methods I was creating by not re-using my expression...
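
Just to make that concrete, here is a minimal sketch of what caching compiled 
expressions could look like -- the class name, the LRU bound and the 
synchronized {{LinkedHashMap}} are illustrative only, not necessarily how this 
issue should be fixed:

{code:java}
import java.text.ParseException;
import java.util.LinkedHashMap;
import java.util.Map;

import org.apache.lucene.expressions.Expression;
import org.apache.lucene.expressions.js.JavascriptCompiler;

/** Sketch: re-use compiled expressions instead of recompiling per request. */
public final class CachingExpressionCompiler {

  private static final int MAX_ENTRIES = 256; // illustrative bound

  // Access-ordered LinkedHashMap used as a small LRU cache, keyed by source text.
  private final Map<String, Expression> cache =
      new LinkedHashMap<String, Expression>(16, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<String, Expression> eldest) {
          return size() > MAX_ENTRIES;
        }
      };

  /** Compiles once per distinct source; later calls return the cached Expression. */
  public synchronized Expression compile(String source) throws ParseException {
    Expression expr = cache.get(source);
    if (expr == null) {
      expr = JavascriptCompiler.compile(source);
      cache.put(source, expr);
    }
    return expr;
  }
}
{code}

Compiled Expression instances carry no per-request state, so sharing them 
across requests should be safe; the win is that the JIT only ever sees a 
handful of generated {{evaluate}} methods instead of one per request.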

> Maybe expression compiler should cache recently compiled expressions?
> -
>
> Key: LUCENE-7882
> URL: https://issues.apache.org/jira/browse/LUCENE-7882
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/expressions
>Reporter: Michael McCandless
>
> I've been running search performance tests using a simple expression 
> ({{_score + ln(1000+unit_sales)}}) for sorting and hit this odd bottleneck:
> {noformat}
> "pool-1-thread-30" #70 prio=5 os_prio=0 tid=0x7eea7000a000 nid=0x1ea8a 
> waiting for monitor entry [0x7eea867dd000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at 
> org.apache.lucene.expressions.js.JavascriptCompiler$CompiledExpression.evaluate(_score
>  + ln(1000+unit_sales))
>   at 
> org.apache.lucene.expressions.ExpressionFunctionValues.doubleValue(ExpressionFunctionValues.java:49)
>   at 
> com.amazon.lucene.OrderedVELeafCollector.collectInternal(OrderedVELeafCollector.java:123)
>   at 
> com.amazon.lucene.OrderedVELeafCollector.collect(OrderedVELeafCollector.java:108)
>   at 
> org.apache.lucene.search.MultiCollectorManager$Collectors$LeafCollectors.collect(MultiCollectorManager.java:102)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:241)
>   at 
> org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:184)
>   at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
>   at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:658)
>   at org.apache.lucene.search.IndexSearcher$5.call(IndexSearcher.java:600)
>   at org.apache.lucene.search.IndexSearcher$5.call(IndexSearcher.java:597)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I couldn't see any 

[jira] [Created] (LUCENE-7899) Add "exists" query for doc values

2017-07-04 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-7899:
--

 Summary: Add "exists" query for doc values
 Key: LUCENE-7899
 URL: https://issues.apache.org/jira/browse/LUCENE-7899
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 7.1


I don't think we have a query today to efficiently test whether a doc values 
field exists (has any value) for each document in the index.

Now that we use iterators to access doc values, this should be an efficient 
query: we can return the DISI we get for the doc values.

ElasticSearch indexes its own field to record which field names occur in a 
document, so it's able to do "exists" for any field (not just doc values 
fields), but I think for doc values fields we can get this "for free".

I haven't started on this ... just wanted to open the issue first for 
discussion.
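
To make the DISI idea concrete, a rough sketch is below. The class name is 
made up, it is restricted to sorted numeric doc values for brevity, and real 
details (other doc values types, query caching, etc.) are left out:

{code:java}
import java.io.IOException;
import java.util.Objects;

import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.ConstantScoreScorer;
import org.apache.lucene.search.ConstantScoreWeight;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Scorer;
import org.apache.lucene.search.Weight;

/** Sketch: matches every document that has a value for a sorted numeric DV field. */
public final class DocValuesExistsQuerySketch extends Query {

  private final String field;

  public DocValuesExistsQuerySketch(String field) {
    this.field = Objects.requireNonNull(field);
  }

  @Override
  public Weight createWeight(IndexSearcher searcher, boolean needsScores, float boost) {
    return new ConstantScoreWeight(this, boost) {
      @Override
      public Scorer scorer(LeafReaderContext context) throws IOException {
        // Doc values are iterators as of 7.0, so the iterator itself is the DISI.
        DocIdSetIterator it = DocValues.getSortedNumeric(context.reader(), field);
        return new ConstantScoreScorer(this, score(), it);
      }
    };
  }

  @Override
  public String toString(String f) {
    return "DocValuesExistsQuerySketch(field=" + field + ")";
  }

  @Override
  public boolean equals(Object other) {
    return sameClassAs(other) && field.equals(((DocValuesExistsQuerySketch) other).field);
  }

  @Override
  public int hashCode() {
    return classHash() ^ field.hashCode();
  }
}
{code}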



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7898) Remove hasSegID from SegmentInfos

2017-07-04 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074104#comment-16074104
 ] 

Michael McCandless commented on LUCENE-7898:


+1

> Remove hasSegID from SegmentInfos
> -
>
> Key: LUCENE-7898
> URL: https://issues.apache.org/jira/browse/LUCENE-7898
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
> Fix For: 7.0, master (8.0)
>
> Attachments: LUCENE-7898.patch
>
>
> This is only necessary for backward compatibility with pre-5.3 indices, which 
> 7.0 does not need to support.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_131) - Build # 20059 - Still Unstable!

2017-07-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20059/
Java: 64bit/jdk1.8.0_131 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest

Error Message:
8 threads leaked from SUITE scope at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest: 1) 
Thread[id=17288, name=zkCallback-2241-thread-4, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:748)2) Thread[id=17335, 
name=zkCallback-2241-thread-5, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:748)3) Thread[id=16887, 
name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.lang.Thread.run(Thread.java:748)4) Thread[id=16888, 
name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[32A4053DA1B3B794]-SendThread(127.0.0.1:36823),
 state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest]  
   at java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
 at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:997)
 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1060)
5) Thread[id=17238, name=zkCallback-2241-thread-2, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:748)6) Thread[id=16890, 
name=zkCallback-2241-thread-1, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:748)7) Thread[id=16889, 
name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[32A4053DA1B3B794]-EventThread,
 state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest]
 at sun.misc.Unsafe.park(Native 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 4 - Unstable

2017-07-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/4/

4 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest.test

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:35192

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:35192
at 
__randomizedtesting.SeedInfo.seed([61D39AFAC83C524F:E987A52066C03FB7]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:637)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:252)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1667)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1694)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest.test(ChaosMonkeyNothingIsSafeWithPullReplicasTest.java:297)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 2 - Still Unstable!

2017-07-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/2/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.test

Error Message:
Could not find collection : movereplicatest_coll

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : 
movereplicatest_coll
at 
__randomizedtesting.SeedInfo.seed([DE2D989C82B0839D:5679A7462C4CEE65]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:194)
at 
org.apache.solr.cloud.MoveReplicaTest.getRandomReplica(MoveReplicaTest.java:185)
at org.apache.solr.cloud.MoveReplicaTest.test(MoveReplicaTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenRenew

Error Message:
expected:<200> but was:<403>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<403>
  

[jira] [Commented] (SOLR-10986) TestScoreJoinQPScore.testDeleteByScoreJoinQuery() failure: mismatch: '0'!='1' @ response/numFound

2017-07-04 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074084#comment-16074084
 ] 

Mikhail Khludnev commented on SOLR-10986:
-

# it's nice.
# it occurs when there are two segments
# deleteByQ starts to feed joinQ with segments one-by-one
# that makes the 1st phase search run on a single segment only, missing terms from the other 
segments.
h3. we can
# either revert SOLR-9127 (assuming it fixes the problem) and further pursue 
its aim separately
# or somehow detect the leaf segment scorer in JoinQP and get the enclosing parent 
searcher for the 1st phase search, which is a little bit inefficient (see the sketch below).

Opinions, proposals?
Thanks, [~steve_rowe] for the heads-up, and [~thelabdude] for the test SOLR-6357
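
For option 2, the sketch below only shows the mechanics of walking from a leaf 
context back up to a whole-index searcher; the helper is made up, and a real 
fix inside the join QParser would reuse the request's SolrIndexSearcher rather 
than build a new one:

{code:java}
import org.apache.lucene.index.IndexReaderContext;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.ReaderUtil;
import org.apache.lucene.search.IndexSearcher;

final class JoinParentSearcher {
  /** Walks up from the per-segment context we were handed and searches the whole index. */
  static IndexSearcher parentSearcherFor(LeafReaderContext leaf) {
    IndexReaderContext top = ReaderUtil.getTopLevelContext(leaf);
    // Building a throwaway searcher here is what makes this option "a little bit
    // inefficient"; inside Solr the enclosing SolrIndexSearcher should be reused.
    return new IndexSearcher(top);
  }
}
{code}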


> TestScoreJoinQPScore.testDeleteByScoreJoinQuery() failure: mismatch: '0'!='1' 
> @ response/numFound
> -
>
> Key: SOLR-10986
> URL: https://issues.apache.org/jira/browse/SOLR-10986
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6, 7.0, master (8.0), 7.1
>Reporter: Steve Rowe
>
> Reproduces for me on branch_6x but not on master, from 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3861/] - {{git 
> bisect}} blames commit {{c215c78}} on SOLR-9217:
> {noformat}
> Checking out Revision 9947a811e83cc0f848f9ddaa37a4137f19efff1a 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestScoreJoinQPScore -Dtests.method=testDeleteByScoreJoinQuery 
> -Dtests.seed=6DE98178CA5DE220 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=el-GR -Dtests.timezone=Asia/Vientiane -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.02s J1 | 
> TestScoreJoinQPScore.testDeleteByScoreJoinQuery <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: '0'!='1' 
> @ response/numFound
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([6DE98178CA5DE220:7A8B1D8F401EA807]:0)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:989)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:936)
>[junit4]>  at 
> org.apache.solr.search.join.TestScoreJoinQPScore.testDeleteByScoreJoinQuery(TestScoreJoinQPScore.java:125)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {t_description=BlockTreeOrds(blocksize=128), 
> title_stemmed=PostingsFormat(name=Memory doPackFST= false), 
> price_s=BlockTreeOrds(blocksize=128), name=BlockTreeOrds(blocksize=128), 
> id=BlockTreeOrds(blocksize=128), 
> text=PostingsFormat(name=LuceneVarGapFixedInterval), 
> movieId_s=BlockTreeOrds(blocksize=128), title=PostingsFormat(name=Memory 
> doPackFST= false), title_lettertok=BlockTreeOrds(blocksize=128), 
> productId_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128)))},
>  docValues:{}, maxPointsInLeafNode=166, maxMBSortInHeap=7.4808509338680995, 
> sim=RandomSimilarity(queryNorm=false,coord=yes): {}, locale=el-GR, 
> timezone=Asia/Vientiane
>[junit4]   2> NOTE: Linux 4.10.0-21-generic i386/Oracle Corporation 
> 1.8.0_131 (32-bit)/cpus=8,threads=1,free=159538432,total=510918656
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10986) TestScoreJoinQPScore.testDeleteByScoreJoinQuery() failure: mismatch: '0'!='1' @ response/numFound

2017-07-04 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-10986:

Affects Version/s: 7.1
   master (8.0)
   7.0
   6.6

> TestScoreJoinQPScore.testDeleteByScoreJoinQuery() failure: mismatch: '0'!='1' 
> @ response/numFound
> -
>
> Key: SOLR-10986
> URL: https://issues.apache.org/jira/browse/SOLR-10986
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6, 7.0, master (8.0), 7.1
>Reporter: Steve Rowe
>
> Reproduces for me on branch_6x but not on master, from 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3861/] - {{git 
> bisect}} blames commit {{c215c78}} on SOLR-9217:
> {noformat}
> Checking out Revision 9947a811e83cc0f848f9ddaa37a4137f19efff1a 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestScoreJoinQPScore -Dtests.method=testDeleteByScoreJoinQuery 
> -Dtests.seed=6DE98178CA5DE220 -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=el-GR -Dtests.timezone=Asia/Vientiane -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.02s J1 | 
> TestScoreJoinQPScore.testDeleteByScoreJoinQuery <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: mismatch: '0'!='1' 
> @ response/numFound
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([6DE98178CA5DE220:7A8B1D8F401EA807]:0)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:989)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:936)
>[junit4]>  at 
> org.apache.solr.search.join.TestScoreJoinQPScore.testDeleteByScoreJoinQuery(TestScoreJoinQPScore.java:125)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {t_description=BlockTreeOrds(blocksize=128), 
> title_stemmed=PostingsFormat(name=Memory doPackFST= false), 
> price_s=BlockTreeOrds(blocksize=128), name=BlockTreeOrds(blocksize=128), 
> id=BlockTreeOrds(blocksize=128), 
> text=PostingsFormat(name=LuceneVarGapFixedInterval), 
> movieId_s=BlockTreeOrds(blocksize=128), title=PostingsFormat(name=Memory 
> doPackFST= false), title_lettertok=BlockTreeOrds(blocksize=128), 
> productId_s=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128)))},
>  docValues:{}, maxPointsInLeafNode=166, maxMBSortInHeap=7.4808509338680995, 
> sim=RandomSimilarity(queryNorm=false,coord=yes): {}, locale=el-GR, 
> timezone=Asia/Vientiane
>[junit4]   2> NOTE: Linux 4.10.0-21-generic i386/Oracle Corporation 
> 1.8.0_131 (32-bit)/cpus=8,threads=1,free=159538432,total=510918656
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7837) Use indexCreatedVersionMajor to fail opening too old indices

2017-07-04 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074082#comment-16074082
 ] 

Simon Willnauer commented on LUCENE-7837:
-

+1 LGTM

> Use indexCreatedVersionMajor to fail opening too old indices
> 
>
> Key: LUCENE-7837
> URL: https://issues.apache.org/jira/browse/LUCENE-7837
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: LUCENE-7837.patch, LUCENE-7837.patch
>
>
> Even though in theory we only support reading indices created with version N 
> or N-1, in practice it is possible to run a forceMerge in order to make 
> Lucene willing to open the index, since we only record the version that wrote 
> segments and commit points. However, as of Lucene 7.0, we also record the 
> major version that was used to initially create the index, meaning we could 
> also fail to open N-2 indices that have only been merged with version N-1.
> The current state of things where we could read old data without knowing it 
> raises issues with everything that is performed on top of the codec API such 
> as analysis, input validation or norms encoding, especially now that we plan 
> to change the defaults (LUCENE-7730).
> For instance, we are only starting to reject broken offsets in term vectors 
> in Lucene 7. If we do not enforce the index to be created with either Lucene 
> 7 or 8 once we move to Lucene 8, then it means codecs could still be fed with 
> broken offsets, which is a pity since assuming that offsets go forward makes 
> things easier to encode and also potentially allows for better compression.
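
(For readers following along, the gist of the check being described is roughly 
the sketch below; the method name, message text and call site are illustrative 
and may differ from the attached patch.)

{code:java}
import org.apache.lucene.index.IndexFormatTooOldException;
import org.apache.lucene.index.SegmentInfos;
import org.apache.lucene.util.Version;

final class CreatedVersionCheck {
  /** Rejects indices whose creating major version is older than N-1. */
  static void checkIndexCreatedVersion(SegmentInfos infos, String resourceDescription)
      throws IndexFormatTooOldException {
    int created = infos.getIndexCreatedVersionMajor();
    if (created < Version.LATEST.major - 1) {
      throw new IndexFormatTooOldException(resourceDescription,
          "this index was created with Lucene " + created + ".x and can only be read by Lucene "
              + created + ".x or " + (created + 1) + ".x");
    }
  }
}
{code}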



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10879) DELETEREPLICA and DELETENODE commands should prevent data loss when replicationFactor==1

2017-07-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074073#comment-16074073
 ] 

ASF subversion and git services commented on SOLR-10879:


Commit 0324da8289e148d627b9a45c3105bab6ed7573e6 in lucene-solr's branch 
refs/heads/branch_7_0 from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0324da8 ]

SOLR-10879: Make sure we don't lose single replicas when deleting a node.


> DELETEREPLICA and DELETENODE commands should prevent data loss when 
> replicationFactor==1
> 
>
> Key: SOLR-10879
> URL: https://issues.apache.org/jira/browse/SOLR-10879
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6, 7.0, 6.7
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>
> There should be some level of protection against inadvertent data loss when 
> issuing these commands when replicationFactor is 1 - deleting a node or a 
> replica in this case will be equivalent to completely deleting some shards.
> This is further complicated by the replica types - there could still be 
> remaining replicas after the operation, but if they are all of PULL type then 
> none of them will ever become a shard leader.
> We could require that the command fail in such a case unless a boolean 
> option "force==true" is specified.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10879) DELETEREPLICA and DELETENODE commands should prevent data loss when replicationFactor==1

2017-07-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074072#comment-16074072
 ] 

ASF subversion and git services commented on SOLR-10879:


Commit 30352e72505dd33901158bf8fc76aa98861ab8cc in lucene-solr's branch 
refs/heads/branch_7x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=30352e7 ]

SOLR-10879: Make sure we don't lose single replicas when deleting a node.


> DELETEREPLICA and DELETENODE commands should prevent data loss when 
> replicationFactor==1
> 
>
> Key: SOLR-10879
> URL: https://issues.apache.org/jira/browse/SOLR-10879
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6, 7.0, 6.7
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>
> There should be some level of protection against inadvertent data loss when 
> issuing these commands when replicationFactor is 1 - deleting a node or a 
> replica in this case will be equivalent to completely deleting some shards.
> This is further complicated by the replica types - there could still be 
> remaining replicas after the operation, but if they are all of PULL type then 
> none of them will ever become a shard leader.
> We could require that the command fail in such a case unless a boolean 
> option "force==true" is specified.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_131) - Build # 4 - Still Unstable!

2017-07-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/4/
Java: 64bit/jdk1.8.0_131 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.test

Error Message:
Could not find collection : movereplicatest_coll

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : 
movereplicatest_coll
at 
__randomizedtesting.SeedInfo.seed([D0910DC9D3ED4782:58C532137D112A7A]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:194)
at 
org.apache.solr.cloud.MoveReplicaTest.getRandomReplica(MoveReplicaTest.java:185)
at org.apache.solr.cloud.MoveReplicaTest.test(MoveReplicaTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 11449 lines...]
   [junit4] Suite: org.apache.solr.cloud.MoveReplicaHDFSTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-10879) DELETEREPLICA and DELETENODE commands should prevent data loss when replicationFactor==1

2017-07-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16074070#comment-16074070
 ] 

ASF subversion and git services commented on SOLR-10879:


Commit cb23fa9b4efa5fc7c17f215f507901d459e9aa6f in lucene-solr's branch 
refs/heads/master from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cb23fa9 ]

SOLR-10879: Make sure we don't lose single replicas when deleting a node.


> DELETEREPLICA and DELETENODE commands should prevent data loss when 
> replicationFactor==1
> 
>
> Key: SOLR-10879
> URL: https://issues.apache.org/jira/browse/SOLR-10879
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6, 7.0, 6.7
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>
> There should be some level of protection against inadvertent data loss when 
> issuing these commands when replicationFactor is 1 - deleting a node or a 
> replica in this case will be equivalent to completely deleting some shards.
> This is further complicated by the replica types - there could still be 
> remaining replicas after the operation, but if they are all of PULL type then 
> none of them will ever become a shard leader.
> We could require that the command fail in such a case unless a boolean 
> option "force==true" is specified.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 1941 - Still unstable

2017-07-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1941/

5 tests failed.
FAILED:  
org.apache.lucene.index.TestBinaryDocValuesUpdates.testManyReopensAndFields

Error Message:
invalid value for doc=0, field=f4, 
reader=_16(8.0.0):C354:fieldInfosGen=1:dvGen=1 expected:<8> but was:<7>

Stack Trace:
java.lang.AssertionError: invalid value for doc=0, field=f4, 
reader=_16(8.0.0):C354:fieldInfosGen=1:dvGen=1 expected:<8> but was:<7>
at 
__randomizedtesting.SeedInfo.seed([400BB60842A011FD:76F7D427C35572E1]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.lucene.index.TestBinaryDocValuesUpdates.testManyReopensAndFields(TestBinaryDocValuesUpdates.java:844)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.analytics.NoFacetCloudTest

Error Message:
Could not find collection:collection1

Stack Trace:
java.lang.AssertionError: Could not find collection:collection1
at __randomizedtesting.SeedInfo.seed([237CFD7DB994CF3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 

[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 2 - Still Unstable!

2017-07-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/2/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:49755/collection1, 
http://127.0.0.1:49765/collection1, http://127.0.0.1:49760/collection1, 
http://127.0.0.1:49771/collection1]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:49755/collection1, 
http://127.0.0.1:49765/collection1, http://127.0.0.1:49760/collection1, 
http://127.0.0.1:49771/collection1]
at 
__randomizedtesting.SeedInfo.seed([2A662FE2EDBA5DDE:A141FC33ACBCF65A]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:957)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1332)
at 
org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey(ShardSplitTest.java:474)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_131) - Build # 20058 - Still unstable!

2017-07-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20058/
Java: 32bit/jdk1.8.0_131 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.test

Error Message:
Could not find collection : movereplicatest_coll

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : 
movereplicatest_coll
at 
__randomizedtesting.SeedInfo.seed([63042B0AB5101961:EB5014D01BEC7499]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:194)
at 
org.apache.solr.cloud.MoveReplicaTest.getRandomReplica(MoveReplicaTest.java:185)
at org.apache.solr.cloud.MoveReplicaTest.test(MoveReplicaTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 11434 lines...]
   [junit4] Suite: org.apache.solr.cloud.MoveReplicaHDFSTest
   [junit4]   2> Creating dataDir: 

[jira] [Updated] (SOLR-10742) SolrCores.getNamesForCore is quite inefficient and blocks other core operations

2017-07-04 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-10742:
--
Summary: SolrCores.getNamesForCore is quite inefficient and blocks other 
core operations  (was: SolrCores.getCoreNames is quite inefficient and blocks 
other core operations)

> SolrCores.getNamesForCore is quite inefficient and blocks other core 
> operations
> ---
>
> Key: SOLR-10742
> URL: https://issues.apache.org/jira/browse/SOLR-10742
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>
> SolrCores.getCoreNames iterates through all the cores to find all the aliases 
> to this core. It does this in a synchronized block which blocks other core 
> operations.
> For installations with many cores this can be a performance issue. I'm not 
> sure it makes sense to do it this way anyway, perhaps SolrCore should have a 
> list of its current aliases and we can be more efficient about this? Or 
> otherwise get this information in a less heavy-weight fashion?
> I'm assigning this to myself to keep track of it, but anyone who wants to 
> grab it please feel free.
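
For illustration only, here is a minimal sketch of the pattern described above; the
class, field, and lock names are hypothetical stand-ins, not the actual SolrCores code:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch: the lookup walks every registered core while holding the same lock that
// guards core create/unload, so each call is an O(n) scan that blocks other core ops.
class CoresSketch {
  private final Object modifyLock = new Object();   // hypothetical lock name
  private final Map<String, Object> cores;          // core name -> core (hypothetical)

  CoresSketch(Map<String, Object> cores) {
    this.cores = cores;
  }

  List<String> getNamesForCore(Object core) {
    List<String> names = new ArrayList<>();
    synchronized (modifyLock) {                     // blocks other core operations
      for (Map.Entry<String, Object> e : cores.entrySet()) {
        if (e.getValue() == core) {
          names.add(e.getKey());
        }
      }
    }
    return names;
  }
}
{code}

Keeping a per-core list of aliases, as the description suggests, would replace this
full scan with a short lookup that does not hold the global lock for the whole iteration.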






[jira] [Updated] (SOLR-10742) SolrCores.getNamesForCore is quite inefficient and blocks other core operations

2017-07-04 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-10742:
--
Description: 
SolrCores.getNamesForCore iterates through all the cores to find all the 
aliases to this core. It does this in a synchronized block which blocks other 
core operations.

For installations with many cores this can be a performance issue. I'm not sure 
it makes sense to do it this way anyway, perhaps SolrCore should have a list of 
its current aliases and we can be more efficient about this? Or otherwise get 
this information in a less heavy-weight fashion?

I'm assigning this to myself to keep track of it, but anyone who wants to grab 
it please feel free.



  was:
SolrCores.getCoreNames iterates through all the cores to find all the aliases 
to this core. It does this in a synchronized block which blocks other core 
operations.

For installations with many cores this can be a performance issue. I'm not sure 
it makes sense to do it this way anyway, perhaps SolrCore should have a list of 
its current aliases and we can be more efficient about this? Or otherwise get 
this information in a less heavy-weight fashion?

I'm assigning this to myself to keep track of it, but anyone who wants to grab 
it please feel free.




> SolrCores.getNamesForCore is quite inefficient and blocks other core 
> operations
> ---
>
> Key: SOLR-10742
> URL: https://issues.apache.org/jira/browse/SOLR-10742
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>
> SolrCores.getNamesForCore iterates through all the cores to find all the 
> aliases to this core. It does this in a synchronized block which blocks other 
> core operations.
> For installations with many cores this can be a performance issue. I'm not 
> sure it makes sense to do it this way anyway, perhaps SolrCore should have a 
> list of its current aliases and we can be more efficient about this? Or 
> otherwise get this information in a less heavy-weight fashion?
> I'm assigning this to myself to keep track of it, but anyone who wants to 
> grab it please feel free.






[jira] [Resolved] (SOLR-10878) MOVEREPLICA command may lose data when replicationFactor==1

2017-07-04 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-10878.
--
   Resolution: Fixed
Fix Version/s: (was: 6.7)
   master (8.0)

> MOVEREPLICA command may lose data when replicationFactor==1
> ---
>
> Key: SOLR-10878
> URL: https://issues.apache.org/jira/browse/SOLR-10878
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6, 7.0, 6.7
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Fix For: 7.0, master (8.0)
>
>
> Follow-up to SOLR-10704, a similar scenario occurs in {{MoveReplicaCmd}} when 
> replication factor is 1 - the only copy of the source replica may be deleted 
> while the target replica is still recovering.
> Also, even when replicationFactor > 1 but the only remaining replicas are of 
> the PULL type then leader election won't be able to find any replica to 
> become a leader for this shard, which will result in effective data loss for 
> that shard.
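
For illustration, a minimal sketch of the safe ordering this fix implies; the helper
names and signatures below are hypothetical stubs, not the actual MoveReplicaCmd code:

{code:java}
// Sketch: with replicationFactor == 1 the source replica holds the only copy of the
// data, so it must not be removed until the target replica has finished recovering.
class MoveReplicaSketch {
  void moveReplicaSafely(String collection, String shard, String source, String targetNode)
      throws Exception {
    String target = addReplica(collection, shard, targetNode);  // create the new copy first
    waitUntilActive(collection, shard, target, 120);             // block until recovery completes
    deleteReplica(collection, shard, source);                    // only now drop the old copy
  }

  // Stubs standing in for the real collection API calls (hypothetical signatures).
  String addReplica(String collection, String shard, String node) { return "core_node_new"; }
  void waitUntilActive(String collection, String shard, String replica, int timeoutSec) { }
  void deleteReplica(String collection, String shard, String replica) { }
}
{code}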






[jira] [Commented] (SOLR-10878) MOVEREPLICA command may lose data when replicationFactor==1

2017-07-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073975#comment-16073975
 ] 

ASF subversion and git services commented on SOLR-10878:


Commit a32ba2c9560cb1e6ad79854a382de668145994c4 in lucene-solr's branch 
refs/heads/branch_7_0 from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a32ba2c ]

SOLR-10878: MOVEREPLICA command may lose data when replicationFactor==1.


> MOVEREPLICA command may lose data when replicationFactor==1
> ---
>
> Key: SOLR-10878
> URL: https://issues.apache.org/jira/browse/SOLR-10878
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6, 7.0, 6.7
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Fix For: 7.0, master (8.0)
>
>
> Follow-up to SOLR-10704, a similar scenario occurs in {{MoveReplicaCmd}} when 
> replication factor is 1 - the only copy of the source replica may be deleted 
> while the target replica is still recovering.
> Also, even when replicationFactor > 1 but the only remaining replicas are of 
> the PULL type then leader election won't be able to find any replica to 
> become a leader for this shard, which will result in effective data loss for 
> that shard.






[jira] [Commented] (SOLR-10878) MOVEREPLICA command may lose data when replicationFactor==1

2017-07-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073976#comment-16073976
 ] 

ASF subversion and git services commented on SOLR-10878:


Commit 174d55f41486b5bc30661d116e1119abfcc30ba3 in lucene-solr's branch 
refs/heads/branch_7_0 from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=174d55f ]

SOLR-10878: Fix precommit.


> MOVEREPLICA command may lose data when replicationFactor==1
> ---
>
> Key: SOLR-10878
> URL: https://issues.apache.org/jira/browse/SOLR-10878
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6, 7.0, 6.7
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Fix For: 7.0, master (8.0)
>
>
> Follow-up to SOLR-10704, a similar scenario occurs in {{MoveReplicaCmd}} when 
> replication factor is 1 - the only copy of the source replica may be deleted 
> while the target replica is still recovering.
> Also, even when replicationFactor > 1 but the only remaining replicas are of 
> the PULL type then leader election won't be able to find any replica to 
> become a leader for this shard, which will result in effective data loss for 
> that shard.






[jira] [Commented] (SOLR-10878) MOVEREPLICA command may lose data when replicationFactor==1

2017-07-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073972#comment-16073972
 ] 

ASF subversion and git services commented on SOLR-10878:


Commit ce6a82e2ddbb916d06ce6792afb11e8bfa405db1 in lucene-solr's branch 
refs/heads/branch_7x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ce6a82e ]

SOLR-10878: MOVEREPLICA command may lose data when replicationFactor==1.


> MOVEREPLICA command may lose data when replicationFactor==1
> ---
>
> Key: SOLR-10878
> URL: https://issues.apache.org/jira/browse/SOLR-10878
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6, 7.0, 6.7
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Fix For: 7.0, master (8.0)
>
>
> Follow-up to SOLR-10704, a similar scenario occurs in {{MoveReplicaCmd}} when 
> replication factor is 1 - the only copy of the source replica may be deleted 
> while the target replica is still recovering.
> Also, even when replicationFactor > 1 but the only remaining replicas are of 
> the PULL type then leader election won't be able to find any replica to 
> become a leader for this shard, which will result in effective data loss for 
> that shard.






[jira] [Commented] (SOLR-10878) MOVEREPLICA command may lose data when replicationFactor==1

2017-07-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073973#comment-16073973
 ] 

ASF subversion and git services commented on SOLR-10878:


Commit 478ecba4cad9bb829ef6bf30354ba29823057916 in lucene-solr's branch 
refs/heads/branch_7x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=478ecba ]

SOLR-10878: Fix precommit.


> MOVEREPLICA command may lose data when replicationFactor==1
> ---
>
> Key: SOLR-10878
> URL: https://issues.apache.org/jira/browse/SOLR-10878
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6, 7.0, 6.7
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Fix For: 7.0, master (8.0)
>
>
> Follow-up to SOLR-10704, a similar scenario occurs in {{MoveReplicaCmd}} when 
> replication factor is 1 - the only copy of the source replica may be deleted 
> while the target replica is still recovering.
> Also, even when replicationFactor > 1 but the only remaining replicas are of 
> the PULL type then leader election won't be able to find any replica to 
> become a leader for this shard, which will result in effective data loss for 
> that shard.






[JENKINS] Lucene-Solr-Tests-master - Build # 1940 - Still Failing

2017-07-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1940/

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestPullReplicaErrorHandling

Error Message:
ObjectTracker found 4 object(s) that were not released!!! 
[MockDirectoryWrapper, SolrCore, MockDirectoryWrapper, MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:478)  
at org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:928)  at 
org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:843)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:969)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:904)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:91)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
  at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:745)  
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:726)  
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:507)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:378)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:322)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) 
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:395)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) 
 at org.eclipse.jetty.server.Server.handle(Server.java:534)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)  at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) 
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
  at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
  at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:1019)  at 
org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:843)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:969)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:904)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:91)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
  at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:745)  
at 

[jira] [Commented] (SOLR-10878) MOVEREPLICA command may lose data when replicationFactor==1

2017-07-04 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073963#comment-16073963
 ] 

Andrzej Bialecki  commented on SOLR-10878:
--

[~mkhludnev]: fixed, thanks!

> MOVEREPLICA command may lose data when replicationFactor==1
> ---
>
> Key: SOLR-10878
> URL: https://issues.apache.org/jira/browse/SOLR-10878
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6, 7.0, 6.7
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Fix For: 7.0, 6.7
>
>
> Follow-up to SOLR-10704, a similar scenario occurs in {{MoveReplicaCmd}} when 
> replication factor is 1 - the only copy of the source replica may be deleted 
> while the target replica is still recovering.
> Also, even when replicationFactor > 1 but the only remaining replicas are of 
> the PULL type then leader election won't be able to find any replica to 
> become a leader for this shard, which will result in effective data loss for 
> that shard.






[jira] [Commented] (SOLR-10878) MOVEREPLICA command may lose data when replicationFactor==1

2017-07-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073962#comment-16073962
 ] 

ASF subversion and git services commented on SOLR-10878:


Commit ddfa074214dc1e1a3aa53fdcb387796aadbcb914 in lucene-solr's branch 
refs/heads/master from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ddfa074 ]

SOLR-10878: Fix precommit.


> MOVEREPLICA command may lose data when replicationFactor==1
> ---
>
> Key: SOLR-10878
> URL: https://issues.apache.org/jira/browse/SOLR-10878
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6, 7.0, 6.7
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Fix For: 7.0, 6.7
>
>
> Follow-up to SOLR-10704, a similar scenario occurs in {{MoveReplicaCmd}} when 
> replication factor is 1 - the only copy of the source replica may be deleted 
> while the target replica is still recovering.
> Also, even when replicationFactor > 1 but the only remaining replicas are of 
> the PULL type then leader election won't be able to find any replica to 
> become a leader for this shard, which will result in effective data loss for 
> that shard.






[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-9-ea+175) - Build # 3 - Still Unstable!

2017-07-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3/
Java: 64bit/jdk-9-ea+175 -XX:+UseCompressedOops -XX:+UseParallelGC 
--illegal-access=deny

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest

Error Message:
8 threads leaked from SUITE scope at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest: 1) Thread[id=13237, 
name=zkCallback-2605-thread-4, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@9/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1085)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 at java.base@9/java.lang.Thread.run(Thread.java:844)2) 
Thread[id=13306, name=zkCallback-2605-thread-5, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@9/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1085)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 at java.base@9/java.lang.Thread.run(Thread.java:844)3) 
Thread[id=13002, name=zkCallback-2605-thread-1, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@9/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@9/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1085)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 at java.base@9/java.lang.Thread.run(Thread.java:844)4) 
Thread[id=13001, 
name=TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[8DF93338F3ABA638]-EventThread,
 state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)   
  at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062)
 at 
java.base@9/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
 at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)5) 
Thread[id=13000, 
name=TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[8DF93338F3ABA638]-SendThread(127.0.0.1:35633),
 state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/java.lang.Thread.sleep(Native Method) at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1051)6) 
Thread[id=12999, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@9/java.lang.Thread.run(Thread.java:844)7) 
Thread[id=13202, name=zkCallback-2605-thread-2, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_131) - Build # 6706 - Failure!

2017-07-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6706/
Java: 32bit/jdk1.8.0_131 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.TestSQLHandler.doTest

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([484E5F0E4018CF7C:EF0AE7AA2DA3DCC5]:0)
at 
org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:186)
at org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 11 lines...]
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to 

[jira] [Closed] (SOLR-10955) o.a.s.analytics.facet.{Query|Range}FacetTest docs were sent out-of-order: lastDocID=99 vs docID=5

2017-07-04 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev closed SOLR-10955.
---
Resolution: Won't Fix

closing since SOLR-9981 is reverted. 

> o.a.s.analytics.facet.{Query|Range}FacetTest docs were sent out-of-order: 
> lastDocID=99 vs docID=5
> -
>
> Key: SOLR-10955
> URL: https://issues.apache.org/jira/browse/SOLR-10955
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Fix For: 7.0, 6.7
>
>
> reproduced on master
> {{ant test  -Dtestcase=QueryFacetTest -Dtests.method=queryTest 
> -Dtests.seed=B57925D3BDCDB40B -Dtests.multiplier=2 -Dtests.slow=true 
> -Dtests.locale=mt -Dtests.timezone=Indian/Kerguelen -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8}}
> https://builds.apache.org/job/Lucene-Solr-Tests-master/1902/
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19970/
> https://builds.apache.org/job/Lucene-Solr-Tests-master/1905/
> have no idea what’s up
> {quote}
>[junit4] ERROR   1.91s | QueryFacetTest.queryTest <<<
>[junit4]> Throwable #1: java.lang.IllegalArgumentException: docs were 
> sent out-of-order: lastDocID=99 vs docID=5
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([B57925D3BDCDB40B:28606B32DB67FB9E]:0)
>[junit4]>  at 
> org.apache.lucene.queries.function.valuesource.IntFieldSource$1.getValueForDoc(IntFieldSource.java:62)
>[junit4]>  at 
> org.apache.lucene.queries.function.valuesource.IntFieldSource$1.access$000(IntFieldSource.java:57)
>[junit4]>  at 
> org.apache.lucene.queries.function.valuesource.IntFieldSource$1$1.fillValue(IntFieldSource.java:104)
>[junit4]>  at 
> org.apache.solr.analytics.statistics.MinMaxStatsCollector.collect(MinMaxStatsCollector.java:68)
>[junit4]>  at 
> org.apache.solr.analytics.statistics.NumericStatsCollector.collect(NumericStatsC
> {quote}






Re: Feature freeze @ 7.0 branch

2017-07-04 Thread Anshum Gupta
Sure Ab, this is an important bug fix.

-Anshum

On Tue, Jul 4, 2017 at 9:35 AM Andrzej Białecki <
andrzej.biale...@lucidworks.com> wrote:

> SOLR-10878 and SOLR-10879 didn’t make it before the branches were cut, but
> I think they should be included in 7x and 7_0 - I’m going to cherry-pick
> the commits from master.
>
> On 3 Jul 2017, at 22:29, Anshum Gupta  wrote:
>
> Hi,
>
> I just wanted to call it out and remove any confusion around the fact
> that we shouldn’t be committing ‘new features’ to branch_7_0. As for
> whatever was already agreed upon in previous communications, let’s get that
> stuff in if it’s ready or almost there. For everything else, kindly check
> before you commit to the release branch.
>
> Let us make sure that the bugs and edge cases are all taken care of, the
> deprecations, and cleanups too.
>
> P.S: Feel free to commit bug fixes without checking, but make sure that we
> aren’t hiding features in those commits.
>
>
> -Anshum
>
>
>
>
>


[jira] [Resolved] (SOLR-10456) timeout-related setters should be deprecated in favor of SolrClientBuilder methods

2017-07-04 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-10456.
-
Resolution: Fixed

Thanks Jason.

> timeout-related setters should be deprecated in favor of SolrClientBuilder 
> methods
> --
>
> Key: SOLR-10456
> URL: https://issues.apache.org/jira/browse/SOLR-10456
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: Jason Gerlowski
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 7.0
>
> Attachments: SOLR-10456.patch, SOLR-10456.patch, SOLR-10456.patch, 
> SOLR-10456.patch
>
>
> Now that builders are in place for {{SolrClients}}, the setters used in each 
> {{SolrClient}} can be deprecated, and their functionality moved over to the 
> Builders. This change brings a few benefits:
> - unifies {{SolrClient}} configuration under the new Builders. It'll be nice 
> to have all the knobs, and levers used to tweak {{SolrClient}}s available in 
> a single place (the Builders).
> - reduces {{SolrClient}} thread-safety concerns. Currently, clients are 
> mutable. Using some {{SolrClient}} setters can result in erratic and "trappy" 
> behavior when the clients are used across multiple threads.
> This subtask endeavors to change this behavior for the 
> {{setConnectionTimeout}} and {{setSoTimeout}} setters on all {{SolrClient}} 
> implementations.
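
For illustration, a short sketch of the builder-based configuration described above,
assuming the SolrJ 7.x {{HttpSolrClient.Builder}} methods {{withConnectionTimeout}} and
{{withSocketTimeout}} (verify the exact names against the release):

{code:java}
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class TimeoutBuilderExample {
  public static void main(String[] args) {
    // Timeouts are fixed at build time, replacing the deprecated mutable setters
    // setConnectionTimeout(int) and setSoTimeout(int) on the client itself.
    HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr")
        .withConnectionTimeout(5000)   // connect timeout in ms
        .withSocketTimeout(30000)      // read timeout in ms
        .build();
    System.out.println("Built client for " + client.getBaseURL());
  }
}
{code}

Because the timeouts can no longer be changed after construction, a client shared
across threads behaves consistently, which addresses the thread-safety concern above.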






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 803 - Failure

2017-07-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/803/

No tests ran.

Build Log:
[...truncated 25695 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (19.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 28.9 MB in 0.03 sec (868.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 68.9 MB in 0.08 sec (915.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 79.2 MB in 0.09 sec (905.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6128 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6128 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (219.7 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-8.0.0-src.tgz...
   [smoker] 49.6 MB in 0.06 sec (820.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.tgz...
   [smoker] 141.6 MB in 0.17 sec (833.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.zip...
   [smoker] 142.6 MB in 0.18 sec (811.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-8.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] "bin/solr" start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]  
   [smoker] Started Solr server on port 8983 (pid=14097). Happy searching!
   

Re: Feature freeze @ 7.0 branch

2017-07-04 Thread Andrzej Białecki
SOLR-10878 and SOLR-10879 didn’t make it before the branches were cut, but I 
think they should be included in 7x and 7_0 - I’m going to cherry-pick the 
commits from master.

> On 3 Jul 2017, at 22:29, Anshum Gupta  wrote:
> 
> Hi,
> 
> I just wanted to call it out and remove any confusion around the fact that 
> we shouldn’t be committing ‘new features’ to branch_7_0. As for whatever 
> was already agreed upon in previous communications, let’s get that stuff in 
> if it’s ready or almost there. For everything else, kindly check before you 
> commit to the release branch.
> 
> Let us make sure that the bugs and edge cases are all taken care of, the 
> deprecations, and cleanups too.
> 
> P.S: Feel free to commit bug fixes without checking, but make sure that we 
> aren’t hiding features in those commits.
> 
> 
> -Anshum
> 
> 
> 



[jira] [Commented] (SOLR-10456) timeout-related setters should be deprecated in favor of SolrClientBuilder methods

2017-07-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073884#comment-16073884
 ] 

ASF subversion and git services commented on SOLR-10456:


Commit b73e8e5ef764366764772876246dfa5b8d80ac74 in lucene-solr's branch 
refs/heads/branch_7_0 from [~anshumg]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b73e8e5 ]

SOLR-10456: Deprecate timeout related setters from SolrClients, and replace 
with Builder based implementation


> timeout-related setters should be deprecated in favor of SolrClientBuilder 
> methods
> --
>
> Key: SOLR-10456
> URL: https://issues.apache.org/jira/browse/SOLR-10456
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: Jason Gerlowski
>Assignee: Anshum Gupta
>Priority: Minor
> Fix For: 7.0
>
> Attachments: SOLR-10456.patch, SOLR-10456.patch, SOLR-10456.patch, 
> SOLR-10456.patch
>
>
> Now that builders are in place for {{SolrClients}}, the setters used in each 
> {{SolrClient}} can be deprecated, and their functionality moved over to the 
> Builders. This change brings a few benefits:
> - unifies {{SolrClient}} configuration under the new Builders. It'll be nice 
> to have all the knobs, and levers used to tweak {{SolrClient}}s available in 
> a single place (the Builders).
> - reduces {{SolrClient}} thread-safety concerns. Currently, clients are 
> mutable. Using some {{SolrClient}} setters can result in erratic and "trappy" 
> behavior when the clients are used across multiple threads.
> This subtask endeavors to change this behavior for the 
> {{setConnectionTimeout}} and {{setSoTimeout}} setters on all {{SolrClient}} 
> implementations.






[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+175) - Build # 20057 - Still Failing!

2017-07-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20057/
Java: 32bit/jdk-9-ea+175 -client -XX:+UseConcMarkSweepGC --illegal-access=deny

All tests passed

Build Log:
[...truncated 1704 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J0-20170704_152907_21412704146649144136439.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 3 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J2-20170704_152907_2141685273513255637980.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 3 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J1-20170704_152907_2138414997738643610007.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 285 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J1-20170704_153624_4098149673793609413691.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J0-20170704_153624_409783902714997503277.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 6 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J2-20170704_153624_41010475306868669733155.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 1051 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J1-20170704_153748_1267007169365530654649.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J0-20170704_153748_1269581926656546388528.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 3 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J2-20170704_153748_1265472134012405837128.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 221 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/icu/test/temp/junit4-J0-20170704_154004_223898234634590714884.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

   [junit4] JVM J2: stderr was not empty, see: 

[jira] [Updated] (LUCENE-7837) Use indexCreatedVersionMajor to fail opening too old indices

2017-07-04 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7837:
-
Attachment: LUCENE-7837.patch

Here is an updated patch. I'd like to merge it soon if there are no objections.

> Use indexCreatedVersionMajor to fail opening too old indices
> 
>
> Key: LUCENE-7837
> URL: https://issues.apache.org/jira/browse/LUCENE-7837
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: LUCENE-7837.patch, LUCENE-7837.patch
>
>
> Even though in theory we only support reading indices created with version N 
> or N-1, in practice it is possible to run a forceMerge in order to make 
> Lucene accept to open the index since we only record the version that wrote 
> segments and commit points. However as of Lucene 7.0, we also record the 
> major version that was used to initially create the index, meaning we could 
> also fail to open N-2 indices that have only been merged with version N-1.
> The current state of things where we could read old data without knowing it 
> raises issues with everything that is performed on top of the codec API such 
> as analysis, input validation or norms encoding, especially now that we plan 
> to change the defaults (LUCENE-7730).
> For instance, we are only starting to reject broken offsets in term vectors 
> in Lucene 7. If we do not enforce the index to be created with either Lucene 
> 7 or 8 once we move to Lucene 8, then it means codecs could still be fed with 
> broken offsets, which is a pity since assuming that offsets go forward makes 
> things easier to encode and also potentially allows for better compression.
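
For illustration, a minimal sketch of the kind of check this enables, assuming the
{{SegmentInfos.getIndexCreatedVersionMajor()}} accessor recorded since 7.0; this is
only a sketch of the idea, not the attached patch:

{code:java}
import java.nio.file.Paths;

import org.apache.lucene.index.SegmentInfos;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class CreatedVersionCheck {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(Paths.get(args[0]))) {
      SegmentInfos infos = SegmentInfos.readLatestCommit(dir);
      int createdMajor = infos.getIndexCreatedVersionMajor();   // major version that created the index
      if (createdMajor < Version.LATEST.major - 1) {            // only N and N-1 should be readable
        throw new IllegalStateException(
            "Index was created by Lucene " + createdMajor + ".x, which is too old to open");
      }
      System.out.println("Index created by Lucene " + createdMajor + ".x");
    }
  }
}
{code}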






[jira] [Updated] (LUCENE-7837) Use indexCreatedVersionMajor to fail opening too old indices

2017-07-04 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7837:
-
Fix Version/s: master (8.0)

> Use indexCreatedVersionMajor to fail opening too old indices
> 
>
> Key: LUCENE-7837
> URL: https://issues.apache.org/jira/browse/LUCENE-7837
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: LUCENE-7837.patch
>
>
> Even though in theory we only support reading indices created with version N 
> or N-1, in practice it is possible to run a forceMerge in order to make 
> Lucene accept to open the index since we only record the version that wrote 
> segments and commit points. However as of Lucene 7.0, we also record the 
> major version that was used to initially create the index, meaning we could 
> also fail to open N-2 indices that have only been merged with version N-1.
> The current state of things where we could read old data without knowing it 
> raises issues with everything that is performed on top of the codec API such 
> as analysis, input validation or norms encoding, especially now that we plan 
> to change the defaults (LUCENE-7730).
> For instance, we are only starting to reject broken offsets in term vectors 
> in Lucene 7. If we do not enforce the index to be created with either Lucene 
> 7 or 8 once we move to Lucene 8, then it means codecs could still be fed with 
> broken offsets, which is a pity since assuming that offsets go forward makes 
> things easier to encode and also potentially allows for better compression.






[jira] [Updated] (LUCENE-7898) Remove hasSegID from SegmentInfos

2017-07-04 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7898:
-
Attachment: LUCENE-7898.patch

Here is a patch. It also removes some backward compatibility code from 
readCodec which is not necessary anymore in 7.0.

> Remove hasSegID from SegmentInfos
> -
>
> Key: LUCENE-7898
> URL: https://issues.apache.org/jira/browse/LUCENE-7898
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
> Fix For: 7.0, master (8.0)
>
> Attachments: LUCENE-7898.patch
>
>
> This is only necessary for backward compatibility with pre-5.3 indices, which 
> 7.0 does not need to support.






[jira] [Created] (LUCENE-7898) Remove hasSegID from SegmentInfos

2017-07-04 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-7898:


 Summary: Remove hasSegID from SegmentInfos
 Key: LUCENE-7898
 URL: https://issues.apache.org/jira/browse/LUCENE-7898
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
 Fix For: 7.0, master (8.0)


This is only necessary for backward compatibility with pre-5.3 indices, which 
7.0 does not need to support.






[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 1 - Unstable!

2017-07-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/1/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.handler.admin.MetricsHandlerTest.testPropertyFilter

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([7182A8E3079C8FBE:13EF56A2C812EF80]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at org.junit.Assert.assertNotNull(Assert.java:537)
at 
org.apache.solr.handler.admin.MetricsHandlerTest.testPropertyFilter(MetricsHandlerTest.java:201)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.update.AutoCommitTest.testCommitWithin

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 

Re: 10 Resource Leak warnings dated to Q2 2017

2017-07-04 Thread Erick Erickson
Christine:

I fixed the JavaBinCodec warnings in SOLR-10779 for master/7.0, but
didn't backport to 6x. So if those warnings are creeping back into
the 7x code line, we can take a look.

I didn't backport to 6x since this is long-term enough work that there
isn't much point, along with the feeling that we'll inevitably introduce
problems along the way; my view is that 6x is close enough to the end of
its development that we shouldn't expend the effort or risk the
instability. Or, put another way, I didn't want to be responsible for
introducing bugs in 6x; 7x is fair game ;)

Along the lines of making forward progress, though: is it possible
to make precommit fail on resource leaks for specific classes only?
Or for specific files? It wouldn't be perfect, but cleaning up the
warnings for a class and then having precommit fail if resource leaks
came back in would feel less like the labor of Sisyphus.

I'm looking for either of the following. Or both of course.
- fail if precommit issues resource leak warnings for the _class_
JavaBinCodec wherever it's used.
- fail if precommit issues resource leak warnings in the _file_
whatever.java if any resource leak warnings are found for any class.

The first one is the one I'd probably use on the theory that one gets
familiar with the quirks of a particular class and it's easier to
clean up the resource leak warnings for that class than all the
warnings that might be in a file. But that's a personal preference.
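
To make that concrete, here's a minimal, generic sketch of the two options
Christine mentions (try-with-resources vs. an explicit suppression). It
deliberately uses plain java.io/java.nio rather than JavaBinCodec, so the
class and file names below are made up:

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ResourceLeakExamples {

  // Preferred fix: try-with-resources closes the stream on every exit path,
  // which makes the "resource leak" warning go away for real.
  static long countBytes(String path) throws IOException {
    try (InputStream in = Files.newInputStream(Paths.get(path))) {
      long n = 0;
      while (in.read() != -1) {
        n++;
      }
      return n;
    }
  }

  // When ownership is handed off and closing here would be wrong, suppress the
  // warning explicitly so the intent is visible and precommit stays quiet.
  @SuppressWarnings("resource")
  static InputStream open(String path) throws IOException {
    return Files.newInputStream(Paths.get(path)); // caller is responsible for close()
  }
}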

Erick

On Tue, Jul 4, 2017 at 3:47 AM, Christine Poerschke (BLOOMBERG/
LONDON)  wrote:
> Hi Everyone,
>
> The following list is the latest Q2 2017 portion of the dated-warnings.log 
> file I've attached to https://issues.apache.org/jira/browse/SOLR-10778; it 
> was generated by the shell script (also attached) that correlates warnings 
> with git commit history.
>
> Any help to investigate and take care of these warnings would be appreciated. 
> The short-term goal is to not increase the number of warnings we have; in the 
> medium to long term, the goal is to fail precommit if any warnings are 
> detected.
>
> Christine
>
> PS: @SuppressWarnings("resource") can be used to suppress inappropriate 
> warnings and Erick Erickson is already looking into warnings related to 
> JavaBinCodec.
>
> -
>  ant precommit warnings dated to Q2 2017
> -
>
> 2017-06-21 
> http://www.github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/index/ReadersAndUpdates.java#L845
> 2017-06-21 
> http://www.github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedUpdates.java#L186
> 2017-06-21 
> http://www.github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedUpdates.java#L144
> 2017-06-16 
> http://www.github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/Utils.java#L110
> 2017-06-16 
> http://www.github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/CommandOperation.java#L248
> 2017-06-16 
> http://www.github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/util/TestUtils.java#L186
> 2017-05-30 
> http://www.github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/autoscaling/TestPolicyCloud.java#L161
> 2017-05-16 
> http://www.github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/codecs/CodecUtil.java#L523
> 2017-04-12 
> http://www.github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/CoreContainer.java#L969
> 2017-04-11 
> http://www.github.com/apache/lucene-solr/blob/master/solr/solrj/src/test/org/apache/solr/client/solrj/io/stream/StreamExpressionTest.java#L232

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7871) false positive match BlockJoinSelector[SortedDV] when child value is absent

2017-07-04 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev resolved LUCENE-7871.
--
Resolution: Fixed

> false positive match BlockJoinSelector[SortedDV] when child value is absent 
> 
>
> Key: LUCENE-7871
> URL: https://issues.apache.org/jira/browse/LUCENE-7871
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Mikhail Khludnev
> Fix For: 7.0, master (8.0)
>
> Attachments: LUCENE-7871.patch, LUCENE-7871.patch, LUCENE-7871.patch, 
> LUCENE-7871.patch, LUCENE-7871.patch
>
>
> * fix false positive match for SortedSetDV
> * make {{children}} an iterator instead of bitset.
> see [the 
> comment|https://issues.apache.org/jira/browse/LUCENE-7407?focusedCommentId=16042640&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16042640]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7871) false positive match BlockJoinSelector[SortedDV] when child value is absent

2017-07-04 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated LUCENE-7871:
-
Fix Version/s: master (8.0)

> false positive match BlockJoinSelector[SortedDV] when child value is absent 
> 
>
> Key: LUCENE-7871
> URL: https://issues.apache.org/jira/browse/LUCENE-7871
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Mikhail Khludnev
> Fix For: 7.0, master (8.0)
>
> Attachments: LUCENE-7871.patch, LUCENE-7871.patch, LUCENE-7871.patch, 
> LUCENE-7871.patch, LUCENE-7871.patch
>
>
> * fix false positive match for SortedSetDV
> * make {{children}} an iterator instead of bitset.
> see [the 
> comment|https://issues.apache.org/jira/browse/LUCENE-7407?focusedCommentId=16042640&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16042640]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7871) false positive match BlockJoinSelector[SortedDV] when child value is absent

2017-07-04 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073797#comment-16073797
 ] 

Mikhail Khludnev commented on LUCENE-7871:
--

Follow-up: SOLR-11006 for refactoring children to a DISI.

> false positive match BlockJoinSelector[SortedDV] when child value is absent 
> 
>
> Key: LUCENE-7871
> URL: https://issues.apache.org/jira/browse/LUCENE-7871
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Mikhail Khludnev
> Fix For: 7.0
>
> Attachments: LUCENE-7871.patch, LUCENE-7871.patch, LUCENE-7871.patch, 
> LUCENE-7871.patch, LUCENE-7871.patch
>
>
> * fix false positive match for SortedSetDV
> * make {{children}} an iterator instead of bitset.
> see [the 
> comment|https://issues.apache.org/jira/browse/LUCENE-7407?focusedCommentId=16042640&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16042640]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11006) approach BlockJoinSelector.warp(...DISI children) in sort=childfield(..)

2017-07-04 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created SOLR-11006:
---

 Summary: approach BlockJoinSelector.warp(...DISI children) in 
sort=childfield(..)
 Key: SOLR-11006
 URL: https://issues.apache.org/jira/browse/SOLR-11006
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mikhail Khludnev


The great idea [was 
provided|https://issues.apache.org/jira/browse/LUCENE-7407?focusedCommentId=16042640&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16042640]:
 _.. we should make (BitSet for the children) a DocIdSetIterator, which would 
have the nice side-effect that we could easily combine the doc values iterator 
and the child filter using a ConjunctionDISI to efficiently only iterate over 
child doc ids that both have a value and match the child filter_, but the 
[first 
scratch|https://issues.apache.org/jira/browse/LUCENE-7871?focusedCommentId=16044351&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16044351]
 scared me quite a bit.
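
To make the intersection idea concrete, here is a hand-rolled sketch of the
leap-frog loop (just the concept, not ConjunctionDISI itself; both iterators
are assumed to run over the same segment's doc ids):

{code}
import java.io.IOException;
import org.apache.lucene.search.DocIdSetIterator;

class ChildIntersectionSketch {
  /** Visits every doc id present in both iterators, e.g. "children that have
   *  a value" and "children matching the child filter". */
  static void forEachCommonDoc(DocIdSetIterator values, DocIdSetIterator childFilter)
      throws IOException {
    int doc = values.nextDoc();
    while (doc != DocIdSetIterator.NO_MORE_DOCS) {
      int other = childFilter.advance(doc);          // first filter doc >= doc
      if (other == DocIdSetIterator.NO_MORE_DOCS) {
        break;                                       // filter exhausted, nothing more can match
      }
      if (other == doc) {
        // 'doc' both has a value and matches the child filter -- handle it here
        doc = values.nextDoc();
      } else {
        doc = values.advance(other);                 // leap-frog the values iterator forward
      }
    }
  }
}
{code}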



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10878) MOVEREPLICA command may lose data when replicationFactor==1

2017-07-04 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073783#comment-16073783
 ] 

Mikhail Khludnev commented on SOLR-10878:
-

I'm afraid {{precommit}} fails. 
{code}
  [rat] *
  [rat]  Printing headers for files without AL header...
  [rat]  
  [rat]  
  [rat] 
===
  [rat] 
==lucene-solr/solr/core/src/test/org/apache/solr/cloud/MoveReplicaHDFSTest.java
common-build.xml:1937: Rat problems were found!
{code}
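
For reference, rat is only complaining about a missing license header, so the
usual fix is adding the standard ASF header at the top of the new test file:

{code}
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
{code}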

> MOVEREPLICA command may lose data when replicationFactor==1
> ---
>
> Key: SOLR-10878
> URL: https://issues.apache.org/jira/browse/SOLR-10878
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6, 7.0, 6.7
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
> Fix For: 7.0, 6.7
>
>
> Follow-up to SOLR-10704, a similar scenario occurs in {{MoveReplicaCmd}} when 
> replication factor is 1 - the only copy of the source replica may be deleted 
> while the target replica is still recovering.
> Also, even when replicationFactor > 1 but the only remaining replicas are of 
> the PULL type then leader election won't be able to find any replica to 
> become a leader for this shard, which will result in effective data loss for 
> that shard.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7892) LatLonDocValuesField methods should be clearly marked as slow

2017-07-04 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-7892.
--
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.0

> LatLonDocValuesField methods should be clearly marked as slow
> -
>
> Key: LUCENE-7892
> URL: https://issues.apache.org/jira/browse/LUCENE-7892
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Fix For: 7.0, master (8.0)
>
>
> It is very trappy that LatLonDocValuesField has stuff like 
> newBoxQuery/newDistanceQuery.
> Users bring this up on the user list and are confused as to why the resulting 
> queries are slow.
> Here, we hurt the typical use case, to try to slightly speed up an esoteric 
> one (sparse stuff). It's a terrible tradeoff for the API.
> If we truly must have such slow methods in the public API, then they should 
> have {{slow}} in their name.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7892) LatLonDocValuesField methods should be clearly marked as slow

2017-07-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073770#comment-16073770
 ] 

ASF subversion and git services commented on LUCENE-7892:
-

Commit 667e9c66cae9b19044c5c5d1facc147a6e3277fe in lucene-solr's branch 
refs/heads/branch_7_0 from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=667e9c6 ]

LUCENE-7892: Add "slow" to factory methods of doc-values queries.


> LatLonDocValuesField methods should be clearly marked as slow
> -
>
> Key: LUCENE-7892
> URL: https://issues.apache.org/jira/browse/LUCENE-7892
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>
> It is very trappy that LatLonDocValuesField has stuff like 
> newBoxQuery/newDistanceQuery.
> Users bring this up on the user list and are confused as to why the resulting 
> queries are slow.
> Here, we hurt the typical use case, to try to slightly speed up an esoteric 
> one (sparse stuff). It's a terrible tradeoff for the API.
> If we truly must have such slow methods in the public API, then they should 
> have {{slow}} in their name.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7892) LatLonDocValuesField methods should be clearly marked as slow

2017-07-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073771#comment-16073771
 ] 

ASF subversion and git services commented on LUCENE-7892:
-

Commit 7d8634807a902502b792d539e3a3b8b4713cb0a2 in lucene-solr's branch 
refs/heads/branch_7x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7d86348 ]

LUCENE-7892: Add "slow" to factory methods of doc-values queries.


> LatLonDocValuesField methods should be clearly marked as slow
> -
>
> Key: LUCENE-7892
> URL: https://issues.apache.org/jira/browse/LUCENE-7892
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>
> It is very trappy that LatLonDocValuesField has stuff like 
> newBoxQuery/newDistanceQuery.
> Users bring this up on the user list and are confused as to why the resulting 
> queries are slow.
> Here, we hurt the typical use case, to try to slightly speed up an esoteric 
> one (sparse stuff). It's a terrible tradeoff for the API.
> If we truly must have such slow methods in the public API, then they should 
> have {{slow}} in their name.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7892) LatLonDocValuesField methods should be clearly marked as slow

2017-07-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073772#comment-16073772
 ] 

ASF subversion and git services commented on LUCENE-7892:
-

Commit 1e6e4022cf6b8f927ec6a10f4d4c4b866fce8f0f in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1e6e402 ]

LUCENE-7892: Add "slow" to factory methods of doc-values queries.


> LatLonDocValuesField methods should be clearly marked as slow
> -
>
> Key: LUCENE-7892
> URL: https://issues.apache.org/jira/browse/LUCENE-7892
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>
> It is very trappy that LatLonDocValuesField has stuff like 
> newBoxQuery/newDistanceQuery.
> Users bring this up on the user list and are confused as to why the resulting 
> queries are slow.
> Here, we hurt the typical use case, to try to slightly speed up an esoteric 
> one (sparse stuff). It's a terrible tradeoff for the API.
> If we truly must have such slow methods in the public API, then they should 
> have {{slow}} in their name.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_131) - Build # 2 - Still Unstable!

2017-07-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2/
Java: 32bit/jdk1.8.0_131 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestPullReplica.testKillLeader

Error Message:
Replica state not updated in cluster state null Live Nodes: 
[127.0.0.1:37547_solr, 127.0.0.1:35119_solr] Last available state: 
DocCollection(pull_replica_test_kill_leader//collections/pull_replica_test_kill_leader/state.json/5)={
   "pullReplicas":"1",   "replicationFactor":"1",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   
"core":"pull_replica_test_kill_leader_shard1_replica_n1",   
"base_url":"http://127.0.0.1:37547/solr;,   
"node_name":"127.0.0.1:37547_solr",   "state":"down",   
"type":"NRT",   "leader":"true"}, "core_node2":{   
"core":"pull_replica_test_kill_leader_shard1_replica_p1",   
"base_url":"http://127.0.0.1:35119/solr;,   
"node_name":"127.0.0.1:35119_solr",   "state":"active",   
"type":"PULL",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"100",   "autoAddReplicas":"false",   "nrtReplicas":"1",   
"tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Replica state not updated in cluster state
null
Live Nodes: [127.0.0.1:37547_solr, 127.0.0.1:35119_solr]
Last available state: 
DocCollection(pull_replica_test_kill_leader//collections/pull_replica_test_kill_leader/state.json/5)={
  "pullReplicas":"1",
  "replicationFactor":"1",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "core":"pull_replica_test_kill_leader_shard1_replica_n1",
  "base_url":"http://127.0.0.1:37547/solr;,
  "node_name":"127.0.0.1:37547_solr",
  "state":"down",
  "type":"NRT",
  "leader":"true"},
"core_node2":{
  "core":"pull_replica_test_kill_leader_shard1_replica_p1",
  "base_url":"http://127.0.0.1:35119/solr;,
  "node_name":"127.0.0.1:35119_solr",
  "state":"active",
  "type":"PULL",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"100",
  "autoAddReplicas":"false",
  "nrtReplicas":"1",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([56591437A7A7CE7B:1F4FE083C51C5A2D]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:269)
at 
org.apache.solr.cloud.TestPullReplica.doTestNoLeader(TestPullReplica.java:401)
at 
org.apache.solr.cloud.TestPullReplica.testKillLeader(TestPullReplica.java:290)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 

[jira] [Commented] (LUCENE-7892) LatLonDocValuesField methods should be clearly marked as slow

2017-07-04 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073759#comment-16073759
 ] 

Adrien Grand commented on LUCENE-7892:
--

+1 I'll fix it.
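
For anyone following along, the end state should look something like the sketch
below. The factory name assumes the rename this issue proposes (newSlowBoxQuery),
and the field name and coordinates are made up:

{code}
import org.apache.lucene.document.LatLonDocValuesField;
import org.apache.lucene.document.LatLonPoint;
import org.apache.lucene.search.IndexOrDocValuesQuery;
import org.apache.lucene.search.Query;

public class GeoBoxQueryExample {
  // The "location" field is assumed to be indexed both as a LatLonPoint and as
  // a LatLonDocValuesField.
  static Query newBox(double minLat, double maxLat, double minLon, double maxLon) {
    Query points = LatLonPoint.newBoxQuery("location", minLat, maxLat, minLon, maxLon);
    // The "slow" name makes it obvious this query scans doc values and should
    // only be used as the fallback side of an IndexOrDocValuesQuery.
    Query docValues = LatLonDocValuesField.newSlowBoxQuery("location", minLat, maxLat, minLon, maxLon);
    return new IndexOrDocValuesQuery(points, docValues);
  }
}
{code}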

> LatLonDocValuesField methods should be clearly marked as slow
> -
>
> Key: LUCENE-7892
> URL: https://issues.apache.org/jira/browse/LUCENE-7892
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>
> It is very trappy that LatLonDocValuesField has stuff like 
> newBoxQuery/newDistanceQuery.
> Users bring this up on the user list and are confused as to why the resulting 
> queries are slow.
> Here, we hurt the typical use case, to try to slightly speed up an esoteric 
> one (sparse stuff). It's a terrible tradeoff for the API.
> If we truly must have such slow methods in the public API, then they should 
> have {{slow}} in their name.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7871) false positive match BlockJoinSelector[SortedDV] when child value is absent

2017-07-04 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073747#comment-16073747
 ] 

Adrien Grand commented on LUCENE-7871:
--

Thanks [~mkhludnev]!

> false positive match BlockJoinSelector[SortedDV] when child value is absent 
> 
>
> Key: LUCENE-7871
> URL: https://issues.apache.org/jira/browse/LUCENE-7871
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Mikhail Khludnev
> Fix For: 7.0
>
> Attachments: LUCENE-7871.patch, LUCENE-7871.patch, LUCENE-7871.patch, 
> LUCENE-7871.patch, LUCENE-7871.patch
>
>
> * fix false positive match for SortedSetDV
> * make {{children}} an iterator instead of bitset.
> see [the 
> comment|https://issues.apache.org/jira/browse/LUCENE-7407?focusedCommentId=16042640&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16042640]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11005) inconsistency when maxShardsPerNode used along with policies

2017-07-04 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073742#comment-16073742
 ] 

Shalin Shekhar Mangar commented on SOLR-11005:
--

The renaming of maxShardsPerNode deserves its own issue, I think. We shouldn't 
club the two together.

> inconsistency when maxShardsPerNode used along with policies
> 
>
> Key: SOLR-11005
> URL: https://issues.apache.org/jira/browse/SOLR-11005
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Noble Paul
> Fix For: 7.1
>
>
> The attribute maxShardsPerNode conflicts with the conditions in the new 
> Policy framework
> for example , I can say maxShardsPerNode=5 and I can have a policy 
> {code}
> { replica:"<3" , shard: "#ANY", node:"#ANY"}
> {code}
> So, it makes no sense to persist this attribute in collection state.json . 
> Ideally, we would like to keep this as a part of the policy and policy only.
> h3. proposed new behavior
> if the new policy framework is being used {maxShardsPerNode} should result in 
> creating a new collection specific policy with the correct condition. for 
> example, if a collection "x" is created with the parameter 
> {{maxShardsPerNode=2}} we will  create a new policy in autoscaling.json
> {code}
> {
> "policies":{
> "x_COLL_POLICY" : [{replica:"<3", shard:"#ANY" , node:"ANY"}]
> }
> }
> {code}
> this policy will be referred to in the state.json. There will be no attribute 
> called {{maxShardsPerNode}} persisted to the state.json.
> if there is already a policy being specified for the collection, solr should 
> throw an error asking the user to edit the policy directly
> h3.the name is bad
> We must rename the attribute {{maxShardsPerNode}} to {{maxReplicasPerNode}}. 
> This should be a backward compatible change. The old name will continue to 
> work and the API would give a friendly warning if the old name is used



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 1939 - Failure

2017-07-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1939/

4 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:56181

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:56181
at 
__randomizedtesting.SeedInfo.seed([CB9D8AAC20368943:43C9B5768ECAE4BB]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:637)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:252)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1667)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1694)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:254)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-10827) factor out abstract FilteringSolrMetricReporter

2017-07-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073743#comment-16073743
 ] 

ASF subversion and git services commented on SOLR-10827:


Commit 53cb15506e540d034ffa42fa416d2a4d0e2680d9 in lucene-solr's branch 
refs/heads/branch_7x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=53cb155 ]

SOLR-10827: Factor out abstract FilteringSolrMetricReporter class.


> factor out abstract FilteringSolrMetricReporter
> ---
>
> Key: SOLR-10827
> URL: https://issues.apache.org/jira/browse/SOLR-10827
> Project: Solr
>  Issue Type: Task
>  Components: metrics
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-10827.patch
>
>
> Currently multiple SolrMetricReporter classes have their own local filter 
> settings, a common setting somewhere will reduce code duplication for 
> existing, future and custom reporters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10957) fix potential NPE in SolrCoreParser.init

2017-07-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073744#comment-16073744
 ] 

ASF subversion and git services commented on SOLR-10957:


Commit be06dd30c829632ee3b590bbe514faaa25d1a71d in lucene-solr's branch 
refs/heads/branch_7x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=be06dd3 ]

SOLR-10957: Changed SolrCoreParser.init to use the resource loader from 
getSchema() instead of the resource loader from getCore().


> fix potential NPE in SolrCoreParser.init
> 
>
> Key: SOLR-10957
> URL: https://issues.apache.org/jira/browse/SOLR-10957
> Project: Solr
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-10957.patch, SOLR-10957.patch
>
>
> [SolrQueryRequestBase|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/request/SolrQueryRequestBase.java]
>  accommodates requests with a null SolrCore and this small change is for 
> SolrCoreParser.init to do likewise.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11005) inconsistency when maxShardsPerNode used along with policies

2017-07-04 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-11005:
-
Fix Version/s: 7.1
  Component/s: SolrCloud
   AutoScaling

> inconsistency when maxShardsPerNode used along with policies
> 
>
> Key: SOLR-11005
> URL: https://issues.apache.org/jira/browse/SOLR-11005
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Noble Paul
> Fix For: 7.1
>
>
> The attribute maxShardsPerNode conflicts with the conditions in the new 
> Policy framework
> for example , I can say maxShardsPerNode=5 and I can have a policy 
> {code}
> { replica:"<3" , shard: "#ANY", node:"#ANY"}
> {code}
> So, it makes no sense to persist this attribute in collection state.json . 
> Ideally, we would like to keep this as a part of the policy and policy only.
> h3. proposed new behavior
> if the new policy framework is being used {maxShardsPerNode} should result in 
> creating a new collection specific policy with the correct condition. for 
> example, if a collection "x" is created with the parameter 
> {{maxShardsPerNode=2}} we will  create a new policy in autoscaling.json
> {code}
> {
> "policies":{
> "x_COLL_POLICY" : [{replica:"<3", shard:"#ANY" , node:"ANY"}]
> }
> }
> {code}
> this policy will be referred to in the state.json. There will be no attribute 
> called {{maxShardsPerNode}} persisted to the state.json.
> if there is already a policy being specified for the collection, solr should 
> throw an error asking the user to edit the policy directly
> h3.the name is bad
> We must rename the attribute {{maxShardsPerNode}} to {{maxReplicasPerNode}}. 
> This should be a backward compatible change. The old name will continue to 
> work and the API would give a friendly warning if the old name is used



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11005) inconsistency when maxShardsPerNode used along with policies

2017-07-04 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073700#comment-16073700
 ] 

Shalin Shekhar Mangar edited comment on SOLR-11005 at 7/4/17 2:29 PM:
--

-I also think that we should do this in 7.0 because the policy framework is 
already being released as part of it. Therefore I propose that this be made a 
blocker for 7.0 release.-

Actually, on further thought, although this behavior is not desirable, there is 
no technical reason why this should block 7.0. Let's defer this to 7.1.


was (Author: shalinmangar):
I also think that we should do this in 7.0 because the policy framework is 
already being released as part of it. Therefore I propose that this be made a 
blocker for 7.0 release.

> inconsistency when maxShardsPerNode used along with policies
> 
>
> Key: SOLR-11005
> URL: https://issues.apache.org/jira/browse/SOLR-11005
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>
> The attribute maxShardsPerNode conflicts with the conditions in the new 
> Policy framework
> for example , I can say maxShardsPerNode=5 and I can have a policy 
> {code}
> { replica:"<3" , shard: "#ANY", node:"#ANY"}
> {code}
> So, it makes no sense to persist this attribute in collection state.json . 
> Ideally, we would like to keep this as a part of the policy and policy only.
> h3. proposed new behavior
> if the new policy framework is being used {maxShardsPerNode} should result in 
> creating a new collection specific policy with the correct condition. for 
> example, if a collection "x" is created with the parameter 
> {{maxShardsPerNode=2}} we will  create a new policy in autoscaling.json
> {code}
> {
> "policies":{
> "x_COLL_POLICY" : [{replica:"<3", shard:"#ANY" , node:"ANY"}]
> }
> }
> {code}
> this policy will be referred to in the state.json. There will be no attribute 
> called {{maxShardsPerNode}} persisted to the state.json.
> if there is already a policy being specified for the collection, solr should 
> throw an error asking the user to edit the policy directly
> h3.the name is bad
> We must rename the attribute {{maxShardsPerNode}} to {{maxReplicasPerNode}}. 
> This should be a backward compatible change. The old name will continue to 
> work and the API would give a friendly warning if the old name is used



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7835) ToChildBlockJoinSortField to sort children by a parent field

2017-07-04 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated LUCENE-7835:
-
Attachment: LUCENE-7835.patch

[^LUCENE-7835.patch] adds {{ToChildBlockJoinSortField}}. Please have a look. 
There is plenty of OO spaghetti. 

> ToChildBlockJoinSortField to sort children by a parent field  
> --
>
> Key: LUCENE-7835
> URL: https://issues.apache.org/jira/browse/LUCENE-7835
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Mikhail Khludnev
> Attachments: LUCENE-7835.patch
>
>
> When searching by {{ToChildBlockJoinQuery}} compare child docs by parent 
> fields.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 1 - Unstable!

2017-07-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/1/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find collection:collection2

Stack Trace:
java.lang.AssertionError: Could not find collection:collection2
at 
__randomizedtesting.SeedInfo.seed([307E089E1484ABBD:B82A3744BA78C645]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:907)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:612)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-7895) Add hooks to QueryBuilder to allow for the construction of MultiTermQueries in phrases

2017-07-04 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073706#comment-16073706
 ] 

Adrien Grand commented on LUCENE-7895:
--

I'm also reluctant to give first-class support to such slow queries in 
QueryBuilder.

> Add hooks to QueryBuilder to allow for the construction of MultiTermQueries 
> in phrases
> --
>
> Key: LUCENE-7895
> URL: https://issues.apache.org/jira/browse/LUCENE-7895
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Attachments: LUCENE-7895.patch
>
>
> QueryBuilder currently allows subclasses to override simple term query 
> construction, which lets you support wildcard querying.  However, there is 
> currently no easy way to override phrase query construction to support 
> wildcards.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11005) inconsistency when maxShardsPerNode used along with policies

2017-07-04 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073700#comment-16073700
 ] 

Shalin Shekhar Mangar commented on SOLR-11005:
--

I also think that we should do this in 7.0 because the policy framework is 
already being released as part of it. Therefore I propose that this be made a 
blocker for 7.0 release.

> inconsistency when maxShardsPerNode used along with policies
> 
>
> Key: SOLR-11005
> URL: https://issues.apache.org/jira/browse/SOLR-11005
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>
> The attribute maxShardsPerNode conflicts with the conditions in the new 
> Policy framework
> for example , I can say maxShardsPerNode=5 and I can have a policy 
> {code}
> { replica:"<3" , shard: "#ANY", node:"#ANY"}
> {code}
> So, it makes no sense to persist this attribute in collection state.json . 
> Ideally, we would like to keep this as a part of the policy and policy only.
> h3. proposed new behavior
> if the new policy framework is being used {maxShardsPerNode} should result in 
> creating a new collection specific policy with the correct condition. for 
> example, if a collection "x" is created with the parameter 
> {{maxShardsPerNode=2}} we will  create a new policy in autoscaling.json
> {code}
> {
> "policies":{
> "x_COLL_POLICY" : [{replica:"<3", shard:"#ANY" , node:"ANY"}]
> }
> }
> {code}
> this policy will be referred to in the state.json. There will be no attribute 
> called {{maxShardsPerNode}} persisted to the state.json.
> if there is already a policy being specified for the collection, solr should 
> throw an error asking the user to edit the policy directly
> h3.the name is bad
> We must rename the attribute {{maxShardsPerNode}} to {{maxReplicasPerNode}}. 
> This should be a backward compatible change. The old name will continue to 
> work and the API would give a friendly warning if the old name is used



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11005) inconsistency when maxShardsPerNode used along with policies

2017-07-04 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073698#comment-16073698
 ] 

Shalin Shekhar Mangar commented on SOLR-11005:
--

Thanks Noble. The problem isn't about persisting this value. It is actually 
about how to use it such that we avoid confusion.

We support {{replica}} in the cluster and collection policy using which a user 
can limit the number of replicas on each node. The same thing is also supported 
today with maxShardsPerNode parameter. However the two can easily conflict and 
when they do, the current implementation tries to satisfy both (and the minimum 
of them wins). This is very confusing. For example, if the cluster policy has:
{code}
{'replica':'<2', 'shard': '#EACH', 'node': '#ANY'}
{code}
and a user invokes the following on a cluster of 2 nodes:
{code}
/admin/collections?action=create=1=6=10
{code}
Then the command fails, saying that the cluster policy rule could not be 
satisfied. This is in spite of the fact that the user had explicitly provided 
the {{maxShardsPerNode}} parameter.

I like this proposal. It removes confusion and preserves compatibility.

> inconsistency when maxShardsPerNode used along with policies
> 
>
> Key: SOLR-11005
> URL: https://issues.apache.org/jira/browse/SOLR-11005
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>
> The attribute maxShardsPerNode conflicts with the conditions in the new 
> Policy framework
> for example , I can say maxShardsPerNode=5 and I can have a policy 
> {code}
> { replica:"<3" , shard: "#ANY", node:"#ANY"}
> {code}
> So, it makes no sense to persist this attribute in collection state.json . 
> Ideally, we would like to keep this as a part of the policy and policy only.
> h3. proposed new behavior
> if the new policy framework is being used {maxShardsPerNode} should result in 
> creating a new collection specific policy with the correct condition. for 
> example, if a collection "x" is created with the parameter 
> {{maxShardsPerNode=2}} we will  create a new policy in autoscaling.json
> {code}
> {
> "policies":{
> "x_COLL_POLICY" : [{replica:"<3", shard:"#ANY" , node:"ANY"}]
> }
> }
> {code}
> this policy will be referred to in the state.json. There will be no attribute 
> called {{maxShardsPerNode}} persisted to the state.json.
> if there is already a policy being specified for the collection, solr should 
> throw an error asking the user to edit the policy directly
> h3.the name is bad
> We must rename the attribute {{maxShardsPerNode}} to {{maxReplicasPerNode}}. 
> This should be a backward compatible change. The old name will continue to 
> work and the API would give a friendly warning if the old name is used



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11005) inconsistency when maxShardsPerNode used along with policies

2017-07-04 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-11005:
--
Summary: inconsistency when maxShardsPerNode used along with policies  
(was: inconsistency maxShardsPerNode when used along with policies)

> inconsistency when maxShardsPerNode used along with policies
> 
>
> Key: SOLR-11005
> URL: https://issues.apache.org/jira/browse/SOLR-11005
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>
> The attribute maxShardsPerNode conflicts with the conditions in the new 
> Policy framework
> for example , I can say maxShardsPerNode=5 and I can have a policy 
> {code}
> { replica:"<3" , shard: "#ANY", node:"#ANY"}
> {code}
> So, it makes no sense to persist this attribute in collection state.json . 
> Ideally, we would like to keep this as a part of the policy and policy only.
> h3. proposed new behavior
> if the new policy framework is being used {maxShardsPerNode} should result in 
> creating a new collection specific policy with the correct condition. for 
> example, if a collection "x" is created with the parameter 
> {{maxShardsPerNode=2}} we will  create a new policy in autoscaling.json
> {code}
> {
> "policies":{
> "x_COLL_POLICY" : [{replica:"<3", shard:"#ANY" , node:"ANY"}]
> }
> }
> {code}
> this policy will be referred to in the state.json. There will be no attribute 
> called {{maxShardsPerNode}} persisted to the state.json.
> if there is already a policy being specified for the collection, solr should 
> throw an error asking the user to edit the policy directly
> h3.the name is bad
> We must rename the attribute {{maxShardsPerNode}} to {{maxReplicasPerNode}}. 
> This should be a backward compatible change. The old name will continue to 
> work and the API would give a friendly warning if the old name is used



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11005) inconsistency maxShardsPerNode when used along with policies

2017-07-04 Thread Noble Paul (JIRA)
Noble Paul created SOLR-11005:
-

 Summary: inconsistency maxShardsPerNode when used along with 
policies
 Key: SOLR-11005
 URL: https://issues.apache.org/jira/browse/SOLR-11005
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Noble Paul


The attribute maxShardsPerNode conflicts with the conditions in the new Policy 
framework.

For example, I can say maxShardsPerNode=5 and I can also have a policy:

{code}
{ replica:"<3" , shard: "#ANY", node:"#ANY"}
{code}

So it makes no sense to persist this attribute in the collection's state.json. 
Ideally, we would like to keep this as a part of the policy, and the policy only.

h3. proposed new behavior
if the new policy framework is being used {maxShardsPerNode} should result in 
creating a new collection specific policy with the correct condition. for 
example, if a collection "x" is created with the parameter 
{{maxShardsPerNode=2}} we will  create a new policy in autoscaling.json
{code}
{
"policies":{
"x_COLL_POLICY" : [{replica:"<3", shard:"#ANY" , node:"ANY"}]
}
}
{code}
this policy will be referred to in the state.json. There will be no attribute 
called {{maxShardsPerNode}} persisted to the state.json.

if there is already a policy being specified for the collection, solr should 
throw an error asking the user to edit the policy directly

h3.the name is bad

We must rename the attribute {{maxShardsPerNode}} to {{maxReplicasPerNode}}. 
This should be a backward compatible change. The old name will continue to work 
and the API would give a friendly warning if the old name is used



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7897) RangeQuery optimization in IndexOrDocValuesQuery

2017-07-04 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073691#comment-16073691
 ] 

Adrien Grand commented on LUCENE-7897:
--

Right, we sometimes make the wrong decision; finding the optimal threshold is
hard!

For this particular query, I'm wondering whether there might be another issue:
does your range query match more than 50% of the index content? If yes, then
LUCENE-7641 probably kicks in, which makes the scorer much cheaper to create
than what IndexOrDocValuesQuery assumes. If you look at the charts at
https://www.elastic.co/blog/better-query-planning-for-range-queries-in-elasticsearch,
LUCENE-7641 is what makes points become fast again (faster than doc values in
some cases) when the range query matches most documents of the index.
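
For reference, here is a minimal sketch (not from the issue itself) of how such a
range query is typically combined with IndexOrDocValuesQuery in the Lucene API;
the field name "timestamp", the bounds, and the wrapper class are placeholders:

{code}
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.document.SortedNumericDocValuesField;
import org.apache.lucene.search.IndexOrDocValuesQuery;
import org.apache.lucene.search.Query;

class RangeQueryExample {
  // "timestamp" and the bounds are placeholders standing in for the query in the report.
  static Query newTimestampRange(long lower, long upper) {
    // The same range expressed twice: once against the points index, once against doc values.
    Query pointsQuery = LongPoint.newRangeQuery("timestamp", lower, upper);
    Query dvQuery = SortedNumericDocValuesField.newSlowRangeQuery("timestamp", lower, upper);
    // IndexOrDocValuesQuery runs the points query when the range leads iteration,
    // and the doc-values query when the range only verifies candidates produced
    // by another clause (for example, the TermQuery discussed above).
    return new IndexOrDocValuesQuery(pointsQuery, dvQuery);
  }
}
{code}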

> RangeQuery optimization in IndexOrDocValuesQuery 
> -
>
> Key: LUCENE-7897
> URL: https://issues.apache.org/jira/browse/LUCENE-7897
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: trunk, 7.0
>Reporter: Murali Krishna P
>
> For range queries, Lucene uses either points or doc values based on cost
> estimation
> (https://lucene.apache.org/core/6_5_0/core/org/apache/lucene/search/IndexOrDocValuesQuery.html).
> The scorer is chosen based on the minCost here:
> https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/search/Boolean2ScorerSupplier.java#L16
> However, the cost calculations for TermQuery and IndexOrDocValuesQuery seem to
> carry the same weight. Essentially, the cost depends on the docfreq in the term
> dictionary, the number of points visited, and the number of doc values. In a
> situation where the docfreq is not very restrictive, this means a lot of
> doc-value lookups, and using points would have been better.
> The following query, with 1M matches, takes 60ms with doc values but only 27ms
> with points. If I change the query to "message:*", which matches all docs, it
> chooses points (since the cost is the same), but with message:xyz it chooses
> doc values even though the doc frequency is 1 million, which results in many
> doc-value fetches. Would it make sense to make the cost of the doc-values
> query higher, or to use points if the docfreq is too high for the term query
> (i.e. find an optimal threshold where the points cost < doc-values cost)?
> {noformat}
> {
>   "query": {
> "bool": {
>   "must": [
> {
>   "query_string": {
> "query": "message:xyz"
>   }
> },
> {
>   "range": {
> "@timestamp": {
>   "gte": 149865240,
>   "lte": 149890500,
>   "format": "epoch_millis"
> }
>   }
> }
>   ]
> }
>   }
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7897) RangeQuery optimization in IndexOrDocValuesQuery

2017-07-04 Thread Murali Krishna P (JIRA)
Murali Krishna P created LUCENE-7897:


 Summary: RangeQuery optimization in IndexOrDocValuesQuery 
 Key: LUCENE-7897
 URL: https://issues.apache.org/jira/browse/LUCENE-7897
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: trunk, 7.0
Reporter: Murali Krishna P


For range queries, Lucene uses either points or doc values based on cost
estimation
(https://lucene.apache.org/core/6_5_0/core/org/apache/lucene/search/IndexOrDocValuesQuery.html).
The scorer is chosen based on the minCost here:
https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/search/Boolean2ScorerSupplier.java#L16

However, the cost calculations for TermQuery and IndexOrDocValuesQuery seem to
carry the same weight. Essentially, the cost depends on the docfreq in the term
dictionary, the number of points visited, and the number of doc values. In a
situation where the docfreq is not very restrictive, this means a lot of
doc-value lookups, and using points would have been better.

The following query, with 1M matches, takes 60ms with doc values but only 27ms
with points. If I change the query to "message:*", which matches all docs, it
chooses points (since the cost is the same), but with message:xyz it chooses doc
values even though the doc frequency is 1 million, which results in many
doc-value fetches. Would it make sense to make the cost of the doc-values query
higher, or to use points if the docfreq is too high for the term query (i.e.
find an optimal threshold where the points cost < doc-values cost)?

{noformat}
{
  "query": {
"bool": {
  "must": [
{
  "query_string": {
"query": "message:xyz"
  }
},
{
  "range": {
"@timestamp": {
  "gte": 149865240,
  "lte": 149890500,
  "format": "epoch_millis"
}
  }
}
  ]
}
  }
}
{noformat}




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+175) - Build # 20056 - Failure!

2017-07-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20056/
Java: 32bit/jdk-9-ea+175 -server -XX:+UseConcMarkSweepGC --illegal-access=deny

All tests passed

Build Log:
[...truncated 1676 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J0-20170704_123119_1447352693641767033008.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 3 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J2-20170704_123119_1448521433760996459384.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 27 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J1-20170704_123119_1447532228485863485210.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 294 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J2-20170704_123747_4371362905427617631469.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J1-20170704_123747_435806952159662269.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/test-framework/test/temp/junit4-J0-20170704_123747_4356624330174841857179.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 1054 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J0-20170704_123912_3612531401853002043113.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J2-20170704_123912_362351014830201335650.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 3 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/common/test/temp/junit4-J1-20170704_123912_36114358709828380073974.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 207 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/icu/test/temp/junit4-J1-20170704_124104_8978809260142638430904.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 14 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/analysis/icu/test/temp/junit4-J2-20170704_124104_89717241361601482339172.syserr
   

[jira] [Updated] (SOLR-10962) replicationHandler's reserveCommitDuration configurable in SolrCloud mode

2017-07-04 Thread Ramsey Haddad (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramsey Haddad updated SOLR-10962:
-
Attachment: SOLR-10962.patch

This patch takes [~cpoerschke]'s patch and adds [~hossman]'s suggestion.
I will look into [~shalinmangar]'s suggestion within the next week.

> replicationHandler's reserveCommitDuration configurable in SolrCloud mode
> -
>
> Key: SOLR-10962
> URL: https://issues.apache.org/jira/browse/SOLR-10962
> Project: Solr
>  Issue Type: New Feature
>  Components: replication (java)
>Reporter: Ramsey Haddad
>Priority: Minor
> Attachments: SOLR-10962.patch, SOLR-10962.patch, SOLR-10962.patch, 
> SOLR-10962.patch
>
>
> With SolrCloud mode, when doing replication via IndexFetcher, we occasionally
> see the fetch fail and then get restarted from scratch in cases where an
> index file is deleted after the fetch manifest is computed and before the fetch
> actually transfers the file. The risk of this happening can be reduced with a
> higher value of reserveCommitDuration. However, the current configuration
> only allows this value to be adjusted for "master" mode. This change allows
> the value to also be changed when using "SolrCloud" mode.
> https://lucene.apache.org/solr/guide/6_6/index-replication.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7896) Upgrade to RandomizedRunner 2.5.2

2017-07-04 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073662#comment-16073662
 ] 

Adrien Grand commented on LUCENE-7896:
--

+1

> Upgrade to RandomizedRunner 2.5.2
> -
>
> Key: LUCENE-7896
> URL: https://issues.apache.org/jira/browse/LUCENE-7896
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Simon Willnauer
>Priority: Minor
> Fix For: 7.0, master (8.0)
>
> Attachments: LUCENE-7896.patch
>
>
> RR 2.5.2 fixed a nasty error message that gets printed while running tests,
> which is pretty annoying if your environment hits it. Let's upgrade to 2.5.2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7896) Upgrade to RandomizedRunner 2.5.2

2017-07-04 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073659#comment-16073659
 ] 

Simon Willnauer commented on LUCENE-7896:
-

Here is the link to the RR issue:
https://github.com/randomizedtesting/randomizedtesting/issues/250

> Upgrade to RandomizedRunner 2.5.2
> -
>
> Key: LUCENE-7896
> URL: https://issues.apache.org/jira/browse/LUCENE-7896
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Simon Willnauer
>Priority: Minor
> Fix For: 7.0, master (8.0)
>
> Attachments: LUCENE-7896.patch
>
>
> RR 2.5.2 fixed a nasty error message that gets printed while running tests,
> which is pretty annoying if your environment hits it. Let's upgrade to 2.5.2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7896) Upgrade to RandomizedRunner 2.5.2

2017-07-04 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-7896:

Attachment: LUCENE-7896.patch

> Upgrade to RandomizedRunner 2.5.2
> -
>
> Key: LUCENE-7896
> URL: https://issues.apache.org/jira/browse/LUCENE-7896
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Simon Willnauer
>Priority: Minor
> Fix For: 7.0, master (8.0)
>
> Attachments: LUCENE-7896.patch
>
>
> RR 2.5.2 fixed a nasty error message that gets printed while running tests,
> which is pretty annoying if your environment hits it. Let's upgrade to 2.5.2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7896) Upgrade to RandomizedRunner 2.5.2

2017-07-04 Thread Simon Willnauer (JIRA)
Simon Willnauer created LUCENE-7896:
---

 Summary: Upgrade to RandomizedRunner 2.5.2
 Key: LUCENE-7896
 URL: https://issues.apache.org/jira/browse/LUCENE-7896
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Simon Willnauer
Priority: Minor
 Fix For: 7.0, master (8.0)


RR 2.5.2 fixed a nasty error message that gets printed while running tests,
which is pretty annoying if your environment hits it. Let's upgrade to 2.5.2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 1 - Unstable

2017-07-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/1/

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxTime

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([EEBE0BCB5C8CA480:744A7629C21638BC]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:878)
at 
org.apache.solr.update.AutoCommitTest.testMaxTime(AutoCommitTest.java:270)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=id:529==0=20=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:871)
... 40 more




Build Log:
[...truncated 11782 lines...]
   [junit4] Suite: org.apache.solr.update.AutoCommitTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-10957) fix potential NPE in SolrCoreParser.init

2017-07-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073648#comment-16073648
 ] 

ASF subversion and git services commented on SOLR-10957:


Commit db71c5615ac2f150e6e0b9f8e4126a0d46a29ef6 in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=db71c56 ]

SOLR-10957: Changed SolrCoreParser.init to use the resource loader from 
getSchema() instead of the resource loader from getCore().
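
For illustration only (not the committed patch verbatim), the change amounts to
something like the following sketch, assuming the request's schema is available
even when there is no core:

{code}
// Before: req.getCore().getResourceLoader() throws an NPE when the request has no core.
// After: take the loader from the schema, which core-less requests still provide.
SolrResourceLoader loader = req.getSchema().getResourceLoader();
{code}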


> fix potential NPE in SolrCoreParser.init
> 
>
> Key: SOLR-10957
> URL: https://issues.apache.org/jira/browse/SOLR-10957
> Project: Solr
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-10957.patch, SOLR-10957.patch
>
>
> [SolrQueryRequestBase|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/request/SolrQueryRequestBase.java]
>  accommodates requests with a null SolrCore and this small change is for 
> SolrCoreParser.init to do likewise.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10827) factor out abstract FilteringSolrMetricReporter

2017-07-04 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073647#comment-16073647
 ] 

ASF subversion and git services commented on SOLR-10827:


Commit d3c67cf5e4c4f1809e7c7ff921c55562fa6cb13f in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d3c67cf ]

SOLR-10827: Factor out abstract FilteringSolrMetricReporter class.


> factor out abstract FilteringSolrMetricReporter
> ---
>
> Key: SOLR-10827
> URL: https://issues.apache.org/jira/browse/SOLR-10827
> Project: Solr
>  Issue Type: Task
>  Components: metrics
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-10827.patch
>
>
> Currently multiple SolrMetricReporter classes have their own local filter
> settings; a common setting somewhere will reduce code duplication for
> existing, future, and custom reporters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_131) - Build # 1 - Unstable!

2017-07-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/1/
Java: 32bit/jdk1.8.0_131 -server -XX:+UseConcMarkSweepGC

6 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:54045/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:54045/collection1
at 
__randomizedtesting.SeedInfo.seed([774BC5144E354C2:8F20838BEA1F393A]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:637)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:252)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:484)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.commit(AbstractFullDistribZkTestBase.java:1581)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:209)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[jira] [Updated] (SOLR-11003) Enabling bi-directional CDCR active-active clusters

2017-07-04 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11003:

Attachment: SOLR-11003.patch

> Enabling bi-directional CDCR active-active clusters
> ---
>
> Key: SOLR-11003
> URL: https://issues.apache.org/jira/browse/SOLR-11003
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
> Attachments: SOLR-11003.patch
>
>
> The latest version of Solr CDCR across collections / clusters is in an
> active-passive format, where we can index into the source collection and the
> updates get forwarded to the passive one; the reverse is not supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are trying to get a design ready to index into both collections and have
> the updates reflected across the collections in real time: ClusterACollectionA
> => ClusterBCollectionB | ClusterBCollectionB => ClusterACollectionA.
> The best use case would be that we keep indexing into ClusterACollectionA,
> which forwards the updates to ClusterBCollectionB. If ClusterACollectionA goes
> down, we point the indexer and searcher applications to ClusterBCollectionB.
> Once ClusterACollectionA is up, depending on the update count, updates will be
> bootstrapped or forwarded to ClusterACollectionA from ClusterBCollectionB, and
> we keep indexing on ClusterBCollectionB.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11003) Enabling bi-directional CDCR active-active clusters

2017-07-04 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073586#comment-16073586
 ] 

Amrit Sarkar commented on SOLR-11003:
-

Patch uploaded:

{code}
modified:   
solr/core/src/java/org/apache/solr/handler/CdcrReplicator.java
modified:   
solr/core/src/java/org/apache/solr/update/CdcrTransactionLog.java
modified:   
solr/core/src/java/org/apache/solr/update/TransactionLog.java
new file:   
solr/core/src/test-files/solr/configsets/cdcr-cluster1/conf/schema.xml
new file:   
solr/core/src/test-files/solr/configsets/cdcr-cluster1/conf/solrconfig.xml
new file:   
solr/core/src/test-files/solr/configsets/cdcr-cluster2/conf/schema.xml
new file:   
solr/core/src/test-files/solr/configsets/cdcr-cluster2/conf/solrconfig.xml
new file:   
solr/core/src/test/org/apache/solr/cloud/CdcrBidirectionalTest.java
{code}

Added a test class, CdcrBidirectionalTest, where two active clusters talk to
each other at runtime.

The write operations in TransactionLog are repeated in CdcrTransactionLog to
accommodate the extra entry for each update. Repeated code! I am therefore
planning to add a TLogCommonUtils / UpdateLogCommonUtils to hold the common code
for both classes' methods.
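
Purely as a hypothetical illustration of the "extra entry" idea (this is not
taken from the attached patch), a tlog "add" entry with one additional element
might be written along these lines; the method name and {{sourceClusterId}} are
assumptions:

{code}
// Hypothetical sketch only: append one extra element to the tlog entry so a peer
// cluster can tell which cluster an update originated from and avoid forwarding loops.
private void writeCdcrAdd(JavaBinCodec codec, AddUpdateCommand cmd, int flags,
                          String sourceClusterId) throws IOException {
  codec.writeTag(JavaBinCodec.ARR, 4);                       // one element more than the plain entry
  codec.writeInt(UpdateLog.ADD | flags);                     // identifier
  codec.writeLong(cmd.getVersion());                         // version
  codec.writeStr(sourceClusterId);                           // assumed extra CDCR marker, placed before the payload
  codec.writeSolrInputDocument(cmd.getSolrInputDocument());  // payload stays last, as existing readers expect
}
{code}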

Eagerly looking forward to feedback, suggestions and improvements.

> Enabling bi-directional CDCR active-active clusters
> ---
>
> Key: SOLR-11003
> URL: https://issues.apache.org/jira/browse/SOLR-11003
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>
> The latest version of Solr CDCR across collections / clusters is in an
> active-passive format, where we can index into the source collection and the
> updates get forwarded to the passive one; the reverse is not supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are trying to get a design ready to index into both collections and have
> the updates reflected across the collections in real time: ClusterACollectionA
> => ClusterBCollectionB | ClusterBCollectionB => ClusterACollectionA.
> The best use case would be that we keep indexing into ClusterACollectionA,
> which forwards the updates to ClusterBCollectionB. If ClusterACollectionA goes
> down, we point the indexer and searcher applications to ClusterBCollectionB.
> Once ClusterACollectionA is up, depending on the update count, updates will be
> bootstrapped or forwarded to ClusterACollectionA from ClusterBCollectionB, and
> we keep indexing on ClusterBCollectionB.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11003) Enabling bi-directional CDCR active-active clusters

2017-07-04 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11003:

Description: 
The latest version of Solr CDCR across collections / clusters is in an
active-passive format, where we can index into the source collection and the
updates get forwarded to the passive one; the reverse is not supported.

https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
https://issues.apache.org/jira/browse/SOLR-6273

We are trying to get a design ready to index into both collections and have the
updates reflected across the collections in real time: ClusterACollectionA =>
ClusterBCollectionB | ClusterBCollectionB => ClusterACollectionA.

The best use case would be that we keep indexing into ClusterACollectionA, which
forwards the updates to ClusterBCollectionB. If ClusterACollectionA goes down,
we point the indexer and searcher applications to ClusterBCollectionB. Once
ClusterACollectionA is up, depending on the update count, updates will be
bootstrapped or forwarded to ClusterACollectionA from ClusterBCollectionB, and
we keep indexing on ClusterBCollectionB.

  was:
The latest version of Solr CDCR across collections / clusters is in 
active-passive format, where we can index into source collection and the 
updates gets forwarded to the passive one and vice-versa is not supported.

https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
https://issues.apache.org/jira/browse/SOLR-6273

We are try to get a  design ready to index in both collections and the updates 
gets reflected across the collections in real-time. ClusterACollectionA => 
ClusterBCollectionB | ClusterBCollectionB => ClusterACollectionA.

The best use-case would be to we keep indexing in ClusterACollectionA which 
forwarded the updates to ClusterBCollectionB. If ClusterACollectionA gets down, 
we point the indexer and searcher application to ClusterBCollectionB. Once 
ClusterACollectionA is up, depending updates count, they will be bootstrapped 
or forwarded to ClusterACollectionA from ClusterBCollectionB.


> Enabling bi-directional CDCR active-active clusters
> ---
>
> Key: SOLR-11003
> URL: https://issues.apache.org/jira/browse/SOLR-11003
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>
> The latest version of Solr CDCR across collections / clusters is in an
> active-passive format, where we can index into the source collection and the
> updates get forwarded to the passive one; the reverse is not supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are trying to get a design ready to index into both collections and have
> the updates reflected across the collections in real time: ClusterACollectionA
> => ClusterBCollectionB | ClusterBCollectionB => ClusterACollectionA.
> The best use case would be that we keep indexing into ClusterACollectionA,
> which forwards the updates to ClusterBCollectionB. If ClusterACollectionA goes
> down, we point the indexer and searcher applications to ClusterBCollectionB.
> Once ClusterACollectionA is up, depending on the update count, updates will be
> bootstrapped or forwarded to ClusterACollectionA from ClusterBCollectionB, and
> we keep indexing on ClusterBCollectionB.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11003) Enabling bi-directional CDCR active-active clusters

2017-07-04 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073575#comment-16073575
 ] 

Amrit Sarkar edited comment on SOLR-11003 at 7/4/17 12:27 PM:
--

Backward-compat:

Tlog entry indexes are fixed for each operation: identifier (0) and version (1)
are common to all.

*DELETE:*
identifier: delete
update version
payload

If you look at UpdateLog#doReplay, line 1801:
{code}
case UpdateLog.DELETE: {
    recoveryInfo.deletes++;
    byte[] idBytes = (byte[]) entry.get(2);
    DeleteUpdateCommand cmd = new DeleteUpdateCommand(req);
    cmd.setIndexedId(new BytesRef(idBytes));
    cmd.setVersion(version);
    cmd.setFlags(UpdateCommand.REPLAY | UpdateCommand.IGNORE_AUTOCOMMIT);
    if (debug) log.debug("delete " + cmd);
    proc.processDelete(cmd);
    break;
}
{code}
The position of the payload (2) is hardcoded. CdcrReplicator does the same. So
we can put anything at the next index (3).

*DELETE_BY_QUERY:*
Ditto, same layout!

*ADD:*
identifier: add or inplace
version
payload
*IN_PLACE_ADD:*
identifier: add or inplace
version
previous pointer
previous version
payload

UpdateLog, line 1916: since our current code handles both kinds of add in one
manner, it assumes the last index of the entry is the payload:
{code}
public static AddUpdateCommand convertTlogEntryToAddUpdateCommand(SolrQueryRequest req, List entry,
                                                                  int operation, long version) {
    assert operation == UpdateLog.ADD || operation == UpdateLog.UPDATE_INPLACE;
    SolrInputDocument sdoc = (SolrInputDocument) entry.get(entry.size()-1);
    AddUpdateCommand cmd = new AddUpdateCommand(req);
    cmd.solrDoc = sdoc;
    cmd.setVersion(version);
    if (operation == UPDATE_INPLACE) {
      long prevVersion = (Long) entry.get(UpdateLog.PREV_VERSION_IDX);
      cmd.prevVersion = prevVersion;
    }
    return cmd;
}
{code}
So our window for adding something is the index just before the last, i.e.
entry.size() - 2. Please see CdcrReplicator for the same.


was (Author: sarkaramr...@gmail.com):
Backward-compat::

tlogs entries index are definite for each operation: identifier(0) and 
version(1) common to all.

*DELETE:*
identifier: delete
update version
payload

if you see UpdateLog: doReplay: 1801
{code}
case UpdateLog.DELETE: {
recoveryInfo.deletes++;
byte[] idBytes = (byte[]) entry.get(2);
DeleteUpdateCommand cmd = new DeleteUpdateCommand(req);
cmd.setIndexedId(new BytesRef(idBytes));
cmd.setVersion(version);
cmd.setFlags(UpdateCommand.REPLAY | 
UpdateCommand.IGNORE_AUTOCOMMIT);
if (debug) log.debug("delete " + cmd);
proc.processDelete(cmd);
break;
  }
{code}
hardcoded, the position of payload(2) is hardcoded. See CdcrReplicator, same. 
So we can put anything on next index(3).

*DELETE_BY_QUERY:*
Ditto same!!

*ADD:*
indentifier: add or inplace
version
payload
*IN_PLACE_ADD:*
indentifier: add or inplace
version
previous pointer
previous version
payload

UpdateLog:: 1916, since our current code handles both adds in one manner, it 
assumes the last index of the entries is our payload:
{code}
public static AddUpdateCommand 
convertTlogEntryToAddUpdateCommand(SolrQueryRequest req, List entry,
int 
operation, long version) {
assert operation == UpdateLog.ADD || operation == UpdateLog.UPDATE_INPLACE;
SolrInputDocument sdoc = (SolrInputDocument) entry.get(entry.size()-1);
AddUpdateCommand cmd = new AddUpdateCommand(req);
cmd.solrDoc = sdoc;
cmd.setVersion(version);
if (operation == UPDATE_INPLACE) {
  long prevVersion = (Long) entry.get(UpdateLog.PREV_VERSION_IDX);
  cmd.prevVersion = prevVersion;
}
return cmd;
  }
{code}
So our window of adding something is the index just before the last. entry.size 
- 2. Please see CdcrReplictor for the same.

> Enabling bi-directional CDCR active-active clusters
> ---
>
> Key: SOLR-11003
> URL: https://issues.apache.org/jira/browse/SOLR-11003
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>
> The latest version of Solr CDCR across collections / clusters is in an
> active-passive format, where we can index into the source collection and the
> updates get forwarded to the passive one; the reverse is not supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are 
