[jira] [Commented] (SOLR-12357) TRA: Pre-emptively create next collection

2019-06-18 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867248#comment-16867248
 ] 

mosh commented on SOLR-12357:
-

Lately we have encountered time-series data that is sometimes broken and does 
not have a date.
We have been planning to index the broken data into a separate collection 
using our indexing pipeline, though this got us wondering whether we could 
propose an improvement to TRA and add this capability to its core logic.

Perhaps adding a new configuration option that routes un-routable documents to 
a specified collection could solve this? Is this a broad issue that others 
have encountered or are likely to encounter?

WDYT?
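
To make the proposal concrete, here is a purely illustrative Java sketch of the 
fallback behavior described above. The `unroutableCollection` setting and all 
class/method names below are hypothetical and do not exist in Solr today; this 
is only a sketch of the idea, not an implementation.

{code:java}
// Hypothetical sketch of the proposed fallback routing for a Time Routed Alias.
// Nothing here is real Solr API: the "unroutableCollection" setting and the
// helper names are invented purely to illustrate the idea.
import java.time.Instant;
import java.util.Map;
import java.util.Optional;

class TimeRoutedAliasSketch {
  private final String routeField;                      // e.g. "timestamp_dt"
  private final Optional<String> unroutableCollection;  // hypothetical new setting

  TimeRoutedAliasSketch(String routeField, Optional<String> unroutableCollection) {
    this.routeField = routeField;
    this.unroutableCollection = unroutableCollection;
  }

  /** Picks a target collection, falling back when the route field is missing. */
  String targetCollection(Map<String, Object> doc) {
    Object value = doc.get(routeField);
    if (value == null) {
      // Broken time-series document: no date to route on.
      return unroutableCollection.orElseThrow(() ->
          new IllegalArgumentException("Document is missing route field " + routeField));
    }
    return collectionForTimestamp(Instant.parse(value.toString()));
  }

  private String collectionForTimestamp(Instant timestamp) {
    // Placeholder for the existing TRA logic that maps a timestamp to a collection.
    return "myAlias_" + timestamp.toString().substring(0, 10);
  }
}
{code}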

> TRA: Pre-emptively create next collection 
> --
>
> Key: SOLR-12357
> URL: https://issues.apache.org/jira/browse/SOLR-12357
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
> Attachments: SOLR-12357.patch
>
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>
> When adding data to a Time Routed Alias (TRA), we sometimes need to create 
> new collections.  Today we only do this synchronously – on-demand when a 
> document is coming in.  But this can add delays, as inbound documents are 
> held up while a collection is created.  And there may be a problem, such as 
> a lack of resources (e.g. SolrCloud nodes with ample space) as defined by 
> the policy framework.  Such problems could be rectified sooner rather than 
> later, assuming there is log alerting in place (definitely out of scope 
> here).
> Pre-emptive TRA collection creation needs a time window configuration 
> parameter, perhaps named something like "preemptiveCreateWindowMs".  If a 
> document's timestamp is within this time window _from the end time of the 
> head/lead collection_ then the next collection can be created pre-emptively. 
> If no data is being sent to the TRA, no collections will be auto-created, 
> nor will it happen if older data is being added.  It may be convenient to 
> effectively limit this time setting to the _smaller_ of this value and the 
> TRA interval window, which I think is a fine limitation.
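
To make the proposed check concrete, below is a minimal Java sketch of the 
decision logic implied by the description. This is not the actual Solr 
implementation; the method and parameter names merely mirror the wording above.

{code:java}
import java.time.Instant;

// Illustrative sketch only; "preemptiveCreateWindowMs" comes from the issue
// description above, and this class is not Solr code.
final class PreemptiveCreateSketch {

  /**
   * Returns true if the next TRA collection should be created pre-emptively.
   *
   * @param docTimestamp             timestamp of the incoming document
   * @param headCollectionEndTime    end time of the current head/lead collection
   * @param preemptiveCreateWindowMs configured look-ahead window in milliseconds
   * @param traIntervalMs            the TRA's collection interval in milliseconds
   */
  static boolean shouldCreateNextCollection(Instant docTimestamp,
                                            Instant headCollectionEndTime,
                                            long preemptiveCreateWindowMs,
                                            long traIntervalMs) {
    // Effectively limit the window to the smaller of the setting and the TRA interval.
    long windowMs = Math.min(preemptiveCreateWindowMs, traIntervalMs);
    Instant windowStart = headCollectionEndTime.minusMillis(windowMs);
    // Only data close to (or past) the head collection's end time triggers
    // creation; older documents, or no documents at all, do not.
    return !docTimestamp.isBefore(windowStart);
  }
}
{code}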



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Linux (32bit/jdk1.8.0_201) - Build # 729 - Still Failing!

2019-06-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/729/
Java: 32bit/jdk1.8.0_201 -server -XX:+UseParallelGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName

Error Message:
java.lang.IllegalArgumentException: DTLSv1.2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
java.lang.IllegalArgumentException: DTLSv1.2
at 
__randomizedtesting.SeedInfo.seed([FFB12B927FC09796:4C35FC62C4910F47]:0)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:408)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkCreateCollection(TestMiniSolrCloudClusterSSL.java:273)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:243)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithNodeReplacement(TestMiniSolrCloudClusterSSL.java:164)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName(TestMiniSolrCloudClusterSSL.java:146)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-11.0.2) - Build # 5207 - Failure!

2019-06-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/5207/
Java: 64bit/jdk-11.0.2 -XX:-UseCompressedOops -XX:+UseG1GC

5 tests failed.
FAILED:  org.apache.solr.cloud.SplitShardTest.testSplitFuzz

Error Message:
Error from server at https://127.0.0.1:49618/solr: Underlying core creation 
failed while creating collection: splitFuzzCollection

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:49618/solr: Underlying core creation failed 
while creating collection: splitFuzzCollection
at 
__randomizedtesting.SeedInfo.seed([C75FB3BF7D7C14CF:76B605258660A4E4]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:656)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.SplitShardTest.testSplitFuzz(SplitShardTest.java:113)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-NightlyTests-8.x - Build # 126 - Still Failing

2019-06-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/126/

5 tests failed.
FAILED:  
org.apache.lucene.search.TestSimpleSearchEquivalence.testSloppyPhraseRelativePositions

Error Message:
expected:<5142> but was:<5102>

Stack Trace:
java.lang.AssertionError: expected:<5142> but was:<5102>
at 
__randomizedtesting.SeedInfo.seed([6D893C98C28434A2:4E81ED6EDB8C759B]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.lucene.search.SearchEquivalenceTestBase.assertSameScores(SearchEquivalenceTestBase.java:255)
at 
org.apache.lucene.search.SearchEquivalenceTestBase.assertSameScores(SearchEquivalenceTestBase.java:228)
at 
org.apache.lucene.search.TestSimpleSearchEquivalence.testSloppyPhraseRelativePositions(TestSimpleSearchEquivalence.java:207)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.lucene.search.TestSimpleSearchEquivalence.testBooleanBoostPropagation

Error Message:
expected:<5079> but was:<5022>

Stack Trace:
java.lang.AssertionError: expected:<5079> but was:<5022>
at 
__randomizedtesting.SeedInfo.seed([6D893C98C28434A2:41C4FBEF23607279]:0)

[JENKINS] Lucene-Solr-repro - Build # 3360 - Unstable

2019-06-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/3360/

[...truncated 32 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-8.x/247/consoleText

[repro] Revision: 0a915c32926257cb7406463b9829914a34540bee

[repro] Repro line:  ant test  -Dtestcase=TestMiniSolrCloudClusterSSL 
-Dtests.method=testSslWithInvalidPeerName -Dtests.seed=F80304785AFE1490 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=da 
-Dtests.timezone=Atlantic/Bermuda -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestMiniSolrCloudClusterSSL 
-Dtests.method=testSslWithCheckPeerName -Dtests.seed=F80304785AFE1490 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=da 
-Dtests.timezone=Atlantic/Bermuda -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
8a35088947321681b8850d2908a4d9bc83d960f6
[repro] git fetch
[repro] git checkout 0a915c32926257cb7406463b9829914a34540bee

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]   solr/core
[repro]   TestMiniSolrCloudClusterSSL
[repro] ant compile-test

[...truncated 3578 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestMiniSolrCloudClusterSSL" -Dtests.showOutput=onerror  
-Dtests.seed=F80304785AFE1490 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=da -Dtests.timezone=Atlantic/Bermuda -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 19450 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   5/5 failed: org.apache.solr.cloud.TestMiniSolrCloudClusterSSL

[repro] Re-testing 100% failures at the tip of branch_8x
[repro] git fetch
[repro] git checkout branch_8x

[...truncated 4 lines...]
[repro] git merge --ff-only

[...truncated 23 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]   solr/core
[repro]   TestMiniSolrCloudClusterSSL
[repro] ant compile-test

[...truncated 3578 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestMiniSolrCloudClusterSSL" -Dtests.showOutput=onerror  
-Dtests.seed=F80304785AFE1490 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=da -Dtests.timezone=Atlantic/Bermuda -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 19573 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_8x:
[repro]   5/5 failed: org.apache.solr.cloud.TestMiniSolrCloudClusterSSL

[repro] Re-testing 100% failures at the tip of branch_8x without a seed
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]   solr/core
[repro]   TestMiniSolrCloudClusterSSL
[repro] ant compile-test

[...truncated 3578 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestMiniSolrCloudClusterSSL" -Dtests.showOutput=onerror  
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=da 
-Dtests.timezone=Atlantic/Bermuda -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 19389 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_8x without a seed:
[repro]   5/5 failed: org.apache.solr.cloud.TestMiniSolrCloudClusterSSL
[repro] git checkout 8a35088947321681b8850d2908a4d9bc83d960f6

[...truncated 8 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-13-ea+shipilev-fastdebug) - Build # 24251 - Still Unstable!

2019-06-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24251/
Java: 64bit/jdk-13-ea+shipilev-fastdebug -XX:+UseCompressedOops 
-XX:+UseConcMarkSweepGC

8 tests failed.
FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth

Error Message:
must have failed

Stack Trace:
java.lang.AssertionError: must have failed
at 
__randomizedtesting.SeedInfo.seed([8549C6D4E43104D5:3927B0C6406287AF]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth(BasicAuthIntegrationTest.java:204)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:830)


FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Timeout occurred while waiting response from server at: 
https://127.0.0.1:37097/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occurred while 
waiting response from server at: https://127.0.0.1:37097/solr
at 

[JENKINS] Lucene-Solr-Tests-8.x - Build # 248 - Still Failing

2019-06-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/248/

2 tests failed.
FAILED:  
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithInvalidPeerName

Error Message:
Unexpected exception type, expected SolrServerException but got 
java.lang.IllegalArgumentException: DTLSv1.2

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception type, expected 
SolrServerException but got java.lang.IllegalArgumentException: DTLSv1.2
at 
__randomizedtesting.SeedInfo.seed([C8C96C78C69CE6F3:9F7829C3066019E2]:0)
at 
org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2731)
at 
org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2720)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithInvalidPeerName(TestMiniSolrCloudClusterSSL.java:207)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[JENKINS] Lucene-Solr-8.x-Windows (64bit/jdk-9.0.4) - Build # 316 - Unstable!

2019-06-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/316/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

9 tests failed.
FAILED:  
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName

Error Message:
java.lang.IllegalArgumentException: Not DTLS protocol

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
java.lang.IllegalArgumentException: Not DTLS protocol
at 
__randomizedtesting.SeedInfo.seed([AB7D3F4BD5BDF608:18F9E8BB6EEC6ED9]:0)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:408)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkCreateCollection(TestMiniSolrCloudClusterSSL.java:273)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:243)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithNodeReplacement(TestMiniSolrCloudClusterSSL.java:164)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName(TestMiniSolrCloudClusterSSL.java:146)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-13560) Add isNull and notNull Stream Evaluators

2019-06-18 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867176#comment-16867176
 ] 

ASF subversion and git services commented on SOLR-13560:


Commit d82fe011bfeaf67fdff9e363b80b1489c2da369f in lucene-solr's branch 
refs/heads/branch_8x from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d82fe01 ]

SOLR-13560: Fix precommit


> Add isNull and notNull Stream Evaluators
> 
>
> Key: SOLR-13560
> URL: https://issues.apache.org/jira/browse/SOLR-13560
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13560.patch
>
>
> This ticket adds two Stream Evaluators for testing for null values in Tuples. 
> These are much-needed functions, as null values currently cannot be detected 
> with the *eq* Stream Evaluator because null values are evaluated to the 
> parameter name rather than null. This change was made to support String 
> literal parameters without quotes.
> The isNull and notNull Stream Evaluators properly detect nulls so they can be 
> used to filter tuples in a *having* expression or replace nulls in a *select* 
> expression.
> Sample syntax for null filtering:
> {code:java}
> having(random(testapp, q="*:*", fl="response_d", rows="2"),
>notNull(response_d)){code}
> Sample syntax for null replacement:
> {code:java}
> select(random(testapp, q="*:*", fl="id, response_d", rows="2"),
>id,
>if(isNull(response_d),-1, response_d) as response_d){code}
>  
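
For readers unfamiliar with the evaluators, below is a standalone Java sketch 
of the isNull/notNull semantics described above. It deliberately uses a plain 
Map in place of Solr's Tuple and is not the actual StreamEvaluator 
implementation.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Standalone sketch of isNull / notNull semantics over a tuple represented as
// a plain Map; it does not use Solr's Tuple or StreamEvaluator APIs.
final class NullEvaluatorSketch {

  static boolean isNull(Map<String, Object> tuple, String field) {
    return tuple.get(field) == null;  // true when the field is missing or explicitly null
  }

  static boolean notNull(Map<String, Object> tuple, String field) {
    return !isNull(tuple, field);
  }

  public static void main(String[] args) {
    Map<String, Object> tuple = new HashMap<>();
    tuple.put("id", "doc-1");                          // no "response_d" value present
    System.out.println(isNull(tuple, "response_d"));   // true  -> would be filtered by having(...)
    System.out.println(notNull(tuple, "id"));          // true  -> passes the notNull check
  }
}
{code}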



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13560) Add isNull and notNull Stream Evaluators

2019-06-18 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867175#comment-16867175
 ] 

ASF subversion and git services commented on SOLR-13560:


Commit ecd702bf4a00fcc2215c815b81d7080724e2ee78 in lucene-solr's branch 
refs/heads/branch_8x from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ecd702b ]

SOLR-13560: Add isNull and notNull Stream Evaluators


> Add isNull and notNull Stream Evaluators
> 
>
> Key: SOLR-13560
> URL: https://issues.apache.org/jira/browse/SOLR-13560
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13560.patch
>
>
> This ticket adds two Stream Evaluators for testing for null values in Tuples. 
> These are much-needed functions, as null values currently cannot be detected 
> with the *eq* Stream Evaluator because null values are evaluated to the 
> parameter name rather than null. This change was made to support String 
> literal parameters without quotes.
> The isNull and notNull Stream Evaluators properly detect nulls so they can be 
> used to filter tuples in a *having* expression or replace nulls in a *select* 
> expression.
> Sample syntax for null filtering:
> {code:java}
> having(random(testapp, q="*:*", fl="response_d", rows="2"),
>notNull(response_d)){code}
> Sample syntax for null replacement:
> {code:java}
> select(random(testapp, q="*:*", fl="id, response_d", rows="2"),
>id,
>if(isNull(response_d),-1, response_d) as response_d){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13560) Add isNull and notNull Stream Evaluators

2019-06-18 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867173#comment-16867173
 ] 

ASF subversion and git services commented on SOLR-13560:


Commit 8a35088947321681b8850d2908a4d9bc83d960f6 in lucene-solr's branch 
refs/heads/master from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8a35088 ]

SOLR-13560: Fix precommit


> Add isNull and notNull Stream Evaluators
> 
>
> Key: SOLR-13560
> URL: https://issues.apache.org/jira/browse/SOLR-13560
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13560.patch
>
>
> This ticket adds two Stream Evaluators for testing for null values in Tuples. 
> These are much-needed functions, as null values currently cannot be detected 
> with the *eq* Stream Evaluator because null values are evaluated to the 
> parameter name rather than null. This change was made to support String 
> literal parameters without quotes.
> The isNull and notNull Stream Evaluators properly detect nulls so they can be 
> used to filter tuples in a *having* expression or replace nulls in a *select* 
> expression.
> Sample syntax for null filtering:
> {code:java}
> having(random(testapp, q="*:*", fl="response_d", rows="2"),
>notNull(response_d)){code}
> Sample syntax for null replacement:
> {code:java}
> select(random(testapp, q="*:*", fl="id, response_d", rows="2"),
>id,
>if(isNull(response_d),-1, response_d) as response_d){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13560) Add isNull and notNull Stream Evaluators

2019-06-18 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867172#comment-16867172
 ] 

ASF subversion and git services commented on SOLR-13560:


Commit 1dd98ca65dd90db6998fe54b56860e90d8f398d1 in lucene-solr's branch 
refs/heads/master from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1dd98ca ]

SOLR-13560: Add isNull and notNull Stream Evaluators


> Add isNull and notNull Stream Evaluators
> 
>
> Key: SOLR-13560
> URL: https://issues.apache.org/jira/browse/SOLR-13560
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13560.patch
>
>
> This ticket adds two Stream Evaluators for testing for null values in Tuples. 
> These are much-needed functions, as null values currently cannot be detected 
> with the *eq* Stream Evaluator because null values are evaluated to the 
> parameter name rather than null. This change was made to support String 
> literal parameters without quotes.
> The isNull and notNull Stream Evaluators properly detect nulls so they can be 
> used to filter tuples in a *having* expression or replace nulls in a *select* 
> expression.
> Sample syntax for null filtering:
> {code:java}
> having(random(testapp, q="*:*", fl="response_d", rows="2"),
>notNull(response_d)){code}
> Sample syntax for null replacement:
> {code:java}
> select(random(testapp, q="*:*", fl="id, response_d", rows="2"),
>id,
>if(isNull(response_d),-1, response_d) as response_d){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13560) Add isNull and notNull Stream Evaluators

2019-06-18 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13560:
--
Description: 
This ticket adds two Stream Evaluators for testing for null values in Tuples. 
These are much-needed functions, as null values currently cannot be detected 
with the *eq* Stream Evaluator because null values are evaluated to the 
parameter name rather than null. This change was made to support String 
literal parameters without quotes.

The isNull and notNull Stream Evaluators properly detect nulls so they can be 
used to filter tuples in a *having* expression or replace nulls in a *select* 
expression.

Sample syntax for null filtering:
{code:java}
having(random(testapp, q="*:*", fl="response_d", rows="2"),
   notNull(response_d)){code}
Sample syntax for null replacement:
{code:java}
select(random(testapp, q="*:*", fl="id, response_d", rows="2"),
   id,
   if(isNull(response_d),-1, response_d) as response_d){code}
 

  was:
This ticket adds two Stream Evaluators for testing for null values in Tuples. 
These are much-needed functions, as null values currently cannot be detected 
with the *eq* Stream Evaluator because null values are evaluated to the 
parameter name rather than null. This change was made to support String 
literal parameters without quotes.

The isNull and notNull Stream Evaluators properly detect nulls so they can be 
used to filter tuples in a *having* expression or replace nulls in a *select* 
expression.

Sample syntax for null filtering:
{code:java}
having(random(testapp, q="*:*", fl="response_d", rows="2"),
   notNull(response_d)){code}
Sample syntax for null replacement:
{code:java}
select(random(testapp, q="*:*", fl="id, response_d", rows="2"),
   id,
   if(isNull(response_d),-1, response_d) as response_d){code}
 

 


> Add isNull and notNull Stream Evaluators
> 
>
> Key: SOLR-13560
> URL: https://issues.apache.org/jira/browse/SOLR-13560
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13560.patch
>
>
> This ticket adds two Stream Evaluators for testing for null values in Tuples. 
> These are much-needed functions, as null values currently cannot be detected 
> with the *eq* Stream Evaluator because null values are evaluated to the 
> parameter name rather than null. This change was made to support String 
> literal parameters without quotes.
> The isNull and notNull Stream Evaluators properly detect nulls so they can be 
> used to filter tuples in a *having* expression or replace nulls in a *select* 
> expression.
> Sample syntax for null filtering:
> {code:java}
> having(random(testapp, q="*:*", fl="response_d", rows="2"),
>notNull(response_d)){code}
> Sample syntax for null replacement:
> {code:java}
> select(random(testapp, q="*:*", fl="id, response_d", rows="2"),
>id,
>if(isNull(response_d),-1, response_d) as response_d){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13560) Add isNull and notNull Stream Evaluators

2019-06-18 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13560:
--
Description: 
This ticket adds two Stream Evaluators for testing for null values in Tuples. 
These are much-needed functions, as null values currently cannot be detected 
with the *eq* Stream Evaluator because null values are evaluated to the 
parameter name rather than null. This change was made to support String 
literal parameters without quotes.

The isNull and notNull Stream Evaluators properly detect nulls so they can be 
used to filter tuples in a *having* expression or replace nulls in a *select* 
expression.

Sample syntax for null filtering:
{code:java}
having(random(testapp, q="*:*", fl="response_d", rows="2"),
   notNull(response_d)){code}
Sample syntax for null replacement:
{code:java}
select(random(testapp, q="*:*", fl="id, response_d", rows="2"),
   id,
   if(isNull(response_d),-1, response_d) as response_d){code}
 

 

  was:
This ticket adds two Stream Evaluators for testing for null values in Tuples. 
These are much-needed functions, as null values currently cannot be detected 
with the *eq* Stream Evaluator because null values are evaluated to the 
parameter name rather than null. This change was made to support String 
literal parameters without quotes.

The isNull and notNull Stream Evaluators properly detect nulls so they can be 
used to filter tuples in a *having* expression or replace nulls in a *select* 
expression.

Sample syntax:
{code:java}
having(random(testapp, q="*:*", fl="response_d", rows="2"),
   notNull(response_d)){code}
 


> Add isNull and notNull Stream Evaluators
> 
>
> Key: SOLR-13560
> URL: https://issues.apache.org/jira/browse/SOLR-13560
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13560.patch
>
>
> This ticket adds two Stream Evaluators for testing for null values in Tuples. 
> These are much-needed functions, as null values currently cannot be detected 
> with the *eq* Stream Evaluator because null values are evaluated to the 
> parameter name rather than null. This change was made to support String 
> literal parameters without quotes.
> The isNull and notNull Stream Evaluators properly detect nulls so they can be 
> used to filter tuples in a *having* expression or replace nulls in a *select* 
> expression.
> Sample syntax for null filtering:
> {code:java}
> having(random(testapp, q="*:*", fl="response_d", rows="2"),
>notNull(response_d)){code}
> Sample syntax for null replacement:
> {code:java}
> select(random(testapp, q="*:*", fl="id, response_d", rows="2"),
>id,
>if(isNull(response_d),-1, response_d) as response_d){code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13560) Add isNull and notNull Stream Evaluators

2019-06-18 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13560:
--
Attachment: SOLR-13560.patch

> Add isNull and notNull Stream Evaluators
> 
>
> Key: SOLR-13560
> URL: https://issues.apache.org/jira/browse/SOLR-13560
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-13560.patch
>
>
> This ticket adds two Stream Evaluators for testing for null values in Tuples. 
> These are much-needed functions, as null values currently cannot be detected 
> with the *eq* Stream Evaluator because null values are evaluated to the 
> parameter name rather than null. This change was made to support String 
> literal parameters without quotes.
> The isNull and notNull Stream Evaluators properly detect nulls so they can be 
> used to filter tuples in a *having* expression or replace nulls in a *select* 
> expression.
> Sample syntax:
> {code:java}
> having(random(testapp, q="*:*", fl="response_d", rows="2"),
>notNull(response_d)){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13560) Add isNull and notNull Stream Evaluators

2019-06-18 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13560:
--
Description: 
This ticket adds two Stream Evaluators for testing for null values in Tuples. 
These are much-needed functions, as null values currently cannot be detected 
with the *eq* Stream Evaluator because null values are evaluated to the 
parameter name rather than null. This change was made to support String 
literal parameters without quotes.

The isNull and notNull Stream Evaluators properly detect nulls so they can be 
used to filter tuples in a *having* expression or replace nulls in a *select* 
expression.

Sample syntax:
{code:java}
having(random(testapp, q="*:*", fl="response_d", rows="2"),
   notNull(response_d)){code}
 

  was:
This ticket adds two Stream Evaluators for testing for null values in Tuples. 
These are much-needed functions, as null values currently cannot be detected 
with the *eq* Stream Evaluator because null values are evaluated to the 
variable/field name rather than null. This change was made to support String 
literals without quotes.

The isNull and notNull Stream Evaluators properly detect nulls so they can be 
used to filter tuples in a *having* expression or replace nulls in a *select* 
expression.

Sample syntax:
{code:java}
having(random(testapp, q="*:*", fl="response_d", rows="2"),
   notNull(response_d)){code}
 


> Add isNull and notNull Stream Evaluators
> 
>
> Key: SOLR-13560
> URL: https://issues.apache.org/jira/browse/SOLR-13560
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
>
> This ticket adds two Stream Evaluators for testing for null values in Tuples. 
> These are much-needed functions, as null values currently cannot be detected 
> with the *eq* Stream Evaluator because null values are evaluated to the 
> parameter name rather than null. This change was made to support String 
> literal parameters without quotes.
> The isNull and notNull Stream Evaluators properly detect nulls so they can be 
> used to filter tuples in a *having* expression or replace nulls in a *select* 
> expression.
> Sample syntax:
> {code:java}
> having(random(testapp, q="*:*", fl="response_d", rows="2"),
>notNull(response_d)){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13560) Add isNull and notNull Stream Evaluators

2019-06-18 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13560:
--
Description: 
This ticket adds two Stream Evaluators for testing for null values in Tuples. 
These are much-needed functions, as null values currently cannot be detected 
with the *eq* Stream Evaluator because null values are evaluated to the 
variable/field name rather than null. This change was made to support String 
literals without quotes.

The isNull and notNull Stream Evaluators properly detect nulls so they can be 
used to filter tuples in a *having* expression or replace nulls in a *select* 
expression.

Sample syntax:
{code:java}
having(random(testapp, q="*:*", fl="response_d", rows="2"),
   notNull(response_d)){code}
 

  was:
This ticket adds two Stream Evaluators for testing for null values in Tuples. 
These are much-needed functions, as null values currently cannot be detected 
with the *eq* Stream Evaluator because null values are evaluated to the 
variable name rather than null. This change was made to support String 
literals without quotes.

The isNull and notNull Stream Evaluators properly detect nulls so they can be 
used to filter tuples in a *having* expression or replace nulls in a *select* 
expression.

Sample syntax:
{code:java}
having(random(testapp, q="*:*", fl="response_d", rows="2"),
   notNull(response_d)){code}
 


> Add isNull and notNull Stream Evaluators
> 
>
> Key: SOLR-13560
> URL: https://issues.apache.org/jira/browse/SOLR-13560
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
>
> This ticket adds two Stream Evaluators for testing for null values in Tuples. 
> These are much-needed functions, as null values currently cannot be detected 
> with the *eq* Stream Evaluator because null values are evaluated to the 
> variable/field name rather than null. This change was made to support 
> String literals without quotes.
> The isNull and notNull Stream Evaluators properly detect nulls so they can be 
> used to filter tuples in a *having* expression or replace nulls in a *select* 
> expression.
> Sample syntax:
> {code:java}
> having(random(testapp, q="*:*", fl="response_d", rows="2"),
>notNull(response_d)){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13560) Add isNull and notNull Stream Evaluators

2019-06-18 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-13560:
--
Description: 
This ticket adds two Stream Evaluators for testing for null values in Tuples. 
These are much-needed functions, as null values currently cannot be detected 
with the *eq* Stream Evaluator because null values are evaluated to the 
variable name rather than null. This change was made to support String 
literals without quotes.

The isNull and notNull Stream Evaluators properly detect nulls so they can be 
used to filter tuples in a *having* expression or replace nulls in a *select* 
expression.

Sample syntax:
{code:java}
having(random(testapp, q="*:*", fl="response_d", rows="2"),
   notNull(response_d)){code}
 

  was:
This ticket adds two Stream Evaluators for testing for null values in Tuples. 
These are much needed functions as currently null values are not possible to 
detect with the *eq* Stream Evaluator because null values are evaluated to the 
variable name, rather than null. This change was made to support String 
literals without quotes.

The isNull and notNull Stream Evaluators properly detect nulls so they can be 
used to filter tuples in a *having* expression or replace nulls in a *select* 
expression.


> Add isNull and notNull Stream Evaluators
> 
>
> Key: SOLR-13560
> URL: https://issues.apache.org/jira/browse/SOLR-13560
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
>
> This ticket adds two Stream Evaluators for testing for null values in Tuples. 
> These are much needed functions as currently null values are not possible to 
> detect with the *eq* Stream Evaluator because null values are evaluated to 
> the variable name, rather than null. This change was made to support String 
> literals without quotes.
> The isNull and notNull Stream Evaluators properly detect nulls so they can be 
> used to filter tuples in a *having* expression or replace nulls in a *select* 
> expression.
> Sample syntax:
> {code:java}
> having(random(testapp, q="*:*", fl="response_d", rows="2"),
>notNull(response_d)){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13560) Add isNull and notNull Stream Evaluators

2019-06-18 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-13560:
-

 Summary: Add isNull and notNull Stream Evaluators
 Key: SOLR-13560
 URL: https://issues.apache.org/jira/browse/SOLR-13560
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


This ticket adds two Stream Evaluators for testing for null values in Tuples. 
These are much needed functions as currently null values are not possible to 
detect with the *eq* Stream Evaluator because null values are evaluated to the 
variable name, rather than null. This change was made to support String 
literals without quotes.

The isNull and notNull Stream Evaluators properly detect nulls so they can be 
used to filter tuples in a *having* expression or replace nulls in a *select* 
expression.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-13-ea+18) - Build # 8002 - Still Unstable!

2019-06-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/8002/
Java: 64bit/jdk-13-ea+18 -XX:+UseCompressedOops -XX:+UseParallelGC

5 tests failed.
FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth

Error Message:
Software caused connection abort: recv failed

Stack Trace:
javax.net.ssl.SSLException: Software caused connection abort: recv failed
at 
__randomizedtesting.SeedInfo.seed([A5272EB44E258D6F:194958A6EA760E15]:0)
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:127)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:320)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:263)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:258)
at 
java.base/sun.security.ssl.SSLSocketImpl.handleException(SSLSocketImpl.java:1501)
at 
java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:935)
at 
org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at 
org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at 
org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at 
org.apache.solr.cloud.SolrCloudAuthTestCase.verifySecurityStatus(SolrCloudAuthTestCase.java:200)
at 
org.apache.solr.cloud.SolrCloudAuthTestCase.verifySecurityStatus(SolrCloudAuthTestCase.java:176)
at 
org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth(BasicAuthIntegrationTest.java:127)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
   

[jira] [Commented] (SOLR-12988) Avoid using TLSv1.3 for HttpClient

2019-06-18 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867132#comment-16867132
 ] 

Hoss Man commented on SOLR-12988:
-

We're also seeing failures like this on branch_8x w/older JVMs...

[https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/728/]
Java: 64bit/jdk1.8.0_201 -XX:-UseCompressedOops -XX:+UseSerialGC

{noformat}
2 tests failed.
FAILED:  
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName

Error Message:
java.lang.IllegalArgumentException: DTLSv1.2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
java.lang.IllegalArgumentException: DTLSv1.2
at 
__randomizedtesting.SeedInfo.seed([6EC85966027A671:B5685266DB763EA0]:0)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:408)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkCreateCollection(TestMiniSolrCloudClusterSSL.java:273)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:243)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithNodeReplacement(TestMiniSolrCloudClusterSSL.java:164)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName(TestMiniSolrCloudClusterSSL.java:146)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
...
Caused by: java.lang.IllegalArgumentException: DTLSv1.2
at sun.security.ssl.ProtocolVersion.valueOf(ProtocolVersion.java:187)
at sun.security.ssl.ProtocolList.convert(ProtocolList.java:84)
at sun.security.ssl.ProtocolList.(ProtocolList.java:52)
at 
sun.security.ssl.SSLSocketImpl.setEnabledProtocols(SSLSocketImpl.java:2534)
at 
org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:371)
at 
org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:355)
at 
org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
at 
org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:373)
at 
org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:394)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:555)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)

{noformat}

> Avoid using TLSv1.3 for HttpClient
> --
>
> Key: SOLR-12988
> URL: https://issues.apache.org/jira/browse/SOLR-12988
> Project: Solr
>  Issue Type: Test
>Reporter: Hoss Man
>Assignee: Cao Manh Dat
>Priority: Major
>  Labels: Java11, Java12
> Attachments: SOLR-13413.patch
>
>
> HTTPCLIENT-1967 indicates that HttpClient can't be used properly with 
> TLSv1.3. It caused some test failures below, therefore we should enforce 
> HttpClient to use TLSv1.2 or lower versions.
> TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName seems to fail 100% of 
> the time when run 

[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk1.8.0_201) - Build # 728 - Failure!

2019-06-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/728/
Java: 64bit/jdk1.8.0_201 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName

Error Message:
java.lang.IllegalArgumentException: DTLSv1.2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
java.lang.IllegalArgumentException: DTLSv1.2
at 
__randomizedtesting.SeedInfo.seed([6EC85966027A671:B5685266DB763EA0]:0)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:408)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkCreateCollection(TestMiniSolrCloudClusterSSL.java:273)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:243)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithNodeReplacement(TestMiniSolrCloudClusterSSL.java:164)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName(TestMiniSolrCloudClusterSSL.java:146)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 3388 - Still Unstable

2019-06-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3388/

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.embedded.LargeVolumeBinaryJettyTest.testMultiThreaded

Error Message:
Captured an uncaught exception in thread: Thread[id=63, name=DocThread-4, 
state=RUNNABLE, group=TGRP-LargeVolumeBinaryJettyTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=63, name=DocThread-4, state=RUNNABLE, 
group=TGRP-LargeVolumeBinaryJettyTest]
Caused by: java.lang.AssertionError: DocThread-4---IOException occurred when 
talking to server at: https://127.0.0.1:44037/solr/collection1
at __randomizedtesting.SeedInfo.seed([9641F760127029FD]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.client.solrj.LargeVolumeTestBase$DocThread.run(LargeVolumeTestBase.java:128)




Build Log:
[...truncated 16263 lines...]
   [junit4] Suite: 
org.apache.solr.client.solrj.embedded.LargeVolumeBinaryJettyTest
   [junit4]   2> 8563 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[9641F760127029FD]-worker) [ ] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-solrj/test/J1/temp/solr.client.solrj.embedded.LargeVolumeBinaryJettyTest_9641F760127029FD-001/init-core-data-001
   [junit4]   2> 8568 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[9641F760127029FD]-worker) [ ] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 8572 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[9641F760127029FD]-worker) [ ] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason="", ssl=0.0/0.0, value=0.0/0.0, 
clientAuth=0.0/0.0)
   [junit4]   2> 8951 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[9641F760127029FD]-worker) [ ] 
o.a.s.SolrTestCaseJ4 initCore
   [junit4]   2> 8951 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[9641F760127029FD]-worker) [ ] 
o.a.s.SolrTestCaseJ4 initCore end
   [junit4]   2> 8952 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[9641F760127029FD]-worker) [ ] 
o.a.s.SolrTestCaseJ4 Writing core.properties file to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-solrj/test/J1/temp/solr.client.solrj.embedded.LargeVolumeBinaryJettyTest_9641F760127029FD-001/tempDir-002/cores/core
   [junit4]   2> 9274 WARN  
(SUITE-LargeVolumeBinaryJettyTest-seed#[9641F760127029FD]-worker) [ ] 
o.e.j.s.AbstractConnector Ignoring deprecated socket close linger time
   [junit4]   2> 9407 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[9641F760127029FD]-worker) [ ] 
o.a.s.c.s.e.JettySolrRunner Start Jetty (original configured port=0)
   [junit4]   2> 9409 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[9641F760127029FD]-worker) [ ] 
o.a.s.c.s.e.JettySolrRunner Trying to start Jetty on port 0 try number 1 ...
   [junit4]   2> 9424 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[9641F760127029FD]-worker) [ ] 
o.e.j.s.Server jetty-9.4.19.v20190610; built: 2019-06-10T16:30:51.723Z; git: 
afcf563148970e98786327af5e07c261fda175d3; jvm 11.0.1+13-LTS
   [junit4]   2> 9439 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[9641F760127029FD]-worker) [ ] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 9439 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[9641F760127029FD]-worker) [ ] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 9441 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[9641F760127029FD]-worker) [ ] 
o.e.j.s.session node0 Scavenging every 66ms
   [junit4]   2> 9457 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[9641F760127029FD]-worker) [ ] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@cf4e708{/solr,null,AVAILABLE}
   [junit4]   2> 9514 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[9641F760127029FD]-worker) [ ] 
o.e.j.s.AbstractConnector Started ServerConnector@2f2f71d{ssl,[ssl, alpn, 
http/1.1, h2]}{127.0.0.1:44037}
   [junit4]   2> 9514 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[9641F760127029FD]-worker) [ ] 
o.e.j.s.Server Started @9561ms
   [junit4]   2> 9515 INFO  
(SUITE-LargeVolumeBinaryJettyTest-seed#[9641F760127029FD]-worker) [ ] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
configSetBaseDir=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-solrj/test/J1/temp/solr.client.solrj.embedded.LargeVolumeBinaryJettyTest_9641F760127029FD-001/tempDir-001,
 hostPort=44037, 
coreRootDirectory=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-solrj/test/J1/temp/solr.client.solrj.embedded.LargeVolumeBinaryJettyTest_9641F760127029FD-001/tempDir-002/cores}
   [junit4]   2> 

[jira] [Commented] (SOLR-12988) Avoid using TLSv1.3 for HttpClient

2019-06-18 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867086#comment-16867086
 ] 

Hoss Man commented on SOLR-12988:
-

{quote}We could potentially try to make the detection very sophisticated, and 
dependent on checkPeerName ...
{quote}
 
while looking into SOLR-12990 I just realized I misread your commit: you *did* 
make the "don't allow TLSv1.3" logic conditional on whether checkPeerName=true, 
but it's also a silent modification of the defaults -- users won't get any 
logging/notice unless they've explicitly set the "https.protocols" sysprop to 
_only_ specify TLSv1.3 (and get a failure) ... which really seems like bad 
default behavior ... users who set checkPeerName=true to try to ensure 
_more_ security silently get _downgraded_ cipher support?



I really think that we should just:
* make sure jenkins boxes are running 11.0.3
* revert most of your commit, except for the test changes that re-enable SSL 
testing on java11
* document the known JDK bugs

And then consider as a future improvement logging/warnings about those JDK bugs 
if we can auto-detect them.
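
For reference only, here is a sketch of what explicitly pinning HttpClient to TLSv1.2 looks like with stock HttpClient APIs on the client side (the class name below is illustrative, not the actual Solr change):
{code:java}
import javax.net.ssl.SSLContext;

import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.ssl.SSLContexts;

public class Tls12PinnedClient {
  /** Builds an HttpClient that only negotiates TLSv1.2, regardless of JVM defaults. */
  public static CloseableHttpClient build() {
    SSLContext ctx = SSLContexts.createSystemDefault();
    SSLConnectionSocketFactory socketFactory = new SSLConnectionSocketFactory(
        ctx,
        new String[] {"TLSv1.2"},  // explicitly enabled protocols
        null,                      // null = keep the default cipher suites
        SSLConnectionSocketFactory.getDefaultHostnameVerifier());
    return HttpClients.custom().setSSLSocketFactory(socketFactory).build();
  }
}
{code}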

> Avoid using TLSv1.3 for HttpClient
> --
>
> Key: SOLR-12988
> URL: https://issues.apache.org/jira/browse/SOLR-12988
> Project: Solr
>  Issue Type: Test
>Reporter: Hoss Man
>Assignee: Cao Manh Dat
>Priority: Major
>  Labels: Java11, Java12
> Attachments: SOLR-13413.patch
>
>
> HTTPCLIENT-1967 indicates that HttpClient can't be used properly with 
> TLSv1.3. It caused some test failures below, therefore we should enforce 
> HttpClient to use TLSv1.2 or lower versions.
> TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName seems to fail 100% of 
> the time when run with java11 (or java12), regardless of seed, on both master 
> & 7x.
> The nature of the problem and the way our htp stack works suggests it *may* 
> ultimately be a jetty bug (perhaps related to [jetty 
> issue#2711|https://github.com/eclipse/jetty.project/issues/2711]?)
> *HOWEVER* ... as far as i can tell, whatever the root cause is, seems to have 
> been fixed on the {{jira/http2}} branch (as of 
> 52bc163dc1804c31af09c1fba99647005da415ad) which should hopefully be getting 
> merged to master soon.
> Filing this issue largely for tracking purpose, although we may also want to 
> use it for discussions/considerations of other backports/fixes to 7x



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-13-ea+shipilev-fastdebug) - Build # 24250 - Still Unstable!

2019-06-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24250/
Java: 64bit/jdk-13-ea+shipilev-fastdebug -XX:+UseCompressedOops 
-XX:+UseParallelGC

13 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplicaLegacy

Error Message:
Timeout occurred while waiting response from server at: 
https://127.0.0.1:32839/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occurred while 
waiting response from server at: https://127.0.0.1:32839/solr
at 
__randomizedtesting.SeedInfo.seed([7A9A7C63E623978E:380617770160B66]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:667)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:384)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplicaLegacy(DeleteReplicaTest.java:264)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-12990) High test failure rate on Java11/12 when (randomized) ssl=true clientAuth=false

2019-06-18 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867081#comment-16867081
 ] 

Hoss Man commented on SOLR-12990:
-

ah ... wait a minute...
{quote}...but Dat's commits yesterday already force TLSv1.2 ... so is this yet 
another TLSv1.3 bug in the JDK...
{quote}

...looking over Dat's 
[6d5453d508|https://gitbox.apache.org/repos/asf?p=lucene-solr.git;a=commitdiff;h=c838289;hp=6d5453d508bd9609ccaaec06c62c0adebc7496d8]
commit more closely, I realize now that it _only_ uses 
getSupportedSSLProtocols() / SUPPORTED_SSL_PROTOCOLS when checkPeerName=true 
... which means most tests (except 
TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName IIUC ... it's not a 
variable we randomize at the moment) should still be using the system default 
protocol, which is going to be TLSv1.3 on java11.

So that suggests that upgrading to 11.0.3 to get the above-mentioned JDK fixes 
might be all we need.
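
As a quick sanity check, this minimal sketch prints the protocols a default client SSLEngine enables on the running JVM (run it with the same JVM the tests use); on Java 11 the list should include TLSv1.3:
{code:java}
import java.util.Arrays;

import javax.net.ssl.SSLContext;

public class ShowDefaultTlsProtocols {
  public static void main(String[] args) throws Exception {
    // Prints the default enabled protocols, e.g. [TLSv1.3, TLSv1.2, ...] on Java 11.
    SSLContext ctx = SSLContext.getDefault();
    System.out.println(Arrays.toString(ctx.createSSLEngine().getEnabledProtocols()));
  }
}
{code}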

> High test failure rate on Java11/12 when (randomized) ssl=true 
> clientAuth=false
> ---
>
> Key: SOLR-12990
> URL: https://issues.apache.org/jira/browse/SOLR-12990
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Major
>  Labels: Java11, Java12
> Attachments: DistributedDebugComponentTest.ssl.debug.log.txt, 
> enable.ssl.debug.patch
>
>
> Ever since the policeman's Jenkins instance started running tests on Java11, 
> we've seen an abnormally high number of test failures that seem to be related 
> to randomzed ssl.
> I've been investigating these logs, and trying to reproduce and have found 
> the following observations:
> * In all the policeman jenkins logs i looked at, these SSL related failures 
> only occur when the RandomizeSSL annotation picks {{ssl=true 
> clientAuth=false}}
> ** NOTE: this doesn't mean that every test using {{ssl=true 
> clientAuth=false}} failed -- since our build system only prints test output 
> when tests fail, it's possible/probable (based on how often the value should 
> be picked) that many tests randomly use {{ssl=true clientAuth=false}} and pass
> * the failures usually showed an exception that was {{Caused by: 
> javax.net.ssl.SSLException: Received fatal alert: internal_error}} in the 
> logs.
> * when i attempted to re-produce some of these failing seeds on my own 
> machine using Java11, i could not _reliably_ reproduce these failures w/the 
> same seeds
> ** beasting could _occasionally_ reproduce the failures, at roughly 1/10 runs
> ** suggesting that system load/timing contributed to these SSL related 
> failures
> * picking one particularly trivial test (DistributedDebugComponentTest)
> ** with {{javax.net.debug=all}} enabled, i was able to see more details...
> *** notably: {{Fatal (INTERNAL_ERROR): Session has no PSK}}
> ** when I patched the test to force {{ssl=true clientAuth=true}} I was unable 
> to trigger any failures with the same seed.
> * on the jira/http2 branch I was unable to reproduce these failures at all, 
> w/o any patching
> ** similar to SOLR-12988, this may be because of bug fixes in the upgraded 
> jetty.
> 
> Filing this issue largely for tracking purpose, although we may also want to 
> use it for discussions/considerations of other backports/fixes to 7x



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7530) Wrong JSON response using Terms Component with distrib=true

2019-06-18 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867079#comment-16867079
 ] 

David Smiley commented on SOLR-7530:


FYI: 
https://issues.apache.org/jira/browse/SOLR-12959?focusedCommentId=16675811=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16675811
  Quite a few bugs over the years confusing NamedList and SimpleOrderedMap.  
Not intuitive!
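
For anyone hitting this later, a minimal sketch of the distinction (both classes are real solr-common classes; the renderings noted in the comments describe the usual JSON writer behavior, which for a plain NamedList also depends on the json.nl param):
{code:java}
import org.apache.solr.common.util.NamedList;
import org.apache.solr.common.util.SimpleOrderedMap;

public class NamedListVsSimpleOrderedMap {
  public static void main(String[] args) {
    // A plain NamedList is typically rendered by the JSON writer as a flat
    // array of alternating names and values: ["EMAIL",20060,"PDF",7051]
    NamedList<Integer> asList = new NamedList<>();
    asList.add("EMAIL", 20060);
    asList.add("PDF", 7051);

    // A SimpleOrderedMap promises map-like rendering: {"EMAIL":20060,"PDF":7051}
    SimpleOrderedMap<Integer> asMap = new SimpleOrderedMap<>();
    asMap.add("EMAIL", 20060);
    asMap.add("PDF", 7051);

    System.out.println(asList + " vs " + asMap);
  }
}
{code}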

> Wrong JSON response using Terms Component with distrib=true
> ---
>
> Key: SOLR-7530
> URL: https://issues.apache.org/jira/browse/SOLR-7530
> Project: Solr
>  Issue Type: Bug
>  Components: Response Writers, SearchComponents - other, SolrCloud
>Affects Versions: 4.9
>Reporter: Raúl Grande
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: SOLR-7530.patch, SOLR-7530.patch, SOLR-7530.patch, 
> SOLR-7530.patch, SOLR-7530.patch, SOLR-7530.patch, SOLR-7530.patch
>
>
> When using TermsComponent in SolrCloud there are differences in the JSON 
> response if parameter distrib is true or false. If distrib=true JSON is not 
> well-formed (please note the [ ] marks)
> JSON Response when distrib=false. Correct response:
> {"responseHeader":{ 
>   "status":0, 
>   "QTime":3
> }, 
> "terms":{ 
> "FileType":
> [ 
>   "EMAIL",20060, 
>   "PDF",7051, 
>   "IMAGE",5108, 
>   "OFFICE",4912, 
>   "TXT",4405, 
>   "OFFICE_EXCEL",4122, 
>   "OFFICE_WORD",2468
>   ]
> } } 
> JSON Response when distrib=true. Incorrect response:
> { 
> "responseHeader":{
>   "status":0, 
>   "QTime":94
> }, 
> "terms":{ 
> "FileType":{ 
>   "EMAIL":31923, 
>   "PDF":11545, 
>   "IMAGE":9807, 
>   "OFFICE_EXCEL":8195, 
>   "OFFICE":5147, 
>   "OFFICE_WORD":4820, 
>   "TIFF":1156, 
>   "XML":851, 
>   "HTML":821, 
>   "RTF":303
>   } 
> } } 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-8.x - Build # 247 - Still Failing

2019-06-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/247/

2 tests failed.
FAILED:  
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithInvalidPeerName

Error Message:
Unexpected exception type, expected SolrServerException but got 
java.lang.IllegalArgumentException: DTLSv1.2

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception type, expected 
SolrServerException but got java.lang.IllegalArgumentException: DTLSv1.2
at 
__randomizedtesting.SeedInfo.seed([F80304785AFE1490:AFB241C39A02EB81]:0)
at 
org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2731)
at 
org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2720)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithInvalidPeerName(TestMiniSolrCloudClusterSSL.java:207)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

Need to upgrade jenkins jdk-11 jobs >= 11.0.3 to fix JVM SSL bugs

2019-06-18 Thread Chris Hostetter



TL;DR: Uwe: can you please upgrade the jdk-11 used on the apache lucene 
jenkins jobs and your policeman jenkins jobs to 11.0.3?


---

Dat & I have (coincidentally) found ourselves both looking into some 
(long-standing) SSL weirdness that has only ever manifested on java>=11.


Details can be found in SOLR-12988 & SOLR-12990 but the long and short of 
it is that there are at least 2 known OpenJDK bugs in SSL that have been 
fixed in 11.0.3, which we are seeing evidence of in jenkins builds using 
11.0.2


https://bugs.openjdk.java.net/browse/JDK-8213202
https://bugs.openjdk.java.net/browse/JDK-8212885 / JDK-8220723

(The nature of these bugs makes it hard -- at least AFAICT -- to try to 
write any "assume" logic to auto-detect if they apply to the current JVM.)


There may in fact still be other SSL related bugs in jdk 11.0.3, but it 
will be hard to know until we at least upgrade to 11.0.3 to see what still 
fails.


Uwe / whomever has access: if you could help us out here it would be 
appreciated.




-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12990) High test failure rate on Java11/12 when (randomized) ssl=true clientAuth=false

2019-06-18 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867061#comment-16867061
 ] 

Hoss Man commented on SOLR-12990:
-

After [~caomanhdat]'s commits in SOLR-12988 yesterday, which re-enabled SSL 
randomization testing under java 11, we've started to see these symptoms pop up 
again in jenkins jobs..

[http://fucit.org/solr-jenkins-reports/job-data/thetaphi/Lucene-Solr-8.x-Linux/726/]
 [https://jenkins.thetaphi.de/view/Lucene-Solr/job/Lucene-Solr-8.x-Linux/726/]
{noformat}
-print-java-info:
[java-info] java version "11.0.2"
[java-info] OpenJDK Runtime Environment (11.0.2+9, Oracle Corporation)
[java-info] OpenJDK 64-Bit Server VM (11.0.2+9, Oracle Corporation)
[java-info] Test args: [-XX:+UseCompressedOops -XX:+UseConcMarkSweepGC]
...
  [junit4]   2> 2027070 INFO  
(SUITE-IndexSizeEstimatorTest-seed#[8F30F6AA795F0A16]-worker) [ ] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason="", ssl=0.0/0.0, value=0.0/0.0, 
clientAuth=0.0/0.0)
 ...
   [junit4]   2> 2028080 ERROR 
(OverseerThreadFactory-11245-thread-1-processing-n:127.0.0.1:36765_solr) 
[n:127.0.0.1:36765_solr ] o.a.s.c.a.c.OverseerCollectionMessageHandler 
Error from shard: https
://127.0.0.1:36765/solr
   [junit4]   2>   => org.apache.solr.client.solrj.SolrServerException: 
IOException occurred when talking to server at: https://127.0.0.1:36765/solr
   [junit4]   2>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:670)
   [junit4]   2> org.apache.solr.client.solrj.SolrServerException: IOException 
occurred when talking to server at: https://127.0.0.1:36765/solr
   [junit4]   2>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:670)
 ~[java/:?]
   [junit4]   2>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
 ~[java/:?]
   [junit4]   2>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
 ~[java/:?]
   [junit4]   2>at 
org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1274) ~[java/:?]
   [junit4]   2>at 
org.apache.solr.handler.component.HttpShardHandlerFactory$1.request(HttpShardHandlerFactory.java:176)
 ~[java/:?]
   [junit4]   2>at 
org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:199)
 ~[java/:?]
   [junit4]   2>at 
java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
   [junit4]   2>at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
   [junit4]   2>at 
java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
   [junit4]   2>at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:181)
 ~[metrics-core-4.0.5.jar:4.0.5]
   [junit4]   2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
 ~[java/:?]
   [junit4]   2>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
~[?:?]
   [junit4]   2>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
~[?:?]
   [junit4]   2>at java.lang.Thread.run(Thread.java:834) [?:?]
   [junit4]   2> Caused by: javax.net.ssl.SSLException: Received fatal alert: 
internal_error
   [junit4]   2>at 
sun.security.ssl.Alert.createSSLException(Alert.java:129) ~[?:?]
   [junit4]   2>at 
sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]
   [junit4]   2>at 
sun.security.ssl.TransportContext.fatal(TransportContext.java:308) ~[?:?]
   [junit4]   2>at 
sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:279) ~[?:?]
   [junit4]   2>at 
sun.security.ssl.TransportContext.dispatch(TransportContext.java:181) ~[?:?]
   [junit4]   2>at 
sun.security.ssl.SSLTransport.decode(SSLTransport.java:164) ~[?:?]
   [junit4]   2>at 
sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1152) ~[?:?]
   [junit4]   2>at 
sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1063) 
~[?:?]
   [junit4]   2>at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:402) ~[?:?]
   [junit4]   2>at 
org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:396)
 ~[httpclient-4.5.6.jar:4.5.6]
   [junit4]   2>at 
org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:355)
 ~[httpclient-4.5.6.jar:4.5.6]
   [junit4]   2>at 
org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
 ~[httpclient-4.5.6.jar:4.5.6]
   [junit4]   2>at 

[jira] [Updated] (LUCENE-8858) Migrate Lucene's Moin wiki to Confluence

2019-06-18 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/LUCENE-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated LUCENE-8858:

Component/s: general/website

> Migrate Lucene's Moin wiki to Confluence
> 
>
> Key: LUCENE-8858
> URL: https://issues.apache.org/jira/browse/LUCENE-8858
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/website
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>
> We have a deadline end of June to migrate Moin wiki to Confluence.
> This Jira will track migration of Lucene's 
> https://wiki.apache.org/lucene-java/ over to 
> https://cwiki.apache.org/confluence/display/LUCENE
> The old Confluence space will be overwritten as it is not used.
> After migration we'll clean up and weed out what is not needed, and then 
> start moving developer-centric content into the main git repo (which will be 
> covered in other JIRAs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8858) Migrate Lucene's Moin wiki to Confluence

2019-06-18 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/LUCENE-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved LUCENE-8858.
-
Resolution: Fixed

> Migrate Lucene's Moin wiki to Confluence
> 
>
> Key: LUCENE-8858
> URL: https://issues.apache.org/jira/browse/LUCENE-8858
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>
> We have a deadline end of June to migrate Moin wiki to Confluence.
> This Jira will track migration of Lucene's 
> https://wiki.apache.org/lucene-java/ over to 
> https://cwiki.apache.org/confluence/display/LUCENE
> The old Confluence space will be overwritten as it is not used.
> After migration we'll clean up and weed out what is not needed, and then 
> start moving developer-centric content into the main git repo (which will be 
> covered in other JIRAs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8858) Migrate Lucene's Moin wiki to Confluence

2019-06-18 Thread JIRA


[ 
https://issues.apache.org/jira/browse/LUCENE-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867026#comment-16867026
 ] 

Jan Høydahl commented on LUCENE-8858:
-

The migration of the Lucene Moin wiki is complete. The new Cwiki home page is

[https://cwiki.apache.org/confluence/display/LUCENE/Home]

You'll find all the Old Moin pages under the "Old Moin wiki" sub-page.

There are 232 pages, and I think the first candidates for deletion are all the 
"personal home pages" that were hosted in Moin, such as 
[https://cwiki.apache.org/confluence/display/LUCENE/Ahahi], 
[https://cwiki.apache.org/confluence/display/LUCENE/AlexAlishevskikh] and 
[https://cwiki.apache.org/confluence/display/LUCENE/AndyNeill] etc. Great if 
someone on the PMC would care to delete all of these as a start :)

I'll now resolve this Jira. Feel free to create new JIRAs for concrete followup 
actions such as moving content X to git etc.

> Migrate Lucene's Moin wiki to Confluence
> 
>
> Key: LUCENE-8858
> URL: https://issues.apache.org/jira/browse/LUCENE-8858
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>
> We have a deadline end of June to migrate Moin wiki to Confluence.
> This Jira will track migration of Lucene's 
> https://wiki.apache.org/lucene-java/ over to 
> https://cwiki.apache.org/confluence/display/LUCENE
> The old Confluence space will be overwritten as it is not used.
> After migration we'll clean up and weed out what is not needed, and then 
> start moving developer-centric content into the main git repo (which will be 
> covered in other JIRAs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8781) Explore FST direct array arc encoding

2019-06-18 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867022#comment-16867022
 ] 

ASF subversion and git services commented on LUCENE-8781:
-

Commit badcc4e6c723678cda9c7990a8d2c4bf1e556f42 in lucene-solr's branch 
refs/heads/branch_8x from Michael Sokolov
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=badcc4e ]

LUCENE-8781: add FST array-with-gap addressing to Util.readCeilArc


> Explore FST direct array arc encoding 
> --
>
> Key: LUCENE-8781
> URL: https://issues.apache.org/jira/browse/LUCENE-8781
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mike Sokolov
>Assignee: Dawid Weiss
>Priority: Major
> Fix For: master (9.0), 8.2
>
> Attachments: FST-2-4.png, FST-6-9.png, FST-size.png
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> This issue is for exploring an alternate FST encoding of Arcs as full-sized 
> arrays so Arcs are addressed directly by label, avoiding binary search that 
> we use today for arrays of Arcs. PR: 
> https://github.com/apache/lucene-solr/pull/657
> h3. Testing
> ant test passes. I added some unit tests that were helpful in uncovering bugs 
> while
> implementing which are more difficult to chase down when uncovered by the 
> randomized testing we already do. They don't really test anything new; 
> they're just more focused.
> I'm not sure why, but ant precommit failed for me with:
> {noformat}
>  ...lucene-solr/solr/common-build.xml:536: Check for forbidden API calls 
> failed while scanning class 
> 'org.apache.solr.metrics.reporters.SolrGangliaReporterTest' 
> (SolrGangliaReporterTest.java): java.lang.ClassNotFoundException: 
> info.ganglia.gmetric4j.gmetric.GMetric (while looking up details about 
> referenced class 'info.ganglia.gmetric4j.gmetric.GMetric')
> {noformat}
> I also got Test2BFST running (it was originally timing out due to excessive 
> calls to ramBytesUsage(), which seems to have gotten slow), and it passed; 
> that change isn't include here.
> h4. Micro-benchmark
> I timed lookups in FST via FSTEnum.seekExact in a unit test under various 
> conditions. 
> h5. English words
> A test of looking up existing words in a dictionary of ~17 English words 
> shows improvements; the numbers listed are % change in FST size, time to look 
> up (FSTEnum.seekExact) words that are in the dict, and time to look up random 
> strings that are not in the dict. The comparison is against the current 
> codebase with the optimization disabled. A separate comparison showed no 
> significant change of the baseline (no opto applied) vs the current master 
> FST impl with no code changes applied.
> ||  load=2||   load=4 ||  load=16 ||
> | +4, -6, -7  | +18, -11, -8 | +22, -11.5, -7 |
> The "load factor" used for those measurements controls when direct array arc 
> encoding is used;
> namely when the number of outgoing arcs was > load * (max label - min label).
> h5. sequential and random terms
> The same test, with terms being a sequence of integers as strings shows a 
> larger improvement, around 20% (load=4). This is presumably the best case for 
> this delta, where every Arc is encoded as a direct lookup.
> When random lowercase ASCII strings are used, a smaller improvement of around 
> 4% is seen.
> h4. luceneutil
> Testing w/luceneutil (wikimediumall) we see improvements mostly in the 
> PKLookup case. Other results seem noisy, with perhaps a small improvement in 
> some of the queries.
> {noformat}
> TaskQPS base  StdDevQPS opto  StdDev  
>   Pct diff
>   OrHighHigh6.93  (3.0%)6.89  (3.1%)   
> -0.5% (  -6% -5%)
>OrHighMed   45.15  (3.9%)   44.92  (3.5%)   
> -0.5% (  -7% -7%)
> Wildcard8.72  (4.7%)8.69  (4.6%)   
> -0.4% (  -9% -9%)
>   AndHighLow  274.11  (2.6%)  273.58  (3.1%)   
> -0.2% (  -5% -5%)
>OrHighLow  241.41  (1.9%)  241.11  (3.5%)   
> -0.1% (  -5% -5%)
>   AndHighMed   52.23  (4.1%)   52.41  (5.3%)
> 0.3% (  -8% -   10%)
>  MedTerm 1026.24  (3.1%) 1030.52  (4.3%)
> 0.4% (  -6% -8%)
> HighTerm .10  (3.4%) 1116.70  (4.0%)
> 0.5% (  -6% -8%)
>HighTermDayOfYearSort   14.59  (8.2%)   14.73  (9.3%)
> 1.0% ( -15% -   20%)
>  AndHighHigh   13.45  (6.2%)   13.61  (4.4%)
> 1.2% (  -8% -   12%)
>HighTermMonthSort   63.09 (12.5%)   64.13 (10.9%)
> 1.6% ( -19% -   28%)
>  LowTerm 1338.94  (3.3%) 

[jira] [Commented] (LUCENE-8781) Explore FST direct array arc encoding

2019-06-18 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867019#comment-16867019
 ] 

ASF subversion and git services commented on LUCENE-8781:
-

Commit 2e49f13aa1ec5afbee0afb61e797a6acf9ad07e3 in lucene-solr's branch 
refs/heads/master from Michael Sokolov
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2e49f13 ]

LUCENE-8781: add FST array-with-gap addressing to Util.readCeilArc


> Explore FST direct array arc encoding 
> --
>
> Key: LUCENE-8781
> URL: https://issues.apache.org/jira/browse/LUCENE-8781
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mike Sokolov
>Assignee: Dawid Weiss
>Priority: Major
> Fix For: master (9.0), 8.2
>
> Attachments: FST-2-4.png, FST-6-9.png, FST-size.png
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> This issue is for exploring an alternate FST encoding of Arcs as full-sized 
> arrays so Arcs are addressed directly by label, avoiding binary search that 
> we use today for arrays of Arcs. PR: 
> https://github.com/apache/lucene-solr/pull/657
> h3. Testing
> ant test passes. I added some unit tests that were helpful in uncovering bugs 
> while
> implementing which are more difficult to chase down when uncovered by the 
> randomized testing we already do. They don't really test anything new; 
> they're just more focused.
> I'm not sure why, but ant precommit failed for me with:
> {noformat}
>  ...lucene-solr/solr/common-build.xml:536: Check for forbidden API calls 
> failed while scanning class 
> 'org.apache.solr.metrics.reporters.SolrGangliaReporterTest' 
> (SolrGangliaReporterTest.java): java.lang.ClassNotFoundException: 
> info.ganglia.gmetric4j.gmetric.GMetric (while looking up details about 
> referenced class 'info.ganglia.gmetric4j.gmetric.GMetric')
> {noformat}
> I also got Test2BFST running (it was originally timing out due to excessive 
> calls to ramBytesUsage(), which seems to have gotten slow), and it passed; 
> that change isn't included here.
> h4. Micro-benchmark
> I timed lookups in FST via FSTEnum.seekExact in a unit test under various 
> conditions. 
> h5. English words
> A test of looking up existing words in a dictionary of ~17 English words 
> shows improvements; the numbers listed are % change in FST size, time to look 
> up (FSTEnum.seekExact) words that are in the dict, and time to look up random 
> strings that are not in the dict. The comparison is against the current 
> codebase with the optimization disabled. A separate comparison showed no 
> significant change of the baseline (no opto applied) vs the current master 
> FST impl with no code changes applied.
> ||  load=2||   load=4 ||  load=16 ||
> | +4, -6, -7  | +18, -11, -8 | +22, -11.5, -7 |
> The "load factor" used for those measurements controls when direct array arc 
> encoding is used;
> namely when the number of outgoing arcs was > load * (max label - min label).
> h5. sequential and random terms
> The same test, with terms being a sequence of integers as strings shows a 
> larger improvement, around 20% (load=4). This is presumably the best case for 
> this delta, where every Arc is encoded as a direct lookup.
> When random lowercase ASCII strings are used, a smaller improvement of around 
> 4% is seen.
> h4. luceneutil
> Testing w/luceneutil (wikimediumall) we see improvements mostly in the 
> PKLookup case. Other results seem noisy, with perhaps a small improvement in 
> some of the queries.
> {noformat}
> TaskQPS base  StdDevQPS opto  StdDev  
>   Pct diff
>   OrHighHigh6.93  (3.0%)6.89  (3.1%)   
> -0.5% (  -6% -5%)
>OrHighMed   45.15  (3.9%)   44.92  (3.5%)   
> -0.5% (  -7% -7%)
> Wildcard8.72  (4.7%)8.69  (4.6%)   
> -0.4% (  -9% -9%)
>   AndHighLow  274.11  (2.6%)  273.58  (3.1%)   
> -0.2% (  -5% -5%)
>OrHighLow  241.41  (1.9%)  241.11  (3.5%)   
> -0.1% (  -5% -5%)
>   AndHighMed   52.23  (4.1%)   52.41  (5.3%)
> 0.3% (  -8% -   10%)
>  MedTerm 1026.24  (3.1%) 1030.52  (4.3%)
> 0.4% (  -6% -8%)
> HighTerm .10  (3.4%) 1116.70  (4.0%)
> 0.5% (  -6% -8%)
>HighTermDayOfYearSort   14.59  (8.2%)   14.73  (9.3%)
> 1.0% ( -15% -   20%)
>  AndHighHigh   13.45  (6.2%)   13.61  (4.4%)
> 1.2% (  -8% -   12%)
>HighTermMonthSort   63.09 (12.5%)   64.13 (10.9%)
> 1.6% ( -19% -   28%)
>  LowTerm 1338.94  (3.3%) 

[jira] [Commented] (SOLR-13548) Migrate Solr's Moin wiki to Confluence

2019-06-18 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867001#comment-16867001
 ] 

Jan Høydahl commented on SOLR-13548:


Ok, thanks

> Migrate Solr's Moin wiki to Confluence
> --
>
> Key: SOLR-13548
> URL: https://issues.apache.org/jira/browse/SOLR-13548
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: website
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>
> We have a deadline end of June to migrate Moin wiki to Confluence.
> This Jira will track migration of Solr's [https://wiki.apache.org/solr/] over 
> to [https://cwiki.apache.org/confluence/display/SOLR]
> The old Confluence space currently hosts the old Reference Guide for version 
> 6.5 before we moved to asciidoc. This will be overwritten.
> Steps:
>  # Delete all pages in current SOLR space
>  ## Q: Can we do a bulk delete ourselves or do we need to ask INFRA?
>  # The rules in {{.htaccess}} which redirects to the 6.6 guide will remain as 
> is
>  # Run the migration tool at 
> [https://selfserve.apache.org|https://selfserve.apache.org/]
>  # Add a clearly visible link from front page to the ref guide for people 
> landing there for docs
> After migration we'll clean up and weed out what is not needed, and then 
> start moving developer-centric content into the main git repo (which will be 
> covered in other JIRAs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7530) Wrong JSON response using Terms Component with distrib=true

2019-06-18 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866996#comment-16866996
 ] 

Lucene/Solr QA commented on SOLR-7530:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m  5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m  5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate ref guide {color} | 
{color:green}  0m  5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:black}{color} | {color:black} {color} | {color:black}  1m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-7530 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12972112/SOLR-7530.patch |
| Optional Tests |  ratsources  validatesourcepatterns  validaterefguide  |
| uname | Linux lucene1-us-west 4.4.0-137-generic #163~14.04.1-Ubuntu SMP Mon 
Sep 24 17:14:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 2e468ab |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| modules | C: solr/solr-ref-guide U: solr/solr-ref-guide |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/444/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Wrong JSON response using Terms Component with distrib=true
> ---
>
> Key: SOLR-7530
> URL: https://issues.apache.org/jira/browse/SOLR-7530
> Project: Solr
>  Issue Type: Bug
>  Components: Response Writers, SearchComponents - other, SolrCloud
>Affects Versions: 4.9
>Reporter: Raúl Grande
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: SOLR-7530.patch, SOLR-7530.patch, SOLR-7530.patch, 
> SOLR-7530.patch, SOLR-7530.patch, SOLR-7530.patch, SOLR-7530.patch
>
>
> When using TermsComponent in SolrCloud there are differences in the JSON 
> response if parameter distrib is true or false. If distrib=true JSON is not 
> well formed (please note at the [ ] marks)
> JSON Response when distrib=false. Correct response:
> {"responseHeader":{ 
>   "status":0, 
>   "QTime":3
> }, 
> "terms":{ 
> "FileType":
> [ 
>   "EMAIL",20060, 
>   "PDF",7051, 
>   "IMAGE",5108, 
>   "OFFICE",4912, 
>   "TXT",4405, 
>   "OFFICE_EXCEL",4122, 
>   "OFFICE_WORD",2468
>   ]
> } } 
> JSON Response when distrib=true. Incorrect response:
> { 
> "responseHeader":{
>   "status":0, 
>   "QTime":94
> }, 
> "terms":{ 
> "FileType":{ 
>   "EMAIL":31923, 
>   "PDF":11545, 
>   "IMAGE":9807, 
>   "OFFICE_EXCEL":8195, 
>   "OFFICE":5147, 
>   "OFFICE_WORD":4820, 
>   "TIFF":1156, 
>   "XML":851, 
>   "HTML":821, 
>   "RTF":303
>   } 
> } } 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13403) Terms component fails for DatePointField

2019-06-18 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866993#comment-16866993
 ] 

Lucene/Solr QA commented on SOLR-13403:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
4s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  2m  2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  2m  2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  2m  2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 33s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.TestUtilizeNode |
|   | solr.schema.TestBulkSchemaConcurrent |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-13403 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12972109/SOLR-13403.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.4.0-137-generic #163~14.04.1-Ubuntu SMP Mon 
Sep 24 17:14:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 2e468ab |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| Default Java | LTS |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/443/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/443/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/443/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Terms component fails for DatePointField
> 
>
> Key: SOLR-13403
> URL: https://issues.apache.org/jira/browse/SOLR-13403
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-13403.patch, SOLR-13403.patch
>
>
> Getting terms for PointFields except DatePointField. For DatePointField, the 
> request fails NPE



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-8.x-Linux (64bit/jdk-13-ea+18) - Build # 727 - Still Unstable!

2019-06-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/727/
Java: 64bit/jdk-13-ea+18 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

8 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TestPolicyCloud.testCreateCollectionAddReplica

Error Message:
Timeout occurred while waiting response from server at: 
https://127.0.0.1:35847/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occurred while 
waiting response from server at: https://127.0.0.1:35847/solr
at 
__randomizedtesting.SeedInfo.seed([A4613D71479501E0:2441585F56D6E946]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:667)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.deleteAllCollections(MiniSolrCloudCluster.java:547)
at 
org.apache.solr.cloud.autoscaling.TestPolicyCloud.after(TestPolicyCloud.java:87)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-12988) Avoid using TLSv1.3 for HttpClient

2019-06-18 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866979#comment-16866979
 ] 

Hoss Man commented on SOLR-12988:
-

NOTE: [~thetaphi]'s jenkins box is still building w/ "64bit/jdk-11.0.2" and the 
apache jenkins boxes are still using "11.0.1+13-LTS" ... so we'll either need 
to get those updated before we can revert the TLSv1.2 forcing code or disable  
TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName on java < 11.0.3 some 
other way.
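
For the "some other way" option, a rough sketch of a per-test guard (the version strings and message are illustrative, not a patch):
{noformat}
// Hypothetical check at the top of testSslWithCheckPeerName(): skip on JVMs
// where JDK-8212885 breaks TLSv1.3 peer-name validation (fixed in 11.0.3).
String v = System.getProperty("java.version");
boolean buggy = v.equals("11") || v.startsWith("11.0.1") || v.startsWith("11.0.2");
org.junit.Assume.assumeFalse(
    "TLSv1.3 peer-name checks are broken on this JVM (JDK-8212885)", buggy);
{noformat}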

> Avoid using TLSv1.3 for HttpClient
> --
>
> Key: SOLR-12988
> URL: https://issues.apache.org/jira/browse/SOLR-12988
> Project: Solr
>  Issue Type: Test
>Reporter: Hoss Man
>Assignee: Cao Manh Dat
>Priority: Major
>  Labels: Java11, Java12
> Attachments: SOLR-13413.patch
>
>
> HTTPCLIENT-1967 indicates that HttpClient can't be used properly with 
> TLSv1.3. It caused some test failures below, therefore we should enforce 
> HttpClient to use TLSv1.2 or lower versions.
> TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName seems to fail 100% of 
> the time when run with java11 (or java12), regardless of seed, on both master 
> & 7x.
> The nature of the problem and the way our http stack works suggests it *may* 
> ultimately be a jetty bug (perhaps related to [jetty 
> issue#2711|https://github.com/eclipse/jetty.project/issues/2711]?)
> *HOWEVER* ... as far as i can tell, whatever the root cause is, seems to have 
> been fixed on the {{jira/http2}} branch (as of 
> 52bc163dc1804c31af09c1fba99647005da415ad) which should hopefully be getting 
> merged to master soon.
> Filing this issue largely for tracking purpose, although we may also want to 
> use it for discussions/considerations of other backports/fixes to 7x



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-8.x - Build # 246 - Still Failing

2019-06-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/246/

3 tests failed.
FAILED:  
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName

Error Message:
java.lang.IllegalArgumentException: DTLSv1.2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
java.lang.IllegalArgumentException: DTLSv1.2
at 
__randomizedtesting.SeedInfo.seed([60654235E0C594EA:D3E195C55B940C3B]:0)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:408)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkCreateCollection(TestMiniSolrCloudClusterSSL.java:273)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithCollectionCreations(TestMiniSolrCloudClusterSSL.java:243)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.checkClusterWithNodeReplacement(TestMiniSolrCloudClusterSSL.java:164)
at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName(TestMiniSolrCloudClusterSSL.java:146)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[GitHub] [lucene-solr] dsmiley commented on a change in pull request #633: LUCENE-8753 UniformSplit PostingsFormat

2019-06-18 Thread GitBox
dsmiley commented on a change in pull request #633: LUCENE-8753 UniformSplit 
PostingsFormat
URL: https://github.com/apache/lucene-solr/pull/633#discussion_r294980717
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/codecs/lucene80/Lucene80Codec.java
 ##
 @@ -91,7 +91,11 @@ public Lucene80Codec() {
* flushed/merged segments.
*/
   public Lucene80Codec(Mode mode) {
-super("Lucene80");
+this("Lucene80", mode);
+  }
+  
+  protected Lucene80Codec(String name, Mode mode) {
+super(name);
 
 Review comment:
   > Maybe it would be cleaner for this API to have a public interface for 
IntBlockTermState, or make it public since its fields are accessed directly.
   
   +1!
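
   To illustrate why the protected constructor helps, a hypothetical subclass (the name is made up, and a real codec would still need SPI registration to be loadable by name):
```java
import org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.Mode;
import org.apache.lucene.codecs.lucene80.Lucene80Codec;

// Sketch only: a codec that reuses the Lucene80 formats under its own name,
// e.g. to swap in the UniformSplit postings format for selected fields.
public class MyUniformSplitCodec extends Lucene80Codec {
  public MyUniformSplitCodec() {
    super("MyUniformSplitCodec", Mode.BEST_SPEED);
  }
  // override getPostingsFormatForField(...) to plug in the custom format
}
```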


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] msokolov commented on issue #721: LUCENE-8781: add FST array-with-gap addressing to Util.readCeilArc

2019-06-18 Thread GitBox
msokolov commented on issue #721: LUCENE-8781: add FST array-with-gap 
addressing to Util.readCeilArc
URL: https://github.com/apache/lucene-solr/pull/721#issuecomment-503264446
 
 
   Yes, naturally, since that was what this was intended to fix. I thought I 
had said so, but I see there's no such comment here. Anyway I had a seed that 
failed before and succeeded after. In fact you didn't need any special seed, 
since it always failed before. One thing that confuses me is that the 
suggesters also seem to use this method, yet they never failed in my testing. 
I'm running again just to be sure, then I'll push.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8864) Add Query Memory Estimation Ability in QueryVisitor

2019-06-18 Thread Atri Sharma (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866933#comment-16866933
 ] 

Atri Sharma commented on LUCENE-8864:
-

Right, the purpose of this Jira was twofold:

 

1) To throw out thoughts about making memory accounting a first class citizen 
within QueryVisitor. I think it would be good if we added a method which 
returned the overall size of the underlying query. This fits in nicely with 
QueryVisitor's model since queries can be nested, so it is good to get the 
"deep" memory usage of the parent query. As you said, the new method could 
return the Accountable's estimate or shallow size if Accountable is not 
supported.

 

2) Borrow ideas from QueryVisitor design to see if we can improve Accountable 
itself. While this is orthogonal and I have not really thought through every 
corner case, my instinct says that there might be opportunities to improve 
Accountable's APIs to be more recursive in nature. For eg, there are a ton of 
instanceof checks present today, for each Query type. Should we think about 
delegating some of that calculation to a visitor type model which localizes the 
per query calculation to the query's scope?

> Add Query Memory Estimation Ability in QueryVisitor
> ---
>
> Key: LUCENE-8864
> URL: https://issues.apache.org/jira/browse/LUCENE-8864
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Atri Sharma
>Priority: Major
>
> In LUCENE-8855, there is a discussion around adding memory accounting 
> capabilities to QueryVisitor to allow estimation of memory consumption by 
> queries.'
> This Jira tracks the effort



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 389 - Unstable

2019-06-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/389/

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestRebalanceLeaders

Error Message:
Error from server at https://127.0.0.1:34455/solr: Underlying core creation 
failed while creating collection: TestColl

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:34455/solr: Underlying core creation failed 
while creating collection: TestColl
at __randomizedtesting.SeedInfo.seed([DD477F7D0F4C9D01]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:656)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.TestRebalanceLeaders.setupCluster(TestRebalanceLeaders.java:72)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)




Build Log:
[...truncated 13283 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestRebalanceLeaders
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/build/solr-core/test/J0/temp/solr.cloud.TestRebalanceLeaders_DD477F7D0F4C9D01-001/init-core-data-001
   [junit4]   2> 720176 WARN  
(SUITE-TestRebalanceLeaders-seed#[DD477F7D0F4C9D01]-worker) [ ] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=45 numCloses=45
   [junit4]   2> 720177 INFO  
(SUITE-TestRebalanceLeaders-seed#[DD477F7D0F4C9D01]-worker) [ ] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 720178 INFO  
(SUITE-TestRebalanceLeaders-seed#[DD477F7D0F4C9D01]-worker) [ ] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 

[jira] [Commented] (SOLR-13548) Migrate Solr's Moin wiki to Confluence

2019-06-18 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866929#comment-16866929
 ] 

Cassandra Targett commented on SOLR-13548:
--

I will try to get to it before I leave on a short trip on Thursday. If I can't 
do it tomorrow sometime I will let you know.

> Migrate Solr's Moin wiki to Confluence
> --
>
> Key: SOLR-13548
> URL: https://issues.apache.org/jira/browse/SOLR-13548
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: website
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>
> We have a deadline end of June to migrate Moin wiki to Confluence.
> This Jira will track migration of Solr's [https://wiki.apache.org/solr/] over 
> to [https://cwiki.apache.org/confluence/display/SOLR]
> The old Confluence space currently hosts the old Reference Guide for version 
> 6.5 before we moved to asciidoc. This will be overwritten.
> Steps:
>  # Delete all pages in current SOLR space
>  ## Q: Can we do a bulk delete ourselves or do we need to ask INFRA?
>  # The rules in {{.htaccess}} which redirects to the 6.6 guide will remain as 
> is
>  # Run the migration tool at 
> [https://selfserve.apache.org|https://selfserve.apache.org/]
>  # Add a clearly visible link from front page to the ref guide for people 
> landing there for docs
> After migration we'll clean up and weed out what is not needed, and then 
> start moving developer-centric content into the main git repo (which will be 
> covered in other JIRAs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] msokolov commented on issue #728: LUCENE-8867: Store point with cardinality for low cardinality leaves

2019-06-18 Thread GitBox
msokolov commented on issue #728: LUCENE-8867: Store point with cardinality for 
low cardinality leaves
URL: https://github.com/apache/lucene-solr/pull/728#issuecomment-503261067
 
 
   We discussed this today - thanks for the change! I'm not familiar with this 
area of the code, and not really qualified to assess it, but I did notice there 
are no tests with the patch. I guess we are supporting existing functionality, 
but are we sure that the existing tests are covering the different 
cardinalities?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8863) Improve handling of edge cases in Kuromoji's DIctionaryBuilder

2019-06-18 Thread Mike Sokolov (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866918#comment-16866918
 ] 

Mike Sokolov commented on LUCENE-8863:
--

 I'll push this in a couple of days if there are no other concerns. @mocobeta 
I think the linked PR is already taking a step towards LUCENE-8616 since it 
allows loading an external system dictionary. Not sure if you saw it, but if 
you have a moment maybe you could check if it is along the lines you were 
planning?

> Improve handling of edge cases in Kuromoji's DIctionaryBuilder
> --
>
> Key: LUCENE-8863
> URL: https://issues.apache.org/jira/browse/LUCENE-8863
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mike Sokolov
>Assignee: Mike Sokolov
>Priority: Major
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> While building a custom Kuromoji system dictionary, I discovered a few issues.
> First, the dictionary encoding has room for 13-bit (left and right) ids, but 
> really only supports 12 bits since this was all that was needed for the 
> IPADIC dictionary that ships with Kuromoji. The good news is we can easily 
> add support by fixing the bit-twiddling math.
> Second, the dictionary builder has a number of assertions that help uncover 
> problems in the input (like these overlarge ids), but the assertions aren't 
> enabled by default, so an unsuspecting new user doesn't get any benefit from 
> them, so we should upgrade to "real" exceptions.
> Finally, we want to handle the case of empty base forms differently. Kuromoji 
> does stemming by substituting a base form for a word when there is a base 
> form in the dictionary. Missing base forms are expected to be supplied as 
> {{*}}, but if a dictionary provides an empty string base form, we would end 
> up stripping that token completely. Since there is no possible meaning for an 
> empty base form (and the dictionary builder already treats {{*}} and empty 
> strings as equivalent in a number of other cases), I think we should simply 
> ignore empty base forms (rather than replacing words with empty strings when 
> tokenizing!)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] msokolov commented on a change in pull request #722: LUCENE-8863: handle some edge cases in Kuromoji DictionaryBuilder, en…

2019-06-18 Thread GitBox
msokolov commented on a change in pull request #722: LUCENE-8863: handle some 
edge cases in Kuromoji DictionaryBuilder, en…
URL: https://github.com/apache/lucene-solr/pull/722#discussion_r294967154
 
 

 ##
 File path: lucene/analysis/kuromoji/build.xml
 ##
 @@ -136,8 +136,8 @@
  
   
 
-  
-
+  
+
 
 Review comment:
   yeah, this felt kind of like a dusty corner where things would easily get 
lost. I'm just learning about the tools and modules concepts in lucene builds. 
It seems fine to have the builder available in the analysis jar.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8866) Remove ICU dependency of kuromoji tools/test-tools

2019-06-18 Thread Mike Sokolov (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866916#comment-16866916
 ] 

Mike Sokolov commented on LUCENE-8866:
--

+1. If people have more precise normalization requirements, they can encode them 
in their dictionary – I think we can presume this is not noisy user data and that 
it should already have been cleaned.
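
For reference, the JDK-only path is tiny; a minimal sketch (class and method names are illustrative, not the actual tool code):
{noformat}
import java.text.Normalizer;

final class NfkcDemo {
  // NFKC-normalize a dictionary entry at build time without pulling in ICU.
  static String nfkc(String entry) {
    return Normalizer.normalize(entry, Normalizer.Form.NFKC);
  }

  public static void main(String[] args) {
    // full-width digits and the "ﬁ" ligature fold to plain ASCII
    System.out.println(nfkc("ﬁ１２３")); // prints "fi123"
  }
}
{noformat}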

> Remove ICU dependency of kuromoji tools/test-tools
> --
>
> Key: LUCENE-8866
> URL: https://issues.apache.org/jira/browse/LUCENE-8866
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
>Priority: Major
> Attachments: LUCENE-8866.patch
>
>
> The tooling stuff has an off-by-default option to normalize entries, 
> currently using the ICU api.
> But I think since its off-by-default, and just doing NFKC normalization at 
> dictionary-build-time, its a better tradeoff to use the JDK here?
> I would rather remove the ICU dependency for the tooling and look at 
> simplifying the build to have less modules (e.g. investigate moving the 
> tooling and tests into src/java and src/tools, so that [~msoko...@gmail.com] 
> new tests in LUCENE-8863 are running by default, dictionary tool is shipped 
> as a commandline tool in the JAR, etc)
> "ant regenerate" should be enough to prevent any chicken-and-eggs in the 
> dictionary construction code, so I don't think we need separate modules to 
> enforce it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] jtibshirani edited a comment on issue #715: LUCENE-7714 Add a range query that takes advantage of index sorting.

2019-06-18 Thread GitBox
jtibshirani edited a comment on issue #715: LUCENE-7714 Add a range query that 
takes advantage of index sorting.
URL: https://github.com/apache/lucene-solr/pull/715#issuecomment-501436447
 
 
   Thanks @atris and @jimczi for taking a look!
   
   > However I wonder if we should expose this query as is or if we should use 
it only internally in the `IndexOrDocValuesQuery` ?
   
   I agree that the query is not so helpful on its own. I was unsure about 
integrating it into `IndexOrDocValuesQuery`, since that deals with queries 
generally and this query is specifically for `long` ranges. Would 
`IndexOrDocValuesQuery` optionally accept a third query, and only run it if the 
segment is sorted and also contains `NumericDocValues`? That seemed a bit 
specific to add to a fairly general query type. Another idea is for 
`IndexSortDocValuesRangeQuery` to accept a fallback range query, and delegate 
to it if the necessary conditions aren't met?
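
   For readers following along, this is roughly how the two-query composition is built today (field name and types are illustrative); the open question above is whether an index-sort-aware range query becomes a third leg of this composition or instead wraps a fallback query itself:
```java
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.search.IndexOrDocValuesQuery;
import org.apache.lucene.search.Query;

class LongRangeQueries {
  // Points query for selective ranges, doc-values query otherwise;
  // IndexOrDocValuesQuery picks the cheaper one per segment.
  static Query newLongRange(String field, long lower, long upper) {
    Query indexQuery = LongPoint.newRangeQuery(field, lower, upper);
    Query dvQuery = NumericDocValuesField.newSlowRangeQuery(field, lower, upper);
    return new IndexOrDocValuesQuery(indexQuery, dvQuery);
  }
}
```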


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8858) Migrate Lucene's Moin wiki to Confluence

2019-06-18 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866913#comment-16866913
 ] 

Hoss Man commented on LUCENE-8858:
--

it looks like i do (still) have special confluence karma ... and now that it's 
ldap linked i was able to grant the "lucene-pmc" group full admin rights – so 
Jan you should now have access to the space permission...

[https://cwiki.apache.org/confluence/spaces/spacepermissions.action?key=LUCENE]

...which i'm assuming means you can use that migration tool?

if you can confirm that you do have all the space admin permissions needed, can 
you then please remove the special perms i have (same screen) ... there's no 
reason i should have special perms for that space : )

> Migrate Lucene's Moin wiki to Confluence
> 
>
> Key: LUCENE-8858
> URL: https://issues.apache.org/jira/browse/LUCENE-8858
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>
> We have a deadline end of June to migrate Moin wiki to Confluence.
> This Jira will track migration of Lucene's 
> https://wiki.apache.org/lucene-java/ over to 
> https://cwiki.apache.org/confluence/display/LUCENE
> The old Confluence space will be overwritten as it is not used.
> After migration we'll clean up and weed out what is not needed, and then 
> start moving developer-centric content into the main git repo (which will be 
> covered in other JIRAs)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12988) Avoid using TLSv1.3 for HttpClient

2019-06-18 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866905#comment-16866905
 ] 

Hoss Man commented on SOLR-12988:
-

{quote}But that is the reason why you saw the nature of the failure changed
{quote}
...ah , interesting yes.

FWIW: i realize now i may not have been very clear about the point i was trying 
to make in that part of my comment: which was that we shouldn't lose sight of 
the fact/appearance of 2 similar but possibly distinct bugs. but it looks like 
fundamentally it's the same bug (JDK-8212885) and only the _manifestation_ 
changed due to where/how we use Jetty client vs HttpClient ... which is good to 
know.
{quote}If possible I will try to do that enforcement for Java 11.0.2 or lower 
versions. Does that makes sense Hoss Man?
{quote}
I think a better solution is that if they are running java <= 11.0.2 then we 
should *WARN* that some SSL features are known to be problematic with that 
version of java ... and then let it fail if it's going to fail.

I don't think we should force TLSv1.2 based purely on the version of java – 
because IIUC: the problem isn't that TLSv1.3 doesn't work at all, on 11.0.2, 
it's that if TLSv1.3 is used in 11.0.2 then checkPeerName doesn't work – which 
means if you blanket force TLSv1.2 based on the java version you'll be silently 
downgrading the security for people who may not be using checkPeerName=true and 
may be completely unaffected.

We could potentially try to make the detection very sophisticated, and 
dependent on checkPeerName, but that could still lead to weird situations 
if/when people use non-OpenJDK based JVMs, or potentially use their own builds 
of Open-JDK that they've patched, etc...

Fundamentally i think it's a very bad idea to have Solr's behavior radically 
change based on introspection of the JVM details – it makes it very hard to 
test/reproduce problems. I think it makes a lot more sense for solr to simply 
log "Your JVM is known to have some problems, see URL for details" and let the 
failures happen if they are going to happen.

(on the Junit tests side, having assumes around JVM version is fine – because 
even then it's not a "silent" behavior change, it's an explicitly "test ignored 
because XYZ")

If there was a way to programmatically inspect the SSLEngine to say "does this 
impl of TLSv1.3 manifest this problem" and change behavior based on that, it 
would be one thing – but just looking at the java version constants seems 
dangerous for this type of situation
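
The warn-and-continue approach could be as small as something like this at node startup (logger, version strings and wording are all illustrative, not a patch):
{noformat}
// Hypothetical startup check: never change TLS behavior, only warn.
// "log" is assumed to be the class's SLF4J logger.
String v = System.getProperty("java.version");
if (v.equals("11") || v.startsWith("11.0.1") || v.startsWith("11.0.2")) {
  log.warn("Your JVM ({}) is known to have TLSv1.3 peer-name validation problems"
      + " (JDK-8212885, see SOLR-12988); SSL hostname checks may fail until you upgrade.", v);
}
{noformat}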

> Avoid using TLSv1.3 for HttpClient
> --
>
> Key: SOLR-12988
> URL: https://issues.apache.org/jira/browse/SOLR-12988
> Project: Solr
>  Issue Type: Test
>Reporter: Hoss Man
>Assignee: Cao Manh Dat
>Priority: Major
>  Labels: Java11, Java12
> Attachments: SOLR-13413.patch
>
>
> HTTPCLIENT-1967 indicates that HttpClient can't be used properly with 
> TLSv1.3. It caused some test failures below, therefore we should enforce 
> HttpClient to use TLSv1.2 or lower versions.
> TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName seems to fail 100% of 
> the time when run with java11 (or java12), regardless of seed, on both master 
> & 7x.
> The nature of the problem and the way our http stack works suggests it *may* 
> ultimately be a jetty bug (perhaps related to [jetty 
> issue#2711|https://github.com/eclipse/jetty.project/issues/2711]?)
> *HOWEVER* ... as far as i can tell, whatever the root cause is, seems to have 
> been fixed on the {{jira/http2}} branch (as of 
> 52bc163dc1804c31af09c1fba99647005da415ad) which should hopefully be getting 
> merged to master soon.
> Filing this issue largely for tracking purpose, although we may also want to 
> use it for discussions/considerations of other backports/fixes to 7x



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8864) Add Query Memory Estimation Ability in QueryVisitor

2019-06-18 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866896#comment-16866896
 ] 

Andrzej Bialecki  commented on LUCENE-8864:
---

What kind of API do you propose here? Theoretically, if a query tree contains 
queries that implement Accountable we can already use existing QueryVisitor API 
to walk through the query tree and collect ramBytesUsed() from each sub-query. 
And if a Query doesn't implement Accountable we have no way to predict this 
(apart from {{RamUsageEstimator.shallowSizeOf(Object)}}).
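
Something along these lines, as a rough sketch only (the class name is made up; a fuller version would probably also want to count terms delivered via consumeTerms):
{noformat}
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryVisitor;
import org.apache.lucene.util.Accountable;
import org.apache.lucene.util.RamUsageEstimator;

final class RamEstimatingVisitor extends QueryVisitor {
  long bytes;

  @Override
  public void visitLeaf(Query query) {
    bytes += sizeOf(query);
  }

  @Override
  public QueryVisitor getSubVisitor(BooleanClause.Occur occur, Query parent) {
    bytes += sizeOf(parent); // count the compound query itself too
    return this;             // keep accumulating into the same counter
  }

  private static long sizeOf(Query q) {
    return (q instanceof Accountable)
        ? ((Accountable) q).ramBytesUsed()
        : RamUsageEstimator.shallowSizeOf(q);
  }
}
// usage: query.visit(visitor); long estimate = visitor.bytes;
{noformat}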

> Add Query Memory Estimation Ability in QueryVisitor
> ---
>
> Key: LUCENE-8864
> URL: https://issues.apache.org/jira/browse/LUCENE-8864
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Atri Sharma
>Priority: Major
>
> In LUCENE-8855, there is a discussion around adding memory accounting 
> capabilities to QueryVisitor to allow estimation of memory consumption by 
> queries.'
> This Jira tracks the effort



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] dsmiley commented on a change in pull request #723: SOLR-13545: AutoClose stream in ContentStreamUpdateRequest

2019-06-18 Thread GitBox
dsmiley commented on a change in pull request #723: SOLR-13545: AutoClose 
stream in ContentStreamUpdateRequest
URL: https://github.com/apache/lucene-solr/pull/723#discussion_r294955402
 
 

 ##
 File path: 
solr/solrj/src/test/org/apache/solr/client/solrj/SolrExampleTests.java
 ##
 @@ -710,10 +714,18 @@ public void testContentStreamRequest() throws Exception {
 Assert.assertEquals(0, rsp.getResults().getNumFound());
 
 ContentStreamUpdateRequest up = new ContentStreamUpdateRequest("/update");
-up.addFile(getFile("solrj/books.csv"), "application/csv");
+// Create a copy of the file, which can be deleted after uploading.
+// If the stream isn't closed, the file deletion will fail on Windows,
+// though it will succeed on linux regardless
+final File file = new File("temp/books_copy.csv");
 
 Review comment:
   I see you're creating a file relative to the current working directory.  
That's a bad practice for tests generally.  Call `createTempFile(...)` instead 
(declared in the inheritance chain).
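
   i.e. something like this, assuming the test keeps working with a `java.io.File`:
```java
final File file = createTempFile("books_copy", ".csv").toFile();
```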


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8855) Add Accountable to Query implementations

2019-06-18 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866893#comment-16866893
 ] 

Andrzej Bialecki  commented on LUCENE-8855:
---

Yes, this sounds like a good enough compromise. I'll work on a new patch.

> Add Accountable to Query implementations
> 
>
> Key: LUCENE-8855
> URL: https://issues.apache.org/jira/browse/LUCENE-8855
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: LUCENE-8855.patch, LUCENE-8855.patch
>
>
> Query implementations should also support {{Accountable}} API in order to 
> monitor the memory consumption e.g. in caches where either keys or values are 
> {{Query}} instances.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 24249 - Still Unstable!

2019-06-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24249/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseSerialGC

6 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SolrCloudExampleTest

Error Message:
54 threads leaked from SUITE scope at 
org.apache.solr.cloud.SolrCloudExampleTest: 1) Thread[id=2390, 
name=qtp7706496-2390, state=RUNNABLE, group=TGRP-SolrCloudExampleTest] 
at java.base@11/sun.nio.ch.EPoll.wait(Native Method) at 
java.base@11/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:120)  
   at 
java.base@11/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:124) 
at java.base@11/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:141)   
  at 
app//org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:464)
 at 
app//org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:401)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:357)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:181)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:132)
 at 
app//org.eclipse.jetty.io.ManagedSelector$$Lambda$176/0x00010039cc40.run(Unknown
 Source) at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:781)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:917)
 at java.base@11/java.lang.Thread.run(Thread.java:834)2) 
Thread[id=2344, name=qtp1247560523-2344, state=RUNNABLE, 
group=TGRP-SolrCloudExampleTest] at 
java.base@11/sun.nio.ch.EPoll.wait(Native Method) at 
java.base@11/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:120)  
   at 
java.base@11/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:124) 
at java.base@11/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:141)   
  at 
app//org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:464)
 at 
app//org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:401)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:357)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:181)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:132)
 at 
app//org.eclipse.jetty.io.ManagedSelector$$Lambda$176/0x00010039cc40.run(Unknown
 Source) at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:781)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:917)
 at java.base@11/java.lang.Thread.run(Thread.java:834)3) 
Thread[id=2357, name=qtp1587530077-2357, state=RUNNABLE, 
group=TGRP-SolrCloudExampleTest] at 
java.base@11/sun.nio.ch.EPoll.wait(Native Method) at 
java.base@11/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:120)  
   at 
java.base@11/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:124) 
at java.base@11/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:141)   
  at 
app//org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:464)
 at 
app//org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:401)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:357)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:181)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
 at 
app//org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:132)
 at 
app//org.eclipse.jetty.io.ManagedSelector$$Lambda$176/0x00010039cc40.run(Unknown
 Source) at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:781)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:917)
 at java.base@11/java.lang.Thread.run(Thread.java:834)4) 
Thread[id=2456, name=SolrRrdBackendFactory-819-thread-1, state=TIMED_WAITING, 
group=TGRP-SolrCloudExampleTest] at 
java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 

[jira] [Commented] (LUCENE-8769) Range Query Type With Logically Connected Ranges

2019-06-18 Thread Atri Sharma (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866884#comment-16866884
 ] 

Atri Sharma commented on LUCENE-8769:
-

Thinking more about this, I think what can be done is:

 

1) Introduce NOT semantics by translating NOT (a, b) to (-infinity, a) AND (b, 
infinity)

2) Introduce a RangeClause which contains a bunch of ranges and associated AND 
and NOT clauses (not OR). Each RangeClause will be independently executed, and 
then the final result then ANDed or ORed. For eg:

 

(a AND b) OR (c NOT d) converts to two RangeClauses: {a, b, AND} and {c, d, 
NOT}, where the RangeClauses are connected by OR, so the independent results of 
both clauses are then ORed to give the final result.

 

Does this seem useful and a doable approach?
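
To make the shape concrete, a purely hypothetical sketch of the structure (none of these types exist; names and fields are illustrative):
{noformat}
// Hypothetical API shape only -- not existing Lucene code.
enum RangeOccur { AND, NOT }

final class LongRange {
  final long lower, upper;
  LongRange(long lower, long upper) { this.lower = lower; this.upper = upper; }
}

final class RangeClause {
  final java.util.List<LongRange> ranges; // the ranges inside this clause
  final RangeOccur occur;                 // how they combine within the clause

  RangeClause(java.util.List<LongRange> ranges, RangeOccur occur) {
    this.ranges = ranges;
    this.occur = occur;
  }
}

// (a AND b) OR (c NOT d) becomes two clauses, {a, b, AND} and {c, d, NOT};
// each clause is intersected against the BKD tree independently and the
// per-clause results are then ORed together by the enclosing query.
{noformat}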

> Range Query Type With Logically Connected Ranges
> 
>
> Key: LUCENE-8769
> URL: https://issues.apache.org/jira/browse/LUCENE-8769
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Atri Sharma
>Priority: Major
> Attachments: LUCENE-8769.patch, LUCENE-8769.patch, LUCENE-8769.patch
>
>
> Today, we visit BKD tree for each range specified for PointRangeQuery. It 
> would be good to have a range query type which can take multiple ranges 
> logically ANDed or ORed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8867) Optimise BKD tree for low cardinality leaves

2019-06-18 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866873#comment-16866873
 ] 

Adrien Grand commented on LUCENE-8867:
--

+1 to split

> Optimise BKD tree for low cardinality leaves
> 
>
> Key: LUCENE-8867
> URL: https://issues.apache.org/jira/browse/LUCENE-8867
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently if a leaf on the BKD tree contains only few values, then the leaf 
> is treated the same way as it all values are different. It many cases it can 
> be much more efficient to store the distinct values with the cardinality.
> In addition, in this case the method IntersectVisitor#visit(docId, byte[]) is 
> called n times with the same byte array but different docID. This issue 
> proposes to add a new method to the interface that accepts an array of docs 
> so it can be override by implementors and gain search performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8867) Optimise BKD tree for low cardinality leaves

2019-06-18 Thread Ignacio Vera (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866868#comment-16866868
 ] 

Ignacio Vera commented on LUCENE-8867:
--

{quote}
Right, this is what I had in mind when I said this is only a problem if you 
have data dimensions. Because if you don't, then you could call 
IntersectVisitor.compare(A, A) as a way to know whether value A matches, and we 
wouldn't need any new API?
{quote}

True, that would not work when you have data dimensions. In addition, 
IntersectVisitor.compare(A, A) is intended to compare the query with a range, 
which is normally more expensive than a comparison with a point, so it would 
defeat the purpose of the optimisation.

I propose to break this change in two so we can work on the storage 
optimisation first, and then think about the right API to make 
IntersectVisitor more efficient in these cases.
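
To make the storage half easier to picture, here is a toy sketch of the kind of 
layout being proposed. This is not the actual BKD codec format, and the class and 
field names are invented for illustration: each distinct packed value is written 
once, followed by its cardinality and the doc IDs that share it, instead of 
repeating the value for every document.

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.List;

/** Illustrative only: a run-length style encoding for a low-cardinality BKD leaf. */
public class LowCardinalityLeafSketch {

  /** One distinct packed value plus the IDs of all docs that carry it. */
  static final class ValueAndDocs {
    final byte[] packedValue;
    final int[] docIDs;
    ValueAndDocs(byte[] packedValue, int[] docIDs) {
      this.packedValue = packedValue;
      this.docIDs = docIDs;
    }
  }

  /** Writes each distinct value once, followed by its cardinality and doc IDs. */
  static byte[] encode(List<ValueAndDocs> distinctValues) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    try (DataOutputStream out = new DataOutputStream(bytes)) {
      out.writeInt(distinctValues.size());        // number of distinct values in the leaf
      for (ValueAndDocs v : distinctValues) {
        out.writeInt(v.packedValue.length);
        out.write(v.packedValue);                 // the packed value, stored once
        out.writeInt(v.docIDs.length);            // its cardinality
        for (int doc : v.docIDs) {
          out.writeInt(doc);                      // docs sharing this value
        }
      }
    }
    return bytes.toByteArray();
  }
}
{code}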

> Optimise BKD tree for low cardinality leaves
> 
>
> Key: LUCENE-8867
> URL: https://issues.apache.org/jira/browse/LUCENE-8867
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently if a leaf on the BKD tree contains only few values, then the leaf 
> is treated the same way as it all values are different. It many cases it can 
> be much more efficient to store the distinct values with the cardinality.
> In addition, in this case the method IntersectVisitor#visit(docId, byte[]) is 
> called n times with the same byte array but different docID. This issue 
> proposes to add a new method to the interface that accepts an array of docs 
> so it can be override by implementors and gain search performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13559) AliasIntegrationTest.testClusterStateProviderAPI fails to often

2019-06-18 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866866#comment-16866866
 ] 

Andrzej Bialecki  commented on SOLR-13559:
--

I can't reproduce this locally either. Theoretically this situation is 
impossible ;) the test sets the props, which eventually updates aliases.json, 
and then {{waitForAliasesUpdate}} verifies that the local copy of aliases has 
been updated, too.

> AliasIntegrationTest.testClusterStateProviderAPI fails to often
> ---
>
> Key: SOLR-13559
> URL: https://issues.apache.org/jira/browse/SOLR-13559
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: master (9.0)
>Reporter: Gus Heck
>Priority: Minor
>
> Recent failure rates for AliasIntegrationTest.testClusterStateProviderAPI 
> have been around 4% which is too high. 
> (http://fucit.org/solr-jenkins-reports/failure-report.html). I've beasted 100 
> runs a couple times and not reproduced it but then hit a failure in it during 
> a normal test run today, so I'm going to start this ticket and record the 
> trace. I have yet to dig through the zips for the logs from the builds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12988) Avoid using TLSv1.3 for HttpClient

2019-06-18 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866833#comment-16866833
 ] 

Cao Manh Dat edited comment on SOLR-12988 at 6/18/19 4:56 PM:
--

{code}
At some point after that, after http2 was merged to master, the nature of the 
failure changed – with openjdk 11.0.2 a NEW similar looing failure caused a 
similar looking stack trace from 
TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName – but only on the 
internode communication – NOT on the connection between the test client and the 
first node.
{code}
I understand why this happened: in the beginning of the http2 branch, {{Jetty 
client}} was used in many places. But SOLR-12081 made some changes 
(https://github.com/apache/lucene-solr/blob/2e468abecc98ffc6137fc5de2aefe8cd19cd6c8d/solr/core/src/java/org/apache/solr/cloud/api/collections/CreateCollectionCmd.java#L207),
 so instead of using {{Jetty client}} we switched to {{Http Client}} in many 
places on merge. I just didn't want to revert Mark's changes at that point in 
time, since I didn't totally understand the reason for them.
But that is the reason why you saw {{the nature of the failure changed}}.

Anyway, I kinda *missed* this:
{quote}
but that has been fixed in OpenJDK 11.0.3.
{quote}
If that is the case, I'm fine with removing the changes that enforce HttpClient 
to use TLSv1.2 or lower versions.
If possible I will try to apply that enforcement only for *Java 11.0.2* or lower 
versions. Does that make sense [~hossman]?


was (Author: caomanhdat):
{code}
At some point after that, after http2 was merged to master, the nature of the 
failure changed – with openjdk 11.0.2 a NEW similar looing failure caused a 
similar looking stack trace from 
TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName – but only on the 
internode communication – NOT on the connection between the test client and the 
first node.
{code}
I understand why this happened, since in beginning of branch http2, {{Jetty 
client}} was used for many places. But SOLR-12081 makes some changes 
(https://github.com/apache/lucene-solr/blob/2e468abecc98ffc6137fc5de2aefe8cd19cd6c8d/solr/core/src/java/org/apache/solr/cloud/api/collections/CreateCollectionCmd.java#L207).
 So instead of using {{Jetty client}} we switched to {{Http Client}} in many 
places on merge. I just don't want to revert Mark changes at that point of 
time, since I'm not totally understand the reason of that.
But that is the reason why you saw {{the nature of the failure changed}}.

Anyway, l kinda *missed* this
{quote}
but that has been fixed in OpenJDK 11.0.3.
{quote}
If that is the case I'm good with remove the changes for enforcing HttpClient 
to uses TLSv1.2 or lower versions.
If possible I will try to do that enforcement for *Java 11.0.2* or lower 
versions. Does that makes sense [~hossman]

> Avoid using TLSv1.3 for HttpClient
> --
>
> Key: SOLR-12988
> URL: https://issues.apache.org/jira/browse/SOLR-12988
> Project: Solr
>  Issue Type: Test
>Reporter: Hoss Man
>Assignee: Cao Manh Dat
>Priority: Major
>  Labels: Java11, Java12
> Attachments: SOLR-13413.patch
>
>
> HTTPCLIENT-1967 indicates that HttpClient can't be used properly with 
> TLSv1.3. It caused some test failures below, therefore we should enforce 
> HttpClient to uses TLSv1.2 or lower versions.
> TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName seems to fail 100% of 
> the time when run with java11 (or java12), regardless of seed, on both master 
> & 7x.
> The nature of the problem and the way our htp stack works suggests it *may* 
> ultimately be a jetty bug (perhaps related to [jetty 
> issue#2711|https://github.com/eclipse/jetty.project/issues/2711]?)
> *HOWEVER* ... as far as i can tell, whatever the root cause is, seems to have 
> been fixed on the {{jira/http2}} branch (as of 
> 52bc163dc1804c31af09c1fba99647005da415ad) which should hopefully be getting 
> merged to master soon.
> Filing this issue largely for tracking purpose, although we may also want to 
> use it for discussions/considerations of other backports/fixes to 7x



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12988) Avoid using TLSv1.3 for HttpClient

2019-06-18 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866833#comment-16866833
 ] 

Cao Manh Dat edited comment on SOLR-12988 at 6/18/19 4:55 PM:
--

{code}
At some point after that, after http2 was merged to master, the nature of the 
failure changed – with openjdk 11.0.2 a NEW similar looing failure caused a 
similar looking stack trace from 
TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName – but only on the 
internode communication – NOT on the connection between the test client and the 
first node.
{code}
I understand why this happened, since in beginning of branch http2, {{Jetty 
client}} was used for many places. But SOLR-12081 makes some changes 
(https://github.com/apache/lucene-solr/blob/2e468abecc98ffc6137fc5de2aefe8cd19cd6c8d/solr/core/src/java/org/apache/solr/cloud/api/collections/CreateCollectionCmd.java#L207).
 So instead of using {{Jetty client}} we switched to {{Http Client}} in many 
places on merge. I just don't want to revert Mark changes at that point of 
time, since I'm not totally understand the reason of that.
But that is the reason why you saw {{the nature of the failure changed}}.

Anyway, l kinda *missed* this
{quote}
but that has been fixed in OpenJDK 11.0.3.
{quote}
If that is the case I'm good with remove the changes for enforcing HttpClient 
to uses TLSv1.2 or lower versions.
If possible I will try to do that enforcement for *Java 11.0.2* or lower 
versions. Does that makes sense [~hossman]


was (Author: caomanhdat):
{code}
At some point after that, after http2 was merged to master, the nature of the 
failure changed – with openjdk 11.0.2 a NEW similar looing failure caused a 
similar looking stack trace from 
TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName – but only on the 
internode communication – NOT on the connection between the test client and the 
first node.
{code}
I understand why this happened, since in branch http2 we used to use {{Jetty 
client}} for many places. But SOLR-12081 makes some changes 
(https://github.com/apache/lucene-solr/blob/2e468abecc98ffc6137fc5de2aefe8cd19cd6c8d/solr/core/src/java/org/apache/solr/cloud/api/collections/CreateCollectionCmd.java#L207).
 So instead of using {{Jetty client}} we switched to {{Http Client}}. I just 
don't want to revert Mark changes at that point of time, since I'm not totally 
understand the reason of that.
But that is the reason why you saw {{the nature of the failure changed}}.

Anyway, l kinda *missed* this
{quote}
but that has been fixed in OpenJDK 11.0.3.
{quote}
If that is the case I'm good with remove the changes for enforcing HttpClient 
to uses TLSv1.2 or lower versions.
If possible I will try to do that enforcement for *Java 11.0.2* or lower 
versions. Does that makes sense [~hossman]

> Avoid using TLSv1.3 for HttpClient
> --
>
> Key: SOLR-12988
> URL: https://issues.apache.org/jira/browse/SOLR-12988
> Project: Solr
>  Issue Type: Test
>Reporter: Hoss Man
>Assignee: Cao Manh Dat
>Priority: Major
>  Labels: Java11, Java12
> Attachments: SOLR-13413.patch
>
>
> HTTPCLIENT-1967 indicates that HttpClient can't be used properly with 
> TLSv1.3. It caused some test failures below, therefore we should enforce 
> HttpClient to uses TLSv1.2 or lower versions.
> TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName seems to fail 100% of 
> the time when run with java11 (or java12), regardless of seed, on both master 
> & 7x.
> The nature of the problem and the way our htp stack works suggests it *may* 
> ultimately be a jetty bug (perhaps related to [jetty 
> issue#2711|https://github.com/eclipse/jetty.project/issues/2711]?)
> *HOWEVER* ... as far as i can tell, whatever the root cause is, seems to have 
> been fixed on the {{jira/http2}} branch (as of 
> 52bc163dc1804c31af09c1fba99647005da415ad) which should hopefully be getting 
> merged to master soon.
> Filing this issue largely for tracking purpose, although we may also want to 
> use it for discussions/considerations of other backports/fixes to 7x



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12988) Avoid using TLSv1.3 for HttpClient

2019-06-18 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866833#comment-16866833
 ] 

Cao Manh Dat commented on SOLR-12988:
-

{code}
At some point after that, after http2 was merged to master, the nature of the 
failure changed – with openjdk 11.0.2 a NEW similar looing failure caused a 
similar looking stack trace from 
TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName – but only on the 
internode communication – NOT on the connection between the test client and the 
first node.
{code}
I understand why this happened: in the http2 branch we used to use {{Jetty 
client}} in many places. But SOLR-12081 made some changes 
(https://github.com/apache/lucene-solr/blob/2e468abecc98ffc6137fc5de2aefe8cd19cd6c8d/solr/core/src/java/org/apache/solr/cloud/api/collections/CreateCollectionCmd.java#L207),
 so instead of using {{Jetty client}} we switched to {{Http Client}}. I just 
didn't want to revert Mark's changes at that point in time, since I didn't 
totally understand the reason for them.
But that is the reason why you saw {{the nature of the failure changed}}.

Anyway, I kinda *missed* this:
{quote}
but that has been fixed in OpenJDK 11.0.3.
{quote}
If that is the case, I'm fine with removing the changes that enforce HttpClient 
to use TLSv1.2 or lower versions.
If possible I will try to apply that enforcement only for *Java 11.0.2* or lower 
versions. Does that make sense [~hossman]?
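
A rough sketch of what that conditional enforcement could look like with plain 
Apache HttpClient 4.x APIs is below. This is not the actual Solr patch; the 
class name is invented, and the "Java 11 before 11.0.3" check is only an 
assumption based on the discussion in this issue.

{code:java}
import javax.net.ssl.HostnameVerifier;
import org.apache.http.conn.ssl.DefaultHostnameVerifier;
import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.ssl.SSLContexts;

public class ConditionalTlsRestriction {

  /** Only pin TLSv1.2 on JVMs assumed to carry the TLSv1.3 bug (Java 11 before 11.0.3). */
  static boolean jvmNeedsTls12Fallback() {
    Runtime.Version v = Runtime.version();
    return v.feature() == 11 && v.update() < 3;
  }

  static CloseableHttpClient newClient() {
    HostnameVerifier verifier = new DefaultHostnameVerifier();
    // null means "use the JVM defaults", i.e. TLSv1.3 stays enabled on fixed JVMs.
    String[] protocols = jvmNeedsTls12Fallback() ? new String[] {"TLSv1.2"} : null;
    SSLConnectionSocketFactory socketFactory =
        new SSLConnectionSocketFactory(SSLContexts.createDefault(), protocols, null, verifier);
    return HttpClients.custom().setSSLSocketFactory(socketFactory).build();
  }
}
{code}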

> Avoid using TLSv1.3 for HttpClient
> --
>
> Key: SOLR-12988
> URL: https://issues.apache.org/jira/browse/SOLR-12988
> Project: Solr
>  Issue Type: Test
>Reporter: Hoss Man
>Assignee: Cao Manh Dat
>Priority: Major
>  Labels: Java11, Java12
> Attachments: SOLR-13413.patch
>
>
> HTTPCLIENT-1967 indicates that HttpClient can't be used properly with 
> TLSv1.3. It caused some test failures below, therefore we should enforce 
> HttpClient to uses TLSv1.2 or lower versions.
> TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName seems to fail 100% of 
> the time when run with java11 (or java12), regardless of seed, on both master 
> & 7x.
> The nature of the problem and the way our htp stack works suggests it *may* 
> ultimately be a jetty bug (perhaps related to [jetty 
> issue#2711|https://github.com/eclipse/jetty.project/issues/2711]?)
> *HOWEVER* ... as far as i can tell, whatever the root cause is, seems to have 
> been fixed on the {{jira/http2}} branch (as of 
> 52bc163dc1804c31af09c1fba99647005da415ad) which should hopefully be getting 
> merged to master soon.
> Filing this issue largely for tracking purpose, although we may also want to 
> use it for discussions/considerations of other backports/fixes to 7x



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8867) Optimise BKD tree for low cardinality leaves

2019-06-18 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866830#comment-16866830
 ] 

Adrien Grand commented on LUCENE-8867:
--

Sorry, reading my comment again I realize it wasn't clear. I see two distinct 
changes in the pull request. One is about adding a new storage strategy for the 
case where a leaf only has a handful of unique values; I'm +1 on it. The second 
one is about taking advantage of this special case to avoid computing a relation 
with the same byte[] over and over again; the solution there is a bit more 
controversial in my opinion.

bq. another option would be to change more radically the interface and add a 
matches(byte[]) method that returns a boolean and then use the visit(docID) 
method.

Right, this is what I had in mind when I said this is only a problem if you 
have data dimensions. Because if you don't, then you could call 
IntersectVisitor.compare(A, A) as a way to know whether value A matches, and we 
wouldn't need any new API?
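
Concretely, the trick being described is something like the sketch below, assuming 
all dimensions are indexed (no data dimensions). Whether a degenerate [value, value] 
cell can ever come back as CELL_CROSSES_QUERY depends on the visitor, so that case 
is kept explicit here; the helper class itself is invented for illustration.

{code:java}
import org.apache.lucene.index.PointValues.IntersectVisitor;
import org.apache.lucene.index.PointValues.Relation;

/** Illustrative helper: use compare(value, value) as a membership test. */
final class SameValueMatcher {

  /**
   * Returns Boolean.TRUE/FALSE when the relation is decisive, or null when the
   * visitor reports CELL_CROSSES_QUERY and a per-document visit(docID, value)
   * would still be required.
   */
  static Boolean matches(IntersectVisitor visitor, byte[] packedValue) {
    Relation r = visitor.compare(packedValue, packedValue);
    if (r == Relation.CELL_INSIDE_QUERY) {
      return Boolean.TRUE;
    } else if (r == Relation.CELL_OUTSIDE_QUERY) {
      return Boolean.FALSE;
    }
    return null; // CELL_CROSSES_QUERY: inconclusive for a degenerate [value, value] cell
  }
}
{code}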

> Optimise BKD tree for low cardinality leaves
> 
>
> Key: LUCENE-8867
> URL: https://issues.apache.org/jira/browse/LUCENE-8867
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently if a leaf on the BKD tree contains only few values, then the leaf 
> is treated the same way as it all values are different. It many cases it can 
> be much more efficient to store the distinct values with the cardinality.
> In addition, in this case the method IntersectVisitor#visit(docId, byte[]) is 
> called n times with the same byte array but different docID. This issue 
> proposes to add a new method to the interface that accepts an array of docs 
> so it can be override by implementors and gain search performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk-11.0.2) - Build # 726 - Unstable!

2019-06-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/726/
Java: 64bit/jdk-11.0.2 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

19 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.admin.IndexSizeEstimatorTest

Error Message:
Error from server at https://127.0.0.1:36009/solr: Underlying core creation 
failed while creating collection: IndexSizeEstimator_collection

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:36009/solr: Underlying core creation failed 
while creating collection: IndexSizeEstimator_collection
at __randomizedtesting.SeedInfo.seed([8F30F6AA795F0A16]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:656)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.handler.admin.IndexSizeEstimatorTest.setupCluster(IndexSizeEstimatorTest.java:70)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.admin.IndexSizeEstimatorTest

Error Message:
Error from server at https://127.0.0.1:37547/solr: Underlying core creation 
failed while creating collection: IndexSizeEstimator_collection

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:37547/solr: Underlying core creation failed 
while creating collection: IndexSizeEstimator_collection
at __randomizedtesting.SeedInfo.seed([8F30F6AA795F0A16]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:656)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 

[jira] [Comment Edited] (SOLR-13559) AliasIntegrationTest.testClusterStateProviderAPI fails to often

2019-06-18 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866811#comment-16866811
 ] 

Gus Heck edited comment on SOLR-13559 at 6/18/19 4:32 PM:
--

Note: this seed does not reproduce; the {{&gt;}} etc. is because I pulled this 
from the xml file from the test output.
{code:java}
java.lang.AssertionError: {} expected:<2> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([DBB80F0F76EA5A34:C46F932305E1A37F]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.solr.cloud.AliasIntegrationTest.testClusterStateProviderAPI(AliasIntegrationTest.java:303)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)
{code}


was (Author: gus_heck):
Note: this seed does not reproduce, the  etc is because I pulled this from 
the xml file from 

[jira] [Commented] (SOLR-13559) AliasIntegrationTest.testClusterStateProviderAPI fails to often

2019-06-18 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866811#comment-16866811
 ] 

Gus Heck commented on SOLR-13559:
-

Note: this seed does not reproduce; the &gt; etc. is because I pulled this from 
the xml file from the test output.
{code:java}
java.lang.AssertionError: {} expected:<2> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([DBB80F0F76EA5A34:C46F932305E1A37F]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.solr.cloud.AliasIntegrationTest.testClusterStateProviderAPI(AliasIntegrationTest.java:303)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)
{code}

> AliasIntegrationTest.testClusterStateProviderAPI fails to often
> ---
>
> Key: SOLR-13559
>

[GitHub] [lucene-solr] jtibshirani commented on issue #715: LUCENE-7714 Add a range query that takes advantage of index sorting.

2019-06-18 Thread GitBox
jtibshirani commented on issue #715: LUCENE-7714 Add a range query that takes 
advantage of index sorting.
URL: https://github.com/apache/lucene-solr/pull/715#issuecomment-503212702
 
 
   Thanks @jpountz for the review, it's now ready for another look.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13559) AliasIntegrationTest.testClusterStateProviderAPI fails to often

2019-06-18 Thread Gus Heck (JIRA)
Gus Heck created SOLR-13559:
---

 Summary: AliasIntegrationTest.testClusterStateProviderAPI fails to 
often
 Key: SOLR-13559
 URL: https://issues.apache.org/jira/browse/SOLR-13559
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Tests
Affects Versions: master (9.0)
Reporter: Gus Heck


Recent failure rates for AliasIntegrationTest.testClusterStateProviderAPI have 
been around 4% which is too high. 
(http://fucit.org/solr-jenkins-reports/failure-report.html). I've beasted 100 
runs a couple times and not reproduced it but then hit a failure in it during a 
normal test run today, so I'm going to start this ticket and record the trace. 
I have yet to dig through the zips for the logs from the builds.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8867) Optimise BKD tree for low cardinality leaves

2019-06-18 Thread Ignacio Vera (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866798#comment-16866798
 ] 

Ignacio Vera edited comment on LUCENE-8867 at 6/18/19 4:16 PM:
---

{quote}
This is only an issue in the case that not all dimensions are indexed, right? 
Otherwise you could figure out that all values are equal in 
IntersectVisitor#compare?
{quote}

I think this is a generic issue. The problem here is not when all values are 
equal but when you have a very low cardinality on the leaf nodes. In this case 
we can save lots of space by storing the values in the proposed way.


{quote}
One concern I have with the patch is that it assumes that the codec has doc IDs 
available in an int[] slice as opposed to streaming them from disk directly to 
the IntersectVisitor for instance.
{quote}

I see your concern; another option would be to change the interface more 
radically and add a matches(byte[]) method that returns a boolean and then use 
the visit(docID) method.





was (Author: ivera):
{quote}
This is only an issue in the case that not all dimensions are indexed, right? 
Otherwise you could figure out that all values are equal in 
IntersectVisitor#compare?
{quote}

I think this is generic issue. The problem here is not when are values are 
equal but when you have a very low cardinality on the leaf nodes. In this case 
the can safe lots of space by storing the values in the proposed way.


{quote}
One concern I have with the patch is that it assumes that the codec has doc IDs 
available in an int[] slice as opposed to streaming them from disk directly to 
the IntersectVisitor for instance.
{quote}

I see your concern , another option would be to change more radically the 
interface and add a matches(byte[]) method and then use the visit(docID) method.




> Optimise BKD tree for low cardinality leaves
> 
>
> Key: LUCENE-8867
> URL: https://issues.apache.org/jira/browse/LUCENE-8867
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently if a leaf on the BKD tree contains only few values, then the leaf 
> is treated the same way as it all values are different. It many cases it can 
> be much more efficient to store the distinct values with the cardinality.
> In addition, in this case the method IntersectVisitor#visit(docId, byte[]) is 
> called n times with the same byte array but different docID. This issue 
> proposes to add a new method to the interface that accepts an array of docs 
> so it can be override by implementors and gain search performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8867) Optimise BKD tree for low cardinality leaves

2019-06-18 Thread Ignacio Vera (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866798#comment-16866798
 ] 

Ignacio Vera commented on LUCENE-8867:
--

{quote}
This is only an issue in the case that not all dimensions are indexed, right? 
Otherwise you could figure out that all values are equal in 
IntersectVisitor#compare?
{quote}

I think this is a generic issue. The problem here is not when all values are 
equal but when you have a very low cardinality on the leaf nodes. In this case 
we can save lots of space by storing the values in the proposed way.


{quote}
One concern I have with the patch is that it assumes that the codec has doc IDs 
available in an int[] slice as opposed to streaming them from disk directly to 
the IntersectVisitor for instance.
{quote}

I see your concern; another option would be to change the interface more 
radically and add a matches(byte[]) method and then use the visit(docID) method.
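
Purely as a strawman for the API discussion, the proposed addition might look like 
the sketch below. This interface does not exist in Lucene; it only illustrates how 
a matches(byte[]) check plus visit(docID) could be combined for low-cardinality 
leaves without changing existing visitors.

{code:java}
// Hypothetical sketch of the proposed API change; this method does not exist in Lucene today.
public interface IntersectVisitorWithMatches {

  /** Existing-style per-document callback. */
  void visit(int docID) throws java.io.IOException;

  /** Existing-style per-document, per-value callback. */
  void visit(int docID, byte[] packedValue) throws java.io.IOException;

  /**
   * Proposed addition: decide once whether a packed value matches, so that a
   * low-cardinality leaf can call matches(value) a single time and then stream
   * the associated doc IDs through visit(docID).
   */
  default boolean matches(byte[] packedValue) {
    throw new UnsupportedOperationException("not implemented");
  }
}
{code}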




> Optimise BKD tree for low cardinality leaves
> 
>
> Key: LUCENE-8867
> URL: https://issues.apache.org/jira/browse/LUCENE-8867
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently if a leaf on the BKD tree contains only few values, then the leaf 
> is treated the same way as it all values are different. It many cases it can 
> be much more efficient to store the distinct values with the cardinality.
> In addition, in this case the method IntersectVisitor#visit(docId, byte[]) is 
> called n times with the same byte array but different docID. This issue 
> proposes to add a new method to the interface that accepts an array of docs 
> so it can be override by implementors and gain search performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1366 - Still Failing

2019-06-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1366/

No tests ran.

Build Log:
[...truncated 24664 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2581 links (2111 relative) to 3394 anchors in 259 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-9.0.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:


[jira] [Commented] (LUCENE-8769) Range Query Type With Logically Connected Ranges

2019-06-18 Thread Atri Sharma (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866797#comment-16866797
 ] 

Atri Sharma commented on LUCENE-8769:
-

I think we would still need a way to process NOT ranges (for NOT (a, b), we 
could rewrite to (-infinity, a), (b, infinity)?).

I agree, OR should be supported. The subtle catch there is that OR would 
normally be across multiple ranges, not single ranges, e.g. (A AND B) OR (C AND 
D NOT E). How do we flatten that to a single MultiRangeQuery? The other option 
is to convert it into two MultiRangeQueries, execute both and then OR their 
results (of course, short-circuiting if the first one evaluates to true). This 
can be done for AND across clauses as well.

I am happy to put it in the sandbox first and then eventually move it to core. 
Is there a criterion we follow for that transition?

> Range Query Type With Logically Connected Ranges
> 
>
> Key: LUCENE-8769
> URL: https://issues.apache.org/jira/browse/LUCENE-8769
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Atri Sharma
>Priority: Major
> Attachments: LUCENE-8769.patch, LUCENE-8769.patch, LUCENE-8769.patch
>
>
> Today, we visit BKD tree for each range specified for PointRangeQuery. It 
> would be good to have a range query type which can take multiple ranges 
> logically ANDed or ORed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12988) Avoid using TLSv1.3 for HttpClient

2019-06-18 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866796#comment-16866796
 ] 

Hoss Man commented on SOLR-12988:
-


bq. Hi guys, this is a problem belongs to HttpClient + Java 11 (TLSv1.3) 
(HTTPCLIENT-1967).

Ok ... for a start -- I don't think that's entirely true.

The bug as *originally* reported happened when a remote client was 
communicating with jetty, and was evidently tied to jetty issue#2711? ... it 
did not reproduce on the http2 branch at that time.

At some point after that, after http2 was merged to master, the nature of the 
failure changed -- with openjdk 11.0.2 a _NEW_ *similar* looking failure caused 
a similar looking stack trace from 
TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName -- but only on the 
internode communication -- *NOT* on the connection between the test client and 
the first node.

This new manifestation of the failure appears to be directly related to 
JDK-8220723 / JDK-8212885, which are cited in the HTTPCLIENT-1967 issue you 
mentioned ... but that has been fixed in OpenJDK 11.0.3.

IE: setting aside the changes you already committed: with a patch to revert the 
SSL test-disabling changes from Mark's December commits, 
TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName fails 100% of the time on 
OpenJDK 11.0.2 ... but it passes 100% of the time (for me) using OpenJDK 11.0.3 
(even w/o your changes to disable TLSv1.3).



I think it's a mistake to outright disable TLSv1.3 ... it's *DEFINITELY* a 
mistake to claim this is because HttpClient doesn't work with (or doesn't 
support) TLSv1.3, or that we must "Avoid using TLSv1.3 for HttpClient" ... 
HttpClient works just fine with TLSv1.3, as long as it works correctly in the 
JVM.  (Which is exactly what the comments in HTTPCLIENT-1967 already say.)

In my opinion, as a last resort we should simply re-enable the tests and leave 
the TLSv1.3 handling up to the JVM, w/o any special changes.  We can note in the 
docs that to use checkPeerName on Java 11 you *must* have 11.0.3 or higher.

A better solution, if possible, would be a way to programmatically detect when 
this bug exists in the JVM impl, so we can log a clear ERROR instead of just an 
SSLPeerUnverifiedException; that would be preferred.

But I do *NOT* think the way things stand, with your commit to disable TLSv1.3, 
is a good long-term plan.
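
For what it's worth, the "detect and log a clear ERROR" idea could look roughly 
like the sketch below. The 11.0.0-11.0.2 range and the JDK-8212885 reference are 
taken from this thread rather than from any authoritative compatibility list, and 
the class and method names are invented for illustration.

{code:java}
import java.lang.invoke.MethodHandles;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Sketch only: warn when the running JVM is assumed to carry the TLSv1.3 session bug. */
public class Tls13SanityCheck {
  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

  static void warnIfJvmIsAffected(boolean checkPeerNameEnabled) {
    Runtime.Version v = Runtime.version();
    // Assumption from this discussion: Java 11 releases before 11.0.3 are affected.
    boolean affected = v.feature() == 11 && v.update() < 3;
    if (affected && checkPeerNameEnabled) {
      log.error("This JVM ({}) appears to carry the TLSv1.3 session bug (JDK-8212885); "
          + "SSL peer name checks may fail with SSLPeerUnverifiedException. "
          + "Upgrade to 11.0.3+ or restrict clients to TLSv1.2.", v);
    }
  }
}
{code}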


> Avoid using TLSv1.3 for HttpClient
> --
>
> Key: SOLR-12988
> URL: https://issues.apache.org/jira/browse/SOLR-12988
> Project: Solr
>  Issue Type: Test
>Reporter: Hoss Man
>Assignee: Cao Manh Dat
>Priority: Major
>  Labels: Java11, Java12
> Attachments: SOLR-13413.patch
>
>
> HTTPCLIENT-1967 indicates that HttpClient can't be used properly with 
> TLSv1.3. It caused some test failures below, therefore we should enforce 
> HttpClient to uses TLSv1.2 or lower versions.
> TestMiniSolrCloudClusterSSL.testSslWithCheckPeerName seems to fail 100% of 
> the time when run with java11 (or java12), regardless of seed, on both master 
> & 7x.
> The nature of the problem and the way our htp stack works suggests it *may* 
> ultimately be a jetty bug (perhaps related to [jetty 
> issue#2711|https://github.com/eclipse/jetty.project/issues/2711]?)
> *HOWEVER* ... as far as i can tell, whatever the root cause is, seems to have 
> been fixed on the {{jira/http2}} branch (as of 
> 52bc163dc1804c31af09c1fba99647005da415ad) which should hopefully be getting 
> merged to master soon.
> Filing this issue largely for tracking purpose, although we may also want to 
> use it for discussions/considerations of other backports/fixes to 7x



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-13472) HTTP requests to a node that does not hold a core of the collection are unauthorized

2019-06-18 Thread Ishan Chattopadhyaya (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya reassigned SOLR-13472:
---

Assignee: Ishan Chattopadhyaya

> HTTP requests to a node that does not hold a core of the collection are 
> unauthorized
> 
>
> Key: SOLR-13472
> URL: https://issues.apache.org/jira/browse/SOLR-13472
> Project: Solr
>  Issue Type: Bug
>  Components: Authorization
>Affects Versions: 7.7.1, 8.0
>Reporter: adfel
>Assignee: Ishan Chattopadhyaya
>Priority: Minor
>  Labels: security
>
> When creating collection in SolrCloud, collection is available for queries 
> and updates through all Solr nodes, in particular nodes that does not hold 
> one of collection's cores. This is expected behaviour that works when using 
> SolrJ client or HTTP requests.
> When enabling authorization rules it seems that this behaviour is broken for 
> HTTP requests:
>  - executing request to a node that holds part of the collection (core) obey 
> to authorization rules as expected.
>  - other nodes respond with code 403 - unauthorized request.
> SolrJ still works as expected.
> Tested both with BasicAuthPlugin and KerberosPlugin authentication plugins.
> +Steps for reproduce:+
> 1. Create a cloud made of 2 nodes (node_1, node_2).
> 2. Configure authentication and authorization by uploading following 
> security.json file to zookeeper:
>  
> {code:java}
> {
>  "authentication": {
>"blockUnknown": true,
>"class": "solr.BasicAuthPlugin",
>"credentials": {
>  "solr": "'solr' user password_hash",
>  "indexer_app": "'indexer_app' password_hash",
>  "read_user": "'read_user' password_hash"
>}
>  },
>  "authorization": {
>"class": "solr.RuleBasedAuthorizationPlugin",
>"permissions": [
>  {
>"name": "read",
>"role": "*"
>  },
>  {
>"name": "update",
>"role": [
>  "indexer",
>  "admin"
>]
>  },
>  {
>"name": "all",
>"role": "admin"
>  }
>],
>"user-role": {
>  "solr": "admin",
>  "indexer_app": "indexer"
>}
>  }
> }{code}
>  
> 3. create 'test' collection with one shard on *node_1*.
> -- 
> The following requests expected to succeed but return 403 status 
> (unauthorized request):
> {code:java}
> curl -u read_user:read_user "http://node_2/solr/test/select?q=*:*;
> curl -u indexer_app:indexer_app "http://node_2/solr/test/select?q=*:*;
> curl -u indexer_app:indexer_app "http://node_2/solr/test/update?commit=true;
> {code}
>  
> Authenticated '_solr_' user requests works as expected. My guess is due to 
> the special '_all_' role.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-13480) Collection creation failure when using Kerberos authentication combined with rule-base authorization

2019-06-18 Thread Ishan Chattopadhyaya (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya reassigned SOLR-13480:
---

Assignee: Ishan Chattopadhyaya

> Collection creation failure when using Kerberos authentication combined with 
> rule-base authorization
> 
>
> Key: SOLR-13480
> URL: https://issues.apache.org/jira/browse/SOLR-13480
> Project: Solr
>  Issue Type: Bug
>  Components: Authorization, security
>Affects Versions: 7.7.1
>Reporter: mosh
>Assignee: Ishan Chattopadhyaya
>Priority: Major
>  Labels: kerberos
>
> Creation of collection with an authorized user fails with the following error:
> {code:java}
> org.apache.solr.common.SolrException: Error getting replica locations : 
> unable to get autoscaling policy session{code}
> At first it may seem like SOLR-13355 duplication as we are using “all” 
> permission, but bug is specific to Kerberos (tested and found ok using basic 
> auth) plus we verified the failure with 7.7.2 snapshot that included the 
> relevant patch.
> +How to reproduce:+
> 1. Configure solr cloud with kerberos authentication and rule-based 
> authorization plugins using the following security.json file:
> {code:java}
> {
> "authentication":{
>    "class":"org.apache.solr.security.KerberosPlugin"
> },
> "authorization":{
>    "class":"solr.RuleBasedAuthorizationPlugin",
>    "permissions":[
>  {
>    "name":"read",
>    "role":"*"
>  },
>  {
>    "name":"all",
>    "role":"admin_user"
>  }
>    ],
>    "user-role":{
>  "admin_user@OUR_REALM":"admin_user"
>    }
> }}{code}
> 2. Create collection using an authorized user:
> {code:java}
> kinit admin_user@OUR_REALM
> curl --negotiate -u : 
> "http:///solr/admin/collections?action=CREATE=mycoll=1=_default"{code}
> {color:#d04437}==> request fails with the error written above.{color}
> 3. Disable authorization by removing _authorization_ section from 
> security.json, so file should be as follow:
> {code:java}
> {
>   "authentication":{
>     "class":"org.apache.solr.security.KerberosPlugin"
>   }
> }{code}
> 4. Create collection again as in step 2.
> {color:#14892c}==> request succeeds.{color}
> 5. Return authorization section to security.json (file from step 1) and make 
> sure authorization works as expected by inserting documents and executing 
> search queries with different users.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7530) Wrong JSON response using Terms Component with distrib=true

2019-06-18 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866791#comment-16866791
 ] 

Munendra S N commented on SOLR-7530:


[~mkhludnev]
 [^SOLR-7530.patch] 
I have updated the documentation to add a JSON example for terms.ttf=true, along with 
a note about terms.list. I have not included some bugs/limitations which are still in 
progress. Please suggest any changes that are required.

While updating this, I observed that when terms.ttf=true the value returned is a 
long, while when terms.ttf=false the value returned is an integer in standalone mode 
but may be either an integer or a long in distributed mode, depending on its magnitude. 
The value type already varies between standalone and distributed mode in other 
components too, but here (terms) it feels slightly more inconsistent, because the 
terms.ttf parameter affects the type and, in distributed mode, the type also depends 
on the actual value. For now, I have left it as it is.
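
To make the shape of the inconsistency concrete, here is a simplified sketch of how the 
counts end up boxed in the response NamedList (the field name and counts are made up; 
this is not actual TermsComponent code):

{code:java}
import org.apache.solr.common.util.NamedList;

public class TermsCountTypeSketch {
  // Illustrative only: field name and counts are invented.
  static NamedList<Object> standaloneWithoutTtf() {
    NamedList<Object> terms = new NamedList<>();
    terms.add("EMAIL", 20060);      // boxed as Integer when terms.ttf=false (standalone)
    return terms;
  }

  static NamedList<Object> withTtf() {
    NamedList<Object> terms = new NamedList<>();
    terms.add("EMAIL", 20060L);     // boxed as Long when terms.ttf=true
    return terms;
  }
  // In distributed mode the merged count can come back as either Integer or Long
  // depending on its magnitude, which is the inconsistency described above.
}
{code}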

> Wrong JSON response using Terms Component with distrib=true
> ---
>
> Key: SOLR-7530
> URL: https://issues.apache.org/jira/browse/SOLR-7530
> Project: Solr
>  Issue Type: Bug
>  Components: Response Writers, SearchComponents - other, SolrCloud
>Affects Versions: 4.9
>Reporter: Raúl Grande
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: SOLR-7530.patch, SOLR-7530.patch, SOLR-7530.patch, 
> SOLR-7530.patch, SOLR-7530.patch, SOLR-7530.patch, SOLR-7530.patch
>
>
> When using TermsComponent in SolrCloud there are differences in the JSON 
> response depending on whether the distrib parameter is true or false. If 
> distrib=true, the JSON is not well formed (note the [ ] marks).
> JSON Response when distrib=false. Correct response:
> {"responseHeader":{ 
>   "status":0, 
>   "QTime":3
> }, 
> "terms":{ 
> "FileType":
> [ 
>   "EMAIL",20060, 
>   "PDF",7051, 
>   "IMAGE",5108, 
>   "OFFICE",4912, 
>   "TXT",4405, 
>   "OFFICE_EXCEL",4122, 
>   "OFFICE_WORD",2468
>   ]
> } } 
> JSON Response when distrib=true. Incorrect response:
> { 
> "responseHeader":{
>   "status":0, 
>   "QTime":94
> }, 
> "terms":{ 
> "FileType":{ 
>   "EMAIL":31923, 
>   "PDF":11545, 
>   "IMAGE":9807, 
>   "OFFICE_EXCEL":8195, 
>   "OFFICE":5147, 
>   "OFFICE_WORD":4820, 
>   "TIFF":1156, 
>   "XML":851, 
>   "HTML":821, 
>   "RTF":303
>   } 
> } } 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8867) Optimise BKD tree for low cardinality leaves

2019-06-18 Thread Ignacio Vera (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-8867:
-
Description: 
Currently, if a leaf on the BKD tree contains only a few distinct values, the leaf is 
treated the same way as if all values were different. In many cases it can be 
much more efficient to store the distinct values together with their cardinality.

In addition, in this case the method IntersectVisitor#visit(docId, byte[]) is 
called n times with the same byte array but a different docID. This issue 
proposes adding a new method to the interface that accepts an array of docs, so that 
it can be overridden by implementors to gain search performance.

  was:
Currently if a leaf on the BKD tree contains only few values, then the leaf is 
treated the same way as it all values are different. It many cases it can be 
much more efficient to store the distinct values with the cardinality.

In addition, in this cases the method IntersectVisitor#visit(docId, byte[]) is 
called n times with the same byte array but different docID. This issue 
proposes to add a new method to the interface that accepts an array of docs so 
it can be override by implementors and gain search performance.


> Optimise BKD tree for low cardinality leaves
> 
>
> Key: LUCENE-8867
> URL: https://issues.apache.org/jira/browse/LUCENE-8867
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, if a leaf on the BKD tree contains only a few distinct values, the leaf 
> is treated the same way as if all values were different. In many cases it can 
> be much more efficient to store the distinct values together with their cardinality.
> In addition, in this case the method IntersectVisitor#visit(docId, byte[]) is 
> called n times with the same byte array but a different docID. This issue 
> proposes adding a new method to the interface that accepts an array of docs, 
> so that it can be overridden by implementors to gain search performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8867) Optimise BKD tree for low cardinality leaves

2019-06-18 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866785#comment-16866785
 ] 

Adrien Grand commented on LUCENE-8867:
--

This is only an issue in the case that not all dimensions are indexed, right? 
Otherwise you could figure out that all values are equal in 
IntersectVisitor#compare?

One concern I have with the patch is that it assumes that the codec has doc IDs 
available in an int[] slice as opposed to streaming them from disk directly to 
the IntersectVisitor for instance.
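
For readers following the discussion, here is a minimal sketch of the kind of bulk-visit 
hook being proposed; the interface name, method signature, and default implementation are 
hypothetical, not the committed API:

{code:java}
import java.io.IOException;

// Hypothetical sketch only. The idea: when a leaf stores one distinct value with its
// cardinality, the codec can hand the visitor the whole run of doc IDs at once
// instead of calling the per-document method n times with the same byte[].
public interface BulkIntersectVisitor {

  /** Existing per-document callback (as on PointValues.IntersectVisitor). */
  void visit(int docID, byte[] packedValue) throws IOException;

  /** Proposed bulk callback: all docIDs in [0, count) share the same packedValue. */
  default void visit(int[] docIDs, int count, byte[] packedValue) throws IOException {
    for (int i = 0; i < count; i++) {
      visit(docIDs[i], packedValue);   // default keeps today's per-document behaviour
    }
  }
}
{code}

Whether the doc IDs are exposed as an int[] slice or streamed from disk is exactly the 
design question raised above.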

> Optimise BKD tree for low cardinality leaves
> 
>
> Key: LUCENE-8867
> URL: https://issues.apache.org/jira/browse/LUCENE-8867
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, if a leaf on the BKD tree contains only a few distinct values, the leaf 
> is treated the same way as if all values were different. In many cases it can 
> be much more efficient to store the distinct values together with their cardinality.
> In addition, in this case the method IntersectVisitor#visit(docId, byte[]) 
> is called n times with the same byte array but a different docID. This issue 
> proposes adding a new method to the interface that accepts an array of docs, 
> so that it can be overridden by implementors to gain search performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7530) Wrong JSON response using Terms Component with distrib=true

2019-06-18 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-7530:
---
Attachment: SOLR-7530.patch

> Wrong JSON response using Terms Component with distrib=true
> ---
>
> Key: SOLR-7530
> URL: https://issues.apache.org/jira/browse/SOLR-7530
> Project: Solr
>  Issue Type: Bug
>  Components: Response Writers, SearchComponents - other, SolrCloud
>Affects Versions: 4.9
>Reporter: Raúl Grande
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: SOLR-7530.patch, SOLR-7530.patch, SOLR-7530.patch, 
> SOLR-7530.patch, SOLR-7530.patch, SOLR-7530.patch, SOLR-7530.patch
>
>
> When using TermsComponent in SolrCloud there are differences in the JSON 
> response depending on whether the distrib parameter is true or false. If 
> distrib=true, the JSON is not well formed (note the [ ] marks).
> JSON Response when distrib=false. Correct response:
> {"responseHeader":{ 
>   "status":0, 
>   "QTime":3
> }, 
> "terms":{ 
> "FileType":
> [ 
>   "EMAIL",20060, 
>   "PDF",7051, 
>   "IMAGE",5108, 
>   "OFFICE",4912, 
>   "TXT",4405, 
>   "OFFICE_EXCEL",4122, 
>   "OFFICE_WORD",2468
>   ]
> } } 
> JSON Response when distrib=true. Incorrect response:
> { 
> "responseHeader":{
>   "status":0, 
>   "QTime":94
> }, 
> "terms":{ 
> "FileType":{ 
>   "EMAIL":31923, 
>   "PDF":11545, 
>   "IMAGE":9807, 
>   "OFFICE_EXCEL":8195, 
>   "OFFICE":5147, 
>   "OFFICE_WORD":4820, 
>   "TIFF":1156, 
>   "XML":851, 
>   "HTML":821, 
>   "RTF":303
>   } 
> } } 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8769) Range Query Type With Logically Connected Ranges

2019-06-18 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866782#comment-16866782
 ] 

Adrien Grand commented on LUCENE-8769:
--

It feels a bit wrong to me to implement support for AND and NOT this way: the 
next step I imagine will be support for OR, which shouldn't be any more 
complicated than the current patch. And any combination of AND/NOT/OR clauses 
can be rewritten to a combination of ranges that only have OR clauses? So it 
would feel more natural to start with OR, and then possibly add support for AND 
and NOT via rewrite rules.

Another thing is that this feature feels useful but maybe a bit too esoteric 
for lucene/core; could we have it in the sandbox first? I suspect that will make it 
hard to reuse the packing logic, but in such a case it would probably be fine to 
duplicate it.
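
For comparison, the way multiple ranges are usually combined today is a BooleanQuery over 
separate PointRangeQuery clauses, each of which traverses the BKD tree independently. A 
small sketch (the field name and bounds are made up, and this is not code from the patch):

{code:java}
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;

public class MultiRangeSketch {
  // Match price in [0, 100] but exclude [50, 60]; each clause is a separate
  // point range query and therefore a separate BKD traversal today.
  static Query priceRanges() {
    return new BooleanQuery.Builder()
        .add(LongPoint.newRangeQuery("price", 0L, 100L), BooleanClause.Occur.MUST)
        .add(LongPoint.newRangeQuery("price", 50L, 60L), BooleanClause.Occur.MUST_NOT)
        .build();
  }
}
{code}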

> Range Query Type With Logically Connected Ranges
> 
>
> Key: LUCENE-8769
> URL: https://issues.apache.org/jira/browse/LUCENE-8769
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Atri Sharma
>Priority: Major
> Attachments: LUCENE-8769.patch, LUCENE-8769.patch, LUCENE-8769.patch
>
>
> Today, we visit the BKD tree once for each range specified as a PointRangeQuery. It 
> would be good to have a range query type which can take multiple ranges that are 
> logically ANDed or ORed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 3387 - Unstable

2019-06-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3387/

4 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test

Error Message:
Number of replicas in the state does not match what we set:4 vs 5

Stack Trace:
org.apache.solr.cloud.ZkController$NotInClusterStateException: Number of 
replicas in the state does not match what we set:4 vs 5
at 
__randomizedtesting.SeedInfo.seed([8E179CE9030A0BEF:643A333ADF66617]:0)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForActiveReplicaCount(AbstractFullDistribZkTestBase.java:603)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createJettys(AbstractFullDistribZkTestBase.java:552)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:358)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1080)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderTest

Error Message:
69 threads leaked from SUITE scope at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest: 1) Thread[id=21127, 
name=qtp981519604-21127, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeySafeLeaderTest] at 
java.base@11.0.1/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11.0.1/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@11.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123)
   

[jira] [Commented] (SOLR-13403) Terms component fails for DatePointField

2019-06-18 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866760#comment-16866760
 ] 

Munendra S N commented on SOLR-13403:
-

 [^SOLR-13403.patch] 
* moved mutableValToString to TermsComponent
* added test for distrib=true



> Terms component fails for DatePointField
> 
>
> Key: SOLR-13403
> URL: https://issues.apache.org/jira/browse/SOLR-13403
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-13403.patch, SOLR-13403.patch
>
>
> Getting terms works for all PointFields except DatePointField. For DatePointField, the 
> request fails with an NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13403) Terms component fails for DatePointField

2019-06-18 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-13403:

Attachment: SOLR-13403.patch

> Terms component fails for DatePointField
> 
>
> Key: SOLR-13403
> URL: https://issues.apache.org/jira/browse/SOLR-13403
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-13403.patch, SOLR-13403.patch
>
>
> Getting terms works for all PointFields except DatePointField. For DatePointField, the 
> request fails with an NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13403) Terms component fails for DatePointField

2019-06-18 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-13403:

Attachment: (was: SOLR-13403.patch)

> Terms component fails for DatePointField
> 
>
> Key: SOLR-13403
> URL: https://issues.apache.org/jira/browse/SOLR-13403
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-13403.patch
>
>
> Getting terms works for all PointFields except DatePointField. For DatePointField, the 
> request fails with an NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13403) Terms component fails for DatePointField

2019-06-18 Thread Munendra S N (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-13403:

Attachment: SOLR-13403.patch

> Terms component fails for DatePointField
> 
>
> Key: SOLR-13403
> URL: https://issues.apache.org/jira/browse/SOLR-13403
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Reporter: Munendra S N
>Priority: Major
> Attachments: SOLR-13403.patch, SOLR-13403.patch
>
>
> Getting terms works for all PointFields except DatePointField. For DatePointField, the 
> request fails with an NPE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8853) FileSwitchDirectory is broken if temp outputs are used

2019-06-18 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866720#comment-16866720
 ] 

Adrien Grand commented on LUCENE-8853:
--

I had to change the random directory creation logic to only use a 
FileSwitchDirectory when newDirectory is called, as opposed to newFSDirectory, 
since the hard-linking tests rely on the fact that newFSDirectory creates an 
FSDirectory.

{noformat}
11:52:25[junit4] Suite: 
org.apache.lucene.store.TestHardLinkCopyDirectoryWrapper
11:52:25[junit4] IGNOR/A 0.03s J3 | 
TestHardLinkCopyDirectoryWrapper.testFsyncDoesntCreateNewFiles
11:52:25[junit4]> Assumption #1: test only works for FSDirectory 
subclasses
11:52:25[junit4] IGNOR/A 0.02s J3 | 
TestHardLinkCopyDirectoryWrapper.testPendingDeletions
11:52:25[junit4]> Assumption #1: we can only install VirusCheckingFS on 
an FSDirectory
11:52:25[junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestHardLinkCopyDirectoryWrapper -Dtests.method=testCopyHardLinks 
-Dtests.seed=F143ACB5B6E7F830 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=chr-US -Dtests.timezone=America/Metlakatla -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
11:52:25[junit4] FAILURE 0.04s J3 | 
TestHardLinkCopyDirectoryWrapper.testCopyHardLinks <<<
11:52:25[junit4]> Throwable #1: java.lang.AssertionError: 
expected:<(dev=801,ino=4719568)> but was:<(dev=801,ino=4719567)>
11:52:25[junit4]>   at 
__randomizedtesting.SeedInfo.seed([F143ACB5B6E7F830:1016E6C04A87B1E0]:0)
11:52:25[junit4]>   at 
org.apache.lucene.store.TestHardLinkCopyDirectoryWrapper.testCopyHardLinks(TestHardLinkCopyDirectoryWrapper.java:83)
11:52:25[junit4]>   at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
11:52:25[junit4]>   at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
11:52:25[junit4]>   at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
11:52:25[junit4]>   at 
java.base/java.lang.reflect.Method.invoke(Method.java:566)
11:52:25[junit4]>   at 
java.base/java.lang.Thread.run(Thread.java:834)
11:52:25[junit4]   2> NOTE: leaving temporary files on disk at: 
/var/lib/jenkins/workspace/apache+lucene-solr+master/lucene/build/misc/test/J3/temp/lucene.store.TestHardLinkCopyDirectoryWrapper_F143ACB5B6E7F830-001
11:52:25[junit4]   2> NOTE: test params are: codec=Lucene80, 
sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@3be4c8d6),
 locale=chr-US, timezone=America/Metlakatla
11:52:25[junit4]   2> NOTE: Linux 3.16.0-9-amd64 amd64/Oracle Corporation 
11.0.2 (64-bit)/cpus=16,threads=1,free=416622920,total=536870912
11:52:25[junit4]   2> NOTE: All tests run in this JVM: 
[TestDiversifiedTopDocsCollector, TestHardLinkCopyDirectoryWrapper]
11:52:25[junit4] Completed [7/14 (1!)] on J3 in 1.62s, 43 tests, 1 failure, 
2 skipped <<< FAILURES!
{noformat}
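
For context, FileSwitchDirectory routes each file to one of two wrapped directories based 
on its file extension. A minimal sketch of such a setup (the path and the extension set 
are made up for illustration, using the standard 8.x constructor); the underlying problem 
is that a temporary output such as _0.fdx__0.tmp and its final name _0.fdx can resolve to 
different underlying directories, so the final rename is no longer a same-directory 
atomic move:

{code:java}
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FileSwitchDirectory;
import org.apache.lucene.store.MMapDirectory;
import org.apache.lucene.store.NIOFSDirectory;

public class FileSwitchSketch {
  static Directory open() throws IOException {
    Path path = Paths.get("/tmp/index");             // illustrative path
    Set<String> primaryExtensions = new HashSet<>();
    primaryExtensions.add("fdt");                    // stored fields data
    primaryExtensions.add("fdx");                    // stored fields index
    Directory primary = new MMapDirectory(path);     // *.fdt / *.fdx go here
    Directory secondary = new NIOFSDirectory(path);  // everything else goes here, which
                                                     // at the time of this bug included
                                                     // *.tmp outputs
    return new FileSwitchDirectory(primaryExtensions, primary, secondary, true);
  }
}
{code}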

> FileSwitchDirectory is broken if temp outputs are used
> --
>
> Key: LUCENE-8853
> URL: https://issues.apache.org/jira/browse/LUCENE-8853
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: master (9.0), 8.2
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> FileSwitchDirectory basically doesn't work if temp outputs are used for files 
> that are explicitly mapped by extension. Here is a failing test:
> {code}
> 16:49:40[junit4] Suite: 
> org.apache.lucene.search.suggest.analyzing.BlendedInfixSuggesterTest
> 16:49:40[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=BlendedInfixSuggesterTest 
> -Dtests.method=testBlendedSort_fieldWeightZero_shouldRankSuggestionsByPositionMatch
>  -Dtests.seed=16D8C93DC8FE5192 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=pt-LU -Dtests.timezone=US/Michigan -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> 16:49:40[junit4] ERROR   0.05s J1 | 
> BlendedInfixSuggesterTest.testBlendedSort_fieldWeightZero_shouldRankSuggestionsByPositionMatch
>  <<<
> 16:49:40[junit4]> Throwable #1: 
> java.nio.file.AtomicMoveNotSupportedException: _0.fdx__0.tmp -> _0.fdx: 
> source and dest are in different directories
> 16:49:40[junit4]> at 
> __randomizedtesting.SeedInfo.seed([16D8C93DC8FE5192:20E180A9490374CE]:0)
> 16:49:40[junit4]> at 
> org.apache.lucene.store.FileSwitchDirectory.rename(FileSwitchDirectory.java:201)
> 16:49:40[junit4]> at 
> org.apache.lucene.store.MockDirectoryWrapper.rename(MockDirectoryWrapper.java:231)
> 16:49:40[junit4]> at 
> 

[jira] [Commented] (LUCENE-8853) FileSwitchDirectory is broken if temp outputs are used

2019-06-18 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866691#comment-16866691
 ] 

ASF subversion and git services commented on LUCENE-8853:
-

Commit 0a915c32926257cb7406463b9829914a34540bee in lucene-solr's branch 
refs/heads/branch_8x from Adrien Grand
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0a915c3 ]

LUCENE-8853: Don't return a FileSwitchDirectory when asked for a FS directory.


> FileSwitchDirectory is broken if temp outputs are used
> --
>
> Key: LUCENE-8853
> URL: https://issues.apache.org/jira/browse/LUCENE-8853
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: master (9.0), 8.2
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> FileSwitchDirectory basically doesn't work if temp outputs are used for files 
> that are explicitly mapped by extension. Here is a failing test:
> {code}
> 16:49:40[junit4] Suite: 
> org.apache.lucene.search.suggest.analyzing.BlendedInfixSuggesterTest
> 16:49:40[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=BlendedInfixSuggesterTest 
> -Dtests.method=testBlendedSort_fieldWeightZero_shouldRankSuggestionsByPositionMatch
>  -Dtests.seed=16D8C93DC8FE5192 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=pt-LU -Dtests.timezone=US/Michigan -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> 16:49:40[junit4] ERROR   0.05s J1 | 
> BlendedInfixSuggesterTest.testBlendedSort_fieldWeightZero_shouldRankSuggestionsByPositionMatch
>  <<<
> 16:49:40[junit4]> Throwable #1: 
> java.nio.file.AtomicMoveNotSupportedException: _0.fdx__0.tmp -> _0.fdx: 
> source and dest are in different directories
> 16:49:40[junit4]> at 
> __randomizedtesting.SeedInfo.seed([16D8C93DC8FE5192:20E180A9490374CE]:0)
> 16:49:40[junit4]> at 
> org.apache.lucene.store.FileSwitchDirectory.rename(FileSwitchDirectory.java:201)
> 16:49:40[junit4]> at 
> org.apache.lucene.store.MockDirectoryWrapper.rename(MockDirectoryWrapper.java:231)
> 16:49:40[junit4]> at 
> org.apache.lucene.store.LockValidatingDirectoryWrapper.rename(LockValidatingDirectoryWrapper.java:56)
> 16:49:40[junit4]> at 
> org.apache.lucene.store.TrackingDirectoryWrapper.rename(TrackingDirectoryWrapper.java:64)
> 16:49:40[junit4]> at 
> org.apache.lucene.store.FilterDirectory.rename(FilterDirectory.java:89)
> 16:49:40[junit4]> at 
> org.apache.lucene.index.SortingStoredFieldsConsumer.flush(SortingStoredFieldsConsumer.java:56)
> 16:49:40[junit4]> at 
> org.apache.lucene.index.DefaultIndexingChain.flush(DefaultIndexingChain.java:152)
> 16:49:40[junit4]> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:468)
> 16:49:40[junit4]> at 
> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:555)
> 16:49:40[junit4]> at 
> org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:722)
> 16:49:40[junit4]> at 
> org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:3199)
> 16:49:40[junit4]> at 
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3444)
> 16:49:40[junit4]> at 
> org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3409)
> 16:49:40[junit4]> at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.commit(AnalyzingInfixSuggester.java:345)
> 16:49:40[junit4]> at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.build(AnalyzingInfixSuggester.java:315)
> 16:49:40[junit4]> at 
> org.apache.lucene.search.suggest.analyzing.BlendedInfixSuggesterTest.getBlendedInfixSuggester(BlendedInfixSuggesterTest.java:125)
> 16:49:40[junit4]> at 
> org.apache.lucene.search.suggest.analyzing.BlendedInfixSuggesterTest.testBlendedSort_fieldWeightZero_shouldRankSuggestionsByPositionMatch(BlendedInfixSuggesterTest.java:79)
> 16:49:40[junit4]> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 16:49:40[junit4]> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 16:49:40[junit4]> at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 16:49:40[junit4]> at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 16:49:40[junit4]> at 
> java.base/java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (LUCENE-8853) FileSwitchDirectory is broken if temp outputs are used

2019-06-18 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866692#comment-16866692
 ] 

ASF subversion and git services commented on LUCENE-8853:
-

Commit 2e468abecc98ffc6137fc5de2aefe8cd19cd6c8d in lucene-solr's branch 
refs/heads/master from Adrien Grand
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2e468ab ]

LUCENE-8853: Don't return a FileSwitchDirectory when asked for a FS directory.


> FileSwitchDirectory is broken if temp outputs are used
> --
>
> Key: LUCENE-8853
> URL: https://issues.apache.org/jira/browse/LUCENE-8853
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: master (9.0), 8.2
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> FileSwitchDirectory basically doesn't work if temp outputs are used for files 
> that are explicitly mapped by extension. Here is a failing test:
> {code}
> 16:49:40[junit4] Suite: 
> org.apache.lucene.search.suggest.analyzing.BlendedInfixSuggesterTest
> 16:49:40[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=BlendedInfixSuggesterTest 
> -Dtests.method=testBlendedSort_fieldWeightZero_shouldRankSuggestionsByPositionMatch
>  -Dtests.seed=16D8C93DC8FE5192 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=pt-LU -Dtests.timezone=US/Michigan -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> 16:49:40[junit4] ERROR   0.05s J1 | 
> BlendedInfixSuggesterTest.testBlendedSort_fieldWeightZero_shouldRankSuggestionsByPositionMatch
>  <<<
> 16:49:40[junit4]> Throwable #1: 
> java.nio.file.AtomicMoveNotSupportedException: _0.fdx__0.tmp -> _0.fdx: 
> source and dest are in different directories
> 16:49:40[junit4]> at 
> __randomizedtesting.SeedInfo.seed([16D8C93DC8FE5192:20E180A9490374CE]:0)
> 16:49:40[junit4]> at 
> org.apache.lucene.store.FileSwitchDirectory.rename(FileSwitchDirectory.java:201)
> 16:49:40[junit4]> at 
> org.apache.lucene.store.MockDirectoryWrapper.rename(MockDirectoryWrapper.java:231)
> 16:49:40[junit4]> at 
> org.apache.lucene.store.LockValidatingDirectoryWrapper.rename(LockValidatingDirectoryWrapper.java:56)
> 16:49:40[junit4]> at 
> org.apache.lucene.store.TrackingDirectoryWrapper.rename(TrackingDirectoryWrapper.java:64)
> 16:49:40[junit4]> at 
> org.apache.lucene.store.FilterDirectory.rename(FilterDirectory.java:89)
> 16:49:40[junit4]> at 
> org.apache.lucene.index.SortingStoredFieldsConsumer.flush(SortingStoredFieldsConsumer.java:56)
> 16:49:40[junit4]> at 
> org.apache.lucene.index.DefaultIndexingChain.flush(DefaultIndexingChain.java:152)
> 16:49:40[junit4]> at 
> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:468)
> 16:49:40[junit4]> at 
> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:555)
> 16:49:40[junit4]> at 
> org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:722)
> 16:49:40[junit4]> at 
> org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:3199)
> 16:49:40[junit4]> at 
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3444)
> 16:49:40[junit4]> at 
> org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3409)
> 16:49:40[junit4]> at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.commit(AnalyzingInfixSuggester.java:345)
> 16:49:40[junit4]> at 
> org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.build(AnalyzingInfixSuggester.java:315)
> 16:49:40[junit4]> at 
> org.apache.lucene.search.suggest.analyzing.BlendedInfixSuggesterTest.getBlendedInfixSuggester(BlendedInfixSuggesterTest.java:125)
> 16:49:40[junit4]> at 
> org.apache.lucene.search.suggest.analyzing.BlendedInfixSuggesterTest.testBlendedSort_fieldWeightZero_shouldRankSuggestionsByPositionMatch(BlendedInfixSuggesterTest.java:79)
> 16:49:40[junit4]> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 16:49:40[junit4]> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 16:49:40[junit4]> at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 16:49:40[junit4]> at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 16:49:40[junit4]> at 
> java.base/java.lang.Thread.run(Thread.java:834)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-13-ea+18) - Build # 24248 - Unstable!

2019-06-18 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24248/
Java: 64bit/jdk-13-ea+18 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

12 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TestPolicyCloud.testCreateCollectionAddReplica

Error Message:
Timeout occurred while waiting response from server at: 
https://127.0.0.1:41011/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occurred while 
waiting response from server at: https://127.0.0.1:41011/solr
at 
__randomizedtesting.SeedInfo.seed([EEEF357DFDDCF432:6ECF5053EC9F1C94]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:667)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.deleteAllCollections(MiniSolrCloudCluster.java:547)
at 
org.apache.solr.cloud.autoscaling.TestPolicyCloud.after(TestPolicyCloud.java:87)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Resolved] (SOLR-13329) Placing exact number of replicas on a set of solr nodes, instead of each solr node.

2019-06-18 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-13329.
---
   Resolution: Fixed
Fix Version/s: 8.2
   master (9.0)

> Placing exact number of replicas on a set of solr nodes, instead of each solr 
> node.
> ---
>
> Key: SOLR-13329
> URL: https://issues.apache.org/jira/browse/SOLR-13329
> Project: Solr
>  Issue Type: Improvement
>  Components: AutoScaling
>Affects Versions: master (9.0)
>Reporter: Amrit Sarkar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (9.0), 8.2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Let's say we have a requirement where we would like to place:
> {code}
> exactly X replicas on a set of Solr nodes comprising solr-node-1, solr-node-2, 
> ... solr-node-N.
> {code}
> e.g. exactly 1 replica on any of the respective 3 Solr nodes solr-node-1, 
> solr-node-2, solr-node-3, with the rest of the replicas placed on the 
> corresponding Solr nodes.
> Right now we don't have a straightforward way of doing this. 
> The autoscaling cluster policy also doesn't support such behavior; instead it 
> takes an array of Solr node names and treats them as separate rules, as per 
> https://lucene.apache.org/solr/guide/7_7/solrcloud-autoscaling-policy-preferences.html#sysprop-attribute.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13329) Placing exact number of replicas on a set of solr nodes, instead of each solr node.

2019-06-18 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866681#comment-16866681
 ] 

ASF subversion and git services commented on SOLR-13329:


Commit 545b61ca235dbaf4858cff61b8fa32c621574ce6 in lucene-solr's branch 
refs/heads/branch_8x from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=545b61c ]

SOLR-13329: changed the put:on-each to put: on-each-node


> Placing exact number of replicas on a set of solr nodes, instead of each solr 
> node.
> ---
>
> Key: SOLR-13329
> URL: https://issues.apache.org/jira/browse/SOLR-13329
> Project: Solr
>  Issue Type: Improvement
>  Components: AutoScaling
>Affects Versions: master (9.0)
>Reporter: Amrit Sarkar
>Assignee: Noble Paul
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Let's say we have a requirement where we would like to place:
> {code}
> exactly X replicas on a set of Solr nodes comprising solr-node-1, solr-node-2, 
> ... solr-node-N.
> {code}
> e.g. exactly 1 replica on any of the respective 3 Solr nodes solr-node-1, 
> solr-node-2, solr-node-3, with the rest of the replicas placed on the 
> corresponding Solr nodes.
> Right now we don't have a straightforward way of doing this. 
> The autoscaling cluster policy also doesn't support such behavior; instead it 
> takes an array of Solr node names and treats them as separate rules, as per 
> https://lucene.apache.org/solr/guide/7_7/solrcloud-autoscaling-policy-preferences.html#sysprop-attribute.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] jtibshirani commented on a change in pull request #715: LUCENE-7714 Add a range query that takes advantage of index sorting.

2019-06-18 Thread GitBox
jtibshirani commented on a change in pull request #715: LUCENE-7714 Add a range 
query that takes advantage of index sorting.
URL: https://github.com/apache/lucene-solr/pull/715#discussion_r294873227
 
 

 ##
 File path: 
lucene/sandbox/src/java/org/apache/lucene/search/IndexSortSortedNumericDocValuesRangeQuery.java
 ##
 @@ -0,0 +1,305 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search;
+
+import java.io.IOException;
+import java.util.Objects;
+import java.util.function.Function;
+import java.util.function.Supplier;
+
+import com.carrotsearch.randomizedtesting.annotations.Repeat;
+import org.apache.lucene.index.DocValues;
+import org.apache.lucene.index.IndexReader;
+import org.apache.lucene.index.LeafReader;
+import org.apache.lucene.index.LeafReaderContext;
+import org.apache.lucene.index.NumericDocValues;
+import org.apache.lucene.index.SortedNumericDocValues;
+
+/**
+ * A range query that can take advantage of the fact that the index is sorted 
to speed up
+ * execution. If the index is sorted on the same field as the query, it 
performs binary
+ * search on the field's {@link SortedNumericDocValues} to find the documents 
at the lower
+ * and upper ends of the range.
+ * 
+ * This optimized execution strategy is only used if the following conditions 
hold:
+ * - The index is sorted, and its primary sort is on the same field as the 
query.
+ * - The segments must have at most one field value per document (otherwise we 
cannot easily
+ * determine the matching document IDs through a binary search).
+ *
+ * If any of these conditions isn't met, the search is delegated to {@code 
fallbackQuery}.
+ *
+ * This fallback must be an equivalent range query -- it should produce the 
same documents and give
+ * constant scores. As an example, an {@link 
IndexSortSortedNumericDocValuesRangeQuery} might be
+ * constructed as follows:
+ * 
+ *   String field = "field";
+ *   long lowerValue = 0, long upperValue = 10;
+ *   Query fallbackQuery = LongPoint.newRangeQuery(field, lowerValue, 
upperValue);
+ *   Query rangeQuery = new IndexSortSortedNumericDocValuesRangeQuery(
+ *   field, lowerValue, upperValue, fallbackQuery);
+ * 
+ *
+ * @lucene.experimental
+ */
+public class IndexSortSortedNumericDocValuesRangeQuery extends Query {
+
+  private final String field;
+  private final long lowerValue;
+  private final long upperValue;
+  private final Query fallbackQuery;
+
+  /**
+   * Creates a new {@link IndexSortSortedNumericDocValuesRangeQuery}.
+   *
+   * @param field The field name.
+   * @param lowerValue The lower end of the range (inclusive).
+   * @param upperValue The upper end of the range (exclusive).
+   * @param fallbackQuery A query to fall back to if the optimization cannot 
be applied.
+  */
+  public IndexSortSortedNumericDocValuesRangeQuery(String field,
+   long lowerValue,
+   long upperValue,
+   Query fallbackQuery) {
+this.field = Objects.requireNonNull(field);
+this.lowerValue = lowerValue;
+this.upperValue = upperValue;
+this.fallbackQuery = fallbackQuery;
+  }
+
+  @Override
+  public boolean equals(Object o) {
+if (this == o) return true;
+if (o == null || getClass() != o.getClass()) return false;
+IndexSortSortedNumericDocValuesRangeQuery that = 
(IndexSortSortedNumericDocValuesRangeQuery) o;
+return lowerValue == that.lowerValue &&
+upperValue == that.upperValue &&
+Objects.equals(field, that.field) &&
+Objects.equals(fallbackQuery, that.fallbackQuery);
+  }
+
+  @Override
+  public int hashCode() {
+return Objects.hash(field, lowerValue, upperValue, fallbackQuery);
+  }
+
+  @Override
+  public void visit(QueryVisitor visitor) {
+if (visitor.acceptField(field)) {
+  visitor.visitLeaf(this);
+  fallbackQuery.visit(visitor);
+}
+  }
+
+  @Override
+  public String toString(String field) {
+StringBuilder b = new StringBuilder();
+if (this.field.equals(field) == false) {
+  b.append(this.field).append(":");
+}
+return b
+.append("[")

[GitHub] [lucene-solr] jtibshirani commented on a change in pull request #715: LUCENE-7714 Add a range query that takes advantage of index sorting.

2019-06-18 Thread GitBox
jtibshirani commented on a change in pull request #715: LUCENE-7714 Add a range 
query that takes advantage of index sorting.
URL: https://github.com/apache/lucene-solr/pull/715#discussion_r294873037
 
 

 ##
 File path: 
lucene/sandbox/src/java/org/apache/lucene/search/IndexSortSortedNumericDocValuesRangeQuery.java
 ##
 @@ -0,0 +1,305 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search;
+
+import java.io.IOException;
+import java.util.Objects;
+import java.util.function.Function;
+import java.util.function.Supplier;
+
+import com.carrotsearch.randomizedtesting.annotations.Repeat;
+import org.apache.lucene.index.DocValues;
+import org.apache.lucene.index.IndexReader;
+import org.apache.lucene.index.LeafReader;
+import org.apache.lucene.index.LeafReaderContext;
+import org.apache.lucene.index.NumericDocValues;
+import org.apache.lucene.index.SortedNumericDocValues;
+
+/**
+ * A range query that can take advantage of the fact that the index is sorted 
to speed up
+ * execution. If the index is sorted on the same field as the query, it 
performs binary
+ * search on the field's {@link SortedNumericDocValues} to find the documents 
at the lower
+ * and upper ends of the range.
+ * 
+ * This optimized execution strategy is only used if the following conditions 
hold:
+ * - The index is sorted, and its primary sort is on the same field as the 
query.
+ * - The segments must have at most one field value per document (otherwise we 
cannot easily
 
 Review comment:
    


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] jtibshirani commented on a change in pull request #715: LUCENE-7714 Add a range query that takes advantage of index sorting.

2019-06-18 Thread GitBox
jtibshirani commented on a change in pull request #715: LUCENE-7714 Add a range 
query that takes advantage of index sorting.
URL: https://github.com/apache/lucene-solr/pull/715#discussion_r294872999
 
 

 ##
 File path: 
lucene/sandbox/src/java/org/apache/lucene/search/IndexSortSortedNumericDocValuesRangeQuery.java
 ##
 @@ -0,0 +1,305 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search;
+
+import java.io.IOException;
+import java.util.Objects;
+import java.util.function.Function;
+import java.util.function.Supplier;
+
+import com.carrotsearch.randomizedtesting.annotations.Repeat;
+import org.apache.lucene.index.DocValues;
+import org.apache.lucene.index.IndexReader;
+import org.apache.lucene.index.LeafReader;
+import org.apache.lucene.index.LeafReaderContext;
+import org.apache.lucene.index.NumericDocValues;
+import org.apache.lucene.index.SortedNumericDocValues;
+
+/**
+ * A range query that can take advantage of the fact that the index is sorted 
to speed up
+ * execution. If the index is sorted on the same field as the query, it 
performs binary
+ * search on the field's {@link SortedNumericDocValues} to find the documents 
at the lower
+ * and upper ends of the range.
+ * 
+ * This optimized execution strategy is only used if the following conditions 
hold:
+ * - The index is sorted, and its primary sort is on the same field as the 
query.
+ * - The segments must have at most one field value per document (otherwise we 
cannot easily
+ * determine the matching document IDs through a binary search).
+ *
+ * If any of these conditions isn't met, the search is delegated to {@code 
fallbackQuery}.
+ *
+ * This fallback must be an equivalent range query -- it should produce the 
same documents and give
+ * constant scores. As an example, an {@link 
IndexSortSortedNumericDocValuesRangeQuery} might be
+ * constructed as follows:
+ * 
+ *   String field = "field";
+ *   long lowerValue = 0, long upperValue = 10;
+ *   Query fallbackQuery = LongPoint.newRangeQuery(field, lowerValue, 
upperValue);
+ *   Query rangeQuery = new IndexSortSortedNumericDocValuesRangeQuery(
+ *   field, lowerValue, upperValue, fallbackQuery);
+ * 
+ *
+ * @lucene.experimental
+ */
+public class IndexSortSortedNumericDocValuesRangeQuery extends Query {
+
+  private final String field;
+  private final long lowerValue;
+  private final long upperValue;
+  private final Query fallbackQuery;
+
+  /**
+   * Creates a new {@link IndexSortSortedNumericDocValuesRangeQuery}.
+   *
+   * @param field The field name.
+   * @param lowerValue The lower end of the range (inclusive).
+   * @param upperValue The upper end of the range (exclusive).
+   * @param fallbackQuery A query to fall back to if the optimization cannot 
be applied.
+  */
+  public IndexSortSortedNumericDocValuesRangeQuery(String field,
+   long lowerValue,
+   long upperValue,
+   Query fallbackQuery) {
+this.field = Objects.requireNonNull(field);
+this.lowerValue = lowerValue;
+this.upperValue = upperValue;
+this.fallbackQuery = fallbackQuery;
+  }
+
+  @Override
+  public boolean equals(Object o) {
+if (this == o) return true;
+if (o == null || getClass() != o.getClass()) return false;
+IndexSortSortedNumericDocValuesRangeQuery that = 
(IndexSortSortedNumericDocValuesRangeQuery) o;
+return lowerValue == that.lowerValue &&
+upperValue == that.upperValue &&
+Objects.equals(field, that.field) &&
+Objects.equals(fallbackQuery, that.fallbackQuery);
+  }
+
+  @Override
+  public int hashCode() {
+return Objects.hash(field, lowerValue, upperValue, fallbackQuery);
+  }
+
+  @Override
+  public void visit(QueryVisitor visitor) {
+if (visitor.acceptField(field)) {
+  visitor.visitLeaf(this);
+  fallbackQuery.visit(visitor);
+}
+  }
+
+  @Override
+  public String toString(String field) {
+StringBuilder b = new StringBuilder();
+if (this.field.equals(field) == false) {
+  b.append(this.field).append(":");
+}
+return b
+.append("[")

[GitHub] [lucene-solr] jtibshirani commented on a change in pull request #715: LUCENE-7714 Add a range query that takes advantage of index sorting.

2019-06-18 Thread GitBox
jtibshirani commented on a change in pull request #715: LUCENE-7714 Add a range 
query that takes advantage of index sorting.
URL: https://github.com/apache/lucene-solr/pull/715#discussion_r294872966
 
 

 ##
 File path: 
lucene/sandbox/src/java/org/apache/lucene/search/IndexSortSortedNumericDocValuesRangeQuery.java
 ##
 @@ -0,0 +1,305 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.search;
+
+import java.io.IOException;
+import java.util.Objects;
+import java.util.function.Function;
+import java.util.function.Supplier;
+
+import com.carrotsearch.randomizedtesting.annotations.Repeat;
+import org.apache.lucene.index.DocValues;
+import org.apache.lucene.index.IndexReader;
+import org.apache.lucene.index.LeafReader;
+import org.apache.lucene.index.LeafReaderContext;
+import org.apache.lucene.index.NumericDocValues;
+import org.apache.lucene.index.SortedNumericDocValues;
+
+/**
+ * A range query that can take advantage of the fact that the index is sorted 
to speed up
+ * execution. If the index is sorted on the same field as the query, it 
performs binary
+ * search on the field's {@link SortedNumericDocValues} to find the documents 
at the lower
+ * and upper ends of the range.
+ * 
+ * This optimized execution strategy is only used if the following conditions 
hold:
+ * - The index is sorted, and its primary sort is on the same field as the 
query.
+ * - The segments must have at most one field value per document (otherwise we 
cannot easily
+ * determine the matching document IDs through a binary search).
+ *
+ * If any of these conditions isn't met, the search is delegated to {@code 
fallbackQuery}.
+ *
+ * This fallback must be an equivalent range query -- it should produce the 
same documents and give
+ * constant scores. As an example, an {@link 
IndexSortSortedNumericDocValuesRangeQuery} might be
+ * constructed as follows:
+ * 
+ *   String field = "field";
+ *   long lowerValue = 0, long upperValue = 10;
+ *   Query fallbackQuery = LongPoint.newRangeQuery(field, lowerValue, 
upperValue);
+ *   Query rangeQuery = new IndexSortSortedNumericDocValuesRangeQuery(
+ *   field, lowerValue, upperValue, fallbackQuery);
+ * 
+ *
+ * @lucene.experimental
+ */
+public class IndexSortSortedNumericDocValuesRangeQuery extends Query {
+
+  private final String field;
+  private final long lowerValue;
+  private final long upperValue;
+  private final Query fallbackQuery;
+
+  /**
+   * Creates a new {@link IndexSortSortedNumericDocValuesRangeQuery}.
+   *
+   * @param field The field name.
+   * @param lowerValue The lower end of the range (inclusive).
+   * @param upperValue The upper end of the range (exclusive).
+   * @param fallbackQuery A query to fall back to if the optimization cannot be applied.
+   */
+  public IndexSortSortedNumericDocValuesRangeQuery(String field,
+                                                   long lowerValue,
+                                                   long upperValue,
+                                                   Query fallbackQuery) {
+    this.field = Objects.requireNonNull(field);
+    this.lowerValue = lowerValue;
+    this.upperValue = upperValue;
+    this.fallbackQuery = fallbackQuery;
+  }
+
+  @Override
+  public boolean equals(Object o) {
+    if (this == o) return true;
+    if (o == null || getClass() != o.getClass()) return false;
+    IndexSortSortedNumericDocValuesRangeQuery that = (IndexSortSortedNumericDocValuesRangeQuery) o;
+    return lowerValue == that.lowerValue &&
+        upperValue == that.upperValue &&
+        Objects.equals(field, that.field) &&
+        Objects.equals(fallbackQuery, that.fallbackQuery);
+  }
+
+  @Override
+  public int hashCode() {
+    return Objects.hash(field, lowerValue, upperValue, fallbackQuery);
+  }
+
+  @Override
+  public void visit(QueryVisitor visitor) {
+    if (visitor.acceptField(field)) {
+      visitor.visitLeaf(this);
+      fallbackQuery.visit(visitor);
+    }
+  }
+
+  @Override
+  public String toString(String field) {
+    StringBuilder b = new StringBuilder();
+    if (this.field.equals(field) == false) {
+      b.append(this.field).append(":");
+    }
+    return b
+        .append("[")
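
To show how the new query is intended to be exercised end to end, here is a minimal usage sketch. It is not part of the patch: the field name, the values, and the in-memory directory are illustrative, and the construction simply follows the Javadoc example above.

{code}
// Hedged usage sketch, not from the patch: index sorted on the query field,
// doc values for the binary search, points for the fallback query.
import org.apache.lucene.document.Document;
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.document.SortedNumericDocValuesField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.IndexSortSortedNumericDocValuesRangeQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.SortedNumericSortField;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class IndexSortRangeQueryExample {
  public static void main(String[] args) throws Exception {
    Directory dir = new ByteBuffersDirectory();
    IndexWriterConfig iwc = new IndexWriterConfig();
    // The primary index sort is on the same field as the query, so the optimization can apply.
    iwc.setIndexSort(new Sort(new SortedNumericSortField("timestamp", SortField.Type.LONG)));
    try (IndexWriter writer = new IndexWriter(dir, iwc)) {
      for (long value = 0; value < 100; value++) {
        Document doc = new Document();
        doc.add(new SortedNumericDocValuesField("timestamp", value));
        doc.add(new LongPoint("timestamp", value));
        writer.addDocument(doc);
      }
    }
    try (DirectoryReader reader = DirectoryReader.open(dir)) {
      IndexSearcher searcher = new IndexSearcher(reader);
      Query fallback = LongPoint.newRangeQuery("timestamp", 10, 20);
      Query query = new IndexSortSortedNumericDocValuesRangeQuery("timestamp", 10, 20, fallback);
      System.out.println("matches: " + searcher.count(query));
    }
  }
}
{code}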

[jira] [Created] (SOLR-13558) Allow dynamic resizing of SolrCache-s

2019-06-18 Thread Andrzej Bialecki (JIRA)
Andrzej Bialecki  created SOLR-13558:


 Summary: Allow dynamic resizing of SolrCache-s
 Key: SOLR-13558
 URL: https://issues.apache.org/jira/browse/SOLR-13558
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Andrzej Bialecki 
Assignee: Andrzej Bialecki 


Currently SolrCache limits are configured statically and can't be reconfigured 
without cache re-initialization (core reload), which is costly. In some 
situations it would help to be able to dynamically resize a cache based on 
resource contention (such as the total heap size used for caching across 
all cores in a node).

Each cache implementation already knows how to evict its entries when it runs 
into its configured limits - what is missing is a uniform API that exposes this 
mechanism.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-13329) Placing exact number of replicas on a set of solr nodes, instead of each solr node.

2019-06-18 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-13329:
-

Assignee: Noble Paul

> Placing exact number of replicas on a set of solr nodes, instead of each solr 
> node.
> ---
>
> Key: SOLR-13329
> URL: https://issues.apache.org/jira/browse/SOLR-13329
> Project: Solr
>  Issue Type: Improvement
>  Components: AutoScaling
>Affects Versions: master (9.0)
>Reporter: Amrit Sarkar
>Assignee: Noble Paul
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Let's say we have a requirement where we would like to place:
> {code}
> exactly X replicas on a set of solr nodes comprising solr-node-1, solr-node-2, 
> ... solr-node-N.
> {code}
> e.g. exactly 1 replica on any one of the 3 solr nodes solr-node-1, 
> solr-node-2, solr-node-3, while the rest of the replicas can be placed on 
> the remaining solr nodes.
> Right now we don't have a straightforward way of doing this. 
> The autoscaling cluster policy also doesn't support such behavior; instead it 
> takes an array of solr node names and treats them as separate rules, as per 
> https://lucene.apache.org/solr/guide/7_7/solrcloud-autoscaling-policy-preferences.html#sysprop-attribute.
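
For illustration, the gap in today's syntax can be restated as a policy snippet; this is commentary on the issue, not syntax taken from it (the node names are the ones used above):

{code}
// What the issue asks for (not expressible today): exactly 1 replica somewhere
// within the set {solr-node-1, solr-node-2, solr-node-3}.
// What a node array currently means: it is expanded into one rule per node,
// i.e. it constrains each listed node separately:
{"replica": 1, "node": ["solr-node-1", "solr-node-2", "solr-node-3"]}
{code}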



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13504) improve autoscaling syntax by adding a nodeset attribute

2019-06-18 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-13504.
---
   Resolution: Fixed
Fix Version/s: 8.2

> improve autoscaling syntax by adding a nodeset attribute
> 
>
> Key: SOLR-13504
> URL: https://issues.apache.org/jira/browse/SOLR-13504
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Fix For: 8.2
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> {code}
> {"replica" : 1,  "shard": "#EACH",  "nodeset" : {"sysprop.x" :"y"} }
> //is equivalent to
> {"replica" : 1, "shard": "#EACH", "sysprop.x" :"y" }
> {"replica" : 1, "nodeset" :["node-1", "node-2"] }
> //is equivalent to
> {"replica" : 1, "node" :["node-1", "node-2"] }
> {code}
> all properties such as {{nodeRole}}, {{freedisk}}, {{port}}, {{host}}, 
> {{diskType}}, etc. will move to the nodeset attribute, and we will eventually 
> deprecate the old syntax
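
For example, following the "equivalent to" pattern quoted above, a rule on one of the listed properties would presumably read along these lines (illustrative only, not taken from the issue):

{code}
// Illustrative only, mirroring the equivalences above:
{"replica" : 1, "shard": "#EACH", "nodeset" : {"port" : "8983"} }
//is equivalent to
{"replica" : 1, "shard": "#EACH", "port" : "8983" }
{code}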



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


