[jira] [Commented] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2019-05-09 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836929#comment-16836929
 ] 

Jan Høydahl commented on SOLR-11959:


Thanks for working on this :) I agree that we cannot have custom code in PKI 
for every plugin that wants to use it. So it would be better to try to force 
CDCR into using a "solr thread pool" for its communication, in such a way that 
the existing code path will classify it as a request that needs the header. 
Alternatively, it is OK to introduce an additional way of detecting the need 
for the header, as you have begun, if that is a generic mechanism that is 
documented for other components to use as well. Wdyt?
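A minimal, self-contained sketch of the thread-pool idea (plain java.util.concurrent, not Solr's actual ExecutorUtil or PKIAuthenticationPlugin APIs; the thread-name marker below is purely hypothetical): an outgoing-request interceptor can decide whether to attach the inter-node auth header by checking whether the current thread came from a server-managed pool.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

public class PkiHeaderSketch {

    // Hypothetical marker; Solr's real check is different (it classifies
    // requests made from its own server-side machinery), but the shape of
    // the idea is the same: tag the pool, then test the tag.
    static final String SOLR_POOL_PREFIX = "solr-cdcr-";

    // Thread factory that tags every thread it creates, so request code
    // can later recognize "server-initiated" traffic.
    static ThreadFactory taggedFactory() {
        return r -> {
            Thread t = new Thread(r);
            t.setName(SOLR_POOL_PREFIX + t.getId());
            return t;
        };
    }

    // Stand-in for the interceptor's decision: attach the PKI header only
    // for requests running on a tagged (server-side) thread.
    static boolean needsPkiHeader() {
        return Thread.currentThread().getName().startsWith(SOLR_POOL_PREFIX);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1, taggedFactory());
        try {
            System.out.println("from pool: " + pool.submit(PkiHeaderSketch::needsPkiHeader).get());
            System.out.println("from main: " + needsPkiHeader());
        } finally {
            pool.shutdown();
        }
    }
}
```

Routing CDCR's replication work through such a pool would let the existing classification logic add the header with no CDCR-specific code in the auth plugin.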

> CDCR unauthorized to replicate to a target collection that is update 
> protected in security.json
> ---
>
> Key: SOLR-11959
> URL: https://issues.apache.org/jira/browse/SOLR-11959
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, CDCR
>Affects Versions: 7.2
>Reporter: Donny Andrews
>Priority: Major
> Attachments: SOLR-11959.patch, SOLR-11959.patch
>
>
> Steps to reproduce: 
>  # Create a source and a target collection in their respective clusters. 
>  # Update security.json to require a non-admin role to read and write. 
>  # Index to source collection 
> Expected: 
> The target collection should receive the update
> Actual:
> {code:java}
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://redacted/solr/redacted: Expected mime type 
> application/octet-stream but got text/html. 
>  
>  
>  Error 401 Unauthorized request, Response code: 401
>  
>  HTTP ERROR 401
>  Problem accessing /solr/redacted/update. Reason:
>   Unauthorized request, Response code: 401
>  
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
>  at 
> org.apache.solr.handler.CdcrReplicator.sendRequest(CdcrReplicator.java:140)
>  at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:104)
>  at 
> org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13263) Facet Heat Map should support GeoJSON

2019-05-09 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836861#comment-16836861
 ] 

David Smiley commented on SOLR-13263:
-

bq. Even after changing the max Latitude value to 70, the parsed shape's 
bounding box is still GeoWorld.

Yeah, I noticed that too and can't explain it.  So I think it's likely there's 
a bug in Geo3D... but not a serious bug, since getBounds is allowed to return 
an approximation that contains the true bounds... granted, the whole world is 
quite an approximation :-)  Usually the ramification would be sub-optimal 
performance, as this bounds is used as a fast-path short-circuit.
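To illustrate why an over-broad getBounds is a performance issue rather than a correctness one, here is a sketch of a bounding-box fast path (plain-Java stand-in types, not the actual Spatial4j/Geo3D classes):

```java
public class BoundsFastPath {

    /** Minimal axis-aligned box; a stand-in for Spatial4j's Rectangle. */
    static final class Rect {
        final double minX, maxX, minY, maxY;
        Rect(double minX, double maxX, double minY, double maxY) {
            this.minX = minX; this.maxX = maxX; this.minY = minY; this.maxY = maxY;
        }
        boolean intersects(Rect o) {
            return minX <= o.maxX && o.minX <= maxX
                && minY <= o.maxY && o.minY <= maxY;
        }
    }

    // getBounds() is allowed to return ANY box containing the true bounds.
    // The fast path stays correct either way; an oversized box (e.g. the
    // whole world) simply stops filtering anything out, so every candidate
    // falls through to the expensive exact check.
    static boolean mightMatch(Rect shapeBounds, Rect queryCell) {
        return shapeBounds.intersects(queryCell);
    }

    public static void main(String[] args) {
        Rect world = new Rect(-180, 180, -90, 90);   // over-approximated bounds
        Rect tight = new Rect(10, 20, 40, 70);       // accurate bounds
        Rect farCell = new Rect(100, 110, -50, -40);
        System.out.println(mightMatch(tight, farCell)); // false -> cell skipped cheaply
        System.out.println(mightMatch(world, farCell)); // true  -> exact check required
    }
}
```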

Anyway... let's back away from using Geo3D here.  It complicates your test 
because now you need another field type as well.  Can you simply add one test 
method that does the heatmap on a lineString?  I know it probably has no 
practical application and seems silly, but the point here is only to test that 
GeoJSON can be parsed and that the heatmap gives results on it that roughly 
appear appropriate.  That's it.

If you are in the mood to increase the scope some, you might look at merging 
the logic of {{org.apache.solr.schema.AbstractSpatialFieldType#parseShape}} 
into {{SpatialUtils.parseGeomSolrException}} such that parseShape can simply 
call parseGeomSolrException, which would likely accept an optional ShapeReader 
argument.  The result would be one method that looks for a plain point 
"lat,lon", or the rectangle range syntax "[lat,lon TO lat2,lon2]", or finally 
whatever Spatial4j supports (WKT & GeoJSON).  That would have greater impact 
than this issue -- it'd mean the rectangle-range syntax would then work at 
indexing time for the spatial field types that use it.  On second thought, 
that'd deserve its own issue.


I think just keep things simple for now.
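The merged entry point suggested above could be sketched roughly like this (toy stand-in types and toy parsing, not the actual Solr/Spatial4j signatures): try the cheap Solr-specific syntaxes first, then delegate everything else (WKT, GeoJSON) to a full reader.

```java
public class ShapeParseDispatch {

    interface Shape {}
    static final class Point implements Shape {
        final double lat, lon;
        Point(double lat, double lon) { this.lat = lat; this.lon = lon; }
    }
    static final class Rect implements Shape {
        final Point min, max;
        Rect(Point min, Point max) { this.min = min; this.max = max; }
    }
    /** Marker for input handed off to a full reader (Spatial4j in real code). */
    static final class Delegated implements Shape {
        final String raw;
        Delegated(String raw) { this.raw = raw; }
    }

    // One method tries the plain point, then the rectangle range syntax,
    // and finally hands off to the general-purpose reader.
    static Shape parse(String s) {
        s = s.trim();
        if (s.startsWith("[") && s.endsWith("]")) {       // "[lat,lon TO lat2,lon2]"
            String[] corners = s.substring(1, s.length() - 1).split("\\s+TO\\s+");
            return new Rect(point(corners[0]), point(corners[1]));
        }
        if (s.matches("-?[0-9.]+\\s*,\\s*-?[0-9.]+")) {   // plain "lat,lon"
            return point(s);
        }
        return new Delegated(s);                          // WKT, GeoJSON, ...
    }

    private static Point point(String s) {
        String[] p = s.split(",");
        return new Point(Double.parseDouble(p[0].trim()), Double.parseDouble(p[1].trim()));
    }

    public static void main(String[] args) {
        System.out.println(parse("12.5,-70").getClass().getSimpleName());
        System.out.println(parse("[10,20 TO 30,40]").getClass().getSimpleName());
        System.out.println(parse("POINT(1 2)").getClass().getSimpleName());
    }
}
```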

> Facet Heat Map should support GeoJSON
> -
>
> Key: SOLR-13263
> URL: https://issues.apache.org/jira/browse/SOLR-13263
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting
>Affects Versions: 8.0, 8.1, master (9.0)
>Reporter: Bar Rotstein
>Priority: Major
>  Labels: Facets, Geolocation, facet, faceting, geo
> Attachments: SOLR-13263-nocommit-geo3d-failure.patch, 
> SOLR-13263-nocommit.patch
>
>
> Currently Facet Heatmap (geographical facets) does not support any input 
> formats other than WKT and the '[ ]' rectangle syntax. This seems to be 
> because FacetHeatmap.Parser#parse uses SpatialUtils#parseGeomSolrException, 
> which in turn uses a deprecated method (SpatialContext#readShapeFromWkt) to 
> parse the string input.
> The newer way of parsing a String into a Shape object should be used 
> instead; it makes the code a lot cleaner and should support more formats 
> (including GeoJSON).
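A multi-format parse path like the one described boils down to routing the input by its leading syntax before parsing it in earnest. A toy sniffer (hypothetical, not the actual Spatial4j ShapeReader API) might look like:

```java
public class FormatSniff {

    enum Format { GEOJSON, WKT, SOLR_RECT, UNKNOWN }

    // Rough first-glance dispatch, similar in spirit to how a multi-format
    // reader can route input to the right parser.
    static Format sniff(String s) {
        s = s.trim();
        if (s.startsWith("{")) return Format.GEOJSON;    // {"type":"Point",...}
        if (s.startsWith("[")) return Format.SOLR_RECT;  // [lat,lon TO lat2,lon2]
        if (s.matches("(?i)(POINT|LINESTRING|POLYGON|MULTI\\w+|GEOMETRYCOLLECTION|ENVELOPE)\\s*\\(.*"))
            return Format.WKT;
        return Format.UNKNOWN;
    }

    public static void main(String[] args) {
        System.out.println(sniff("{\"type\":\"Point\",\"coordinates\":[1,2]}"));
        System.out.println(sniff("POINT(1 2)"));
        System.out.println(sniff("[10,20 TO 30,40]"));
    }
}
```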






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 24058 - Failure!

2019-05-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24058/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 2006 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J0-20190510_013128_1457772369167748430425.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 


[JENKINS] Lucene-Solr-Tests-8.x - Build # 194 - Unstable

2019-05-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/194/

1 tests failed.
FAILED:  
org.apache.solr.cloud.MetricsHistoryWithAuthIntegrationTest.testValuesAreCollected

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([E1C57C3BB11A3190:C938215C60FEAED5]:0)
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertNotNull(Assert.java:712)
at org.junit.Assert.assertNotNull(Assert.java:722)
at 
org.apache.solr.cloud.MetricsHistoryWithAuthIntegrationTest.testValuesAreCollected(MetricsHistoryWithAuthIntegrationTest.java:86)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12904 lines...]
   [junit4] Suite: org.apache.solr.cloud.MetricsHistoryWithAuthIntegrationTest
   [junit4]   2> 68899 INFO  
(SUITE-MetricsHistoryWithAuthIntegrationTest-seed#[E1C57C3BB11A3190]-worker) [  
  ] o.a.s.

[JENKINS] Lucene-Solr-8.x-Solaris (64bit/jdk1.8.0) - Build # 117 - Unstable!

2019-05-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Solaris/117/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([24EFFC1DAC3459F7:898F4816B10BF182]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13658 lines...]
   [junit4] Suite: org.apache.solr.cloud.DeleteReplicaTest
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-8.x-Sol

[JENKINS] Lucene-Solr-NightlyTests-8.1 - Build # 30 - Unstable

2019-05-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.1/30/

1 tests failed.
FAILED:  
org.apache.lucene.search.TestSearcherManager.testConcurrentIndexCloseSearchAndRefresh

Error Message:
Captured an uncaught exception in thread: Thread[id=7853, name=Thread-7649, 
state=RUNNABLE, group=TGRP-TestSearcherManager]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=7853, name=Thread-7649, state=RUNNABLE, 
group=TGRP-TestSearcherManager]
Caused by: java.lang.RuntimeException: java.nio.file.FileSystemException: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.1/checkout/lucene/build/core/test/J1/temp/lucene.search.TestSearcherManager_1EA183A66A8BB81E-001/tempDir-001/_5f_BlockTreeOrds_0.pay:
 Too many open files
at __randomizedtesting.SeedInfo.seed([1EA183A66A8BB81E]:0)
at 
org.apache.lucene.search.TestSearcherManager$11.run(TestSearcherManager.java:677)
Caused by: java.nio.file.FileSystemException: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.1/checkout/lucene/build/core/test/J1/temp/lucene.search.TestSearcherManager_1EA183A66A8BB81E-001/tempDir-001/_5f_BlockTreeOrds_0.pay:
 Too many open files
at 
org.apache.lucene.mockfile.HandleLimitFS.onOpen(HandleLimitFS.java:48)
at 
org.apache.lucene.mockfile.HandleTrackingFS.callOpenHook(HandleTrackingFS.java:81)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newByteChannel(HandleTrackingFS.java:271)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newByteChannel(FilterFileSystemProvider.java:212)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newByteChannel(HandleTrackingFS.java:240)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newByteChannel(FilterFileSystemProvider.java:212)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at 
org.apache.lucene.store.SimpleFSDirectory.openInput(SimpleFSDirectory.java:77)
at 
org.apache.lucene.util.LuceneTestCase.slowFileExists(LuceneTestCase.java:2820)
at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:747)
at 
org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.(Lucene50PostingsReader.java:97)
at 
org.apache.lucene.codecs.blocktreeords.BlockTreeOrdsPostingsFormat.fieldsProducer(BlockTreeOrdsPostingsFormat.java:90)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.(PerFieldPostingsFormat.java:288)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:368)
at 
org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:114)
at org.apache.lucene.index.SegmentReader.(SegmentReader.java:84)
at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:177)
at 
org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:219)
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:109)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:526)
at 
org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:116)
at 
org.apache.lucene.search.SearcherManager.(SearcherManager.java:108)
at 
org.apache.lucene.search.SearcherManager.(SearcherManager.java:76)
at 
org.apache.lucene.search.TestSearcherManager$11.run(TestSearcherManager.java:665)




Build Log:
[...truncated 955 lines...]
   [junit4] Suite: org.apache.lucene.search.TestSearcherManager
   [junit4]   2> ماي 09, 2019 10:50:59 م 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: Uncaught exception in thread: 
Thread[Thread-7649,5,TGRP-TestSearcherManager]
   [junit4]   2> java.lang.RuntimeException: java.nio.file.FileSystemException: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.1/checkout/lucene/build/core/test/J1/temp/lucene.search.TestSearcherManager_1EA183A66A8BB81E-001/tempDir-001/_5f_BlockTreeOrds_0.pay:
 Too many open files
   [junit4]   2>at 
__randomizedtesting.SeedInfo.seed([1EA183A66A8BB81E]:0)
   [junit4]   2>at 
org.apache.lucene.search.TestSearcherManager$11.run(TestSearcherManager.java:677)
   [junit4]   2> Caused by: java.nio.file.FileSystemException: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.1/checkout/lucene/build/core/test/J1/temp/lucene.search.TestSearcherManager_1EA183A66A8BB81E-001/tempDir-001/_5f_BlockTreeOrds_0.pay:
 Too many open files
   [junit4]   2>at 
org.apache.lucene.mockfile.HandleLimitFS.onOpen(HandleLimitFS.java:48)
   [junit4]   2>at 
org.apache.lucene.mockfile.HandleTrackingFS.callOpenHook(HandleTrackingFS.java:81)
   [junit4]   2>at 
org.apache.lucene.mockfile.HandleTrackingFS.newByteCha

Re: [VOTE] Release Lucene/Solr 8.1.0 RC2

2019-05-09 Thread Varun Thacker
+1
SUCCESS! [1:08:17.391699]

On Thu, May 9, 2019 at 11:01 AM jim ferenczi  wrote:

> +1
> SUCCESS! [1:14:41.737009]
>
> On Thu, May 9, 2019 at 18:56, Kevin Risden wrote:
>
>> +1
>> SUCCESS! [1:17:45.727492]
>>
>> Kevin Risden
>>
>>
>> On Thu, May 9, 2019 at 11:37 AM Ishan Chattopadhyaya <
>> ichattopadhy...@gmail.com> wrote:
>>
>>> Please vote for release candidate 2 for Lucene/Solr 8.1.0
>>>
>>> The artifacts can be downloaded from:
>>>
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC2-revdbe5ed0b2f17677ca6c904ebae919363f2d36a0a
>>>
>>> You can run the smoke tester directly with this command:
>>>
>>> python3 -u dev-tools/scripts/smokeTestRelease.py \
>>>
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC2-revdbe5ed0b2f17677ca6c904ebae919363f2d36a0a
>>>
>>> Here's my +1
>>> SUCCESS! [0:44:31.244021]
>>>


Re: Lucene/Solr 7.7.2

2019-05-09 Thread Jan Høydahl
Looks safe to backport, go ahead!

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On May 9, 2019 at 23:07, Cassandra Targett wrote:
> 
> Someone brought https://issues.apache.org/jira/browse/SOLR-13112 
>  to my attention today, 
> which upgraded the Jackson dependencies to 2.9.8. Would it be possible for 
> those to be backported to branch_7_7 for inclusion in 7.7.2?
> 
> Cassandra
> On May 8, 2019, 2:51 PM -0500, Jan Høydahl wrote:
>> Yes please do!
>> 
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com 
>> 
>>> On May 8, 2019 at 18:25, Ishan Chattopadhyaya wrote:
>>> I would like to backport SOLR-13410, as without this the ADDROLE of
>>> "overseer" is effectively broken. Please let me know if that is fine.
>>> 
>>> On Sat, May 4, 2019 at 2:22 AM Jan Høydahl >> > wrote:
 
 Sure, go ahead!
 
 --
 Jan Høydahl, search solution architect
 Cominvent AS - www.cominvent.com 
 
 On May 3, 2019 at 17:53, Andrzej Białecki wrote:
 
 Hi,
 
 I would like to back-port the recent changes in the re-opened SOLR-12833, 
 since the increased memory consumption adversely affects existing 7x users.
 
 On 3 May 2019, at 10:38, Jan Høydahl wrote:
 
 To not confuse two releases at the same time, I'll delay the first 7.7.2 
 RC until after a successful 8.1 vote.
 Uwe, can you re-enable the Jenkins 7.7 jobs to make sure we have a healthy 
 branch_7_7?
 Feel free to push important bug fixes to the branch in the meantime, 
 announcing them in this thread.
 
 --
 Jan Høydahl, search solution architect
 Cominvent AS - www.cominvent.com 
 
 On Apr 30, 2019 at 18:19, Ishan Chattopadhyaya wrote:
 
 +1 Jan for May 7th.
 Hopefully, 8.1 would be already out by then (or close to being there).
 
 On Tue, Apr 30, 2019 at 1:33 PM Bram Van Dam wrote:
 
 
 On 29/04/2019 23:33, Jan Høydahl wrote:
 
 I'll vounteer as RM for 7.7.2 and aim at first RC on Tuesday May 7th
 
 
 Thank you!
 
 
 
 
>>> 
>> 



[jira] [Commented] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2019-05-09 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836758#comment-16836758
 ] 

Amrit Sarkar commented on SOLR-11959:
-

Thank you, Jan, for the guidance. I have cooked up a patch around the design 
you described. 
It could be done more cleanly, and I intend to refactor accordingly. In 
particular, I don't like mentioning Cdcr in the *{{PKIAuthPlugin}}* code, and 
I am looking for a better way to do that.

Looking forward to feedback. 






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2019-05-09 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11959:

Attachment: SOLR-11959.patch

> CDCR unauthorized to replicate to a target collection that is update 
> protected in security.json
> ---
>
> Key: SOLR-11959
> URL: https://issues.apache.org/jira/browse/SOLR-11959
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, CDCR
>Affects Versions: 7.2
>Reporter: Donny Andrews
>Priority: Major
> Attachments: SOLR-11959.patch, SOLR-11959.patch
>
>
> Steps to reproduce: 
>  # Create a source and a target collection in their respective clusters. 
>  # Update security.json to require a non-admin role to read and write. 
>  # Index to source collection 
> Expected: 
> The target collection should receive the update
> Actual:
> {code:java}
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://redacted/solr/redacted: Expected mime type 
> application/octet-stream but got text/html. 
>  
>  
>  Error 401 Unauthorized request, Response code: 401
>  
>  HTTP ERROR 401
>  Problem accessing /solr/redacted/update. Reason:
>   Unauthorized request, Response code: 401
>  
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
>  at 
> org.apache.solr.handler.CdcrReplicator.sendRequest(CdcrReplicator.java:140)
>  at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:104)
>  at 
> org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}






[jira] [Reopened] (SOLR-13445) Preferred replicas on nodes with same system properties as the query master

2019-05-09 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened SOLR-13445:
-

jenkins has found at least 2 problems with the new 
RoutingToNodesWithPropertiesTest class...

[https://jenkins.thetaphi.de/view/Lucene-Solr/job/Lucene-Solr-8.x-Linux/536/]

First: a reproducing failing seed (on branch_8x)...
{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=RoutingToNodesWithPropertiesTest -Dtests.method=test 
-Dtests.seed=13525A4073A0EB3F -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=zh-HK -Dtests.timezone=Brazil/Acre -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.45s J1 | RoutingToNodesWithPropertiesTest.test <<<
   [junit4]> Throwable #1: java.lang.AssertionError: Hitting same zone 
after 10 queries
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([13525A4073A0EB3F:9B06659ADD5C86C7]:0)
   [junit4]>at 
org.apache.solr.cloud.RoutingToNodesWithPropertiesTest.test(RoutingToNodesWithPropertiesTest.java:251)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
{noformat}
At a glance, the problem seems to be that the test assumes that if it tries a 
query 10 times, at least one of those queries will hit 2 nodes in different 
"zones" – but there's no guarantee of that, it's pure dumb luck – it's like 
having a test that calls {{random().nextInt(2)}} in a loop 10 times and asserts 
that it got a value of "0" in at least one iteration ... it's statistically 
going to fail some fixed percentage of the time.
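The coin-flip analogy can be quantified. A minimal sketch of the math 
(assuming, purely for illustration, a 1/2 chance per try of hitting nodes in 
different zones – the real probability depends on the number of nodes and 
zones):

```java
public class FlakyAssertOdds {
    public static void main(String[] args) {
        int tries = 10;
        // Illustrative assumption: each try independently "succeeds"
        // (hits 2 nodes in different zones) with probability 1/2,
        // exactly like asserting random().nextInt(2) == 0 at least once.
        // The assertion fails only when all 10 tries fail:
        double pAllFail = Math.pow(0.5, tries);
        System.out.printf("P(test fails by pure luck) = %.6f%n", pAllFail);
        // prints P(test fails by pure luck) = 0.000977
    }
}
```

So even at roughly 0.1% per run, a seed pool that Jenkins exercises thousands 
of times will trip the assert eventually – consistent with the observed failure.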

Second: when jenkins tries to reproduce the seed, it runs with 
{{-Dtests.dups=5}}, but this causes an initialization failure in the BeforeClass 
method ... I'm not certain, but at a glance I'm guessing this is because of 
static variables that aren't being cleaned up in the AfterClass method?
{noformat}
   [junit4] ERROR   0.00s J2 | RoutingToNodesWithPropertiesTest (suite) <<<
   [junit4]> Throwable #1: java.lang.AssertionError: expected: 
but was:
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([13525A4073A0EB3F]:0)
   [junit4]>at 
org.apache.solr.cloud.RoutingToNodesWithPropertiesTest.setupCluster(RoutingToNodesWithPropertiesTest.java:115)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
 {noformat}

> Preferred replicas on nodes with same system properties as the query master
> ---
>
> Key: SOLR-13445
> URL: https://issues.apache.org/jira/browse/SOLR-13445
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: master (9.0), 8.2
>
> Attachments: SOLR-13445.patch, SOLR-13445.patch, SOLR-13445.patch
>
>
> Currently, Solr chooses a random replica for each shard to fan out the query 
> request. However, this presents a problem when running Solr in multiple 
> availability zones.
> If one availability zone fails, it affects all Solr nodes, because they 
> will try to connect to Solr nodes in the failed availability zone until the 
> request times out. This can lead to a build-up of threads on each Solr node 
> until the node runs out of memory, resulting in a cascading failure.
> This issue tries to solve this problem by adding:
> * another shardPreference param named {{node.sysprop}}, so the query will be 
> routed to nodes with the same defined system properties as the current one.
> * default shardPreferences for the whole cluster, which will be stored in 
> {{/clusterprops.json}}.
> * a cacher for fetching other nodes' system properties whenever /live_nodes 
> gets changed.






Re: Lucene/Solr 7.7.2

2019-05-09 Thread Cassandra Targett
Someone brought https://issues.apache.org/jira/browse/SOLR-13112 to my 
attention today, which upgraded the Jackson dependencies to 2.9.8. Would it be 
possible for those to be backported to branch_7_7 for inclusion in 7.7.2?

Cassandra
On May 8, 2019, 2:51 PM -0500, Jan Høydahl , wrote:
> Yes please do!
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> > 8. mai 2019 kl. 18:25 skrev Ishan Chattopadhyaya 
> > :
> >
> > I would like to backport SOLR-13410, as without this the ADDROLE of
> > "overseer" is effectively broken. Please let me know if that is fine.
> >
> > On Sat, May 4, 2019 at 2:22 AM Jan Høydahl  wrote:
> > >
> > > Sure, go ahead!
> > >
> > > --
> > > Jan Høydahl, search solution architect
> > > Cominvent AS - www.cominvent.com
> > >
> > > 3. mai 2019 kl. 17:53 skrev Andrzej Białecki 
> > > :
> > >
> > > Hi,
> > >
> > > I would like to back-port the recent changes in the re-opened SOLR-12833, 
> > > since the increased memory consumption adversely affects existing 7x 
> > > users.
> > >
> > > On 3 May 2019, at 10:38, Jan Høydahl  wrote:
> > >
> > > To not confuse two releases at the same time, I'll delay the first 7.7.2 
> > > RC until after a successful 8.1 vote.
> > > Uwe, can you re-enable the Jenkins 7.7 jobs to make sure we have a 
> > > healthy branch_7_7?
> > > Feel free to push important bug fixes to the branch in the meantime, 
> > > announcing them in this thread.
> > >
> > > --
> > > Jan Høydahl, search solution architect
> > > Cominvent AS - www.cominvent.com
> > >
> > > 30. apr. 2019 kl. 18:19 skrev Ishan Chattopadhyaya 
> > > :
> > >
> > > +1 Jan for May 7th.
> > > Hopefully, 8.1 would be already out by then (or close to being there).
> > >
> > > On Tue, Apr 30, 2019 at 1:33 PM Bram Van Dam  wrote:
> > >
> > >
> > > On 29/04/2019 23:33, Jan Høydahl wrote:
> > >
> > > I'll vounteer as RM for 7.7.2 and aim at first RC on Tuesday May 7th
> > >
> > >
> > > Thank you!
> > >
> > >
> > >
> > >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>


[jira] [Commented] (SOLR-13454) Investigate ReindexCollectionTest failures

2019-05-09 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836660#comment-16836660
 ] 

Erick Erickson commented on SOLR-13454:
---

I've committed some test-only changes. If this does trip we won't see any 
_additional_ failures, but the ones we do see will indicate that this test 
failure is fixed by the bandaid.

I'll be monitoring of course.

> Investigate ReindexCollectionTest failures
> --
>
> Key: SOLR-13454
> URL: https://issues.apache.org/jira/browse/SOLR-13454
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> This _looks_ like it might be another example of commits not quite happening 
> correctly; see SOLR-11035. The problem is that I can’t get it to fail locally 
> after 2,000 iterations.
> So I’m going to add a bit to the bandaid to allow tests to conditionally fail 
> if the bandaid would have made it pass. That way we can positively detect 
> that the bandaid is indeed the cause, rather than change code and hope.
> This _shouldn’t_ add any noise to the Jenkins lists, as the test won’t fail 
> in cases where it didn’t before.
> In case people wonder, that's what the heck I’m doing.
> BTW, if we ever really understand/fix the underlying cause, we should make 
> the bandaid code fail and see, then remove it if so.






[jira] [Commented] (SOLR-13454) Investigate ReindexCollectionTest failures

2019-05-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836657#comment-16836657
 ] 

ASF subversion and git services commented on SOLR-13454:


Commit 8bac8a70a1ff4ea12dfdd65e2de4c0fb7c7b66a5 in lucene-solr's branch 
refs/heads/branch_8x from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8bac8a7 ]

SOLR-13454: Investigate ReindexCollectionTest failures

(cherry picked from commit 577be08bf278e90df6c119b0b50498828e1879d4)


> Investigate ReindexCollectionTest failures
> --
>
> Key: SOLR-13454
> URL: https://issues.apache.org/jira/browse/SOLR-13454
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> This _looks_ like it might be another example of commits not quite happening 
> correctly; see SOLR-11035. The problem is that I can’t get it to fail locally 
> after 2,000 iterations.
> So I’m going to add a bit to the bandaid to allow tests to conditionally fail 
> if the bandaid would have made it pass. That way we can positively detect 
> that the bandaid is indeed the cause, rather than change code and hope.
> This _shouldn’t_ add any noise to the Jenkins lists, as the test won’t fail 
> in cases where it didn’t before.
> In case people wonder, that's what the heck I’m doing.
> BTW, if we ever really understand/fix the underlying cause, we should make 
> the bandaid code fail and see, then remove it if so.






[jira] [Commented] (SOLR-13454) Investigate ReindexCollectionTest failures

2019-05-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836650#comment-16836650
 ] 

ASF subversion and git services commented on SOLR-13454:


Commit 577be08bf278e90df6c119b0b50498828e1879d4 in lucene-solr's branch 
refs/heads/master from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=577be08 ]

SOLR-13454: Investigate ReindexCollectionTest failures


> Investigate ReindexCollectionTest failures
> --
>
> Key: SOLR-13454
> URL: https://issues.apache.org/jira/browse/SOLR-13454
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> This _looks_ like it might be another example of commits not quite happening 
> correctly; see SOLR-11035. The problem is that I can’t get it to fail locally 
> after 2,000 iterations.
> So I’m going to add a bit to the bandaid to allow tests to conditionally fail 
> if the bandaid would have made it pass. That way we can positively detect 
> that the bandaid is indeed the cause, rather than change code and hope.
> This _shouldn’t_ add any noise to the Jenkins lists, as the test won’t fail 
> in cases where it didn’t before.
> In case people wonder, that's what the heck I’m doing.
> BTW, if we ever really understand/fix the underlying cause, we should make 
> the bandaid code fail and see, then remove it if so.






[jira] [Created] (SOLR-13459) Streaming Expressions experience a hard coded timeout

2019-05-09 Thread Gus Heck (JIRA)
Gus Heck created SOLR-13459:
---

 Summary: Streaming Expressions experience a hard coded timeout
 Key: SOLR-13459
 URL: https://issues.apache.org/jira/browse/SOLR-13459
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: streaming expressions
Reporter: Gus Heck


SolrClientBuilder has the capability to configure a timeout, but the usage in 
SolrStream accepts the hard-coded default:
{code:java}
  /**
   * Opens the stream to a single Solr instance.
   **/
  public void open() throws IOException {
    if (cache == null) {
      client = new HttpSolrClient.Builder(baseUrl).build();
    } else {
      client = cache.getHttpSolrClient(baseUrl);
    }
{code}

While it might also be possible to specify the timeout in the expression, that 
sounds like something that would bloat the high-level expression with low-level 
concerns. This ticket therefore proposes to have SolrStream set a timeout on the 
builder, which it will get from the StreamContext. When instantiated by the 
stream handler, the stream context in turn will set this based on a default 
timeout for inter-node communication defined in solr.xml. 
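As a sketch of the proposal: SolrJ's builder already exposes connection and 
socket timeout setters, so the change amounts to plumbing values through. The 
{{connTimeoutMs}}/{{sockTimeoutMs}} names and the idea of reading them from the 
StreamContext are hypothetical – that plumbing is exactly what this ticket 
proposes, not existing behavior:

```java
// Hypothetical sketch of the proposed change to SolrStream.open().
// HttpSolrClient.Builder's withConnectionTimeout/withSocketTimeout
// setters exist in SolrJ today; obtaining connTimeoutMs and sockTimeoutMs
// from the StreamContext is the proposed (not yet existing) part.
public void open() throws IOException {
  if (cache == null) {
    client = new HttpSolrClient.Builder(baseUrl)
        .withConnectionTimeout(connTimeoutMs) // hypothetical, from StreamContext
        .withSocketTimeout(sockTimeoutMs)     // hypothetical, from StreamContext
        .build();
  } else {
    client = cache.getHttpSolrClient(baseUrl);
  }
}
```

The cached-client branch is unchanged, so a shared {{SolrClientCache}} would 
also need to apply the same defaults for the behavior to be consistent.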






[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk1.8.0_201) - Build # 536 - Still Unstable!

2019-05-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/536/
Java: 64bit/jdk1.8.0_201 -XX:+UseCompressedOops -XX:+UseParallelGC

6 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.RoutingToNodesWithPropertiesTest

Error Message:
expected: but was:

Stack Trace:
java.lang.AssertionError: expected: but was:
at __randomizedtesting.SeedInfo.seed([13525A4073A0EB3F]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.solr.cloud.RoutingToNodesWithPropertiesTest.setupCluster(RoutingToNodesWithPropertiesTest.java:115)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.RoutingToNodesWithPropertiesTest

Error Message:
expected: but was:

Stack Trace:
java.lang.AssertionError: expected: but was:
at __randomizedtesting.SeedInfo.seed([13525A4073A0EB3F]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.solr.cloud.RoutingToNodesWithPropertiesTest.setupCluster(RoutingToNodesWithPropertiesTest.java:115)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carro

[jira] [Resolved] (SOLR-13453) JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed when SolrClientNodeStateProvider behave nicely

2019-05-09 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-13453.

Resolution: Fixed

Fixed. If there's an 8.1 RC3 it will make it into 8.1.0; otherwise we will need 
to move the CHANGES entry to 8.1.1 after the 8.1.0 release.

> JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed 
> when SolrClientNodeStateProvider behave nicely
> 
>
> Key: SOLR-13453
> URL: https://issues.apache.org/jira/browse/SOLR-13453
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 8.1
>Reporter: Cao Manh Dat
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> SOLR-13449 is a trivial fix for SolrClientNodeStateProvider: it makes the 
> provider stop retrying once the metrics are successfully grabbed. 
> Unexpectedly, JWTAuthPluginIntegrationTest and 
> TestSolrCloudWithHadoopAuthPlugin now fail 100% of the time. These are bugs 
> in the tests, for sure.






[jira] [Commented] (SOLR-13453) JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed when SolrClientNodeStateProvider behave nicely

2019-05-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836635#comment-16836635
 ] 

ASF subversion and git services commented on SOLR-13453:


Commit 47c4e4184a5482894753fb4f7aa5126cf9f035c8 in lucene-solr's branch 
refs/heads/branch_8_1 from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=47c4e41 ]

SOLR-13453: Adjust auth metrics asserts in tests after SOLR-13449 (#668)

(cherry picked from commit 5b772f7c9d8ba557287b0a0e01c459f07cdac9c4)


> JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed 
> when SolrClientNodeStateProvider behave nicely
> 
>
> Key: SOLR-13453
> URL: https://issues.apache.org/jira/browse/SOLR-13453
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 8.1
>Reporter: Cao Manh Dat
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> SOLR-13449 is a trivial fix for SolrClientNodeStateProvider: it makes the 
> provider stop retrying once the metrics are successfully grabbed. 
> Unexpectedly, JWTAuthPluginIntegrationTest and 
> TestSolrCloudWithHadoopAuthPlugin now fail 100% of the time. These are bugs 
> in the tests, for sure.






[jira] [Commented] (SOLR-13449) SolrClientNodeStateProvider always retries on requesting metrics from other nodes

2019-05-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836636#comment-16836636
 ] 

ASF subversion and git services commented on SOLR-13449:


Commit 47c4e4184a5482894753fb4f7aa5126cf9f035c8 in lucene-solr's branch 
refs/heads/branch_8_1 from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=47c4e41 ]

SOLR-13453: Adjust auth metrics asserts in tests after SOLR-13449 (#668)

(cherry picked from commit 5b772f7c9d8ba557287b0a0e01c459f07cdac9c4)


> SolrClientNodeStateProvider always retries on requesting metrics from other 
> nodes
> -
>
> Key: SOLR-13449
> URL: https://issues.apache.org/jira/browse/SOLR-13449
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7.1, 8.0
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.7.2, 8.1, master (9.0)
>
> Attachments: failure.txt
>
>
> Even in the case of a successful call, SolrClientNodeStateProvider always 
> retries the get-metrics request. 






[jira] [Created] (SOLR-13458) Make Jetty timeouts configurable system wide

2019-05-09 Thread Gus Heck (JIRA)
Gus Heck created SOLR-13458:
---

 Summary: Make Jetty timeouts configurable system wide
 Key: SOLR-13458
 URL: https://issues.apache.org/jira/browse/SOLR-13458
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: master (9.0)
Reporter: Gus Heck


Our jetty container has several timeouts associated with it, and at least one 
of these regularly gets in my way (the idle timeout after 120 sec). I tried 
setting a system property, with no effect, and I've tried altering the 
jetty.xml found at solr-install/solr/server/etc/jetty.xml on all (50) machines 
and rebooting all servers, only to have an exception with the old 120 sec 
timeout still show up. This ticket proposes that these values are by nature 
"Global System Timeouts" and should be made configurable in solr.xml (which may 
be difficult because they will be needed early in the boot sequence). 






[jira] [Commented] (SOLR-13453) JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed when SolrClientNodeStateProvider behave nicely

2019-05-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836622#comment-16836622
 ] 

ASF subversion and git services commented on SOLR-13453:


Commit 03cca62af3a49de07b44255b61574e63ff141f78 in lucene-solr's branch 
refs/heads/branch_8x from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=03cca62 ]

SOLR-13453: Adjust auth metrics asserts in tests after SOLR-13449 (#668)

(cherry picked from commit 5b772f7c9d8ba557287b0a0e01c459f07cdac9c4)


> JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed 
> when SolrClientNodeStateProvider behave nicely
> 
>
> Key: SOLR-13453
> URL: https://issues.apache.org/jira/browse/SOLR-13453
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 8.1
>Reporter: Cao Manh Dat
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> SOLR-13449 is a trivial fix for SolrClientNodeStateProvider: it makes the 
> provider stop retrying once the metrics are successfully grabbed. 
> Unexpectedly, JWTAuthPluginIntegrationTest and 
> TestSolrCloudWithHadoopAuthPlugin now fail 100% of the time. These are bugs 
> in the tests, for sure.






[jira] [Commented] (SOLR-13449) SolrClientNodeStateProvider always retries on requesting metrics from other nodes

2019-05-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836623#comment-16836623
 ] 

ASF subversion and git services commented on SOLR-13449:


Commit 03cca62af3a49de07b44255b61574e63ff141f78 in lucene-solr's branch 
refs/heads/branch_8x from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=03cca62 ]

SOLR-13453: Adjust auth metrics asserts in tests after SOLR-13449 (#668)

(cherry picked from commit 5b772f7c9d8ba557287b0a0e01c459f07cdac9c4)


> SolrClientNodeStateProvider always retries on requesting metrics from other 
> nodes
> -
>
> Key: SOLR-13449
> URL: https://issues.apache.org/jira/browse/SOLR-13449
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7.1, 8.0
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.7.2, 8.1, master (9.0)
>
> Attachments: failure.txt
>
>
> Even in the case of a successful call, SolrClientNodeStateProvider always 
> retries the get-metrics request. 






[jira] [Comment Edited] (SOLR-13457) Managing Timeout values in Solr

2019-05-09 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836612#comment-16836612
 ] 

Gus Heck edited comment on SOLR-13457 at 5/9/19 6:26 PM:
-

Also as I think about this more, we may need an additional category for solrj 
client timeout settings, and since solrj is used in core, something fancy to 
distinguish the two cases.


was (Author: gus_heck):
Also as I think about this more, we may need an additional type for solrj 
client timeout settings, and since solrj is used in core, something fancy to 
distinguish the two cases.

> Managing Timeout values in Solr
> ---
>
> Key: SOLR-13457
> URL: https://issues.apache.org/jira/browse/SOLR-13457
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
>Reporter: Gus Heck
>Priority: Major
>
> Presently, Solr has a variety of timeouts for various connections or 
> operations. These timeouts have been added, tweaked, and refined, and in some 
> cases made configurable in an ad-hoc manner by the contributors of individual 
> features throughout the history of the project. This is all well and good 
> until one experiences a timeout during an otherwise valid use case and needs 
> to adjust it.
> This has also made managing timeouts in unit tests "interesting", as noted in 
> SOLR-13389.
> Probably nobody has the spare time to do a tour de force through the code and 
> coordinate every single timeout, so in this ticket I'd like to establish a 
> framework for categorizing timeouts, a standard for how we make each 
> category configurable, and then add sub-tickets to address individual 
> timeouts.
> The intention is that eventually, there will be no "magic number" timeout 
> values in code, and one can predict where to find the configuration for a 
> timeout by determining its category.
> Initial strawman categories (feel free to knock down or suggest alternatives):
>  # *Feature-Instance Timeout*: Timeouts that relate to a particular 
> instantiation of a feature, for example a database connection timeout for a 
> connection to a particular database by DIH. These should be set in the 
> configuration of that instance.
>  # *Optional Feature Timeout*: A timeout that only has meaning in the context 
> of a particular feature that is not required for solr to function... i.e. 
> something that can be turned on or off. Perhaps a timeout for communication 
> with an external ldap for authentication purposes. These should be configured 
> in the same configuration that enables this feature.
>  # *Global System Timeout*: A timeout that will always be an active part of 
> Solr; these should be configured in a new  section of solr.xml. For 
> example, the Jetty thread idle timeout, or the default timeout for http calls 
> between nodes.
>  # *Node Specific Timeout*: A timeout which may differ on different nodes. I 
> don't know of any of these, but I'll grant the possibility. These (and only 
> these) should be set by setting system properties. If we don't have any of 
> these, that's just fine :).
> *Note that in no case is a hard-coded value the correct solution.*
> If we get a consensus on categories and their locations, then the next step 
> is to begin adding sub tickets to bring specific timeouts into compliance. 
> Every such ticket should include an update to the section of the ref guide 
> documenting the configuration to which the timeout has been added (e.g. docs 
> for solr.xml for Global System Timeouts) describing what exactly is affected 
> by the timeout, the maximum allowed value and how zero and negative numbers 
> are handled.
> It is of course true that some of these values will have the potential to 
> destroy system performance or integrity, and that should be mentioned in the 
> update to documentation.
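
A category-keyed lookup along the lines proposed above could be sketched roughly as follows. This is a hypothetical illustration, not Solr code: the class name, method, and default values are invented, and the zero/negative handling simply follows the documentation rule proposed in the ticket ("how zero and negative numbers are handled"), here interpreted as "no timeout".

```java
import java.util.EnumMap;
import java.util.Map;

public class TimeoutRegistry {

  // The four strawman categories from the proposal.
  public enum Category { FEATURE_INSTANCE, OPTIONAL_FEATURE, GLOBAL_SYSTEM, NODE_SPECIFIC }

  // Invented per-category fallback defaults (values are placeholders).
  private static final Map<Category, Long> DEFAULTS = new EnumMap<>(Category.class);
  static {
    DEFAULTS.put(Category.GLOBAL_SYSTEM, 600_000L);   // e.g. inter-node HTTP default
    DEFAULTS.put(Category.NODE_SPECIFIC, 120_000L);
  }

  /**
   * Resolve a timeout in ms: an explicitly configured value wins; zero or a
   * negative configured value means "no timeout"; otherwise fall back to the
   * category default, then to a global fallback.
   */
  public static long resolve(Category cat, Long configuredMs) {
    if (configuredMs != null) {
      return configuredMs <= 0 ? Long.MAX_VALUE : configuredMs;  // <= 0 => wait forever
    }
    return DEFAULTS.getOrDefault(cat, 30_000L);
  }
}
```

The point of the sketch is only that every timeout lookup would go through one place, keyed by category, so both the source of the value and the zero/negative semantics are predictable.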



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13457) Managing Timeout values in Solr

2019-05-09 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836612#comment-16836612
 ] 

Gus Heck commented on SOLR-13457:
-

Also as I think about this more, we may need an additional type for solrj 
client timeout settings, and since solrj is used in core, something fancy to 
distinguish the two cases.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13453) JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed when SolrClientNodeStateProvider behave nicely

2019-05-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836599#comment-16836599
 ] 

ASF subversion and git services commented on SOLR-13453:


Commit 5b772f7c9d8ba557287b0a0e01c459f07cdac9c4 in lucene-solr's branch 
refs/heads/master from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5b772f7 ]

SOLR-13453: Adjust auth metrics asserts in tests after SOLR-13449 (#668)




> JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed 
> when SolrClientNodeStateProvider behave nicely
> 
>
> Key: SOLR-13453
> URL: https://issues.apache.org/jira/browse/SOLR-13453
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 8.1
>Reporter: Cao Manh Dat
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> SOLR-13449 is a trivial fix for SolrClientNodeStateProvider: it makes the 
> provider stop retrying once the metrics are successfully fetched. 
> Unexpectedly, JWTAuthPluginIntegrationTest and 
> TestSolrCloudWithHadoopAuthPlugin now fail 100% of the time. These are bugs 
> in the tests, for sure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13449) SolrClientNodeStateProvider always retries on requesting metrics from other nodes

2019-05-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836600#comment-16836600
 ] 

ASF subversion and git services commented on SOLR-13449:


Commit 5b772f7c9d8ba557287b0a0e01c459f07cdac9c4 in lucene-solr's branch 
refs/heads/master from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5b772f7 ]

SOLR-13453: Adjust auth metrics asserts in tests after SOLR-13449 (#668)







--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] janhoy merged pull request #668: SOLR-13453: Adjust auth metrics asserts in tests after SOLR-13449

2019-05-09 Thread GitBox
janhoy merged pull request #668: SOLR-13453: Adjust auth metrics asserts in 
tests after SOLR-13449
URL: https://github.com/apache/lucene-solr/pull/668
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2019-05-09 Thread Fredrik Rodland (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836249#comment-16836249
 ] 

Fredrik Rodland edited comment on SOLR-12243 at 5/9/19 6:07 PM:


I am aware that this issue is closed, but nonetheless:

I think this actually broke something regarding expansion of synonyms for large 
queries (possibly large {{OR}}-queries).

Having {{pf}} enabled on fields with a substantial number of synonyms caused 
the pf-portion of the query to grow "exponentially", to the point where one 
single query took down an entire solr-server.

By increasing the number of {{OR}}-queries we could drive up the memory 
required to run the query.

example (id has synonyms enabled, companyname has not):

*{{A.}}*

{{q=( samfunnsviter (klima OR miljø) ) NOT ( psykolog%20 OR rus OR ortopedi OR 
odontologi )&debugQuery=true&pf=companyname}}

results in pf-part of edismax-query

{{(+DisjunctionMaxQuery((companyname:\"? samfunnsviter klima miljø ? ? psykolog 
rus ortopedi odontologi\"~5)~0.01))}}

*{{B.}}*

{{q=( samfunnsviter (klima OR miljø) ) NOT ( psykolog%20 OR rus OR ortopedi OR 
odontologi )&debugQuery=true&pf=id companyname}}

results in pf-part of edismax-query

{{(+DisjunctionMaxQuery(((id:\"samfunnsviter klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"samfunnsviter klima miljø psykologspesialist rus ortopedi 
odontologi\"~5 id:\"samfunnsvitar klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"samfunnsvitar klima miljø psykologspesialist rus ortopedi 
odontologi\"~5 id:\"social scientist klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"social scientist klima miljø psykologspesialist rus 
ortopedi odontologi\"~5 id:\"statsviter klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"statsviter klima miljø psykologspesialist rus ortopedi 
odontologi\"~5 id:\"samfunnsøkonom klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"samfunnsøkonom klima miljø psykologspesialist rus ortopedi 
odontologi\"~5) | companyname:\"? samfunnsviter klima miljø ? ? psykolog rus 
ortopedi odontologi\"~5)~0.01))}}

 

B. above is just a reasonably short example to show our point. Our actual 
queries (and the resulting {{pf}} {{DisjunctionMaxQuery}}) are a *lot longer*. 
Increasing the number of OR-terms or synonyms makes the id-part of the query 
grow "exponentially"
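
The blow-up is multiplicative: in example B, the term with five synonym variants (samfunnsviter) times the term with two (psykolog) yields exactly the ten {{id}} phrase variants visible in the parsed query, and each additional term with synonyms multiplies the count again. A back-of-envelope sketch (hypothetical helper, not Solr code):

```java
public class PhraseGrowth {

  /**
   * Number of phrase variants, given each term's expansion count
   * (the term itself plus its synonyms). The total is the product
   * of the per-term counts, so it grows multiplicatively.
   */
  public static long variants(int[] expansionsPerTerm) {
    long total = 1;
    for (int n : expansionsPerTerm) {
      total *= n;
    }
    return total;
  }
}
```

Two terms with 5 and 2 expansions give 10 phrases; add one more term with 3 synonyms and it becomes 30.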


was (Author: fmr):
I am aware that this issue is closed, but nonetheless:

I think this actually broke something regarding expansion of synonyms for large 
queries (possibly large OR-queries).

Having \{code}pf\{code} enabled on fields with a substansial amount of synonym 
resulted in the pf-portion of the query growing "exponentially" and resulted in 
one single query taking down an entire solr-server.

By adjusting the number of OR-queries we were able to increase the memory 
required for running the query.

example (id has synonyms enabled, companyname has not):

q=( samfunnsviter (klima OR miljø) ) NOT ( psykolog%20 OR rus OR ortopedi OR 
odontologi )&debugQuery=true&pf=companyname

results in pf-part of edismax-query

(+DisjunctionMaxQuery((companyname:\"? samfunnsviter klima miljø ? ? psykolog 
rus ortopedi odontologi\"~5)~0.01)) 

q=( samfunnsviter (klima OR miljø) ) NOT ( psykolog%20 OR rus OR ortopedi OR 
odontologi )&debugQuery=true&pf=id companyname

results in pf-part of edismax-query

(+DisjunctionMaxQuery(((id:\"samfunnsviter klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"samfunnsviter klima miljø psykologspesialist rus ortopedi 
odontologi\"~5 id:\"samfunnsvitar klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"samfunnsvitar klima miljø psykologspesialist rus ortopedi 
odontologi\"~5 id:\"social scientist klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"social scientist klima miljø psykologspesialist rus 
ortopedi odontologi\"~5 id:\"statsviter klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"statsviter klima miljø psykologspesialist rus ortopedi 
odontologi\"~5 id:\"samfunnsøkonom klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"samfunnsøkonom klima miljø psykologspesialist rus ortopedi 
odontologi\"~5) | companyname:\"? samfunnsviter klima miljø ? ? psykolog rus 
ortopedi odontologi\"~5)~0.01))\{code}

 

 increasing the number of OR-terms or synonyms results in the id-part of the 
query growing "exponentially"

> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>

[GitHub] [lucene-solr] joel-bernstein opened a new pull request #669: Facet2d

2019-05-09 Thread GitBox
joel-bernstein opened a new pull request #669: Facet2d
URL: https://github.com/apache/lucene-solr/pull/669
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 8.1.0 RC2

2019-05-09 Thread jim ferenczi
+1
SUCCESS! [1:14:41.737009]

Le jeu. 9 mai 2019 à 18:56, Kevin Risden  a écrit :

> +1
> SUCCESS! [1:17:45.727492]
>
> Kevin Risden
>
>
> On Thu, May 9, 2019 at 11:37 AM Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
>
>> Please vote for release candidate 2 for Lucene/Solr 8.1.0
>>
>> The artifacts can be downloaded from:
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC2-revdbe5ed0b2f17677ca6c904ebae919363f2d36a0a
>>
>> You can run the smoke tester directly with this command:
>>
>> python3 -u dev-tools/scripts/smokeTestRelease.py \
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC2-revdbe5ed0b2f17677ca6c904ebae919363f2d36a0a
>>
>> Here's my +1
>> SUCCESS! [0:44:31.244021]
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>


[jira] [Commented] (SOLR-13457) Managing Timeout values in Solr

2019-05-09 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836590#comment-16836590
 ] 

Erick Erickson commented on SOLR-13457:
---

Finding them all will be a challenge; I count over 2,300 mentions of "timeout" 
in the source code. The overwhelming majority of them are fine (e.g. variable 
names, method names, default values, declarations of the TimeOut class), and no 
doubt many/most of the ones that look potentially problematic are legit.

I suppose ensuring that we have them _all_ is secondary to finding what we can.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] janhoy commented on a change in pull request #662: SOLR-12584: Describe getting Prometheus metrics from a secure Solr

2019-05-09 Thread GitBox
janhoy commented on a change in pull request #662: SOLR-12584: Describe getting 
Prometheus metrics from a secure Solr
URL: https://github.com/apache/lucene-solr/pull/662#discussion_r282591108
 
 

 ##
 File path: 
solr/solr-ref-guide/src/monitoring-solr-with-prometheus-and-grafana.adoc
 ##
 @@ -108,6 +108,38 @@ The number of seconds between collecting metrics from 
Solr. The `solr-exporter`
 
 The Solr's metrics exposed by `solr-exporter` can be seen at: 
`\http://localhost:9983/solr/admin/metrics`.
 
+=== Getting metrics from a secure Solr(Cloud)
+
+Your Solr(Cloud) might be secured by measures described in 
<>. The security configuration 
can be injected into `solr-exporter` using environment variables in a fashion 
similar to other clients using <>. This is 
possible because the main script picks up two external environment variables 
and passes them on to the Java process:
+
+* `JAVA_OPTS` allows to add extra JVM options
+* `CLASSPATH_PREFIX` allows to add extra libraries
+
+Example for a SolrCloud instance secured by
+
+* <>
 
 Review comment:
   Instead of a bullet list, just make it a sentence : "Example for a SolrCloud 
instance secured by both Basic Authentication, SSL and ZooKeeper Access 
Control:"


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] janhoy commented on a change in pull request #662: SOLR-12584: Describe getting Prometheus metrics from a secure Solr

2019-05-09 Thread GitBox
janhoy commented on a change in pull request #662: SOLR-12584: Describe getting 
Prometheus metrics from a secure Solr
URL: https://github.com/apache/lucene-solr/pull/662#discussion_r282590676
 
 

 ##
 File path: 
solr/solr-ref-guide/src/monitoring-solr-with-prometheus-and-grafana.adoc
 ##
 @@ -108,6 +108,38 @@ The number of seconds between collecting metrics from 
Solr. The `solr-exporter`
 
 The Solr's metrics exposed by `solr-exporter` can be seen at: 
`\http://localhost:9983/solr/admin/metrics`.
 
+=== Getting metrics from a secure Solr(Cloud)
+
+Your Solr(Cloud) might be secured by measures described in 
<>. The security configuration 
can be injected into `solr-exporter` using environment variables in a fashion 
similar to other clients using <>. This is 
possible because the main script picks up two external environment variables 
and passes them on to the Java process:
+
+* `JAVA_OPTS` allows to add extra JVM options
+* `CLASSPATH_PREFIX` allows to add extra libraries
+
+Example for a SolrCloud instance secured by
+
+* <>
+* <>
+* <>
+
+Suppose you have a file `basicauth.properties` with the Solr Basic-Auth 
credentials:
+
+
+httpBasicAuthUser=myUser
+httpBasicAuthPassword=myPassword
+
+
+Then you can start the Exporter as follows (Linux).
+
+[source,bash]
+
+$ cd contrib/prometheus-exporter
+$ export JAVA_OPTS="-Djavax.net.ssl.trustStore=truststore.jks 
-Djavax.net.ssl.trustStorePassword=truststorePassword 
-Dsolr.httpclient.builder.factory=org.apache.solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactory
 -Dsolr.httpclient.config=basicauth.properties 
-DzkCredentialsProvider=org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider
 -DzkDigestUsername=readonly-user -DzkDigestPassword=zkUserPassword"
+$ export 
CLASSPATH_PREFIX="../../server/solr-webapp/webapp/WEB-INF/lib/commons-codec-1.11.jar"
+$ ./bin/solr-exporter -p 9854 -z zk1:2181,zk2:2181,zk3:2181 -f 
./conf/solr-exporter-config.xml -n 16
+
+
+Note: The Exporter needs the `commons-codec` library for SSL/BasicAuth, but 
does not bring it. Therefor the example reuses it from the Solr web app. Of 
course, you can use a different source.
 
 Review comment:
   Therefor -> Therefore


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] janhoy commented on a change in pull request #662: SOLR-12584: Describe getting Prometheus metrics from a secure Solr

2019-05-09 Thread GitBox
janhoy commented on a change in pull request #662: SOLR-12584: Describe getting 
Prometheus metrics from a secure Solr
URL: https://github.com/apache/lucene-solr/pull/662#discussion_r282591957
 
 

 ##
 File path: 
solr/solr-ref-guide/src/monitoring-solr-with-prometheus-and-grafana.adoc
 ##
 @@ -108,6 +108,38 @@ The number of seconds between collecting metrics from 
Solr. The `solr-exporter`
 
 The Solr's metrics exposed by `solr-exporter` can be seen at: 
`\http://localhost:9983/solr/admin/metrics`.
 
+=== Getting metrics from a secure Solr(Cloud)
+
+Your Solr(Cloud) might be secured by measures described in 
<>. The security configuration 
can be injected into `solr-exporter` using environment variables in a fashion 
similar to other clients using <>. This is 
possible because the main script picks up two external environment variables 
and passes them on to the Java process:
+
+* `JAVA_OPTS` allows to add extra JVM options
+* `CLASSPATH_PREFIX` allows to add extra libraries
+
+Example for a SolrCloud instance secured by
+
+* <>
+* <>
+* <>
+
+Suppose you have a file `basicauth.properties` with the Solr Basic-Auth 
credentials:
+
+
+httpBasicAuthUser=myUser
+httpBasicAuthPassword=myPassword
+
+
+Then you can start the Exporter as follows (Linux).
+
+[source,bash]
+
+$ cd contrib/prometheus-exporter
+$ export JAVA_OPTS="-Djavax.net.ssl.trustStore=truststore.jks 
-Djavax.net.ssl.trustStorePassword=truststorePassword 
-Dsolr.httpclient.builder.factory=org.apache.solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactory
 -Dsolr.httpclient.config=basicauth.properties 
-DzkCredentialsProvider=org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider
 -DzkDigestUsername=readonly-user -DzkDigestPassword=zkUserPassword"
 
 Review comment:
   How would this long line render in the HTML ref guide and in the PDF ref 
guide? Can you try to build the ref guide
   
   cd solr-refguide && ant dist
   
   and then QA how it looks? An alternative is to split the lines ourselves, 
e.g.:
   
   $ export JAVA_OPTS="-Djavax.net.ssl.trustStore=truststore.jks \
   -Djavax.net.ssl.trustStorePassword=truststorePassword \
   
-Dsolr.httpclient.builder.factory=org.apache.solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactory
 \
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13453) JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed when SolrClientNodeStateProvider behave nicely

2019-05-09 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13453:
---
Component/s: Tests

> JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed 
> when SolrClientNodeStateProvider behave nicely
> 
>
> Key: SOLR-13453
> URL: https://issues.apache.org/jira/browse/SOLR-13453
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 8.1
>Reporter: Cao Manh Dat
>Priority: Major
> Fix For: 8.2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-13453) JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed when SolrClientNodeStateProvider behave nicely

2019-05-09 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-13453:
--

Assignee: Jan Høydahl

> JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed 
> when SolrClientNodeStateProvider behave nicely
> 
>
> Key: SOLR-13453
> URL: https://issues.apache.org/jira/browse/SOLR-13453
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 8.1
>Reporter: Cao Manh Dat
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13453) JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed when SolrClientNodeStateProvider behave nicely

2019-05-09 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13453:
---
Affects Version/s: 8.1

> JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed 
> when SolrClientNodeStateProvider behave nicely
> 
>
> Key: SOLR-13453
> URL: https://issues.apache.org/jira/browse/SOLR-13453
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.1
>Reporter: Cao Manh Dat
>Priority: Major
> Fix For: 8.2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> SOLR-13449 is a trivial fix for SolrClientNodeStateProvider: it makes the 
> provider stop retrying once the metrics are successfully fetched. 
> Unexpectedly, JWTAuthPluginIntegrationTest and 
> TestSolrCloudWithHadoopAuthPlugin now fail 100% of the time. These are test 
> bugs for sure.






[jira] [Updated] (SOLR-13453) JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed when SolrClientNodeStateProvider behave nicely

2019-05-09 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13453:
---
Fix Version/s: 8.2

> JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed 
> when SolrClientNodeStateProvider behave nicely
> 
>
> Key: SOLR-13453
> URL: https://issues.apache.org/jira/browse/SOLR-13453
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Priority: Major
> Fix For: 8.2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> SOLR-13449 is a trivial fix for SolrClientNodeStateProvider: it makes the 
> provider stop retrying once the metrics are successfully fetched. 
> Unexpectedly, JWTAuthPluginIntegrationTest and 
> TestSolrCloudWithHadoopAuthPlugin now fail 100% of the time. These are test 
> bugs for sure.






[jira] [Commented] (SOLR-13047) Add facet2D Streaming Expression

2019-05-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836576#comment-16836576
 ] 

ASF subversion and git services commented on SOLR-13047:


Commit a97076cb091443089dbc8835b8433238ceaaac76 in lucene-solr's branch 
refs/heads/facet2d from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a97076c ]

SOLR-13047: Basic test working


> Add facet2D Streaming Expression
> 
>
> Key: SOLR-13047
> URL: https://issues.apache.org/jira/browse/SOLR-13047
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The current facet expression is a generic tool for creating multi-dimensional 
> aggregations. The *facet2D* Streaming Expression has semantics specific to 
> two-dimensional facets, which are designed to be *pivoted* into a matrix and 
> operated on by *Math Expressions*. 
> facet2D will use the JSON Facet API under the covers. 
> Proposed syntax:
> {code:java}
> facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 10", 
> count(*)){code}
> The example above will return tuples containing the top 300 diseases and the 
> top ten symptoms for each disease. 
> Using math expressions, the tuples can be *pivoted* into a matrix where the 
> rows are the diseases, the columns are the symptoms, and the cells contain 
> the counts. This matrix can then be *clustered* to find clusters of 
> *diseases* that are correlated by *symptoms*. 
> {code:java}
> let(a=facet2D(medrecords, q=*:*, x=diseases, y=symptoms, dimensions="300, 
> 10", count(*)),
> b=pivot(a, diseases, symptoms, count(*)),
> c=kmeans(b, 10)){code}
>  
> *Implementation Note:*
> The implementation plan for this ticket is to create a new stream called 
> Facet2DStream. The FacetStream code is a good starting point for the new 
> implementation and can be adapted for the Facet2D parameters. Tests similar 
> to those for FacetStream can be added to StreamExpressionTest.
>  
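The pivot step described above can be sketched in plain Java. This is a hypothetical illustration only, not Solr's actual pivot or Math Expressions implementation; the class name, method name, and sample data are invented:

```java
// Hypothetical sketch (not Solr code) of the "pivot" idea: turn
// (disease, symptom, count) tuples into a dense matrix with diseases
// as rows and symptoms as columns, suitable for clustering.
import java.util.Arrays;
import java.util.List;

public final class Facet2DPivotSketch {
    /** Pivots (x, y, value) string tuples into a rows-by-cols matrix. */
    public static double[][] pivot(List<String[]> tuples,
                                   List<String> rows, List<String> cols) {
        double[][] m = new double[rows.size()][cols.size()];
        for (String[] t : tuples) {
            int r = rows.indexOf(t[0]); // x dimension, e.g. disease
            int c = cols.indexOf(t[1]); // y dimension, e.g. symptom
            if (r >= 0 && c >= 0) {
                m[r][c] = Double.parseDouble(t[2]); // count(*)
            }
        }
        return m; // cells for absent (x, y) pairs stay 0.0
    }
}
```

Rows of the resulting matrix could then be fed to a k-means routine, mirroring the pivot/kmeans pipeline in the expression above.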






Re: [VOTE] Release Lucene/Solr 8.1.0 RC1

2019-05-09 Thread Jan Høydahl
They're just test bugs (which should have been fixed as part of SOLR-13449, 
really). I'll merge to master and branch_8x; that's enough.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 9. mai 2019 kl. 17:38 skrev Ishan Chattopadhyaya :
> 
> Jan, please feel free to merge it to the 8.1 branch for RC3, if that
> happens. Sorry for missing it this time.
> 
> On Thu, May 9, 2019 at 8:51 PM Ishan Chattopadhyaya
>  wrote:
>> 
>> Oops, I just uploaded the RC2 and was about to send out the mail for it.
>> Since they are test-only changes, even if I build another RC2 now, it
>> won't have much impact on the user, right?
>> 
>> On Thu, May 9, 2019 at 7:44 PM Jan Høydahl  wrote:
>>> 
>>> I fixed https://issues.apache.org/jira/browse/SOLR-13453, see 
>>> https://github.com/apache/lucene-solr/pull/668
>>> Can we merge to 8.1 for RC2?
>>> 
>>> --
>>> Jan Høydahl, search solution architect
>>> Cominvent AS - www.cominvent.com
>>> 
>>> 9. mai 2019 kl. 08:42 skrev Ishan Chattopadhyaya 
>>> :
>>> 
>>> Okay, sure. I'll re-spin. :-(
>>> This vote is cancelled. Thanks to Tomoko, David, Kevin and Varun for voting.
>>> 
>>> On Thu, May 9, 2019 at 7:19 AM Noble Paul  wrote:
>>> 
>>> 
>>> It's a bug fix. So,we should include it
>>> 
>>> On Thu, May 9, 2019 at 8:13 AM Ishan Chattopadhyaya
>>>  wrote:
>>> 
>>> 
>>> Hi Dat,
>>> 
>>> Should we respin the release for SOLR-13449.
>>> 
>>> 
>>> I don't fully understand the implications of not having SOLR-13449. If
>>> you (or someone else) suggest(s) that this needs to go into 8.1, then
>>> I'll re-spin RC2 tomorrow.
>>> 
>>> Thanks,
>>> Ishan
>>> 
>>> On Thu, May 9, 2019 at 3:29 AM Varun Thacker  wrote:
>>> 
>>> 
>>> SUCCESS! [1:08:48.869786]
>>> 
>>> 
>>> On Wed, May 8, 2019 at 1:16 PM Đạt Cao Mạnh  wrote:
>>> 
>>> 
>>> Hi Ishan,
>>> 
>>> Should we respin the release for SOLR-13449.
>>> 
>>> On Wed, 8 May 2019 at 17:45, Kevin Risden  wrote:
>>> 
>>> 
>>> +1 SUCCESS! [1:15:45.039228]
>>> 
>>> Kevin Risden
>>> 
>>> 
>>> On Wed, May 8, 2019 at 11:12 AM David Smiley  
>>> wrote:
>>> 
>>> 
>>> +1
>>> SUCCESS! [1:29:43.016321]
>>> 
>>> Thanks for doing the release Ishan!
>>> 
>>> ~ David Smiley
>>> Apache Lucene/Solr Search Developer
>>> http://www.linkedin.com/in/davidwsmiley
>>> 
>>> 
>>> On Tue, May 7, 2019 at 1:49 PM Ishan Chattopadhyaya 
>>>  wrote:
>>> 
>>> 
>>> Please vote for release candidate 1 for Lucene/Solr 8.1.0
>>> 
>>> The artifacts can be downloaded from:
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
>>> 
>>> You can run the smoke tester directly with this command:
>>> 
>>> python3 -u dev-tools/scripts/smokeTestRelease.py \
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
>>> 
>>> Here's my +1
>>> SUCCESS! [0:46:38.948020]
>>> 
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>> 
>>> --
>>> Best regards,
>>> Cao Mạnh Đạt
>>> D.O.B : 31-07-1991
>>> Cell: (+84) 946.328.329
>>> E-mail: caomanhdat...@gmail.com
>>> 
>>> 
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>> 
>>> 
>>> 
>>> --
>>> -
>>> Noble Paul
>>> 
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>> 
>>> 
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>> 
>>> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 



[jira] [Commented] (SOLR-13049) make contrib/ltr Feature.defaultValue configurable

2019-05-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836562#comment-16836562
 ] 

ASF subversion and git services commented on SOLR-13049:


Commit 726fb8facc15653fc3358521d226f10bd7dfff9c in lucene-solr's branch 
refs/heads/branch_8x from Christine Poerschke
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=726fb8f ]

SOLR-13049: Make contrib/ltr Feature.defaultValue configurable. (Stanislav 
Livotov, Christine Poerschke)


> make contrib/ltr Feature.defaultValue configurable
> --
>
> Key: SOLR-13049
> URL: https://issues.apache.org/jira/browse/SOLR-13049
> Project: Solr
>  Issue Type: New Feature
>  Components: contrib - LTR
>Reporter: Stanislav Livotov
>Assignee: Christine Poerschke
>Priority: Major
> Attachments: SOLR-13049.patch, SOLR-13049.patch
>
>
> [~slivotov] wrote in SOLR-12697:
> {quote}
> I had also done a couple of additional code changes:
> 1. fixed a small issue with defaultValue (previously it was impossible to set 
> it from feature.json, and the tests were written with Feature created 
> manually rather than by parsing JSON). Tests were added that validate 
> defaultValue from the schema field configuration and from a feature's default 
> value.
> {quote}
> (Please see 
> https://issues.apache.org/jira/browse/SOLR-12697?focusedCommentId=16708618&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16708618
>  for more context.)






Re: [VOTE] Release Lucene/Solr 8.1.0 RC2

2019-05-09 Thread Kevin Risden
+1
SUCCESS! [1:17:45.727492]

Kevin Risden


On Thu, May 9, 2019 at 11:37 AM Ishan Chattopadhyaya <
ichattopadhy...@gmail.com> wrote:

> Please vote for release candidate 2 for Lucene/Solr 8.1.0
>
> The artifacts can be downloaded from:
>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC2-revdbe5ed0b2f17677ca6c904ebae919363f2d36a0a
>
> You can run the smoke tester directly with this command:
>
> python3 -u dev-tools/scripts/smokeTestRelease.py \
>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC2-revdbe5ed0b2f17677ca6c904ebae919363f2d36a0a
>
> Here's my +1
> SUCCESS! [0:44:31.244021]
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (SOLR-13049) make contrib/ltr Feature.defaultValue configurable

2019-05-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836527#comment-16836527
 ] 

ASF subversion and git services commented on SOLR-13049:


Commit 38573881368344aba24b8e819955f428f52873fd in lucene-solr's branch 
refs/heads/master from Christine Poerschke
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=3857388 ]

SOLR-13049: Make contrib/ltr Feature.defaultValue configurable. (Stanislav 
Livotov, Christine Poerschke)


> make contrib/ltr Feature.defaultValue configurable
> --
>
> Key: SOLR-13049
> URL: https://issues.apache.org/jira/browse/SOLR-13049
> Project: Solr
>  Issue Type: New Feature
>  Components: contrib - LTR
>Reporter: Stanislav Livotov
>Assignee: Christine Poerschke
>Priority: Major
> Attachments: SOLR-13049.patch, SOLR-13049.patch
>
>
> [~slivotov] wrote in SOLR-12697:
> {quote}
> I had also done a couple of additional code changes:
> 1. fixed a small issue with defaultValue (previously it was impossible to set 
> it from feature.json, and the tests were written with Feature created 
> manually rather than by parsing JSON). Tests were added that validate 
> defaultValue from the schema field configuration and from a feature's default 
> value.
> {quote}
> (Please see 
> https://issues.apache.org/jira/browse/SOLR-12697?focusedCommentId=16708618&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16708618
>  for more context.)






[jira] [Commented] (SOLR-13263) Facet Heat Map should support GeoJSON

2019-05-09 Thread Bar Rotstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836513#comment-16836513
 ] 

Bar Rotstein commented on SOLR-13263:
-

 {quote}So the test is fundamentally wrong to compare a surface-of-sphere shape 
to a lat-lon rectangle (which isn't surface-of-sphere) wherein the inputs are 
the same since the result won't match.{quote}
I am having trouble coming up with better tests, since I am very new to GIS.
I am currently looking through Solr's test cases to better understand this 
issue.
Do you have a test case I should look at to better understand why my test is 
wrong?

> Facet Heat Map should support GeoJSON
> -
>
> Key: SOLR-13263
> URL: https://issues.apache.org/jira/browse/SOLR-13263
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting
>Affects Versions: 8.0, 8.1, master (9.0)
>Reporter: Bar Rotstein
>Priority: Major
>  Labels: Facets, Geolocation, facet, faceting, geo
> Attachments: SOLR-13263-nocommit-geo3d-failure.patch, 
> SOLR-13263-nocommit.patch
>
>
> Currently Facet Heatmap (geographical facets) does not support any formats 
> other than WKT or '[ ]'. This seems to be because 
> FacetHeatmap.Parser#parse uses SpatialUtils#parseGeomSolrException, which in 
> turn uses a deprecated JTS method (SpatialContext#readShapeFromWkt) to parse 
> the string input.
> The newer method of parsing a String to a Shape object should be used; it 
> makes the code a lot cleaner and should support more formats (including 
> GeoJSON).






[jira] [Commented] (SOLR-13389) rectify discrepencies in socket (and connect) timeout values used throughout the code and tests - probably helping to reduce TimeoutExceptions in tests

2019-05-09 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836510#comment-16836510
 ] 

Gus Heck commented on SOLR-13389:
-

I've filed SOLR-13457 so that this issue can focus on test values vs. 
production values.

> rectify discrepencies in socket (and connect) timeout values used throughout 
> the code and tests - probably helping to reduce TimeoutExceptions in tests
> ---
>
> Key: SOLR-13389
> URL: https://issues.apache.org/jira/browse/SOLR-13389
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
>
> While looking into some jenkins test failures caused by distributed requests 
> that timeout, i realized that the "socket timeout" aka "idle timeout" aka 
> "SO_TIMEOUT" values used in various places in the code & sample configs can 
> vary significantly, and in the case of *test* configs/code can differ from 
> the default / production configs by an order of magnitude.
> I think we should consider rectifying some of the various places/ways that 
> different values are sprinkled through out the code to reduce the number of 
> (different) places we have magic constants.  I believe a large number of 
> jenkins test failures we currently see due to timeout exceptions are simply 
> because tests (or test configs) override sensible defaults w/values that are 
> too low to be useful.
> (NOTE: all of these problems / discrepancies also apply to "connect timeout" 
> which should probably be addressed at the same time, but for now i'm focusing 
> on the "socket timeout" since it seems to be the bigger problem in jenkins 
> failures -- if we reach consensus on standardizing some values across the 
> board the same approach can be made to connect timeouts at the same time)






[jira] [Created] (SOLR-13457) Managing Timeout values in Solr

2019-05-09 Thread Gus Heck (JIRA)
Gus Heck created SOLR-13457:
---

 Summary: Managing Timeout values in Solr
 Key: SOLR-13457
 URL: https://issues.apache.org/jira/browse/SOLR-13457
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: master (9.0)
Reporter: Gus Heck


Presently, Solr has a variety of timeouts for various connections or 
operations. Throughout the history of the project, these timeouts have been 
added, tweaked, and refined in an ad-hoc manner by the contributors of 
individual features, and in some cases made configurable. This is all well and 
good until one experiences a timeout during an otherwise valid use case and 
needs to adjust it.

This has also made managing timeouts in unit tests "interesting" as noted in 
SOLR-13389.

Probably nobody has the spare time to do a tour de force through the code and 
coordinate every single timeout, so in this ticket I'd like to establish a 
framework for categorizing timeouts, a standard for how we make each category 
configurable, and then add sub-tickets to address individual timeouts.

The intention is that eventually there will be no "magic number" timeout 
values in code, and one can predict where to find the configuration for a 
timeout by determining its category.

Initial strawman categories (feel free to knock down or suggest alternatives):
 # *Feature-Instance Timeout*: Timeouts that relate to a particular 
instantiation of a feature, for example a database connection timeout for a 
connection to a particular database by DIH. These should be set in the 
configuration of that instance.
 # *Optional Feature Timeout*: A timeout that only has meaning in the context 
of a particular feature that is not required for solr to function... i.e. 
something that can be turned on or off. Perhaps a timeout for communication 
with an external ldap for authentication purposes. These should be configured 
in the same configuration that enables this feature.
 # *Global System Timeout*: A timeout that will always be an active part of 
Solr; these should be configured in a new section of solr.xml. For example, 
the Jetty thread idle timeout, or the default timeout for HTTP calls between 
nodes.
 # *Node Specific Timeout*: A timeout which may differ on different nodes. I 
don't know of any of these, but I'll grant the possibility. These (and only 
these) should be set by setting system properties. If we don't have any of 
these, that's just fine :).

*Note that in no case is a hard-coded value the correct solution.*

If we get a consensus on categories and their locations, then the next step is 
to begin adding sub tickets to bring specific timeouts into compliance. Every 
such ticket should include an update to the section of the ref guide 
documenting the configuration to which the timeout has been added (e.g. docs 
for solr.xml for Global System Timeouts) describing what exactly is affected by 
the timeout, the maximum allowed value and how zero and negative numbers are 
handled.

It is of course true that some of these values will have the potential to 
destroy system performance or integrity, and that should be mentioned in the 
update to documentation.
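A minimal sketch of the "no magic numbers" idea above, assuming a hypothetical central registry; the class name, method names, and timeout keys are all invented for illustration and are not an actual Solr API:

```java
// Hypothetical sketch: call sites name a timeout and declare a sensible
// default, instead of embedding a magic number. Configuration (e.g. a
// proposed timeouts section of solr.xml) can override by name; zero or
// negative configured values fall back to the call-site default.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class TimeoutRegistry {
    private final Map<String, Long> configuredMillis = new ConcurrentHashMap<>();

    /** Called once while parsing the (proposed) timeout configuration. */
    public void configure(String name, long millis) {
        configuredMillis.put(name, millis);
    }

    /** Resolves a named timeout, falling back to the call-site default. */
    public long millis(String name, long defaultMillis) {
        Long v = configuredMillis.get(name);
        return (v == null || v <= 0) ? defaultMillis : v;
    }
}
```

With something like this, "where is this timeout configured?" has one answer per category, and tests can lower every timeout in one place instead of scattering overrides.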






[jira] [Comment Edited] (SOLR-13263) Facet Heat Map should support GeoJSON

2019-05-09 Thread Bar Rotstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836499#comment-16836499
 ] 

Bar Rotstein edited comment on SOLR-13263 at 5/9/19 4:07 PM:
-

{quote}Your top latitude is 90 which touches the north pole. _Debatably_ this 
wraps the world; you could argue it either way. This makes reasoning about what 
the bounding box _should_ be in a test debatable and thus not a good test 
input. You could lower to say 70.
{quote}
Even after changing the max Latitude value to 70, the parsed shape's bounding 
box is still GeoWorld.
{quote}On the surface of a sphere, the parsed shape of 4 points is different 
than a Euclidean 2D plane. Geo3D is surface-of-sphere. Thus horizontal lines 
above the equator bow upwards when viewed on a 2D plane. So the test is 
fundamentally wrong to compare a surface-of-sphere shape to a lat-lon rectangle 
(which isn't surface-of-sphere) wherein the inputs are the same since the 
result won't match.
{quote}
Ouch, my bad.
 I am very inexperienced when it comes to GIS.
 Would keeping all polygons make this test OK logic-wise?


was (Author: brot):
{quote}Your top latitude is 90 which touches the north pole. _Debatably_ this 
wraps the world; you could argue it either way. This makes reasoning about what 
the bounding box _should_ be in a test debatable and thus not a good test 
input. You could lower to say 70.{quote}
Even after changing the maxY value to 70, the parsed shape's bounding box is 
still GeoWorld.

{quote}On the surface of a sphere, the parsed shape of 4 points is different 
than a Euclidean 2D plane. Geo3D is surface-of-sphere. Thus horizontal lines 
above the equator bow upwards when viewed on a 2D plane. So the test is 
fundamentally wrong to compare a surface-of-sphere shape to a lat-lon rectangle 
(which isn't surface-of-sphere) wherein the inputs are the same since the 
result won't match.{quote}
Ouch, my bad.
I am very inexperienced when it comes to GIS.
Would keeping all polygons make this test OK logic-wise?

> Facet Heat Map should support GeoJSON
> -
>
> Key: SOLR-13263
> URL: https://issues.apache.org/jira/browse/SOLR-13263
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting
>Affects Versions: 8.0, 8.1, master (9.0)
>Reporter: Bar Rotstein
>Priority: Major
>  Labels: Facets, Geolocation, facet, faceting, geo
> Attachments: SOLR-13263-nocommit-geo3d-failure.patch, 
> SOLR-13263-nocommit.patch
>
>
> Currently Facet Heatmap (geographical facets) does not support any formats 
> other than WKT or '[ ]'. This seems to be because 
> FacetHeatmap.Parser#parse uses SpatialUtils#parseGeomSolrException, which in 
> turn uses a deprecated JTS method (SpatialContext#readShapeFromWkt) to parse 
> the string input.
> The newer method of parsing a String to a Shape object should be used; it 
> makes the code a lot cleaner and should support more formats (including 
> GeoJSON).









[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11.0.2) - Build # 24056 - Failure!

2019-05-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24056/
Java: 64bit/jdk-11.0.2 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 62648 lines...]
-ecj-javadoc-lint-tests:
[mkdir] Created dir: /tmp/ecj533909846
 [ecj-lint] Compiling 48 source files to /tmp/ecj533909846
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 23)
 [ecj-lint] import javax.naming.NamingException;
 [ecj-lint]
 [ecj-lint] The type javax.naming.NamingException is not accessible
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 28)
 [ecj-lint] public class MockInitialContextFactory implements 
InitialContextFactory {
 [ecj-lint]  ^
 [ecj-lint] The type MockInitialContextFactory must implement the inherited 
abstract method InitialContextFactory.getInitialContext(Hashtable)
 [ecj-lint] --
 [ecj-lint] 3. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 30)
 [ecj-lint] private final javax.naming.Context context;
 [ecj-lint]   
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 4. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 33)
 [ecj-lint] context = mock(javax.naming.Context.class);
 [ecj-lint] ^^^
 [ecj-lint] context cannot be resolved to a variable
 [ecj-lint] --
 [ecj-lint] 5. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 33)
 [ecj-lint] context = mock(javax.naming.Context.class);
 [ecj-lint]
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 6. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 36)
 [ecj-lint] when(context.lookup(anyString())).thenAnswer(invocation -> 
objects.get(invocation.getArgument(0)));
 [ecj-lint]  ^^^
 [ecj-lint] context cannot be resolved
 [ecj-lint] --
 [ecj-lint] 7. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 38)
 [ecj-lint] } catch (NamingException e) {
 [ecj-lint]  ^^^
 [ecj-lint] NamingException cannot be resolved to a type
 [ecj-lint] --
 [ecj-lint] 8. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 45)
 [ecj-lint] public javax.naming.Context getInitialContext(Hashtable env) {
 [ecj-lint]
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] --
 [ecj-lint] 9. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java
 (at line 46)
 [ecj-lint] return context;
 [ecj-lint]^^^
 [ecj-lint] context cannot be resolved to a variable
 [ecj-lint] --
 [ecj-lint] 9 problems (9 errors)

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:634: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:101: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build.xml:687: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/common-build.xml:479: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:2016: 
The following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/common-build.xml:2049: 
Compile failed; see the compiler error output for 

Re: [VOTE] Release Lucene/Solr 8.1.0 RC1

2019-05-09 Thread Thomas Wöckinger
Fixed three other bugs (SOLR-13347, SOLR-13331, SOLR-11841) with PR
https://github.com/apache/lucene-solr/pull/665, which needs to be reviewed;
it ensures correct behavior when using SolrJ as a client!

On Thu, May 9, 2019 at 4:13 PM Jan Høydahl  wrote:

> I fixed https://issues.apache.org/jira/browse/SOLR-13453, see
> https://github.com/apache/lucene-solr/pull/668
> Can we merge to 8.1 for RC2?
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> 9. mai 2019 kl. 08:42 skrev Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com>:
>
> Okay, sure. I'll re-spin. :-(
> This vote is cancelled. Thanks to Tomoko, David, Kevin and Varun for
> voting.
>
> On Thu, May 9, 2019 at 7:19 AM Noble Paul  wrote:
>
>
> It's a bug fix. So,we should include it
>
> On Thu, May 9, 2019 at 8:13 AM Ishan Chattopadhyaya
>  wrote:
>
>
> Hi Dat,
>
> Should we respin the release for SOLR-13449.
>
>
> I don't fully understand the implications of not having SOLR-13449. If
> you (or someone else) suggest(s) that this needs to go into 8.1, then
> I'll re-spin RC2 tomorrow.
>
> Thanks,
> Ishan
>
> On Thu, May 9, 2019 at 3:29 AM Varun Thacker  wrote:
>
>
> SUCCESS! [1:08:48.869786]
>
>
> On Wed, May 8, 2019 at 1:16 PM Đạt Cao Mạnh 
> wrote:
>
>
> Hi Ishan,
>
> Should we respin the release for SOLR-13449.
>
> On Wed, 8 May 2019 at 17:45, Kevin Risden  wrote:
>
>
> +1 SUCCESS! [1:15:45.039228]
>
> Kevin Risden
>
>
> On Wed, May 8, 2019 at 11:12 AM David Smiley 
> wrote:
>
>
> +1
> SUCCESS! [1:29:43.016321]
>
> Thanks for doing the release Ishan!
>
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Tue, May 7, 2019 at 1:49 PM Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
>
>
> Please vote for release candidate 1 for Lucene/Solr 8.1.0
>
> The artifacts can be downloaded from:
>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
>
> You can run the smoke tester directly with this command:
>
> python3 -u dev-tools/scripts/smokeTestRelease.py \
>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
>
> Here's my +1
> SUCCESS! [0:46:38.948020]
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
> Best regards,
> Cao Mạnh Đạt
> D.O.B : 31-07-1991
> Cell: (+84) 946.328.329
> E-mail: caomanhdat...@gmail.com 
>
>
>
>
>
> --
> -
> Noble Paul
>
>
>
>
>
>


Re: [VOTE] Release Lucene/Solr 8.1.0 RC1

2019-05-09 Thread Ishan Chattopadhyaya
Jan, please feel free to merge it to the 8.1 branch for RC3, if that
happens. Sorry for missing it this time.

On Thu, May 9, 2019 at 8:51 PM Ishan Chattopadhyaya
 wrote:
>
> Oops, I just uploaded the RC2 and was about to send out the mail for it.
> Since they are test only changes, even if I build another RC2 now, it
> won't be of much impact to the user, right?
>
> On Thu, May 9, 2019 at 7:44 PM Jan Høydahl  wrote:
> >
> > I fixed https://issues.apache.org/jira/browse/SOLR-13453, see 
> > https://github.com/apache/lucene-solr/pull/668
> > Can we merge to 8.1 for RC2?
> >
> > --
> > Jan Høydahl, search solution architect
> > Cominvent AS - www.cominvent.com
> >
> > 9. mai 2019 kl. 08:42 skrev Ishan Chattopadhyaya 
> > :
> >
> > Okay, sure. I'll re-spin. :-(
> > This vote is cancelled. Thanks to Tomoko, David, Kevin and Varun for voting.
> >
> > On Thu, May 9, 2019 at 7:19 AM Noble Paul  wrote:
> >
> >
> > It's a bug fix. So, we should include it
> >
> > On Thu, May 9, 2019 at 8:13 AM Ishan Chattopadhyaya
> >  wrote:
> >
> >
> > Hi Dat,
> >
> > Should we respin the release for SOLR-13449.
> >
> >
> > I don't fully understand the implications of not having SOLR-13449. If
> > you (or someone else) suggest(s) that this needs to go into 8.1, then
> > I'll re-spin RC2 tomorrow.
> >
> > Thanks,
> > Ishan
> >
> > On Thu, May 9, 2019 at 3:29 AM Varun Thacker  wrote:
> >
> >
> > SUCCESS! [1:08:48.869786]
> >
> >
> > On Wed, May 8, 2019 at 1:16 PM Đạt Cao Mạnh  wrote:
> >
> >
> > Hi Ishan,
> >
> > Should we respin the release for SOLR-13449.
> >
> > On Wed, 8 May 2019 at 17:45, Kevin Risden  wrote:
> >
> >
> > +1 SUCCESS! [1:15:45.039228]
> >
> > Kevin Risden
> >
> >
> > On Wed, May 8, 2019 at 11:12 AM David Smiley  
> > wrote:
> >
> >
> > +1
> > SUCCESS! [1:29:43.016321]
> >
> > Thanks for doing the release Ishan!
> >
> > ~ David Smiley
> > Apache Lucene/Solr Search Developer
> > http://www.linkedin.com/in/davidwsmiley
> >
> >
> > On Tue, May 7, 2019 at 1:49 PM Ishan Chattopadhyaya 
> >  wrote:
> >
> >
> > Please vote for release candidate 1 for Lucene/Solr 8.1.0
> >
> > The artifacts can be downloaded from:
> > https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
> >
> > You can run the smoke tester directly with this command:
> >
> > python3 -u dev-tools/scripts/smokeTestRelease.py \
> > https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
> >
> > Here's my +1
> > SUCCESS! [0:46:38.948020]
> >
> >
> > --
> > Best regards,
> > Cao Mạnh Đạt
> > D.O.B : 31-07-1991
> > Cell: (+84) 946.328.329
> > E-mail: caomanhdat...@gmail.com
> >
> >
> >
> >
> >
> > --
> > -
> > Noble Paul
> >
> >
> >
> >
> >




[GitHub] [lucene-solr] thomaswoeckinger commented on issue #665: Fixes for SOLR-13331 and SOLR-13347

2019-05-09 Thread GitBox
thomaswoeckinger commented on issue #665: Fixes for SOLR-13331 and SOLR-13347
URL: https://github.com/apache/lucene-solr/pull/665#issuecomment-490954190
 
 
   Found two other bugs when using a codec other than JavaBinCodec; will do 
another PR.
   @gerlowskija Did you have time to review the PR?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[VOTE] Release Lucene/Solr 8.1.0 RC2

2019-05-09 Thread Ishan Chattopadhyaya
Please vote for release candidate 2 for Lucene/Solr 8.1.0

The artifacts can be downloaded from:
https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC2-revdbe5ed0b2f17677ca6c904ebae919363f2d36a0a

You can run the smoke tester directly with this command:

python3 -u dev-tools/scripts/smokeTestRelease.py \
https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC2-revdbe5ed0b2f17677ca6c904ebae919363f2d36a0a

Here's my +1
SUCCESS! [0:44:31.244021]




[jira] [Comment Edited] (LUCENE-8796) Use exponential search in IntArrayDocIdSet advance method

2019-05-09 Thread Luca Cavanna (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836467#comment-16836467
 ] 

Luca Cavanna edited comment on LUCENE-8796 at 5/9/19 3:21 PM:
--

I have updated the PR after applying Yonik's suggestion and re-ran the 
benchmarks a few times. The run with the least noise had these results (note 
that I disabled the bitset optimization on both sides):
{noformat}
Report after iter 19:
Task  QPS baseline  StdDev  QPS my_modified_version  StdDev  Pct diff
HighTerm 1575.07 (5.9%) 1541.27 (6.9%) -2.1% ( -14% - 11%)
MedTerm 1363.22 (6.5%) 1337.03 (7.0%) -1.9% ( -14% - 12%)
LowTerm 1441.86 (4.2%) 1420.77 (5.2%) -1.5% ( -10% - 8%)
IntNRQConjMedTerm 280.55 (4.0%) 277.64 (4.1%) -1.0% ( -8% - 7%)
MedPhrase 153.84 (3.5%) 152.44 (3.3%) -0.9% ( -7% - 6%)
Prefix3 224.92 (4.0%) 223.13 (3.7%) -0.8% ( -8% - 7%)
HighSloppyPhrase 19.70 (3.7%) 19.56 (4.5%) -0.7% ( -8% - 7%)
MedSloppyPhrase 18.23 (4.3%) 18.11 (4.7%) -0.7% ( -9% - 8%)
OrNotHighMed 586.33 (3.4%) 582.47 (4.9%) -0.7% ( -8% - 7%)
LowSloppyPhrase 18.56 (3.6%) 18.46 (3.9%) -0.5% ( -7% - 7%)
HighPhrase 22.64 (2.7%) 22.54 (3.0%) -0.4% ( -6% - 5%)
LowPhrase 144.10 (3.8%) 143.55 (3.3%) -0.4% ( -7% - 6%)
AndHighLow 539.26 (3.7%) 537.25 (3.2%) -0.4% ( -7% - 6%)
PKLookup 132.96 (3.0%) 132.48 (4.6%) -0.4% ( -7% - 7%)
OrHighMed 115.79 (2.7%) 115.49 (3.5%) -0.3% ( -6% - 6%)
PrefixConjHighTerm 36.98 (2.8%) 36.93 (3.4%) -0.1% ( -6% - 6%)
WildcardConjHighTerm 45.79 (3.0%) 45.73 (3.1%) -0.1% ( -6% - 6%)
OrHighLow 448.91 (3.7%) 448.70 (6.3%) -0.0% ( -9% - 10%)
Wildcard 78.89 (3.2%) 78.95 (3.6%) 0.1% ( -6% - 7%)
IntNRQConjHighTerm 78.35 (2.3%) 78.48 (2.4%) 0.2% ( -4% - 4%)
IntNRQ 100.56 (2.7%) 100.84 (2.8%) 0.3% ( -5% - 5%)
OrHighNotLow 732.45 (2.8%) 734.56 (5.3%) 0.3% ( -7% - 8%)
OrHighNotHigh 544.87 (2.8%) 546.47 (4.6%) 0.3% ( -6% - 7%)
IntNRQConjLowTerm 249.20 (4.2%) 249.99 (3.8%) 0.3% ( -7% - 8%)
Respell 73.05 (3.1%) 73.28 (3.4%) 0.3% ( -6% - 7%)
OrHighHigh 35.56 (3.0%) 35.68 (4.2%) 0.3% ( -6% - 7%)
OrNotHighLow 695.41 (4.8%) 697.88 (6.5%) 0.4% ( -10% - 12%)
MedSpanNear 59.99 (3.8%) 60.30 (4.0%) 0.5% ( -7% - 8%)
AndHighMed 190.02 (3.1%) 191.04 (3.6%) 0.5% ( -5% - 7%)
LowSpanNear 12.73 (3.9%) 12.81 (4.2%) 0.6% ( -7% - 8%)
HighTermDayOfYearSort 88.42 (7.0%) 89.09 (7.1%) 0.8% ( -12% - 15%)
PrefixConjLowTerm 54.95 (3.7%) 55.43 (3.8%) 0.9% ( -6% - 8%)
OrHighNotMed 628.44 (3.4%) 634.02 (6.1%) 0.9% ( -8% - 10%)
HighSpanNear 28.86 (3.2%) 29.11 (3.5%) 0.9% ( -5% - 7%)
WildcardConjMedTerm 72.48 (3.4%) 73.19 (4.8%) 1.0% ( -7% - 9%)
Fuzzy2 49.17 (9.9%) 49.68 (11.7%) 1.0% ( -18% - 25%)
AndHighHigh 63.44 (3.8%) 64.11 (3.8%) 1.1% ( -6% - 9%)
Fuzzy1 79.43 (9.9%) 80.55 (9.7%) 1.4% ( -16% - 23%)
OrNotHighHigh 574.89 (3.6%) 584.43 (5.5%) 1.7% ( -7% - 11%)
PrefixConjMedTerm 79.00 (3.2%) 80.50 (3.6%) 1.9% ( -4% - 8%)
WildcardConjLowTerm 90.67 (2.9%) 92.49 (3.7%) 2.0% ( -4% - 8%)
HighTermMonthSort 86.13 (11.8%) 88.79 (12.4%) 3.1% ( -18% - 30%)
{noformat}
I also ran benchmarks with the bitset optimization in place on both ends:

{noformat}
Report after iter 19:
Task  QPS baseline  StdDev  QPS my_modified_version  StdDev  Pct diff
IntNRQ 63.46 (24.6%) 62.28 (24.2%) -1.9% ( -40%
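The change benchmarked above replaces the scan in IntArrayDocIdSet's advance with exponential (galloping) search: grow the probe window by powers of two from the current position, then binary-search only the bracketed range. A minimal sketch of the idea, with hypothetical class, method, and variable names rather than Lucene's actual code:

```java
// Hypothetical sketch of exponential search over a sorted doc-id array,
// illustrating the LUCENE-8796 idea; not Lucene's IntArrayDocIdSet code.
public class ExponentialSearchDemo {
    // Returns the index of the first element >= target, searching from 'from'.
    static int advance(int[] docs, int from, int target) {
        int bound = 1;
        // Grow the window exponentially until it reaches or passes the target.
        while (from + bound < docs.length && docs[from + bound] < target) {
            bound <<= 1;
        }
        int lo = from + (bound >> 1);
        int hi = Math.min(docs.length - 1, from + bound);
        // Binary search within the bracketed window only.
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (docs[mid] < target) lo = mid + 1; else hi = mid - 1;
        }
        return lo; // == docs.length when no element >= target remains
    }

    public static void main(String[] args) {
        int[] docs = {1, 3, 7, 12, 40, 41, 90};
        if (advance(docs, 0, 8) != 3) throw new AssertionError();
        if (advance(docs, 0, 1) != 0) throw new AssertionError();
        if (advance(docs, 2, 100) != docs.length) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Because advance targets are typically close to the current position, bracketing a gap of d elements touches only O(log d) entries, which is why small-step conjunctions can benefit without penalizing large jumps.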

Re: [VOTE] Release Lucene/Solr 8.1.0 RC1

2019-05-09 Thread Ishan Chattopadhyaya
Oops, I just uploaded the RC2 and was about to send out the mail for it.
Since they are test only changes, even if I build another RC2 now, it
won't be of much impact to the user, right?

On Thu, May 9, 2019 at 7:44 PM Jan Høydahl  wrote:
>
> I fixed https://issues.apache.org/jira/browse/SOLR-13453, see 
> https://github.com/apache/lucene-solr/pull/668
> Can we merge to 8.1 for RC2?
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> 9. mai 2019 kl. 08:42 skrev Ishan Chattopadhyaya :
>
> Okay, sure. I'll re-spin. :-(
> This vote is cancelled. Thanks to Tomoko, David, Kevin and Varun for voting.
>
> On Thu, May 9, 2019 at 7:19 AM Noble Paul  wrote:
>
>
> It's a bug fix. So, we should include it
>
> On Thu, May 9, 2019 at 8:13 AM Ishan Chattopadhyaya
>  wrote:
>
>
> Hi Dat,
>
> Should we respin the release for SOLR-13449.
>
>
> I don't fully understand the implications of not having SOLR-13449. If
> you (or someone else) suggest(s) that this needs to go into 8.1, then
> I'll re-spin RC2 tomorrow.
>
> Thanks,
> Ishan
>
> On Thu, May 9, 2019 at 3:29 AM Varun Thacker  wrote:
>
>
> SUCCESS! [1:08:48.869786]
>
>
> On Wed, May 8, 2019 at 1:16 PM Đạt Cao Mạnh  wrote:
>
>
> Hi Ishan,
>
> Should we respin the release for SOLR-13449.
>
> On Wed, 8 May 2019 at 17:45, Kevin Risden  wrote:
>
>
> +1 SUCCESS! [1:15:45.039228]
>
> Kevin Risden
>
>
> On Wed, May 8, 2019 at 11:12 AM David Smiley  wrote:
>
>
> +1
> SUCCESS! [1:29:43.016321]
>
> Thanks for doing the release Ishan!
>
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Tue, May 7, 2019 at 1:49 PM Ishan Chattopadhyaya 
>  wrote:
>
>
> Please vote for release candidate 1 for Lucene/Solr 8.1.0
>
> The artifacts can be downloaded from:
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
>
> You can run the smoke tester directly with this command:
>
> python3 -u dev-tools/scripts/smokeTestRelease.py \
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
>
> Here's my +1
> SUCCESS! [0:46:38.948020]
>
>
> --
> Best regards,
> Cao Mạnh Đạt
> D.O.B : 31-07-1991
> Cell: (+84) 946.328.329
> E-mail: caomanhdat...@gmail.com
>
>
>
>
>
> --
> -
> Noble Paul
>
>
>
>
>




[jira] [Comment Edited] (LUCENE-8796) Use exponential search in IntArrayDocIdSet advance method

2019-05-09 Thread Luca Cavanna (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835542#comment-16835542
 ] 

Luca Cavanna edited comment on LUCENE-8796 at 5/9/19 3:22 PM:
--

I have made the change and played with luceneutil to run some benchmarks. I 
opened a PR here: [https://github.com/apache/lucene-solr/pull/667].

Luceneutil does not currently benchmark the queries that should be affected by 
this change, hence I added benchmarks for numeric range queries, prefix queries 
and wildcard queries in conjunction with term queries (low, medium and high 
frequency). See the changes I made to my luceneutil fork: 
[https://github.com/mikemccand/luceneutil/compare/master...javanna:conjunctions]
 .  Also, for the benchmarks I temporarily modified DocIdSetBuilder#grow to 
never call upgradeToBitSet (on both baseline and modified version), so that the 
updated code is exercised as much as possible during the benchmarks run, 
otherwise in many cases we would use bitsets instead and the changed code would 
not be exercised at all.

I ran the wikimedium10m benchmarks a few times, here is probably the run with 
the least noise, results show a little improvement for some queries, and no 
regressions in general:
  

 
{noformat}
 Report after iter 19:
 TaskQPS baseline StdDevQPS my_modified_version StdDev Pct diff
 WildcardConjMedTerm 75.49 (2.2%) 72.79 (2.0%) -3.6% ( -7% - 0%)
 OrHighNotMed 607.01 (5.7%) 593.10 (4.4%) -2.3% ( -11% - 8%)
 WildcardConjHighTerm 64.00 (1.7%) 62.55 (1.4%) -2.3% ( -5% - 0%)
 Fuzzy2 20.14 (3.4%) 19.72 (4.6%) -2.1% ( -9% - 6%)
 HighTerm 1174.41 (4.7%) 1150.11 (4.2%) -2.1% ( -10% - 7%)
 OrHighLow 483.40 (5.1%) 473.69 (6.9%) -2.0% ( -13% - 10%)
 OrNotHighLow 526.75 (3.6%) 516.47 (3.6%) -2.0% ( -8% - 5%)
 OrNotHighHigh 600.38 (4.9%) 590.21 (3.7%) -1.7% ( -9% - 7%)
 HighTermMonthSort 110.05 (11.7%) 108.58 (11.5%) -1.3% ( -21% - 24%)
 OrHighMed 107.83 (2.6%) 106.48 (4.7%) -1.3% ( -8% - 6%)
 PrefixConjMedTerm 56.98 (2.5%) 56.33 (1.7%) -1.1% ( -5% - 3%)
 AndHighLow 432.27 (3.6%) 427.46 (3.2%) -1.1% ( -7% - 5%)
 PrefixConjLowTerm 44.43 (2.8%) 43.98 (1.8%) -1.0% ( -5% - 3%)
 MedTerm 1409.97 (5.5%) 1396.33 (4.9%) -1.0% ( -10% - 9%)
 HighSloppyPhrase 11.98 (4.3%) 11.87 (5.1%) -0.9% ( -9% - 8%)
 OrNotHighMed 614.19 (4.6%) 608.74 (3.8%) -0.9% ( -8% - 7%)
 Respell 58.11 (2.4%) 57.61 (2.4%) -0.9% ( -5% - 3%)
 LowTerm 1342.33 (4.8%) 1330.86 (4.0%) -0.9% ( -9% - 8%)
 PrefixConjHighTerm 68.50 (2.9%) 67.93 (1.8%) -0.8% ( -5% - 3%)
 OrHighNotHigh 566.30 (5.2%) 561.88 (4.5%) -0.8% ( -9% - 9%)
 WildcardConjLowTerm 32.75 (2.5%) 32.56 (2.1%) -0.6% ( -5% - 4%)
 PKLookup 131.80 (2.4%) 131.28 (2.3%) -0.4% ( -5% - 4%)
 OrHighHigh 29.90 (3.4%) 29.79 (5.3%) -0.4% ( -8% - 8%)
 OrHighNotLow 497.65 (6.6%) 495.84 (5.2%) -0.4% ( -11% - 12%)
 AndHighMed 175.08 (3.5%) 174.58 (3.0%) -0.3% ( -6% - 6%)
 LowSpanNear 15.17 (1.8%) 15.13 (2.5%) -0.2% ( -4% - 4%)
 Fuzzy1 71.14 (5.9%) 70.97 (6.3%) -0.2% ( -11% - 12%)
 LowSloppyPhrase 35.23 (2.0%) 35.16 (2.6%) -0.2% ( -4% - 4%)
 LowPhrase 74.10 (1.7%) 73.98 (1.8%) -0.2% ( -3% - 3%)
 HighPhrase 34.18 (2.1%) 34.13 (2.0%) -0.1% ( -4% - 3%)
 Prefix3 45.33 (2.3%) 45.28 (2.1%) -0.1% ( -4% - 4%)
 MedPhrase 28.30 (2.1%) 28.27 (1.7%) -0.1% ( -3% - 3%)
 MedSloppyPhrase 6.80 (3.6%) 6.80 (3.2%) -0.0% ( -6% - 6%)
 AndHighHigh 53.79 (3.9%) 53.79 (4.0%) -0.0% ( -7% - 8%)
 MedSpanNear 61.78 (2.2%) 61.83 (1.7%) 0.1% ( -3% - 4%)
 Wildcard 37.83 (2.5%) 37.91 (1.7%) 0.2% ( -3% - 4%)
 IntNRQConjHighTerm 20.17 (3.8%) 20.24 (4.9%) 0.3% ( -8% - 9%)
 HighTermDayOfYearSort 53.55 (7.8%) 53.76 (7.3%) 0.4% ( -13% - 16%)
 HighSpanNear 5.39 (2.6%) 5.42 (2.6%) 0.5% ( -4% - 5%)
 IntNRQConjLowTerm 19.69 (4.3%) 19.86 (4.3%) 0.9% ( -7% - 9%)
 IntNRQConjMedTerm 15.93 (4.5%) 16.12 (5.4%) 1.2% ( -8% - 11%)
 IntNRQ 114.28 (10.3%) 116.41 (14.0%) 1.9% ( -20% - 29%)
 {noformat}
 


was (Author: lucacavanna):
I have made the change and played with luceneutil to run some benchmarks. I 
opened a PR here: [https://github.com/apache/lucene-solr/pull/667].

Luceneutil does not currently benchmark the queries that should be affected by 
this change, hence I added benchmarks for numeric range queries, prefix queries 
and wildcard queries in conjunction with term queries (low, medium and high 
frequency). See the changes I made to my luceneutil fork: 
[https://github.com/mikemccand/luceneutil/compare/master...javanna:conjunctions]
 .  Also, for the benchmarks I temporarily modified DocIdSetBuilder#grow to 
never call upgradeToBitSet (on both baseline and modified version), so that the 
updated code is exercised as much as possible during the benchmarks run, 
otherwise in many cases we would use bitsets instead and the changed code would 
not be exercised at all.

I ran the wikimedium10m benchmarks a few times, here is probably the run with 
the least noise, results show a little improvement for some queries, and no

[jira] [Comment Edited] (LUCENE-8796) Use exponential search in IntArrayDocIdSet advance method

2019-05-09 Thread Luca Cavanna (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836467#comment-16836467
 ] 

Luca Cavanna edited comment on LUCENE-8796 at 5/9/19 3:22 PM:
--

I have updated the PR after applying Yonik's suggestion and re-ran the 
benchmarks a few times. The run with the least noise had these results (note 
that I disabled the bitset optimization on both sides):
{noformat}
Report after iter 19:
Task  QPS baseline  StdDev  QPS my_modified_version  StdDev  Pct diff
HighTerm 1575.07 (5.9%) 1541.27 (6.9%) -2.1% ( -14% - 11%)
MedTerm 1363.22 (6.5%) 1337.03 (7.0%) -1.9% ( -14% - 12%)
LowTerm 1441.86 (4.2%) 1420.77 (5.2%) -1.5% ( -10% - 8%)
IntNRQConjMedTerm 280.55 (4.0%) 277.64 (4.1%) -1.0% ( -8% - 7%)
MedPhrase 153.84 (3.5%) 152.44 (3.3%) -0.9% ( -7% - 6%)
Prefix3 224.92 (4.0%) 223.13 (3.7%) -0.8% ( -8% - 7%)
HighSloppyPhrase 19.70 (3.7%) 19.56 (4.5%) -0.7% ( -8% - 7%)
MedSloppyPhrase 18.23 (4.3%) 18.11 (4.7%) -0.7% ( -9% - 8%)
OrNotHighMed 586.33 (3.4%) 582.47 (4.9%) -0.7% ( -8% - 7%)
LowSloppyPhrase 18.56 (3.6%) 18.46 (3.9%) -0.5% ( -7% - 7%)
HighPhrase 22.64 (2.7%) 22.54 (3.0%) -0.4% ( -6% - 5%)
LowPhrase 144.10 (3.8%) 143.55 (3.3%) -0.4% ( -7% - 6%)
AndHighLow 539.26 (3.7%) 537.25 (3.2%) -0.4% ( -7% - 6%)
PKLookup 132.96 (3.0%) 132.48 (4.6%) -0.4% ( -7% - 7%)
OrHighMed 115.79 (2.7%) 115.49 (3.5%) -0.3% ( -6% - 6%)
PrefixConjHighTerm 36.98 (2.8%) 36.93 (3.4%) -0.1% ( -6% - 6%)
WildcardConjHighTerm 45.79 (3.0%) 45.73 (3.1%) -0.1% ( -6% - 6%)
OrHighLow 448.91 (3.7%) 448.70 (6.3%) -0.0% ( -9% - 10%)
Wildcard 78.89 (3.2%) 78.95 (3.6%) 0.1% ( -6% - 7%)
IntNRQConjHighTerm 78.35 (2.3%) 78.48 (2.4%) 0.2% ( -4% - 4%)
IntNRQ 100.56 (2.7%) 100.84 (2.8%) 0.3% ( -5% - 5%)
OrHighNotLow 732.45 (2.8%) 734.56 (5.3%) 0.3% ( -7% - 8%)
OrHighNotHigh 544.87 (2.8%) 546.47 (4.6%) 0.3% ( -6% - 7%)
IntNRQConjLowTerm 249.20 (4.2%) 249.99 (3.8%) 0.3% ( -7% - 8%)
Respell 73.05 (3.1%) 73.28 (3.4%) 0.3% ( -6% - 7%)
OrHighHigh 35.56 (3.0%) 35.68 (4.2%) 0.3% ( -6% - 7%)
OrNotHighLow 695.41 (4.8%) 697.88 (6.5%) 0.4% ( -10% - 12%)
MedSpanNear 59.99 (3.8%) 60.30 (4.0%) 0.5% ( -7% - 8%)
AndHighMed 190.02 (3.1%) 191.04 (3.6%) 0.5% ( -5% - 7%)
LowSpanNear 12.73 (3.9%) 12.81 (4.2%) 0.6% ( -7% - 8%)
HighTermDayOfYearSort 88.42 (7.0%) 89.09 (7.1%) 0.8% ( -12% - 15%)
PrefixConjLowTerm 54.95 (3.7%) 55.43 (3.8%) 0.9% ( -6% - 8%)
OrHighNotMed 628.44 (3.4%) 634.02 (6.1%) 0.9% ( -8% - 10%)
HighSpanNear 28.86 (3.2%) 29.11 (3.5%) 0.9% ( -5% - 7%)
WildcardConjMedTerm 72.48 (3.4%) 73.19 (4.8%) 1.0% ( -7% - 9%)
Fuzzy2 49.17 (9.9%) 49.68 (11.7%) 1.0% ( -18% - 25%)
AndHighHigh 63.44 (3.8%) 64.11 (3.8%) 1.1% ( -6% - 9%)
Fuzzy1 79.43 (9.9%) 80.55 (9.7%) 1.4% ( -16% - 23%)
OrNotHighHigh 574.89 (3.6%) 584.43 (5.5%) 1.7% ( -7% - 11%)
PrefixConjMedTerm 79.00 (3.2%) 80.50 (3.6%) 1.9% ( -4% - 8%)
WildcardConjLowTerm 90.67 (2.9%) 92.49 (3.7%) 2.0% ( -4% - 8%)
HighTermMonthSort 86.13 (11.8%) 88.79 (12.4%) 3.1% ( -18% - 30%)
{noformat}
I also ran benchmarks with the bitset optimization in place on both ends:

{noformat}
Report after iter 19:
Task  QPS baseline  StdDev  QPS my_modified_version  StdDev  Pct diff

[jira] [Comment Edited] (LUCENE-8796) Use exponential search in IntArrayDocIdSet advance method

2019-05-09 Thread Luca Cavanna (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835542#comment-16835542
 ] 

Luca Cavanna edited comment on LUCENE-8796 at 5/9/19 3:21 PM:
--

I have made the change and played with luceneutil to run some benchmarks. I 
opened a PR here: [https://github.com/apache/lucene-solr/pull/667].

Luceneutil does not currently benchmark the queries that should be affected by 
this change, hence I added benchmarks for numeric range queries, prefix queries 
and wildcard queries in conjunction with term queries (low, medium and high 
frequency). See the changes I made to my luceneutil fork: 
[https://github.com/mikemccand/luceneutil/compare/master...javanna:conjunctions]
 .  Also, for the benchmarks I temporarily modified DocIdSetBuilder#grow to 
never call upgradeToBitSet (on both baseline and modified version), so that the 
updated code is exercised as much as possible during the benchmarks run, 
otherwise in many cases we would use bitsets instead and the changed code would 
not be exercised at all.

I ran the wikimedium10m benchmarks a few times, here is probably the run with 
the least noise, results show a little improvement for some queries, and no 
regressions in general:
  

{noformat}
 Report after iter 19:
 TaskQPS baseline StdDevQPS my_modified_version StdDev Pct diff
 WildcardConjMedTerm 75.49 (2.2%) 72.79 (2.0%) -3.6% ( -7% - 0%)
 OrHighNotMed 607.01 (5.7%) 593.10 (4.4%) -2.3% ( -11% - 8%)
 WildcardConjHighTerm 64.00 (1.7%) 62.55 (1.4%) -2.3% ( -5% - 0%)
 Fuzzy2 20.14 (3.4%) 19.72 (4.6%) -2.1% ( -9% - 6%)
 HighTerm 1174.41 (4.7%) 1150.11 (4.2%) -2.1% ( -10% - 7%)
 OrHighLow 483.40 (5.1%) 473.69 (6.9%) -2.0% ( -13% - 10%)
 OrNotHighLow 526.75 (3.6%) 516.47 (3.6%) -2.0% ( -8% - 5%)
 OrNotHighHigh 600.38 (4.9%) 590.21 (3.7%) -1.7% ( -9% - 7%)
 HighTermMonthSort 110.05 (11.7%) 108.58 (11.5%) -1.3% ( -21% - 24%)
 OrHighMed 107.83 (2.6%) 106.48 (4.7%) -1.3% ( -8% - 6%)
 PrefixConjMedTerm 56.98 (2.5%) 56.33 (1.7%) -1.1% ( -5% - 3%)
 AndHighLow 432.27 (3.6%) 427.46 (3.2%) -1.1% ( -7% - 5%)
 PrefixConjLowTerm 44.43 (2.8%) 43.98 (1.8%) -1.0% ( -5% - 3%)
 MedTerm 1409.97 (5.5%) 1396.33 (4.9%) -1.0% ( -10% - 9%)
 HighSloppyPhrase 11.98 (4.3%) 11.87 (5.1%) -0.9% ( -9% - 8%)
 OrNotHighMed 614.19 (4.6%) 608.74 (3.8%) -0.9% ( -8% - 7%)
 Respell 58.11 (2.4%) 57.61 (2.4%) -0.9% ( -5% - 3%)
 LowTerm 1342.33 (4.8%) 1330.86 (4.0%) -0.9% ( -9% - 8%)
 PrefixConjHighTerm 68.50 (2.9%) 67.93 (1.8%) -0.8% ( -5% - 3%)
 OrHighNotHigh 566.30 (5.2%) 561.88 (4.5%) -0.8% ( -9% - 9%)
 WildcardConjLowTerm 32.75 (2.5%) 32.56 (2.1%) -0.6% ( -5% - 4%)
 PKLookup 131.80 (2.4%) 131.28 (2.3%) -0.4% ( -5% - 4%)
 OrHighHigh 29.90 (3.4%) 29.79 (5.3%) -0.4% ( -8% - 8%)
 OrHighNotLow 497.65 (6.6%) 495.84 (5.2%) -0.4% ( -11% - 12%)
 AndHighMed 175.08 (3.5%) 174.58 (3.0%) -0.3% ( -6% - 6%)
 LowSpanNear 15.17 (1.8%) 15.13 (2.5%) -0.2% ( -4% - 4%)
 Fuzzy1 71.14 (5.9%) 70.97 (6.3%) -0.2% ( -11% - 12%)
 LowSloppyPhrase 35.23 (2.0%) 35.16 (2.6%) -0.2% ( -4% - 4%)
 LowPhrase 74.10 (1.7%) 73.98 (1.8%) -0.2% ( -3% - 3%)
 HighPhrase 34.18 (2.1%) 34.13 (2.0%) -0.1% ( -4% - 3%)
 Prefix3 45.33 (2.3%) 45.28 (2.1%) -0.1% ( -4% - 4%)
 MedPhrase 28.30 (2.1%) 28.27 (1.7%) -0.1% ( -3% - 3%)
 MedSloppyPhrase 6.80 (3.6%) 6.80 (3.2%) -0.0% ( -6% - 6%)
 AndHighHigh 53.79 (3.9%) 53.79 (4.0%) -0.0% ( -7% - 8%)
 MedSpanNear 61.78 (2.2%) 61.83 (1.7%) 0.1% ( -3% - 4%)
 Wildcard 37.83 (2.5%) 37.91 (1.7%) 0.2% ( -3% - 4%)
 IntNRQConjHighTerm 20.17 (3.8%) 20.24 (4.9%) 0.3% ( -8% - 9%)
 HighTermDayOfYearSort 53.55 (7.8%) 53.76 (7.3%) 0.4% ( -13% - 16%)
 HighSpanNear 5.39 (2.6%) 5.42 (2.6%) 0.5% ( -4% - 5%)
 IntNRQConjLowTerm 19.69 (4.3%) 19.86 (4.3%) 0.9% ( -7% - 9%)
 IntNRQConjMedTerm 15.93 (4.5%) 16.12 (5.4%) 1.2% ( -8% - 11%)
 IntNRQ 114.28 (10.3%) 116.41 (14.0%) 1.9% ( -20% - 29%)

{noformat}

 


was (Author: lucacavanna):
I have made the change and played with luceneutil to run some benchmarks. I 
opened a PR here: https://github.com/apache/lucene-solr/pull/667.

Luceneutil does not currently benchmark the queries that should be affected by 
this change, hence I added benchmarks for numeric range queries, prefix queries 
and wildcard queries in conjunction with term queries (low, medium and high 
frequency). See the changes I made to my luceneutil fork: 
[https://github.com/mikemccand/luceneutil/compare/master...javanna:conjunctions]
 .  Also, for the benchmarks I temporarily modified DocIdSetBuilder#grow to 
never call upgradeToBitSet (on both baseline and modified version), so that the 
updated code is exercised as much as possible during the benchmarks run, 
otherwise in many cases we would use bitsets instead and the changed code would 
not be exercised at all.

I ran the wikimedium10m benchmarks a few times, here is probably the run with 
the least noise, results show a little improvement for some queries, 

[jira] [Commented] (LUCENE-8796) Use exponential search in IntArrayDocIdSet advance method

2019-05-09 Thread Luca Cavanna (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836467#comment-16836467
 ] 

Luca Cavanna commented on LUCENE-8796:
--

I have updated the PR after applying Yonik's suggestion and re-ran the 
benchmarks a few times. The run with the least noise had these results (note 
that I disabled the bitset optimization on both sides):

{noformat}
Report after iter 19:
Task  QPS baseline  StdDev  QPS my_modified_version  StdDev  Pct diff
HighTerm 1575.07 (5.9%) 1541.27 (6.9%) -2.1% ( -14% - 11%)
MedTerm 1363.22 (6.5%) 1337.03 (7.0%) -1.9% ( -14% - 12%)
LowTerm 1441.86 (4.2%) 1420.77 (5.2%) -1.5% ( -10% - 8%)
IntNRQConjMedTerm 280.55 (4.0%) 277.64 (4.1%) -1.0% ( -8% - 7%)
MedPhrase 153.84 (3.5%) 152.44 (3.3%) -0.9% ( -7% - 6%)
Prefix3 224.92 (4.0%) 223.13 (3.7%) -0.8% ( -8% - 7%)
HighSloppyPhrase 19.70 (3.7%) 19.56 (4.5%) -0.7% ( -8% - 7%)
MedSloppyPhrase 18.23 (4.3%) 18.11 (4.7%) -0.7% ( -9% - 8%)
OrNotHighMed 586.33 (3.4%) 582.47 (4.9%) -0.7% ( -8% - 7%)
LowSloppyPhrase 18.56 (3.6%) 18.46 (3.9%) -0.5% ( -7% - 7%)
HighPhrase 22.64 (2.7%) 22.54 (3.0%) -0.4% ( -6% - 5%)
LowPhrase 144.10 (3.8%) 143.55 (3.3%) -0.4% ( -7% - 6%)
AndHighLow 539.26 (3.7%) 537.25 (3.2%) -0.4% ( -7% - 6%)
PKLookup 132.96 (3.0%) 132.48 (4.6%) -0.4% ( -7% - 7%)
OrHighMed 115.79 (2.7%) 115.49 (3.5%) -0.3% ( -6% - 6%)
PrefixConjHighTerm 36.98 (2.8%) 36.93 (3.4%) -0.1% ( -6% - 6%)
WildcardConjHighTerm 45.79 (3.0%) 45.73 (3.1%) -0.1% ( -6% - 6%)
OrHighLow 448.91 (3.7%) 448.70 (6.3%) -0.0% ( -9% - 10%)
Wildcard 78.89 (3.2%) 78.95 (3.6%) 0.1% ( -6% - 7%)
IntNRQConjHighTerm 78.35 (2.3%) 78.48 (2.4%) 0.2% ( -4% - 4%)
IntNRQ 100.56 (2.7%) 100.84 (2.8%) 0.3% ( -5% - 5%)
OrHighNotLow 732.45 (2.8%) 734.56 (5.3%) 0.3% ( -7% - 8%)
OrHighNotHigh 544.87 (2.8%) 546.47 (4.6%) 0.3% ( -6% - 7%)
IntNRQConjLowTerm 249.20 (4.2%) 249.99 (3.8%) 0.3% ( -7% - 8%)
Respell 73.05 (3.1%) 73.28 (3.4%) 0.3% ( -6% - 7%)
OrHighHigh 35.56 (3.0%) 35.68 (4.2%) 0.3% ( -6% - 7%)
OrNotHighLow 695.41 (4.8%) 697.88 (6.5%) 0.4% ( -10% - 12%)
MedSpanNear 59.99 (3.8%) 60.30 (4.0%) 0.5% ( -7% - 8%)
AndHighMed 190.02 (3.1%) 191.04 (3.6%) 0.5% ( -5% - 7%)
LowSpanNear 12.73 (3.9%) 12.81 (4.2%) 0.6% ( -7% - 8%)
HighTermDayOfYearSort 88.42 (7.0%) 89.09 (7.1%) 0.8% ( -12% - 15%)
PrefixConjLowTerm 54.95 (3.7%) 55.43 (3.8%) 0.9% ( -6% - 8%)
OrHighNotMed 628.44 (3.4%) 634.02 (6.1%) 0.9% ( -8% - 10%)
HighSpanNear 28.86 (3.2%) 29.11 (3.5%) 0.9% ( -5% - 7%)
WildcardConjMedTerm 72.48 (3.4%) 73.19 (4.8%) 1.0% ( -7% - 9%)
Fuzzy2 49.17 (9.9%) 49.68 (11.7%) 1.0% ( -18% - 25%)
AndHighHigh 63.44 (3.8%) 64.11 (3.8%) 1.1% ( -6% - 9%)
Fuzzy1 79.43 (9.9%) 80.55 (9.7%) 1.4% ( -16% - 23%)
OrNotHighHigh 574.89 (3.6%) 584.43 (5.5%) 1.7% ( -7% - 11%)
PrefixConjMedTerm 79.00 (3.2%) 80.50 (3.6%) 1.9% ( -4% - 8%)
WildcardConjLowTerm 90.67 (2.9%) 92.49 (3.7%) 2.0% ( -4% - 8%)
HighTermMonthSort 86.13 (11.8%) 88.79 (12.4%) 3.1% ( -18% - 30%)
{noformat}

I also ran benchmarks with the bitset optimization in place on both ends:

{noformat}
Report after iter 19:
Task  QPS baseline  StdDev  QPS my_modified_version  StdDev  Pct diff
IntNRQ 63.46 (24.6%) 62.28 (24.2%) -

[jira] [Comment Edited] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2019-05-09 Thread Fredrik Rodland (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836454#comment-16836454
 ] 

Fredrik Rodland edited comment on SOLR-12243 at 5/9/19 3:05 PM:


Thanks for taking the time to explain and link other issues [~mgibney].  Good 
we're not alone here.  For the time being we've limited pf to only allow 
non-synonym fields as pf is really not that crucial for our site.


was (Author: fmr):
Thanks for taking the time to explain and link other issues [~mgibney].  Good 
we're not alone here.  For the time being we've disabled limited pf to only 
allow non-synonym fields as pf is really not that crucial for our site.

> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 7.6, 8.0
>
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> multiword-synonyms.txt, schema.xml, solrconfig.xml
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> {code}
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> {code}
> request handler:
> {code:xml}
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3<-1 6<-3 9<30%
>  *:*
>  25
> 
> 
> {code}
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2019-05-09 Thread Fredrik Rodland (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836454#comment-16836454
 ] 

Fredrik Rodland commented on SOLR-12243:


Thanks for taking the time to explain and link other issues [~mgibney].  Good 
we're not alone here.  For the time being we've disabled limited pf to only 
allow non-synonym fields as pf is really not that crucial for our site.




[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2019-05-09 Thread Michael Gibney (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836448#comment-16836448
 ] 

Michael Gibney commented on SOLR-12243:
---

[~fmr], there are several issues relevant to the problem you've encountered:

Multi-term synonyms invoke graphPhraseQuery, implemented for {{6.5 <= _version_ 
< 7.6}} as SpanNearQuery, which was not prone to exponential growth; but (also 
prior to 7.6) that SpanNearQuery was completely ignored. It's the latter 
problem (ignoring) that this issue (SOLR-12243) fixes.

The exponential expansion is related to LUCENE-8531, which reverts LUCENE-7699 
by changing the SpanNearQuery graph phrase query implementation back to the 
pre-6.5 MultiPhraseQuery implementation (when slop>0), for semantic 
compatibility reasons.

MultiPhraseQuery is inherently susceptible to exponential expansion, so there 
is no workaround at the moment to fully support a high degree of synonym 
expansion in conjunction with slop>0. Regarding the manifestation of the 
problem as "single query taking down an entire solr-server", this should be 
mitigated starting in 8.1 (see SOLR-13336). Individual queries will still fail 
if expanded beyond a configurable threshold (number of clauses), but the type 
of systemic problem that you encountered will be prevented.

Regarding a potential longer-term solution, it might be worth looking at 
LUCENE-8544.
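The combinatorics behind that expansion can be sketched without any Lucene code: the number of phrase paths a sloppy MultiPhraseQuery must effectively consider is the product of the per-position term counts. A minimal illustration (the synonym counts and positions below are made up for the sketch, not taken from this issue):

```java
// Sketch: why multi-term synonym expansion blows up combinatorially.
// Each query position may match several terms (original + synonyms); a
// sloppy phrase query over such positions effectively enumerates term
// combinations, so the path count is the product of per-position counts.
public class SynonymExpansion {
    static long phrasePaths(int[] termsPerPosition) {
        long paths = 1;
        for (int n : termsPerPosition) {
            paths = Math.multiplyExact(paths, (long) n); // fail fast on overflow
        }
        return paths;
    }

    public static void main(String[] args) {
        // "aspirin dose in rats": aspirin -> {aspirin, acetylsalicylic acid},
        // rats -> {rat, rattus}; the other positions have a single term.
        System.out.println(phrasePaths(new int[] {2, 1, 1, 2}));  // 4
        // Ten positions with 4 terms each: 4^10 = 1,048,576 paths.
        System.out.println(phrasePaths(new int[] {4, 4, 4, 4, 4, 4, 4, 4, 4, 4}));
    }
}
```

Even modest synonym lists multiply quickly, which is why a configurable clause-count threshold (as in SOLR-13336) is the practical safety valve.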




Re: Date format issue in solr select query.

2019-05-09 Thread David Smiley
(the correct list here is solr-user, not dev)

Solr has minimal support for _formatting_ the response; that's generally up
to the application that builds the UI.  If you want Solr to retain the
original input precision, which appears to be lost here, then use a typical
copyField approach to a stored string field.  This is necessary because
primitive field types (date, float, int, etc.) normalize the input when the
value is internally stored.  Perhaps it shouldn't do that -- as you show
here the surface form (original) may indicate the precision.
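To make the copyField approach concrete, a minimal schema sketch; the `_raw` field name below is a hypothetical choice, not something from this thread:

```xml
<!-- The *_raw name is hypothetical. The pdate field keeps sorting/range
     behavior, while the string copy stores the input exactly as sent,
     e.g. "2019-02-28" rather than "2019-02-28T00:00:00Z". -->
<field name="initial_release_date"     type="pdate"  indexed="true"  stored="true"/>
<field name="initial_release_date_raw" type="string" indexed="false" stored="true"/>
<copyField source="initial_release_date" dest="initial_release_date_raw"/>
```

Requesting `fl=initial_release_date_raw` then returns the value with its original precision, and the application can format it however it likes.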

~ David Smiley
Apache Lucene/Solr Search Developer
http://www.linkedin.com/in/davidwsmiley


On Wed, May 8, 2019 at 10:42 PM Karthik Gunasekaran <
karthik.gunaseka...@stats.govt.nz> wrote:

> Hi,
>
> I am new to Solr; I am using version 7.6.
>
>
>
> The problem I am facing is formatting the date for a specific field.
>
>
>
> Explanation of my issue:
>
>
>
> I have a collection named “DateFieldTest”.
>
> It has a few fields, of which “initial_release_date” is of type pdate.
>
> We are loading the data into the collection as below:
>
>
>
>
>
> [
>
>   {
>
> "id": 0,
>
> "Number": 0,
>
> "String": "This is a string 0",
>
> "initial_release_date": "2019-02-28"
>
>   },
>
>   {
>
> "ID": 1,
>
> "Number": 1,
>
> "String": "This is a string 1",
>
> " initial_release_date ": "2019-02-28"
>
>   }]
>
>
>
> When we do a select query as
> http://localhost:8983/solr/DateFieldTest/select?q=*:*
>
> We are getting the output as,
>
> {
>
>   "responseHeader":{
>
> "zkConnected":true,
>
> "status":0,
>
> "QTime":0,
>
> "params":{
>
>   "q":"*:*"}},
>
>   "response":{"numFound":1000,"start":0,"docs":[
>
>   {
>
> "id":"0",
>
> "Number":[0],
>
> "String":["This is a Māori macron 0"],
>
> "initial_release_date":["2019-02-28T00:00:00Z"],
>
> "_version_":1633015101576445952},
>
>   {
>
> "ID":[1],
>
> "Number":[1],
>
> "String":["This is a Māori macron 1"],
>
> "initial_release_date":["2019-02-28T00:00:00Z"],
>
> "_version_":1633015101949739008},
>
>
>
> But our use case requires the initial_release_date field in the output of
> the above query to be formatted as yyyy-MM-dd.
>
> The query automatically adds a time component to the date field, which we
> don’t want to happen.
>
> Can someone please help me resolve this issue, so that I get only the date
> value, without the time, in my select query?
>
>
>
> Thanks,
>
> Karthik Gunasekaran
>
> Senior Applications Developer | kaiwhakawhanake Pūmanawa Tautono
>
> Digital Business  - Channels | Ngā Ratonga Mamati - Ngā Hongere
>
> Digital Business Services | Ngā Ratonga Pakihi Mamati
>
> Stats NZ Tatauranga Aotearoa
> * DDI* +64 4 931 4347 | stats.govt.nz 
>
>
>
>


[jira] [Comment Edited] (SOLR-13439) Make collection properties easier and safer to use in code

2019-05-09 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836424#comment-16836424
 ] 

Gus Heck edited comment on SOLR-13439 at 5/9/19 2:39 PM:
-

{quote}Case 1 (Frequent access):
 1. T+0: SolrCore starts (i.e. A replica is added, a collection is created, you 
name it)
 2. T+0: On initialization, a component that relies on collection properties 
registers a listener. Solr reads from ZooKeeper once.
 3. T+0 to T+end-of-life-of-the-core: The watch remains.
 4. T+end-of-life-of-the-core: no more reads are expected, the watch is removed.
{quote}
Isn't this identical to my Case 1 if the core lives for 30 minutes?
{quote}Times read with current approach: 1 + Number of modifications of 
properties.
 Times with the cache approach: unknown, at least the same as with the 
listener, but depends on when the properties are accessed.
{quote}
Not true. One (or possibly zero reads, if we luck into my case 3) on step 1 
(instead of step 2 as you list it)... Properties remain cached (with updates 
when they change in ZK, of course) until the core is removed. This is the whole 
point of why I wrote ConditionalExpiringCache. While the watch is active, it 
will exist in collectionPropsWatches, the predicate the cache is created 
with will evaluate to true, and the timeout will be refreshed without removing 
the entry:
{code:java}
if (condition.test(peek)) {
  // we passed our stay-alive condition: refresh our time-out and put back in the queue.
  peek.expireAt = System.nanoTime() + retainForNanos;
  timeSortedEntries.add(peek);
} else {
  // we are past our time limit and have failed our condition to stay alive, so remove from cache
  cache.remove(peek.key);
} {code}
I expect that the typo you caught and I corrected in the latest version may 
have thrown you off of realizing that the cache never expires while a watch is 
set. In that respect my patch is backwards compatible.
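The refresh-on-condition mechanism above can be sketched standalone. This is only an illustration of the idea, not the patch's actual classes; names like ConditionalExpiringQueue are made up here:

```java
import java.util.PriorityQueue;
import java.util.function.Predicate;

// Standalone sketch of conditional expiry: entries past their deadline are
// evicted only when the keep-alive predicate (e.g. "a watch is still
// registered for this key") fails; otherwise their deadline is pushed out.
public class ConditionalExpiringQueue<K> {
    private static final class Entry<K> {
        final K key;
        long expireAt;
        Entry(K key, long expireAt) { this.key = key; this.expireAt = expireAt; }
    }

    private final PriorityQueue<Entry<K>> byDeadline =
        new PriorityQueue<>((a, b) -> Long.compare(a.expireAt, b.expireAt));

    public void put(K key, long expireAt) { byDeadline.add(new Entry<>(key, expireAt)); }

    /** Evict expired entries whose keep-alive test fails; refresh the rest. */
    public int reap(Predicate<K> keepAlive, long now, long retainFor) {
        int evicted = 0;
        while (!byDeadline.isEmpty() && byDeadline.peek().expireAt <= now) {
            Entry<K> e = byDeadline.poll();
            if (keepAlive.test(e.key)) {
                e.expireAt = now + retainFor; // watch still set: never expires
                byDeadline.add(e);
            } else {
                evicted++;                    // past deadline, no watch: gone
            }
        }
        return evicted;
    }

    public int size() { return byDeadline.size(); }
}
```

With this shape, a key that always passes the predicate stays cached indefinitely, which is the backwards-compatibility point being made above.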
{quote}I can give you two cases:
 Case 1: Collection properties are accessed infrequently (like in my “case 2 
above”), but collection properties change frequently (i.e. every second)
 1. T + 0: call to getCollectionProperties(), Zk watch is set and element is on 
cache
 2. T + 1 to T + 9: Collection properties changes, fires watches to Solr. Solr 
receives the watch and reads from Zookeeper
 3. T + 10 cache expires
 With cache, we read from Zookeeper 10 times, and ZooKeeper fires 10 watches. 
Without cache, we read once, and ZooKeeper doesn't fire any watch. Keep in mind 
that some clusters may have many collections (hundreds/thousands?); this may 
add a lot of load to ZooKeeper for things that aren’t going to be needed.
{quote}
Ah yes, I was evaluating in terms of reads caused by user action. I have 
assumed (I state it in one of the comments in the code, I think) that collection 
properties are not frequently updated. What you say is true, but it's no worse 
than having a watch set. I am amenable to making the expiration time tunable 
via configuration if you think frequent updates to collection properties for an 
individual collection are a realistic use case. My assumption is that they are 
likely to be set at initialization, and perhaps changed when an admin or 
autoscaling wants to adjust something. If something is thrashing collection 
properties, I think that's the problem and this would be a symptom. So to 
summarize this case:

*TL;DR:* This patch does pose a risk for systems with large numbers of 
collections that also frequently update collection properties on many of those 
collections, if that system was +not already using collectionPropertiesWatches+ 
(or only using them on a few collections) prior to this patch (i.e. most 
updates to properties are ignored).

That seems like a pretty narrow use case, since it implies that the majority of 
updates are going unread unless clients are making frequent calls to 
getCollectionProperties(), and perhaps they should have been setting a watch in 
the first place... This case (which I thought of but discarded as very unlikely) 
could be mitigated by shortening the cache expiration time and the cache 
eviction frequency interval here, or by making them configurable:
{code:java}
  // ten minutes -- possibly make this configurable
  private static final long CACHE_COLLECTION_PROPS_FOR_NANOS = 10L * 60 * 1000 * 1000 * 1000;

  // one minute -- possibly make this configurable
  public static final int CACHE_COLLECTION_PROPS_REAPER_INTERVAL = 6;
{code}
Before we do that, however, are we really meaning to support use cases where 
people thrash values we store in ZooKeeper? Or is this an anti-pattern, like 
individual requests for updates rather than batching? My gut says the latter, 
but maybe I'm out in left field on that?
{quote}Case 2:

[jira] [Commented] (SOLR-13394) Change default GC from CMS to G1

2019-05-09 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836430#comment-16836430
 ] 

Shawn Heisey commented on SOLR-13394:
-

Regarding the -XX:+PerfDisableSharedMem parameter:  This is another case of 
things from my GC experiments creeping into other places.

On systems with lots of disk writes, that parameter can lead to real 
performance gains.

It does stop a lot of commandline Java tools from working, though ... because 
those tools gather their information from the target JVM through the shared 
memory interface and don't have any other way to do it.  A prime example is 
jstat.

My GC tuning wiki page at [https://wiki.apache.org/solr/ShawnHeisey] references 
a fascinating blog post: [http://www.evanjones.ca/jvm-mmap-pause.html]

For the general case, I think that parameter can really help performance ... 
but some users will be seriously hampered by the lack of working Java 
commandline tools.  Whether or not we leave it in by default, some 
documentation is a good idea.
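For concreteness, a sketch of where such a flag would live; the surrounding GC flags below are purely illustrative and not a recommended set:

```shell
# solr.in.sh (sketch): append -XX:+PerfDisableSharedMem to the GC flags if
# write-heavy I/O stalls matter more to you than jstat/jps support; omit it
# if you depend on shared-memory-based JVM tooling. Flag set is illustrative.
GC_TUNE="-XX:+UseG1GC -XX:MaxGCPauseMillis=250 -XX:+PerfDisableSharedMem"
```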

> Change default GC from CMS to G1
> 
>
> Key: SOLR-13394
> URL: https://issues.apache.org/jira/browse/SOLR-13394
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 8.1
>
> Attachments: SOLR-13394.patch, SOLR-13394.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> CMS has been deprecated in new versions of Java 
> (http://openjdk.java.net/jeps/291). This issue is to switch Solr default from 
> CMS to G1.






[GitHub] [lucene-solr] janhoy opened a new pull request #668: SOLR-13453: Adjust auth metrics asserts in tests after SOLR-13449

2019-05-09 Thread GitBox
janhoy opened a new pull request #668: SOLR-13453: Adjust auth metrics asserts 
in tests after SOLR-13449
URL: https://github.com/apache/lucene-solr/pull/668
 
 
   Fixes the test failures


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Commented] (LUCENE-7697) IndexSearcher should leverage index sorting

2019-05-09 Thread Atri Sharma (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836421#comment-16836421
 ] 

Atri Sharma commented on LUCENE-7697:
-

A couple of ideas:

1) For sorted DocValues and a query seeking an exact value of the sort key, do 
a binary search per segment instead of loading every document and checking.

2) If the sort order of an index and the sort order of a query's sort key 
match, terminate early.

Any other thoughts/ideas?
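Idea (1) can be sketched on plain arrays; the array below stands in for a segment whose doc IDs are ordered by the sort key, and this is not Lucene API:

```java
import java.util.Arrays;

// Sketch: if a segment is index-sorted on the query's sort key, an
// exact-value query can binary-search for the first and last matching doc
// instead of scanning every document in the segment.
public class SortedSegmentSearch {
    /** Return [firstDoc, lastDoc] matching value, or null if absent. */
    static int[] docRange(long[] sortKeyByDoc, long value) {
        int lo = lowerBound(sortKeyByDoc, value);
        if (lo == sortKeyByDoc.length || sortKeyByDoc[lo] != value) return null;
        int hi = lowerBound(sortKeyByDoc, value + 1) - 1;
        return new int[] { lo, hi };
    }

    // First index whose key is >= value (classic lower-bound binary search).
    private static int lowerBound(long[] a, long value) {
        int lo = 0, hi = a.length;
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (a[mid] < value) lo = mid + 1; else hi = mid;
        }
        return lo;
    }

    public static void main(String[] args) {
        long[] keys = {3, 7, 7, 7, 12, 19};  // segment sorted by key
        System.out.println(Arrays.toString(docRange(keys, 7)));  // [1, 3]
        System.out.println(docRange(keys, 8));                   // null
    }
}
```

Each lookup is O(log n) per segment rather than O(n), which is the payoff of making IndexSearcher aware of the index sort.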

> IndexSearcher should leverage index sorting
> ---
>
> Key: LUCENE-7697
> URL: https://issues.apache.org/jira/browse/LUCENE-7697
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> We made good efforts in order to make index sorting fast and easy to 
> configure. We should now look into making IndexSearcher aware of it. This 
> will probably require changes of the API as not collecting all matches means 
> that we can no longer know things like the total number of hits or the 
> maximum score.
> I don't plan to work on it anytime soon, I'm just opening this issue to raise 
> awareness. I'd be happy to do reviews however if someone decides to tackle it.






Re: [VOTE] Release Lucene/Solr 8.1.0 RC1

2019-05-09 Thread Jan Høydahl
I fixed https://issues.apache.org/jira/browse/SOLR-13453, see
https://github.com/apache/lucene-solr/pull/668

Can we merge to 8.1 for RC2?

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 9. mai 2019 kl. 08:42 skrev Ishan Chattopadhyaya :
> 
> Okay, sure. I'll re-spin. :-(
> This vote is cancelled. Thanks to Tomoko, David, Kevin and Varun for voting.
> 
> On Thu, May 9, 2019 at 7:19 AM Noble Paul  wrote:
>> 
>> It's a bug fix. So,we should include it
>> 
>> On Thu, May 9, 2019 at 8:13 AM Ishan Chattopadhyaya
>>  wrote:
>>> 
>>> Hi Dat,
>>> 
 Should we respin the release for SOLR-13449.
>>> 
>>> I don't fully understand the implications of not having SOLR-13449. If
>>> you (or someone else) suggest(s) that this needs to go into 8.1, then
>>> I'll re-spin RC2 tomorrow.
>>> 
>>> Thanks,
>>> Ishan
>>> 
>>> On Thu, May 9, 2019 at 3:29 AM Varun Thacker  wrote:
 
 SUCCESS! [1:08:48.869786]
 
 
 On Wed, May 8, 2019 at 1:16 PM Đạt Cao Mạnh  
 wrote:
> 
> Hi Ishan,
> 
> Should we respin the release for SOLR-13449.
> 
> On Wed, 8 May 2019 at 17:45, Kevin Risden  wrote:
>> 
>> +1 SUCCESS! [1:15:45.039228]
>> 
>> Kevin Risden
>> 
>> 
>> On Wed, May 8, 2019 at 11:12 AM David Smiley  
>> wrote:
>>> 
>>> +1
>>> SUCCESS! [1:29:43.016321]
>>> 
>>> Thanks for doing the release Ishan!
>>> 
>>> ~ David Smiley
>>> Apache Lucene/Solr Search Developer
>>> http://www.linkedin.com/in/davidwsmiley
>>> 
>>> 
>>> On Tue, May 7, 2019 at 1:49 PM Ishan Chattopadhyaya 
>>>  wrote:
 
 Please vote for release candidate 1 for Lucene/Solr 8.1.0
 
 The artifacts can be downloaded from:
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
 
 You can run the smoke tester directly with this command:
 
 python3 -u dev-tools/scripts/smokeTestRelease.py \
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.1.0-RC1-reve5839fb416083fcdaeedfb1e329a9fdaa29fdc50
 
 Here's my +1
 SUCCESS! [0:46:38.948020]
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 
> --
> Best regards,
> Cao Mạnh Đạt
> D.O.B : 31-07-1991
> Cell: (+84) 946.328.329
> E-mail: caomanhdat...@gmail.com
>>> 
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>> 
>> 
>> 
>> --
>> -
>> Noble Paul
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 



[jira] [Commented] (SOLR-13453) JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed when SolrClientNodeStateProvider behave nicely

2019-05-09 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836415#comment-16836415
 ] 

Jan Høydahl commented on SOLR-13453:


Fixed the test failures in [https://github.com/apache/lucene-solr/pull/668]

I needed to lower the numbers in the metrics asserts, so the bug fixed in 
SOLR-13449 was evidently inflating the metric count, which is not good at all.

> JWTAuthPluginIntegrationTest and TestSolrCloudWithHadoopAuthPlugin get failed 
> when SolrClientNodeStateProvider behave nicely
> 
>
> Key: SOLR-13453
> URL: https://issues.apache.org/jira/browse/SOLR-13453
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> SOLR-13449 is a trivial fix for SolrClientNodeStateProvider: it makes the 
> provider stop retrying once the metrics are successfully grabbed. 
> Unexpectedly, JWTAuthPluginIntegrationTest and 
> TestSolrCloudWithHadoopAuthPlugin now fail 100% of the time. These are bugs 
> in the tests for sure.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[JENKINS-EA] Lucene-Solr-8.x-Linux (64bit/jdk-13-ea+18) - Build # 535 - Unstable!

2019-05-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/535/
Java: 64bit/jdk-13-ea+18 -XX:+UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  org.apache.solr.cloud.NestedShardedAtomicUpdateTest.test

Error Message:
Error from server at http://127.0.0.1:34617/collection1: non ok status: 500, 
message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:34617/collection1: non ok status: 500, 
message:Server Error
at 
__randomizedtesting.SeedInfo.seed([C81FA7980DE87FEB:404B9842A3141213]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:579)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
at 
org.apache.solr.BaseDistributedSearchTestCase.add(BaseDistributedSearchTestCase.java:576)
at 
org.apache.solr.cloud.NestedShardedAtomicUpdateTest.indexDocAndRandomlyCommit(NestedShardedAtomicUpdateTest.java:221)
at 
org.apache.solr.cloud.NestedShardedAtomicUpdateTest.sendWrongRouteParam(NestedShardedAtomicUpdateTest.java:191)
at 
org.apache.solr.cloud.NestedShardedAtomicUpdateTest.test(NestedShardedAtomicUpdateTest.java:55)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.Stateme

[jira] [Commented] (SOLR-13413) suspicious test failures caused by jetty TimeoutException related to using HTTP2

2019-05-09 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836386#comment-16836386
 ] 

Kevin Risden commented on SOLR-13413:
-

[~caomanhdat] - thanks for tracking this down!

> suspicious test failures caused by jetty TimeoutException related to using 
> HTTP2
> 
>
> Key: SOLR-13413
> URL: https://issues.apache.org/jira/browse/SOLR-13413
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: 
> nocommit_TestDistributedStatsComponentCardinality_trivial-no-http2.patch
>
>
> There is evidence in some recent jenkins failures that we may have some manner 
> of bug in our http2 client/server code that can cause intra-node query 
> requests to stall / time out non-reproducibly.
> In at least one known case, forcing the jetty & SolrClients used in the test 
> to use http1.1 seems to prevent these test failures.







[jira] [Commented] (SOLR-13394) Change default GC from CMS to G1

2019-05-09 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836355#comment-16836355
 ] 

Andrzej Bialecki  commented on SOLR-13394:
--

IMHO we should leave it in, but document it better. It would be nice to have an 
option in {{bin/solr}} to easily turn it off for development, e.g. {{bin/solr 
-debug}}.

> Change default GC from CMS to G1
> 
>
> Key: SOLR-13394
> URL: https://issues.apache.org/jira/browse/SOLR-13394
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 8.1
>
> Attachments: SOLR-13394.patch, SOLR-13394.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> CMS has been deprecated in new versions of Java 
> (http://openjdk.java.net/jeps/291). This issue is to switch Solr default from 
> CMS to G1.







[JENKINS-EA] Lucene-Solr-8.1-Linux (64bit/jdk-13-ea+18) - Build # 304 - Unstable!

2019-05-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.1-Linux/304/
Java: 64bit/jdk-13-ea+18 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandlerDiskOverFlow.testDiskOverFlow

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([39AD14442E8F3464:F39FB3944DF13E0B]:0)
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.solr.handler.TestReplicationHandlerDiskOverFlow.testDiskOverFlow(TestReplicationHandlerDiskOverFlow.java:157)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)




Build Log:
[...truncated 12925 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandlerDiskOverFlow
   [junit4]   2> 13978 INFO  
(SUITE-TestReplicationHandlerDiskOverFlow-seed#[39AD14442E8F3464]-worker) [
] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 

[JENKINS] Lucene-Solr-NightlyTests-8.x - Build # 94 - Still Unstable

2019-05-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/94/

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.util.TestSolrCLIRunExample

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [TransactionLog, 
NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.update.TransactionLog  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.update.TransactionLog.(TransactionLog.java:188)  at 
org.apache.solr.update.UpdateLog.newTransactionLog(UpdateLog.java:468)  at 
org.apache.solr.update.UpdateLog.ensureLog(UpdateLog.java:1329)  at 
org.apache.solr.update.UpdateLog.add(UpdateLog.java:572)  at 
org.apache.solr.update.UpdateLog.add(UpdateLog.java:552)  at 
org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:351)
  at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:289)
  at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:236)
  at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:76)
  at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
  at 
org.apache.solr.update.processor.NestedUpdateProcessorFactory$NestedUpdateProcessor.processAdd(NestedUpdateProcessorFactory.java:79)
  at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:257)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doVersionAdd(DistributedUpdateProcessor.java:487)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.lambda$versionAdd$0(DistributedUpdateProcessor.java:337)
  at org.apache.solr.update.VersionBucket.runWithLock(VersionBucket.java:50)  
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:337)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:223)
  at 
org.apache.solr.update.processor.DistributedZkUpdateProcessor.processAdd(DistributedZkUpdateProcessor.java:231)
  at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
  at 
org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:475)
  at 
org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:110)  
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:327)
  at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readIterator(JavaBinUpdateRequestCodec.java:280)
  at org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:335) 
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:280)  at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readNamedList(JavaBinUpdateRequestCodec.java:235)
  at org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:300) 
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:280)  at 
org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:193)  at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:126)
  at 
org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:123)
  at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:70)  
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
  at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2566)  at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:756)  at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:542)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:397)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:343)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:165)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) 
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(S

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 24055 - Still Unstable!

2019-05-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24055/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestCloudSearcherWarming.testRepFactor1LeaderStartup

Error Message:
No live SolrServers available to handle this request

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:343)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1068)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:837)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:769)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.deleteAllCollections(MiniSolrCloudCluster.java:547)
at 
org.apache.solr.cloud.TestCloudSearcherWarming.tearDown(TestCloudSearcherWarming.java:78)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakCon

[jira] [Commented] (LUCENE-8757) Better Segment To Thread Mapping Algorithm

2019-05-09 Thread Atri Sharma (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836281#comment-16836281
 ] 

Atri Sharma commented on LUCENE-8757:
-

[~simonw] Please let me know if you have any further concerns. Happy to address them.

> Better Segment To Thread Mapping Algorithm
> --
>
> Key: LUCENE-8757
> URL: https://issues.apache.org/jira/browse/LUCENE-8757
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Atri Sharma
>Priority: Major
> Attachments: LUCENE-8757.patch, LUCENE-8757.patch, LUCENE-8757.patch
>
>
> The current segment-to-thread allocation algorithm always allocates one 
> thread per segment. This is detrimental to performance when segment sizes are 
> skewed, since small segments also get a dedicated thread, and the resulting 
> context switching overhead can degrade performance.
>  
> A better algorithm that is cognizant of size skew would perform better in 
> realistic scenarios.
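
The size-aware idea above can be sketched as follows. This is an illustrative 
sketch only, not the attached patch's code: the Segment class, the 
250,000-doc-per-slice budget, and the largest-first packing rule are all 
assumptions made for the example. Small segments are packed together into one 
slice so they share a thread, while each large segment anchors its own slice.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical size-aware segment-to-slice grouping. One search thread
// would be assigned per slice instead of per segment.
public class SegmentSlicer {
    static final class Segment {
        final String name;
        final int maxDoc;
        Segment(String name, int maxDoc) {
            this.name = name;
            this.maxDoc = maxDoc;
        }
    }

    static List<List<Segment>> slice(List<Segment> segments, int docsPerSlice) {
        List<Segment> sorted = new ArrayList<>(segments);
        // Largest segments first, so each big segment anchors a slice
        // and the small ones accumulate into shared slices.
        sorted.sort(Comparator.comparingInt((Segment s) -> s.maxDoc).reversed());
        List<List<Segment>> slices = new ArrayList<>();
        List<Segment> current = new ArrayList<>();
        int docsInCurrent = 0;
        for (Segment s : sorted) {
            current.add(s);
            docsInCurrent += s.maxDoc;
            if (docsInCurrent >= docsPerSlice) {
                slices.add(current);
                current = new ArrayList<>();
                docsInCurrent = 0;
            }
        }
        if (!current.isEmpty()) {
            slices.add(current);
        }
        return slices;
    }

    public static void main(String[] args) {
        List<Segment> segs = List.of(
            new Segment("_a", 900_000),
            new Segment("_b", 40_000),
            new Segment("_c", 30_000),
            new Segment("_d", 20_000));
        // Skewed index: one big segment, three small ones.
        System.out.println(slice(segs, 250_000).size());  // prints 2
    }
}
```

With this skewed index (one 900k-doc segment plus three small ones) the sketch 
yields two slices, and thus two threads, rather than four threads under the 
one-thread-per-segment scheme.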







[jira] [Issue Comment Deleted] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2019-05-09 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11959:

Comment: was deleted

(was: Hi Jan, I am looking at the Auth code and have finally become a bit 
familiar with it. I am able to build the logic where CDCR requests 
(LASTPROCESSEDVERSION, CHECKPOINTS, etc.) made within the same cluster, even 
when they are not issued from the local thread pool, get validated by 
PKIAuthPlugin.
Though *I am still not able to locate where exactly PKIAuthPlugin whitelists 
nodes* (i.e. the live nodes listed under its own ZooKeeper); I have been 
debugging for a while but cannot find the code.
Any help is appreciated.)

> CDCR unauthorized to replicate to a target collection that is update 
> protected in security.json
> ---
>
> Key: SOLR-11959
> URL: https://issues.apache.org/jira/browse/SOLR-11959
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, CDCR
>Affects Versions: 7.2
>Reporter: Donny Andrews
>Priority: Major
> Attachments: SOLR-11959.patch
>
>
> Steps to reproduce: 
>  # Create a source and a target collection in their respective clusters. 
>  # Update security.json to require a non-admin role to read and write. 
>  # Index to source collection 
> Expected: 
> The target collection should receive the update
> Actual:
> {code:java}
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://redacted/solr/redacted: Expected mime type 
> application/octet-stream but got text/html. 
>  
>  
>  Error 401 Unauthorized request, Response code: 401
>  
>  HTTP ERROR 401
>  Problem accessing /solr/redacted/update. Reason:
>   Unauthorized request, Response code: 401
>  
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
>  at 
> org.apache.solr.handler.CdcrReplicator.sendRequest(CdcrReplicator.java:140)
>  at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:104)
>  at 
> org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}







[jira] [Commented] (SOLR-13347) Error writing Transaction log for UUIDField

2019-05-09 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836275#comment-16836275
 ] 

Thomas Wöckinger commented on SOLR-13347:
-

Fix provided with PR https://github.com/apache/lucene-solr/pull/665

> Error writing Transaction log for UUIDField
> ---
>
> Key: SOLR-13347
> URL: https://issues.apache.org/jira/browse/SOLR-13347
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 8.0
>Reporter: Thomas Wöckinger
>Priority: Major
>
> When using Atomic Update, adding a value leads to the following exception:
> org.apache.solr.common.SolrException: TransactionLog doesn't know how to 
> serialize class java.util.UUID; try implementing ObjectResolver?
>     at 
> org.apache.solr.update.TransactionLog$1.resolve(TransactionLog.java:100)
>     at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:263)
>     at 
> org.apache.solr.common.util.JavaBinCodec.writeArray(JavaBinCodec.java:770)
>     at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:369)
>     at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:362)
>     at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:252)
>     at 
> org.apache.solr.common.util.JavaBinCodec$BinEntryWriter.put(JavaBinCodec.java:437)
>     at 
> org.apache.solr.common.MapWriter$EntryWriter.putNoEx(MapWriter.java:100)
>     at 
> org.apache.solr.common.MapWriter$EntryWriter.lambda$getBiConsumer$0(MapWriter.java:160)
>     at java.base/java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
>     at 
> org.apache.solr.common.SolrInputDocument.writeMap(SolrInputDocument.java:51)
>     at 
> org.apache.solr.common.util.JavaBinCodec.writeSolrInputDocument(JavaBinCodec.java:657)
>     at org.apache.solr.update.TransactionLog.write(TransactionLog.java:371)
>     at org.apache.solr.update.UpdateLog.add(UpdateLog.java:573)
>     at org.apache.solr.update.UpdateLog.add(UpdateLog.java:552)
>     at 
> org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:351)
>     at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:289)
>     at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:236)
>     at 
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:76)
>     at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>     at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:995)
>     at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1216)
>     at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:700)
>     at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
>     at 
> org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:110)
>     at 
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:327)
>     at 
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readIterator(JavaBinUpdateRequestCodec.java:280)
>     at 
> org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:335)
>     at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:280)
>     at 
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$StreamingCodec.readNamedList(JavaBinUpdateRequestCodec.java:235)
>     at 
> org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:300)
>     at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:280)
>     at 
> org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:193)
>     at 
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:126)
>     at 
> org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:123)
>     at 
> org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:70)
>     at 
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
>     at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
>     at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>     at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

--

[jira] [Commented] (SOLR-13413) suspicious test failures caused by jetty TimeoutException related to using HTTP2

2019-05-09 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836259#comment-16836259
 ] 

Cao Manh Dat commented on SOLR-13413:
-

The problem is solved by https://github.com/eclipse/jetty.project/issues/3605

> suspicious test failures caused by jetty TimeoutException related to using 
> HTTP2
> 
>
> Key: SOLR-13413
> URL: https://issues.apache.org/jira/browse/SOLR-13413
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: 
> nocommit_TestDistributedStatsComponentCardinality_trivial-no-http2.patch
>
>
> There is evidence in some recent jenkins failures that we may have some manner 
> of bug in our http2 client/server code that can cause intra-node query 
> requests to stall / timeout non-reproducibly.
> In at least one known case, forcing the jetty & SolrClients used in the test 
> to use http1.1, seems to prevent these test failures.




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-13413) suspicious test failures caused by jetty TimeoutException related to using HTTP2

2019-05-09 Thread Cao Manh Dat (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat reassigned SOLR-13413:
---

Assignee: Cao Manh Dat

> suspicious test failures caused by jetty TimeoutException related to using 
> HTTP2
> 
>
> Key: SOLR-13413
> URL: https://issues.apache.org/jira/browse/SOLR-13413
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: 
> nocommit_TestDistributedStatsComponentCardinality_trivial-no-http2.patch
>
>
> There is evidence in some recent jenkins failures that we may have some manner 
> of bug in our http2 client/server code that can cause intra-node query 
> requests to stall / timeout non-reproducibly.
> In at least one known case, forcing the jetty & SolrClients used in the test 
> to use http1.1, seems to prevent these test failures.






[jira] [Comment Edited] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2019-05-09 Thread Fredrik Rodland (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836249#comment-16836249
 ] 

Fredrik Rodland edited comment on SOLR-12243 at 5/9/19 9:58 AM:


I am aware that this issue is closed, but nonetheless:

I think this actually broke something regarding expansion of synonyms for large 
queries (possibly large OR-queries).

Having {{pf}} enabled on fields with a substantial number of synonyms caused the 
pf portion of the query to grow "exponentially", to the point where a single 
query took down an entire Solr server.

By adjusting the number of OR-queries we were able to increase the memory 
required for running the query.

example (id has synonyms enabled, companyname has not):

q=( samfunnsviter (klima OR miljø) ) NOT ( psykolog%20 OR rus OR ortopedi OR 
odontologi )&debugQuery=true&pf=companyname

results in pf-part of edismax-query

(+DisjunctionMaxQuery((companyname:\"? samfunnsviter klima miljø ? ? psykolog 
rus ortopedi odontologi\"~5)~0.01)) 

q=( samfunnsviter (klima OR miljø) ) NOT ( psykolog%20 OR rus OR ortopedi OR 
odontologi )&debugQuery=true&pf=id companyname

results in pf-part of edismax-query

(+DisjunctionMaxQuery(((id:\"samfunnsviter klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"samfunnsviter klima miljø psykologspesialist rus ortopedi 
odontologi\"~5 id:\"samfunnsvitar klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"samfunnsvitar klima miljø psykologspesialist rus ortopedi 
odontologi\"~5 id:\"social scientist klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"social scientist klima miljø psykologspesialist rus 
ortopedi odontologi\"~5 id:\"statsviter klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"statsviter klima miljø psykologspesialist rus ortopedi 
odontologi\"~5 id:\"samfunnsøkonom klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"samfunnsøkonom klima miljø psykologspesialist rus ortopedi 
odontologi\"~5) | companyname:\"? samfunnsviter klima miljø ? ? psykolog rus 
ortopedi odontologi\"~5)~0.01))

 

 increasing the number of OR-terms or synonyms results in the id-part of the 
query growing "exponentially"
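The multiplicative blow-up described above can be sketched outside Solr. This is a toy model, not Solr code; the synonym table below is hypothetical and only loosely based on the terms in this comment. Each query token with extra synonyms multiplies the number of phrase variants a naive pf expansion must enumerate:

```python
from itertools import product

def count_phrase_variants(tokens, synonyms):
    """Variants a naive pf expansion enumerates: the product of
    (1 + number of synonyms) over all query tokens."""
    total = 1
    for t in tokens:
        total *= 1 + len(synonyms.get(t, []))
    return total

def expand_phrases(tokens, synonyms):
    """Enumerate every phrase variant (cartesian product of alternatives)."""
    alternatives = [[t] + synonyms.get(t, []) for t in tokens]
    return [" ".join(p) for p in product(*alternatives)]

# Hypothetical synonym table, loosely modeled on the comment above.
syns = {
    "samfunnsviter": ["samfunnsvitar", "social scientist",
                      "statsviter", "samfunnsøkonom"],
    "psykolog": ["psykologspesialist"],
}
tokens = ["samfunnsviter", "klima", "miljø", "psykolog",
          "rus", "ortopedi", "odontologi"]
print(count_phrase_variants(tokens, syns))  # 5 * 2 = 10
```

With four extra synonyms on one token and one on another, the product is 5 × 2 = 10, matching the ten id:"..." phrase clauses in the debug output above; each additional OR-term or synonym grows this product multiplicatively.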


was (Author: fmr):
I am aware that this issue is closed, but nonetheless:

I think this actually broke something regarding expansion of synonyms for large 
queries (possibly large OR-queries).

Having {{pf}} enabled on fields with a substantial number of synonyms caused the 
pf portion of the query to grow "exponentially", to the point where a single 
query took down an entire Solr server.

By adjusting the number of OR-queries we were able to increase the memory 
required for running the query.

example (id has synonyms enabled, companyname has not):
{code:java}
q= ( samfunnsviter (klima OR miljø) ) NOT ( psykolog%20 OR rus OR ortopedi OR 
odontologi )&debugQuery=true&pf=companyname\
{code}
results in pf-part of edismax-query

{code}(+DisjunctionMaxQuery((companyname:\"? samfunnsviter klima miljø ? ? 
psykolog rus ortopedi odontologi\"~5)~0.01))\{code}

 
{code:java}
q= ( samfunnsviter (klima OR miljø) ) NOT ( psykolog%20 OR rus OR ortopedi OR 
odontologi )&debugQuery=true&pf=id companyname\
{code}
results in pf-part of edismax-query

{code}(+DisjunctionMaxQuery(((id:\"samfunnsviter klima miljø psykolog rus 
ortopedi odontologi\"~5 id:\"samfunnsviter klima miljø psykologspesialist rus 
ortopedi odontologi\"~5 id:\"samfunnsvitar klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"samfunnsvitar klima miljø psykologspesialist rus ortopedi 
odontologi\"~5 id:\"social scientist klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"social scientist klima miljø psykologspesialist rus 
ortopedi odontologi\"~5 id:\"statsviter klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"statsviter klima miljø psykologspesialist rus ortopedi 
odontologi\"~5 id:\"samfunnsøkonom klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"samfunnsøkonom klima miljø psykologspesialist rus ortopedi 
odontologi\"~5) | companyname:\"? samfunnsviter klima miljø ? ? psykolog rus 
ortopedi odontologi\"~5)~0.01))\{code}

 

 increasing the number of OR-terms or synonyms results in the id-part of the 
query growing "exponentially"

> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 7.6, 8

[jira] [Comment Edited] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2019-05-09 Thread Fredrik Rodland (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836249#comment-16836249
 ] 

Fredrik Rodland edited comment on SOLR-12243 at 5/9/19 9:56 AM:


I am aware that this issue is closed, but nonetheless:

I think this actually broke something regarding expansion of synonyms for large 
queries (possibly large OR-queries).

Having {{pf}} enabled on fields with a substantial number of synonyms caused the 
pf portion of the query to grow "exponentially", to the point where a single 
query took down an entire Solr server.

By adjusting the number of OR-queries we were able to increase the memory 
required for running the query.

example (id has synonyms enabled, companyname has not):
{code:java}
q= ( samfunnsviter (klima OR miljø) ) NOT ( psykolog%20 OR rus OR ortopedi OR 
odontologi )&debugQuery=true&pf=companyname\
{code}
results in pf-part of edismax-query

{code}(+DisjunctionMaxQuery((companyname:\"? samfunnsviter klima miljø ? ? 
psykolog rus ortopedi odontologi\"~5)~0.01))\{code}

 
{code:java}
q= ( samfunnsviter (klima OR miljø) ) NOT ( psykolog%20 OR rus OR ortopedi OR 
odontologi )&debugQuery=true&pf=id companyname\
{code}
results in pf-part of edismax-query

{code}(+DisjunctionMaxQuery(((id:\"samfunnsviter klima miljø psykolog rus 
ortopedi odontologi\"~5 id:\"samfunnsviter klima miljø psykologspesialist rus 
ortopedi odontologi\"~5 id:\"samfunnsvitar klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"samfunnsvitar klima miljø psykologspesialist rus ortopedi 
odontologi\"~5 id:\"social scientist klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"social scientist klima miljø psykologspesialist rus 
ortopedi odontologi\"~5 id:\"statsviter klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"statsviter klima miljø psykologspesialist rus ortopedi 
odontologi\"~5 id:\"samfunnsøkonom klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"samfunnsøkonom klima miljø psykologspesialist rus ortopedi 
odontologi\"~5) | companyname:\"? samfunnsviter klima miljø ? ? psykolog rus 
ortopedi odontologi\"~5)~0.01))\{code}

 

 increasing the number of OR-terms or synonyms results in the id-part of the 
query growing "exponentially"


was (Author: fmr):
I am aware that this issue is closed, but nonetheless:

I think this actually broke something regarding expansion of synonyms for large 
queries (possibly large OR-queries).

Having {{pf}} enabled on fields with a substantial number of synonyms caused the 
pf portion of the query to grow "exponentially", to the point where a single 
query took down an entire Solr server.

By adjusting the number of OR-queries we were able to increase the memory 
required for running the query.

example (id has synonyms enabled, companyname has not):

{code}q= ( samfunnsviter (klima OR miljø) ) NOT ( psykolog%20 OR rus OR 
ortopedi OR odontologi )&debugQuery=true&pf=companyname\{code}

results in pf-part of edismax-query

{code}(+DisjunctionMaxQuery((companyname:\"? samfunnsviter klima miljø ? ? 
psykolog rus ortopedi odontologi\"~5)~0.01))\{code}

 

{code}q= ( samfunnsviter (klima OR miljø) ) NOT ( psykolog%20 OR rus OR 
ortopedi OR odontologi )&debugQuery=true&pf=id companyname\{code}

results in pf-part of edismax-query

{code}(+DisjunctionMaxQuery(((id:\"samfunnsviter klima miljø psykolog rus 
ortopedi odontologi\"~5 id:\"samfunnsviter klima miljø psykologspesialist rus 
ortopedi odontologi\"~5 id:\"samfunnsvitar klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"samfunnsvitar klima miljø psykologspesialist rus ortopedi 
odontologi\"~5 id:\"social scientist klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"social scientist klima miljø psykologspesialist rus 
ortopedi odontologi\"~5 id:\"statsviter klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"statsviter klima miljø psykologspesialist rus ortopedi 
odontologi\"~5 id:\"samfunnsøkonom klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"samfunnsøkonom klima miljø psykologspesialist rus ortopedi 
odontologi\"~5) | companyname:\"? samfunnsviter klima miljø ? ? psykolog rus 
ortopedi odontologi\"~5)~0.01))\{code}

 

 increasing the number of OR-terms or synonyms results in the id-part of the 
query growing "exponentially"

> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Steve Rowe
>

[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2019-05-09 Thread Fredrik Rodland (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836249#comment-16836249
 ] 

Fredrik Rodland commented on SOLR-12243:


I am aware that this issue is closed, but nonetheless:

I think this actually broke something regarding expansion of synonyms for large 
queries (possibly large OR-queries).

Having {{pf}} enabled on fields with a substantial number of synonyms caused the 
pf portion of the query to grow "exponentially", to the point where a single 
query took down an entire Solr server.

By adjusting the number of OR-queries we were able to increase the memory 
required for running the query.

example (id has synonyms enabled, companyname has not):

{code}q= ( samfunnsviter (klima OR miljø) ) NOT ( psykolog%20 OR rus OR 
ortopedi OR odontologi )&debugQuery=true&pf=companyname{code}

results in pf-part of edismax-query

{code}(+DisjunctionMaxQuery((companyname:\"? samfunnsviter klima miljø ? ? 
psykolog rus ortopedi odontologi\"~5)~0.01)){code}

 

{code}q= ( samfunnsviter (klima OR miljø) ) NOT ( psykolog%20 OR rus OR 
ortopedi OR odontologi )&debugQuery=true&pf=id companyname{code}

results in pf-part of edismax-query

{code}(+DisjunctionMaxQuery(((id:\"samfunnsviter klima miljø psykolog rus 
ortopedi odontologi\"~5 id:\"samfunnsviter klima miljø psykologspesialist rus 
ortopedi odontologi\"~5 id:\"samfunnsvitar klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"samfunnsvitar klima miljø psykologspesialist rus ortopedi 
odontologi\"~5 id:\"social scientist klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"social scientist klima miljø psykologspesialist rus 
ortopedi odontologi\"~5 id:\"statsviter klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"statsviter klima miljø psykologspesialist rus ortopedi 
odontologi\"~5 id:\"samfunnsøkonom klima miljø psykolog rus ortopedi 
odontologi\"~5 id:\"samfunnsøkonom klima miljø psykologspesialist rus ortopedi 
odontologi\"~5) | companyname:\"? samfunnsviter klima miljø ? ? psykolog rus 
ortopedi odontologi\"~5)~0.01)){code}

 

 increasing the number of OR-terms or synonyms results in the id-part of the 
query growing "exponentially"

> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 7.6, 8.0
>
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> multiword-synonyms.txt, schema.xml, solrconfig.xml
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> {code}
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> {code}
> request handler:
> {code:xml}
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3<-1 6<-3 9<30%
>  *:*
>  25
> 
> 
> {code}
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  






[jira] [Commented] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2019-05-09 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836248#comment-16836248
 ] 

Amrit Sarkar commented on SOLR-11959:
-

Hi Jan, I am looking at the Auth code and have finally become somewhat familiar 
with it.
I am able to build the logic where CDCR requests (LASTPROCESSEDVERSION, 
CHECKPOINTS, etc.) made within the same cluster get validated by PKIAuthPlugin 
even when they are not issued from the local thread pool.
Though *I am still not able to locate exactly where PKIAuthPlugin whitelists 
nodes* (i.e. the live nodes listed under its own ZooKeeper); I have been 
debugging for a while but cannot find the code.
Any help is appreciated.
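For context, the kind of check being hunted for can be sketched abstractly. This is a toy model, not the actual PKIAuthenticationPlugin code; the function name and node-name format are assumptions:

```python
# Hypothetical sketch of a live-nodes whitelist check. NOT Solr's actual
# PKIAuthenticationPlugin logic; the real implementation differs.
def is_known_node(node_name, live_nodes):
    """Accept an inter-node request only if the claimed sender is a live
    node registered in this cluster's ZooKeeper (/live_nodes)."""
    return node_name in live_nodes

live_nodes = {"host1:8983_solr", "host2:8983_solr"}
assert is_known_node("host1:8983_solr", live_nodes)
assert not is_known_node("attacker:8983_solr", live_nodes)
```

The open question in this comment is where (if anywhere) the plugin performs such a membership check against the local cluster's live nodes.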

> CDCR unauthorized to replicate to a target collection that is update 
> protected in security.json
> ---
>
> Key: SOLR-11959
> URL: https://issues.apache.org/jira/browse/SOLR-11959
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, CDCR
>Affects Versions: 7.2
>Reporter: Donny Andrews
>Priority: Major
> Attachments: SOLR-11959.patch
>
>
> Steps to reproduce: 
>  # Create a source and a target collection in their respective clusters. 
>  # Update security.json to require a non-admin role to read and write. 
>  # Index to source collection 
> Expected: 
> The target collection should receive the update
> Actual:
> {code:java}
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://redacted/solr/redacted: Expected mime type 
> application/octet-stream but got text/html. 
>  
>  
>  Error 401 Unauthorized request, Response code: 401
>  
>  HTTP ERROR 401
>  Problem accessing /solr/redacted/update. Reason:
>   Unauthorized request, Response code: 401
>  
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
>  at 
> org.apache.solr.handler.CdcrReplicator.sendRequest(CdcrReplicator.java:140)
>  at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:104)
>  at 
> org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}






[JENKINS] Solr-reference-guide-8.0 - Build # 2 - Failure

2019-05-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-reference-guide-8.0/2/

Log: 
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on websites1 (git-websites svn-websites) in workspace 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-8.0
No credentials specified
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://gitbox.apache.org/repos/asf/lucene-solr.git # timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from 
https://gitbox.apache.org/repos/asf/lucene-solr.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://gitbox.apache.org/repos/asf/lucene-solr.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/branch_8_0^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/branch_8_0^{commit} # timeout=10
Checking out Revision b5a872d3fd55a67be16a9fc30671b29be0ece013 
(refs/remotes/origin/branch_8_0)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f b5a872d3fd55a67be16a9fc30671b29be0ece013
Commit message: "SOLR-13425: Wrong color in horizontal definition list (#653)"
 > git rev-list --no-walk b5a872d3fd55a67be16a9fc30671b29be0ece013 # timeout=10
No emails were triggered.
[Solr-reference-guide-8.0] $ /bin/bash -xe /tmp/jenkins6777006674825455636.sh
+ bash dev-tools/scripts/jenkins.build.ref.guide.sh
+ set -e
+ RVM_PATH=/home/jenkins/.rvm
+ RUBY_VERSION=ruby-2.3.3
+ GEMSET=solr-refguide-gemset
+ curl -sSL https://get.rvm.io
+ bash -s -- --ignore-dotfiles stable
Turning on ignore dotfiles mode.
Downloading https://github.com/rvm/rvm/archive/1.29.8.tar.gz
Downloading 
https://github.com/rvm/rvm/releases/download/1.29.8/1.29.8.tar.gz.asc
gpg: Signature made Wed 08 May 2019 02:14:49 PM UTC using RSA key ID 39499BDB
gpg: Good signature from "Piotr Kuczynski "
gpg: WARNING: This key is not certified with a trusted signature!
gpg:  There is no indication that the signature belongs to the owner.
Primary key fingerprint: 7D2B AF1C F37B 13E2 069D  6956 105B D0E7 3949 9BDB
GPG verified '/home/jenkins/shared/.rvm/archives/rvm-1.29.8.tgz'
Upgrading the RVM installation in /home/jenkins/shared/.rvm/
Upgrade of RVM in /home/jenkins/shared/.rvm/ is complete.

Thanks for installing RVM 🙏
Please consider donating to our open collective to help us maintain RVM.

👉  Donate: https://opencollective.com/rvm/donate


+ set +x
Running 'source /home/jenkins/.rvm/scripts/rvm'
Running 'rvm cleanup all'
Warning! PATH is not properly set up, 
/home/jenkins/shared/.rvm/gems/ruby-2.3.3/bin is not at first place.
 Usually this is caused by shell initialization files. Search for 
PATH=... entries.
 You can also re-add RVM to your profile by running: rvm get 
stable --auto-dotfiles
 To fix it temporarily in this shell session run: rvm use 
ruby-2.3.3
 To ignore this error add 
rvm_silence_path_mismatch_check_flag=1 to your 
~/.rvmrc file.
Cleaning up rvm archives
Cleaning up rvm repos
Cleaning up rvm src
Cleaning up rvm log
Cleaning up rvm tmp
Cleaning up rvm gemsets
Cleaning up rvm links
Cleanup done.
Running 'rvm autolibs disable'
Warning! PATH is not properly set up, 
/home/jenkins/shared/.rvm/gems/ruby-2.3.3/bin is not at first place.
 Usually this is caused by shell initialization files. Search for 
PATH=... entries.
 You can also re-add RVM to your profile by running: rvm get 
stable --auto-dotfiles
 To fix it temporarily in this shell session run: rvm use 
ruby-2.3.3
 To ignore this error add 
rvm_silence_path_mismatch_check_flag=1 to your 
~/.rvmrc file.
Running 'rvm install ruby-2.3.3'
Warning! PATH is not properly set up, 
/home/jenkins/shared/.rvm/gems/ruby-2.3.3/bin is not at first place.
 Usually this is caused by shell initialization files. Search for 
PATH=... entries.
 You can also re-add RVM to your profile by running: rvm get 
stable --auto-dotfiles
 To fix it temporarily in this shell session run: rvm use 
ruby-2.3.3
 To ignore this error add 
rvm_silence_path_mismatch_check_flag=1 to your 
~/.rvmrc file.
Already installed ruby-2.3.3.
To reinstall use:

rvm reinstall ruby-2.3.3

Running 'rvm gemset create solr-refguide-gemset'
ruby-2.3.3 - #gemset created 
/home/jenkins/shared/.rvm/gems/ruby-2.3.3@solr-refguide-gemset
ruby-2.3.3 - #generating solr-refguide-gemset wrappers
Running 'rvm ruby-2.3.3@solr-refguide-gemset'
Using /home/jenkins/shared/.rvm/gems/ruby-2.3.3 with gemset solr-refguide-gemset
Running 'gem install --force --version 3.5.0 jekyll'
Successfully installed jekyll-3.5.0
Parsing documentation for jekyll-3.5.0
Done installing document

[jira] [Commented] (SOLR-13394) Change default GC from CMS to G1

2019-05-09 Thread Ishan Chattopadhyaya (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836217#comment-16836217
 ] 

Ishan Chattopadhyaya commented on SOLR-13394:
-

As [~ab] brought up in Slack, -XX:+PerfDisableSharedMem causes the Solr 
process to not show up in `jps` output. Some details about this parameter 
are here: https://www.evanjones.ca/jvm-mmap-pause.html

Does anyone think this warrants reverting that parameter?

> Change default GC from CMS to G1
> 
>
> Key: SOLR-13394
> URL: https://issues.apache.org/jira/browse/SOLR-13394
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 8.1
>
> Attachments: SOLR-13394.patch, SOLR-13394.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> CMS has been deprecated in new versions of Java 
> (http://openjdk.java.net/jeps/291). This issue is to switch Solr default from 
> CMS to G1.






[jira] [Resolved] (LUCENE-7840) BooleanQuery.rewriteNoScoring - optimize away any SHOULD clauses if at least 1 MUST/FILTER clause and 0==minShouldMatch

2019-05-09 Thread Jim Ferenczi (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-7840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi resolved LUCENE-7840.
--
   Resolution: Fixed
Fix Version/s: 8.2
   master (9.0)

Thanks [~atris]!

> BooleanQuery.rewriteNoScoring - optimize away any SHOULD clauses if at least 
> 1 MUST/FILTER clause and 0==minShouldMatch
> ---
>
> Key: LUCENE-7840
> URL: https://issues.apache.org/jira/browse/LUCENE-7840
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Hoss Man
>Priority: Major
> Fix For: master (9.0), 8.2
>
> Attachments: LUCENE-7840.patch, LUCENE-7840.patch, LUCENE-7840.patch
>
>
> I haven't thought this through completely, let alone write up a patch / test 
> case, but IIUC...
> We should be able to optimize  {{ BooleanQuery rewriteNoScoring() }} so that 
> (after converting MUST clauses to FILTER clauses) we can check for the common 
> case of {{0==getMinimumNumberShouldMatch()}} and throw away any SHOULD 
> clauses as long as there is at least one FILTER clause.
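The rewrite described above can be modeled in a few lines. This is an illustrative sketch, not Lucene's actual BooleanQuery code; clause tuples and occur names are simplified stand-ins. With minShouldMatch == 0 and at least one MUST/FILTER clause, SHOULD clauses cannot change which docs match (only their scores), so a non-scoring rewrite may drop them:

```python
# Toy model of BooleanQuery.rewriteNoScoring, NOT the Lucene implementation.
# A clause is a (occur, query) tuple with occur in {"MUST", "FILTER", "SHOULD"}.
def rewrite_no_scoring(clauses, min_should_match=0):
    # Step 1: MUST becomes FILTER, since scores are not needed.
    rewritten = [("FILTER", q) if occur == "MUST" else (occur, q)
                 for occur, q in clauses]
    # Step 2: if minShouldMatch is 0 and a FILTER clause already
    # constrains the match set, SHOULD clauses are redundant.
    has_filter = any(occur == "FILTER" for occur, _ in rewritten)
    if min_should_match == 0 and has_filter:
        rewritten = [(occur, q) for occur, q in rewritten
                     if occur != "SHOULD"]
    return rewritten

q = [("MUST", "a"), ("SHOULD", "b"), ("SHOULD", "c")]
print(rewrite_no_scoring(q))  # [('FILTER', 'a')]
```

Note that when min_should_match > 0 the SHOULD clauses do affect matching and must be kept, which is why the optimization guards on that value.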






[jira] [Commented] (LUCENE-7840) BooleanQuery.rewriteNoScoring - optimize away any SHOULD clauses if at least 1 MUST/FILTER clause and 0==minShouldMatch

2019-05-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836204#comment-16836204
 ] 

ASF subversion and git services commented on LUCENE-7840:
-

Commit 214f70cb44d5ef3ff6f689c9cbe98fc6f986552b in lucene-solr's branch 
refs/heads/branch_8x from Atri Sharma
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=214f70c ]

LUCENE-7840: Avoid Building Scorer Supplier For Redundant SHOULD Clauses

For boolean queries, we should eliminate redundant SHOULD clauses during
query rewrite and not build the scorer supplier, as opposed to
eliminating them during weight construction

Signed-off-by: jimczi 


> BooleanQuery.rewriteNoScoring - optimize away any SHOULD clauses if at least 
> 1 MUST/FILTER clause and 0==minShouldMatch
> ---
>
> Key: LUCENE-7840
> URL: https://issues.apache.org/jira/browse/LUCENE-7840
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Hoss Man
>Priority: Major
> Attachments: LUCENE-7840.patch, LUCENE-7840.patch, LUCENE-7840.patch
>
>
> I haven't thought this through completely, let alone write up a patch / test 
> case, but IIUC...
> We should be able to optimize  {{ BooleanQuery rewriteNoScoring() }} so that 
> (after converting MUST clauses to FILTER clauses) we can check for the common 
> case of {{0==getMinimumNumberShouldMatch()}} and throw away any SHOULD 
> clauses as long as there is at least one FILTER clause.






[jira] [Created] (SOLR-13455) JettySolrRunner does not enable HTTP2 unless jetty.testMode is set.

2019-05-09 Thread Cao Manh Dat (JIRA)
Cao Manh Dat created SOLR-13455:
---

 Summary: JettySolrRunner does not enable HTTP2 unless 
jetty.testMode is set.
 Key: SOLR-13455
 URL: https://issues.apache.org/jira/browse/SOLR-13455
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Cao Manh Dat
Assignee: Cao Manh Dat


Right now JettySolrRunner does not add HTTP2ConnectionFactory unless 
"jetty.testMode" is set in system properties. This will affect anyone who wants 
to run embedded Solr or use the solr-test-framework.
https://stackoverflow.com/questions/55417706/solr-8-minisolrcloudcluster-with-multiple-servers-gives-java-io-ioexception
 






[jira] [Commented] (LUCENE-7840) BooleanQuery.rewriteNoScoring - optimize away any SHOULD clauses if at least 1 MUST/FILTER clause and 0==minShouldMatch

2019-05-09 Thread Atri Sharma (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836178#comment-16836178
 ] 

Atri Sharma commented on LUCENE-7840:
-

Thank you for committing it!

> BooleanQuery.rewriteNoScoring - optimize away any SHOULD clauses if at least 
> 1 MUST/FILTER clause and 0==minShouldMatch
> ---
>
> Key: LUCENE-7840
> URL: https://issues.apache.org/jira/browse/LUCENE-7840
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Hoss Man
>Priority: Major
> Attachments: LUCENE-7840.patch, LUCENE-7840.patch, LUCENE-7840.patch
>
>
> I haven't thought this through completely, let alone written up a patch / 
> test case, but IIUC...
> We should be able to optimize {{BooleanQuery.rewriteNoScoring()}} so that 
> (after converting MUST clauses to FILTER clauses) we can check for the common 
> case of {{0==getMinimumNumberShouldMatch()}} and throw away any SHOULD 
> clauses as long as there is at least one FILTER clause.






[jira] [Commented] (LUCENE-7840) BooleanQuery.rewriteNoScoring - optimize away any SHOULD clauses if at least 1 MUST/FILTER clause and 0==minShouldMatch

2019-05-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836167#comment-16836167
 ] 

ASF subversion and git services commented on LUCENE-7840:
-

Commit c988b04b1888d5091a9455d3ae34762eb0ca0cea in lucene-solr's branch 
refs/heads/master from Atri Sharma
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c988b04 ]

LUCENE-7840: Avoid Building Scorer Supplier For Redundant SHOULD Clauses

For boolean queries, we should eliminate redundant SHOULD clauses during
query rewrite and not build the scorer supplier, as opposed to
eliminating them during weight construction

Signed-off-by: jimczi 


> BooleanQuery.rewriteNoScoring - optimize away any SHOULD clauses if at least 
> 1 MUST/FILTER clause and 0==minShouldMatch
> ---
>
> Key: LUCENE-7840
> URL: https://issues.apache.org/jira/browse/LUCENE-7840
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Hoss Man
>Priority: Major
> Attachments: LUCENE-7840.patch, LUCENE-7840.patch, LUCENE-7840.patch
>
>
> I haven't thought this through completely, let alone written up a patch / 
> test case, but IIUC...
> We should be able to optimize {{BooleanQuery.rewriteNoScoring()}} so that 
> (after converting MUST clauses to FILTER clauses) we can check for the common 
> case of {{0==getMinimumNumberShouldMatch()}} and throw away any SHOULD 
> clauses as long as there is at least one FILTER clause.


