[jira] [Commented] (LUCENE-7135) Constants check for JRE bitness causes SecurityException under WebStart
[ https://issues.apache.org/jira/browse/LUCENE-7135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15619342#comment-15619342 ]

Aaron Madlon-Kay commented on LUCENE-7135:
------------------------------------------

> Maybe we should only do this on fallback

That's precisely what my patch does.

> Constants check for JRE bitness causes SecurityException under WebStart
> -----------------------------------------------------------------------
>
> Key: LUCENE-7135
> URL: https://issues.apache.org/jira/browse/LUCENE-7135
> Project: Lucene - Core
> Issue Type: Bug
> Components: core/other
> Affects Versions: 5.5
> Environment: OS X 10.11.4, Java 1.8.0_77-b03 (under WebStart)
> Reporter: Aaron Madlon-Kay
> Attachments: LUCENE-7135.diff
>
> I have an app that I deploy via WebStart that uses Lucene 5.2.1 (we are
> locked to 5.2.1 because that's what [LanguageTool|https://languagetool.org/]
> uses).
> When running under the WebStart security manager, there are two locations
> where exceptions are thrown and prevent pretty much all Lucene classes from
> initializing. This is true even when we sign everything and specify {{}}.
> # In {{RamUsageEstimator}}, fixed by LUCENE-6923
> # In {{Constants}}, caused by the call
> {{System.getProperty("sun.arch.data.model")}} (stack trace below).
> {code}
> Error: Caused by: java.security.AccessControlException: access denied
> ("java.util.PropertyPermission" "sun.arch.data.model" "read")
> Error:   at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
> Error:   at java.security.AccessController.checkPermission(AccessController.java:884)
> Error:   at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
> Error:   at com.sun.javaws.security.JavaWebStartSecurity.checkPermission(Unknown Source)
> Error:   at java.lang.SecurityManager.checkPropertyAccess(SecurityManager.java:1294)
> Error:   at java.lang.System.getProperty(System.java:717)
> Error:   at org.apache.lucene.util.Constants.<clinit>(Constants.java:71)
> Error:   ... 34 more
> {code}
> The latter is still present in the latest version.
> My patch illustrates one solution that appears to be working for us.
> (This patch, together with a backport of the fix to LUCENE-6923, seems to fix
> the issue for our purposes. However if you really wanted to make my day you
> could put out a maintenance release of 5.2 with both fixes included.)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
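The fallback approach discussed in this thread — only touching the privileged property read when it is actually needed, and surviving a SecurityManager denial — can be sketched as follows. This is a hypothetical illustration, not the actual LUCENE-7135 patch; the class and default value are invented for the example.

```java
// Hypothetical sketch: read a system property but fall back to a default
// when a SecurityManager (e.g. under Java WebStart) denies the read,
// so a class's static initializer cannot fail with AccessControlException.
public class SafeProps {
    static String getProperty(String key, String defaultValue) {
        try {
            return System.getProperty(key, defaultValue);
        } catch (SecurityException e) {
            // WebStart's security manager may deny PropertyPermission "read";
            // swallow the exception and use the fallback instead.
            return defaultValue;
        }
    }

    public static void main(String[] args) {
        // Without a restrictive SecurityManager this simply reads the property.
        System.out.println(getProperty("sun.arch.data.model", "unknown"));
    }
}
```

The key point is that the try/catch lives inside the static-initialization path, so class loading succeeds even in a sandboxed environment.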
[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_102) - Build # 549 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/549/
Java: 32bit/jdk1.8.0_102 -server -XX:+UseConcMarkSweepGC

2 tests failed.

FAILED:  org.apache.lucene.replicator.IndexReplicationClientTest.testConsistencyOnExceptions

Error Message:
Captured an uncaught exception in thread: Thread[id=22, name=ReplicationThread-index, state=RUNNABLE, group=TGRP-IndexReplicationClientTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=22, name=ReplicationThread-index, state=RUNNABLE, group=TGRP-IndexReplicationClientTest]
	at __randomizedtesting.SeedInfo.seed([AC5E40A54CC19FE8:23D0A7055EAD6C17]:0)
Caused by: java.lang.AssertionError: handler failed too many times: -1
	at __randomizedtesting.SeedInfo.seed([AC5E40A54CC19FE8]:0)
	at org.apache.lucene.replicator.IndexReplicationClientTest$4.handleUpdateException(IndexReplicationClientTest.java:304)
	at org.apache.lucene.replicator.ReplicationClient$ReplicationThread.run(ReplicationClient.java:77)

FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
Expected to find shardAddress in the up shard info

Stack Trace:
java.lang.AssertionError: Expected to find shardAddress in the up shard info
	at __randomizedtesting.SeedInfo.seed([E9786555F5908FBE:612C5A8F5B6CE246]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.solr.TestDistributedSearch.comparePartialResponses(TestDistributedSearch.java:1162)
	at org.apache.solr.TestDistributedSearch.queryPartialResults(TestDistributedSearch.java:1103)
	at org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:963)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1018)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at
[jira] [Commented] (SOLR-9701) NPE in export handler when "fl" parameter is omitted.
[ https://issues.apache.org/jira/browse/SOLR-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15619142#comment-15619142 ]

ASF subversion and git services commented on SOLR-9701:
-------------------------------------------------------

Commit 807ba8c60c43b277fe2d04e8d7f5d83689e255bb in lucene-solr's branch refs/heads/branch_6x from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=807ba8c ]

SOLR-9701: NPE in export handler when fl parameter is omitted.
(cherry picked from commit 42eab70)

> NPE in export handler when "fl" parameter is omitted.
> -----------------------------------------------------
>
> Key: SOLR-9701
> URL: https://issues.apache.org/jira/browse/SOLR-9701
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Affects Versions: trunk, 6.4
> Reporter: Erick Erickson
> Assignee: Erick Erickson
> Priority: Minor
> Fix For: trunk, 6.4
> Attachments: SOLR-9701.patch, SOLR-9701.patch
>
> This started when a user reported that if you do not specify any parameters
> for the export handler, you get an NPE. I tracked it down to not specifying
> an "fl" parameter.
> But in general I rearranged the error reporting in SortingResponseWriter.write
> so that immediately upon detecting a problem, the exception gets written to
> the output stream and then return immediately rather than save it up for the end.
> Preliminary version of the patch attached; it fixes the immediate problem.
> Still to see is if it breaks any tests since the first error detected will be
> returned to the user rather than the last. I'll fix any tests that are
> sensitive to this and check in sometime this weekend.
[jira] [Resolved] (SOLR-9701) NPE in export handler when "fl" parameter is omitted.
[ https://issues.apache.org/jira/browse/SOLR-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Erick Erickson resolved SOLR-9701.
----------------------------------
       Resolution: Fixed
    Fix Version/s: 6.4
                   trunk

> NPE in export handler when "fl" parameter is omitted.
> -----------------------------------------------------
>
> Key: SOLR-9701
> URL: https://issues.apache.org/jira/browse/SOLR-9701
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Affects Versions: trunk, 6.4
> Reporter: Erick Erickson
> Assignee: Erick Erickson
> Priority: Minor
> Fix For: trunk, 6.4
> Attachments: SOLR-9701.patch, SOLR-9701.patch
>
> This started when a user reported that if you do not specify any parameters
> for the export handler, you get an NPE. I tracked it down to not specifying
> an "fl" parameter.
> But in general I rearranged the error reporting in SortingResponseWriter.write
> so that immediately upon detecting a problem, the exception gets written to
> the output stream and then return immediately rather than save it up for the end.
> Preliminary version of the patch attached; it fixes the immediate problem.
> Still to see is if it breaks any tests since the first error detected will be
> returned to the user rather than the last. I'll fix any tests that are
> sensitive to this and check in sometime this weekend.
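The fix described in SOLR-9701 — writing the error document to the output stream as soon as the problem is detected and returning immediately, rather than saving it up for the end — can be sketched in miniature. This is a hypothetical toy, not the actual SortingResponseWriter code; the class, method, and JSON shapes are invented for illustration.

```java
// Hypothetical sketch of the "write the error immediately and return" pattern:
// a missing "fl" parameter produces an error response up front instead of
// being dereferenced later (which previously surfaced as an NPE).
public class EarlyErrorDemo {
    static String write(String fl) {
        StringBuilder out = new StringBuilder();
        if (fl == null) {
            // Detect the problem, emit the error, and return immediately.
            out.append("{\"error\":\"export field list (fl) must be specified\"}");
            return out.toString(); // never reaches code that dereferences fl
        }
        out.append("{\"fields\":\"").append(fl).append("\"}");
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(write(null));       // formerly the NPE case
        System.out.println(write("id,score")); // the normal case
    }
}
```

A side effect noted in the issue: with early returns, the first detected error reaches the user rather than the last, which is why the tests needed review.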
[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+140) - Build # 2071 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2071/
Java: 32bit/jdk-9-ea+140 -server -XX:+UseConcMarkSweepGC

1 tests failed.

FAILED:  org.apache.solr.handler.component.SpellCheckComponentTest.test

Error Message:
List size mismatch @ spellcheck/suggestions

Stack Trace:
java.lang.RuntimeException: List size mismatch @ spellcheck/suggestions
	at __randomizedtesting.SeedInfo.seed([628EC3E98952B368:EADAFC3327AEDE90]:0)
	at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:900)
	at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:847)
	at org.apache.solr.handler.component.SpellCheckComponentTest.test(SpellCheckComponentTest.java:147)
	at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method)
	at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
	at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at java.lang.Thread.run(java.base@9-ea/Thread.java:843)

Build Log:
[...truncated 10844 lines...]
   [junit4] Suite:
[jira] [Updated] (SOLR-9701) NPE in export handler when "fl" parameter is omitted.
[ https://issues.apache.org/jira/browse/SOLR-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Erick Erickson updated SOLR-9701:
---------------------------------
    Attachment: SOLR-9701.patch

Final patch with tests.

> NPE in export handler when "fl" parameter is omitted.
> -----------------------------------------------------
>
> Key: SOLR-9701
> URL: https://issues.apache.org/jira/browse/SOLR-9701
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Affects Versions: trunk, 6.4
> Reporter: Erick Erickson
> Assignee: Erick Erickson
> Priority: Minor
> Attachments: SOLR-9701.patch, SOLR-9701.patch
>
> This started when a user reported that if you do not specify any parameters
> for the export handler, you get an NPE. I tracked it down to not specifying
> an "fl" parameter.
> But in general I rearranged the error reporting in SortingResponseWriter.write
> so that immediately upon detecting a problem, the exception gets written to
> the output stream and then return immediately rather than save it up for the end.
> Preliminary version of the patch attached; it fixes the immediate problem.
> Still to see is if it breaks any tests since the first error detected will be
> returned to the user rather than the last. I'll fix any tests that are
> sensitive to this and check in sometime this weekend.
[jira] [Commented] (SOLR-9701) NPE in export handler when "fl" parameter is omitted.
[ https://issues.apache.org/jira/browse/SOLR-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15619120#comment-15619120 ]

ASF subversion and git services commented on SOLR-9701:
-------------------------------------------------------

Commit 42eab7035ed0d5ebc7ba87f8c08a7677b87b7bef in lucene-solr's branch refs/heads/master from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=42eab70 ]

SOLR-9701: NPE in export handler when fl parameter is omitted.

> NPE in export handler when "fl" parameter is omitted.
> -----------------------------------------------------
>
> Key: SOLR-9701
> URL: https://issues.apache.org/jira/browse/SOLR-9701
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Affects Versions: trunk, 6.4
> Reporter: Erick Erickson
> Assignee: Erick Erickson
> Priority: Minor
> Attachments: SOLR-9701.patch, SOLR-9701.patch
>
> This started when a user reported that if you do not specify any parameters
> for the export handler, you get an NPE. I tracked it down to not specifying
> an "fl" parameter.
> But in general I rearranged the error reporting in SortingResponseWriter.write
> so that immediately upon detecting a problem, the exception gets written to
> the output stream and then return immediately rather than save it up for the end.
> Preliminary version of the patch attached; it fixes the immediate problem.
> Still to see is if it breaks any tests since the first error detected will be
> returned to the user rather than the last. I'll fix any tests that are
> sensitive to this and check in sometime this weekend.
[jira] [Resolved] (SOLR-9704) Optimize blockChildren facets with filter specified
[ https://issues.apache.org/jira/browse/SOLR-9704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yonik Seeley resolved SOLR-9704.
--------------------------------
       Resolution: Fixed
    Fix Version/s: 6.4
                   master (7.0)

> Optimize blockChildren facets with filter specified
> ---------------------------------------------------
>
> Key: SOLR-9704
> URL: https://issues.apache.org/jira/browse/SOLR-9704
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
> Components: Facet Module
> Reporter: Yonik Seeley
> Assignee: Yonik Seeley
> Fix For: master (7.0), 6.4
> Attachments: SOLR-9704.patch
>
> When doing a domain switch from parents to children, we normally map to all
> children and then after apply any facet filters. This can be done in
> parallel by passing the child filters as "acceptDocs".
[jira] [Commented] (SOLR-9704) Optimize blockChildren facets with filter specified
[ https://issues.apache.org/jira/browse/SOLR-9704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15619055#comment-15619055 ]

ASF subversion and git services commented on SOLR-9704:
-------------------------------------------------------

Commit 19d86e69d9e2ab768c0ce2e3aa0737a2e5104d0b in lucene-solr's branch refs/heads/branch_6x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=19d86e6 ]

SOLR-9704: optimization: use filters after blockChildren for acceptDocs

> Optimize blockChildren facets with filter specified
> ---------------------------------------------------
>
> Key: SOLR-9704
> URL: https://issues.apache.org/jira/browse/SOLR-9704
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
> Components: Facet Module
> Reporter: Yonik Seeley
> Assignee: Yonik Seeley
> Attachments: SOLR-9704.patch
>
> When doing a domain switch from parents to children, we normally map to all
> children and then after apply any facet filters. This can be done in
> parallel by passing the child filters as "acceptDocs".
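The idea in SOLR-9704 — consulting the facet filters as "acceptDocs" while expanding parents to children, instead of expanding to all children and filtering in a second pass — can be shown with a toy sketch. This is plain Java for illustration only; it is not Solr's actual Bits/acceptDocs API, and the names are invented.

```java
// Hypothetical illustration: filter child docs inline during traversal
// (the "acceptDocs" style), rather than collecting everything and
// post-filtering in a separate pass.
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntPredicate;

public class AcceptDocsDemo {
    // Single pass: each candidate child is checked against the filter
    // (standing in for acceptDocs) as it is visited.
    static List<Integer> matchingChildren(int[] childDocs, IntPredicate acceptDocs) {
        List<Integer> out = new ArrayList<>();
        for (int doc : childDocs) {
            if (acceptDocs.test(doc)) out.add(doc); // no second filtering pass
        }
        return out;
    }

    public static void main(String[] args) {
        int[] children = {1, 2, 3, 4, 5};
        IntPredicate filter = d -> d % 2 == 1; // stand-in for a facet filter
        System.out.println(matchingChildren(children, filter));
    }
}
```

The win in the real implementation is avoiding materializing the full child domain before the filters are applied.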
[jira] [Updated] (SOLR-9705) Add pageRank Streaming Expression
[ https://issues.apache.org/jira/browse/SOLR-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joel Bernstein updated SOLR-9705:
---------------------------------
    Description: 
Starting with Solr 6.3, Streaming Expressions have the capability to run parallel *batch jobs*. So it would be useful to implement some batch algorithms.

One useful batch algorithm is PageRank https://en.wikipedia.org/wiki/PageRank.

PageRank can be used as a general purpose approach for ranking nodes in a graph. It can also be used for ranking web pages based on link popularity.

  was:
Starting with Solr 6.3, Streaming Expressions has the capability to run parallel *batch jobs*. So it would be useful to implement some batch algorithms.

One useful batch algorithm is PageRank https://en.wikipedia.org/wiki/PageRank.

PageRank can be used as a general purpose approach for ranking nodes in a graph. It can also be used for ranking web pages based on link popularity.

> Add pageRank Streaming Expression
> ---------------------------------
>
> Key: SOLR-9705
> URL: https://issues.apache.org/jira/browse/SOLR-9705
> Project: Solr
> Issue Type: New Feature
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Joel Bernstein
>
> Starting with Solr 6.3, Streaming Expressions have the capability to run
> parallel *batch jobs*. So it would be useful to implement some batch
> algorithms. One useful batch algorithm is PageRank
> https://en.wikipedia.org/wiki/PageRank.
> PageRank can be used as a general purpose approach for ranking nodes in a
> graph. It can also be used for ranking web pages based on link popularity.
[jira] [Updated] (SOLR-9705) Add pageRank Streaming Expression
[ https://issues.apache.org/jira/browse/SOLR-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joel Bernstein updated SOLR-9705:
---------------------------------
    Description: 
Starting with Solr 6.3, Streaming Expressions has the capability to run parallel *batch jobs*. So it would be useful to implement some batch algorithms.

One useful batch algorithm is PageRank https://en.wikipedia.org/wiki/PageRank.

PageRank can be used as a general purpose approach for ranking nodes in a graph. It can also be used for ranking web pages based on link popularity.

  was:
Starting with Solr 6.3, Streaming Expressions has the capability to run parallel *batch jobs*. So it would be useful to implement some batch algorithms.

One useful batch algorithms is PageRank https://en.wikipedia.org/wiki/PageRank.

PageRank can be used as a general purpose approach for ranking nodes in a graph. It can also be used in ranking pages based on link popularity.

> Add pageRank Streaming Expression
> ---------------------------------
>
> Key: SOLR-9705
> URL: https://issues.apache.org/jira/browse/SOLR-9705
> Project: Solr
> Issue Type: New Feature
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Joel Bernstein
>
> Starting with Solr 6.3, Streaming Expressions has the capability to run
> parallel *batch jobs*. So it would be useful to implement some batch
> algorithms. One useful batch algorithm is PageRank
> https://en.wikipedia.org/wiki/PageRank.
> PageRank can be used as a general purpose approach for ranking nodes in a
> graph. It can also be used for ranking web pages based on link popularity.
[jira] [Created] (SOLR-9705) Add pageRank Streaming Expression
Joel Bernstein created SOLR-9705:
------------------------------------

             Summary: Add pageRank Streaming Expression
                 Key: SOLR-9705
                 URL: https://issues.apache.org/jira/browse/SOLR-9705
             Project: Solr
          Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
            Reporter: Joel Bernstein

Starting with Solr 6.3, Streaming Expressions has the capability to run parallel *batch jobs*. So it would be useful to implement some batch algorithms. One useful batch algorithms is PageRank https://en.wikipedia.org/wiki/PageRank.

PageRank can be used as a general purpose approach for ranking nodes in a graph. It can also be used in ranking pages based on link popularity.
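For reference, the PageRank algorithm the issue proposes wrapping in a Streaming Expression can be sketched in a few lines. This is a generic textbook implementation, not Solr's (the class and the simplified dangling-node handling are this sketch's own): each iteration distributes a damped share of every node's rank across its outgoing links.

```java
// Minimal PageRank sketch (hypothetical; not the proposed Solr implementation).
// links[i] lists the nodes that node i links to.
import java.util.Arrays;

public class PageRankDemo {
    static double[] pageRank(int[][] links, int iterations, double damping) {
        int n = links.length;
        double[] rank = new double[n];
        Arrays.fill(rank, 1.0 / n); // start with uniform rank
        for (int it = 0; it < iterations; it++) {
            double[] next = new double[n];
            Arrays.fill(next, (1.0 - damping) / n); // teleport term
            for (int i = 0; i < n; i++) {
                if (links[i].length == 0) continue; // dangling node (simplified)
                double share = damping * rank[i] / links[i].length;
                for (int target : links[i]) next[target] += share;
            }
            rank = next;
        }
        return rank;
    }

    public static void main(String[] args) {
        // A 3-node cycle 0 -> 1 -> 2 -> 0: by symmetry each rank stays 1/3.
        int[][] links = {{1}, {2}, {0}};
        System.out.println(Arrays.toString(pageRank(links, 50, 0.85)));
    }
}
```

As a batch job this maps naturally onto the iterate-until-convergence pattern the issue alludes to: each iteration is one parallel pass over the graph.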
[jira] [Commented] (SOLR-9704) Optimize blockChildren facets with filter specified
[ https://issues.apache.org/jira/browse/SOLR-9704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15619027#comment-15619027 ]

ASF subversion and git services commented on SOLR-9704:
-------------------------------------------------------

Commit 0f8802ba20de35daac75f6bbcc28a1789a27b06a in lucene-solr's branch refs/heads/master from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0f8802b ]

SOLR-9704: optimization: use filters after blockChildren for acceptDocs

> Optimize blockChildren facets with filter specified
> ---------------------------------------------------
>
> Key: SOLR-9704
> URL: https://issues.apache.org/jira/browse/SOLR-9704
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
> Components: Facet Module
> Reporter: Yonik Seeley
> Assignee: Yonik Seeley
> Attachments: SOLR-9704.patch
>
> When doing a domain switch from parents to children, we normally map to all
> children and then after apply any facet filters. This can be done in
> parallel by passing the child filters as "acceptDocs".
[jira] [Commented] (LUCENE-7526) Improvements to UnifiedHighlighter OffsetStrategies
[ https://issues.apache.org/jira/browse/LUCENE-7526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15619010#comment-15619010 ]

ASF GitHub Bot commented on LUCENE-7526:
----------------------------------------

Github user dsmiley commented on the issue:

    https://github.com/apache/lucene-solr/pull/105

    Check it out: https://github.com/dsmiley/lucene-solr/commit/50e2ea89c7eb15c863aa6e04e14fd32085ee85bd
    Remember this is a package-private class internal to the UnifiedHighlighter. Not completely implementing an interface if the caller won't care is okay.
    RE MultiTermHighlighting: I think it's fine as-is. The UH class is very long to adopt this -- too long as it is IMO.

> Improvements to UnifiedHighlighter OffsetStrategies
> ---------------------------------------------------
>
> Key: LUCENE-7526
> URL: https://issues.apache.org/jira/browse/LUCENE-7526
> Project: Lucene - Core
> Issue Type: Improvement
> Components: modules/highlighter
> Reporter: Timothy M. Rodriguez
> Assignee: David Smiley
> Priority: Minor
> Fix For: 6.4
>
> This ticket improves several of the UnifiedHighlighter FieldOffsetStrategies
> by reducing reliance on creating or re-creating TokenStreams.
> The primary changes are as follows:
> * AnalysisOffsetStrategy - split into two offset strategies
> ** MemoryIndexOffsetStrategy - the primary analysis mode that utilizes a
> MemoryIndex for producing Offsets
> ** TokenStreamOffsetStrategy - an offset strategy that avoids creating a
> MemoryIndex. Can only be used if the query distills down to terms and
> automata.
> * TokenStream removal
> ** MemoryIndexOffsetStrategy - previously a TokenStream was created to fill
> the memory index and then once consumed a new one was generated by
> uninverting the MemoryIndex back into a TokenStream if there were automata
> (wildcard/mtq queries) involved. Now this is avoided, which should save
> memory and avoid a second pass over the data.
> ** TermVectorOffsetStrategy - this was refactored in a similar way to avoid
> generating a TokenStream if automata are involved.
> ** PostingsWithTermVectorsOffsetStrategy - similar refactoring
> * CompositePostingsEnum - aggregates several underlying PostingsEnums for
> wildcard/mtq queries. This should improve relevancy by providing unified
> metrics for a wildcard across all it's term matches
> * Added a HighlightFlag for enabling the newly separated
> TokenStreamOffsetStrategy since it can adversely affect passage relevancy
[GitHub] lucene-solr issue #105: LUCENE-7526 Improvements to UnifiedHighlighter Offse...
Github user dsmiley commented on the issue:

    https://github.com/apache/lucene-solr/pull/105

    Check it out: https://github.com/dsmiley/lucene-solr/commit/50e2ea89c7eb15c863aa6e04e14fd32085ee85bd
    Remember this is a package-private class internal to the UnifiedHighlighter. Not completely implementing an interface if the caller won't care is okay.
    RE MultiTermHighlighting: I think it's fine as-is. The UH class is very long to adopt this -- too long as it is IMO.

---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---
[jira] [Updated] (SOLR-9704) Optimize blockChildren facets with filter specified
[ https://issues.apache.org/jira/browse/SOLR-9704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yonik Seeley updated SOLR-9704:
-------------------------------
    Attachment: SOLR-9704.patch

Patch attached.

> Optimize blockChildren facets with filter specified
> ---------------------------------------------------
>
> Key: SOLR-9704
> URL: https://issues.apache.org/jira/browse/SOLR-9704
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
> Components: Facet Module
> Reporter: Yonik Seeley
> Assignee: Yonik Seeley
> Attachments: SOLR-9704.patch
>
> When doing a domain switch from parents to children, we normally map to all
> children and then after apply any facet filters. This can be done in
> parallel by passing the child filters as "acceptDocs".
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 937 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/937/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.

FAILED:  org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
	at __randomizedtesting.SeedInfo.seed([D7F4265C2DC04E3B:BF4B1376FD5A5CD7]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.failNotEquals(Assert.java:647)
	at org.junit.Assert.assertEquals(Assert.java:128)
	at org.junit.Assert.assertEquals(Assert.java:472)
	at org.junit.Assert.assertEquals(Assert.java:456)
	at org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.cancelDelegationToken(TestSolrCloudWithDelegationTokens.java:140)
	at org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenCancelFail(TestSolrCloudWithDelegationTokens.java:294)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at
[jira] [Assigned] (SOLR-9704) Optimize blockChildren facets with filter specified
[ https://issues.apache.org/jira/browse/SOLR-9704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yonik Seeley reassigned SOLR-9704: -- Assignee: Yonik Seeley > Optimize blockChildren facets with filter specified > --- > > Key: SOLR-9704 > URL: https://issues.apache.org/jira/browse/SOLR-9704 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Facet Module >Reporter: Yonik Seeley >Assignee: Yonik Seeley > > When doing a domain switch from parents to children, we normally map to all > children and then after apply any facet filters. This can be done in > parallel by passing the child filters as "acceptDocs". -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-9704) Optimize blockChildren facets with filter specified
Yonik Seeley created SOLR-9704: -- Summary: Optimize blockChildren facets with filter specified Key: SOLR-9704 URL: https://issues.apache.org/jira/browse/SOLR-9704 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: Facet Module Reporter: Yonik Seeley When doing a domain switch from parents to children, we normally map to all children and then after apply any facet filters. This can be done in parallel by passing the child filters as "acceptDocs". -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
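The SOLR-9704 idea above (pass the child filters down as "acceptDocs" so filtering happens during the parent-to-child domain mapping, instead of materializing all children and filtering in a second pass) can be sketched with plain bitsets. This is an illustrative model only, not Solr's implementation: the doc layout (children stored immediately before their parent, as in Lucene block joins) and all names here are assumptions.

```java
import java.util.BitSet;

public class AcceptDocsSketch {

    /**
     * Collect the children of the parents in parentDomain, intersected with
     * acceptDocs in the same pass. Children occupy the doc-id range between
     * the previous parent and the current one.
     */
    static BitSet childrenOf(BitSet parentDocs, BitSet parentDomain, BitSet acceptDocs, int maxDoc) {
        BitSet children = new BitSet(maxDoc);
        int prevParent = -1;
        for (int parent = parentDocs.nextSetBit(0); parent >= 0; parent = parentDocs.nextSetBit(parent + 1)) {
            if (parentDomain.get(parent)) {
                for (int child = prevParent + 1; child < parent; child++) {
                    if (acceptDocs == null || acceptDocs.get(child)) {
                        children.set(child);  // filter applied while mapping, no second pass
                    }
                }
            }
            prevParent = parent;
        }
        return children;
    }

    public static void main(String[] args) {
        // docs 0-2 are children of parent 3; docs 4-6 are children of parent 7
        BitSet parents = new BitSet(); parents.set(3); parents.set(7);
        BitSet domain = new BitSet(); domain.set(3);   // parent domain: parent 3 only
        BitSet accept = new BitSet(); accept.set(1); accept.set(2); accept.set(5);
        System.out.println(childrenOf(parents, domain, accept, 10));  // {1, 2}
    }
}
```

Doc 0 is excluded by the accept filter and docs 4-6 never enter the result because their parent is outside the domain, which is the point: the filter never sees children that would be discarded anyway.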
[jira] [Commented] (SOLR-9702) Authentication & Authorization based on Jetty security
[ https://issues.apache.org/jira/browse/SOLR-9702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15618844#comment-15618844 ] Hrishikesh Gadre commented on SOLR-9702: bq. which in turn would open up the possibility to use a whole range of auth services (in particular LDAP servers). I recently contributed LDAP authentication support in the Hadoop authentication framework (HADOOP-12082). SOLR-9513 is tracking the changes required to expose this functionality in Solr. Maybe you can use that? > Authentication & Authorization based on Jetty security > -- > > Key: SOLR-9702 > URL: https://issues.apache.org/jira/browse/SOLR-9702 > Project: Solr > Issue Type: Bug > Security Level: Public (Default Security Level. Issues are Public) > Components: security >Affects Versions: 6.2.1 >Reporter: Thomas Quinot > > (following up on comments initially posted on SOLR-7275). > Back in Solr 4 days, user authentication could be handled by Jetty, and some > level of authorization could be implemented using request regexp rules. This > was explicitly documented in the SolrSecurity page: > http://wiki.apache.org/solr/SolrSecurity?action=recall=35#Jetty_realm_example > In particular, authentication could thus be performed against a variety of > services implemented in Jetty, such as HashLoginService (mentioned explicitly > in the above documentation, tested in production, does work) or possibly > JAASLoginService, which in turn would open up the possibility to use a whole > range of auth services (in particular LDAP servers). > I see that the usage of Jetty is now "an implementation detail". Does this > mean that the feature listed above is not supported anymore? (This is quite > unfortunate IMO, as even just the HashLoginService would be useful to > authenticate users against a database of UNIX crypt(3) passwords) > The new login services that are apparently being reimplemented in Solr itself > seem to be much less flexible and limited.
[jira] [Updated] (SOLR-9703) Increase sub-facet efficiency, don't re-parse queries for each parent bucket
[ https://issues.apache.org/jira/browse/SOLR-9703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yonik Seeley updated SOLR-9703: --- Component/s: Facet Module > Increase sub-facet efficiency, don't re-parse queries for each parent bucket > > > Key: SOLR-9703 > URL: https://issues.apache.org/jira/browse/SOLR-9703 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Facet Module >Reporter: Yonik Seeley > > Right now, if one has a parent facet with a child facet, the child facet's > queries (say from "filter") will be parsed for each processed bucket in the > parent (in fact a new FacetProcessor will be created for each parent bucket). > We could have a parse cache, store the parsed queries in the request context, > or perhaps do something more general and make facet processors reusable. The > latter sounds the most promising way to reduce a bunch of redundant work per > bucket. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-9703) Increase sub-facet efficiency, don't re-parse queries for each parent bucket
Yonik Seeley created SOLR-9703: -- Summary: Increase sub-facet efficiency, don't re-parse queries for each parent bucket Key: SOLR-9703 URL: https://issues.apache.org/jira/browse/SOLR-9703 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Reporter: Yonik Seeley Right now, if one has a parent facet with a child facet, the child facet's queries (say from "filter") will be parsed for each processed bucket in the parent (in fact a new FacetProcessor will be created for each parent bucket). We could have a parse cache, store the parsed queries in the request context, or perhaps do something more general and make facet processors reusable. The latter sounds the most promising way to reduce a bunch of redundant work per bucket. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
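One of the options mentioned in SOLR-9703, storing parsed queries in the request context, amounts to per-request memoization keyed by the query string. A minimal sketch, where parse() is a hypothetical stand-in for real query parsing (the real code would produce a Lucene Query, not a String):

```java
import java.util.HashMap;
import java.util.Map;

public class ParsedQueryCache {
    static int parseCalls = 0;  // instrumentation: how often we actually parse

    // Hypothetical stand-in for the expensive parse step.
    static String parse(String queryString) {
        parseCalls++;
        return "PARSED(" + queryString + ")";
    }

    // A per-request cache: each distinct filter string is parsed once,
    // no matter how many parent buckets ask for it.
    private final Map<String, String> cache = new HashMap<>();

    String getParsed(String queryString) {
        return cache.computeIfAbsent(queryString, ParsedQueryCache::parse);
    }

    public static void main(String[] args) {
        ParsedQueryCache requestContextCache = new ParsedQueryCache();
        for (int bucket = 0; bucket < 100; bucket++) {  // 100 parent buckets, same sub-facet filter
            requestContextCache.getParsed("color:red");
        }
        System.out.println(parseCalls);  // 1
    }
}
```

The alternative floated in the issue, making FacetProcessors reusable, would subsume this by keeping the whole processor (not just its parsed queries) alive across buckets.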
[jira] [Commented] (SOLR-9681) add filter to any facet
[ https://issues.apache.org/jira/browse/SOLR-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15618765#comment-15618765 ] ASF subversion and git services commented on SOLR-9681: --- Commit 3ada3421cda4c9d5275b559f084dbc886eee4d72 in lucene-solr's branch refs/heads/branch_6x from [~yo...@apache.org] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3ada342 ] SOLR-9681:tests: add filter after block join test > add filter to any facet > --- > > Key: SOLR-9681 > URL: https://issues.apache.org/jira/browse/SOLR-9681 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Facet Module >Reporter: Yonik Seeley >Assignee: Yonik Seeley > Fix For: master (7.0), 6.4 > > Attachments: SOLR-9681.patch > > > For the JSON Facet API, we should be able to add a list of filters to any > facet. These would be applied after any domain changes, hence useful for > parent->child mapping that would otherwise match all children of any parent > (SOLR-9510) > The API should also be consistent with "filter" at the top level of the JSON > Request API (examples at http://yonik.com/solr-json-request-api/ ) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9681) add filter to any facet
[ https://issues.apache.org/jira/browse/SOLR-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15618763#comment-15618763 ] ASF subversion and git services commented on SOLR-9681: --- Commit d8d3a8b9b8e7345c4a02a62f7e321c4e9a2440bf in lucene-solr's branch refs/heads/master from [~yo...@apache.org] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d8d3a8b ] SOLR-9681:tests: add filter after block join test > add filter to any facet > --- > > Key: SOLR-9681 > URL: https://issues.apache.org/jira/browse/SOLR-9681 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Facet Module >Reporter: Yonik Seeley >Assignee: Yonik Seeley > Fix For: master (7.0), 6.4 > > Attachments: SOLR-9681.patch > > > For the JSON Facet API, we should be able to add a list of filters to any > facet. These would be applied after any domain changes, hence useful for > parent->child mapping that would otherwise match all children of any parent > (SOLR-9510) > The API should also be consistent with "filter" at the top level of the JSON > Request API (examples at http://yonik.com/solr-json-request-api/ ) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
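For concreteness, a JSON Facet API request along the lines SOLR-9681 describes, a filter applied after a blockChildren domain change, might look roughly like this. The field names and the exact placement of "filter" are assumptions based on the issue description, not the committed syntax:

```json
{
  "query": "*:*",
  "facet": {
    "child_tags": {
      "type": "terms",
      "field": "tag_s",
      "domain": {
        "blockChildren": "type_s:parent",
        "filter": "child_type_s:comment"
      }
    }
  }
}
```

The intent is that after the parent-to-child domain switch maps to all children, the filter narrows the domain to the relevant children before bucketing, mirroring "filter" at the top level of the JSON Request API.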
[jira] [Commented] (SOLR-9702) Authentication & Authorization based on Jetty security
[ https://issues.apache.org/jira/browse/SOLR-9702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15618754#comment-15618754 ] Jan Høydahl commented on SOLR-9702: --- The old wiki article you refer to was a user-contributed recipe and was never "supported" as such. The Solr/Lucene project will not officially endorse hacking the internal Jetty settings for the reasons you mention yourself. That does not mean that you cannot get it working in your own environment by adding the missing JARs and setting things up -- it is still Jetty. But you will be on your own for the next upgrade or if/when we stop using Jetty to power Solr. Your best action forward would be to describe what you are not able to do with our current Auth/Authz plugins, and see if there is interest in adding what you need, e.g. HashLogin. It is actually not very difficult to write your own security custom plugin either, perhaps wrapping the functionality from an existing library. This issue will probably be closed as Won't fix :( > Authentication & Authorization based on Jetty security > -- > > Key: SOLR-9702 > URL: https://issues.apache.org/jira/browse/SOLR-9702 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: security >Affects Versions: 6.2.1 >Reporter: Thomas Quinot > > (following up on comments initially posted on SOLR-7275). > Back in Solr 4 days, user authentication could be handled by Jetty, and some > level of authorization could be implemented using request regexp rules. 
This > was explicitly documented in the SolrSecurity page: > http://wiki.apache.org/solr/SolrSecurity?action=recall=35#Jetty_realm_example > In particular, authentication could thus be performed against a variety of > services implemented in Jetty, such as HashLoginService (mentioned explicitly > in the above documentation, tested in production, does work) or possibly > JAASLoginService, which in turn would open up the possibility to use a whole > range of auth services (in particular LDAP servers). > I see that the usage of Jetty is now "an implementation detail". Does this > mean that the feature listed above is not supported anymore? (This is quite > unfortunate IMO, as even just the HashLoginService would be useful to > authenticate users against a database of UNIX crypt(3) passwords) > The new login services that are apparently being reimplemented in Solr itself > seem to be much less flexible and limited. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 188 - Still unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/188/ 6 tests failed. FAILED: org.apache.lucene.search.TestFuzzyQuery.testRandom Error Message: Test abandoned because suite timeout was reached. Stack Trace: java.lang.Exception: Test abandoned because suite timeout was reached. at __randomizedtesting.SeedInfo.seed([B7170FB6D1C73E2]:0) FAILED: junit.framework.TestSuite.org.apache.lucene.search.TestFuzzyQuery Error Message: Suite timeout exceeded (>= 720 msec). Stack Trace: java.lang.Exception: Suite timeout exceeded (>= 720 msec). at __randomizedtesting.SeedInfo.seed([B7170FB6D1C73E2]:0) FAILED: org.apache.solr.cloud.SharedFSAutoReplicaFailoverTest.test Error Message: Timeout waiting for all live and active Stack Trace: java.lang.AssertionError: Timeout waiting for all live and active at __randomizedtesting.SeedInfo.seed([3A2118BED78F59EA:B275276479733412]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.SharedFSAutoReplicaFailoverTest.testBasics(SharedFSAutoReplicaFailoverTest.java:309) at org.apache.solr.cloud.SharedFSAutoReplicaFailoverTest.test(SharedFSAutoReplicaFailoverTest.java:127) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3636 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3636/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.security.BasicAuthStandaloneTest.testBasicAuth Error Message: Invalid json: Error 401 HTTP ERROR: 401 Problem accessing /solr/admin/authentication. Reason: Bad credentials (Powered by Jetty // 9.3.8.v20160314) Stack Trace: java.lang.AssertionError: Invalid json: Error 401 HTTP ERROR: 401 Problem accessing /solr/admin/authentication. Reason: Bad credentials (Powered by Jetty // 9.3.8.v20160314) at __randomizedtesting.SeedInfo.seed([33D21BE7C47A7E4:BF5357ACD814249E]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.security.BasicAuthIntegrationTest.verifySecurityStatus(BasicAuthIntegrationTest.java:256) at org.apache.solr.security.BasicAuthIntegrationTest.verifySecurityStatus(BasicAuthIntegrationTest.java:237) at org.apache.solr.security.BasicAuthStandaloneTest.testBasicAuth(BasicAuthStandaloneTest.java:102) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at
[jira] [Updated] (SOLR-9684) Add scheduler Streaming Expression
[ https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-9684: - Description: SOLR-9559 adds a general purpose *parallel task executor* for streaming expressions. The executor() function executes a stream of tasks and doesn't have any concept of task priority. The scheduler() function wraps two streams, a high priority stream and a low priority stream. The scheduler function emits tuples from the high priority stream first, and then the low priority stream. The executor() function can then wrap the scheduler function to see tasks in priority order. Pseudo syntax: {code} daemon(executor(scheduler(topic(tasks, q="priority:high"), topic(tasks, "priority:low")))) {code} was: SOLR-9559 adds a general purpose *parallel task executor* for streaming expressions. The executor() function executes a stream of tasks and doesn't have any concept of task priority. The scheduler() function wraps two streams, a high priority stream and a low priority stream. The scheduler function emits tuples from the higher priority stream first, and then the lower priority stream. The executor() function can then wrap the scheduler function to see tasks in priority order. Pseudo syntax: {code} daemon(executor(scheduler(topic(tasks, q="priority:high"), topic(tasks, "priority:low")))) {code} > Add scheduler Streaming Expression > -- > > Key: SOLR-9684 > URL: https://issues.apache.org/jira/browse/SOLR-9684 > Project: Solr > Issue Type: New Feature > Security Level: Public (Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Attachments: SOLR-9684.patch, SOLR-9684.patch > > > SOLR-9559 adds a general purpose *parallel task executor* for streaming > expressions. The executor() function executes a stream of tasks and doesn't > have any concept of task priority. > The scheduler() function wraps two streams, a high priority stream and a low > priority stream. 
The scheduler function emits tuples from the high priority > stream first, and then the low priority stream. > The executor() function can then wrap the scheduler function to see tasks in > priority order. > Pseudo syntax: > {code} > daemon(executor(scheduler(topic(tasks, q="priority:high"), topic(tasks, > "priority:low")))) > {code}
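The {code} snippets in these messages are cut off mid-expression. A fully parenthesized version of the pseudo syntax might read as follows; the checkpoint collection name and the daemon/executor/topic parameters shown are illustrative assumptions, not taken from the patch:

```
daemon(id="taskRunner", runInterval="1000",
  executor(threads=4,
    scheduler(
      topic(checkpoints, tasks, q="priority:high", fl="id,expr_s", id="highTasks"),
      topic(checkpoints, tasks, q="priority:low",  fl="id,expr_s", id="lowTasks"))))
```

Reading inside out: the two topic() streams pull high- and low-priority tasks, scheduler() drains the high-priority topic before the low-priority one, executor() runs whatever it is handed, and daemon() repeats the whole pipeline on an interval.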
[jira] [Updated] (SOLR-9684) Add scheduler Streaming Expression
[ https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-9684: - Attachment: SOLR-9684.patch > Add scheduler Streaming Expression > -- > > Key: SOLR-9684 > URL: https://issues.apache.org/jira/browse/SOLR-9684 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Attachments: SOLR-9684.patch, SOLR-9684.patch > > > SOLR-9559 adds a general purpose *parallel task executor* for streaming > expressions. The executor() function executes a stream of tasks and doesn't > have any concept of task priority. > The scheduler() function wraps two streams, a high priority stream and a low > priority stream. The scheduler function emits tuples from the higher priority > stream first, and then the lower priority stream. > The executor() function can then wrap the scheduler function to see tasks in > priority order. > Pseudo syntax: > {code} > daemon(executor(scheduler(topic(tasks, q="priority:high"), topic(tasks, > "priority:low" > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9684) Add scheduler Streaming Expression
[ https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-9684: - Description: SOLR-9559 adds a general purpose *parallel task executor* for streaming expressions. The executor() function executes a stream of tasks and doesn't have any concept of task priority. The scheduler() function wraps two streams, a high priority stream and a low priority stream. The scheduler function emits tuples from the higher priority stream first, and then the lower priority stream. The executor() function can then wrap the scheduler function to see tasks in priority order. Pseudo syntax: {code} daemon(executor(scheduler(topic(tasks, q="priority:high"), topic(tasks, "priority:low")))) {code} was: SOLR-9559 adds a general purpose *parallel task executor* for streaming expressions. The executor() function executes a stream of tasks and doesn't have any concept of task priority. The scheduler() function wraps a list of streams and *prioritizes* the iteration of the streams. This allows there to be different task queues with different priorities. The executor() function can then wrap the scheduler function to see tasks in priority order. Pseudo syntax: {code} daemon(executor(scheduler(topic(), topic(), topic()))) {code} > Add scheduler Streaming Expression > -- > > Key: SOLR-9684 > URL: https://issues.apache.org/jira/browse/SOLR-9684 > Project: Solr > Issue Type: New Feature > Security Level: Public (Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Attachments: SOLR-9684.patch > > > SOLR-9559 adds a general purpose *parallel task executor* for streaming > expressions. The executor() function executes a stream of tasks and doesn't > have any concept of task priority. > The scheduler() function wraps two streams, a high priority stream and a low > priority stream. 
The scheduler function emits tuples from the higher priority > stream first, and then the lower priority stream. > The executor() function can then wrap the scheduler function to see tasks in > priority order. > Pseudo syntax: > {code} > daemon(executor(scheduler(topic(tasks, q="priority:high"), topic(tasks, > "priority:low")))) > {code}
[jira] [Commented] (LUCENE-7526) Improvements to UnifiedHighlighter OffsetStrategies
[ https://issues.apache.org/jira/browse/LUCENE-7526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15618647#comment-15618647 ] ASF GitHub Bot commented on LUCENE-7526: Github user Timothy055 commented on the issue: https://github.com/apache/lucene-solr/pull/105 Hmm, clever! But not sure I find it very clean though. I feel like that can lead to trouble down the road if code ever expects the offsets to be ordered. If we went that route we wouldn't even need the priority queue though. Btw, I MultiTermHighlighting is nearly gone except for one method that is used in the UnifiedHighlighter and MemoryIndexOffsetStrategy for extracting automata from a query. Any ideas on good ways to move it? Perhaps the UnifiedHighlighter should do all automata extraction and pass that in? > Improvements to UnifiedHighlighter OffsetStrategies > --- > > Key: LUCENE-7526 > URL: https://issues.apache.org/jira/browse/LUCENE-7526 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/highlighter >Reporter: Timothy M. Rodriguez >Assignee: David Smiley >Priority: Minor > Fix For: 6.4 > > > This ticket improves several of the UnifiedHighlighter FieldOffsetStrategies > by reducing reliance on creating or re-creating TokenStreams. > The primary changes are as follows: > * AnalysisOffsetStrategy - split into two offset strategies > ** MemoryIndexOffsetStrategy - the primary analysis mode that utilizes a > MemoryIndex for producing Offsets > ** TokenStreamOffsetStrategy - an offset strategy that avoids creating a > MemoryIndex. Can only be used if the query distills down to terms and > automata. > * TokenStream removal > ** MemoryIndexOffsetStrategy - previously a TokenStream was created to fill > the memory index and then once consumed a new one was generated by > uninverting the MemoryIndex back into a TokenStream if there were automata > (wildcard/mtq queries) involved. Now this is avoided, which should save > memory and avoid a second pass over the data. 
> ** TermVectorOffsetStrategy - this was refactored in a similar way to avoid > generating a TokenStream if automata are involved. > ** PostingsWithTermVectorsOffsetStrategy - similar refactoring > * CompositePostingsEnum - aggregates several underlying PostingsEnums for > wildcard/mtq queries. This should improve relevancy by providing unified > metrics for a wildcard across all its term matches > * Added a HighlightFlag for enabling the newly separated > TokenStreamOffsetStrategy since it can adversely affect passage relevancy -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
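The priority-queue merge that a CompositePostingsEnum-style aggregation performs can be sketched as follows. This is a simplified stand-in, not Lucene's actual PostingsEnum API: each matched term contributes an already-sorted list of {start, end} offset pairs, and a priority queue interleaves them into one stream ordered by start offset.

```java
import java.util.*;

// Simplified model of merging per-term offset lists into a single
// offset-ordered stream, as a composite enum over wildcard matches must do.
// Types here are illustrative stand-ins for Lucene's PostingsEnum machinery.
public class OffsetMerge {
    public static List<int[]> merge(List<List<int[]>> perTermOffsets) {
        // Queue entries: {startOffset, endOffset, listIndex, positionInList}.
        PriorityQueue<int[]> pq =
            new PriorityQueue<int[]>(Comparator.comparingInt((int[] e) -> e[0]));
        for (int i = 0; i < perTermOffsets.size(); i++) {
            if (!perTermOffsets.get(i).isEmpty()) {
                int[] first = perTermOffsets.get(i).get(0);
                pq.add(new int[] {first[0], first[1], i, 0});
            }
        }
        List<int[]> merged = new ArrayList<>();
        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            merged.add(new int[] {top[0], top[1]});
            // Advance the list the popped entry came from.
            int list = top[2], next = top[3] + 1;
            if (next < perTermOffsets.get(list).size()) {
                int[] e = perTermOffsets.get(list).get(next);
                pq.add(new int[] {e[0], e[1], list, next});
            }
        }
        return merged;
    }
}
```

The queue is only needed because the per-term streams must stay interleaved in offset order; if unordered offsets were acceptable (the option debated in the comment above), simple concatenation would do.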
[jira] [Updated] (SOLR-9684) Add scheduler Streaming Expression
[ https://issues.apache.org/jira/browse/SOLR-9684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-9684: - Attachment: SOLR-9684.patch > Add scheduler Streaming Expression > -- > > Key: SOLR-9684 > URL: https://issues.apache.org/jira/browse/SOLR-9684 > Project: Solr > Issue Type: New Feature > Security Level: Public (Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Attachments: SOLR-9684.patch > > > SOLR-9559 adds a general purpose *parallel task executor* for streaming > expressions. The executor() function executes a stream of tasks and doesn't > have any concept of task priority. > The scheduler() function wraps a list of streams and *prioritizes* the > iteration of the streams. This allows there to be different task queues with > different priorities. > The executor() function can then wrap the scheduler function to see tasks in > priority order. > Pseudo syntax: > {code} > daemon(executor(scheduler(topic(), topic(), topic()))) > {code}
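The prioritization described above can be sketched minimally as follows. This is not Solr's actual scheduler() implementation, just an illustration of the contract: a read always drains the highest-priority non-empty queue before touching lower ones.

```java
import java.util.List;
import java.util.Queue;

// Illustrative sketch of priority iteration over task streams:
// index 0 is the highest-priority queue; lower queues are only
// consulted when every higher-priority queue is empty.
public class PriorityScheduler {
    private final List<Queue<String>> queues;

    public PriorityScheduler(List<Queue<String>> queues) {
        this.queues = queues;
    }

    // Returns the next task in priority order, or null when all are empty.
    public String read() {
        for (Queue<String> q : queues) {
            String task = q.poll();
            if (task != null) {
                return task;
            }
        }
        return null;
    }
}
```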
[jira] [Resolved] (SOLR-9681) add filter to any facet
[ https://issues.apache.org/jira/browse/SOLR-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yonik Seeley resolved SOLR-9681. Resolution: Fixed Fix Version/s: master (7.0), 6.4 > add filter to any facet > --- > > Key: SOLR-9681 > URL: https://issues.apache.org/jira/browse/SOLR-9681 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) > Components: Facet Module >Reporter: Yonik Seeley >Assignee: Yonik Seeley > Fix For: master (7.0), 6.4 > > Attachments: SOLR-9681.patch > > > For the JSON Facet API, we should be able to add a list of filters to any > facet. These would be applied after any domain changes, hence useful for > parent->child mapping that would otherwise match all children of any parent > (SOLR-9510) > The API should also be consistent with "filter" at the top level of the JSON > Request API (examples at http://yonik.com/solr-json-request-api/ )
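For illustration only, a per-facet filter in a JSON Facet request could look roughly like the fragment below. The field names and filter values are made up, and the exact placement of "filter" (shown here under the facet's domain, since the issue says filters apply after domain changes) should be checked against the committed patch.

```json
{
  "query": "*:*",
  "facet": {
    "top_categories": {
      "type": "terms",
      "field": "cat",
      "domain": {
        "filter": "inStock:true"
      }
    }
  }
}
```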
[jira] [Assigned] (SOLR-9681) add filter to any facet
[ https://issues.apache.org/jira/browse/SOLR-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yonik Seeley reassigned SOLR-9681: -- Assignee: Yonik Seeley > add filter to any facet > --- > > Key: SOLR-9681 > URL: https://issues.apache.org/jira/browse/SOLR-9681 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) > Components: Facet Module >Reporter: Yonik Seeley >Assignee: Yonik Seeley > Attachments: SOLR-9681.patch > > > For the JSON Facet API, we should be able to add a list of filters to any > facet. These would be applied after any domain changes, hence useful for > parent->child mapping that would otherwise match all children of any parent > (SOLR-9510) > The API should also be consistent with "filter" at the top level of the JSON > Request API (examples at http://yonik.com/solr-json-request-api/ )
[jira] [Commented] (SOLR-9681) add filter to any facet
[ https://issues.apache.org/jira/browse/SOLR-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15618623#comment-15618623 ] ASF subversion and git services commented on SOLR-9681: --- Commit 05ea64a665d390d4ebbb985d0505941ef15f6d85 in lucene-solr's branch refs/heads/branch_6x from [~yo...@apache.org] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=05ea64a ] SOLR-9681: add filters to any facet command > add filter to any facet > --- > > Key: SOLR-9681 > URL: https://issues.apache.org/jira/browse/SOLR-9681 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) > Components: Facet Module >Reporter: Yonik Seeley > Attachments: SOLR-9681.patch > > > For the JSON Facet API, we should be able to add a list of filters to any > facet. These would be applied after any domain changes, hence useful for > parent->child mapping that would otherwise match all children of any parent > (SOLR-9510) > The API should also be consistent with "filter" at the top level of the JSON > Request API (examples at http://yonik.com/solr-json-request-api/ )
[jira] [Created] (SOLR-9702) Authentication & Authorization based on Jetty security
Thomas Quinot created SOLR-9702: --- Summary: Authentication & Authorization based on Jetty security Key: SOLR-9702 URL: https://issues.apache.org/jira/browse/SOLR-9702 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: security Affects Versions: 6.2.1 Reporter: Thomas Quinot (following up on comments initially posted on SOLR-7275). Back in Solr 4 days, user authentication could be handled by Jetty, and some level of authorization could be implemented using request regexp rules. This was explicitly documented in the SolrSecurity page: http://wiki.apache.org/solr/SolrSecurity?action=recall&rev=35#Jetty_realm_example In particular, authentication could thus be performed against a variety of services implemented in Jetty, such as HashLoginService (mentioned explicitly in the above documentation, tested in production, does work) or possibly JAASLoginService, which in turn would open up the possibility to use a whole range of auth services (in particular LDAP servers). I see that the usage of Jetty is now "an implementation detail". Does this mean that the feature listed above is not supported anymore? (This is quite unfortunate IMO, as even just the HashLoginService would be useful to authenticate users against a database of UNIX crypt(3) passwords) The new login services that are apparently being reimplemented in Solr itself seem to be much less flexible and limited.
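For context, the Jetty-level setup being referenced looked roughly like the jetty.xml fragment below. The realm name and properties path are placeholders, not values from the archived wiki page; consult that page for the exact recipe.

```xml
<!-- Illustrative Jetty 9 fragment registering a HashLoginService.
     A realm.properties entry has the form: username: password, rolename -->
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <Call name="addBean">
    <Arg>
      <New class="org.eclipse.jetty.security.HashLoginService">
        <Set name="name">Solr Realm</Set>
        <Set name="config">/path/to/realm.properties</Set>
      </New>
    </Arg>
  </Call>
</Configure>
```

Authorization was then layered on with standard servlet security-constraint elements in web.xml mapping URL patterns to roles.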
[jira] [Commented] (SOLR-7275) Pluggable authorization module in Solr
[ https://issues.apache.org/jira/browse/SOLR-7275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15618615#comment-15618615 ] Thomas Quinot commented on SOLR-7275: - SOLR-9702 created. > Pluggable authorization module in Solr > -- > > Key: SOLR-7275 > URL: https://issues.apache.org/jira/browse/SOLR-7275 > Project: Solr > Issue Type: Sub-task >Reporter: Anshum Gupta >Assignee: Anshum Gupta > Fix For: 5.2 > > Attachments: SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, > SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, > SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, > SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, SOLR-7275.patch, > SOLR-7275.patch, SOLR-7275.patch > > > Solr needs an interface that makes it easy for different authorization > systems to be plugged into it. Here's what I plan on doing: > Define an interface {{SolrAuthorizationPlugin}} with one single method > {{isAuthorized}}. This would take in a {{SolrRequestContext}} object and > return an {{SolrAuthorizationResponse}} object. The object as of now would > only contain a single boolean value but in the future could contain more > information e.g. ACL for document filtering etc. > The reason why we need a context object is so that the plugin doesn't need to > understand Solr's capabilities e.g. how to extract the name of the collection > or other information from the incoming request as there are multiple ways to > specify the target collection for a request. Similarly request type can be > specified by {{qt}} or {{/handler_name}}. > Flow: > Request -> SolrDispatchFilter -> isAuthorized(context) -> Process/Return. > {code} > public interface SolrAuthorizationPlugin { > public SolrAuthorizationResponse isAuthorized(SolrRequestContext context); > } > {code} > {code} > public class SolrRequestContext { > UserInfo; // Will contain user context from the authentication layer. 
> HTTPRequest request; > Enum OperationType; // Correlated with user roles. > String[] CollectionsAccessed; > String[] FieldsAccessed; > String Resource; > } > {code} > {code} > public class SolrAuthorizationResponse { > boolean authorized; > public boolean isAuthorized(); > } > {code} > User Roles: > * Admin > * Collection Level: > * Query > * Update > * Admin > Using this framework, an implementation could be written for specific > security systems e.g. Apache Ranger or Sentry. It would keep all the security > system specific code out of Solr.
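A toy implementation of the proposed contract might look like the sketch below. The Solr types are stubbed out locally so the example is self-contained, and the allow-list logic is invented for illustration; the real interfaces live in Solr's security package.

```java
import java.util.Set;

// Toy authorization plugin per the flow described in the issue:
// Request -> isAuthorized(context) -> allow or deny.
public class AllowListPlugin {
    // Minimal stand-ins for the classes sketched in the issue description.
    public static class SolrRequestContext {
        public final String userName;
        public SolrRequestContext(String userName) { this.userName = userName; }
    }
    public static class SolrAuthorizationResponse {
        private final boolean authorized;
        public SolrAuthorizationResponse(boolean authorized) { this.authorized = authorized; }
        public boolean isAuthorized() { return authorized; }
    }

    private final Set<String> allowedUsers;

    public AllowListPlugin(Set<String> allowedUsers) {
        this.allowedUsers = allowedUsers;
    }

    // The single method the proposed interface requires: decide from the
    // context alone, with no knowledge of how Solr parsed the request.
    public SolrAuthorizationResponse isAuthorized(SolrRequestContext context) {
        return new SolrAuthorizationResponse(allowedUsers.contains(context.userName));
    }
}
```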
[jira] [Commented] (SOLR-9681) add filter to any facet
[ https://issues.apache.org/jira/browse/SOLR-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15618602#comment-15618602 ] ASF subversion and git services commented on SOLR-9681: --- Commit 650276e14bd85cdd12a77956f2403369ff3465ac in lucene-solr's branch refs/heads/master from [~yo...@apache.org] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=650276e ] SOLR-9681: add filters to any facet command > add filter to any facet > --- > > Key: SOLR-9681 > URL: https://issues.apache.org/jira/browse/SOLR-9681 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) > Components: Facet Module >Reporter: Yonik Seeley > Attachments: SOLR-9681.patch > > > For the JSON Facet API, we should be able to add a list of filters to any > facet. These would be applied after any domain changes, hence useful for > parent->child mapping that would otherwise match all children of any parent > (SOLR-9510) > The API should also be consistent with "filter" at the top level of the JSON > Request API (examples at http://yonik.com/solr-json-request-api/ )
Re: [JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+140) - Build # 18170 - Still Unstable!
I wonder if this could be SSL related, and failing whenever SSL is active? I have only run tests locally on my Mac, and SSL is never randomized on OSX :( Will try to beast the tests on a Linux VM... -- Jan Høydahl, search solution architect Cominvent AS - www.cominvent.com > 29. okt. 2016 kl. 18.42 skrev Policeman Jenkins Server: > > Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18170/ > Java: 64bit/jdk-9-ea+140 -XX:-UseCompressedOops -XX:+UseG1GC > > 1 tests failed. > FAILED: org.apache.solr.security.BasicAuthStandaloneTest.testBasicAuth
[jira] [Commented] (LUCENE-7526) Improvements to UnifiedHighlighter OffsetStrategies
[ https://issues.apache.org/jira/browse/LUCENE-7526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15618483#comment-15618483 ] ASF GitHub Bot commented on LUCENE-7526: Github user dsmiley commented on the issue: https://github.com/apache/lucene-solr/pull/105 I started playing with it a bit and realized the same thing. It looks straightforward but it's deceptively more complicated. Then it hit me -- let's not try to return the correct position at all! A "normal" PostingsEnum should, but this one is _only_ used for offsets. So always return -1 -- we can get away with it for this internal use. > Improvements to UnifiedHighlighter OffsetStrategies > --- > > Key: LUCENE-7526 > URL: https://issues.apache.org/jira/browse/LUCENE-7526 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/highlighter >Reporter: Timothy M. Rodriguez >Assignee: David Smiley >Priority: Minor > Fix For: 6.4 > > > This ticket improves several of the UnifiedHighlighter FieldOffsetStrategies > by reducing reliance on creating or re-creating TokenStreams. > The primary changes are as follows: > * AnalysisOffsetStrategy - split into two offset strategies > ** MemoryIndexOffsetStrategy - the primary analysis mode that utilizes a > MemoryIndex for producing Offsets > ** TokenStreamOffsetStrategy - an offset strategy that avoids creating a > MemoryIndex. Can only be used if the query distills down to terms and > automata. > * TokenStream removal > ** MemoryIndexOffsetStrategy - previously a TokenStream was created to fill > the memory index and then once consumed a new one was generated by > uninverting the MemoryIndex back into a TokenStream if there were automata > (wildcard/mtq queries) involved. Now this is avoided, which should save > memory and avoid a second pass over the data. > ** TermVectorOffsetStrategy - this was refactored in a similar way to avoid > generating a TokenStream if automata are involved. 
> ** PostingsWithTermVectorsOffsetStrategy - similar refactoring > * CompositePostingsEnum - aggregates several underlying PostingsEnums for > wildcard/mtq queries. This should improve relevancy by providing unified > metrics for a wildcard across all its term matches > * Added a HighlightFlag for enabling the newly separated > TokenStreamOffsetStrategy since it can adversely affect passage relevancy
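The "always return -1" idea from the comment above can be modeled with a tiny offsets-only enum. This is an illustrative stand-in, not Lucene's PostingsEnum: the point is that a consumer which only reads offsets never notices the sentinel position.

```java
// Sketch of an enum used exclusively for offsets: it reports a sentinel
// position (-1) because its internal callers never consume positions.
public class OffsetsOnlyEnum {
    private final int[][] offsets; // {start, end} pairs, in offset order
    private int idx = -1;

    public OffsetsOnlyEnum(int[][] offsets) {
        this.offsets = offsets;
    }

    public boolean next() {
        return ++idx < offsets.length;
    }

    public int startOffset() {
        return offsets[idx][0];
    }

    public int endOffset() {
        return offsets[idx][1];
    }

    // A general-purpose enum would return the real token position here;
    // for offsets-only internal use, -1 is an acceptable shortcut.
    public int nextPosition() {
        return -1;
    }
}
```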
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_102) - Build # 6212 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6212/ Java: 64bit/jdk1.8.0_102 -XX:-UseCompressedOops -XX:+UseSerialGC 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.lucene.index.TestNeverDelete Error Message: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestNeverDelete_C6E3B3B644A138AD-001\TestNeverDelete-001: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestNeverDelete_C6E3B3B644A138AD-001\TestNeverDelete-001 C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestNeverDelete_C6E3B3B644A138AD-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestNeverDelete_C6E3B3B644A138AD-001 Stack Trace: java.io.IOException: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestNeverDelete_C6E3B3B644A138AD-001\TestNeverDelete-001: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestNeverDelete_C6E3B3B644A138AD-001\TestNeverDelete-001 C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestNeverDelete_C6E3B3B644A138AD-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.index.TestNeverDelete_C6E3B3B644A138AD-001 at __randomizedtesting.SeedInfo.seed([C6E3B3B644A138AD]:0) at org.apache.lucene.util.IOUtils.rm(IOUtils.java:323) at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216) at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) FAILED: org.apache.solr.security.BasicAuthStandaloneTest.testBasicAuth Error Message: Invalid jsonError 401 HTTP ERROR: 401 Problem accessing /solr/admin/authentication. Reason: Bad credentials http://eclipse.org/jetty;>Powered by Jetty:// 9.3.8.v20160314 Stack Trace: java.lang.AssertionError: Invalid json Error 401 HTTP ERROR: 401 Problem accessing /solr/admin/authentication. 
Reason: Bad credentials http://eclipse.org/jetty;>Powered by Jetty:// 9.3.8.v20160314 at __randomizedtesting.SeedInfo.seed([DFB19A99FA00E3C:B1956FBB3BF38D46]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.security.BasicAuthIntegrationTest.verifySecurityStatus(BasicAuthIntegrationTest.java:256) at org.apache.solr.security.BasicAuthIntegrationTest.verifySecurityStatus(BasicAuthIntegrationTest.java:237) at org.apache.solr.security.BasicAuthStandaloneTest.testBasicAuth(BasicAuthStandaloneTest.java:102) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+140) - Build # 18170 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18170/ Java: 64bit/jdk-9-ea+140 -XX:-UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.security.BasicAuthStandaloneTest.testBasicAuth Error Message: Invalid jsonError 401 HTTP ERROR: 401 Problem accessing /solr/admin/authentication. Reason: Bad credentials http://eclipse.org/jetty;>Powered by Jetty:// 9.3.8.v20160314 Stack Trace: java.lang.AssertionError: Invalid json Error 401 HTTP ERROR: 401 Problem accessing /solr/admin/authentication. Reason: Bad credentials http://eclipse.org/jetty;>Powered by Jetty:// 9.3.8.v20160314 at __randomizedtesting.SeedInfo.seed([EC7C625E5CC88F3E:5012144CF89B0C44]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.security.BasicAuthIntegrationTest.verifySecurityStatus(BasicAuthIntegrationTest.java:256) at org.apache.solr.security.BasicAuthIntegrationTest.verifySecurityStatus(BasicAuthIntegrationTest.java:237) at org.apache.solr.security.BasicAuthStandaloneTest.testBasicAuth(BasicAuthStandaloneTest.java:102) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62) at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at
[jira] [Updated] (SOLR-9681) add filter to any facet
[ https://issues.apache.org/jira/browse/SOLR-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yonik Seeley updated SOLR-9681: --- Attachment: SOLR-9681.patch Here's the patch I plan on committing soon. > add filter to any facet > --- > > Key: SOLR-9681 > URL: https://issues.apache.org/jira/browse/SOLR-9681 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) > Components: Facet Module >Reporter: Yonik Seeley > Attachments: SOLR-9681.patch > > > For the JSON Facet API, we should be able to add a list of filters to any > facet. These would be applied after any domain changes, hence useful for > parent->child mapping that would otherwise match all children of any parent > (SOLR-9510) > The API should also be consistent with "filter" at the top level of the JSON > Request API (examples at http://yonik.com/solr-json-request-api/ )
[jira] [Updated] (SOLR-9701) NPE in export handler when "fl" parameter is omitted.
[ https://issues.apache.org/jira/browse/SOLR-9701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson updated SOLR-9701: - Attachment: SOLR-9701.patch Preliminary patch, still to run full test suite. > NPE in export handler when "fl" parameter is omitted. > - > > Key: SOLR-9701 > URL: https://issues.apache.org/jira/browse/SOLR-9701 > Project: Solr > Issue Type: Bug > Security Level: Public (Default Security Level. Issues are Public) >Affects Versions: trunk, 6.4 >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Minor > Attachments: SOLR-9701.patch > > > This started when a user reported that if you do not specify any parameters > for the export handler, you get an NPE. I tracked it down to not specifying > an "fl" parameter. > But in general I rearranged the error reporting in > SortingResponseWriter.write so that immediately upon detecting a problem, the > exception gets written to the output stream and the method returns immediately > rather than saving the error up for the end. Preliminary version of the patch > attached; it fixes the immediate problem. > Still to be seen is whether it breaks any tests, since the first error detected will be > returned to the user rather than the last. I'll fix any tests that are > sensitive to this and check in sometime this weekend.
[jira] [Created] (SOLR-9701) NPE in export handler when "fl" parameter is omitted.
Erick Erickson created SOLR-9701: Summary: NPE in export handler when "fl" parameter is omitted. Key: SOLR-9701 URL: https://issues.apache.org/jira/browse/SOLR-9701 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Affects Versions: trunk, 6.4 Reporter: Erick Erickson Assignee: Erick Erickson Priority: Minor This started when a user reported that if you do not specify any parameters for the export handler, you get an NPE. I tracked it down to not specifying an "fl" parameter. But in general I rearranged the error reporting in SortingResponseWriter.write so that immediately upon detecting a problem, the exception gets written to the output stream and then return immediately rather than save it up for the end. Preliminary version of the patch attached; it fixes the immediate problem. Still to see is if it breaks any tests since the first error detected will be returned to the user rather than the last. I'll fix any tests that are sensitive to this and check in sometime this weekend. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-9700) NPE in export handler when "fl" parameter is omitted.
Erick Erickson created SOLR-9700: Summary: NPE in export handler when "fl" parameter is omitted. Key: SOLR-9700 URL: https://issues.apache.org/jira/browse/SOLR-9700 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Affects Versions: 6x, trunk Reporter: Erick Erickson Assignee: Erick Erickson Priority: Minor Fix For: trunk, 6.4 This started when a user reported that if you do not specify any parameters for the export handler, you get an NPE. I tracked it down to not specifying an "fl" parameter. But in general I rearranged the error reporting in SortingResponseWriter.write so that immediately upon detecting a problem, the exception gets written to the output stream and then return immediately rather than save it up for the end. Preliminary version of the patch attached; it fixes the immediate problem. Still to see is if it breaks any tests since the first error detected will be returned to the user rather than the last. I'll fix any tests that are sensitive to this and check in sometime this weekend. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
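The rearranged error handling described here, writing the first detected exception straight to the output stream and returning immediately instead of saving it up for the end, can be sketched generically. The class and messages below are illustrative stand-ins, not the actual SortingResponseWriter code.

```java
import java.io.IOException;
import java.io.Writer;

// Fail-fast error reporting sketch: as soon as a problem is detected
// (here, a missing "fl" parameter), write the exception to the output
// stream and return at once, rather than carrying the error state to
// the end of the method. Illustrative only; not Solr's actual code.
public class FailFastWriter {
    public static void write(Writer out, String flParam) throws IOException {
        if (flParam == null || flParam.isEmpty()) {
            writeException(out, "export field list (fl) must be specified");
            return; // first error wins; no further processing is attempted
        }
        out.write("{\"docs\":[]}"); // normal (empty) response path
    }

    private static void writeException(Writer out, String msg) throws IOException {
        out.write("{\"EXCEPTION\":\"" + msg + "\"}");
    }
}
```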
[jira] [Commented] (LUCENE-7526) Improvements to UnifiedHighlighter OffsetStrategies
[ https://issues.apache.org/jira/browse/LUCENE-7526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15618320#comment-15618320 ] ASF GitHub Bot commented on LUCENE-7526: Github user Timothy055 commented on the issue: https://github.com/apache/lucene-solr/pull/105 I don't think there's a way to avoid keeping the position state, unfortunately. The reason is that we can move one of the postings enums to the next position, but then realize the next position for that term is behind the position for a different term (and postings enum) that also matches the wildcard. Then we'll update the top and switch to the next postings enum (by offset now), but once it's exhausted or we switch back to the previous one from interleaving the position is lost. :/ An alternative to avoid this would be to change PostingsEnum to allow fetching of the currentPosition, then nearly all the house keeping would go away. > Improvements to UnifiedHighlighter OffsetStrategies > --- > > Key: LUCENE-7526 > URL: https://issues.apache.org/jira/browse/LUCENE-7526 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/highlighter >Reporter: Timothy M. Rodriguez >Assignee: David Smiley >Priority: Minor > Fix For: 6.4 > > > This ticket improves several of the UnifiedHighlighter FieldOffsetStrategies > by reducing reliance on creating or re-creating TokenStreams. > The primary changes are as follows: > * AnalysisOffsetStrategy - split into two offset strategies > ** MemoryIndexOffsetStrategy - the primary analysis mode that utilizes a > MemoryIndex for producing Offsets > ** TokenStreamOffsetStrategy - an offset strategy that avoids creating a > MemoryIndex. Can only be used if the query distills down to terms and > automata. 
> * TokenStream removal > ** MemoryIndexOffsetStrategy - previously a TokenStream was created to fill > the memory index and then once consumed a new one was generated by > uninverting the MemoryIndex back into a TokenStream if there were automata > (wildcard/mtq queries) involved. Now this is avoided, which should save > memory and avoid a second pass over the data. > ** TermVectorOffsetStrategy - this was refactored in a similar way to avoid > generating a TokenStream if automata are involved. > ** PostingsWithTermVectorsOffsetStrategy - similar refactoring > * CompositePostingsEnum - aggregates several underlying PostingsEnums for > wildcard/mtq queries. This should improve relevancy by providing unified > metrics for a wildcard across all its term matches > * Added a HighlightFlag for enabling the newly separated > TokenStreamOffsetStrategy since it can adversely affect passage relevancy
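The bookkeeping discussed in the comment above, caching each postings enum's current position because the enum itself cannot report it back after the merger switches to another stream, can be modeled with a small priority-queue merger. The classes below are illustrative stand-ins for the idea, not Lucene APIs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Toy model of interleaving several forward-only position streams
// (analogous to merging the PostingsEnums of a wildcard's terms).
// Each stream only supports "advance to next position", so the merger
// must cache the last position it pulled from each stream; that cache
// is the per-enum state the comment says cannot be avoided.
public class PositionMerger {
    /** Forward-only stream of sorted positions. */
    static final class Stream {
        private final int[] positions;
        private int idx = 0;
        int cached = -1; // merger-side cache of the last position read
        Stream(int... positions) { this.positions = positions; }
        boolean advance() { // pull the next position into the cache, if any
            if (idx == positions.length) return false;
            cached = positions[idx++];
            return true;
        }
    }

    /** Merge all streams into one globally sorted sequence of positions. */
    public static int[] merge(Stream... streams) {
        PriorityQueue<Stream> pq =
            new PriorityQueue<>((a, b) -> Integer.compare(a.cached, b.cached));
        for (Stream s : streams) if (s.advance()) pq.add(s);
        List<Integer> out = new ArrayList<>();
        while (!pq.isEmpty()) {
            Stream top = pq.poll();
            out.add(top.cached);            // cached value survives stream switches
            if (top.advance()) pq.add(top); // re-enter with its new position
        }
        return out.stream().mapToInt(Integer::intValue).toArray();
    }
}
```

If the underlying enum could report its current position directly, the `cached` field (and most of this housekeeping) would disappear, which is the API change the comment floats.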
[jira] [Commented] (LUCENE-7525) ASCIIFoldingFilter.foldToASCII performance issue due to large compiled method size
[ https://issues.apache.org/jira/browse/LUCENE-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15618179#comment-15618179 ] Uwe Schindler commented on LUCENE-7525: --- bq. I thought the ASCII Filter was basically a legacy filter and it grew into MappingCharFilter I generally prefer ASCIIFoldingFilter because it allows you much more flexibility. The problem with MappingCharFilter is that you can only apply it *before* tokenization. And this is the major downside: if you have tokenizers that rely on language-specific behavior (Asian languages like Japanese, Chinese, ...), it is a bad idea to do such folding before tokenization. Also if you do stemming: stemming in French breaks if you remove accents first! So ASCIIFoldingFilter is in most analysis chains the very last filter! > ASCIIFoldingFilter.foldToASCII performance issue due to large compiled method > size > -- > > Key: LUCENE-7525 > URL: https://issues.apache.org/jira/browse/LUCENE-7525 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/analysis >Affects Versions: 6.2.1 >Reporter: Karl von Randow > Attachments: ASCIIFolding.java, ASCIIFoldingFilter.java, > TestASCIIFolding.java > > > The {{ASCIIFoldingFilter.foldToASCII}} method has an enormous switch > statement and is too large for the HotSpot compiler to compile; causing a > performance problem. > The method is about 13K compiled, versus the 8KB HotSpot limit. So splitting > the method in half works around the problem. > In my tests splitting the method in half resulted in a 5X performance > increase. > In the test code below you can see how slow the fold method is, even when it > is using the shortcut when the character is less than 0x80, compared to an > inline implementation of the same shortcut. > So a workaround is to split the method. I'm happy to provide a patch. It's a > hack, of course. 
Perhaps using the {{MappingCharFilterFactory}} with an input > file as per SOLR-2013 would be a better replacement for this method in this > class?
> {code:java}
> public class ASCIIFoldingFilterPerformanceTest {
>     private static final int ITERATIONS = 1_000_000;
>
>     @Test
>     public void testFoldShortString() {
>         char[] input = "testing".toCharArray();
>         char[] output = new char[input.length * 4];
>         for (int i = 0; i < ITERATIONS; i++) {
>             ASCIIFoldingFilter.foldToASCII(input, 0, output, 0, input.length);
>         }
>     }
>
>     @Test
>     public void testFoldShortAccentedString() {
>         char[] input = "éúéúøßüäéúéúøßüä".toCharArray();
>         char[] output = new char[input.length * 4];
>         for (int i = 0; i < ITERATIONS; i++) {
>             ASCIIFoldingFilter.foldToASCII(input, 0, output, 0, input.length);
>         }
>     }
>
>     @Test
>     public void testManualFoldTinyString() {
>         char[] input = "t".toCharArray();
>         char[] output = new char[input.length * 4];
>         for (int i = 0; i < ITERATIONS; i++) {
>             int k = 0;
>             for (int j = 0; j < 1; ++j) {
>                 final char c = input[j];
>                 if (c < '\u0080') {
>                     output[k++] = c;
>                 } else {
>                     Assert.assertTrue(false);
>                 }
>             }
>         }
>     }
> }
> {code}
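The workaround in the report relies on HotSpot declining to JIT-compile methods above its huge-method bytecode limit (roughly 8 KB, see -XX:-DontCompileHugeMethods), so an oversized switch stays interpreted; splitting it into sub-limit methods restores compilation. A minimal sketch of the shape of such a split follows. The two mappings shown are a tiny illustrative subset, and the real filter can emit multi-character replacements, which this single-char sketch does not model.

```java
// Sketch of the split-method workaround: one oversized switch becomes
// a cheap dispatcher plus two smaller switches, each small enough for
// HotSpot to JIT-compile. The real foldToASCII covers thousands of
// cases; two per range are shown here for illustration.
public class SplitFold {
    public static char fold(char c) {
        if (c < '\u0080') return c; // ASCII fast path, no folding needed
        return (c < '\u0100') ? foldLatin1Supplement(c) : foldBeyondLatin1(c);
    }

    private static char foldLatin1Supplement(char c) {
        switch (c) {
            case '\u00E9': return 'e'; // é
            case '\u00FC': return 'u'; // ü
            default: return c;
        }
    }

    private static char foldBeyondLatin1(char c) {
        switch (c) {
            case '\u0153': return 'o'; // œ (real filter emits "oe")
            default: return c;
        }
    }
}
```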
[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+140) - Build # 18169 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18169/ Java: 32bit/jdk-9-ea+140 -client -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.handler.TestReplicationHandler.doTestStressReplication Error Message: [snapshot_metadata, index.20161029191909112, replication.properties, index.properties, index.20161029191907184] expected:<1> but was:<2> Stack Trace: java.lang.AssertionError: [snapshot_metadata, index.20161029191909112, replication.properties, index.properties, index.20161029191907184] expected:<1> but was:<2> at __randomizedtesting.SeedInfo.seed([B06F9660B0CAAE5E:6BC496A6B5E2C7ED]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:907) at org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:874) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method) at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62) at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at
[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1141 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1141/ 10 tests failed. FAILED: org.apache.lucene.codecs.lucene54.TestLucene54DocValuesFormat.testSparseDocValuesVsStoredFields Error Message: dv iterator field=numeric: doc=71089 has unstable advanceExact Stack Trace: java.lang.RuntimeException: dv iterator field=numeric: doc=71089 has unstable advanceExact at __randomizedtesting.SeedInfo.seed([376025608E59A340:63BBF97374D5850F]:0) at org.apache.lucene.index.CheckIndex.checkDVIterator(CheckIndex.java:2118) at org.apache.lucene.index.CheckIndex.checkDocValues(CheckIndex.java:2291) at org.apache.lucene.index.CheckIndex.testDocValues(CheckIndex.java:2039) at org.apache.lucene.util.TestUtil.checkReader(TestUtil.java:340) at org.apache.lucene.util.TestUtil.checkReader(TestUtil.java:319) at org.apache.lucene.codecs.lucene54.TestLucene54DocValuesFormat.doTestSparseDocValuesVsStoredFields(TestLucene54DocValuesFormat.java:204) at org.apache.lucene.codecs.lucene54.TestLucene54DocValuesFormat.testSparseDocValuesVsStoredFields(TestLucene54DocValuesFormat.java:149) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) FAILED: org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testUpdateLogSynchronisation Error Message: Timeout
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 936 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/936/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.update.AutoCommitTest.testMaxDocs Error Message: Exception during query Stack Trace: java.lang.RuntimeException: Exception during query at __randomizedtesting.SeedInfo.seed([6D200348FF807EAD:D4A1D597D36A7A27]:0) at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:812) at org.apache.solr.update.AutoCommitTest.testMaxDocs(AutoCommitTest.java:225) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=//result[@numFound=1] xml response was: 00 request was:q=id:14=standard=0=20=2.2 at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:805) ... 40 more Build Log: [...truncated 10715 lines...] [junit4] Suite:
[jira] [Commented] (LUCENE-7525) ASCIIFoldingFilter.foldToASCII performance issue due to large compiled method size
[ https://issues.apache.org/jira/browse/LUCENE-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15618059#comment-15618059 ] Alexandre Rafalovitch commented on LUCENE-7525: --- I thought the ASCII Filter was basically a legacy filter and it grew into MappingCharFilterFactory. Now, we are talking about making ASCII Filter even more like Mapping Char Filter. Why would we need both then? I know one is Token Filter and another Char filter, but I am not sure it is sufficiently important for this discussion. > ASCIIFoldingFilter.foldToASCII performance issue due to large compiled method > size > -- > > Key: LUCENE-7525 > URL: https://issues.apache.org/jira/browse/LUCENE-7525 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/analysis >Affects Versions: 6.2.1 >Reporter: Karl von Randow > Attachments: ASCIIFolding.java, ASCIIFoldingFilter.java, > TestASCIIFolding.java > > > The {{ASCIIFoldingFilter.foldToASCII}} method has an enormous switch > statement and is too large for the HotSpot compiler to compile; causing a > performance problem. > The method is about 13K compiled, versus the 8KB HotSpot limit. So splitting > the method in half works around the problem. > In my tests splitting the method in half resulted in a 5X performance > increase. > In the test code below you can see how slow the fold method is, even when it > is using the shortcut when the character is less than 0x80, compared to an > inline implementation of the same shortcut. > So a workaround is to split the method. I'm happy to provide a patch. It's a > hack, of course. Perhaps using the {{MappingCharFilterFactory}} with an input > file as per SOLR-2013 would be a better replacement for this method in this > class? 
> {code:java} > public class ASCIIFoldingFilterPerformanceTest { > private static final int ITERATIONS = 1_000_000; > @Test > public void testFoldShortString() { > char[] input = "testing".toCharArray(); > char[] output = new char[input.length * 4]; > for (int i = 0; i < ITERATIONS; i++) { > ASCIIFoldingFilter.foldToASCII(input, 0, output, 0, > input.length); > } > } > @Test > public void testFoldShortAccentedString() { > char[] input = "éúéúøßüäéúéúøßüä".toCharArray(); > char[] output = new char[input.length * 4]; > for (int i = 0; i < ITERATIONS; i++) { > ASCIIFoldingFilter.foldToASCII(input, 0, output, 0, > input.length); > } > } > @Test > public void testManualFoldTinyString() { > char[] input = "t".toCharArray(); > char[] output = new char[input.length * 4]; > for (int i = 0; i < ITERATIONS; i++) { > int k = 0; > for (int j = 0; j < 1; ++j) { > final char c = input[j]; > if (c < '\u0080') { > output[k++] = c; > } else { > Assert.assertTrue(false); > } > } > } > } > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (LUCENE-7525) ASCIIFoldingFilter.foldToASCII performance issue due to large compiled method size
[ https://issues.apache.org/jira/browse/LUCENE-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15618031#comment-15618031 ] Uwe Schindler edited comment on LUCENE-7525 at 10/29/16 12:16 PM: -- I think we can for now replace the large switch statement with a resource file. I'd have 2 ideas: - A UTF-8 encoded file with 2 columns: first column is a single char, 2nd column is a series of replacements. I don't really like this approach as it is very sensitive to corruption by editors and hard to commit correctly - A simple file like {{int => int,int,int // comment}}, this is easy to parse and convert, but the downside is that it's harder to read the codepoints (for that we have a comment) The actual code that parses the file and converts to the "lookup table" could be replaced easily afterwards. I'd start with a binary lookup as suggested (similar to the switch statement's internal impl). was (Author: thetaphi): I think we can for now replace the large switch statement with a resource file. I'd have 2 ideas: - A UTF-8 encoded file with 2 columns: first column is a single char, 2nd column is a series of replacements. I don't really like this approach as it is very sensitive to corruption by editors and hard to commit correctly - A simple file like {{int => int,int,int // comment}}, this is easy to parse and convert, but the downside is that it's harder to read the codepoints (for that we have a comment) > ASCIIFoldingFilter.foldToASCII performance issue due to large compiled method > size > -- > > Key: LUCENE-7525 > URL: https://issues.apache.org/jira/browse/LUCENE-7525 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/analysis >Affects Versions: 6.2.1 >Reporter: Karl von Randow > Attachments: ASCIIFolding.java, ASCIIFoldingFilter.java, > TestASCIIFolding.java > > > The {{ASCIIFoldingFilter.foldToASCII}} method has an enormous switch > statement and is too large for the HotSpot compiler to compile; causing a > performance problem. 
> The method is about 13K compiled, versus the 8KB HotSpot limit. So splitting > the method in half works around the problem. > In my tests splitting the method in half resulted in a 5X performance > increase. > In the test code below you can see how slow the fold method is, even when it > is using the shortcut when the character is less than 0x80, compared to an > inline implementation of the same shortcut. > So a workaround is to split the method. I'm happy to provide a patch. It's a > hack, of course. Perhaps using the {{MappingCharFilterFactory}} with an input > file as per SOLR-2013 would be a better replacement for this method in this > class? > {code:java} > public class ASCIIFoldingFilterPerformanceTest { > private static final int ITERATIONS = 1_000_000; > @Test > public void testFoldShortString() { > char[] input = "testing".toCharArray(); > char[] output = new char[input.length * 4]; > for (int i = 0; i < ITERATIONS; i++) { > ASCIIFoldingFilter.foldToASCII(input, 0, output, 0, > input.length); > } > } > @Test > public void testFoldShortAccentedString() { > char[] input = "éúéúøßüäéúéúøßüä".toCharArray(); > char[] output = new char[input.length * 4]; > for (int i = 0; i < ITERATIONS; i++) { > ASCIIFoldingFilter.foldToASCII(input, 0, output, 0, > input.length); > } > } > @Test > public void testManualFoldTinyString() { > char[] input = "t".toCharArray(); > char[] output = new char[input.length * 4]; > for (int i = 0; i < ITERATIONS; i++) { > int k = 0; > for (int j = 0; j < 1; ++j) { > final char c = input[j]; > if (c < '\u0080') { > output[k++] = c; > } else { > Assert.assertTrue(false); > } > } > } > } > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (LUCENE-7525) ASCIIFoldingFilter.foldToASCII performance issue due to large compiled method size
[ https://issues.apache.org/jira/browse/LUCENE-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15618031#comment-15618031 ] Uwe Schindler edited comment on LUCENE-7525 at 10/29/16 12:14 PM: -- I think we can for now replace the large switch statement with a resource file. I'd have 2 ideas: - A UTF-8 encoded file with 2 columns: first column is a single char, 2nd column is a series of replacements. I don't really like this approach as it is very sensitive to corruption by editors and hard to commit correctly - A simple file like {{int => int,int,int // comment}}, this is easy to parse and convert, but the downside is that it's harder to read the codepoints (for that we have a comment) was (Author: thetaphi): I think we can for now replace the large switch statement with a resource file. I'd have 2 ideas: - A UTF-8 encoded file with 2 columns: first column is a single char, 2nd column is a series of replacements. I don't really like this approach as it is very sensitive and hard to commit - A simple file like {{int => int,int,int // comment}}, this is easy to parse and convert, but the downside is that it's harder to read the codepoints (for that we have a comment) > ASCIIFoldingFilter.foldToASCII performance issue due to large compiled method > size > -- > > Key: LUCENE-7525 > URL: https://issues.apache.org/jira/browse/LUCENE-7525 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/analysis >Affects Versions: 6.2.1 >Reporter: Karl von Randow > Attachments: ASCIIFolding.java, ASCIIFoldingFilter.java, > TestASCIIFolding.java > > > The {{ASCIIFoldingFilter.foldToASCII}} method has an enormous switch > statement and is too large for the HotSpot compiler to compile; causing a > performance problem. > The method is about 13K compiled, versus the 8KB HotSpot limit. So splitting > the method in half works around the problem. > In my tests splitting the method in half resulted in a 5X performance > increase. 
> In the test code below you can see how slow the fold method is, even when it > is using the shortcut when the character is less than 0x80, compared to an > inline implementation of the same shortcut. > So a workaround is to split the method. I'm happy to provide a patch. It's a > hack, of course. Perhaps using the {{MappingCharFilterFactory}} with an input > file as per SOLR-2013 would be a better replacement for this method in this > class? > {code:java} > public class ASCIIFoldingFilterPerformanceTest { > private static final int ITERATIONS = 1_000_000; > @Test > public void testFoldShortString() { > char[] input = "testing".toCharArray(); > char[] output = new char[input.length * 4]; > for (int i = 0; i < ITERATIONS; i++) { > ASCIIFoldingFilter.foldToASCII(input, 0, output, 0, > input.length); > } > } > @Test > public void testFoldShortAccentedString() { > char[] input = "éúéúøßüäéúéúøßüä".toCharArray(); > char[] output = new char[input.length * 4]; > for (int i = 0; i < ITERATIONS; i++) { > ASCIIFoldingFilter.foldToASCII(input, 0, output, 0, > input.length); > } > } > @Test > public void testManualFoldTinyString() { > char[] input = "t".toCharArray(); > char[] output = new char[input.length * 4]; > for (int i = 0; i < ITERATIONS; i++) { > int k = 0; > for (int j = 0; j < 1; ++j) { > final char c = input[j]; > if (c < '\u0080') { > output[k++] = c; > } else { > Assert.assertTrue(false); > } > } > } > } > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
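The second format Uwe proposes is straightforward to load once in a static initializer. Below is a minimal sketch of such a parser; the class name and the use of {{Integer.decode}} (so both decimal and {{0x...}} hex keys work) are my own assumptions, not anything from the issue:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.HashMap;
import java.util.Map;

public class FoldingTableParser {
    // Parses lines of the hypothetical "int => int,int,int // comment"
    // format into a codepoint -> replacement-codepoints map.
    public static Map<Integer, int[]> parse(BufferedReader reader) throws IOException {
        Map<Integer, int[]> table = new HashMap<>();
        String line;
        while ((line = reader.readLine()) != null) {
            int comment = line.indexOf("//");
            if (comment >= 0) {
                line = line.substring(0, comment); // strip trailing comment
            }
            line = line.trim();
            if (line.isEmpty()) {
                continue; // skip blank and comment-only lines
            }
            String[] sides = line.split("=>");
            int key = Integer.decode(sides[0].trim()); // accepts decimal and 0x... hex
            String[] parts = sides[1].trim().split(",");
            int[] replacements = new int[parts.length];
            for (int i = 0; i < parts.length; i++) {
                replacements[i] = Integer.decode(parts[i].trim());
            }
            table.put(key, replacements);
        }
        return table;
    }

    public static void main(String[] args) throws IOException {
        String data = "0x00C0 => 0x0041 // LATIN CAPITAL LETTER A WITH GRAVE -> A\n"
                    + "0x00C6 => 0x0041, 0x0045 // LATIN CAPITAL LETTER AE -> A, E\n";
        Map<Integer, int[]> table = parse(new BufferedReader(new StringReader(data)));
        System.out.println(table.get(0x00C6).length); // 2
    }
}
```

A format like this also keeps the human-readable codepoint names in comments, addressing the readability downside noted above.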
[jira] [Commented] (LUCENE-7525) ASCIIFoldingFilter.foldToASCII performance issue due to large compiled method size
[ https://issues.apache.org/jira/browse/LUCENE-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15618031#comment-15618031 ] Uwe Schindler commented on LUCENE-7525: --- I think we can, for now, replace the large switch statement with a resource file. I have 2 ideas: - A UTF-8 encoded file with 2 columns: the first column is a single char, the 2nd column is a series of replacements. I don't really like this approach, as it is very sensitive and hard to commit - A simple file like {{int => int,int,int // comment}}; this is easy to parse and convert, but the downside is that it's harder to read the codepoints (for that we have the comment)
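The split-method workaround the issue describes boils down to dispatching on a codepoint range so that each half of the switch stays under HotSpot's huge-method bytecode limit. A toy sketch of the shape of that fix (only a handful of mappings shown, and simplified to single-char results; the real {{foldToASCII}} emits replacement sequences):

```java
public final class SplitFoldSketch {
    // Dispatch so each switch compiles to well under the ~8KB bytecode
    // limit beyond which HotSpot refuses to JIT-compile a method.
    public static char fold(char c) {
        return (c < '\u0100') ? foldLatin1(c) : foldBeyondLatin1(c);
    }

    private static char foldLatin1(char c) {
        switch (c) {
            case '\u00C0': // À
            case '\u00C1': // Á
                return 'A';
            case '\u00E0': // à
            case '\u00E1': // á
                return 'a';
            default:
                return c; // ASCII and unmapped chars pass through
        }
    }

    private static char foldBeyondLatin1(char c) {
        switch (c) {
            case '\u0100': // Ā
                return 'A';
            case '\u0101': // ā
                return 'a';
            default:
                return c;
        }
    }

    public static void main(String[] args) {
        System.out.println(fold('\u00C0')); // A
        System.out.println(fold('t'));      // t
    }
}
```

The reported 5X speedup comes purely from making each method small enough to compile, not from any algorithmic change.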
[jira] [Commented] (LUCENE-7525) ASCIIFoldingFilter.foldToASCII performance issue due to large compiled method size
[ https://issues.apache.org/jira/browse/LUCENE-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15618025#comment-15618025 ] Uwe Schindler commented on LUCENE-7525: --- I am not sure; an FST might be like using a sledgehammer to crack a nut :-) We just need a lookup for single ints in a for-loop, replacing those ints with a sequence of other ints. I will check the ICU source code to see what they are doing.
[jira] [Commented] (LUCENE-7525) ASCIIFoldingFilter.foldToASCII performance issue due to large compiled method size
[ https://issues.apache.org/jira/browse/LUCENE-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15618018#comment-15618018 ] Michael McCandless commented on LUCENE-7525: Maybe an FST, like {{MappingCharFilter}}?
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 18168 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18168/ Java: 32bit/jdk1.8.0_102 -client -XX:+UseConcMarkSweepGC 3 tests failed. FAILED: org.apache.solr.core.TestDynamicLoading.testDynamicLoading Error Message: Could not get expected value 'X val changed' for path 'x' full output: { "responseHeader":{ "status":0, "QTime":0}, "params":{"wt":"json"}, "context":{ "webapp":"", "path":"/test1", "httpMethod":"GET"}, "class":"org.apache.solr.core.BlobStoreTestRequestHandler", "x":null}, from server: null Stack Trace: java.lang.AssertionError: Could not get expected value 'X val changed' for path 'x' full output: { "responseHeader":{ "status":0, "QTime":0}, "params":{"wt":"json"}, "context":{ "webapp":"", "path":"/test1", "httpMethod":"GET"}, "class":"org.apache.solr.core.BlobStoreTestRequestHandler", "x":null}, from server: null at __randomizedtesting.SeedInfo.seed([927D77D5D86553ED:4A305A822FB8F64D]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:535) at org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:249) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
[jira] [Commented] (LUCENE-7527) Facing unsafe memory access operation error while calling searcherManager.maybeReopen()
[ https://issues.apache.org/jira/browse/LUCENE-7527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15617848#comment-15617848 ] Michael McCandless commented on LUCENE-7527: This can happen if your application misuses the {{IndexReader}} lifecycle by closing an {{IndexReader}} while searches are still running. Triple-check all your code to make sure you always {{acquire}} a searcher from {{SearcherManager}} and then always release it, only once, via {{release}}, and that you never directly close a searcher (just close the {{SearcherManager}} once all searching is finished). Though it is odd you hit it inside {{cleanMapping}}. You could also try switching to {{NIOFSDirectory}} ... performance may be worse in some cases, but maybe it'll throw {{AlreadyClosedException}} instead of crashing your JVM. Also, 3.5 is really ancient at this point. It could be you are hitting an already-fixed bug. > Facing unsafe memory access operation error while calling > searcherManager.maybeReopen() > --- > > Key: LUCENE-7527 > URL: https://issues.apache.org/jira/browse/LUCENE-7527 > Project: Lucene - Core > Issue Type: Bug > Components: core/search >Affects Versions: 3.5 >Reporter: Jagmohan Singh > > We are getting the error below while calling the searcherManager.maybeReopen() > method. We are using the MMap implementation to read an NFS index directory mounted > against 3 servers. We have a different process to update the indices and 3 > other processes to read from the same index. What we believe is that this > issue occurs when we call maybeReopen() during heavy writes to the > indices and the MMap implementation is not able to cope with it. 
> Caused by: java.lang.InternalError: a fault occurred in a recent unsafe > memory access operation in compiled Java code > at java.security.AccessController.doPrivileged(Native Method) > at > org.apache.lucene.store.MMapDirectory.cleanMapping(MMapDirectory.java:158) > at > org.apache.lucene.store.MMapDirectory$MMapIndexInput.close(MMapDirectory.java:389) > at > org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:690) > at > org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:593) > at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:359) > at > org.apache.lucene.index.SegmentInfos.readCurrentVersion(SegmentInfos.java:480) > at > org.apache.lucene.index.DirectoryReader.isCurrent(DirectoryReader.java:901) > at > org.apache.lucene.index.DirectoryReader.doOpenNoWriter(DirectoryReader.java:471) > at > org.apache.lucene.index.DirectoryReader.doOpenIfChanged(DirectoryReader.java:450) > at > org.apache.lucene.index.DirectoryReader.doOpenIfChanged(DirectoryReader.java:391) > at > org.apache.lucene.index.IndexReader.openIfChanged(IndexReader.java:497) > at > org.apache.lucene.search.SearcherManager.maybeReopen(SearcherManager.java:162) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
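The acquire/release discipline described above follows this shape. The sketch below uses tiny stand-in classes so it compiles without Lucene on the classpath; the real types are {{SearcherManager}} and {{IndexSearcher}}, and only the try/finally pattern is the point:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AcquireReleaseSketch {
    // Minimal stand-ins mimicking SearcherManager's reference-counting
    // contract; they exist only so the pattern below compiles standalone.
    static class Searcher {
        final AtomicInteger refCount = new AtomicInteger(1);
    }

    static class Manager {
        final Searcher current = new Searcher();
        Searcher acquire() {
            current.refCount.incrementAndGet();
            return current;
        }
        void release(Searcher s) {
            s.refCount.decrementAndGet();
        }
    }

    // Returns the searcher's refCount after a correctly balanced search.
    public static int search(Manager manager) {
        Searcher s = manager.acquire();
        try {
            // run queries against s here; never close it directly
        } finally {
            manager.release(s); // release exactly once, always in finally
            s = null;           // guard against use after release
        }
        return manager.current.refCount.get();
    }

    public static void main(String[] args) {
        System.out.println(search(new Manager())); // 1: the count is balanced
    }
}
```

As long as every {{acquire}} is balanced by exactly one {{release}} in a finally block, the underlying reader cannot be closed out from under a running search.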
Re: Future of FieldCache in Solr
Well said Mark, that is exactly the design of the Apache model, and I agree in general it's healthy: it means only conservative-ish changes happen in a project. Mike McCandless http://blog.mikemccandless.com On Thu, Oct 27, 2016 at 10:02 PM, Mark Miller wrote: > Apache is not designed to handle accusations of a history of behavior of > poor opinions when driving code forward in any meaningful way. > > Instead we have technical discussions per issue and the power of the veto. > The threat of that is meant to push us to just work together rather than attacking one > another. > > Some people may want to plow forward in any given area at any given time. > And it's great when progress happens. But we have given dozens of people the > power of veto, and that's pretty much the rules. If it acts as a brake > sometimes, IMO, that is exactly the design. A lot of people here like to > think they know what should happen despite opposing views. I think our > system is designed with the understanding that the truth is often in the middle. > > Discussion and veto power are not attached to activity either. If someone > wants to participate on a JIRA issue, they are in the club, regardless of > how they choose to develop. > > It's like a political system. Choose deadlock or consensus, and stop > worrying about opposing conspiracy theories. True or not means little in how > things are decided. > > I can nitpick on a lot of the choices and motivations of a lot of people > here. But it would be useless for forward progress (detrimental even) and > perpetuate what has been a huge culture decline in these projects. > > - Mark > > On Thu, Oct 27, 2016 at 6:22 PM Yonik Seeley wrote: >> >> (splitting this off) >> >> > Your threat to veto the original addition of Uwe's NumericFields to >> > Lucene's core stands out in my (long) memory as another. >> >> ??? I seriously question that long memory. Or perhaps just the color >> of the glasses you're viewing the world through. 
>> >> I feel like I helped develop NumericField (although Uwe was the primary >> author)! IIRC, I wrote the first draft of the code that enabled >> variable precision steps. >> >> >> https://issues.apache.org/jira/browse/LUCENE-1470?focusedCommentId=12671495=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12671495 >> >> http://markmail.org/message/vcwwxwciwf7ztrfg >> >> And this is the JIRA issue to actually move it to core... all I >> remember is an honest technical opinion about if it should be baked >> into the index format (and certainly no vetoes or even opinions >> against it being in "core"): >> https://issues.apache.org/jira/browse/LUCENE-1673 >> >> >> Luckily, I'm in good company... I'm not the only person to be accused >> of nefariously obstructing Lucene and only participating in Lucene >> issues to slow it down or make it harder to use. >> If one looks hard enough for something, they will start seeing it. >> >> -Yonik >> >> - >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >> For additional commands, e-mail: dev-h...@lucene.apache.org >> > -- > - Mark > about.me/markrmiller - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Future of FieldCache in Solr
On Thu, Oct 27, 2016 at 6:21 PM, Yonik Seeley wrote: > I feel like I helped develop NumericField (although Uwe was the primary > author)! Sorry, you are correct: thank you for that! I had indeed forgotten that you helped improve on Uwe's numeric fields, originally. > all I remember is an honest technical opinion about if it should be baked > into the index format Yes, that is exactly what I am referring to. Your comment stated that we either commit numerics in a buggy state (so users don't get back a NumericField when they load their document at search time), or we don't even add a NumericField at all (an even worse API for direct Lucene users). Both options made Lucene's numerics harder to use. So of course we compromised, and Uwe's numeric fields did go into core, in the buggy state. Fortunately we finally managed to fix that bug, but IIRC that took several years. Mike McCandless http://blog.mikemccandless.com - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_102) - Build # 547 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/547/ Java: 32bit/jdk1.8.0_102 -client -XX:+UseConcMarkSweepGC 2 tests failed. FAILED: org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest Error Message: Timeout waiting for all live and active Stack Trace: java.lang.AssertionError: Timeout waiting for all live and active at __randomizedtesting.SeedInfo.seed([52FDB8637ACB:D3765D0F6E1A746A]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest(TestCloudRecovery.java:147) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at java.lang.Thread.run(Thread.java:745) FAILED: org.apache.solr.cloud.TestCloudRecovery.leaderRecoverFromLogOnStartupTest Error Message: Error from server at http://127.0.0.1:50584/solr/collection1: No registered leader was found after waiting for 4000ms , collection: collection1 slice: shard1 Stack Trace:
[jira] [Commented] (LUCENE-7525) ASCIIFoldingFilter.foldToASCII performance issue due to large compiled method size
[ https://issues.apache.org/jira/browse/LUCENE-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15617679#comment-15617679 ] Uwe Schindler commented on LUCENE-7525: --- I'd suggest using the simple binary search approach, but without generated code: convert the large switch statement once to a simple text file and load it as a resource in a static initializer. This would also allow us to further extend the folding filter so people can use their own mappings by pointing it at an input stream!
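The binary-search lookup Uwe describes needs only a sorted key array plus a parallel replacement table, built once from that resource file. A sketch under those assumptions (the class name and the tiny subset of mappings are illustrative only):

```java
import java.util.Arrays;

public class BinarySearchFoldSketch {
    // Sorted codepoints that have a folding (illustrative subset only;
    // in the real filter these arrays would be loaded from the resource file).
    private static final int[] KEYS = { 0x00C0, 0x00C6, 0x00E0 };
    // Replacement sequences, parallel to KEYS.
    private static final int[][] REPLACEMENTS = { {'A'}, {'A', 'E'}, {'a'} };

    // Returns the folded sequence, or null when the codepoint maps to itself.
    public static int[] fold(int codepoint) {
        int idx = Arrays.binarySearch(KEYS, codepoint);
        return idx >= 0 ? REPLACEMENTS[idx] : null;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(fold(0x00C6))); // [65, 69]
        System.out.println(fold('t')); // null: ASCII passes through unchanged
    }
}
```

This keeps the lookup data out of bytecode entirely, so no method ever approaches the HotSpot compile-size limit.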
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_102) - Build # 18167 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18167/
Java: 32bit/jdk1.8.0_102 -client -XX:+UseG1GC

1 tests failed.
FAILED: org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
	at __randomizedtesting.SeedInfo.seed([9CD60D84EF3ED3FA:6BA5E3DC29D67C1C]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.failNotEquals(Assert.java:647)
	at org.junit.Assert.assertEquals(Assert.java:128)
	at org.junit.Assert.assertEquals(Assert.java:472)
	at org.junit.Assert.assertEquals(Assert.java:456)
	at org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1329)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at java.lang.Thread.run(Thread.java:745)

Build Log:
[...truncated 11086 lines...]
   [junit4] Suite:
[jira] [Comment Edited] (SOLR-5944) Support updates of numeric DocValues
[ https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15617617#comment-15617617 ] Ishan Chattopadhyaya edited comment on SOLR-5944 at 10/29/16 7:29 AM:
--
Just discovered another, more common problem with reordered DBQs and in-place updates working together. The earlier discussed problem, of resurrecting a document, is very similar, so here's a description of both:

SCENARIO 1:
{code}
Updates on the leader:
ADD     (id=1, updateable_field=1, title="mydoc1", version=100)
INP-UPD (id=1, updateable_field=2, version=200, prevVersion=100)
DBQ     (q="updateable_field:1", version=300)

The same updates on the replica (forwarded):
ADD     (id=1, updateable_field=1, title="mydoc1", version=100)
DBQ     (q="updateable_field:1", version=300)
INP-UPD (id=1, updateable_field=2, version=200, prevVersion=100)

The expected net effect is that no document is deleted, and the id=1 document
exists with updateable_field=2. Here, the DBQ was reordered. When the updates
are executed on the replica, the version=200 update cannot be applied, since
there is no document with (id=1, prevVersion=100). What is required is a
resurrection of the document that was deleted by the DBQ, so that other
stored/indexed fields are not lost.
{code}
SCENARIO 2:
{code}
Updates on the leader:
ADD     (id=1, updateable_field=1, title="mydoc1", version=100)
INP-UPD (id=1, updateable_field=2, version=200, prevVersion=100)
DBQ     (q="id:1", version=300)

The same updates on the replica (forwarded):
ADD     (id=1, updateable_field=1, title="mydoc1", version=100)
DBQ     (q="id:1", version=300)
INP-UPD (id=1, updateable_field=2, version=200, prevVersion=100)

The expected net effect is that the document with id=1 is deleted. But again,
the DBQ is reordered. When executed on the replica, the version=200 update
cannot be applied, since the id=1 document has already been deleted. What is
required is for this update (version=200) to be dropped silently.
{code}
Scenario 1 is rare; scenario 2 would be more common.
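The replica-side failure in both scenarios can be reproduced with a toy in-memory index. This is a hypothetical sketch, not Solr code: the names ({{ReorderDemo}}, {{inplaceUpdate}}, {{dbqValueEquals}}) and the two-element {{long[]}} document model are made up for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the reorder problem: id -> {updateable_field value, version}.
class ReorderDemo {
    static Map<Integer, long[]> index = new HashMap<>();

    static void add(int id, long val, long version) {
        index.put(id, new long[]{val, version});
    }

    // An in-place update applies only if the base doc exists at prevVersion.
    static boolean inplaceUpdate(int id, long val, long version, long prevVersion) {
        long[] doc = index.get(id);
        if (doc == null || doc[1] != prevVersion) {
            return false; // cannot apply: the base document is missing
        }
        index.put(id, new long[]{val, version});
        return true;
    }

    // Delete-by-query on the updateable field's value.
    static void dbqValueEquals(long val) {
        index.values().removeIf(doc -> doc[0] == val);
    }

    public static void main(String[] args) {
        // Leader order: ADD, INP-UPD, DBQ(field:1) -> doc survives with value 2.
        index.clear();
        add(1, 1, 100);
        boolean leaderApplied = inplaceUpdate(1, 2, 200, 100); // true
        dbqValueEquals(1); // doc now has value 2, so it is not deleted
        System.out.println("leader:  applied=" + leaderApplied
                + " docExists=" + index.containsKey(1));

        // Replica order: the DBQ arrives before the in-place update.
        index.clear();
        add(1, 1, 100);
        dbqValueEquals(1); // deletes the doc while it still has value 1
        boolean replicaApplied = inplaceUpdate(1, 2, 200, 100); // false
        System.out.println("replica: applied=" + replicaApplied
                + " docExists=" + index.containsKey(1));
    }
}
```

On the leader the update applies and the document survives the DBQ; on the replica the reordered DBQ deletes the base document first, so the version=200 update finds no (id=1, prevVersion=100) document, which is exactly the stuck state described above.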
At the point when the in-place update (version=200 in both cases) is applied, the replica has no way to know whether the update requires a resurrection of the document or needs to be dropped. Until now I hadn't considered scenario 2, and for the rare scenario 1 I resorted to throwing an error so as to put the replica into LIR (leader-initiated recovery). Clearly, in view of scenario 2, this looks like a bad idea. Here are two potential solutions that come to mind:

Solution 1:
{code}
In a replica, while applying an in-place update, if the required prevVersion
update cannot be found in the tlog or index (due to these reordered DBQs),
fetch from the leader an update that contains the full document with the
version for which the update failed at the replica. If the document has been
deleted on the leader, just drop the update on the replica silently. The
downside to this approach is that unstored/non-dv fields will get dropped
(as is the case with regular atomic updates today).
{code}
Solution 2:
{code}
Ensure that DBQs are never reordered from leader -> replica. One approach
could be SOLR-8148. Another could be to block, on the leader, all updates
newer than a DBQ until the DBQ has been processed on the leader and all the
replicas, and only then process the other updates. Also, block the DBQ and
execute it only after all updates older than the DBQ have been processed on
the leader and all the replicas.
{code}
Solution 1 seems easier to implement now than solution 2, but solution 2 (if implemented correctly) seems cleaner. Any thoughts?

Edit: There's a third solution in the interim:
{code}
Have a field definition flag, inplace-updateable=true, or a similar
schema-level property, to enable or disable this feature (of updating
docValues). The feature could be off by default (a default that can be
revisited in a later major release), but someone can turn it on if they
agree to (a) ensure they don't issue DBQs on updated documents or, even if
they do, (b) make sure their DBQs are not reordered.
{code}
Not an ideal solution, but this could be in the spirit of "progress, not perfection".
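A minimal sketch of what Solution 1's fallback could look like on the replica. Everything here is hypothetical: {{LeaderClient}}, {{fetchFullDoc}}, and the return conventions are assumptions for illustration, not existing Solr APIs.

```java
import java.util.Map;

// Hypothetical leader API: returns the full document for (id, version),
// or null if the document has also been deleted on the leader.
interface LeaderClient {
    Map<String, Object> fetchFullDoc(int id, long version);
}

class Solution1Sketch {
    // Called when an in-place update's prevVersion base cannot be found
    // locally (e.g. a reordered DBQ deleted it). Returns what the replica did.
    static String onFailedInplaceUpdate(int id, long version, LeaderClient leader,
                                        Map<Integer, Map<String, Object>> index) {
        Map<String, Object> full = leader.fetchFullDoc(id, version);
        if (full == null) {
            // Scenario 2: the leader deleted it too; drop the update silently.
            return "DROPPED";
        }
        // Scenario 1: resurrect using the leader's full document. Note that
        // unstored/non-dv fields are still lost, as the comment points out.
        index.put(id, full);
        return "RESURRECTED";
    }
}
```

The single decision point (null vs. non-null from the leader) is what distinguishes the two scenarios, which is precisely the information the replica cannot reconstruct on its own from the tlog.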
> Support updates of numeric DocValues
>
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
> Issue Type: New Feature
> Reporter: Ishan Chattopadhyaya
> Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch,
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch,
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch,
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch,
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch,
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch,
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch,
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch,
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch,
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch,
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch,
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch,
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch,
> SOLR-5944.patch, SOLR-5944.patch,
> TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt,
> TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt,
> TestStressInPlaceUpdates.eb044ac71.failures.tar.gz, defensive-checks.log.gz,
> hoss.62D328FA1DEA57FD.fail.txt,
[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+140) - Build # 2064 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2064/
Java: 32bit/jdk-9-ea+140 -server -XX:+UseG1GC

1 tests failed.
FAILED: org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
PeerSync failed. Had to fail back to replication expected:<0> but was:<18>

Stack Trace:
java.lang.AssertionError: PeerSync failed. Had to fail back to replication expected:<0> but was:<18>
	at __randomizedtesting.SeedInfo.seed([E1BCCD231971E84A:69E8F2F9B78D85B2]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.failNotEquals(Assert.java:647)
	at org.junit.Assert.assertEquals(Assert.java:128)
	at org.junit.Assert.assertEquals(Assert.java:472)
	at org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:290)
	at org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:244)
	at org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:130)
	at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native Method)
	at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
	at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:535)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at