[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-13-ea+26) - Build # 24309 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24309/ Java: 64bit/jdk-13-ea+26 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 4 tests failed. FAILED: org.apache.lucene.util.TestRamUsageEstimator.testMap Error Message: expected:<25152.0> but was:<30184.0> Stack Trace: java.lang.AssertionError: expected:<25152.0> but was:<30184.0> at __randomizedtesting.SeedInfo.seed([88B2C2C29C4228D9:A894761186B93BAB]:0) at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:553) at org.junit.Assert.assertEquals(Assert.java:683) at org.apache.lucene.util.TestRamUsageEstimator.testMap(TestRamUsageEstimator.java:136) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:567) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:830)
[jira] [Commented] (LUCENE-8891) Snowball stemmer/analyzer for the Estonian language
[ https://issues.apache.org/jira/browse/LUCENE-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875390#comment-16875390 ] Tomoko Uchida commented on LUCENE-8891: --- Hi [~gpaimla], I will commit the patch to the ASF repo as-is in 24 hours. Please add the change log to CHANGES.txt by then, if you'd like to write it on your own (or else I will add a short log message for this). > Snowball stemmer/analyzer for the Estonian language > --- > > Key: LUCENE-8891 > URL: https://issues.apache.org/jira/browse/LUCENE-8891 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/analysis >Reporter: Gert Morten Paimla >Assignee: Tomoko Uchida >Priority: Minor > Labels: newbie, ready-to-commit > Attachments: LUCENE-8891.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Currently there is no Estonian specific stemmer for SnowballFilter. > I would like to add a Snowball stemmer for the Estonian language and also add > a new Language analyzer for the Estonian language based on the snowball > stemmer. > [https://github.com/gpaimla/lucene-solr] fork of master branch with the > analyzer implemented -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-5797) Explain plan transform does not work in Solr cloud
[ https://issues.apache.org/jira/browse/SOLR-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875389#comment-16875389 ] Munendra S N commented on SOLR-5797: [~dmehta] I'm not able to reproduce this on master. Could you please verify whether this is still an issue, and attach a test case if possible? > Explain plan transform does not work in Solr cloud > -- > > Key: SOLR-5797 > URL: https://issues.apache.org/jira/browse/SOLR-5797 > Project: Solr > Issue Type: Bug >Affects Versions: 4.4 >Reporter: Divya Mehta >Priority: Major > Labels: explainPlan, solrcloud > > The explain plan works as expected on a single Solr node. After moving to Solr > Cloud, it does not show any explanation field in the returned documents. > This is how we ask for explain output in our SolrQuery: > SolrQuery sq = new SolrQuery(); > > if (args.getExplain()) { > sq.setParam(CommonParams.DEBUG_QUERY, true); > sq.addField("explanation:[explain style=text]"); > } > I checked the logs on both the single node and the cloud; the request and its > parameters are exactly the same. > Is this a known issue, or does it need some other configuration to make it > work on Solr Cloud? We have one main node and one shard, and use a standalone > zookeeper to manage Solr Cloud.
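The quoted SolrJ snippet can be reproduced without a running cluster; below is a minimal sketch (the class and `explainQuery` helper are mine, not from the report) that builds the equivalent HTTP query string those SolrJ calls would send, using the standard `debugQuery` parameter and the `[explain]` document transformer in `fl`:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ExplainQueryBuilder {
    // Hypothetical helper: builds the query string that a SolrJ request like
    // the one quoted above would produce. debugQuery=true enables debug
    // output; the fl pseudo-field "explanation:[explain style=text]" asks the
    // [explain] transformer to attach a per-document score explanation.
    static String explainQuery(String q, boolean explain) {
        StringBuilder sb = new StringBuilder("q=")
                .append(URLEncoder.encode(q, StandardCharsets.UTF_8));
        if (explain) {
            sb.append("&debugQuery=true");
            sb.append("&fl=").append(URLEncoder.encode(
                    "*,explanation:[explain style=text]", StandardCharsets.UTF_8));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // e.g. /solr/collection1/select?q=*%3A*&debugQuery=true&fl=...
        System.out.println("/solr/collection1/select?" + explainQuery("*:*", true));
    }
}
```

Comparing this string against what actually arrives at each node (as the reporter did via the logs) is a quick way to rule out the request side before looking at the distributed response merging.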
[JENKINS-EA] Lucene-Solr-8.x-Linux (64bit/jdk-13-ea+26) - Build # 783 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/783/ Java: 64bit/jdk-13-ea+26 -XX:-UseCompressedOops -XX:+UseParallelGC 4 tests failed. FAILED: org.apache.lucene.util.TestRamUsageEstimator.testMap Error Message: expected:<25152.0> but was:<30184.0> Stack Trace: java.lang.AssertionError: expected:<25152.0> but was:<30184.0> at __randomizedtesting.SeedInfo.seed([DB89AE21C7281635:FBAF1AF2DDD30547]:0) at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:553) at org.junit.Assert.assertEquals(Assert.java:683) at org.apache.lucene.util.TestRamUsageEstimator.testMap(TestRamUsageEstimator.java:136) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:567) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:830)
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-13-ea+shipilev-fastdebug) - Build # 24308 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24308/ Java: 64bit/jdk-13-ea+shipilev-fastdebug -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 4 tests failed. FAILED: org.apache.lucene.util.TestRamUsageEstimator.testMap Error Message: expected:<25152.0> but was:<30184.0> Stack Trace: java.lang.AssertionError: expected:<25152.0> but was:<30184.0> at __randomizedtesting.SeedInfo.seed([C06877127E23DC94:E04EC3C164D8CFE6]:0) at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:553) at org.junit.Assert.assertEquals(Assert.java:683) at org.apache.lucene.util.TestRamUsageEstimator.testMap(TestRamUsageEstimator.java:136) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:567) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:830)
[jira] [Commented] (SOLR-13375) Dimensional Routed Aliases
[ https://issues.apache.org/jira/browse/SOLR-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875345#comment-16875345 ] Gus Heck commented on SOLR-13375: - Another WIP patch, now solving the v2 api issue via an implementation of toMap() on the SolrParams anon wrapper. Both work, and all tests pass (but the DRA they create still isn't functional; that's next). > Dimensional Routed Aliases > -- > > Key: SOLR-13375 > URL: https://issues.apache.org/jira/browse/SOLR-13375 > Project: Solr > Issue Type: New Feature > Components: SolrCloud >Affects Versions: master (9.0) >Reporter: Gus Heck >Assignee: Gus Heck >Priority: Major > Attachments: SOLR-13375.patch, SOLR-13375.patch > > > Current available routed aliases are restricted to a single field. This > feature will allow Solr to provide data driven collection access, creation > and management based on multiple fields in a document. The collections will > be queried and updated in a unified manner via an alias. Current routing is > restricted to the values of a single field. The particularly useful > combination at this time will be Category X Time routing, but Category X > Category may also be useful. More importantly, if additional routing schemes > are created in the future (either as contributions or as custom code by > users) combination among these should be supported. > It is expected that not all combinations will be useful, and that > determination of usefulness I expect to leave up to the user. Some routing > schemes may need to be limited to be the leaf/last routing scheme for > technical reasons, though I'm not entirely convinced of that yet. If so, a > flag will be added to the RoutedAlias interface. > Initial desire is to support two levels, though if arbitrary levels can be > supported easily that will be done. > This could also have been called CompositeRoutedAlias, but that creates a TLA > clash with CategoryRoutedAlias. 
[jira] [Updated] (SOLR-13375) Dimensional Routed Aliases
[ https://issues.apache.org/jira/browse/SOLR-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gus Heck updated SOLR-13375: Attachment: SOLR-13375.patch
Please look at Hoss' test rollups
All: I don’t know how many people actually look at the rollup report Hoss puts up, but I’d encourage everyone to make it a habit to check it at least once a week if not daily, especially just after you’ve made changes. There are two graphics that you can check in < 60 seconds: http://fucit.org/solr-jenkins-reports/failure-report.html http://fucit.org/solr-jenkins-reports/suspicious-failure-report.html Note the drop-down in the upper left of the first link. The view defaults to the last 7 days, but you can choose the last 24 hours from the drop-down. The second link is, IIUC, tests that have started failing recently. Erick - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13584) Explore prohibiting aliases and collections from having the same name.
[ https://issues.apache.org/jira/browse/SOLR-13584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875303#comment-16875303 ] Erick Erickson commented on SOLR-13584: --- Oops, that's not so good then ;( > Explore prohibiting aliases and collections from having the same name. > -- > > Key: SOLR-13584 > URL: https://issues.apache.org/jira/browse/SOLR-13584 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Erick Erickson >Priority: Major > > Allowing aliases and collections to have the same name is fragile and > potentially a data issue. I'll link in a few JIRAs illustrating this, and at > least one where the discussion gets long. > Straw-man proposal to start things off: > Deprecate this ability now, and enforce it in 9.0. > We have to provide a graceful way for users to get themselves out of the > following currently-possible use-case. > * a collection C1 is created and all the front-end uses it. > * users want to atomically switch to a new collection for various reasons > * users create C2 and test it out. > * users create an alias C1->C2 > Let's discuss.
SolrCloud - "[not a shard request]" is returned when search request is short circuited
Hello, If the collection has only one shard/replica, or when the _route_ param points to the hosted core, the [shard] field in the response is set to "[not a shard request]". When short-circuiting in the code below, "shard.url" is not populated in the request params. Please let me know if I should submit a JIRA. https://github.com/apache/lucene-solr/blob/301ea0e4624c2bd693fc034a801c4abb91cba299/solr/core/src/java/org/apache/solr/handler/component/HttpShardHandler.java#L405 http://localhost:8983/solr/collection1/select?q=*:*&fl=[shard] Thanks Gopi
[jira] [Updated] (SOLR-13580) java 13 changes to locale specific Numeric parsing rules affect ParseNumeric UpdateProcessors when using 'local' config option - notably affects French
[ https://issues.apache.org/jira/browse/SOLR-13580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hoss Man updated SOLR-13580: Description: Per [JDK-8221432|https://bugs.openjdk.java.net/browse/JDK-8221432] Java13 has updated to [CLDR 35.1|http://cldr.unicode.org/] – which controls the definition of language & locale specific formatting characters – in a non-backwards compatible way due to "French" changes in [CLDR 34|http://cldr.unicode.org/index/downloads/cldr-34#TOC-Detailed-Data-Changes] This impacts people who use any of the "ParseNumeric" UpdateProcessors in conjunction with the "locale=fr" or "locale=fr_FR" init param and expect the (pre java13) existing behavior of treating U+00A0 (NO BREAK SPACE) as a "grouping" character (ie: between thousands and millions, between millions and billions, etc...). Starting with java13 the JVM expects U+202F (NARROW NO BREAK SPACE) in its place. Notably: upgrading to jdk13-ea+26 caused failures in Solr's ParsingFieldUpdateProcessorsTest, which initially had hardcoded test data that used U+00A0. ParsingFieldUpdateProcessorsTest has since been updated to account for this discrepancy by modifying the test data used to determine the "expected" character for the current JVM, but there is nothing Solr or the ParseNumeric UpdateProcessors can do to help mitigate this change in behavior for end users who upgrade to java13. Affected users with U+00A0 characters in their incoming SolrInputDocuments will see the ParseNumeric UpdateProcessors (configured with locale=fr...) "skip" these values as unparsable, most likely resulting in a failure to index into a numeric field since the original "String" value will be left as is. 
Affected users may want to consider updating their configs to include a {{RegexReplaceProcessorFactory}} configured to strip out all whitespace characters, prior to any ParseNumeric update processors configured to expect French-language numbers.
update description with a possible workaround that just occurred to me > java 13 changes to locale specific Numeric parsing rules affect ParseNumeric > UpdateProcessors when using 'local' config option - notably affects French > > > Key: SOLR-13580 > URL: https://issues.apache.org/jira/browse/SOLR-13580 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Hoss Man >Assignee: Hoss Man >Priority: Major > Labels: Java13 > Attachments: SOLR-13580.patch > > > Per [JDK-8221432|https://bugs.openjdk.java.net/browse/JDK-8221432] Java13 has > updated to [CLDR 35.1|http://cldr.unicode.org/] – which controls the > definition of language & locale specific formatting characters – in a > non-backwards compatible way due to "French" changes in [CLDR > 34|http://cldr.unicode.org/index/downloads/cldr-34#TOC-Detailed-Data-Changes] > This impacts people who use any of the "ParseNumeric" UpdateProcessors in > conjunction with the "locale=fr" or "locale=fr_FR" init param and expect the > (pre java13) existing behavior of treating U+00A0 (NO BREAK SPACE) as a >
[jira] [Resolved] (SOLR-13580) java 13 changes to locale specific Numeric parsing rules affect ParseNumeric UpdateProcessors when using 'local' config option - notably affects French
[ https://issues.apache.org/jira/browse/SOLR-13580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hoss Man resolved SOLR-13580. - Resolution: Not A Bug > java 13 changes to locale specific Numeric parsing rules affect ParseNumeric > UpdateProcessors when using 'local' config option - notably affects French > > > Key: SOLR-13580 > URL: https://issues.apache.org/jira/browse/SOLR-13580 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Hoss Man >Assignee: Hoss Man >Priority: Major > Labels: Java13 > Attachments: SOLR-13580.patch > > > Per [JDK-8221432|https://bugs.openjdk.java.net/browse/JDK-8221432] Java13 has > updated to [CLDR 35.1|http://cldr.unicode.org/] – which controls the > definition of language & locale specific formatting characters – in a > non-backwards compatible way due to "French" changes in [CLDR > 34|http://cldr.unicode.org/index/downloads/cldr-34#TOC-Detailed-Data-Changes] > This impacts people who use any of the "ParseNumeric" UpdateProcessors in > conjunction with the "locale=fr" or "locale=fr_FR" init param and expect the > (pre java13) existing behavior of treating U+00A0 (NO BREAK SPACE) as a > "grouping" character (ie: between thousands and millions, between millions and > billions, etc...). Starting with java13 the JVM expects U+202F (NARROW NO > BREAK SPACE) in its place. > Notably: upgrading to jdk13-ea+26 caused failures in Solr's > ParsingFieldUpdateProcessorsTest, which initially had hardcoded test data > that used U+00A0. ParsingFieldUpdateProcessorsTest has since been updated to > account for this discrepancy by modifying the test data used to determine the > "expected" character for the current JVM, but there is nothing Solr or the > ParseNumeric UpdateProcessors can do to help mitigate this change in behavior > for end users who upgrade to java13. 
> Affected users with U+00A0 characters in their incoming SolrInputDocuments > will see the ParseNumeric UpdateProcessors (configured with locale=fr...) > "skip" these values as unparsable, most likely resulting in a failure to > index into a numeric field since the original "String" value will be left as > is. > Affected users may want to consider updating their configs to include a > {{RegexReplaceProcessorFactory}} configured to strip out all whitespace > characters, prior to any ParseNumeric update processors configured to expect > French-language numbers >
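The CLDR change described above is easy to demonstrate with plain `java.text`; here is a minimal sketch (the class and method names are mine) that looks up whichever grouping separator the running JVM's locale data prescribes for French and round-trips a number through it, mirroring what a locale=fr_FR ParseNumeric processor does internally via NumberFormat:

```java
import java.text.DecimalFormatSymbols;
import java.text.NumberFormat;
import java.text.ParseException;
import java.util.Locale;

public class FrenchGroupingDemo {
    // The grouping separator the running JVM uses for fr_FR:
    // U+00A0 (NO-BREAK SPACE) before Java 13, U+202F (NARROW NO-BREAK SPACE)
    // from Java 13 on, per the CLDR 35.1 upgrade.
    static char frenchGroupingSeparator() {
        return DecimalFormatSymbols.getInstance(Locale.FRANCE).getGroupingSeparator();
    }

    // Parses a French-formatted integer the locale-sensitive way; a value
    // using the "wrong" separator for this JVM will not round-trip, which is
    // the failure mode the issue describes for indexed documents.
    static long parseFrench(String s) {
        try {
            return NumberFormat.getIntegerInstance(Locale.FRANCE).parse(s).longValue();
        } catch (ParseException e) {
            throw new IllegalArgumentException("unparsable as fr_FR number: " + s, e);
        }
    }

    public static void main(String[] args) {
        char sep = frenchGroupingSeparator();
        String text = "1" + sep + "234" + sep + "567";
        System.out.printf("separator=U+%04X parsed=%d%n", (int) sep, parseFrench(text));
    }
}
```

Running this on Java 12 and Java 13 side by side shows the separator flip; documents formatted under the old separator parse only partially (NumberFormat stops at the first character it cannot consume), which is why the processors appear to "skip" the values.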
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-11.0.3) - Build # 8023 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/8023/ Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.cloud.LegacyCloudClusterPropTest.testCreateCollectionSwitchLegacyCloud Error Message: IOException occurred when talking to server at: https://127.0.0.1:59371/solr Stack Trace: org.apache.solr.client.solrj.SolrServerException: IOException occurred when talking to server at: https://127.0.0.1:59371/solr at __randomizedtesting.SeedInfo.seed([22000C1980FBF390:F307FE9C24F478A2]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:670) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245) at org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368) at org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228) at org.apache.solr.cloud.LegacyCloudClusterPropTest.createAndTest(LegacyCloudClusterPropTest.java:87) at org.apache.solr.cloud.LegacyCloudClusterPropTest.testCreateCollectionSwitchLegacyCloud(LegacyCloudClusterPropTest.java:79) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at
[jira] [Updated] (SOLR-13580) java 13 changes to locale specific Numeric parsing rules affect ParseNumeric UpdateProcessors when using 'local' config option - notably affects French
[ https://issues.apache.org/jira/browse/SOLR-13580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hoss Man updated SOLR-13580: Description: Per [JDK-8221432|https://bugs.openjdk.java.net/browse/JDK-8221432] Java13 has updated to [CLDR 35.1|http://cldr.unicode.org/] – which controls the definition of language & locale specific formatting characters – in a non-backwards compatible way due to "French" changes in [CLDR 34|http://cldr.unicode.org/index/downloads/cldr-34#TOC-Detailed-Data-Changes] This impacts people who use any of the "ParseNumeric" UpdateProcessors in conjunction with the "locale=fr" or "locale=fr_FR" init param and expect the (pre java13) existing behavior of treating U+00A0 (NO BREAK SPACE) as a "grouping" character (i.e. between thousands and millions, between millions and billions, etc...). Starting with java13 the JVM expects U+202F (NARROW NO BREAK SPACE) in its place. Notably: upgrading to jdk13-ea+26 caused failures in Solr's ParsingFieldUpdateProcessorsTest, which initially had hardcoded test data that used U+00A0. ParsingFieldUpdateProcessorsTest has since been updated to account for this discrepancy by modifying the test data used to determine the "expected" character for the current JVM, but there is nothing Solr or the ParseNumeric UpdateProcessors can do to help mitigate this change in behavior for end users who upgrade to java13. Affected users with U+00A0 characters in their incoming SolrInputDocuments will see the ParseNumeric UpdateProcessors (configured with locale=fr...) "skip" these values as unparsable, most likely resulting in a failure to index into a numeric field since the original "String" value will be left as is. 
was: ParsingFieldUpdateProcessorsTest has uncovered a JDK 13-ea+26 bug when dealing with the fr_FR Locale (which may affect other locales as well) which causes the grouping separator ( U+00A0 in fr_FR ) to be ignored when parsing, treating it as a termination character -- example: "10 898" is parsed as "10" instead of "10898", leaving the " 898" portion of the string unparsed. The way the ParseNumeric UpdateProcessors are implemented, the fact that the NumberFormat instance does not recognize the entire string as a Number results in the String value being left "as is" in the input documents. In ParsingFieldUpdateProcessorsTest this has manifested as jenkins failures like this... {noformat} [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=ParsingFieldUpdateProcessorsTest -Dtests.method=testParseFloatNonRootLocale -Dtests.seed=AE6C840917DD963B -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=us -Dtests.timezone=GMT -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [junit4] FAILURE 0.03s | ParsingFieldUpdateProcessorsTest.testParseFloatNonRootLocale <<< [junit4]> Throwable #1: java.lang.AssertionError [junit4]>at __randomizedtesting.SeedInfo.seed([AE6C840917DD963B:B5B079D8B7786A26]:0) [junit4]>at org.apache.solr.update.processor.ParsingFieldUpdateProcessorsTest.testParseFloatNonRootLocale(ParsingFieldUpdateProcessorsTest.java:471) [junit4]>at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [junit4]>at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [junit4]>at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [junit4]>at java.base/java.lang.reflect.Method.invoke(Method.java:567) [junit4]>at java.base/java.lang.Thread.run(Thread.java:830) {noformat} Summary: java 13 changes to locale specific Numeric parsing rules affect ParseNumeric UpdateProcessors when using 'local' config option - notably 
affects French (was: java 13-ea NumberFormat.parse bugs in some Locales, affects ParseNumeric UpdateProcessors when using the 'locale' config option) updated summary & description to be helpful to end users who might see a change in behavior and think there is a bug in the UpdateProcessors > java 13 changes to locale specific Numeric parsing rules affect ParseNumeric > UpdateProcessors when using 'local' config option - notably affects French > > > Key: SOLR-13580 > URL: https://issues.apache.org/jira/browse/SOLR-13580 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Hoss Man >Assignee: Hoss Man >Priority: Major > Labels: Java13 > Attachments: SOLR-13580.patch > > > Per
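The change described above can be reproduced directly with java.text.NumberFormat. A minimal sketch (the class name FrGroupingDemo is illustrative) that asks the running JVM's CLDR data which grouping separator fr_FR uses, then shows that the *other* separator terminates parsing early:

```java
import java.text.DecimalFormatSymbols;
import java.text.NumberFormat;
import java.text.ParsePosition;
import java.util.Locale;

public class FrGroupingDemo {
    public static void main(String[] args) {
        // Ask this JVM's locale data which grouping character fr_FR uses:
        // U+00A0 (NO-BREAK SPACE) up to Java 12, U+202F (NARROW NO-BREAK SPACE) from Java 13.
        char sep = DecimalFormatSymbols.getInstance(Locale.FRANCE).getGroupingSeparator();
        System.out.printf("fr_FR grouping separator: U+%04X%n", (int) sep);

        NumberFormat fmt = NumberFormat.getInstance(Locale.FRANCE);
        ParsePosition pos = new ParsePosition(0);
        // With the separator this JVM expects, the whole 6-char string parses as 10898.
        Number ok = fmt.parse("10" + sep + "898", pos);
        System.out.println(ok + " (consumed " + pos.getIndex() + " of 6 chars)");

        // With the other separator, parsing stops after "10" -- the behavior that
        // makes the ParseNumeric URPs treat the full value as unparsable.
        char wrong = (sep == '\u00A0') ? '\u202F' : '\u00A0';
        pos.setIndex(0);
        Number partial = fmt.parse("10" + wrong + "898", pos);
        System.out.println(partial + " (consumed only " + pos.getIndex() + " chars)");
    }
}
```

Running this on jdk12 vs jdk13 shows the two JVMs disagreeing about which of the two strings is the well-formed one, which is why there is nothing Solr can do on its side.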
[jira] [Commented] (SOLR-13584) Explore prohibiting aliases and collections from having the same name.
[ https://issues.apache.org/jira/browse/SOLR-13584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875269#comment-16875269 ] Jan Høydahl commented on SOLR-13584: {quote}I have not tested this at all, but "theoretically it should work". {quote} I have read a bit more about the RENAME command, and it actually does not rename anything, it just adds an alias and makes it look like a rename happened from the API's perspective. So I don't think RENAME in its current form is a way forward for your use case. > Explore prohibiting aliases and collections from having the same name. > -- > > Key: SOLR-13584 > URL: https://issues.apache.org/jira/browse/SOLR-13584 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Erick Erickson >Priority: Major > > Allowing aliases and collections to have the same name is fragile and > potentially a data issue. I'll link in a few JIRAs illustrating this and one > at least where the discussion gets long. > Straw-man proposal to start things off. > Deprecate this ability now, and enforce it in 9.0. > We have to provide a graceful way for users to get themselves out of the > following currently-possible use-case. > * a collection C1 is created and all the front-end uses it. > * users want to atomically switch to a new collection for various reasons > * users create C2 and test it out. > * users create an alias C1->C2 > Let's discuss. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8858) Migrate Lucene's Moin wiki to Confluence
[ https://issues.apache.org/jira/browse/LUCENE-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875259#comment-16875259 ] Jan Høydahl commented on LUCENE-8858: - If you want more explicit redirects, you may update https://issues.apache.org/jira/browse/INFRA-18677 directly. > Migrate Lucene's Moin wiki to Confluence > > > Key: LUCENE-8858 > URL: https://issues.apache.org/jira/browse/LUCENE-8858 > Project: Lucene - Core > Issue Type: Task > Components: general/website >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Attachments: lucene-cwiki.txt, lucene-moin.txt > > > We have a deadline end of June to migrate Moin wiki to Confluence. > This Jira will track migration of Lucene's > https://wiki.apache.org/lucene-java/ over to > https://cwiki.apache.org/confluence/display/LUCENE > The old Confluence space will be overwritten as it is not used. > After migration we'll clean up and weed out what is not needed, and then > start moving developer-centric content into the main git repo (which will be > covered in other JIRAs) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13548) Migrate Solr's Moin wiki to Confluence
[ https://issues.apache.org/jira/browse/SOLR-13548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875260#comment-16875260 ] Jan Høydahl commented on SOLR-13548: If you want more explicit redirects, you may update https://issues.apache.org/jira/browse/INFRA-18677 directly. > Migrate Solr's Moin wiki to Confluence > -- > > Key: SOLR-13548 > URL: https://issues.apache.org/jira/browse/SOLR-13548 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: website >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Attachments: SolrCwikiPages.txt, SolrMoinTitles.txt, > create_dummy_confluence_pages.py > > > We have a deadline end of June to migrate Moin wiki to Confluence. > This Jira will track migration of Solr's [https://wiki.apache.org/solr/] over > to [https://cwiki.apache.org/confluence/display/SOLR] > The old Confluence space currently hosts the old Reference Guide for version > 6.5 before we moved to asciidoc. This will be overwritten. > Steps: > # Delete all pages in current SOLR space > ## Q: Can we do a bulk delete ourselves or do we need to ask INFRA? > # The rules in {{.htaccess}} which redirects to the 6.6 guide will remain as > is > # Run the migration tool at > [https://selfserve.apache.org|https://selfserve.apache.org/] > # Add a clearly visible link from front page to the ref guide for people > landing there for docs > After migration we'll clean up and weed out what is not needed, and then > start moving developer-centric content into the main git repo (which will be > covered in other JIRAs) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11.0.3) - Build # 24307 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24307/ Java: 64bit/jdk-11.0.3 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 4 tests failed. FAILED: org.apache.lucene.util.TestRamUsageEstimator.testMap Error Message: expected:<25152.0> but was:<30184.0> Stack Trace: java.lang.AssertionError: expected:<25152.0> but was:<30184.0> at __randomizedtesting.SeedInfo.seed([ACEA85FAA092F7D9:8CCC3129BA69E4AB]:0) at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:553) at org.junit.Assert.assertEquals(Assert.java:683) at org.apache.lucene.util.TestRamUsageEstimator.testMap(TestRamUsageEstimator.java:136) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:834) FAILED: org.apache.lucene.util.TestRamUsageEstimator.testMap Error Message: expected:<25152.0> but was:<30184.0> Stack Trace: java.lang.AssertionError: expected:<25152.0> but was:<30184.0> at __randomizedtesting.SeedInfo.seed([ACEA85FAA092F7D9:8CCC3129BA69E4AB]:0) at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:553) at org.junit.Assert.assertEquals(Assert.java:683) at
[jira] [Commented] (SOLR-13577) TestReplicationHandler.doTestIndexFetchOnMasterRestart failures
[ https://issues.apache.org/jira/browse/SOLR-13577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875256#comment-16875256 ] Hoss Man commented on SOLR-13577: - well... I think it would be a huge mistake to remove {{waitForJettyToStop(JettySolrRunner)}} from {{MiniSolrCloudCluster}} – not only does it affect a lot of tests unnecessarily, but it breaks backcompat for end users that have existing tests using {{MiniSolrCloudCluster}}. If you really want to add {{waitForJettyToStop()}} to {{JettySolrRunner}}, just make the existing {{waitForJettyToStop(JettySolrRunner)}} a thin wrapper around it ... but if it were me I'd keep the changes dead simple and just use something like this in TestReplicationHandler... {code:java} final TimeOut waitForLeaderToShutdown = new TimeOut(300, TimeUnit.SECONDS, TimeSource.NANO_TIME); waitForLeaderToShutdown.waitFor("Gave up after waiting an obscene amount of time for leader to shut down", () -> masterJetty.isStopped()); {code} > TestReplicationHandler.doTestIndexFetchOnMasterRestart failures > --- > > Key: SOLR-13577 > URL: https://issues.apache.org/jira/browse/SOLR-13577 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Attachments: 8016-consoleText.zip, SOLR-13577.patch, > SOLR-13577.patch, SOLR-13577.patch, SOLR-13577.patch, screenshot-1.png, still > failed on Windows consoleText.zip > > > It's seems like clear test failures. Failed 6 times in a row at lines 682, 684 > {quote} > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 1 build (Since Failed#8011 ) > Took 6 sec. 
> Error Message > null > Stacktrace > java.lang.NumberFormatException: null > at > __randomizedtesting.SeedInfo.seed([6AB4ECC957E5CCA2:B243282DFC3E0EFE]:0) > at java.base/java.lang.Integer.parseInt(Integer.java:614) > at java.base/java.lang.Integer.parseInt(Integer.java:770) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:682) > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 3 builds (Since Failed#8011 ) > Took 7.5 sec. > Stacktrace > java.lang.AssertionError > at > __randomizedtesting.SeedInfo.seed([E88092B4017D2D3D:30775650AAA6EF61]:0) > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertTrue(Assert.java:52) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:684) > {quote} > !screenshot-1.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13577) TestReplicationHandler.doTestIndexFetchOnMasterRestart failures
[ https://issues.apache.org/jira/browse/SOLR-13577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875215#comment-16875215 ] Mikhail Khludnev commented on SOLR-13577: - Thanks for the advice, [~hossman]. I decided to move waitToStop into JettyRunner, but it turns out it's not easy. What is the best way to treat this method? > TestReplicationHandler.doTestIndexFetchOnMasterRestart failures > --- > > Key: SOLR-13577 > URL: https://issues.apache.org/jira/browse/SOLR-13577 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Attachments: 8016-consoleText.zip, SOLR-13577.patch, > SOLR-13577.patch, SOLR-13577.patch, SOLR-13577.patch, screenshot-1.png, still > failed on Windows consoleText.zip > > > It's seems like clear test failures. Failed 6 times in a row at lines 682, 684 > {quote} > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 1 build (Since Failed#8011 ) > Took 6 sec. > Error Message > null > Stacktrace > java.lang.NumberFormatException: null > at > __randomizedtesting.SeedInfo.seed([6AB4ECC957E5CCA2:B243282DFC3E0EFE]:0) > at java.base/java.lang.Integer.parseInt(Integer.java:614) > at java.base/java.lang.Integer.parseInt(Integer.java:770) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:682) > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 3 builds (Since Failed#8011 ) > Took 7.5 sec. 
> Stacktrace > java.lang.AssertionError > at > __randomizedtesting.SeedInfo.seed([E88092B4017D2D3D:30775650AAA6EF61]:0) > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertTrue(Assert.java:52) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:684) > {quote} > !screenshot-1.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
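Independent of where such a method ends up living, the polling logic itself can be kept free of Jetty and Solr types. A minimal generic sketch of the wait-until-stopped idea (the class name WaitUtil and the 50 ms poll interval are illustrative, not Solr's actual TimeOut API):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public final class WaitUtil {
    private WaitUtil() {}

    /** Polls the condition until it is true, failing with the given message on timeout. */
    public static void waitFor(String message, long timeout, TimeUnit unit, BooleanSupplier condition) {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        while (!condition.getAsBoolean()) {
            if (System.nanoTime() >= deadline) {
                throw new AssertionError(message);
            }
            try {
                Thread.sleep(50); // back off briefly between polls
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new AssertionError("interrupted while waiting: " + message, e);
            }
        }
    }
}
```

A caller in a test would then look something like `WaitUtil.waitFor("Gave up waiting for leader to shut down", 300, TimeUnit.SECONDS, masterJetty::isStopped);`, which keeps the helper reusable without touching the JettySolrRunner API at all.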
[jira] [Updated] (SOLR-13577) TestReplicationHandler.doTestIndexFetchOnMasterRestart failures
[ https://issues.apache.org/jira/browse/SOLR-13577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13577: Attachment: SOLR-13577.patch > TestReplicationHandler.doTestIndexFetchOnMasterRestart failures > --- > > Key: SOLR-13577 > URL: https://issues.apache.org/jira/browse/SOLR-13577 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Attachments: 8016-consoleText.zip, SOLR-13577.patch, > SOLR-13577.patch, SOLR-13577.patch, SOLR-13577.patch, screenshot-1.png, still > failed on Windows consoleText.zip > > > It's seems like clear test failures. Failed 6 times in a row at lines 682, 684 > {quote} > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 1 build (Since Failed#8011 ) > Took 6 sec. > Error Message > null > Stacktrace > java.lang.NumberFormatException: null > at > __randomizedtesting.SeedInfo.seed([6AB4ECC957E5CCA2:B243282DFC3E0EFE]:0) > at java.base/java.lang.Integer.parseInt(Integer.java:614) > at java.base/java.lang.Integer.parseInt(Integer.java:770) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:682) > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 3 builds (Since Failed#8011 ) > Took 7.5 sec. > Stacktrace > java.lang.AssertionError > at > __randomizedtesting.SeedInfo.seed([E88092B4017D2D3D:30775650AAA6EF61]:0) > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertTrue(Assert.java:52) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:684) > {quote} > !screenshot-1.png! 
[GitHub] [lucene-solr] mayya-sharipova commented on issue #595: Load freqs lazily in Postings
mayya-sharipova commented on issue #595: Load freqs lazily in Postings URL: https://github.com/apache/lucene-solr/pull/595#issuecomment-506868411 @jpountz I have run `luceneutil` on wikimedium10k, 500k and 10M, but I am having difficulty interpreting the results. For `wikimedium10k`, we have a diff from `-8.8% to 6.7%`. For `wikimedium500k`, we have a diff from `-11.6% to 5.5%`. For `wikimedium10m`, we have a diff from `-14.2% to 10.1%`. Does it mean that we have a substantial regression in some cases? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
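One common rule of thumb when reading luceneutil output is to treat a task's diff as a likely real regression only when its magnitude exceeds the combined StdDev noise of the baseline and modified runs. A rough sketch of that reading, fed with rows from the wikimedium10m report below (the helper is hypothetical, not part of luceneutil):

```python
def likely_regression(pct_diff, baseline_stddev_pct, modified_stddev_pct):
    """Flag a task as a likely real regression when the mean slowdown
    exceeds the combined run-to-run noise of the two measurements."""
    noise = baseline_stddev_pct + modified_stddev_pct
    return pct_diff < 0 and abs(pct_diff) > noise

# (task, baseline StdDev %, modified StdDev %, Pct diff) from the wikimedium10m run
rows = [
    ("HighSloppyPhrase", 2.6, 3.5, -14.2),  # 14.2 > 6.1 -> likely real
    ("OrNotHighHigh", 7.1, 6.2, -3.7),      # 3.7 < 13.3 -> within noise
    ("AndHighMed", 2.2, 2.8, 10.1),         # positive diff -> improvement
]
for task, b, m, d in rows:
    print(task, likely_regression(d, b, m))
```

With this reading, the sloppy-phrase, span-near, and intervals tasks look like real regressions, while most term and boolean tasks are within noise or improved.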
[GitHub] [lucene-solr] mayya-sharipova commented on issue #595: Load freqs lazily in Postings
mayya-sharipova commented on issue #595: Load freqs lazily in Postings URL: https://github.com/apache/lucene-solr/pull/595#issuecomment-506866675

```bash
python src/python/localrun.py -source wikimedium10m
```

```
Report after iter 19:
Task                       QPS baseline           QPS my_modified_version    Pct diff
HighSloppyPhrase              20.21  (2.6%)    17.35  (3.5%)   -14.2%  ( -19% -  -8%)
LowSloppyPhrase                4.82  (2.4%)     4.14  (2.4%)   -14.0%  ( -18% -  -9%)
MedSloppyPhrase               21.73  (1.7%)    19.48  (2.3%)   -10.3%  ( -14% -  -6%)
MedSpanNear                   26.51  (3.3%)    24.04  (1.6%)    -9.3%  ( -13% -  -4%)
HighSpanNear                   4.26  (2.6%)     3.87  (1.2%)    -9.2%  ( -12% -  -5%)
HighIntervalsOrdered           6.68  (1.5%)     6.11  (1.6%)    -8.6%  ( -11% -  -5%)
LowSpanNear                   13.14  (2.2%)    12.14  (1.0%)    -7.6%  ( -10% -  -4%)
OrNotHighHigh                453.73  (7.1%)   437.16  (6.2%)    -3.7%  ( -15% -  10%)
HighPhrase                     6.48  (1.3%)     6.28  (2.4%)    -3.0%  (  -6% -   0%)
OrHighNotLow                 466.27  (4.9%)   458.30  (4.7%)    -1.7%  ( -10% -   8%)
Fuzzy1                        38.01 (11.8%)    37.53  (9.0%)    -1.3%  ( -19% -  22%)
HighTerm                     833.01  (5.8%)   823.44  (5.3%)    -1.1%  ( -11% -  10%)
MedPhrase                     25.00  (1.5%)    24.75  (2.0%)    -1.0%  (  -4% -   2%)
IntNRQ                       167.96 (12.3%)   166.61 (10.2%)    -0.8%  ( -20% -  24%)
BrowseMonthTaxoFacets       5926.47  (3.9%)  5887.19  (5.1%)    -0.7%  (  -9% -   8%)
OrHighNotHigh                387.62  (6.0%)   386.18  (3.6%)    -0.4%  (  -9% -   9%)
BrowseDayOfYearTaxoFacets   5890.73  (3.1%)  5884.09  (3.4%)    -0.1%  (  -6% -   6%)
BrowseDateTaxoFacets           2.17  (1.1%)     2.17  (0.3%)    -0.1%  (  -1% -   1%)
OrHighNotMed                 389.41  (6.0%)   389.62  (5.2%)     0.1%  ( -10% -  12%)
OrNotHighMed                 394.93  (3.4%)   396.13  (3.9%)     0.3%  (  -6% -   7%)
Respell                       43.39  (2.5%)    43.54  (2.5%)     0.4%  (  -4% -   5%)
PKLookup                      96.00  (2.9%)    96.42  (2.7%)     0.4%  (  -5% -   6%)
BrowseDayOfYearSSDVFacets     11.35 (14.0%)    11.42 (13.9%)     0.6%  ( -23% -  33%)
OrNotHighLow                 384.92  (6.5%)   388.64  (3.9%)     1.0%  (  -8% -  12%)
MedTerm                      990.83  (5.1%)  1006.09  (5.4%)     1.5%  (  -8% -  12%)
LowPhrase                     19.67  (1.6%)    19.99  (1.7%)     1.6%  (  -1% -   4%)
OrHighHigh                    28.77  (2.6%)    29.27  (2.8%)     1.7%  (  -3% -   7%)
BrowseMonthSSDVFacets         13.53 (17.0%)    13.91 (12.9%)     2.8%  ( -23% -  39%)
AndHighHigh                   63.92  (2.9%)    65.79  (2.5%)     2.9%  (  -2% -   8%)
Prefix3                       85.24  (7.5%)    88.08 (10.2%)     3.3%  ( -13% -  22%)
HighTermDayOfYearSort         45.94  (9.0%)    47.58  (6.8%)     3.6%  ( -11% -  21%)
Wildcard                     115.06  (5.6%)   119.32  (5.6%)     3.7%  (  -7% -  15%)
Fuzzy2                        31.88  (9.3%)    33.14  (9.4%)     4.0%  ( -13% -  24%)
OrHighLow                    289.38  (6.2%)   305.73  (5.8%)     5.6%  (  -6% -  18%)
OrHighMed                     92.48  (3.5%)    98.34  (3.1%)     6.3%  (   0% -  13%)
LowTerm                      724.16  (6.0%)   772.43  (7.9%)     6.7%  (  -6% -  21%)
HighTermMonthSort            145.23 (11.7%)   154.97 (11.8%)     6.7%  ( -14% -  34%)
AndHighLow                   437.58  (5.0%)   472.81  (5.0%)     8.1%  (  -1% -  18%)
AndHighMed                   120.90  (2.2%)   133.07  (2.8%)    10.1%  (   4% -  15%)
```
[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1884 - Still unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1884/ 4 tests failed. FAILED: org.apache.lucene.search.TestFieldCacheRewriteMethod.testRegexps Error Message: Hit 12 docnumbers don't match Hits length1=17 length2=17 hit=0: doc54=1.0 shardIndex=0, doc54=1.0 shardIndex=0 hit=1: doc93=1.0 shardIndex=0, doc93=1.0 shardIndex=0 hit=2: doc114=1.0 shardIndex=0, doc114=1.0 shardIndex=0 hit=3: doc216=1.0 shardIndex=0, doc216=1.0 shardIndex=0 hit=4: doc332=1.0 shardIndex=0, doc332=1.0 shardIndex=0 hit=5: doc372=1.0 shardIndex=0, doc372=1.0 shardIndex=0 hit=6: doc388=1.0 shardIndex=0, doc388=1.0 shardIndex=0 hit=7: doc489=1.0 shardIndex=0, doc489=1.0 shardIndex=0 hit=8: doc492=1.0 shardIndex=0, doc492=1.0 shardIndex=0 hit=9: doc547=1.0 shardIndex=0, doc547=1.0 shardIndex=0 hit=10: doc558=1.0 shardIndex=0, doc558=1.0 shardIndex=0 hit=11: doc625=1.0 shardIndex=0, doc625=1.0 shardIndex=0 hit=12: doc692=1.0 shardIndex=0, doc653=1.0 shardIndex=0 hit=13: doc714=1.0 shardIndex=0, doc692=1.0 shardIndex=0 hit=14: doc747=1.0 shardIndex=0, doc714=1.0 shardIndex=0 hit=15: doc756=1.0 shardIndex=0, doc747=1.0 shardIndex=0 hit=16: doc653=1.0 shardIndex=1, doc756=1.0 shardIndex=0 for query:/[[-ዬ)?(ᘗ单]+/ Stack Trace: junit.framework.AssertionFailedError: Hit 12 docnumbers don't match Hits length1=17 length2=17 hit=0: doc54=1.0 shardIndex=0, doc54=1.0 shardIndex=0 hit=1: doc93=1.0 shardIndex=0, doc93=1.0 shardIndex=0 hit=2: doc114=1.0 shardIndex=0, doc114=1.0 shardIndex=0 hit=3: doc216=1.0 shardIndex=0, doc216=1.0 shardIndex=0 hit=4: doc332=1.0 shardIndex=0, doc332=1.0 shardIndex=0 hit=5: doc372=1.0 shardIndex=0, doc372=1.0 shardIndex=0 hit=6: doc388=1.0 shardIndex=0, doc388=1.0 shardIndex=0 hit=7: doc489=1.0 shardIndex=0, doc489=1.0 shardIndex=0 hit=8: doc492=1.0 shardIndex=0, doc492=1.0 shardIndex=0 hit=9: doc547=1.0 shardIndex=0, doc547=1.0 shardIndex=0 hit=10: doc558=1.0 shardIndex=0, doc558=1.0 shardIndex=0 hit=11: doc625=1.0 shardIndex=0, doc625=1.0 shardIndex=0 
hit=12: doc692=1.0 shardIndex=0, doc653=1.0 shardIndex=0 hit=13: doc714=1.0 shardIndex=0, doc692=1.0 shardIndex=0 hit=14: doc747=1.0 shardIndex=0, doc714=1.0 shardIndex=0 hit=15: doc756=1.0 shardIndex=0, doc747=1.0 shardIndex=0 hit=16: doc653=1.0 shardIndex=1, doc756=1.0 shardIndex=0 for query:/[[-ዬ)?(ᘗ单]+/ at __randomizedtesting.SeedInfo.seed([54BAAD7FF7A7488B:B5E6EC6E290D1F03]:0) at junit.framework.Assert.fail(Assert.java:57) at org.apache.lucene.search.CheckHits.checkEqual(CheckHits.java:205) at org.apache.lucene.search.TestFieldCacheRewriteMethod.assertSame(TestFieldCacheRewriteMethod.java:42) at org.apache.lucene.search.TestRegexpRandom2.testRegexps(TestRegexpRandom2.java:164) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) 
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at
Re: Welcome Kevin Risden to the PMC
Welcome Kevin!

From: dev@lucene.apache.org At: 06/27/19 13:04:23 To: dev@lucene.apache.org Subject: Welcome Kevin Risden to the PMC

I am pleased to announce that Kevin Risden has accepted the PMC's invitation to join. Welcome Kevin!

-- Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13580) java 13-ea NumberFormat.parse bugs in some Locales, affects ParseNumeric UpdateProcessors when using the 'locale' config option
[ https://issues.apache.org/jira/browse/SOLR-13580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875188#comment-16875188 ] ASF subversion and git services commented on SOLR-13580: Commit 881aabe28a0678d65074a6240ca976fcbfab27bf in lucene-solr's branch refs/heads/branch_8x from Chris M. Hostetter [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=881aabe ] SOLR-13580: update test to account for different versions of java using different locale specific numeric formatting characters (cherry picked from commit 8b72e91df7b8ea545b6344d665bbb80e27a80aa4) > java 13-ea NumberFormat.parse bugs in some Locales, affects ParseNumeric > UpdateProcessors when using the 'locale' config option > --- > > Key: SOLR-13580 > URL: https://issues.apache.org/jira/browse/SOLR-13580 > Project: Solr > Issue Type: Bug > Security Level: Public (Default Security Level. Issues are Public) >Reporter: Hoss Man >Assignee: Hoss Man >Priority: Major > Labels: Java13 > Attachments: SOLR-13580.patch > > > ParsingFieldUpdateProcessorsTest has uncovered a JDK 13-ea+26 bug when > dealing with the fr_FR Locale (which may affect other locales as well) which > causes the grouping separator ( U+00A0 in fr_FR ) to be ignored when parsing, > treating it as a termination character -- example: "10 898" is parsed as > "10" instead of "10898", leaving the " 898" portion of the string unparsed. > The way the ParseNumeric UpdateProcessors are implemented, the fact that the > NumberFormat instance does not recognize the entire string as a Number > results in the String value being left "as is" in the input documents. > In ParsingFieldUpdateProcessorsTest this has manifested as jenkins failures > like this... 
> {noformat} >[junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=ParsingFieldUpdateProcessorsTest > -Dtests.method=testParseFloatNonRootLocale -Dtests.seed=AE6C840917DD963B > -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true > -Dtests.locale=us -Dtests.timezone=GMT -Dtests.asserts=true > -Dtests.file.encoding=US-ASCII >[junit4] FAILURE 0.03s | > ParsingFieldUpdateProcessorsTest.testParseFloatNonRootLocale <<< >[junit4]> Throwable #1: java.lang.AssertionError >[junit4]> at > __randomizedtesting.SeedInfo.seed([AE6C840917DD963B:B5B079D8B7786A26]:0) >[junit4]> at > org.apache.solr.update.processor.ParsingFieldUpdateProcessorsTest.testParseFloatNonRootLocale(ParsingFieldUpdateProcessorsTest.java:471) >[junit4]> at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) >[junit4]> at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) >[junit4]> at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) >[junit4]> at > java.base/java.lang.reflect.Method.invoke(Method.java:567) >[junit4]> at java.base/java.lang.Thread.run(Thread.java:830) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
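The failure mode described above — the processor leaving the string value "as is" because the whole input did not parse — can be sketched outside of Solr with a strict, position-checked parse. This is a minimal illustrative sketch, not the actual update-processor code; the method name is invented:

```java
import java.text.NumberFormat;
import java.text.ParsePosition;
import java.util.Locale;

public class LocaleParseCheck {
    // Strict parse in the spirit of the ParseNumeric processors: the value is
    // only accepted when the parser consumed the entire input string.
    static Number strictParse(NumberFormat fmt, String s) {
        ParsePosition pos = new ParsePosition(0);
        Number n = fmt.parse(s, pos);
        return (n != null && pos.getIndex() == s.length()) ? n : null;
    }

    public static void main(String[] args) {
        NumberFormat fr = NumberFormat.getInstance(Locale.FRANCE);
        // Round-trip through whatever grouping separator this JDK uses for
        // fr_FR (U+00A0 on older JDKs, a different character on newer CLDR
        // data). On an affected JDK 13-ea build the parse would stop at the
        // separator and strictParse would return null.
        String formatted = fr.format(10898L);
        System.out.println(strictParse(fr, formatted));
    }
}
```

The round-trip (format then strict-parse) is what the updated test effectively relies on, since hard-coding a separator character breaks across JDK versions.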
[jira] [Updated] (SOLR-13585) factor out SearchGroupsResultTransformer.[de]serializeOneSearchGroup methods
[ https://issues.apache.org/jira/browse/SOLR-13585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated SOLR-13585: --- Status: Patch Available (was: Open) > factor out SearchGroupsResultTransformer.[de]serializeOneSearchGroup methods > > > Key: SOLR-13585 > URL: https://issues.apache.org/jira/browse/SOLR-13585 > Project: Solr > Issue Type: Task >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-13585.patch > > > The {{SearchGroupsResultTransformer}}'s methods {{serializeSearchGroup}} e.g. > [#L110-L127|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.1/solr/core/src/java/org/apache/solr/search/grouping/distributed/shardresultserializer/SearchGroupsResultTransformer.java#L110-L127] > and {{transformToNative}} e.g. > [#L73-L108|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.1/solr/core/src/java/org/apache/solr/search/grouping/distributed/shardresultserializer/SearchGroupsResultTransformer.java#L73-L108] > do quite a few things and factoring out of portions of the code e.g. > [#L114-L120|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.1/solr/core/src/java/org/apache/solr/search/grouping/distributed/shardresultserializer/SearchGroupsResultTransformer.java#L114-L120] > and > [#L83-L99|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.1/solr/core/src/java/org/apache/solr/search/grouping/distributed/shardresultserializer/SearchGroupsResultTransformer.java#L83-L99] > could help with code comprehension.
[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk-11.0.3) - Build # 781 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/781/ Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth Error Message: Expected metric minimums for prefix SECURITY./authentication.: {failMissingCredentials=2, authenticated=20, passThrough=9, failWrongCredentials=1, requests=32, errors=0}, but got: {failMissingCredentials=2, authenticated=19, passThrough=12, totalTime=4962822, failWrongCredentials=1, requestTimes=828, requests=34, errors=0} Stack Trace: java.lang.AssertionError: Expected metric minimums for prefix SECURITY./authentication.: {failMissingCredentials=2, authenticated=20, passThrough=9, failWrongCredentials=1, requests=32, errors=0}, but got: {failMissingCredentials=2, authenticated=19, passThrough=12, totalTime=4962822, failWrongCredentials=1, requestTimes=828, requests=34, errors=0} at __randomizedtesting.SeedInfo.seed([A13DC82D2F7F0776:1D53BE3F8B2C840C]:0) at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.solr.cloud.SolrCloudAuthTestCase.assertAuthMetricsMinimums(SolrCloudAuthTestCase.java:129) at org.apache.solr.cloud.SolrCloudAuthTestCase.assertAuthMetricsMinimums(SolrCloudAuthTestCase.java:83) at org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth(BasicAuthIntegrationTest.java:313) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at
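The assertion that failed above checks metric minimums: every expected counter must be present with at least the expected value, while extra metrics such as {{totalTime}} are tolerated. A minimal sketch of that kind of check — not Solr's actual {{SolrCloudAuthTestCase}} code; all names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class MetricMinimums {
    // Return a human-readable failure entry for every expected metric whose
    // actual value fell below the expected minimum. Metrics present only in
    // `actual` (e.g. timing gauges) are ignored.
    static List<String> belowMinimum(Map<String, Long> expected, Map<String, Long> actual) {
        List<String> failures = new ArrayList<>();
        for (Map.Entry<String, Long> e : expected.entrySet()) {
            long got = actual.getOrDefault(e.getKey(), 0L);
            if (got < e.getValue()) {
                failures.add(e.getKey() + ": expected >= " + e.getValue() + " but got " + got);
            }
        }
        return failures;
    }

    public static void main(String[] args) {
        // Mirrors the failure above: authenticated=19 misses the minimum of 20,
        // while requests=34 >= 32 passes.
        Map<String, Long> expected = Map.of("authenticated", 20L, "requests", 32L, "errors", 0L);
        Map<String, Long> actual = Map.of("authenticated", 19L, "requests", 34L, "errors", 0L);
        System.out.println(belowMinimum(expected, actual));
    }
}
```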
[jira] [Comment Edited] (SOLR-13585) factor out SearchGroupsResultTransformer.[de]serializeOneSearchGroup methods
[ https://issues.apache.org/jira/browse/SOLR-13585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875173#comment-16875173 ] Christine Poerschke edited comment on SOLR-13585 at 6/28/19 7:41 PM: - In the context of SOLR-11831 (and assuming separate subsequent small private-to-protected and Array-or-List-to-Object style tweaks) the factored out methods would also help avoid code duplication: * [https://github.com/apache/lucene-solr/pull/300/files#r275043339] * [https://github.com/cpoerschke/lucene-solr/commit/10fbfd1dcf16065688c3610b26a55f2aa9c99f8a] * [https://github.com/apache/lucene-solr/commit/9480482166b05d04eb2bb55227baf4f3c6549ab9] * [https://github.com/apache/lucene-solr/pull/300/files#r288569478] * [https://github.com/apache/lucene-solr/commit/0eb206bf6f635ba2e7514efa369e09137911b825] * [https://github.com/apache/lucene-solr/pull/300/files#r298725095] At this point it might be helpful to briefly outline the difference between the existing {{SearchGroupsResultTransformer}} and SOLR-11831's proposed and tentatively named {{SkipSecondStepSearchResultResultTransformer}} class i.e. what is different between them but why can they still share code: * The {{SearchGroupsResultTransformer}} transforms search groups i.e. it needs to know the identity of the group and a sort value for it so that groups from multiple shards can be collated correctly. In this first phase there is no need to know the identity of documents in each group since those are determined in the second phase. * The {{SkipSecondStepSearchResultResultTransformer}} needs to know everything that the {{SearchGroupsResultTransformer}} needs to know but if the second phase is to be skipped (in certain circumstances) then in the first phase there is a need to know the identity of documents in each group (technically just one document per group i.e. SOLR-11831 requires group.limit=1). 
* In terms of serialisation and deserialisation of search groups, therefore, the {{SkipSecondStepSearchResultResultTransformer}} can use the same code as the {{SearchGroupsResultTransformer}}, but there is some extra "add-on" info, i.e. the document identity info, that needs to be packed (serialised) and unpacked (deserialised).
[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)
cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch) URL: https://github.com/apache/lucene-solr/pull/300#discussion_r298725095 ## File path: solr/core/src/java/org/apache/solr/search/grouping/distributed/shardresultserializer/SearchGroupsResultTransformer.java ## @@ -34,17 +41,37 @@ /** * Implementation for transforming {@link SearchGroup} into a {@link NamedList} structure and vice versa. */ -public class SearchGroupsResultTransformer implements ShardResultTransformer, Map> { +public abstract class SearchGroupsResultTransformer implements ShardResultTransformer, Map> { Review comment: Thanks @diegoceccarelli for applying the serialise-one-group commit and for attempting a similar approach for the deserialisation! As you say, there is now no duplication, though returning to this after a little while it seems to me at least a little bit tricky to understand where the boundaries are between the refactoring and the changes related to the skip-second-step logic, i.e. to see that the refactoring does not break anything and to see that (and why) the skip-second-step logic will work as intended. Against that background https://issues.apache.org/jira/browse/SOLR-13585 proposes to do a pure factoring out of two methods first (albeit with foresight of the envisaged code changes here) and then hopefully the skip-second-step logic addition would be clearer to see. Admittedly though, such a split approach would make for a mildly messy rebase for the PR branch here. What do you think? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
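The factoring-out being discussed can be illustrated in miniature: pull the per-group serialisation out of the loop into one overridable method, so that a skip-second-step subclass only has to append its extra per-group ("add-on") info. This is a hypothetical sketch, not the actual SearchGroupsResultTransformer classes; all names are invented:

```java
import java.util.ArrayList;
import java.util.List;

public class OneGroupSerializerSketch {
    // Stand-in for a SearchGroup: a group value plus its sort values.
    static class Group {
        final String groupValue;
        final Object[] sortValues;
        Group(String groupValue, Object[] sortValues) {
            this.groupValue = groupValue;
            this.sortValues = sortValues;
        }
    }

    // Factored-out method: serialize a single search group. A subclass that
    // skips the second grouping step could override this to also pack the
    // identity of the (single) document in the group.
    protected Object[] serializeOneSearchGroup(Group group) {
        return new Object[] { group.groupValue, group.sortValues };
    }

    // The transform loop itself stays untouched by subclasses.
    List<Object[]> transform(List<Group> groups) {
        List<Object[]> out = new ArrayList<>(groups.size());
        for (Group g : groups) {
            out.add(serializeOneSearchGroup(g));
        }
        return out;
    }

    public static void main(String[] args) {
        OneGroupSerializerSketch t = new OneGroupSerializerSketch();
        List<Object[]> ser = t.transform(List.of(new Group("a", new Object[] { 1 })));
        System.out.println(ser.size() + " " + ser.get(0)[0]);
    }
}
```

The design point is that the refactoring alone changes no behaviour; only the subclass's override introduces new serialised content.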
[jira] [Created] (SOLR-13586) Solr 7.7.2 - Autoscaling ignores sysprop rules if no nodes able to satisfy
Andrew Kettmann created SOLR-13586: -- Summary: Solr 7.7.2 - Autoscaling ignores sysprop rules if no nodes able to satisfy Key: SOLR-13586 URL: https://issues.apache.org/jira/browse/SOLR-13586 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: AutoScaling Affects Versions: 7.7.2 Reporter: Andrew Kettmann Solr 7.7.2 on a new znode on ZK. Created the chroot using solr zk mkroot. Created a policy: {code:java} {'set-policy': {'banana': [{'replica': '#ALL', 'sysprop.HELM_CHART': 'notbanana'}]}}{code} No errors on creation of the policy. I have no nodes with that value for the system property "HELM_CHART"; I only have nodes with "banana" and "rulesos" for that value. I create the collection with a call to /admin/collections: {code:java} {'action': 'CREATE', 'collection.configName': 'project-solr-7', 'name': 'banana', 'numShards': '2', 'policy': 'banana', 'replicationFactor': '2'}{code} and it creates the collection without an error. My expectation would be that it would error, as the policy cannot be followed. Autoscaling seems to silently ignore rules (at least sysprop rules). Example rule: {code:java} {'set-policy': {'sales-uat': [{'node': '#ANY', 'replica': '<2', 'strict': 'false'}, {'replica': '#ALL', 'strict': 'true', 'sysprop.HELM_CHART': 'foo'}]}}{code} Two cases will get the sysprop rule ignored: # No nodes have a HELM_CHART system property defined # No nodes have the value "foo" for the HELM_CHART system property If you have SOME nodes that have -DHELM_CHART=foo, then creation will fail if it cannot satisfy another strict rule. So sysprop autoscaling rules appear to be unable to be strict on their own.
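The reporter's expectation can be phrased as a satisfiability check at collection-creation time: a strict sysprop rule over {{#ALL}} replicas is only satisfiable if at least one live node carries the required value, and creation should fail otherwise instead of silently ignoring the rule. A sketch of that check — illustrative only, not Solr's actual autoscaling code; all names are invented:

```java
import java.util.List;
import java.util.Map;

public class PolicySatisfiability {
    // True iff at least one node could host replicas under a rule of the form
    // {'replica': '#ALL', 'sysprop.<prop>': '<requiredValue>'}. Nodes that do
    // not define the sysprop at all simply never match.
    static boolean satisfiable(List<Map<String, String>> nodeSysprops,
                               String prop, String requiredValue) {
        return nodeSysprops.stream()
                .anyMatch(n -> requiredValue.equals(n.get(prop)));
    }

    public static void main(String[] args) {
        // The cluster from the report: nodes carry HELM_CHART=banana or
        // HELM_CHART=rulesos, and the policy demands HELM_CHART=notbanana.
        List<Map<String, String>> nodes = List.of(
            Map.of("HELM_CHART", "banana"),
            Map.of("HELM_CHART", "rulesos"));
        System.out.println(satisfiable(nodes, "HELM_CHART", "notbanana"));
    }
}
```

Under this reading, both cases the reporter lists (property undefined everywhere, or value matched nowhere) make the rule unsatisfiable, which is exactly when creation would be expected to error.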
[jira] [Commented] (SOLR-13585) factor out SearchGroupsResultTransformer.[de]serializeOneSearchGroup methods
[ https://issues.apache.org/jira/browse/SOLR-13585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875175#comment-16875175 ] Christine Poerschke commented on SOLR-13585: "factor out SearchGroupsResultTransformer.[de]serializeOneSearchGroup methods (Christine Poerschke, Diego Ceccarelli)" patch attached. Reviews, comments, questions, etc. welcome as usual, thank you.
[jira] [Updated] (SOLR-13585) factor out SearchGroupsResultTransformer.[de]serializeOneSearchGroup methods
[ https://issues.apache.org/jira/browse/SOLR-13585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated SOLR-13585: --- Attachment: SOLR-13585.patch
[jira] [Commented] (SOLR-13585) factor out SearchGroupsResultTransformer.[de]serializeOneSearchGroup methods
[ https://issues.apache.org/jira/browse/SOLR-13585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875173#comment-16875173 ] Christine Poerschke commented on SOLR-13585: In the context of SOLR-11831 (and assuming separate subsequent small {{private}}-->{{protected}} and {{...}}-->{{Object}} tweaks) the factored out methods would also help avoid code duplication: * [https://github.com/apache/lucene-solr/pull/300/files#r275043339] * [https://github.com/cpoerschke/lucene-solr/commit/10fbfd1dcf16065688c3610b26a55f2aa9c99f8a] * [https://github.com/apache/lucene-solr/commit/9480482166b05d04eb2bb55227baf4f3c6549ab9] * [https://github.com/apache/lucene-solr/pull/300/files#r288569478] * [https://github.com/apache/lucene-solr/commit/0eb206bf6f635ba2e7514efa369e09137911b825] * (add-one-more-link-here-shortly) At this point it might be helpful to briefly outline the difference between the existing {{SearchGroupsResultTransformer}} and SOLR-11831's proposed and tentatively named {{SkipSecondStepSearchResultResultTransformer}} class i.e. what is different between them but why can they still share code: * The {{SearchGroupsResultTransformer}} transforms search groups i.e. it needs to know the identity of the group and a sort value for it so that groups from multiple shards can be collated correctly. In this first phase there is no need to know the identity of documents in each group since those are determined in the second phase. * The {{SkipSecondStepSearchResultResultTransformer}} needs to know everything that the {{SearchGroupsResultTransformer}} needs to know but if the second phase is to be skipped (in certain circumstances) then in the first phase there is a need to know the identity of documents in each group (technically just one document per group i.e. SOLR-11831 requires group.limit=1). 
* In terms of serialisation and deserialisation of search groups therefore the {{SkipSecondStepSearchResultResultTransformer}} can use the same code as the {{SearchGroupsResultTransformer}} but there is some extra "add-on" info i.e. the document identity info that needs to be packed (serialised) and unpacked (deserialised).
[jira] [Created] (SOLR-13585) factor out SearchGroupsResultTransformer.[de]serializeOneSearchGroup methods
Christine Poerschke created SOLR-13585: -- Summary: factor out SearchGroupsResultTransformer.[de]serializeOneSearchGroup methods Key: SOLR-13585 URL: https://issues.apache.org/jira/browse/SOLR-13585 Project: Solr Issue Type: Task Reporter: Christine Poerschke Assignee: Christine Poerschke
[jira] [Commented] (SOLR-13580) java 13-ea NumberFormat.parse bugs in some Locales, affects ParseNumeric UpdateProcessors when using the 'locale' config option
[ https://issues.apache.org/jira/browse/SOLR-13580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875165#comment-16875165 ] ASF subversion and git services commented on SOLR-13580: Commit 8b72e91df7b8ea545b6344d665bbb80e27a80aa4 in lucene-solr's branch refs/heads/master from Chris M. Hostetter [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8b72e91 ] SOLR-13580: update test to account for different versions of java using different locale specific numeric formatting characters
[jira] [Commented] (SOLR-13577) TestReplicationHandler.doTestIndexFetchOnMasterRestart failures
[ https://issues.apache.org/jira/browse/SOLR-13577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875163#comment-16875163 ] Hoss Man commented on SOLR-13577: - Mikhail: 2 thoughts... # recent failures seem to only happen on windows, so don't rule out the possibility that your fixes have actually uncovered a real bug in how replication works/fails on windows in some situations # the problem may simply be thread contention ... IIUC the test is trying to verify that *after* the master shuts down, *then* the slave should poll and see the master is down, and *then* we should be able to ask the slave for details and see that failure count ... but nothing in the test "waits" to ensure the master is actually shut down – by the time we've used up all our retries the master may still be up, let alone giving the slave enough time to poll the master. i would suggest adding logic similar to what's in {{MiniSolrCloudCluster.waitForJettyToStop()}} to the test to verify the master is down *before* starting the retry loop that attempts to fetch details from the slave. > TestReplicationHandler.doTestIndexFetchOnMasterRestart failures > --- > > Key: SOLR-13577 > URL: https://issues.apache.org/jira/browse/SOLR-13577 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Attachments: 8016-consoleText.zip, SOLR-13577.patch, > SOLR-13577.patch, SOLR-13577.patch, screenshot-1.png, still failed on Windows > consoleText.zip > > > It seems like clear test failures. Failed 6 times in a row at lines 682, 684 > {quote} > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 1 build (Since Failed#8011 ) > Took 6 sec. 
> Error Message > null > Stacktrace > java.lang.NumberFormatException: null > at > __randomizedtesting.SeedInfo.seed([6AB4ECC957E5CCA2:B243282DFC3E0EFE]:0) > at java.base/java.lang.Integer.parseInt(Integer.java:614) > at java.base/java.lang.Integer.parseInt(Integer.java:770) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:682) > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 3 builds (Since Failed#8011 ) > Took 7.5 sec. > Stacktrace > java.lang.AssertionError > at > __randomizedtesting.SeedInfo.seed([E88092B4017D2D3D:30775650AAA6EF61]:0) > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertTrue(Assert.java:52) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:684) > {quote} > !screenshot-1.png!
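Hoss Man's second suggestion above — verify the master is actually down before entering the retry loop, in the spirit of {{MiniSolrCloudCluster.waitForJettyToStop()}} — amounts to a bounded polling wait. A minimal sketch of that idea; note that `awaitCondition` and the `isStopped` probe are hypothetical helpers for illustration, not Solr test APIs:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public class AwaitShutdown {
    // Polls the probe until it reports true or the timeout elapses.
    // In the test scenario, 'isStopped' would be a check that the master's
    // Jetty no longer accepts connections (a hypothetical probe here).
    public static boolean awaitCondition(BooleanSupplier isStopped, long timeout, TimeUnit unit) {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        while (System.nanoTime() < deadline) {
            if (isStopped.getAsBoolean()) {
                return true;
            }
            try {
                Thread.sleep(50); // poll interval between probe attempts
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return isStopped.getAsBoolean(); // one final check at/after the deadline
    }

    public static void main(String[] args) {
        // Demo: the condition becomes true roughly 300ms after we start waiting.
        long start = System.currentTimeMillis();
        boolean stopped = awaitCondition(
                () -> System.currentTimeMillis() - start > 300,
                5, TimeUnit.SECONDS);
        System.out.println("stopped=" + stopped);
    }
}
```

Only once this wait succeeds would the test start the retry loop that fetches details from the slave, removing one source of flakiness.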
[JENKINS] Lucene-Solr-Tests-8.1 - Build # 67 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.1/67/ 2 tests failed. FAILED: org.apache.solr.cloud.ReindexCollectionTest.testBasicReindexing Error Message: num docs expected:<200> but was:<199> Stack Trace: java.lang.AssertionError: num docs expected:<200> but was:<199> at __randomizedtesting.SeedInfo.seed([9602BE980064688A:5A07177088DBCB2]:0) at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:645) at org.apache.solr.cloud.ReindexCollectionTest.indexDocs(ReindexCollectionTest.java:376) at org.apache.solr.cloud.ReindexCollectionTest.testBasicReindexing(ReindexCollectionTest.java:123) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.common.cloud.TestCollectionStateWatchers.testWaitForStateWatcherIsRetainedOnPredicateFailure Error Message: Did not see a fully active cluster after 30
[GitHub] [lucene-solr] mayya-sharipova commented on issue #595: Load freqs lazily in Postings
mayya-sharipova commented on issue #595: Load freqs lazily in Postings URL: https://github.com/apache/lucene-solr/pull/595#issuecomment-506834942
```bash
python src/python/localrun.py -source wikimedium500k
```
```
Report after iter 19:
                     Task  QPS baseline  StdDev  QPS my_modified_version  StdDev  Pct diff
     HighIntervalsOrdered    132.21  (4.3%)    116.88  (1.9%)  -11.6% ( -17% -  -5%)
          LowSloppyPhrase     33.81  (8.7%)     30.23  (4.1%)  -10.6% ( -21% -   2%)
             HighSpanNear     25.91  (6.9%)     23.29  (4.9%)  -10.1% ( -20% -   1%)
         HighSloppyPhrase    191.59  (7.2%)    176.44  (5.7%)   -7.9% ( -19% -   5%)
                   Fuzzy1     62.16 (19.8%)     58.37 (20.2%)   -6.1% ( -38% -  42%)
                LowPhrase    570.54  (6.9%)    542.75  (5.5%)   -4.9% ( -16% -   8%)
                MedPhrase    342.01  (7.9%)    325.83  (8.5%)   -4.7% ( -19% -  12%)
          MedSloppyPhrase    514.58  (6.5%)    491.56  (5.7%)   -4.5% ( -15% -   8%)
              LowSpanNear    349.06  (5.8%)    337.14  (4.8%)   -3.4% ( -13% -   7%)
              MedSpanNear    260.49  (8.3%)    251.81  (6.0%)   -3.3% ( -16% -  11%)
                  MedTerm   1207.51 (11.4%)   1178.37 (10.9%)   -2.4% ( -22% -  22%)
               OrHighHigh    118.22 (12.5%)    115.82 (13.7%)   -2.0% ( -25% -  27%)
BrowseDayOfYearTaxoFacets   8255.43  (8.5%)   8145.52  (7.6%)   -1.3% ( -16% -  16%)
               HighPhrase    277.64  (6.3%)    274.04  (6.0%)   -1.3% ( -12% -  11%)
                OrHighMed    228.58 (13.0%)    226.01  (9.1%)   -1.1% ( -20% -  24%)
               AndHighLow   1151.84 (12.4%)   1141.92  (9.0%)   -0.9% ( -19% -  23%)
                  LowTerm   1756.48  (7.1%)   1753.91  (8.1%)   -0.1% ( -14% -  16%)
                  Respell    125.05  (9.9%)    125.43 (10.7%)    0.3% ( -18% -  23%)
        HighTermMonthSort    717.77 (10.6%)    720.39 (13.6%)    0.4% ( -21% -  27%)
                 HighTerm    916.79 (10.0%)    922.03  (8.5%)    0.6% ( -16% -  21%)
                   IntNRQ    256.66  (7.9%)    258.64 (11.2%)    0.8% ( -17% -  21%)
                 Wildcard    213.38 (14.7%)    215.15  (9.2%)    0.8% ( -20% -  29%)
                OrHighLow    411.57 (12.2%)    415.17  (8.1%)    0.9% ( -17% -  24%)
                 PKLookup    107.86  (7.2%)    109.52  (5.0%)    1.5% (  -9% -  14%)
               AndHighMed    416.98  (6.9%)    427.29  (6.8%)    2.5% ( -10% -  17%)
     BrowseDateTaxoFacets     39.34  (7.1%)     40.32  (8.6%)    2.5% ( -12% -  19%)
                  Prefix3    287.84 (11.9%)    295.97 (13.0%)    2.8% ( -19% -  31%)
BrowseDayOfYearSSDVFacets    220.00 (22.1%)    226.55 (25.1%)    3.0% ( -36% -  64%)
    BrowseMonthSSDVFacets    198.87 (28.0%)    206.42 (28.9%)    3.8% ( -41% -  84%)
    BrowseMonthTaxoFacets   8498.54  (6.1%)   8843.13  (9.9%)    4.1% ( -11% -  21%)
    HighTermDayOfYearSort    234.76  (9.2%)    246.51  (6.8%)    5.0% ( -10% -  23%)
                   Fuzzy2     38.21 (12.9%)     40.15 (15.1%)    5.1% ( -20% -  37%)
              AndHighHigh    181.86 (12.1%)    191.92  (9.3%)    5.5% ( -14% -  30%)
```
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Resolved] (SOLR-13280) strengthen ScheduledTrigger's preferredOperation parameter validation
[ https://issues.apache.org/jira/browse/SOLR-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke resolved SOLR-13280. Resolution: Fixed Fix Version/s: 8.2 master (9.0) > strengthen ScheduledTrigger's preferredOperation parameter validation > - > > Key: SOLR-13280 > URL: https://issues.apache.org/jira/browse/SOLR-13280 > Project: Solr > Issue Type: Improvement > Components: AutoScaling >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Fix For: master (9.0), 8.2 > > Attachments: SOLR-13280.patch > > > Currently a typo such as (say) {{MOVE_REPLICA}} instead of the correct > {{MOVEREPLICA}} results in a "success" response for the "set-trigger" call > but in the Solr logs NullPointerException stuff is happening e.g. > {code} > "eventType":"SCHEDULED", > "properties":{ > "actualEventTime":..., > "preferredOperation":"MOVE_REPLICA", > "_enqueue_time_":...}} > at > org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:160) > ~[?:?] > at > org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:324) > ~[?:?] > ... 6 more > Caused by: java.lang.NullPointerException > at > org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.getSuggester(Policy.java:628) > ~[?:?] > at > org.apache.solr.cloud.autoscaling.ComputePlanAction.getSuggester(ComputePlanAction.java:262) > ~[?:?] > at > org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:97) > ~[?:?] > at > org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:324) > ~[?:?] > ... 6 more > {code} > Proposed patch to follow.
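The strengthened validation described in this issue boils down to checking {{preferredOperation}} eagerly against the known set of operations, so a typo fails the "set-trigger" call instead of surfacing later as a NullPointerException. A minimal sketch of that idea — the `Operation` enum here is an illustrative stand-in, not Solr's real collection-action enum:

```java
import java.util.Locale;

public class PreferredOperationValidation {
    // Illustrative stand-in for the operations a trigger may prefer;
    // not the actual Solr CollectionParams enum.
    enum Operation { MOVEREPLICA, ADDREPLICA }

    // Fail fast at trigger-configuration time: an unknown value is rejected
    // with a descriptive error rather than accepted and left to NPE later.
    static Operation validate(String preferredOperation) {
        try {
            return Operation.valueOf(preferredOperation.toUpperCase(Locale.ROOT));
        } catch (IllegalArgumentException e) {
            throw new IllegalArgumentException(
                "Invalid preferredOperation: '" + preferredOperation + "'", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(validate("movereplica"));
        try {
            validate("MOVE_REPLICA"); // the typo from the issue description
        } catch (IllegalArgumentException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```

Validating at parse time also gives the caller a clear error message containing the offending value, which is what the "set-trigger" response was missing.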
[jira] [Updated] (SOLR-13576) factor out a TopGroupsShardResponseProcessor.fillResultIds method
[ https://issues.apache.org/jira/browse/SOLR-13576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated SOLR-13576: --- Resolution: Fixed Fix Version/s: 8.2 master (9.0) Status: Resolved (was: Patch Available) > factor out a TopGroupsShardResponseProcessor.fillResultIds method > - > > Key: SOLR-13576 > URL: https://issues.apache.org/jira/browse/SOLR-13576 > Project: Solr > Issue Type: Task >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Fix For: master (9.0), 8.2 > > Attachments: SOLR-13576.patch > > > The {{TopGroupsShardResponseProcessor.process}} method e.g. > [#L54-L215|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.1/solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/TopGroupsShardResponseProcessor.java#L54-L215] > does quite a few things and factoring out a {{fillResultIds}} (or similarly > named) method for the logically distinct > [#L192-L214|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.1/solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/TopGroupsShardResponseProcessor.java#L192-L214] > portion could help with code comprehension.
[jira] [Resolved] (SOLR-13279) clarify ScheduledTrigger's "every parameter missing" error response
[ https://issues.apache.org/jira/browse/SOLR-13279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke resolved SOLR-13279. Resolution: Fixed Fix Version/s: 8.2 master (9.0) > clarify ScheduledTrigger's "every parameter missing" error response > --- > > Key: SOLR-13279 > URL: https://issues.apache.org/jira/browse/SOLR-13279 > Project: Solr > Issue Type: Improvement > Components: AutoScaling >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Fix For: master (9.0), 8.2 > > Attachments: SOLR-13279.patch > > > current behaviour: > {code} > "Error validating trigger config scheduled_event_trigger: > org.apache.solr.common.SolrException: Invalid Date Math > String:'2019-02-27T20:20:20.202Znull'" > {code} > proposed behaviour: > {code} > "Error validating trigger config scheduled_event_trigger: > TriggerValidationException{name=scheduled_event_trigger, > details='{every=missing required property}'}" > {code} > one-line (proposed) patch summary: > * in the {{ScheduledTrigger}} constructor move {{"every"}} from the > {{TriggerUtils.validProperties}} to the {{TriggerUtils.requiredProperties}} > line/list
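The one-line patch works because validation can report each required property that is absent, instead of letting a missing {{every}} value flow into date-math parsing as the string "null". A rough sketch of that required/valid property-check style — the method and names here are illustrative, not the actual {{TriggerUtils}} code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class TriggerConfigValidation {
    // Every name in 'required' must appear in the config; any other key must
    // at least be listed in 'valid'. Returns a property->problem map, so the
    // caller can build a message like "{every=missing required property}".
    static Map<String, String> validate(Map<String, Object> config,
                                        Set<String> required, Set<String> valid) {
        Map<String, String> details = new HashMap<>();
        for (String name : required) {
            if (!config.containsKey(name)) {
                details.put(name, "missing required property");
            }
        }
        for (String name : config.keySet()) {
            if (!required.contains(name) && !valid.contains(name)) {
                details.put(name, "unknown property");
            }
        }
        return details; // an empty map means the config passed validation
    }

    public static void main(String[] args) {
        // A scheduled-trigger config that forgot the required "every" property.
        Map<String, String> details = validate(
                Map.of("graceDuration", "+1MINUTE"),
                Set.of("every"), Set.of("graceDuration"));
        System.out.println(details);
    }
}
```

Treating "every" as required (rather than merely valid) is exactly what turns the confusing "Invalid Date Math String" error into the explicit "missing required property" response shown above.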
[jira] [Commented] (SOLR-13279) clarify ScheduledTrigger's "every parameter missing" error response
[ https://issues.apache.org/jira/browse/SOLR-13279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875119#comment-16875119 ] ASF subversion and git services commented on SOLR-13279: Commit 785937d987fdf2ada2f33b1925a3844bf9216758 in lucene-solr's branch refs/heads/branch_8x from Christine Poerschke [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=785937d ] SOLR-13279: Clarify ScheduledTrigger's "every parameter missing" error response. > clarify ScheduledTrigger's "every parameter missing" error response > --- > > Key: SOLR-13279 > URL: https://issues.apache.org/jira/browse/SOLR-13279 > Project: Solr > Issue Type: Improvement > Components: AutoScaling >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-13279.patch > > > current behaviour: > {code} > "Error validating trigger config scheduled_event_trigger: > org.apache.solr.common.SolrException: Invalid Date Math > String:'2019-02-27T20:20:20.202Znull'" > {code} > proposed behaviour: > {code} > "Error validating trigger config scheduled_event_trigger: > TriggerValidationException{name=scheduled_event_trigger, > details='{every=missing required property}'}" > {code} > one-line (proposed) patch summary: > * in the {{ScheduledTrigger}} constructor move {{"every"}} from the > {{TriggerUtils.validProperties}} to the {{TriggerUtils.requiredProperties}} > line/list
[jira] [Commented] (SOLR-13280) strengthen ScheduledTrigger's preferredOperation parameter validation
[ https://issues.apache.org/jira/browse/SOLR-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875120#comment-16875120 ] ASF subversion and git services commented on SOLR-13280: Commit 07cf48816f13444965553a955f69f7be9fc90e41 in lucene-solr's branch refs/heads/branch_8x from Christine Poerschke [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=07cf488 ] SOLR-13280: Strengthen ScheduledTrigger's preferredOperation parameter validation. > strengthen ScheduledTrigger's preferredOperation parameter validation > - > > Key: SOLR-13280 > URL: https://issues.apache.org/jira/browse/SOLR-13280 > Project: Solr > Issue Type: Improvement > Components: AutoScaling >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-13280.patch > > > Currently a typo such as (say) {{MOVE_REPLICA}} instead of the correct > {{MOVEREPLICA}} results in a "success" response for the "set-trigger" call > but in the Solr logs NullPointerException stuff is happening e.g. > {code} > "eventType":"SCHEDULED", > "properties":{ > "actualEventTime":..., > "preferredOperation":"MOVE_REPLICA", > "_enqueue_time_":...}} > at > org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:160) > ~[?:?] > at > org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:324) > ~[?:?] > ... 6 more > Caused by: java.lang.NullPointerException > at > org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.getSuggester(Policy.java:628) > ~[?:?] > at > org.apache.solr.cloud.autoscaling.ComputePlanAction.getSuggester(ComputePlanAction.java:262) > ~[?:?] > at > org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:97) > ~[?:?] > at > org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:324) > ~[?:?] > ... 6 more > {code} > Proposed patch to follow. 
[jira] [Commented] (SOLR-13576) factor out a TopGroupsShardResponseProcessor.fillResultIds method
[ https://issues.apache.org/jira/browse/SOLR-13576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875121#comment-16875121 ] ASF subversion and git services commented on SOLR-13576: Commit 328db38d71bf87da472717762ba7adf2d428e05d in lucene-solr's branch refs/heads/branch_8x from Christine Poerschke [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=328db38 ] SOLR-13576: Factor out a TopGroupsShardResponseProcessor.fillResultIds method. (Christine Poerschke, Diego Ceccarelli) > factor out a TopGroupsShardResponseProcessor.fillResultIds method > - > > Key: SOLR-13576 > URL: https://issues.apache.org/jira/browse/SOLR-13576 > Project: Solr > Issue Type: Task >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-13576.patch > > > The {{TopGroupsShardResponseProcessor.process}} method e.g. > [#L54-L215|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.1/solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/TopGroupsShardResponseProcessor.java#L54-L215] > does quite a few things and factoring out a {{fillResultIds}} (or similarly > named) method for the logically distinct > [#L192-L214|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.1/solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/TopGroupsShardResponseProcessor.java#L192-L214] > portion could help with code comprehension.
[jira] [Commented] (SOLR-13576) factor out a TopGroupsShardResponseProcessor.fillResultIds method
[ https://issues.apache.org/jira/browse/SOLR-13576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875111#comment-16875111 ] ASF subversion and git services commented on SOLR-13576: Commit a49ddbaf116e82b3af5b15b3ddb6f64954aa2951 in lucene-solr's branch refs/heads/master from Christine Poerschke [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a49ddba ] SOLR-13576: Factor out a TopGroupsShardResponseProcessor.fillResultIds method. (Christine Poerschke, Diego Ceccarelli) > factor out a TopGroupsShardResponseProcessor.fillResultIds method > - > > Key: SOLR-13576 > URL: https://issues.apache.org/jira/browse/SOLR-13576 > Project: Solr > Issue Type: Task >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-13576.patch > > > The {{TopGroupsShardResponseProcessor.process}} method e.g. > [#L54-L215|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.1/solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/TopGroupsShardResponseProcessor.java#L54-L215] > does quite a few things and factoring out a {{fillResultIds}} (or similarly > named) method for the logically distinct > [#L192-L214|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.1/solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/TopGroupsShardResponseProcessor.java#L192-L214] > portion could help with code comprehension.
[jira] [Commented] (SOLR-13279) clarify ScheduledTrigger's "every parameter missing" error response
[ https://issues.apache.org/jira/browse/SOLR-13279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875109#comment-16875109 ] ASF subversion and git services commented on SOLR-13279: Commit 993c051a0edb6d634396fd814f0e2bb80d65ce29 in lucene-solr's branch refs/heads/master from Christine Poerschke [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=993c051 ] SOLR-13279: Clarify ScheduledTrigger's "every parameter missing" error response. > clarify ScheduledTrigger's "every parameter missing" error response > --- > > Key: SOLR-13279 > URL: https://issues.apache.org/jira/browse/SOLR-13279 > Project: Solr > Issue Type: Improvement > Components: AutoScaling >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-13279.patch > > > current behaviour: > {code} > "Error validating trigger config scheduled_event_trigger: > org.apache.solr.common.SolrException: Invalid Date Math > String:'2019-02-27T20:20:20.202Znull'" > {code} > proposed behaviour: > {code} > "Error validating trigger config scheduled_event_trigger: > TriggerValidationException{name=scheduled_event_trigger, > details='{every=missing required property}'}" > {code} > one-line (proposed) patch summary: > * in the {{ScheduledTrigger}} constructor move {{"every"}} from the > {{TriggerUtils.validProperties}} to the {{TriggerUtils.requiredProperties}} > line/list
[jira] [Commented] (SOLR-13280) strengthen ScheduledTrigger's preferredOperation parameter validation
[ https://issues.apache.org/jira/browse/SOLR-13280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875110#comment-16875110 ] ASF subversion and git services commented on SOLR-13280: Commit 5d2569eab1c911e10dc166486dc66568717f6ff8 in lucene-solr's branch refs/heads/master from Christine Poerschke [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5d2569e ] SOLR-13280: Strengthen ScheduledTrigger's preferredOperation parameter validation. > strengthen ScheduledTrigger's preferredOperation parameter validation > - > > Key: SOLR-13280 > URL: https://issues.apache.org/jira/browse/SOLR-13280 > Project: Solr > Issue Type: Improvement > Components: AutoScaling >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-13280.patch > > > Currently a typo such as (say) {{MOVE_REPLICA}} instead of the correct > {{MOVEREPLICA}} results in a "success" response for the "set-trigger" call > but in the Solr logs NullPointerException stuff is happening e.g. > {code} > "eventType":"SCHEDULED", > "properties":{ > "actualEventTime":..., > "preferredOperation":"MOVE_REPLICA", > "_enqueue_time_":...}} > at > org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:160) > ~[?:?] > at > org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:324) > ~[?:?] > ... 6 more > Caused by: java.lang.NullPointerException > at > org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.getSuggester(Policy.java:628) > ~[?:?] > at > org.apache.solr.cloud.autoscaling.ComputePlanAction.getSuggester(ComputePlanAction.java:262) > ~[?:?] > at > org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:97) > ~[?:?] > at > org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:324) > ~[?:?] > ... 6 more > {code} > Proposed patch to follow. 
[JENKINS] Lucene-Solr-8.x-MacOSX (64bit/jdk-12.0.1) - Build # 211 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-MacOSX/211/ Java: 64bit/jdk-12.0.1 -XX:+UseCompressedOops -XX:+UseG1GC 13 tests failed. FAILED: org.apache.solr.cloud.rule.RulesTest.doIntegrationTest Error Message: Timeout occurred while waiting response from server at: https://127.0.0.1:60821/solr Stack Trace: org.apache.solr.client.solrj.SolrServerException: Timeout occurred while waiting response from server at: https://127.0.0.1:60821/solr at __randomizedtesting.SeedInfo.seed([F72636A25622C7C2:121571234A5635C0]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:667) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245) at org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368) at org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228) at org.apache.solr.cloud.MiniSolrCloudCluster.deleteAllCollections(MiniSolrCloudCluster.java:547) at org.apache.solr.cloud.rule.RulesTest.removeCollections(RulesTest.java:65) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:567) at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996) at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-13576) factor out a TopGroupsShardResponseProcessor.fillResultIds method
[ https://issues.apache.org/jira/browse/SOLR-13576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875095#comment-16875095 ] Christine Poerschke commented on SOLR-13576: {quote}The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {quote} * This is a straightforward 2-ish-lines factoring out change. No obvious, clear, new, good ROI targets to extend existing test coverage. * Lucene/Solr QA Jenkins ran the {{core}} tests. > factor out a TopGroupsShardResponseProcessor.fillResultIds method > - > > Key: SOLR-13576 > URL: https://issues.apache.org/jira/browse/SOLR-13576 > Project: Solr > Issue Type: Task >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-13576.patch > > > The {{TopGroupsShardResponseProcessor.process}} method e.g. > [#L54-L215|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.1/solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/TopGroupsShardResponseProcessor.java#L54-L215] > does quite a few things and factoring out a {{fillResultIds}} (or similarly > named) method for the logically distinct > [#L192-L214|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.1.1/solr/core/src/java/org/apache/solr/search/grouping/distributed/responseprocessor/TopGroupsShardResponseProcessor.java#L192-L214] > portion could help with code comprehension.
[jira] [Comment Edited] (SOLR-13580) java 13-ea NumberFormat.parse bugs in some Locales, affects ParseNumeric UpdateProcessors when using the 'locale' config option
[ https://issues.apache.org/jira/browse/SOLR-13580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874358#comment-16874358 ] Hoss Man edited comment on SOLR-13580 at 6/28/19 5:39 PM: -- [https://bugs.openjdk.java.net/browse/JDK-8226867] {noformat} JDK-13 uses CLDR v35. From CLDR v34, the French grouping separator changed from no-break space U+00A0 to narrow no-break space U+202F. (Reference Release Notes for CLDR v34 : http://cldr.unicode.org/index/downloads/cldr-34 ) "10\u202F0898,83491" will give the expected result : ### input = 10?0898,83491 100898.83491 class java.lang.Double java.text.ParsePosition[index=13,errorIndex=-1] {noformat} Linked issue: https://bugs.openjdk.java.net/browse/JDK-8225245 {quote} Instead of hardcoding \u00A0, applications can use DecimalFormatSymbols.getInstance(Locale.FRENCH).getGroupingSeparator(). {quote} ...so it looks like we can change the test to work regardless of Java version by using DecimalFormatSymbols to determine the correct grouping character at runtime -- but that won't change the fact that end users who don't know any better might see a back-compat break (in the ParseNumeric UpdateProcessors) if they upgrade to Java 13 ... so we should make sure the issue description spells this out. Not sure there is much we can safely do to mitigate this for users across all possible locales that may be affected.
> java 13-ea NumberFormat.parse bugs in some Locales, affects ParseNumeric > UpdateProcessors when using the 'locale' config option > --- > > Key: SOLR-13580 > URL: https://issues.apache.org/jira/browse/SOLR-13580 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Hoss Man >Assignee: Hoss Man >Priority: Major > Labels: Java13 > Attachments: SOLR-13580.patch > > > ParsingFieldUpdateProcessorsTest has uncovered a JDK 13-ea+26 bug when > dealing with the fr_FR Locale (which may affect other locales as well) which > causes the grouping separator ( U+00A0 in fr_FR ) to be ignored when parsing, > treating it as a termination character -- example: "10 898" is parsed as > "10" instead of "10898", leaving the " 898" portion of the string unparsed.
> The way the ParseNumeric UpdateProcessors are implemented, the fact that the > NumberFormat instance does not recognize the entire string as a Number > results in the String value being left "as is" in the input documents. > In ParsingFieldUpdateProcessorsTest this has manifested as jenkins failures > like this... > {noformat} >[junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=ParsingFieldUpdateProcessorsTest > -Dtests.method=testParseFloatNonRootLocale -Dtests.seed=AE6C840917DD963B > -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true > -Dtests.locale=us -Dtests.timezone=GMT -Dtests.asserts=true > -Dtests.file.encoding=US-ASCII >[junit4] FAILURE 0.03s | > ParsingFieldUpdateProcessorsTest.testParseFloatNonRootLocale <<< >[junit4]> Throwable #1: java.lang.AssertionError >[junit4]> at > __randomizedtesting.SeedInfo.seed([AE6C840917DD963B:B5B079D8B7786A26]:0) >[junit4]> at > org.apache.solr.update.processor.ParsingFieldUpdateProcessorsTest.testParseFloatNonRootLocale(ParsingFieldUpdateProcessorsTest.java:471) >[junit4]> at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) >[junit4]> at >
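Hoss's suggestion above — looking the grouping separator up at runtime instead of hardcoding \u00A0 — can be sketched as a standalone demo (editor's sketch, not the actual test code; the class name is illustrative):

```java
import java.text.DecimalFormatSymbols;
import java.text.NumberFormat;
import java.text.ParsePosition;
import java.util.Locale;

public class FrenchGroupingDemo {
    public static void main(String[] args) {
        // Ask the runtime which grouping separator fr_FR actually uses
        // (U+00A0 before CLDR v34, U+202F afterwards) instead of hardcoding it.
        char sep = DecimalFormatSymbols.getInstance(Locale.FRENCH).getGroupingSeparator();
        String input = "100" + sep + "898,83491"; // "100 898,83491" in French formatting
        ParsePosition pos = new ParsePosition(0);
        Number parsed = NumberFormat.getInstance(Locale.FRENCH).parse(input, pos);
        // With the runtime's own separator, the whole 13-character string is consumed.
        System.out.println(parsed + " consumed=" + pos.getIndex());
    }
}
```

This parses correctly on both pre- and post-CLDR-v34 JDKs, which is the point of the proposed test change; it does not help end users whose indexed data hardcodes the old separator.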
[GitHub] [lucene-solr] mayya-sharipova commented on issue #595: Load freqs lazily in Postings
mayya-sharipova commented on issue #595: Load freqs lazily in Postings URL: https://github.com/apache/lucene-solr/pull/595#issuecomment-506811878 Results running the current version against master:
```bash
python src/python/localrun.py -source wikimedium10k
```
```
Report after iter 19:
Task                        QPS baseline (StdDev)   QPS my_modified_version (StdDev)   Pct diff
HighIntervalsOrdered        331.95 (15.5%)          302.61 (17.2%)          -8.8% ( -35% - 28%)
MedSpanNear                 526.35 (18.1%)          495.44 (16.2%)          -5.9% ( -34% - 34%)
LowSloppyPhrase             856.47 (15.0%)          832.17 (15.1%)          -2.8% ( -28% - 32%)
Wildcard                    325.84 (15.1%)          318.97 (16.3%)          -2.1% ( -29% - 34%)
HighTerm                   2452.42 (13.0%)         2427.11 (15.9%)          -1.0% ( -26% - 32%)
Prefix3                     225.03 (15.9%)          224.37 (17.1%)          -0.3% ( -28% - 38%)
OrHighHigh                  179.73 (18.6%)          179.48 (16.6%)          -0.1% ( -29% - 43%)
OrHighMed                   436.93 (16.5%)          436.34 (15.4%)          -0.1% ( -27% - 37%)
AndHighHigh                 295.80 (17.7%)          295.42 (17.1%)          -0.1% ( -29% - 41%)
LowSpanNear                 658.15 (15.9%)          659.13 (12.5%)           0.1% ( -24% - 33%)
MedTerm                    2426.87 (16.6%)         2432.89 (13.4%)           0.2% ( -25% - 36%)
LowPhrase                  1034.22 (15.8%)         1038.10 (14.3%)           0.4% ( -25% - 36%)
IntNRQ                      717.64 (16.1%)          722.40 (16.5%)           0.7% ( -27% - 39%)
HighTermDayOfYearSort       407.60 (10.7%)          411.51 (16.6%)           1.0% ( -23% - 31%)
Fuzzy2                       39.92 (21.9%)           40.31 (16.5%)           1.0% ( -30% - 50%)
OrHighLow                   646.37 (13.9%)          654.28 (17.8%)           1.2% ( -26% - 38%)
AndHighLow                  827.09 (16.0%)          840.43 (16.1%)           1.6% ( -26% - 40%)
HighSpanNear                377.10 (16.0%)          383.73 (14.6%)           1.8% ( -24% - 38%)
HighTermMonthSort          1257.71 (12.6%)         1280.97 (15.7%)           1.8% ( -23% - 34%)
AndHighMed                  634.96 (16.4%)          647.16 (12.8%)           1.9% ( -23% - 37%)
HighPhrase                  560.37 (18.3%)          572.15 (18.0%)           2.1% ( -28% - 46%)
BrowseMonthSSDVFacets      1515.12 (11.5%)         1549.39 (10.5%)           2.3% ( -17% - 27%)
BrowseDateTaxoFacets       2733.74 (10.6%)         2805.93 (11.5%)           2.6% ( -17% - 27%)
PKLookup                    123.28 (13.1%)          126.65 (12.2%)           2.7% ( -19% - 32%)
BrowseMonthTaxoFacets      6927.15 (11.6%)         7131.62 (13.2%)           3.0% ( -19% - 31%)
BrowseDayOfYearSSDVFacets  1249.77 (13.6%)         1286.87 (15.3%)           3.0% ( -22% - 36%)
MedPhrase                   708.03 (16.0%)          732.35 (17.7%)           3.4% ( -26% - 44%)
MedSloppyPhrase             451.45 (16.9%)          468.08 (15.6%)           3.7% ( -24% - 43%)
BrowseDayOfYearTaxoFacets  6774.32 (14.3%)         7028.15 (12.9%)           3.7% ( -20% - 36%)
LowTerm                    2906.03 (13.5%)         3028.13 (15.3%)           4.2% ( -21% - 38%)
Fuzzy1                      322.57 (16.9%)          337.58 (14.1%)           4.7% ( -22% - 42%)
HighSloppyPhrase            596.31 (19.8%)          626.88 (14.7%)           5.1% ( -24% - 49%)
Respell                     208.18 (17.7%)          222.12 (14.1%)           6.7% ( -21% - 46%)
```
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (LUCENE-8891) Snowball stemmer/analyzer for the Estonian language
[ https://issues.apache.org/jira/browse/LUCENE-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomoko Uchida reassigned LUCENE-8891: - Assignee: Tomoko Uchida > Snowball stemmer/analyzer for the Estonian language > --- > > Key: LUCENE-8891 > URL: https://issues.apache.org/jira/browse/LUCENE-8891 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/analysis >Reporter: Gert Morten Paimla >Assignee: Tomoko Uchida >Priority: Minor > Labels: newbie, ready-to-commit > Attachments: LUCENE-8891.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Currently there is no Estonian specific stemmer for SnowballFilter. > I would like to add a Snowball stemmer for the Estonian language and also add > a new Language analyzer for the Estonian language based on the snowball > stemmer. > [https://github.com/gpaimla/lucene-solr] fork of master branch with the > analyzer implemented
[jira] [Commented] (LUCENE-8891) Snowball stemmer/analyzer for the Estonian language
[ https://issues.apache.org/jira/browse/LUCENE-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875065#comment-16875065 ] Tomoko Uchida commented on LUCENE-8891: --- OK, the patch passed precommit. I'd like to wait for a while before committing to ASF repo, so that others can review it. If there are no objections I will commit it to the master and branch_8x on the weekend. [~gpaimla]: in the meantime, you can add a change log to lucene/CHANGES.txt. It should be added to the "New Features" section in "Lucene 8.2.0" updates. The credit would be "(your_name via Tomoko Uchida)".
[jira] [Updated] (SOLR-9409) CollapseQParser misleading error on TextField: 64 bit numeric collapse fields are not supported"
[ https://issues.apache.org/jira/browse/SOLR-9409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated SOLR-9409: --- Status: Patch Available (was: Open) > CollapseQParser misleading error on TextField: 64 bit numeric collapse > fields are not supported" > - > > Key: SOLR-9409 > URL: https://issues.apache.org/jira/browse/SOLR-9409 > Project: Solr > Issue Type: Bug >Reporter: Hoss Man >Priority: Major > Attachments: SOLR-9409.patch, SOLR-9409.patch > > > An IRC user asked about the error "64 bit numeric collapse fields are not > supported" when doing a query like this where subTitle is a TextField... > {noformat} > fq={!collapse+field%3DsubTitle} > {noformat} > The code in question looks roughly like this... > {code} > if (collapseFieldType instanceof StrField) { >... > } else if (collapseFieldType instanceof TrieIntField || > collapseFieldType instanceof TrieFloatField) { >... > } else { >throw new IOException("64 bit numeric collapse fields are not supported"); > } > {code}
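For illustration, the misleading branch quoted above can be contrasted with an error that names the offending type. This is a self-contained toy (field types are modeled as plain strings here; the real code dispatches on Solr FieldType instances, and the actual patch may differ):

```java
// Toy sketch: return null for supported collapse field types, otherwise an
// error message that names the actual type instead of claiming "64 bit numeric".
public class CollapseErrorDemo {
    static String collapseErrorFor(String fieldTypeName) {
        switch (fieldTypeName) {
            case "StrField":
            case "TrieIntField":
            case "TrieFloatField":
                return null; // supported: no error
            default:
                return "Collapsing is not supported on field type: " + fieldTypeName;
        }
    }

    public static void main(String[] args) {
        // A TextField now produces a message that points at the real problem.
        System.out.println(collapseErrorFor("TextField"));
    }
}
```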
[jira] [Updated] (SOLR-9409) CollapseQParser misleading error on TextField: 64 bit numeric collapse fields are not supported"
[ https://issues.apache.org/jira/browse/SOLR-9409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated SOLR-9409: --- Attachment: SOLR-9409.patch
Re: 8.1.2 bug fix release
Thanks! It is done. On Fri, Jun 28, 2019 at 5:19 PM Đạt Cao Mạnh wrote: > Ok, I'm hoping these two are the last ones. >
[jira] [Updated] (LUCENE-8831) LatLonShapeBoundingBoxQuery hashcode is wrong
[ https://issues.apache.org/jira/browse/LUCENE-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ignacio Vera updated LUCENE-8831: - Fix Version/s: 8.1.2 > LatLonShapeBoundingBoxQuery hashcode is wrong > -- > > Key: LUCENE-8831 > URL: https://issues.apache.org/jira/browse/LUCENE-8831 > Project: Lucene - Core > Issue Type: Bug >Reporter: Ignacio Vera >Assignee: Ignacio Vera >Priority: Major > Fix For: master (9.0), 8.2, 8.1.2 > > Time Spent: 20m > Remaining Estimate: 0h > > Currently the hashcode implementation for LatLonShapeBoundingBoxQuery returns > always a different value. Therefore the query cannot be cached.
[jira] [Commented] (LUCENE-8831) LatLonShapeBoundingBoxQuery hashcode is wrong
[ https://issues.apache.org/jira/browse/LUCENE-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875034#comment-16875034 ] ASF subversion and git services commented on LUCENE-8831: - Commit 9c08ff57ab1d3e62cb91885f0bbe19c6453fa093 in lucene-solr's branch refs/heads/branch_8_1 from Ignacio Vera [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9c08ff5 ] LUCENE-8831: Fixed LatLonShapeBoundingBoxQuery .hashCode methods
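Why a hashCode that "returns always a different value" makes a query uncacheable can be shown with a toy key class (illustrative only, not the Lucene classes — it just reproduces the failure mode the issue describes):

```java
import java.util.HashMap;
import java.util.Map;

// A key whose hashCode changes between calls can never be found again in a
// hash-based cache: the put and the get land in different buckets.
public class UnstableHashDemo {
    static final class BrokenKey {
        private static int calls = 0;
        @Override public boolean equals(Object o) { return o instanceof BrokenKey; }
        @Override public int hashCode() { return ++calls; } // different on every call (the bug)
    }

    public static void main(String[] args) {
        Map<BrokenKey, String> cache = new HashMap<>();
        BrokenKey key = new BrokenKey();
        cache.put(key, "cached result");
        // Same key object, but a new hash value: the lookup misses.
        System.out.println(cache.get(key)); // null — the cache never hits
    }
}
```

The fix in the commit above is the usual one: compute hashCode from the query's immutable fields so equal queries hash identically.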
[jira] [Commented] (LUCENE-8891) Snowball stemmer/analyzer for the Estonian language
[ https://issues.apache.org/jira/browse/LUCENE-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875019#comment-16875019 ] Gert Morten Paimla commented on LUCENE-8891: I removed the TestEstonianStemming class and merged it into the Analyzer testing class; it was just a leftover that I forgot even existed. Hopefully the precommit task doesn't fail now either, although I can't run it myself because it throws an error:
[source-patterns] Unescaped symbol "->" on line #46: solr/solr-ref-guide/src/analytics.adoc
[source-patterns] Unescaped symbol "->" on line #55: solr/solr-ref-guide/src/analytics.adoc
BUILD FAILED
[jira] [Updated] (LUCENE-8891) Snowball stemmer/analyzer for the Estonian language
[ https://issues.apache.org/jira/browse/LUCENE-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gert Morten Paimla updated LUCENE-8891: --- Attachment: LUCENE-8891.patch
[jira] [Updated] (LUCENE-8891) Snowball stemmer/analyzer for the Estonian language
[ https://issues.apache.org/jira/browse/LUCENE-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gert Morten Paimla updated LUCENE-8891: --- Attachment: (was: LUCENE-8891.patch)
[GitHub] [lucene-solr] magibney opened a new pull request #751: SOLR-13132: single sweep iteration over base, foreground, and background sets for "relatedness" calculation
magibney opened a new pull request #751: SOLR-13132: single sweep iteration over base, foreground, and background sets for "relatedness" calculation URL: https://github.com/apache/lucene-solr/pull/751 Relatedness essentially calculates facets separately across three different docSets; this patch increments facet counts over foreground, background, and base sets in a single sweep, to leverage the efficiency of existing faceting over the base set.
[jira] [Updated] (SOLR-10860) in-place updates currently throw NumberFormatException instead of a Bad Request SolrException for bad input
[ https://issues.apache.org/jira/browse/SOLR-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated SOLR-10860: Status: Patch Available (was: Open) > in-place updates currently throw NumberFormatException instead of a Bad > Request SolrException for bad input > --- > > Key: SOLR-10860 > URL: https://issues.apache.org/jira/browse/SOLR-10860 > Project: Solr > Issue Type: Bug >Reporter: Tomás Fernández Löbbe >Assignee: Ishan Chattopadhyaya >Priority: Major > Attachments: SOLR-10860.patch, SOLR-10860.patch, SOLR-10860.patch
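The general fix pattern the SOLR-10860 title describes — translating a raw NumberFormatException into a client-facing bad-request error — can be sketched as follows (IllegalArgumentException stands in for Solr's SolrException with a BAD_REQUEST code; the parameter name is illustrative, not taken from the patch):

```java
// Catch the low-level parse failure and rethrow it with a message that tells
// the client what was wrong with the request, preserving the cause.
public class BadInputDemo {
    static long parseVersion(String raw) {
        try {
            return Long.parseLong(raw);
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException(
                "Invalid value for _version_: '" + raw + "' (expected a long)", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(parseVersion("42"));
        try {
            parseVersion("not-a-number");
        } catch (IllegalArgumentException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```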
[jira] [Updated] (SOLR-13132) Improve JSON "terms" facet performance when sorted by relatedness
[ https://issues.apache.org/jira/browse/SOLR-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Gibney updated SOLR-13132: -- Status: Open (was: Patch Available) > Improve JSON "terms" facet performance when sorted by relatedness > -- > > Key: SOLR-13132 > URL: https://issues.apache.org/jira/browse/SOLR-13132 > Project: Solr > Issue Type: Improvement > Components: Facet Module >Affects Versions: 7.4, master (9.0) >Reporter: Michael Gibney >Priority: Major > Attachments: SOLR-13132-with-cache-01.patch, > SOLR-13132-with-cache.patch, SOLR-13132.patch > > > When sorting buckets by {{relatedness}}, JSON "terms" facet must calculate > {{relatedness}} for every term. > The current implementation uses a standard uninverted approach (either > {{docValues}} or {{UnInvertedField}}) to get facet counts over the domain > base docSet, and then uses that initial pass as a pre-filter for a > second-pass, inverted approach of fetching docSets for each relevant term > (i.e., {{count > minCount}}?) and calculating intersection size of those sets > with the domain base docSet. > Over high-cardinality fields, the overhead of per-term docSet creation and > set intersection operations increases request latency to the point where > relatedness sort may not be usable in practice (for my use case, even after > applying the patch for SOLR-13108, for a field with ~220k unique terms per > core, QTime for high-cardinality domain docSets were, e.g.: cardinality > 1816684=9000ms, cardinality 5032902=18000ms). > The attached patch brings the above example QTimes down to a manageable > ~300ms and ~250ms respectively. The approach calculates uninverted facet > counts over domain base, foreground, and background docSets in parallel in a > single pass. This allows us to take advantage of the efficiencies built into > the standard uninverted {{FacetFieldProcessorByArray[DV|UIF]}}, and avoids > the per-term docSet creation and set intersection overhead.
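The single-sweep idea in the SOLR-13132 description can be sketched with a toy counter — one pass over the documents increments per-term counts for the base, foreground, and background sets at once, instead of materializing a docSet per term and intersecting it with each set. All names here are illustrative; the real patch works against Solr's FacetFieldProcessor machinery:

```java
import java.util.*;

public class SweepDemo {
    /** term -> {baseCount, fgCount, bgCount}, computed in a single pass. */
    static Map<String, int[]> sweep(List<Set<String>> termsPerDoc,
                                    Set<Integer> base, Set<Integer> fg, Set<Integer> bg) {
        Map<String, int[]> counts = new HashMap<>();
        for (int doc = 0; doc < termsPerDoc.size(); doc++) {
            for (String term : termsPerDoc.get(doc)) {
                int[] c = counts.computeIfAbsent(term, t -> new int[3]);
                // One membership test per set, instead of one docSet
                // intersection per term later on.
                if (base.contains(doc)) c[0]++;
                if (fg.contains(doc))   c[1]++;
                if (bg.contains(doc))   c[2]++;
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<Set<String>> docs = List.of(Set.of("a", "b"), Set.of("a"), Set.of("b"));
        Map<String, int[]> c = sweep(docs, Set.of(0, 1, 2), Set.of(0), Set.of(0, 1, 2));
        System.out.println(Arrays.toString(c.get("a"))); // base/fg/bg counts for term "a"
    }
}
```

The win is that cost scales with one pass over the postings rather than with the number of unique terms times the cost of a set intersection.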
[jira] [Commented] (SOLR-9242) Collection level backup/restore should provide a param for specifying the repository implementation it should use
[ https://issues.apache.org/jira/browse/SOLR-9242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875008#comment-16875008 ] Mikhail Khludnev commented on SOLR-9242: I'm wondering why BackupRepository isn't ever closed. I suppose it causes leaks of HDFS resources in TestHdfsCloudBackupRestore. Is it worth closing the repo every time? Or is it better to keep repo instances in BRFactory? I'm asking in the scope of SOLR-9961, where we need to spawn a threadpool once or per repo. > Collection level backup/restore should provide a param for specifying the > repository implementation it should use > - > > Key: SOLR-9242 > URL: https://issues.apache.org/jira/browse/SOLR-9242 > Project: Solr > Issue Type: Improvement >Reporter: Hrishikesh Gadre >Assignee: Varun Thacker >Priority: Major > Fix For: 6.2, 7.0 > > Attachments: 7726.log.gz, SOLR-9242.patch, SOLR-9242.patch, > SOLR-9242.patch, SOLR-9242.patch, SOLR-9242.patch, SOLR-9242_followup.patch, > SOLR-9242_followup2.patch > > > SOLR-7374 provides BackupRepository interface to enable storing Solr index > data to a configured file-system (e.g. HDFS, local file-system etc.). This > JIRA is to track the work required to extend this functionality at the > collection level.
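The close-per-use discipline Mikhail is asking about can be illustrated with a stand-in AutoCloseable (BackupRepository is a Solr interface; whether to close per operation or pool instances in the factory is exactly the open question in the comment, so this is only one of the two options sketched):

```java
// Toy repository that records its lifecycle so we can see close() happen.
public class RepoCloseDemo {
    static final StringBuilder log = new StringBuilder();

    static final class StubRepo implements AutoCloseable {
        void copy() { log.append("copy;"); }
        @Override public void close() { log.append("close;"); } // would release e.g. HDFS filesystem handles
    }

    public static void main(String[] args) {
        // try-with-resources guarantees close() even if the backup fails midway,
        // which is what prevents the leak the comment describes.
        try (StubRepo repo = new StubRepo()) {
            repo.copy();
        }
        System.out.println(log);
    }
}
```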
[jira] [Commented] (SOLR-13122) Ability to query aliases in Solr Admin UI
[ https://issues.apache.org/jira/browse/SOLR-13122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875006#comment-16875006 ] Erick Erickson commented on SOLR-13122: --- I opened a new Jira about alias uniqueness and linked back to the discussion here and other JIRAs; let's move the discussion about whether to require uniqueness there. See SOLR-13584. > Ability to query aliases in Solr Admin UI > - > > Key: SOLR-13122 > URL: https://issues.apache.org/jira/browse/SOLR-13122 > Project: Solr > Issue Type: Improvement > Components: Admin UI >Reporter: mosh >Assignee: Jan Høydahl >Priority: Major > Labels: UI > Fix For: 8.2 > > Attachments: alias-collection-menu-selected.png, > alias-collection-view.png, alias-collections-menu.png, > alias-collections-menu.png, alias-delete-dialogue.png, alias-dropdown.png, > alias-select-double.png, alias-view.png, new-collection-dropdown.png, > new-oll-overview.png > > Time Spent: 10m > Remaining Estimate: 0h > > After having recently toyed with Time Routed Alias in SolrCloud, > we have noticed there is no way to query an alias from the admin UI, > since the combo box only contains the current collection in the cluster. > Solr Admin UI ought to have a way to query these aliases, for better > convenience.
[jira] [Comment Edited] (SOLR-13583) Impossible to delete a collection with the same name as an existing alias
[ https://issues.apache.org/jira/browse/SOLR-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875002#comment-16875002 ] Gus Heck edited comment on SOLR-13583 at 6/28/19 3:35 PM: -- Also this issue should probably clarify the intended behavior relative to [~erickerickson]'s concerns expressed in SOLR-13122. My $0.02: The current situation of an alias pointing to an alias but only one level deep is strange. In 9.0 we should either support nesting (arbitrary depth) or not support it at all. Certainly nesting imposes significant risks and more work, and I think if we do follow aliases more than one level, we should provide an additional attribute followAliasMaxDepth (default=1), and then fail if it finds some depth greater than the specified maximum, returning a list of potentially affected collections, aliases involved, and the depth required to run the command. This is a lot of work, and so not following aliases in certain documented cases (or complete rollback) to get this ticket resolved quickly is probably a good option.
> Impossible to delete a collection with the same name as an existing alias > - > > Key: SOLR-13583 > URL: https://issues.apache.org/jira/browse/SOLR-13583 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 8.1, 8.1.1 >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Blocker > Fix For: 8.1.2 > > > SOLR-13262 changed the behavior of most collection admin commands so that > they always resolve aliases by default. In most cases this is desirable > behavior but it also prevents executing commands on the collections that have > the same name as an existing alias (which usually points to a different > collection). > This behavior also breaks the REINDEXCOLLECTION command with > {{removeSource=true}}, which can also lead to data loss. > This issue can be resolved by adding either an opt-in or opt-out flag to the > collection admin commands that specifies whether the command should attempt > resolving the provided name as an alias first. From the point of view of ease > of use this could be an opt-out option, from the point of view of data safety > this could be an opt-in option.
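Gus's depth-limited resolution proposal can be sketched as a toy resolver (followAliasMaxDepth is his suggested attribute name, not an existing Solr parameter, and the error shape here is illustrative):

```java
import java.util.Map;

// Follow an alias chain, failing once it exceeds the allowed depth. With a
// finite max depth this also terminates on alias cycles.
public class AliasDepthDemo {
    static String resolve(Map<String, String> aliases, String name, int followAliasMaxDepth) {
        String current = name;
        int depth = 0;
        while (aliases.containsKey(current)) {
            if (++depth > followAliasMaxDepth) {
                throw new IllegalStateException("Alias chain starting at '" + name
                    + "' exceeds followAliasMaxDepth=" + followAliasMaxDepth);
            }
            current = aliases.get(current);
        }
        return current;
    }

    public static void main(String[] args) {
        Map<String, String> aliases = Map.of("a1", "a2", "a2", "coll");
        System.out.println(resolve(aliases, "a1", 2)); // two hops allowed: reaches the collection
        try {
            resolve(aliases, "a1", 1); // nested alias: fails under the proposed default of 1
        } catch (IllegalStateException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```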
[jira] [Commented] (SOLR-13584) Explore prohibiting aliases and collections from having the same name.
[ https://issues.apache.org/jira/browse/SOLR-13584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875003#comment-16875003 ] Erick Erickson commented on SOLR-13584: --- One proposal for handling the use-case I outlined in the Jira statement is to use the Collections RENAME command to rename your original collection to something else. I have not tested this at all, but "theoretically it should work". > Explore prohibiting aliases and collections from having the same name. > -- > > Key: SOLR-13584 > URL: https://issues.apache.org/jira/browse/SOLR-13584 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Erick Erickson >Priority: Major > > Allowing aliases and collections to have the same name is fragile and > potentially a data issue. I'll link in a few JIRAs illustrating this and at > least one where the discussion gets long. > Straw-man proposal to start things off. > Deprecate this ability now, and enforce it in 9.0. > We have to provide a graceful way for users to get themselves out of the > following currently-possible use-case. > * a collection C1 is created and all the front-end uses it. > * users want to atomically switch to a new collection for various reasons > * users create C2 and test it out. > * users create an alias C1->C2 > Let's discuss.
[jira] [Commented] (SOLR-13583) Impossible to delete a collection with the same name as an existing alias
[ https://issues.apache.org/jira/browse/SOLR-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875002#comment-16875002 ] Gus Heck commented on SOLR-13583: - Also this issue should probably clarify the intended behavior relative to [~erickerickson]'s concerns expressed in: SOLR-13122 My $0.02: The current situation of an alias pointing to an alias but only one level deep is strange. In 9.0 We should either support nesting (arbitrary depth) or not support it at all. Certainly nesting imposes significant risks and more work, and I think if we do follow aliases more than one level, we should provide require an additional attribute followAliasMaxDepth (default=1), and then fail with if it finds some depth greater than the specified maximum, returning a list of potentially affected collections, aliases involved, and the depth required to run the command. This is a lot of work, and so not following aliases in certain documented cases (or complete rollback) to get this ticket resolved quickly is probably a good option. > Impossible to delete a collection with the same name as an existing alias > - > > Key: SOLR-13583 > URL: https://issues.apache.org/jira/browse/SOLR-13583 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 8.1, 8.1.1 >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Blocker > Fix For: 8.1.2 > > > SOLR-13262 changed the behavior of most collection admin commands so that > they always resolve aliases by default. In most cases this is desireable > behavior but it also prevents executing commands on the collections that have > the same name as an existing alias (which usually points to a different > collection). > This behavior also breaks the REINDEXCOLLECTION command with > {{removeSource=true,}} which can also lead to data loss. 
> This issue can be resolved by adding either an opt-in or opt-out flag to the > collection admin commands that specifies whether the command should attempt > resolving the provided name as an alias first. From the point of view of ease > of use this could be an opt-out option, from the point of view of data safety > this could be an opt-in option. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
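As a minimal sketch of the followAliasMaxDepth idea floated in this thread: the function name, the dict-based alias map, and the error message shape below are all hypothetical (Solr's real alias metadata lives in ZooKeeper), but they show the proposed fail-with-details behavior.

```python
def resolve_alias(aliases, name, follow_alias_max_depth=1):
    """Follow alias -> target links up to a maximum depth.

    `aliases` maps alias names to target names; a name missing from the
    map is treated as a concrete collection. When the chain is deeper
    than allowed, fail and report the chain seen so far plus the depth
    that would be required, mirroring the proposal in the thread.
    (followAliasMaxDepth and the error shape are hypothetical, not an
    existing Solr API.)
    """
    chain = [name]
    while chain[-1] in aliases:
        if len(chain) > follow_alias_max_depth:
            # A cycle also lands here once the chain outgrows the limit.
            raise ValueError(
                "alias chain %s exceeds followAliasMaxDepth=%d; depth %d "
                "would be required"
                % (" -> ".join(chain), follow_alias_max_depth, len(chain)))
        chain.append(aliases[chain[-1]])
    return chain[-1]

# One level of aliasing resolves under the default; deeper chains fail.
target = resolve_alias({"C1": "C2"}, "C1")
```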
[jira] [Commented] (SOLR-13584) Explore prohibiting aliases and collections from having the same name.
[ https://issues.apache.org/jira/browse/SOLR-13584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874998#comment-16874998 ] Erick Erickson commented on SOLR-13584: --- SOLR-13122 in particular has some discussion, but changing this behavior is way outside the scope of that JIRA. Anyone working on this should read it though. SOLR-13583 illustrates how allowing aliases and collections to have the same name can and has gone wrong. > Explore prohibiting aliases and collections from having the same name. > -- > > Key: SOLR-13584 > URL: https://issues.apache.org/jira/browse/SOLR-13584 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Erick Erickson >Priority: Major > > Allowing aliases and collections to have the same name is fragile and > potentially a data issue. I'll link in a few JIRAs illustrating this and one > at least where the discussion gets long. > Straw-man proposal to start things off. > Deprecate this ability now, and enforce it in 9.0. > We have to provide a graceful way for users to get themselves out of the > following currently-possible use-case. > * a collection C1 is created and all the front-end uses it. > * users want to atomically switch to a new collection for various reasons > * users create C2 and test it out. > * users create an alias C1->C2 > Let's discuss. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-13584) Explore prohibiting aliases and collections from having the same name.
Erick Erickson created SOLR-13584: - Summary: Explore prohibiting aliases and collections from having the same name. Key: SOLR-13584 URL: https://issues.apache.org/jira/browse/SOLR-13584 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Reporter: Erick Erickson Allowing aliases and collections to have the same name is fragile and potentially a data issue. I'll link in a few JIRAs illustrating this and one at least where the discussion gets long. Straw-man proposal to start things off. Deprecate this ability now, and enforce it in 9.0. We have to provide a graceful way for users to get themselves out of the following currently-possible use-case. * a collection C1 is created and all the front-end uses it. * users want to atomically switch to a new collection for various reasons * users create C2 and test it out. * users create an alias C1->C2 Let's discuss. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-12.0.1) - Build # 5227 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/5227/ Java: 64bit/jdk-12.0.1 -XX:-UseCompressedOops -XX:+UseG1GC 16 tests failed. FAILED: org.apache.lucene.util.TestRamUsageEstimator.testMap Error Message: expected:<25152.0> but was:<30184.0> Stack Trace: java.lang.AssertionError: expected:<25152.0> but was:<30184.0> at __randomizedtesting.SeedInfo.seed([24FCE328C3D04974:4DA57FBD92B5A06]:0) at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:553) at org.junit.Assert.assertEquals(Assert.java:683) at org.apache.lucene.util.TestRamUsageEstimator.testMap(TestRamUsageEstimator.java:136) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:567) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) 
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:835) FAILED: org.apache.lucene.util.TestRamUsageEstimator.testMap Error Message: expected:<25152.0> but was:<30184.0> Stack Trace: java.lang.AssertionError: expected:<25152.0> but was:<30184.0> at __randomizedtesting.SeedInfo.seed([24FCE328C3D04974:4DA57FBD92B5A06]:0) at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:553) at org.junit.Assert.assertEquals(Assert.java:683) at
[jira] [Updated] (SOLR-12554) Expose IndexWriterConfig's RAMPerThreadHardLimitMB as SolrConfig.xml param
[ https://issues.apache.org/jira/browse/SOLR-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated SOLR-12554: Attachment: SOLR-12554.patch > Expose IndexWriterConfig's RAMPerThreadHardLimitMB as SolrConfig.xml param > -- > > Key: SOLR-12554 > URL: https://issues.apache.org/jira/browse/SOLR-12554 > Project: Solr > Issue Type: New Feature >Reporter: Ishan Chattopadhyaya >Assignee: Munendra S N >Priority: Major > Attachments: SOLR-12554.patch, SOLR-12554.patch, SOLR-12554.patch, > SOLR-12554.patch > > > Currently, the RAMPerThreadHardLimitMB parameter of IWC is not exposed. This > is useful to control flush policies. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: 8.1.2 bug fix release
Ok, I'm hoping these two are the last ones.
[GitHub] [lucene-solr] jpountz commented on a change in pull request #595: Load freqs lazily in Postings
jpountz commented on a change in pull request #595: Load freqs lazily in Postings URL: https://github.com/apache/lucene-solr/pull/595#discussion_r298630153 ## File path: lucene/core/src/java/org/apache/lucene/codecs/lucene50/Lucene50PostingsReader.java ## @@ -391,22 +408,19 @@ public int nextDoc() throws IOException { return doc = NO_MORE_DOCS; } if (docBufferUpto == BLOCK_SIZE) { -refillDocs(); +refillDocs(true); Review comment: This would be a bit simpler for sure, but I'd be curious whether it helps with performance? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
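The lazy-freq idea under review reduces to a small pattern: decode the doc block eagerly (every caller needs it), remember the encoded freq block, and decode it at most once, on first use. The toy sketch below is not the Lucene50PostingsReader code, just the idea behind threading a flag through refillDocs().

```python
class LazyFreqBlock:
    """Toy model of lazily decoded frequencies for one block of postings.

    Doc IDs are decoded up front, while the frequency block stays in
    its encoded form until freq() is first called. Not Lucene code;
    only an illustration of the pattern the PR applies.
    """

    def __init__(self, docs, encoded_freqs, decode):
        self.docs = docs                  # decoded eagerly
        self._encoded_freqs = encoded_freqs
        self._decode = decode             # stand-in for the real block decoder
        self._freqs = None
        self.decode_calls = 0             # instrumentation for this sketch

    def freq(self, i):
        # Decode at most once, and only if frequencies are ever requested.
        if self._freqs is None:
            self._freqs = self._decode(self._encoded_freqs)
            self.decode_calls += 1
        return self._freqs[i]

# A consumer that only walks doc IDs never pays the freq-decoding cost:
block = LazyFreqBlock([1, 5, 9], b"\x02\x01\x03", lambda raw: list(raw))
doc_ids = [d for d in block.docs]
```

This is exactly the trade-off the reviewers are weighing: the simpler eager call is easier to read, while the lazy variant only wins when many callers skip freq() entirely.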
[jira] [Commented] (SOLR-13577) TestReplicationHandler.doTestIndexFetchOnMasterRestart failures
[ https://issues.apache.org/jira/browse/SOLR-13577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874976#comment-16874976 ] ASF subversion and git services commented on SOLR-13577: Commit d2acaff5789c6b6ec5baa6af4fb5cacc471b05d2 in lucene-solr's branch refs/heads/branch_8x from Mikhail Khludnev [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d2acaff ] SOLR-13577: spin until slave got a replication failure while master is down. > TestReplicationHandler.doTestIndexFetchOnMasterRestart failures > --- > > Key: SOLR-13577 > URL: https://issues.apache.org/jira/browse/SOLR-13577 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Attachments: 8016-consoleText.zip, SOLR-13577.patch, > SOLR-13577.patch, SOLR-13577.patch, screenshot-1.png, still failed on Windows > consoleText.zip > > > It's seems like clear test failures. Failed 6 times in a row at lines 682, 684 > {quote} > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 1 build (Since Failed#8011 ) > Took 6 sec. > Error Message > null > Stacktrace > java.lang.NumberFormatException: null > at > __randomizedtesting.SeedInfo.seed([6AB4ECC957E5CCA2:B243282DFC3E0EFE]:0) > at java.base/java.lang.Integer.parseInt(Integer.java:614) > at java.base/java.lang.Integer.parseInt(Integer.java:770) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:682) > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 3 builds (Since Failed#8011 ) > Took 7.5 sec. 
> Stacktrace > java.lang.AssertionError > at > __randomizedtesting.SeedInfo.seed([E88092B4017D2D3D:30775650AAA6EF61]:0) > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertTrue(Assert.java:52) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:684) > {quote} > !screenshot-1.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] mayya-sharipova commented on a change in pull request #595: Load freqs lazily in Postings
mayya-sharipova commented on a change in pull request #595: Load freqs lazily in Postings URL: https://github.com/apache/lucene-solr/pull/595#discussion_r298628008 ## File path: lucene/core/src/java/org/apache/lucene/codecs/lucene50/Lucene50PostingsReader.java ## @@ -391,22 +408,19 @@ public int nextDoc() throws IOException { return doc = NO_MORE_DOCS; } if (docBufferUpto == BLOCK_SIZE) { -refillDocs(); +refillDocs(true); Review comment: @jpountz I can modify to load freqs lazily here as well. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-13548) Migrate Solr's Moin wiki to Confluence
[ https://issues.apache.org/jira/browse/SOLR-13548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Høydahl resolved SOLR-13548. Resolution: Fixed > Migrate Solr's Moin wiki to Confluence > -- > > Key: SOLR-13548 > URL: https://issues.apache.org/jira/browse/SOLR-13548 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: website >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Attachments: SolrCwikiPages.txt, SolrMoinTitles.txt, > create_dummy_confluence_pages.py > > > We have a deadline end of June to migrate Moin wiki to Confluence. > This Jira will track migration of Solr's [https://wiki.apache.org/solr/] over > to [https://cwiki.apache.org/confluence/display/SOLR] > The old Confluence space currently hosts the old Reference Guide for version > 6.5 before we moved to asciidoc. This will be overwritten. > Steps: > # Delete all pages in current SOLR space > ## Q: Can we do a bulk delete ourselves or do we need to ask INFRA? > # The rules in {{.htaccess}} which redirects to the 6.6 guide will remain as > is > # Run the migration tool at > [https://selfserve.apache.org|https://selfserve.apache.org/] > # Add a clearly visible link from front page to the ref guide for people > landing there for docs > After migration we'll clean up and weed out what is not needed, and then > start moving developer-centric content into the main git repo (which will be > covered in other JIRAs) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13548) Migrate Solr's Moin wiki to Confluence
[ https://issues.apache.org/jira/browse/SOLR-13548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874974#comment-16874974 ] Jan Høydahl commented on SOLR-13548: (/) Deleted old 'SOLR' Confluence space (/) Re-created new 'SOLR' Confluence space with [selfserve.apache.org|http://selfserve.apache.org/] (/) Migrated Solr wiki to this new space with [selfserve.apache.org|http://selfserve.apache.org/] (/) Created the dummy pages to reserve these for the 6.6 redirect, using attached python script (/) Created INFRA issue to get redirects in place: INFRA-18677 I forgot to rename the 'DocValues' and 'MoreLikeThis' pages, so currently they are inaccessible :) However, I assume that we'll delete most of those that are now covered by Reference Guide anyway. > Migrate Solr's Moin wiki to Confluence > -- > > Key: SOLR-13548 > URL: https://issues.apache.org/jira/browse/SOLR-13548 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: website >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Attachments: SolrCwikiPages.txt, SolrMoinTitles.txt, > create_dummy_confluence_pages.py > > > We have a deadline end of June to migrate Moin wiki to Confluence. > This Jira will track migration of Solr's [https://wiki.apache.org/solr/] over > to [https://cwiki.apache.org/confluence/display/SOLR] > The old Confluence space currently hosts the old Reference Guide for version > 6.5 before we moved to asciidoc. This will be overwritten. > Steps: > # Delete all pages in current SOLR space > ## Q: Can we do a bulk delete ourselves or do we need to ask INFRA? 
> # The rules in {{.htaccess}} which redirects to the 6.6 guide will remain as > is > # Run the migration tool at > [https://selfserve.apache.org|https://selfserve.apache.org/] > # Add a clearly visible link from front page to the ref guide for people > landing there for docs > After migration we'll clean up and weed out what is not needed, and then > start moving developer-centric content into the main git repo (which will be > covered in other JIRAs) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13577) TestReplicationHandler.doTestIndexFetchOnMasterRestart failures
[ https://issues.apache.org/jira/browse/SOLR-13577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874973#comment-16874973 ] ASF subversion and git services commented on SOLR-13577: Commit d54555c7575c86bb68581c6bbe5057c4725948dd in lucene-solr's branch refs/heads/master from Mikhail Khludnev [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d54555c ] SOLR-13577: spin until slave got a replication failure while master is down. > TestReplicationHandler.doTestIndexFetchOnMasterRestart failures > --- > > Key: SOLR-13577 > URL: https://issues.apache.org/jira/browse/SOLR-13577 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Attachments: 8016-consoleText.zip, SOLR-13577.patch, > SOLR-13577.patch, SOLR-13577.patch, screenshot-1.png, still failed on Windows > consoleText.zip > > > It's seems like clear test failures. Failed 6 times in a row at lines 682, 684 > {quote} > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 1 build (Since Failed#8011 ) > Took 6 sec. > Error Message > null > Stacktrace > java.lang.NumberFormatException: null > at > __randomizedtesting.SeedInfo.seed([6AB4ECC957E5CCA2:B243282DFC3E0EFE]:0) > at java.base/java.lang.Integer.parseInt(Integer.java:614) > at java.base/java.lang.Integer.parseInt(Integer.java:770) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:682) > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart > Failing for the past 3 builds (Since Failed#8011 ) > Took 7.5 sec. 
> Stacktrace > java.lang.AssertionError > at > __randomizedtesting.SeedInfo.seed([E88092B4017D2D3D:30775650AAA6EF61]:0) > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertTrue(Assert.java:52) > at > org.apache.solr.handler.TestReplicationHandler.doTestIndexFetchOnMasterRestart(TestReplicationHandler.java:684) > {quote} > !screenshot-1.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (LUCENE-8891) Snowball stemmer/analyzer for the Estonian language
[ https://issues.apache.org/jira/browse/LUCENE-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874965#comment-16874965 ] Tomoko Uchida edited comment on LUCENE-8891 at 6/28/19 2:42 PM: Thanks [~gpaimla], the patch looks fine to me. I noticed a few small things: - The "precommit" task fails because there remain unused import statements. We have various code and documentation linters / rules which should be checked before committing changes to the master. You can run the checks by executing {{ant precommit}}. - It is common convention to create one test class for each class to be tested. {{TestEstonianStemming}} seems slightly odd to me since there is no class named {{EstonianStemming}}. I think it can be merged into {{TestEstonianAnalyzer}}, opinions? was (Author: tomoko uchida): Thanks [~gpaimla], the patch looks fine to me. I noticed a few small things: - "precommit" task is failed because of there remain unused import statements. We have various code and documentation linters / rules which should be checked before committing changes to the master. You can run the checks by executing {{ant precommit}}. - It is common convention to create one test class for each class to be tested. {{TestEstonianStemming}} seems slightly odd to me since there are not {{EstonianStemming}} class. I think it can be merged into {{TestEstonianAnalyzer}}, opinions? > Snowball stemmer/analyzer for the Estonian language > --- > > Key: LUCENE-8891 > URL: https://issues.apache.org/jira/browse/LUCENE-8891 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/analysis >Reporter: Gert Morten Paimla >Priority: Minor > Labels: newbie, ready-to-commit > Attachments: LUCENE-8891.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Currently there is no Estonian specific stemmer for SnowballFilter. 
> I would like to add a Snowball stemmer for the Estonian language and also add > a new Language analyzer for the Estonian language based on the snowball > stemmer. > [https://github.com/gpaimla/lucene-solr] fork of master branch with the > analyzer implemented -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13583) Impossible to delete a collection with the same name as an existing alias
[ https://issues.apache.org/jira/browse/SOLR-13583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874966#comment-16874966 ] Gus Heck commented on SOLR-13583: - SOLR-13262 has been released, so this needs to be fixed in 8x. We can't change the basic way aliases work (by requiring uniqueness) until a major version (9.0) IMHO since that's not backwards compatible. I think we probably need to bolt on a followAliases=true to enable what was released on 13262 or roll 13262 back. Then we can change things fundamentally in 9.0 if we desire (in another ticket). > Impossible to delete a collection with the same name as an existing alias > - > > Key: SOLR-13583 > URL: https://issues.apache.org/jira/browse/SOLR-13583 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 8.1, 8.1.1 >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Blocker > Fix For: 8.1.2 > > > SOLR-13262 changed the behavior of most collection admin commands so that > they always resolve aliases by default. In most cases this is desireable > behavior but it also prevents executing commands on the collections that have > the same name as an existing alias (which usually points to a different > collection). > This behavior also breaks the REINDEXCOLLECTION command with > {{removeSource=true,}} which can also lead to data loss. > This issue can be resolved by adding either an opt-in or opt-out flag to the > collection admin commands that specifies whether the command should attempt > resolving the provided name as an alias first. From the point of view of ease > of use this could be an opt-out option, from the point of view of data safety > this could be an opt-in option. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8891) Snowball stemmer/analyzer for the Estonian language
[ https://issues.apache.org/jira/browse/LUCENE-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874965#comment-16874965 ] Tomoko Uchida commented on LUCENE-8891: --- Thanks [~gpaimla], the patch looks fine to me. I noticed a few small things: - The "precommit" task fails because there remain unused import statements. We have various code and documentation linters / rules which should be checked before committing changes to the master. You can run the checks by executing {{ant precommit}}. - It is common convention to create one test class for each class to be tested. {{TestEstonianStemming}} seems slightly odd to me since there is no {{EstonianStemming}} class. I think it can be merged into {{TestEstonianAnalyzer}}, opinions? > Snowball stemmer/analyzer for the Estonian language > --- > > Key: LUCENE-8891 > URL: https://issues.apache.org/jira/browse/LUCENE-8891 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/analysis >Reporter: Gert Morten Paimla >Priority: Minor > Labels: newbie, ready-to-commit > Attachments: LUCENE-8891.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Currently there is no Estonian specific stemmer for SnowballFilter. > I would like to add a Snowball stemmer for the Estonian language and also add > a new Language analyzer for the Estonian language based on the snowball > stemmer. > [https://github.com/gpaimla/lucene-solr] fork of master branch with the > analyzer implemented -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8892) Missing closing parens in string representation of MultiBoolFunction
[ https://issues.apache.org/jira/browse/LUCENE-8892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874962#comment-16874962 ] Adrien Grand commented on LUCENE-8892: -- +1 > Missing closing parens in string representation of MultiBoolFunction > > > Key: LUCENE-8892 > URL: https://issues.apache.org/jira/browse/LUCENE-8892 > Project: Lucene - Core > Issue Type: Bug >Reporter: Florian Diebold >Priority: Trivial > Attachments: 0001-Fix-missing-parenthesis-in-MultiBoolFunction.patch, > LUCENE-8892.patch, SOLR-13514.patch > > > The {{description}} function of {{MultiBoolFunction}} includes an open > parenthesis, but doesn't close it. This makes score explanations more > confusing than necessary sometimes. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] iverase commented on a change in pull request #709: LUCENE-8850: Calculate the area of a polygon and throw error when values are invalid
iverase commented on a change in pull request #709: LUCENE-8850: Calculate the area of a polygon and throw error when values are invalid URL: https://github.com/apache/lucene-solr/pull/709#discussion_r298453900 ## File path: lucene/core/src/java/org/apache/lucene/geo/Polygon.java ## @@ -96,34 +99,50 @@ public Polygon(double[] polyLats, double[] polyLons, Polygon... holes) { this.holes = holes.clone(); // compute bounding box -double minLat = polyLats[0]; -double maxLat = polyLats[0]; -double minLon = polyLons[0]; -double maxLon = polyLons[0]; +double minLat = Double.POSITIVE_INFINITY; +double maxLat = Double.NEGATIVE_INFINITY; +double minLon = Double.POSITIVE_INFINITY; +double maxLon = Double.NEGATIVE_INFINITY; double windingSum = 0d; final int numPts = polyLats.length - 1; -for (int i = 1, j = 0; i < numPts; j = i++) { +for (int i = 0; i < numPts; i++) { minLat = Math.min(polyLats[i], minLat); maxLat = Math.max(polyLats[i], maxLat); minLon = Math.min(polyLons[i], minLon); maxLon = Math.max(polyLons[i], maxLon); // compute signed area - windingSum += (polyLons[j] - polyLons[numPts])*(polyLats[i] - polyLats[numPts]) - - (polyLats[j] - polyLats[numPts])*(polyLons[i] - polyLons[numPts]); + windingSum += polyLons[i] * polyLats[i + 1] - polyLats[i] * polyLons[i + 1]; +} +if (windingSum == 0) { + throw new IllegalArgumentException("Cannot compute the polygon / hole orientation."); } this.minLat = minLat; this.maxLat = maxLat; this.minLon = minLon; this.maxLon = maxLon; this.windingOrder = (windingSum < 0) ? 
GeoUtils.WindingOrder.CCW : GeoUtils.WindingOrder.CW; +double area = Math.abs(windingSum / 2d); +for (Polygon hole : holes) { + area -= hole.area(); +} +if (area <= 0) { + throw new IllegalArgumentException("Polygon has an invalid area (area = " + area + ")."); +} +this.area = area; + } /** returns the number of vertex points */ public int numPoints() { return polyLats.length; } + /** returns the area of the polygon */ + public double area() { Review comment: I agree we should not provide this info to the user, +1 not to offer the calculation . The idea behind this change was validation of the polygon. The idea of this change came from the fact that I am using this calculation to check that a Polygon tessellation is correct and I realise that: - Currently we compute the signed area to check the orientation of a polygon, if the area is 0 the polygon is considered CW, should we fail instead? - I have seen polygons with one hole where the hole and polygon are the same, should we capture that situation and fail? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
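The winding computation in the revised loop is the standard shoelace formula. Isolated in plain Python (not the Lucene code), it behaves as follows; note that which sign maps to CW versus CCW is a convention choice in the Lucene code, so only the sign flip and the degenerate zero case are illustrated here.

```python
def signed_area(lats, lons):
    """Shoelace formula over a closed ring (first vertex repeated last).

    The sign of the result encodes the winding direction (which sign
    means CW versus CCW depends on the axis convention chosen), and a
    result of zero means the ring is degenerate and its orientation
    cannot be determined, which is the case the patch rejects with an
    IllegalArgumentException.
    """
    winding_sum = 0.0
    for i in range(len(lats) - 1):
        winding_sum += lons[i] * lats[i + 1] - lats[i] * lons[i + 1]
    return winding_sum / 2.0

# Unit square as a closed ring; reversing the ring flips the sign.
square_lats = [0.0, 0.0, 1.0, 1.0, 0.0]
square_lons = [0.0, 1.0, 1.0, 0.0, 0.0]
```

This also shows why a polygon whose hole has the same ring would trip the area check being discussed: the hole's area subtracts to exactly zero.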
[GitHub] [lucene-solr] iverase commented on a change in pull request #709: LUCENE-8850: Calculate the area of a polygon and throw error when values are invalid
iverase commented on a change in pull request #709: LUCENE-8850: Calculate the area of a polygon and throw error when values are invalid URL: https://github.com/apache/lucene-solr/pull/709#discussion_r298453900

## File path: lucene/core/src/java/org/apache/lucene/geo/Polygon.java ##

@@ -96,34 +99,50 @@ public Polygon(double[] polyLats, double[] polyLons, Polygon... holes) {
     this.holes = holes.clone();
     // compute bounding box
-    double minLat = polyLats[0];
-    double maxLat = polyLats[0];
-    double minLon = polyLons[0];
-    double maxLon = polyLons[0];
+    double minLat = Double.POSITIVE_INFINITY;
+    double maxLat = Double.NEGATIVE_INFINITY;
+    double minLon = Double.POSITIVE_INFINITY;
+    double maxLon = Double.NEGATIVE_INFINITY;
     double windingSum = 0d;
     final int numPts = polyLats.length - 1;
-    for (int i = 1, j = 0; i < numPts; j = i++) {
+    for (int i = 0; i < numPts; i++) {
       minLat = Math.min(polyLats[i], minLat);
       maxLat = Math.max(polyLats[i], maxLat);
       minLon = Math.min(polyLons[i], minLon);
       maxLon = Math.max(polyLons[i], maxLon);
       // compute signed area
-      windingSum += (polyLons[j] - polyLons[numPts])*(polyLats[i] - polyLats[numPts])
-                  - (polyLats[j] - polyLats[numPts])*(polyLons[i] - polyLons[numPts]);
+      windingSum += polyLons[i] * polyLats[i + 1] - polyLats[i] * polyLons[i + 1];
+    }
+    if (windingSum == 0) {
+      throw new IllegalArgumentException("Cannot compute the polygon / hole orientation.");
+    }
     this.minLat = minLat;
     this.maxLat = maxLat;
     this.minLon = minLon;
     this.maxLon = maxLon;
     this.windingOrder = (windingSum < 0) ? GeoUtils.WindingOrder.CCW : GeoUtils.WindingOrder.CW;
+    double area = Math.abs(windingSum / 2d);
+    for (Polygon hole : holes) {
+      area -= hole.area();
+    }
+    if (area <= 0) {
+      throw new IllegalArgumentException("Polygon has an invalid area (area = " + area + ").");
+    }
+    this.area = area;
   }

   /** returns the number of vertex points */
   public int numPoints() {
     return polyLats.length;
   }

+  /** returns the area of the polygon */
+  public double area() {

Review comment: I agree we should not provide this info to the user, +1 not to offer the calculation. The idea behind this change was validation of the polygon: I am using this calculation to check that a Polygon tessellation is correct, and I realised that:
- Currently we compute the signed area to check the orientation of a polygon; if the area is 0 the polygon is considered CW. Should we fail instead?
- I have seen polygons with one hole where the hole and polygon are the same. Should we capture that situation and fail?

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
With regards,
Apache Git Services
- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
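As a standalone illustration of the signed-area ("shoelace") idea discussed in this review, here is a minimal sketch (my own class and method names, not Lucene's Polygon.java): the sign of the result gives the winding order, and a result of 0 is the degenerate case the PR rejects.

```java
// Minimal shoelace-formula sketch: signed area of a simple polygon.
// Positive result = counter-clockwise vertex order, negative = clockwise,
// zero = degenerate polygon (the case the review discusses failing on).
public class ShoelaceSketch {
    /** Signed area of a simple polygon: > 0 for CCW vertex order, < 0 for CW. */
    static double signedArea(double[] xs, double[] ys) {
        double sum = 0d;
        for (int i = 0; i < xs.length; i++) {
            int j = (i + 1) % xs.length;  // next vertex, wrapping to the start
            sum += xs[i] * ys[j] - xs[j] * ys[i];
        }
        return sum / 2d;
    }

    public static void main(String[] args) {
        // Unit square listed counter-clockwise: signed area is +1.0
        System.out.println(signedArea(new double[]{0, 1, 1, 0},
                                      new double[]{0, 0, 1, 1}));
    }
}
```

Reversing the vertex order flips the sign, which is how the constructor above infers the winding order from `windingSum`.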
[GitHub] [lucene-solr] dsmiley commented on a change in pull request #709: LUCENE-8850: Calculate the area of a polygon and throw error when values are invalid
dsmiley commented on a change in pull request #709: LUCENE-8850: Calculate the area of a polygon and throw error when values are invalid URL: https://github.com/apache/lucene-solr/pull/709#discussion_r298616974

## File path: lucene/core/src/java/org/apache/lucene/geo/Polygon.java ##

@@ -96,34 +99,50 @@ public Polygon(double[] polyLats, double[] polyLons, Polygon... holes) {
     this.holes = holes.clone();
     // compute bounding box
-    double minLat = polyLats[0];
-    double maxLat = polyLats[0];
-    double minLon = polyLons[0];
-    double maxLon = polyLons[0];
+    double minLat = Double.POSITIVE_INFINITY;
+    double maxLat = Double.NEGATIVE_INFINITY;
+    double minLon = Double.POSITIVE_INFINITY;
+    double maxLon = Double.NEGATIVE_INFINITY;
     double windingSum = 0d;
     final int numPts = polyLats.length - 1;
-    for (int i = 1, j = 0; i < numPts; j = i++) {
+    for (int i = 0; i < numPts; i++) {
       minLat = Math.min(polyLats[i], minLat);
       maxLat = Math.max(polyLats[i], maxLat);
       minLon = Math.min(polyLons[i], minLon);
       maxLon = Math.max(polyLons[i], maxLon);
       // compute signed area
-      windingSum += (polyLons[j] - polyLons[numPts])*(polyLats[i] - polyLats[numPts])
-                  - (polyLats[j] - polyLats[numPts])*(polyLons[i] - polyLons[numPts]);
+      windingSum += polyLons[i] * polyLats[i + 1] - polyLats[i] * polyLons[i + 1];
+    }
+    if (windingSum == 0) {
+      throw new IllegalArgumentException("Cannot compute the polygon / hole orientation.");
+    }
     this.minLat = minLat;
     this.maxLat = maxLat;
     this.minLon = minLon;
     this.maxLon = maxLon;
     this.windingOrder = (windingSum < 0) ? GeoUtils.WindingOrder.CCW : GeoUtils.WindingOrder.CW;
+    double area = Math.abs(windingSum / 2d);
+    for (Polygon hole : holes) {
+      area -= hole.area();
+    }
+    if (area <= 0) {
+      throw new IllegalArgumentException("Polygon has an invalid area (area = " + area + ").");
+    }
+    this.area = area;
   }

   /** returns the number of vertex points */
   public int numPoints() {
     return polyLats.length;
   }

+  /** returns the area of the polygon */
+  public double area() {

Review comment: Please add a comment to the code to link to the "shoelace method" that Nick points at. I think it's important for non-trivial algorithms to incorporate references where the readers of this code can learn more. Who knows... maybe some "velcro method" will come along and be superior, and then some student looking at this code will point this out to us :-)
[JENKINS] Lucene-Solr-SmokeRelease-8.1 - Build # 46 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.1/46/ No tests ran. Build Log: [...truncated 23880 lines...] [asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [java] Processed 2570 links (2103 relative) to 3374 anchors in 253 files [echo] Validated Links & Anchors via: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/solr/build/solr-ref-guide/bare-bones-html/ -dist-changes: [copy] Copying 4 files to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/solr/package/changes package: -unpack-solr-tgz: -ensure-solr-tgz-exists: [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/solr/build/solr.tgz.unpacked [untar] Expanding: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/solr/package/solr-8.1.2.tgz into /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/solr/build/solr.tgz.unpacked generate-maven-artifacts: resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.1/lucene/top-level-ivy-settings.xml resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. 
[jira] [Updated] (LUCENE-8892) Missing closing parens in string representation of MultiBoolFunction
[ https://issues.apache.org/jira/browse/LUCENE-8892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated LUCENE-8892: - Attachment: LUCENE-8892.patch > Missing closing parens in string representation of MultiBoolFunction > > > Key: LUCENE-8892 > URL: https://issues.apache.org/jira/browse/LUCENE-8892 > Project: Lucene - Core > Issue Type: Bug >Reporter: Florian Diebold >Priority: Trivial > Attachments: 0001-Fix-missing-parenthesis-in-MultiBoolFunction.patch, > LUCENE-8892.patch, SOLR-13514.patch > > > The {{description}} function of {{MultiBoolFunction}} includes an open > parenthesis, but doesn't close it. This makes score explanations more > confusing than necessary sometimes. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
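The reported bug is easy to see in miniature. The sketch below is a hypothetical illustration (my own names, not the actual MultiBoolFunction source) of a description string of the form "name(a,b,...)" where appending the closing parenthesis is exactly the piece the real `description` method forgot:

```java
import java.util.List;

// Hypothetical miniature of the reported bug (not the real MultiBoolFunction
// code): build a description like "name(a,b)" and close the parenthesis.
public class DescriptionSketch {
    static String description(String name, List<String> sources) {
        StringBuilder sb = new StringBuilder(name).append('(');
        for (int i = 0; i < sources.size(); i++) {
            if (i > 0) sb.append(',');
            sb.append(sources.get(i));
        }
        return sb.append(')').toString(); // appending ')' is the fix the patch adds
    }

    public static void main(String[] args) {
        System.out.println(description("or", List.of("a", "b"))); // prints or(a,b)
    }
}
```

Without the final `append(')')`, nested score explanations end up with unbalanced parentheses, which is the confusion the issue describes.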
[jira] [Commented] (SOLR-13548) Migrate Solr's Moin wiki to Confluence
[ https://issues.apache.org/jira/browse/SOLR-13548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874946#comment-16874946 ] Jan Høydahl commented on SOLR-13548: The list of redirects I'll request from INFRA for Solr are: {noformat} https://wiki.apache.org/solr/Support -> https://cwiki.apache.org/confluence/display/solr/Support https://wiki.apache.org/solr/PublicServers -> https://cwiki.apache.org/confluence/display/solr/PublicServers https://wiki.apache.org/solr/SolrTerminology -> http://lucene.apache.org/solr/guide/solr-glossary.html https://wiki.apache.org/solr/SpatialSearch -> https://cwiki.apache.org/confluence/display/solr/Spatial+Search https://wiki.apache.org/solr/HowToContribute -> https://cwiki.apache.org/confluence/display/solr/HowToContribute https://wiki.apache.org/solr/SolrSecurity -> https://cwiki.apache.org/confluence/display/solr/SolrSecurity https://wiki.apache.org/solr/SolrPerformanceProblems -> https://cwiki.apache.org/confluence/display/solr/SolrPerformanceProblems https://wiki.apache.org/solr/NegativeQueryProblems -> https://cwiki.apache.org/confluence/display/solr/NegativeQueryProblems https://wiki.apache.org/solr/IntegratingSolr -> https://cwiki.apache.org/confluence/display/solr/IntegratingSolr https://wiki.apache.org/solr/FAQ -> https://cwiki.apache.org/confluence/display/solr/FAQ {noformat} > Migrate Solr's Moin wiki to Confluence > -- > > Key: SOLR-13548 > URL: https://issues.apache.org/jira/browse/SOLR-13548 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: website >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Attachments: SolrCwikiPages.txt, SolrMoinTitles.txt, > create_dummy_confluence_pages.py > > > We have a deadline end of June to migrate Moin wiki to Confluence. 
> This Jira will track migration of Solr's [https://wiki.apache.org/solr/] over > to [https://cwiki.apache.org/confluence/display/SOLR] > The old Confluence space currently hosts the old Reference Guide for version > 6.5 before we moved to asciidoc. This will be overwritten. > Steps: > # Delete all pages in current SOLR space > ## Q: Can we do a bulk delete ourselves or do we need to ask INFRA? > # The rules in {{.htaccess}} which redirects to the 6.6 guide will remain as > is > # Run the migration tool at > [https://selfserve.apache.org|https://selfserve.apache.org/] > # Add a clearly visible link from front page to the ref guide for people > landing there for docs > After migration we'll clean up and weed out what is not needed, and then > start moving developer-centric content into the main git repo (which will be > covered in other JIRAs)
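For context, each redirect in the list above would translate into something like the following Apache httpd rule on the old wiki host (a hypothetical sketch; the actual configuration INFRA deploys is not shown in this thread):

```
# Hypothetical httpd/.htaccess rule for one of the requested redirects;
# INFRA's actual mechanism and syntax may differ.
Redirect permanent /solr/Support https://cwiki.apache.org/confluence/display/solr/Support
```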
[jira] [Commented] (SOLR-13571) Make recent RefGuide rank well in Google
[ https://issues.apache.org/jira/browse/SOLR-13571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874944#comment-16874944 ] Alexandre Rafalovitch commented on SOLR-13571: -- We could definitely do a sitemap. But also, we could update the redirect list and see if that makes a lot of difference. I had a quick look in the infra repo and it seems to be two files (solr_id_to_new.map.txt and solr_name_to_new.map.txt). These seem to correspond to those we generated in SOLR-10595. So perhaps we just need to review those files for target file name changes (maybe 99% the same) and ask Infra to refresh the files with the new URL base of 8.1. Also, if we could get access to the Google Webmaster tools, that would be nice. It can be done by publishing a file to the server; can we do that outside of a full publication process? Finally, if we republish 6.6 with an additional canonical header pointing to latest (or 8.1 or whatever), this may also refocus the search ranking. The work for that would probably be identical to that required to redo the maps. > Make recent RefGuide rank well in Google > > > Key: SOLR-13571 > URL: https://issues.apache.org/jira/browse/SOLR-13571 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Jan Høydahl >Priority: Major > > Spinoff from SOLR-13548 > The old Confluence ref-guide has a lot of pages pointing to it, and all of > that link karma is delegated to the {{/solr/guide/6_6/}} html ref guide, > making it often rank top. However we'd want newer content to rank high. See > these comments for some first ideas.
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-12.0.1) - Build # 24305 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24305/ Java: 64bit/jdk-12.0.1 -XX:-UseCompressedOops -XX:+UseG1GC 4 tests failed. FAILED: org.apache.lucene.util.TestRamUsageEstimator.testMap Error Message: expected:<25152.0> but was:<30184.0> Stack Trace: java.lang.AssertionError: expected:<25152.0> but was:<30184.0> at __randomizedtesting.SeedInfo.seed([62DD86C90DCA26F3:42FB321A17313581]:0) at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:553) at org.junit.Assert.assertEquals(Assert.java:683) at org.apache.lucene.util.TestRamUsageEstimator.testMap(TestRamUsageEstimator.java:136) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:567) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) 
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:835) FAILED: org.apache.lucene.util.TestRamUsageEstimator.testMap Error Message: expected:<25152.0> but was:<30184.0> Stack Trace: java.lang.AssertionError: expected:<25152.0> but was:<30184.0> at __randomizedtesting.SeedInfo.seed([62DD86C90DCA26F3:42FB321A17313581]:0) at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:553) at org.junit.Assert.assertEquals(Assert.java:683) at
[jira] [Resolved] (LUCENE-8858) Migrate Lucene's Moin wiki to Confluence
[ https://issues.apache.org/jira/browse/LUCENE-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Høydahl resolved LUCENE-8858. - Resolution: Fixed Resolving this again. I'll request INFRA to setup the redirects I compiled: {noformat} https://wiki.apache.org/lucene-java/Support -> https://cwiki.apache.org/confluence/display/lucene/Support https://wiki.apache.org/lucene-java/PoweredBy -> https://cwiki.apache.org/confluence/display/lucene/PoweredBy https://wiki.apache.org/lucene-java/LucenePapers -> https://cwiki.apache.org/confluence/display/lucene/LucenePapers https://wiki.apache.org/lucene-java/ImproveIndexingSpeed -> https://cwiki.apache.org/confluence/display/lucene/ImproveIndexingSpeed https://wiki.apache.org/lucene-java/ReleaseTodo -> https://cwiki.apache.org/confluence/display/lucene/ReleaseTodo {noformat} > Migrate Lucene's Moin wiki to Confluence > > > Key: LUCENE-8858 > URL: https://issues.apache.org/jira/browse/LUCENE-8858 > Project: Lucene - Core > Issue Type: Task > Components: general/website >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Attachments: lucene-cwiki.txt, lucene-moin.txt > > > We have a deadline end of June to migrate Moin wiki to Confluence. > This Jira will track migration of Lucene's > https://wiki.apache.org/lucene-java/ over to > https://cwiki.apache.org/confluence/display/LUCENE > The old Confluence space will be overwritten as it is not used. > After migration we'll clean up and weed out what is not needed, and then > start moving developer-centric content into the main git repo (which will be > covered in other JIRAs)
[jira] [Commented] (SOLR-12554) Expose IndexWriterConfig's RAMPerThreadHardLimitMB as SolrConfig.xml param
[ https://issues.apache.org/jira/browse/SOLR-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874940#comment-16874940 ] Munendra S N commented on SOLR-12554: - [^SOLR-12554.patch] * Fix test failure * Update documentation for RAMPerThreadHardLimitMB [~ichattopadhyaya] could you please review this, especially documentation changes? > Expose IndexWriterConfig's RAMPerThreadHardLimitMB as SolrConfig.xml param > -- > > Key: SOLR-12554 > URL: https://issues.apache.org/jira/browse/SOLR-12554 > Project: Solr > Issue Type: New Feature >Reporter: Ishan Chattopadhyaya >Assignee: Munendra S N >Priority: Major > Attachments: SOLR-12554.patch, SOLR-12554.patch, SOLR-12554.patch > > > Currently, the RAMPerThreadHardLimitMB parameter of IWC is not exposed. This > is useful to control flush policies.
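As a sketch of what is being proposed, the setting would presumably be exposed alongside the other IndexWriter knobs under `<indexConfig>` in solrconfig.xml. The element name below is an assumption based on the issue title, not the final patch:

```
<!-- Hypothetical solrconfig.xml fragment; the element name is assumed
     from the issue title and may differ in the committed patch. -->
<indexConfig>
  <ramBufferSizeMB>100</ramBufferSizeMB>
  <ramPerThreadHardLimitMB>60</ramPerThreadHardLimitMB>
</indexConfig>
```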
[jira] [Updated] (SOLR-12554) Expose IndexWriterConfig's RAMPerThreadHardLimitMB as SolrConfig.xml param
[ https://issues.apache.org/jira/browse/SOLR-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated SOLR-12554: Attachment: SOLR-12554.patch > Expose IndexWriterConfig's RAMPerThreadHardLimitMB as SolrConfig.xml param > -- > > Key: SOLR-12554 > URL: https://issues.apache.org/jira/browse/SOLR-12554 > Project: Solr > Issue Type: New Feature >Reporter: Ishan Chattopadhyaya >Assignee: Munendra S N >Priority: Major > Attachments: SOLR-12554.patch, SOLR-12554.patch, SOLR-12554.patch > > > Currently, the RAMPerThreadHardLimitMB parameter of IWC is not exposed. This > is useful to control flush policies.
[jira] [Updated] (SOLR-12364) edismax "boost" is not tested
[ https://issues.apache.org/jira/browse/SOLR-12364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated SOLR-12364: Status: Patch Available (was: Open) > edismax "boost" is not tested > - > > Key: SOLR-12364 > URL: https://issues.apache.org/jira/browse/SOLR-12364 > Project: Solr > Issue Type: Improvement > Components: query parsers >Reporter: David Smiley >Priority: Minor > Attachments: SOLR-12364.patch > > > I see no trace of any test of the "boost" param for edismax. "bq" is tested, > but not "boost". Ouch. The test ought to go here: TestExtendedDismaxParser. > I searched in many ways, not limited to that test file.
[jira] [Commented] (SOLR-12364) edismax "boost" is not tested
[ https://issues.apache.org/jira/browse/SOLR-12364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16874938#comment-16874938 ] Munendra S N commented on SOLR-12364: - [^SOLR-12364.patch] * Adds tests for bf (I couldn't find any test for this either) and boost * Modifies some tests in TestExtendedDismaxParser to use expectThrows > edismax "boost" is not tested > - > > Key: SOLR-12364 > URL: https://issues.apache.org/jira/browse/SOLR-12364 > Project: Solr > Issue Type: Improvement > Components: query parsers >Reporter: David Smiley >Priority: Minor > Attachments: SOLR-12364.patch > > > I see no trace of any test of the "boost" param for edismax. "bq" is tested, > but not "boost". Ouch. The test ought to go here: TestExtendedDismaxParser. > I searched in many ways, not limited to that test file.
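For anyone following the distinction under test, the three edismax boosting params differ roughly as follows (hypothetical request parameters for illustration; field names are made up): boost multiplies the score by a function query, while bq and bf add to it.

```
q=ipod&defType=edismax&qf=name&boost=log(sum(popularity,1))   # multiplicative function boost
q=ipod&defType=edismax&qf=name&bq=cat:electronics^5           # additive boost query
q=ipod&defType=edismax&qf=name&bf=recip(price,1,1000,1000)    # additive function boost
```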
[jira] [Updated] (SOLR-12364) edismax "boost" is not tested
[ https://issues.apache.org/jira/browse/SOLR-12364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated SOLR-12364: Attachment: SOLR-12364.patch > edismax "boost" is not tested > - > > Key: SOLR-12364 > URL: https://issues.apache.org/jira/browse/SOLR-12364 > Project: Solr > Issue Type: Improvement > Components: query parsers >Reporter: David Smiley >Priority: Minor > Attachments: SOLR-12364.patch > > > I see no trace of any test of the "boost" param for edismax. "bq" is tested, > but not "boost". Ouch. The test ought to go here: TestExtendedDismaxParser. > I searched in many ways, not limited to that test file.
[GitHub] [lucene-solr] atris commented on issue #729: LUCENE-8862: Introduce Collector Level Memory Accounting
atris commented on issue #729: LUCENE-8862: Introduce Collector Level Memory Accounting URL: https://github.com/apache/lucene-solr/pull/729#issuecomment-506740202 @jpountz Added the same, please see if it works now