[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 329 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/329/ Java: 32bit/jdk1.8.0_144 -client -XX:+UseG1GC 4 tests failed. FAILED: org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest.testConsistencyOnExceptions Error Message: Captured an uncaught exception in thread: Thread[id=21, name=ReplicationThread-indexAndTaxo, state=RUNNABLE, group=TGRP-IndexAndTaxonomyReplicationClientTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=21, name=ReplicationThread-indexAndTaxo, state=RUNNABLE, group=TGRP-IndexAndTaxonomyReplicationClientTest] at __randomizedtesting.SeedInfo.seed([7B85E791A61C20E0:F40B0031B470D31F]:0) Caused by: java.lang.AssertionError: handler failed too many times: -1 at __randomizedtesting.SeedInfo.seed([7B85E791A61C20E0]:0) at org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest$4.handleUpdateException(IndexAndTaxonomyReplicationClientTest.java:422) at org.apache.lucene.replicator.ReplicationClient$ReplicationThread.run(ReplicationClient.java:77) FAILED: org.apache.solr.cloud.TestRandomRequestDistribution.test Error Message: Timeout occured while waiting response from server at: http://127.0.0.1:53080 Stack Trace: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:53080 at __randomizedtesting.SeedInfo.seed([231278813625D508:AB46475B98D9B8F0]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103) at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:314) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at
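Randomized-test failures like the ones above can generally be replayed locally using the seed printed in the `__randomizedtesting.SeedInfo.seed([...])` line. A minimal sketch of composing such a command for the first failure, assuming an ant-based lucene-solr checkout; the command is only assembled and printed here, not executed:

```shell
# Build the local reproduction command for the first failing test above.
# The seed is the first value in the report's SeedInfo.seed([...]) line.
SEED="7B85E791A61C20E0"
CMD="ant test -Dtestcase=IndexAndTaxonomyReplicationClientTest -Dtests.method=testConsistencyOnExceptions -Dtests.seed=${SEED}"
echo "${CMD}"
```

Run from `lucene/replicator/` in the checkout, such a command pins the randomized framework to the failing seed; a passing local run usually points at an environment-dependent or flaky failure.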
[jira] [Updated] (SOLR-11723) Solr6.6- Full Import or Delta Import are not responding, showing idle status always for all cores suddenly
[ https://issues.apache.org/jira/browse/SOLR-11723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chandrasekhar updated SOLR-11723: - Description: Suddenly Solr is not importing data. The full-import, delta-import, and reload-config commands are not responding and show idle status after hitting full-import or delta-import from the URL. But I am able to run queries using the query handler. There are already 9 cores running successfully. The issue started when I created a new 10th core. Moreover, all cores in my Solr instance have the same issue. I haven't seen any errors in the admin console logging section or in the Solr server logs. (was: Suddenly not importing data into solr. The full-import or delta-import or reload-config commands not responding and showing idle status after hitting full import or delta import commands from url. But i am able to run the queryies using query handler. There are already 9 cores are running succesfully. It started this issue, when i created new 10th core. Moreover all cores in my solr instance got same issue. ) > Solr6.6- Full Import or Delta Import are not responding, showing idle status > always for all cores suddenly > -- > > Key: SOLR-11723 > URL: https://issues.apache.org/jira/browse/SOLR-11723 > Project: Solr > Issue Type: Bug > Security Level: Public (Default Security Level. Issues are Public) > Components: contrib - DataImportHandler > Affects Versions: 6.6 > Environment: Linux environment, Solr 6.6 with 10 cores already running. > Reporter: chandrasekhar > Priority: Blocker > > Suddenly Solr is not importing data. The full-import, delta-import, and reload-config commands are not responding and show idle status after hitting full-import or delta-import from the URL. But I am able to run queries using the query handler. There are already 9 cores running successfully. The issue started when I created a new 10th core. Moreover, all cores in my Solr instance have the same issue.
I haven't seen any errors in the > admin console logging section or in the Solr server logs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Lucene/Solr 7.2
+1, this is overdue. On Dec 4, 2017 16:38, "Varun Thacker" wrote: +1 Adrien! Thanks for taking it up. On Fri, Dec 1, 2017 at 9:05 AM, Christine Poerschke (BLOOMBERG/ QUEEN VIC) < cpoersc...@bloomberg.net> wrote: > I'd like to see https://issues.apache.org/jira/browse/SOLR-9137 included > in the release. Hoping to commit it on/by Tuesday. > > Christine > > From: dev@lucene.apache.org At: 12/01/17 10:11:59 > To: dev@lucene.apache.org > Subject: Lucene/Solr 7.2 > > Hello, > > It's been more than 6 weeks since we released 7.1 and we have accumulated a > good set of changes, so I think we should release Lucene/Solr 7.2.0. > > There is one change that I would like to have before building an RC: > LUCENE-8043 [1], which looks like it is almost ready to be merged. Please > let me know if there are any other changes that should make it to the > release. > > I volunteer to be the release manager. I'm currently thinking of building > the first release candidate next Wednesday, December 6th. > > [1] https://issues.apache.org/jira/browse/LUCENE-8043 > >
[jira] [Resolved] (SOLR-9743) An UTILIZENODE command
[ https://issues.apache.org/jira/browse/SOLR-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul resolved SOLR-9743. -- Resolution: Fixed Fix Version/s: 7.2 > An UTILIZENODE command > -- > > Key: SOLR-9743 > URL: https://issues.apache.org/jira/browse/SOLR-9743 > Project: Solr > Issue Type: Sub-task > Security Level: Public (Default Security Level. Issues are Public) > Reporter: Noble Paul > Assignee: Noble Paul > Fix For: 7.2 > > > The command would accept one or more nodes and create appropriate replicas > based on some strategy. > The params are: > * node: (required && multi-valued): the nodes to be deployed > * collection: (optional): the collection to which the node should be added. > If this parameter is not passed, try to assign to all collections. > example: > {code} > action=UTILIZENODE=gettingstarted > {code}
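The `{code}` example above appears mangled in transit (an ampersand and parameter name seem to have been lost). A hedged sketch of how such a Collections API call might be composed, based only on the `action` and `node` parameter names in the description; the host, port, and node name below are hypothetical, and the URL is only printed, not sent:

```shell
# Compose a hypothetical UTILIZENODE request URL (not executed here).
# Host, port, and node name are placeholders, not values from the issue.
SOLR="http://localhost:8983/solr"
NODE="localhost:7574_solr"   # hypothetical node to receive replicas
URL="${SOLR}/admin/collections?action=UTILIZENODE&node=${NODE}"
echo "${URL}"
```

Against a live SolrCloud cluster, the printed URL would be passed to curl or a browser.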
[jira] [Commented] (SOLR-9743) An UTILIZENODE command
[ https://issues.apache.org/jira/browse/SOLR-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16278091#comment-16278091 ] ASF subversion and git services commented on SOLR-9743: --- Commit 093f6c2171b391aa8db4f5dc4ae4b6e1a7213597 in lucene-solr's branch refs/heads/branch_7x from [~noble.paul] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=093f6c2 ] SOLR-9743: A new UTILIZENODE command > An UTILIZENODE command > -- > > Key: SOLR-9743 > URL: https://issues.apache.org/jira/browse/SOLR-9743 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Assignee: Noble Paul > > The command would accept one or more nodes and create appropriate replicas > based on some strategy. > The params are > *node: (required && multi-valued) : The nodes to be deployed > * collection: (optional) The collection to which the node should be added > to. if this parameter is not passed, try to assign to all collections > example: > {code} > action=UTILIZENODE=gettingstarted > {code}
[jira] [Commented] (SOLR-9743) An UTILIZENODE command
[ https://issues.apache.org/jira/browse/SOLR-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16278090#comment-16278090 ] ASF subversion and git services commented on SOLR-9743: --- Commit c62d5384d2393d861b0ae498b094de80eb0caee6 in lucene-solr's branch refs/heads/branch_7x from [~noble.paul] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c62d538 ] SOLR-9743: A new UTILIZENODE command > An UTILIZENODE command > -- > > Key: SOLR-9743 > URL: https://issues.apache.org/jira/browse/SOLR-9743 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Assignee: Noble Paul > > The command would accept one or more nodes and create appropriate replicas > based on some strategy. > The params are > *node: (required && multi-valued) : The nodes to be deployed > * collection: (optional) The collection to which the node should be added > to. if this parameter is not passed, try to assign to all collections > example: > {code} > action=UTILIZENODE=gettingstarted > {code}
[jira] [Created] (SOLR-11723) Solr6.6- Full Import or Delta Import are not responding, showing idle status always for all cores suddenly
chandrasekhar created SOLR-11723: Summary: Solr6.6- Full Import or Delta Import are not responding, showing idle status always for all cores suddenly Key: SOLR-11723 URL: https://issues.apache.org/jira/browse/SOLR-11723 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: contrib - DataImportHandler Affects Versions: 6.6 Environment: Linux environment, Solr 6.6 with 10 cores already running. Reporter: chandrasekhar Priority: Blocker Suddenly Solr is not importing data. The full-import, delta-import, and reload-config commands are not responding and show idle status after hitting full-import or delta-import from the URL. But I am able to run queries using the query handler. There are already 9 cores running successfully. The issue started when I created a new 10th core. Moreover, all cores in my Solr instance have the same issue.
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+32) - Build # 21032 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21032/ Java: 64bit/jdk-10-ea+32 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testNodeAddedTriggerRestoreState Error Message: The trigger did not fire at all Stack Trace: java.lang.AssertionError: The trigger did not fire at all at __randomizedtesting.SeedInfo.seed([D660691469D13A46:5E5DE06B5311DBEB]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testNodeAddedTriggerRestoreState(TriggerIntegrationTest.java:414) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) Build Log: [...truncated 1796 lines...] [junit4] JVM J0: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J0-20171205_043122_85611843886385711779040.syserr [junit4] >>> JVM
[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 7041 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7041/ Java: 32bit/jdk1.8.0_144 -client -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.autoscaling.AutoScalingHandlerTest.testReadApi Error Message: expected:<1> but was:<0> Stack Trace: java.lang.AssertionError: expected:<1> but was:<0> at __randomizedtesting.SeedInfo.seed([BB1FFF1CC3BEC889:EC3604A9184C2A92]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.solr.cloud.autoscaling.AutoScalingHandlerTest.testReadApi(AutoScalingHandlerTest.java:724) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 12082 lines...] [junit4] Suite:
[jira] [Commented] (SOLR-11336) DocBasedVersionConstraintsProcessor should be more extensible and support multiple version fields
[ https://issues.apache.org/jira/browse/SOLR-11336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16278043#comment-16278043 ] David Smiley commented on SOLR-11336: - Overall +1 thus far. The docs for isVersionNewEnough was modified to say it returns "the solr version" instead of "true". But no, it's true/false. Perhaps you were toying with some refactoring to this method and backed out. I'm not sure what methods should be protected vs private but I can tell when you were deliberate about it, and those places make sense to me. Although likely out of scope, FYI increasingly URPs are allowing a parameter based configuration and then auto-register it such that you needn't have to touch your configuration to use it. See TemplateUpdateProcessorFactory and UUIDUpdateProcessorFactory for what I mean. > DocBasedVersionConstraintsProcessor should be more extensible and support > multiple version fields > - > > Key: SOLR-11336 > URL: https://issues.apache.org/jira/browse/SOLR-11336 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: master (8.0) >Reporter: Michael Braun >Priority: Minor > Attachments: SOLR-11336.patch, SOLR-11336.patch > > > DocBasedVersionConstraintsProcessor supports allowing document updates only > if the new version is greater than the old. However, if any behavior wants to > be extended / changed in minor ways, the entire class will need to be copied > and slightly modified rather than extending and changing the method in > question. > It would be nice if DocBasedVersionConstraintsProcessor stood on its own as a > non-private class. In addition, certain methods (such as pieces of > isVersionNewEnough) should be broken out into separate methods so they can be > extended such that someone can extend the processor class and override what > it means for a new version to be accepted (allowing equal versions through? 
> What if new is a lower not greater number?).
[jira] [Comment Edited] (SOLR-11508) Make coreRootDirectory configurable via an environment variable (SOLR_CORE_HOME)
[ https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16278034#comment-16278034 ] Shawn Heisey edited comment on SOLR-11508 at 12/5/17 5:26 AM: -- If Solr no longer requires solr.xml to start, then the solr home can be completely empty on startup, fulfilling the requirements mentioned for the data volume on Docker. If you're in standalone mode and you create a core with the commandline, then the commandline script will copy a config to $\{instancedDir\}/conf (where the instanceDir is inside the solr home), and when Solr is informed about the new core with the CoreAdmin API, part of the core startup will create $\{instanceDir\}/data, and under that directory, Lucene will create the index directory. bq. but it would condemn cloud mode users to choose between sticking to the default settings or mixing their configuration and data. In cloud mode, Solr doesn't mix config and data. The config is not on disk at all. It's in zookeeper. Even solr.xml can be in zookeeper when running in cloud mode. Which means that cloud mode can ALREADY work with no solr.xml file in the solr home, just like I am describing. bq. It's either that or we would need to externalize every configuration parameter available in solr.xml I agree that it would be important to make sure that a certain critical subset of solr.xml configuration parameters must be configurable with system properties, which should definitely include the various home directories, but I don't think that *everything* needs to be configurable that way. Even though solr.xml would not be REQUIRED to start Solr, you'd still be able to create that file and have Solr honor its settings. So to reiterate my modified proposal, for the master branch only: {quote} Eliminate coreRootDirectory entirely. Make sure there are only two "home" properties. One of them should be solr.index.home, and the other will either be solr.solr.home or solr.data.home. 
The latter makes a lot of sense, but the former has historical value and will support older configs better. If both solr.solr.home and solr.data.home are set in an upgraded configuration, Solr should log an error and refuse to start. Solr should be able to start up without a solr.xml file, but if one is found (either in zookeeper or the solr home) then it will be honored. {quote} We could backport these ideas to 7.x, but the "new" way of configuring things would have to be explicitly enabled, probably with a system property, and the current way of configuring things would need to remain supported for all of 7.x. was (Author: elyograg): If Solr no longer requires solr.xml to start, then the solr home can be completely empty on startup, fulfilling the requirements mentioned for the data volume on Docker. If you're in standalone mode and you create a core with the commandline, then the commandline script will copy a config to ${instancedDir}/conf (where the instanceDir is inside the solr home), and when Solr is informed about the new core with the CoreAdmin API, part of the core startup will create ${instanceDir}/data, and under that directory, Lucene will create the index directory. bq. but it would condemn cloud mode users to choose between sticking to the default settings or mixing their configuration and data. In cloud mode, Solr doesn't mix config and data. The config is not on disk at all. It's in zookeeper. Even solr.xml can be in zookeeper when running in cloud mode. Which means that cloud mode can ALREADY work with no solr.xml file in the solr home, just like I am describing. bq. 
It's either that or we would need to externalize every configuration parameter available in solr.xml I agree that it would be important to make sure that a certain critical subset of solr.xml configuration parameters must be configurable with system properties, which should definitely include the various home directories, but I don't think that *everything* needs to be configurable that way. Even though solr.xml would not be REQUIRED to start Solr, you'd still be able to create that file and have Solr honor its settings. So to reiterate my modified proposal, for the master branch only: {quote} Eliminate coreRootDirectory entirely. Make sure there are only two "home" properties. One of them should be solr.index.home, and the other will either be solr.solr.home or solr.data.home. The latter makes a lot of sense, but the former has historical value and will support older configs better. If both solr.solr.home and solr.data.home are set in an upgraded configuration, Solr should log an error and refuse to start. Solr should be able to start up without a solr.xml file, but if one is found (either in zookeeper or the solr home) then it will be honored. {quote} We could backport these ideas to 7.x, but the "new" way of configuring things
[jira] [Commented] (SOLR-11508) Make coreRootDirectory configurable via an environment variable (SOLR_CORE_HOME)
[ https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16278034#comment-16278034 ] Shawn Heisey commented on SOLR-11508: - If Solr no longer requires solr.xml to start, then the solr home can be completely empty on startup, fulfilling the requirements mentioned for the data volume on Docker. If you're in standalone mode and you create a core with the commandline, then the commandline script will copy a config to ${instancedDir}/conf (where the instanceDir is inside the solr home), and when Solr is informed about the new core with the CoreAdmin API, part of the core startup will create ${instanceDir}/data, and under that directory, Lucene will create the index directory. bq. but it would condemn cloud mode users to choose between sticking to the default settings or mixing their configuration and data. In cloud mode, Solr doesn't mix config and data. The config is not on disk at all. It's in zookeeper. Even solr.xml can be in zookeeper when running in cloud mode. Which means that cloud mode can ALREADY work with no solr.xml file in the solr home, just like I am describing. bq. It's either that or we would need to externalize every configuration parameter available in solr.xml I agree that it would be important to make sure that a certain critical subset of solr.xml configuration parameters must be configurable with system properties, which should definitely include the various home directories, but I don't think that *everything* needs to be configurable that way. Even though solr.xml would not be REQUIRED to start Solr, you'd still be able to create that file and have Solr honor its settings. So to reiterate my modified proposal, for the master branch only: {quote} Eliminate coreRootDirectory entirely. Make sure there are only two "home" properties. One of them should be solr.index.home, and the other will either be solr.solr.home or solr.data.home. 
The latter makes a lot of sense, but the former has historical value and will support older configs better. If both solr.solr.home and solr.data.home are set in an upgraded configuration, Solr should log an error and refuse to start. Solr should be able to start up without a solr.xml file, but if one is found (either in zookeeper or the solr home) then it will be honored. {quote} We could backport these ideas to 7.x, but the "new" way of configuring things would have to be explicitly enabled, probably with a system property, and the current way of configuring things would need to remain supported for all of 7.x. > Make coreRootDirectory configurable via an environment variable > (SOLR_CORE_HOME) > > > Key: SOLR-11508 > URL: https://issues.apache.org/jira/browse/SOLR-11508 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Marc Morissette > > (Heavily edited) > Since Solr 7, it is possible to store Solr cores in separate disk locations > using solr.data.home (see SOLR-6671). This is very useful when running Solr > in Docker where data must be stored in a directory which is independent from > the rest of the container. > While this works well in standalone mode, it doesn't in Cloud mode as the > core.properties automatically created by Solr are still stored in > coreRootDirectory and cores created that way disappear when the Solr Docker > container is redeployed. > The solution is to configure coreRootDirectory to an empty directory that can > be mounted outside the Docker container. > The incoming patch makes this easier to do by allowing coreRootDirectory to > be configured via a solr.core.home system property and SOLR_CORE_HOME > environment variable. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
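The config/data split Shawn describes can be pictured with the two properties named in the discussion. A hedged sketch only: the property names come from the thread, but the paths are hypothetical and the start command is printed rather than run:

```shell
# Hypothetical layout under the proposal: configs and index data kept apart.
SOLR_HOME="/var/solr/home"        # solr.solr.home: optional solr.xml, core configs
SOLR_DATA_HOME="/var/solr/data"   # solr.data.home: per-core data directories
echo "bin/solr start -s ${SOLR_HOME} -Dsolr.data.home=${SOLR_DATA_HOME}"
```

With a layout like this, the data directory can be mounted as a Docker volume independently of the (possibly empty) solr home.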
[jira] [Commented] (SOLR-11706) JSON FacetModule can't compute stats (min,max,etc...) on multivalued fields
[ https://issues.apache.org/jira/browse/SOLR-11706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16278002#comment-16278002 ] David Smiley commented on SOLR-11706:

I just want to point out that "multi-valued functions" in fact exist -- {{org.apache.lucene.queries.function.valuesource.MultiValueSource}}. I'm not a fan -- the API feels awkward to me, but there it is. We pretty much only _use_ it today for some legacy-ish spatial stuff. I'd prefer this interface and some of the related methods on FunctionValues that take arrays be deprecated out of Lucene.

> JSON FacetModule can't compute stats (min,max,etc...) on multivalued fields
>
> Key: SOLR-11706
> URL: https://issues.apache.org/jira/browse/SOLR-11706
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Hoss Man
> Attachments: SOLR-11706.patch
>
> While trying to write some tests demonstrating equivalences between the StatsComponent and the JSON FacetModule, I discovered that the FacetModule's stat functions (min, max, etc...) don't seem to work on multivalued fields.
> Based on the stack traces, I gather the problem is that the FacetModule seems to rely exclusively on using the "Function" parsers to get a value source -- apparently w/o any other method of accumulating numeric stats from multivalued (numeric) DocValues?
[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 937 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/937/ Java: 32bit/jdk1.8.0_144 -server -XX:+UseSerialGC 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth Error Message: 1 thread leaked from SUITE scope at org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth: 1) Thread[id=10904, name=jetty-launcher-796-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at 
org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth: 1) Thread[id=10904, name=jetty-launcher-796-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505) at __randomizedtesting.SeedInfo.seed([5E6966172EC27F40]:0) FAILED: org.apache.solr.cloud.TestCollectionsAPIViaSolrCloudCluster.testCollectionCreateSearchDelete Error Message: Error from server at https://127.0.0.1:45067/solr/testcollection_shard1_replica_n2: Expected mime type application/octet-stream but got text/html.Error 404HTTP ERROR: 404 Problem accessing 
/solr/testcollection_shard1_replica_n2/update. Reason: Can not find: /solr/testcollection_shard1_replica_n2/update Powered by Jetty:// 9.3.20.v20170531 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:45067/solr/testcollection_shard1_replica_n2: Expected mime type application/octet-stream but got text/html. Error 404 HTTP ERROR: 404 Problem accessing /solr/testcollection_shard1_replica_n2/update. Reason: Can not find: /solr/testcollection_shard1_replica_n2/update Powered by Jetty:// 9.3.20.v20170531 at
[jira] [Commented] (SOLR-11485) Add olsRegress, spline and derivative Stream Evaluators
[ https://issues.apache.org/jira/browse/SOLR-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277995#comment-16277995 ] ASF subversion and git services commented on SOLR-11485:

Commit c51e34905037a44347530304d2be5b23e7095348 in lucene-solr's branch refs/heads/branch_7x from [~joel.bernstein] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c51e349 ]

SOLR-11485: Fix precommit

> Add olsRegress, spline and derivative Stream Evaluators
>
> Key: SOLR-11485
> URL: https://issues.apache.org/jira/browse/SOLR-11485
> Project: Solr
> Issue Type: New Feature
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Joel Bernstein
> Assignee: Joel Bernstein
> Fix For: 7.2, master (8.0)
> Attachments: SOLR-11485.patch
>
> This ticket adds support for OLS (ordinary least squares) multiple regression, spline interpolation and derivative functions.
[jira] [Commented] (SOLR-11485) Add olsRegress, spline and derivative Stream Evaluators
[ https://issues.apache.org/jira/browse/SOLR-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277994#comment-16277994 ] ASF subversion and git services commented on SOLR-11485:

Commit 282fb6d9f10c79aaf68cd8ab6e723a726c5fef22 in lucene-solr's branch refs/heads/branch_7x from [~joel.bernstein] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=282fb6d ]

SOLR-11485: Fix broken tests
[jira] [Commented] (SOLR-11485) Add olsRegress, spline and derivative Stream Evaluators
[ https://issues.apache.org/jira/browse/SOLR-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277993#comment-16277993 ] ASF subversion and git services commented on SOLR-11485:

Commit fb474d3627c1a169ff48c22a0edc7a0919ab58a6 in lucene-solr's branch refs/heads/branch_7x from [~joel.bernstein] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fb474d3 ]

SOLR-11485: Fix precommit
[jira] [Commented] (SOLR-11485) Add olsRegress, spline and derivative Stream Evaluators
[ https://issues.apache.org/jira/browse/SOLR-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277992#comment-16277992 ] ASF subversion and git services commented on SOLR-11485:

Commit e3e98c68471024c35537ac2d422d376920626a5f in lucene-solr's branch refs/heads/branch_7x from [~joel.bernstein] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e3e98c6 ]

SOLR-11485: Add olsRegress, spline and derivative Stream Evaluators
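For readers unfamiliar with what these evaluators compute, here is a rough sketch in plain Python. This is not Solr's implementation and does not use Solr's streaming-expression syntax; only the concepts (OLS regression and numerical differentiation) are taken from the ticket, and the helper names are invented:

```python
# Illustrative sketch of the math behind olsRegress/derivative; not Solr code.

def ols_regress(xs, ys):
    """Ordinary least squares for y = b0 + b1*x (single regressor)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    b1 = sxy / sxx                 # slope minimizing squared error
    b0 = mean_y - b1 * mean_x      # intercept
    return b0, b1

def derivative(f, x, h=1e-6):
    """Central-difference numerical derivative, akin in spirit to
    differentiating an interpolated (e.g. spline) function."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Perfectly linear data: y = 2x + 1, so OLS should recover b0=1, b1=2,
# and the derivative of the fitted line is 2 everywhere.
b0, b1 = ols_regress([1, 2, 3, 4], [3, 5, 7, 9])
slope = derivative(lambda x: b0 + b1 * x, 10.0)
```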
[jira] [Resolved] (SOLR-11304) Exception while returning document if LatLonPointSpatialField field is not stored
[ https://issues.apache.org/jira/browse/SOLR-11304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley resolved SOLR-11304. - Resolution: Duplicate Fix Version/s: 7.2 Thanks for reporting. We resolved this in SOLR-11532 (without noticing this issue) so I'm marking this as resolved and fixed in 7.2. Disclaimer: I didn't look at your patch but the symptom is spot on. > Exception while returning document if LatLonPointSpatialField field is not > stored > - > > Key: SOLR-11304 > URL: https://issues.apache.org/jira/browse/SOLR-11304 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6, 7.0 >Reporter: Karthik Ramachandran >Assignee: Karthik Ramachandran > Fix For: 7.2 > > Attachments: SOLR-11304.patch > > > NullPointerException while retrieving the document if LatLonPointSpatialField > is not stored > {code:xml} > docValues="true"/> > stored="false"/> > {code} > {code} > 2017-08-31 12:18:23.368 ERROR (qtp1866850137-23) [ x:latlon] > o.a.s.s.HttpSolrCall null:java.lang.NullPointerException > at > org.apache.solr.search.SolrDocumentFetcher.decorateDocValueFields(SolrDocumentFetcher.java:510) > at org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:160) > at org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:1) > at > org.apache.solr.response.TextResponseWriter.writeDocuments(TextResponseWriter.java:275) > at > org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:161) > at > org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209) > at > org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325) > at > org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120) > at > org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71) > at > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65) > at > 
org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:806) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:535) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at > org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:534) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283) > at 
org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) > at >
[jira] [Commented] (SOLR-11485) Add olsRegress, spline and derivative Stream Evaluators
[ https://issues.apache.org/jira/browse/SOLR-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277984#comment-16277984 ] ASF subversion and git services commented on SOLR-11485:

Commit cd10d0bda076b2eac6cf158ae238c609e2589eeb in lucene-solr's branch refs/heads/master from [~joel.bernstein] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cd10d0b ]

SOLR-11485: Fix precommit
[jira] [Updated] (SOLR-11336) DocBasedVersionConstraintsProcessor should be more extensible and support multiple version fields
[ https://issues.apache.org/jira/browse/SOLR-11336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Braun updated SOLR-11336:
- Attachment: SOLR-11336.patch

Latest version of the patch - still need to update documentation and add tests. Supports multiple version fields while preserving the existing parameters.

> DocBasedVersionConstraintsProcessor should be more extensible and support multiple version fields
>
> Key: SOLR-11336
> URL: https://issues.apache.org/jira/browse/SOLR-11336
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
> Affects Versions: master (8.0)
> Reporter: Michael Braun
> Priority: Minor
> Attachments: SOLR-11336.patch, SOLR-11336.patch
>
> DocBasedVersionConstraintsProcessor supports allowing document updates only if the new version is greater than the old. However, if any behavior needs to be extended or changed in minor ways, the entire class must be copied and slightly modified rather than extending it and overriding the method in question.
> It would be nice if DocBasedVersionConstraintsProcessor stood on its own as a non-private class. In addition, certain methods (such as pieces of isVersionNewEnough) should be broken out into separate methods so that someone can extend the processor class and override what it means for a new version to be accepted (allowing equal versions through? What if new is a lower, not greater, number?).
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 322 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/322/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth Error Message: 2 threads leaked from SUITE scope at org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth: 1) Thread[id=14786, name=jetty-launcher-2602-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505) 2) Thread[id=14777, name=jetty-launcher-2602-thread-2-EventThread, state=TIMED_WAITING, 
group=TGRP-TestImpersonationWithHadoopAuth] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE scope at org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth: 1) Thread[id=14786, name=jetty-launcher-2602-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at
[jira] [Updated] (SOLR-11336) DocBasedVersionConstraintsProcessor should be more extensible and support multiple version fields
[ https://issues.apache.org/jira/browse/SOLR-11336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Braun updated SOLR-11336:
- Summary: DocBasedVersionConstraintsProcessor should be more extensible and support multiple version fields (was: DocBasedVersionConstraintsProcessor should be more extensible)
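The kind of refactor SOLR-11336 asks for can be illustrated with a language-neutral sketch. This is plain Python with hypothetical names, not Solr's Java API; only the method name isVersionNewEnough comes from the issue text. The idea is to break the acceptance check out into an overridable method so a subclass can change the comparison without copying the whole processor:

```python
# Hypothetical sketch of the requested extensibility; not Solr's actual API.

class VersionConstraintsProcessor:
    def is_version_new_enough(self, new_version, old_version):
        # Default rule: only strictly greater versions are accepted.
        return new_version > old_version

    def process_add(self, doc, old_version):
        # Because the comparison lives in its own method, subclasses can
        # change acceptance semantics without duplicating this logic.
        if not self.is_version_new_enough(doc["version"], old_version):
            return None  # drop the stale update
        return doc

class AllowEqualVersionsProcessor(VersionConstraintsProcessor):
    # Override just the comparison: accept equal versions too.
    def is_version_new_enough(self, new_version, old_version):
        return new_version >= old_version

# An update carrying the same version is dropped by the default rule
# but kept by the subclass.
kept = AllowEqualVersionsProcessor().process_add(
    {"id": "1", "version": 5}, old_version=5)
dropped = VersionConstraintsProcessor().process_add(
    {"id": "1", "version": 5}, old_version=5)
```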
[jira] [Commented] (SOLR-11485) Add olsRegress, spline and derivative Stream Evaluators
[ https://issues.apache.org/jira/browse/SOLR-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277973#comment-16277973 ] ASF subversion and git services commented on SOLR-11485:

Commit 862f48761c24339b120a9fa66c6a5f6bc44d22d2 in lucene-solr's branch refs/heads/master from [~joel.bernstein] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=862f487 ]

SOLR-11485: Fix broken tests
[jira] [Assigned] (SOLR-11625) Solr may remove live index on Solr shutdown
[ https://issues.apache.org/jira/browse/SOLR-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson reassigned SOLR-11625: - Assignee: Erick Erickson > Solr may remove live index on Solr shutdown > --- > > Key: SOLR-11625 > URL: https://issues.apache.org/jira/browse/SOLR-11625 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6.1 >Reporter: Nikolay Martynov >Assignee: Erick Erickson > > This has been observed in the wild: > {noformat} > 2017-11-07 02:35:46.909 ERROR (qtp1724399560-8090) [c:xxx s:shard4 > r:core_node399 x:xxx_shard4_replica8] o.a.s.c.SolrCore > :java.nio.channels.ClosedByInterruptException > at > java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) > at sun.nio.ch.FileChannelImpl.size(FileChannelImpl.java:315) > at > org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:242) > at > org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:192) > at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:356) > at > org.apache.solr.core.SolrCore.cleanupOldIndexDirectories(SolrCore.java:3044) > at org.apache.solr.core.SolrCore.close(SolrCore.java:1575) > at org.apache.solr.servlet.HttpSolrCall.destroy(HttpSolrCall.java:582) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:374) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at > org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:534) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) > at java.lang.Thread.run(Thread.java:748) > 2017-11-07 02:35:46.912 INFO > 
(OldIndexDirectoryCleanupThreadForCore-xxx_shard4_replica8) [c:xxx s:shard4 > r:core_node399 x:xxx_shard4_replica8] o.a.s.c.DirectoryFactory Found 1 old > index directories to clean-up under > /opt/solr/server/solr/xxx_shard4_replica8/data/ afterReload=false > {noformat} > After this Solr cannot start claiming that some files that are supposed to > exist in the index do not exist. On one occasion we observed segments file > not being present. > We were able to trace this problem to {{SolrCore.cleanupOldIndexDirectories}} > using wrong index directory as current index because > {{SolrCore.getNewIndexDir}} could not read proper index directory
[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 96 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/96/ No tests ran. Build Log: [...truncated 28042 lines...] prepare-release-no-sign: [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist [copy] Copying 476 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/lucene [copy] Copying 215 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8 [smoker] NOTE: output encoding is UTF-8 [smoker] [smoker] Load release URL "file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/"... [smoker] [smoker] Test Lucene... [smoker] test basics... [smoker] get KEYS [smoker] 0.2 MB in 0.08 sec (3.1 MB/sec) [smoker] check changes HTML... [smoker] download lucene-7.2.0-src.tgz... [smoker] 31.2 MB in 1.26 sec (24.7 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-7.2.0.tgz... [smoker] 71.0 MB in 0.99 sec (71.6 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-7.2.0.zip... [smoker] 81.5 MB in 1.26 sec (64.6 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack lucene-7.2.0.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.8... [smoker] got 6227 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-7.2.0.zip... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.8... [smoker] got 6227 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-7.2.0-src.tgz... [smoker] make sure no JARs/WARs in src dist... 
[smoker] run "ant validate" [smoker] [smoker] command "export JAVA_HOME="/home/jenkins/tools/java/latest1.8" PATH="/home/jenkins/tools/java/latest1.8/bin:$PATH" JAVACMD="/home/jenkins/tools/java/latest1.8/bin/java"; ant validate" failed: [smoker] Buildfile: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/lucene-7.2.0/build.xml [smoker] [smoker] common.compile-tools: [smoker] [smoker] -check-git-state: [smoker] [smoker] -git-cleanroot: [smoker] [smoker] -copy-git-state: [smoker] [smoker] git-autoclean: [smoker] [smoker] ivy-availability-check: [smoker] [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. [smoker] [smoker] -ivy-fail-disallowed-ivy-version: [smoker] [smoker] ivy-fail: [smoker] [smoker] ivy-configure: [smoker] [ivy:configure] :: Apache Ivy 2.4.0 - 20141213170938 :: http://ant.apache.org/ivy/ :: [smoker] [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/lucene-7.2.0/top-level-ivy-settings.xml [smoker] [smoker] resolve: [smoker] [smoker] init: [smoker] [smoker] compile-core: [smoker] [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/lucene-7.2.0/build/tools/classes/java [smoker] [javac] Compiling 7 source files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/lucene-7.2.0/build/tools/classes/java [smoker] [copy] Copying 1 file to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/lucene-7.2.0/build/tools/classes/java [smoker] [smoker] compile-tools: [smoker] [smoker] compile-tools: [smoker] [smoker] common.compile-tools: [smoker] [smoker] -check-git-state: [smoker] [smoker] -git-cleanroot: [smoker] [smoker] -copy-git-state: [smoker] [smoker] git-autoclean: [smoker] [smoker] 
ivy-availability-check: [smoker] [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. [smoker] [smoker] -ivy-fail-disallowed-ivy-version: [smoker] [smoker] ivy-fail: [smoker] [smoker] ivy-configure: [smoker] [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/lucene-7.2.0/top-level-ivy-settings.xml [smoker] [smoker] resolve: [smoker] [smoker] init: [smoker] [smoker] compile-core: [smoker] [smoker] compile-tools: [smoker] [mkdir] Created dir:
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 21031 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21031/ Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 5 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.BlobRepositoryCloudTest Error Message: Error from server at http://127.0.0.1:34343/solr: Could not fully create collection: col2 Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:34343/solr: Could not fully create collection: col2 at __randomizedtesting.SeedInfo.seed([418FA9B9AE7D42D9]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) at org.apache.solr.core.BlobRepositoryCloudTest.setupCluster(BlobRepositoryCloudTest.java:59) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingDVOnly Error Message: Unexpected number of elements in the 
group for intGSF: 8 Stack Trace: java.lang.AssertionError: Unexpected number of elements in the group for intGSF: 8 at __randomizedtesting.SeedInfo.seed([418FA9B9AE7D42D9:DA34C7E1E3257087]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingDVOnly(DocValuesNotIndexedTest.java:377) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at
[jira] [Commented] (SOLR-11485) Add olsRegress, spline and derivative Stream Evaluators
[ https://issues.apache.org/jira/browse/SOLR-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277915#comment-16277915 ] ASF subversion and git services commented on SOLR-11485: Commit 8750e5f2a97e2011da7a3c821dca38a31d0f9bf1 in lucene-solr's branch refs/heads/master from [~joel.bernstein] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8750e5f ] SOLR-11485: Add olsRegress, spline and derivative Stream Evaluators > Add olsRegress, spline and derivative Stream Evaluators > --- > > Key: SOLR-11485 > URL: https://issues.apache.org/jira/browse/SOLR-11485 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 7.2, master (8.0) > > Attachments: SOLR-11485.patch > > > This ticket adds support for OLS (ordinary least squares) multiple > regression, spline interpolation and derivative functions. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
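For context on what the new evaluators compute, here is a minimal pure-Python sketch of the underlying math for the single-predictor OLS case and a central-difference derivative. This is illustrative only: the actual Stream Evaluators handle multiple regression and spline-based derivatives, and none of the names below are Solr API.

```python
# Minimal sketch of the math behind olsRegress and derivative
# (single predictor, equally spaced samples); names are illustrative.

def ols_regress(xs, ys):
    """Return (intercept, slope) minimizing the sum of squared residuals."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return my - slope * mx, slope

def derivative(ys, h=1.0):
    """Central-difference derivative of equally spaced samples."""
    return [(ys[i + 1] - ys[i - 1]) / (2 * h) for i in range(1, len(ys) - 1)]

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # exactly y = 1 + 2x
print(ols_regress(xs, ys))          # (1.0, 2.0)
print(derivative(ys))               # [2.0, 2.0]
```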
[jira] [Commented] (SOLR-11716) Solr website features page pointing to the old ref guide
[ https://issues.apache.org/jira/browse/SOLR-11716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277917#comment-16277917 ] Varun Thacker commented on SOLR-11716: -- Thanks Cassandra! > Solr website features page pointing to the old ref guide > > > Key: SOLR-11716 > URL: https://issues.apache.org/jira/browse/SOLR-11716 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Cassandra Targett > > I was going through http://lucene.apache.org/solr/features.html and then > clicked on one of the ref guide pages. The link itself is > https://cwiki.apache.org/confluence/display/solr/Documents%2C+Fields%2C+and+Schema+Design > which redirects to > https://lucene.apache.org/solr/guide/6_6/documents-fields-and-schema-design.html > We need to update > https://svn.apache.org/repos/asf/lucene/cms/trunk/content/solr/features.mdtext > and point the links to the new ref guide so that it always redirects to the > latest
[jira] [Commented] (SOLR-11485) Add olsRegress, spline and derivative Stream Evaluators
[ https://issues.apache.org/jira/browse/SOLR-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277916#comment-16277916 ] ASF subversion and git services commented on SOLR-11485: Commit f8c69270a130bdd66f2571f932370ff45801501e in lucene-solr's branch refs/heads/master from [~joel.bernstein] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f8c6927 ] SOLR-11485: Fix precommit > Add olsRegress, spline and derivative Stream Evaluators > --- > > Key: SOLR-11485 > URL: https://issues.apache.org/jira/browse/SOLR-11485 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 7.2, master (8.0) > > Attachments: SOLR-11485.patch > > > This ticket adds support for OLS (ordinary least squares) multiple > regression, spline interpolation and derivative functions.
[JENKINS] Lucene-Solr-Tests-master - Build # 2204 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2204/ All tests passed Build Log: [...truncated 11738 lines...] [junit4] JVM J2: stdout was not empty, see: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/temp/junit4-J2-20171205_002225_5693460823100441639110.sysout [junit4] >>> JVM J2 emitted unexpected output (verbatim) [junit4] # [junit4] # There is insufficient memory for the Java Runtime Environment to continue. [junit4] # Native memory allocation (mmap) failed to map 12288 bytes[thread 139998985160448 also had an error] [junit4] [thread 13017158400 also had an error] [junit4] [thread 139998984107776 also had an error] [junit4] for committing reserved memory. [junit4] # An error report file with more information is saved as: [junit4] # [thread 139998975702784 also had an error] [junit4] [thread 139998994745088 also had an error] [junit4] [thread 139998978860800 also had an error] [junit4] /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J2/hs_err_pid0.log [junit4] [thread 139998977808128 also had an error] [junit4] [thread 13021360896 also had an error] [junit4] <<< JVM J2: EOF [junit4] JVM J2: stderr was not empty, see: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/temp/junit4-J2-20171205_002225_5698363510189808995475.syserr [junit4] >>> JVM J2 emitted unexpected output (verbatim) [junit4] Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x7f53d2bee000, 12288, 0) failed; error='Cannot allocate memory' (errno=12) [junit4] Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x7f540db78000, 12288, 0) failed; error='Cannot allocate memory' (errno=12) [junit4] Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x7f540f9fc000, 12288, 0) failed; error='Cannot allocate memory' (errno=12) [junit4] Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x7f540da77000, 12288, 0) 
failed; error='Cannot allocate memory' (errno=12) [junit4] Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x7f540d273000, 12288, 0) failed; error='Cannot allocate memory' (errno=12) [junit4] Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x7f540e49c000, 12288, 0) failed; error='Cannot allocate memory' (errno=12) [junit4] Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x7f540d576000, 12288, 0) failed; error='Cannot allocate memory' (errno=12) [junit4] Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x7f540d475000, 12288, 0) failed; error='Cannot allocate memory' (errno=12) [junit4] Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x7f540fdfe000, 12288, 0) failed; error='Cannot allocate memory' (errno=12) [junit4] <<< JVM J2: EOF [...truncated 61 lines...] [junit4] JVM J0: stdout was not empty, see: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/temp/junit4-J0-20171205_002225_5447302560896296540165.sysout [junit4] >>> JVM J0 emitted unexpected output (verbatim) [junit4] # [junit4] # There is insufficient memory for the Java Runtime Environment to continue. [junit4] # Native memory allocation (mmap) failed to map 52428800 bytes for committing reserved memory. [junit4] # An error report file with more information is saved as: [junit4] # /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J0/hs_err_pid22219.log [junit4] <<< JVM J0: EOF [junit4] JVM J0: stderr was not empty, see: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/temp/junit4-J0-20171205_002225_5447201345503647330485.syserr [junit4] >>> JVM J0 emitted unexpected output (verbatim) [junit4] Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0xf238, 52428800, 0) failed; error='Cannot allocate memory' (errno=12) [junit4] <<< JVM J0: EOF [...truncated 1030 lines...] 
[junit4] JVM J1: stdout was not empty, see: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/temp/junit4-J1-20171205_002225_5604536211972955121655.sysout [junit4] >>> JVM J1 emitted unexpected output (verbatim) [junit4] # [junit4] # There is insufficient memory for the Java Runtime Environment to continue. [junit4] # Native memory allocation (mmap) failed to map 12288 bytes for committing reserved memory. [junit4] # An error report file with more information is saved as: [junit4] # /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J1/hs_err_pid22212.log [junit4] <<< JVM J1: EOF [junit4] JVM J1: stderr was not empty, see:
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1554 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1554/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testNodeAddedTriggerRestoreState Error Message: The trigger did not fire at all Stack Trace: java.lang.AssertionError: The trigger did not fire at all at __randomizedtesting.SeedInfo.seed([48B5B05F0558D9C9:C08839203F983864]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testNodeAddedTriggerRestoreState(TriggerIntegrationTest.java:414) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 12618 lines...] [junit4] Suite: org.apache.solr.cloud.autoscaling.TriggerIntegrationTest [junit4] 2> Creating dataDir:
[jira] [Updated] (SOLR-11485) Add olsRegress, spline and derivative Stream Evaluators
[ https://issues.apache.org/jira/browse/SOLR-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-11485: -- Summary: Add olsRegress, spline and derivative Stream Evaluators (was: Add olsRegress, spline, derivative Stream Evaluators) > Add olsRegress, spline and derivative Stream Evaluators > --- > > Key: SOLR-11485 > URL: https://issues.apache.org/jira/browse/SOLR-11485 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 7.2, master (8.0) > > Attachments: SOLR-11485.patch > > > This ticket adds support for OLS (ordinary least squares) multiple > regression, spline interpolation and derivative functions.
[jira] [Updated] (SOLR-11485) Add olsRegress, spline, derivative Stream Evaluators
[ https://issues.apache.org/jira/browse/SOLR-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-11485: -- Attachment: SOLR-11485.patch > Add olsRegress, spline, derivative Stream Evaluators > > > Key: SOLR-11485 > URL: https://issues.apache.org/jira/browse/SOLR-11485 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 7.2, master (8.0) > > Attachments: SOLR-11485.patch > > > This ticket adds support for OLS (ordinary least squares) multiple > regression, spline interpolation and derivative functions.
[jira] [Updated] (SOLR-11485) Add olsRegress, spline, derivative Stream Evaluators
[ https://issues.apache.org/jira/browse/SOLR-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-11485: -- Description: This ticket adds support for OLS (ordinary least squares) multiple regression, spline interpolation and derivative functions. (was: This ticket adds support for OLS (ordinary least squares) multiple regression, prediction and residuals.) > Add olsRegress, spline, derivative Stream Evaluators > > > Key: SOLR-11485 > URL: https://issues.apache.org/jira/browse/SOLR-11485 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 7.2, master (8.0) > > > This ticket adds support for OLS (ordinary least squares) multiple > regression, spline interpolation and derivative functions.
[jira] [Updated] (SOLR-11485) Add olsRegress, spline, derivative Stream Evaluators
[ https://issues.apache.org/jira/browse/SOLR-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-11485: -- Summary: Add olsRegress, spline, derivative Stream Evaluators (was: Add olsRegress, olsPredict and olsResiduals Stream Evaluators) > Add olsRegress, spline, derivative Stream Evaluators > > > Key: SOLR-11485 > URL: https://issues.apache.org/jira/browse/SOLR-11485 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 7.2, master (8.0) > > > This ticket adds support for OLS (ordinary least squares) multiple > regression, prediction and residuals.
[jira] [Commented] (SOLR-11722) API to create a Time Routed Alias and first collection
[ https://issues.apache.org/jira/browse/SOLR-11722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277892#comment-16277892 ] Gus Heck commented on SOLR-11722: - I think we can handle the creation of the properly formatted collection name, and maybe we can pull the creation metadata from an example collection? The example collection can be retained for future use, modified, or deleted as the user sees fit, since we will still record any parameters we care about as alias metadata. This should be nice and convenient for folks initially investigating this feature, and it also avoids duplicating configuration parameters already defined elsewhere. Someone checking it out for the first time can just supply the name of an existing collection with a time-valued field in it and start playing. Also, in thinking about usability, I suspect we may want to allow a full timestamp including a timezone specifier in the names for collections so that users don't have to dig into ZooKeeper or the Collections API to determine what timezone is in play. (they could even perhaps find out using {{=\[shard]}}). Such a thing would however require adjustment of the regular expressions in TimeRoutedAliasUpdateProcessor. How about a set of options like this? (expressed here as an addition to the v2 API _introspect for create-alias) {code} "collection-routing": { "type": "string", "description": "The type of routing to perform. Currently only 'time' is supported. This property is mutually exclusive with 'collections', and required with any route-* property. Collections created will be named for the alias and the first value allowed for the partition represented by each collection separated by an underscore. 
Time based routing will express the first value as a time stamp including the applicable timezone specifier (only time based routing is supported at this time)" }, "route-field": { "type": "string", "description": "The field in incoming documents that is consulted to decide which collection the document should be routed to." }, "route-start": { "type": "string", "description": "The earliest/lowest value for routeField that may be indexed into this alias. Documents with values less than this will return an error. For time based routing this may be a date math expression." }, "route-timezone": { "type": "string", "description": "Optional timezone for time based routing. If omitted, UTC timezone will be used" }, "route-increment": { "type": "string", "description": "A specification of the width of the interval for each partition collection. For time based routing this should be a date math expression fragment starting with the + character. When appended to a standard iso date the result should be a valid date math expression." }, "route-example-collection": { "type": "string", "description": "A collection to use as an initial example for the creation of the initial partition of a collection routing alias. Subsequent partitions will ignore this value and will take configuration values (config, numShards, etc) from the first partition." }, {code} > API to create a Time Routed Alias and first collection > -- > > Key: SOLR-11722 > URL: https://issues.apache.org/jira/browse/SOLR-11722 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: David Smiley > > This issue is about creating a single API command to create a "Time Routed > Alias" along with its first collection. Need to decide what endpoint URL it > is and parameters. 
> Perhaps in v2 it'd be {{/api/collections?command=create-routed-alias}} or > alternatively piggy-back off of command=create-alias but we add more options, > perhaps with a prefix like "router"? > Inputs: > * alias name > * misc collection creation metadata (e.g. config, numShards, ...) perhaps in > this context with a prefix like "collection." > * metadata for TimeRoutedAliasUpdateProcessor, currently: router.field > * date specifier for first collection; can include "date math". > We'll certainly add more options as future features unfold. > I believe the collection needs to be created first (referring to the alias > name via a core property), and then the alias pointing to it which demands > collections exist first. When figuring the collection name, you'll need to > reference the format in TimeRoutedAliasUpdateProcessor.
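The naming convention discussed in this thread (alias name plus the first value allowed for the partition, joined by an underscore, with time-based routing expressing that value as a timestamp) can be illustrated with a short sketch. The exact format is defined by TimeRoutedAliasUpdateProcessor and may differ; the date format below is an assumption for illustration only.

```python
from datetime import datetime, timezone

# Illustrative only: the real partition-collection name format is owned
# by TimeRoutedAliasUpdateProcessor; the "%Y-%m-%d" pattern is assumed.
def routed_collection_name(alias, partition_start):
    """Alias + the partition's first allowed value, underscore-separated."""
    return "{}_{}".format(alias, partition_start.strftime("%Y-%m-%d"))

name = routed_collection_name("logs", datetime(2017, 12, 5, tzinfo=timezone.utc))
print(name)  # logs_2017-12-05
```

Gus's suggestion above would extend the value portion to a full timestamp with a timezone suffix, which is why the processor's name-parsing regular expressions would need adjusting.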
[JENKINS] Lucene-Solr-Tests-7.x - Build # 269 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/269/ 6 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateWithDefaultConfigSet Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([BFB71A43A1F8089C:F11CBB04FE47726F]:0) at org.junit.Assert.fail(Assert.java:92) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertTrue(Assert.java:54) at org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateWithDefaultConfigSet(CollectionsAPISolrJTest.java:70) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.cloud.TestSkipOverseerOperations.testSkipLeaderOperations Error Message: Expected 2x1 for collection: collection1 null Live Nodes: [127.0.0.1:34873_solr, 127.0.0.1:39241_solr, 127.0.0.1:45364_solr] Last available state: null Stack Trace: java.lang.AssertionError: Expected 2x1 for collection:
[jira] [Commented] (SOLR-11508) Make coreRootDirectory configurable via an environment variable (SOLR_CORE_HOME)
[ https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277849#comment-16277849 ] Marc Morissette commented on SOLR-11508: [~elyograg] This is an interesting idea but I'm not sure how this solves the problem. It would be nice if Solr could start without solr.xml but it would condemn cloud mode users to choose between sticking to the default settings or mixing their configuration and data. It's either that or we would need to externalize every configuration parameter available in solr.xml (and there are a lot). > Make coreRootDirectory configurable via an environment variable > (SOLR_CORE_HOME) > > > Key: SOLR-11508 > URL: https://issues.apache.org/jira/browse/SOLR-11508 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Marc Morissette > > (Heavily edited) > Since Solr 7, it is possible to store Solr cores in separate disk locations > using solr.data.home (see SOLR-6671). This is very useful when running Solr > in Docker where data must be stored in a directory which is independent from > the rest of the container. > While this works well in standalone mode, it doesn't in Cloud mode as the > core.properties automatically created by Solr are still stored in > coreRootDirectory and cores created that way disappear when the Solr Docker > container is redeployed. > The solution is to configure coreRootDirectory to an empty directory that can > be mounted outside the Docker container. > The incoming patch makes this easier to do by allowing coreRootDirectory to > be configured via a solr.core.home system property and SOLR_CORE_HOME > environment variable. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+32) - Build # 936 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/936/ Java: 64bit/jdk-10-ea+32 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testTriggerThrottling Error Message: Both triggers should have fired by now Stack Trace: java.lang.AssertionError: Both triggers should have fired by now at __randomizedtesting.SeedInfo.seed([D5659617CDBEC029:2E473E321F1423BB]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testTriggerThrottling(TriggerIntegrationTest.java:205) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) Build Log: [...truncated 1729 lines...] [junit4] JVM J0: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/build/core/test/temp/junit4-J0-20171204_233023_2075838813275738204033.syserr [junit4] >>> JVM J0 emitted unexpected output (verbatim) [junit4] Java
[jira] [Commented] (SOLR-11508) Make coreRootDirectory configurable via an environment variable (SOLR_CORE_HOME)
[ https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277818#comment-16277818 ] Shawn Heisey commented on SOLR-11508: I'm starting to get an idea of the problem that you want to solve. What do you think of altering my proposal in one small way: making sure that Solr starts even if solr.xml cannot be found. This would allow you to point solr.solr.home to a location that's completely empty and still have Solr start. You would then have the option of adding a solr.xml if you desired some changes there, and even adding configsets if you wanted to run Solr in standalone mode but still have common configs like SolrCloud. I did try starting Solr 7.0.0 (already had it downloaded) without a solr.xml, and it refuses to start. I think this is not how it should behave. Having Solr log a warning (and possibly even output a message to the console) mentioning the missing solr.xml would be a good idea. I created a minimal solr.xml (just contained <solr/> on one line) and Solr did start, so it's not like it must have any config there. I have noticed that the stock solr.xml included with 7.x has a lot of config in it that uses various system properties, with defaults. I have no idea whether the default settings for these things are reasonable or not. We would need to make sure that the defaults are reasonable. > Make coreRootDirectory configurable via an environment variable > (SOLR_CORE_HOME) > > > Key: SOLR-11508 > URL: https://issues.apache.org/jira/browse/SOLR-11508 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Marc Morissette > > (Heavily edited) > Since Solr 7, it is possible to store Solr cores in separate disk locations > using solr.data.home (see SOLR-6671). This is very useful when running Solr > in Docker where data must be stored in a directory which is independent from > the rest of the container. 
> While this works well in standalone mode, it doesn't in Cloud mode as the > core.properties automatically created by Solr are still stored in > coreRootDirectory and cores created that way disappear when the Solr Docker > container is redeployed. > The solution is to configure coreRootDirectory to an empty directory that can > be mounted outside the Docker container. > The incoming patch makes this easier to do by allowing coreRootDirectory to > be configured via a solr.core.home system property and SOLR_CORE_HOME > environment variable.
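The incoming patch itself is not shown in this thread. As a sketch of the lookup order the issue proposes (the names solr.core.home and SOLR_CORE_HOME come from the issue summary; the method name, precedence of property over environment variable, and fallback to solr home are my assumptions, not the actual patch):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class CoreRootResolver {
    // Hypothetical resolution order: -Dsolr.core.home wins over the
    // SOLR_CORE_HOME environment variable, which wins over the default
    // of keeping cores under solr home (today's behaviour).
    static Path resolveCoreRoot(Path solrHome) {
        String prop = System.getProperty("solr.core.home");
        if (prop != null && !prop.isEmpty()) {
            return Paths.get(prop);
        }
        String env = System.getenv("SOLR_CORE_HOME");
        if (env != null && !env.isEmpty()) {
            return Paths.get(env);
        }
        return solrHome; // unchanged behaviour when neither is set
    }

    public static void main(String[] args) {
        System.setProperty("solr.core.home", "/mnt/solr-cores");
        System.out.println(resolveCoreRoot(Paths.get("/opt/solr/server/solr")));
    }
}
```

In the Docker scenario described above, /mnt/solr-cores would be a volume mounted from outside the container, so core.properties files survive a redeploy.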
[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 328 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/328/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC 5 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.DistribJoinFromCollectionTest Error Message: IOException occured when talking to server at: http://127.0.0.1:50049/solr/to_2x2_shard2_replica_n6 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: IOException occured when talking to server at: http://127.0.0.1:50049/solr/to_2x2_shard2_replica_n6 at __randomizedtesting.SeedInfo.seed([8D53E7BFCC56B55D]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1012) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816) at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) at org.apache.solr.cloud.DistribJoinFromCollectionTest.indexDoc(DistribJoinFromCollectionTest.java:234) at org.apache.solr.cloud.DistribJoinFromCollectionTest.setupCluster(DistribJoinFromCollectionTest.java:105) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://127.0.0.1:50049/solr/to_2x2_shard2_replica_n6 at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:657) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at
[jira] [Commented] (LUCENE-8048) Filesystems do not guarantee order of directories updates
[ https://issues.apache.org/jira/browse/LUCENE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277784#comment-16277784 ] Erick Erickson commented on LUCENE-8048: Fella goes out for a run and comes back and some kind person shows him how to solve the problem. Or maybe solves it for me... I'll run it through the rest of its paces, precommit test and the like, as well as, perhaps, finally figure out what that MockDirectoryWrapper is doing. Thanks a bunch Simon! > Filesystems do not guarantee order of directories updates > - > > Key: LUCENE-8048 > URL: https://issues.apache.org/jira/browse/LUCENE-8048 > Project: Lucene - Core > Issue Type: Bug >Reporter: Nikolay Martynov >Assignee: Erick Erickson > Attachments: LUCENE-8048.patch, LUCENE-8048.patch, Screen Shot > 2017-11-22 at 12.34.51 PM.png > > > Currently when index is written to disk the following sequence of events is > taking place: > * write segment file > * sync segment file > * write segment file > * sync segment file ... > * write list of segments > * sync list of segments > * rename list of segments > * sync index directory > This sequence leads to potential window of opportunity for system to crash > after 'rename list of segments' but before 'sync index directory' and > depending on exact filesystem implementation this may potentially lead to > 'list of segments' being visible in directory while some of the segments are > not. > Solution to this is to sync index directory after all segments have been > written. [This > commit|https://github.com/mar-kolya/lucene-solr/commit/58e05dd1f633ab9b02d9e6374c7fab59689ae71c] > shows idea implemented. I'm fairly certain that I didn't find all the places > this may be potentially happening.
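The fix under discussion boils down to fsyncing the directory itself, not only the files in it, so the directory entries become durable before anything references them. A minimal illustration of what directory fsync looks like at the NIO level (similar in spirit to the linked commit, but this is an illustrative sketch, not Lucene's actual IOUtils code; the directory trick works on POSIX filesystems and throws on Windows):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirFsyncDemo {
    // Write a file, fsync the file, then fsync its parent directory so that
    // the directory entry is durable too. Without the second step a crash can
    // leave the file's contents on disk but the name missing (or vice versa).
    static void writeDurably(Path file, byte[] data) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.TRUNCATE_EXISTING)) {
            ch.write(ByteBuffer.wrap(data));
            ch.force(true); // flush file contents and metadata
        }
        // Directories are opened read-only; force() flushes the entry.
        try (FileChannel dir = FileChannel.open(file.getParent(),
                StandardOpenOption.READ)) {
            dir.force(true);
        } catch (IOException e) {
            // Best effort: some platforms cannot fsync a directory.
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("dirsync");
        Path f = tmp.resolve("segments_1");
        writeDurably(f, "test".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(Files.readAllBytes(f), StandardCharsets.UTF_8)); // prints test
    }
}
```

The window described in the issue is exactly the gap between the rename of the segments list and this final directory sync.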
Re: [JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+32) - Build # 21030 - Still Unstable!
BTW, I've been saving these with the additional indexing output, I expect to be able to actually look Real Soon Now. Erick On Mon, Dec 4, 2017 at 3:28 PM, Policeman Jenkins Serverwrote: > Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21030/ > Java: 64bit/jdk-10-ea+32 -XX:+UseCompressedOops -XX:+UseSerialGC > > 4 tests failed. > FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores > > Error Message: > 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: > 1) Thread[id=65122, name=searcherExecutor-6470-thread-1, state=WAITING, > group=TGRP-TestLazyCores] at > java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method) at > java.base@10-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) > at > java.base@10-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2074) > at > java.base@10-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435) > at > java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1061) > at > java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121) > at > java.base@10-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) > at java.base@10-ea/java.lang.Thread.run(Thread.java:844) > > Stack Trace: > com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from > SUITE scope at org.apache.solr.core.TestLazyCores: >1) Thread[id=65122, name=searcherExecutor-6470-thread-1, state=WAITING, > group=TGRP-TestLazyCores] > at java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method) > at > java.base@10-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) > at > java.base@10-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2074) > at > 
java.base@10-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435) > at > java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1061) > at > java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121) > at > java.base@10-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) > at java.base@10-ea/java.lang.Thread.run(Thread.java:844) > at __randomizedtesting.SeedInfo.seed([7E6EDDDB79E39D4]:0) > > > FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores > > Error Message: > There are still zombie threads that couldn't be terminated:1) > Thread[id=65122, name=searcherExecutor-6470-thread-1, state=WAITING, > group=TGRP-TestLazyCores] at > java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method) at > java.base@10-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) > at > java.base@10-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2074) > at > java.base@10-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435) > at > java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1061) > at > java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121) > at > java.base@10-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) > at java.base@10-ea/java.lang.Thread.run(Thread.java:844) > > Stack Trace: > com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie > threads that couldn't be terminated: >1) Thread[id=65122, name=searcherExecutor-6470-thread-1, state=WAITING, > group=TGRP-TestLazyCores] > at java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method) > at > java.base@10-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) > at > 
java.base@10-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2074) > at > java.base@10-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435) > at > java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1061) > at > java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121) > at > java.base@10-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) > at java.base@10-ea/java.lang.Thread.run(Thread.java:844) > at __randomizedtesting.SeedInfo.seed([7E6EDDDB79E39D4]:0) > > > FAILED: org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test > > Error Message: > Timeout occured while waiting response from server at:
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+32) - Build # 21030 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21030/ Java: 64bit/jdk-10-ea+32 -XX:+UseCompressedOops -XX:+UseSerialGC 4 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) Thread[id=65122, name=searcherExecutor-6470-thread-1, state=WAITING, group=TGRP-TestLazyCores] at java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method) at java.base@10-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) at java.base@10-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2074) at java.base@10-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435) at java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1061) at java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121) at java.base@10-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base@10-ea/java.lang.Thread.run(Thread.java:844) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) Thread[id=65122, name=searcherExecutor-6470-thread-1, state=WAITING, group=TGRP-TestLazyCores] at java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method) at java.base@10-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) at java.base@10-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2074) at java.base@10-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435) at java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1061) at java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121) at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base@10-ea/java.lang.Thread.run(Thread.java:844) at __randomizedtesting.SeedInfo.seed([7E6EDDDB79E39D4]:0) FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=65122, name=searcherExecutor-6470-thread-1, state=WAITING, group=TGRP-TestLazyCores] at java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method) at java.base@10-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) at java.base@10-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2074) at java.base@10-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435) at java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1061) at java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121) at java.base@10-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base@10-ea/java.lang.Thread.run(Thread.java:844) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated: 1) Thread[id=65122, name=searcherExecutor-6470-thread-1, state=WAITING, group=TGRP-TestLazyCores] at java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method) at java.base@10-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194) at java.base@10-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2074) at java.base@10-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435) at java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1061) at java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121) at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base@10-ea/java.lang.Thread.run(Thread.java:844) at __randomizedtesting.SeedInfo.seed([7E6EDDDB79E39D4]:0) FAILED: org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test Error Message: Timeout occured while waiting response from server at: https://127.0.0.1:36043/s/d Stack Trace: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: https://127.0.0.1:36043/s/d at __randomizedtesting.SeedInfo.seed([7E6EDDDB79E39D4:8FB2D2071962542C]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654) at
[jira] [Commented] (SOLR-11720) Backup restore command completes before index is ready
[ https://issues.apache.org/jira/browse/SOLR-11720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277714#comment-16277714 ] Vitaly Lavrov commented on SOLR-11720: Backup restore command is sent as http://solr.host.com:8983/solr/admin/collections?action=RESTORE=offers=/backup=offers=101=json The target collection 'offers' gets deleted right before restore command is sent (so does configSet). 1. No commits are sent, only searches 2. Old searcher does not exist because collection is being restored 3. Timeline after restore command has shown 'completed' status: 10 seconds after 448435467 documents 20 seconds after 454947732 documents 30 seconds after 460556612 documents 40 seconds after 466567718 documents 50 seconds after 472862588 documents 60 seconds after 472862588 documents 70 seconds after 479985875 documents (final number, the correct one) Number of documents are measured with http://solr.host.com:8983/api/c/offers/query?q=*:* 4. Searches go directly to solr cloud > Backup restore command completes before index is ready > -- > > Key: SOLR-11720 > URL: https://issues.apache.org/jira/browse/SOLR-11720 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 7.1 >Reporter: Vitaly Lavrov > > Scenario: > 1. Run backup restore as async command > 2. Check command status until it has 'completed' status > 3. Query collection immediately for total number of documents > The problem is the query will return incorrect number. The query will return > correct number eventually if one keeps querying.
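Until the RESTORE status reflects searcher readiness, one client-side workaround is to keep polling the document count (e.g. numFound of q=*:*) until it stops changing. A generic sketch of that idea (helper names and intervals are mine, not a Solr API; note the timeline above shows a transient plateau at 472862588, so a single repeated reading is not enough):

```java
import java.util.function.LongSupplier;

public class StableCountPoller {
    // Keep reading a count until it has been identical for `stableReads`
    // consecutive reads, then treat it as settled.
    static long awaitStable(LongSupplier count, int stableReads,
                            int maxPolls, long sleepMillis) throws InterruptedException {
        long prev = count.getAsLong();
        int stable = 1;
        for (int i = 1; i < maxPolls && stable < stableReads; i++) {
            Thread.sleep(sleepMillis);
            long cur = count.getAsLong();
            if (cur == prev) {
                stable++;
            } else {
                stable = 1;
                prev = cur;
            }
        }
        return prev; // best effort if maxPolls is exhausted
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated numFound readings mirroring the timeline in the comment,
        // including the misleading plateau before the final number.
        long[] counts = {448435467L, 472862588L, 472862588L,
                         479985875L, 479985875L, 479985875L};
        int[] idx = {0};
        long settled = awaitStable(
                () -> counts[Math.min(idx[0]++, counts.length - 1)], 3, 20, 1L);
        System.out.println(settled); // prints 479985875
    }
}
```

With stableReads=3 the plateau at 472862588 (two equal readings) does not terminate the poll, and the final, correct count is reported.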
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4312 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4312/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 104 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.DisMaxRequestHandlerTest Error Message: org.apache.solr.DisMaxRequestHandlerTest Stack Trace: java.lang.ClassNotFoundException: org.apache.solr.DisMaxRequestHandlerTest at java.net.URLClassLoader$1.run(URLClassLoader.java:370) at java.net.URLClassLoader$1.run(URLClassLoader.java:362) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:361) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:348) at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.instantiate(SlaveMain.java:280) at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:240) at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:368) at com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:13) Caused by: java.io.FileNotFoundException: /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/classes/test/org/apache/solr/DisMaxRequestHandlerTest.class (Too many open files) at java.io.FileInputStream.open0(Native Method) at java.io.FileInputStream.open(FileInputStream.java:195) at java.io.FileInputStream.(FileInputStream.java:138) at sun.misc.URLClassPath$FileLoader$1.getInputStream(URLClassPath.java:1288) at sun.misc.Resource.cachedInputStream(Resource.java:77) at sun.misc.Resource.getByteBuffer(Resource.java:160) at java.net.URLClassLoader.defineClass(URLClassLoader.java:454) at java.net.URLClassLoader.access$100(URLClassLoader.java:73) at java.net.URLClassLoader$1.run(URLClassLoader.java:368) ... 
12 more FAILED: junit.framework.TestSuite.org.apache.solr.DistributedIntervalFacetingTest Error Message: org.apache.solr.DistributedIntervalFacetingTest Stack Trace: java.lang.ClassNotFoundException: org.apache.solr.DistributedIntervalFacetingTest at java.net.URLClassLoader$1.run(URLClassLoader.java:370) at java.net.URLClassLoader$1.run(URLClassLoader.java:362) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:361) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:348) at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.instantiate(SlaveMain.java:280) at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:240) at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:368) at com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:13) Caused by: java.io.FileNotFoundException: /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/classes/test/org/apache/solr/DistributedIntervalFacetingTest.class (Too many open files) at java.io.FileInputStream.open0(Native Method) at java.io.FileInputStream.open(FileInputStream.java:195) at java.io.FileInputStream.(FileInputStream.java:138) at sun.misc.URLClassPath$FileLoader$1.getInputStream(URLClassPath.java:1288) at sun.misc.Resource.cachedInputStream(Resource.java:77) at sun.misc.Resource.getByteBuffer(Resource.java:160) at java.net.URLClassLoader.defineClass(URLClassLoader.java:454) at java.net.URLClassLoader.access$100(URLClassLoader.java:73) at java.net.URLClassLoader$1.run(URLClassLoader.java:368) ... 
12 more FAILED: junit.framework.TestSuite.org.apache.solr.EchoParamsTest Error Message: org.apache.solr.EchoParamsTest Stack Trace: java.lang.ClassNotFoundException: org.apache.solr.EchoParamsTest at java.net.URLClassLoader$1.run(URLClassLoader.java:370) at java.net.URLClassLoader$1.run(URLClassLoader.java:362) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:361) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:348) at
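The mass ClassNotFoundException failures above are secondary symptoms: the root cause is the "(Too many open files)" FileNotFoundException, i.e. the JVM on the build node exhausted its file-descriptor limit. As a hedged illustration (not part of the build, and the macOS Jenkins node would normally be fixed via ulimit/launchctl configuration instead), the process limit that governs this error can be inspected and raised like so:

```python
import resource

# Inspect the per-process limit on open file descriptors, which is the
# limit behind "(Too many open files)". The soft limit may be raised up
# to the hard limit without privileges.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft}, hard={hard}")

try:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
except (ValueError, OSError) as exc:
    # Some platforms cap the effective limit below the reported hard limit.
    print(f"could not raise soft limit: {exc}")
```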
[jira] [Commented] (SOLR-11719) Committing to restored collection does nothing
[ https://issues.apache.org/jira/browse/SOLR-11719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277656#comment-16277656 ] Vitaly Lavrov commented on SOLR-11719: -- The difference is that this issue is about updates sent to the restored index. SOLR-11720 is about searches in a restored index sent right after restore completes. > Committing to restored collection does nothing > -- > > Key: SOLR-11719 > URL: https://issues.apache.org/jira/browse/SOLR-11719 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 7.1 >Reporter: Vitaly Lavrov > > Scenario that was reproduced many times: > 1. Restore collection > 2. Send updates > 3. Send commit > Commit request returns instantly and the index stays intact. After collection > reload or cluster reboot, updates are visible and commits will work. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8015) TestBasicModelIne.testRandomScoring failure
[ https://issues.apache.org/jira/browse/LUCENE-8015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277651#comment-16277651 ] Adrien Grand commented on LUCENE-8015: -- I don't think we can fix this with a nextUp/nextDown? One way we could fix it for sure would be by implementing the basic model and the after effect in a single method. For instance {{(A + B * tfn) * (C / (tfn + 1))}} could be rewritten as {{(A - B + B * (1 + tfn)) * C / (tfn + 1) = (A - B) * C / (tfn + 1) + B * C}}. Since there is only one occurrence of tfn in the latter, it would be guaranteed to be non-decreasing when tfn increases. Fixing it in the general case looks challenging however? Maybe one reasonable way to avoid this issue would be to bound the values that tfn may take? This isn't nice but it wouldn't affect the general case, only when freq, avgdl, or some other stats have extreme values? > TestBasicModelIne.testRandomScoring failure > --- > > Key: LUCENE-8015 > URL: https://issues.apache.org/jira/browse/LUCENE-8015 > Project: Lucene - Core > Issue Type: Task >Reporter: Adrien Grand > Attachments: LUCENE-8015_test_fangs.patch > > > reproduce with: ant test -Dtestcase=TestBasicModelIne > -Dtests.method=testRandomScoring -Dtests.seed=86E85958B1183E93 > -Dtests.slow=true -Dtests.locale=vi-VN -Dtests.timezone=Pacific/Tongatapu > -Dtests.asserts=true -Dtests.file.encoding=UTF8 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
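The rewrite proposed in that comment can be sanity-checked numerically: the factored form with a single occurrence of tfn agrees with the original product, and because tfn appears only once, floating-point evaluation cannot "wiggle" against the true monotonic trend. A minimal sketch, where A, B, C are arbitrary illustrative constants (chosen with B > A so the true function is increasing), not values from any real similarity:

```python
import math

# Check:  (A + B*tfn) * (C / (tfn + 1))  ==  (A - B) * C / (tfn + 1) + B * C
A, B, C = 0.3, 1.1, 2.0

def two_occurrences(tfn):
    return (A + B * tfn) * (C / (tfn + 1))

def one_occurrence(tfn):
    # tfn appears exactly once, so correctly-rounded division keeps the
    # floating-point evaluation monotonic in tfn
    return (A - B) * C / (tfn + 1) + B * C

samples = [i / 16.0 for i in range(2048)]
for t in samples:
    assert math.isclose(two_occurrences(t), one_occurrence(t), rel_tol=1e-12)

values = [one_occurrence(t) for t in samples]
assert all(a <= b for a, b in zip(values, values[1:]))  # non-decreasing
```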
[jira] [Resolved] (SOLR-11716) Solr website features page pointing to the old ref guide
[ https://issues.apache.org/jira/browse/SOLR-11716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cassandra Targett resolved SOLR-11716. -- Resolution: Fixed Assignee: Cassandra Targett I fixed all these. > Solr website features page pointing to the old ref guide > > > Key: SOLR-11716 > URL: https://issues.apache.org/jira/browse/SOLR-11716 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Cassandra Targett > > I was going through http://lucene.apache.org/solr/features.html and the > clicked on one of the ref guide pages. The link itself is > https://cwiki.apache.org/confluence/display/solr/Documents%2C+Fields%2C+and+Schema+Design > which redirects to > https://lucene.apache.org/solr/guide/6_6/documents-fields-and-schema-design.html > We need to update > https://svn.apache.org/repos/asf/lucene/cms/trunk/content/solr/features.mdtext > and point the links to the new ref guide so that it always redirects to the > latest -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-8048) Filesystems do not guarantee order of directories updates
[ https://issues.apache.org/jira/browse/LUCENE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Simon Willnauer updated LUCENE-8048: Attachment: LUCENE-8048.patch I think you need to modify the test and add this as a valid case, like the patch I attached. > Filesystems do not guarantee order of directories updates > - > > Key: LUCENE-8048 > URL: https://issues.apache.org/jira/browse/LUCENE-8048 > Project: Lucene - Core > Issue Type: Bug >Reporter: Nikolay Martynov >Assignee: Erick Erickson > Attachments: LUCENE-8048.patch, LUCENE-8048.patch, Screen Shot > 2017-11-22 at 12.34.51 PM.png > > > Currently when index is written to disk the following sequence of events is > taking place: > * write segment file > * sync segment file > * write segment file > * sync segment file > ... > * write list of segments > * sync list of segments > * rename list of segments > * sync index directory > This sequence leads to potential window of opportunity for system to crash > after 'rename list of segments' but before 'sync index directory' and > depending on exact filesystem implementation this may potentially lead to > 'list of segments' being visible in directory while some of the segments are > not. > Solution to this is to sync index directory after all segments have been > written. [This > commit|https://github.com/mar-kolya/lucene-solr/commit/58e05dd1f633ab9b02d9e6374c7fab59689ae71c] > shows idea implemented. I'm fairly certain that I didn't find all the places > this may be potentially happening. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
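The crash-safe publish sequence that the issue argues for (write the new file, fsync it, rename it into place, then fsync the parent directory so the rename itself survives a crash) can be sketched outside Lucene. A hedged illustration using POSIX semantics; the file names are illustrative, not Lucene's actual segment-file naming:

```python
import os
import tempfile

def publish_atomically(dirname, name, data):
    # 1. Write the new "list of segments" to a temp file and fsync it,
    #    so its contents are durable before it becomes visible.
    tmp = os.path.join(dirname, name + ".tmp")
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    # 2. Atomically swap it into place.
    os.rename(tmp, os.path.join(dirname, name))
    # 3. fsync the directory so the rename (a directory update) is durable;
    #    skipping this step is exactly the window the issue describes.
    dir_fd = os.open(dirname, os.O_RDONLY)
    try:
        os.fsync(dir_fd)
    finally:
        os.close(dir_fd)

with tempfile.TemporaryDirectory() as d:
    publish_atomically(d, "segments_1", b"segment list")
    print(open(os.path.join(d, "segments_1"), "rb").read())  # → b'segment list'
```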
[jira] [Commented] (SOLR-11312) Document how to stop solr in README.txt
[ https://issues.apache.org/jira/browse/SOLR-11312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277548#comment-16277548 ] ASF subversion and git services commented on SOLR-11312: Commit e2d5e6382f96b9664623b6ebcafb615592f5bae6 in lucene-solr's branch refs/heads/branch_7x from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e2d5e63 ] SOLR-11312: Add docs on how to stop Solr to README.txt > Document how to stop solr in README.txt > --- > > Key: SOLR-11312 > URL: https://issues.apache.org/jira/browse/SOLR-11312 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Jan Høydahl >Priority: Trivial > Fix For: 7.2, master (8.0) > > Attachments: SOLR-11312.patch > > > As discussed in "Re: [VOTE] Release Apache Lucene/Solr 7.0.0 RC2" > https://lists.apache.org/thread.html/36303b8245514aff58fa8d335b955abc8360a2528f30a3a283c25c81@%3Cdev.lucene.apache.org%3E -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-11312) Document how to stop solr in README.txt
[ https://issues.apache.org/jira/browse/SOLR-11312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cassandra Targett reassigned SOLR-11312: Assignee: Cassandra Targett > Document how to stop solr in README.txt > --- > > Key: SOLR-11312 > URL: https://issues.apache.org/jira/browse/SOLR-11312 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Jan Høydahl >Assignee: Cassandra Targett >Priority: Trivial > Fix For: 7.2, master (8.0) > > Attachments: SOLR-11312.patch > > > As discussed in "Re: [VOTE] Release Apache Lucene/Solr 7.0.0 RC2" > https://lists.apache.org/thread.html/36303b8245514aff58fa8d335b955abc8360a2528f30a3a283c25c81@%3Cdev.lucene.apache.org%3E -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-11312) Document how to stop solr in README.txt
[ https://issues.apache.org/jira/browse/SOLR-11312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cassandra Targett resolved SOLR-11312. -- Resolution: Fixed > Document how to stop solr in README.txt > --- > > Key: SOLR-11312 > URL: https://issues.apache.org/jira/browse/SOLR-11312 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Jan Høydahl >Priority: Trivial > Fix For: 7.2, master (8.0) > > Attachments: SOLR-11312.patch > > > As discussed in "Re: [VOTE] Release Apache Lucene/Solr 7.0.0 RC2" > https://lists.apache.org/thread.html/36303b8245514aff58fa8d335b955abc8360a2528f30a3a283c25c81@%3Cdev.lucene.apache.org%3E -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.1) - Build # 328 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/328/ Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 9 tests failed. FAILED: junit.framework.TestSuite.org.apache.lucene.index.TestManyPointsInOldIndex Error Message: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestManyPointsInOldIndex_DB078CE59ADAD55C-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestManyPointsInOldIndex_DB078CE59ADAD55C-001 Stack Trace: java.io.IOException: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestManyPointsInOldIndex_DB078CE59ADAD55C-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestManyPointsInOldIndex_DB078CE59ADAD55C-001 at __randomizedtesting.SeedInfo.seed([DB078CE59ADAD55C]:0) at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329) at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216) at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) FAILED: junit.framework.TestSuite.org.apache.lucene.search.suggest.analyzing.BlendedInfixSuggesterTest Error Message: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\suggest\test\J1\temp\lucene.search.suggest.analyzing.BlendedInfixSuggesterTest_8D6B109D203841E5-001\BlendedInfixSuggesterTest-003\_0.fdt: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\suggest\test\J1\temp\lucene.search.suggest.analyzing.BlendedInfixSuggesterTest_8D6B109D203841E5-001\BlendedInfixSuggesterTest-003\_0.fdt C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\suggest\test\J1\temp\lucene.search.suggest.analyzing.BlendedInfixSuggesterTest_8D6B109D203841E5-001\BlendedInfixSuggesterTest-003: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\suggest\test\J1\temp\lucene.search.suggest.analyzing.BlendedInfixSuggesterTest_8D6B109D203841E5-001\BlendedInfixSuggesterTest-003 C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\suggest\test\J1\temp\lucene.search.suggest.analyzing.BlendedInfixSuggesterTest_8D6B109D203841E5-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\suggest\test\J1\temp\lucene.search.suggest.analyzing.BlendedInfixSuggesterTest_8D6B109D203841E5-001 Stack Trace: java.io.IOException: Could not remove the following files (in the order of attempts): 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\suggest\test\J1\temp\lucene.search.suggest.analyzing.BlendedInfixSuggesterTest_8D6B109D203841E5-001\BlendedInfixSuggesterTest-003\_0.fdt: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\suggest\test\J1\temp\lucene.search.suggest.analyzing.BlendedInfixSuggesterTest_8D6B109D203841E5-001\BlendedInfixSuggesterTest-003\_0.fdt C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\suggest\test\J1\temp\lucene.search.suggest.analyzing.BlendedInfixSuggesterTest_8D6B109D203841E5-001\BlendedInfixSuggesterTest-003: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\suggest\test\J1\temp\lucene.search.suggest.analyzing.BlendedInfixSuggesterTest_8D6B109D203841E5-001\BlendedInfixSuggesterTest-003
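The AccessDeniedException followed by DirectoryNotEmptyException above is the classic Windows pattern: a file cannot be deleted while any handle to it is open, so the file deletion fails and the parent directory then cannot be removed either. Cleanup code commonly mitigates this by retrying with a short backoff to give straggling handles time to close. A minimal sketch (retry counts and delays are illustrative, not what Lucene's test framework uses):

```python
import shutil
import time

def rmtree_with_retries(path, attempts=5, delay=0.1):
    # Recursively delete `path`, retrying on OSError (e.g. Windows
    # access-denied while a handle is still open) with linear backoff.
    for attempt in range(attempts):
        try:
            shutil.rmtree(path)
            return True
        except OSError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay * (attempt + 1))
    return False
```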
[jira] [Commented] (SOLR-11312) Document how to stop solr in README.txt
[ https://issues.apache.org/jira/browse/SOLR-11312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277544#comment-16277544 ] ASF subversion and git services commented on SOLR-11312: Commit 0b4e9b2b338154ba83bc9a15f3ade5df0f119e45 in lucene-solr's branch refs/heads/master from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0b4e9b2 ] SOLR-11312: Add docs on how to stop Solr to README.txt > Document how to stop solr in README.txt > --- > > Key: SOLR-11312 > URL: https://issues.apache.org/jira/browse/SOLR-11312 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Jan Høydahl >Priority: Trivial > Fix For: 7.2, master (8.0) > > Attachments: SOLR-11312.patch > > > As discussed in "Re: [VOTE] Release Apache Lucene/Solr 7.0.0 RC2" > https://lists.apache.org/thread.html/36303b8245514aff58fa8d335b955abc8360a2528f30a3a283c25c81@%3Cdev.lucene.apache.org%3E -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11312) Document how to stop solr in README.txt
[ https://issues.apache.org/jira/browse/SOLR-11312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277540#comment-16277540 ] Cassandra Targett commented on SOLR-11312: -- This looks good, Jason, sorry for the delay. I'll commit it. > Document how to stop solr in README.txt > --- > > Key: SOLR-11312 > URL: https://issues.apache.org/jira/browse/SOLR-11312 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Jan Høydahl >Priority: Trivial > Fix For: 7.2, master (8.0) > > Attachments: SOLR-11312.patch > > > As discussed in "Re: [VOTE] Release Apache Lucene/Solr 7.0.0 RC2" > https://lists.apache.org/thread.html/36303b8245514aff58fa8d335b955abc8360a2528f30a3a283c25c81@%3Cdev.lucene.apache.org%3E -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-11508) Make coreRootDirectory configurable via an environment variable (SOLR_CORE_HOME)
[ https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277497#comment-16277497 ] David Smiley edited comment on SOLR-11508 at 12/4/17 9:11 PM: -- Perhaps solr.data.home should be deprecated in SolrCloud mode (displaying a warning at startup)? In 8.0 we could remove the "-t" convenience parameter to bin/solr, leaving it as more of an internal setting as SolrCloud is more prominent was (Author: dsmiley): Perhaps solr.data.dir should be deprecated in SolrCloud mode (displaying a warning at startup)? In 8.0 we could remove the "-t" convenience parameter to bin/solr, leaving it as more of an internal setting. > Make coreRootDirectory configurable via an environment variable > (SOLR_CORE_HOME) > > > Key: SOLR-11508 > URL: https://issues.apache.org/jira/browse/SOLR-11508 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Marc Morissette > > (Heavily edited) > Since Solr 7, it is possible to store Solr cores in separate disk locations > using solr.data.home (see SOLR-6671). This is very useful when running Solr > in Docker where data must be stored in a directory which is independent from > the rest of the container. > While this works well in standalone mode, it doesn't in Cloud mode as the > core.properties automatically created by Solr are still stored in > coreRootDirectory and cores created that way disappear when the Solr Docker > container is redeployed. > The solution is to configure coreRootDirectory to an empty directory that can > be mounted outside the Docker container. > The incoming patch makes this easier to do by allowing coreRootDirectory to > be configured via a solr.core.home system property and SOLR_CORE_HOME > environment variable. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11508) Make coreRootDirectory configurable via an environment variable (SOLR_CORE_HOME)
[ https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277497#comment-16277497 ] David Smiley commented on SOLR-11508: - Perhaps solr.data.dir should be deprecated in SolrCloud mode (displaying a warning at startup)? In 8.0 we could remove the "-t" convenience parameter to bin/solr, leaving it as more of an internal setting. > Make coreRootDirectory configurable via an environment variable > (SOLR_CORE_HOME) > > > Key: SOLR-11508 > URL: https://issues.apache.org/jira/browse/SOLR-11508 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Marc Morissette > > (Heavily edited) > Since Solr 7, it is possible to store Solr cores in separate disk locations > using solr.data.home (see SOLR-6671). This is very useful when running Solr > in Docker where data must be stored in a directory which is independent from > the rest of the container. > While this works well in standalone mode, it doesn't in Cloud mode as the > core.properties automatically created by Solr are still stored in > coreRootDirectory and cores created that way disappear when the Solr Docker > container is redeployed. > The solution is to configure coreRootDirectory to an empty directory that can > be mounted outside the Docker container. > The incoming patch makes this easier to do by allowing coreRootDirectory to > be configured via a solr.core.home system property and SOLR_CORE_HOME > environment variable. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
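As the comment notes, coreRootDirectory is already configurable via solr.xml; a Docker-friendly setup can point it at a directory backed by a mounted volume. A hedged sketch of such a fragment, where the path is illustrative and `solr.core.home` is the system property proposed by this issue's patch (not yet an official Solr property):

```xml
<!-- solr.xml fragment: keep core.properties files on a mounted volume so
     cores created in Cloud mode survive container redeployment.
     /var/solr/cores is an illustrative path. -->
<solr>
  <str name="coreRootDirectory">${solr.core.home:/var/solr/cores}</str>
</solr>
```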
[JENKINS] Lucene-Solr-Tests-master - Build # 2203 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2203/ 3 tests failed. FAILED: org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory Error Message: expected:<8> but was:<9> Stack Trace: java.lang.AssertionError: expected:<8> but was:<9> at __randomizedtesting.SeedInfo.seed([30AD83B0208DBE4B:5D51274D9AC5414C]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory(AutoscalingHistoryHandlerTest.java:268) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.security.TestPKIAuthenticationPlugin.test Error Message: Stack Trace: java.lang.NullPointerException at
[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 901 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/901/ No tests ran. Build Log: [...truncated 28047 lines...] # There is insufficient memory for the Java Runtime Environment to continue. # Native memory allocation (mmap) failed to map 12288 bytes for committing reserved memory. # An error report file with more information is saved as: # /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/hs_err_pid12306.log Build step 'Invoke Ant' marked build as failure Email was triggered for: Failure - Any Sending email for trigger: Failure - Any - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-8043) Attempting to add documents past limit can corrupt index
[ https://issues.apache.org/jira/browse/LUCENE-8043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Simon Willnauer resolved LUCENE-8043. - Resolution: Fixed fixed! thanks [~yo...@apache.org] [~mikemccand] [~jpountz] > Attempting to add documents past limit can corrupt index > > > Key: LUCENE-8043 > URL: https://issues.apache.org/jira/browse/LUCENE-8043 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 4.10, 7.0, master (8.0) >Reporter: Yonik Seeley >Assignee: Simon Willnauer > Fix For: master (8.0), 7.2, 7.1.1 > > Attachments: LUCENE-8043.patch, LUCENE-8043.patch, LUCENE-8043.patch, > LUCENE-8043.patch, LUCENE-8043.patch, YCS_IndexTest7a.java > > > The IndexWriter check for too many documents does not always work, resulting > in going over the limit. Once this happens, Lucene refuses to open the index > and throws a CorruptIndexException: Too many documents. > This appears to affect all versions of Lucene/Solr (the check was first > implemented in LUCENE-5843 in v4.9.1/4.10 and we've seen this manifest in > 4.10) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8043) Attempting to add documents past limit can corrupt index
[ https://issues.apache.org/jira/browse/LUCENE-8043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277490#comment-16277490 ] ASF subversion and git services commented on LUCENE-8043: - Commit 65a716911f35c304ae9da6d4ebb865509787548e in lucene-solr's branch refs/heads/branch_7_1 from [~simonw] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=65a7169 ] LUCENE-8043: Fix document accounting in IndexWriter The IndexWriter check for too many documents does not always work, resulting in going over the limit. Once this happens, Lucene refuses to open the index and throws a CorruptIndexException: Too many documents. This change also fixes document accounting if the index writer hits an aborting exception and/or the writer is rolled back. Pending document counts are now consistent with the latest SegmentInfos once the writer has been rolled back. > Attempting to add documents past limit can corrupt index > > > Key: LUCENE-8043 > URL: https://issues.apache.org/jira/browse/LUCENE-8043 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 4.10, 7.0, master (8.0) >Reporter: Yonik Seeley >Assignee: Simon Willnauer > Fix For: master (8.0), 7.2, 7.1.1 > > Attachments: LUCENE-8043.patch, LUCENE-8043.patch, LUCENE-8043.patch, > LUCENE-8043.patch, LUCENE-8043.patch, YCS_IndexTest7a.java > > > The IndexWriter check for too many documents does not always work, resulting > in going over the limit. Once this happens, Lucene refuses to open the index > and throws a CorruptIndexException: Too many documents. > This appears to affect all versions of Lucene/Solr (the check was first > implemented in LUCENE-5843 in v4.9.1/4.10 and we've seen this manifest in > 4.10) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8043) Attempting to add documents past limit can corrupt index
[ https://issues.apache.org/jira/browse/LUCENE-8043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277489#comment-16277489 ] ASF subversion and git services commented on LUCENE-8043: - Commit 0bc07bc02a2bb5253f85bbca97041c76e4509f5f in lucene-solr's branch refs/heads/branch_7x from [~simonw] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0bc07bc ] LUCENE-8043: Fix document accounting in IndexWriter The IndexWriter check for too many documents does not always work, resulting in going over the limit. Once this happens, Lucene refuses to open the index and throws a CorruptIndexException: Too many documents. This change also fixes document accounting if the index writer hits an aborting exception and/or the writer is rolled back. Pending document counts are now consistent with the latest SegmentInfos once the writer has been rolled back. > Attempting to add documents past limit can corrupt index > > > Key: LUCENE-8043 > URL: https://issues.apache.org/jira/browse/LUCENE-8043 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 4.10, 7.0, master (8.0) >Reporter: Yonik Seeley >Assignee: Simon Willnauer > Fix For: master (8.0), 7.2, 7.1.1 > > Attachments: LUCENE-8043.patch, LUCENE-8043.patch, LUCENE-8043.patch, > LUCENE-8043.patch, LUCENE-8043.patch, YCS_IndexTest7a.java > > > The IndexWriter check for too many documents does not always work, resulting > in going over the limit. Once this happens, Lucene refuses to open the index > and throws a CorruptIndexException: Too many documents. > This appears to affect all versions of Lucene/Solr (the check was first > implemented in LUCENE-5843 in v4.9.1/4.10 and we've seen this manifest in > 4.10) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
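The class of bug fixed here can be modeled outside Lucene: if the "too many documents" check and the add are not atomic, two threads can both pass the check and push the count past the limit. A toy sketch (not Lucene's actual code) of check-and-reserve accounting with a release path for aborted adds:

```python
import threading

class ToyWriter:
    # Toy model of document accounting: the limit check and the count
    # update happen under one lock, so the limit can never be exceeded.
    def __init__(self, max_docs):
        self.max_docs = max_docs
        self.pending = 0
        self._lock = threading.Lock()

    def reserve(self, n=1):
        # Atomic check-and-reserve; returns False instead of overshooting.
        with self._lock:
            if self.pending + n > self.max_docs:
                return False
            self.pending += n
            return True

    def release(self, n=1):
        # Called when an add aborts, keeping pending counts consistent.
        with self._lock:
            self.pending -= n

writer = ToyWriter(max_docs=1000)

def worker():
    for _ in range(500):  # 4 workers x 500 = 2000 attempted adds
        writer.reserve()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert writer.pending <= writer.max_docs
print(writer.pending)  # → 1000
```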
[jira] [Commented] (SOLR-11508) Make coreRootDirectory configurable via an environment variable (SOLR_CORE_HOME)
[ https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277474#comment-16277474 ] David Smiley commented on SOLR-11508: - coreRootDirectory is not a new concept and is already configurable. You're merely making it _easier_ (and more consistent) to configure. I think making a setting like this dependent on whether you're in SolrCloud mode or not makes this more confusing, but I understand your motivation. I think more documentation can add guidance on the use of these. SOLR_DATA_DIR can be a gotcha for a docker user and that advice can be in the Solr ref guide. Future docker images can set things up correctly OOTB using a "volume". > Make coreRootDirectory configurable via an environment variable > (SOLR_CORE_HOME) > > > Key: SOLR-11508 > URL: https://issues.apache.org/jira/browse/SOLR-11508 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Marc Morissette > > (Heavily edited) > Since Solr 7, it is possible to store Solr cores in separate disk locations > using solr.data.home (see SOLR-6671). This is very useful when running Solr > in Docker where data must be stored in a directory which is independent from > the rest of the container. > While this works well in standalone mode, it doesn't in Cloud mode as the > core.properties automatically created by Solr are still stored in > coreRootDirectory and cores created that way disappear when the Solr Docker > container is redeployed. > The solution is to configure coreRootDirectory to an empty directory that can > be mounted outside the Docker container. > The incoming patch makes this easier to do by allowing coreRootDirectory to > be configured via a solr.core.home system property and SOLR_CORE_HOME > environment variable.
[jira] [Commented] (LUCENE-8043) Attempting to add documents past limit can corrupt index
[ https://issues.apache.org/jira/browse/LUCENE-8043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277472#comment-16277472 ] ASF subversion and git services commented on LUCENE-8043: - Commit b7d8731bbf2a9278c22efa5a7fb43285236c90ba in lucene-solr's branch refs/heads/master from [~simonw] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b7d8731 ] LUCENE-8043: Fix document accounting in IndexWriter The IndexWriter check for too many documents does not always work, resulting in going over the limit. Once this happens, Lucene refuses to open the index and throws a CorruptIndexException: Too many documents. This change also fixes document accounting if the index writer hits an aborting exception and/or the writer is rolled back. Pending document counts are now consistent with the latest SegmentInfos once the writer has been rolled back. > Attempting to add documents past limit can corrupt index > > > Key: LUCENE-8043 > URL: https://issues.apache.org/jira/browse/LUCENE-8043 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 4.10, 7.0, master (8.0) >Reporter: Yonik Seeley >Assignee: Simon Willnauer > Fix For: master (8.0), 7.2, 7.1.1 > > Attachments: LUCENE-8043.patch, LUCENE-8043.patch, LUCENE-8043.patch, > LUCENE-8043.patch, LUCENE-8043.patch, YCS_IndexTest7a.java > > > The IndexWriter check for too many documents does not always work, resulting > in going over the limit. Once this happens, Lucene refuses to open the index > and throws a CorruptIndexException: Too many documents. > This appears to affect all versions of Lucene/Solr (the check was first > implemented in LUCENE-5843 in v4.9.1/4.10 and we've seen this manifest in > 4.10)
[jira] [Commented] (SOLR-11662) Make overlapping query term scoring configurable per field type
[ https://issues.apache.org/jira/browse/SOLR-11662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277462#comment-16277462 ] Doug Turnbull commented on SOLR-11662: -- Thanks for helping with the change David! I would probably personally do something like that. However, I tend to restructure most synonyms into a taxonomy. Many people aren't aware of hypernymy/hyponymy. It's not uncommon to see a synonym entry in an e-commerce client's files, for example, that looks like `pants,khakis` with another line that's `pants,jeans`, which of course creates an unintentional equivalence between jeans and khakis. Even when these are mixed in with true synonyms, I tend to restructure the whole thing as a taxonomy. For example, some people avoid this at query time by expanding the query and expecting the "as_distinct_terms" behavior, which biases towards exact match:

pants => jeans,pants,khakis
jeans => jeans,pants
khakis => jeans,khakis

A search for pants here shows a mix of different kinds of pants (khakis and jeans roughly equal). A search for jeans puts jeans first (low doc freq), followed by various kinds of pants (high doc freq). A search for khakis puts khakis first, followed by various kinds of non-jean pants. I tend to think of synonyms as hyponyms of a canonical name for an idea. So for jeans, for example, I might expand to:

blue_jeans => blue_jeans,jeans,pants
denim_jeans => denim_jeans,jeans,pants

With multiple analyzer chains, I might recommend controlling how loose the search is with different analyzer chains. For example, one could see forcing a strong boost for conceptually similar items. Or limiting the semantic expansion so that blue_jeans, for example, only expands up to the jeans level. There's quite a lot of "it depends". The example above presupposes that pants have a higher doc freq than jeans, which may not be the case without a similar index-time expansion.
> Make overlapping query term scoring configurable per field type > --- > > Key: SOLR-11662 > URL: https://issues.apache.org/jira/browse/SOLR-11662 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Doug Turnbull >Assignee: David Smiley > Fix For: 7.2, master (8.0) > > > This patch customizes the query-time behavior when query terms overlap > positions. Right now the only option is SynonymQuery. This is a fantastic > default & improvement on past versions. However, there are use cases where > terms overlap positions but don't carry exact synonymy relationships. Often > synonyms are actually used to model hypernym/hyponym relationships (via the > synonym filter or other analyzers). So the individual term scores matter, with > terms with higher specificity (hyponym) scoring higher than terms with lower > specificity (hypernym). > This patch adds the fieldType setting scoreOverlaps, as in: > {code:java} > <fieldType name="..." scoreOverlaps="..." class="solr.TextField" positionIncrementGap="100" multiValued="true"> > {code} > Valid values for scoreOverlaps are: > *as_one_term* > Default, most synonym use cases. Uses SynonymQuery. > Treats all terms as if they're exactly equivalent, with document frequency > from underlying terms blended. > *pick_best* > For a given document, score using the best scoring synonym (ie dismax over > generated terms). > Useful when synonyms are not exactly equivalent; instead they are used to model > hypernym/hyponym relationships, such as a query-time expansion where term > scores will reflect specificity. > IE this query time expansion > tabby => tabby, cat, animal > Searching "text", generates the dismax (text:tabby | text:cat | text:animal) > *as_distinct_terms* > (The pre 6.0 behavior.)
> Compromise between pick_best and as_one_term. > Appropriate when synonyms reflect a hypernym/hyponym relationship, but lets > scores stack, so the more tabby, cat, or animal a document has, the better, with > a bias towards the term with highest specificity. > Terms are turned into a boolean OR query, with document frequencies not blended. > IE this query time expansion > tabby => tabby, cat, animal > Searching "text", generates the boolean query (text:tabby text:cat > text:animal)
[jira] [Assigned] (SOLR-11653) create next time collection based on a fixed time gap
[ https://issues.apache.org/jira/browse/SOLR-11653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley reassigned SOLR-11653: --- Assignee: David Smiley > create next time collection based on a fixed time gap > - > > Key: SOLR-11653 > URL: https://issues.apache.org/jira/browse/SOLR-11653 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: David Smiley >Assignee: David Smiley > > For time series collections (as part of a collection Alias with certain > metadata), we want to automatically add new collections. In this issue, this > is about creating the next collection based on a configurable fixed time gap. > And we will also add this collection synchronously once a document flowing > through the URP chain exceeds the gap, as opposed to asynchronously in > advance. There will be some Alias metadata to define in this issue. The > preponderance of the implementation will be in TimePartitionedUpdateProcessor > or perhaps a helper to this URP. > note: other issues will implement pre-emptive creation and capping > collections by size.
[jira] [Created] (SOLR-11722) API to create a Time Routed Alias and first collection
David Smiley created SOLR-11722: --- Summary: API to create a Time Routed Alias and first collection Key: SOLR-11722 URL: https://issues.apache.org/jira/browse/SOLR-11722 Project: Solr Issue Type: Sub-task Security Level: Public (Default Security Level. Issues are Public) Components: SolrCloud Reporter: David Smiley This issue is about creating a single API command to create a "Time Routed Alias" along with its first collection. We need to decide on the endpoint URL and its parameters. Perhaps in v2 it'd be {{/api/collections?command=create-routed-alias}} or alternatively piggy-back off of command=create-alias but we add more options, perhaps with a prefix like "router"? Inputs: * alias name * misc collection creation metadata (e.g. config, numShards, ...) perhaps in this context with a prefix like "collection." * metadata for TimeRoutedAliasUpdateProcessor, currently: router.field * date specifier for first collection; can include "date math". We'll certainly add more options as future features unfold. I believe the collection needs to be created first (referring to the alias name via a core property), and then the alias pointing to it, since an alias demands that its collections exist first. When figuring the collection name, you'll need to reference the format in TimeRoutedAliasUpdateProcessor.
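Since the endpoint is still being decided, the following is purely an illustrative sketch of how the inputs listed above might map onto a v2 POST body to {{/api/collections}} — the command name, parameter names, and every value here are assumptions for discussion, not a finalized API:

```json
{
  "create-routed-alias": {
    "name": "timeseries",
    "router.field": "timestamp_dt",
    "start": "NOW/DAY",
    "collection.numShards": 2
  }
}
```

The "collection." prefix follows the idea above of namespacing collection-creation metadata, and "start" stands in for the date-math specifier of the first collection.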
[jira] [Resolved] (SOLR-11412) Documentation changes for SOLR-11003: Bi-directional CDCR support
[ https://issues.apache.org/jira/browse/SOLR-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cassandra Targett resolved SOLR-11412. -- Resolution: Fixed Assignee: Cassandra Targett (was: Varun Thacker) Fix Version/s: master (8.0) 7.2 > Documentation changes for SOLR-11003: Bi-directional CDCR support > - > > Key: SOLR-11412 > URL: https://issues.apache.org/jira/browse/SOLR-11412 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR, documentation >Reporter: Amrit Sarkar >Assignee: Cassandra Targett > Fix For: 7.2, master (8.0) > > Attachments: CDCR_bidir.png, SOLR-11412-split.patch, > SOLR-11412.patch, SOLR-11412.patch, SOLR-11412.patch, SOLR-11412.patch, > SOLR-11412.patch > > > Since SOLR-11003 (bi-directional CDCR scenario support) is reaching its > conclusion, the relevant documentation changes need to be made.
[jira] [Commented] (SOLR-11412) Documentation changes for SOLR-11003: Bi-directional CDCR support
[ https://issues.apache.org/jira/browse/SOLR-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277444#comment-16277444 ] ASF subversion and git services commented on SOLR-11412: Commit 89b02c430a6a42d4687afce647b030580fc46af6 in lucene-solr's branch refs/heads/branch_7x from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=89b02c4 ] SOLR-11412: Add docs for bi-directional CDCR; split CDCR pages into multiple child pages > Documentation changes for SOLR-11003: Bi-directional CDCR support > - > > Key: SOLR-11412 > URL: https://issues.apache.org/jira/browse/SOLR-11412 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR, documentation >Reporter: Amrit Sarkar >Assignee: Varun Thacker > Attachments: CDCR_bidir.png, SOLR-11412-split.patch, > SOLR-11412.patch, SOLR-11412.patch, SOLR-11412.patch, SOLR-11412.patch, > SOLR-11412.patch > > > Since SOLR-11003 (bi-directional CDCR scenario support) is reaching its > conclusion, the relevant documentation changes need to be made.
[jira] [Commented] (SOLR-11412) Documentation changes for SOLR-11003: Bi-directional CDCR support
[ https://issues.apache.org/jira/browse/SOLR-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277443#comment-16277443 ] ASF subversion and git services commented on SOLR-11412: Commit 52cefbe742095daf8a31b8bc34ade5c809226c7d in lucene-solr's branch refs/heads/master from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=52cefbe ] SOLR-11412: Add docs for bi-directional CDCR; split CDCR pages into multiple child pages > Documentation changes for SOLR-11003: Bi-directional CDCR support > - > > Key: SOLR-11412 > URL: https://issues.apache.org/jira/browse/SOLR-11412 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR, documentation >Reporter: Amrit Sarkar >Assignee: Varun Thacker > Attachments: CDCR_bidir.png, SOLR-11412-split.patch, > SOLR-11412.patch, SOLR-11412.patch, SOLR-11412.patch, SOLR-11412.patch, > SOLR-11412.patch > > > Since SOLR-11003 (bi-directional CDCR scenario support) is reaching its > conclusion, the relevant documentation changes need to be made.
[JENKINS] Lucene-Solr-Tests-7.x - Build # 268 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/268/ 3 tests failed. FAILED: org.apache.solr.cloud.TestTolerantUpdateProcessorRandomCloud.testRandomUpdates Error Message: Could not load collection from ZK: test_col Stack Trace: org.apache.solr.common.SolrException: Could not load collection from ZK: test_col at __randomizedtesting.SeedInfo.seed([668D84404C018B0B:5A4C51F03B7C7519]:0) at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1122) at org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:647) at org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1205) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:848) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:484) at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463) at org.apache.solr.cloud.TestTolerantUpdateProcessorRandomCloud.testRandomUpdates(TestTolerantUpdateProcessorRandomCloud.java:274) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at
[jira] [Commented] (LUCENE-8048) Filesystems do not guarantee order of directories updates
[ https://issues.apache.org/jira/browse/LUCENE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277423#comment-16277423 ] Erick Erickson commented on LUCENE-8048: Well, that was short-lived optimism. I was running the wrong test. So I'm back to looking at whether these two lines in TestIndexWriterExceptions.testExceptionDuringCommit are a test error or not: new FailOnlyInCommit(false, FailOnlyInCommit.PREPARE_STAGE), // fail during global field map is written new FailOnlyInCommit(true, FailOnlyInCommit.PREPARE_STAGE), // fail after global field map is written Any help appreciated of course. > Filesystems do not guarantee order of directories updates > - > > Key: LUCENE-8048 > URL: https://issues.apache.org/jira/browse/LUCENE-8048 > Project: Lucene - Core > Issue Type: Bug >Reporter: Nikolay Martynov >Assignee: Erick Erickson > Attachments: LUCENE-8048.patch, Screen Shot 2017-11-22 at 12.34.51 > PM.png > > > Currently when index is written to disk the following sequence of events is > taking place: > * write segment file > * sync segment file > * write segment file > * sync segment file ... > * write list of segments > * sync list of segments > * rename list of segments > * sync index directory > This sequence leads to a potential window of opportunity for the system to crash > after 'rename list of segments' but before 'sync index directory', and > depending on the exact filesystem implementation this may potentially lead to > the 'list of segments' being visible in the directory while some of the segments are > not. > Solution to this is to sync index directory after all segments have been > written. [This > commit|https://github.com/mar-kolya/lucene-solr/commit/58e05dd1f633ab9b02d9e6374c7fab59689ae71c] > shows idea implemented. I'm fairly certain that I didn't find all the places > this may be potentially happening.
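The "sync index directory" step described above can be sketched in plain NIO; this is a minimal illustration under POSIX assumptions (opening a directory read-only and forcing its channel flushes the directory metadata on Linux — the same trick Lucene's IOUtils.fsync uses), not the actual patch:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class DirFsync {
    // Flush directory metadata so a completed rename survives a crash.
    // Works on Linux; on some platforms force() on a directory may fail.
    static void fsyncDirectory(Path dir) throws IOException {
        try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
            ch.force(true);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("idx");
        // write the new segments list under a temporary name
        Path pending = Files.write(dir.resolve("pending_segments_1"), new byte[] {42});
        // rename it into place, then sync the directory itself so the
        // rename (not just the file contents) is durable
        Files.move(pending, dir.resolve("segments_1"), StandardCopyOption.ATOMIC_MOVE);
        fsyncDirectory(dir);
        System.out.println(Files.exists(dir.resolve("segments_1")));
    }
}
```

Without the final fsyncDirectory call, a crash between the rename and the directory flush is exactly the window the issue describes: the file's bytes are durable but the directory entry naming it may not be.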
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 21029 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21029/ Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.cloud.ShardRoutingTest.test Error Message: Timeout occured while waiting response from server at: https://127.0.0.1:43607/hz Stack Trace: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: https://127.0.0.1:43607/hz at __randomizedtesting.SeedInfo.seed([160757B4C0F137C5:9E53686E6E0D5A3D]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:314) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
[jira] [Commented] (LUCENE-8043) Attempting to add documents past limit can corrupt index
[ https://issues.apache.org/jira/browse/LUCENE-8043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277397#comment-16277397 ] Michael McCandless commented on LUCENE-8043: +1 to the patch! Phew that was tricky; thanks @simonw. I beasted all Lucene tests 113X times and only hit 3 failures from LUCENE-8073. +1 to push! > Attempting to add documents past limit can corrupt index > > > Key: LUCENE-8043 > URL: https://issues.apache.org/jira/browse/LUCENE-8043 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 4.10, 7.0, master (8.0) >Reporter: Yonik Seeley >Assignee: Simon Willnauer > Fix For: master (8.0), 7.2, 7.1.1 > > Attachments: LUCENE-8043.patch, LUCENE-8043.patch, LUCENE-8043.patch, > LUCENE-8043.patch, LUCENE-8043.patch, YCS_IndexTest7a.java > > > The IndexWriter check for too many documents does not always work, resulting > in going over the limit. Once this happens, Lucene refuses to open the index > and throws a CorruptIndexException: Too many documents. > This appears to affect all versions of Lucene/Solr (the check was first > implemented in LUCENE-5843 in v4.9.1/4.10 and we've seen this manifest in > 4.10)
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_144) - Build # 7040 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7040/ Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC 4 tests failed. FAILED: junit.framework.TestSuite.org.apache.lucene.index.TestBackwardsCompatibility Error Message: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_8478FC4CC0D4EB8F-001\5.4.1-nocfs-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_8478FC4CC0D4EB8F-001\5.4.1-nocfs-001 C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_8478FC4CC0D4EB8F-001\3.6.0-cfs-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_8478FC4CC0D4EB8F-001\3.6.0-cfs-001 Stack Trace: java.io.IOException: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_8478FC4CC0D4EB8F-001\5.4.1-nocfs-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_8478FC4CC0D4EB8F-001\5.4.1-nocfs-001 C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_8478FC4CC0D4EB8F-001\3.6.0-cfs-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_8478FC4CC0D4EB8F-001\3.6.0-cfs-001 at 
__randomizedtesting.SeedInfo.seed([8478FC4CC0D4EB8F]:0) at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329) at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216) at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest.testConsistencyOnExceptions Error Message: Captured an uncaught exception in thread: Thread[id=15, name=ReplicationThread-indexAndTaxo, state=RUNNABLE, group=TGRP-IndexAndTaxonomyReplicationClientTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=15, name=ReplicationThread-indexAndTaxo, state=RUNNABLE, group=TGRP-IndexAndTaxonomyReplicationClientTest] at __randomizedtesting.SeedInfo.seed([AF8CD9C19F076EBD:20023E618D6B9D42]:0) Caused by: java.lang.AssertionError: handler failed too many times: -1 at __randomizedtesting.SeedInfo.seed([AF8CD9C19F076EBD]:0) at org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest$4.handleUpdateException(IndexAndTaxonomyReplicationClientTest.java:422) at 
org.apache.lucene.replicator.ReplicationClient$ReplicationThread.run(ReplicationClient.java:77) FAILED: junit.framework.TestSuite.org.apache.lucene.document.TestLatLonPointDistanceSort Error Message: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\sandbox\test\J1\temp\lucene.document.TestLatLonPointDistanceSort_CE7CEF6133E0383D-001\index-MMapDirectory-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\sandbox\test\J1\temp\lucene.document.TestLatLonPointDistanceSort_CE7CEF6133E0383D-001\index-MMapDirectory-001
[jira] [Commented] (SOLR-11508) Make coreRootDirectory configurable via an environment variable (SOLR_CORE_HOME)
[ https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277362#comment-16277362 ] Marc Morissette commented on SOLR-11508: I've started work on a patch that adds the ability to set coreRootDirectory via an environment variable and command line option: https://github.com/morissm/lucene-solr/commit/95cbd1410fb4bdf97fd9ffec8737117a7931054d I'm starting to have second thoughts though. Solr already has a steep learning curve and I'm loath to add yet another option if there is a way to avoid it. What if core.properties files were stored in SOLR_DATA_HOME only when Solr is in cloud mode? Unless I'm mistaken, all configuration is stored in Zookeeper in cloud mode, so that is the only file that matters. As I've argued earlier, core.properties files in cloud mode are mostly an implementation detail and belong with the data. The only issue would be how to handle the transition for people who have set SOLR_DATA_HOME in cloud mode pre-7.2. I've thought of many automated ways to handle the transition but this might not be easy to accomplish without introducing some potential unintended behaviours. Comments? > Make coreRootDirectory configurable via an environment variable > (SOLR_CORE_HOME) > > > Key: SOLR-11508 > URL: https://issues.apache.org/jira/browse/SOLR-11508 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Marc Morissette > > (Heavily edited) > Since Solr 7, it is possible to store Solr cores in separate disk locations > using solr.data.home (see SOLR-6671). This is very useful when running Solr > in Docker where data must be stored in a directory which is independent from > the rest of the container.
> While this works well in standalone mode, it doesn't in Cloud mode as the > core.properties automatically created by Solr are still stored in > coreRootDirectory and cores created that way disappear when the Solr Docker > container is redeployed. > The solution is to configure coreRootDirectory to an empty directory that can > be mounted outside the Docker container. > The incoming patch makes this easier to do by allowing coreRootDirectory to > be configured via a solr.core.home system property and SOLR_CORE_HOME > environment variable. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
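The resolution order the patch above proposes — a solr.core.home system property first, then a SOLR_CORE_HOME environment variable, then the existing default — can be sketched as follows. The method name and fallback logic are illustrative assumptions based on the issue description, not code from the actual patch:

```java
// Illustrative sketch (not from the actual patch): resolve coreRootDirectory
// from -Dsolr.core.home, falling back to $SOLR_CORE_HOME, then to the default.
public class CoreRootResolver {
    public static String resolveCoreRoot(String defaultDir) {
        String fromProp = System.getProperty("solr.core.home");
        if (fromProp != null && !fromProp.isEmpty()) {
            return fromProp;
        }
        String fromEnv = System.getenv("SOLR_CORE_HOME");
        if (fromEnv != null && !fromEnv.isEmpty()) {
            return fromEnv;
        }
        return defaultDir; // e.g. the current coreRootDirectory default
    }

    public static void main(String[] args) {
        System.out.println(resolveCoreRoot("/var/solr/data"));
    }
}
```

Checking the system property before the environment variable matches the usual Solr convention of letting `-D` flags override environment configuration.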
[jira] [Commented] (SOLR-11622) Bundled mime4j library not sufficient for Tika requirement
[ https://issues.apache.org/jira/browse/SOLR-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277359#comment-16277359 ] Tim Allison commented on SOLR-11622: My {{ant-precommit}} had the usual build failure with broken links...So, I think we're good. :) > Bundled mime4j library not sufficient for Tika requirement > -- > > Key: SOLR-11622 > URL: https://issues.apache.org/jira/browse/SOLR-11622 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Build >Affects Versions: 7.1, 6.6.2 >Reporter: Karim Malhas >Assignee: Karthik Ramachandran >Priority: Minor > Labels: build > Attachments: SOLR-11622.patch, SOLR-11622.patch > > > The version 7.2 of Apache James Mime4j bundled with the Solr binary releases > does not match what is required by Apache Tika for parsing rfc2822 messages. > The master branch for james-mime4j seems to contain the missing Builder class > [https://github.com/apache/james-mime4j/blob/master/core/src/main/java/org/apache/james/mime4j/stream/MimeConfig.java > ] > This prevents import of rfc2822 formatted messages. 
For example like so: > {{./bin/post -c dovecot -type 'message/rfc822' 'testdata/email_01.txt' > }} > And results in the following stacktrace: > java.lang.NoClassDefFoundError: > org/apache/james/mime4j/stream/MimeConfig$Builder > at > org.apache.tika.parser.mail.RFC822Parser.parse(RFC822Parser.java:63) > at > org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280) > at > org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280) > at > org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:135) > at > org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:228) > at > org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at > org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:534) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148) > at >
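The NoClassDefFoundError in the trace above only surfaces once Tika first touches the missing mime4j class at parse time. A generic, hypothetical startup probe — not part of any patch on this issue — can detect a missing class up front without initializing it:

```java
// Hypothetical fail-fast check for a class that must be on the classpath.
// Class.forName with initialize=false loads the class without running its
// static initializers, so the probe itself is side-effect free.
public class ClasspathCheck {
    public static boolean classPresent(String className) {
        try {
            Class.forName(className, false, ClasspathCheck.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The class Tika needs, per the stack trace above:
        String needed = "org.apache.james.mime4j.stream.MimeConfig$Builder";
        if (!classPresent(needed)) {
            System.err.println("Missing dependency: " + needed
                + " -- check the bundled mime4j version.");
        }
    }
}
```

A probe like this would have turned the mid-request stack trace into an immediate, actionable error at startup.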
[jira] [Commented] (SOLR-11412) Documentation changes for SOLR-11003: Bi-directional CDCR support
[ https://issues.apache.org/jira/browse/SOLR-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277355#comment-16277355 ] Cassandra Targett commented on SOLR-11412: -- * {{cdcr-api.adoc}} ** I made all the recommended changes you suggested here. * CDCR Architecture Page: bq. "The data changes can be replicated in near real-time (with a small delay) or could be scheduled to be sent at longer intervals to the Target data center" : "The data changes can be replicated at a configurable amount of time" Should Source and Target start with a capital letter? It was like that before so isn't really related to the new content. I would normally remove the capitalization, but Erick wrote/contributed the page, and it's not so egregious that I thought it had to be changed. I think an argument can be made that in _this_ case, the capitalization works as shorthand for the two clusters. bq. "Since this is a full copy of the entire index, network bandwidth should be considered." : What value does this line add to the user? That he/she should be aware that an entire index is going over the wire and they shouldn't expect it to be done in nanoseconds. The rest of your comments for this page are judgement calls, again mostly not related to the patch. If you want to make the changes at a later time, go ahead; I can't figure out what does & doesn't add value at the same level that you can. * CDCR Configuration : bq. "" : We recommend everyone to disable buffering. Let's remove this comment I did this. bq. "Sync the index directories from the Source collection to Target collection across to the corresponding shard nodes. rsync works well for this" till the end of the section : Seems like a lot of info or notes which are already known? Known by who? bq. "ZooKeeper Settings" 800 is a typo? We say we want to set 200 but use It was already there, I'll assume you know it should be set to 800 in the comment. 
> Documentation changes for SOLR-11003: Bi-directional CDCR support > - > > Key: SOLR-11412 > URL: https://issues.apache.org/jira/browse/SOLR-11412 > Project: Solr > Issue Type: Bug > Security Level: Public (Default Security Level. Issues are Public) > Components: CDCR, documentation > Reporter: Amrit Sarkar > Assignee: Varun Thacker > Attachments: CDCR_bidir.png, SOLR-11412-split.patch, > SOLR-11412.patch, SOLR-11412.patch, SOLR-11412.patch, SOLR-11412.patch, > SOLR-11412.patch > > > Since SOLR-11003 (bi-directional CDCR scenario support) is reaching its > conclusion, the relevant documentation changes need to be made.
[jira] [Commented] (SOLR-11622) Bundled mime4j library not sufficient for Tika requirement
[ https://issues.apache.org/jira/browse/SOLR-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277347#comment-16277347 ] Tim Allison commented on SOLR-11622: Smh... that we haven't run Solr against Tika's test files before/recently. This would have surfaced SOLR-11693. Unit tests would not have found that, but a full integration test would have. :( Speaking of which, with ref to [this|https://issues.apache.org/jira/browse/SOLR-11622?focusedCommentId=16274648=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16274648], I'm still getting the CTTable xsb error on our {{testPPT_various.pptx}}, and you can't just drop in POI-3.17 to replace POI-3.17-beta1, because there's a binary conflict on wmf files. That fix will require the upgrade to Tika 1.17, which should be on the way. I'm guessing that you aren't seeing that because of the luck of your classloader?
[jira] [Commented] (SOLR-11622) Bundled mime4j library not sufficient for Tika requirement
[ https://issues.apache.org/jira/browse/SOLR-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277332#comment-16277332 ] ASF GitHub Bot commented on SOLR-11622: --- Github user tballison commented on the issue: https://github.com/apache/lucene-solr/pull/273 [SOLR-11622-tallison.diff.txt](https://github.com/apache/lucene-solr/files/1528516/SOLR-11622-tallison.diff.txt) This is the full `git diff 83753d0..d2f40af > SOLR-11622-tallison.diff` Had to add jackcess-encrypt for msaccess, bcpkix for psd files, and rome-utils for atom/rss.
[jira] [Commented] (LUCENE-8015) TestBasicModelIne.testRandomScoring failure
[ https://issues.apache.org/jira/browse/LUCENE-8015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277325#comment-16277325 ] Robert Muir commented on LUCENE-8015: - Thanks for the analysis! I still don't even really want to commit the "floor" modifications for In and Ine because I don't like it: really, a scoring formula should be able to return a tiny, tiny value for a stopword; that should be OK. It shouldn't have to be a number between 1 and 43 or whatever to work with Lucene. For model IF it's justifiable, just like it's justifiable in the BM25 case, because the formula is fundamentally broken — I mean, we don't want negative scores for stopwords. But your analysis suggests maybe we can look at a more surgical fix, like a nextUp/nextDown somewhere? > TestBasicModelIne.testRandomScoring failure > --- > > Key: LUCENE-8015 > URL: https://issues.apache.org/jira/browse/LUCENE-8015 > Project: Lucene - Core > Issue Type: Task > Reporter: Adrien Grand > Attachments: LUCENE-8015_test_fangs.patch > > > Reproduce with: ant test -Dtestcase=TestBasicModelIne > -Dtests.method=testRandomScoring -Dtests.seed=86E85958B1183E93 > -Dtests.slow=true -Dtests.locale=vi-VN -Dtests.timezone=Pacific/Tongatapu > -Dtests.asserts=true -Dtests.file.encoding=UTF8
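A "surgical" nextUp-style adjustment, as floated in the comment above, might look like the following sketch. This is an assumption about the shape of such a fix, not the change that was actually committed:

```java
// Illustrative only: instead of flooring scores into an arbitrary range,
// nudge a non-positive score up to the smallest positive float, so a
// stopword can still score "tiny but valid" without going to zero or below.
public class ScoreAdjust {
    public static float surgicalFloor(float score) {
        // Math.nextUp(0f) is Float.MIN_VALUE, the smallest positive float.
        return (score <= 0f) ? Math.nextUp(0f) : score;
    }
}
```

The point of the approach is that it perturbs only degenerate scores by one ulp-sized step, rather than constraining every formula to some fixed range.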
[jira] [Commented] (SOLR-11622) Bundled mime4j library not sufficient for Tika requirement
[ https://issues.apache.org/jira/browse/SOLR-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277318#comment-16277318 ] ASF GitHub Bot commented on SOLR-11622: --- Github user mrkarthik commented on the issue: https://github.com/apache/lucene-solr/pull/273 I had a local branch solr-11622 and origin had SOLR-11622; for some reason my branch is getting deleted. Sorry for all these deletes.
[jira] [Commented] (SOLR-11622) Bundled mime4j library not sufficient for Tika requirement
[ https://issues.apache.org/jira/browse/SOLR-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277306#comment-16277306 ] ASF GitHub Bot commented on SOLR-11622: --- Github user tballison commented on the issue: https://github.com/apache/lucene-solr/pull/273 This was the patch I had before running ant precommit, etc., which is still running. You'll need to run ant clean-jars, jar-checksums and do the git add/rm before this will work. [SOLR-11622_tallison.patch.txt](https://github.com/apache/lucene-solr/files/1528484/SOLR-11622_tallison.patch.txt)
[jira] [Commented] (SOLR-11622) Bundled mime4j library not sufficient for Tika requirement
[ https://issues.apache.org/jira/browse/SOLR-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277283#comment-16277283 ] ASF GitHub Bot commented on SOLR-11622: --- GitHub user mrkarthik reopened a pull request: https://github.com/apache/lucene-solr/pull/273 SOLR-11622: Fix mime4j library dependency for Tika You can merge this pull request into a Git repository by running: $ git pull https://github.com/mrkarthik/lucene-solr jira/SOLR-11622 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/273.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #273 commit c5d4e37de782b2491b3e71cfbb004e5022b55f6b Author: Karthik Ramachandran Date: 2017-11-14T00:21:44Z SOLR-11622: Fix mime4j library dependency for Tika commit 40b246b12e8fc6455e023d9d60b8edcfab9b184e Author: Karthik Ramachandran Date: 2017-12-01T22:12:15Z Merge remote-tracking branch 'upstream/master' into jira/solr-11622 commit 21f2ab483f356fad9b89233e544457a07540afd1 Author: Karthik Ramachandran Date: 2017-12-03T03:50:01Z SOLR-11622: Fix bundled mime4j library not sufficient for Tika requirement commit a40ca80ed7036732a332f5508589ae32eb4b Author: Karthik Ramachandran Date: 2017-12-04T15:33:18Z Merge remote-tracking branch 'upstream/master' into jira/solr-11622
Issues are Public) > Components: Build >Affects Versions: 7.1, 6.6.2 >Reporter: Karim Malhas >Assignee: Karthik Ramachandran >Priority: Minor > Labels: build > Attachments: SOLR-11622.patch, SOLR-11622.patch > > > The version 7.2 of Apache James Mime4j bundled with the Solr binary releases > does not match what is required by Apache Tika for parsing rfc2822 messages. > The master branch for james-mime4j seems to contain the missing Builder class > [https://github.com/apache/james-mime4j/blob/master/core/src/main/java/org/apache/james/mime4j/stream/MimeConfig.java > ] > This prevents import of rfc2822 formatted messages. For example like so: > {{./bin/post -c dovecot -type 'message/rfc822' 'testdata/email_01.txt' > }} > And results in the following stacktrace: > java.lang.NoClassDefFoundError: > org/apache/james/mime4j/stream/MimeConfig$Builder > at > org.apache.tika.parser.mail.RFC822Parser.parse(RFC822Parser.java:63) > at > org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280) > at > org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280) > at > org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:135) > at > org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:228) > at > org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) >
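The NoClassDefFoundError in the report boils down to one class being absent from the bundled mime4j jar. A classpath can be probed for it directly; this is an illustrative check (class and method names here are invented for the sketch, not part of the patch):

```java
public class Mime4jProbe {
    // Returns true if the named class can be found on the current classpath
    // (initialize=false: we only want presence, not static initialization).
    static boolean hasClass(String name) {
        try {
            Class.forName(name, false, Mime4jProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Tika's RFC822Parser needs this class from a newer mime4j release:
        String needed = "org.apache.james.mime4j.stream.MimeConfig$Builder";
        System.out.println(needed + (hasClass(needed) ? " present" : " missing"));
    }
}
```

Run against a Solr 7.1 `server/solr-webapp` classpath this would report the class missing, which is exactly the condition the upgraded mime4j dependency fixes.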
[jira] [Updated] (SOLR-11458) Bugs in MoveReplicaCmd handling of failures
[ https://issues.apache.org/jira/browse/SOLR-11458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrzej Bialecki updated SOLR-11458:
-
Fix Version/s: master (8.0)
               7.2

> Bugs in MoveReplicaCmd handling of failures
> ---
>
> Key: SOLR-11458
> URL: https://issues.apache.org/jira/browse/SOLR-11458
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Affects Versions: 7.0, 7.0.1, 7.1, master (8.0)
> Reporter: Andrzej Bialecki
> Assignee: Andrzej Bialecki
> Fix For: 7.2, master (8.0)
>
> Spin-off from SOLR-11449:
> {quote}
> There's a section of code in moveNormalReplica that ensures that we don't lose a shard leader during a move. There's no corresponding protection in moveHdfsReplica, which means that moving a replica that is also a shard leader may potentially lead to data loss (eg. when replicationFactor=1).
> Also, there's no rollback strategy when moveHdfsReplica partially fails, unlike in moveNormalReplica, where the code simply skips deleting the original replica - it seems that the code should attempt to restore the original replica in this case? When RF=1 and such a failure occurs, not restoring the original replica means a lost shard.
> {quote}

--
This message was sent by Atlassian JIRA (v6.4.14#64029)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
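The failure mode quoted above is the classic "delete the source before the copy is proven" hazard. A toy sketch of the safe ordering (hypothetical code, not Solr's MoveReplicaCmd; `ReplicaMove` and the `copySucceeds` flag are invented to stand in for the real copy/recovery machinery):

```java
import java.util.HashMap;
import java.util.Map;

/** Toy model of "copy first, delete the source only on success". With this
 *  ordering, a partial failure leaves the original replica intact, so an
 *  RF=1 shard is never lost. */
public class ReplicaMove {
    final Map<String, String> replicas = new HashMap<>(); // node -> replica data

    /** Moves a replica from one node to another. copySucceeds simulates
     *  whether the copy/recovery step completed. */
    boolean moveReplica(String from, String to, boolean copySucceeds) {
        String data = replicas.get(from);
        if (data == null) {
            return false; // nothing to move
        }
        if (!copySucceeds) {
            // Failure path: nothing was deleted yet, so there is nothing
            // to roll back - the original replica survives.
            return false;
        }
        replicas.put(to, data);   // 1. the new copy is in place and healthy
        replicas.remove(from);    // 2. only now is the source deleted
        return true;
    }
}
```

The report's complaint is that moveHdfsReplica effectively inverts this ordering (no rollback after a partial failure), whereas moveNormalReplica at least skips deleting the original.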
[jira] [Commented] (SOLR-11622) Bundled mime4j library not sufficient for Tika requirement
[ https://issues.apache.org/jira/browse/SOLR-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277232#comment-16277232 ]

Tim Allison commented on SOLR-11622:

Finished analysis. Will submit PR to against your branch shortly. Working on {{ant precommit}} now.
[jira] [Comment Edited] (SOLR-11622) Bundled mime4j library not sufficient for Tika requirement
[ https://issues.apache.org/jira/browse/SOLR-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277232#comment-16277232 ]

Tim Allison edited comment on SOLR-11622 at 12/4/17 6:43 PM:
-

Finished analysis. Will submit PR against your branch shortly. Working on {{ant precommit}} now.

was (Author: talli...@mitre.org):
Finished analysis. Will submit PR to against your branch shortly. Working on {{ant precommit}} now.
[jira] [Resolved] (SOLR-11662) Make overlapping query term scoring configurable per field type
[ https://issues.apache.org/jira/browse/SOLR-11662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Smiley resolved SOLR-11662.
-
Resolution: Fixed
Assignee: David Smiley

Thanks Doug! BTW I have a question on the practical use of this option. In the docs you mention the default as_same_term is good for real synonyms and that the others are good for hyponyms. Let's say the synonyms file has a mix of both (typical). It seems impossible to use both, since the QueryBuilder passes no context other than the terms to build the query. Do you recommend different analyzer chains, one with regular synonyms and another with hypernyms, via perhaps SOLR-11698? Of course that'd be less efficient than one query with the right type of query per synonym clause; but that's elusive without some custom query parser that detects the types and handles it (not leveraging QueryBuilder, as it's not hackable).

> Make overlapping query term scoring configurable per field type
> ---
>
> Key: SOLR-11662
> URL: https://issues.apache.org/jira/browse/SOLR-11662
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Doug Turnbull
> Assignee: David Smiley
> Fix For: 7.2, master (8.0)
>
> This patch customizes the query-time behavior when query terms overlap positions. Right now the only option is SynonymQuery. This is a fantastic default & improvement on past versions. However, there are use cases where terms overlap positions but don't carry exact synonymy relationships. Often synonyms are actually used to model hypernym/hyponym relationships (via synonyms or other analyzers). So the individual term scores matter, with terms of higher specificity (hyponym) scoring higher than terms of lower specificity (hypernym).
> This patch adds the fieldType setting scoreOverlaps, as in:
> {code:java}
> class="solr.TextField" positionIncrementGap="100" multiValued="true">
> {code}
> Valid values for scoreOverlaps are:
>
> *as_one_term*
> Default, most synonym use cases. Uses SynonymQuery.
> Treats all terms as if they're exactly equivalent, with document frequency from the underlying terms blended.
>
> *pick_best*
> For a given document, score using the best-scoring synonym (i.e. dismax over the generated terms).
> Useful when synonyms are not exactly equivalent but instead model hypernym/hyponym relationships, so term scores should reflect specificity.
> I.e. with this query-time expansion
> tabby => tabby, cat, animal
> searching "text" generates the dismax (text:tabby | text:cat | text:animal)
>
> *as_distinct_terms*
> (The pre-6.0 behavior.)
> Compromise between pick_best and as_one_term.
> Appropriate when synonyms reflect a hypernym/hyponym relationship but scores should stack, so the more "tabby", "cat", or "animal" a document has the better, with a bias towards the term of highest specificity.
> Terms are turned into a boolean OR query, with document frequencies not blended.
> I.e. with this query-time expansion
> tabby => tabby, cat, animal
> searching "text" generates the boolean query (text:tabby text:cat text:animal)
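The difference between the last two settings is just how per-term scores are combined for a document: pick_best keeps the single best synonym score (dismax), while as_distinct_terms lets them stack (boolean OR). A toy illustration of that combination step (simplified stand-in; real scoring goes through Lucene's SynonymQuery, DisjunctionMaxQuery, and BooleanQuery, and as_one_term additionally blends document frequencies, which per-term scores alone can't show):

```java
import java.util.Arrays;

/** Toy combiner for per-term scores of an expansion like
 *  tabby => tabby, cat, animal. Not Lucene code. */
public class SynonymScoring {
    // pick_best: dismax over the generated terms -> best single score wins
    static double pickBest(double[] termScores) {
        return Arrays.stream(termScores).max().orElse(0.0);
    }

    // as_distinct_terms: boolean OR -> the term scores stack
    static double asDistinctTerms(double[] termScores) {
        return Arrays.stream(termScores).sum();
    }

    public static void main(String[] args) {
        // Hypothetical per-term scores for one document; the hyponym
        // "tabby" scores higher than the hypernym "animal".
        double[] scores = {2.0 /* tabby */, 1.2 /* cat */, 0.3 /* animal */};
        System.out.println(pickBest(scores));        // 2.0
        System.out.println(asDistinctTerms(scores)); // 3.5
    }
}
```

This also makes David's question concrete: a true synonym pair wants the as_one_term blending, while a hyponym chain wants one of these two combiners, and the field type can only pick one.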
[jira] [Commented] (SOLR-11622) Bundled mime4j library not sufficient for Tika requirement
[ https://issues.apache.org/jira/browse/SOLR-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277213#comment-16277213 ]

ASF GitHub Bot commented on SOLR-11622:
---

Github user mrkarthik closed the pull request at: https://github.com/apache/lucene-solr/pull/282
[jira] [Commented] (LUCENE-8015) TestBasicModelIne.testRandomScoring failure
[ https://issues.apache.org/jira/browse/LUCENE-8015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277214#comment-16277214 ]

Adrien Grand commented on LUCENE-8015:
--

I have been looking into the following model G failure.

{noformat}
7.0E-45 = score(DFRSimilarity, doc=0, freq=0.9994), computed from:
    1.4E-45 = boost
    3.09640771E16 = NormalizationH1, computed from:
        0.9994 = tf
        1.61490253E9 = avgFieldLength
        112.0 = len
    9.2892231E16 = BasicModelG, computed from:
        12.0 = numberOfDocuments
        1.0 = totalTermFreq
    4.8443234E-17 = AfterEffectB, computed from:
        3.09640771E16 = tfn
        1.0 = totalTermFreq
        1.0 = docFreq

5.6E-45 = score(DFRSimilarity, doc=0, freq=1.0), computed from:
    1.4E-45 = boost
    3.09640792E16 = NormalizationH1, computed from:
        1.0 = tf
        1.61490253E9 = avgFieldLength
        112.0 = len
    9.289224E16 = BasicModelG, computed from:
        12.0 = numberOfDocuments
        1.0 = totalTermFreq
    4.844323E-17 = AfterEffectB, computed from:
        3.09640792E16 = tfn
        1.0 = totalTermFreq
        1.0 = docFreq

DFR GB1
field="field",maxDoc=46519,docCount=12,sumTotalTermFreq=19378830951,sumDocFreq=19378830951
term="term",docFreq=1,totalTermFreq=1
norm=59 (doc length ~ 112)
freq=1.0

NOTE: reproduce with: ant test -Dtestcase=TestBasicModelG -Dtests.method=testRandomScoring -Dtests.seed=3C22B051C61EEC84 -Dtests.locale=cs-CZ -Dtests.timezone=Atlantic/Madeira -Dtests.asserts=true -Dtests.file.encoding=UTF-8
{noformat}

In short, the scoring formula here looks like {{(A + B * tfn) * (C / (tfn + 1))}}, where A, B and C are constants. This function increases as tfn increases whenever B > A, which is always the case. The problem is that tfn is so large (ulp(tfn) = 4) that {{tfn + 1}} always returns {{tfn}}, and {{A + B * tfn}} always returns the same as {{B * tfn}}. So when tfn gets high, the formula is effectively {{(B * tfn) * (C / tfn)}}. This is a constant, but since we compute the left and right parts independently, the result might decrease when tfn increases about half the time.
Even though I triggered it with BasicModelG, I suspect it affects almost all DFRSimilarity impls.

> TestBasicModelIne.testRandomScoring failure
> ---
>
> Key: LUCENE-8015
> URL: https://issues.apache.org/jira/browse/LUCENE-8015
> Project: Lucene - Core
> Issue Type: Task
> Reporter: Adrien Grand
> Attachments: LUCENE-8015_test_fangs.patch
>
> reproduce with: ant test -Dtestcase=TestBasicModelIne -Dtests.method=testRandomScoring -Dtests.seed=86E85958B1183E93 -Dtests.slow=true -Dtests.locale=vi-VN -Dtests.timezone=Pacific/Tongatapu -Dtests.asserts=true -Dtests.file.encoding=UTF8
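The absorption at the heart of the analysis ({{tfn + 1}} returning {{tfn}}) is easy to reproduce in isolation, using the tfn value from the failing explain output above (standalone demo, not the similarity code itself):

```java
public class UlpDemo {
    public static void main(String[] args) {
        // tfn taken from the failing score explanation
        float tfn = 3.09640771e16f;
        // At this magnitude, adjacent float values are roughly two billion
        // apart, so adding 1.0f changes nothing:
        System.out.println(tfn + 1f == tfn); // true
        // Math.ulp gives the spacing between adjacent floats here:
        System.out.println(Math.ulp(tfn));
    }
}
```

Once `tfn + 1` collapses to `tfn`, the formula's numerator and denominator no longer track each other exactly, which is why the independently computed halves can make the score non-monotonic in tfn.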
[jira] [Resolved] (SOLR-11687) SolrCore.getNewIndexDir falsely returns {dataDir}/index on any IOException reading index.properties
[ https://issues.apache.org/jira/browse/SOLR-11687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Erick Erickson resolved SOLR-11687.
---
Resolution: Fixed
Fix Version/s: 7.2

> SolrCore.getNewIndexDir falsely returns {dataDir}/index on any IOException reading index.properties
> ---
>
> Key: SOLR-11687
> URL: https://issues.apache.org/jira/browse/SOLR-11687
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Erick Erickson
> Assignee: Erick Erickson
> Fix For: 7.2
> Attachments: SOLR-11687.patch, SOLR-11687.patch, SOLR-11687.patch, SOLR-11687_alt.patch
>
> I'll link the originating Solr JIRA in a minute (many thanks Nikolay).
> Right at the top of this method we have this:
> {code}
> String result = dataDir + "index/";
> {code}
> If, for any reason, the method doesn't complete properly, this "result" is still returned. Now, for instance, down in SolrCore.cleanupOldIndexDirectories the "old" directory is dataDir/index, which may point to the current index. This seems particularly dangerous:
> {code}
> try {
>   p.load(new InputStreamReader(is, StandardCharsets.UTF_8));
>   String s = p.getProperty("index");
>   if (s != null && s.trim().length() > 0) {
>     result = dataDir + s;
>   }
> } catch (Exception e) {
>   log.error("Unable to load " + IndexFetcher.INDEX_PROPERTIES, e);
> } finally {
>   IOUtils.closeQuietly(is);
> }
> {code}
> Should "p.load" fail for any reason whatsoever, we'll still return dataDir/index.
> Anyone want to chime in on what the expectations are here before I dive in?
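The hazard described is the "default initialized before the fallible read" pattern: the fallback value is returned even when index.properties exists but can't be read. A minimal sketch of the fail-loud alternative (hypothetical; `readIndexDirName` and its signature are invented for illustration and are not the committed fix):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class IndexDirReader {
    /** Parses index.properties content and returns the configured index
     *  directory. Fails loudly instead of silently falling back to
     *  dataDir + "index/", so callers can't mistake a read failure for
     *  "no custom index dir configured". */
    static String readIndexDirName(String propsText, String dataDir) throws IOException {
        Properties p = new Properties();
        p.load(new StringReader(propsText)); // propagate parse/IO failures
        String s = p.getProperty("index");
        if (s == null || s.trim().isEmpty()) {
            throw new IOException("index.properties present but has no usable 'index' entry");
        }
        return dataDir + s;
    }
}
```

With this shape, cleanupOldIndexDirectories can distinguish "the properties file said dataDir/index" from "we couldn't read the properties file", which is the distinction the issue says is missing.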
[jira] [Commented] (SOLR-11662) Make overlapping query term scoring configurable per field type
[ https://issues.apache.org/jira/browse/SOLR-11662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277202#comment-16277202 ]

ASF subversion and git services commented on SOLR-11662:

Commit ee896ec6ac103220c311421147d290124ab3df74 in lucene-solr's branch refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ee896ec ]

SOLR-11662: synonymQueryStyle option for FieldType used by query parser
(cherry picked from commit 83753d0)
[jira] [Commented] (SOLR-11662) Make overlapping query term scoring configurable per field type
[ https://issues.apache.org/jira/browse/SOLR-11662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16277199#comment-16277199 ]

ASF subversion and git services commented on SOLR-11662:

Commit 83753d0a2ae5bdd00649f43e355b5a43c6709917 in lucene-solr's branch refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=83753d0 ]

SOLR-11662: synonymQueryStyle option for FieldType used by query parser