RE: [JENKINS] Solr-Artifacts-6.x - Build # 218 - Still Failing
Hi,

there is something wrong since the last 2 Solr Artifact builds. I have no idea why the Lucene Artifact builds succeed - and also why the tests succeed - but when building Solr's artifacts (ant prepare-release-no-sign in Solr's subdir) it breaks with this compile error:

[solr] $ /home/jenkins/tools/ant/apache-ant-1.8.4/bin/ant -file build.xml -Dlucene.javadoc.url=https://builds.apache.org/job/Lucene-Artifacts-trunk/javadoc/ -Dversion.suffix=218 prepare-release-no-sign
Buildfile: /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/solr/build.xml

Maybe some changes in dependencies are not correctly resolved if you purely run Solr's part of the build (with a clean checkout).

Uwe

-----
Uwe Schindler
Achterdiek 19, D-28357 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

> -----Original Message-----
> From: Apache Jenkins Server [mailto:jenk...@builds.apache.org]
> Sent: Sunday, January 8, 2017 11:30 PM
> To: dev@lucene.apache.org
> Subject: [JENKINS] Solr-Artifacts-6.x - Build # 218 - Still Failing
>
> Build: https://builds.apache.org/job/Solr-Artifacts-6.x/218/
>
> No tests ran.
>
> Build Log:
> [...truncated 480 lines...]
> [javac] Compiling 77 source files to /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/build/suggest/classes/java
> [javac] /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/suggest/src/java/org/apache/lucene/search/suggest/DocumentValueSourceDictionary.java:26: error: package org.apache.lucene.queries.function does not exist
> [javac] import org.apache.lucene.queries.function.ValueSource;
> [javac]                                          ^
> [javac] /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/suggest/src/java/org/apache/lucene/search/suggest/DocumentValueSourceDictionary.java:77: error: cannot find symbol
> [javac]     ValueSource weightsValueSource, String payload, String contexts) {
> [javac]     ^
> [javac]   symbol:   class ValueSource
> [javac]   location: class DocumentValueSourceDictionary
> [javac] /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/suggest/src/java/org/apache/lucene/search/suggest/DocumentValueSourceDictionary.java:104: error: cannot find symbol
> [javac]     ValueSource weightsValueSource, String payload) {
> [javac]     ^
> [javac]   symbol:   class ValueSource
> [javac]   location: class DocumentValueSourceDictionary
> [javac] /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/suggest/src/java/org/apache/lucene/search/suggest/DocumentValueSourceDictionary.java:130: error: cannot find symbol
> [javac]     ValueSource weightsValueSource) {
> [javac]     ^
> [javac]   symbol:   class ValueSource
> [javac]   location: class DocumentValueSourceDictionary
> [javac] Note: /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/suggest/src/java/org/apache/lucene/search/suggest/jaspell/JaspellLookup.java uses or overrides a deprecated API.
> [javac] Note: Recompile with -Xlint:deprecation for details.
> [javac] 4 errors
>
> BUILD FAILED
> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/solr/build.xml:539: The following error occurred while executing this line:
> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/solr/common-build.xml:418: The following error occurred while executing this line:
> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/module-build.xml:670: The following error occurred while executing this line:
> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/common-build.xml:501: The following error occurred while executing this line:
> /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/common-build.xml:1955: Compile failed; see the compiler error output for details.
>
> Total time: 31 seconds
> Build step 'Invoke Ant' marked build as failure
> Archiving artifacts
> Publishing Javadoc
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_112) - Build # 2625 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2625/
Java: 32bit/jdk1.8.0_112 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPISolrJTest.testSplitShard

Error Message:
Error from server at https://127.0.0.1:33422/solr: Could not fully remove collection: solrj_test_splitshard

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:33422/solr: Could not fully remove collection: solrj_test_splitshard
    at __randomizedtesting.SeedInfo.seed([D943370886737B42:2499A64988647FD]:0)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:610)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:435)
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:387)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1344)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1095)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1037)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
    at org.apache.solr.cloud.CollectionsAPISolrJTest.testSplitShard(CollectionsAPISolrJTest.java:143)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1207 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1207/

9 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testUpdateLogSynchronisation

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard1

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete @source_collection:shard1
    at __randomizedtesting.SeedInfo.seed([7F748095DCCAFB69:811BD8361EEAD878]:0)
    at org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:795)
    at org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testUpdateLogSynchronisation(CdcrReplicationDistributedZkTest.java:376)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_112) - Build # 18732 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18732/
Java: 32bit/jdk1.8.0_112 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPISolrJTest.testAddAndDeleteReplicaProp

Error Message:
Error from server at https://127.0.0.1:39249/solr: Could not fully create collection: replicaProperties

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:39249/solr: Could not fully create collection: replicaProperties
    at __randomizedtesting.SeedInfo.seed([3375EA1E46ACD2E4:F7AE55F6FD832188]:0)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:439)
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:391)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1344)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1095)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1037)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
    at org.apache.solr.cloud.CollectionsAPISolrJTest.testAddAndDeleteReplicaProp(CollectionsAPISolrJTest.java:309)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
[jira] [Commented] (SOLR-6092) Provide a REST managed QueryElevationComponent
[ https://issues.apache.org/jira/browse/SOLR-6092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15810679#comment-15810679 ]

jefferyyuan commented on SOLR-6092:
-----------------------------------

Vote for this. We can already manage stop words and synonyms - why not query elevation, which is much more useful? Thanks.

> Provide a REST managed QueryElevationComponent
> ----------------------------------------------
>
>                 Key: SOLR-6092
>                 URL: https://issues.apache.org/jira/browse/SOLR-6092
>             Project: Solr
>          Issue Type: New Feature
>            Reporter: Timothy Potter
>            Priority: Minor
>
> Provide a managed query elevation component to allow CRUD operations from a
> REST API.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_112) - Build # 6345 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6345/
Java: 32bit/jdk1.8.0_112 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingDVOnly

Error Message:
Unexpected number of elements in the group for intGSF: 8

Stack Trace:
java.lang.AssertionError: Unexpected number of elements in the group for intGSF: 8
    at __randomizedtesting.SeedInfo.seed([636876FB286E4E94:F8D318A365367CCA]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingDVOnly(DocValuesNotIndexedTest.java:376)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
    at java.lang.Thread.run(Thread.java:745)

Build Log:
[...truncated 12022 lines...]
   [junit4] Suite: org.apache.solr.cloud.DocValuesNotIndexedTest
   [junit4]   2> Creating dataDir:
[jira] [Commented] (SOLR-9867) The schemaless example can not be started after being stopped.
[ https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15810664#comment-15810664 ]

Varun Thacker commented on SOLR-9867:
-------------------------------------

The patch didn't work because {{SolrCLI}} (line 2704) makes a call to {{admin/cores?action=STATUS=techproducts}} to check whether the core exists. In {{HttpSolrCall}}, {{admin}} requests are handled differently:

{code}
if (handler != null) {
  solrReq = SolrRequestParsers.DEFAULT.parse(null, path, req);
  solrReq.getContext().put(CoreContainer.class.getName(), cores);
  requestType = RequestType.ADMIN;
  action = ADMIN;
  return;
}
{code}

> The schemaless example can not be started after being stopped.
> --------------------------------------------------------------
>
>                 Key: SOLR-9867
>                 URL: https://issues.apache.org/jira/browse/SOLR-9867
>             Project: Solr
>          Issue Type: Bug
>   Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Mark Miller
>             Fix For: master (7.0), 6.4
>
>         Attachments: SOLR-9867.patch, SOLR-9867.patch
>
>
> I'm having trouble when I start up the schemaless example after shutting down.
> I first tracked this down to the fact that the run example tool is getting an
> error when it tries to create the SolrCore (again, it already exists) and so
> it deletes the cores instance dir which leads to tlog and index lock errors
> in Solr.
> The reason it seems to be trying to create the core when it already exists is
> that the run example tool uses a core status call to check existence and
> because the core is loading, we don't consider it as existing. I added a
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the core
> was still loading. It appears CoreContainer#getCore is supposed to be
> blocking so you don't have this problem, but there must be an issue, because
> it is not blocking.
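As an aside, the admin-vs-core routing distinction described in the comment above can be sketched with a tiny self-contained model. The names here are illustrative only (this is not Solr's actual HttpSolrCall code); the point is simply that paths under /admin/ are classified as admin requests up front and never resolved against a core, which is why the STATUS check behaves differently from an ordinary core request.

```java
// Toy model of the routing idea: /admin/ paths are classified as ADMIN and
// take a separate dispatch path; everything else is treated as a core request.
public class AdminRoutingSketch {
    enum RequestType { ADMIN, QUERY }

    // Classify a request path; mirrors the special-casing described above.
    static RequestType classify(String path) {
        return path.startsWith("/admin/") ? RequestType.ADMIN : RequestType.QUERY;
    }

    public static void main(String[] args) {
        System.out.println(classify("/admin/cores"));         // prints ADMIN
        System.out.println(classify("/techproducts/select")); // prints QUERY
    }
}
```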
[jira] [Commented] (LUCENE-7619) Add WordDelimiterGraphFilter
[ https://issues.apache.org/jira/browse/LUCENE-7619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15810645#comment-15810645 ]

David Smiley commented on LUCENE-7619:
--------------------------------------

Very cool!

> Add WordDelimiterGraphFilter
> ----------------------------
>
>                 Key: LUCENE-7619
>                 URL: https://issues.apache.org/jira/browse/LUCENE-7619
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>             Fix For: master (7.0), 6.5
>
>         Attachments: LUCENE-7619.patch, after.png, before.png
>
>
> Currently, {{WordDelimiterFilter}} doesn't try to set the {{posLen}}
> attribute and so it creates graphs like this:
> !before.png!
> but with this patch (still a work in progress) it creates this graph instead:
> !after.png!
> This means (today) positional queries when using WDF at search time are
> buggy, but since we fixed LUCENE-7603, with this change here you should be
> able to use positional queries with WDGF.
> I'm also trying to produce holes properly (removes logic from the current WDF
> that swallows a hole when the whole token is just delimiters).
> Surprisingly, it's actually quite easy to tweak WDF to create a graph (unlike
> e.g. {{SynonymGraphFilter}}) because it's already creating the necessary new
> positions, and its output graph never has side paths, except for single
> tokens that skip nodes because they have {{posLen > 1}}. I.e. the only fix
> to make, I think, is to set {{posLen}} properly. And it really helps that it
> does its own "new token buffering + sorting" already.
[jira] [Comment Edited] (SOLR-9867) The schemaless example can not be started after being stopped.
[ https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15810626#comment-15810626 ]

Varun Thacker edited comment on SOLR-9867 at 1/9/17 4:15 AM:
-------------------------------------------------------------

I tried out the patch against master and I get this error:

{code}
[master] ~/apache-work/lucene-solr/solr$ ./bin/solr start -e techproducts
[master] ~/apache-work/lucene-solr/solr$ ./bin/solr stop
[master] ~/apache-work/lucene-solr/solr$ ./bin/solr start -e techproducts
Solr home directory /Users/varun/apache-work/lucene-solr/solr/example/techproducts/solr already exists.

Starting up Solr on port 8983 using command:
bin/solr start -p 8983 -s "example/techproducts/solr"

Archiving 1 old GC log files to /Users/varun/apache-work/lucene-solr/solr/example/techproducts/solr/../logs/archived
Archiving 1 console log files to /Users/varun/apache-work/lucene-solr/solr/example/techproducts/solr/../logs/archived
Rotating solr logs, keeping a max of 9 generations
Waiting up to 180 seconds to see Solr running on port 8983 [\]
Started Solr server on port 8983 (pid=53397). Happy searching!

Creating new core 'techproducts' using command:
http://localhost:8983/solr/admin/cores?action=CREATE=techproducts=techproducts

ERROR: Error CREATEing SolrCore 'techproducts': Could not create a new core in /Users/varun/apache-work/lucene-solr/solr/example/techproducts/solr/techproducts as another core is already defined there

ERROR: Failed to create techproducts using command: [-name, techproducts, -shards, 1, -replicationFactor, 1, -confname, techproducts, -confdir, sample_techproducts_configs, -configsetsDir, /Users/varun/apache-work/lucene-solr/solr/server/solr/configsets, -solrUrl, http://localhost:8983/solr]
{code}

was (Author: varunthacker):
I tried out the patch against master and I get this error:

{code}
[master] ~/apache-work/lucene-solr/solr$ ./bin/solr start -e techproducts
Solr home directory /Users/varun/apache-work/lucene-solr/solr/example/techproducts/solr already exists.

Starting up Solr on port 8983 using command:
bin/solr start -p 8983 -s "example/techproducts/solr"

Archiving 1 old GC log files to /Users/varun/apache-work/lucene-solr/solr/example/techproducts/solr/../logs/archived
Archiving 1 console log files to /Users/varun/apache-work/lucene-solr/solr/example/techproducts/solr/../logs/archived
Rotating solr logs, keeping a max of 9 generations
Waiting up to 180 seconds to see Solr running on port 8983 [\]
Started Solr server on port 8983 (pid=53397). Happy searching!

Creating new core 'techproducts' using command:
http://localhost:8983/solr/admin/cores?action=CREATE=techproducts=techproducts

ERROR: Error CREATEing SolrCore 'techproducts': Could not create a new core in /Users/varun/apache-work/lucene-solr/solr/example/techproducts/solr/techproducts as another core is already defined there

ERROR: Failed to create techproducts using command: [-name, techproducts, -shards, 1, -replicationFactor, 1, -confname, techproducts, -confdir, sample_techproducts_configs, -configsetsDir, /Users/varun/apache-work/lucene-solr/solr/server/solr/configsets, -solrUrl, http://localhost:8983/solr]
{code}

> The schemaless example can not be started after being stopped.
> --------------------------------------------------------------
>
>                 Key: SOLR-9867
>                 URL: https://issues.apache.org/jira/browse/SOLR-9867
>             Project: Solr
>          Issue Type: Bug
>   Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Mark Miller
>             Fix For: master (7.0), 6.4
>
>         Attachments: SOLR-9867.patch, SOLR-9867.patch
>
>
> I'm having trouble when I start up the schemaless example after shutting down.
> I first tracked this down to the fact that the run example tool is getting an
> error when it tries to create the SolrCore (again, it already exists) and so
> it deletes the cores instance dir which leads to tlog and index lock errors
> in Solr.
> The reason it seems to be trying to create the core when it already exists is
> that the run example tool uses a core status call to check existence and
> because the core is loading, we don't consider it as existing. I added a
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the core
> was still loading. It appears CoreContainer#getCore is supposed to be
> blocking so you don't have this problem, but there must be an issue, because
> it is not blocking.
[jira] [Commented] (SOLR-9867) The schemaless example can not be started after being stopped.
[ https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15810626#comment-15810626 ] Varun Thacker commented on SOLR-9867: - I tried out the patch against master and I get this error {code} [master] ~/apache-work/lucene-solr/solr$ ./bin/solr start -e techproducts Solr home directory /Users/varun/apache-work/lucene-solr/solr/example/techproducts/solr already exists. Starting up Solr on port 8983 using command: bin/solr start -p 8983 -s "example/techproducts/solr" Archiving 1 old GC log files to /Users/varun/apache-work/lucene-solr/solr/example/techproducts/solr/../logs/archived Archiving 1 console log files to /Users/varun/apache-work/lucene-solr/solr/example/techproducts/solr/../logs/archived Rotating solr logs, keeping a max of 9 generations Waiting up to 180 seconds to see Solr running on port 8983 [\] Started Solr server on port 8983 (pid=53397). Happy searching! Creating new core 'techproducts' using command: http://localhost:8983/solr/admin/cores?action=CREATE=techproducts=techproducts ERROR: Error CREATEing SolrCore 'techproducts': Could not create a new core in /Users/varun/apache-work/lucene-solr/solr/example/techproducts/solr/techproductsas another core is already defined there ERROR: Failed to create techproducts using command: [-name, techproducts, -shards, 1, -replicationFactor, 1, -confname, techproducts, -confdir, sample_techproducts_configs, -configsetsDir, /Users/varun/apache-work/lucene-solr/solr/server/solr/configsets, -solrUrl, http://localhost:8983/solr] {code} > The schemaless example can not be started after being stopped. > -- > > Key: SOLR-9867 > URL: https://issues.apache.org/jira/browse/SOLR-9867 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. 
Issues are Public) >Reporter: Mark Miller > Fix For: master (7.0), 6.4 > > Attachments: SOLR-9867.patch, SOLR-9867.patch > > > I'm having trouble when I start up the schemaless example after shutting down. > I first tracked this down to the fact that the run example tool is getting an > error when it tries to create the SolrCore (again, it already exists) and so > it deletes the cores instance dir which leads to tlog and index lock errors > in Solr. > The reason it seems to be trying to create the core when it already exists is > that the run example tool uses a core status call to check existence and > because the core is loading, we don't consider it as existing. I added a > check to look for core.properties. > That seemed to let me start up, but my first requests failed because the core > was still loading. It appears CoreContainer#getCore is supposed to be > blocking so you don't have this problem, but there must be an issue, because > it is not blocking.
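The existence check Mark describes (falling back to a core.properties file on disk, rather than trusting a core STATUS response while the core is still loading) can be sketched roughly as follows. The `CoreExistenceCheck` class and the paths used are illustrative stand-ins, not the actual patch:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CoreExistenceCheck {

    /**
     * Treat a core as "existing" when its instance directory contains a
     * core.properties file. Unlike a CoreAdmin STATUS call, this also covers
     * a core that is defined on disk but has not finished loading yet.
     */
    public static boolean coreExists(Path solrHome, String coreName) {
        return Files.exists(solrHome.resolve(coreName).resolve("core.properties"));
    }

    public static void main(String[] args) {
        Path solrHome = Paths.get("example/techproducts/solr"); // hypothetical solr home
        System.out.println(coreExists(solrHome, "techproducts"));
    }
}
```

A check like this avoids the failure mode above, where the run example tool sees no core in the STATUS response, tries to CREATE it again, and collides with the core.properties already on disk.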
[jira] [Commented] (SOLR-9934) SolrTestCase.clearIndex should ensure IndexWriter.deleteAll is called
[ https://issues.apache.org/jira/browse/SOLR-9934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15810624#comment-15810624 ] David Smiley commented on SOLR-9934: +1 nice explanation and javadocs > SolrTestCase.clearIndex should ensure IndexWriter.deleteAll is called > - > > Key: SOLR-9934 > URL: https://issues.apache.org/jira/browse/SOLR-9934 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Hoss Man >Assignee: Hoss Man > Attachments: SOLR-9934.patch > > > Normal deleteByQuery commands are subject to version constraint checks due to > the possibility of out of order updates, but DUH2 has special support > (triggered by {{version=-Long.MAX_VALUE}} for use by tests to override these > version constraints and do a low level {{IndexWriter.deleteAll()}} call. A > handful of tests override {{SolrTestCaseJ4.clearIndex()}} to take advantage > of this (using copy/pasted impls), but given the intended purpose/usage of > {{SolrTestCaseJ4.clearIndex()}}, it seems like the base method in > {{SolrTestCaseJ4}} should itself trigger this low level deletion, so tests > get this behavior automatically.
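The test-only override Hoss describes can be pictured with a minimal stand-in: a delete carrying the sentinel version -Long.MAX_VALUE bypasses the version-constrained path and maps straight to a low-level deleteAll(). The `Writer` interface and `delete` dispatcher below are hypothetical stubs sketching the described behavior, not DirectUpdateHandler2's real API:

```java
public class DeleteDispatchSketch {

    /** Minimal stand-in for the update handler's writer side. */
    public interface Writer {
        void deleteAll();                            // low-level: drop every document
        void deleteByQuery(String q, long version);  // normal, version-checked path
    }

    /**
     * Mirrors the dispatch described in the issue: a delete carrying
     * version = -Long.MAX_VALUE is treated as a test-only override that
     * skips version constraints entirely and wipes the index.
     */
    public static void delete(Writer w, String query, long version) {
        if (version == -Long.MAX_VALUE) {
            w.deleteAll();
        } else {
            w.deleteByQuery(query, version);
        }
    }
}
```

The point of the patch is that this sentinel path should be taken by the base clearIndex() itself, so individual tests no longer need copy/pasted overrides to get a truly empty index.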
[jira] [Resolved] (SOLR-9777) IndexFingerprinting: use getCombinedCoreAndDeletesKey() instead of getCoreCacheKey() for per-segment caching
[ https://issues.apache.org/jira/browse/SOLR-9777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ishan Chattopadhyaya resolved SOLR-9777. Resolution: Fixed > IndexFingerprinting: use getCombinedCoreAndDeletesKey() instead of > getCoreCacheKey() for per-segment caching > - > > Key: SOLR-9777 > URL: https://issues.apache.org/jira/browse/SOLR-9777 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Ishan Chattopadhyaya >Assignee: Ishan Chattopadhyaya > Fix For: master (7.0), 6.4 > > Attachments: SOLR-9777.patch > > > [Note: Had initially posted to SOLR-9506, but now moved here] > While working on SOLR-5944, I realized that the current per segment caching > logic works fine for deleted documents (due to comparison of numDocs in a > segment for the criterion of cache hit/miss). However, if a segment has > docValues updates, the same logic is insufficient. It is my understanding > that changing the key for caching from reader().getCoreCacheKey() to > reader().getCombinedCoreAndDeletesKey() would work here, since the docValues > updates are internally handled using deletion queue and hence the "combined" > core and deletes key would work here. Attaching a patch for the same.
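The effect of the key change can be seen with a toy per-segment fingerprint cache. A key that is stable across docValues updates (like the core cache key) keeps serving the old cached value, while a key that also reflects deletes (the combined core-and-deletes key) is a fresh object after an update and forces recomputation. The key objects below are plain stand-ins for reader().getCoreCacheKey() / reader().getCombinedCoreAndDeletesKey():

```java
import java.util.HashMap;
import java.util.Map;

public class FingerprintCacheSketch {

    // Per-segment fingerprints, keyed on an opaque reader cache key.
    private final Map<Object, Long> cache = new HashMap<>();

    /** Return the cached fingerprint for this key, computing it only on a miss. */
    public long fingerprint(Object readerKey, long freshlyComputed) {
        return cache.computeIfAbsent(readerKey, k -> freshlyComputed);
    }
}
```

With the core cache key, a segment that received an in-place docValues update presents the same key object, so the stale fingerprint is returned from cache; with the combined key, the update yields a new key and the fingerprint is recomputed, which is exactly what the patch relies on.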
[jira] [Updated] (SOLR-9777) IndexFingerprinting: use getCombinedCoreAndDeletesKey() instead of getCoreCacheKey() for per-segment caching
[ https://issues.apache.org/jira/browse/SOLR-9777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ishan Chattopadhyaya updated SOLR-9777: --- Fix Version/s: 6.4 master (7.0) > IndexFingerprinting: use getCombinedCoreAndDeletesKey() instead of > getCoreCacheKey() for per-segment caching > - > > Key: SOLR-9777 > URL: https://issues.apache.org/jira/browse/SOLR-9777 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Ishan Chattopadhyaya >Assignee: Ishan Chattopadhyaya > Fix For: master (7.0), 6.4 > > Attachments: SOLR-9777.patch > > > [Note: Had initially posted to SOLR-9506, but now moved here] > While working on SOLR-5944, I realized that the current per segment caching > logic works fine for deleted documents (due to comparison of numDocs in a > segment for the criterion of cache hit/miss). However, if a segment has > docValues updates, the same logic is insufficient. It is my understanding > that changing the key for caching from reader().getCoreCacheKey() to > reader().getCombinedCoreAndDeletesKey() would work here, since the docValues > updates are internally handled using deletion queue and hence the "combined" > core and deletes key would work here. Attaching a patch for the same.
[jira] [Commented] (SOLR-9777) IndexFingerprinting: use getCombinedCoreAndDeletesKey() instead of getCoreCacheKey() for per-segment caching
[ https://issues.apache.org/jira/browse/SOLR-9777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15810507#comment-15810507 ] ASF subversion and git services commented on SOLR-9777: --- Commit b0177312032e039673bfbbd42cd1dca09fb93833 in lucene-solr's branch refs/heads/master from [~ichattopadhyaya] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b017731 ] SOLR-9777: IndexFingerprinting should use getCombinedCoreAndDeletesKey() instead of getCoreCacheKey() for per-segment caching > IndexFingerprinting: use getCombinedCoreAndDeletesKey() instead of > getCoreCacheKey() for per-segment caching > - > > Key: SOLR-9777 > URL: https://issues.apache.org/jira/browse/SOLR-9777 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Ishan Chattopadhyaya >Assignee: Ishan Chattopadhyaya > Attachments: SOLR-9777.patch > > > [Note: Had initially posted to SOLR-9506, but now moved here] > While working on SOLR-5944, I realized that the current per segment caching > logic works fine for deleted documents (due to comparison of numDocs in a > segment for the criterion of cache hit/miss). However, if a segment has > docValues updates, the same logic is insufficient. It is my understanding > that changing the key for caching from reader().getCoreCacheKey() to > reader().getCombinedCoreAndDeletesKey() would work here, since the docValues > updates are internally handled using deletion queue and hence the "combined" > core and deletes key would work here. Attaching a patch for the same.
[jira] [Commented] (SOLR-9777) IndexFingerprinting: use getCombinedCoreAndDeletesKey() instead of getCoreCacheKey() for per-segment caching
[ https://issues.apache.org/jira/browse/SOLR-9777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15810508#comment-15810508 ] ASF subversion and git services commented on SOLR-9777: --- Commit 1c943be5ed2894baa37f69b6273e1fbe15e72d5d in lucene-solr's branch refs/heads/branch_6x from [~ichattopadhyaya] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1c943be ] SOLR-9777: IndexFingerprinting should use getCombinedCoreAndDeletesKey() instead of getCoreCacheKey() for per-segment caching > IndexFingerprinting: use getCombinedCoreAndDeletesKey() instead of > getCoreCacheKey() for per-segment caching > - > > Key: SOLR-9777 > URL: https://issues.apache.org/jira/browse/SOLR-9777 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Ishan Chattopadhyaya >Assignee: Ishan Chattopadhyaya > Attachments: SOLR-9777.patch > > > [Note: Had initially posted to SOLR-9506, but now moved here] > While working on SOLR-5944, I realized that the current per segment caching > logic works fine for deleted documents (due to comparison of numDocs in a > segment for the criterion of cache hit/miss). However, if a segment has > docValues updates, the same logic is insufficient. It is my understanding > that changing the key for caching from reader().getCoreCacheKey() to > reader().getCombinedCoreAndDeletesKey() would work here, since the docValues > updates are internally handled using deletion queue and hence the "combined" > core and deletes key would work here. Attaching a patch for the same.
[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 606 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/606/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest Error Message: ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130) at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202) at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137) at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94) at org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102) at sun.reflect.GeneratedConstructorAccessor130.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:753) at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:815) at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1065) at org.apache.solr.core.SolrCore.<init>(SolrCore.java:930) at org.apache.solr.core.SolrCore.<init>(SolrCore.java:823) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:889) at org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:541) at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at 
java.lang.Thread.run(Thread.java:745) Stack Trace: java.lang.AssertionError: ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130) at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202) at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137) at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94) at org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102) at sun.reflect.GeneratedConstructorAccessor130.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:753) at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:815) at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1065) at org.apache.solr.core.SolrCore.<init>(SolrCore.java:930) at org.apache.solr.core.SolrCore.<init>(SolrCore.java:823) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:889) at org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:541) at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) at __randomizedtesting.SeedInfo.seed([5BE7C0F88B77EAA4]:0) at org.junit.Assert.fail(Assert.java:93) at 
org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:266) at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[JENKINS] Lucene-Solr-Tests-6.x - Build # 662 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/662/ 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest Error Message: ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130) at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202) at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137) at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94) at org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102) at sun.reflect.GeneratedConstructorAccessor120.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:753) at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:815) at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1065) at org.apache.solr.core.SolrCore.<init>(SolrCore.java:930) at org.apache.solr.core.SolrCore.<init>(SolrCore.java:823) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:889) at org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:541) at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Stack Trace: 
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43) at org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130) at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202) at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137) at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94) at org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102) at sun.reflect.GeneratedConstructorAccessor120.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:753) at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:815) at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1065) at org.apache.solr.core.SolrCore.<init>(SolrCore.java:930) at org.apache.solr.core.SolrCore.<init>(SolrCore.java:823) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:889) at org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:541) at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) at __randomizedtesting.SeedInfo.seed([FCF41DD081C196DE]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at 
org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:266) at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 631 - Still unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/631/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC 2 tests failed. FAILED: org.apache.solr.cloud.HttpPartitionTest.test Error Message: Doc with id=2 not found in http://127.0.0.1:58637/c8n_1x2_leader_session_loss due to: Path not found: /id; rsp={doc=null} Stack Trace: java.lang.AssertionError: Doc with id=2 not found in http://127.0.0.1:58637/c8n_1x2_leader_session_loss due to: Path not found: /id; rsp={doc=null} at __randomizedtesting.SeedInfo.seed([168A5458188EA161:9EDE6B82B672CC99]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:620) at org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:575) at org.apache.solr.cloud.HttpPartitionTest.testLeaderZkSessionLoss(HttpPartitionTest.java:523) at org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:136) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at
[JENKINS] Lucene-Solr-Tests-master - Build # 1602 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1602/ 1 tests failed. FAILED: org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenCancelFail Error Message: expected:<200> but was:<404> Stack Trace: java.lang.AssertionError: expected:<200> but was:<404> at __randomizedtesting.SeedInfo.seed([70F432290077E31:6FB07608409D6CDD]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.cancelDelegationToken(TestDelegationWithHadoopAuth.java:128) at org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenCancelFail(TestDelegationWithHadoopAuth.java:280) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367) at
[JENKINS] Solr-Artifacts-6.x - Build # 218 - Still Failing
Build: https://builds.apache.org/job/Solr-Artifacts-6.x/218/ No tests ran. Build Log: [...truncated 480 lines...] [javac] Compiling 77 source files to /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/build/suggest/classes/java [javac] /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/suggest/src/java/org/apache/lucene/search/suggest/DocumentValueSourceDictionary.java:26: error: package org.apache.lucene.queries.function does not exist [javac] import org.apache.lucene.queries.function.ValueSource; [javac] ^ [javac] /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/suggest/src/java/org/apache/lucene/search/suggest/DocumentValueSourceDictionary.java:77: error: cannot find symbol [javac]ValueSource weightsValueSource, String payload, String contexts) { [javac]^ [javac] symbol: class ValueSource [javac] location: class DocumentValueSourceDictionary [javac] /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/suggest/src/java/org/apache/lucene/search/suggest/DocumentValueSourceDictionary.java:104: error: cannot find symbol [javac]ValueSource weightsValueSource, String payload) { [javac]^ [javac] symbol: class ValueSource [javac] location: class DocumentValueSourceDictionary [javac] /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/suggest/src/java/org/apache/lucene/search/suggest/DocumentValueSourceDictionary.java:130: error: cannot find symbol [javac]ValueSource weightsValueSource) { [javac]^ [javac] symbol: class ValueSource [javac] location: class DocumentValueSourceDictionary [javac] Note: /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/suggest/src/java/org/apache/lucene/search/suggest/jaspell/JaspellLookup.java uses or overrides a deprecated API. [javac] Note: Recompile with -Xlint:deprecation for details. 
[javac] 4 errors BUILD FAILED /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/solr/build.xml:539: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/solr/common-build.xml:418: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/module-build.xml:670: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/common-build.xml:501: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Solr-Artifacts-6.x/lucene/common-build.xml:1955: Compile failed; see the compiler error output for details. Total time: 31 seconds Build step 'Invoke Ant' marked build as failure Archiving artifacts Publishing Javadoc Email was triggered for: Failure - Any Sending email for trigger: Failure - Any
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+147) - Build # 18729 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18729/
Java: 64bit/jdk-9-ea+147 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
	at __randomizedtesting.SeedInfo.seed([2EBA596694EFA902:D7F7CAC9A89AE488]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.failNotEquals(Assert.java:647)
	at org.junit.Assert.assertEquals(Assert.java:128)
	at org.junit.Assert.assertEquals(Assert.java:472)
	at org.junit.Assert.assertEquals(Assert.java:456)
	at org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit(ShardSplitTest.java:279)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:538)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at
[jira] [Commented] (LUCENE-7611) Make suggester module use LongValuesSource
[ https://issues.apache.org/jira/browse/LUCENE-7611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15809970#comment-15809970 ]

ASF subversion and git services commented on LUCENE-7611:
---------------------------------------------------------

Commit 322ad889604688db9d22ba7dfa1e389a01e34857 in lucene-solr's branch refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=322ad88 ]

LUCENE-7611: Remove queries javadoc link from suggester help page

> Make suggester module use LongValuesSource
> ------------------------------------------
>
>     Key: LUCENE-7611
>     URL: https://issues.apache.org/jira/browse/LUCENE-7611
>     Project: Lucene - Core
>     Issue Type: Improvement
>     Reporter: Alan Woodward
>     Assignee: Alan Woodward
>     Fix For: master (7.0), 6.4
>
>     Attachments: LUCENE-7611.patch, LUCENE-7611.patch
>
> This allows us to remove the suggester module's dependency on the queries
> module.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
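The ValueSource compile errors quoted at the top of this thread are exactly the dependency this issue removes: the suggest module imported org.apache.lucene.queries.function.ValueSource from the queries module, and LUCENE-7611 replaces that with core's LongValuesSource. A rough, self-contained sketch of the shape of that abstraction (WeightSource below is an invented stand-in for illustration, not Lucene's actual interface):

```java
public class SuggestWeightSketch {

    /** Invented stand-in for core's LongValuesSource: yields one long per document. */
    interface WeightSource {
        long longValue(int docId);
    }

    /**
     * A dictionary that needs per-document suggestion weights can depend on a
     * small core-style abstraction like this instead of the queries module's
     * ValueSource, which is what lets the suggest module drop that dependency.
     */
    static long weightFor(WeightSource source, int docId) {
        return source.longValue(docId);
    }

    public static void main(String[] args) {
        WeightSource weights = doc -> 10L + doc; // e.g. a weight read from a doc value
        System.out.println(weightFor(weights, 5)); // prints 15
    }
}
```

The point of the change is visible in the signatures quoted in the build failure: constructors that took a ValueSource now take a core-provided per-document long source, so the suggest jar no longer needs the queries jar on its compile classpath.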
[jira] [Commented] (SOLR-9939) Ping handler logs each request twice
[ https://issues.apache.org/jira/browse/SOLR-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15809946#comment-15809946 ]

Trey Cahill commented on SOLR-9939:
-----------------------------------

[~mkhludnev] good call on clearing rsp.getToLog(); uploaded a patch that does just that. Ends up being much cleaner.

> Ping handler logs each request twice
> ------------------------------------
>
>     Key: SOLR-9939
>     URL: https://issues.apache.org/jira/browse/SOLR-9939
>     Project: Solr
>     Issue Type: Bug
>     Security Level: Public (Default Security Level. Issues are Public)
>     Affects Versions: 6.4
>     Reporter: Shawn Heisey
>     Priority: Minor
>
>     Attachments: SOLR-9939.patch, SOLR-9939.patch
>
> Requests to the ping handler are being logged twice. The first line has
> "hits" and the second one doesn't, but other than that they have the same
> info.
> These lines are from a 5.3.2-SNAPSHOT version. In the IRC channel,
> [~ctargett] confirmed that this also happens in 6.4-SNAPSHOT.
> {noformat}
> 2017-01-06 14:16:37.253 INFO (qtp1510067370-186262) [ x:sparkmain] or.ap.so.co.So.Request [sparkmain] webapp=/solr path=/admin/ping params={} hits=400271103 status=0 QTime=4
> 2017-01-06 14:16:37.253 INFO (qtp1510067370-186262) [ x:sparkmain] or.ap.so.co.So.Request [sparkmain] webapp=/solr path=/admin/ping params={} status=0 QTime=4
> {noformat}
> Unless there's a good reason to have it that I'm not aware of, the second log
> should be removed.
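The fix Trey describes — clearing rsp.getToLog() after the first logging site writes its line — can be illustrated with a self-contained stand-in (PingLogSketch and its List-based toLog are invented for the example; the real patch operates on Solr's SolrQueryResponse):

```java
import java.util.ArrayList;
import java.util.List;

public class PingLogSketch {
    private final List<String> toLog = new ArrayList<>();
    private final List<String> logged = new ArrayList<>();

    public void addToLog(String entry) {
        toLog.add(entry);
    }

    /**
     * Emits all pending entries, then clears them. If a second logging site
     * (the source of the duplicate request lines in SOLR-9939) calls this
     * again, there is nothing left to emit.
     */
    public void flushLog() {
        logged.addAll(toLog);
        toLog.clear(); // the essence of the fix: clear the response's toLog
    }

    public List<String> logged() {
        return logged;
    }

    public static void main(String[] args) {
        PingLogSketch rsp = new PingLogSketch();
        rsp.addToLog("webapp=/solr path=/admin/ping hits=400271103");
        rsp.flushLog();
        rsp.flushLog(); // second site finds an empty toLog, so no duplicate line
        System.out.println(rsp.logged().size()); // prints 1
    }
}
```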
[jira] [Updated] (SOLR-9939) Ping handler logs each request twice
[ https://issues.apache.org/jira/browse/SOLR-9939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Trey Cahill updated SOLR-9939:
------------------------------
    Attachment: SOLR-9939.patch

> Ping handler logs each request twice
> ------------------------------------
>
>     Key: SOLR-9939
>     URL: https://issues.apache.org/jira/browse/SOLR-9939
>     Project: Solr
>     Issue Type: Bug
>     Security Level: Public (Default Security Level. Issues are Public)
>     Affects Versions: 6.4
>     Reporter: Shawn Heisey
>     Priority: Minor
>
>     Attachments: SOLR-9939.patch, SOLR-9939.patch
>
> Requests to the ping handler are being logged twice. The first line has
> "hits" and the second one doesn't, but other than that they have the same
> info.
> These lines are from a 5.3.2-SNAPSHOT version. In the IRC channel,
> [~ctargett] confirmed that this also happens in 6.4-SNAPSHOT.
> {noformat}
> 2017-01-06 14:16:37.253 INFO (qtp1510067370-186262) [ x:sparkmain] or.ap.so.co.So.Request [sparkmain] webapp=/solr path=/admin/ping params={} hits=400271103 status=0 QTime=4
> 2017-01-06 14:16:37.253 INFO (qtp1510067370-186262) [ x:sparkmain] or.ap.so.co.So.Request [sparkmain] webapp=/solr path=/admin/ping params={} status=0 QTime=4
> {noformat}
> Unless there's a good reason to have it that I'm not aware of, the second log
> should be removed.
[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 670 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/670/

No tests ran.

Build Log:
[...truncated 41983 lines...]
prepare-release-no-sign:
    [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
     [copy] Copying 476 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
     [copy] Copying 260 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker]
   [smoker] Load release URL "file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker]
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker]     0.2 MB in 0.01 sec (15.5 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker]     30.5 MB in 0.02 sec (1225.7 MB/sec)
   [smoker]     verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker]     65.0 MB in 0.05 sec (1196.4 MB/sec)
   [smoker]     verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker]     75.9 MB in 0.06 sec (1211.4 MB/sec)
   [smoker]     verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker]     verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker]     test demo with 1.8...
   [smoker]       got 6184 hits for query "lucene"
   [smoker]     checkindex with 1.8...
   [smoker]     check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker]     verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker]     test demo with 1.8...
   [smoker]       got 6184 hits for query "lucene"
   [smoker]     checkindex with 1.8...
   [smoker]     check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker]     make sure no JARs/WARs in src dist...
   [smoker]     run "ant validate"
   [smoker]     run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker]     test demo with 1.8...
   [smoker]       got 215 hits for query "lucene"
   [smoker]     checkindex with 1.8...
   [smoker]     generate javadocs w/ Java 8...
   [smoker]
   [smoker]   Crawl/parse...
   [smoker]
   [smoker]   Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker]     find all past Lucene releases...
   [smoker]     run TestBackwardsCompatibility..
   [smoker]       success!
   [smoker]
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker]     0.2 MB in 0.00 sec (283.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.0.0-src.tgz...
   [smoker]     40.1 MB in 0.19 sec (207.3 MB/sec)
   [smoker]     verify md5/sha1 digests
   [smoker]   download solr-7.0.0.tgz...
   [smoker]     140.5 MB in 0.91 sec (153.8 MB/sec)
   [smoker]     verify md5/sha1 digests
   [smoker]   download solr-7.0.0.zip...
   [smoker]     150.1 MB in 0.13 sec (1126.4 MB/sec)
   [smoker]     verify md5/sha1 digests
   [smoker]   unpack solr-7.0.0.tgz...
   [smoker]     verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker]     unpack lucene-7.0.0.tgz...
   [smoker] **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar: it has javax.* classes
   [smoker] **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar: it has javax.* classes
   [smoker]     copying unpacked distribution for Java 8 ...
   [smoker]     test solr example w/ Java 8...
   [smoker]       start Solr instance (log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker] Running techproducts example on port 8983 from /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8
   [smoker] Creating Solr home directory /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/example/techproducts/solr
   [smoker]
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker]
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|] [/] [-] [\]
   [smoker] Started Solr server on port 8983 (pid=22312). Happy searching!
[jira] [Commented] (SOLR-8292) TransactionLog.next() does not honor contract and return null for EOF
[ https://issues.apache.org/jira/browse/SOLR-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15809882#comment-15809882 ]

Mark Miller commented on SOLR-8292:
-----------------------------------

Correct. We know how much we are supposed to read, we don't need a signal, and if we get an EOF exception it's because the file is corrupt.

> TransactionLog.next() does not honor contract and return null for EOF
> ---------------------------------------------------------------------
>
>     Key: SOLR-8292
>     URL: https://issues.apache.org/jira/browse/SOLR-8292
>     Project: Solr
>     Issue Type: Bug
>     Reporter: Erick Erickson
>     Assignee: Erick Erickson
>
>     Attachments: SOLR-8292.patch
>
> This came to light in CDCR testing, which stresses this code a lot; there's a
> stack trace showing this line (641 trunk) throwing an EOF exception:
> o = codec.readVal(fis);
> At first I thought to just wrap reading fis in a try/catch and return null,
> but looking at the code a bit more I'm not so sure; that seems like it'd mask
> what looks at first glance like a bug in the logic.
> A few lines earlier (633-4) there's these lines:
> // shouldn't currently happen - header and first record are currently written at the same time
> if (fis.position() >= fos.size()) {
> Why are we comparing the input file position against the size of the
> output file? Maybe because the 'i' key is right next to the 'o' key? The
> comment hints that it's checking for the ability to read the first record in
> input stream along with the header. And perhaps there's a different issue
> here because the expectation clearly is that the first record should be there
> if the header is.
> So what's the right thing to do? Wrap in a try/catch and return null for EOF?
> Change the test? Do both?
> I can take care of either, but wanted a clue whether the comparison of fis to
> fos is intended.
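The "wrap in a try/catch and return null" option Erick raises (as opposed to the position Mark takes above, where an EOF means the file is corrupt) would look roughly like this; readNextOrNull and the one-long-per-record format are invented for the illustration and are not Solr's actual TransactionLog codec:

```java
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class TlogReadSketch {

    /**
     * Reads the next 8-byte record, returning null on EOF instead of letting
     * EOFException escape, so callers see "no more records" per the
     * next()-returns-null contract discussed in SOLR-8292.
     */
    public static Long readNextOrNull(DataInputStream in) throws IOException {
        try {
            return in.readLong();
        } catch (EOFException e) {
            return null; // end of log reached: honor the contract
        }
    }
}
```

The trade-off the thread is weighing is visible here: once EOF silently becomes null, a truncated (corrupt) log is indistinguishable from a cleanly exhausted one, which is why knowing the expected record count and treating EOF as corruption is the alternative.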
[jira] [Commented] (SOLR-9902) StandardDirectoryFactory should use Files API for its move implementation
[ https://issues.apache.org/jira/browse/SOLR-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15809832#comment-15809832 ]

ASF subversion and git services commented on SOLR-9902:
-------------------------------------------------------

Commit 8fca7442716ad3397096fc271b1b9c22dd436d53 in lucene-solr's branch refs/heads/branch_6x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8fca744 ]

SOLR-9902: Fix move impl.

> StandardDirectoryFactory should use Files API for its move implementation.
> --------------------------------------------------------------------------
>
>     Key: SOLR-9902
>     URL: https://issues.apache.org/jira/browse/SOLR-9902
>     Project: Solr
>     Issue Type: Improvement
>     Security Level: Public (Default Security Level. Issues are Public)
>     Reporter: Mark Miller
>     Assignee: Mark Miller
>     Fix For: master (7.0), 6.4
>
>     Attachments: SOLR-9902.patch
>
> It's done in a platform-independent way as opposed to the old File API.
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_112) - Build # 18728 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18728/
Java: 64bit/jdk1.8.0_112 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPISolrJTest.testSplitShard

Error Message:
Error from server at https://127.0.0.1:39740/solr: Could not fully create collection: solrj_test_splitshard

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:39740/solr: Could not fully create collection: solrj_test_splitshard
	at __randomizedtesting.SeedInfo.seed([D5DD77C991D8AB67:ED7DAA58F2D97D8]:0)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:439)
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:391)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1344)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1095)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1037)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
	at org.apache.solr.cloud.CollectionsAPISolrJTest.testSplitShard(CollectionsAPISolrJTest.java:143)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at
[jira] [Resolved] (SOLR-9901) Implement move in HdfsDirectoryFactory.
[ https://issues.apache.org/jira/browse/SOLR-9901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Miller resolved SOLR-9901.
-------------------------------
    Resolution: Fixed

> Implement move in HdfsDirectoryFactory.
> ---------------------------------------
>
>     Key: SOLR-9901
>     URL: https://issues.apache.org/jira/browse/SOLR-9901
>     Project: Solr
>     Issue Type: Improvement
>     Security Level: Public (Default Security Level. Issues are Public)
>     Reporter: Mark Miller
>     Assignee: Mark Miller
>     Fix For: master (7.0), 6.4
>
>     Attachments: SOLR-9901.patch
>
> Without this, you can end up with things like a 0 bytes segment file.
[jira] [Commented] (SOLR-9937) StandardDirectoryFactory::move never uses atomic implementation
[ https://issues.apache.org/jira/browse/SOLR-9937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15809559#comment-15809559 ]

Mark Miller commented on SOLR-9937:
-----------------------------------

Another bug is that we didn't return even if the move worked - we did super.move after anyway. Both fixed as additional commits in SOLR-9902.

> StandardDirectoryFactory::move never uses atomic implementation
> ---------------------------------------------------------------
>
>     Key: SOLR-9937
>     URL: https://issues.apache.org/jira/browse/SOLR-9937
>     Project: Solr
>     Issue Type: Bug
>     Security Level: Public (Default Security Level. Issues are Public)
>     Reporter: Mike Drob
>     Assignee: Mark Miller
>
>     Attachments: SOLR-9937.patch
>
> {noformat}
>     Path path1 = ((FSDirectory) baseFromDir).getDirectory().toAbsolutePath();
>     Path path2 = ((FSDirectory) baseFromDir).getDirectory().toAbsolutePath();
>
>     try {
>       Files.move(path1.resolve(fileName), path2.resolve(fileName), StandardCopyOption.ATOMIC_MOVE);
>     } catch (AtomicMoveNotSupportedException e) {
>       Files.move(path1.resolve(fileName), path2.resolve(fileName));
>     }
> {noformat}
> Because {{path1 == path2}} this code never does anything and move always
> defaults to the less efficient implementation in DirectoryFactory.
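Per the two bugs described in this issue — `path2` resolved against the *source* directory, and execution falling through to the slower `super.move()` even after a successful NIO move — the corrected logic amounts to the following self-contained sketch over plain `java.nio.file` (method and parameter names are invented for illustration; the real fix landed as commits under SOLR-9902):

```java
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicMoveSketch {

    /**
     * Moves fileName from fromDir to toDir, preferring an atomic rename and
     * falling back to a plain move on filesystems without ATOMIC_MOVE
     * support. Unlike the buggy snippet quoted in SOLR-9937, the destination
     * is resolved against the *target* directory, and the method returns once
     * the move succeeds instead of falling through to a copy-based fallback.
     */
    public static void move(Path fromDir, Path toDir, String fileName) throws IOException {
        Path source = fromDir.resolve(fileName);
        Path target = toDir.resolve(fileName);
        try {
            Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            Files.move(source, target); // non-atomic, but still a single NIO move
        }
    }
}
```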
[jira] [Commented] (SOLR-9902) StandardDirectoryFactory should use Files API for its move implementation
[ https://issues.apache.org/jira/browse/SOLR-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15809557#comment-15809557 ]

ASF subversion and git services commented on SOLR-9902:
-------------------------------------------------------

Commit 8bc151d1c61932dda26c682cf2281535f0c36058 in lucene-solr's branch refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8bc151d ]

SOLR-9902: Fix move impl.

> StandardDirectoryFactory should use Files API for its move implementation.
> --------------------------------------------------------------------------
>
>     Key: SOLR-9902
>     URL: https://issues.apache.org/jira/browse/SOLR-9902
>     Project: Solr
>     Issue Type: Improvement
>     Security Level: Public (Default Security Level. Issues are Public)
>     Reporter: Mark Miller
>     Assignee: Mark Miller
>     Fix For: master (7.0), 6.4
>
>     Attachments: SOLR-9902.patch
>
> It's done in a platform-independent way as opposed to the old File API.
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1065 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1065/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenRenew

Error Message:
expected:<200> but was:<403>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<403>
	at __randomizedtesting.SeedInfo.seed([BD2AC0C81AC84F11:8AB134D6220492B5]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.failNotEquals(Assert.java:647)
	at org.junit.Assert.assertEquals(Assert.java:128)
	at org.junit.Assert.assertEquals(Assert.java:472)
	at org.junit.Assert.assertEquals(Assert.java:456)
	at org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.renewDelegationToken(TestSolrCloudWithDelegationTokens.java:130)
	at org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.verifyDelegationTokenRenew(TestSolrCloudWithDelegationTokens.java:315)
	at org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenRenew(TestSolrCloudWithDelegationTokens.java:332)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at
[jira] [Commented] (SOLR-9867) The schemaless example can not be started after being stopped.
[ https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15809514#comment-15809514 ]

Mark Miller commented on SOLR-9867:
-----------------------------------

If someone has time to review this, we should get it in for release.

> The schemaless example can not be started after being stopped.
> --------------------------------------------------------------
>
>     Key: SOLR-9867
>     URL: https://issues.apache.org/jira/browse/SOLR-9867
>     Project: Solr
>     Issue Type: Bug
>     Security Level: Public (Default Security Level. Issues are Public)
>     Reporter: Mark Miller
>     Fix For: master (7.0), 6.4
>
>     Attachments: SOLR-9867.patch, SOLR-9867.patch
>
> I'm having trouble when I start up the schemaless example after shutting down.
> I first tracked this down to the fact that the run example tool is getting an
> error when it tries to create the SolrCore (again, it already exists) and so
> it deletes the cores instance dir which leads to tlog and index lock errors
> in Solr.
> The reason it seems to be trying to create the core when it already exists is
> that the run example tool uses a core status call to check existence and
> because the core is loading, we don't consider it as existing. I added a
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the core
> was still loading. It appears CoreContainer#getCore is supposed to be
> blocking so you don't have this problem, but there must be an issue, because
> it is not blocking.
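The existence check the issue describes — trusting core.properties on disk rather than a core-status call that misses still-loading cores — reduces to a one-line NIO test. A hedged sketch (coreExists is an invented helper name; the real change lives in Solr's run-example tool):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class CoreExistsSketch {

    /**
     * Treats a core as existing when its instance directory contains a
     * core.properties file. A core that is merely still loading (and thus
     * invisible to a status call) is then not re-created, so its instance
     * dir is not deleted out from under the tlog and index lock.
     */
    public static boolean coreExists(Path instanceDir) {
        return Files.exists(instanceDir.resolve("core.properties"));
    }
}
```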
[jira] [Updated] (SOLR-9867) The schemaless example can not be started after being stopped.
[ https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Miller updated SOLR-9867:
------------------------------
    Attachment: SOLR-9867.patch

> The schemaless example can not be started after being stopped.
> --------------------------------------------------------------
>
>     Key: SOLR-9867
>     URL: https://issues.apache.org/jira/browse/SOLR-9867
>     Project: Solr
>     Issue Type: Bug
>     Security Level: Public (Default Security Level. Issues are Public)
>     Reporter: Mark Miller
>     Fix For: master (7.0), 6.4
>
>     Attachments: SOLR-9867.patch, SOLR-9867.patch
>
> I'm having trouble when I start up the schemaless example after shutting down.
> I first tracked this down to the fact that the run example tool is getting an
> error when it tries to create the SolrCore (again, it already exists) and so
> it deletes the cores instance dir which leads to tlog and index lock errors
> in Solr.
> The reason it seems to be trying to create the core when it already exists is
> that the run example tool uses a core status call to check existence and
> because the core is loading, we don't consider it as existing. I added a
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the core
> was still loading. It appears CoreContainer#getCore is supposed to be
> blocking so you don't have this problem, but there must be an issue, because
> it is not blocking.
[jira] [Updated] (SOLR-9867) The schemaless example can not be started after being stopped.
[ https://issues.apache.org/jira/browse/SOLR-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Miller updated SOLR-9867:
------------------------------
    Attachment: SOLR-9867.patch

> The schemaless example can not be started after being stopped.
> --------------------------------------------------------------
>
>     Key: SOLR-9867
>     URL: https://issues.apache.org/jira/browse/SOLR-9867
>     Project: Solr
>     Issue Type: Bug
>     Security Level: Public (Default Security Level. Issues are Public)
>     Reporter: Mark Miller
>     Fix For: master (7.0), 6.4
>
>     Attachments: SOLR-9867.patch
>
> I'm having trouble when I start up the schemaless example after shutting down.
> I first tracked this down to the fact that the run example tool is getting an
> error when it tries to create the SolrCore (again, it already exists) and so
> it deletes the cores instance dir which leads to tlog and index lock errors
> in Solr.
> The reason it seems to be trying to create the core when it already exists is
> that the run example tool uses a core status call to check existence and
> because the core is loading, we don't consider it as existing. I added a
> check to look for core.properties.
> That seemed to let me start up, but my first requests failed because the core
> was still loading. It appears CoreContainer#getCore is supposed to be
> blocking so you don't have this problem, but there must be an issue, because
> it is not blocking.
[jira] [Resolved] (SOLR-9899) StandardDirectoryFactory should use optimizations for all FilterDirectorys not just NRTCachingDirectory.
[ https://issues.apache.org/jira/browse/SOLR-9899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Miller resolved SOLR-9899.
-------------------------------
    Resolution: Fixed

> StandardDirectoryFactory should use optimizations for all FilterDirectorys
> not just NRTCachingDirectory.
>
>              Key: SOLR-9899
>              URL: https://issues.apache.org/jira/browse/SOLR-9899
>          Project: Solr
>       Issue Type: Improvement
>   Security Level: Public (Default Security Level. Issues are Public)
>         Reporter: Mark Miller
>         Assignee: Mark Miller
>          Fix For: master (7.0), 6.4
>
>      Attachments: SOLR-9899.patch
[jira] [Resolved] (SOLR-9902) StandardDirectoryFactory should use Files API for its move implementation.
[ https://issues.apache.org/jira/browse/SOLR-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Miller resolved SOLR-9902.
-------------------------------
    Resolution: Fixed

> StandardDirectoryFactory should use Files API for its move implementation.
>
>              Key: SOLR-9902
>              URL: https://issues.apache.org/jira/browse/SOLR-9902
>          Project: Solr
>       Issue Type: Improvement
>   Security Level: Public (Default Security Level. Issues are Public)
>         Reporter: Mark Miller
>         Assignee: Mark Miller
>          Fix For: master (7.0), 6.4
>
>      Attachments: SOLR-9902.patch
>
> This is done in a platform-independent way, as opposed to the old File API.
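The java.nio.file.Files API makes a move both platform independent and, where the filesystem supports it, atomic. A minimal JDK-only sketch of that pattern (an illustration of the technique, not the actual StandardDirectoryFactory code): try an atomic move first and fall back to a plain move when the filesystem can't do it atomically.

```java
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class MoveDemo {
    /** Moves src to dst, preferring an atomic move when the filesystem supports it. */
    static void move(Path src, Path dst) throws IOException {
        try {
            Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            // Fall back to a non-atomic move; still platform independent,
            // unlike the old java.io.File.renameTo, which just returns false.
            Files.move(src, dst, StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("movedemo");
        Path src = Files.write(dir.resolve("segments_1"), "data".getBytes());
        Path dst = dir.resolve("segments_2");
        move(src, dst);
        System.out.println(Files.exists(dst) && !Files.exists(src)); // true
    }
}
```

Unlike `File.renameTo`, `Files.move` reports failure with a typed exception instead of a boolean, which is what makes the platform-independent fallback possible.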
[jira] [Resolved] (SOLR-9859) replication.properties cannot be updated after being written and neither replication.properties or index.properties are durable in the face of a crash
[ https://issues.apache.org/jira/browse/SOLR-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Miller resolved SOLR-9859.
-------------------------------
       Resolution: Fixed
    Fix Version/s: 6.4
                   master (7.0)

Thanks all!

> replication.properties cannot be updated after being written and neither
> replication.properties or index.properties are durable in the face of a crash
>
>              Key: SOLR-9859
>              URL: https://issues.apache.org/jira/browse/SOLR-9859
>          Project: Solr
>       Issue Type: Bug
>   Security Level: Public (Default Security Level. Issues are Public)
> Affects Versions: 5.5.3, 6.3
>         Reporter: Pushkar Raste
>         Assignee: Mark Miller
>         Priority: Minor
>          Fix For: master (7.0), 6.4
>
>      Attachments: SOLR-9859.patch, SOLR-9859.patch, SOLR-9859.patch,
> SOLR-9859.patch, SOLR-9859.patch, SOLR-9859.patch
>
> If a shard recovers via replication (vs PeerSync), a file named
> {{replication.properties}} gets created. If the same shard recovers once more
> via replication, IndexFetcher fails to write the latest replication
> information: it tries to create {{replication.properties}}, but the file
> already exists.
> Here is the stack trace I saw:
> {code}
> java.nio.file.FileAlreadyExistsException: \shard-3-001\cores\collection1\data\replication.properties
>   at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(Unknown Source)
>   at java.nio.file.spi.FileSystemProvider.newOutputStream(Unknown Source)
>   at java.nio.file.Files.newOutputStream(Unknown Source)
>   at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:413)
>   at org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:409)
>   at org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
>   at org.apache.solr.handler.IndexFetcher.logReplicationTimeAndConfFiles(IndexFetcher.java:689)
>   at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:501)
>   at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:265)
>   at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:397)
>   at org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:157)
>   at org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:409)
>   at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:222)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>   at java.util.concurrent.FutureTask.run(Unknown Source)
>   at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$0(ExecutorUtil.java:229)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>   at java.lang.Thread.run(Unknown Source)
> {code}
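The failure mode in that stack trace comes down to how the output file is opened: an open mode equivalent to CREATE_NEW fails on the second recovery because the file already exists, while CREATE plus TRUNCATE_EXISTING updates it either way. A JDK-only illustration of the difference (not the IndexFetcher code itself):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class OverwriteDemo {
    /** Attempts a CREATE_NEW write; returns false if the file already exists. */
    static boolean tryCreateNew(Path p, String s) {
        try {
            Files.write(p, s.getBytes(), StandardOpenOption.CREATE_NEW);
            return true;
        } catch (IOException e) {   // FileAlreadyExistsException on the second call
            return false;
        }
    }

    /** Writes the file whether or not it exists, truncating any old contents. */
    static void overwrite(Path p, String s) throws IOException {
        Files.write(p, s.getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path props = Files.createTempDirectory("demo").resolve("replication.properties");
        System.out.println(tryCreateNew(props, "first"));  // true: file is new
        System.out.println(tryCreateNew(props, "second")); // false: already exists
        overwrite(props, "second");
        System.out.println(new String(Files.readAllBytes(props))); // second
    }
}
```

The helper names are made up for the sketch; the open-option behavior shown is standard java.nio.file semantics.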
[jira] [Commented] (SOLR-9777) IndexFingerprinting: use getCombinedCoreAndDeletesKey() instead of getCoreCacheKey() for per-segment caching
[ https://issues.apache.org/jira/browse/SOLR-9777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15809478#comment-15809478 ]

Mark Miller commented on SOLR-9777:
-----------------------------------

+1

> IndexFingerprinting: use getCombinedCoreAndDeletesKey() instead of
> getCoreCacheKey() for per-segment caching
>
>              Key: SOLR-9777
>              URL: https://issues.apache.org/jira/browse/SOLR-9777
>          Project: Solr
>       Issue Type: Improvement
>   Security Level: Public (Default Security Level. Issues are Public)
>         Reporter: Ishan Chattopadhyaya
>         Assignee: Ishan Chattopadhyaya
>      Attachments: SOLR-9777.patch
>
> [Note: Had initially posted to SOLR-9506, but now moved here]
> While working on SOLR-5944, I realized that the current per-segment caching
> logic works fine for deleted documents (due to comparison of numDocs in a
> segment as the criterion for a cache hit/miss). However, if a segment has
> docValues updates, the same logic is insufficient. It is my understanding
> that changing the key for caching from reader().getCoreCacheKey() to
> reader().getCombinedCoreAndDeletesKey() would work here, since docValues
> updates are internally handled using the deletion queue, and hence the
> "combined" core-and-deletes key would work. Attaching a patch for the same.
[jira] [Commented] (SOLR-9859) replication.properties cannot be updated after being written and neither replication.properties or index.properties are durable in the face of a crash
[ https://issues.apache.org/jira/browse/SOLR-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15809470#comment-15809470 ]

ASF subversion and git services commented on SOLR-9859:
-------------------------------------------------------

Commit 3919519a22491f01c993b82bf1470f0d3967771c in lucene-solr's branch refs/heads/branch_6x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3919519 ]

SOLR-9859: Don't log error on NoSuchFileException (Cao Manh Dat)
[jira] [Commented] (SOLR-9859) replication.properties cannot be updated after being written and neither replication.properties or index.properties are durable in the face of a crash
[ https://issues.apache.org/jira/browse/SOLR-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15809466#comment-15809466 ]

ASF subversion and git services commented on SOLR-9859:
-------------------------------------------------------

Commit 25290ab5d6af25c05cbbb4738f49329273a7d693 in lucene-solr's branch refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=25290ab ]

SOLR-9859: Don't log error on NoSuchFileException (Cao Manh Dat)
[jira] [Commented] (SOLR-9859) replication.properties cannot be updated after being written and neither replication.properties or index.properties are durable in the face of a crash
[ https://issues.apache.org/jira/browse/SOLR-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15809451#comment-15809451 ]

Mark Miller commented on SOLR-9859:
-----------------------------------

Thanks Cao - looks like we are only ignoring FileNotFoundException, and this is throwing NoSuchFileException.
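The distinction Mark points out matters because java.io.FileNotFoundException and java.nio.file.NoSuchFileException are sibling subclasses of IOException, not parent and child, so a catch block for one silently misses the other. A small JDK demonstration (a standalone sketch, not the IndexFetcher code):

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public class MissingFileDemo {
    /** Returns the name of the exception type raised when opening a missing file via NIO. */
    static String openMissing(Path p) {
        try {
            Files.newInputStream(p).close();
            return "no exception";
        } catch (FileNotFoundException e) {   // never taken: unrelated sibling type
            return "FileNotFoundException";
        } catch (NoSuchFileException e) {     // this is what the NIO providers actually throw
            return "NoSuchFileException";
        } catch (IOException e) {
            return "other IOException";
        }
    }

    public static void main(String[] args) throws IOException {
        Path missing = Files.createTempDirectory("demo").resolve("replication.properties");
        System.out.println(openMissing(missing)); // NoSuchFileException
    }
}
```

Old java.io streams (e.g. `new FileInputStream`) throw FileNotFoundException for the same condition, which is why code written against java.io breaks when a java.nio path is swapped in.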
[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 630 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/630/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 13426 lines...]
   [junit4] JVM J1: stdout was not empty, see: /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/build/solr-solrj/test/temp/junit4-J1-20170108_135210_592.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim)
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] # Internal Error (sharedRuntime.cpp:873), pid=40282, tid=0x4b37
   [junit4] # guarantee(nm != NULL) failed: must have containing nmethod for implicit division-by-zero exceptions
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (8.0_102-b14) (build 1.8.0_102-b14)
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.102-b14 mixed mode bsd-amd64 compressed oops)
   [junit4] # Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # /Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/build/solr-solrj/test/J1/hs_err_pid40282.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] # http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J1: EOF
[...truncated 169 lines...]
[junit4] ERROR: JVM J1 ended with an exception, command line: /Library/Java/JavaVirtualMachines/jdk1.8.0_102.jdk/Contents/Home/jre/bin/java -XX:+UseCompressedOops -XX:+UseSerialGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/heapdumps -ea -esa -Dtests.prefix=tests -Dtests.seed=207B1C87B2E7C17E -Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random -Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=6.4.0 -Dtests.cleanthreads=perClass -Djava.util.logging.config.file=/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/tools/junit4/logging.properties -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false -Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=1 -DtempDir=./temp -Djava.io.tmpdir=./temp -Djunit4.tempDir=/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/build/solr-solrj/test/temp -Dcommon.dir=/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene -Dclover.db.dir=/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/build/clover/db -Djava.security.policy=/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/tools/junit4/solr-tests.policy -Dtests.LUCENE_VERSION=6.4.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory -Djava.awt.headless=true -Djdk.map.althashing.threshold=0 -Djunit4.childvm.cwd=/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/solr/build/solr-solrj/test/J1 -Djunit4.childvm.id=1 -Djunit4.childvm.count=2 -Dtests.leaveTemporary=false -Dtests.filterstacks=true -Dtests.disableHdfs=true -Djava.security.manager=org.apache.lucene.util.TestSecurityManager -classpath
[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 251 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/251/

11 tests failed.

FAILED: org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterRestart

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard2

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete @source_collection:shard2
  at __randomizedtesting.SeedInfo.seed([9C544F75B30F79F6:C049A8F47D2FCDC0]:0)
  at org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:795)
  at org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterRestart(CdcrReplicationDistributedZkTest.java:236)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
  at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
  at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3764 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3764/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.

FAILED: org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
  at __randomizedtesting.SeedInfo.seed([33167BFC955BE930:BB4244263BA784C8]:0)
  at org.junit.Assert.fail(Assert.java:93)
  at org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:311)
  at org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:262)
  at org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:244)
  at org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:133)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
  at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
  at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at
[jira] [Commented] (LUCENE-7588) A parallel DrillSideways implementation
[ https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15809231#comment-15809231 ]

ASF subversion and git services commented on LUCENE-7588:
---------------------------------------------------------

Commit 373826a69bda27e181eae063abca658798d42cb6 in lucene-solr's branch refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=373826a ]

LUCENE-7588: the parallell search method was failing to pass on the user's requested sort when merge-sorting all hits

> A parallel DrillSideways implementation
> ---------------------------------------
>
>              Key: LUCENE-7588
>              URL: https://issues.apache.org/jira/browse/LUCENE-7588
>          Project: Lucene - Core
>       Issue Type: Improvement
> Affects Versions: master (7.0), 6.3.1
>         Reporter: Emmanuel Keller
>         Priority: Minor
>           Labels: facet, faceting
>          Fix For: master (7.0), 6.4
>
>      Attachments: LUCENE-7588.patch, lucene-7588-sort-fix.patch
>
> Currently the DrillSideways implementation is based on the single-threaded
> IndexSearcher.search(Query query, Collector results).
> On a large document set, the single-threaded collection can be really slow.
> The ParallelDrillSideways implementation could:
> 1. Use the CollectorManager-based method IndexSearcher.search(Query query,
> CollectorManager collectorManager) to get the benefits of multithreading on
> index segments,
> 2. Compute each DrillSideways subquery on a single thread.
[jira] [Commented] (LUCENE-7588) A parallel DrillSideways implementation
[ https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15809228#comment-15809228 ]

ASF subversion and git services commented on LUCENE-7588:
---------------------------------------------------------

Commit 1aa9c4251289e71ab8e87b03797b20f4a8fda0a5 in lucene-solr's branch refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1aa9c42 ]

LUCENE-7588: the parallell search method was failing to pass on the user's requested sort when merge-sorting all hits
[jira] [Commented] (LUCENE-7588) A parallel DrillSideways implementation
[ https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15809221#comment-15809221 ]

Michael McCandless commented on LUCENE-7588:
--------------------------------------------

Thank you [~ekeller], the patch looks great, and fixes the failing seed! I'll push shortly...
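[Editor's note: the bug the sort-fix patch addresses, per the commit message "failing to pass on the user's requested sort when merge-sorting all hits", boils down to k-way merging per-thread top hits with the same comparator the user requested rather than a default one. A minimal, Lucene-free sketch of that merge follows; the class and method names are hypothetical, not the patch's actual code.]

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class MergeSortedHits {
    /**
     * K-way merge of per-thread hit lists, each already sorted by {@code sort},
     * keeping the global top {@code topN}. The crucial point: the merge uses
     * the user's requested comparator, not a hard-coded default.
     */
    static <T> List<T> mergeTopN(List<List<T>> slices, Comparator<T> sort, int topN) {
        // Queue entries are {sliceIndex, positionInSlice}, ordered by the
        // user's sort applied to the element each entry points at.
        PriorityQueue<int[]> pq = new PriorityQueue<>(
            (a, b) -> sort.compare(slices.get(a[0]).get(a[1]),
                                   slices.get(b[0]).get(b[1])));
        for (int i = 0; i < slices.size(); i++) {
            if (!slices.get(i).isEmpty()) pq.add(new int[]{i, 0});
        }
        List<T> out = new ArrayList<>();
        while (!pq.isEmpty() && out.size() < topN) {
            int[] e = pq.poll();
            out.add(slices.get(e[0]).get(e[1]));
            if (e[1] + 1 < slices.get(e[0]).size()) {
                pq.add(new int[]{e[0], e[1] + 1}); // advance within that slice
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Comparator<Integer> userSort = Comparator.reverseOrder(); // requested sort
        List<List<Integer>> perThreadHits = Arrays.asList(
            Arrays.asList(9, 5, 1),
            Arrays.asList(8, 4),
            Arrays.asList(7, 6, 2));
        System.out.println(mergeTopN(perThreadHits, userSort, 4)); // [9, 8, 7, 6]
    }
}
```

If the merge instead fell back to natural order while each slice was sorted by the user's comparator, the merged result would be wrong in exactly the way a randomized test seed can catch, which is presumably why the failing seed surfaced this.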
[jira] [Updated] (LUCENE-7588) A parallel DrillSideways implementation
[ https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Emmanuel Keller updated LUCENE-7588:
------------------------------------

    Attachment: lucene-7588-sort-fix.patch
[jira] [Updated] (LUCENE-7588) A parallel DrillSideways implementation
[ https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Emmanuel Keller updated LUCENE-7588:
------------------------------------

    Attachment: (was: lucene-7588-test.patch)
[jira] [Commented] (LUCENE-7588) A parallel DrillSideways implementation
[ https://issues.apache.org/jira/browse/LUCENE-7588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15809098#comment-15809098 ]

Emmanuel Keller commented on LUCENE-7588:
-----------------------------------------

That was the issue. Just fixed it; all tests pass again now. I have uploaded the patch.