[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b28) - Build # 11346 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11346/
Java: 32bit/jdk1.9.0-ea-b28 -server -XX:+UseParallelGC

2 tests failed.

REGRESSION:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core [halfcollection_shard1_replica1] Caused by: Could not get shard id for core: halfcollection_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core [halfcollection_shard1_replica1] Caused by: Could not get shard id for core: halfcollection_shard1_replica1
   at __randomizedtesting.SeedInfo.seed([9C6B5054917A901F:1D8DDE4CE625F023]:0)
   at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:570)
   at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
   at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
   at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
   at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
   at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:484)
   at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
   at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
   at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
   at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
   at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
   at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
   at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
   at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
   at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at org.a
[JENKINS] Lucene-Solr-4.10-Linux (32bit/ibm-j9-jdk7) - Build # 17 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.10-Linux/17/
Java: 32bit/ibm-j9-jdk7 -Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

3 tests failed.

FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestCoreContainer

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
   at __randomizedtesting.SeedInfo.seed([13CA3EC6FEEF059C]:0)

REGRESSION:  org.apache.solr.core.TestCoreContainer.testReloadThreaded

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
   at __randomizedtesting.SeedInfo.seed([13CA3EC6FEEF059C]:0)

REGRESSION:  org.apache.solr.update.AutoCommitTest.testMaxTime

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
   at __randomizedtesting.SeedInfo.seed([13CA3EC6FEEF059C:893E4324607599A0]:0)
   at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:709)
   at org.apache.solr.update.AutoCommitTest.testMaxTime(AutoCommitTest.java:227)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
   at java.lang.reflect.Method.invoke(Method.java:619)
   at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
   at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
   at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
   at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
   at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
   at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
   at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
   at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
   at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
   at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
   at org.apache.lucene.util.TestRuleMarkFailure$1.evaluat
[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1856 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1856/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.

REGRESSION:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.testDistribSearch

Error Message:
There were too many update fails - we expect it can happen, but shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails - we expect it can happen, but shouldn't easily
   at __randomizedtesting.SeedInfo.seed([E6D9279FAC226442:673FA987DB7D047E]:0)
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at org.junit.Assert.assertFalse(Assert.java:68)
   at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.doTest(ChaosMonkeyNothingIsSafeTest.java:223)
   at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:483)
   at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
   at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
   at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
   at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
   at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
   at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
   at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
   at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
   at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
   at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
   at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
   at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b28) - Build # 11345 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11345/
Java: 32bit/jdk1.9.0-ea-b28 -client -XX:+UseConcMarkSweepGC

1 tests failed.

REGRESSION:  org.apache.solr.cloud.HttpPartitionTest.testDistribSearch

Error Message:
Doc with id=2 not found in http://127.0.0.1:57006/c8n_1x2_leader_session_loss due to: Path not found: /id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=2 not found in http://127.0.0.1:57006/c8n_1x2_leader_session_loss due to: Path not found: /id; rsp={doc=null}
   at __randomizedtesting.SeedInfo.seed([96A844702AD1E5B9:174ECA685D8E8585]:0)
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:437)
   at org.apache.solr.cloud.HttpPartitionTest.testLeaderZkSessionLoss(HttpPartitionTest.java:324)
   at org.apache.solr.cloud.HttpPartitionTest.doTest(HttpPartitionTest.java:119)
   at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
   at sun.reflect.GeneratedMethodAccessor49.invoke(Unknown Source)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:484)
   at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
   at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
   at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
   at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
   at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
   at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
   at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
   at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
   at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
   at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
   at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
   at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_67) - Build # 4342 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4342/
Java: 32bit/jdk1.7.0_67 -server -XX:+UseConcMarkSweepGC

1 tests failed.

REGRESSION:  org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[http://127.0.0.1:58723, http://127.0.0.1:58750, http://127.0.0.1:58732, http://127.0.0.1:58741, http://127.0.0.1:58708]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://127.0.0.1:58723, http://127.0.0.1:58750, http://127.0.0.1:58732, http://127.0.0.1:58741, http://127.0.0.1:58708]
   at __randomizedtesting.SeedInfo.seed([F559E07BA7B8CC64:74BF6E63D0E7AC58]:0)
   at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:322)
   at org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
   at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
   at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
   at org.apache.solr.cloud.DeleteReplicaTest.removeAndWaitForReplicaGone(DeleteReplicaTest.java:171)
   at org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:144)
   at org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:88)
   at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
   at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
   at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
   at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
   at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
   at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
   at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
   at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
   at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
   at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.j
[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_40-ea-b04) - Build # 11192 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11192/
Java: 64bit/jdk1.8.0_40-ea-b04 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.

REGRESSION:  org.apache.solr.cloud.CloudExitableDirectoryReaderTest.testDistribSearch

Error Message:
no exception matching expected: 400: Request took too long during query expansion. Terminating request.

Stack Trace:
java.lang.AssertionError: no exception matching expected: 400: Request took too long during query expansion. Terminating request.
   at __randomizedtesting.SeedInfo.seed([F507E3F206B0EBC1:74E16DEA71EF8BFD]:0)
   at org.junit.Assert.fail(Assert.java:93)
   at org.apache.solr.cloud.CloudExitableDirectoryReaderTest.assertFail(CloudExitableDirectoryReaderTest.java:101)
   at org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTimeoutTests(CloudExitableDirectoryReaderTest.java:75)
   at org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTest(CloudExitableDirectoryReaderTest.java:54)
   at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:483)
   at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
   at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
   at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
   at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
   at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
   at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
   at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
   at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
   at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
   at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
   at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
   at
Re: How to openIfChanged the most recent merge?
It's awesome! Thank you so much!

On Sun, Sep 28, 2014 at 12:46 PM, Michael McCandless <luc...@mikemccandless.com> wrote:

> OK I ran the test and saw the failure, thank you!
>
> I think I understand why you are seeing what you are seeing.
>
> First off, you are not actually using an NRT reader when
> hardReopenBeforeDVUpdate is false, because in readerReopenIfChanged,
> when oldReader == null, you must do:
>
>     return DirectoryReader.open(writer2, true);
>
> so that your initial reader is in fact NRT. All subsequent reopens
> from then on will then be NRT.
>
> When I make that change to your test, it seems to pass (or at least
> run for much longer than it did before...).
>
> However, if I remove the writer.commit() before the reopen, the test
> fails. The reason is that IW commit and NRT reader reopen do not
> reflect merges "just kicked off" by that flush, even when using SMS.
> So there will always be this "off by 1", in that you'll get a reader
> with 10 segments (pre-merge), not 1 segment (post-merge).
>
> One possible workaround here, without having to call the crazy-expensive
> commit, would be to call reopenIfChanged twice in a row (and fix your
> reopen method to properly handle a null return from openIfChanged); when
> I tried that in your test, it also seemed to run forever...
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> On Fri, Sep 26, 2014 at 2:44 PM, Mikhail Khludnev wrote:
> >
> > On Fri, Sep 26, 2014 at 7:07 PM, Michael McCandless wrote:
> >>
> >> Sorry, I can't make heads or tails of what you are saying here... can
> >> you maybe make a small test case that fails with "ant test"? Boil it
> >> down as much as possible...
> >
> > Sure. I'm really sorry for being so confusing.
> > I changed a constant
> > (https://github.com/m-khl/lucene-merge-visibility/commit/a4a01c2c91d9c30850602b8dddf23de5363c4851#diff-86ebfbf440fe69ee36a52705cb92b824R44)
> > to make it fail.
> > The branch reader-vs-merge is at
> > https://github.com/m-khl/lucene-merge-visibility/tree/reader-vs-merge
> > In lucene/core there is a failing test:
> >
> >     $> ant test -Dtestcase=TestNumDValUpdVsReaderVisibility
> >
> > It's verbose, because it uses sysout as infostream.
> >
> >     [junit4] FAILURE 2.40s | TestNumDValUpdVsReaderVisibility.testSimple <<<
> >     [junit4]    > Throwable #1: java.lang.AssertionError: failed on id:doc-18 expected:<17> but was:<18>
> >     [junit4]    > at __randomizedtesting.SeedInfo.seed([73A18231908F4ADC:4B12A6CFB77C9E0D]:0)
> >     [junit4]    > at org.apache.lucene.index.TestNumDValUpdVsReaderVisibility.testSimple(TestNumDValUpdVsReaderVisibility.java:134)
> >
> >> The gist seems to be if you use an NRT reader something fails, but if
> >> you instead open a new reader, that something passes?
> >
> > I don't use NRT, and perhaps that's a solution. I just don't know how to
> > do that.
> > Note: closing the writer and opening a reader works (but I suppose it's
> > slow); just committing and reopening the reader fails.
> >
> >> But what exactly is failing?
> >
> > - Let's say I have merge factor 10 and SerialMergeScheduler.
> > - I did 9 commits already and have 9 segments in the index.
> > - I add a few docs and commit.
> > - The 10th commit triggers the merge synchronously, and it completes.
> > - Now if I reopen the reader, it sees 10 unmerged segments (the merged
> >   single-segment index isn't visible to the reopen). /* test FAILS */
> > - But if I fully close the writer and reader and open a new reader, I get
> >   the single-segment merged index. /* test PASSES */
> >
> > - Usually this behavior causes no problems; it's reasonable and fine.
> > - But I do a mad thing:
> > - I use that reader (with 10 segments) to get a docnum and write it as a
> >   docvalue.
> > - After I commit only the docvalues update (no docs update) and reopen the
> >   reader, I get the single-segment index, which was already written by the
> >   merge at the previous commit.
> > - and here is the problem: a docnum obtained against the 10-segment index > > doesn't match the docnum in the single-segment index (there was a deletion) > > > >> > >> And what is a "solid" segment here? > > > > I meant an index consisting of a single segment, as opposed to an index > consisting > > of many. > > > > Thank you! > >> > >> > >> Mike McCandless > >> > >> http://blog.mikemccandless.com > >> > >> > >> On Thu, Sep 25, 2014 at 6:00 PM, Mikhail Khludnev > >> wrote: > >> > Hello Mike! > >> > > >> > Thanks for your attention. > >> > I pushed the mad case at > >> > > >> > > https://github.com/m-khl/lucene-merge-visibility/commit/fa2d60be5b13eb57e0527c843119cf62cfa83a7d#diff-86ebfbf440fe69ee36a52705cb92b824R120 > >> > > >> > it does the following > >> > > >> > - writes a pair of docs > >> > - commit > >> > - reopen the reader, search for one of them > >> > - update this doc with its docnum (I know it's weird, but it should work > >> > if > >> > the reopened reader sees that update) > >> > - commit this DV update > >> > - search for that doc and check the written doc val. > >> > it
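The reopen pattern discussed in this thread can be sketched as follows. This is a hedged sketch, not the test's actual code: it assumes the Lucene 4.x-era NRT APIs (`DirectoryReader.open(IndexWriter, boolean)` and `openIfChanged(DirectoryReader, IndexWriter, boolean)`) and an already-open `IndexWriter`, and it folds in the two fixes Mike suggests (open the initial reader from the writer so it is truly NRT, and handle the null return from `openIfChanged`, reopening a second time to pick up a merge kicked off by the flush):

```java
import java.io.IOException;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;

// Sketch only (assumes Lucene 4.x NRT APIs); variable names are illustrative.
class NrtReopen {

  // The initial reader must come from the writer, or later reopens are not NRT.
  static DirectoryReader openInitial(IndexWriter writer) throws IOException {
    return DirectoryReader.open(writer, true); // true = apply all deletes
  }

  // openIfChanged returns null when nothing changed; that must be handled.
  static DirectoryReader reopen(IndexWriter writer, DirectoryReader reader)
      throws IOException {
    DirectoryReader newReader = DirectoryReader.openIfChanged(reader, writer, true);
    if (newReader == null) {
      return reader; // unchanged; keep using the old reader
    }
    reader.close();
    // A flush can kick off a merge that the first reopen does not yet reflect,
    // even with SerialMergeScheduler; a second reopen picks up the merged segment.
    DirectoryReader afterMerge = DirectoryReader.openIfChanged(newReader, writer, true);
    if (afterMerge != null) {
      newReader.close();
      return afterMerge;
    }
    return newReader;
  }
}
```

This avoids the "crazy-expensive" commit-and-reopen cycle entirely; the docnums observed by the returned reader then reflect the post-merge segment layout.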
[jira] [Commented] (SOLR-6249) Schema API changes return success before all cores are updated
[ https://issues.apache.org/jira/browse/SOLR-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14151162#comment-14151162 ] Noble Paul commented on SOLR-6249: -- yeah, you are right. Let's get this out of the way. I shall open another one. > Schema API changes return success before all cores are updated > -- > > Key: SOLR-6249 > URL: https://issues.apache.org/jira/browse/SOLR-6249 > Project: Solr > Issue Type: Improvement > Components: Schema and Analysis, SolrCloud >Reporter: Gregory Chanan >Assignee: Timothy Potter > Attachments: SOLR-6249.patch, SOLR-6249.patch, SOLR-6249.patch > > > See SOLR-6137 for more details. > The basic issue is that Schema API changes return success when the first core > is updated, but other cores asynchronously read the updated schema from > ZooKeeper. > So a client application could make a Schema API change and then index some > documents based on the new schema that may fail on other nodes. > Possible fixes: > 1) Make the Schema API calls synchronous > 2) Give the client some ability to track the state of the schema. They can > already do this to a certain extent by checking the Schema API on all the > replicas and verifying that the field has been added, though this is pretty > cumbersome. Maybe it makes more sense to do this sort of thing on the > collection level, i.e. Schema API changes return the zk version to the > client. We add an API to return the current zk version. On a replica, if > the zk version is >= the version the client has, the client knows that > replica has at least seen the schema change. We could also provide an API to > do the distribution and checking across the different replicas of the > collection so that clients don't need to do that themselves. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
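Fix (2) from the issue description — return the zk version to the client and let it check each replica until the version catches up — might look roughly like this on the client side. This is purely illustrative: the `/schema/zkversion` endpoint name and the `"zkversion"` response field are assumptions sketching the proposal, not a shipped API.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical client-side check for fix (2); endpoint and field names assumed.
class SchemaVersionCheck {

  // Extracts a "zkversion": N field from a JSON response body.
  static final Pattern ZK_VERSION = Pattern.compile("\"zkversion\"\\s*:\\s*(\\d+)");

  // True once every replica reports a schema zk version >= the one the client
  // got back from its schema change; until then the change may not be visible.
  static boolean allReplicasAtLeast(List<String> replicaBaseUrls, int wantVersion)
      throws IOException {
    for (String base : replicaBaseUrls) {
      String body = fetch(base + "/schema/zkversion?wt=json");
      Matcher m = ZK_VERSION.matcher(body);
      if (!m.find() || Integer.parseInt(m.group(1)) < wantVersion) {
        return false; // this replica has not seen the schema change yet
      }
    }
    return true;
  }

  static String fetch(String url) throws IOException {
    StringBuilder sb = new StringBuilder();
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(new URL(url).openStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) sb.append(line);
    }
    return sb.toString();
  }
}
```

A client would call `allReplicasAtLeast` in a polling loop (with a timeout) after each schema change before indexing documents that depend on the new field.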
[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2141 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2141/ 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup Error Message: 1 thread leaked from SUITE scope at org.apache.solr.handler.TestReplicationHandlerBackup: 1) Thread[id=3791, name=Thread-1285, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at java.net.Socket.connect(Socket.java:528) at sun.net.NetworkClient.doConnect(NetworkClient.java:180) at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323) at java.net.URL.openStream(URL.java:1037) at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:318) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.handler.TestReplicationHandlerBackup: 1) Thread[id=3791, name=Thread-1285, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at 
java.net.Socket.connect(Socket.java:528) at sun.net.NetworkClient.doConnect(NetworkClient.java:180) at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323) at java.net.URL.openStream(URL.java:1037) at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:318) at __randomizedtesting.SeedInfo.seed([12D85106FD1A0E91]:0) FAILED: junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=3791, name=Thread-1285, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at java.net.Socket.connect(Socket.java:528) at sun.net.NetworkClient.doConnect(NetworkClient.java:180) at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652) at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323) at java.net.URL.openStream(URL.java:1037) at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:318) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated: 1) Thread[id=3791, name=Thread-1285, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup] at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:579) at java.net.Socket.connect(Socket.java:528) at sun.net.NetworkClient.doConnect(NetworkClient.java:180) at sun.net.
[jira] [Updated] (SOLR-1632) Distributed IDF
[ https://issues.apache.org/jira/browse/SOLR-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitaliy Zhovtyuk updated SOLR-1632: --- Attachment: SOLR-1632.patch Wrong patch was attached on 1.04.2014. Updated previous changes to current trunk. TestDefaultStatsCache, TestExactSharedStatsCache, TestExactStatsCache, TestLRUStatsCache are passing. > Distributed IDF > --- > > Key: SOLR-1632 > URL: https://issues.apache.org/jira/browse/SOLR-1632 > Project: Solr > Issue Type: New Feature > Components: search >Affects Versions: 1.5 >Reporter: Andrzej Bialecki >Assignee: Mark Miller > Fix For: 4.9, Trunk > > Attachments: 3x_SOLR-1632_doesntwork.patch, SOLR-1632.patch, > SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, > SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, > SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, > distrib-2.patch, distrib.patch > > > Distributed IDF is a valuable enhancement for distributed search across > non-uniform shards. This issue tracks the proposed implementation of an API > to support this functionality in Solr. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-1632) Distributed IDF
[ https://issues.apache.org/jira/browse/SOLR-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitaliy Zhovtyuk updated SOLR-1632: --- Attachment: (was: SOLR-5488.patch) > Distributed IDF > --- > > Key: SOLR-1632 > URL: https://issues.apache.org/jira/browse/SOLR-1632 > Project: Solr > Issue Type: New Feature > Components: search >Affects Versions: 1.5 >Reporter: Andrzej Bialecki >Assignee: Mark Miller > Fix For: 4.9, Trunk > > Attachments: 3x_SOLR-1632_doesntwork.patch, SOLR-1632.patch, > SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, > SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, > SOLR-1632.patch, SOLR-1632.patch, SOLR-1632.patch, distrib-2.patch, > distrib.patch > > > Distributed IDF is a valuable enhancement for distributed search across > non-uniform shards. This issue tracks the proposed implementation of an API > to support this functionality in Solr. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')
[ https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vitaliy Zhovtyuk updated SOLR-6351: --- Attachment: SOLR-6351.patch Intermediate results: 1. Added pivot facet test to SolrExampleTests, extended org.apache.solr.client.solrj.SolrQuery to provide multiple stats.field parameters 2. Added FacetPivotSmallTest and moved all asserts from DistributedFacetPivotSmallTest to XPath assertions, separated it into a few test methods 3. "tag" local parameter parsing added 4. added org.apache.solr.handler.component.StatsInfo#tagToStatsFields and org.apache.solr.handler.component.StatsInfo#getStatsFieldsByTag to look up the list of stats fields by tag 5. Modified PivotFacetProcessor to collect and put StatsValues for every pivot field, added test to assert stats values of pivots 6. Updated PivotField and org.apache.solr.client.solrj.response.QueryResponse to read stats values on pivots > Let Stats Hang off of Pivots (via 'tag') > > > Key: SOLR-6351 > URL: https://issues.apache.org/jira/browse/SOLR-6351 > Project: Solr > Issue Type: Sub-task >Reporter: Hoss Man > Attachments: SOLR-6351.patch > > > The goal here is basically to flip the notion of "stats.facet" on its head, so > that instead of asking the stats component to also do some faceting > (something that's never worked well with the variety of field types and has > never worked in distributed mode) we instead ask the PivotFacet code to > compute some stats X for each leaf in a pivot. We'll do this with the > existing {{stats.field}} params, but we'll leverage the {{tag}} local param > of the {{stats.field}} instances to be able to associate which stats we want > hanging off of which {{facet.pivot}} > Example... 
> {noformat} > facet.pivot={!stats=s1}category,manufacturer > stats.field={!key=avg_price tag=s1 mean=true}price > stats.field={!tag=s1 min=true max=true}user_rating > {noformat} > ...with the request above, in addition to computing the min/max user_rating > and mean price (labeled "avg_price") over the entire result set, the > PivotFacet component will also include those stats for every node of the tree > it builds up when generating a pivot of the fields "category,manufacturer" -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
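The request in the {noformat} block above could be built from SolrJ roughly like this. This is a sketch: the parameter values are taken directly from the example, the plain param-building calls are standard SolrJ, and the pivot-stats behavior itself assumes the SOLR-6351 patch is applied.

```java
import org.apache.solr.client.solrj.SolrQuery;

// Sketch: builds the pivot+stats request from the example above (SOLR-6351).
class PivotStatsQuery {
  static SolrQuery build() {
    SolrQuery q = new SolrQuery("*:*");
    q.setRows(0);              // only the facet/stats sections are of interest
    q.setFacet(true);
    q.set("stats", true);
    // The {!stats=s1} local param ties this pivot to the tagged stats.fields:
    q.add("facet.pivot", "{!stats=s1}category,manufacturer");
    q.add("stats.field", "{!key=avg_price tag=s1 mean=true}price");
    q.add("stats.field", "{!tag=s1 min=true max=true}user_rating");
    return q;
  }
}
```

With the patch, the response would then carry min/max user_rating and mean price both for the whole result set and for every node of the category/manufacturer pivot tree.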
[jira] [Commented] (SOLR-6249) Schema API changes return success before all cores are updated
[ https://issues.apache.org/jira/browse/SOLR-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14151140#comment-14151140 ] Timothy Potter commented on SOLR-6249: -- Refining our watcher strategy for conf znodes is definitely important, but it seems like it should be tackled in a separate JIRA ticket? Any further thoughts on the issues this ticket is trying to address specifically? If not, I'll commit to trunk. > Schema API changes return success before all cores are updated > -- > > Key: SOLR-6249 > URL: https://issues.apache.org/jira/browse/SOLR-6249 > Project: Solr > Issue Type: Improvement > Components: Schema and Analysis, SolrCloud >Reporter: Gregory Chanan >Assignee: Timothy Potter > Attachments: SOLR-6249.patch, SOLR-6249.patch, SOLR-6249.patch > > > See SOLR-6137 for more details. > The basic issue is that Schema API changes return success when the first core > is updated, but other cores asynchronously read the updated schema from > ZooKeeper. > So a client application could make a Schema API change and then index some > documents based on the new schema that may fail on other nodes. > Possible fixes: > 1) Make the Schema API calls synchronous > 2) Give the client some ability to track the state of the schema. They can > already do this to a certain extent by checking the Schema API on all the > replicas and verifying that the field has been added, though this is pretty > cumbersome. Maybe it makes more sense to do this sort of thing on the > collection level, i.e. Schema API changes return the zk version to the > client. We add an API to return the current zk version. On a replica, if > the zk version is >= the version the client has, the client knows that > replica has at least seen the schema change. We could also provide an API to > do the distribution and checking across the different replicas of the > collection so that clients don't need to do that themselves. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Solr Design and Architecture
Then just do a Google search on “Solr Architecture” and start working through the various presentations. At some point you’ll have to decide how deeply you want to get into Lucene, although you do need to know a fair amount about Lucene in any case. -- Jack Krupansky From: Anurag Sharma Sent: Sunday, September 28, 2014 11:50 AM To: dev@lucene.apache.org Subject: Re: Solr Design and Architecture Sure, I am following https://cwiki.apache.org/confluence/display/solr for the features. Also, I've worked on other Search and NLP engines, so I understand the expected features and capabilities. On Sun, Sep 28, 2014 at 9:13 PM, Jack Krupansky wrote: Best to get very familiar with using Solr before you try diving deeply into the code and design. -- Jack Krupansky From: Anurag Sharma Sent: Sunday, September 28, 2014 11:28 AM To: dev@lucene.apache.org Subject: Solr Design and Architecture Hi, I am new to Solr and would like to understand and deep dive into Solr design and architecture. Please suggest a document/resource explaining the above. Thanks Anurag
[jira] [Commented] (SOLR-6476) Create a bulk mode for schema API
[ https://issues.apache.org/jira/browse/SOLR-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14151137#comment-14151137 ] Noble Paul commented on SOLR-6476: -- the "files link" is a problem. Instead we should just have a prominent way to access "schema", which should display the current list of fields, types, etc. We need to eventually de-prioritize the raw file view. > Create a bulk mode for schema API > - > > Key: SOLR-6476 > URL: https://issues.apache.org/jira/browse/SOLR-6476 > Project: Solr > Issue Type: Bug > Components: Schema and Analysis >Reporter: Noble Paul >Assignee: Noble Paul > Labels: managedResource > Attachments: SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, > SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch > > > The current schema API does one operation at a time and the normal usecase is > that users add multiple fields/fieldtypes/copyFields etc in one shot. > example > {code:javascript} > curl http://localhost:8983/solr/collection1/schema -H > 'Content-type:application/json' -d '{ > "add-field": { > "name":"sell-by", > "type":"tdate", > "stored":true > }, > "add-field":{ > "name":"catchall", > "type":"text_general", > "stored":false > } > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6476) Create a bulk mode for schema API
[ https://issues.apache.org/jira/browse/SOLR-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14151134#comment-14151134 ] Erick Erickson commented on SOLR-6476: -- Works for me. The only thing I'll _strongly_ weigh in on here is that the admin UI _must_ be able to access at least the "describe table" functionality, and ideally the ability to change the schema definition from the admin UI. Ditto with accessing the rest of the configuration information. There are just waaay too many situations "in the field" where actually seeing what the server is working with (as opposed to what the ops person _thinks_ they've configured) on a running instance is critical to troubleshooting. Currently this is all done via the admin UI and the "files" link, although you can't edit there of course. Not quite sure how all this applies to, say, solrconfig.xml though. I guess we'll see as things develop. > Create a bulk mode for schema API > - > > Key: SOLR-6476 > URL: https://issues.apache.org/jira/browse/SOLR-6476 > Project: Solr > Issue Type: Bug > Components: Schema and Analysis >Reporter: Noble Paul >Assignee: Noble Paul > Labels: managedResource > Attachments: SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, > SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch > > > The current schema API does one operation at a time and the normal usecase is > that users add multiple fields/fieldtypes/copyFields etc in one shot. > example > {code:javascript} > curl http://localhost:8983/solr/collection1/schema -H > 'Content-type:application/json' -d '{ > "add-field": { > "name":"sell-by", > "type":"tdate", > "stored":true > }, > "add-field":{ > "name":"catchall", > "type":"text_general", > "stored":false > } > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Solr Design and Architecture
Sure, I am following https://cwiki.apache.org/confluence/display/solr for the features. Also, I've worked on other Search and NLP engines, so I understand the expected features and capabilities. On Sun, Sep 28, 2014 at 9:13 PM, Jack Krupansky wrote: > Best to get very familiar with using Solr before you try diving deeply > into the code and design. > > -- Jack Krupansky > > *From:* Anurag Sharma > *Sent:* Sunday, September 28, 2014 11:28 AM > *To:* dev@lucene.apache.org > *Subject:* Solr Design and Architecture > > Hi, > > I am new to Solr and would like to understand and deep dive into Solr > design and architecture. Please suggest a document/resource explaining > the above. > > Thanks > Anurag >
[jira] [Commented] (LUCENE-5969) Add Lucene50Codec
[ https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14151131#comment-14151131 ] ASF subversion and git services commented on LUCENE-5969: - Commit 1628077 from [~rcmuir] in branch 'dev/branches/lucene5969' [ https://svn.apache.org/r1628077 ] LUCENE-5969: add missing checkIntegrity() calls for segments that cannot be bulk-merged > Add Lucene50Codec > - > > Key: LUCENE-5969 > URL: https://issues.apache.org/jira/browse/LUCENE-5969 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael McCandless > Fix For: 5.0, Trunk > > Attachments: LUCENE-5969.patch, LUCENE-5969.patch > > > Spinoff from LUCENE-5952: > * Fix .si to write Version as 3 ints, not a String that requires parsing at > read time. > * Lucene42TermVectorsFormat should not use the same codecName as > Lucene41StoredFieldsFormat > It would also be nice if we had a "bumpCodecVersion" script so rolling a new > codec is not so daunting. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Solr Design and Architecture
Best to get very familiar with using Solr before you try diving deeply into the code and design. -- Jack Krupansky From: Anurag Sharma Sent: Sunday, September 28, 2014 11:28 AM To: dev@lucene.apache.org Subject: Solr Design and Architecture Hi, I am new to Solr and would like to understand and deep dive into Solr design and architecture. Please suggest a document/resource explaining the above. Thanks Anurag
[jira] [Commented] (SOLR-6476) Create a bulk mode for schema API
[ https://issues.apache.org/jira/browse/SOLR-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14151128#comment-14151128 ] Noble Paul commented on SOLR-6476: -- bq.A while ago, Stefan and I tried to allow schema.xml to be edited from the admin UI. I want to move to a system where the users are not aware of schema.xml. It should be an internal detail. Exactly the same way I deal with my RDBMS/Cassandra (or whatever). The way a user thinks of my RDBMS is as follows: * Start up the server first * use a DDL to create a schema * During the lifecycle of the system I use more DDL to add/remove fields * I use a command like 'describe table' to know the current schema. (We have a REST API) * I don't really care about how the server stores the schema/config or whatever To achieve this goal, we must stop thinking about the system in terms of XMLs and start thinking about the APIs as the DDL for Solr. The term 'managed schema' will have no relevance. The schema will always be 'managed', so the adjective 'managed' must go away > Create a bulk mode for schema API > - > > Key: SOLR-6476 > URL: https://issues.apache.org/jira/browse/SOLR-6476 > Project: Solr > Issue Type: Bug > Components: Schema and Analysis >Reporter: Noble Paul >Assignee: Noble Paul > Labels: managedResource > Attachments: SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, > SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch > > > The current schema API does one operation at a time and the normal usecase is > that users add multiple fields/fieldtypes/copyFields etc in one shot. 
> example > {code:javascript} > curl http://localhost:8983/solr/collection1/schema -H > 'Content-type:application/json' -d '{ > "add-field": { > "name":"sell-by", > "type":"tdate", > "stored":true > }, > "add-field":{ > "name":"catchall", > "type":"text_general", > "stored":false > } > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Solr Design and Architecture
Hi, I am new to Solr and would like to understand and deep dive into Solr design and architecture. Please suggest a document/resource explaining the above. Thanks Anurag
[jira] [Resolved] (SOLR-6558) solr does not insert the first line in the csv file
[ https://issues.apache.org/jira/browse/SOLR-6558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-6558. -- Resolution: Invalid > solr does not insert the first line in the csv file > --- > > Key: SOLR-6558 > URL: https://issues.apache.org/jira/browse/SOLR-6558 > Project: Solr > Issue Type: Bug > Components: Build, clients - java, contrib - DataImportHandler >Affects Versions: 4.7.2 > Environment: 4.7.2 solr, windows 7, and java version is 1.7.0_25 >Reporter: fatih > Labels: features > Fix For: 4.7.2 > > Original Estimate: 24h > Remaining Estimate: 24h > > Link to Stack Overflow as well: > http://stackoverflow.com/questions/26000623/solr-does-not-insert-the-first-line-in-the-csv-file > When a CSV file is uploaded via the curl command below > C:\>curl > "http://localhost:8983/solr/update/csv?commit=true&stream.file=C:\dev\tools\solr-4.7.2\data.txt&stream.contentType=text/csv&header=false&fieldnames=id,cat,pubyear_i,title,author, > series_s,sequence_i&skipLines=0" > and the data.txt content is as below > book1,fantasy,2000,A Storm of Swords,George R.R. Martin,A Song of Ice and > Fire,3 > book2,fantasy,2005,A Feast for Crows,George R.R. Martin,A Song of Ice and > Fire,4 > book3,fantasy,2011,A Dance with Dragons,George R.R. Martin,A Song of Ice > and Fire,5 > book4,sci-fi,1987,Consider Phlebas,Iain M. Banks,The Culture,1 > book5,sci-fi,1988,The Player of Games,Iain M. Banks,The Culture,2 > book6,sci-fi,1990,Use of Weapons,Iain M. Banks,The Culture,3 > book7,fantasy,1984,Shadows Linger,Glen Cook,The Black Company,2 > book8,fantasy,1984,The White Rose,Glen Cook,The Black Company,3 > book9,fantasy,1989,Shadow Games,Glen Cook,The Black Company,4 > book10,sci-fi,2001,Gridlinked,Neal Asher,Ian Cormac,1 > book11,sci-fi,2003,The Line of Polity,Neal Asher,Ian Cormac,2 > book12,sci-fi,2005,Brass Man,Neal Asher,Ian Cormac,3 > the first row in the data.txt file, whose id is > "book1", is not being inserted into Solr. Can someone please tell me why? 
> http://localhost:8983/solr/query?q=id:book1 > { > "responseHeader":{ > "status":0, > "QTime":1, > "params":{ > "q":"id:book1"}}, > "response":{"numFound":0,"start":0,"docs":[] > }} > Solr logs already tells that book1 is being added. > 15440876 [searcherExecutor-5-thread-1] INFO > org.apache.solr.core.SolrCore û [collection1] Registered new searcher > Searcher@177fcdf1[collection1] > main{StandardDirectoryReader(segments_1g:124:nrt _z(4.7):C12)} > 15440877 [qtp84034882-11] INFO > org.apache.solr.update.processor.LogUpdateProcessor û [collection1] > webapp=/solr path=/update > params={fieldnames=id,cat,pubyear_i,title,author,series_s,sequence_i&skipLines=0&commit=true&stream.con > > tentType=text/csv&header=false&stream.file=C:\dev\tools\solr-4.7.2\data.txt} > {add=[?book1 (1480070032327180288), book2 (1480070032332423168), book3 > (1480070032335568896), book4 (1480070032337666048), book5 > (1480070032339763200), b > ook6 (1480070032341860352), book7 (1480070032343957504), book8 > (1480070032347103232), book9 (1480070032349200384), book10 > (1480070032351297536), ... 
(12 adds)],commit=} 0 92 > If I ask for all data then below you can also see book1 is still missing > > http://localhost:8983/solr/query?q=id:book*&sort=pubyear_i+desc&fl=id,title,pubyear_i&rows=15 > { > "responseHeader":{ > "status":0, > "QTime":1, > "params":{ > "fl":"id,title,pubyear_i", > "sort":"pubyear_i desc", > "q":"id:book*", > "rows":"15"}}, > "response":{"numFound":11,"start":0,"docs":[ > { > "id":"book3", > "pubyear_i":2011, > "title":["A Dance with Dragons"]}, > { > "id":"book2", > "pubyear_i":2005, > "title":["A Feast for Crows"]}, > { > "id":"book12", > "pubyear_i":2005, > "title":["Brass Man"]}, > { > "id":"book11", > "pubyear_i":2003, > "title":["The Line of Polity"]}, > { > "id":"book10", > "pubyear_i":2001, > "title":["Gridlinked"]}, > { > "id":"book6", > "pubyear_i":1990, > "title":["Use of Weapons"]}, > { > "id":"book9", > "pubyear_i":1989, > "title":["Shadow Games"]}, > { > "id":"book5", > "pubyear_i":1988, > "title":["The Player of Games"]}, > { > "id":"book4", > "pubyear_i":1987, > "title":["Consider Phlebas"]},
[jira] [Closed] (SOLR-6569) Why tab seperated file is giving error in solr during being inserted
[ https://issues.apache.org/jira/browse/SOLR-6569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson closed SOLR-6569. Resolution: Invalid Please raise this kind of issue on the Solr user's list before raising a JIRA. JIRA tickets are not intended to resolve usage questions. > Why tab seperated file is giving error in solr during being inserted > > > Key: SOLR-6569 > URL: https://issues.apache.org/jira/browse/SOLR-6569 > Project: Solr > Issue Type: Bug > Components: Build, clients - java > Environment: Windows7 , java 1.7, solr 4.7.2 >Reporter: fatih > Labels: documentation, features, mentor, newbie > Fix For: 4.7.2 > > > Link to Stack Overflow as well: > http://stackoverflow.com/questions/26077474/why-tab-seperated-file-is-giving-error-in-solr-during-being-inserted > When the below command is run > C:\dev\tools\solr-4.7.2\apache-tomcat-6.0.37\bin>curl > "http://localhost:8080/solr-4.7.2/update/csv?commit=true&rowid=id&fieldnames=interfaceseq,extractnumber&separator=%09&stream.file=C:\ > opt\invoices\input\5924usage_data1.dat&stream.contentType=text/csv&header=false&trim=true" > I get the error below, and I cannot understand the reason. > > > 400 name="QTime">1ERROR: [doc=0] > unknown field 'interfaceseq'400 > > The file content is as below > 101 5923 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6476) Create a bulk mode for schema API
[ https://issues.apache.org/jira/browse/SOLR-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14151124#comment-14151124 ] Erick Erickson commented on SOLR-6476: -- This is completely tangential, but I wanted to get others thinking about it. A while ago, Stefan and I tried to allow schema.xml to be edited from the admin UI. See: https://issues.apache.org/jira/browse/SOLR-5287 Uwe pointed out that writing arbitrary XML to a server is a security problem, so we pulled things out. It's actually in limbo in trunk, currently marked as a blocker. Is there any way the managed schema functionality could be warped in the Admin UI to allow editing of the schema file? I'm forever wishing that I could do that. I suppose it would require that the managed schema is used, though... Anyway, feel free to ignore this entirely or open a new JIRA if it sparks some ideas. > Create a bulk mode for schema API > - > > Key: SOLR-6476 > URL: https://issues.apache.org/jira/browse/SOLR-6476 > Project: Solr > Issue Type: Bug > Components: Schema and Analysis >Reporter: Noble Paul >Assignee: Noble Paul > Labels: managedResource > Attachments: SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, > SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch > > > The current schema API does one operation at a time and the normal usecase is > that users add multiple fields/fieldtypes/copyFields etc in one shot. > example > {code:javascript} > curl http://localhost:8983/solr/collection1/schema -H > 'Content-type:application/json' -d '{ > "add-field": { > "name":"sell-by", > "type":"tdate", > "stored":true > }, > "add-field":{ > "name":"catchall", > "type":"text_general", > "stored":false > } > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-5969) Add Lucene50Codec
[ https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14151120#comment-14151120 ] ASF subversion and git services commented on LUCENE-5969: - Commit 1628073 from [~rcmuir] in branch 'dev/branches/lucene5969' [ https://svn.apache.org/r1628073 ] LUCENE-5969: add merge api > Add Lucene50Codec > - > > Key: LUCENE-5969 > URL: https://issues.apache.org/jira/browse/LUCENE-5969 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael McCandless > Fix For: 5.0, Trunk > > Attachments: LUCENE-5969.patch, LUCENE-5969.patch > > > Spinoff from LUCENE-5952: > * Fix .si to write Version as 3 ints, not a String that requires parsing at > read time. > * Lucene42TermVectorsFormat should not use the same codecName as > Lucene41StoredFieldsFormat > It would also be nice if we had a "bumpCodecVersion" script so rolling a new > codec is not so daunting. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 641 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/641/ 1 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch Error Message: Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core [halfcollection_shard1_replica1] Caused by: Could not get shard id for core: halfcollection_shard1_replica1 Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core [halfcollection_shard1_replica1] Caused by: Could not get shard id for core: halfcollection_shard1_replica1 at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:570) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43) at org.apache.lucene.util.TestRuleMarkFa
[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1855 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1855/ Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. REGRESSION: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch Error Message: Could not fully create collection: acollectionafterbaddelete Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Could not fully create collection: acollectionafterbaddelete at __randomizedtesting.SeedInfo.seed([C2C5DE70AE105B43:43235068D94F3B7F]:0) at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:570) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:932) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:203) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869) at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.u
[jira] [Commented] (LUCENE-5969) Add Lucene50Codec
[ https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14151116#comment-14151116 ] ASF subversion and git services commented on LUCENE-5969: - Commit 1628070 from [~rcmuir] in branch 'dev/branches/lucene5969' [ https://svn.apache.org/r1628070 ] LUCENE-5969: fix compile/javadocs, tighten up backwards codecs, add more safety to 5.x fields/vectors > Add Lucene50Codec > - > > Key: LUCENE-5969 > URL: https://issues.apache.org/jira/browse/LUCENE-5969 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael McCandless > Fix For: 5.0, Trunk > > Attachments: LUCENE-5969.patch, LUCENE-5969.patch > > > Spinoff from LUCENE-5952: > * Fix .si to write Version as 3 ints, not a String that requires parsing at > read time. > * Lucene42TermVectorsFormat should not use the same codecName as > Lucene41StoredFieldsFormat > It would also be nice if we had a "bumpCodecVersion" script so rolling a new > codec is not so daunting. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_40-ea-b04) - Build # 11343 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11343/ Java: 32bit/jdk1.8.0_40-ea-b04 -server -XX:+UseParallelGC 1 tests failed. REGRESSION: org.apache.solr.cloud.CloudExitableDirectoryReaderTest.testDistribSearch Error Message: no exception matching expected: 400: Request took too long during query expansion. Terminating request. Stack Trace: java.lang.AssertionError: no exception matching expected: 400: Request took too long during query expansion. Terminating request. at __randomizedtesting.SeedInfo.seed([BCBE62615F717DF2:3D58EC79282E1DCE]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.CloudExitableDirectoryReaderTest.assertFail(CloudExitableDirectoryReaderTest.java:101) at org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTimeoutTests(CloudExitableDirectoryReaderTest.java:81) at org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTest(CloudExitableDirectoryReaderTest.java:54) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869) at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at
[jira] [Commented] (SOLR-6282) ArrayIndexOutOfBoundsException during search
[ https://issues.apache.org/jira/browse/SOLR-6282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14151096#comment-14151096 ] Anurag Sharma commented on SOLR-6282: - Jason - So far there is no clarity on the steps to reproduce this issue. Also, from the above comment it looks like the issue doesn't exist at all. If you still see the issue, please update with the detailed steps. Otherwise, with the above comments and information, we are bound to close it. > ArrayIndexOutOfBoundsException during search > > > Key: SOLR-6282 > URL: https://issues.apache.org/jira/browse/SOLR-6282 > Project: Solr > Issue Type: Bug > Components: query parsers >Affects Versions: 4.8 >Reporter: Jason Emeric >Priority: Critical > Labels: difficulty-medium, impact-low > > When executing a search with the following query strings a > ERROR org.apache.solr.servlet.SolrDispatchFilter - > null:java.lang.ArrayIndexOutOfBoundsException > error is thrown and no stack trace is provided. This is happening on > searches that seem to have no similar pattern to them (special characters, > length, spaces, etc.) > q=((work_title_search:(%22+zoe%22%20)%20OR%20work_title_search:%22+zoe%22^100)%20AND%20(performer_name_search:(+big~0.75%20+b%27z%20%20)^7%20OR%20performer_name_search:%22+big%20+b%27z%20%20%22^30)) > q=((work_title_search:(%22+rtb%22%20)%20OR%20work_title_search:%22+rtb%22^100)%20AND%20(performer_name_search:(+fly~0.75%20+street~0.75%20+gang~0.75%20)^7%20OR%20performer_name_search:%22+fly%20+street%20+gang%20%20%22^30)) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-6307) Atomic update remove does not work for int array or date array
[ https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anurag Sharma updated SOLR-6307: Attachment: SOLR-6307.patch Here is the patch for review, using approach #2. Besides int and date, it also covers the float case. > Atomic update remove does not work for int array or date array > -- > > Key: SOLR-6307 > URL: https://issues.apache.org/jira/browse/SOLR-6307 > Project: Solr > Issue Type: Bug > Components: update >Affects Versions: 4.9 >Reporter: Kun Xi > Labels: atomic, difficulty-medium, impact-medium > Attachments: SOLR-6307.patch > > > Try to remove an element in the string array with curl: > {code} > curl http://localhost:8080/update\?commit\=true -H > 'Content-type:application/json' -d '[{ "attr_birth_year_is": { "remove": > [1960]}, "id": 1098}]' > curl http://localhost:8080/update\?commit\=true -H > 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": > ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", > "2014-02-21T12:00:00Z"]}, "id": 1098}]' > {code} > Neither of them works. > The set and add operation for int array works. > The set, remove, and add operation for string array works -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
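Since the attached SOLR-6307.patch isn't inlined above, the mechanics can only be sketched. One plausible way a "remove" silently fails for int and date arrays while string arrays work is a boxed-type mismatch: List.remove relies on equals(), and the value produced by parsing the request need not have the same type as the stored one. A hypothetical, self-contained illustration (plain Java, not actual Solr code; the class name and the string-normalization workaround are assumptions):

```java
import java.util.ArrayList;
import java.util.List;

public class RemoveTypeMismatch {
    public static void main(String[] args) {
        List<Object> stored = new ArrayList<>();
        stored.add(1960);  // int field value stored as Integer

        // A "remove" value may deserialize as Long; List.remove uses equals(),
        // and Integer.valueOf(1960).equals(Long.valueOf(1960)) is false.
        stored.remove(Long.valueOf(1960));
        System.out.println(stored);  // [1960] -- nothing was removed

        // Comparing normalized string forms removes it regardless of box type.
        Object toRemove = 1960L;
        stored.removeIf(v -> String.valueOf(v).equals(String.valueOf(toRemove)));
        System.out.println(stored);  // []
    }
}
```

The same shape of mismatch would apply to dates, where the stored value may be a Date object while the request carries an ISO-8601 string.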
[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.7.0_67) - Build # 4239 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4239/ Java: 32bit/jdk1.7.0_67 -server -XX:+UseParallelGC 3 tests failed. REGRESSION: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch Error Message: Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core [halfcollection_shard1_replica1] Caused by: Could not get shard id for core: halfcollection_shard1_replica1 Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core [halfcollection_shard1_replica1] Caused by: Could not get shard id for core: halfcollection_shard1_replica1 at __randomizedtesting.SeedInfo.seed([1EEBA7AA7178A4CB:9F0D29B20627C4F7]:0) at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:568) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215) at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869) at sun.reflect.GeneratedMethodAccessor65.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
[jira] [Updated] (SOLR-6528) hdfs cluster with replication min set to 2 / Solr does not honor dfs.replication in hdfs-site.xml
[ https://issues.apache.org/jira/browse/SOLR-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated SOLR-6528: - Fix Version/s: (was: 4.10.1) 4.10.2 > hdfs cluster with replication min set to 2 / Solr does not honor > dfs.replication in hdfs-site.xml > -- > > Key: SOLR-6528 > URL: https://issues.apache.org/jira/browse/SOLR-6528 > Project: Solr > Issue Type: Bug >Affects Versions: 4.9 > Environment: RedHat JDK 1.7 hadoop 2.4.1 >Reporter: davidchiu > Fix For: 4.10.2, Trunk > > > org.apache.hadoop.ipc.RemoteException(java.io.IOException): file > /user/solr/test1/core_node1/data/tlog/tlog.000 on client > 192.161.1.91.\nRequested replication 1 is less than the required minimum 2\n\t -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
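For readers unfamiliar with the two Hadoop settings in tension here: dfs.replication is the replication factor a client requests for newly created files, while dfs.namenode.replication.min is the hard floor the NameNode enforces; a write requesting fewer replicas than that minimum is rejected with exactly the "Requested replication 1 is less than the required minimum 2" error quoted above. A minimal, illustrative hdfs-site.xml fragment (the values shown are examples only):

```xml
<!-- hdfs-site.xml (fragment) -->
<configuration>
  <!-- Replication factor clients request for newly created files -->
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <!-- Hard minimum enforced by the NameNode; writes below this are rejected -->
  <property>
    <name>dfs.namenode.replication.min</name>
    <value>2</value>
  </property>
</configuration>
```

The report suggests Solr requested replication 1 explicitly rather than inheriting dfs.replication from the loaded configuration, which is what the issue title describes.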
[jira] [Updated] (SOLR-6539) SolrJ document object binding / BigDecimal
[ https://issues.apache.org/jira/browse/SOLR-6539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated SOLR-6539: - Fix Version/s: (was: 4.10.1) 4.10.2 > SolrJ document object binding / BigDecimal > -- > > Key: SOLR-6539 > URL: https://issues.apache.org/jira/browse/SOLR-6539 > Project: Solr > Issue Type: Improvement > Components: SolrJ >Affects Versions: 4.9, 4.10, 4.10.1 >Reporter: Bert Brecht > Labels: patch > Fix For: 4.10.2 > > Attachments: SOLR-6539.diff > > > We are using BigDecimals in our application quite often for calculating. We > store our values typically as java primitives (int, long/double, float), > using the DocumentObjectBinder (annotation-based document object binding). > Unfortunately, we must have exactly the type given in the solr schema for the type > used as field/accessor. We found out that the following patch would allow us > to define BigDecimal as a type, as we just use BigDecimal as a type in our > mapped POJO. This would help to make the mapping more powerful without > losing anything. > -- > $ svn diff > Downloads/solr/solr/solrj/src/java/org/apache/solr/client/solrj/beans/DocumentObjectBinder.java > Index: > Downloads/solr/solr/solrj/src/java/org/apache/solr/client/solrj/beans/DocumentObjectBinder.java > === > --- > Downloads/solr/solr/solrj/src/java/org/apache/solr/client/solrj/beans/DocumentObjectBinder.java > (revision 1626087) > +++ > Downloads/solr/solr/solrj/src/java/org/apache/solr/client/solrj/beans/DocumentObjectBinder.java > (working copy) > @@ -359,6 +359,9 @@ >if (v != null && type == ByteBuffer.class && v.getClass() == > byte[].class) { > v = ByteBuffer.wrap((byte[]) v); >} > + if (type == java.math.BigDecimal.class){ > +v = BigDecimal.valueOf(v); > + } >try { > if (field != null) { >field.set(obj, v); -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
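One caveat for reviewers of the quoted diff: `BigDecimal.valueOf` has no overload accepting an arbitrary `Object` (only `valueOf(long)`, `valueOf(double)`, and `valueOf(long, int)`), so the added line would not compile as-is. A self-contained sketch of the coercion the patch appears to intend — the helper name and shape are hypothetical, not the actual DocumentObjectBinder code:

```java
import java.math.BigDecimal;

public class BigDecimalCoercionSketch {
    // Hypothetical stand-in for the binder's conversion step: coerce any
    // boxed numeric into a BigDecimal when the target POJO field is BigDecimal.
    static Object coerce(Object v, Class<?> type) {
        if (v != null && type == BigDecimal.class && !(v instanceof BigDecimal)) {
            // Convert via the decimal string form; works for any boxed numeric.
            return new BigDecimal(v.toString());
        }
        return v;
    }

    public static void main(String[] args) {
        System.out.println(coerce(42, BigDecimal.class));    // 42
        System.out.println(coerce(2.5d, BigDecimal.class));  // 2.5
        System.out.println(coerce("text", String.class));    // text
    }
}
```

Going through `new BigDecimal(v.toString())` handles Integer, Long, Float, and Double uniformly, whereas funneling everything through `valueOf(double)` could lose precision for large long values.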
[jira] [Resolved] (LUCENE-5936) Add BWC checks to verify what is tested matches what versions we know about
[ https://issues.apache.org/jira/browse/LUCENE-5936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless resolved LUCENE-5936. Resolution: Fixed [~rjernst] this is done now right? > Add BWC checks to verify what is tested matches what versions we know about > --- > > Key: LUCENE-5936 > URL: https://issues.apache.org/jira/browse/LUCENE-5936 > Project: Lucene - Core > Issue Type: Test >Reporter: Ryan Ernst >Assignee: Ryan Ernst > Fix For: 4.10.1, 5.0, Trunk > > Attachments: LUCENE-5936.patch, LUCENE-5936.patch > > > This is a follow up from LUCENE-5934. Mike has already has something like > this for the smoke tester, but here I am suggesting a test within the test > (similar to other Version tests we have which check things like deprecation > status of old versions). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: How to openIfChanged the most recent merge?
OK I ran the test and saw the failure, thank you! I think I understand why you are seeing what you are seeing. First off, you are not actually using an NRT reader when hardReopenBeforeDVUpdate is false, because in readerReopenIfChanged, when oldReader == null, you must do: return DirectoryReader.open(writer2, true); so that your initial reader is in fact NRT. All subsequent reopens from then on will then be NRT. When I make that change to your test, it seems to pass (or at least run for much longer than it did before...). However, if I remove the writer.commit() before the reopen, the test fails. The reason is that IW commit and NRT reader reopen do not reflect merges "just kicked off" by that flush, even when using SMS. So, there will always be this "off by 1", in that you'll get a reader with 10 segments (pre-merge) not 1 segment (post-merge). One possible workaround here w/o having to call crazy-expensive commit would be to call reopenIfChanged twice in a row (and fix your reopen method to properly handle null return from openIfChanged); when I tried that in your test, it also seemed to run forever... Mike McCandless http://blog.mikemccandless.com On Fri, Sep 26, 2014 at 2:44 PM, Mikhail Khludnev wrote: > > > On Fri, Sep 26, 2014 at 7:07 PM, Michael McCandless > wrote: >> >> Sorry I can't make heads or tails of what you are saying here ... can >> you maybe make a small test case that fails with "ant test"? Boil it >> down as much as possible... > > > Sure. I'm really sorry for being so confusing. > I changed constant > https://github.com/m-khl/lucene-merge-visibility/commit/a4a01c2c91d9c30850602b8dddf23de5363c4851#diff-86ebfbf440fe69ee36a52705cb92b824R44 > to make it fail. > the branch reader-vs-merge at > https://github.com/m-khl/lucene-merge-visibility/tree/reader-vs-merge > in lucene/core there is a failed test > $> ant test -Dtestcase=TestNumDValUpdVsReaderVisibility > > it's verbose, because it uses sysout as infostream. 
>[junit4] FAILURE 2.40s | TestNumDValUpdVsReaderVisibility.testSimple <<< >[junit4]> Throwable #1: java.lang.AssertionError: failed on id:doc-18 > expected:<17> but was:<18> >[junit4]> at > __randomizedtesting.SeedInfo.seed([73A18231908F4ADC:4B12A6CFB77C9E0D]:0) >[junit4]> at > org.apache.lucene.index.TestNumDValUpdVsReaderVisibility.testSimple(TestNumDValUpdVsReaderVisibility.java:134) > > >> >> >> The gist seems to be if you use an NRT reader something fails, but if >> you instead open a new reader, that something passes? > > I don't use NRT, and perhaps it's a solution. I just don't know how to do > that. > Note: closing the writer and opening a reader - works (but I suppose it's slow); just > committing and reopening the reader - fails; >> >> But what >> exactly is failing? > > - say I have merge factor 10 and SerialMergeScheduler. > - I did 9 commits already and have 9 segments in the index > - I add a few docs and commit > - the 10th commit triggers a merge synchronously; it's done. > - now if I reopen the reader it sees 10 unmerged segments (the merged single segment > index isn't visible on reopen) /*test FAILS*/ > - but if I fully close writer&reader and open a reader, I've got a single > segment merged index. /*test PASS */ > > - usually such behavior causes no problems; it's reasonable and fine. > - but I do a mad thing > - I use that reader (with 10 segments) to get a docnum and write it as a > docvalue; > - after I commit only the docvalues update (no docs update) and reopen the reader, > I've got a single segment index, which was already written by the merge at the > previous commit. > - and here is the problem: a docnum obtained against the 10-segment index > doesn't match a docnum in the single segment index (there was a deletion) > >> >> And what is a "solid" segment here? > > I meant an index consisting of a single segment, in contrast to an index consisting > of many. > > Thank you!
>> >> >> Mike McCandless >> >> http://blog.mikemccandless.com >> >> >> On Thu, Sep 25, 2014 at 6:00 PM, Mikhail Khludnev >> wrote: >> > Hello Mike! >> > >> > Thanks for your attention. >> > I pushed the mad case at >> > >> > https://github.com/m-khl/lucene-merge-visibility/commit/fa2d60be5b13eb57e0527c843119cf62cfa83a7d#diff-86ebfbf440fe69ee36a52705cb92b824R120 >> > >> > it does the following >> > >> > - writes a pair of docs >> > - commit >> > - reopen reader, searches for one of them >> > - update this doc with its docnum (I know it's weird, but it should work >> > if >> > the reopened reader sees that update) >> > - commit this DV update >> > - search that doc and check the written doc val. >> > it passes if hardReopenBeforeDVUpdate=true and fails otherwise >> > >> > I know that changing docnum is natural, but I expect it doesn't change >> > while >> > I update docvals. >> > here is how it flips: >> > at the commit after doc update we have many segments >> > >> > now checkpoint "_0(6.0.0):C2/1:delGen=1:fieldInfosGen=1:dvGen=1 >> > _1(6.0.0):C2:fieldInfosGen=1:dvGen=1 _2(6.0.0):C2: >> >