[JENKINS] Lucene-Solr-NightlyTests-7.0 - Build # 45 - Still unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.0/45/

6 tests failed.

FAILED: org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test

Error Message: The Monkey ran for over 45 seconds and no jetties were stopped - this is worth investigating!

Stack Trace:
java.lang.AssertionError: The Monkey ran for over 45 seconds and no jetties were stopped - this is worth investigating!
    at __randomizedtesting.SeedInfo.seed([2CB88AE6D0BACB03:A4ECB53C7E46A6FB]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.apache.solr.cloud.ChaosMonkey.stopTheMonkey(ChaosMonkey.java:587)
    at org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test(ChaosMonkeySafeLeaderTest.java:133)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 193 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/193/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.

FAILED: org.apache.solr.cloud.HttpPartitionTest.test

Error Message: Error from server at http://127.0.0.1:61457: Could not find collection : c8n_1x3

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:61457: Could not find collection : c8n_1x3
    at __randomizedtesting.SeedInfo.seed([FA9745AB31FEC91E:72C37A719F02A4E6]:0)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253)
    at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242)
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
    at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollectionRetry(AbstractFullDistribZkTestBase.java:1914)
    at org.apache.solr.cloud.HttpPartitionTest.testRf3(HttpPartitionTest.java:392)
    at org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:138)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverrides
[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 439 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/439/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseG1GC

2 tests failed.

FAILED: org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey

Error Message: There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 seconds
    at __randomizedtesting.SeedInfo.seed([8579DBFB4EFBB912:E5E082A0FFD1296]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:185)
    at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
    at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
    at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:908)
    at org.apache.solr.cloud.ShardSplitTest.testSplitWithChaosMonkey(ShardSplitTest.java:436)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
    at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apac
[jira] [Commented] (LUCENE-7951) New wrapper classes for Geo3d
[ https://issues.apache.org/jira/browse/LUCENE-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16171100#comment-16171100 ]

David Smiley commented on LUCENE-7951:
--------------------------------------

(I'm back from travel.) The current state is looking pretty nice. I think you can now merge Geo3dAreaRptTest into Geo3dRptTest? Keep the specific tests for bugs that were found. Use the new random shape generator instead of the manual code.

bq. Geo3dCircleShape.relate

While I don't pretend to know the details of the algorithms for both world models, I do find the override of the method here a bit suspicious. Shouldn't GeoStandardCircle look at the PlanetModel and do the right thing?

bq. New constructor in Geo3dRectangleShape

Ok.

> New wrapper classes for Geo3d
> -----------------------------
>
>                 Key: LUCENE-7951
>                 URL: https://issues.apache.org/jira/browse/LUCENE-7951
>             Project: Lucene - Core
>          Issue Type: Improvement
>          Components: modules/spatial-extras
>            Reporter: Ignacio Vera
>            Assignee: David Smiley
>            Priority: Minor
>         Attachments: LUCENE_7951_build.patch, LUCENE_7951_build.patch, LUCENE-7951.patch, LUCENE-7951.patch
>
> Hi,
> After the latest developments in the Geo3d library, in particular:
> [https://issues.apache.org/jira/browse/LUCENE-7906]: Spatial relationships between GeoShapes
> [https://issues.apache.org/jira/browse/LUCENE-7936]: Serialization of GeoShapes.
> I propose a new set of wrapper classes which can, for example, be linked to Solr, as they implement their own SpatialContextFactory. This provides the capability of indexing shapes with spherical geometry.
> Thanks!

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10451) Remove contrib/ltr/lib from lib includes in the techproducts example config
[ https://issues.apache.org/jira/browse/SOLR-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16171086#comment-16171086 ]

SungJunyoung commented on SOLR-10451:
-------------------------------------

Is this the right way to contribute to Solr? Wanting to contribute, I made a pull request on GitHub.

> Remove contrib/ltr/lib from lib includes in the techproducts example config
> ---------------------------------------------------------------------------
>
>                 Key: SOLR-10451
>                 URL: https://issues.apache.org/jira/browse/SOLR-10451
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Christine Poerschke
>            Priority: Minor
>              Labels: newdev
>         Attachments: SOLR-10451.patch
>
> As [~varunthacker] mentioned in SOLR-8542 there are actually no jars in the {{contrib/ltr/lib}} folder.
> -So to avoid confusion, let's remove the {{contrib/ltr}} folder from the Solr binary release (it currently contains just a boilerplate {{README.txt}} file).-
> The {{<lib ... regex=".*\.jar" />}} line in https://github.com/apache/lucene-solr/blob/master/solr/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml can also be removed.
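For context, the directive being removed is a solrconfig.xml `<lib/>` include. The general shape is sketched below; the `dir` value here is illustrative, not the exact line from the techproducts config:

```xml
<!-- Illustrative <lib/> directive: tells Solr to add every jar matching
     the regex from the given directory to the core's classpath. If the
     directory contains no jars (as with contrib/ltr/lib), the directive
     is dead weight and can be removed. The dir value is an example. -->
<lib dir="${solr.install.dir:../../../..}/contrib/ltr/lib/" regex=".*\.jar" />
```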
[jira] [Updated] (SOLR-7871) Platform independent config file instead of solr.in.sh and solr.in.cmd
[ https://issues.apache.org/jira/browse/SOLR-7871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Gerlowski updated SOLR-7871:
----------------------------------
    Attachment: SOLR-7871.patch

Adds tests for the configuration precedence and the toBashScript/toCmdScript functionality. Next steps are:
- Windows support
- pulling out more default values, separate GC options file maybe(?)

But this is getting pretty close to being ready. [~janhoy], do you have any opinions on my earlier question about the format/location we keep the defaults in? I made a point earlier that it _might_ make sense to keep them in Java as static constants, though I also might just be being crazy. Anyway, feedback appreciated.

> Platform independent config file instead of solr.in.sh and solr.in.cmd
> ----------------------------------------------------------------------
>
>                 Key: SOLR-7871
>                 URL: https://issues.apache.org/jira/browse/SOLR-7871
>             Project: Solr
>          Issue Type: Improvement
>          Components: scripts and tools
>    Affects Versions: 5.2.1
>            Reporter: Jan Høydahl
>            Assignee: Jan Høydahl
>              Labels: bin/solr
>         Attachments: SOLR-7871.patch, SOLR-7871.patch, SOLR-7871.patch, SOLR-7871.patch, SOLR-7871.patch, SOLR-7871.patch, SOLR-7871.patch
>
> Spinoff from SOLR-7043.
> The config files {{solr.in.sh}} and {{solr.in.cmd}} are currently executable batch files, but all they do is set environment variables for the start scripts in the format {{key=value}}.
> Suggest instead having one central, platform-independent config file, e.g. {{bin/solr.yml}} or {{bin/solrstart.properties}}, which is parsed by {{SolrCLI.java}}.
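The "configuration precedence" the patch tests could be sketched as follows. This is a hypothetical illustration of the idea, not the actual SolrCLI code, and the key names are made up for the example:

```python
def resolve(key, cli_args, env, config_file, defaults):
    """Return the first value found, searching sources from highest
    precedence to lowest: command line > environment > config file >
    built-in default."""
    for source in (cli_args, env, config_file, defaults):
        if key in source:
            return source[key]
    raise KeyError(key)

# Hypothetical example: the environment overrides the config file,
# and the built-in default applies only when nothing else sets the key.
defaults    = {"SOLR_PORT": "8983", "SOLR_HEAP": "512m"}
config_file = {"SOLR_HEAP": "2g"}
env         = {"SOLR_HEAP": "4g"}
cli_args    = {}

print(resolve("SOLR_HEAP", cli_args, env, config_file, defaults))  # 4g
print(resolve("SOLR_PORT", cli_args, env, config_file, defaults))  # 8983
```

A toBashScript/toCmdScript step would then render whatever `resolve` produces as `export KEY=value` or `set KEY=value` lines for the respective platform.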
[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 194 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/194/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.

FAILED: org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest

Error Message: Timeout waiting for all live and active

Stack Trace:
java.lang.AssertionError: Timeout waiting for all live and active
    at __randomizedtesting.SeedInfo.seed([2F00C71C00B65773:AC7698EED6CF59D2]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.junit.Assert.assertTrue(Assert.java:43)
    at org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest(TestCloudRecovery.java:184)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at java.lang.Thread.run(Thread.java:748)

FAILED: org.apache.solr.cloud.TestTlogReplica.testCreateDelete {seed=[2F00C71C00B65773:80468AACC284646F]}

Error Message: Could not find collection : tlog_replica_test_create_delete

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : tlog_replica_test
[jira] [Commented] (SOLR-10451) Remove contrib/ltr/lib from lib includes in the techproducts example config
[ https://issues.apache.org/jira/browse/SOLR-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16171076#comment-16171076 ]

ASF GitHub Bot commented on SOLR-10451:
---------------------------------------

GitHub user sungjunyoung opened a pull request:

    https://github.com/apache/lucene-solr/pull/249

    SOLR-10451: Remove contrib/ltr/lib from lib includes in the techproducts example config

    I deleted line 84: `` in the [solrconfig.xml](https://github.com/apache/lucene-solr/blob/master/solr/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml) file.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/sungjunyoung/lucene-solr SOLR-10451

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/lucene-solr/pull/249.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #249

commit 5f627025dcb08e3158823685917de1dfe98127b6
Author: sungjunyoung
Date:   2017-09-19T03:47:58Z

    SOLR-10451: Remove contrib/ltr/lib from lib includes in the techproducts example config
    I deleted line 84: `` in https://github.com/apache/lucene-solr/blob/master/solr/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml

> Remove contrib/ltr/lib from lib includes in the techproducts example config
> ---------------------------------------------------------------------------
>
>                 Key: SOLR-10451
>                 URL: https://issues.apache.org/jira/browse/SOLR-10451
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Christine Poerschke
>            Priority: Minor
>              Labels: newdev
>         Attachments: SOLR-10451.patch
>
> As [~varunthacker] mentioned in SOLR-8542 there are actually no jars in the {{contrib/ltr/lib}} folder.
> -So to avoid confusion, let's remove the {{contrib/ltr}} folder from the Solr binary release (it currently contains just a boilerplate {{README.txt}} file).-
> The {{ regex=".*\.jar" />}} line in https://github.com/apache/lucene-solr/blob/master/solr/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml can also be removed.
[GitHub] lucene-solr pull request #249: SOLR-10451: Remove contrib/ltr/lib from lib i...
GitHub user sungjunyoung opened a pull request:

    https://github.com/apache/lucene-solr/pull/249

    SOLR-10451: Remove contrib/ltr/lib from lib includes in the techproducts example config

    I deleted line 84: `` in the [solrconfig.xml](https://github.com/apache/lucene-solr/blob/master/solr/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml) file.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/sungjunyoung/lucene-solr SOLR-10451

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/lucene-solr/pull/249.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #249

commit 5f627025dcb08e3158823685917de1dfe98127b6
Author: sungjunyoung
Date:   2017-09-19T03:47:58Z

    SOLR-10451: Remove contrib/ltr/lib from lib includes in the techproducts example config
    I deleted line 84: `` in https://github.com/apache/lucene-solr/blob/master/solr/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml
[jira] [Closed] (SOLR-5478) Optimization: Fetch all "fl" values from docValues instead of stored values if possible/equivalent
[ https://issues.apache.org/jira/browse/SOLR-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Smiley closed SOLR-5478.
------------------------------
    Resolution: Duplicate

> Optimization: Fetch all "fl" values from docValues instead of stored values if possible/equivalent
> --------------------------------------------------------------------------------------------------
>
>                 Key: SOLR-5478
>                 URL: https://issues.apache.org/jira/browse/SOLR-5478
>             Project: Solr
>          Issue Type: Improvement
>          Components: Response Writers
>    Affects Versions: 4.5
>            Reporter: Manuel Lenormand
>             Fix For: 6.0, 4.9
>
>         Attachments: SOLR-5478.patch, SOLR-5478.patch, SOLR-5478 smiley fl.fieldCacheFields.patch
>
> When the "fl" field list mentions a list of fields that all have docValues, and they are equivalent to the stored version (not true for multiValued fields due to ordering), then we can fetch the values more efficiently from docValues than from stored values. If this can't be done, might as well fetch them all from stored values when they have both (this is what happens today).
> Former title: "Speed-up distributed search with high rows param or deep paging by transforming docId's to uniqueKey via memory docValues". Originally the scope of this was just the uniqueKey but now it's broader.
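Whether the optimization can apply depends on how each "fl" field is defined in the schema. A sketch of a field that would qualify (the field name and type here are illustrative, not from the issue):

```xml
<!-- Illustrative schema.xml snippet: a single-valued field whose values
     live in docValues. With useDocValuesAsStored="true", "fl" can return
     the value from docValues even though stored="false", which is the
     kind of field this optimization targets. -->
<field name="popularity" type="pint" indexed="true" stored="false"
       docValues="true" useDocValuesAsStored="true"/>
```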
[jira] [Created] (SOLR-11365) Verbose parameter's names for QParsers
Cao Manh Dat created SOLR-11365:
-----------------------------------

             Summary: Verbose parameter's names for QParsers
                 Key: SOLR-11365
                 URL: https://issues.apache.org/jira/browse/SOLR-11365
             Project: Solr
          Issue Type: Bug
      Security Level: Public (Default Security Level. Issues are Public)
            Reporter: Cao Manh Dat

SOLR-11244 enabled a powerful and easy-to-construct Query DSL for Solr. Therefore we may consider using verbose names for the parameters of QParsers, to make queries easier to understand. For example:

{code}
curl -XGET http://localhost:8983/solr/query -d '
{
  "query" : {
    "boost" : {
      "query" : {
        "lucene" : {
          "operator" : "AND",
          "default_field" : "cat_s",
          "query" : "A"
        }
      },
      "function" : "log(popularity)"
    }
  }
}'
{code}

In my opinion we should support both verbose and shorthand names in the Query DSL.
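For comparison, a shorthand form of the same query could reuse the parsers' traditional local-param names. The sketch below assumes `q.op`, `df`, and `b` (the classic lucene-parser and boost-parser parameter names) carry over into the JSON DSL; treat it as an illustration of the proposal, not settled syntax:

```json
{
  "query": {
    "boost": {
      "query": {
        "lucene": { "q.op": "AND", "df": "cat_s", "query": "A" }
      },
      "b": "log(popularity)"
    }
  }
}
```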
[jira] [Updated] (SOLR-11363) TestCloudJSONFacetJoinDomain fails with Points enabled
[ https://issues.apache.org/jira/browse/SOLR-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yonik Seeley updated SOLR-11363:
--------------------------------
    Attachment: SOLR-11363.patch

Draft patch attached.

> TestCloudJSONFacetJoinDomain fails with Points enabled
> ------------------------------------------------------
>
>                 Key: SOLR-11363
>                 URL: https://issues.apache.org/jira/browse/SOLR-11363
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public (Default Security Level. Issues are Public)
>    Affects Versions: 7.0
>            Reporter: Yonik Seeley
>         Attachments: SOLR-11363.patch
>
> As Hoss noted in SOLR-10939, this test still had points disabled, and enabling them causes tests to fail.
[JENKINS] Lucene-Solr-Tests-master - Build # 2102 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2102/

7 tests failed.

FAILED: org.apache.solr.cloud.AssignTest.testIdIsUnique

Error Message: org.apache.solr.common.SolrException: Error inc and get counter from Zookeeper for collection:c4

Stack Trace:
java.util.concurrent.ExecutionException: org.apache.solr.common.SolrException: Error inc and get counter from Zookeeper for collection:c4
    at __randomizedtesting.SeedInfo.seed([356A5B4AD3E160EC:A7D7F762443CE01A]:0)
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.apache.solr.cloud.AssignTest.testIdIsUnique(AssignTest.java:114)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.solr.common.SolrException: Error inc and get counter from Zookeeper for collection:c4
    at org.apache.solr.cloud.Assign.incAndGetId(Assign.java:107)
    at org.apache.solr.cloud
[jira] [Comment Edited] (SOLR-5478) Optimization: Fetch all "fl" values from docValues instead of stored values if possible/equivalent
[ https://issues.apache.org/jira/browse/SOLR-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16171004#comment-16171004 ] Cao Manh Dat edited comment on SOLR-5478 at 9/19/17 2:12 AM: - Hi [~dsmiley] [~varunthacker], I think the commit on SOLR-8344 solved this issue as well, so this issue can be closed. In case we want to change the optimization algorithm, we can open another ticket with a detailed proposal for the change. was (Author: caomanhdat): Hi [~dsmiley] [~varunthacker], I think commit on SOLR-8344 solved this issue as well. In case we want to change the optimization algorithm, we can open another ticket with a detailed proposal about the changing. > Optimization: Fetch all "fl" values from docValues instead of stored values > if possible/equivalent > -- > > Key: SOLR-5478 > URL: https://issues.apache.org/jira/browse/SOLR-5478 > Project: Solr > Issue Type: Improvement > Components: Response Writers >Affects Versions: 4.5 >Reporter: Manuel Lenormand > Fix For: 4.9, 6.0 > > Attachments: SOLR-5478.patch, SOLR-5478.patch, SOLR-5478 smiley > fl.fieldCacheFields.patch > > > When the "fl" field list mentions a list of fields that all have docValues, > and they are equivalent to the stored version (not true for multiValued field > due to ordering), then we can fetch the values more efficiently from > docValues than from stored values. If this can't be done, might as well > fetch them all from stored values when they have both (this is what happens > today). > Former title: Speed-up distributed search with high rows param or deep paging > by transforming docId's to uniqueKey via memory docValues > Originally the scope of this was just for the uniqueKey but now it's broader. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-5478) Optimization: Fetch all "fl" values from docValues instead of stored values if possible/equivalent
[ https://issues.apache.org/jira/browse/SOLR-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16171004#comment-16171004 ] Cao Manh Dat commented on SOLR-5478: Hi [~dsmiley] [~varunthacker], I think the commit on SOLR-8344 solved this issue as well. In case we want to change the optimization algorithm, we can open another ticket with a detailed proposal for the change. > Optimization: Fetch all "fl" values from docValues instead of stored values > if possible/equivalent > -- > > Key: SOLR-5478 > URL: https://issues.apache.org/jira/browse/SOLR-5478 > Project: Solr > Issue Type: Improvement > Components: Response Writers >Affects Versions: 4.5 >Reporter: Manuel Lenormand > Fix For: 4.9, 6.0 > > Attachments: SOLR-5478.patch, SOLR-5478.patch, SOLR-5478 smiley > fl.fieldCacheFields.patch > > > When the "fl" field list mentions a list of fields that all have docValues, > and they are equivalent to the stored version (not true for multiValued field > due to ordering), then we can fetch the values more efficiently from > docValues than from stored values. If this can't be done, might as well > fetch them all from stored values when they have both (this is what happens > today). > Former title: Speed-up distributed search with high rows param or deep paging > by transforming docId's to uniqueKey via memory docValues > Originally the scope of this was just for the uniqueKey but now it's broader.
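The eligibility rule the issue describes (every requested "fl" field has docValues, and each docValues representation is equivalent to its stored form, which rules out multiValued fields because of value ordering) can be expressed as a standalone check. The sketch below is illustrative only, not Solr's actual implementation; the FieldSpec and canFetchAllFromDocValues names are hypothetical stand-ins.

```java
import java.util.List;

class DocValuesFetchSketch {

    // Hypothetical stand-in for the relevant bits of Solr's SchemaField.
    static class FieldSpec {
        final boolean hasDocValues;
        final boolean multiValued;
        FieldSpec(boolean hasDocValues, boolean multiValued) {
            this.hasDocValues = hasDocValues;
            this.multiValued = multiValued;
        }
    }

    // True when every requested field can be answered from docValues alone:
    // each field must have docValues and be single-valued, since multiValued
    // docValues come back sorted (and deduped), losing the original order.
    static boolean canFetchAllFromDocValues(List<FieldSpec> fl) {
        for (FieldSpec f : fl) {
            if (!f.hasDocValues || f.multiValued) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        FieldSpec id = new FieldSpec(true, false);   // single-valued, has docValues
        FieldSpec tags = new FieldSpec(true, true);  // multiValued: ordering differs
        System.out.println(canFetchAllFromDocValues(List.of(id)));       // true
        System.out.println(canFetchAllFromDocValues(List.of(id, tags))); // false
    }
}
```

When the check fails, falling back to stored values for all fields (the pre-existing behavior) keeps the response order-correct.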
[jira] [Commented] (SOLR-10397) Port 'autoAddReplicas' feature to the autoscaling framework and make it work with non-shared filesystems
[ https://issues.apache.org/jira/browse/SOLR-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170992#comment-16170992 ] Cao Manh Dat commented on SOLR-10397: - +1 looks great. > Port 'autoAddReplicas' feature to the autoscaling framework and make it work > with non-shared filesystems > > > Key: SOLR-10397 > URL: https://issues.apache.org/jira/browse/SOLR-10397 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Shalin Shekhar Mangar >Assignee: Cao Manh Dat > Labels: autoscaling > Fix For: 7.0 > > Attachments: SOLR-10397.1.patch, SOLR-10397.2.patch, > SOLR-10397.2.patch, SOLR-10397.2.patch, SOLR-10397.patch, > SOLR-10397_remove_nocommit.patch > > > Currently 'autoAddReplicas=true' can be specified in the Collection Create > API to automatically add replicas when a replica becomes unavailable. I > propose to move this feature to the autoscaling cluster policy rules design. > This will include the following: > * Trigger support for ‘nodeLost’ event type > * Modification of existing implementation of ‘autoAddReplicas’ to > automatically create the appropriate ‘nodeLost’ trigger. > * Any such auto-created trigger must be marked internally such that setting > ‘autoAddReplicas=false’ via the Modify Collection API should delete or > disable corresponding trigger. > * Support for non-HDFS filesystems while retaining the optimization afforded > by HDFS i.e. the replaced replica can point to the existing data dir of the > old replica. > * Deprecate/remove the feature of enabling/disabling ‘autoAddReplicas’ across > the entire cluster using cluster properties in favor of using the > suspend-trigger/resume-trigger APIs. > This will retain backward compatibility for the most part and keep a common > use-case easy to enable as well as make it available to more people (i.e. > people who don't use HDFS). 
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 20496 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20496/ Java: 32bit/jdk1.8.0_144 -client -XX:+UseSerialGC 2 tests failed. FAILED: org.apache.solr.cloud.TestPullReplica.testKillLeader Error Message: Replica state not updated in cluster state null Live Nodes: [127.0.0.1:45975_solr, 127.0.0.1:36691_solr] Last available state: DocCollection(pull_replica_test_kill_leader//collections/pull_replica_test_kill_leader/state.json/5)={ "pullReplicas":"1", "replicationFactor":"1", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node3":{ "core":"pull_replica_test_kill_leader_shard1_replica_n1", "base_url":"https://127.0.0.1:36691/solr";, "node_name":"127.0.0.1:36691_solr", "state":"down", "type":"NRT", "leader":"true"}, "core_node4":{ "core":"pull_replica_test_kill_leader_shard1_replica_p2", "base_url":"https://127.0.0.1:45975/solr";, "node_name":"127.0.0.1:45975_solr", "state":"active", "type":"PULL", "router":{"name":"compositeId"}, "maxShardsPerNode":"100", "autoAddReplicas":"false", "nrtReplicas":"1", "tlogReplicas":"0"} Stack Trace: java.lang.AssertionError: Replica state not updated in cluster state null Live Nodes: [127.0.0.1:45975_solr, 127.0.0.1:36691_solr] Last available state: DocCollection(pull_replica_test_kill_leader//collections/pull_replica_test_kill_leader/state.json/5)={ "pullReplicas":"1", "replicationFactor":"1", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node3":{ "core":"pull_replica_test_kill_leader_shard1_replica_n1", "base_url":"https://127.0.0.1:36691/solr";, "node_name":"127.0.0.1:36691_solr", "state":"down", "type":"NRT", "leader":"true"}, "core_node4":{ "core":"pull_replica_test_kill_leader_shard1_replica_p2", "base_url":"https://127.0.0.1:45975/solr";, "node_name":"127.0.0.1:45975_solr", "state":"active", "type":"PULL", "router":{"name":"compositeId"}, "maxShardsPerNode":"100", "autoAddReplicas":"false", "nrtReplicas":"1", "tlogReplicas":"0"} at 
__randomizedtesting.SeedInfo.seed([99C033DEE3F7292C:D0D6C76A814CBD7A]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:269) at org.apache.solr.cloud.TestPullReplica.doTestNoLeader(TestPullReplica.java:401) at org.apache.solr.cloud.TestPullReplica.testKillLeader(TestPullReplica.java:290) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carr
[jira] [Updated] (SOLR-10397) Port 'autoAddReplicas' feature to the autoscaling framework and make it work with non-shared filesystems
[ https://issues.apache.org/jira/browse/SOLR-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar updated SOLR-10397: - Attachment: SOLR-10397_remove_nocommit.patch This patch removes the nocommit in Overseer. It moves the creation of .auto_add_replicas trigger to the OverseerTriggerThread where the trigger is directly added to the znode without going through the autoscaling handler API. This is ready. > Port 'autoAddReplicas' feature to the autoscaling framework and make it work > with non-shared filesystems > > > Key: SOLR-10397 > URL: https://issues.apache.org/jira/browse/SOLR-10397 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Shalin Shekhar Mangar >Assignee: Cao Manh Dat > Labels: autoscaling > Fix For: 7.0 > > Attachments: SOLR-10397.1.patch, SOLR-10397.2.patch, > SOLR-10397.2.patch, SOLR-10397.2.patch, SOLR-10397.patch, > SOLR-10397_remove_nocommit.patch > > > Currently 'autoAddReplicas=true' can be specified in the Collection Create > API to automatically add replicas when a replica becomes unavailable. I > propose to move this feature to the autoscaling cluster policy rules design. > This will include the following: > * Trigger support for ‘nodeLost’ event type > * Modification of existing implementation of ‘autoAddReplicas’ to > automatically create the appropriate ‘nodeLost’ trigger. > * Any such auto-created trigger must be marked internally such that setting > ‘autoAddReplicas=false’ via the Modify Collection API should delete or > disable corresponding trigger. > * Support for non-HDFS filesystems while retaining the optimization afforded > by HDFS i.e. the replaced replica can point to the existing data dir of the > old replica. > * Deprecate/remove the feature of enabling/disabling ‘autoAddReplicas’ across > the entire cluster using cluster properties in favor of using the > suspend-trigger/resume-trigger APIs. 
> This will retain backward compatibility for the most part and keep a common > use-case easy to enable as well as make it available to more people (i.e. > people who don't use HDFS).
[jira] [Commented] (SOLR-11363) TestCloudJSONFacetJoinDomain fails with Points enabled
[ https://issues.apache.org/jira/browse/SOLR-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170970#comment-16170970 ] Hoss Man commented on SOLR-11363: - linking to SOLR-10924 for posterity. > TestCloudJSONFacetJoinDomain fails with Points enabled > -- > > Key: SOLR-11363 > URL: https://issues.apache.org/jira/browse/SOLR-11363 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 7.0 >Reporter: Yonik Seeley > > As Hoss noted in SOLR-10939, this test still had points disabled, and > enabling them causes tests to fail.
[jira] [Comment Edited] (SOLR-11278) CdcrBootstrapTest failing intermittently
[ https://issues.apache.org/jira/browse/SOLR-11278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170958#comment-16170958 ] Amrit Sarkar edited comment on SOLR-11278 at 9/19/17 12:58 AM: --- I had an offline discussion with Shalin and Varun and we were able to figure out what's wrong in the Cdcr Bootstrap. * since issuing a bootstrap is an asynchronous call, there is a probable race condition: after issuing a bootstrap, the manager immediately checks the bootstrap status and, if none is found, issues another bootstrap. * this 2nd bootstrap fails to acquire the lock and issues a cancel bootstrap * since the bootstrap at the target is now "cancelled", the bootstrap status check in CdcrReplicatorManager goes into a tight infinite loop because the "cancelled" condition is not handled. The patch covers both the "*submitted*" and "*cancelled*" bootstrap status conditions and 'what to do next', which eliminates the repeated bootstrap calls and lets the bootstrap complete successfully. was (Author: sarkaramr...@gmail.com): I had an offline discussion with Shalin and Varun and we are able to figure out what's wrong in the Cdcr Bootstrap. * since issuing bootstrap is an asynchronous call, there is a probable race around condition where after issuing a bootstrap, it immediately checks for bootstrap status and if not found, another bootstrap gets issued. * this 2nd bootstrap fails to acquire lock issues cancel boostrap * since the bootstrap at target is now "cancelled", the bootstrap status in CdcrReplicatorManager goes into infinite loop rigorous as the condition "cancelled" is not handled. In the patch both "*submitted*" and "*cancelled*" bootstrap status conditions and 'what to do next' is covered, which will nullify the extensive bootstrap calling and even the bootstrap should complete successfully. 
> CdcrBootstrapTest failing intermittently > > > Key: SOLR-11278 > URL: https://issues.apache.org/jira/browse/SOLR-11278 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.0, 6.6.1 >Reporter: Amrit Sarkar >Assignee: Varun Thacker >Priority: Critical > Labels: test > Attachments: master-bs.patch, SOLR-11278-awaits-fix.patch, > SOLR-11278-cancel-bootstrap-on-stop.patch, SOLR-11278.patch, > SOLR-11278.patch, test_results > > > {{CdcrBootstrapTest}} is failing while running beasts for significant > iterations. > The bootstrapping is failing in the test, after the first batch is indexed > for each {{testmethod}}, which results in documents mismatch :: > {code} > [beaster] 2> 39167 ERROR > (updateExecutor-39-thread-1-processing-n:127.0.0.1:42155_solr > x:cdcr-target_shard1_replica_n1 s:shard1 c:cdcr-target r:core_node2) > [n:127.0.0.1:42155_solr c:cdcr-target s:shard1 r:core_node2 > x:cdcr-target_shard1_replica_n1] o.a.s.h.CdcrRequestHandler Bootstrap > operation failed > [beaster] 2> java.util.concurrent.ExecutionException: > java.lang.AssertionError > [beaster] 2> at > java.util.concurrent.FutureTask.report(FutureTask.java:122) > [beaster] 2> at > java.util.concurrent.FutureTask.get(FutureTask.java:192) > [beaster] 2> at > org.apache.solr.handler.CdcrRequestHandler.lambda$handleBootstrapAction$0(CdcrRequestHandler.java:654) > [beaster] 2> at > com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176) > [beaster] 2> at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [beaster] 2> at > java.util.concurrent.FutureTask.run(FutureTask.java:266) > [beaster] 2> at > org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188) > [beaster] 2> at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [beaster] 2> at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [beaster] 2> at java.lang.Thread.run(Thread.java:748) > [beaster] 2> Caused by: java.lang.AssertionError > [beaster] 2> at > org.apache.solr.handler.CdcrRequestHandler$BootstrapCallable.call(CdcrRequestHandler.java:813) > [beaster] 2> at > org.apache.solr.handler.CdcrRequestHandler$BootstrapCallable.call(CdcrRequestHandler.java:724) > [beaster] 2> at > com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197) > [beaster] 2> ... 5 more > {code} > {code} > [beaster] [01:37:16.282] FAILURE 153s | > CdcrBootstrapTest.testBootstrapWithSourceCluster <<< > [beaster]> Throwable #1: java.lang.
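The failure mode above suggests the shape of the fix: the status-polling loop on the source must handle every status the target can report, waiting on an in-flight ("submitted"/"running") bootstrap instead of re-issuing it, and deliberately re-issuing on "cancelled" or "failed" instead of spinning. The sketch below is illustrative only, not the actual SOLR-11278 patch; all class, enum, and method names are hypothetical, and the inter-poll sleep a real poller would use is omitted.

```java
import java.util.Iterator;
import java.util.List;

class BootstrapPoller {

    enum Status { SUBMITTED, RUNNING, COMPLETED, FAILED, CANCELLED }

    // Hypothetical abstraction of the CDCR target cluster's bootstrap API.
    interface Target {
        Status getBootstrapStatus();
        void requestBootstrap();
    }

    // Returns true once bootstrap completes; false when retries run out.
    // The original bug: "submitted" was treated like "no status" (triggering
    // a duplicate bootstrap that cancelled the first), and "cancelled" was
    // unhandled, so the checker looped forever.
    static boolean pollUntilDone(Target target, int maxRetries) {
        target.requestBootstrap();
        int retries = 0;
        while (true) {
            switch (target.getBootstrapStatus()) {
                case COMPLETED:
                    return true;
                case SUBMITTED: // accepted but not started: wait, do NOT re-issue
                case RUNNING:
                    break;      // (a real poller would sleep here) poll again
                case CANCELLED: // someone cancelled it: re-issue deliberately
                case FAILED:
                    if (++retries > maxRetries) return false;
                    target.requestBootstrap();
                    break;
            }
        }
    }

    // Minimal scripted target for demonstration.
    static class ScriptedTarget implements Target {
        private final Iterator<Status> script;
        int bootstrapRequests = 0;
        ScriptedTarget(List<Status> statuses) { this.script = statuses.iterator(); }
        public Status getBootstrapStatus() { return script.next(); }
        public void requestBootstrap() { bootstrapRequests++; }
    }

    public static void main(String[] args) {
        // Healthy run: submitted -> running -> completed, bootstrap issued once.
        ScriptedTarget ok = new ScriptedTarget(
            List.of(Status.SUBMITTED, Status.RUNNING, Status.COMPLETED));
        System.out.println(pollUntilDone(ok, 3) + " requests=" + ok.bootstrapRequests);

        // Cancelled once: the poller re-issues instead of looping forever.
        ScriptedTarget cancelled = new ScriptedTarget(
            List.of(Status.CANCELLED, Status.COMPLETED));
        System.out.println(pollUntilDone(cancelled, 3) + " requests=" + cancelled.bootstrapRequests);
    }
}
```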
[jira] [Comment Edited] (SOLR-11363) TestCloudJSONFacetJoinDomain fails with Points enabled
[ https://issues.apache.org/jira/browse/SOLR-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170955#comment-16170955 ] Yonik Seeley edited comment on SOLR-11363 at 9/19/17 12:57 AM: --- Yep, that's it: the old string-docvalues-based numerics deduped multiple values (1,2,2,3 became 1,2,3). SortedNumericDocValues (which we now use for points) does not dedup, and the JSON Facet API does not account for duplicate values. edit: and now I notice Hoss' response above saying exactly that ;-) was (Author: ysee...@gmail.com): Yep, that's it old string docvalue based numerics deduped multiple values 1,2,2,3 became 1,2,3 SortedNumericDocValues (which we now use for points) does not dedup, and the JSON Facet API does not account for duplicate values. > TestCloudJSONFacetJoinDomain fails with Points enabled > -- > > Key: SOLR-11363 > URL: https://issues.apache.org/jira/browse/SOLR-11363 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 7.0 >Reporter: Yonik Seeley > > As Hoss noted in SOLR-10939, this test still had points disabled, and > enabling them causes tests to fail.
[jira] [Updated] (SOLR-11275) Adding diagrams for AutoAddReplica into Solr Ref Guide
[ https://issues.apache.org/jira/browse/SOLR-11275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mano Kovacs updated SOLR-11275: --- Attachment: autoaddreplica.puml > Adding diagrams for AutoAddReplica into Solr Ref Guide > -- > > Key: SOLR-11275 > URL: https://issues.apache.org/jira/browse/SOLR-11275 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Mano Kovacs >Assignee: Cassandra Targett > Attachments: autoaddreplica.png, autoaddreplica.puml, > autoaddreplica.puml, plantuml-diagram-test.png > > > Pilot jira for adding PlantUML diagrams for documenting internals.
[jira] [Updated] (SOLR-11278) CdcrBootstrapTest failing intermittently
[ https://issues.apache.org/jira/browse/SOLR-11278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11278: Attachment: SOLR-11278.patch I had an offline discussion with Shalin and Varun and we were able to figure out what's wrong in the Cdcr Bootstrap. * since issuing a bootstrap is an asynchronous call, there is a probable race condition: after issuing a bootstrap, the manager immediately checks the bootstrap status and, if none is found, issues another bootstrap. * this 2nd bootstrap fails to acquire the lock and issues a cancel bootstrap * since the bootstrap at the target is now "cancelled", the bootstrap status check in CdcrReplicatorManager goes into a tight infinite loop because the "cancelled" condition is not handled. The patch covers both the "*submitted*" and "*cancelled*" bootstrap status conditions and 'what to do next', which eliminates the repeated bootstrap calls and lets the bootstrap complete successfully. > CdcrBootstrapTest failing intermittently > > > Key: SOLR-11278 > URL: https://issues.apache.org/jira/browse/SOLR-11278 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.0, 6.6.1 >Reporter: Amrit Sarkar >Assignee: Varun Thacker >Priority: Critical > Labels: test > Attachments: master-bs.patch, SOLR-11278-awaits-fix.patch, > SOLR-11278-cancel-bootstrap-on-stop.patch, SOLR-11278.patch, > SOLR-11278.patch, test_results > > > {{CdcrBootstrapTest}} is failing while running beasts for significant > iterations. 
> The bootstrapping is failing in the test, after the first batch is indexed > for each {{testmethod}}, which results in documents mismatch :: > {code} > [beaster] 2> 39167 ERROR > (updateExecutor-39-thread-1-processing-n:127.0.0.1:42155_solr > x:cdcr-target_shard1_replica_n1 s:shard1 c:cdcr-target r:core_node2) > [n:127.0.0.1:42155_solr c:cdcr-target s:shard1 r:core_node2 > x:cdcr-target_shard1_replica_n1] o.a.s.h.CdcrRequestHandler Bootstrap > operation failed > [beaster] 2> java.util.concurrent.ExecutionException: > java.lang.AssertionError > [beaster] 2> at > java.util.concurrent.FutureTask.report(FutureTask.java:122) > [beaster] 2> at > java.util.concurrent.FutureTask.get(FutureTask.java:192) > [beaster] 2> at > org.apache.solr.handler.CdcrRequestHandler.lambda$handleBootstrapAction$0(CdcrRequestHandler.java:654) > [beaster] 2> at > com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176) > [beaster] 2> at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [beaster] 2> at > java.util.concurrent.FutureTask.run(FutureTask.java:266) > [beaster] 2> at > org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188) > [beaster] 2> at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [beaster] 2> at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [beaster] 2> at java.lang.Thread.run(Thread.java:748) > [beaster] 2> Caused by: java.lang.AssertionError > [beaster] 2> at > org.apache.solr.handler.CdcrRequestHandler$BootstrapCallable.call(CdcrRequestHandler.java:813) > [beaster] 2> at > org.apache.solr.handler.CdcrRequestHandler$BootstrapCallable.call(CdcrRequestHandler.java:724) > [beaster] 2> at > com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197) > [beaster] 2> ... 
5 more > {code} > {code} > [beaster] [01:37:16.282] FAILURE 153s | > CdcrBootstrapTest.testBootstrapWithSourceCluster <<< > [beaster]> Throwable #1: java.lang.AssertionError: Document mismatch on > target after sync expected:<2000> but was:<1000> > {code}
[jira] [Commented] (SOLR-11363) TestCloudJSONFacetJoinDomain fails with Points enabled
[ https://issues.apache.org/jira/browse/SOLR-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170955#comment-16170955 ] Yonik Seeley commented on SOLR-11363: - Yep, that's it: the old string-docvalues-based numerics deduped multiple values (1,2,2,3 became 1,2,3). SortedNumericDocValues (which we now use for points) does not dedup, and the JSON Facet API does not account for duplicate values. > TestCloudJSONFacetJoinDomain fails with Points enabled > -- > > Key: SOLR-11363 > URL: https://issues.apache.org/jira/browse/SOLR-11363 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 7.0 >Reporter: Yonik Seeley > > As Hoss noted in SOLR-10939, this test still had points disabled, and > enabling them causes tests to fail.
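The difference can be demonstrated with a small standalone sketch (illustrative only, not Solr code): counting raw value occurrences, which is effectively what the new SortedNumericDocValues path does, versus counting documents with per-document dedup, which is what the old SORTED_SET path gave. Since docValues arrive sorted within each document, a single skip-equal-neighbors pass is enough to dedup.

```java
class FacetDedupSketch {

    // Naive counting: one increment per value occurrence. A document with
    // num_is = [0, 0] contributes 2 to the bucket for 0 -- the bug.
    static int countOccurrences(long[][] docs, long value) {
        int count = 0;
        for (long[] docValues : docs) {
            for (long v : docValues) {
                if (v == value) count++;
            }
        }
        return count;
    }

    // Correct facet semantics: one increment per document containing the
    // value. Values are assumed sorted per document (as SortedNumericDocValues
    // guarantees), so skipping equal neighbors dedups in one pass.
    static int countDocuments(long[][] docs, long value) {
        int count = 0;
        for (long[] docValues : docs) {
            long prev = Long.MIN_VALUE;
            boolean first = true;
            for (long v : docValues) {             // sorted within the doc
                if (!first && v == prev) continue; // skip duplicate value
                if (v == value) count++;
                prev = v;
                first = false;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // One document with num_is = [0, 0], as in the failing test case above.
        long[][] docs = { {0L, 0L} };
        System.out.println(countOccurrences(docs, 0L)); // 2 (the buggy count)
        System.out.println(countDocuments(docs, 0L));   // 1 (expected facet count)
    }
}
```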
[jira] [Commented] (SOLR-11363) TestCloudJSONFacetJoinDomain fails with Points enabled
[ https://issues.apache.org/jira/browse/SOLR-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170925#comment-16170925 ] Yonik Seeley commented on SOLR-11363: - OK, I boiled it down to a very simple case that fails (when added to TestJsonFacets): {code} @Test public void testRepeatedNumerics() throws Exception { Client client = Client.localClient(); client.add(sdoc("id", "1", "cat_s", "A", "where_s", "NY", "num_d", "4", "num_i", "2", "val_b", "true", "sparse_s", "one", "num_is","0", "num_is","0"), null); client.commit(); client.testJQ(params("q", "id:1" , "json.facet", "{f1:{terms:num_is}}" ) , "facets=={count:1, " + "f1:{buckets:[{val:0, count:1}]}}" ); } {code} The actual count being returned now is "2", which is incorrect... it should be the number of documents containing 0, which is 1. > TestCloudJSONFacetJoinDomain fails with Points enabled > -- > > Key: SOLR-11363 > URL: https://issues.apache.org/jira/browse/SOLR-11363 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 7.0 >Reporter: Yonik Seeley > > As Hoss noted in SOLR-10939, this test still had points disabled, and > enabling them causes tests to fail.
[jira] [Commented] (SOLR-11363) TestCloudJSONFacetJoinDomain fails with Points enabled
[ https://issues.apache.org/jira/browse/SOLR-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170902#comment-16170902 ] Hoss Man commented on SOLR-11363: - yonik: I assume you noticed the "count=122" even though "numFound=109" ? Is it possible the faceting code is double counting when a (single) doc contains the same value multiple times in a multi-valued points field? (Is there a non-randomized test covering this situation?) > TestCloudJSONFacetJoinDomain fails with Points enabled > -- > > Key: SOLR-11363 > URL: https://issues.apache.org/jira/browse/SOLR-11363 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 7.0 >Reporter: Yonik Seeley > > As Hoss noted in SOLR-10939, this test still had points disabled, and > enabling them causes tests to fail.
[jira] [Commented] (SOLR-11363) TestCloudJSONFacetJoinDomain fails with Points enabled
[ https://issues.apache.org/jira/browse/SOLR-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170894#comment-16170894 ] Yonik Seeley commented on SOLR-11363: - Hmmm, I disabled domain switches and set the number of fields and values to 1, and things still fail. This looks like something fundamentally broken with faceting on points. Example: {code} java.lang.AssertionError: {main(json.facet={facet_1+:+{+type:terms,+field:field_0_is,+limit:+2}+}),extra(q=(field_0_ss:0+OR+field_0_ss:0+OR+field_0_ss:1+OR+field_0_ss:0+OR+field_0_ss:1+OR+field_0_ss:1)&rows=0)} ===> {responseHeader={zkConnected=true,status=0,QTime=454},response={numFound=109,start=0,maxScore=2.0918934,docs=[]},facets={count=109,facet_1={buckets=[{val=0,count=122}, {val=1,count=102}]}}} --> facet_1: q=field_0_is:0+AND+(field_0_ss:0+OR+field_0_ss:0+OR+field_0_ss:1+OR+field_0_ss:0+OR+field_0_ss:1+OR+field_0_ss:1)&rows=0 Expected :122 Actual :82 {code} > TestCloudJSONFacetJoinDomain fails with Points enabled > -- > > Key: SOLR-11363 > URL: https://issues.apache.org/jira/browse/SOLR-11363 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 7.0 >Reporter: Yonik Seeley > > As Hoss noted in SOLR-10939, this test still had points disabled, and > enabling them causes tests to fail.
[jira] [Created] (SOLR-11364) Fields with useDocValuesAsStored=false are never returned in case of pattern matching
Cao Manh Dat created SOLR-11364: --- Summary: Fields with useDocValuesAsStored=false are never returned in case of pattern matching Key: SOLR-11364 URL: https://issues.apache.org/jira/browse/SOLR-11364 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Reporter: Cao Manh Dat As [~dsmiley] pointed out in SOLR-8344 bq. in calcDocValueFieldsForReturn. If fl=foo*,dvField and dvField has useDocValuesAsStored=false then the code won't return dvField even though it's been explicitly mentioned. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8344) Decide default when requested fields are both column and row stored.
[ https://issues.apache.org/jira/browse/SOLR-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170884#comment-16170884 ] Cao Manh Dat commented on SOLR-8344: bq. I think I see an existing bug (not introduced here) in the logic you moved – calcDocValueFieldsForReturn. If fl=foo*,dvField and dvField has useDocValuesAsStored=false then the code won't return dvField even though it's been explicitly mentioned. I haven't tried this; I'm just reading the code carefully. [~dsmiley] This seems like a bug to me, too. I will spin off this issue into another ticket. > Decide default when requested fields are both column and row stored. > > > Key: SOLR-8344 > URL: https://issues.apache.org/jira/browse/SOLR-8344 > Project: Solr > Issue Type: New Feature >Reporter: Ishan Chattopadhyaya >Assignee: Cao Manh Dat > Fix For: master (8.0), 7.1 > > Attachments: SOLR-8344.patch, SOLR-8344.patch, SOLR-8344.patch > > > This issue was discussed in the comments at SOLR-8220. Splitting it out to a > separate issue so that we can have a focused discussion on whether/how to do > this. > If a given set of requested fields are all stored and have docValues (column > stored), we can retrieve the values from either place. What should the > default be? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
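The reported behavior can be illustrated with a toy sketch of fl-matching logic. Everything here — `buggy`, `fixed`, the helper names, and the simplified wildcard rules — is a hypothetical reconstruction for illustration, not the actual calcDocValueFieldsForReturn code:

```java
import java.util.*;

public class DvReturnSketch {
    // dvFields maps field name -> its useDocValuesAsStored flag.
    // Buggy logic (as described in the comment): the flag gates every field,
    // even one the caller named explicitly in fl.
    static Set<String> buggy(List<String> fl, Map<String, Boolean> dvFields) {
        Set<String> out = new TreeSet<>();
        for (var e : dvFields.entrySet())
            if (e.getValue() && matchesAny(fl, e.getKey())) out.add(e.getKey());
        return out;
    }

    // Fixed logic: an exact fl entry overrides useDocValuesAsStored=false;
    // the flag only controls inclusion via pattern/glob matches.
    static Set<String> fixed(List<String> fl, Map<String, Boolean> dvFields) {
        Set<String> out = new TreeSet<>();
        for (var e : dvFields.entrySet()) {
            boolean explicit = fl.contains(e.getKey());
            if (explicit || (e.getValue() && matchesAny(fl, e.getKey()))) out.add(e.getKey());
        }
        return out;
    }

    // Simplified matcher: trailing '*' is a prefix glob, otherwise exact match.
    static boolean matchesAny(List<String> fl, String field) {
        for (String pat : fl) {
            boolean match = pat.endsWith("*")
                ? field.startsWith(pat.substring(0, pat.length() - 1))
                : pat.equals(field);
            if (match) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> fl = List.of("foo*", "dvField");
        Map<String, Boolean> dv = Map.of("fooA", true, "dvField", false);
        System.out.println(buggy(fl, dv)); // [fooA] -- dvField wrongly dropped
        System.out.println(fixed(fl, dv)); // [dvField, fooA]
    }
}
```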
[jira] [Resolved] (SOLR-8344) Decide default when requested fields are both column and row stored.
[ https://issues.apache.org/jira/browse/SOLR-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cao Manh Dat resolved SOLR-8344. Resolution: Fixed Fix Version/s: 7.1 master (8.0) > Decide default when requested fields are both column and row stored. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-11011) Assign.buildCoreName can lead to error in creating a new core when legacyCloud=false
[ https://issues.apache.org/jira/browse/SOLR-11011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cao Manh Dat resolved SOLR-11011. - Resolution: Fixed > Assign.buildCoreName can lead to error in creating a new core when > legacyCloud=false > > > Key: SOLR-11011 > URL: https://issues.apache.org/jira/browse/SOLR-11011 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Assignee: Cao Manh Dat > Fix For: master (8.0), 7.1 > > Attachments: SOLR-11011.2.patch, SOLR-11011.2.patch, > SOLR-11011.3.patch, SOLR-11011.patch, SOLR-11011.patch, SOLR-11011.patch, > SOLR-11011.patch > > > Here is the case: > {code} > shard1 : { > node1 : shard1_replica1, > node2 : shard1_replica2 > } > {code} > node2 goes down, autoAddReplicasPlanAction is executed > {code} > shard1 : { > node1 : shard1_replica1, > node3 : shard1_replica3 > } > {code} > node2 comes back alive; because shard1_replica2 is removed from {{states.json}}, > that core won't be loaded (but it won't be removed either). Then node1 goes > down, and Assign.buildCoreName will create a core with name=shard1_replica2, which > leads to a failure. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
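One way to avoid the collision described above is to never hand out a candidate core name that any known core already uses — including leftover on-disk cores that are no longer in states.json. A hedged sketch of that idea, with illustrative names rather than the actual Assign.buildCoreName signature:

```java
import java.util.*;

public class CoreNameSketch {
    // Pick the first replica name not already taken. "taken" should include
    // both names registered in cluster state and any leftover on-disk cores,
    // so a previously unloaded shard1_replica2 is never reused.
    static String buildCoreName(String shard, Set<String> taken) {
        for (int i = 1; ; i++) {
            String candidate = shard + "_replica" + i;
            if (!taken.contains(candidate)) return candidate;
        }
    }

    public static void main(String[] args) {
        // shard1_replica2 lingers on disk even though it left states.json.
        Set<String> taken = Set.of("shard1_replica1", "shard1_replica2");
        System.out.println(buildCoreName("shard1", taken)); // shard1_replica3
    }
}
```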
[jira] [Commented] (SOLR-8344) Decide default when requested fields are both column and row stored.
[ https://issues.apache.org/jira/browse/SOLR-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170876#comment-16170876 ] ASF subversion and git services commented on SOLR-8344: --- Commit d877ade945c5b143dcd063e2865ab65368710b92 in lucene-solr's branch refs/heads/branch_7x from [~caomanhdat] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d877ade ] SOLR-8344: Decide default when requested fields are both column and row stored. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8344) Decide default when requested fields are both column and row stored.
[ https://issues.apache.org/jira/browse/SOLR-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170873#comment-16170873 ] ASF subversion and git services commented on SOLR-8344: --- Commit 40f78dd2740122e5fa18f2515effc9272fd2d902 in lucene-solr's branch refs/heads/master from [~caomanhdat] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=40f78dd ] SOLR-8344: Decide default when requested fields are both column and row stored. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10181) CREATEALIAS and DELETEALIAS commands consistency problems under concurrency
[ https://issues.apache.org/jira/browse/SOLR-10181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170811#comment-16170811 ] Erick Erickson commented on SOLR-10181: --- Assigning to myself to not lose track of it, feel free to take it if you've a special interest. > CREATEALIAS and DELETEALIAS commands consistency problems under concurrency > --- > > Key: SOLR-10181 > URL: https://issues.apache.org/jira/browse/SOLR-10181 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 5.3, 5.4, 5.5, 6.4.1 >Reporter: Samuel García Martínez >Assignee: Erick Erickson > Attachments: SOLR-10181.patch > > > When several CREATEALIAS are run at the same time by the OCP it could happen > that, even though the API response is OK, some of those CREATEALIAS request > changes are lost. > h3. The problem > The problem happens because the CREATEALIAS cmd implementation relies on > _zkStateReader.getAliases()_ to create the map that will be stored in ZK. If > several threads reach that line at the same time, only one update > will be stored correctly and the others will be overridden. > The code I'm referencing is [this > piece|https://github.com/apache/lucene-solr/blob/8c1e67e30e071ceed636083532d4598bf6a8791f/solr/core/src/java/org/apache/solr/cloud/CreateAliasCmd.java#L65]. > As an example, let's say that the current aliases map has {a:colA, b:colB}. > If two CREATEALIAS (one adding c:colC and the other creating d:colD) are > submitted to the _tpe_ and reach that line at the same time, the resulting > maps will look like {a:colA, b:colB, c:colC} and {a:colA, b:colB, d:colD} and > only one of them will be stored correctly in ZK, resulting in "data loss", > meaning that the API returns OK even though it didn't work as expected. 
> On top of this, another concurrency problem could happen when the command > checks if the alias has been set using the _checkForAlias_ method. If these two > CREATEALIAS zk writes ran at the same time, the alias check for one of > the threads can time out since only one of the writes has "survived" and has > been "committed" to the _zkStateReader.getAliases()_ map. > h3. How to fix it > I can post a patch for this if someone gives me directions on how it should be > fixed. As I see it, there are two places where the issue can be fixed: in > the processor (OverseerCollectionMessageHandler) in a generic way, or inside > the command itself. > h5. The processor fix > The locking mechanism (_OverseerCollectionMessageHandler#lockTask_) should be > the place to fix this inside the processor. I thought that adding the > operation name instead of only "collection" or "name" to the locking key > would fix the issue, but I realized that the problem will happen anyway if > the concurrency happens between different operations modifying the same > resource (like CREATEALIAS and DELETEALIAS do). So, if this is the > path to follow, I don't know what should be used as a locking key. > h5. The command fix > Fixing it at the command level (_CreateAliasCmd_ and _DeleteAliasCmd_) would > be relatively easy: use optimistic locking, i.e., the aliases.json zk > version in the keeper.setData. To do that, the Aliases class should offer the > aliases version so the commands can forward that version with the update and > retry when it fails. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-10181) CREATEALIAS and DELETEALIAS commands consistency problems under concurrency
[ https://issues.apache.org/jira/browse/SOLR-10181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson reassigned SOLR-10181: - Assignee: Erick Erickson > CREATEALIAS and DELETEALIAS commands consistency problems under concurrency -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10181) CREATEALIAS and DELETEALIAS commands consistency problems under concurrency
[ https://issues.apache.org/jira/browse/SOLR-10181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170798#comment-16170798 ] Samuel García Martínez commented on SOLR-10181: --- Code like this would fix the issue (but it is really bad code and difficult to maintain):
{code:java}
public String getTaskKey(ZkNodeProps message) {
  CollectionAction action = getCollectionAction(message.getStr(Overseer.QUEUE_OPERATION));
  if (action == CREATEALIAS || action == DELETEALIAS) {
    return "/aliases.json";
  }
  return message.containsKey(COLLECTION_PROP) ? message.getStr(COLLECTION_PROP) : message.getStr(NAME);
}
{code}
Another solution would be to let the commands return their own lock key. The only thing I don't like about this approach is that it requires looking into the command map twice with the current code. If I get some feedback on the approach I can provide a patch. > CREATEALIAS and DELETEALIAS commands consistency problems under concurrency -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
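The "command fix" proposed in this issue — optimistic locking on the aliases.json zk version — can be illustrated in plain Java. An AtomicReference over a versioned snapshot stands in for ZooKeeper's setData(path, data, expectedVersion); all class and method names are illustrative, not actual Solr code:

```java
import java.util.*;
import java.util.concurrent.atomic.AtomicReference;

public class AliasCasSketch {
    // Versioned snapshot standing in for the aliases.json znode.
    record Versioned(Map<String, String> aliases, int version) {}

    static final AtomicReference<Versioned> NODE =
        new AtomicReference<>(new Versioned(Map.of(), 0));

    // Read-modify-write with a version check, retrying on a concurrent
    // update -- the compareAndSet failure plays the role of ZooKeeper's
    // BadVersionException when the expected version no longer matches.
    static void createAlias(String alias, String collection) {
        while (true) {
            Versioned current = NODE.get();
            Map<String, String> next = new HashMap<>(current.aliases());
            next.put(alias, collection);
            Versioned proposed = new Versioned(Map.copyOf(next), current.version() + 1);
            if (NODE.compareAndSet(current, proposed)) return; // else: stale read, retry
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Two concurrent CREATEALIAS-style updates; neither is lost.
        Thread t1 = new Thread(() -> createAlias("c", "colC"));
        Thread t2 = new Thread(() -> createAlias("d", "colD"));
        t1.start(); t2.start(); t1.join(); t2.join();
        System.out.println(NODE.get().aliases()); // both aliases survive
    }
}
```

The retry loop is what distinguishes this from the current read-then-write path: a thread that loses the race re-reads the latest map and reapplies only its own change, so concurrent updates compose instead of overwriting each other.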
[jira] [Updated] (SOLR-10181) CREATEALIAS and DELETEALIAS commands consistency problems under concurrency
[ https://issues.apache.org/jira/browse/SOLR-10181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Samuel García Martínez updated SOLR-10181: -- Attachment: SOLR-10181.patch patch with the test case (and some test improvements) for this issue > CREATEALIAS and DELETEALIAS commands consistency problems under concurrency > --- > > Key: SOLR-10181 > URL: https://issues.apache.org/jira/browse/SOLR-10181 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 5.3, 5.4, 5.5, 6.4.1 >Reporter: Samuel García Martínez > Attachments: SOLR-10181.patch > > > When several CREATEALIAS are run at the same time by the OCP it could happen > that, even tho the API response is OK, some of those CREATEALIAS request > changes are lost. > h3. The problem > The problem happens because the CREATEALIAS cmd implementation relies on > _zkStateReader.getAliases()_ to create the map that will be stored in ZK. If > several threads reach that line at the same time it will happen that only one > will be stored correctly and the others will be overridden. > The code I'm referencing is [this > piece|https://github.com/apache/lucene-solr/blob/8c1e67e30e071ceed636083532d4598bf6a8791f/solr/core/src/java/org/apache/solr/cloud/CreateAliasCmd.java#L65]. > As an example, let's say that the current aliases map has {a:colA, b:colB}. > If two CREATEALIAS (one adding c:colC and other creating d:colD) are > submitted to the _tpe_ and reach that line at the same time, the resulting > maps will look like {a:colA, b:colB, c:colC} and {a:colA, b:colB, d:colD} and > only one of them will be stored correctly in ZK, resulting in "data loss", > meaning that API is returning OK despite that it didn't work as expected. > On top of this, another concurrency problem could happen when the command > checks if the alias has been set using _checkForAlias_ method. 
if these two > CREATEALIAS zk writes had ran at the same time, the alias check fir one of > the threads can timeout since only one of the writes has "survived" and has > been "committed" to the _zkStateReader.getAliases()_ map. > h3. How to fix it > I can post a patch to this if someone gives me directions on how it should be > fixed. As I see this, there are two places where the issue can be fixed: in > the processor (OverseerCollectionMessageHandler) in a generic way or inside > the command itself. > h5. The processor fix > The locking mechanism (_OverseerCollectionMessageHandler#lockTask_) should be > the place to fix this inside the processor. I thought that adding the > operation name instead of only "collection" or "name" to the locking key > would fix the issue, but I realized that the problem will happen anyway if > the concurrency happens between different operations modifying the same > resource (like CREATEALIAS and DELETEALIAS do). So, if this should be the > path to follow I don't know what should be used as a locking key. > h5. The command fix > Fixing it at the command level (_CreateAliasCmd_ and _DeleteAliasCmd_) would > be relatively easy. Using optimistic locking, i.e, using the aliases.json zk > version in the keeper.setData. To do that, Aliases class should offer the > aliases version so the commands can forward that version with the update and > retry when it fails. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11275) Adding diagrams for AutoAddReplica into Solr Ref Guide
[ https://issues.apache.org/jira/browse/SOLR-11275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170781#comment-16170781 ] Mano Kovacs commented on SOLR-11275: [~ctargett], that is very good news! Looks great. [~varunthacker], sure, I'll upload a new puml. > Adding diagrams for AutoAddReplica into Solr Ref Guide > -- > > Key: SOLR-11275 > URL: https://issues.apache.org/jira/browse/SOLR-11275 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Mano Kovacs >Assignee: Cassandra Targett > Attachments: autoaddreplica.png, autoaddreplica.puml, > plantuml-diagram-test.png > > > Pilot jira for adding PlantUML diagrams for documenting internals. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 192 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/192/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.update.HardAutoCommitTest.testCommitWithin Error Message: Exception during query Stack Trace: java.lang.RuntimeException: Exception during query at __randomizedtesting.SeedInfo.seed([9AA6575CE60D72C0:2074382465239CD5]:0) at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:884) at org.apache.solr.update.HardAutoCommitTest.testCommitWithin(HardAutoCommitTest.java:100) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=//result[@numFound=1] xml response was: 00 request was:q=id:529&qt=&start=0&rows=20&version=2.2 at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:877) ... 40 more Build Log: [...truncated 12459 lines...] [junit4] Suite: org.apache.so
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 20495 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20495/ Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseSerialGC 2 tests failed. FAILED: org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testExecutorStream Error Message: Error from server at https://127.0.0.1:42939/solr/mainCorpus_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404HTTP ERROR: 404 Problem accessing /solr/mainCorpus_shard2_replica_n3/update. Reason: Can not find: /solr/mainCorpus_shard2_replica_n3/update http://eclipse.org/jetty";>Powered by Jetty:// 9.3.20.v20170531 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:42939/solr/mainCorpus_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 HTTP ERROR: 404 Problem accessing /solr/mainCorpus_shard2_replica_n3/update. Reason: Can not find: /solr/mainCorpus_shard2_replica_n3/update http://eclipse.org/jetty";>Powered by Jetty:// 9.3.20.v20170531 at __randomizedtesting.SeedInfo.seed([305317C6FBE9391C:1293963DD883130C]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:539) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:993) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233) at org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testExecutorStream(StreamExpressionTest.java:7172) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesti
[jira] [Commented] (SOLR-11297) Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice
[ https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170618#comment-16170618 ] Erick Erickson commented on SOLR-11297:

Possibly related; we need to check when we address this.

> Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice
>
> Key: SOLR-11297
> URL: https://issues.apache.org/jira/browse/SOLR-11297
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Affects Versions: 6.6
> Reporter: Shawn Heisey
> Assignee: Erick Erickson
> Attachments: solr6_6-startup.log
>
> Sometimes when Solr is restarted, I get some "lock held by this virtual machine" messages in the log, and the admin UI has messages about a failure to open a new searcher. It doesn't happen on all cores, and the list of cores that have the problem changes on subsequent restarts. The cores that exhibit the problems are working just fine -- the first core load is successful; the failure to open a new searcher is on a second core load attempt, which fails.
> None of the cores in the system are sharing an instanceDir or dataDir. This has been verified several times.
> The index is sharded manually, and the servers are not running in cloud mode.

--
This message was sent by Atlassian JIRA (v6.4.14#64029)

To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11011) Assign.buildCoreName can lead to error in creating a new core when legacyCloud=false
[ https://issues.apache.org/jira/browse/SOLR-11011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170595#comment-16170595 ] Shalin Shekhar Mangar commented on SOLR-11011:

Can this be closed now?

> Assign.buildCoreName can lead to error in creating a new core when legacyCloud=false
>
> Key: SOLR-11011
> URL: https://issues.apache.org/jira/browse/SOLR-11011
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Cao Manh Dat
> Assignee: Cao Manh Dat
> Fix For: master (8.0), 7.1
> Attachments: SOLR-11011.2.patch, SOLR-11011.2.patch, SOLR-11011.3.patch, SOLR-11011.patch, SOLR-11011.patch, SOLR-11011.patch, SOLR-11011.patch
>
> Here is the case:
> {code}
> shard1 : {
>   node1 : shard1_replica1,
>   node2 : shard1_replica2
> }
> {code}
> node2 goes down and autoAddReplicasPlanAction is executed:
> {code}
> shard1 : {
>   node1 : shard1_replica1,
>   node3 : shard1_replica3
> }
> {code}
> node2 comes back alive. Because shard1_replica2 was removed from {{states.json}}, that core won't be loaded (but it won't be removed either). Then node1 goes down, and Assign.buildCoreName will create a core with name=shard1_replica2, which leads to a failure.
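The collision described above happens when a name counter hands out a core name that still exists on a node outside {{states.json}}. A minimal sketch of collision-free naming — hypothetical code, not the actual Assign.buildCoreName — that probes the set of all known core names for the lowest free suffix:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch, not the real Assign.buildCoreName: pick the lowest
// replica suffix not already taken by any known core, including cores that
// were dropped from states.json but may still exist on a down node.
public class CoreNamer {
    static String buildCoreName(String shard, Set<String> existingCores) {
        for (int i = 1; ; i++) {
            String candidate = shard + "_replica" + i;
            if (!existingCores.contains(candidate)) {
                return candidate;
            }
        }
    }

    public static void main(String[] args) {
        Set<String> cores = new HashSet<>();
        cores.add("shard1_replica1");   // on node1
        cores.add("shard1_replica3");   // added by autoAddReplicasPlanAction on node3
        cores.add("shard1_replica2");   // orphaned on node2, absent from states.json
        // Because replica2 is still counted as taken, the next name cannot collide.
        System.out.println(buildCoreName("shard1", cores)); // prints shard1_replica4
    }
}
```

The point of the sketch is the set membership check: any fix along these lines needs the name pool to include cores that exist on disk but are no longer in the cluster state.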
[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 193 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/193/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC 122 tests failed. FAILED: org.apache.solr.cloud.BasicDistributedZkTest.test Error Message: IOException occured when talking to server at: http://127.0.0.1:64806/dv_qdx/h/collection2 Stack Trace: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://127.0.0.1:64806/dv_qdx/h/collection2 at __randomizedtesting.SeedInfo.seed([22C3862B1400E067:AA97B9F1BAFC8D9F]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:641) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173) at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138) at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:152) at org.apache.solr.cloud.BasicDistributedZkTest.indexDoc(BasicDistributedZkTest.java:1076) at org.apache.solr.cloud.BasicDistributedZkTest.testMultipleCollections(BasicDistributedZkTest.java:1016) at org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:370) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.e
[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 437 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/437/ Java: 32bit/jdk1.8.0_144 -client -XX:+UseParallelGC 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation Error Message: 2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) Thread[id=20562, name=jetty-launcher-4344-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505) 2) Thread[id=20566, name=jetty-launcher-4344-thread-1-EventThread, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) Thread[id=20562, name=jetty-launcher-4344-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilder
[JENKINS] Lucene-Solr-Tests-7.0 - Build # 138 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.0/138/ 6 tests failed. FAILED: org.apache.lucene.classification.BooleanPerceptronClassifierTest.testPerformance Error Message: training took more than 10s: 13s Stack Trace: java.lang.AssertionError: training took more than 10s: 13s at __randomizedtesting.SeedInfo.seed([B1799241E15C8C93:769860638AE8B43C]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.lucene.classification.BooleanPerceptronClassifierTest.testPerformance(BooleanPerceptronClassifierTest.java:93) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.cloud.AliasIntegrationTest.test Error Message: Collection not found: testalias Stack Trace: org.apache.solr.common.SolrException: Collection not found: testalias at __randomizedtesting.SeedInfo.seed([703428D68449EEB2:F860170C2AB5834A]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.getCollectionNames(CloudSolrClient.java:1139) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:822) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.
[jira] [Commented] (SOLR-9735) Umbrella JIRA for Auto Scaling and Cluster Management in SolrCloud
[ https://issues.apache.org/jira/browse/SOLR-9735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170464#comment-16170464 ] Shalin Shekhar Mangar commented on SOLR-9735:

A good chunk of features has accumulated in the feature/autoscaling branch. I am going to review the remaining nocommits and tests so that we can merge this branch to master. I'll hold off merging to branch_7x until we can finish SOLR-11085. We have another branch named feature/autoscaling_72 which contains features that we plan to release in 7.2. Once the autoscaling branch is merged to master, we can get rid of autoscaling_72 and continue further development on the autoscaling branch itself.

> Umbrella JIRA for Auto Scaling and Cluster Management in SolrCloud
>
> Key: SOLR-9735
> URL: https://issues.apache.org/jira/browse/SOLR-9735
> Project: Solr
> Issue Type: New Feature
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Anshum Gupta
> Assignee: Shalin Shekhar Mangar
> Original Estimate: 1,344h
> Remaining Estimate: 1,344h
>
> As SolrCloud is now used at fairly large scale, most users end up writing their own cluster management tools. We should have a framework for cluster management in Solr.
> In a discussion with [~noble.paul], we outlined the following steps w.r.t. the approach to having this implemented:
> * *Basic API* calls for cluster management, e.g. utilize added nodes, remove a node, etc. These calls would need explicit invocation by the users to begin with. They would also specify the {{strategy}} to use. For instance, I could have a strategy called {{optimizeCoreCount}} which would target an even number of cores on each node. The strategy could optionally take parameters as well.
> * *Metrics* and stats tracking, e.g. qps, etc. These would be required for any advanced cluster management tasks, e.g. *maintain a qps of 'x'* by *auto-adding a replica* (using a recipe), etc. We would need collection/shard/node level views of metrics for this.
> * *Recipes*: combinations of multiple sequential/parallel API calls based on rules. This would be complicated, especially as most of these would be long-running series of tasks which would either have to be rolled back or resumed in case of a failure.
> * *Event based triggers* that would not require explicit cluster management calls from end users.
[JENKINS-EA] Lucene-Solr-7.x-Windows (64bit/jdk-9-ea+181) - Build # 196 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/196/ Java: 64bit/jdk-9-ea+181 -XX:-UseCompressedOops -XX:+UseParallelGC --illegal-access=deny 1 tests failed. FAILED: org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery Error Message: Expected a collection with one shard and two replicas null Live Nodes: [127.0.0.1:53792_solr, 127.0.0.1:53797_solr] Last available state: DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node3":{ "core":"MissingSegmentRecoveryTest_shard1_replica_n1", "base_url":"http://127.0.0.1:53792/solr";, "node_name":"127.0.0.1:53792_solr", "state":"active", "type":"NRT", "leader":"true"}, "core_node4":{ "core":"MissingSegmentRecoveryTest_shard1_replica_n2", "base_url":"http://127.0.0.1:53797/solr";, "node_name":"127.0.0.1:53797_solr", "state":"down", "type":"NRT", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} Stack Trace: java.lang.AssertionError: Expected a collection with one shard and two replicas null Live Nodes: [127.0.0.1:53792_solr, 127.0.0.1:53797_solr] Last available state: DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node3":{ "core":"MissingSegmentRecoveryTest_shard1_replica_n1", "base_url":"http://127.0.0.1:53792/solr";, "node_name":"127.0.0.1:53792_solr", "state":"active", "type":"NRT", "leader":"true"}, "core_node4":{ "core":"MissingSegmentRecoveryTest_shard1_replica_n2", "base_url":"http://127.0.0.1:53797/solr";, "node_name":"127.0.0.1:53797_solr", "state":"down", "type":"NRT", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", 
"tlogReplicas":"0"} at __randomizedtesting.SeedInfo.seed([EA7DA528F4EA0795:BA283D2BADCBB188]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:269) at org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery(MissingSegmentRecoveryTest.java:105) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(Random
[jira] [Created] (SOLR-11363) TestCloudJSONFacetJoinDomain fails with Points enabled
Yonik Seeley created SOLR-11363:

Summary: TestCloudJSONFacetJoinDomain fails with Points enabled
Key: SOLR-11363
URL: https://issues.apache.org/jira/browse/SOLR-11363
Project: Solr
Issue Type: Bug
Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.0
Reporter: Yonik Seeley

As Hoss noted in SOLR-10939, this test still had points disabled, and enabling them causes tests to fail.
[jira] [Commented] (SOLR-11297) Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice
[ https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170429#comment-16170429 ] Richard Rominger commented on SOLR-11297:

Agreed. 6.6.0 seems to load my core okay, even though the web GUI shows red fault errors for me, which is enough for me not to use 6.6.0 as a production candidate.
[jira] [Commented] (SOLR-11297) Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice
[ https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170411#comment-16170411 ] Shawn Heisey commented on SOLR-11297:

Minor correction to that last statement: 6.6.1 seems to have a "real" problem, which may or may not be related to this problem.
[jira] [Commented] (LUCENE-7966) build mr-jar and use some java 9 methods if available
[ https://issues.apache.org/jira/browse/LUCENE-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170407#comment-16170407 ] Robert Muir commented on LUCENE-7966:

Thanks for the clarification. Yes, that's no change from my patch.

> build mr-jar and use some java 9 methods if available
>
> Key: LUCENE-7966
> URL: https://issues.apache.org/jira/browse/LUCENE-7966
> Project: Lucene - Core
> Issue Type: Improvement
> Components: general/build
> Reporter: Robert Muir
> Attachments: LUCENE-7966.patch, LUCENE-7966.patch, LUCENE-7966.patch, LUCENE-7966.patch, LUCENE-7966.patch
>
> See background: http://openjdk.java.net/jeps/238
> It would be nice to use some of the newer array methods and range-checking methods in Java 9, for example, without waiting for Lucene 10 or something. If we build an MR-JAR, we can start migrating our code to use Java 9 methods right now; it will use optimized methods from Java 9 when that's available, otherwise fall back to Java 8 code.
> This patch adds:
> {code}
> Objects.checkIndex(int,int)
> Objects.checkFromToIndex(int,int,int)
> Objects.checkFromIndexSize(int,int,int)
> Arrays.mismatch(byte[],int,int,byte[],int,int)
> Arrays.compareUnsigned(byte[],int,int,byte[],int,int)
> Arrays.equal(byte[],int,int,byte[],int,int)
> // did not add char/int/long/short/etc but of course its possible if needed
> {code}
> It sets these up in {{org.apache.lucene.future}} as 1-1 mappings to java methods. This way, we can simply directly replace call sites with Java 9 methods when Java 9 is a minimum. Simple 1-1 mappings also mean that we only have to worry about testing that our Java 8 fallback methods work.
> I found that many of the current byte array methods today are willy-nilly and very lenient, for example passing invalid offsets at times and relying on compare methods not throwing exceptions, etc. I fixed all the instances in core/codecs but have not looked at the problems with AnalyzingSuggester. Also SimpleText still uses a silly method in ArrayUtil in a similarly crazy way; have not removed that one yet.
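One of the listed methods is enough to illustrate the fallback pattern. Below is a hedged sketch (not the patch attached to this issue) of a Java 8 implementation of Arrays.mismatch over byte ranges; in an MR-JAR, the copy under META-INF/versions/9 would simply delegate to java.util.Arrays:

```java
// Sketch of a Java 8 fallback for the Java 9 method
// Arrays.mismatch(byte[],int,int,byte[],int,int). Hypothetical code, not
// the attached LUCENE-7966 patch. Returns the index (relative to the range
// start) of the first differing byte, or -1 if the ranges are equal; if one
// range is a proper prefix of the other, returns the shorter length.
public final class FutureArrays {
    public static int mismatch(byte[] a, int aFrom, int aTo,
                               byte[] b, int bFrom, int bTo) {
        int aLen = aTo - aFrom;
        int bLen = bTo - bFrom;
        int len = Math.min(aLen, bLen);
        for (int i = 0; i < len; i++) {
            if (a[aFrom + i] != b[bFrom + i]) {
                return i;
            }
        }
        // Common prefix matches; the ranges differ only if their lengths do.
        return aLen == bLen ? -1 : len;
    }

    public static void main(String[] args) {
        byte[] x = {1, 2, 3, 4};
        byte[] y = {1, 2, 9, 4};
        System.out.println(mismatch(x, 0, 4, y, 0, 4)); // prints 2
        System.out.println(mismatch(x, 0, 2, y, 0, 2)); // prints -1
    }
}
```

Because the mapping is 1-1, only this fallback needs dedicated tests; the Java 9 branch is the JDK's own code.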
[jira] [Commented] (SOLR-11297) Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice
[ https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170402#comment-16170402 ] Shawn Heisey commented on SOLR-11297:

[~erickerickson], the release notes for 6.6.1 indicate a bunch of changes in that version on the creation, loading, and reloading of cores. Those changes look like yours. With all the current and past work you've done on core loading, I think you're in the best position to understand the code. The problem I have reported on this issue isn't a "real" problem, because it doesn't affect usability at all, just logging ... but 6.6.1 seems to have a separate (and very real) problem.
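The message itself hints at the mechanism: java.nio file locks are tracked per JVM, so a second attempt to lock the same index lock file from the same process fails immediately — consistent with one core being loaded twice. A stdlib-only sketch of that behavior (not Solr or Lucene code):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Demonstrates why loading the same core twice in one JVM trips the lock:
// a second tryLock on the same file, from the same JVM, throws
// OverlappingFileLockException -- the situation Lucene reports as
// "Lock held by this virtual machine".
public class DoubleLockDemo {
    static boolean secondLockConflicts(Path lockFile) throws IOException {
        try (FileChannel ch1 = FileChannel.open(lockFile, StandardOpenOption.WRITE);
             FileChannel ch2 = FileChannel.open(lockFile, StandardOpenOption.WRITE);
             FileLock first = ch1.tryLock()) {   // first "core load" takes the lock
            try {
                FileLock second = ch2.tryLock(); // second load attempt, same JVM
                if (second != null) second.release();
                return false;
            } catch (OverlappingFileLockException e) {
                return true;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path lockFile = Files.createTempFile("write", ".lock");
        try {
            System.out.println(secondLockConflicts(lockFile)); // prints true
        } finally {
            Files.deleteIfExists(lockFile);
        }
    }
}
```

This is why the symptom implicates a duplicate core load rather than two processes: a second Solr process would instead fail to acquire the OS-level lock, producing a different error.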
[jira] [Commented] (SOLR-7998) Solr start/stop script is currently incompatible with SUSE 11
[ https://issues.apache.org/jira/browse/SOLR-7998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170401#comment-16170401 ] Mateo commented on SOLR-7998: - Under SUSE 42.3 the script fails at line https://github.com/apache/lucene-solr/blob/master/solr/bin/install_solr_service.sh#L196 because "service --version" is an invalid command.
> Solr start/stop script is currently incompatible with SUSE 11
> -
>
> Key: SOLR-7998
> URL: https://issues.apache.org/jira/browse/SOLR-7998
> Project: Solr
> Issue Type: Bug
> Components: Build
> Affects Versions: 5.3
> Environment: SUSE (SLES 11 SP2)
> Reporter: gilles lafargue
>
> result of the command 'lsof -PniTCP:$SOLR_PORT -sTCP:LISTEN' in script bin/solr
> lsof: unsupported TCP/TPI info selection: C
> lsof: unsupported TCP/TPI info selection: P
> lsof: unsupported TCP/TPI info selection: :
> lsof: unsupported TCP/TPI info selection: L
> lsof: unsupported TCP/TPI info selection: I
> lsof: unsupported TCP/TPI info selection: S
> lsof: unsupported TCP/TPI info selection: T
> lsof: unsupported TCP/TPI info selection: E
> lsof: unsupported TCP/TPI info selection: N
> lsof 4.80
> latest revision: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/
> latest FAQ: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/FAQ
> latest man page: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/lsof_man
> usage: [-?abhlnNoOPRstUvVX] [+|-c c] [+|-d s] [+D D] [+|-f[gG]]
> [-F [f]] [-g [s]] [-i [i]] [+|-L [l]] [+m [m]] [+|-M] [-o [o]]
> [-p s] [+|-r [t]] [-S [t]] [-T [t]] [-u s] [+|-w] [-x [fl]] [--] [names]
> Use the ``-h'' option to get more help information.
> it seems that option "-sTCP:LISTEN" is not correct for lsof v4.80
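One way to work around the version dependence is to probe the lsof version before using state filtering. A minimal sketch, not part of the actual bin/solr script; the 4.82 cutoff for -sTCP:LISTEN support and the grep fallback pipeline are assumptions:

```shell
# Hypothetical helper: pick lsof arguments based on its version, since
# -sTCP:LISTEN state filtering is missing in old lsof (e.g. 4.80 on SLES 11).
# The ">= 4.82" threshold is an assumption about when state selection appeared.
lsof_listen_cmd() {
  ver="$1"; port="$2"
  major="${ver%%.*}"; minor="${ver#*.}"
  if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 82 ]; }; then
    # modern lsof: let it filter for listening sockets itself
    echo "lsof -PniTCP:$port -sTCP:LISTEN"
  else
    # old lsof: list all TCP endpoints on the port and filter the state ourselves
    echo "lsof -PniTCP:$port | grep '(LISTEN)'"
  fi
}

lsof_listen_cmd 4.80 8983   # fallback pipeline for the old SLES lsof
lsof_listen_cmd 4.89 8983   # direct -sTCP:LISTEN form
```

The version string would come from something like `lsof -v 2>&1 | grep revision` in a real script; parsing it robustly is left out of this sketch.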
[jira] [Comment Edited] (LUCENE-7966) build mr-jar and use some java 9 methods if available
[ https://issues.apache.org/jira/browse/LUCENE-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170377#comment-16170377 ] Uwe Schindler edited comment on LUCENE-7966 at 9/18/17 5:53 PM: I implemented my latest idea, based again on Robert's patch: https://github.com/apache/lucene-solr/compare/master...uschindler:jira/LUCENE-7966-v2 This approach is much cleaner: we compile against Robert's replacement classes {{FutureObjects}} and {{FutureArrays}} (which have to contain the same method signatures as the Java 9 originals, but we can add a test for this later with the smoketester) as usual with Java 8. Before packaging the JAR file we read all class files and patch all {{FutureObjects}}/{{FutureArrays}} references to refer to the Java 9 classes. The patched output is sent to a separate folder {{build/classes/java9}}. The JAR file is then packaged to include both variants, placing the patched ones in the Java 9 multi-release part. Currently only the lucene-core.jar file uses the patched classes, so code outside lucene-core (e.g., codecs) does not yet automatically get Java 9 variants; instead it will use Robert's classes. If this is the way to go, I will move the patcher to the global tools directory and we can apply patching to all JAR files of the distribution. WARNING: We cannot support Maven builds here; Maven always builds a Java8-only JAR file! [~mikemccand], [~jpountz]: Could you build a lucene-core.jar file with the above branch on GitHub and do your tests again? The main difference here is that the JAR file no longer contains a delegator class. Instead, all class files that were originally compiled with FutureObjects/FutureArrays (for Java 8 support) are patched to directly use the Java 9 Arrays/Objects methods, without going through a delegator class. Keep in mind: this currently only supports lucene-core.jar; the codecs JAR file is not yet multi-release with this patch. 
When building with {{ant jar}} inside {{lucene/core}} you should see output like this: {noformat}
[compile shit...]
    [copy] Copying 3 files to C:\Users\Uwe Schindler\Projects\lucene\trunk-lusolr1\lucene\build\core\classes\java

-mrjar-classes-uptodate:

resolve-groovy:
[ivy:cachepath] :: resolving dependencies :: org.codehaus.groovy#groovy-all-caller;working
[ivy:cachepath]         confs: [default]
[ivy:cachepath]         found org.codehaus.groovy#groovy-all;2.4.8 in public
[ivy:cachepath] :: resolution report :: resolve 170ms :: artifacts dl 5ms
        ---------------------------------------------------------------------
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        ---------------------------------------------------------------------
        |      default     |   1   |   0   |   0   |   0   ||   1   |   0   |
        ---------------------------------------------------------------------

patch-mrjar-classes:
[ivy:cachepath] :: resolving dependencies :: org.ow2.asm#asm-commons-caller;working
[ivy:cachepath]         confs: [default]
[ivy:cachepath]         found org.ow2.asm#asm-commons;5.1 in public
[ivy:cachepath]         found org.ow2.asm#asm-tree;5.1 in public
[ivy:cachepath]         found org.ow2.asm#asm;5.1 in public
[ivy:cachepath] :: resolution report :: resolve 701ms :: artifacts dl 8ms
        ---------------------------------------------------------------------
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        ---------------------------------------------------------------------
        |      default     |   3   |   0   |   0   |   0   ||   3   |   0   |
        ---------------------------------------------------------------------
  [groovy] Remapped: org/apache/lucene/analysis/tokenattributes/CharTermAttributeImpl
  [groovy] Remapped: org/apache/lucene/codecs/compressing/LZ4
  [groovy] Remapped: org/apache/lucene/document/BinaryPoint$2
  [groovy] Remapped: org/apache/lucene/document/DoubleRange
  [groovy] Remapped: org/apache/lucene/document/FloatRange
  [groovy] Remapped: org/apache/lucene/document/IntRange
  [groovy] Remapped: org/apache/lucene/document/LongRange
  [groovy] Remapped: org/apache/lucene/index/BitsSlice
  [groovy] Remapped: org/apache/lucene/index/CodecReader
  [groovy] Remapped: org/apache/lucene/index/MergeReaderWrapper
  [groovy] Remapped: org/apache/lucene/search/BooleanScorer$TailPriorityQueue
  [groovy] Remapped: org/apache/lucene/util/BytesRef
  [groovy] Remapped: org/apache/lucene/util/BytesRefArray
  [groovy] Remapped: org/apache/lucene/util/CharsRef$UTF16SortedAsUTF8Comparator
  [groovy] Remapped: org/apache/lucene/util/CharsRef
  [groovy] Remapped: org/apache/lucene/util/IntsRef
  [groovy] Remapped: org/a
[jira] [Comment Edited] (LUCENE-7966) build mr-jar and use some java 9 methods if available
[ https://issues.apache.org/jira/browse/LUCENE-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170388#comment-16170388 ] Uwe Schindler edited comment on LUCENE-7966 at 9/18/17 5:52 PM: bq. Why exactly is this the case? with my patch maven should work. Maven just uses the jars produced by ant. The smoketester validates they are exactly the same. Maven works when you build the JAR files for Maven with our ANT targets. What does not work is the pom.xml build generated by sarowes maven build. That one will build JAR files only with the FutureXxx stuff, but no multirelease stuff. Sorry for being imprecise. was (Author: thetaphi): bq. Why exactly is this the case? with my patch maven should work. Maven just uses the jars produced by ant. The smoketester validates they are exactly the same. Maven works when you build the JAR files for Maven. What does not work is the pom.xml build generated by sarowes maven build. That one will build JAR files only with the FutureXxx stuff, but no multirelease stuff. Sorry for being imprecise. > build mr-jar and use some java 9 methods if available > - > > Key: LUCENE-7966 > URL: https://issues.apache.org/jira/browse/LUCENE-7966 > Project: Lucene - Core > Issue Type: Improvement > Components: general/build >Reporter: Robert Muir > Attachments: LUCENE-7966.patch, LUCENE-7966.patch, LUCENE-7966.patch, > LUCENE-7966.patch, LUCENE-7966.patch > > > See background: http://openjdk.java.net/jeps/238 > It would be nice to use some of the newer array methods and range checking > methods in java 9 for example, without waiting for lucene 10 or something. If > we build an MR-jar, we can start migrating our code to use java 9 methods > right now, it will use optimized methods from java 9 when thats available, > otherwise fall back to java 8 code. 
> This patch adds: > {code} > Objects.checkIndex(int,int) > Objects.checkFromToIndex(int,int,int) > Objects.checkFromIndexSize(int,int,int) > Arrays.mismatch(byte[],int,int,byte[],int,int) > Arrays.compareUnsigned(byte[],int,int,byte[],int,int) > Arrays.equal(byte[],int,int,byte[],int,int) > // did not add char/int/long/short/etc but of course its possible if needed > {code} > It sets these up in {{org.apache.lucene.future}} as 1-1 mappings to java > methods. This way, we can simply directly replace call sites with java 9 > methods when java 9 is a minimum. Simple 1-1 mappings mean also that we only > have to worry about testing that our java 8 fallback methods work. > I found that many of the current byte array methods today are willy-nilly and > very lenient for example, passing invalid offsets at times and relying on > compare methods not throwing exceptions, etc. I fixed all the instances in > core/codecs but have not looked at the problems with AnalyzingSuggester. Also > SimpleText still uses a silly method in ArrayUtil in similar crazy way, have > not removed that one yet.
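The thread never shows the bodies of the Java 8 fallbacks, so here is a hedged sketch of what such 1-1 mappings look like. The class name and method bodies are my own illustration, not Lucene's actual FutureArrays code; only the signatures mirror the Java 9 originals listed in the patch:

```java
// Hypothetical sketch of the Java 8 fallback idea: plain-loop implementations
// whose signatures mirror the Java 9 java.util.Arrays range methods.
public class FutureArraysSketch {

  // Mirror of Arrays.mismatch(byte[],int,int,byte[],int,int): index of the
  // first differing byte relative to the ranges, or -1 if the ranges match.
  public static int mismatch(byte[] a, int aFrom, int aTo,
                             byte[] b, int bFrom, int bTo) {
    int aLen = aTo - aFrom, bLen = bTo - bFrom;
    int len = Math.min(aLen, bLen);
    for (int i = 0; i < len; i++) {
      if (a[aFrom + i] != b[bFrom + i]) {
        return i;
      }
    }
    // one range is a proper prefix of the other: mismatch at the shorter length
    return aLen == bLen ? -1 : len;
  }

  // Mirror of Arrays.compareUnsigned(byte[],int,int,byte[],int,int):
  // lexicographic comparison treating each byte as an unsigned 0..255 value.
  public static int compareUnsigned(byte[] a, int aFrom, int aTo,
                                    byte[] b, int bFrom, int bTo) {
    int i = mismatch(a, aFrom, aTo, b, bFrom, bTo);
    if (i >= 0 && i < Math.min(aTo - aFrom, bTo - bFrom)) {
      return Integer.compare(a[aFrom + i] & 0xFF, b[bFrom + i] & 0xFF);
    }
    return (aTo - aFrom) - (bTo - bFrom); // equal prefix: shorter range first
  }

  public static void main(String[] args) {
    byte[] x = {1, 2, 3}, y = {1, 2, 4};
    System.out.println(mismatch(x, 0, 3, y, 0, 3));            // 2
    System.out.println(compareUnsigned(x, 0, 3, y, 0, 3) < 0); // true
  }
}
```

On Java 9 the MR-JAR serves class files that call java.util.Arrays directly; on Java 8 a loop like the above is the whole cost of the fallback, which is why simple 1-1 mappings keep the testing burden small.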
[jira] [Commented] (LUCENE-7966) build mr-jar and use some java 9 methods if available
[ https://issues.apache.org/jira/browse/LUCENE-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170382#comment-16170382 ] Robert Muir commented on LUCENE-7966: - {quote} WARNING: We cannot support Maven builds here, Maven always builds a Java8-only JAR file! {quote} Why exactly is this the case? with my patch maven should work. Maven just uses the jars produced by ant. The smoketester validates they are exactly the same.
CVE-2017-9803: Security vulnerability in kerberos delegation token functionality
CVE-2017-9803: Security vulnerability in kerberos delegation token functionality

Severity: Important

Vendor: The Apache Software Foundation

Versions Affected: Apache Solr 6.2.0 to 6.6.0

Description: Solr's Kerberos plugin can be configured to use delegation tokens, which allows an application to reuse the authentication of an end-user or another application. There are two issues with this functionality (when using a SecurityAwareZkACLProvider type of ACL provider, e.g. SaslZkACLProvider). Firstly, access to the security configuration can be leaked to users other than the solr super user. Secondly, malicious users can exploit this leaked configuration for privilege escalation to further expose/modify private data and/or disrupt operations in the Solr cluster. The vulnerability is fixed from Solr 6.6.1 onwards.

Mitigation: 6.x users should upgrade to 6.6.1

Credit: This issue was discovered by Hrishikesh Gadre of Cloudera Inc.

References: https://issues.apache.org/jira/browse/SOLR-11184 https://wiki.apache.org/solr/SolrSecurity
[jira] [Commented] (SOLR-11297) Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice
[ https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170355#comment-16170355 ] Shawn Heisey commented on SOLR-11297: - [~niqbal], Every time I restart 6.6.0, I see the problem described in this issue. Version 6.6.1 may have a separate issue in addition to this one, described in SOLR-11361. I saw a similar message in the logs with 6.6.2-SNAPSHOT, and in that version some of my cores do not work at all. For the problem I have described here, every core actually works; I just get annoying notifications in the admin UI and errors in the log.
[jira] [Commented] (SOLR-11297) Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice
[ https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170333#comment-16170333 ] Richard Rominger commented on SOLR-11297: - I can say that for my test bed using 6.6.0 / 6.6.1 it's pretty reproducible. I am running Windows 10 1703 for my testing. If I go back to 6.2.1, it's stable as a rock. When the problem kicks in, all I have to do is shut down Solr and restart it on a different port and it loads up fine. Then I shut it down for a bit of time and I can then restart on the port that I really want Solr running on.
Re: lucene-solr:master: LUCENE-7906: Add new shapes to testing paradigm. Committed on behalf of Ignacio Vera.
: I didn't forget, precisely. I used git commit -a and it didn't pick up the : file (for some reason as yet undetermined) and I didn't catch it. Fixed : now (via explicit git add). "-a" is not designed/intended to pick up new files -- as documented... -a, --all Tell the command to automatically stage files that have been modified and deleted, but new files you have not told Git about are not affected. : On Mon, Sep 11, 2017 at 9:40 AM, Adrien Grand wrote: : : > Karl, did you forget to git add RandomGeo3dShapeGenerator? : > : > Le lun. 11 sept. 2017 à 15:01, a écrit : : > : >> Repository: lucene-solr : >> Updated Branches: : >> refs/heads/master 64d142858 -> cd425d609 : >> : >> : >> LUCENE-7906: Add new shapes to testing paradigm. Committed on behalf of : >> Ignacio Vera. : >> : >> : >> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo : >> Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/ : >> cd425d60 : >> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/cd425d60 : >> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/cd425d60 : >> : >> Branch: refs/heads/master : >> Commit: cd425d609cee8bcea6dbfeab8b3d42b1ce48eb40 : >> Parents: 64d1428 : >> Author: Karl Wright : >> Authored: Mon Sep 11 09:00:47 2017 -0400 : >> Committer: Karl Wright : >> Committed: Mon Sep 11 09:00:47 2017 -0400 : >> : >> -- : >> .../spatial3d/geom/RandomBinaryCodecTest.java | 10 +- : >> .../spatial3d/geom/RandomGeoShapeGenerator.java | 944 : >> --- : >> .../geom/RandomGeoShapeRelationshipTest.java| 65 +- : >> 3 files changed, 53 insertions(+), 966 deletions(-) : >> -- : >> : >> : >> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ : >> cd425d60/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/geom/ : >> RandomBinaryCodecTest.java : >> -- : >> diff --git a/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/ : >> geom/RandomBinaryCodecTest.java b/lucene/spatial3d/src/test/ : >> 
org/apache/lucene/spatial3d/geom/RandomBinaryCodecTest.java : >> index ba9ee6e..250b652 100644 : >> --- a/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/ : >> geom/RandomBinaryCodecTest.java : >> +++ b/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/ : >> geom/RandomBinaryCodecTest.java : >> @@ -27,18 +27,18 @@ import org.junit.Test; : >> /** : >> * Test to check Serialization : >> */ : >> -public class RandomBinaryCodecTest extends RandomGeoShapeGenerator{ : >> +public class RandomBinaryCodecTest extends RandomGeo3dShapeGenerator { : >> : >>@Test : >>@Repeat(iterations = 10) : >>public void testRandomPointCodec() throws IOException{ : >> PlanetModel planetModel = randomPlanetModel(); : >> -GeoPoint shape = randomGeoPoint(planetModel, getEmptyConstraint()); : >> +GeoPoint shape = randomGeoPoint(planetModel); : >> ByteArrayOutputStream outputStream = new ByteArrayOutputStream(); : >> SerializableObject.writeObject(outputStream, shape); : >> ByteArrayInputStream inputStream = new ByteArrayInputStream( : >> outputStream.toByteArray()); : >> SerializableObject shapeCopy = SerializableObject.readObject(planetModel, : >> inputStream); : >> -assertEquals(shape, shapeCopy); : >> +assertEquals(shape.toString(), shape, shapeCopy); : >>} : >> : >>@Test : >> @@ -51,7 +51,7 @@ public class RandomBinaryCodecTest extends : >> RandomGeoShapeGenerator{ : >> SerializableObject.writePlanetObject(outputStream, shape); : >> ByteArrayInputStream inputStream = new ByteArrayInputStream( : >> outputStream.toByteArray()); : >> SerializableObject shapeCopy = SerializableObject. 
: >> readPlanetObject(inputStream); : >> -assertEquals(shape, shapeCopy); : >> +assertEquals(shape.toString(), shape, shapeCopy); : >>} : >> : >>@Test : >> @@ -64,6 +64,6 @@ public class RandomBinaryCodecTest extends : >> RandomGeoShapeGenerator{ : >> SerializableObject.writeObject(outputStream, shape); : >> ByteArrayInputStream inputStream = new ByteArrayInputStream( : >> outputStream.toByteArray()); : >> SerializableObject shapeCopy = SerializableObject.readObject(planetModel, : >> inputStream); : >> -assertEquals(shape, shapeCopy); : >> +assertEquals(shape.toString(), shape, shapeCopy); : >>} : >> } : >> : >> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/ : >> cd425d60/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/geom/ : >> RandomGeoShapeGenerator.java : >> -- : >> diff --git a/lucene/spatial3d/src/test/org/apache/lucene/spatial3d/ : >> geom/RandomGeoShapeGenerator.java b/lucene/spatial3d/src/test/ : >> org/apache/lucene/spatial3d/geom/RandomGeoShapeGenerato
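The `-a` behavior quoted from the git documentation above is easy to reproduce in a throwaway repository. A minimal sketch (file names here are illustrative, echoing the RandomGeo3dShapeGenerator file that was missed):

```shell
# Throwaway repository demonstrating why `git commit -a` missed the new file.
tmp=$(mktemp -d); cd "$tmp"
git init -q .
git config user.email dev@example.com
git config user.name dev

echo v1 > tracked.txt
git add tracked.txt
git commit -qm 'add tracked.txt'

echo v2 > tracked.txt                          # modify a tracked file
echo shape > RandomGeo3dShapeGenerator.java    # brand-new, untracked file

# -a stages modifications/deletions of *tracked* files only:
git commit -qam 'commit with -a'
git status --porcelain          # shows: ?? RandomGeo3dShapeGenerator.java

# The new file has to be staged explicitly:
git add RandomGeo3dShapeGenerator.java
git commit -qm 'explicit git add'
git status --porcelain          # now clean
```

The second commit succeeds, but the new file silently stays untracked until the explicit `git add`, which matches what happened with the original push.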
[jira] [Commented] (SOLR-11361) After Restarting Solr 6.6.1 Seems to cause Error if Application is Reading/Writing?
[ https://issues.apache.org/jira/browse/SOLR-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170307#comment-16170307 ]

Shawn Heisey commented on SOLR-11361:
-------------------------------------

With 6.6.0, I see the problem in SOLR-11297. I was trying to track something down on the 6.6.0 install (/solr/admin/metrics not working) and came across an error about an index, but the error didn't mention the index directory. Therefore I found the error logging in the source code, checked out the branch_6x code, modified the code to display what directory was having the issue, and built/installed version 6.6.2-SNAPSHOT. With the upgrade, I began having problems where cores actually did not successfully load, which is a very different problem from SOLR-11297. I saw the "SolrCore 'xx' is not available due to init failure: null" message in my logs just like this issue mentions, but could not find anything in the logs that actually indicated why the core failed to initialize.

> After Restarting Solr 6.6.1 Seems to cause Error if Application is
> Reading/Writing?
> ------------------------------------------------------------------
>
>                 Key: SOLR-11361
>                 URL: https://issues.apache.org/jira/browse/SOLR-11361
>             Project: Solr
>          Issue Type: Bug
>   Security Level: Public (Default Security Level. Issues are Public)
>    Affects Versions: 6.6.1
>        Environment: Windows 10 VM
>            Reporter: Richard Rominger
>            Labels: newbie, upgrade, windows
>
> I have just updated from Solr 6.2.1 to 6.6.1. I put into place a fresh 6.6.1
> and mounted our core (umslogs). This loaded perfectly fine on port 8181, and
> our application is able to write/read data.
> The problem started when I restarted Solr 6.6.1, and the below error appeared
> after Solr 6.6.1 came up accessible via the web page.
>
> *HttpSolrCall null:org.apache.solr.core.SolrCoreInitializationException: SolrCore 'umslogs' is not available due to init failure: null*
>
> Next my testing led me to start up Solr on port 8282, to which no application is
> connecting/reading/writing. On this test the umslogs core loads perfectly
> fine after erroring above.
> Next my testing led me to close +our application+ that writes/reads to the Solr
> 8181 umslogs core and shut down the Solr 8282 umslogs core. Then I restarted
> Solr back on port 8181, and the umslogs core loads properly and our
> application that writes/reads to Solr 8181 is once again operational.
> Our application has used Solr 4.10.x, then Solr 6.2.x, okay. Then again, I do
> not doubt that I might have done something wrong with the 6.6.1 upgrade that
> is causing the above behavior.
[jira] [Commented] (SOLR-11297) Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice
[ https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170301#comment-16170301 ]

Nawab Zada Asad iqbal commented on SOLR-11297:
----------------------------------------------

[~elyograg] [~erickerickson] I am curious about how frequently this behavior is reproducible. Is the rest of the world running 6.6.1 totally fine, with only the four of us seeing this issue? I see LUCENE-7959 was fixed to give a better error; however, in my case the file is actually getting created, so I am not hitting the permission issue. One way to debug would be to downgrade to an earlier version and try to reproduce the error. Can someone suggest which version I should start with?
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 20494 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20494/ Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest Error Message: Timeout waiting for all live and active Stack Trace: java.lang.AssertionError: Timeout waiting for all live and active at __randomizedtesting.SeedInfo.seed([D0DEEFB90C34C247:53A8B04BDA4DCCE6]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest(TestCloudRecovery.java:184) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 11564 lines...] [junit4] Suite: org.apache.solr.cloud.TestCloudRecovery [junit4] 2> 257303 INFO (SUITE-TestCloudRecovery-seed#[D0DEEFB90C34C247]-worker) [] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=
[jira] [Updated] (SOLR-10132) Support facet.matches to cull facets returned with a regex
[ https://issues.apache.org/jira/browse/SOLR-10132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gus Heck updated SOLR-10132:
----------------------------
    Attachment: SOLR-10132.patch

Patch returning null, with separated test, and doc. I'd rather not have the newExcludeBytesRefFilter method, but unfortunately it is protected access, so I'm not sure if it can be eliminated. See comment in patch to this effect.

> Support facet.matches to cull facets returned with a regex
> ----------------------------------------------------------
>
>                 Key: SOLR-10132
>                 URL: https://issues.apache.org/jira/browse/SOLR-10132
>             Project: Solr
>          Issue Type: New Feature
>   Security Level: Public (Default Security Level. Issues are Public)
>          Components: faceting
>    Affects Versions: 6.4.1
>            Reporter: Gus Heck
>            Assignee: Christine Poerschke
>         Attachments: SOLR-10132.patch, SOLR-10132.patch, SOLR-10132.patch, SOLR-10132.patch
>
> I recently ran into a case where I really wanted to only return the next
> level of a hierarchical facet, and while I was able to do that with a
> coordinated set of dynamic fields, it occurred to me that this would have
> been much easier if I could have simply used PathHierarchyTokenizer and
> written
> &facet.matches="/my/current/prefix/[^/]+$"
> thereby limiting the returned facets to the next level down and not returning
> the additional N levels I didn't (yet) want to display (numbering in the
> thousands near the top of the tree). I suspect there are other good use
> cases, and the patch seemed relatively tractable.
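Conceptually, the facet.matches feature above is plain regex filtering of facet values. This hypothetical sketch (plain java.util.regex, not Solr's actual implementation; class and method names are my own) shows how the regex from the issue culls a hierarchical facet list to the immediate next level:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Hypothetical illustration of what facet.matches does conceptually:
// keep only facet values that fully match the supplied regex.
public class FacetMatchesSketch {
    static List<String> cull(List<String> facetValues, String regex) {
        Pattern p = Pattern.compile(regex);
        return facetValues.stream()
                .filter(v -> p.matcher(v).matches())  // whole-value match
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> values = List.of(
                "/my/current/prefix/a",
                "/my/current/prefix/b",
                "/my/current/prefix/a/deeper",  // two levels down: excluded
                "/other/path");
        // "[^/]+$" forbids a further '/' after the prefix, so only the
        // immediate children survive.
        System.out.println(cull(values, "/my/current/prefix/[^/]+$"));
        // prints [/my/current/prefix/a, /my/current/prefix/b]
    }
}
```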
[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-9-ea+181) - Build # 6904 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6904/ Java: 64bit/jdk-9-ea+181 -XX:-UseCompressedOops -XX:+UseG1GC --illegal-access=deny 1 tests failed. FAILED: org.apache.solr.update.TestInPlaceUpdatesDistrib.test Error Message: This doc was supposed to have been deleted, but was: SolrDocument{id=0, title_s=title0, id_i=0, inplace_updatable_float=1.0, _version_=1578891528619687936, inplace_updatable_int_with_default=666, inplace_updatable_float_with_default=42.0} Stack Trace: java.lang.AssertionError: This doc was supposed to have been deleted, but was: SolrDocument{id=0, title_s=title0, id_i=0, inplace_updatable_float=1.0, _version_=1578891528619687936, inplace_updatable_int_with_default=666, inplace_updatable_float_with_default=42.0} at __randomizedtesting.SeedInfo.seed([1AA9A65B416E3B90:92FD9981EF925668]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.update.TestInPlaceUpdatesDistrib.reorderedDBQsSimpleTest(TestInPlaceUpdatesDistrib.java:247) at org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:151) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at
[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 855 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/855/ No tests ran. Build Log: [...truncated 27437 lines...] prepare-release-no-sign: [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist [copy] Copying 476 files to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene [copy] Copying 215 files to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8 [smoker] NOTE: output encoding is UTF-8 [smoker] [smoker] Load release URL "file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"... [smoker] [smoker] Test Lucene... [smoker] test basics... [smoker] get KEYS [smoker] 0.2 MB in 0.03 sec (8.5 MB/sec) [smoker] check changes HTML... [smoker] download lucene-8.0.0-src.tgz... [smoker] 29.0 MB in 0.09 sec (326.7 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-8.0.0.tgz... [smoker] 69.1 MB in 0.24 sec (284.0 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-8.0.0.zip... [smoker] 79.4 MB in 0.28 sec (287.0 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack lucene-8.0.0.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.8... [smoker] got 6166 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-8.0.0.zip... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.8... [smoker] got 6166 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-8.0.0-src.tgz... [smoker] make sure no JARs/WARs in src dist... [smoker] run "ant validate" [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'... 
[smoker] test demo with 1.8... [smoker] got 213 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] generate javadocs w/ Java 8... [smoker] [smoker] Crawl/parse... [smoker] [smoker] Verify... [smoker] confirm all releases have coverage in TestBackwardsCompatibility [smoker] find all past Lucene releases... [smoker] run TestBackwardsCompatibility.. [smoker] success! [smoker] [smoker] Test Solr... [smoker] test basics... [smoker] get KEYS [smoker] 0.2 MB in 0.02 sec (14.8 MB/sec) [smoker] check changes HTML... [smoker] download solr-8.0.0-src.tgz... [smoker] 50.7 MB in 0.89 sec (57.0 MB/sec) [smoker] verify md5/sha1 digests [smoker] download solr-8.0.0.tgz... [smoker] 143.1 MB in 2.35 sec (60.9 MB/sec) [smoker] verify md5/sha1 digests [smoker] download solr-8.0.0.zip... [smoker] 144.1 MB in 2.45 sec (58.9 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack solr-8.0.0.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] unpack lucene-8.0.0.tgz... [smoker] **WARNING**: skipping check of /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar: it has javax.* classes [smoker] **WARNING**: skipping check of /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar: it has javax.* classes [smoker] copying unpacked distribution for Java 8 ... [smoker] test solr example w/ Java 8... [smoker] start Solr instance (log=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/solr-example.log)... 
[smoker] No process found for Solr node running on port 8983 [smoker] Running techproducts example on port 8983 from /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8 [smoker] Creating Solr home directory /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/example/techproducts/solr [smoker] [smoker] Starting up Solr on port 8983 using command: [smoker] "bin/solr" start -p 8983 -s "example/techproducts/solr" [smoker] [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|] [/] [-] [\] [|] [/] [-] [\] [|] [/]
Re: [jira] [Commented] (SOLR-11361) After Restarting Solr 6.6.1 Seems to cause Error if Application is Reading/Writing?
Richard:

WARNING: SOLR-11297 is on my list so I don't lose track of it. If my past record this year is any indication, there's no guarantee I'll be able to actively work on it any time soon. All help welcome.

Erick

On Mon, Sep 18, 2017 at 5:21 AM, Richard Rominger (JIRA) wrote:
>
> [ https://issues.apache.org/jira/browse/SOLR-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169937#comment-16169937 ]
>
> Richard Rominger commented on SOLR-11361:
> -----------------------------------------
>
> I've done that after the fact, but then found my issue was already reported
> and being looked into by Erick Erickson in SOLR-11297. So I think there is
> more than a question here, an actual issue going on, but regardless, SOLR-11361
> is at best a duplicate.
[jira] [Comment Edited] (LUCENE-7966) build mr-jar and use some java 9 methods if available
[ https://issues.apache.org/jira/browse/LUCENE-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170027#comment-16170027 ]

Uwe Schindler edited comment on LUCENE-7966 at 9/18/17 3:21 PM:
----------------------------------------------------------------

Here is the class remapper: https://paste.apache.org/bAzx

Basically it rewrites all references to oal.future.FutureXxxx to the Java 9 type java.util.Xxxx. All files in the Java 8 code in {{build/classes/java}} that contain references to our own FutureXxx classes (the remapper sets remapped=true for those) are saved in rewritten form to a separate directory {{build/classes/java9}} in parallel to the original, and are packaged into the multi-release part of the JAR. All classes that have no references to our FutureXxx backports are kept out.

This can be done as a general task and may be applied to all Lucene/Solr modules. I will update my branch later.
> build mr-jar and use some java 9 methods if available
> -----------------------------------------------------
>
>                 Key: LUCENE-7966
>                 URL: https://issues.apache.org/jira/browse/LUCENE-7966
>             Project: Lucene - Core
>          Issue Type: Improvement
>          Components: general/build
>            Reporter: Robert Muir
>         Attachments: LUCENE-7966.patch, LUCENE-7966.patch, LUCENE-7966.patch, LUCENE-7966.patch, LUCENE-7966.patch
>
> See background: http://openjdk.java.net/jeps/238
> It would be nice to use some of the newer array methods and range checking
> methods in java 9, for example, without waiting for lucene 10 or something. If
> we build an MR-jar, we can start migrating our code to use java 9 methods
> right now; it will use optimized methods from java 9 when that's available,
> otherwise fall back to java 8 code.
> This patch adds:
> {code}
> Objects.checkIndex(int,int)
> Objects.checkFromToIndex(int,int,int)
> Objects.checkFromIndexSize(int,int,int)
> Arrays.mismatch(byte[],int,int,byte[],int,int)
> Arrays.compareUnsigned(byte[],int,int,byte[],int,int)
> Arrays.equal(byte[],int,int,byte[],int,int)
> // did not add char/int/long/short/etc but of course its possible if needed
> {code}
> It sets these up in {{org.apache.lucene.future}} as 1-1 mappings to java
> methods. This way, we can simply directly replace call sites with java 9
> methods when java 9 is a minimum. Simple 1-1 mappings also mean that we only
> have to worry about testing that our java 8 fallback methods work.
> I found that many of the current byte array methods today are willy-nilly and
> very lenient, for example passing invalid offsets at times and relying on
> compare methods not throwing exceptions, etc. I fixed all the instances in
> core/codecs but have not looked at the problems with AnalyzingSuggester. Also
> SimpleText still uses a silly method in ArrayUtil in a similarly crazy way;
> have not removed that one yet.
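To make the "1-1 mapping" idea above concrete: the Java 8 fallbacks simply reimplement the Java 9 signatures. The bodies below are my own illustrative Java 8 implementations following the documented Java 9 semantics, not the code of the actual patch (which lives in org.apache.lucene.future):

```java
// Illustrative Java 8 fallbacks mirroring two of the Java 9 methods named
// in the issue. Sketch only; not the committed Lucene code.
public final class FutureArraysSketch {
    private FutureArraysSketch() {}

    /** Mirrors Java 9 Objects.checkIndex(int, int). */
    public static int checkIndex(int index, int length) {
        if (index < 0 || index >= length) {
            throw new IndexOutOfBoundsException("index=" + index + " length=" + length);
        }
        return index;
    }

    /** Mirrors Java 9 Arrays.mismatch(byte[],int,int,byte[],int,int):
     *  relative index of the first differing byte; the common length if one
     *  range is a proper prefix of the other; -1 if the ranges are equal. */
    public static int mismatch(byte[] a, int aFrom, int aTo, byte[] b, int bFrom, int bTo) {
        int aLen = aTo - aFrom, bLen = bTo - bFrom;
        int len = Math.min(aLen, bLen);
        for (int i = 0; i < len; i++) {
            if (a[aFrom + i] != b[bFrom + i]) {
                return i;                    // first differing position
            }
        }
        return aLen == bLen ? -1 : len;      // proper prefix -> common length
    }

    /** Mirrors Java 9 Arrays.equals(byte[],int,int,byte[],int,int). */
    public static boolean equals(byte[] a, int aFrom, int aTo, byte[] b, int bFrom, int bTo) {
        return (aTo - aFrom) == (bTo - bFrom) && mismatch(a, aFrom, aTo, b, bFrom, bTo) == -1;
    }

    public static void main(String[] args) {
        byte[] a = {1, 2, 3}, b = {1, 2, 4};
        System.out.println(mismatch(a, 0, 3, b, 0, 3)); // prints 2
    }
}
```

Because each fallback matches the Java 9 signature exactly, call sites can later be switched to the JDK methods with a simple rename, which is exactly what the MR-JAR remapping exploits.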
[jira] [Commented] (LUCENE-7966) build mr-jar and use some java 9 methods if available
[ https://issues.apache.org/jira/browse/LUCENE-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170077#comment-16170077 ] Michael McCandless commented on LUCENE-7966: bq. Michael McCandless: A stupid question: Did you do the benchmark on Java 9 using the JAR file? If you did it with the class-files only classpath, it won't use any Java 9 features, so you won't see any speed improvement. MR-JAR files require to use them as JAR files. Just placing the files in META-INF subdirectories of a file-only classpath won't use them! That was a great idea [~thetaphi] but alas I was using Lucene via JAR files. > build mr-jar and use some java 9 methods if available > - > > Key: LUCENE-7966 > URL: https://issues.apache.org/jira/browse/LUCENE-7966 > Project: Lucene - Core > Issue Type: Improvement > Components: general/build >Reporter: Robert Muir > Attachments: LUCENE-7966.patch, LUCENE-7966.patch, LUCENE-7966.patch, > LUCENE-7966.patch, LUCENE-7966.patch > > > See background: http://openjdk.java.net/jeps/238 > It would be nice to use some of the newer array methods and range checking > methods in java 9 for example, without waiting for lucene 10 or something. If > we build an MR-jar, we can start migrating our code to use java 9 methods > right now, it will use optimized methods from java 9 when thats available, > otherwise fall back to java 8 code. > This patch adds: > {code} > Objects.checkIndex(int,int) > Objects.checkFromToIndex(int,int,int) > Objects.checkFromIndexSize(int,int,int) > Arrays.mismatch(byte[],int,int,byte[],int,int) > Arrays.compareUnsigned(byte[],int,int,byte[],int,int) > Arrays.equal(byte[],int,int,byte[],int,int) > // did not add char/int/long/short/etc but of course its possible if needed > {code} > It sets these up in {{org.apache.lucene.future}} as 1-1 mappings to java > methods. This way, we can simply directly replace call sites with java 9 > methods when java 9 is a minimum. 
Simple 1-1 mappings also mean that we only > have to worry about testing that our Java 8 fallback methods work. > I found that many of the current byte-array methods today are willy-nilly and > very lenient, for example passing invalid offsets at times and relying on > compare methods not throwing exceptions, etc. I fixed all the instances in > core/codecs but have not looked at the problems with AnalyzingSuggester. Also, > SimpleText still uses a silly method in ArrayUtil in a similarly crazy way; I have > not removed that one yet. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
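The 1-1 backport idea above can be sketched as a plain Java 8 class. This is an illustrative sketch only: the real backports live in {{org.apache.lucene.future}} in the patch, and the class/method bodies here are reconstructed from the Java 9 javadoc semantics, not taken from the attached patch.

```java
// Hedged sketch of Java 8 fallbacks mirroring the Java 9 Arrays methods the
// patch backports. Names and exact behavior are illustrative, not the patch.
public class FutureArraysSketch {

    // Mirrors java.util.Arrays.mismatch(byte[],int,int,byte[],int,int):
    // relative index of the first differing byte, or -1 if the ranges match.
    public static int mismatch(byte[] a, int aFrom, int aTo,
                               byte[] b, int bFrom, int bTo) {
        int aLen = aTo - aFrom, bLen = bTo - bFrom;
        int len = Math.min(aLen, bLen);
        for (int i = 0; i < len; i++) {
            if (a[aFrom + i] != b[bFrom + i]) {
                return i;
            }
        }
        return aLen == bLen ? -1 : len;  // shorter range is a proper prefix
    }

    // Mirrors Arrays.compareUnsigned: lexicographic order, bytes as 0..255.
    public static int compareUnsigned(byte[] a, int aFrom, int aTo,
                                      byte[] b, int bFrom, int bTo) {
        int i = mismatch(a, aFrom, aTo, b, bFrom, bTo);
        if (i >= 0 && i < Math.min(aTo - aFrom, bTo - bFrom)) {
            return (a[aFrom + i] & 0xFF) - (b[bFrom + i] & 0xFF);
        }
        return (aTo - aFrom) - (bTo - bFrom);
    }

    public static void main(String[] args) {
        byte[] x = {1, 2, 3}, y = {1, 2, 4};
        System.out.println(mismatch(x, 0, 3, y, 0, 3));            // 2
        System.out.println(compareUnsigned(x, 0, 3, y, 0, 3) < 0); // true
    }
}
```

On Java 9, the MR-JAR mechanism replaces such call sites with the intrinsified java.util.Arrays versions; on Java 8, the loop above is what runs.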
[jira] [Comment Edited] (LUCENE-7966) build mr-jar and use some java 9 methods if available
[ https://issues.apache.org/jira/browse/LUCENE-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169838#comment-16169838 ] Uwe Schindler edited comment on LUCENE-7966 at 9/18/17 1:43 PM: When thinking last night about the whole thing a bit more, I had a cool idea: Currently we use ASM to generate the stub files to compile against (see my Github repo). On top of these stubs we use a "wrapper class" that just delegates all methods to the Java 9 one. IMHO, this is not nice for the optimizer (although it can handle that). But the oal.future.FutureObjects/FutureArrays classes just contain the same signatures as their Java 9 variants would contain. So my idea is to use ASM to patch all classes: - Use a Groovy script that runs on the compiler output, before building the JAR file - Load each class with ASM and use ASM's rewriter functionality to change the class name of all occurrences of oal.future.FutureObjects/FutureArrays and replace them by java.util.Objects/Arrays. We can use this utility out of ASM to do this: [http://asm.ow2.org/asm50/javadoc/user/org/objectweb/asm/commons/ClassRemapper.html]. Whenever a class file contains references to FutureXXX classes, we patch it using ASM and write it out to the META-INF folder as the Java 9 variant. - Then package the MR JAR. The good thing: - we don't need stub files to compile with Java 8. We just need the smoke tester to verify that the patched class files actually resolve against Java 9 during the Java 9 checks - we have no license issues, because we don't need to generate and commit the stubs. In our source files we solely use oal.future.Objects/Arrays. Adapting to Java 9 is done by constant pool renaming :-) What do you think? I will try this variant a bit later today. We can use the same approach for other Java 9 classes, too! Maybe this also helps with the issues Mike has seen (I am not happy to have the delegating class). 
[jira] [Commented] (LUCENE-7966) build mr-jar and use some java 9 methods if available
[ https://issues.apache.org/jira/browse/LUCENE-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170027#comment-16170027 ] Uwe Schindler commented on LUCENE-7966: --- Here is the class remapper: https://paste.apache.org/NUOp Basically it rewrites all references to oal.future.FutureXxxx to the Java 9 type java.util.Xxxx. All files in {{build/classes/java}} whose Java 8 code contains references to our own FutureXxx classes (the remapper sets remapped=true for those) are saved in rewritten form to a separate directory {{build/classes/java9}} in parallel to the original and are packaged into the multi-release part of the JAR. All classes that have no references to our FutureXxx backports are kept out. This can be done as a general task and may be applied to all Lucene/Solr modules. I will update my branch later.
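Putting the pieces of the discussion above together, the resulting multi-release JAR would look roughly like this (layout per JEP 238; the class name shown is a placeholder, not taken from the branch):

```text
lucene-core.jar
|-- META-INF/MANIFEST.MF                          (contains "Multi-Release: true")
|-- org/apache/lucene/codecs/SomeReader.class     (Java 8 bytecode, calls oal.future.FutureArrays)
`-- META-INF/versions/9/
    `-- org/apache/lucene/codecs/SomeReader.class (same class, constant pool remapped to java.util.Arrays)
```

A Java 9+ runtime loading the JAR picks the class under META-INF/versions/9 automatically; a Java 8 runtime ignores that directory and uses the fallback bytecode at the JAR root.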
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 191 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/191/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC 5 tests failed. FAILED: org.apache.solr.cloud.HttpPartitionTest.test Error Message: Timeout occured while waiting response from server at: http://127.0.0.1:47649/collMinRf_1x3 Stack Trace: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:47649/collMinRf_1x3 at __randomizedtesting.SeedInfo.seed([5E924F04245D447:8DBD1B2AECB9B9BF]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:638) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242) at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) at org.apache.solr.cloud.HttpPartitionTest.realTimeGetDocId(HttpPartitionTest.java:618) at org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603) at org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:560) at org.apache.solr.cloud.HttpPartitionTest.testMinRf(HttpPartitionTest.java:251) at org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:127) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedt
[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+181) - Build # 20493 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20493/ Java: 32bit/jdk-9-ea+181 -client -XX:+UseG1GC --illegal-access=deny 1 tests failed. FAILED: org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest Error Message: Timeout waiting for all live and active Stack Trace: java.lang.AssertionError: Timeout waiting for all live and active at __randomizedtesting.SeedInfo.seed([B257F69E19359092:3121A96CCF4C9E33]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest(TestCloudRecovery.java:184) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) Build Log: [...truncated 12969 lines...] [junit4] Suite: org.apache.solr.cloud.TestCloudRecovery [junit4] 2> 1845922 INFO (SUITE-TestCloudRecovery-seed#[B257F69E19359092]-worker) [] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.security.egd=
[jira] [Commented] (SOLR-11361) After Restarting Solr 6.6.1 Seems to cause Error if Application is Reading/Writing?
[ https://issues.apache.org/jira/browse/SOLR-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169937#comment-16169937 ] Richard Rominger commented on SOLR-11361: - I've done that after the fact but then found my issue was already reported and being looked into by Erick Erickson in SOLR-11297. So I think there is more than a question here, there is an actual issue going on, but regardless this SOLR-11361 is at best a duplicate. > After Restarting Solr 6.6.1 Seems to cause Error if Application is > Reading/Writing? > --- > > Key: SOLR-11361 > URL: https://issues.apache.org/jira/browse/SOLR-11361 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6.1 > Environment: Windows 10 VM >Reporter: Richard Rominger > Labels: newbie, upgrade, windows > > I have just updated from Solr 6.2.1 to 6.6.1. I put into place a fresh 6.6.1 > and mounted our Core (umslogs). This loaded perfectly fine on port 8181 and > our application is able to write/read data. > The problem started when I restarted Solr 6.6.1 and the below error appeared > after Solr 6.6.1 came up accessible via the web page. > > *HttpSolrCall > null:org.apache.solr.core.SolrCoreInitializationException: SolrCore 'umslogs' > is not available due to init failure: null * > Next my testing led me to start up Solr on port 8282 that no application is > connecting/reading/writing to. In this test the umslogs core loads perfectly > fine after erroring above. > Next my testing led me to close +our application+ that writes/reads to the Solr > 8181 umslogs core and shut down the Solr 8282 umslogs core. Then I restarted > Solr back on port 8181 and the umslogs core loads properly and our > application that writes/reads to Solr 8181 is once again operational. > Our application has used Solr 4.10.x, then Solr 6.2.x okay. 
Then again I do > not doubt that I might have done something wrong with the 6.6.1 upgrade that > is causing the above behavior.
[JENKINS] Lucene-Solr-SmokeRelease-7.0 - Build # 42 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.0/42/ No tests ran. Build Log: [...truncated 25714 lines...] prepare-release-no-sign: [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/lucene/build/smokeTestRelease/dist [copy] Copying 476 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/lucene/build/smokeTestRelease/dist/lucene [copy] Copying 215 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/lucene/build/smokeTestRelease/dist/solr [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8 [smoker] NOTE: output encoding is UTF-8 [smoker] [smoker] Load release URL "file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/lucene/build/smokeTestRelease/dist/"... [smoker] [smoker] Test Lucene... [smoker] test basics... [smoker] get KEYS [smoker] 0.2 MB in 0.05 sec (4.6 MB/sec) [smoker] check changes HTML... [smoker] download lucene-7.0.0-src.tgz... [smoker] 29.5 MB in 0.05 sec (549.1 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-7.0.0.tgz... [smoker] 69.0 MB in 0.20 sec (338.0 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-7.0.0.zip... [smoker] 79.3 MB in 0.07 sec (1082.4 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack lucene-7.0.0.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.8... [smoker] got 6165 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-7.0.0.zip... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.8... [smoker] got 6165 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-7.0.0-src.tgz... [smoker] make sure no JARs/WARs in src dist... [smoker] run "ant validate" [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'... [smoker] test demo with 1.8... 
[smoker] got 213 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] generate javadocs w/ Java 8... [smoker] [smoker] Crawl/parse... [smoker] [smoker] Verify... [smoker] confirm all releases have coverage in TestBackwardsCompatibility [smoker] find all past Lucene releases... [smoker] run TestBackwardsCompatibility.. [smoker] success! [smoker] [smoker] Test Solr... [smoker] test basics... [smoker] get KEYS [smoker] 0.2 MB in 0.03 sec (7.0 MB/sec) [smoker] check changes HTML... [smoker] download solr-7.0.0-src.tgz... [smoker] 51.2 MB in 1.48 sec (34.6 MB/sec) [smoker] verify md5/sha1 digests [smoker] download solr-7.0.0.tgz... [smoker] 142.7 MB in 3.96 sec (36.0 MB/sec) [smoker] verify md5/sha1 digests [smoker] download solr-7.0.0.zip... [smoker] 143.7 MB in 4.63 sec (31.1 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack solr-7.0.0.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] unpack lucene-7.0.0.tgz... [smoker] **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar: it has javax.* classes [smoker] **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar: it has javax.* classes [smoker] copying unpacked distribution for Java 8 ... [smoker] test solr example w/ Java 8... [smoker] start Solr instance (log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/solr-example.log)... 
[smoker] No process found for Solr node running on port 8983 [smoker] Running techproducts example on port 8983 from /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8 [smoker] Creating Solr home directory /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/example/techproducts/solr [smoker] [smoker] Starting up Solr on port 8983 using command: [smoker] "bin/solr" start -p 8983 -s "example/techproducts/solr" [smoker] [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|] [/] [-] [\] [|] [/] [-] [\] [|] [/] [-] [\] [|] [/]
[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 192 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/192/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC 2 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI Error Message: Expected to see collection awhollynewcollection_0 null Live Nodes: [127.0.0.1:64169_solr, 127.0.0.1:64170_solr, 127.0.0.1:64168_solr, 127.0.0.1:64172_solr] Last available state: DocCollection(awhollynewcollection_0//collections/awhollynewcollection_0/state.json/2)={ "pullReplicas":"0", "replicationFactor":"4", "shards":{ "shard1":{ "range":"8000-b332", "state":"active", "replicas":{ "core_node3":{ "core":"awhollynewcollection_0_shard1_replica_n1", "base_url":"http://127.0.0.1:64172/solr";, "node_name":"127.0.0.1:64172_solr", "state":"down", "type":"NRT"}, "core_node5":{ "core":"awhollynewcollection_0_shard1_replica_n2", "base_url":"http://127.0.0.1:64169/solr";, "node_name":"127.0.0.1:64169_solr", "state":"down", "type":"NRT"}, "core_node7":{ "core":"awhollynewcollection_0_shard1_replica_n4", "base_url":"http://127.0.0.1:64168/solr";, "node_name":"127.0.0.1:64168_solr", "state":"down", "type":"NRT"}, "core_node9":{ "core":"awhollynewcollection_0_shard1_replica_n6", "base_url":"http://127.0.0.1:64170/solr";, "node_name":"127.0.0.1:64170_solr", "state":"down", "type":"NRT"}}}, "shard2":{ "range":"b333-e665", "state":"active", "replicas":{ "core_node11":{ "core":"awhollynewcollection_0_shard2_replica_n8", "base_url":"http://127.0.0.1:64172/solr";, "node_name":"127.0.0.1:64172_solr", "state":"down", "type":"NRT"}, "core_node14":{ "core":"awhollynewcollection_0_shard2_replica_n10", "base_url":"http://127.0.0.1:64169/solr";, "node_name":"127.0.0.1:64169_solr", "state":"down", "type":"NRT"}, "core_node16":{ "core":"awhollynewcollection_0_shard2_replica_n12", "base_url":"http://127.0.0.1:64168/solr";, "node_name":"127.0.0.1:64168_solr", "state":"down", "type":"NRT"}, "core_node18":{ "core":"awhollynewcollection_0_shard2_replica_n13", 
"base_url":"http://127.0.0.1:64170/solr";, "node_name":"127.0.0.1:64170_solr", "state":"down", "type":"NRT"}}}, "shard3":{ "range":"e666-1998", "state":"active", "replicas":{ "core_node20":{ "core":"awhollynewcollection_0_shard3_replica_n15", "base_url":"http://127.0.0.1:64172/solr";, "node_name":"127.0.0.1:64172_solr", "state":"down", "type":"NRT"}, "core_node22":{ "core":"awhollynewcollection_0_shard3_replica_n17", "base_url":"http://127.0.0.1:64169/solr";, "node_name":"127.0.0.1:64169_solr", "state":"down", "type":"NRT"}, "core_node24":{ "core":"awhollynewcollection_0_shard3_replica_n19", "base_url":"http://127.0.0.1:64168/solr";, "node_name":"127.0.0.1:64168_solr", "state":"down", "type":"NRT"}, "core_node26":{ "core":"awhollynewcollection_0_shard3_replica_n21", "base_url":"http://127.0.0.1:64170/solr";, "node_name":"127.0.0.1:64170_solr", "state":"down", "type":"NRT"}}}, "shard4":{ "range":"1999-4ccb", "state":"active", "replicas":{ "core_node28":{ "core":"awhollynewcollection_0_shard4_replica_n23", "base_url":"http://127.0.0.1:64172/solr";, "node_name":"127.0.0.1:64172_solr", "state":"down", "type":"NRT"}, "core_node30":{ "core":"awhollynewcollection_0_shard4_replica_n25", "base_url":"http://127.0.0.1:64169/solr";, "node_name":"127.0.0.1:64169_solr", "state":"down", "type":"NRT"}, "core_node32":{ "core":"awhollynewcollection_0_shard4_replica_n27", "base_url":"http://127.0.0.1:64168/solr";, "node_name":"127.0.0.1:64168_solr", "state":"down", "type":"NRT"}, "core_node34":{ "core":"awhollynewcollection_0_shard4_replica_n29", "base_url":"http://127.0.0.1:64170/solr";, "node_name":"127.0.0.1:64170_solr", "state":"down", "type":"NRT"}}}, "shard5":{ "range":"4ccc-7fff", "state":"active", "replicas":{ "core_node36":{ "core":"awhollynewcollection_0_shard5_replica_n31", "base_url":"http://
[jira] [Commented] (SOLR-10132) Support facet.matches to cull facets returned with a regex
[ https://issues.apache.org/jira/browse/SOLR-10132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169878#comment-16169878 ] Christine Poerschke commented on SOLR-10132: Hi Gus, thanks for returning to this! bq. ... the MATCH_ALL_TERMS idea has missed the boat. ... so I think now returning null as before is the only path forward. ... I agree. bq. ... Also the new asciidoc stuff has come in since the last patch here so I probably should add some documentation for this feature too, now that that is something I can do myself :-). ... Yes please. bq. ... Should I do the patch vs trunk since it seems I just barely missed the boat for 7? Yes please. Almost always patches would be against trunk/master, and from there any back porting would be done via cherry-pick to the branches, branch_7x at present. > Support facet.matches to cull facets returned with a regex > -- > > Key: SOLR-10132 > URL: https://issues.apache.org/jira/browse/SOLR-10132 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: faceting >Affects Versions: 6.4.1 >Reporter: Gus Heck >Assignee: Christine Poerschke > Attachments: SOLR-10132.patch, SOLR-10132.patch, SOLR-10132.patch > > > I recently ran into a case where I really wanted to only return the next > level of a hierarchical facet, and while I was able to do that with a > coordinated set of dynamic fields, it occurred to me that this would have > been much, much easier if I could have simply used PathHierarchyTokenizer and > written > &facet.matches="/my/current/prefix/[^/]+$" > thereby limiting the returned facets to the next level down and not returning > the additional N levels I didn't (yet) want to display (numbering in the > thousands near the top of the tree). I suspect there are other good use > cases, and the patch seemed relatively tractable. 
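For illustration, a request using the proposed parameter might look like this (collection and field names here are hypothetical; facet.matches is the parameter this issue proposes, and the regex would need URL-encoding in a real request):

```text
GET http://localhost:8983/solr/collection1/select
    ?q=*:*
    &facet=true
    &facet.field=category_path
    &facet.matches=/my/current/prefix/[^/]+$
```

With a PathHierarchyTokenizer-analyzed field, only the facet values one level below /my/current/prefix/ would be returned, instead of every deeper path.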
[jira] [Resolved] (SOLR-10990) QueryComponent.process breakup (for readability)
[ https://issues.apache.org/jira/browse/SOLR-10990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke resolved SOLR-10990. Resolution: Fixed Fix Version/s: 7.1 master (8.0) > QueryComponent.process breakup (for readability) > > > Key: SOLR-10990 > URL: https://issues.apache.org/jira/browse/SOLR-10990 > Project: Solr > Issue Type: Wish > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Fix For: master (8.0), 7.1 > > > The method is currently very long i.e. > https://github.com/apache/lucene-solr/blob/e2521b2a8baabdaf43b92192588f51e042d21e97/solr/core/src/java/org/apache/solr/handler/component/QueryComponent.java#L300-L565 > and breaking it up along logical lines (ids, grouped distributed first > phase, grouped distributed second phase, undistributed grouped, ungrouped) > would make it more readable.
[jira] [Commented] (SOLR-11291) Adding Solr Core Reporter
[ https://issues.apache.org/jira/browse/SOLR-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169864#comment-16169864 ] Christine Poerschke commented on SOLR-11291: [~ab] and I spoke about this offline. At present there's the {{SolrInfoBean.Group.shard}} enum choice associated with the {{SolrShardReporter}} class for which {{setCore(core)}} is called. Instead of having the {{SolrInfoBean.Group.shard}} enum choice (and a potential additional {{SolrInfoBean.Group.replica}} enum choice) we should be able to inspect the class and based on that call the {{setCore(core)}} method e.g. something along the lines of {code} if (SolrCoreReporter.class.isAssignableFrom(reporter.getClass())) { ((SolrCoreReporter)reporter).setCore(core); } {code} which then likely also should permit the removal altogether of the {{SolrInfoBean.Group.shard}} enum choice. > Adding Solr Core Reporter > - > > Key: SOLR-11291 > URL: https://issues.apache.org/jira/browse/SOLR-11291 > Project: Solr > Issue Type: New Feature > Components: metrics >Reporter: Omar Abdelnabi >Priority: Minor > Attachments: SOLR-11291.patch > > > Adds a new reporter, SolrCoreReporter, which allows metrics to be reported on > per-core basis. > Also modifies the SolrMetricManager and SolrCoreMetricManager to take > advantage of this new reporter. > Adds a test/example that uses the SolrCoreReporter. Also adds randomization > to SolrCloudReportersTest. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
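The class-inspection idea in the comment can be mocked up outside Solr. All type names below are hypothetical stand-ins for the proposed SolrCoreReporter wiring (the real code would live in SolrMetricManager); the sketch only demonstrates the isAssignableFrom-based capability check replacing the Group enum:

```java
public class CoreReporterCheckDemo {
    // Hypothetical stand-in for a Solr core handle.
    static class SolrCore {
        final String name;
        SolrCore(String name) { this.name = name; }
    }

    // Base reporter type; most reporters don't need a core.
    interface SolrMetricReporter { }

    // Capability interface: reporters implementing it get setCore(core) called.
    interface SolrCoreReporter extends SolrMetricReporter {
        void setCore(SolrCore core);
    }

    static class PerCoreReporter implements SolrCoreReporter {
        SolrCore core;
        @Override public void setCore(SolrCore core) { this.core = core; }
    }

    static class ClusterReporter implements SolrMetricReporter { }

    // The check from the comment: inspect the class instead of a Group enum.
    static boolean wireCore(SolrMetricReporter reporter, SolrCore core) {
        if (SolrCoreReporter.class.isAssignableFrom(reporter.getClass())) {
            ((SolrCoreReporter) reporter).setCore(core);
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        SolrCore core = new SolrCore("collection1");
        System.out.println(wireCore(new PerCoreReporter(), core)); // true
        System.out.println(wireCore(new ClusterReporter(), core)); // false
    }
}
```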
[jira] [Assigned] (SOLR-11291) Adding Solr Core Reporter
[ https://issues.apache.org/jira/browse/SOLR-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke reassigned SOLR-11291: -- Assignee: Christine Poerschke > Adding Solr Core Reporter > - > > Key: SOLR-11291 > URL: https://issues.apache.org/jira/browse/SOLR-11291 > Project: Solr > Issue Type: New Feature > Components: metrics >Reporter: Omar Abdelnabi >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-11291.patch > > > Adds a new reporter, SolrCoreReporter, which allows metrics to be reported on > per-core basis. > Also modifies the SolrMetricManager and SolrCoreMetricManager to take > advantage of this new reporter. > Adds a test/example that uses the SolrCoreReporter. Also adds randomization > to SolrCloudReportersTest. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (LUCENE-7966) build mr-jar and use some java 9 methods if available
[ https://issues.apache.org/jira/browse/LUCENE-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169838#comment-16169838 ] Uwe Schindler edited comment on LUCENE-7966 at 9/18/17 10:27 AM: - When thinking last night about the whole thing a bit more, I had a cool idea: Currently we use ASM to generate the stub files to compile against (see my Github repo). On top of these stubs we use a "wrapper class" that just delegates all methods to the Java 9 one. IMHO, this is not nice for the optimizer (although it can handle that). But the oal.util.FutureObjects/FutureArrays classes just contain the same signatures as their Java 9 variants would contain. So my idea is to use ASM to patch all classes: - Use a groovy script that runs on the compiler output, before building the JAR file - Load each class with ASM and use ASM's rewriter functionality to change the classname of all occurrences of oal.util.FutureObjects/FutureArrays and replace them with java.util.Objects/Arrays. We can use this utility out of ASM to do this: [http://asm.ow2.org/asm50/javadoc/user/org/objectweb/asm/commons/ClassRemapper.html]. Whenever a class file contains references to FutureXXX classes, we patch it using ASM and write it out to the META-INF folder as the Java 9 variant. - Then package the MR jar. The good thing: - we don't need stub files to compile with Java 8. We just need the smoke tester to verify that the patched class files actually resolve against Java 9 during the Java 9 checks - we have no license issues, because we don't need to generate and commit the stubs. In our source files we solely use oal.util.FutureObjects/FutureArrays. Adapting to Java 9 is done by constant pool renaming :-) What do you think? I will try this variant a bit later today. We can use the same approach for other Java 9 classes, too! Maybe this also helps with the issues Mike has seen (I am not happy to have the delegator class). 
was (Author: thetaphi): When thinking last night about the whole thing a bit more, I had a cool idea: Currently we use ASM to generate the stub files to compile against (see my Github repo). On top of these stubs we use a "wrapper class" that just delegates all methods to the Java 9 one. IMHO, this is not nice for the optimizer (although it can handle that). But the oal.util.FutureObjects/FutureArrays classes just contain the same signatures as their Java 9 variants would contain. So my idea is to use ASM to patch all classes: - Use a groovy script that runs on the compiler output, before building the JAR file - Load each class with ASM and use ASM's rewriter functionality to change the classname of all occurrences of oal.util.FutureObjects/FutureArrays and replace them with java.util.Objects/Arrays. We can use this utility out of ASM to do this: [http://asm.ow2.org/asm50/javadoc/user/org/objectweb/asm/commons/ClassRemapper.html] Whenever a class file contains references to FutureXXX classes, we patch it using ASM and write it out to the META-INF folder as the Java 9 variant. - Then package the MR jar. The good thing: - we don't need stub files to compile with Java 8. We just need the smoke tester to verify that the patched class files actually resolve against Java 9 during the Java 9 checks - we have no license issues, because we don't need to generate and commit the stubs. In our source files we solely use oal.util.FutureObjects/FutureArrays. Adapting to Java 9 is done by constant pool renaming :-) What do you think? I will try this variant a bit later today. We can use the same approach for other Java 9 classes, too! Maybe this also helps with the issues Mike has seen (I am not happy to have the delegator class). 
> build mr-jar and use some java 9 methods if available > - > > Key: LUCENE-7966 > URL: https://issues.apache.org/jira/browse/LUCENE-7966 > Project: Lucene - Core > Issue Type: Improvement > Components: general/build >Reporter: Robert Muir > Attachments: LUCENE-7966.patch, LUCENE-7966.patch, LUCENE-7966.patch, > LUCENE-7966.patch, LUCENE-7966.patch > > > See background: http://openjdk.java.net/jeps/238 > It would be nice to use some of the newer array methods and range checking > methods in java 9 for example, without waiting for lucene 10 or something. If > we build an MR-jar, we can start migrating our code to use java 9 methods > right now, it will use optimized methods from java 9 when thats available, > otherwise fall back to java 8 code. > This patch adds: > {code} > Objects.checkIndex(int,int) > Objects.checkFromToIndex(int,int,int) > Objects.checkFromIndexSize(int,int,int) > Arrays.mismatch(byte[],int,int,byte[],int,int) > Arrays.compareUnsigned(byte[],int,int,byte[],int,int) > Arrays.equal(byte[],int,int,byte[],int,int) > // did not add char/int/long/short/etc but of course its
[jira] [Commented] (LUCENE-7966) build mr-jar and use some java 9 methods if available
[ https://issues.apache.org/jira/browse/LUCENE-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169838#comment-16169838 ] Uwe Schindler commented on LUCENE-7966: --- When thinking last night about the whole thing a bit more, I had a cool idea: Currently we use ASM to generate the stub files to compile against (see my Github repo). On top of these stubs we use a "wrapper class" that just delegates all methods to the Java 9 one. IMHO, this is not nice for the optimizer (although it can handle that). But the oal.util.FutureObjects/FutureArrays classes just contain the same signatures as their Java 9 variants would contain. So my idea is to use ASM to patch all classes: - Use a groovy script that runs on the compiler output, before building the JAR file - Load each class with ASM and use ASM's rewriter functionality to change the classname of all occurrences of oal.util.FutureObjects/FutureArrays and replace them with java.util.Objects/Arrays. Whenever a class file matches this pattern, we patch it using ASM and write it out to the META-INF folder as the Java 9 variant. - Then package the MR jar. The good thing: - we don't need stub files to compile with Java 8. We just need the smoke tester to verify that the patched class files actually resolve against Java 9 during the Java 9 checks - we have no license issues, because we don't need to generate and commit the stubs. In our source files we solely use oal.util.FutureObjects/FutureArrays. Adapting to Java 9 is done by constant pool renaming :-) What do you think? I will try this variant a bit later today. We can use the same approach for other Java 9 classes, too! Maybe this also helps with the issues Mike has seen (I am not happy to have the delegator class). 
> build mr-jar and use some java 9 methods if available > - > > Key: LUCENE-7966 > URL: https://issues.apache.org/jira/browse/LUCENE-7966 > Project: Lucene - Core > Issue Type: Improvement > Components: general/build >Reporter: Robert Muir > Attachments: LUCENE-7966.patch, LUCENE-7966.patch, LUCENE-7966.patch, > LUCENE-7966.patch, LUCENE-7966.patch > > > See background: http://openjdk.java.net/jeps/238 > It would be nice to use some of the newer array methods and range checking > methods in java 9 for example, without waiting for lucene 10 or something. If > we build an MR-jar, we can start migrating our code to use java 9 methods > right now, it will use optimized methods from java 9 when thats available, > otherwise fall back to java 8 code. > This patch adds: > {code} > Objects.checkIndex(int,int) > Objects.checkFromToIndex(int,int,int) > Objects.checkFromIndexSize(int,int,int) > Arrays.mismatch(byte[],int,int,byte[],int,int) > Arrays.compareUnsigned(byte[],int,int,byte[],int,int) > Arrays.equal(byte[],int,int,byte[],int,int) > // did not add char/int/long/short/etc but of course its possible if needed > {code} > It sets these up in {{org.apache.lucene.future}} as 1-1 mappings to java > methods. This way, we can simply directly replace call sites with java 9 > methods when java 9 is a minimum. Simple 1-1 mappings mean also that we only > have to worry about testing that our java 8 fallback methods work. > I found that many of the current byte array methods today are willy-nilly and > very lenient for example, passing invalid offsets at times and relying on > compare methods not throwing exceptions, etc. I fixed all the instances in > core/codecs but have not looked at the problems with AnalyzingSuggester. Also > SimpleText still uses a silly method in ArrayUtil in similar crazy way, have > not removed that one yet. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
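The constant-pool renaming step proposed in the comments above can be sketched with ASM's ClassRemapper, the utility Uwe links. This is a rough, hedged illustration that assumes the ASM core and commons jars on the classpath; the real implementation would be a Groovy snippet in the Lucene build, and the internal names below simply mirror the classes named in the comment:

```java
import java.util.HashMap;
import java.util.Map;

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.commons.ClassRemapper;
import org.objectweb.asm.commons.SimpleRemapper;

public class Java9Patcher {
    // Rewrite all references to the Java 8 fallback classes so they point
    // at the real Java 9 classes (internal names use '/' separators).
    static byte[] patch(byte[] classFile) {
        Map<String, String> mapping = new HashMap<>();
        mapping.put("org/apache/lucene/util/FutureObjects", "java/util/Objects");
        mapping.put("org/apache/lucene/util/FutureArrays", "java/util/Arrays");
        ClassReader reader = new ClassReader(classFile);
        ClassWriter writer = new ClassWriter(0);
        // ClassRemapper feeds every class reference through the mapping,
        // effectively renaming entries in the constant pool.
        reader.accept(new ClassRemapper(writer, new SimpleRemapper(mapping)), 0);
        // The patched bytes would then be written under
        // META-INF/versions/9/... when packaging the MR-JAR.
        return writer.toByteArray();
    }
}
```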
[jira] [Commented] (LUCENE-7966) build mr-jar and use some java 9 methods if available
[ https://issues.apache.org/jira/browse/LUCENE-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169835#comment-16169835 ] Uwe Schindler commented on LUCENE-7966: --- [~mikemccand]: A stupid question: Did you do the benchmark on Java 9 using the JAR file? If you did it with the class-files-only classpath, it won't use any Java 9 features, so you won't see any speed improvement. MR-JARs only take effect when they are used as actual JAR files. Just placing the files in META-INF subdirectories of a files-only classpath won't activate them! > build mr-jar and use some java 9 methods if available > - > > Key: LUCENE-7966 > URL: https://issues.apache.org/jira/browse/LUCENE-7966 > Project: Lucene - Core > Issue Type: Improvement > Components: general/build >Reporter: Robert Muir > Attachments: LUCENE-7966.patch, LUCENE-7966.patch, LUCENE-7966.patch, > LUCENE-7966.patch, LUCENE-7966.patch > > > See background: http://openjdk.java.net/jeps/238 > It would be nice to use some of the newer array methods and range checking > methods in java 9 for example, without waiting for lucene 10 or something. If > we build an MR-jar, we can start migrating our code to use java 9 methods > right now, it will use optimized methods from java 9 when thats available, > otherwise fall back to java 8 code. > This patch adds: > {code} > Objects.checkIndex(int,int) > Objects.checkFromToIndex(int,int,int) > Objects.checkFromIndexSize(int,int,int) > Arrays.mismatch(byte[],int,int,byte[],int,int) > Arrays.compareUnsigned(byte[],int,int,byte[],int,int) > Arrays.equal(byte[],int,int,byte[],int,int) > // did not add char/int/long/short/etc but of course its possible if needed > {code} > It sets these up in {{org.apache.lucene.future}} as 1-1 mappings to java > methods. This way, we can simply directly replace call sites with java 9 > methods when java 9 is a minimum. Simple 1-1 mappings mean also that we only > have to worry about testing that our java 8 fallback methods work. 
> I found that many of the current byte array methods today are willy-nilly and > very lenient for example, passing invalid offsets at times and relying on > compare methods not throwing exceptions, etc. I fixed all the instances in > core/codecs but have not looked at the problems with AnalyzingSuggester. Also > SimpleText still uses a silly method in ArrayUtil in similar crazy way, have > not removed that one yet. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
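For context on what the 1-1 fallback mappings above involve, here is a hedged Java 8 sketch of one of them, mirroring the semantics of Java 9's Arrays.mismatch(byte[],int,int,byte[],int,int). The class name is made up, and the range-validation checks that the real fallback classes would need (matching Java 9's exception behavior) are omitted for brevity:

```java
public final class FutureArraysSketch {
    private FutureArraysSketch() {}

    // Java 8 fallback mirroring Arrays.mismatch(byte[],int,int,byte[],int,int):
    // returns the relative index of the first differing byte, -1 if the ranges
    // are identical, or the shorter range's length if one is a proper prefix.
    static int mismatch(byte[] a, int aFrom, int aTo, byte[] b, int bFrom, int bTo) {
        int aLen = aTo - aFrom;
        int bLen = bTo - bFrom;
        int len = Math.min(aLen, bLen);
        for (int i = 0; i < len; i++) {
            if (a[aFrom + i] != b[bFrom + i]) {
                return i;
            }
        }
        return aLen == bLen ? -1 : len;
    }
}
```

Because the signature matches java.util.Arrays one-to-one, the call sites need no change when the MR-JAR routes Java 9 runtimes to the intrinsified JDK method.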
[jira] [Commented] (LUCENE-7966) build mr-jar and use some java 9 methods if available
[ https://issues.apache.org/jira/browse/LUCENE-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169827#comment-16169827 ] Michael McCandless commented on LUCENE-7966: > Mike, is it possible the benchmark didn't warm up here (or maybe something > happened with the 7.x backport?). I'm not sure what happened ... the bench should have been "hot": plenty of RAM on the box, and I ran each case twice. I did also run in a virtual env (EC2), i3.16xlarge instance; maybe a noisy neighbor impacted results? I don't think we should let this block committing; the change otherwise seems awesome. > build mr-jar and use some java 9 methods if available > - > > Key: LUCENE-7966 > URL: https://issues.apache.org/jira/browse/LUCENE-7966 > Project: Lucene - Core > Issue Type: Improvement > Components: general/build >Reporter: Robert Muir > Attachments: LUCENE-7966.patch, LUCENE-7966.patch, LUCENE-7966.patch, > LUCENE-7966.patch, LUCENE-7966.patch > > > See background: http://openjdk.java.net/jeps/238 > It would be nice to use some of the newer array methods and range checking > methods in java 9 for example, without waiting for lucene 10 or something. If > we build an MR-jar, we can start migrating our code to use java 9 methods > right now, it will use optimized methods from java 9 when thats available, > otherwise fall back to java 8 code. > This patch adds: > {code} > Objects.checkIndex(int,int) > Objects.checkFromToIndex(int,int,int) > Objects.checkFromIndexSize(int,int,int) > Arrays.mismatch(byte[],int,int,byte[],int,int) > Arrays.compareUnsigned(byte[],int,int,byte[],int,int) > Arrays.equal(byte[],int,int,byte[],int,int) > // did not add char/int/long/short/etc but of course its possible if needed > {code} > It sets these up in {{org.apache.lucene.future}} as 1-1 mappings to java > methods. This way, we can simply directly replace call sites with java 9 > methods when java 9 is a minimum. 
Simple 1-1 mappings mean also that we only > have to worry about testing that our java 8 fallback methods work. > I found that many of the current byte array methods today are willy-nilly and > very lenient for example, passing invalid offsets at times and relying on > compare methods not throwing exceptions, etc. I fixed all the instances in > core/codecs but have not looked at the problems with AnalyzingSuggester. Also > SimpleText still uses a silly method in ArrayUtil in similar crazy way, have > not removed that one yet. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1389 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1389/ 5 tests failed. FAILED: org.apache.lucene.spatial3d.TestGeo3DPoint.testRandomMedium Error Message: Test abandoned because suite timeout was reached. Stack Trace: java.lang.Exception: Test abandoned because suite timeout was reached. at __randomizedtesting.SeedInfo.seed([E92118A36B0412C7]:0) FAILED: junit.framework.TestSuite.org.apache.lucene.spatial3d.TestGeo3DPoint Error Message: Suite timeout exceeded (>= 720 msec). Stack Trace: java.lang.Exception: Suite timeout exceeded (>= 720 msec). at __randomizedtesting.SeedInfo.seed([E92118A36B0412C7]:0) FAILED: org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload Error Message: expected:<[{indexVersion=1505712360515,generation=2,filelist=[_fo.scf, _fo.si, _fu.fld, _fu.inf, _fu.len, _fu.pst, _fu.si, _g4.scf, _g4.si, _g7.fld, _g7.inf, _g7.len, _g7.pst, _g7.si, _ge.fld, _ge.inf, _ge.len, _ge.pst, _ge.si, _gl.fld, _gl.inf, _gl.len, _gl.pst, _gl.si, _gm.fld, _gm.inf, _gm.len, _gm.pst, _gm.si, _gn.fld, _gn.inf, _gn.len, _gn.pst, _gn.si, _go.fld, _go.inf, _go.len, _go.pst, _go.si, _gp.fld, _gp.inf, _gp.len, _gp.pst, _gp.si, _gq.scf, _gq.si, _gs.fld, _gs.inf, _gs.len, _gs.pst, _gs.si, _gt.fld, _gt.inf, _gt.len, _gt.pst, _gt.si, _gv.fld, _gv.inf, _gv.len, _gv.pst, _gv.si, _gw.fld, _gw.inf, _gw.len, _gw.pst, _gw.si, segments_2]}]> but was:<[{indexVersion=1505712360515,generation=2,filelist=[_fo.scf, _fo.si, _fu.fld, _fu.inf, _fu.len, _fu.pst, _fu.si, _g4.scf, _g4.si, _g7.fld, _g7.inf, _g7.len, _g7.pst, _g7.si, _ge.fld, _ge.inf, _ge.len, _ge.pst, _ge.si, _gl.fld, _gl.inf, _gl.len, _gl.pst, _gl.si, _gm.fld, _gm.inf, _gm.len, _gm.pst, _gm.si, _gn.fld, _gn.inf, _gn.len, _gn.pst, _gn.si, _go.fld, _go.inf, _go.len, _go.pst, _go.si, _gp.fld, _gp.inf, _gp.len, _gp.pst, _gp.si, _gq.scf, _gq.si, _gs.fld, _gs.inf, _gs.len, _gs.pst, _gs.si, _gt.fld, _gt.inf, _gt.len, _gt.pst, _gt.si, _gv.fld, _gv.inf, _gv.len, _gv.pst, _gv.si, _gw.fld, _gw.inf, 
_gw.len, _gw.pst, _gw.si, segments_2]}, {indexVersion=1505712360515,generation=3,filelist=[_ge.fld, _ge.inf, _ge.len, _ge.pst, _ge.si, _gr.scf, _gr.si, _gu.fld, _gu.inf, _gu.len, _gu.pst, _gu.si, _gx.scf, _gx.si, segments_3]}]> Stack Trace: java.lang.AssertionError: expected:<[{indexVersion=1505712360515,generation=2,filelist=[_fo.scf, _fo.si, _fu.fld, _fu.inf, _fu.len, _fu.pst, _fu.si, _g4.scf, _g4.si, _g7.fld, _g7.inf, _g7.len, _g7.pst, _g7.si, _ge.fld, _ge.inf, _ge.len, _ge.pst, _ge.si, _gl.fld, _gl.inf, _gl.len, _gl.pst, _gl.si, _gm.fld, _gm.inf, _gm.len, _gm.pst, _gm.si, _gn.fld, _gn.inf, _gn.len, _gn.pst, _gn.si, _go.fld, _go.inf, _go.len, _go.pst, _go.si, _gp.fld, _gp.inf, _gp.len, _gp.pst, _gp.si, _gq.scf, _gq.si, _gs.fld, _gs.inf, _gs.len, _gs.pst, _gs.si, _gt.fld, _gt.inf, _gt.len, _gt.pst, _gt.si, _gv.fld, _gv.inf, _gv.len, _gv.pst, _gv.si, _gw.fld, _gw.inf, _gw.len, _gw.pst, _gw.si, segments_2]}]> but was:<[{indexVersion=1505712360515,generation=2,filelist=[_fo.scf, _fo.si, _fu.fld, _fu.inf, _fu.len, _fu.pst, _fu.si, _g4.scf, _g4.si, _g7.fld, _g7.inf, _g7.len, _g7.pst, _g7.si, _ge.fld, _ge.inf, _ge.len, _ge.pst, _ge.si, _gl.fld, _gl.inf, _gl.len, _gl.pst, _gl.si, _gm.fld, _gm.inf, _gm.len, _gm.pst, _gm.si, _gn.fld, _gn.inf, _gn.len, _gn.pst, _gn.si, _go.fld, _go.inf, _go.len, _go.pst, _go.si, _gp.fld, _gp.inf, _gp.len, _gp.pst, _gp.si, _gq.scf, _gq.si, _gs.fld, _gs.inf, _gs.len, _gs.pst, _gs.si, _gt.fld, _gt.inf, _gt.len, _gt.pst, _gt.si, _gv.fld, _gv.inf, _gv.len, _gv.pst, _gv.si, _gw.fld, _gw.inf, _gw.len, _gw.pst, _gw.si, segments_2]}, {indexVersion=1505712360515,generation=3,filelist=[_ge.fld, _ge.inf, _ge.len, _ge.pst, _ge.si, _gr.scf, _gr.si, _gu.fld, _gu.inf, _gu.len, _gu.pst, _gu.si, _gx.scf, _gx.si, segments_3]}]> at __randomizedtesting.SeedInfo.seed([6DC009232B4B7A1B:481712135B037418]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at 
org.junit.Assert.assertEquals(Assert.java:147) at org.apache.solr.handler.TestReplicationHandler.doTestReplicateAfterCoreReload(TestReplicationHandler.java:1277) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:98
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 20492 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20492/ Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseSerialGC 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.analytics.facet.RangeFacetCloudTest Error Message: Could not find collection:collection1 Stack Trace: java.lang.AssertionError: Could not find collection:collection1 at __randomizedtesting.SeedInfo.seed([74439A08012DD9A1]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155) at org.apache.solr.analytics.facet.AbstractAnalyticsFacetCloudTest.setupCluster(AbstractAnalyticsFacetCloudTest.java:59) at org.apache.solr.analytics.facet.RangeFacetCloudTest.beforeClass(RangeFacetCloudTest.java:49) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest Error Message: Timeout waiting for all live and active Stack Trace: java.lang.AssertionError: Timeout waiting for all live and active at __randomizedtesting.SeedInfo.seed([AE2E1709CD288301:2D5848FB1B518DA0]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest(TestCloudRecovery.java:184) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.
[jira] [Commented] (SOLR-11361) After Restarting Solr 6.6.1 Seems to cause Error if Application is Reading/Writing?
[ https://issues.apache.org/jira/browse/SOLR-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16169739#comment-16169739 ] Jan Høydahl commented on SOLR-11361: Here are instructions for signing up to the mailing list: http://lucene.apache.org/solr/community.html#mailing-lists-irc Then later if it turns out to be a bug, we can create a new issue. > After Restarting Solr 6.6.1 Seems to cause Error if Application is > Reading/Writing? > --- > > Key: SOLR-11361 > URL: https://issues.apache.org/jira/browse/SOLR-11361 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6.1 > Environment: Windows 10 VM >Reporter: Richard Rominger > Labels: newbie, upgrade, windows > > I have just updated from Solr 6.2.1 to 6.6.1. I put into place a fresh 6.6.1 > and mounted our Core (umslogs). This loaded perfectly fine on port 8181 and > our application is able to write/read data. > The problem started when I restart Solr 6.6.1 and the below error appeared > after Solr 6.6.1 came up accessible via the web page. > > *HttpSolrCall > null:org.apache.solr.core.SolrCoreInitializationException: SolrCore 'umslogs' > is not available due to init failure: null * > Next my testing lead me to start up Solr on port 8282 that no application is > connecting/reading/writing to. On this test umslogs core loads is perfectly > fine after erroring above. > Next my testing lead me to close +our application+ that writes/reads to Solr > 8181umslogs core and shutdown Solr 8282 umslogs core. Then I restarted > Solr back on Poret 8181 and the umslogs core loads properly and our > application that that writes/reads to Solr 8181 is once again operational. > Our application has used Solr 4.10.x, then Solr 6.2.x okay. 
Then again I do > not doubt that I might have done something wrong with the 6.6.1 upgrade that > is causing the above behavior -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
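The `SolrCoreInitializationException` above surfaces through Solr's CoreAdmin STATUS API, which reports cores that failed to load in an `initFailures` section. Below is a minimal, hypothetical sketch of inspecting such a response for failed cores; the sample payload is illustrative, not captured from a real server.

```python
# Inspect a CoreAdmin STATUS response (/solr/admin/cores?action=STATUS&wt=json)
# for cores that failed to initialize. The sample dict below mimics the
# response shape; the failure message is abbreviated, not a real log line.

def init_failures(status_response):
    """Return a dict of core name -> init failure message (empty if none)."""
    return status_response.get("initFailures", {})

sample = {
    "responseHeader": {"status": 0},
    "initFailures": {
        "umslogs": "org.apache.solr.core.SolrCoreInitializationException: ..."
    },
    "status": {},
}

for core, reason in init_failures(sample).items():
    print(f"core {core!r} failed to init: {reason}")
```

Checking `initFailures` after a restart is a quick way to see the underlying cause that the admin UI collapses into "not available due to init failure: null".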
[jira] [Updated] (LUCENE-7863) Don't repeat postings (and perhaps positions) on ReverseWF, EdgeNGram, etc
[ https://issues.apache.org/jira/browse/LUCENE-7863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated LUCENE-7863: - Attachment: LUCENE-7863.patch [^LUCENE-7863.patch] properly instantiates the offset buffer. > Don't repeat postings (and perhaps positions) on ReverseWF, EdgeNGram, etc > > > Key: LUCENE-7863 > URL: https://issues.apache.org/jira/browse/LUCENE-7863 > Project: Lucene - Core > Issue Type: Improvement > Components: core/index > Reporter: Mikhail Khludnev > Attachments: LUCENE-7863.hazard, LUCENE-7863.patch, > LUCENE-7863.patch, LUCENE-7863.patch, LUCENE-7863.patch, LUCENE-7863.patch, > LUCENE-7863.patch, LUCENE-7863.patch, LUCENE-7863.patch > > > h2. Context > \*suffix\* and \*infix\* searches on large indexes. > h2. Problem > Obviously applying {{ReversedWildcardFilter}} doubles the index size, and I'm > shuddering to think about EdgeNGrams... > h2. Proposal > _DRY_
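For context on why {{ReversedWildcardFilter}} doubles the index: to make leading-wildcard queries fast, each token is indexed both as-is and reversed (the reversed form carries a marker character so it never collides with regular terms). A toy sketch of that token doubling, not the actual filter implementation:

```python
# Illustrative sketch of ReversedWildcardFilter's effect on the term stream:
# every input token is emitted twice, once forward and once reversed with a
# marker prefix (U+0001, mirroring Lucene's START_OF_HEADING_MARKER).

REVERSE_MARKER = "\u0001"

def reversed_wildcard_terms(tokens):
    out = []
    for tok in tokens:
        out.append(tok)                         # forward form, normal queries
        out.append(REVERSE_MARKER + tok[::-1])  # reversed form, leading wildcards
    return out

print(reversed_wildcard_terms(["lucene", "solr"]))
```

Twice as many terms means the postings are stored twice, which is exactly the duplication this issue proposes to avoid.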
[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 6903 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6903/ Java: 32bit/jdk1.8.0_144 -client -XX:+UseG1GC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.ConfigureRecoveryStrategyTest Error Message: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.core.ConfigureRecoveryStrategyTest_6F4A16A3D2D384FB-001\init-core-data-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.core.ConfigureRecoveryStrategyTest_6F4A16A3D2D384FB-001\init-core-data-001 C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.core.ConfigureRecoveryStrategyTest_6F4A16A3D2D384FB-001\init-core-data-001\snapshot_metadata: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.core.ConfigureRecoveryStrategyTest_6F4A16A3D2D384FB-001\init-core-data-001\snapshot_metadata C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.core.ConfigureRecoveryStrategyTest_6F4A16A3D2D384FB-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.core.ConfigureRecoveryStrategyTest_6F4A16A3D2D384FB-001 Stack Trace: java.io.IOException: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.core.ConfigureRecoveryStrategyTest_6F4A16A3D2D384FB-001\init-core-data-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.core.ConfigureRecoveryStrategyTest_6F4A16A3D2D384FB-001\init-core-data-001 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.core.ConfigureRecoveryStrategyTest_6F4A16A3D2D384FB-001\init-core-data-001\snapshot_metadata: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.core.ConfigureRecoveryStrategyTest_6F4A16A3D2D384FB-001\init-core-data-001\snapshot_metadata C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.core.ConfigureRecoveryStrategyTest_6F4A16A3D2D384FB-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.core.ConfigureRecoveryStrategyTest_6F4A16A3D2D384FB-001 at __randomizedtesting.SeedInfo.seed([6F4A16A3D2D384FB]:0) at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329) at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216) at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 12143 lines...] 
[junit4] Suite: org.apache.solr.core.ConfigureRecoveryStrategyTest [junit4] 2> 965654 INFO (SUITE-ConfigureRecoveryStrategyTest-seed#[6F4A16A3D2D384FB]-worker) [] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom [junit4] 2> Creating dataDir: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.core.ConfigureRecoveryStrategyTest_6F4A16A3D2D384FB-001\init-core-data-001 [junit4] 2> 965656 INFO (SUITE-ConfigureRecoveryStrategyTest-seed#[6F4A16A3D2D384FB]-worker) [] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) w/NUMERIC_DOCVALUES_SYSPROP=false [junit4] 2> 965660 INFO (SUITE-ConfigureRecoveryStrategyTest-seed#[6F4A16A3D2D384FB]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: @org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN) [junit4
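The `AccessDeniedException` during temp-dir cleanup is the classic Windows failure mode: a file that some component still holds open cannot be deleted, so the whole tree removal fails. A hedged sketch of the common workaround, retrying the removal briefly to ride out transient locks (this is not what Lucene's `IOUtils.rm` does; the test framework simply fails, as seen above):

```python
# Retry directory removal a few times; on Windows, deletes fail while a
# handle is still open, and a short retry loop often succeeds once the
# holder releases the file.

import shutil
import time

def rm_with_retries(path, attempts=5, delay=0.1):
    """Remove a directory tree, retrying on OSError; return True on success."""
    for _ in range(attempts):
        try:
            shutil.rmtree(path)
            return True
        except OSError:
            time.sleep(delay)
    return False
```

The real fix in such test failures is usually to find the unclosed resource (searcher, directory, snapshot metadata) rather than to retry, which is why these Jenkins reports are worth chasing.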
[jira] [Updated] (SOLR-10263) Different SpellcheckComponents should have their own suggestMode
[ https://issues.apache.org/jira/browse/SOLR-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Abhishek Kumar Singh updated SOLR-10263: Attachment: SOLR-10263.v2.patch Uploading the updated patch > Different SpellcheckComponents should have their own suggestMode > > > Key: SOLR-10263 > URL: https://issues.apache.org/jira/browse/SOLR-10263 > Project: Solr > Issue Type: Wish > Security Level: Public (Default Security Level. Issues are Public) > Components: spellchecker > Reporter: Abhishek Kumar Singh > Priority: Minor > Attachments: SOLR-10263.v2.patch > > > As of now, common spellcheck options are applied to all the > SpellCheckComponents. > This can create a problem in the following case: > It may be the case that we want *DirectSolrSpellChecker* to ALWAYS_SUGGEST > spellcheck suggestions, > but we may want *WordBreakSpellChecker* to suggest only if the token is not > in the index (for relevance or performance reasons) > (SUGGEST_WHEN_NOT_IN_INDEX). > *UPDATE:* Recently we also figured out that, within > {{WordBreakSolrSpellChecker}}, the {{WordBreak}} and {{WordJoin}} cases > should also have different suggestModes. > We faced this problem in our case, wherein most of the WordJoin cases are > those where the words individually are valid tokens, but what the users are > looking for is actually a combination (wordjoin) of the two tokens. > For example: > *gold mine sunglasses*: here, both *gold* and *mine* are valid tokens, but > the actual product being looked for is *goldmine sunglasses*, where > *goldmine* is a brand. > In such cases, we should recommend {{didYouMean:goldmine sunglasses}}. But > this won't be possible because we had set {{SUGGEST_WHEN_NOT_IN_INDEX}} for > {{WordBreakSolrSpellChecker}} (of which WordJoin is a part). > For this, we should have separate suggestModes for both `wordJoin` as well as > `wordBreak`. > Related changes have been made in the latest PR: > https://github.com/apache/lucene-solr/pull/218. 
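The two suggest modes discussed in SOLR-10263 can be sketched in a few lines. The mode names below follow Lucene's SuggestMode enum (`SUGGEST_ALWAYS`, `SUGGEST_WHEN_NOT_IN_INDEX`); the plain set standing in for the index and the helper function are illustrative assumptions, not Solr code.

```python
# Toy model of spellcheck suggest modes: ALWAYS suggests unconditionally,
# WHEN_NOT_IN_INDEX suggests only for tokens absent from the index.

SUGGEST_ALWAYS = "SUGGEST_ALWAYS"
SUGGEST_WHEN_NOT_IN_INDEX = "SUGGEST_WHEN_NOT_IN_INDEX"

def should_suggest(token, index_terms, mode):
    if mode == SUGGEST_ALWAYS:
        return True
    if mode == SUGGEST_WHEN_NOT_IN_INDEX:
        return token not in index_terms
    raise ValueError(f"unknown mode: {mode}")

index = {"gold", "mine", "sunglasses", "goldmine"}
# 'gold' and 'mine' are valid terms, so SUGGEST_WHEN_NOT_IN_INDEX suppresses
# the wordjoin suggestion 'goldmine sunglasses' -- the problem described above.
print(should_suggest("gold", index, SUGGEST_WHEN_NOT_IN_INDEX))
```

This is why a single shared mode is too coarse: the word-join path needs `SUGGEST_ALWAYS` behavior even when each individual token exists in the index.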
[jira] [Updated] (SOLR-10263) Different SpellcheckComponents should have their own suggestMode
[ https://issues.apache.org/jira/browse/SOLR-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Abhishek Kumar Singh updated SOLR-10263: Attachment: (was: SOLR-10263.v2.patch) > Different SpellcheckComponents should have their own suggestMode > > > Key: SOLR-10263 > URL: https://issues.apache.org/jira/browse/SOLR-10263 > Project: Solr > Issue Type: Wish > Security Level: Public (Default Security Level. Issues are Public) > Components: spellchecker > Reporter: Abhishek Kumar Singh > Priority: Minor 
[JENKINS-EA] Lucene-Solr-7.x-Linux (32bit/jdk-9-ea+181) - Build # 434 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/434/ Java: 32bit/jdk-9-ea+181 -server -XX:+UseSerialGC --illegal-access=deny 1 tests failed. FAILED: org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest Error Message: Timeout waiting for all live and active Stack Trace: java.lang.AssertionError: Timeout waiting for all live and active at __randomizedtesting.SeedInfo.seed([7EDDAE8C04CC7771:FDABF17ED2B579D0]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest(TestCloudRecovery.java:184) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) Build Log: [...truncated 12935 lines...] [junit4] Suite: org.apache.solr.cloud.TestCloudRecovery [junit4] 2> 2128878 INFO (SUITE-TestCloudRecovery-seed#[7EDDAE8C04CC7771]-worker) [] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.security.egd=f