[JENKINS] Lucene-Solr-7.0-Linux (64bit/jdk-9) - Build # 411 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.0-Linux/411/ Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseG1GC --illegal-access=deny 2 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI Error Message: Error from server at https://127.0.0.1:34239/solr/awhollynewcollection_0: {"awhollynewcollection_0":6} Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:34239/solr/awhollynewcollection_0: {"awhollynewcollection_0":6} at __randomizedtesting.SeedInfo.seed([436E0637FE7C2E45:B1B7283F84F01D0]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at 
org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:459) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9) - Build # 6937 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6937/ Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseParallelGC --illegal-access=deny 2 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI Error Message: Error from server at http://127.0.0.1:59779/solr/awhollynewcollection_0_shard1_replica_n1: ClusterState says we are the leader (http://127.0.0.1:59779/solr/awhollynewcollection_0_shard1_replica_n1), but locally we don't think so. Request came from null Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://127.0.0.1:59779/solr/awhollynewcollection_0_shard1_replica_n1: ClusterState says we are the leader (http://127.0.0.1:59779/solr/awhollynewcollection_0_shard1_replica_n1), but locally we don't think so. Request came from null at __randomizedtesting.SeedInfo.seed([13773ABA72E6D8E:4942071FA11D421B]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:539) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:993) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:459) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Comment Edited] (SOLR-10285) Skip LEADER messages when there are leader only shards
[ https://issues.apache.org/jira/browse/SOLR-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187638#comment-16187638 ] Cao Manh Dat edited comment on SOLR-10285 at 10/3/17 4:28 AM: -- Hi [~jhump], your patch looks good to me. About your TODO notes, I did some searching and found that - ElectionContext is the only place that uses OverseerAction.LEADER ( once to unset the leader and once to set it ). - The STATE_PROP used in the second case is the replica's state, which is not even used in {{SliceMutator.setShardLeader}} So your concern about "mark the shard as inactive" does not apply, right? The only problem that can occur during an upgrade is 1. A replica ( repA ) is currently the leader 2. The Overseer is very busy 3. repA issues an unset-leader operation ( which is delayed because the Overseer is busy ) 4. repA is stopped in the middle of the election process ( so the set-leader operation never executes ) 5. repA restarts with the new code and sees that it is still the leader ( the unset operation from step 3 has not been executed ), so it skips the set-leader operation. I think the case above is extremely rare, and even if it happens ( it can be fixed with the FORCE_LEADER API ), sysadmins would first have to deal with the Overseer being overwhelmed by the number of operations. > Skip LEADER messages when there are leader only shards > -- > > Key: SOLR-10285 > URL: https://issues.apache.org/jira/browse/SOLR-10285 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Cao Manh Dat > Attachments: SOLR-10285.patch, SOLR-10285.patch, SOLR-10285.patch > > > For shards which have 1 replica ( leader ) we know it doesn't need to recover > from anyone. We should short-circuit the recovery process in this case. > The motivation for this being that we will generate less state events and be > able to mark these replicas as active again without it needing to go into > 'recovering' state. > We already short circuit when you set {{-Dsolrcloud.skip.autorecovery=true}} > but that sys prop was meant for tests only. Extending this to make sure the > code short-circuits when the core knows its the only replica in the shard is > the motivation of the Jira. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
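[Editor's note] The short-circuit discussed in SOLR-10285 boils down to a single check: a shard whose only replica is the local, elected leader has nobody to recover from, so the LEADER state message and the 'recovering' phase can be skipped. A minimal sketch of that decision follows; every name here is hypothetical, not Solr's actual API:

```java
// Hypothetical sketch of the leader-only short-circuit from SOLR-10285.
// The class and method names are illustrative only, not Solr's real API.
public class LeaderOnlyShortCircuit {

    /**
     * True when the LEADER message (and the recovery round-trip through
     * the 'recovering' state) can be skipped: the shard has exactly one
     * replica and the local core is that replica's elected leader.
     */
    public static boolean canSkipLeaderMessage(int replicaCount, boolean localCoreIsLeader) {
        return replicaCount == 1 && localCoreIsLeader;
    }
}
```

The upgrade race described in the comment above is exactly the case where `localCoreIsLeader` can be stale: a queued unset-leader operation has not yet been applied, so the check passes when it arguably should not.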
[jira] [Updated] (SOLR-11425) SolrClientBuilder does not allow infinite timeout (value 0)
[ https://issues.apache.org/jira/browse/SOLR-11425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-11425: --- Attachment: SOLR-11425.patch The patch fixes a precommit violation in the javadocs and adds a simple test. > SolrClientBuilder does not allow infinite timeout (value 0) > --- > > Key: SOLR-11425 > URL: https://issues.apache.org/jira/browse/SOLR-11425 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.0 >Reporter: Peter Szantai-Kis >Assignee: Mark Miller > Attachments: SOLR-11425.patch, SOLR-11425.patch > > > [org.apache.solr.client.solrj.impl.SolrClientBuilder#withConnectionTimeout|https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/client/solrj/impl/SolrClientBuilder.java#L53] > does not allow setting a value of 0, which means an infinite timeout, even though > [RequestConfig|https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/client/config/RequestConfig.html#getConnectTimeout()], > where the value is ultimately used, does allow it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
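[Editor's note] The fix for SOLR-11425 amounts to treating 0 as a legal timeout value, since HttpClient's RequestConfig interprets a timeout of 0 as "wait forever". A minimal sketch of such a validation, using a hypothetical helper rather than the actual SolrClientBuilder code:

```java
// Hypothetical validation helper illustrating the SOLR-11425 fix.
// HttpClient's RequestConfig treats a timeout of 0 as infinite, so only
// negative values should be rejected. This is NOT the actual Solr code.
public class TimeoutArg {

    /** Returns the timeout unchanged; 0 means infinite, negatives are invalid. */
    public static int requireValidTimeout(int millis, String name) {
        if (millis < 0) {
            throw new IllegalArgumentException(
                name + " must be >= 0 (0 = infinite timeout), got " + millis);
        }
        return millis;
    }
}
```

The bug class here is a `<= 0` check where `< 0` was intended: the sentinel value that the underlying library defines as meaningful gets rejected at the builder boundary.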
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 20598 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20598/ Java: 32bit/jdk1.8.0_144 -client -XX:+UseConcMarkSweepGC 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation Error Message: 2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) Thread[id=14934, name=jetty-launcher-3227-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505) 2) Thread[id=14930, name=jetty-launcher-3227-thread-2-EventThread, state=TIMED_WAITING, 
group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) Thread[id=14934, name=jetty-launcher-3227-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at
[jira] [Commented] (SOLR-10285) Skip LEADER messages when there are leader only shards
[ https://issues.apache.org/jira/browse/SOLR-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189193#comment-16189193 ] Varun Thacker commented on SOLR-10285: -- Sounds good! > Skip LEADER messages when there are leader only shards > -- > > Key: SOLR-10285 > URL: https://issues.apache.org/jira/browse/SOLR-10285 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Cao Manh Dat > Attachments: SOLR-10285.patch, SOLR-10285.patch, SOLR-10285.patch > > > For shards which have 1 replica ( leader ) we know it doesn't need to recover > from anyone. We should short-circuit the recovery process in this case. > The motivation for this being that we will generate less state events and be > able to mark these replicas as active again without it needing to go into > 'recovering' state. > We already short circuit when you set {{-Dsolrcloud.skip.autorecovery=true}} > but that sys prop was meant for tests only. Extending this to make sure the > code short-circuits when the core knows its the only replica in the shard is > the motivation of the Jira. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 221 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/221/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 3 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI Error Message: Error from server at http://127.0.0.1:42829/solr/awhollynewcollection_0: {"awhollynewcollection_0":6} Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:42829/solr/awhollynewcollection_0: {"awhollynewcollection_0":6} at __randomizedtesting.SeedInfo.seed([322FC4B17481F353:7A5AB00572B2DCC6]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at 
org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:460) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-10285) Skip LEADER messages when there are leader only shards
[ https://issues.apache.org/jira/browse/SOLR-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189175#comment-16189175 ] ASF subversion and git services commented on SOLR-10285: Commit fd2b4f3f868276a3b3e3b0b6845d1a7309c9cad0 in lucene-solr's branch refs/heads/master from [~caomanhdat] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fd2b4f3 ] SOLR-10285: Skip LEADER messages when there are leader only shards > Skip LEADER messages when there are leader only shards > -- > > Key: SOLR-10285 > URL: https://issues.apache.org/jira/browse/SOLR-10285 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Cao Manh Dat > Attachments: SOLR-10285.patch, SOLR-10285.patch, SOLR-10285.patch > > > For shards which have 1 replica ( leader ) we know it doesn't need to recover > from anyone. We should short-circuit the recovery process in this case. > The motivation for this being that we will generate less state events and be > able to mark these replicas as active again without it needing to go into > 'recovering' state. > We already short circuit when you set {{-Dsolrcloud.skip.autorecovery=true}} > but that sys prop was meant for tests only. Extending this to make sure the > code short-circuits when the core knows its the only replica in the shard is > the motivation of the Jira. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-10285) Skip LEADER messages when there are leader only shards
[ https://issues.apache.org/jira/browse/SOLR-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cao Manh Dat updated SOLR-10285: Summary: Skip LEADER messages when there are leader only shards (was: Reduce state messages when there are leader only shards) > Skip LEADER messages when there are leader only shards > -- > > Key: SOLR-10285 > URL: https://issues.apache.org/jira/browse/SOLR-10285 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Cao Manh Dat > Attachments: SOLR-10285.patch, SOLR-10285.patch, SOLR-10285.patch > > > For shards which have 1 replica ( leader ) we know it doesn't need to recover > from anyone. We should short-circuit the recovery process in this case. > The motivation for this being that we will generate less state events and be > able to mark these replicas as active again without it needing to go into > 'recovering' state. > We already short circuit when you set {{-Dsolrcloud.skip.autorecovery=true}} > but that sys prop was meant for tests only. Extending this to make sure the > code short-circuits when the core knows its the only replica in the shard is > the motivation of the Jira. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10285) Reduce state messages when there are leader only shards
[ https://issues.apache.org/jira/browse/SOLR-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189159#comment-16189159 ] Cao Manh Dat commented on SOLR-10285: - Hi [~varunthacker], I don't see why we would have to wait for the LEADER message to be processed ( this ticket skips the LEADER message entirely ). Even if we sent the LEADER message and waited for it to be processed, we could easily get a false positive when the replica is already the leader and an unset-leader message is still sitting in the queue. > Reduce state messages when there are leader only shards > --- > > Key: SOLR-10285 > URL: https://issues.apache.org/jira/browse/SOLR-10285 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Cao Manh Dat > Attachments: SOLR-10285.patch, SOLR-10285.patch, SOLR-10285.patch > > > For shards which have 1 replica ( leader ) we know it doesn't need to recover > from anyone. We should short-circuit the recovery process in this case. > The motivation for this being that we will generate less state events and be > able to mark these replicas as active again without it needing to go into > 'recovering' state. > We already short circuit when you set {{-Dsolrcloud.skip.autorecovery=true}} > but that sys prop was meant for tests only. Extending this to make sure the > code short-circuits when the core knows its the only replica in the shard is > the motivation of the Jira. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11428) Add spline Stream Evaluator to support spline interpolation
[ https://issues.apache.org/jira/browse/SOLR-11428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-11428: -- Summary: Add spline Stream Evaluator to support spline interpolation (was: Add spline Stream Evaluator spline interpolation) > Add spline Stream Evaluator to support spline interpolation > --- > > Key: SOLR-11428 > URL: https://issues.apache.org/jira/browse/SOLR-11428 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 7.1, master (8.0) > > > The *spline* Stream Evaluator will fit a smooth curved line through a set of > points. > Syntax: > {code} > yvalues = spline(xvec, yvec) > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
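[Editor's note] For illustration, natural cubic spline interpolation of the kind this evaluator exposes can be sketched in plain Java. This is a self-contained sketch of the textbook algorithm (tridiagonal solve for the piecewise cubic coefficients), not Solr's actual implementation:

```java
// Self-contained natural cubic spline sketch (NOT Solr's code).
// Given knots (xs[i], ys[i]) with strictly increasing xs (length >= 2),
// builds piecewise cubics S_j(t) = a + b*dt + c*dt^2 + d*dt^3 that pass
// through every knot with continuous first and second derivatives.
public class NaturalCubicSpline {
    private final double[] x, a, b, c, d;

    public NaturalCubicSpline(double[] xs, double[] ys) {
        int n = xs.length - 1;                       // number of intervals
        x = xs.clone();
        a = ys.clone();
        b = new double[n];
        c = new double[n + 1];                       // c[n] = 0 (natural boundary)
        d = new double[n];
        double[] h = new double[n];
        for (int i = 0; i < n; i++) h[i] = xs[i + 1] - xs[i];
        // Tridiagonal solve for the quadratic coefficients c
        double[] alpha = new double[n];
        for (int i = 1; i < n; i++) {
            alpha[i] = 3 * (a[i + 1] - a[i]) / h[i] - 3 * (a[i] - a[i - 1]) / h[i - 1];
        }
        double[] l = new double[n + 1], mu = new double[n + 1], z = new double[n + 1];
        l[0] = 1;
        for (int i = 1; i < n; i++) {
            l[i] = 2 * (xs[i + 1] - xs[i - 1]) - h[i - 1] * mu[i - 1];
            mu[i] = h[i] / l[i];
            z[i] = (alpha[i] - h[i - 1] * z[i - 1]) / l[i];
        }
        for (int j = n - 1; j >= 0; j--) {           // back-substitution
            c[j] = z[j] - mu[j] * c[j + 1];
            b[j] = (a[j + 1] - a[j]) / h[j] - h[j] * (c[j + 1] + 2 * c[j]) / 3;
            d[j] = (c[j + 1] - c[j]) / (3 * h[j]);
        }
    }

    /** Evaluate the spline at t; t should lie within [x[0], x[n]]. */
    public double value(double t) {
        int j = x.length - 2;
        while (j > 0 && t < x[j]) j--;               // find the interval holding t
        double dt = t - x[j];
        return a[j] + dt * (b[j] + dt * (c[j] + dt * d[j]));
    }
}
```

Evaluating at any input x-value returns the corresponding y-value exactly; between knots the curve is smooth, which is the behavior the {{spline(xvec, yvec)}} syntax above describes.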
[JENKINS] Lucene-Solr-7.0-Linux (64bit/jdk-9) - Build # 410 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.0-Linux/410/ Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseG1GC --illegal-access=deny 1 tests failed. FAILED: org.apache.solr.cloud.TestTlogReplica.testOnlyLeaderIndexes Error Message: Some replicas are not in sync with leader Stack Trace: java.lang.AssertionError: Some replicas are not in sync with leader at __randomizedtesting.SeedInfo.seed([E17559B2FA44926D:FD74243F8FE1ECFE]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.TestTlogReplica.waitForReplicasCatchUp(TestTlogReplica.java:910) at org.apache.solr.cloud.TestTlogReplica.testOnlyLeaderIndexes(TestTlogReplica.java:508) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) Build Log: [...truncated 12892 lines...] [junit4] Suite: org.apache.solr.cloud.TestTlogReplica [junit4] 2> Creating dataDir:
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1450 - Still Failing!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1450/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC All tests passed Build Log: [...truncated 60306 lines...] -documentation-lint: [jtidy] Checking for broken html (such as invalid tags)... [delete] Deleting directory /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/build/jtidy_tmp [echo] Checking for broken links... [exec] [exec] Crawl/parse... [exec] [exec] Verify... [exec] [exec] file:///export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/docs/quickstart.html [exec] BAD EXTERNAL LINK: https://lucene.apache.org/solr/guide/solr-tutorial.html [exec] [exec] Broken javadocs links were found! Common root causes: [exec] * A typo of some sort for manually created links. [exec] * Public methods referencing non-public classes in their signature. BUILD FAILED /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/build.xml:826: The following error occurred while executing this line: /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/build.xml:101: The following error occurred while executing this line: /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build.xml:669: The following error occurred while executing this line: /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build.xml:682: The following error occurred while executing this line: /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/common-build.xml:2570: exec returned: 1 Total time: 96 minutes 53 seconds Build step 'Invoke Ant' marked build as failure Archiving artifacts [WARNINGS] Skipping publisher since build result is FAILURE Recording test results Email was triggered for: Failure - Any Sending email for trigger: Failure - Any - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-11430) Add lerp Stream Evaluator to support linear interpolation
Joel Bernstein created SOLR-11430: - Summary: Add lerp Stream Evaluator to support linear interpolation Key: SOLR-11430 URL: https://issues.apache.org/jira/browse/SOLR-11430 Project: Solr Issue Type: New Feature Security Level: Public (Default Security Level. Issues are Public) Reporter: Joel Bernstein The lerp Stream Evaluator supports linear interpolation: {code} yvec = lerp(xvec, yvec) {code}
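The ticket gives only the call syntax, so for background, here is what linear interpolation computes. This is a standalone Java sketch of the underlying math, not Solr's implementation; the class and method names are invented for illustration, and it assumes lerp fits a piecewise-linear function through the (xvec, yvec) sample points.

```java
// Standalone sketch of piecewise linear interpolation, the operation the
// proposed lerp Stream Evaluator exposes. Illustrative only, NOT Solr's code.
public class Lerp {
    /** Interpolate y at position x, given sample points (xs, ys) with xs sorted ascending. */
    public static double lerp(double[] xs, double[] ys, double x) {
        if (x <= xs[0]) return ys[0];                      // clamp below the range
        if (x >= xs[xs.length - 1]) return ys[ys.length - 1]; // clamp above the range
        int i = 0;
        while (xs[i + 1] < x) i++;                         // find segment [xs[i], xs[i+1]]
        double t = (x - xs[i]) / (xs[i + 1] - xs[i]);      // fractional position in segment
        return ys[i] + t * (ys[i + 1] - ys[i]);
    }

    public static void main(String[] args) {
        double[] xs = {0, 1, 2};
        double[] ys = {0, 10, 40};
        System.out.println(lerp(xs, ys, 0.5)); // halfway between ys[0]=0 and ys[1]=10
    }
}
```

Evaluating between two samples returns the straight-line blend of their y values, which is all lerp needs to do once the enclosing segment is found.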
[jira] [Commented] (SOLR-11427) DELETEREPLICA with onlyIfDown specified should succeed if the host node is not present in the live_nodes Znode
[ https://issues.apache.org/jira/browse/SOLR-11427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189098#comment-16189098 ] Varun Thacker commented on SOLR-11427: -- In my head scripts would only delete replicas which are active ( to move replicas around ) .. Only maintenance scripts or cleanup scripts could benefit from this protection. But given someone would run this script manually to cleanup old cruft, the chances of the user knowing about this flag and a bug in a cleanup script didn't feel right. But that's just me. Maybe it's more useful than I think > DELETEREPLICA with onlyIfDown specified should succeed if the host node is > not present in the live_nodes Znode > -- > > Key: SOLR-11427 > URL: https://issues.apache.org/jira/browse/SOLR-11427 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Erick Erickson >Assignee: Erick Erickson > > The title says it pretty much, so opening up for discussion: > Here's the problem. Let's say a node is killed via {{kill -9}}. The > state.json file still says it's "active", but the node is gone from > live_nodes. If the node in question never comes back, the replica's state > doesn't necessarily get switched to "down", so specifying onlyIfDown fails > with "node is active" message. This is all documented more thoroughly in > SOLR-9361. > The question is whether it's sufficient and/or safe to succeed in deleting > the replica from state.json if the state is "active" _and_ the node is NOT > present in live_nodes. > I'm assigning to myself, but others should feel free to take it.
[jira] [Assigned] (SOLR-11428) Add spline Stream Evaluator to support spline interpolation
[ https://issues.apache.org/jira/browse/SOLR-11428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein reassigned SOLR-11428: - Assignee: Joel Bernstein > Add spline Stream Evaluator to support spline interpolation > > > Key: SOLR-11428 > URL: https://issues.apache.org/jira/browse/SOLR-11428 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 7.1, master (8.0) > > > The spline Stream Evaluator will fit a smooth curved line through a set of > points. > Syntax: > {code} > yvalues = spline(xvec, yvec) > {code}
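For context on what SOLR-11428 is wrapping: a natural cubic spline fits one cubic per interval so that values, first derivatives, and second derivatives agree at the knots, with zero curvature at the endpoints. Below is a standalone Java sketch of that classic algorithm (a tridiagonal solve for the knot second derivatives), not Solr's code; Solr presumably delegates to a math library, so treat every name here as illustrative.

```java
// Standalone natural cubic spline sketch (illustrative only, not Solr's
// implementation). Construction solves a tridiagonal system for the second
// derivative at each knot; evaluation blends the interval's endpoint terms.
public class NaturalSpline {
    private final double[] x, y, y2; // y2[i] = second derivative at knot i

    public NaturalSpline(double[] x, double[] y) {
        this.x = x; this.y = y;
        int n = x.length;
        y2 = new double[n];          // natural boundary: y2[0] = y2[n-1] = 0
        double[] u = new double[n];  // forward-elimination workspace
        for (int i = 1; i < n - 1; i++) {
            double sig = (x[i] - x[i - 1]) / (x[i + 1] - x[i - 1]);
            double p = sig * y2[i - 1] + 2.0;
            y2[i] = (sig - 1.0) / p;
            u[i] = (y[i + 1] - y[i]) / (x[i + 1] - x[i]) - (y[i] - y[i - 1]) / (x[i] - x[i - 1]);
            u[i] = (6.0 * u[i] / (x[i + 1] - x[i - 1]) - sig * u[i - 1]) / p;
        }
        for (int k = n - 2; k >= 0; k--) {  // back substitution
            y2[k] = y2[k] * y2[k + 1] + u[k];
        }
    }

    /** Evaluate the spline at xq, which must lie within [x[0], x[n-1]]. */
    public double value(double xq) {
        int lo = 0, hi = x.length - 1;
        while (hi - lo > 1) {               // binary search for the interval
            int mid = (lo + hi) / 2;
            if (x[mid] > xq) hi = mid; else lo = mid;
        }
        double h = x[hi] - x[lo];
        double a = (x[hi] - xq) / h, b = (xq - x[lo]) / h;
        return a * y[lo] + b * y[hi]
            + ((a * a * a - a) * y2[lo] + (b * b * b - b) * y2[hi]) * h * h / 6.0;
    }
}
```

A quick sanity check: for collinear points all second derivatives come out zero, and the spline degenerates to the straight line through them.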
[jira] [Updated] (SOLR-11428) Add spline Stream Evaluator to support spline interpolation
[ https://issues.apache.org/jira/browse/SOLR-11428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-11428: -- Description: The *spline* Stream Evaluator will fit a smooth curved line through a set of points. Syntax: {code} yvalues = spline(xvec, yvec) {code} was: The spline Stream Evaluator will fit a smooth curved line through a set of points. Syntax: {code} yvalues = spline(xvec, yvec) {code} > Add spline Stream Evaluator to support spline interpolation > > > Key: SOLR-11428 > URL: https://issues.apache.org/jira/browse/SOLR-11428 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 7.1, master (8.0) > > > The *spline* Stream Evaluator will fit a smooth curved line through a set of > points. > Syntax: > {code} > yvalues = spline(xvec, yvec) > {code}
[jira] [Updated] (SOLR-11428) Add spline Stream Evaluator to support spline interpolation
[ https://issues.apache.org/jira/browse/SOLR-11428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-11428: -- Fix Version/s: master (8.0) 7.1 > Add spline Stream Evaluator to support spline interpolation > > > Key: SOLR-11428 > URL: https://issues.apache.org/jira/browse/SOLR-11428 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: 7.1, master (8.0) > > > The spline Stream Evaluator will fit a smooth curved line through a set of > points. > Syntax: > {code} > yvalues = spline(xvec, yvec) > {code}
[jira] [Created] (SOLR-11429) Add loess Stream Evaluator to support Local Regression interpolation
Joel Bernstein created SOLR-11429: - Summary: Add loess Stream Evaluator to support Local Regression interpolation Key: SOLR-11429 URL: https://issues.apache.org/jira/browse/SOLR-11429 Project: Solr Issue Type: New Feature Security Level: Public (Default Security Level. Issues are Public) Reporter: Joel Bernstein The loess function will fit a curved line through a set of points using the Local Regression Algorithm. Syntax: {code} yvalues = loess(xvec, yvec) {code}
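Since loess may be less familiar than spline: at each point it fits a small weighted linear regression over the nearest neighbors, with weights decaying by the tricube function, and keeps the fitted value. The following is a simplified standalone Java sketch of that idea (no robustness iterations; bandwidth as a fraction of points), not Solr's implementation; the names and the exact neighborhood rule are illustrative assumptions.

```java
// Simplified local regression (loess) sketch, illustrative only and not
// Solr's implementation: at each x[i], fit a tricube-weighted linear
// regression over the nearest neighbors and keep the fitted value.
public class Loess {
    /**
     * Smooth ys over xs; bandwidth is the fraction of points (0..1]
     * used in each local neighborhood.
     */
    public static double[] smooth(double[] xs, double[] ys, double bandwidth) {
        int n = xs.length;
        int k = Math.min(n, Math.max(2, (int) Math.ceil(bandwidth * n)));
        double[] out = new double[n];
        for (int i = 0; i < n; i++) {
            // neighborhood radius = distance to the k-th nearest point
            double[] d = new double[n];
            for (int j = 0; j < n; j++) d[j] = Math.abs(xs[j] - xs[i]);
            double[] sorted = d.clone();
            java.util.Arrays.sort(sorted);
            double dmax = sorted[k - 1];
            // weighted least squares for y = a + b*x over the neighborhood
            double sw = 0, swx = 0, swy = 0, swxx = 0, swxy = 0;
            for (int j = 0; j < n; j++) {
                if (d[j] > dmax) continue;
                double t = dmax == 0 ? 0 : d[j] / dmax;
                double w = Math.pow(1 - t * t * t, 3); // tricube weight
                sw += w; swx += w * xs[j]; swy += w * ys[j];
                swxx += w * xs[j] * xs[j]; swxy += w * xs[j] * ys[j];
            }
            double denom = sw * swxx - swx * swx;
            double b = denom == 0 ? 0 : (sw * swxy - swx * swy) / denom;
            double a = (swy - b * swx) / sw;
            out[i] = a + b * xs[i];  // fitted value at xs[i]
        }
        return out;
    }
}
```

One useful property for testing: data that is already exactly linear passes through unchanged, since every local weighted fit reproduces the line.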
[jira] [Commented] (SOLR-11427) DELETEREPLICA with onlyIfDown specified should succeed if the host node is not present in the live_nodes Znode
[ https://issues.apache.org/jira/browse/SOLR-11427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189081#comment-16189081 ] Erick Erickson commented on SOLR-11427: --- Well, the behavior has changed over time. "In the old days" with legacyCloud, the replica could reconstruct itself after it had been deleted. Scenario > shut down Solr > delete replica on the down node > bring that Solr back up The replica could recreate itself. I think there was work at one point to not let that happen if a DELETEREPLICA had been issued. Much of that behavior is behind us now so we may be dealing with some remnants of how it used to be dealt with. bq: so the script would want to delete replicas from decommissioned nodes or from a node which has replicas in down state for some reason and they don't want it? Not quite. Imagine a small typo: if (replica.state.equals("ative") == false) { delete the replica } Yeah, yeah, yeah, we can't protect users from programming errors. And that's not a great example anyway. But you get the idea. The onlyIfDown bits are an extra safeguard there. Won't delete recovering nodes or active nodes etc. > DELETEREPLICA with onlyIfDown specified should succeed if the host node is > not present in the live_nodes Znode > -- > > Key: SOLR-11427 > URL: https://issues.apache.org/jira/browse/SOLR-11427 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Erick Erickson >Assignee: Erick Erickson > > The title says it pretty much, so opening up for discussion: > Here's the problem. Let's say a node is killed via {{kill -9}}. The > state.json file still says it's "active", but the node is gone from > live_nodes. If the node in question never comes back, the replica's state > doesn't necessarily get switched to "down", so specifying onlyIfDown fails > with "node is active" message. This is all documented more thoroughly in > SOLR-9361. 
> The question is whether it's sufficient and/or safe to succeed in deleting > the replica from state.json if the state is "active" _and_ the node is NOT > present in live_nodes. > I'm assigning to myself, but others should feel free to take it.
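The fragility Erick illustrates with the "ative" typo is the classic argument for comparing enum constants instead of raw strings: a misspelled constant fails at compile time rather than silently never matching. A minimal sketch of the idea (simplified; these are not Solr's actual Replica classes):

```java
// Simplified sketch (not Solr's actual classes) of why enum comparison
// avoids the string-typo failure mode described above: a misspelled enum
// constant will not compile, while "ative" silently matches nothing.
public class ReplicaGuard {
    public enum State { ACTIVE, DOWN, RECOVERING, RECOVERY_FAILED }

    public static boolean shouldDelete(State state, boolean onlyIfDown) {
        if (onlyIfDown) {
            // extra safeguard: refuse to delete active or recovering replicas
            return state == State.DOWN;
        }
        return true;
    }
}
```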
[jira] [Created] (SOLR-11428) Add spline Stream Evaluator to support spline interpolation
Joel Bernstein created SOLR-11428: - Summary: Add spline Stream Evaluator to support spline interpolation Key: SOLR-11428 URL: https://issues.apache.org/jira/browse/SOLR-11428 Project: Solr Issue Type: New Feature Security Level: Public (Default Security Level. Issues are Public) Reporter: Joel Bernstein The spline Stream Evaluator will fit a smooth curved line through a set of points. Syntax: {code} yvalues = spline(xvec, yvec) {code}
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 20597 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20597/ Java: 32bit/jdk1.8.0_144 -server -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testParallelExecutorStream Error Message: Error from server at https://127.0.0.1:34413/solr/workQueue_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 HTTP ERROR: 404 Problem accessing /solr/workQueue_shard2_replica_n3/update. Reason: Can not find: /solr/workQueue_shard2_replica_n3/update Powered by Jetty:// 9.3.20.v20170531 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:34413/solr/workQueue_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 HTTP ERROR: 404 Problem accessing /solr/workQueue_shard2_replica_n3/update. Reason: Can not find: /solr/workQueue_shard2_replica_n3/update Powered by Jetty:// 9.3.20.v20170531 at __randomizedtesting.SeedInfo.seed([D772E37D9FC7A5AE:6A659664A6EB98F3]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:539) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:993) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233) at org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testParallelExecutorStream(StreamExpressionTest.java:7540) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at
[jira] [Commented] (SOLR-11427) DELETEREPLICA with onlyIfDown specified should succeed if the host node is not present in the live_nodes Znode
[ https://issues.apache.org/jira/browse/SOLR-11427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189070#comment-16189070 ] Varun Thacker commented on SOLR-11427: -- so the script would want to delete replicas from decommissioned nodes or from a node which has replicas in down state for some reason and they don't want it? I'm not convinced that we need a flag for such use cases ( how many users will discover and use this in practice ) but that's probably another discussion / Jira. bq. The question is whether it's sufficient and/or safe to succeed in deleting the replica from state.json if the state is "active" and the node is NOT present in live_nodes. I think that's the right thing to do. We do a cross check like this in other places today as well. bq. IIRC, at one point DELETEREPLICA failed if it couldn't connect to the Solr node that had the replica that was missing. I think it fails, but still cleans up the state. I'll have to confirm that it actually cleans up the state. Maybe it should not throw an error and delete the state? > DELETEREPLICA with onlyIfDown specified should succeed if the host node is > not present in the live_nodes Znode > -- > > Key: SOLR-11427 > URL: https://issues.apache.org/jira/browse/SOLR-11427 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Erick Erickson >Assignee: Erick Erickson > > The title says it pretty much, so opening up for discussion: > Here's the problem. Let's say a node is killed via {{kill -9}}. The > state.json file still says it's "active", but the node is gone from > live_nodes. If the node in question never comes back, the replica's state > doesn't necessarily get switched to "down", so specifying onlyIfDown fails > with "node is active" message. This is all documented more thoroughly in > SOLR-9361. 
> The question is whether it's sufficient and/or safe to succeed in deleting > the replica from state.json if the state is "active" _and_ the node is NOT > present in live_nodes. > I'm assigning to myself, but others should feel free to take it.
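The cross-check Varun endorses, treating a replica as down when either state.json says "down" or its host node is absent from live_nodes, is easy to express as a predicate. A hedged sketch with illustrative names, not Solr's implementation:

```java
import java.util.Set;

// Illustrative sketch (not Solr's code) of the live_nodes cross-check
// discussed in this thread: a replica whose state.json entry still reads
// "active" is effectively down if its host node no longer appears in the
// live_nodes znode.
public class OnlyIfDownCheck {
    public static boolean effectivelyDown(String state, String nodeName, Set<String> liveNodes) {
        return "down".equals(state) || !liveNodes.contains(nodeName);
    }
}
```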
[GitHub] lucene-solr pull request #257: SOLR-11423: Overseer queue needs a hard cap (...
GitHub user dragonsinth opened a pull request: https://github.com/apache/lucene-solr/pull/257 SOLR-11423: Overseer queue needs a hard cap (maximum size) that clients respect You can merge this pull request into a Git repository by running: $ git pull https://github.com/apache/lucene-solr jira/SOLR-11423 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/257.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #257 commit ef8e0934fb27530f0c9450b58872b2b11028f50a Author: Scott Blum Date: 2017-10-02T20:50:57Z SOLR-11423: Overseer queue needs a hard cap (maximum size) that clients respect ---
[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect
[ https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189037#comment-16189037 ] ASF GitHub Bot commented on SOLR-11423: --- GitHub user dragonsinth opened a pull request: https://github.com/apache/lucene-solr/pull/257 SOLR-11423: Overseer queue needs a hard cap (maximum size) that clients respect You can merge this pull request into a Git repository by running: $ git pull https://github.com/apache/lucene-solr jira/SOLR-11423 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/257.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #257 commit ef8e0934fb27530f0c9450b58872b2b11028f50a Author: Scott Blum Date: 2017-10-02T20:50:57Z SOLR-11423: Overseer queue needs a hard cap (maximum size) that clients respect > Overseer queue needs a hard cap (maximum size) that clients respect > --- > > Key: SOLR-11423 > URL: https://issues.apache.org/jira/browse/SOLR-11423 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Scott Blum >Assignee: Scott Blum > > When Solr gets into pathological GC thrashing states, it can fill the > overseer queue with literally thousands and thousands of queued state > changes. Many of these end up being duplicated up/down state updates. Our > production cluster has gotten to the 100k queued items level many times, and > there's nothing useful you can do at this point except manually purge the > queue in ZK. Recently, it hit 3 million queued items, at which point our > entire ZK cluster exploded. > I propose a hard cap. Any client trying to enqueue an item when a queue is > full would throw an exception. I was thinking maybe 10,000 items would be a > reasonable limit. Thoughts? 
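The behavior Scott proposes, rejecting enqueues once the queue holds a fixed maximum, can be sketched in a few lines. This is an illustrative in-memory stand-in, not the actual patch (the real overseer queue lives in ZooKeeper), and all names here are invented:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal in-memory sketch of the SOLR-11423 proposal (illustrative only,
// not the actual patch): a queue wrapper that rejects offers once a hard
// cap is reached, instead of letting the queue grow without bound.
public class CappedQueue<T> {
    private final Deque<T> queue = new ArrayDeque<>();
    private final int maxSize;

    public CappedQueue(int maxSize) { this.maxSize = maxSize; }

    public synchronized void offer(T item) {
        if (queue.size() >= maxSize) {
            // mirror the proposal: the client gets an exception rather than
            // silently flooding the overseer with duplicate state updates
            throw new IllegalStateException("queue full: " + maxSize + " items");
        }
        queue.addLast(item);
    }

    public synchronized T poll() { return queue.pollFirst(); }
}
```

The design choice being debated is exactly where that exception surfaces: at the client attempting the enqueue, so backpressure lands on the misbehaving node rather than on ZooKeeper.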
[jira] [Commented] (SOLR-10912) Adding automatic patch validation
[ https://issues.apache.org/jira/browse/SOLR-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189033#comment-16189033 ] Mano Kovacs commented on SOLR-10912: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 00s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 00s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} Check examples refer correct lucene version {color} | {color:green} 0m 06s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 0m 06s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 0m 06s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Check licenses {color} | {color:green} 0m 06s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}255m 09s{color} | {color:red} core in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 11s{color} | {color:red} solrj in the patch failed. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}272m 02s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | solr.analysis.TestWordDelimiterFilterFactory | | | solr.cloud.CdcrBootstrapTest | | | solr.cloud.DeleteStatusTest | | | solr.cloud.FullSolrCloudDistribCmdsTest | | | solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest | | | solr.cloud.LeaderElectionIntegrationTest | | | solr.cloud.ShardRoutingTest | | | solr.cloud.ShardSplitTest | | | solr.cloud.SharedFSAutoReplicaFailoverUtilsTest | | | solr.cloud.SyncSliceTest | | | solr.cloud.TestHdfsCloudBackupRestore | | | solr.cloud.TestLeaderElectionWithEmptyReplica | | | solr.cloud.TestLocalFSCloudBackupRestore | | | solr.cloud.TestTolerantUpdateProcessorCloud | | | solr.cloud.UnloadDistributedZkTest | | | solr.core.TestCorePropertiesReload | | | solr.core.TestCustomStream | | | solr.core.TestInfoStreamLogging | | | solr.core.TestSimpleTextCodec | | | solr.core.TestSolrDeletionPolicy1 | | | solr.handler.loader.JavabinLoaderTest | | | solr.handler.TestSolrConfigHandlerCloud | | | solr.handler.V2StandaloneTest | | | solr.handler.XsltUpdateRequestHandlerTest | | | solr.highlight.TestPostingsSolrHighlighter | | | solr.internal.csv.CharBufferTest | | | solr.internal.csv.CSVUtilsTest | | | solr.metrics.JvmMetricsTest | | | solr.MinimalSchemaTest | | | solr.schema.DateFieldTest | | | solr.schema.MultiTermTest | | | solr.schema.PolyFieldTest | | | solr.schema.TestBinaryField | | | solr.schema.TestBulkSchemaConcurrent | | | solr.schema.TestManagedSchema | | | solr.schema.UUIDFieldTest | | | solr.search.AnalyticsMergeStrategyTest | | | solr.search.function.distance.DistanceFunctionTest | | | solr.search.similarities.TestPerFieldSimilarity | | | solr.search.TestPayloadScoreQParserPlugin | | | solr.search.TestSolrJ | | | solr.servlet.DirectSolrConnectionTest | | | solr.servlet.HttpSolrCallGetCoreTest | | | solr.spelling.SpellingQueryConverterTest | | | 
solr.spelling.suggest.SuggesterTest | | | solr.spelling.suggest.TestHighFrequencyDictionaryFactory | | | solr.spelling.suggest.TestPhraseSuggestions | | | solr.TestHighlightDedupGrouping | | | solr.util.CircularListTest | | | solr.util.FileUtilsTest | | | solr.util.PrimUtilsTest | | | solr.util.TestUtils | | | solr.client.solrj.io.stream.StreamExpressionTest | | | solr.common.cloud.TestCollectionStateWatchers | | | solr.analysis.TestWordDelimiterFilterFactory | | | solr.cloud.CdcrBootstrapTest | | | solr.cloud.DeleteStatusTest | | | solr.cloud.FullSolrCloudDistribCmdsTest | | | solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest | | | solr.cloud.LeaderElectionIntegrationTest | | | solr.cloud.ShardRoutingTest | | | solr.cloud.ShardSplitTest | | | solr.cloud.SharedFSAutoReplicaFailoverUtilsTest | | | solr.cloud.SyncSliceTest | | | solr.cloud.TestHdfsCloudBackupRestore | | | solr.cloud.TestLeaderElectionWithEmptyReplica | | | solr.cloud.TestLocalFSCloudBackupRestore | | | solr.cloud.TestTolerantUpdateProcessorCloud | | | solr.cloud.UnloadDistributedZkTest | | | solr.core.TestCorePropertiesReload | | | solr.core.TestCustomStream | | |
[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 223 - Still unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/223/ Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseG1GC --illegal-access=deny 1 tests failed. FAILED: org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testExecutorStream Error Message: Error from server at https://127.0.0.1:56976/solr/workQueue_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 HTTP ERROR: 404 Problem accessing /solr/workQueue_shard2_replica_n3/update. Reason: Can not find: /solr/workQueue_shard2_replica_n3/update Powered by Jetty:// 9.3.20.v20170531 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:56976/solr/workQueue_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 HTTP ERROR: 404 Problem accessing /solr/workQueue_shard2_replica_n3/update. Reason: Can not find: /solr/workQueue_shard2_replica_n3/update Powered by Jetty:// 9.3.20.v20170531 at __randomizedtesting.SeedInfo.seed([89BF67B7CD937318:AB7FE64CEEF95908]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:539) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:993) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233) at org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testExecutorStream(StreamExpressionTest.java:7471) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at
[jira] [Commented] (SOLR-11427) DELETEREPLICA with onlyIfDown specified should succeed if the host node is not present in the live_nodes Znode
[ https://issues.apache.org/jira/browse/SOLR-11427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188961#comment-16188961 ] Erick Erickson commented on SOLR-11427: --- bq: So shouldn't the user know that he is deleting an active replica? Maybe, maybe not. The parent JIRA outlines all the things that are screwed up with how state is reported. The graph view of the cluster state shows the node as down. The state.json znode shows replicas on a missing node as "active" if the node was killed via, say, "kill -9". CLUSTERSTATUS reports it as "down". Then there's "gone"... IIRC, at one point DELETEREPLICA failed if it couldn't connect to the Solr node that had the replica that was missing. So if you forcibly killed a Solr instance (or pulled the plug) about the only way to clean up ZK was to hand-edit clusterstate.json (yes, a long time ago). onlyIfDown was put in as a safety valve when users wanted to be cautious (perhaps when scripting) and did _not_ want to delete active replicas (through perhaps a typo, bad scripting, whatever) but did want a way to clean up ZK. Then there was the whole bit about how to delete a replica if it was on a node that had been shut down when the DELETEREPLICA command was issued and then came back up (legacyCloud mode where the replica would recreate itself). > DELETEREPLICA with onlyIfDown specified should succeed if the host node is > not present in the live_nodes Znode > -- > > Key: SOLR-11427 > URL: https://issues.apache.org/jira/browse/SOLR-11427 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Erick Erickson >Assignee: Erick Erickson > > The title says it pretty much, so opening up for discussion: > Here's the problem. Let's say a node is killed via {{kill -9}}. The > state.json file still says it's "active", but the node is gone from > live_nodes. 
If the node in question never comes back, the replica's state > doesn't necessarily get switched to "down", so specifying onlyIfDown fails > with "node is active" message. This is all documented more thoroughly in > SOLR-9361. > The question is whether it's sufficient and/or safe to succeed in deleting > the replica from state.json if the state is "active" _and_ the node is NOT > present in live_nodes. > I'm assigning to myself, but others should feel free to take it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect
[ https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188949#comment-16188949 ] ASF GitHub Bot commented on SOLR-11423: --- Github user asfgit closed the pull request at: https://github.com/apache/lucene-solr/pull/256 > Overseer queue needs a hard cap (maximum size) that clients respect > --- > > Key: SOLR-11423 > URL: https://issues.apache.org/jira/browse/SOLR-11423 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Scott Blum >Assignee: Scott Blum > > When Solr gets into pathological GC thrashing states, it can fill the > overseer queue with literally thousands and thousands of queued state > changes. Many of these end up being duplicated up/down state updates. Our > production cluster has gotten to the 100k queued items level many times, and > there's nothing useful you can do at this point except manually purge the > queue in ZK. Recently, it hit 3 million queued items, at which point our > entire ZK cluster exploded. > I propose a hard cap. Any client trying to enqueue a item when a queue is > full would throw an exception. I was thinking maybe 10,000 items would be a > reasonable limit. Thoughts? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-10842) Move quickstart.html to Ref Guide
[ https://issues.apache.org/jira/browse/SOLR-10842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cassandra Targett resolved SOLR-10842. -- Resolution: Fixed > Move quickstart.html to Ref Guide > - > > Key: SOLR-10842 > URL: https://issues.apache.org/jira/browse/SOLR-10842 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Cassandra Targett >Assignee: Cassandra Targett >Priority: Minor > Fix For: 7.0 > > Attachments: SOLR-10842.patch > > > The Solr Quick Start at https://lucene.apache.org/solr/quickstart.html has > been problematic to keep up to date - until Ishan just updated it yesterday > for 6.6, it said "6.2.1", so hadn't been updated for several releases. > Now that the Ref Guide is in AsciiDoc format, we can easily use variables for > package versions, and it could be released as part of the Ref Guide and kept > up to date. It could also integrate links to more information on topics, and > users would already be IN the docs, so they would not need to wonder where > the docs are. > There are a few places on the site that will need to be updated to point to > the new location, but I can also put a redirect rule into .htaccess so people > are redirected to the new location if there are other links "in the wild" > that we cannot control. This allows it to be versioned also, if that becomes > necessary. > As part of this, I would like to also update the entire "Getting Started" > section of the Ref Guide, which is effectively identical to what was in the > first release of the Ref Guide back in 2009 for Solr 1.4 and is in serious > need of reconsideration. > My thought is that moving the page + redoing the Getting Started section > would be for 7.0, but if folks are excited about this idea I could move the > page for 6.6 and hold off redoing the larger section until 7.0. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] lucene-solr pull request #256: SOLR-11423: Overseer queue needs a hard cap (...
Github user asfgit closed the pull request at: https://github.com/apache/lucene-solr/pull/256 --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10842) Move quickstart.html to Ref Guide
[ https://issues.apache.org/jira/browse/SOLR-10842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188947#comment-16188947 ] ASF subversion and git services commented on SOLR-10842: Commit b501676cb4b84f3d0acea82c2ebb1d25943ab915 in lucene-solr's branch refs/heads/branch_7x from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b501676 ] SOLR-10842: replace quickstart.mdtext content with link to Ref Guide tutorial > Move quickstart.html to Ref Guide > - > > Key: SOLR-10842 > URL: https://issues.apache.org/jira/browse/SOLR-10842 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Cassandra Targett >Assignee: Cassandra Targett >Priority: Minor > Fix For: 7.0 > > Attachments: SOLR-10842.patch > > > The Solr Quick Start at https://lucene.apache.org/solr/quickstart.html has > been problematic to keep up to date - until Ishan just updated it yesterday > for 6.6, it said "6.2.1", so hadn't been updated for several releases. > Now that the Ref Guide is in AsciiDoc format, we can easily use variables for > package versions, and it could be released as part of the Ref Guide and kept > up to date. It could also integrate links to more information on topics, and > users would already be IN the docs, so they would not need to wonder where > the docs are. > There are a few places on the site that will need to be updated to point to > the new location, but I can also put a redirect rule into .htaccess so people > are redirected to the new location if there are other links "in the wild" > that we cannot control. This allows it to be versioned also, if that becomes > necessary. > As part of this, I would like to also update the entire "Getting Started" > section of the Ref Guide, which is effectively identical to what was in the > first release of the Ref Guide back in 2009 for Solr 1.4 and is in serious > need of reconsideration. 
> My thought is that moving the page + redoing the Getting Started section > would be for 7.0, but if folks are excited about this idea I could move the > page for 6.6 and hold off redoing the larger section until 7.0. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10842) Move quickstart.html to Ref Guide
[ https://issues.apache.org/jira/browse/SOLR-10842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188944#comment-16188944 ] ASF subversion and git services commented on SOLR-10842: Commit 5b3a5152bdf11758b5a02efb60568098fe825d45 in lucene-solr's branch refs/heads/master from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5b3a515 ] SOLR-10842: replace quickstart.mdtext content with link to Ref Guide tutorial > Move quickstart.html to Ref Guide > - > > Key: SOLR-10842 > URL: https://issues.apache.org/jira/browse/SOLR-10842 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Cassandra Targett >Assignee: Cassandra Targett >Priority: Minor > Fix For: 7.0 > > Attachments: SOLR-10842.patch > > > The Solr Quick Start at https://lucene.apache.org/solr/quickstart.html has > been problematic to keep up to date - until Ishan just updated it yesterday > for 6.6, it said "6.2.1", so hadn't been updated for several releases. > Now that the Ref Guide is in AsciiDoc format, we can easily use variables for > package versions, and it could be released as part of the Ref Guide and kept > up to date. It could also integrate links to more information on topics, and > users would already be IN the docs, so they would not need to wonder where > the docs are. > There are a few places on the site that will need to be updated to point to > the new location, but I can also put a redirect rule into .htaccess so people > are redirected to the new location if there are other links "in the wild" > that we cannot control. This allows it to be versioned also, if that becomes > necessary. > As part of this, I would like to also update the entire "Getting Started" > section of the Ref Guide, which is effectively identical to what was in the > first release of the Ref Guide back in 2009 for Solr 1.4 and is in serious > need of reconsideration. 
> My thought is that moving the page + redoing the Getting Started section > would be for 7.0, but if folks are excited about this idea I could move the > page for 6.6 and hold off redoing the larger section until 7.0. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10842) Move quickstart.html to Ref Guide
[ https://issues.apache.org/jira/browse/SOLR-10842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188929#comment-16188929 ] Cassandra Targett commented on SOLR-10842: -- bq. if we setup a "RedirectMatch Permanent" from /solr/quickstart.html to /solr/guide/solr-tutorial.html That worked as advertised, thanks. As a last thing, I'll update the quickstart.mdtext file in {{solr/site/quickstart.mdtext}} and in the website CMS to say it's moved, so anyone not paying close attention to all these details will be able to figure out what's going on later. > Move quickstart.html to Ref Guide > - > > Key: SOLR-10842 > URL: https://issues.apache.org/jira/browse/SOLR-10842 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Cassandra Targett >Assignee: Cassandra Targett >Priority: Minor > Fix For: 7.0 > > Attachments: SOLR-10842.patch > > > The Solr Quick Start at https://lucene.apache.org/solr/quickstart.html has > been problematic to keep up to date - until Ishan just updated it yesterday > for 6.6, it said "6.2.1", so hadn't been updated for several releases. > Now that the Ref Guide is in AsciiDoc format, we can easily use variables for > package versions, and it could be released as part of the Ref Guide and kept > up to date. It could also integrate links to more information on topics, and > users would already be IN the docs, so they would not need to wonder where > the docs are. > There are a few places on the site that will need to be updated to point to > the new location, but I can also put a redirect rule into .htaccess so people > are redirected to the new location if there are other links "in the wild" > that we cannot control. This allows it to be versioned also, if that becomes > necessary. 
> As part of this, I would like to also update the entire "Getting Started" > section of the Ref Guide, which is effectively identical to what was in the > first release of the Ref Guide back in 2009 for Solr 1.4 and is in serious > need of reconsideration. > My thought is that moving the page + redoing the Getting Started section > would be for 7.0, but if folks are excited about this idea I could move the > page for 6.6 and hold off redoing the larger section until 7.0. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
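The redirect discussed in the comment above would look roughly like the following in Apache .htaccess syntax. This is a sketch based on the paths quoted in the thread ("RedirectMatch Permanent" from /solr/quickstart.html to /solr/guide/solr-tutorial.html); the exact rule deployed on the site may differ.

```apache
# Permanently redirect the old quickstart page to the Ref Guide tutorial,
# so links "in the wild" keep working after the page moves.
RedirectMatch Permanent ^/solr/quickstart\.html$ /solr/guide/solr-tutorial.html
```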
[jira] [Commented] (LUCENE-6513) Allow limits on SpanMultiTermQueryWrapper expansion
[ https://issues.apache.org/jira/browse/LUCENE-6513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188913#comment-16188913 ] Timothy M. Rodriguez commented on LUCENE-6513: -- Apologies for the late alternative implementation. For what it's worth, we've been utilizing this patch for about a year and it's helped improve responsiveness to queries while limiting the expansions. > Allow limits on SpanMultiTermQueryWrapper expansion > --- > > Key: LUCENE-6513 > URL: https://issues.apache.org/jira/browse/LUCENE-6513 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Alan Woodward >Priority: Minor > Attachments: LUCENE-6513.patch, LUCENE-6513.patch, LUCENE-6513.patch, > LUCENE-6513.patch > > > SpanMultiTermQueryWrapper currently rewrites to a SpanOrQuery with as many > clauses as there are matching terms. It would be nice to be able to limit > this in a slightly nicer way than using TopTerms, which for most queries just > translates to a lexicographical ordering. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
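The kind of cap discussed in LUCENE-6513 — failing a multi-term rewrite once it produces too many clauses, rather than silently keeping an arbitrary (e.g. lexicographic) top-N — can be illustrated with a plain-Java sketch. The names here are hypothetical and this is not Lucene's actual rewrite API; it only shows the limit-then-fail behavior under discussion.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a wildcard-style query expands into concrete terms,
// but the expansion fails fast once it exceeds a hard limit, instead of
// truncating to a top-N chosen by term order.
public class LimitedExpansion {
    static List<String> expand(List<String> matchingTerms, int maxExpansions) {
        if (matchingTerms.size() > maxExpansions) {
            throw new IllegalStateException(
                "query expands to " + matchingTerms.size()
                + " clauses, limit is " + maxExpansions);
        }
        return new ArrayList<>(matchingTerms);
    }

    public static void main(String[] args) {
        List<String> terms = List.of("span", "spans", "spanner");
        System.out.println(expand(terms, 10).size()); // within the limit
        try {
            expand(terms, 2); // over the limit: rejected, not truncated
        } catch (IllegalStateException e) {
            System.out.println("rejected");
        }
    }
}
```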
[jira] [Reopened] (SOLR-11278) Fix race in cdcr bootstrap process
[ https://issues.apache.org/jira/browse/SOLR-11278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker reopened SOLR-11278: -- Still seeing failures on Jenkins > Fix race in cdcr bootstrap process > -- > > Key: SOLR-11278 > URL: https://issues.apache.org/jira/browse/SOLR-11278 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 6.6.1, 7.0 >Reporter: Amrit Sarkar >Assignee: Varun Thacker >Priority: Critical > Labels: test > Fix For: 7.1 > > Attachments: master-bs.patch, SOLR-11278-awaits-fix.patch, > SOLR-11278-cancel-bootstrap-on-stop.patch, SOLR-11278.patch, > SOLR-11278.patch, SOLR-11278.patch, SOLR-11278.patch, test_results > > > {{CdcrBootstrapTest}} is failing while running beasts for significant > iterations. > The bootstrapping is failing in the test, after the first batch is indexed > for each {{testmethod}}, which results in documents mismatch :: > {code} > [beaster] 2> 39167 ERROR > (updateExecutor-39-thread-1-processing-n:127.0.0.1:42155_solr > x:cdcr-target_shard1_replica_n1 s:shard1 c:cdcr-target r:core_node2) > [n:127.0.0.1:42155_solr c:cdcr-target s:shard1 r:core_node2 > x:cdcr-target_shard1_replica_n1] o.a.s.h.CdcrRequestHandler Bootstrap > operation failed > [beaster] 2> java.util.concurrent.ExecutionException: > java.lang.AssertionError > [beaster] 2> at > java.util.concurrent.FutureTask.report(FutureTask.java:122) > [beaster] 2> at > java.util.concurrent.FutureTask.get(FutureTask.java:192) > [beaster] 2> at > org.apache.solr.handler.CdcrRequestHandler.lambda$handleBootstrapAction$0(CdcrRequestHandler.java:654) > [beaster] 2> at > com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176) > [beaster] 2> at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [beaster] 2> at > java.util.concurrent.FutureTask.run(FutureTask.java:266) > [beaster] 2> at > 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188) > [beaster] 2> at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [beaster] 2> at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [beaster] 2> at java.lang.Thread.run(Thread.java:748) > [beaster] 2> Caused by: java.lang.AssertionError > [beaster] 2> at > org.apache.solr.handler.CdcrRequestHandler$BootstrapCallable.call(CdcrRequestHandler.java:813) > [beaster] 2> at > org.apache.solr.handler.CdcrRequestHandler$BootstrapCallable.call(CdcrRequestHandler.java:724) > [beaster] 2> at > com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197) > [beaster] 2> ... 5 more > {code} > {code} > [beaster] [01:37:16.282] FAILURE 153s | > CdcrBootstrapTest.testBootstrapWithSourceCluster <<< > [beaster]> Throwable #1: java.lang.AssertionError: Document mismatch on > target after sync expected:<2000> but was:<1000> > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect
[ https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1617#comment-1617 ] Scott Blum commented on SOLR-11423: --- Patch passes tests for me. > Overseer queue needs a hard cap (maximum size) that clients respect > --- > > Key: SOLR-11423 > URL: https://issues.apache.org/jira/browse/SOLR-11423 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Scott Blum >Assignee: Scott Blum > > When Solr gets into pathological GC thrashing states, it can fill the > overseer queue with literally thousands and thousands of queued state > changes. Many of these end up being duplicated up/down state updates. Our > production cluster has gotten to the 100k queued items level many times, and > there's nothing useful you can do at this point except manually purge the > queue in ZK. Recently, it hit 3 million queued items, at which point our > entire ZK cluster exploded. > I propose a hard cap. Any client trying to enqueue a item when a queue is > full would throw an exception. I was thinking maybe 10,000 items would be a > reasonable limit. Thoughts? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4203 - Still unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4203/ Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseSerialGC --illegal-access=deny 1 tests failed. FAILED: org.apache.solr.cloud.TestTlogReplica.testOnlyLeaderIndexes Error Message: expected:<2> but was:<1> Stack Trace: java.lang.AssertionError: expected:<2> but was:<1> at __randomizedtesting.SeedInfo.seed([6D84636BF133F509:71851EE684968B9A]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.solr.cloud.TestTlogReplica.assertCopyOverOldUpdates(TestTlogReplica.java:909) at org.apache.solr.cloud.TestTlogReplica.testOnlyLeaderIndexes(TestTlogReplica.java:501) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) Build Log: [...truncated 11918
[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect
[ https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188867#comment-16188867 ] Scott Blum commented on SOLR-11423: --- [~noble.paul] I went with 20k. We could make it configurable, but again, I intend this to be a general purpose safety valve to protect Zookeeper from exploding, which makes me think we could pick a universal value. > Overseer queue needs a hard cap (maximum size) that clients respect > --- > > Key: SOLR-11423 > URL: https://issues.apache.org/jira/browse/SOLR-11423 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Scott Blum >Assignee: Scott Blum > > When Solr gets into pathological GC thrashing states, it can fill the > overseer queue with literally thousands and thousands of queued state > changes. Many of these end up being duplicated up/down state updates. Our > production cluster has gotten to the 100k queued items level many times, and > there's nothing useful you can do at this point except manually purge the > queue in ZK. Recently, it hit 3 million queued items, at which point our > entire ZK cluster exploded. > I propose a hard cap. Any client trying to enqueue a item when a queue is > full would throw an exception. I was thinking maybe 10,000 items would be a > reasonable limit. Thoughts? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
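The safety valve described above — reject new items once the queue holds a fixed maximum — can be sketched in plain Java. This is a hypothetical in-memory illustration only; Solr's actual overseer queue is backed by ZooKeeper znodes, and the real patch enforces the cap on the client side of that queue.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of a hard-capped queue: offering an item when the
// queue is full throws, so a thrashing client gets an error instead of
// the backlog growing to millions of entries.
public class CappedQueue<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int maxSize;

    public CappedQueue(int maxSize) {
        this.maxSize = maxSize;
    }

    public void offer(T item) {
        if (items.size() >= maxSize) {
            throw new IllegalStateException("queue full (" + maxSize + " items)");
        }
        items.addLast(item);
    }

    public static void main(String[] args) {
        CappedQueue<String> q = new CappedQueue<>(2);
        q.offer("state-update-1");
        q.offer("state-update-2");
        try {
            q.offer("state-update-3"); // third offer exceeds the cap
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```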
[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect
[ https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188866#comment-16188866 ] ASF GitHub Bot commented on SOLR-11423: --- GitHub user dragonsinth opened a pull request: https://github.com/apache/lucene-solr/pull/256 SOLR-11423: Overseer queue needs a hard cap (maximum size) that clients respect You can merge this pull request into a Git repository by running: $ git pull https://github.com/apache/lucene-solr SOLR-11423 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/256.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #256 commit ef8e0934fb27530f0c9450b58872b2b11028f50a Author: Scott Blum Date: 2017-10-02T20:50:57Z SOLR-11423: Overseer queue needs a hard cap (maximum size) that clients respect > Overseer queue needs a hard cap (maximum size) that clients respect > --- > > Key: SOLR-11423 > URL: https://issues.apache.org/jira/browse/SOLR-11423 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Scott Blum >Assignee: Scott Blum > > When Solr gets into pathological GC thrashing states, it can fill the > overseer queue with literally thousands and thousands of queued state > changes. Many of these end up being duplicated up/down state updates. Our > production cluster has gotten to the 100k queued items level many times, and > there's nothing useful you can do at this point except manually purge the > queue in ZK. Recently, it hit 3 million queued items, at which point our > entire ZK cluster exploded. > I propose a hard cap. Any client trying to enqueue a item when a queue is > full would throw an exception. I was thinking maybe 10,000 items would be a > reasonable limit. Thoughts? 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] lucene-solr pull request #256: SOLR-11423: Overseer queue needs a hard cap (...
GitHub user dragonsinth opened a pull request: https://github.com/apache/lucene-solr/pull/256 SOLR-11423: Overseer queue needs a hard cap (maximum size) that clients respect You can merge this pull request into a Git repository by running: $ git pull https://github.com/apache/lucene-solr SOLR-11423 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/256.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #256 commit ef8e0934fb27530f0c9450b58872b2b11028f50a Author: Scott Blum Date: 2017-10-02T20:50:57Z SOLR-11423: Overseer queue needs a hard cap (maximum size) that clients respect --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect
[ https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188807#comment-16188807 ] ASF subversion and git services commented on SOLR-11423: Commit ef8e0934fb27530f0c9450b58872b2b11028f50a in lucene-solr's branch refs/heads/SOLR-11423 from [~dragonsinth] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ef8e093 ] SOLR-11423: Overseer queue needs a hard cap (maximum size) that clients respect > Overseer queue needs a hard cap (maximum size) that clients respect > --- > > Key: SOLR-11423 > URL: https://issues.apache.org/jira/browse/SOLR-11423 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Scott Blum >Assignee: Scott Blum > > When Solr gets into pathological GC thrashing states, it can fill the > overseer queue with literally thousands and thousands of queued state > changes. Many of these end up being duplicated up/down state updates. Our > production cluster has gotten to the 100k queued items level many times, and > there's nothing useful you can do at this point except manually purge the > queue in ZK. Recently, it hit 3 million queued items, at which point our > entire ZK cluster exploded. > I propose a hard cap. Any client trying to enqueue a item when a queue is > full would throw an exception. I was thinking maybe 10,000 items would be a > reasonable limit. Thoughts? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect
[ https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188803#comment-16188803 ] Scott Blum commented on SOLR-11423: --- [~erickerickson] I have backported changes to reduce overseer task counts. This isn't an issue with normal operation. Think of this more as an automatic safety shutoff on a nuclear reactor. > Overseer queue needs a hard cap (maximum size) that clients respect > --- > > Key: SOLR-11423 > URL: https://issues.apache.org/jira/browse/SOLR-11423 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) > Components: SolrCloud > Reporter: Scott Blum > Assignee: Scott Blum > > When Solr gets into pathological GC thrashing states, it can fill the > overseer queue with literally thousands and thousands of queued state > changes. Many of these end up being duplicated up/down state updates. Our > production cluster has gotten to the 100k queued items level many times, and > there's nothing useful you can do at this point except manually purge the > queue in ZK. Recently, it hit 3 million queued items, at which point our > entire ZK cluster exploded. > I propose a hard cap. Any client trying to enqueue an item when a queue is > full would throw an exception. I was thinking maybe 10,000 items would be a > reasonable limit. Thoughts?
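The "automatic safety shutoff" Scott describes can be sketched in a few lines. The class and method names below are invented for illustration and are not Solr's actual overseer-queue code; this is just a minimal model of the proposed behavior, where an enqueue against a full queue throws instead of piling more nodes into ZooKeeper:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch (not the SOLR-11423 patch): a queue with a hard cap
// that clients respect by refusing to enqueue once the limit is reached.
class CappedQueueSketch {
    private final Deque<String> queue = new ArrayDeque<>();
    private final int maxSize; // the issue floats 10,000 as a default

    CappedQueueSketch(int maxSize) {
        this.maxSize = maxSize;
    }

    // Throws rather than letting the backlog grow without bound.
    void offer(String item) {
        if (queue.size() >= maxSize) {
            throw new IllegalStateException(
                "Overseer queue full (" + maxSize + " items); rejecting enqueue");
        }
        queue.addLast(item);
    }

    int size() {
        return queue.size();
    }
}
```

Under normal operation the cap is never hit; it only trips in the pathological GC-thrashing scenario where thousands of duplicate state updates would otherwise accumulate.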
[jira] [Commented] (SOLR-11427) DELETEREPLICA with onlyIfDown specified should succeed if the host node is not present in the live_nodes Znode
[ https://issues.apache.org/jira/browse/SOLR-11427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188793#comment-16188793 ] Varun Thacker commented on SOLR-11427: -- Hi Erick, Do you know the original motivation of "onlyIfDown"? While deleting a replica we need to specify collection, shard, and replica. So shouldn't the user know that he is deleting an active replica? The scenario that you described totally happens. So to address that, if we didn't have "onlyIfDown" in the first place then the command would try to delete the index, which would fail (core is not present), but at least we would clean up the state, which is what the user wants at this point? > DELETEREPLICA with onlyIfDown specified should succeed if the host node is > not present in the live_nodes Znode > -- > > Key: SOLR-11427 > URL: https://issues.apache.org/jira/browse/SOLR-11427 > Project: Solr > Issue Type: Bug > Security Level: Public (Default Security Level. Issues are Public) > Reporter: Erick Erickson > Assignee: Erick Erickson > > The title says it pretty much, so opening up for discussion: > Here's the problem. Let's say a node is killed via {{kill -9}}. The > state.json file still says it's "active", but the node is gone from > live_nodes. If the node in question never comes back, the replica's state > doesn't necessarily get switched to "down", so specifying onlyIfDown fails > with a "node is active" message. This is all documented more thoroughly in > SOLR-9361. > The question is whether it's sufficient and/or safe to succeed in deleting > the replica from state.json if the state is "active" _and_ the node is NOT > present in live_nodes. > I'm assigning to myself, but others should feel free to take it.
[jira] [Created] (SOLR-11427) DELETEREPLICA with onlyIfDown specified should succeed if the host node is not present in the live_nodes Znode
Erick Erickson created SOLR-11427: - Summary: DELETEREPLICA with onlyIfDown specified should succeed if the host node is not present in the live_nodes Znode Key: SOLR-11427 URL: https://issues.apache.org/jira/browse/SOLR-11427 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Reporter: Erick Erickson Assignee: Erick Erickson The title says it pretty much, so opening up for discussion: Here's the problem. Let's say a node is killed via {{kill -9}}. The state.json file still says it's "active", but the node is gone from live_nodes. If the node in question never comes back, the replica's state doesn't necessarily get switched to "down", so specifying onlyIfDown fails with a "node is active" message. This is all documented more thoroughly in SOLR-9361. The question is whether it's sufficient and/or safe to succeed in deleting the replica from state.json if the state is "active" _and_ the node is NOT present in live_nodes. I'm assigning to myself, but others should feel free to take it.
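The decision being debated reduces to one predicate. The sketch below is not Solr's actual DELETEREPLICA code; the class, method, and parameter names are made up to illustrate the proposed rule: with onlyIfDown set, allow the delete when the replica's recorded state is "down" OR its host node is missing from live_nodes (the kill -9 case, where state.json still says "active"):

```java
import java.util.Set;

// Illustrative sketch of the SOLR-11427 proposal; names are invented,
// not Solr's API.
final class OnlyIfDownCheck {
    private OnlyIfDownCheck() {}

    static boolean mayDelete(String replicaState,
                             String nodeName,
                             Set<String> liveNodes,
                             boolean onlyIfDown) {
        if (!onlyIfDown) {
            return true; // caller asked for an unconditional delete
        }
        // Current behavior: only a "down" state passes. Proposed addition:
        // also pass when the host node is absent from live_nodes, even
        // though state.json still records the replica as "active".
        return "down".equals(replicaState) || !liveNodes.contains(nodeName);
    }
}
```

With today's behavior the second disjunct is absent, which is exactly why a kill -9'd node's "active" replica fails the onlyIfDown check.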
[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 55 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/55/ 11 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest Error Message: 6 threads leaked from SUITE scope at org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest: 1) Thread[id=142598, name=zkCallback-34457-thread-1, state=TIMED_WAITING, group=TGRP-ChaosMonkeySafeLeaderWithPullReplicasTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748)2) Thread[id=142716, name=zkCallback-34457-thread-2, state=TIMED_WAITING, group=TGRP-ChaosMonkeySafeLeaderWithPullReplicasTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748)3) Thread[id=142597, name=StoppableCommitThread-EventThread, state=WAITING, 
group=TGRP-ChaosMonkeySafeLeaderWithPullReplicasTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501) 4) Thread[id=142717, name=zkCallback-34457-thread-3, state=TIMED_WAITING, group=TGRP-ChaosMonkeySafeLeaderWithPullReplicasTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748)5) Thread[id=142596, name=StoppableCommitThread-SendThread(127.0.0.1:56707), state=TIMED_WAITING, group=TGRP-ChaosMonkeySafeLeaderWithPullReplicasTest] at java.lang.Thread.sleep(Native Method) at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1051)6) Thread[id=142547, name=StoppableCommitThread, state=TIMED_WAITING, group=TGRP-ChaosMonkeySafeLeaderWithPullReplicasTest] at java.lang.Thread.sleep(Native Method) at org.apache.solr.cloud.StoppableCommitThread.run(StoppableCommitThread.java:55) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 6 threads leaked from SUITE scope at org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest: 1) Thread[id=142598, name=zkCallback-34457-thread-1, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeySafeLeaderWithPullReplicasTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at
[jira] [Commented] (LUCENE-7976) Add a parameter to TieredMergePolicy to merge segments that have more than X percent deleted documents
[ https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188704#comment-16188704 ] Nik Everett commented on LUCENE-7976: - I had this issue on a previous project. Our indices were smaller than what you are talking about, but we did have one or two of the max size segments that refused to merge away their deleted documents until they got to 50%. We had a fairly high update rate and a very high query rate. The deleted documents bloated the working set size somewhat, causing more IO, which was our bottleneck at the time. I would have been happy to pay for the increased merge IO to have lower query-time IO. We ultimately solved the problem by throwing money at it. More RAM and better SSDs make life much easier. I would have liked to have solved the problem in software, but as a very infrequent contributor I didn't feel like I'd ever get a change to TieredMergePolicy merged. > Add a parameter to TieredMergePolicy to merge segments that have more than X > percent deleted documents > -- > > Key: LUCENE-7976 > URL: https://issues.apache.org/jira/browse/LUCENE-7976 > Project: Lucene - Core > Issue Type: Improvement > Reporter: Erick Erickson > > We're seeing situations "in the wild" where there are very large indexes (on > disk) handled quite easily in a single Lucene index. This is particularly > true as features like docValues move data into MMapDirectory space. The > current TMP algorithm allows on the order of 50% deleted documents as per a > dev list conversation with Mike McCandless (and his blog here: > https://www.elastic.co/blog/lucenes-handling-of-deleted-documents). > Especially in the current era of very large indexes in aggregate (think many > TB), solutions like "you need to distribute your collection over more shards" > become very costly.
Additionally, the tempting "optimize" button exacerbates > the issue since once you form, say, a 100G segment (by > optimizing/forceMerging) it is not eligible for merging until 97.5G of the > docs in it are deleted (current default 5G max segment size). > The proposal here would be to add a new parameter to TMP, something like > (no, that's not a serious name, suggestions > welcome) which would default to 100 (or the same behavior we have now). > So if I set this parameter to, say, 20%, and the max segment size stays at > 5G, the following would happen when segments were selected for merging: > > any segment with > 20% deleted documents would be merged or rewritten NO > > MATTER HOW LARGE. There are two cases, > >> the segment has < 5G "live" docs. In that case it would be merged with > >> smaller segments to bring the resulting segment up to 5G. If no smaller > >> segments exist, it would just be rewritten. > >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). > >> It would be rewritten into a single segment removing all deleted docs no > >> matter how big it is to start. The 100G example above would be rewritten > >> to an 80G segment, for instance. > Of course this would lead to potentially much more I/O, which is why the > default would be the same behavior we see now. As it stands now, though, > there's no way to recover from an optimize/forceMerge except to re-index from > scratch. We routinely see 200G-300G Lucene indexes at this point "in the > wild" with 10s of shards replicated 3 or more times. And that doesn't even > include having these over HDFS. > Alternatives welcome! Something like the above seems minimally invasive. A > new merge policy is certainly an alternative.
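The eligibility rule floated in the issue is a single threshold comparison. The sketch below is not TieredMergePolicy code; the class, method, and parameter names are invented for illustration (the issue itself leaves the parameter name open), and it only models the "merge regardless of size above X% deleted" trigger, defaulting to 100 to preserve current behavior:

```java
// Illustrative sketch of the LUCENE-7976 proposal; names are invented,
// not Lucene's TieredMergePolicy API.
final class DeletePctEligibility {
    private DeletePctEligibility() {}

    /**
     * @param maxDoc         total docs in the segment, including deleted
     * @param numDeletedDocs deleted docs in the segment
     * @param maxDeletedPct  threshold in percent; 100 = today's behavior
     *                       (no segment ever trips the rule)
     */
    static boolean forceEligible(int maxDoc, int numDeletedDocs, double maxDeletedPct) {
        if (maxDoc == 0) {
            return false; // empty segment, nothing to reclaim
        }
        double deletedPct = 100.0 * numDeletedDocs / maxDoc;
        return deletedPct > maxDeletedPct;
    }
}
```

Under this rule, the 100G force-merged segment from the example, once 25% deleted, would be rewritten at a threshold of 20 even though it far exceeds the 5G max segment size; at the default of 100 it would wait for ~50% deletions as today.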
[VOTE] Release Lucene/Solr 7.0.1 RC1
Please vote for release candidate 1 for Lucene/Solr 7.0.1 The artifacts can be downloaded from: https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.0.1-RC1-rev8d6c3889aa543954424d8ac1dbb3f03bf207140b You can run the smoke tester directly with this command: python3 -u dev-tools/scripts/smokeTestRelease.py \ https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.0.1-RC1-rev8d6c3889aa543954424d8ac1dbb3f03bf207140b Here's my +1 [0:28:08.126321]
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9) - Build # 535 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/535/ Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseSerialGC --illegal-access=deny 1 tests failed. FAILED: org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testParallelExecutorStream Error Message: Error from server at http://127.0.0.1:40751/solr/workQueue_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 HTTP ERROR: 404 Problem accessing /solr/workQueue_shard2_replica_n3/update. Reason: Can not find: /solr/workQueue_shard2_replica_n3/update Powered by Jetty // 9.3.20.v20170531 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://127.0.0.1:40751/solr/workQueue_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 HTTP ERROR: 404 Problem accessing /solr/workQueue_shard2_replica_n3/update. Reason: Can not find: /solr/workQueue_shard2_replica_n3/update Powered by Jetty // 9.3.20.v20170531 at __randomizedtesting.SeedInfo.seed([9C3ECB223DFB77FF:2129BE3B04D74AA2]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:539) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:993) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233) at org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testParallelExecutorStream(StreamExpressionTest.java:7540) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at
Re: Question concerning refs on TestDemoParallelLeafReader
Hi Mike, Thanks for the feedback. > I think the delayed deletes might have to do w/ segment warming? I'll have to digest the scenario you described tomorrow. I didn't hit any exceptions when running those modified code snippets (which I'd be very grateful to see -- they'd provide an immediate proof something is wrong...). > I am glad you're finding a use for this crazy class! It's super-useful for people who wish to low-level tweak the index format. I dreaded this for a long time, but for us it'd provide many benefits. We have a scenario where documents can be indexed once (and stay in the primary index) and certain derived indexes (features indexed on top of those documents) can be placed in the secondary index. The benefit here is that our data used to index features can change from time to time (as new documents emerge); then we can simply drop those existing secondary indexes and provide up-to-date ones. This saves disk I/O and is still fairly transparent to the rest of the application (because fields never clash between the primary and the secondary index and documents are always aligned). Your 'demo' class is a great example of how this can be done. The class is surely advanced. Read: it crams way too many aspects into one class :) Each of these could be a separate demo: - splitting indexes into parallel ones (primary/secondary), with automatic secondary index creation on merges and startup. - folding back secondary index data into the primary index on merges (we don't need it, but I imagine there exists a scenario for this), - keeping multiple versions of the secondary index (those "generations"). And probably lots more. It's a very interesting advanced use case. > And how did you find this test :) I've been looking at ParallelCompositeReader for some time; as I was scanning it internally for its use cases within the code I somehow came across that "demo" class which leveraged its lower-level internals.
It did take me some time to go through the class's internal workings because of confusingly named variables (I ended up renaming them to 'primary' and 'secondary' index instead of the original 'parallel'). But hey, I don't complain -- it's still an awesome piece of code! Dawid
Re: Question concerning refs on TestDemoParallelLeafReader
On Mon, Oct 2, 2017 at 9:34 AM Michael McCandless wrote: > I am glad you're finding a use for this crazy class! I think it is a > powerful way for Lucene to efficiently add "derived fields" at search time. > +1 agreed! Could be used for NRT updates as well. But very expert; it'd be nice if it were easier to use to achieve higher-level goals. -- Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker LinkedIn: http://linkedin.com/in/davidwsmiley | Book: http://www.solrenterprisesearchserver.com
[jira] [Commented] (SOLR-11417) Crashed leader's hanging emphemral will make restarting followers stuck in recovering
[ https://issues.apache.org/jira/browse/SOLR-11417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188501#comment-16188501 ] Mark Miller commented on SOLR-11417: bq. becomes RECOVERING, so it won't participate anymore. My first thought to try would be detecting a connection-based error and in that case, use the method that publishes state but does not update the last state variable that gets checked. It might even make sense to do that on any fail, not just connection errors - I'm not sure it's preferable to have a replica disable its own ability to be a leader - kind of defeats the repeated attempts. > Crashed leader's hanging emphemral will make restarting followers stuck in > recovering > - > > Key: SOLR-11417 > URL: https://issues.apache.org/jira/browse/SOLR-11417 > Project: Solr > Issue Type: Bug > Security Level: Public (Default Security Level. Issues are Public) > Affects Versions: 6.3 > Reporter: Mano Kovacs > Attachments: SOLR-11417.png > > > If replicas are starting up after a leader crash and within the ZK session > timeout, replicas > * will lose leader election due to hanging ephemerals > * will read stale data from ZK about the current leader > * will fail recovery and get stuck in the recovering state > If the leader is down permanently (e.g. hardware failure) and all replicas are > affected, the shard will not come up (see also SOLR-7065). > Tested on 6.3. See attached image for details.
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_144) - Build # 6936 - Still unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6936/ Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseParallelGC 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.lucene.index.TestStressIndexing2 Error Message: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.index.TestStressIndexing2_557C0623B6231573-001\tempDir-001: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.index.TestStressIndexing2_557C0623B6231573-001\tempDir-001 C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.index.TestStressIndexing2_557C0623B6231573-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.index.TestStressIndexing2_557C0623B6231573-001 Stack Trace: java.io.IOException: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.index.TestStressIndexing2_557C0623B6231573-001\tempDir-001: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.index.TestStressIndexing2_557C0623B6231573-001\tempDir-001 C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.index.TestStressIndexing2_557C0623B6231573-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.index.TestStressIndexing2_557C0623B6231573-001 at __randomizedtesting.SeedInfo.seed([557C0623B6231573]:0) at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329) at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216) at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.update.AutoCommitTest.testMaxDocs Error Message: Exception during query Stack Trace: java.lang.RuntimeException: Exception during query at __randomizedtesting.SeedInfo.seed([C8B5C6390F32CC04:713410E623D8C88E]:0) at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:884) at org.apache.solr.update.AutoCommitTest.testMaxDocs(AutoCommitTest.java:225) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[JENKINS] Lucene-Solr-Tests-master - Build # 2111 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2111/ 3 tests failed. FAILED: org.apache.solr.cloud.RecoveryZkTest.test Error Message: Stack Trace: java.util.concurrent.TimeoutException at __randomizedtesting.SeedInfo.seed([FE1B883B99B87E41:764FB7E1374413B9]:0) at org.apache.solr.common.cloud.ZkStateReader.waitForState(ZkStateReader.java:1268) at org.apache.solr.client.solrj.impl.CloudSolrClient.waitForState(CloudSolrClient.java:438) at org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:122) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit Error Message: KeeperErrorCode = Session expired for /clusterstate.json Stack Trace: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /clusterstate.json at
[jira] [Commented] (SOLR-10285) Reduce state messages when there are leader only shards
[ https://issues.apache.org/jira/browse/SOLR-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188454#comment-16188454 ] Varun Thacker commented on SOLR-10285: -- Hi Dat, Do you think it will be a good idea to wait for the leader message to be processed before we return? > Reduce state messages when there are leader only shards > --- > > Key: SOLR-10285 > URL: https://issues.apache.org/jira/browse/SOLR-10285 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Cao Manh Dat > Attachments: SOLR-10285.patch, SOLR-10285.patch, SOLR-10285.patch > > > For shards which have 1 replica ( leader ) we know it doesn't need to recover > from anyone. We should short-circuit the recovery process in this case. > The motivation for this being that we will generate less state events and be > able to mark these replicas as active again without it needing to go into > 'recovering' state. > We already short circuit when you set {{-Dsolrcloud.skip.autorecovery=true}} > but that sys prop was meant for tests only. Extending this to make sure the > code short-circuits when the core knows its the only replica in the shard is > the motivation of the Jira. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10842) Move quickstart.html to Ref Guide
[ https://issues.apache.org/jira/browse/SOLR-10842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188451#comment-16188451 ] Hoss Man commented on SOLR-10842: - if we setup a "RedirectMatch Permanent" from {{/solr/quickstart.html}} to {{/solr/guide/solr-tutorial.html}} then the order of the rules shouldn't matter -- the browser will execute the redirect, and on the second request, the (existing) rules to redirect to the "current" version will kick in. I think that's probably the best thing to do long term? (the other option would be to setup a rewrite rule so that on the *first* request quickstart.html is _internally_ rewritten to /guide/solr-tutorial.html, and _then_ the redirect would happen (i think) ... it would mean the browser would only see a single redirect -- but it would also mean search engine caches & book marks would never update -- and they'd keep sending people to the "old" quickstart.html url) > Move quickstart.html to Ref Guide > - > > Key: SOLR-10842 > URL: https://issues.apache.org/jira/browse/SOLR-10842 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Cassandra Targett >Assignee: Cassandra Targett >Priority: Minor > Fix For: 7.0 > > Attachments: SOLR-10842.patch > > > The Solr Quick Start at https://lucene.apache.org/solr/quickstart.html has > been problematic to keep up to date - until Ishan just updated it yesterday > for 6.6, it said "6.2.1", so hadn't been updated for several releases. > Now that the Ref Guide is in AsciiDoc format, we can easily use variables for > package versions, and it could be released as part of the Ref Guide and kept > up to date. It could also integrate links to more information on topics, and > users would already be IN the docs, so they would not need to wonder where > the docs are. 
> There are a few places on the site that will need to be updated to point to > the new location, but I can also put a redirect rule into .htaccess so people > are redirected to the new location if there are other links "in the wild" > that we cannot control. This allows it to be versioned also, if that becomes > necessary. > As part of this, I would like to also update the entire "Getting Started" > section of the Ref Guide, which is effectively identical to what was in the > first release of the Ref Guide back in 2009 for Solr 1.4 and is in serious > need of reconsideration. > My thought is that moving the page + redoing the Getting Started section > would be for 7.0, but if folks are excited about this idea I could move the > page for 6.6 and hold off redoing the larger section until 7.0.
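The "RedirectMatch Permanent" approach discussed above could be sketched in .htaccess roughly as follows; the exact paths and the placement relative to the existing version-redirect rules are assumptions, not the committed configuration:

```apache
# Permanently redirect the old quickstart page to the Ref Guide tutorial.
# The browser follows this redirect first; on the second request, the
# existing rules that redirect /solr/guide/... to the "current" version
# of the guide then kick in, so rule order should not matter.
RedirectMatch Permanent ^/solr/quickstart\.html$ /solr/guide/solr-tutorial.html
```

Because the redirect is permanent (HTTP 301), search engine caches and bookmarks should eventually update to the new URL, which is the advantage over an internal rewrite noted in the comment.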
[jira] [Commented] (SOLR-11277) Add auto hard commit setting based on tlog size
[ https://issues.apache.org/jira/browse/SOLR-11277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188416#comment-16188416 ] Anshum Gupta commented on SOLR-11277: -

Disclaimer: Rupa and I are coworkers, and I'd discussed this idea and looked at this code before reviewing it here.

This looks really good! I'll be adding to this feedback, but here are a few things to start looking at:

CommitTracker.java
- SIZE_COMMIT_DELAY_MS can be static.

DirectUpdateHandler2.java
- Can we rename fileSizeUpperBound to tLogFileSizeUpperBound, just so that it's clear?
- In addedDocument, we should extract a method for the docsUpperBound part as well. It's not directly a part of your change, but it would be good to do while we're at it.
- Can we define the fileSizeUpperBound value of -1 as a static final and use it instead of hardcoding it in the CommitTracker constructor?
- We need the currentTlogSize in multiple places; we should extract that into a method.

SolrConfig.java
- convertAutoCommitMaxSizeStringToBytes is more generic, so we should either rename it to something more generic like convertConfigStringToBytes, or call it getAutoCommitMaxSizeInBytes, not pass the path, and have it default to -1.
- The Javadoc for convertAutoCommitMaxSizeStringToBytes doesn't mention that it returns -1 when autoCommitMaxSizeStr is not set.
- I would like more information to be spit out with the RuntimeException. A good idea would be to highlight what the correct/accepted format looks like.
- The UpdateHandlerInfo constructor now has autoCommitMaxSize but is missing the entry from the Javadoc.

TransactionLog.java
- Synchronizing is required in getLogSizeFromStream(), but can we run a basic benchmark to make sure that this isn't impacting the update throughput?

bad-solrconfig-no-autocommit-tag.xml
- Can you add one line about what this config is used for? It would be a good idea to just replace the current "Minimal solrconfig.xml with /select, /admin, and /update…." line.
Thanks for adding the javadocs to older methods and removing commented-out code from years ago too :) > Add auto hard commit setting based on tlog size > --- > > Key: SOLR-11277 > URL: https://issues.apache.org/jira/browse/SOLR-11277 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Rupa Shankar >Assignee: Anshum Gupta > Attachments: max_size_auto_commit.patch > > > When indexing documents of variable sizes and at variable schedules, it can > be hard to estimate the optimal auto hard commit maxDocs or maxTime settings. > We’ve had some occurrences of really huge tlogs, resulting in serious issues, > so in an attempt to avoid this, it would be great to have a “maxSize” setting > based on the tlog size on disk.
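The size-string conversion being reviewed above (convertAutoCommitMaxSizeStringToBytes) could look roughly like the following sketch. The class and method names here are illustrative, not the actual patch; it only demonstrates the behavior the review asks for: a -1 "not set" constant instead of a hardcoded literal, and a RuntimeException message that shows the accepted format.

```java
import java.util.Locale;

// Illustrative sketch of converting an autoCommit maxSize string such as
// "10KB", "5MB", or "1GB" into bytes. NO_LIMIT (-1) stands in for the
// "not set" default mentioned in the review. Names are assumptions.
class AutoCommitSizeParser {
    public static final long NO_LIMIT = -1L;

    public static long parseSize(String s) {
        if (s == null || s.trim().isEmpty()) {
            return NO_LIMIT; // value not configured
        }
        String v = s.trim().toLowerCase(Locale.ROOT);
        long multiplier = 1L;
        if (v.endsWith("kb")) { multiplier = 1024L; v = v.substring(0, v.length() - 2); }
        else if (v.endsWith("mb")) { multiplier = 1024L * 1024; v = v.substring(0, v.length() - 2); }
        else if (v.endsWith("gb")) { multiplier = 1024L * 1024 * 1024; v = v.substring(0, v.length() - 2); }
        try {
            return Long.parseLong(v.trim()) * multiplier;
        } catch (NumberFormatException e) {
            // Per the review feedback: spell out the accepted format.
            throw new RuntimeException("Invalid maxSize value '" + s
                + "'; expected a number optionally followed by KB, MB, or GB, e.g. \"10MB\"");
        }
    }
}
```

For example, parseSize("10MB") yields 10 * 1024 * 1024 bytes, while an absent value yields NO_LIMIT so the size-based trigger stays disabled.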
[jira] [Assigned] (SOLR-11413) SolrGraphiteReporter fails to report metrics due to non-thread safe code
[ https://issues.apache.org/jira/browse/SOLR-11413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrzej Bialecki reassigned SOLR-11413: Assignee: Andrzej Bialecki > SolrGraphiteReporter fails to report metrics due to non-thread safe code > > > Key: SOLR-11413 > URL: https://issues.apache.org/jira/browse/SOLR-11413 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: metrics >Affects Versions: 6.6, 7.0 >Reporter: Erik Persson >Assignee: Andrzej Bialecki > Attachments: SOLR-11413.patch > > > Symptom: > Intermittent errors writing graphite metrics. Errors indicate use of sockets > which have already been closed. > Cause: > SolrGraphiteReporter caches and shares dropwizard Graphite instances. These > reporters are not thread safe as they open and close an instance variable of > type GraphiteSender. On modern bare metal hardware this problem was observed > consistently, and resulted in the majority of metrics failing to be delivered > to graphite. > Proposed Fix: > Graphite (and PickledGraphite) are not designed to be cached, and should not > be. > Test: > Patch file includes test which forces error. > Alternative Fixes Considered: > * Totally change solr metrics architecture to use a single metrics registry - > seems undesirable and impractical > * Create a synchronized or otherwise thread-safe implementation of dropwizard > graphite reporter - should be fixed upstream in dropwizard and not obviously > preferred to current model -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
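The failure mode described above can be illustrated with a simplified model (this is not dropwizard's or Solr's actual code, and all names are assumptions): a sender that opens and closes an internal socket is unsafe to share, so the proposed fix constructs a fresh instance per report cycle instead of caching one.

```java
// Simplified model of the SOLR-11413 fix: instead of caching one sender and
// sharing it across reporter threads (where one thread can close the socket
// another thread is still writing to), construct a fresh, un-shared sender
// for each report cycle. All names here are illustrative.
class ReporterSketch {
    interface SenderFactory { Sender create(); }

    static class Sender {
        private boolean open;
        void connect() { open = true; }           // stands in for opening the socket
        void send(String metric) {                // fails if the socket was closed
            if (!open) throw new IllegalStateException("socket already closed");
        }
        void close() { open = false; }
    }

    // One report cycle: open, write, close. Safe because the sender is local
    // to this call and never visible to another thread.
    static void report(SenderFactory factory, String metric) {
        Sender sender = factory.create();
        sender.connect();
        try {
            sender.send(metric);
        } finally {
            sender.close();
        }
    }
}
```

With a shared cached instance, one cycle's close() races against another cycle's send(), producing exactly the "use of sockets which have already been closed" errors in the report.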
[jira] [Assigned] (SOLR-11425) SolrClientBuilder does not allow infinite timeout (value 0)
[ https://issues.apache.org/jira/browse/SOLR-11425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller reassigned SOLR-11425: -- Assignee: Mark Miller > SolrClientBuilder does not allow infinite timeout (value 0) > --- > > Key: SOLR-11425 > URL: https://issues.apache.org/jira/browse/SOLR-11425 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.0 >Reporter: Peter Szantai-Kis >Assignee: Mark Miller > Attachments: SOLR-11425.patch > > > [org.apache.solr.client.solrj.impl.SolrClientBuilder#withConnectionTimeout|https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/client/solrj/impl/SolrClientBuilder.java#L53] > does not allow setting the value 0, which means an infinite timeout, but > [RequestConfig|https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/client/config/RequestConfig.html#getConnectTimeout()], > where it will be used, has the option to do so.
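A minimal sketch of what the SOLR-11425 fix might look like (hypothetical names, not the actual patch to SolrClientBuilder#withConnectionTimeout): since HttpClient's RequestConfig treats a timeout of 0 as "infinite", the builder-side validation should reject only negative values rather than anything less than or equal to zero.

```java
// Illustrative sketch of the proposed validation change: 0 is a legal value
// meaning "infinite timeout", so only negative inputs are rejected.
// Names are assumptions, not Solr's actual code.
class TimeoutValidation {
    public static int validateTimeout(int millis) {
        if (millis < 0) {
            throw new IllegalArgumentException(
                "Timeout must be non-negative; 0 means infinite");
        }
        return millis;
    }
}
```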
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 220 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/220/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC 3 tests failed. FAILED: org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testParallelExecutorStream Error Message: Error from server at http://127.0.0.1:49224/solr/workQueue_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 HTTP ERROR: 404 Problem accessing /solr/workQueue_shard2_replica_n3/update. Reason: Can not find: /solr/workQueue_shard2_replica_n3/update Powered by Jetty 9.3.20.v20170531 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://127.0.0.1:49224/solr/workQueue_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 HTTP ERROR: 404 Problem accessing /solr/workQueue_shard2_replica_n3/update. Reason: Can not find: /solr/workQueue_shard2_replica_n3/update Powered by Jetty 9.3.20.v20170531 at __randomizedtesting.SeedInfo.seed([8FC03EAD54C6D7C5:32D74BB46DEAEA98]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:539) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:993) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233) at org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testParallelExecutorStream(StreamExpressionTest.java:7540) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at
[jira] [Commented] (SOLR-10842) Move quickstart.html to Ref Guide
[ https://issues.apache.org/jira/browse/SOLR-10842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188322#comment-16188322 ] Cassandra Targett commented on SOLR-10842: -- I've updated all the places on the Solr website that link to the tutorial so they now go to the Ref Guide page (https://lucene.apache.org/solr/guide/solr-tutorial.html). I'm not sure about the redirect rule, though. For ease of updating, I'd like to make it non-versioned, but that is itself a redirect rule and I don't know enough about how this works to know the proper order for these two rules. Should I instead just replace the text of the current quickstart.html page with a link to the new location? [~hossman] or [~steve_rowe], maybe one of you have an idea how I could make it work in {{.htaccess}}? > Move quickstart.html to Ref Guide > - > > Key: SOLR-10842 > URL: https://issues.apache.org/jira/browse/SOLR-10842 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Cassandra Targett >Assignee: Cassandra Targett >Priority: Minor > Fix For: 7.0 > > Attachments: SOLR-10842.patch > > > The Solr Quick Start at https://lucene.apache.org/solr/quickstart.html has > been problematic to keep up to date - until Ishan just updated it yesterday > for 6.6, it said "6.2.1", so hadn't been updated for several releases. > Now that the Ref Guide is in AsciiDoc format, we can easily use variables for > package versions, and it could be released as part of the Ref Guide and kept > up to date. It could also integrate links to more information on topics, and > users would already be IN the docs, so they would not need to wonder where > the docs are. 
> There are a few places on the site that will need to be updated to point to > the new location, but I can also put a redirect rule into .htaccess so people > are redirected to the new location if there are other links "in the wild" > that we cannot control. This allows it to be versioned also, if that becomes > necessary. > As part of this, I would like to also update the entire "Getting Started" > section of the Ref Guide, which is effectively identical to what was in the > first release of the Ref Guide back in 2009 for Solr 1.4 and is in serious > need of reconsideration. > My thought is that moving the page + redoing the Getting Started section > would be for 7.0, but if folks are excited about this idea I could move the > page for 6.6 and hold off redoing the larger section until 7.0.
[jira] [Created] (SOLR-11426) TestLazyCores fails too often
Erick Erickson created SOLR-11426: - Summary: TestLazyCores fails too often Key: SOLR-11426 URL: https://issues.apache.org/jira/browse/SOLR-11426 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Reporter: Erick Erickson Rather than re-opening SOLR-10101 I thought I'd start a new issue. I may have to put some code up on Jenkins to test; last time I tried to get this to fail locally I couldn't.
[jira] [Assigned] (SOLR-11426) TestLazyCores fails too often
[ https://issues.apache.org/jira/browse/SOLR-11426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson reassigned SOLR-11426: - Assignee: Erick Erickson > TestLazyCores fails too often > > > Key: SOLR-11426 > URL: https://issues.apache.org/jira/browse/SOLR-11426 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Erick Erickson >Assignee: Erick Erickson > > Rather than re-opening SOLR-10101 I thought I'd start a new issue. I may have > to put some code up on Jenkins to test; last time I tried to get this to fail > locally I couldn't.
[jira] [Commented] (LUCENE-7974) Add N-dimensional FloatPoint K-nearest-neighbor implementation
[ https://issues.apache.org/jira/browse/LUCENE-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188213#comment-16188213 ] ASF subversion and git services commented on LUCENE-7974: - Commit f33ed4ba12aaf215628d010daaa0e271b8ab3d1f in lucene-solr's branch refs/heads/branch_7x from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f33ed4b ] LUCENE-7974: nearest() method returning NearestHit should be private, and NearestHit class should be package private > Add N-dimensional FloatPoint K-nearest-neighbor implementation > -- > > Key: LUCENE-7974 > URL: https://issues.apache.org/jira/browse/LUCENE-7974 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/sandbox >Reporter: Steve Rowe >Assignee: Steve Rowe >Priority: Minor > Fix For: 7.1, master (8.0) > > Attachments: LUCENE-7974.patch > > > From LUCENE-7069: > {quote} > KD trees (used by Lucene's new dimensional points) excel at finding "nearest > neighbors" to a given query point ... I think we should add this to Lucene's > sandbox > [...] > It could also be generalized to more than 2 dimensions, but for now I'm > making the class package private in sandbox for just the geo2d (lat/lon) use > case. > {quote} > This issue is to generalize {{LatLonPoint.nearest()}} to more than 2 > dimensions. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7974) Add N-dimensional FloatPoint K-nearest-neighbor implementation
[ https://issues.apache.org/jira/browse/LUCENE-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188214#comment-16188214 ] ASF subversion and git services commented on LUCENE-7974: - Commit 74050a3f159eca393ccb8e0b28c4fc4f974f6d5e in lucene-solr's branch refs/heads/master from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=74050a3 ] LUCENE-7974: nearest() method returning NearestHit should be private, and NearestHit class should be package private > Add N-dimensional FloatPoint K-nearest-neighbor implementation > -- > > Key: LUCENE-7974 > URL: https://issues.apache.org/jira/browse/LUCENE-7974 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/sandbox >Reporter: Steve Rowe >Assignee: Steve Rowe >Priority: Minor > Fix For: 7.1, master (8.0) > > Attachments: LUCENE-7974.patch > > > From LUCENE-7069: > {quote} > KD trees (used by Lucene's new dimensional points) excel at finding "nearest > neighbors" to a given query point ... I think we should add this to Lucene's > sandbox > [...] > It could also be generalized to more than 2 dimensions, but for now I'm > making the class package private in sandbox for just the geo2d (lat/lon) use > case. > {quote} > This issue is to generalize {{LatLonPoint.nearest()}} to more than 2 > dimensions. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7974) Add N-dimensional FloatPoint K-nearest-neighbor implementation
[ https://issues.apache.org/jira/browse/LUCENE-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188199#comment-16188199 ] Michael McCandless commented on LUCENE-7974: Oh yes that makes sense [~steve_rowe], thanks! > Add N-dimensional FloatPoint K-nearest-neighbor implementation > -- > > Key: LUCENE-7974 > URL: https://issues.apache.org/jira/browse/LUCENE-7974 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/sandbox >Reporter: Steve Rowe >Assignee: Steve Rowe >Priority: Minor > Fix For: 7.1, master (8.0) > > Attachments: LUCENE-7974.patch > > > From LUCENE-7069: > {quote} > KD trees (used by Lucene's new dimensional points) excel at finding "nearest > neighbors" to a given query point ... I think we should add this to Lucene's > sandbox > [...] > It could also be generalized to more than 2 dimensions, but for now I'm > making the class package private in sandbox for just the geo2d (lat/lon) use > case. > {quote} > This issue is to generalize {{LatLonPoint.nearest()}} to more than 2 > dimensions. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1449 - Still Failing!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1449/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation Error Message: 1 thread leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) Thread[id=30319, name=jetty-launcher-4309-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) Thread[id=30319, name=jetty-launcher-4309-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505) at __randomizedtesting.SeedInfo.seed([8EBC4DF65222051A]:0) Build Log: [...truncated 13266 lines...] 
[junit4] Suite: org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation [junit4] 2> Creating dataDir: /export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.cloud.TestSolrCloudWithSecureImpersonation_8EBC4DF65222051A-001/init-core-data-001 [junit4] 2> 3399209 WARN (SUITE-TestSolrCloudWithSecureImpersonation-seed#[8EBC4DF65222051A]-worker) [ ] o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=4 numCloses=4 [junit4] 2> 3399209 INFO (SUITE-TestSolrCloudWithSecureImpersonation-seed#[8EBC4DF65222051A]-worker) [ ] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) w/NUMERIC_DOCVALUES_SYSPROP=true [junit4] 2> 3399211 INFO (SUITE-TestSolrCloudWithSecureImpersonation-seed#[8EBC4DF65222051A]-worker) [ ] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: @org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN) [junit4] 2>
[jira] [Updated] (LUCENE-7982) NormsFieldExistsQuery to match documents where field exists based on field norms
[ https://issues.apache.org/jira/browse/LUCENE-7982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Colin Goodheart-Smithe updated LUCENE-7982: --- Attachment: LUCENE-7982.patch Patch containing the new NormsFieldExistsQuery and test > NormsFieldExistsQuery to match documents where field exists based on field > norms > > > Key: LUCENE-7982 > URL: https://issues.apache.org/jira/browse/LUCENE-7982 > Project: Lucene - Core > Issue Type: Bug > Components: core/search >Reporter: Colin Goodheart-Smithe > Fix For: 7.1 > > Attachments: LUCENE-7982.patch > > > This patch adds a new NormsFieldExistsQuery which is similar to > DocValuesFieldExistsQuery but instead of determining whether a document has a > value for a field based on doc values it does this based on the field norms > so the same kind of exists query functionality can be performed on TextFields > which have no doc values. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.0-Linux (64bit/jdk-9) - Build # 407 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.0-Linux/407/ Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseParallelGC --illegal-access=deny 3 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI Error Message: Error from server at http://127.0.0.1:39375/solr/awhollynewcollection_0_shard2_replica_n1: ClusterState says we are the leader (http://127.0.0.1:39375/solr/awhollynewcollection_0_shard2_replica_n1), but locally we don't think so. Request came from null Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://127.0.0.1:39375/solr/awhollynewcollection_0_shard2_replica_n1: ClusterState says we are the leader (http://127.0.0.1:39375/solr/awhollynewcollection_0_shard2_replica_n1), but locally we don't think so. Request came from null at __randomizedtesting.SeedInfo.seed([9F0192C8D00EA5C2:D774E67CD63D8A57]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:539) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:993) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:458) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (LUCENE-7974) Add N-dimensional FloatPoint K-nearest-neighbor implementation
[ https://issues.apache.org/jira/browse/LUCENE-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188049#comment-16188049 ] Steve Rowe commented on LUCENE-7974: Hi [~mikemccand], thanks for the fix, but I was about to commit a different one: making private the nearest() method returning NearestHit, since it's only going to be used by the (intentionally) public nearest() method that returns TopFieldDocs. OK with you if I un-re-fix? > Add N-dimensional FloatPoint K-nearest-neighbor implementation > -- > > Key: LUCENE-7974 > URL: https://issues.apache.org/jira/browse/LUCENE-7974 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/sandbox >Reporter: Steve Rowe >Assignee: Steve Rowe >Priority: Minor > Fix For: 7.1, master (8.0) > > Attachments: LUCENE-7974.patch > > > From LUCENE-7069: > {quote} > KD trees (used by Lucene's new dimensional points) excel at finding "nearest > neighbors" to a given query point ... I think we should add this to Lucene's > sandbox > [...] > It could also be generalized to more than 2 dimensions, but for now I'm > making the class package private in sandbox for just the geo2d (lat/lon) use > case. > {quote} > This issue is to generalize {{LatLonPoint.nearest()}} to more than 2 > dimensions. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-7982) NormsFieldExistsQuery to match documents where field exists based on field norms
Colin Goodheart-Smithe created LUCENE-7982: -- Summary: NormsFieldExistsQuery to match documents where field exists based on field norms Key: LUCENE-7982 URL: https://issues.apache.org/jira/browse/LUCENE-7982 Project: Lucene - Core Issue Type: Bug Components: core/search Reporter: Colin Goodheart-Smithe Fix For: 7.1 This patch adds a new NormsFieldExistsQuery which is similar to DocValuesFieldExistsQuery but instead of determining whether a document has a value for a field based on doc values it does this based on the field norms so the same kind of exists query functionality can be performed on TextFields which have no doc values. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
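As a sketch of the proposed query's intended use: the constructor shape below (a single argument naming the field, mirroring DocValuesFieldExistsQuery) and the field name "body" are assumptions based on the issue description, not verified against a committed patch. The point is that a TextField normally indexes norms but no doc values, so norms are the natural signal for "this document has a value here".

```java
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.NormsFieldExistsQuery;
import org.apache.lucene.search.TopDocs;

class ExistsExample {
  // Counts documents that have any value in the given TextField, using
  // field norms rather than doc values (which TextFields normally lack).
  // Assumes a NormsFieldExistsQuery(String field) constructor as proposed.
  static long countDocsWithField(IndexSearcher searcher, String field) throws Exception {
    TopDocs hits = searcher.search(new NormsFieldExistsQuery(field), 1);
    return hits.totalHits;
  }
}
```

Note this only works for fields indexed with norms enabled; fields that omit norms would still need a different existence check.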
Re: [JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4202 - Failure!
I pushed a fix for this. Mike McCandless http://blog.mikemccandless.com On Mon, Oct 2, 2017 at 7:02 AM, Policeman Jenkins Server < jenk...@thetaphi.de> wrote: > Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4202/ > Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC > > All tests passed > > Build Log: > [...truncated 51045 lines...] > -documentation-lint: > [echo] checking for broken html... > [jtidy] Checking for broken html (such as invalid tags)... >[delete] Deleting directory /Users/jenkins/workspace/ > Lucene-Solr-master-MacOSX/lucene/build/jtidy_tmp > [echo] Checking for broken links... > [exec] > [exec] Crawl/parse... > [exec] > [exec] Verify... > [exec] > [exec] file:///build/docs/sandbox/org/apache/lucene/document/ > FloatPointNearestNeighbor.html > [exec] BROKEN LINK: file:///build/docs/core/org/ > apache/lucene/document/FloatPointNearestNeighbor.NearestHit.html > [exec] BROKEN LINK: file:///build/docs/core/org/ > apache/lucene/document/FloatPointNearestNeighbor.NearestHit.html > [exec] > [exec] Broken javadocs links were found! Common root causes: > [exec] * A typo of some sort for manually created links. > [exec] * Public methods referencing non-public classes in their > signature. 
> > BUILD FAILED > /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/build.xml:826: The > following error occurred while executing this line: > /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/build.xml:101: The > following error occurred while executing this line: > /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build.xml:142: > The following error occurred while executing this line: > /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build.xml:155: > The following error occurred while executing this line: > /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/common-build.xml:2570: > exec returned: 1 > > Total time: 87 minutes 9 seconds > Build step 'Invoke Ant' marked build as failure > Archiving artifacts > [WARNINGS] Skipping publisher since build result is FAILURE > Recording test results > Email was triggered for: Failure - Any > Sending email for trigger: Failure - Any > > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org >
[jira] [Commented] (LUCENE-7974) Add N-dimensional FloatPoint K-nearest-neighbor implementation
[ https://issues.apache.org/jira/browse/LUCENE-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188042#comment-16188042 ] ASF subversion and git services commented on LUCENE-7974: - Commit b1d4c01568cf2b965bc2e97dc5edb274755ab72e in lucene-solr's branch refs/heads/branch_7x from Mike McCandless [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b1d4c01 ] LUCENE-7974: make NearestHit public, and add javadocs, to make precommit happy > Add N-dimensional FloatPoint K-nearest-neighbor implementation > -- > > Key: LUCENE-7974 > URL: https://issues.apache.org/jira/browse/LUCENE-7974 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/sandbox >Reporter: Steve Rowe >Assignee: Steve Rowe >Priority: Minor > Fix For: 7.1, master (8.0) > > Attachments: LUCENE-7974.patch > > > From LUCENE-7069: > {quote} > KD trees (used by Lucene's new dimensional points) excel at finding "nearest > neighbors" to a given query point ... I think we should add this to Lucene's > sandbox > [...] > It could also be generalized to more than 2 dimensions, but for now I'm > making the class package private in sandbox for just the geo2d (lat/lon) use > case. > {quote} > This issue is to generalize {{LatLonPoint.nearest()}} to more than 2 > dimensions. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7974) Add N-dimensional FloatPoint K-nearest-neighbor implementation
[ https://issues.apache.org/jira/browse/LUCENE-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16188040#comment-16188040 ] ASF subversion and git services commented on LUCENE-7974: - Commit 73f3403381b95a908543cf859b2f43f28cb9a34a in lucene-solr's branch refs/heads/master from Mike McCandless [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=73f3403 ] LUCENE-7974: make NearestHit public, and add javadocs, to make precommit happy > Add N-dimensional FloatPoint K-nearest-neighbor implementation > -- > > Key: LUCENE-7974 > URL: https://issues.apache.org/jira/browse/LUCENE-7974 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/sandbox >Reporter: Steve Rowe >Assignee: Steve Rowe >Priority: Minor > Fix For: 7.1, master (8.0) > > Attachments: LUCENE-7974.patch > > > From LUCENE-7069: > {quote} > KD trees (used by Lucene's new dimensional points) excel at finding "nearest > neighbors" to a given query point ... I think we should add this to Lucene's > sandbox > [...] > It could also be generalized to more than 2 dimensions, but for now I'm > making the class package private in sandbox for just the geo2d (lat/lon) use > case. > {quote} > This issue is to generalize {{LatLonPoint.nearest()}} to more than 2 > dimensions. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
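The generalized API discussed in this thread — a public static nearest() returning TopFieldDocs, with the varargs query point determining the dimensionality — would be used roughly as follows. The field name, query point, and exact parameter order are assumptions from the thread, not verified against the committed patch:

```java
import org.apache.lucene.document.FloatPointNearestNeighbor;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TopFieldDocs;

class KnnExample {
  // K-nearest-neighbor search over a 3-dimensional FloatPoint field:
  // returns the 3 indexed points closest to (41.0, 2.0, 7.5).
  static TopFieldDocs nearestThree(IndexSearcher searcher) throws Exception {
    return FloatPointNearestNeighbor.nearest(searcher, "location", 3, 41.0f, 2.0f, 7.5f);
  }
}
```

This is the generalization of the 2-dimensional LatLonPoint.nearest() described in the issue: the same BKD best-first traversal, but over an arbitrary number of float dimensions.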
Re: Question concerning refs on TestDemoParallelLeafReader
I think the delayed deletes might have to do w/ segment warming? I.e., after a merge finishes, but before IW exposes that segment in the current SIS, it's warmed, at which point (via the merged segment warmer the test installs) we build its parallel index, but then I think (maybe!) its parallel reader is closed? But we don't want to rm its index directory, because on the next NRT refresh the merged segment becomes live and we will open that parallel index. This ensures that it's the BG merge thread that pays the cost to build the parallel index, not the NRT reopen thread, keeping NRT reopen latency low (ish). I am glad you're finding a use for this crazy class! I think it is a powerful way for Lucene to efficiently add "derived fields" at search time. Can you share any details on how you are using it? And how did you find this test? :) Mike McCandless http://blog.mikemccandless.com On Sun, Oct 1, 2017 at 7:01 AM, Dawid Weiss wrote: > > I'll have to think about the first 2 questions still, but MDW stands for > > MockDirectoryWrapper! > > Ah, sure thing. For what it's worth, I locally removed this delayed > 'delete' list and removed the leaf folder immediately -- the tests > passed without any problems on my Windows machine. Could be I didn't > hit the corner case, so I'm interested in any follow-up you might > have, Mike. > > Dawid > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > >
[jira] [Updated] (SOLR-10912) Adding automatic patch validation
[ https://issues.apache.org/jira/browse/SOLR-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mano Kovacs updated SOLR-10912: --- Attachment: SOLR-10912.ok-patch-in-core.patch > Adding automatic patch validation > - > > Key: SOLR-10912 > URL: https://issues.apache.org/jira/browse/SOLR-10912 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Build >Reporter: Mano Kovacs > Attachments: SOLR-10912.ok-patch-in-core.patch, > SOLR-10912.sample-patch.patch, SOLR-10912.solj-contrib-facet-error.patch > > > Proposing introduction of automated patch validation, similar to what Hadoop or > other Apache projects are using (see link). This would ensure that every > patch passes a certain set of criteria before getting approved. It would > save time for developers (faster feedback loop), save time for committers > (fewer steps to do manually), and would increase quality. > Hadoop is currently using Apache Yetus to run validations, which seems to be > a good direction to start. This jira could be the forum for discussing the > preferred solution. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7966) build mr-jar and use some java 9 methods if available
[ https://issues.apache.org/jira/browse/LUCENE-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187994#comment-16187994 ] Uwe Schindler commented on LUCENE-7966: --- bq. I had a closer look at the branch and like the patching approach. Should we modify the smoke tester at the same time to enforce that both Java 8 and 9 are tested? I think so - that was my plan! In general the testing is automatically done. As soon as you start Lucene Demo or Solr with Java 9 it will test the JAR file. But a separate test might be good (like the Exception test I posted before), to see if the stack trace looks as expected. I will work soon on changing the patching mechanism to be global (not only in the root module). I would also like to remove the {{@Deprecated}} from the Future classes (because at this time, they are the only way) and instead add {{@lucene.internal}}. We should add a separate issue about removing the Future classes, once we switch to Java 9. Are there any other tests we should do? I talked with Robert - we both don't understand Mike's findings. I don't trust them unless we have a reproducible benchmark using BytesRefHash and similar. The improvements in LZ4 are fantastic, I would have expected the same from BytesRefHash. > build mr-jar and use some java 9 methods if available > - > > Key: LUCENE-7966 > URL: https://issues.apache.org/jira/browse/LUCENE-7966 > Project: Lucene - Core > Issue Type: Improvement > Components: core/other, general/build >Reporter: Robert Muir > Labels: Java9 > Attachments: LUCENE-7966.patch, LUCENE-7966.patch, LUCENE-7966.patch, > LUCENE-7966.patch, LUCENE-7966.patch > > > See background: http://openjdk.java.net/jeps/238 > It would be nice to use some of the newer array methods and range checking > methods in java 9 for example, without waiting for lucene 10 or something. 
If > we build an MR-jar, we can start migrating our code to use java 9 methods > right now, it will use optimized methods from java 9 when thats available, > otherwise fall back to java 8 code. > This patch adds: > {code} > Objects.checkIndex(int,int) > Objects.checkFromToIndex(int,int,int) > Objects.checkFromIndexSize(int,int,int) > Arrays.mismatch(byte[],int,int,byte[],int,int) > Arrays.compareUnsigned(byte[],int,int,byte[],int,int) > Arrays.equal(byte[],int,int,byte[],int,int) > // did not add char/int/long/short/etc but of course its possible if needed > {code} > It sets these up in {{org.apache.lucene.future}} as 1-1 mappings to java > methods. This way, we can simply directly replace call sites with java 9 > methods when java 9 is a minimum. Simple 1-1 mappings mean also that we only > have to worry about testing that our java 8 fallback methods work. > I found that many of the current byte array methods today are willy-nilly and > very lenient for example, passing invalid offsets at times and relying on > compare methods not throwing exceptions, etc. I fixed all the instances in > core/codecs but have not looked at the problems with AnalyzingSuggester. Also > SimpleText still uses a silly method in ArrayUtil in similar crazy way, have > not removed that one yet. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
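The 1-1 mapping idea in the patch can be illustrated with a plain-Java fallback for two of the listed range checks. The class name below follows the `org.apache.lucene.future` naming described in the issue but is otherwise illustrative: under the MR-JAR scheme, the Java 9 class version of this file would simply delegate to the real `java.util.Objects` methods, while Java 8 runtimes get this equivalent implementation.

```java
// Illustrative Java 8 fallback mirroring java.util.Objects.checkIndex /
// checkFromToIndex from Java 9. An MR-JAR would carry a Java 9 variant of
// this class that delegates straight to the JDK methods.
public class FutureObjects {

  // Throws IndexOutOfBoundsException unless 0 <= index < length; returns index.
  public static int checkIndex(int index, int length) {
    if (index < 0 || index >= length) {
      throw new IndexOutOfBoundsException("index=" + index + ", length=" + length);
    }
    return index;
  }

  // Throws IndexOutOfBoundsException unless 0 <= from <= to <= length; returns from.
  public static int checkFromToIndex(int from, int to, int length) {
    if (from < 0 || from > to || to > length) {
      throw new IndexOutOfBoundsException("from=" + from + ", to=" + to + ", length=" + length);
    }
    return from;
  }

  public static void main(String[] args) {
    System.out.println(checkIndex(2, 5));          // prints 2
    System.out.println(checkFromToIndex(1, 3, 5)); // prints 1
    try {
      checkIndex(5, 5);                            // out of range
    } catch (IndexOutOfBoundsException e) {
      System.out.println("rejected");              // prints rejected
    }
  }
}
```

Because the Java 8 and Java 9 variants are strict 1-1 mappings, only the fallback path above needs dedicated tests, exactly as the issue argues.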
Solr 7 default Response now JSON instead of XML causing issues
Hi, The default response in Solr 7 is now JSON instead of XML (https://issues.apache.org/jira/browse/SOLR-10494). We are using a system that uses the Solr admin/cores API for core status etc., and we can't really change that system. That system expects the XML response, and as far as I can see the default also changed to JSON there. So: is there any way I can change the admin/cores API back to responding with XML instead of JSON? /Roland Villemoes
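One workaround, assuming the request URL can be adjusted (for instance via a proxy in front of Solr) even if the client itself cannot be changed: the per-request `wt` parameter still selects the response writer, so appending `wt=xml` restores the pre-7.0 format for that call:

```
http://localhost:8983/solr/admin/cores?action=STATUS&wt=xml
```

This only changes the single request it is attached to; changing the server-side default for the cores API would need a different mechanism.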
[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 53 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/53/ No tests ran. Build Log: [...truncated 28017 lines...] prepare-release-no-sign: [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist [copy] Copying 476 files to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/lucene [copy] Copying 215 files to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8 [smoker] NOTE: output encoding is UTF-8 [smoker] [smoker] Load release URL "file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/"... [smoker] [smoker] Test Lucene... [smoker] test basics... [smoker] get KEYS [smoker] 0.2 MB in 0.02 sec (10.1 MB/sec) [smoker] check changes HTML... [smoker] download lucene-7.1.0-src.tgz... [smoker] 30.7 MB in 0.12 sec (248.5 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-7.1.0.tgz... [smoker] 69.4 MB in 0.32 sec (214.0 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-7.1.0.zip... [smoker] 79.8 MB in 0.37 sec (216.8 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack lucene-7.1.0.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.8... [smoker] got 6223 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-7.1.0.zip... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.8... [smoker] got 6223 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-7.1.0-src.tgz... [smoker] make sure no JARs/WARs in src dist... [smoker] run "ant validate" [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'... [smoker] test demo with 1.8... 
[smoker] got 213 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] generate javadocs w/ Java 8... [smoker] [smoker] Crawl/parse... [smoker] [smoker] Verify... [smoker] [smoker] file:///home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/lucene-7.1.0/build/docs/sandbox/org/apache/lucene/document/FloatPointNearestNeighbor.html [smoker] BROKEN LINK: file:///home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/lucene-7.1.0/build/docs/core/org/apache/lucene/document/FloatPointNearestNeighbor.NearestHit.html [smoker] BROKEN LINK: file:///home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/lucene-7.1.0/build/docs/core/org/apache/lucene/document/FloatPointNearestNeighbor.NearestHit.html [smoker] Traceback (most recent call last): [smoker] File "/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", line 1484, in [smoker] main() [smoker] File "/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", line 1428, in main [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, c.is_signed, ' '.join(c.test_args)) [smoker] File "/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", line 1466, in smokeTest [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % version, gitRevision, version, testArgs, baseURL) [smoker] File "/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", line 622, in unpackAndVerify [smoker] verifyUnpacked(java, project, artifact, unpackPath, gitRevision, version, testArgs, tmpDir, baseURL) [smoker] File "/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", line 727, in verifyUnpacked [smoker] 
checkJavadocpathFull('%s/build/docs' % unpackPath) [smoker] File "/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", line 908, in checkJavadocpathFull [smoker] raise RuntimeError('broken javadocs links found!') [smoker] RuntimeError: broken javadocs links found! BUILD FAILED /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/build.xml:622: exec returned: 1 Total time: 172 minutes 39 seconds Build step 'Invoke Ant' marked build as failure Email was triggered for: Failure - Any Sending email for trigger: Failure - Any
[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 222 - Still Failing!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/222/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC 2 tests failed. FAILED: org.apache.lucene.TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([D853AFC32C37680:8A04875136E30C84]:0) at org.junit.Assert.fail(Assert.java:92) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertTrue(Assert.java:54) at org.apache.lucene.TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler(TestMergeSchedulerExternal.java:147) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI Error Message: Error from server at http://127.0.0.1:53192/solr/awhollynewcollection_0_shard2_replica_n2: ClusterState says we are the leader (http://127.0.0.1:53192/solr/awhollynewcollection_0_shard2_replica_n2), but locally we don't think so. Request came from null Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://127.0.0.1:53192/solr/awhollynewcollection_0_shard2_replica_n2: ClusterState says we are the leader
[jira] [Commented] (SOLR-10078) "Unknown query type:org.apache.lucene.search.MatchNoDocsQuery" error with Solr v6.3
[ https://issues.apache.org/jira/browse/SOLR-10078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187898#comment-16187898 ] Bjarke Mortensen commented on SOLR-10078: - When https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a3fc7ef was committed, a case for MatchNoDocsQuery was made in ComplexPhraseQuery.rewrite: {{} else if (qc instanceof MatchNoDocsQuery) {}} Will it be needed to make the same case in ComplexPhraseQuery.addComplexPhraseClause when the child clauses are iterated? > "Unknown query type:org.apache.lucene.search.MatchNoDocsQuery" error with > Solr v6.3 > --- > > Key: SOLR-10078 > URL: https://issues.apache.org/jira/browse/SOLR-10078 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.3 >Reporter: Andy Tran >Priority: Minor > > With Solr v6.3, when I issue this query: > http://localhost:8983/solr/BestBuy/select?wt=json=10={!complexphrase%20inOrder=false}_text_:%22maytag~%20(refri~%20OR%20refri*)%20%22=id=true=false=60=nameX,shortDescription,longDescription,artistName,type,manufacturer,department > I get this error in the JSON response: > * > { > "responseHeader": { > "zkConnected": true, > "status": 500, > "QTime": 8, > "params": { > "q": "{!complexphrase inOrder=false}_text_:\"maytag~ (refri~ OR refri*) > \"", > "hl": "true", > "hl.preserveMulti": "false", > "fl": "id", > "hl.fragsize": "60", > "hl.fl": > "nameX,shortDescription,longDescription,artistName,type,manufacturer,department", > "rows": "10", > "wt": "json" > } > }, > "response": { > "numFound": 2, > "start": 0, > "docs": [ > { > "id": "5411379" > }, > { > "id": "5411404" > } > ] > }, > "error": { > "msg": "Unknown query type:org.apache.lucene.search.MatchNoDocsQuery", > "trace": "java.lang.IllegalArgumentException: Unknown query > type:org.apache.lucene.search.MatchNoDocsQuery\n\tat > 
org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser$ComplexPhraseQuery.addComplexPhraseClause(ComplexPhraseQueryParser.java:388)\n\tat > > org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser$ComplexPhraseQuery.rewrite(ComplexPhraseQueryParser.java:289)\n\tat > > org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:230)\n\tat > > org.apache.lucene.search.highlight.WeightedSpanTermExtractor.getWeightedSpanTerms(WeightedSpanTermExtractor.java:522)\n\tat > > org.apache.lucene.search.highlight.QueryScorer.initExtractor(QueryScorer.java:218)\n\tat > > org.apache.lucene.search.highlight.QueryScorer.init(QueryScorer.java:186)\n\tat > > org.apache.lucene.search.highlight.Highlighter.getBestTextFragments(Highlighter.java:195)\n\tat > > org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingByHighlighter(DefaultSolrHighlighter.java:602)\n\tat > > org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingOfField(DefaultSolrHighlighter.java:448)\n\tat > > org.apache.solr.highlight.DefaultSolrHighlighter.doHighlighting(DefaultSolrHighlighter.java:410)\n\tat > > org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:141)\n\tat > > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)\n\tat > > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:153)\n\tat > org.apache.solr.core.SolrCore.execute(SolrCore.java:2213)\n\tat > org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)\n\tat > org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)\n\tat > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:303)\n\tat > > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)\n\tat > > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)\n\tat > > 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)\n\tat > > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat > > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat > > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat > > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)\n\tat > > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)\n\tat > > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat > >
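For context, the kind of guard the comment above asks about would look roughly like the fragment below inside addComplexPhraseClause's iteration over child clauses. This is a hypothetical sketch of the shape of the fix, not the actual parser code or a committed patch:

```java
// Hypothetical fragment: while iterating the rewritten child clauses,
// a MatchNoDocsQuery child (e.g. a fuzzy term that expanded to nothing)
// is skipped instead of falling through to the
// "Unknown query type" IllegalArgumentException.
if (qc instanceof MatchNoDocsQuery) {
  // contributes no matches; nothing to add to the span clause
} else if (qc instanceof TermQuery) {
  // existing TermQuery handling ...
}
```

The stack trace in the report shows the exception being raised from addComplexPhraseClause during highlighting, which is why rewrite() alone handling MatchNoDocsQuery was not sufficient for this query.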
[jira] [Updated] (SOLR-11425) SolrClientBuilder does not allow infinite timeout (value 0)
[ https://issues.apache.org/jira/browse/SOLR-11425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Szantai-Kis updated SOLR-11425: - Attachment: SOLR-11425.patch > SolrClientBuilder does not allow infinite timeout (value 0) > --- > > Key: SOLR-11425 > URL: https://issues.apache.org/jira/browse/SOLR-11425 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.0 >Reporter: Peter Szantai-Kis > Attachments: SOLR-11425.patch > > > [org.apache.solr.client.solrj.impl.SolrClientBuilder#withConnectionTimeout|https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/client/solrj/impl/SolrClientBuilder.java#L53] > does not allow to set the value of 0 which means infinite timeout, but > [RequestConfig|https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/client/config/RequestConfig.html#getConnectTimeout()] > where it will be used have the option to do so. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
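The argument check the issue describes can be illustrated with a minimal standalone validator: HttpClient's RequestConfig treats a timeout of 0 as infinite, so the builder should reject only negative values. The class and method names below are illustrative, not the actual SolrClientBuilder code or the attached patch.

```java
// Standalone illustration of the corrected check: 0 means "infinite
// timeout" downstream (per HttpClient's RequestConfig semantics), so
// only negative values are rejected.
public class TimeoutCheck {
  public static int validateTimeoutMillis(int millis) {
    if (millis < 0) {
      throw new IllegalArgumentException("Timeout must be non-negative; 0 means infinite");
    }
    return millis;
  }

  public static void main(String[] args) {
    System.out.println(validateTimeoutMillis(0));     // prints 0 (infinite timeout allowed)
    System.out.println(validateTimeoutMillis(15000)); // prints 15000
  }
}
```

A check written as `millis <= 0` would be the bug described: it makes the one value with special downstream meaning unreachable through the builder.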
[jira] [Comment Edited] (LUCENE-7974) Add N-dimensional FloatPoint K-nearest-neighbor implementation
[ https://issues.apache.org/jira/browse/LUCENE-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187877#comment-16187877 ] Peter Szantai-Kis edited comment on LUCENE-7974 at 10/2/17 11:25 AM: - Hi [~steve_rowe], "ant precommit" build seems to be failing for me on a -document-lint step. The line that seems suspicious: [FloatPointNearestNeighbor.java#206|https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=blob;f=lucene/sandbox/src/java/org/apache/lucene/document/FloatPointNearestNeighbor.java;h=d3360a838c9733bc41ab30e37e55b1cfddf5509a;hb=d52564c#l206] https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/532/console {code:title=failure message} -documentation-lint: [echo] checking for broken html... [jtidy] Checking for broken html (such as invalid tags)... [delete] Deleting directory ~/dev/src/lucene-solr/lucene/build/jtidy_tmp [echo] Checking for broken links... [exec] [exec] Crawl/parse... [exec] [exec] Verify... [exec] [exec] file:///build/docs/sandbox/org/apache/lucene/document/FloatPointNearestNeighbor.html [exec] BROKEN LINK: file:///build/docs/core/org/apache/lucene/document/FloatPointNearestNeighbor.NearestHit.html [exec] BROKEN LINK: file:///build/docs/core/org/apache/lucene/document/FloatPointNearestNeighbor.NearestHit.html [exec] [exec] Broken javadocs links were found! Common root causes: [exec] * A typo of some sort for manually created links. [exec] * Public methods referencing non-public classes in their signature. BUILD FAILED {code} was (Author: szantaikis): Hi [~steve_rowe], "ant precommit" build seems to be failing for me on a -document-lint step. The line that seems suspicious: [FloatPointNearestNeighbor.java#206|https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=blob;f=lucene/sandbox/src/java/org/apache/lucene/document/FloatPointNearestNeighbor.java;h=d3360a838c9733bc41ab30e37e55b1cfddf5509a;hb=d52564c#l206] {code:title=failure message} -documentation-lint: [echo] checking for broken html... 
[jtidy] Checking for broken html (such as invalid tags)... [delete] Deleting directory ~/dev/src/lucene-solr/lucene/build/jtidy_tmp [echo] Checking for broken links... [exec] [exec] Crawl/parse... [exec] [exec] Verify... [exec] [exec] file:///build/docs/sandbox/org/apache/lucene/document/FloatPointNearestNeighbor.html [exec] BROKEN LINK: file:///build/docs/core/org/apache/lucene/document/FloatPointNearestNeighbor.NearestHit.html [exec] BROKEN LINK: file:///build/docs/core/org/apache/lucene/document/FloatPointNearestNeighbor.NearestHit.html [exec] [exec] Broken javadocs links were found! Common root causes: [exec] * A typo of some sort for manually created links. [exec] * Public methods referencing non-public classes in their signature. BUILD FAILED {code} > Add N-dimensional FloatPoint K-nearest-neighbor implementation > -- > > Key: LUCENE-7974 > URL: https://issues.apache.org/jira/browse/LUCENE-7974 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/sandbox >Reporter: Steve Rowe >Assignee: Steve Rowe >Priority: Minor > Fix For: 7.1, master (8.0) > > Attachments: LUCENE-7974.patch > > > From LUCENE-7069: > {quote} > KD trees (used by Lucene's new dimensional points) excel at finding "nearest > neighbors" to a given query point ... I think we should add this to Lucene's > sandbox > [...] > It could also be generalized to more than 2 dimensions, but for now I'm > making the class package private in sandbox for just the geo2d (lat/lon) use > case. > {quote} > This issue is to generalize {{LatLonPoint.nearest()}} to more than 2 > dimensions. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7974) Add N-dimensional FloatPoint K-nearest-neighbor implementation
[ https://issues.apache.org/jira/browse/LUCENE-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187877#comment-16187877 ] Peter Szantai-Kis commented on LUCENE-7974: --- Hi [~steve_rowe], "ant precommit" build seems to be failing for me on a -document-lint step. The line that seems suspicious: [FloatPointNearestNeighbor.java#206|https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=blob;f=lucene/sandbox/src/java/org/apache/lucene/document/FloatPointNearestNeighbor.java;h=d3360a838c9733bc41ab30e37e55b1cfddf5509a;hb=d52564c#l206] {code:title=failure message} -documentation-lint: [echo] checking for broken html... [jtidy] Checking for broken html (such as invalid tags)... [delete] Deleting directory ~/dev/src/lucene-solr/lucene/build/jtidy_tmp [echo] Checking for broken links... [exec] [exec] Crawl/parse... [exec] [exec] Verify... [exec] [exec] file:///build/docs/sandbox/org/apache/lucene/document/FloatPointNearestNeighbor.html [exec] BROKEN LINK: file:///build/docs/core/org/apache/lucene/document/FloatPointNearestNeighbor.NearestHit.html [exec] BROKEN LINK: file:///build/docs/core/org/apache/lucene/document/FloatPointNearestNeighbor.NearestHit.html [exec] [exec] Broken javadocs links were found! Common root causes: [exec] * A typo of some sort for manually created links. [exec] * Public methods referencing non-public classes in their signature. BUILD FAILED {code} > Add N-dimensional FloatPoint K-nearest-neighbor implementation > -- > > Key: LUCENE-7974 > URL: https://issues.apache.org/jira/browse/LUCENE-7974 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/sandbox >Reporter: Steve Rowe >Assignee: Steve Rowe >Priority: Minor > Fix For: 7.1, master (8.0) > > Attachments: LUCENE-7974.patch > > > From LUCENE-7069: > {quote} > KD trees (used by Lucene's new dimensional points) excel at finding "nearest > neighbors" to a given query point ... I think we should add this to Lucene's > sandbox > [...] 
> It could also be generalized to more than 2 dimensions, but for now I'm > making the class package private in sandbox for just the geo2d (lat/lon) use > case. > {quote} > This issue is to generalize {{LatLonPoint.nearest()}} to more than 2 > dimensions.
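The second root cause in the lint's own hint list matches this failure mode. As a hypothetical illustration (all names here are stand-ins, not the actual FloatPointNearestNeighbor code): javadoc generates no page for a non-public nested class, so any hand-written or signature-derived link to it dangles.

```java
// Hypothetical illustration (stand-in names): a public method whose
// signature references a non-public nested class. javadoc emits no
// NearestHit.html page, so a link to it points at a file that does not
// exist and the link checker reports BROKEN LINK.
public class NearestNeighborSketch {

  // Package-private: javadoc (at default visibility settings) generates
  // no standalone page for this class.
  static class NearestHit {
    int docId;
    float distanceSquared;
  }

  /**
   * Returns placeholders for the {@code k} nearest hits. Because the
   * return type is the non-public {@code NearestHit}, documentation
   * tooling can end up linking to a page that was never generated.
   */
  public static NearestHit[] nearest(int k) {
    return new NearestHit[k];
  }
}
```

Making the nested class public, or keeping references to it out of public signatures and `{@link}` tags, would be the usual ways to satisfy the checker.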
[jira] [Comment Edited] (SOLR-10078) "Unknown query type:org.apache.lucene.search.MatchNoDocsQuery" error with Solr v6.3
[ https://issues.apache.org/jira/browse/SOLR-10078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187870#comment-16187870 ] Bjarke Mortensen edited comment on SOLR-10078 at 10/2/17 11:18 AM: --- I have a similar error on Solr 6.6.1 As the stack trace shows it stems from ComplexPhrase and highlighting. { "responseHeader":{ "status":500, "QTime":4, "params":{ "q":"_query_:\"{!complexphrase inOrder=false df=all_text}\\\"patien* (alarm* OR nødkald*)\\\"~5\" OR sikringsanlæg*", "hl":"on", "indent":"on", "fl":"content_hash", "fq":["(document_date:[2016-11-09T00:00:00Z TO *])", "document_type:(minutes OR addendum OR agenda OR budget OR financial_report)"], "wt":"json"}}, "response":{"numFound":106,"start":0,"docs":[ { "content_hash":"762a3e39abb55ee1e554c30caaf094a325c42d98"}, { "content_hash":"616c10b300e4537226375a78e8f8ecf789aeb6ac"}, { "content_hash":"d466a7d69d3e7bca336f4d20584d1193005874f7"}, { "content_hash":"918567c6917d97061e20f6df1d205e69202e941b"}, { "content_hash":"c321a91bb9bf2143eb63a10b7492b6fc19be58cc"}, { "content_hash":"a56fb74298b10930f4895f43c7c11dcf83e9a9e7"}, { "content_hash":"6ffdd5476907e87fdc62a2a33f1cd1fd1823cc83"}, { "content_hash":"60a1d8d6a9f54b62a69af41cb951ab90884bd48a"}, { "content_hash":"7e9b19bcad0c0a88b89286e74ad94f202d67e8ca"}, { "content_hash":"18b25f0a032d848d92918df7db0e3fac98576a58"}] }, "error":{ "msg":"Unknown query type:org.apache.lucene.search.MatchNoDocsQuery", "trace":"java.lang.IllegalArgumentException: Unknown query type:org.apache.lucene.search.MatchNoDocsQuery\n\tat org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser$ComplexPhraseQuery.addComplexPhraseClause(ComplexPhraseQueryParser.java:403)\n\tat org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser$ComplexPhraseQuery.rewrite(ComplexPhraseQueryParser.java:296)\n\tat org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:230)\n\tat 
org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:113)\n\tat org.apache.lucene.search.highlight.WeightedSpanTermExtractor.getWeightedSpanTerms(WeightedSpanTermExtractor.java:522)\n\tat org.apache.lucene.search.highlight.QueryScorer.initExtractor(QueryScorer.java:218)\n\tat org.apache.lucene.search.highlight.QueryScorer.init(QueryScorer.java:186)\n\tat org.apache.lucene.search.highlight.Highlighter.getBestTextFragments(Highlighter.java:195)\n\tat org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingByHighlighter(DefaultSolrHighlighter.java:612)\n\tat org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingOfField(DefaultSolrHighlighter.java:456)\n\tat org.apache.solr.highlight.DefaultSolrHighlighter.doHighlighting(DefaultSolrHighlighter.java:418)\n\tat org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:182)\n\tat org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:296)\n\tat org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)\n\tat org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)\n\tat org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)\n\tat org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)\n\tat org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)\n\tat org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)\n\tat org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)\n\tat org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)\n\tat org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)\n\tat org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
[jira] [Commented] (SOLR-10078) "Unknown query type:org.apache.lucene.search.MatchNoDocsQuery" error with Solr v6.3
[ https://issues.apache.org/jira/browse/SOLR-10078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187870#comment-16187870 ] Bjarke Mortensen commented on SOLR-10078: - I have a similar error on Solr 6.6.1 As the stack trace shows it stems from ComplexPhrase and highlighting. { "responseHeader":{ "status":500, "QTime":9, "params":{ "q":"vowifi OR personalealarm* OR personalealarmer OR personsikring* OR demenssikring* OR kaldeanlæg* OR kaldeanlægget OR kaldesystem* OR kaldesystemer OR beboerkald* OR dect OR overfaldsalarm* OR overfaldsalarmer OR tryghedsalarm* OR tryghedsalarmer OR ascom* OR condig* OR nødkaldeanlæg* OR nødkaldsanlægget OR nødkaldsløsning* OR nødkaldsløsningen OR patientkald* OR _query_:\"{!complexphrase inOrder=false df=all_text}\\\"patien* (alarm* OR nødkald*)\\\"~5\" OR sikringsanlæg* OR _query_:\"{!complexphrase inOrder=false df=all_text}\\\"gps (hospital* OR syghus*)\\\"~10\" OR positioneringsteknologi* OR kortidscenter* OR gulvsensor* OR _query_:\"{!complexphrase inOrder=false df=all_text}\\\"sensor* OR gulv*\\\"~1\" OR _query_:\"{!complexphrase inOrder=false df=all_text}\\\"patientsikkerhed* (fok?s OR initiativ* OR tiltag OR aktivitete* OR projek*)\\\"~3\" OR _query_:\"{!complexphrase inOrder=false df=all_text}\\\"(enheder* OR modtagecentral* OR callcenter* OR beskyttels* OR tryghe* OR sikkerhe*) (udebesøg* OR hjemmebesøg* OR udekøre*)\\\"~3\" OR sundhedsplatform* OR sundhedsplatformen OR _query_:\"{!complexphrase inOrder=false df=all_text}\\\"(nødkald* OR alarm* OR alarmer) (plejecenter* OR ældrebolig* OR demenscenter* OR plejehjem* OR patien* OR demens* OR ældr* OR gennemgan* OR udskif* OR nedslid* OR gamme* OR utids*)\\\"~5\" OR ( _query_:\"{!complexphrase inOrder=false df=all_text}\\\"nødkald* plejebolig*\\\"~10\" NOT taastru*)", "hl":"on", "indent":"on", "fl":"content_hash", "fq":["(document_date:[2016-11-09T00:00:00Z TO *])", "document_type:(minutes OR addendum OR agenda OR budget OR financial_report)"], 
"hl.fl":"title,full_text,pcv_text,authority,department", "wt":"json"}}, "response":{"numFound":1860,"start":0,"docs":[ { "content_hash":"c41c26deefd28edf8f588d9ee6b3cab9970e4451"}, { "content_hash":"cbfe965b1910d7d94ef01b7222722adb1d21fe44"}, { "content_hash":"426d417d4f4fe6c7f8955dd2291bd9670e577b57"}, { "content_hash":"00ad8409210d8f851cce43a544125adfd02d0031"}, { "content_hash":"7a71fe577c7f32df52841da3f1234a93f9facee5"}, { "content_hash":"baf517f930a2870ec31069471064ad217928dd9e"}, { "content_hash":"d3b43e1bdf9b310e2ba7e0a42a1e23a033919ade"}, { "content_hash":"9974c198550cf725f8ac405565d7375d897cc282"}, { "content_hash":"b4c1b8aa925581d362fd17f831d472f216fe9bc9"}, { "content_hash":"1ad959998e9e66b24bfaafb9f6a89b3f514377d7"}] }, "error":{ "msg":"Unknown query type:org.apache.lucene.search.MatchNoDocsQuery", "trace":"java.lang.IllegalArgumentException: Unknown query type:org.apache.lucene.search.MatchNoDocsQuery\n\tat org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser$ComplexPhraseQuery.addComplexPhraseClause(ComplexPhraseQueryParser.java:403)\n\tat org.apache.lucene.queryparser.complexPhrase.ComplexPhraseQueryParser$ComplexPhraseQuery.rewrite(ComplexPhraseQueryParser.java:296)\n\tat org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:230)\n\tat org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:113)\n\tat org.apache.lucene.search.highlight.WeightedSpanTermExtractor.getWeightedSpanTerms(WeightedSpanTermExtractor.java:522)\n\tat org.apache.lucene.search.highlight.QueryScorer.initExtractor(QueryScorer.java:218)\n\tat org.apache.lucene.search.highlight.QueryScorer.init(QueryScorer.java:186)\n\tat org.apache.lucene.search.highlight.Highlighter.getBestTextFragments(Highlighter.java:195)\n\tat org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingByHighlighter(DefaultSolrHighlighter.java:612)\n\tat 
org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingOfField(DefaultSolrHighlighter.java:456)\n\tat org.apache.solr.highlight.DefaultSolrHighlighter.doHighlighting(DefaultSolrHighlighter.java:418)\n\tat org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:182)\n\tat org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:296)\n\tat org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)\n\tat org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)\n\tat org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)\n\tat org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)\n\tat
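A hedged sketch of how the offending query type can arise (assumes lucene-core on the classpath; API as of Lucene 6.x/7.x, not executed here): a wildcard such as "syghus*" that expands to zero indexed terms rewrites, under a boolean rewrite method, to an empty BooleanQuery and finally to MatchNoDocsQuery, the type that ComplexPhraseQuery.addComplexPhraseClause does not recognize when the highlighter forces the rewrite.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MultiTermQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.WildcardQuery;
import org.apache.lucene.store.RAMDirectory;

public class MatchNoDocsRepro {
  public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
    Document d = new Document();
    d.add(new TextField("all_text", "patient alarm hospital", Field.Store.NO));
    w.addDocument(d);
    w.close();

    DirectoryReader reader = DirectoryReader.open(dir);
    // "syghus*" matches no indexed terms in this toy index.
    WildcardQuery wq = new WildcardQuery(new Term("all_text", "syghus*"));
    wq.setRewriteMethod(MultiTermQuery.SCORING_BOOLEAN_REWRITE);
    // IndexSearcher.rewrite applies rewrite() to a fixpoint; with no
    // matching terms the result should collapse to MatchNoDocsQuery.
    Query rewritten = new IndexSearcher(reader).rewrite(wq);
    System.out.println(rewritten.getClass().getSimpleName());
    reader.close();
  }
}
```

A fix along the lines of skipping MatchNoDocsQuery children in addComplexPhraseClause, rather than throwing, would presumably let highlighting proceed, since a clause that matches nothing contributes no spans.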
[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 533 - Still unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/533/ Java: 32bit/jdk1.8.0_144 -client -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.cloud.TestTlogReplica.testRecovery Error Message: Can not find doc 8 in https://127.0.0.1:44305/solr Stack Trace: java.lang.AssertionError: Can not find doc 8 in https://127.0.0.1:44305/solr at __randomizedtesting.SeedInfo.seed([4E529F8FBFD45403:8FA2E62392849EA4]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:861) at org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:582) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 13425 lines...] [junit4] Suite: org.apache.solr.cloud.TestTlogReplica [junit4] 2> Creating dataDir:
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4202 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4202/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC All tests passed Build Log: [...truncated 51045 lines...] -documentation-lint: [echo] checking for broken html... [jtidy] Checking for broken html (such as invalid tags)... [delete] Deleting directory /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/jtidy_tmp [echo] Checking for broken links... [exec] [exec] Crawl/parse... [exec] [exec] Verify... [exec] [exec] file:///build/docs/sandbox/org/apache/lucene/document/FloatPointNearestNeighbor.html [exec] BROKEN LINK: file:///build/docs/core/org/apache/lucene/document/FloatPointNearestNeighbor.NearestHit.html [exec] BROKEN LINK: file:///build/docs/core/org/apache/lucene/document/FloatPointNearestNeighbor.NearestHit.html [exec] [exec] Broken javadocs links were found! Common root causes: [exec] * A typo of some sort for manually created links. [exec] * Public methods referencing non-public classes in their signature. 
BUILD FAILED /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/build.xml:826: The following error occurred while executing this line: /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/build.xml:101: The following error occurred while executing this line: /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build.xml:142: The following error occurred while executing this line: /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build.xml:155: The following error occurred while executing this line: /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/common-build.xml:2570: exec returned: 1 Total time: 87 minutes 9 seconds Build step 'Invoke Ant' marked build as failure Archiving artifacts [WARNINGS] Skipping publisher since build result is FAILURE Recording test results Email was triggered for: Failure - Any Sending email for trigger: Failure - Any - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Welcome Hrishikesh Gadre as Lucene/Solr committer
Welcome Hrishikesh! On Mon, Oct 2, 2017 at 02:34, Koji Sekiguchi wrote: > Welcome Hrishikesh! > > Koji > > On 2017/09/30 2:23, Yonik Seeley wrote: > > Hi All, > > > > Please join me in welcoming Hrishikesh Gadre as the latest Lucene/Solr > > committer. > > Hrishikesh, it's tradition for you to introduce yourself with a brief > bio. > > > > Congrats and Welcome! > > -Yonik > > > > - > > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > > For additional commands, e-mail: dev-h...@lucene.apache.org > > > > > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > >
Updating doc values field
Hello! We are using Solr and sometimes perform partial updates of doc values fields. These take significant time because each update results in the recreation of the *.dvd and *.dvm files. As far as I understand, the API of the DV codecs by its nature implies recreating the whole file on disk. (By API I mean DocValuesConsumer and DocValuesProducer.) So my question: has anyone tried to support updating only the changed values instead of rewriting the whole *.dvd file? That would perform much faster. Are there ideological or technical obstacles, or is it just the desire to keep segment files immutable? Looking forward to your replies.
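For reference, Lucene's IndexWriter does expose a docvalues update path that avoids re-indexing the document, though (as far as I know) it still rewrites the updated field's data for the segment into a new docvalues generation file rather than patching values in place, so segment files themselves stay immutable. A sketch, assuming lucene-core (and the analyzers module) on the classpath; API as of Lucene 7.x:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.RAMDirectory;

public class DocValuesUpdateSketch {
  public static void main(String[] args) throws Exception {
    RAMDirectory dir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));

    Document doc = new Document();
    doc.add(new StringField("id", "doc1", Field.Store.NO));
    doc.add(new NumericDocValuesField("price", 100L));
    writer.addDocument(doc);
    writer.commit();

    // Updates only the "price" docvalues of the matching doc. The
    // existing segment files are untouched: the update is flushed as a
    // new docvalues generation, but the field's data for that segment
    // is still written out whole, which matches the cost you observed.
    writer.updateNumericDocValue(new Term("id", "doc1"), "price", 200L);
    writer.commit();
    writer.close();
  }
}
```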
[jira] [Created] (SOLR-11425) SolrClientBuilder does not allow infinite timeout (value 0)
Peter Szantai-Kis created SOLR-11425: Summary: SolrClientBuilder does not allow infinite timeout (value 0) Key: SOLR-11425 URL: https://issues.apache.org/jira/browse/SOLR-11425 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: SolrJ Affects Versions: 7.0 Reporter: Peter Szantai-Kis [org.apache.solr.client.solrj.impl.SolrClientBuilder#withConnectionTimeout|https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/client/solrj/impl/SolrClientBuilder.java#L53] does not allow setting the value 0, which means an infinite timeout, even though [RequestConfig|https://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/client/config/RequestConfig.html#getConnectTimeout()], where the value is ultimately used, does allow it.
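The fix amounts to relaxing the builder's argument check so that 0 passes through to HttpClient, where connectTimeout == 0 is documented as "wait indefinitely". A minimal sketch of the before/after validation; the class and method names below are hypothetical stand-ins, not the actual SolrClientBuilder code:

```java
// Hypothetical stand-in for a SolrJ-style client builder, showing the
// validation change the issue asks for.
public class TimeoutBuilderSketch {
  private int connectionTimeoutMillis = 15000; // some library default

  // Before the fix: rejects 0 even though HttpClient's RequestConfig
  // treats connectTimeout == 0 as an infinite timeout.
  public TimeoutBuilderSketch withConnectionTimeoutStrict(int millis) {
    if (millis <= 0) {
      throw new IllegalArgumentException("timeout must be > 0");
    }
    this.connectionTimeoutMillis = millis;
    return this;
  }

  // After the fix: allow 0 (infinite) and reject only negative values.
  public TimeoutBuilderSketch withConnectionTimeout(int millis) {
    if (millis < 0) {
      throw new IllegalArgumentException("timeout must be >= 0; 0 means infinite");
    }
    this.connectionTimeoutMillis = millis;
    return this;
  }

  public int getConnectionTimeout() {
    return connectionTimeoutMillis;
  }
}
```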
[JENKINS] Lucene-Solr-7.0-Linux (64bit/jdk-9) - Build # 406 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.0-Linux/406/ Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseG1GC --illegal-access=deny 1 tests failed. FAILED: org.apache.solr.cloud.TestTlogReplica.testRecovery Error Message: Can not find doc 3 in http://127.0.0.1:45143/solr Stack Trace: java.lang.AssertionError: Can not find doc 3 in http://127.0.0.1:45143/solr at __randomizedtesting.SeedInfo.seed([D83AB919F1451FEB:19CAC0B5DC15D54C]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:868) at org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:559) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) Build Log: [...truncated 11466 lines...] [junit4] Suite: org.apache.solr.cloud.TestTlogReplica [junit4] 2> Creating dataDir:
[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_144) - Build # 226 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/226/ Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI Error Message: Error from server at http://127.0.0.1:62408/solr/awhollynewcollection_0: {"awhollynewcollection_0":7} Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:62408/solr/awhollynewcollection_0: {"awhollynewcollection_0":7} at __randomizedtesting.SeedInfo.seed([B36F42A0BB758B11:FB1A3614BD46A484]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at 
org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:460) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[JENKINS] Lucene-Solr-Tests-7.0 - Build # 142 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.0/142/ 2 tests failed. FAILED: org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter Error Message: Could not load collection from ZK: withShardField Stack Trace: org.apache.solr.common.SolrException: Could not load collection from ZK: withShardField at __randomizedtesting.SeedInfo.seed([2E72F03359EB9CE:57B7C7919967763E]:0) at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1114) at org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:647) at org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1227) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:827) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233) at org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter(CustomCollectionTest.java:141) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at
[jira] [Commented] (LUCENE-7966) build mr-jar and use some java 9 methods if available
[ https://issues.apache.org/jira/browse/LUCENE-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187782#comment-16187782 ] Adrien Grand commented on LUCENE-7966: -- I had a closer look at the branch and like the patching approach. Should we modify the smoke tester at the same time to enforce that both Java 8 and 9 are tested? > build mr-jar and use some java 9 methods if available > - > > Key: LUCENE-7966 > URL: https://issues.apache.org/jira/browse/LUCENE-7966 > Project: Lucene - Core > Issue Type: Improvement > Components: core/other, general/build >Reporter: Robert Muir > Labels: Java9 > Attachments: LUCENE-7966.patch, LUCENE-7966.patch, LUCENE-7966.patch, > LUCENE-7966.patch, LUCENE-7966.patch > > > See background: http://openjdk.java.net/jeps/238 > It would be nice to use some of the newer array methods and range checking > methods in java 9 for example, without waiting for lucene 10 or something. If > we build an MR-jar, we can start migrating our code to use java 9 methods > right now, it will use optimized methods from java 9 when that's available, > otherwise fall back to java 8 code. > This patch adds: > {code} > Objects.checkIndex(int,int) > Objects.checkFromToIndex(int,int,int) > Objects.checkFromIndexSize(int,int,int) > Arrays.mismatch(byte[],int,int,byte[],int,int) > Arrays.compareUnsigned(byte[],int,int,byte[],int,int) > Arrays.equals(byte[],int,int,byte[],int,int) > // did not add char/int/long/short/etc but of course it's possible if needed > {code} > It sets these up in {{org.apache.lucene.future}} as 1-1 mappings to java > methods. This way, we can simply directly replace call sites with java 9 > methods when java 9 is a minimum. Simple 1-1 mappings mean also that we only > have to worry about testing that our java 8 fallback methods work. 
> I found that many of the current byte array methods today are willy-nilly and > very lenient for example, passing invalid offsets at times and relying on > compare methods not throwing exceptions, etc. I fixed all the instances in > core/codecs but have not looked at the problems with AnalyzingSuggester. Also > SimpleText still uses a silly method in ArrayUtil in similar crazy way, have > not removed that one yet. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
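The 1-1 mapping idea above can be illustrated with a plain Java 8 fallback for one of the listed methods; the class layout below is an assumption for the sketch, not necessarily what the patch actually puts under {{org.apache.lucene.future}}:

```java
// Sketch of a Java 8 fallback for one of the 1-1 mappings the issue describes.
// The class name is illustrative; the real patch may lay things out differently.
public final class FutureObjects {
  private FutureObjects() {}

  // Mirrors Java 9's java.util.Objects#checkIndex(int, int): returns the
  // index if 0 <= index < length, otherwise throws IndexOutOfBoundsException.
  public static int checkIndex(int index, int length) {
    if (index < 0 || index >= length) {
      throw new IndexOutOfBoundsException(
          "Index " + index + " out of bounds for length " + length);
    }
    return index;
  }
}
```

In the MR-jar build, the Java 9 variant of this class would simply delegate to Objects.checkIndex, so call sites never change and only the Java 8 fallback needs dedicated tests.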
[GitHub] lucene-solr pull request #255: how to use Learning To Rank
GitHub user ZivHsu opened a pull request: https://github.com/apache/lucene-solr/pull/255 how to use Learning To Rank I'm using Solr 6.1 and I need to add ranking based on a field value. I found that Learning To Rank can do this. Can I do it, and how? Can this be set in the config file per collection? I'm not using SolrCloud mode. You can merge this pull request into a Git repository by running: $ git pull https://github.com/apache/lucene-solr master Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/255.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #255 ---
[jira] [Commented] (SOLR-7759) DebugComponent's explain should be implemented as a distributed query
[ https://issues.apache.org/jira/browse/SOLR-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187761#comment-16187761 ] Alessandro Benedetti commented on SOLR-7759: Let me post here what I noticed during my investigation: 1) The real score and the debug score are not aligned. When we issue a shard request with purpose '16388' (GET_TOP_IDS, SET_TERM_STATS), we correctly pass the global collection stats and compute the real score. When we issue a shard request with purpose '320' (GET_FIELDS, GET_DEBUG), we don't pass the global collection stats, so the debug score calculation and rendering do not match the real score. This can be really confusing and not easy to spot. This should help you reproduce the problem. Regards > DebugComponent's explain should be implemented as a distributed query > - > > Key: SOLR-7759 > URL: https://issues.apache.org/jira/browse/SOLR-7759 > Project: Solr > Issue Type: Bug >Reporter: Varun Thacker > Attachments: SOLR_7759.patch > > > Currently when we use debugQuery to see the explanation of the matched > documents, the query fired to get the statistics for the matched documents is > not a distributed query. > This is a problem when using distributed idf. The actual documents are being > scored using the global stats and not per shard stats, but the explain will > show us the score by taking into account the stats from the shard where the > document belongs. > We should try to implement the explain query as a distributed request so that > the scores match the actual document scores.
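The purpose values quoted in the comment above are bitmasks over Solr's ShardRequest purpose flags. A small sketch of how '16388' and '320' decode; the flag values below are assumed to match the ShardRequest constants and should be double-checked against the source for your Solr version:

```java
// Sketch: decoding the shard-request purpose bitmasks discussed above.
// Flag values are an assumption mirroring Solr's ShardRequest constants.
public final class PurposeDecoder {
  static final int GET_TOP_IDS = 0x4;
  static final int GET_FIELDS = 0x40;
  static final int GET_DEBUG = 0x100;
  static final int SET_TERM_STATS = 0x4000;

  // Returns a comma-separated list of the flags set in the purpose bitmask.
  static String decode(int purpose) {
    StringBuilder sb = new StringBuilder();
    if ((purpose & GET_TOP_IDS) != 0) sb.append("GET_TOP_IDS,");
    if ((purpose & GET_FIELDS) != 0) sb.append("GET_FIELDS,");
    if ((purpose & GET_DEBUG) != 0) sb.append("GET_DEBUG,");
    if ((purpose & SET_TERM_STATS) != 0) sb.append("SET_TERM_STATS,");
    return sb.length() == 0 ? "" : sb.substring(0, sb.length() - 1);
  }
}
```

Under these values, 16388 = 0x4004 combines GET_TOP_IDS and SET_TERM_STATS, while 320 = 0x140 combines GET_FIELDS and GET_DEBUG, matching the two request types contrasted in the comment.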
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 20593 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20593/ Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) Thread[id=20321, name=searcherExecutor-5815-thread-1, state=WAITING, group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) Thread[id=20321, name=searcherExecutor-5815-thread-1, state=WAITING, group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) at __randomizedtesting.SeedInfo.seed([71B3730F6B051170]:0) FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores Error Message: There are still zombie 
threads that couldn't be terminated:1) Thread[id=20321, name=searcherExecutor-5815-thread-1, state=WAITING, group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated: 1) Thread[id=20321, name=searcherExecutor-5815-thread-1, state=WAITING, group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) at __randomizedtesting.SeedInfo.seed([71B3730F6B051170]:0) FAILED: org.apache.solr.core.TestLazyCores.testNoCommit Error Message: Exception during query Stack Trace: java.lang.RuntimeException: Exception during query at __randomizedtesting.SeedInfo.seed([71B3730F6B051170:AED3D2DEA02272D5]:0) at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:884) at org.apache.solr.core.TestLazyCores.check10(TestLazyCores.java:847) at 
org.apache.solr.core.TestLazyCores.testNoCommit(TestLazyCores.java:829) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at
[jira] [Commented] (SOLR-10811) Speed up MultipleAdditiveTreesModel by using QuickScorer algorithm
[ https://issues.apache.org/jira/browse/SOLR-10811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187746#comment-16187746 ] Yuki Yano commented on SOLR-10811: -- [~diegoceccarelli] Thank you for letting me know about the patent on QuickScorer! I'll keep paying attention to the progress of the patent. > Speed up MultipleAdditiveTreesModel by using QuickScorer algorithm > -- > > Key: SOLR-10811 > URL: https://issues.apache.org/jira/browse/SOLR-10811 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - LTR >Reporter: Yuki Yano >Priority: Minor > Attachments: quickscorer_model.pdf, SOLR-10811_master.patch, > SOLR-10811.patch > > > QuickScorer is an algorithm which can calculate multiple additive trees fast > by using bitvectors for detecting target leaves. > It was first published in SIGIR 2015 and won the best paper award of the > conference. > refs: > http://zola.di.unipi.it/rossano/wp-content/papercite-data/pdf/sigir15.pdf > We implemented QuickScorer as one of LTRScoringModel. > This model uses the same configuration as MultipleAdditiveTreesModel, thus it is > easy to replace the model. > Our experiments show our model can calculate scores about twice as fast as > MultipleAdditiveTreesModel.
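The bitvector idea behind QuickScorer can be sketched in a few lines. This is a toy illustration of the leaf-selection step from the paper, not Solr's LTR implementation: each tree keeps a bitvector of candidate exit leaves, every node whose feature test evaluates to false ANDs in a precomputed mask clearing the leaves behind its true branch, and the leftmost surviving bit is the exit leaf.

```java
// Toy sketch of QuickScorer's per-tree leaf selection (assumes <= 64 leaves,
// leaf i mapped to bit 63 - i). Masks would be precomputed offline per node.
public final class QuickScorerSketch {
  static int selectLeaf(long[] falseNodeMasks) {
    long candidates = ~0L;              // every leaf starts as a candidate
    for (long mask : falseNodeMasks) {
      candidates &= mask;               // rule out leaves behind failed tests
    }
    return Long.numberOfLeadingZeros(candidates); // leftmost set bit = exit leaf
  }
}
```

For example, with a single false node whose mask clears the two leftmost leaves (0x3FFFFFFFFFFFFFFF), the surviving leftmost bit identifies leaf 2. The speedup in the paper comes from processing all false nodes of all trees in feature-sorted order, so each document's score reduces to these cheap AND operations plus one lookup per tree.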
[jira] [Updated] (SOLR-10285) Reduce state messages when there are leader only shards
[ https://issues.apache.org/jira/browse/SOLR-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cao Manh Dat updated SOLR-10285: Attachment: SOLR-10285.patch Cleanup, will commit soon. > Reduce state messages when there are leader only shards > --- > > Key: SOLR-10285 > URL: https://issues.apache.org/jira/browse/SOLR-10285 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Cao Manh Dat > Attachments: SOLR-10285.patch, SOLR-10285.patch, SOLR-10285.patch > > > For shards which have 1 replica ( leader ) we know it doesn't need to recover > from anyone. We should short-circuit the recovery process in this case. > The motivation for this is that we will generate fewer state events and be > able to mark these replicas as active again without them needing to go into > the 'recovering' state. > We already short-circuit when you set {{-Dsolrcloud.skip.autorecovery=true}} > but that sys prop was meant for tests only. Extending this so the > code short-circuits when the core knows it's the only replica in the shard is > the motivation of the Jira.