[jira] [Commented] (LUCENE-7974) Add N-dimensional FloatPoint K-nearest-neighbor implementation
[ https://issues.apache.org/jira/browse/LUCENE-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16180346#comment-16180346 ] Adrien Grand commented on LUCENE-7974: -- OK, I see now. I (wrongly) thought it was only a way to make sure that the float cast did not round down. > Add N-dimensional FloatPoint K-nearest-neighbor implementation > -- > > Key: LUCENE-7974 > URL: https://issues.apache.org/jira/browse/LUCENE-7974 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/sandbox >Reporter: Steve Rowe >Assignee: Steve Rowe >Priority: Minor > Attachments: LUCENE-7974.patch > > > From LUCENE-7069: > {quote} > KD trees (used by Lucene's new dimensional points) excel at finding "nearest > neighbors" to a given query point ... I think we should add this to Lucene's > sandbox > [...] > It could also be generalized to more than 2 dimensions, but for now I'm > making the class package private in sandbox for just the geo2d (lat/lon) use > case. > {quote} > This issue is to generalize {{LatLonPoint.nearest()}} to more than 2 > dimensions. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9) - Build # 491 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/491/ Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseG1GC --illegal-access=deny 1 tests failed. FAILED: org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testParallelExecutorStream Error Message: Error from server at https://127.0.0.1:35689/solr/workQueue_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 HTTP ERROR: 404 Problem accessing /solr/workQueue_shard2_replica_n3/update. Reason: Can not find: /solr/workQueue_shard2_replica_n3/update (Powered by Jetty // 9.3.20.v20170531) Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:35689/solr/workQueue_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 HTTP ERROR: 404 Problem accessing /solr/workQueue_shard2_replica_n3/update. Reason: Can not find: /solr/workQueue_shard2_replica_n3/update (Powered by Jetty // 9.3.20.v20170531) at __randomizedtesting.SeedInfo.seed([5D8C555080DAB489:E09B2049B9F689D4]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:539) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:993) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233) at org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testParallelExecutorStream(StreamExpressionTest.java:7333) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoSha
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9) - Build # 20549 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20549/ Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseParallelGC --illegal-access=deny 1 tests failed. FAILED: org.apache.solr.cloud.TestAuthenticationFramework.testBasics Error Message: Error from server at http://127.0.0.1:43937/solr/testcollection_shard1_replica_n2: Expected mime type application/octet-stream but got text/html. Error 404 HTTP ERROR: 404 Problem accessing /solr/testcollection_shard1_replica_n2/update. Reason: Can not find: /solr/testcollection_shard1_replica_n2/update (Powered by Jetty // 9.3.20.v20170531) Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://127.0.0.1:43937/solr/testcollection_shard1_replica_n2: Expected mime type application/octet-stream but got text/html. Error 404 HTTP ERROR: 404 Problem accessing /solr/testcollection_shard1_replica_n2/update. Reason: Can not find: /solr/testcollection_shard1_replica_n2/update (Powered by Jetty // 9.3.20.v20170531) at __randomizedtesting.SeedInfo.seed([55B0E05E26E4E852:68684E721E0AB622]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:539) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:993) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233) at org.apache.solr.cloud.TestAuthenticationFramework.collectionCreateSearchDeleteTwice(TestAuthenticationFramework.java:126) at org.apache.solr.cloud.TestAuthenticationFramework.testBasics(TestAuthenticationFramework.java:74) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOn
[jira] [Comment Edited] (LUCENE-7974) Add N-dimensional FloatPoint K-nearest-neighbor implementation
[ https://issues.apache.org/jira/browse/LUCENE-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16180245#comment-16180245 ] Steve Rowe edited comment on LUCENE-7974 at 9/26/17 4:53 AM: - bq. I'm wondering whether the use of getMinDelta could be replaced with Math.nextUp/Math.nextDown? These do different things, and I'm not sure how to express one in terms of the other. Suggestions welcome :). {{getMinDelta}} calculates a fudge factor from the distance exponent reduced by (at most) 23, the number of bits in a float mantissa. This is necessary when the result of subtracting/adding the distance in a single dimension has an exponent that differs significantly from that of the distance value. Without this fudge factor (i.e. only subtracting/adding the distance), cells and values can be inappropriately judged as outside the search radius. By contrast, {{Math.nextUp}}/{{Math.nextDown}} produce adjacent values (i.e. the equivalent of incrementing/decrementing the mantissa value by one). was (Author: steve_rowe): bq. I'm wondering whether the use of getMinDelta could be replaced with Math.nextUp/Math.nextDown? These do different things, and I'm not sure how to express one in terms of the other. Suggestions welcome :). {{getMinDelta}} calculates a fudge factor from the distance exponent reduced by (at most) 23, the number of bits in a float mantissa. This is necessary when the result of subtracting/adding the distance in a single dimension has an exponent that differs significantly from the distance value. Without this fudge factor (i.e. only subtracting/adding the distance), cells and values can be inappropriately judged as outside the search radius. By contrast, {{Math.nextUp}}/{{Math.nextDown}} produce adjacent values (i.e. the equivalent of incrementing/decrementing the mantissa value by one). 
[jira] [Commented] (LUCENE-7974) Add N-dimensional FloatPoint K-nearest-neighbor implementation
[ https://issues.apache.org/jira/browse/LUCENE-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16180245#comment-16180245 ] Steve Rowe commented on LUCENE-7974: bq. I'm wondering whether the use of getMinDelta could be replaced with Math.nextUp/Math.nextDown? These do different things, and I'm not sure how to express one in terms of the other. Suggestions welcome :). {{getMinDelta}} calculates a fudge factor from the distance exponent reduced by (at most) 23, the number of bits in a float mantissa. This is necessary when the result of subtracting/adding the distance in a single dimension has an exponent that differs significantly from that of the distance value. Without this fudge factor (i.e. only subtracting/adding the distance), cells and values can be inappropriately judged as outside the search radius. By contrast, {{Math.nextUp}}/{{Math.nextDown}} produce adjacent values (i.e. the equivalent of incrementing/decrementing the mantissa value by one).
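The distinction Steve describes can be sketched in a few lines of Java. The {{minDelta}} helper below is a hypothetical reconstruction of the idea (build a delta from the value's binary exponent reduced by the 23 float-mantissa bits); it is not the actual LUCENE-7974 patch code:

```java
// Hypothetical sketch of the getMinDelta idea; not the LUCENE-7974 patch code.
public class MinDeltaDemo {

    // Fudge factor on the order of one unit-in-the-last-place of 'value':
    // take value's binary exponent, reduce it by the 23 mantissa bits of a
    // float, and construct the power of two with that reduced exponent.
    // (Assumes a normal, positive 'value'; subnormals are ignored here.)
    static float minDelta(float value) {
        int exponent = Math.getExponent(value);
        return Float.intBitsToFloat((exponent - 23 + 127) << 23);
    }

    public static void main(String[] args) {
        float value = 1e8f;   // a large coordinate
        float distance = 1f;  // a much smaller per-dimension distance

        // The distance is below value's rounding granularity, so adding or
        // subtracting it is lost entirely: the result equals value itself.
        System.out.println((value - distance) == value);  // true

        // Math.nextDown moves exactly one ulp, regardless of the distance.
        System.out.println(Math.nextDown(value));         // 9.9999992E7

        // The exponent-derived delta matches the rounding granularity (2^3).
        System.out.println(minDelta(value));              // 8.0
    }
}
```

This is why only subtracting/adding the distance can misjudge cells: when the exponents differ enough, the adjustment rounds away completely, while the exponent-derived fudge factor stays representable.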
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 490 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/490/ Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseParallelGC 3 tests failed. FAILED: org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testExecutorStream Error Message: Error from server at https://127.0.0.1:35431/solr/mainCorpus_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 HTTP ERROR: 404 Problem accessing /solr/mainCorpus_shard2_replica_n3/update. Reason: Can not find: /solr/mainCorpus_shard2_replica_n3/update (Powered by Jetty // 9.3.20.v20170531) Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:35431/solr/mainCorpus_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 HTTP ERROR: 404 Problem accessing /solr/mainCorpus_shard2_replica_n3/update. Reason: Can not find: /solr/mainCorpus_shard2_replica_n3/update (Powered by Jetty // 9.3.20.v20170531) at __randomizedtesting.SeedInfo.seed([67874D4D39A483A3:4547CCB61ACEA9B3]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:539) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:993) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233) at org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testExecutorStream(StreamExpressionTest.java:7265) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.
Re: [JENKINS-MAVEN] Lucene-Solr-Maven-master #2108: POMs out of sync
Thanks Steve! On Mon, Sep 25, 2017 at 8:11 PM Steve Rowe wrote: > I committed a fix - ‘mvn -DtestSkip install’ succeeded for me at the top > level of the project. > > -- > Steve > www.lucidworks.com > > > On Sep 25, 2017, at 7:52 PM, Steve Rowe wrote: > > > > The issue here is that the test-jar artifact for lucene-spatial3d isn’t > being installed in the local repo. I’ll work on it. > > > > -- > > Steve > > www.lucidworks.com > > > >> On Sep 24, 2017, at 1:15 AM, Apache Jenkins Server < > jenk...@builds.apache.org> wrote: > >> > >> Build: https://builds.apache.org/job/Lucene-Solr-Maven-master/2108/ > >> > >> No tests ran. > >> > >> Build Log: > >> [...truncated 19293 lines...] > >> BUILD FAILED > >> > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:851: > The following error occurred while executing this line: > >> : Java returned: 1 > >> > >> Total time: 27 minutes 18 seconds > >> Build step 'Invoke Ant' marked build as failure > >> Email was triggered for: Failure - Any > >> Sending email for trigger: Failure - Any > >> > >> - > >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > >> For additional commands, e-mail: dev-h...@lucene.apache.org > > > > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > > -- Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker LinkedIn: http://linkedin.com/in/davidwsmiley | Book: http://www.solrenterprisesearchserver.com
[jira] [Commented] (SOLR-11399) UnifiedHighlighter ignores hl.fragsize value if hl.bs.type=SEPARATOR
[ https://issues.apache.org/jira/browse/SOLR-11399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16180178#comment-16180178 ] David Smiley commented on SOLR-11399: - Nice catch Marc! Thanks for the Pull Request. > UnifiedHighlighter ignores hl.fragsize value if hl.bs.type=SEPARATOR > > > Key: SOLR-11399 > URL: https://issues.apache.org/jira/browse/SOLR-11399 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: highlighter >Reporter: Marc Morissette > > The UnifiedHighlighter always acts as if hl.fragsize=-1 when > hl.bs.type=SEPARATOR.
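For context, the parameter combination the bug concerns looks like the request below. The collection name, field, and separator character are placeholders; the parameter names follow Solr's highlighting documentation:

```
/solr/mycollection/select?q=text:solr
    &hl=true
    &hl.method=unified
    &hl.bs.type=SEPARATOR
    &hl.bs.separator=%7C
    &hl.fragsize=100
```

With the bug present, the {{hl.fragsize=100}} here is silently ignored: snippets always break exactly at the separator, as if {{hl.fragsize=-1}} had been requested.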
[jira] [Commented] (SOLR-11402) DataImportHandler dataimport.properties should write to data dir by default
[ https://issues.apache.org/jira/browse/SOLR-11402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16180125#comment-16180125 ] David Smiley commented on SOLR-11402: - +1 clearly a problem. FYI in SolrCloud, dataimport.properties is saved to ZooKeeper, which is a decent spot for it since it's a Solr "collection" level setting, not core. > DataImportHandler dataimport.properties should write to data dir by default > --- > > Key: SOLR-11402 > URL: https://issues.apache.org/jira/browse/SOLR-11402 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - DataImportHandler >Affects Versions: 4.10, 5.5, 6.6 >Reporter: Jamie Jackson >Priority: Minor > > Currently, DIH drops the {{dataimport.properties}} file in the cores > directory by default, but the data directory seems to be the logical choice. > * The core directory tends to be read-only. > * The data directory is the write area, and the {{dataimport.properties}} > file is tied to the index, rather than the core configurations. > Docker is a use case where the current behavior is glaringly problematic: The > cores directory lives in the container layer, and any files that Solr writes > there disappear when the container is restarted (forcing a subsequent full > index). The data directory, on the other hand, is already persisted to a > volume (according to normal practice), so if it were the default location to > write {{dataimport.properties}}, it would behave as one would expect. > It's possible to work around this (using PropertyWriter, symlinks, or other > tricks), but this shouldn't be necessary. 
> * Downstream Solr Docker ticket: > https://github.com/docker-solr/docker-solr/issues/150 > * SOLR-1970, in which others make the same argument
[jira] [Commented] (SOLR-7121) Solr nodes should go down based on configurable thresholds and not rely on resource exhaustion
[ https://issues.apache.org/jira/browse/SOLR-7121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16180113#comment-16180113 ] Cao Manh Dat commented on SOLR-7121: [~shalinmangar] [~noble.paul] Kinda similar to what autoscaling framework can do? > Solr nodes should go down based on configurable thresholds and not rely on > resource exhaustion > -- > > Key: SOLR-7121 > URL: https://issues.apache.org/jira/browse/SOLR-7121 > Project: Solr > Issue Type: New Feature >Reporter: Sachin Goyal >Assignee: Mark Miller > Attachments: SOLR-7121.patch, SOLR-7121.patch, SOLR-7121.patch, > SOLR-7121.patch, SOLR-7121.patch, SOLR-7121.patch, SOLR-7121.patch > > > Currently, there is no way to control when a Solr node goes down. > If the server is having high GC pauses or too many threads or is just getting > too many queries due to some bad load-balancer, the cores in the machine keep > on serving unless they exhaust the machine's resources and everything comes > to a stall. > Such a slow-dying core can affect other cores as well by taking huge time to > serve their distributed queries. > There should be a way to specify some threshold values beyond which the > targeted core can detect its ill-health and proactively go down to recover. > When the load improves, the core should come up automatically.
[jira] [Created] (SOLR-11402) DataImportHandler dataimport.properties should write to data dir by default
Jamie Jackson created SOLR-11402: Summary: DataImportHandler dataimport.properties should write to data dir by default Key: SOLR-11402 URL: https://issues.apache.org/jira/browse/SOLR-11402 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: contrib - DataImportHandler Affects Versions: 6.6, 5.5, 4.10 Reporter: Jamie Jackson Priority: Minor Currently, DIH drops the {{dataimport.properties}} file in the cores directory by default, but the data directory seems to be the logical choice. * The core directory tends to be read-only. * The data directory is the write area, and the {{dataimport.properties}} file is tied to the index, rather than the core configurations. Docker is a use case where the current behavior is glaringly problematic: The cores directory lives in the container layer, and any files that Solr writes there disappear when the container is restarted (forcing a subsequent full index). The data directory, on the other hand, is already persisted to a volume (according to normal practice), so if it were the default location to write {{dataimport.properties}}, it would behave as one would expect. It's possible to work around this (using PropertyWriter, symlinks, or other tricks), but this shouldn't be necessary. * Downstream Solr Docker ticket: https://github.com/docker-solr/docker-solr/issues/150 * SOLR-1970, in which others make the same argument
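For reference, the PropertyWriter workaround mentioned above is configured in DIH's data-config.xml. This is a sketch: the element and attribute names follow the documented SimplePropertiesWriter, while the directory path is a made-up example:

```xml
<dataConfig>
  <!-- Workaround sketch: redirect dataimport.properties to the core's
       data directory. The directory value is an example path. -->
  <propertyWriter type="SimplePropertiesWriter"
                  directory="/var/solr/data/mycore/data"
                  filename="dataimport.properties"/>
  <!-- dataSource / document / entity definitions omitted -->
</dataConfig>
```

The ticket's point is that this redirection should be the default, not something each Docker user has to discover and configure.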
[jira] [Commented] (SOLR-11398) Add weibullDistribution Stream Evaluator
[ https://issues.apache.org/jira/browse/SOLR-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16180083#comment-16180083 ] ASF subversion and git services commented on SOLR-11398: Commit c80c745bb873e3e1efe6ed3ea9338c59ca948195 in lucene-solr's branch refs/heads/branch_7x from [~joel.bernstein] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c80c745 ] SOLR-11398: Add weibullDistribution Stream Evaluator > Add weibullDistribution Stream Evaluator > > > Key: SOLR-11398 > URL: https://issues.apache.org/jira/browse/SOLR-11398 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: master (8.0), 7.1 > > Attachments: SOLR-11398.patch > > > This ticket adds support for the Weibull probability distribution to the > Streaming Expression probability distribution framework.
[jira] [Commented] (SOLR-11398) Add weibullDistribution Stream Evaluator
[ https://issues.apache.org/jira/browse/SOLR-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16180076#comment-16180076 ] ASF subversion and git services commented on SOLR-11398: Commit 0e5c3aa3dc5a094d974716b8ee018f7469a8534a in lucene-solr's branch refs/heads/master from [~joel.bernstein] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0e5c3aa ] SOLR-11398: Add weibullDistribution Stream Evaluator
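As a usage illustration (not taken from the patch), a distribution evaluator like this plugs into the existing math-expression functions; the (shape, scale) parameter order here is an assumption based on the underlying Commons Math WeibullDistribution:

```
let(d=weibullDistribution(1.5, 5.0),
    s=sample(d, 500),
    h=hist(s, 10))
```

Here {{sample}} draws 500 values from the distribution and {{hist}} bins them, the same pattern used by the other distributions in the framework.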
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 489 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/489/ Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseSerialGC 2 tests failed. FAILED: org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest Error Message: Timeout waiting for all live and active Stack Trace: java.lang.AssertionError: Timeout waiting for all live and active at __randomizedtesting.SeedInfo.seed([99D1AB3B3E299377:1AA7F4C9E8509DD6]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest(TestCloudRecovery.java:184) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.cloud.TestTlogReplica.testRecovery Error Message: Can not find doc 3 in http://127.0.0.1:46007/solr Stack Trace: java.lang.AssertionError: Can not find doc 3 in http://127.0.0.1:46007/solr at __randomizedtesting.SeedInfo.seed([99D1AB3B3E299377:5821D2
[jira] [Updated] (SOLR-11401) Add zipFDistriubtion Stream Evaluator
[ https://issues.apache.org/jira/browse/SOLR-11401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-11401: -- Fix Version/s: 7.1 master (8.0) > Add zipFDistriubtion Stream Evaluator > - > > Key: SOLR-11401 > URL: https://issues.apache.org/jira/browse/SOLR-11401 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: master (8.0), 7.1 > > > This ticket adds support for the ZipF probability distribution to the > Streaming Expression statistical library. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11401) Add zipFDistribution Stream Evaluator
[ https://issues.apache.org/jira/browse/SOLR-11401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-11401: -- Summary: Add zipFDistribution Stream Evaluator (was: Add zipFDistriubtion Stream Evaluator) > Add zipFDistribution Stream Evaluator > - > > Key: SOLR-11401 > URL: https://issues.apache.org/jira/browse/SOLR-11401 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: master (8.0), 7.1 > > > This ticket adds support for the ZipF probability distribution to the > Streaming Expression statistical library. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-11401) Add zipFDistriubtion Stream Evaluator
[ https://issues.apache.org/jira/browse/SOLR-11401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein reassigned SOLR-11401: - Assignee: Joel Bernstein > Add zipFDistriubtion Stream Evaluator > - > > Key: SOLR-11401 > URL: https://issues.apache.org/jira/browse/SOLR-11401 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > > This ticket adds support for the ZipF probability distribution to the > Streaming Expression statistical library. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-11401) Add zipFDistriubtion Stream Evaluator
Joel Bernstein created SOLR-11401: - Summary: Add zipFDistriubtion Stream Evaluator Key: SOLR-11401 URL: https://issues.apache.org/jira/browse/SOLR-11401 Project: Solr Issue Type: New Feature Security Level: Public (Default Security Level. Issues are Public) Reporter: Joel Bernstein This ticket adds support for the ZipF probability distribution to the Streaming Expression statistical library. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
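[Editorial note] The ZipF distribution the ticket adds assigns each rank k a probability proportional to 1/k^s. A minimal Python sketch of that math (this illustrates only the distribution itself, not the Streaming Expression evaluator's syntax, which the ticket does not show):

```python
import math

def zipf_pmf(k, n, s=1.0):
    """Probability of rank k (1-based) under a ZipF distribution
    truncated to n ranks, with exponent s."""
    norm = sum(1.0 / i ** s for i in range(1, n + 1))
    return (1.0 / k ** s) / norm

# With s = 1, rank 1 is exactly twice as likely as rank 2,
# three times as likely as rank 3, and so on.
top = zipf_pmf(1, 10)
second = zipf_pmf(2, 10)
```
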
[jira] [Updated] (SOLR-11400) Add logNormalDistribution Stream Evaluator
[ https://issues.apache.org/jira/browse/SOLR-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-11400: -- Fix Version/s: 7.1 master (8.0) > Add logNormalDistribution Stream Evaluator > -- > > Key: SOLR-11400 > URL: https://issues.apache.org/jira/browse/SOLR-11400 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: master (8.0), 7.1 > > > This ticket adds the Log Normal probability distribution to the Streaming > Expression statistical library. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-11400) Add logNormalDistribution Stream Evaluator
[ https://issues.apache.org/jira/browse/SOLR-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein reassigned SOLR-11400: - Assignee: Joel Bernstein > Add logNormalDistribution Stream Evaluator > -- > > Key: SOLR-11400 > URL: https://issues.apache.org/jira/browse/SOLR-11400 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: master (8.0), 7.1 > > > This ticket adds the Log Normal probability distribution to the Streaming > Expression statistical library. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-11400) Add logNormalDistribution Stream Evaluator
Joel Bernstein created SOLR-11400: - Summary: Add logNormalDistribution Stream Evaluator Key: SOLR-11400 URL: https://issues.apache.org/jira/browse/SOLR-11400 Project: Solr Issue Type: New Feature Security Level: Public (Default Security Level. Issues are Public) Reporter: Joel Bernstein This ticket adds the Log Normal probability distribution to the Streaming Expression statistical library. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
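[Editorial note] A log-normal variate is exp(X) for X ~ Normal(mu, sigma), so its median is exp(mu). A stdlib Python sketch of the distribution the ticket adds (the distribution only; the ticket does not show the evaluator's Streaming Expression syntax):

```python
import math
import random

random.seed(7)

# Draw log-normal samples; random.lognormvariate takes the mu and
# sigma of the *underlying* normal distribution.
mu, sigma = 0.0, 0.5
samples = sorted(random.lognormvariate(mu, sigma) for _ in range(20000))

# The sample median should sit near the theoretical median exp(mu).
empirical_median = samples[len(samples) // 2]
```
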
Re: [JENKINS-MAVEN] Lucene-Solr-Maven-master #2108: POMs out of sync
I committed a fix - ‘mvn -DtestSkip install’ succeeded for me at the top level of the project. -- Steve www.lucidworks.com > On Sep 25, 2017, at 7:52 PM, Steve Rowe wrote: > > The issue here is that the test-jar artifact for lucene-spatial3d isn’t being > installed in the local repo. I’ll work on it. > > -- > Steve > www.lucidworks.com > >> On Sep 24, 2017, at 1:15 AM, Apache Jenkins Server >> wrote: >> >> Build: https://builds.apache.org/job/Lucene-Solr-Maven-master/2108/ >> >> No tests ran. >> >> Build Log: >> [...truncated 19293 lines...] >> BUILD FAILED >> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:851: >> The following error occurred while executing this line: >> : Java returned: 1 >> >> Total time: 27 minutes 18 seconds >> Build step 'Invoke Ant' marked build as failure >> Email was triggered for: Failure - Any >> Sending email for trigger: Failure - Any >> >> - >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >> For additional commands, e-mail: dev-h...@lucene.apache.org > - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11382) Better Geo3d support
[ https://issues.apache.org/jira/browse/SOLR-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179986#comment-16179986 ] ASF subversion and git services commented on SOLR-11382: Commit 001fa289e446e45e169a3fbd680ed63e393f3447 in lucene-solr's branch refs/heads/master from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=001fa28 ] SOLR-11382: Maven build: Build the test-jar for lucene-spatial3d, which lucene-spatial tests now depend on > Better Geo3d support > > > Key: SOLR-11382 > URL: https://issues.apache.org/jira/browse/SOLR-11382 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: spatial >Reporter: David Smiley >Assignee: David Smiley > Fix For: 7.1 > > > LUCENE-7951 added Geo3d support to spatial-extras. Solr can leverage this > directly thanks to reflection-based construction in Spatial4j but we can do > better: > * {{spatialContextFactory="Geo3D"}} -- a convenience for > org.apache.lucene.spatial.spatial4j.Geo3dSpatialContextFactory > * test > * documentation -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11382) Better Geo3d support
[ https://issues.apache.org/jira/browse/SOLR-11382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179985#comment-16179985 ] ASF subversion and git services commented on SOLR-11382: Commit 749813d9d2aec9e0eaced3556b37be12815e2cb0 in lucene-solr's branch refs/heads/branch_7x from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=749813d ] SOLR-11382: Maven build: Build the test-jar for lucene-spatial3d, which lucene-spatial tests now depend on > Better Geo3d support > > > Key: SOLR-11382 > URL: https://issues.apache.org/jira/browse/SOLR-11382 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: spatial >Reporter: David Smiley >Assignee: David Smiley > Fix For: 7.1 > > > LUCENE-7951 added Geo3d support to spatial-extras. Solr can leverage this > directly thanks to reflection-based construction in Spatial4j but we can do > better: > * {{spatialContextFactory="Geo3D"}} -- a convenience for > org.apache.lucene.spatial.spatial4j.Geo3dSpatialContextFactory > * test > * documentation -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 20547 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20547/ Java: 32bit/jdk1.8.0_144 -client -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI Error Message: Error from server at https://127.0.0.1:34845/solr/awhollynewcollection_0: {"awhollynewcollection_0":7} Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:34845/solr/awhollynewcollection_0: {"awhollynewcollection_0":7} at __randomizedtesting.SeedInfo.seed([2007E576A2672AE1:687291C2A4540574]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:967) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at 
org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942) at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:460) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.uti
Re: [JENKINS-MAVEN] Lucene-Solr-Maven-master #2108: POMs out of sync
The issue here is that the test-jar artifact for lucene-spatial3d isn’t being installed in the local repo. I’ll work on it. -- Steve www.lucidworks.com > On Sep 24, 2017, at 1:15 AM, Apache Jenkins Server > wrote: > > Build: https://builds.apache.org/job/Lucene-Solr-Maven-master/2108/ > > No tests ran. > > Build Log: > [...truncated 19293 lines...] > BUILD FAILED > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:851: > The following error occurred while executing this line: > : Java returned: 1 > > Total time: 27 minutes 18 seconds > Build step 'Invoke Ant' marked build as failure > Email was triggered for: Failure - Any > Sending email for trigger: Failure - Any > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11398) Add weibullDistribution Stream Evaluator
[ https://issues.apache.org/jira/browse/SOLR-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-11398: -- Attachment: SOLR-11398.patch > Add weibullDistribution Stream Evaluator > > > Key: SOLR-11398 > URL: https://issues.apache.org/jira/browse/SOLR-11398 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: master (8.0), 7.1 > > Attachments: SOLR-11398.patch > > > This ticket adds support for the Weibull probability distribution to the > Streaming Expression probability distribution framework. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
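[Editorial note] For the Weibull distribution this ticket adds, the CDF is 1 - exp(-(x/scale)^shape), which equals 1 - 1/e at x = scale regardless of shape. A stdlib Python sketch checking that property by simulation (illustrating the distribution, not the evaluator's syntax):

```python
import math
import random

def weibull_cdf(x, scale, shape):
    """CDF of the Weibull distribution for x >= 0."""
    return 1.0 - math.exp(-((x / scale) ** shape))

random.seed(0)
# random.weibullvariate takes (alpha=scale, beta=shape).
samples = [random.weibullvariate(1.0, 1.5) for _ in range(50000)]

# Empirical fraction of samples at or below the scale parameter;
# should be close to 1 - 1/e for any shape.
frac_below_scale = sum(s <= 1.0 for s in samples) / len(samples)
```
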
[jira] [Commented] (SOLR-11399) UnifiedHighlighter ignores hl.fragsize value if hl.bs.type=SEPARATOR
[ https://issues.apache.org/jira/browse/SOLR-11399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179869#comment-16179869 ] Marc Morissette commented on SOLR-11399: I've created a pull request that fixes this issue: https://github.com/apache/lucene-solr/pull/253 > UnifiedHighlighter ignores hl.fragsize value if hl.bs.type=SEPARATOR > > > Key: SOLR-11399 > URL: https://issues.apache.org/jira/browse/SOLR-11399 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: highlighter >Reporter: Marc Morissette > > The UnifiedHighlighter always acts as if hl.fragsize=-1 when > hl.bs.type=SEPARATOR. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Issue Comment Deleted] (SOLR-11399) UnifiedHighlighter ignores hl.fragsize value if hl.bs.type=SEPARATOR
[ https://issues.apache.org/jira/browse/SOLR-11399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marc Morissette updated SOLR-11399: --- Comment: was deleted (was: I've created a pull request that fixes this issue: https://github.com/apache/lucene-solr/pull/253) > UnifiedHighlighter ignores hl.fragsize value if hl.bs.type=SEPARATOR > > > Key: SOLR-11399 > URL: https://issues.apache.org/jira/browse/SOLR-11399 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: highlighter >Reporter: Marc Morissette > > The UnifiedHighlighter always acts as if hl.fragsize=-1 when > hl.bs.type=SEPARATOR. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11399) UnifiedHighlighter ignores hl.fragsize value if hl.bs.type=SEPARATOR
[ https://issues.apache.org/jira/browse/SOLR-11399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179867#comment-16179867 ] ASF GitHub Bot commented on SOLR-11399: --- GitHub user morissm opened a pull request: https://github.com/apache/lucene-solr/pull/253 SOLR-11399: UnifiedHighlighter ignores hl.fragsize value if hl.bs.type=SEPARATOR You can merge this pull request into a Git repository by running: $ git pull https://github.com/morissm/lucene-solr jira/solr-11399 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/253.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #253 commit c67339f0f360e02061d6be03e9b8c6d48fcb136d Author: Marc-Andre Morissette Date: 2017-09-25T22:07:02Z SOLR-11399: UnifiedHighlighter ignores hl.fragsize value if hl.bs.type=SEPARATOR > UnifiedHighlighter ignores hl.fragsize value if hl.bs.type=SEPARATOR > > > Key: SOLR-11399 > URL: https://issues.apache.org/jira/browse/SOLR-11399 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: highlighter >Reporter: Marc Morissette > > The UnifiedHighlighter always acts as if hl.fragsize=-1 when > hl.bs.type=SEPARATOR. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] lucene-solr pull request #253: SOLR-11399: UnifiedHighlighter ignores hl.fra...
GitHub user morissm opened a pull request: https://github.com/apache/lucene-solr/pull/253 SOLR-11399: UnifiedHighlighter ignores hl.fragsize value if hl.bs.type=SEPARATOR You can merge this pull request into a Git repository by running: $ git pull https://github.com/morissm/lucene-solr jira/solr-11399 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/253.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #253 commit c67339f0f360e02061d6be03e9b8c6d48fcb136d Author: Marc-Andre Morissette Date: 2017-09-25T22:07:02Z SOLR-11399: UnifiedHighlighter ignores hl.fragsize value if hl.bs.type=SEPARATOR --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-11399) UnifiedHighlighter ignores hl.fragsize value if hl.bs.type=SEPARATOR
Marc Morissette created SOLR-11399: -- Summary: UnifiedHighlighter ignores hl.fragsize value if hl.bs.type=SEPARATOR Key: SOLR-11399 URL: https://issues.apache.org/jira/browse/SOLR-11399 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: highlighter Reporter: Marc Morissette The UnifiedHighlighter always acts as if hl.fragsize=-1 when hl.bs.type=SEPARATOR. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
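[Editorial note] A sketch of a request that would exercise the reported behavior: with hl.bs.type=SEPARATOR the highlighter ignores hl.fragsize and acts as if it were -1. The host, collection, and query below are placeholders, not from the ticket; only the hl.* parameter names come from the report:

```python
from urllib.parse import urlencode

# hl.fragsize is set explicitly, but per the bug report it is
# ignored whenever hl.bs.type=SEPARATOR is also present.
params = {
    "q": "text:lucene",
    "hl": "true",
    "hl.method": "unified",
    "hl.fragsize": 70,
    "hl.bs.type": "SEPARATOR",
    "hl.bs.separator": "|",
}
url = "http://localhost:8983/solr/techproducts/select?" + urlencode(params)
```
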
[jira] [Commented] (SOLR-11297) Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice
[ https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179831#comment-16179831 ] Nawab Zada Asad iqbal commented on SOLR-11297: -- It seems that this error is difficult to reproduce. I tried Luiz's script on my Mac laptop and wasn't able to reproduce this issue even after decreasing the 'sleep' between iterations to `0.005`. I tried it on a production-like machine (similar to what I had done last month, although using an HAProxy instead of a command-line script) and I was able to hit the above error; however, my cores were still loaded by the 'Core' thread and were functional. Last month, when I initially hit this issue, my server was giving the above 'Lock held by this virtual machine' error and the cores were **not** usable. Unfortunately, I don't have access to those specific machines anymore. > Message "Lock held by this virtual machine" during startup. Solr is trying > to start some cores twice > - > > Key: SOLR-11297 > URL: https://issues.apache.org/jira/browse/SOLR-11297 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6 >Reporter: Shawn Heisey >Assignee: Erick Erickson > Fix For: 7.1 > > Attachments: SOLR-11297.patch, SOLR-11297.patch, SOLR-11297.patch, > SOLR-11297.sh, solr6_6-startup.log > > > Sometimes when Solr is restarted, I get some "lock held by this virtual > machine" messages in the log, and the admin UI has messages about a failure > to open a new searcher. It doesn't happen on all cores, and the list of > cores that have the problem changes on subsequent restarts. The cores that > exhibit the problems are working just fine -- the first core load is > successful, the failure to open a new searcher is on a second core load > attempt, which fails. > None of the cores in the system are sharing an instanceDir or dataDir. This > has been verified several times.
> The index is sharded manually, and the servers are not running in cloud mode. > One critical detail to this issue: The cores are all perfectly functional. > If somebody is seeing an error message that results in a core not working at > all, then it is likely a different issue. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9120) o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_NNN] -- NoSuchFileException
[ https://issues.apache.org/jira/browse/SOLR-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179817#comment-16179817 ] Shawn Heisey commented on SOLR-9120: By resolved, I mean that the error message no longer shows up in my log. The error never did cause me any actual problems, but I do not use the backup feature. > o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_NNN] -- > NoSuchFileException > > > Key: SOLR-9120 > URL: https://issues.apache.org/jira/browse/SOLR-9120 > Project: Solr > Issue Type: Bug >Affects Versions: 6.0 >Reporter: Markus Jelsma > Attachments: SOLR-9120.patch, SOLR-9120.patch > > > On Solr 6.0, we frequently see the following errors popping up: > {code} > java.nio.file.NoSuchFileException: > /var/lib/solr/logs_shard2_replica1/data/index/segments_2c5 > at > sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) > at > sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55) > at > sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144) > at > sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99) > at java.nio.file.Files.readAttributes(Files.java:1737) > at java.nio.file.Files.size(Files.java:2332) > at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243) > at > org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:131) > at > org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:597) > at > org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:585) > at > org.apache.solr.handler.admin.LukeRequestHandler.handleRequestBody(LukeRequestHandler.java:137) > at > 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2033) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:518) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at 
org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message wa
[jira] [Updated] (SOLR-9120) o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_NNN] -- NoSuchFileException
[ https://issues.apache.org/jira/browse/SOLR-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shawn Heisey updated SOLR-9120: --- Attachment: SOLR-9120.patch With the patch for SOLR-11297 applied, I was still running into this issue on branch_6_6. After manually applying the patch included here (because it would not apply to branch_6_6 automatically), this problem seems to be resolved. This is an updated patch, against branch_6_6. I have not yet tried it against 7x or master. > o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_NNN] -- > NoSuchFileException > > > Key: SOLR-9120 > URL: https://issues.apache.org/jira/browse/SOLR-9120 > Project: Solr > Issue Type: Bug >Affects Versions: 6.0 >Reporter: Markus Jelsma > Attachments: SOLR-9120.patch, SOLR-9120.patch > > > On Solr 6.0, we frequently see the following errors popping up: > {code} > java.nio.file.NoSuchFileException: > /var/lib/solr/logs_shard2_replica1/data/index/segments_2c5 > at > sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) > at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) > at > sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55) > at > sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144) > at > sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99) > at java.nio.file.Files.readAttributes(Files.java:1737) > at java.nio.file.Files.size(Files.java:2332) > at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243) > at > org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:131) > at > org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:597) > at > org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:585) > at > 
org.apache.solr.handler.admin.LukeRequestHandler.handleRequestBody(LukeRequestHandler.java:137) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2033) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:518) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244) > at > 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654) > at > org.eclipse.jetty
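The SOLR-9120 trace above boils down to a race: LukeRequestHandler lists the index files and then asks for each file's size, but a concurrent commit can delete `segments_NNN` in between, so `Files.size()` throws `NoSuchFileException`. One plausible defensive shape for such a fix (a stdlib sketch, not necessarily what the attached patch does) is to catch the exception and report the length as unknown:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public class SafeFileLength {
    // Returns the file length, or -1 if the file vanished between the
    // directory listing and this call (e.g. a commit deleted segments_NNN).
    public static long lengthOrMinusOne(Path p) throws IOException {
        try {
            return Files.size(p);
        } catch (NoSuchFileException e) {
            return -1; // deleted concurrently; treat length as unknown
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("segments_", ".tmp");
        Files.write(tmp, new byte[]{1, 2, 3});
        System.out.println(lengthOrMinusOne(tmp)); // 3
        Files.delete(tmp);
        System.out.println(lengthOrMinusOne(tmp)); // -1
    }
}
```

Whether the real patch reports -1, skips the entry, or logs a warning is up to the attachment on the issue; the point is that the length lookup has to tolerate index files vanishing mid-request.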
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9) - Build # 20546 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20546/ Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseParallelGC --illegal-access=deny 2 tests failed. FAILED: org.apache.solr.TestDistributedSearch.test Error Message: IOException occured when talking to server at: http://127.0.0.1:41703/nh/e/collection1 Stack Trace: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://127.0.0.1:41703/nh/e/collection1 at __randomizedtesting.SeedInfo.seed([A90C2BFAA5D0923:82C4FD6504A164DB]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:641) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:895) at org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:858) at org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:873) at org.apache.solr.BaseDistributedSearchTestCase.del(BaseDistributedSearchTestCase.java:542) at org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:1034) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1019) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Sta
[JENKINS-MAVEN] Lucene-Solr-Maven-7.x #52: POMs out of sync
Build: https://builds.apache.org/job/Lucene-Solr-Maven-7.x/52/ No tests ran. Build Log: [...truncated 19313 lines...] BUILD FAILED /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/build.xml:851: The following error occurred while executing this line: : Java returned: 1 Total time: 26 minutes 36 seconds Build step 'Invoke Ant' marked build as failure Email was triggered for: Failure - Any Sending email for trigger: Failure - Any - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-11297) Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice
[ https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179704#comment-16179704 ] Richard Rominger edited comment on SOLR-11297 at 9/25/17 8:22 PM: -- I meant in 6.6.1 (which would spawn 6.6.2?) - which was the next version I wanted to test and where I ran into problems. was (Author: odie3): I meant in 6.6.2 - which was the next version I wanted to test and where I ran into problems. > Message "Lock held by this virtual machine" during startup. Solr is trying > to start some cores twice > - > > Key: SOLR-11297 > URL: https://issues.apache.org/jira/browse/SOLR-11297 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6 >Reporter: Shawn Heisey >Assignee: Erick Erickson > Fix For: 7.1 > > Attachments: SOLR-11297.patch, SOLR-11297.patch, SOLR-11297.patch, > SOLR-11297.sh, solr6_6-startup.log > > > Sometimes when Solr is restarted, I get some "lock held by this virtual > machine" messages in the log, and the admin UI has messages about a failure > to open a new searcher. It doesn't happen on all cores, and the list of > cores that have the problem changes on subsequent restarts. The cores that > exhibit the problems are working just fine -- the first core load is > successful, the failure to open a new searcher is on a second core load > attempt, which fails. > None of the cores in the system are sharing an instanceDir or dataDir. This > has been verified several times. > The index is sharded manually, and the servers are not running in cloud mode. > One critical detail to this issue: The cores are all perfectly functional. > If somebody is seeing an error message that results in a core not working at > all, then it is likely a different issue. 
[jira] [Commented] (SOLR-11297) Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice
[ https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179704#comment-16179704 ] Richard Rominger commented on SOLR-11297: - I meant in 6.6.2 - which was the next version I wanted to test and where I ran into problems. > Message "Lock held by this virtual machine" during startup. Solr is trying > to start some cores twice > - > > Key: SOLR-11297 > URL: https://issues.apache.org/jira/browse/SOLR-11297 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6 >Reporter: Shawn Heisey >Assignee: Erick Erickson > Fix For: 7.1 > > Attachments: SOLR-11297.patch, SOLR-11297.patch, SOLR-11297.patch, > SOLR-11297.sh, solr6_6-startup.log > > > Sometimes when Solr is restarted, I get some "lock held by this virtual > machine" messages in the log, and the admin UI has messages about a failure > to open a new searcher. It doesn't happen on all cores, and the list of > cores that have the problem changes on subsequent restarts. The cores that > exhibit the problems are working just fine -- the first core load is > successful, the failure to open a new searcher is on a second core load > attempt, which fails. > None of the cores in the system are sharing an instanceDir or dataDir. This > has been verified several times. > The index is sharded manually, and the servers are not running in cloud mode. > One critical detail to this issue: The cores are all perfectly functional. > If somebody is seeing an error message that results in a core not working at > all, then it is likely a different issue. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-11297) Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice
[ https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-11297. --- Resolution: Fixed Fix Version/s: 7.1 Many thanks Luiz and Shawn! Richard: This will never be back-ported to 6.1, did you mean 6.6.1? And it's not there either. If there's ever a 6.6.2 (and there are no plans for one) then we could backport it. I did try applying the patch to 6.6.1 and compiling and running Luiz's test, and it works like a champ, so that's an option. > Message "Lock held by this virtual machine" during startup. Solr is trying > to start some cores twice > - > > Key: SOLR-11297 > URL: https://issues.apache.org/jira/browse/SOLR-11297 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6 >Reporter: Shawn Heisey >Assignee: Erick Erickson > Fix For: 7.1 > > Attachments: SOLR-11297.patch, SOLR-11297.patch, SOLR-11297.patch, > SOLR-11297.sh, solr6_6-startup.log > > > Sometimes when Solr is restarted, I get some "lock held by this virtual > machine" messages in the log, and the admin UI has messages about a failure > to open a new searcher. It doesn't happen on all cores, and the list of > cores that have the problem changes on subsequent restarts. The cores that > exhibit the problems are working just fine -- the first core load is > successful, the failure to open a new searcher is on a second core load > attempt, which fails. > None of the cores in the system are sharing an instanceDir or dataDir. This > has been verified several times. > The index is sharded manually, and the servers are not running in cloud mode. > One critical detail to this issue: The cores are all perfectly functional. > If somebody is seeing an error message that results in a core not working at > all, then it is likely a different issue. 
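The "Lock held by this virtual machine" message in SOLR-11297 originates in Lucene's native lock factory, which is built on `java.nio` file locks: when the same JVM tries to lock an index's `write.lock` a second time -- exactly what happens when a core is loaded twice -- the JDK refuses with `OverlappingFileLockException`. A stdlib-only sketch (illustrative, not Solr's actual code) reproduces that behavior:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DoubleLockDemo {
    // Attempts to lock the same lock file twice from one JVM. The second
    // attempt fails with OverlappingFileLockException -- the stdlib analogue
    // of Lucene's "Lock held by this virtual machine".
    public static boolean secondLockSucceeds(Path lockFile) throws IOException {
        try (FileChannel c1 = FileChannel.open(lockFile,
                     StandardOpenOption.CREATE, StandardOpenOption.WRITE);
             FileLock first = c1.tryLock()) {
            try (FileChannel c2 = FileChannel.open(lockFile, StandardOpenOption.WRITE);
                 FileLock second = c2.tryLock()) {
                return second != null; // null would mean another *process* holds it
            } catch (OverlappingFileLockException e) {
                return false; // this JVM already holds the lock
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path lock = Files.createTempFile("write", ".lock");
        System.out.println(secondLockSucceeds(lock)); // false: same JVM holds it
        Files.delete(lock);
    }
}
```

This is why the fix targets the duplicate core load itself rather than the lock: the lock is doing its job by refusing the second loader.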
[jira] [Created] (LUCENE-7976) Add a parameter to TieredMergePolicy to merge segments that have more than X percent deleted documents
Erick Erickson created LUCENE-7976: -- Summary: Add a parameter to TieredMergePolicy to merge segments that have more than X percent deleted documents Key: LUCENE-7976 URL: https://issues.apache.org/jira/browse/LUCENE-7976 Project: Lucene - Core Issue Type: Improvement Reporter: Erick Erickson We're seeing situations "in the wild" where there are very large indexes (on disk) handled quite easily in a single Lucene index. This is particularly true as features like docValues move data into MMapDirectory space. The current TMP algorithm allows on the order of 50% deleted documents as per a dev list conversation with Mike McCandless (and his blog here: https://www.elastic.co/blog/lucenes-handling-of-deleted-documents). Especially in the current era of very large indexes in aggregate (think many TB), solutions like "you need to distribute your collection over more shards" become very costly. Additionally, the tempting "optimize" button exacerbates the issue since once you form, say, a 100G segment (by optimizing/forceMerging) it is not eligible for merging until 97.5G of the docs in it are deleted (current default 5G max segment size). The proposal here would be to add a new parameter to TMP, something like (no, that's not a serious name; suggestions welcome) which would default to 100 (or the same behavior we have now). So if I set this parameter to, say, 20%, and the max segment size stays at 5G, the following would happen when segments were selected for merging: > any segment with > 20% deleted documents would be merged or rewritten NO > MATTER HOW LARGE. There are two cases: >> the segment has < 5G "live" docs. In that case it would be merged with >> smaller segments to bring the resulting segment up to 5G. If no smaller >> segments exist, it would just be rewritten. >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). >> It would be rewritten into a single segment removing all deleted docs no >> matter how big it is to start. 
The 100G example above would be rewritten to >> an 80G segment for instance. Of course this would lead to potentially much more I/O which is why the default would be the same behavior we see now. As it stands now, though, there's no way to recover from an optimize/forceMerge except to re-index from scratch. We routinely see 200G-300G Lucene indexes at this point "in the wild" with 10s of shards replicated 3 or more times. And that doesn't even include having these over HDFS. Alternatives welcome! Something like the above seems minimally invasive. A new merge policy is certainly an alternative.
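The selection rule Erick describes can be expressed compactly. The parameter name and threshold below (`MAX_DELETED_PCT`) are placeholders -- the issue deliberately leaves the real name open -- and using a byte count as a proxy for "5G of live docs" is an assumption of this sketch, not TieredMergePolicy's actual accounting:

```java
import java.util.ArrayList;
import java.util.List;

public class DeletePctMergeSketch {
    // Hypothetical knob; the issue deliberately leaves the real name open.
    static final double MAX_DELETED_PCT = 20.0;
    static final long MAX_SEGMENT_BYTES = 5L * 1024 * 1024 * 1024; // TMP default: 5G

    public record Segment(String name, long liveBytes, long totalDocs, long deletedDocs) {
        double deletedPct() { return 100.0 * deletedDocs / totalDocs; }
    }

    // Any segment over the deleted-doc threshold is selected regardless of size:
    // oversized ones (> 5G live) are rewritten in place to squeeze out deletes,
    // smaller ones become merge candidates to be topped up toward 5G.
    public static List<String> candidates(List<Segment> segments) {
        List<String> out = new ArrayList<>();
        for (Segment s : segments) {
            if (s.deletedPct() > MAX_DELETED_PCT) {
                out.add(s.liveBytes() > MAX_SEGMENT_BYTES
                        ? s.name() + ":rewrite"
                        : s.name() + ":merge");
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Segment> segs = List.of(
            new Segment("_a", 100L << 30, 1_000_000, 300_000), // 100G, 30% deleted
            new Segment("_b", 2L << 30, 200_000, 10_000));     //   2G,  5% deleted
        System.out.println(candidates(segs)); // [_a:rewrite]
    }
}
```

With the default threshold at 100 the condition never fires, which preserves today's behavior exactly as the proposal intends.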
[jira] [Commented] (SOLR-11297) Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice
[ https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179623#comment-16179623 ] ASF subversion and git services commented on SOLR-11297: Commit 2e0529532896c966ad11b55575de119fd8f2be3b in lucene-solr's branch refs/heads/branch_7x from [~erickerickson] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2e05295 ] SOLR-11297: Message 'Lock held by this virtual machine' during startup. Solr is trying to start some cores twice (cherry picked from commit 6391a75) > Message "Lock held by this virtual machine" during startup. Solr is trying > to start some cores twice > - > > Key: SOLR-11297 > URL: https://issues.apache.org/jira/browse/SOLR-11297 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6 >Reporter: Shawn Heisey >Assignee: Erick Erickson > Attachments: SOLR-11297.patch, SOLR-11297.patch, SOLR-11297.patch, > SOLR-11297.sh, solr6_6-startup.log > > > Sometimes when Solr is restarted, I get some "lock held by this virtual > machine" messages in the log, and the admin UI has messages about a failure > to open a new searcher. It doesn't happen on all cores, and the list of > cores that have the problem changes on subsequent restarts. The cores that > exhibit the problems are working just fine -- the first core load is > successful, the failure to open a new searcher is on a second core load > attempt, which fails. > None of the cores in the system are sharing an instanceDir or dataDir. This > has been verified several times. > The index is sharded manually, and the servers are not running in cloud mode. > One critical detail to this issue: The cores are all perfectly functional. > If somebody is seeing an error message that results in a core not working at > all, then it is likely a different issue. 
[jira] [Updated] (SOLR-11398) Add weibullDistribution Stream Evaluator
[ https://issues.apache.org/jira/browse/SOLR-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-11398: -- Fix Version/s: 7.1 master (8.0) > Add weibullDistribution Stream Evaluator > > > Key: SOLR-11398 > URL: https://issues.apache.org/jira/browse/SOLR-11398 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: master (8.0), 7.1 > > > This ticket adds support for the Weibull probability distribution to the > Streaming Expression probability distribution framework. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-11398) Add weibullDistribution Stream Evaluator
[ https://issues.apache.org/jira/browse/SOLR-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein reassigned SOLR-11398: - Assignee: Joel Bernstein > Add weibullDistribution Stream Evaluator > > > Key: SOLR-11398 > URL: https://issues.apache.org/jira/browse/SOLR-11398 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein > Fix For: master (8.0), 7.1 > > > This ticket adds support for the Weibull probability distribution to the > Streaming Expression probability distribution framework. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-11398) Add weibullDistribution Stream Evaluator
Joel Bernstein created SOLR-11398: - Summary: Add weibullDistribution Stream Evaluator Key: SOLR-11398 URL: https://issues.apache.org/jira/browse/SOLR-11398 Project: Solr Issue Type: New Feature Security Level: Public (Default Security Level. Issues are Public) Reporter: Joel Bernstein This ticket adds support for the Weibull probability distribution to the Streaming Expression probability distribution framework. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
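Under the hood such an evaluator wraps a standard Weibull distribution. As a hedged illustration of the math involved (plain Java, not the Streaming Expression API or the Solr implementation), the CDF and inverse-transform sampling look like:

```java
import java.util.Random;

public class WeibullSketch {
    // CDF of the Weibull distribution: F(x) = 1 - exp(-(x/scale)^shape)
    public static double cdf(double x, double shape, double scale) {
        return 1.0 - Math.exp(-Math.pow(x / scale, shape));
    }

    // Inverse-transform sampling: x = scale * (-ln(1 - u))^(1/shape)
    public static double sample(Random rnd, double shape, double scale) {
        double u = rnd.nextDouble();
        return scale * Math.pow(-Math.log(1.0 - u), 1.0 / shape);
    }

    public static void main(String[] args) {
        // At x == scale the CDF is 1 - 1/e for any shape parameter.
        System.out.printf("%.3f%n", cdf(2.0, 1.5, 2.0)); // 0.632
        // With shape == 1 the Weibull reduces to an exponential with mean == scale.
        Random rnd = new Random(42);
        double mean = 0;
        int n = 100_000;
        for (int i = 0; i < n; i++) mean += sample(rnd, 1.0, 2.0);
        System.out.printf("sample mean ~ %.2f%n", mean / n);
    }
}
```

The shape parameter controls the failure-rate curve (shape < 1 decreasing, shape > 1 increasing), which is what makes Weibull useful in reliability-style analyses over streamed data.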
[jira] [Commented] (SOLR-11297) Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice
[ https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179604#comment-16179604 ] ASF subversion and git services commented on SOLR-11297: Commit 6391a75a50ecc05db0d7a5ed9adc9fe187a4f57e in lucene-solr's branch refs/heads/master from [~erickerickson] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6391a75 ] SOLR-11297: Message 'Lock held by this virtual machine' during startup. Solr is trying to start some cores twice > Message "Lock held by this virtual machine" during startup. Solr is trying > to start some cores twice > - > > Key: SOLR-11297 > URL: https://issues.apache.org/jira/browse/SOLR-11297 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6 >Reporter: Shawn Heisey >Assignee: Erick Erickson > Attachments: SOLR-11297.patch, SOLR-11297.patch, SOLR-11297.patch, > SOLR-11297.sh, solr6_6-startup.log > > > Sometimes when Solr is restarted, I get some "lock held by this virtual > machine" messages in the log, and the admin UI has messages about a failure > to open a new searcher. It doesn't happen on all cores, and the list of > cores that have the problem changes on subsequent restarts. The cores that > exhibit the problems are working just fine -- the first core load is > successful, the failure to open a new searcher is on a second core load > attempt, which fails. > None of the cores in the system are sharing an instanceDir or dataDir. This > has been verified several times. > The index is sharded manually, and the servers are not running in cloud mode. > One critical detail to this issue: The cores are all perfectly functional. > If somebody is seeing an error message that results in a core not working at > all, then it is likely a different issue. 
[jira] [Updated] (SOLR-11297) Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice
[ https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson updated SOLR-11297: -- Attachment: SOLR-11297.patch Final patch, just the last patch I uploaded with CHANGES.txt > Message "Lock held by this virtual machine" during startup. Solr is trying > to start some cores twice > - > > Key: SOLR-11297 > URL: https://issues.apache.org/jira/browse/SOLR-11297 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6 >Reporter: Shawn Heisey >Assignee: Erick Erickson > Attachments: SOLR-11297.patch, SOLR-11297.patch, SOLR-11297.patch, > SOLR-11297.sh, solr6_6-startup.log > > > Sometimes when Solr is restarted, I get some "lock held by this virtual > machine" messages in the log, and the admin UI has messages about a failure > to open a new searcher. It doesn't happen on all cores, and the list of > cores that have the problem changes on subsequent restarts. The cores that > exhibit the problems are working just fine -- the first core load is > successful, the failure to open a new searcher is on a second core load > attempt, which fails. > None of the cores in the system are sharing an instanceDir or dataDir. This > has been verified several times. > The index is sharded manually, and the servers are not running in cloud mode. > One critical detail to this issue: The cores are all perfectly functional. > If somebody is seeing an error message that results in a core not working at > all, then it is likely a different issue. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 487 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/487/ Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.update.TestInPlaceUpdatesDistrib.test Error Message: This doc was supposed to have been deleted, but was: SolrDocument{id=0, title_s=title0, id_i=0, inplace_updatable_float=1.0, _version_=1579538936970084352, inplace_updatable_int_with_default=666, inplace_updatable_float_with_default=42.0} Stack Trace: java.lang.AssertionError: This doc was supposed to have been deleted, but was: SolrDocument{id=0, title_s=title0, id_i=0, inplace_updatable_float=1.0, _version_=1579538936970084352, inplace_updatable_int_with_default=666, inplace_updatable_float_with_default=42.0} at __randomizedtesting.SeedInfo.seed([A5F05496AFF48000:2DA46B4C0108EDF8]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.update.TestInPlaceUpdatesDistrib.reorderedDBQsSimpleTest(TestInPlaceUpdatesDistrib.java:247) at org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:151) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:5
[jira] [Updated] (LUCENE-7975) Replace facets taxonomy writer "cache" with BytesRefHash based implementation
[ https://issues.apache.org/jira/browse/LUCENE-7975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-7975: --- Attachment: LUCENE-7975.patch Another patch fixing the things [~jpountz] caught (thank you!). This patch is even faster -- ~25% overall speedup to indexing in my private facets-heavy use case. > Replace facets taxonomy writer "cache" with BytesRefHash based implementation > - > > Key: LUCENE-7975 > URL: https://issues.apache.org/jira/browse/LUCENE-7975 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael McCandless >Assignee: Michael McCandless > Fix For: master (8.0), 7.1 > > Attachments: LUCENE-7975.patch, LUCENE-7975.patch > > > When the facets module was first created we didn't have {{BytesRefHash}} and > so the default cache ({{Cl2oTaxonomyWriterCache}} was quite a bit more > complex than needed. > I changed this to use a {{BytesRefHash}}, which stores labels as UTF8 > (reduces memory for ascii-only usage), and is also faster (~12% overall > speedup on indexing time in my private tests). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
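The memory win described in LUCENE-7975 comes from interning each label once and handing back a small integer ordinal, with the labels stored as UTF-8 bytes (one byte per ASCII character instead of Java's two-byte chars). A stdlib analogy of that label-to-ordinal cache (not Lucene's actual BytesRefHash API) might look like:

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class LabelOrdinalCache {
    private final Map<String, Integer> ords = new HashMap<>();
    private long utf8Bytes = 0; // what a BytesRefHash-style store would hold

    // Returns the existing ordinal for a label, or assigns the next one.
    public int addOrGet(String label) {
        Integer existing = ords.get(label);
        if (existing != null) return existing;
        int ord = ords.size();
        ords.put(label, ord);
        utf8Bytes += label.getBytes(StandardCharsets.UTF_8).length;
        return ord;
    }

    public long utf8Bytes() { return utf8Bytes; }

    public static void main(String[] args) {
        LabelOrdinalCache cache = new LabelOrdinalCache();
        System.out.println(cache.addOrGet("authors/McCandless")); // 0
        System.out.println(cache.addOrGet("authors/Grand"));      // 1
        System.out.println(cache.addOrGet("authors/McCandless")); // 0 (already interned)
        // ASCII labels cost one UTF-8 byte per char, half of Java's char[] cost
        System.out.println(cache.utf8Bytes()); // 31
    }
}
```

BytesRefHash additionally avoids per-entry object overhead by packing all bytes into shared buffers, which is where the reported indexing speedups come from; this HashMap version only illustrates the interning idea.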
[jira] [Commented] (SOLR-11297) Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice
[ https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179465#comment-16179465 ] Shawn Heisey commented on SOLR-11297: - bq. so, all this stuff about patching is this in a Solr 6.1.x download? It's just a patch attached to a Jira issue right now. To be seen in a downloaded version, a change must be committed to one or more code branches, and then somebody must volunteer to be the release manager for a new version built from a branch with the fix. At this time, I think the earliest release that *might* get this issue resolved will be 7.1.0. The problems seen in SOLR-11361 (which I think Erick's patch also fixes) might be severe enough to justify a 6.6.2 release, but that's not something I would count on. Side note: Just like a user on IRC, I was having a problem accessing /solr/admin/metrics with 6.6.0 and a 6.6.2-SNAPSHOT version that I had built previously, but that problem seems to be fixed now. I do not know if Erick's patch is what made that work, or if it was another change made to the 6.6 branch. > Message "Lock held by this virtual machine" during startup. Solr is trying > to start some cores twice > - > > Key: SOLR-11297 > URL: https://issues.apache.org/jira/browse/SOLR-11297 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6 >Reporter: Shawn Heisey >Assignee: Erick Erickson > Attachments: SOLR-11297.patch, SOLR-11297.patch, SOLR-11297.sh, > solr6_6-startup.log > > > Sometimes when Solr is restarted, I get some "lock held by this virtual > machine" messages in the log, and the admin UI has messages about a failure > to open a new searcher. It doesn't happen on all cores, and the list of > cores that have the problem changes on subsequent restarts. 
The cores that > exhibit the problems are working just fine -- the first core load is > successful, the failure to open a new searcher is on a second core load > attempt, which fails. > None of the cores in the system are sharing an instanceDir or dataDir. This > has been verified several times. > The index is sharded manually, and the servers are not running in cloud mode. > One critical detail to this issue: The cores are all perfectly functional. > If somebody is seeing an error message that results in a core not working at > all, then it is likely a different issue. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11297) Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice
[ https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179443#comment-16179443 ] Shawn Heisey commented on SOLR-11297: - With the latest patch applied to branch_6_6 and installing from solr-6.6.2-SNAPSHOT.tgz, everything works as expected. There are no errors in the admin UI, and the "Lock held by this virtual machine" message is not in the log. The only errors I see in the log are those generated by accessing cores while they are loading. {code} Error 503 {metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg=SolrCore is loading,code=503} {code} > Message "Lock held by this virtual machine" during startup. Solr is trying > to start some cores twice > - > > Key: SOLR-11297 > URL: https://issues.apache.org/jira/browse/SOLR-11297 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6 >Reporter: Shawn Heisey >Assignee: Erick Erickson > Attachments: SOLR-11297.patch, SOLR-11297.patch, SOLR-11297.sh, > solr6_6-startup.log > > > Sometimes when Solr is restarted, I get some "lock held by this virtual > machine" messages in the log, and the admin UI has messages about a failure > to open a new searcher. It doesn't happen on all cores, and the list of > cores that have the problem changes on subsequent restarts. The cores that > exhibit the problems are working just fine -- the first core load is > successful, the failure to open a new searcher is on a second core load > attempt, which fails. > None of the cores in the system are sharing an instanceDir or dataDir. This > has been verified several times. > The index is sharded manually, and the servers are not running in cloud mode. > One critical detail to this issue: The cores are all perfectly functional. 
> If somebody is seeing an error message that results in a core not working at > all, then it is likely a different issue. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 20545 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20545/ Java: 32bit/jdk1.8.0_144 -client -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest Error Message: Timeout waiting for all live and active Stack Trace: java.lang.AssertionError: Timeout waiting for all live and active at __randomizedtesting.SeedInfo.seed([2180036A5571C5F1:A2F65C988308CB50]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest(TestCloudRecovery.java:184) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 13161 lines...] [junit4] Suite: org.apache.solr.cloud.TestCloudRecovery [junit4] 2> 1690675 INFO (SUITE-TestCloudRecovery-seed#[2180036A5571C5F1]-worker) [] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.se
Re: [JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 48 - Still Failing
Sorry about the delay. I'll add the indexes in a bit. On Mon, Sep 25, 2017 at 9:28 AM Apache Jenkins Server < jenk...@builds.apache.org> wrote: > Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/48/ > > No tests ran. > > Build Log: > [...truncated 28024 lines...] > prepare-release-no-sign: > [mkdir] Created dir: > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist > [copy] Copying 476 files to > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/lucene > [copy] Copying 215 files to > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr >[smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8 >[smoker] NOTE: output encoding is UTF-8 >[smoker] >[smoker] Load release URL > "file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/"... >[smoker] >[smoker] Test Lucene... >[smoker] test basics... >[smoker] get KEYS >[smoker] 0.2 MB in 0.01 sec (25.0 MB/sec) >[smoker] check changes HTML... >[smoker] download lucene-7.1.0-src.tgz... >[smoker] 29.5 MB in 0.03 sec (1090.1 MB/sec) >[smoker] verify md5/sha1 digests >[smoker] download lucene-7.1.0.tgz... >[smoker] 69.4 MB in 0.07 sec (1055.5 MB/sec) >[smoker] verify md5/sha1 digests >[smoker] download lucene-7.1.0.zip... >[smoker] 79.8 MB in 0.07 sec (1154.2 MB/sec) >[smoker] verify md5/sha1 digests >[smoker] unpack lucene-7.1.0.tgz... >[smoker] verify JAR metadata/identity/no javax.* or java.* > classes... >[smoker] test demo with 1.8... >[smoker] got 6221 hits for query "lucene" >[smoker] checkindex with 1.8... >[smoker] check Lucene's javadoc JAR >[smoker] unpack lucene-7.1.0.zip... >[smoker] verify JAR metadata/identity/no javax.* or java.* > classes... >[smoker] test demo with 1.8... >[smoker] got 6221 hits for query "lucene" >[smoker] checkindex with 1.8... 
>[smoker] check Lucene's javadoc JAR >[smoker] unpack lucene-7.1.0-src.tgz... >[smoker] make sure no JARs/WARs in src dist... >[smoker] run "ant validate" >[smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'... >[smoker] test demo with 1.8... >[smoker] got 213 hits for query "lucene" >[smoker] checkindex with 1.8... >[smoker] generate javadocs w/ Java 8... >[smoker] >[smoker] Crawl/parse... >[smoker] >[smoker] Verify... >[smoker] confirm all releases have coverage in > TestBackwardsCompatibility >[smoker] find all past Lucene releases... >[smoker] run TestBackwardsCompatibility.. >[smoker] Releases that don't seem to be tested: >[smoker] 7.0.0 >[smoker] Traceback (most recent call last): >[smoker] File > "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", > line 1484, in >[smoker] main() >[smoker] File > "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", > line 1428, in main >[smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, > c.is_signed, ' '.join(c.test_args)) >[smoker] File > "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", > line 1466, in smokeTest >[smoker] unpackAndVerify(java, 'lucene', tmpDir, > 'lucene-%s-src.tgz' % version, gitRevision, version, testArgs, baseURL) >[smoker] File > "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", > line 622, in unpackAndVerify >[smoker] verifyUnpacked(java, project, artifact, unpackPath, > gitRevision, version, testArgs, tmpDir, baseURL) >[smoker] File > "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", > line 774, in verifyUnpacked >[smoker] confirmAllReleasesAreTestedForBackCompat(version, > unpackPath) >[smoker] File > 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", > line 1404, in confirmAllReleasesAreTestedForBackCompat >[smoker] raise RuntimeError('some releases are not tested by > TestBackwardsCompatibility?') >[smoker] RuntimeError: some releases are not tested by > TestBackwardsCompatibility? > > BUILD FAILED > /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/build.xml:622: > exec returned: 1 > > Total time: 192 minutes 19 seconds > Build step 'Invoke Ant' marked build as failure > Email was triggered for: Failure - Any > Sending email for trigger: Failure - Any > > - > To un
[jira] [Updated] (SOLR-11297) Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice
[ https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shawn Heisey updated SOLR-11297: Description: Sometimes when Solr is restarted, I get some "lock held by this virtual machine" messages in the log, and the admin UI has messages about a failure to open a new searcher. It doesn't happen on all cores, and the list of cores that have the problem changes on subsequent restarts. The cores that exhibit the problems are working just fine -- the first core load is successful, the failure to open a new searcher is on a second core load attempt, which fails. None of the cores in the system are sharing an instanceDir or dataDir. This has been verified several times. The index is sharded manually, and the servers are not running in cloud mode. One critical detail to this issue: The cores are all perfectly functional. If somebody is seeing an error message that results in a core not working at all, then it is likely a different issue. was: Sometimes when Solr is restarted, I get some "lock held by this virtual machine" messages in the log, and the admin UI has messages about a failure to open a new searcher. It doesn't happen on all cores, and the list of cores that have the problem changes on subsequent restarts. The cores that exhibit the problems are working just fine -- the first core load is successful, the failure to open a new searcher is on a second core load attempt, which fails. None of the cores in the system are sharing an instanceDir or dataDir. This has been verified several times. The index is sharded manually, and the servers are not running in cloud mode. One critical detail to this issue: The cores are all perfectly functional. If somebody is seeing an error message that results in a core not working at all, then it is ilikely > Message "Lock held by this virtual machine" during startup. 
Solr is trying > to start some cores twice > - > > Key: SOLR-11297 > URL: https://issues.apache.org/jira/browse/SOLR-11297 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6 >Reporter: Shawn Heisey >Assignee: Erick Erickson > Attachments: SOLR-11297.patch, SOLR-11297.patch, SOLR-11297.sh, > solr6_6-startup.log > > > Sometimes when Solr is restarted, I get some "lock held by this virtual > machine" messages in the log, and the admin UI has messages about a failure > to open a new searcher. It doesn't happen on all cores, and the list of > cores that have the problem changes on subsequent restarts. The cores that > exhibit the problems are working just fine -- the first core load is > successful, the failure to open a new searcher is on a second core load > attempt, which fails. > None of the cores in the system are sharing an instanceDir or dataDir. This > has been verified several times. > The index is sharded manually, and the servers are not running in cloud mode. > One critical detail to this issue: The cores are all perfectly functional. > If somebody is seeing an error message that results in a core not working at > all, then it is likely a different issue. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11297) Message "Lock held by this virtual machine" during startup. Solr is trying to start some cores twice
[ https://issues.apache.org/jira/browse/SOLR-11297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shawn Heisey updated SOLR-11297: Description: Sometimes when Solr is restarted, I get some "lock held by this virtual machine" messages in the log, and the admin UI has messages about a failure to open a new searcher. It doesn't happen on all cores, and the list of cores that have the problem changes on subsequent restarts. The cores that exhibit the problems are working just fine -- the first core load is successful, the failure to open a new searcher is on a second core load attempt, which fails. None of the cores in the system are sharing an instanceDir or dataDir. This has been verified several times. The index is sharded manually, and the servers are not running in cloud mode. One critical detail to this issue: The cores are all perfectly functional. If somebody is seeing an error message that results in a core not working at all, then it is ilikely was: Sometimes when Solr is restarted, I get some "lock held by this virtual machine" messages in the log, and the admin UI has messages about a failure to open a new searcher. It doesn't happen on all cores, and the list of cores that have the problem changes on subsequent restarts. The cores that exhibit the problems are working just fine -- the first core load is successful, the failure to open a new searcher is on a second core load attempt, which fails. None of the cores in the system are sharing an instanceDir or dataDir. This has been verified several times. The index is sharded manually, and the servers are not running in cloud mode. > Message "Lock held by this virtual machine" during startup. Solr is trying > to start some cores twice > - > > Key: SOLR-11297 > URL: https://issues.apache.org/jira/browse/SOLR-11297 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. 
Issues are Public) >Affects Versions: 6.6 >Reporter: Shawn Heisey >Assignee: Erick Erickson > Attachments: SOLR-11297.patch, SOLR-11297.patch, SOLR-11297.sh, > solr6_6-startup.log > > > Sometimes when Solr is restarted, I get some "lock held by this virtual > machine" messages in the log, and the admin UI has messages about a failure > to open a new searcher. It doesn't happen on all cores, and the list of > cores that have the problem changes on subsequent restarts. The cores that > exhibit the problems are working just fine -- the first core load is > successful, the failure to open a new searcher is on a second core load > attempt, which fails. > None of the cores in the system are sharing an instanceDir or dataDir. This > has been verified several times. > The index is sharded manually, and the servers are not running in cloud mode. > One critical detail to this issue: The cores are all perfectly functional. > If somebody is seeing an error message that results in a core not working at > all, then it is ilikely -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-10962) replicationHandler's reserveCommitDuration configurable in SolrCloud mode
[ https://issues.apache.org/jira/browse/SOLR-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke resolved SOLR-10962. Resolution: Fixed Fix Version/s: 7.1 master (8.0) Thanks everyone! > replicationHandler's reserveCommitDuration configurable in SolrCloud mode > - > > Key: SOLR-10962 > URL: https://issues.apache.org/jira/browse/SOLR-10962 > Project: Solr > Issue Type: New Feature > Components: replication (java) >Reporter: Ramsey Haddad >Assignee: Christine Poerschke >Priority: Minor > Fix For: master (8.0), 7.1 > > Attachments: SOLR-10962.patch, SOLR-10962.patch, SOLR-10962.patch, > SOLR-10962.patch, SOLR-10962.patch, SOLR-10962.patch, SOLR-10962.patch > > > With SolrCloud mode, when doing replication via IndexFetcher, we occasionally > see the Fetch fail and then get restarted from scratch in cases where an > Index file is deleted after fetch manifest is computed and before the fetch > actually transfers the file. The risk of this happening can be reduced with a > higher value of reserveCommitDuration. However, the current configuration > only allows this value to be adjusted for "master" mode. This change allows > the value to also be changed when using "SolrCloud" mode. > https://lucene.apache.org/solr/guide/6_6/index-replication.html -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10962) replicationHandler's reserveCommitDuration configurable in SolrCloud mode
[ https://issues.apache.org/jira/browse/SOLR-10962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179301#comment-16179301 ] ASF subversion and git services commented on SOLR-10962: Commit 20f1e633eff373d04aad65e8d7f13fa37194b32a in lucene-solr's branch refs/heads/branch_7x from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=20f1e63 ] SOLR-10962: Make ReplicationHandler's commitReserveDuration configurable in SolrCloud mode. (Ramsey Haddad, Christine Poerschke, hossman) > replicationHandler's reserveCommitDuration configurable in SolrCloud mode > - > > Key: SOLR-10962 > URL: https://issues.apache.org/jira/browse/SOLR-10962 > Project: Solr > Issue Type: New Feature > Components: replication (java) >Reporter: Ramsey Haddad >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-10962.patch, SOLR-10962.patch, SOLR-10962.patch, > SOLR-10962.patch, SOLR-10962.patch, SOLR-10962.patch, SOLR-10962.patch > > > With SolrCloud mode, when doing replication via IndexFetcher, we occasionally > see the Fetch fail and then get restarted from scratch in cases where an > Index file is deleted after fetch manifest is computed and before the fetch > actually transfers the file. The risk of this happening can be reduced with a > higher value of reserveCommitDuration. However, the current configuration > only allows this value to be adjusted for "master" mode. This change allows > the value to also be changed when using "SolrCloud" mode. > https://lucene.apache.org/solr/guide/6_6/index-replication.html -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
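[Editor's note: for context, {{commitReserveDuration}} is the {{hh:mm:ss}} setting the referenced replication guide documents for "master" mode; raising it keeps commit points (and their index files) alive longer so a slow {{IndexFetcher}} transfer does not lose files to a concurrent merge. A sketch of where it lives in {{solrconfig.xml}} under the pre-patch master-mode configuration — the SolrCloud-mode wiring this issue adds is in the attached SOLR-10962.patch and is not reproduced here:]

```
<!-- solrconfig.xml: reserve each commit point for 10 minutes so slow
     replica fetches can finish before the files are deleted
     (default is 00:00:10) -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="commitReserveDuration">00:10:00</str>
  </lst>
</requestHandler>
```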
[jira] [Created] (SOLR-11397) Implement simulated DistributedQueue
Andrzej Bialecki created SOLR-11397: Summary: Implement simulated DistributedQueue Key: SOLR-11397 URL: https://issues.apache.org/jira/browse/SOLR-11397 Project: Solr Issue Type: Sub-task Security Level: Public (Default Security Level. Issues are Public) Reporter: Andrzej Bialecki Assignee: Andrzej Bialecki -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 48 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/48/ No tests ran. Build Log: [...truncated 28024 lines...] prepare-release-no-sign: [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist [copy] Copying 476 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/lucene [copy] Copying 215 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8 [smoker] NOTE: output encoding is UTF-8 [smoker] [smoker] Load release URL "file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/"... [smoker] [smoker] Test Lucene... [smoker] test basics... [smoker] get KEYS [smoker] 0.2 MB in 0.01 sec (25.0 MB/sec) [smoker] check changes HTML... [smoker] download lucene-7.1.0-src.tgz... [smoker] 29.5 MB in 0.03 sec (1090.1 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-7.1.0.tgz... [smoker] 69.4 MB in 0.07 sec (1055.5 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-7.1.0.zip... [smoker] 79.8 MB in 0.07 sec (1154.2 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack lucene-7.1.0.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.8... [smoker] got 6221 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-7.1.0.zip... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.8... [smoker] got 6221 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-7.1.0-src.tgz... [smoker] make sure no JARs/WARs in src dist... [smoker] run "ant validate" [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'... [smoker] test demo with 1.8... 
[smoker] got 213 hits for query "lucene" [smoker] checkindex with 1.8... [smoker] generate javadocs w/ Java 8... [smoker] [smoker] Crawl/parse... [smoker] [smoker] Verify... [smoker] confirm all releases have coverage in TestBackwardsCompatibility [smoker] find all past Lucene releases... [smoker] run TestBackwardsCompatibility.. [smoker] Releases that don't seem to be tested: [smoker] 7.0.0 [smoker] Traceback (most recent call last): [smoker] File "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", line 1484, in [smoker] main() [smoker] File "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", line 1428, in main [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, c.is_signed, ' '.join(c.test_args)) [smoker] File "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", line 1466, in smokeTest [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % version, gitRevision, version, testArgs, baseURL) [smoker] File "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", line 622, in unpackAndVerify [smoker] verifyUnpacked(java, project, artifact, unpackPath, gitRevision, version, testArgs, tmpDir, baseURL) [smoker] File "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", line 774, in verifyUnpacked [smoker] confirmAllReleasesAreTestedForBackCompat(version, unpackPath) [smoker] File "/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/dev-tools/scripts/smokeTestRelease.py", line 1404, in confirmAllReleasesAreTestedForBackCompat [smoker] raise RuntimeError('some releases are not tested by TestBackwardsCompatibility?') [smoker] RuntimeError: some releases are not tested by TestBackwardsCompatibility? 
BUILD FAILED /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/build.xml:622: exec returned: 1 Total time: 192 minutes 19 seconds Build step 'Invoke Ant' marked build as failure Email was triggered for: Failure - Any Sending email for trigger: Failure - Any - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 486 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/486/ Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation Error Message: 1 thread leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) Thread[id=26353, name=jetty-launcher-5090-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) Thread[id=26353, name=jetty-launcher-5090-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505) at __randomizedtesting.SeedInfo.seed([96D84E5F35344A2]:0) Build Log: [...truncated 13306 lines...] 
[junit4] Suite: org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation [junit4] 2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestSolrCloudWithSecureImpersonation_96D84E5F35344A2-001/init-core-data-001 [junit4] 2> 2082135 INFO (SUITE-TestSolrCloudWithSecureImpersonation-seed#[96D84E5F35344A2]-worker) [ ] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) w/NUMERIC_DOCVALUES_SYSPROP=false [junit4] 2> 2082137 INFO (SUITE-TestSolrCloudWithSecureImpersonation-seed#[96D84E5F35344A2]-worker) [ ] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: @org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN) [junit4] 2> 2082137 INFO (SUITE-TestSolrCloudWithSecureImpersonation-seed#[96D84E5F35344A2]-worker) [ ] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.security.egd=file:
[jira] [Commented] (LUCENE-7975) Replace facets taxonomy writer "cache" with BytesRefHash based implementation
[ https://issues.apache.org/jira/browse/LUCENE-7975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179123#comment-16179123 ] Michael McCandless commented on LUCENE-7975: Oh, woops, yes the {{xx}} is leftover -- I'll remove those methods. bq. Do we really need the bytes ThreadLocal in UTF8TaxonomyWriterCache? It looks like it is always accessed under 'this' lock Eeek, nice catch! I meant to perf test w/ that code outside of the lock; I'll re-test and see if it's warranted. > Replace facets taxonomy writer "cache" with BytesRefHash based implementation > - > > Key: LUCENE-7975 > URL: https://issues.apache.org/jira/browse/LUCENE-7975 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael McCandless >Assignee: Michael McCandless > Fix For: master (8.0), 7.1 > > Attachments: LUCENE-7975.patch > > > When the facets module was first created we didn't have {{BytesRefHash}} and > so the default cache ({{Cl2oTaxonomyWriterCache}} was quite a bit more > complex than needed. > I changed this to use a {{BytesRefHash}}, which stores labels as UTF8 > (reduces memory for ascii-only usage), and is also faster (~12% overall > speedup on indexing time in my private tests). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 20544 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20544/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseConcMarkSweepGC

3 tests failed.

FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores
Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores:
   1) Thread[id=4437, name=searcherExecutor-2088-thread-1, state=WAITING, group=TGRP-TestLazyCores]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores:
   1) Thread[id=4437, name=searcherExecutor-2088-thread-1, state=WAITING, group=TGRP-TestLazyCores]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
        at __randomizedtesting.SeedInfo.seed([A597B9F9B2199E1E]:0)

FAILED: junit.framework.TestSuite.org.apache.solr.core.TestLazyCores
Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=4437, name=searcherExecutor-2088-thread-1, state=WAITING, group=TGRP-TestLazyCores]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated:
   1) Thread[id=4437, name=searcherExecutor-2088-thread-1, state=WAITING, group=TGRP-TestLazyCores]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
        at __randomizedtesting.SeedInfo.seed([A597B9F9B2199E1E]:0)

FAILED: org.apache.solr.core.TestLazyCores.testNoCommit
Error Message:
Exception during query
Stack Trace:
java.lang.RuntimeException: Exception during query
        at __randomizedtesting.SeedInfo.seed([A597B9F9B2199E1E:7AF71828793EFDBB]:0)
        at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:884)
        at org.apache.solr.core.TestLazyCores.check10(TestLazyCores.java:847)
        at org.apache.solr.core.TestLazyCores.testNoCommit(TestLazyCores.java:829)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(Ra
Ref Guide for 7.0 - status
My goal is to get a release candidate of the Ref Guide for 7.0 out for a vote later today, or at worst early tomorrow.

The documentation for the new Analytics Component will be missing. With everything else I've had going on, I haven't been able to get to it. I will update the Upgrade Notes to point to the issue that has the pull request for Analytics, and it will be the #1 priority for the 7.1 Ref Guide (which I think is likely to happen soon).

If anyone has any other edits, please go ahead and make them today. You can review drafts of the HTML and PDF versions from Jenkins before the vote starts if you have time and/or interest: https://builds.apache.org/view/L/view/Lucene/job/Solr-reference-guide-7.0/

Thanks, Cassandra - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7974) Add N-dimensional FloatPoint K-nearest-neighbor implementation
[ https://issues.apache.org/jira/browse/LUCENE-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179061#comment-16179061 ] Adrien Grand commented on LUCENE-7974: -- +1 to add this feature to the sandbox I'm wondering whether the use of {{getMinDelta}} could be replaced with {{Math.nextUp}}/{{Math.nextDown}}? > Add N-dimensional FloatPoint K-nearest-neighbor implementation > -- > > Key: LUCENE-7974 > URL: https://issues.apache.org/jira/browse/LUCENE-7974 > Project: Lucene - Core > Issue Type: New Feature > Components: modules/sandbox >Reporter: Steve Rowe >Assignee: Steve Rowe >Priority: Minor > Attachments: LUCENE-7974.patch > > > From LUCENE-7069: > {quote} > KD trees (used by Lucene's new dimensional points) excel at finding "nearest > neighbors" to a given query point ... I think we should add this to Lucene's > sandbox > [...] > It could also be generalized to more than 2 dimensions, but for now I'm > making the class package private in sandbox for just the geo2d (lat/lon) use > case. > {quote} > This issue is to generalize {{LatLonPoint.nearest()}} to more than 2 > dimensions. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
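Adrien's suggestion can be sketched in isolation. Hedged: `floor`/`ceil` below are hypothetical helper names, not the patch's actual `getMinDelta` API; this only illustrates how `Math.nextUp`/`Math.nextDown` recover a guaranteed float bound after a double-to-float cast that rounds to nearest:

```java
public class FloatBounds {
    // Largest float <= d: (float) d rounds to nearest, so step down if it overshot.
    static float floor(double d) {
        float f = (float) d;
        return ((double) f > d) ? Math.nextDown(f) : f;
    }

    // Smallest float >= d: step up if the cast undershot.
    static float ceil(double d) {
        float f = (float) d;
        return ((double) f < d) ? Math.nextUp(f) : f;
    }

    public static void main(String[] args) {
        double d = 0.1; // not exactly representable as a float
        assert (double) floor(d) <= d && d <= (double) ceil(d);
        // The bounds are adjacent representable floats (or equal, if d is a float).
        assert floor(d) == ceil(d) || Math.nextUp(floor(d)) == ceil(d);
    }
}
```

Run with `java -ea FloatBounds`; both `Math.nextUp` and `Math.nextDown` have float overloads since Java 8.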
[jira] [Commented] (LUCENE-7975) Replace facets taxonomy writer "cache" with BytesRefHash based implementation
[ https://issues.apache.org/jira/browse/LUCENE-7975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179027#comment-16179027 ] Adrien Grand commented on LUCENE-7975: -- Wow, nice simplification! - I think you forgot to remove the {{xx}} prefix to some methods (which I believe were used to make the old and new impls co-exist). - Do we really need the bytes ThreadLocal in UTF8TaxonomyWriterCache? It looks like it is always accessed under 'this' lock > Replace facets taxonomy writer "cache" with BytesRefHash based implementation > - > > Key: LUCENE-7975 > URL: https://issues.apache.org/jira/browse/LUCENE-7975 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael McCandless >Assignee: Michael McCandless > Fix For: master (8.0), 7.1 > > Attachments: LUCENE-7975.patch > > > When the facets module was first created we didn't have {{BytesRefHash}} and > so the default cache ({{Cl2oTaxonomyWriterCache}} was quite a bit more > complex than needed. > I changed this to use a {{BytesRefHash}}, which stores labels as UTF8 > (reduces memory for ascii-only usage), and is also faster (~12% overall > speedup on indexing time in my private tests). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
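The ThreadLocal review point generalizes; as a hedged illustration (the class, field, and method below are hypothetical, not the actual {{UTF8TaxonomyWriterCache}} code): a scratch buffer that is only ever touched while holding the `this` monitor is already confined to one thread at a time, so a plain field guarded by the lock suffices and avoids a ThreadLocal lookup on every call:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: 'scratch' is only accessed inside synchronized
// methods, so the 'this' monitor -- not a ThreadLocal -- already provides
// the needed exclusion; one shared buffer is reused across calls.
public class LabelScratch {
    private byte[] scratch = new byte[16]; // guarded by 'this'

    public synchronized int utf8Length(String label) {
        byte[] utf8 = label.getBytes(StandardCharsets.UTF_8);
        if (utf8.length > scratch.length) {
            scratch = new byte[utf8.length]; // grow; still under the lock
        }
        System.arraycopy(utf8, 0, scratch, 0, utf8.length);
        return utf8.length;
    }

    public static void main(String[] args) {
        assert new LabelScratch().utf8Length("facet") == 5;
    }
}
```

A ThreadLocal would only pay off if the buffer were read or written outside the synchronized block, which is exactly what the perf re-test mentioned above would establish.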
[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 50 - Still unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/50/

13 tests failed.

FAILED: org.apache.lucene.index.TestIndexSorting.testRandom3
Error Message:
Test abandoned because suite timeout was reached.
Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
        at __randomizedtesting.SeedInfo.seed([DB028E1A36FB4CA8]:0)

FAILED: junit.framework.TestSuite.org.apache.lucene.index.TestIndexSorting
Error Message:
Suite timeout exceeded (>= 720 msec).
Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
        at __randomizedtesting.SeedInfo.seed([DB028E1A36FB4CA8]:0)

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation
Error Message:
2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation:
   1) Thread[id=116587, name=jetty-launcher-27798-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
        at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
        at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
        at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
        at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)
   2) Thread[id=116581, name=jetty-launcher-27798-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
        at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
        at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
        at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
        at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)
Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation:
   1) Thread[id=116587, name=jetty-launcher-27798-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(Cou
[jira] [Updated] (LUCENE-7975) Replace facets taxonomy writer "cache" with BytesRefHash based implementation
[ https://issues.apache.org/jira/browse/LUCENE-7975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-7975: --- Attachment: LUCENE-7975.patch Patch; I think it's ready. > Replace facets taxonomy writer "cache" with BytesRefHash based implementation > - > > Key: LUCENE-7975 > URL: https://issues.apache.org/jira/browse/LUCENE-7975 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael McCandless >Assignee: Michael McCandless > Fix For: master (8.0), 7.1 > > Attachments: LUCENE-7975.patch > > > When the facets module was first created we didn't have {{BytesRefHash}} and > so the default cache ({{Cl2oTaxonomyWriterCache}} was quite a bit more > complex than needed. > I changed this to use a {{BytesRefHash}}, which stores labels as UTF8 > (reduces memory for ascii-only usage), and is also faster (~12% overall > speedup on indexing time in my private tests). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-7975) Replace facets taxonomy writer "cache" with BytesRefHash based implementation
Michael McCandless created LUCENE-7975: -- Summary: Replace facets taxonomy writer "cache" with BytesRefHash based implementation Key: LUCENE-7975 URL: https://issues.apache.org/jira/browse/LUCENE-7975 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless Assignee: Michael McCandless Fix For: master (8.0), 7.1 When the facets module was first created we didn't have {{BytesRefHash}} and so the default cache ({{Cl2oTaxonomyWriterCache}} was quite a bit more complex than needed. I changed this to use a {{BytesRefHash}}, which stores labels as UTF8 (reduces memory for ascii-only usage), and is also faster (~12% overall speedup on indexing time in my private tests). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
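The memory saving for ASCII-only labels follows directly from the encoding arithmetic: UTF-8 spends one byte per ASCII character, where a char[]-backed cache spends two (UTF-16 code units). A hedged sanity check of that arithmetic in plain Java (this illustrates the encoding sizes only, not {{BytesRefHash}} itself):

```java
import java.nio.charset.StandardCharsets;

public class Utf8Sizes {
    public static void main(String[] args) {
        String ascii = "Author/McCandless";  // ASCII-only facet label

        // ASCII: UTF-8 needs exactly one byte per character,
        // half the two bytes per char of a char[]/UTF-16 representation.
        assert ascii.getBytes(StandardCharsets.UTF_8).length == ascii.length();

        // Non-ASCII text gives up some of that saving: Cyrillic code points
        // take 2 bytes each in UTF-8, the same as UTF-16.
        String cyrillic = "\u043a\u0435\u0448"; // "кеш"
        assert cyrillic.getBytes(StandardCharsets.UTF_8).length == 2 * cyrillic.length();
    }
}
```

So the ~50% memory reduction claimed here applies to ASCII-dominated label sets; mixed-script taxonomies would see a smaller win.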
[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 485 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/485/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseSerialGC

1 tests failed.

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation
Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation:
   1) Thread[id=6176, name=jetty-launcher-751-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
        at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
        at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
        at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
        at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)
Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation:
   1) Thread[id=6176, name=jetty-launcher-751-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
        at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
        at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
        at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
        at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)
        at __randomizedtesting.SeedInfo.seed([6EE6F98416450D41]:0)

Build Log:
[...truncated 12070 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation
   [junit4]   2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestSolrCloudWithSecureImpersonation_6EE6F98416450D41-001/init-core-data-001
   [junit4]   2> 680966 WARN  (SUITE-TestSolrCloudWithSecureImpersonation-seed#[6EE6F98416450D41]-worker) [    ] o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=2 numCloses=2
   [junit4]   2> 680966 INFO  (SUITE-TestSolrCloudWithSecureImpersonation-seed#[6EE6F98416450D41]-worker) [    ] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 680967 INFO  (SUITE-TestSolrCloudWithSecureImpersonation-seed#[6EE6F98416450D41]-worker) [    ] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: @org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 680968 INFO  (SUITE-TestSolrCloudWithSecure
Possible Solr Cloud regression?
I sent this over the weekend as well, but it may have gotten lost in the test failures. While researching https://issues.apache.org/jira/browse/SOLR-11392 I ran into the following. The test is failing on Jenkins with this error:

HTTP ERROR: 404
Problem accessing /solr/mainCorpus_shard2_replica_n3/update. Reason:
Can not find: /solr/mainCorpus_shard2_replica_n3/update

Notice this is looking for the "_n3" replica. What's odd about this is that only two replicas were created for this collection. From the test logs:

[junit4] 2> 134364 INFO (OverseerStateUpdate-98710583079665671-127.0.0.1:33171_solr-n_00) [n:127.0.0.1:33171_solr] o.a.s.c.o.SliceMutator createReplica() {
[junit4] 2>   "operation":"ADDREPLICA",
[junit4] 2>   "collection":"mainCorpus",
[junit4] 2>   "shard":"shard1",
[junit4] 2>   "core":"mainCorpus_shard1_replica_n1",
[junit4] 2>   "state":"down",
[junit4] 2>   "base_url":"http://127.0.0.1:44379/solr",
[junit4] 2>   "type":"NRT"}
[junit4] 2> 134365 INFO (OverseerStateUpdate-98710583079665671-127.0.0.1:33171_solr-n_00) [n:127.0.0.1:33171_solr] o.a.s.c.o.SliceMutator createReplica() {
[junit4] 2>   "operation":"ADDREPLICA",
[junit4] 2>   "collection":"mainCorpus",
[junit4] 2>   "shard":"shard2",
[junit4] 2>   "core":"mainCorpus_shard2_replica_n2",
[junit4] 2>   "state":"down",
[junit4] 2>   "base_url":"http://127.0.0.1:45595/solr",
[junit4] 2>   "type":"NRT"}

So the question is: why is the client looking for a third replica? Another odd thing about this failure is that it doesn't reproduce, and I've never seen it locally, so it only happens on Jenkins. Has anyone run across an issue like this before?

Joel Bernstein
http://joelsolr.blogspot.com/
[jira] [Resolved] (LUCENE-7973) Update dictionary version for Ukrainian analyzer to 3.9.0
[ https://issues.apache.org/jira/browse/LUCENE-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss resolved LUCENE-7973. - Resolution: Fixed Lucene Fields: (was: New) > Update dictionary version for Ukrainian analyzer to 3.9.0 > - > > Key: LUCENE-7973 > URL: https://issues.apache.org/jira/browse/LUCENE-7973 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Andriy Rysin >Assignee: Dawid Weiss >Priority: Minor > Fix For: 7.1 > > > Update morfologik dictionary version to 3.9.0 for Ukrainian analyzer. > There's 60K of new lemmas there along with some other improvements and fixes, > particularly Ukrainian town names have been synchronized with official > standard. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-7973) Update dictionary version for Ukrainian analyzer to 3.9.0
[ https://issues.apache.org/jira/browse/LUCENE-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss updated LUCENE-7973: Priority: Minor (was: Major) > Update dictionary version for Ukrainian analyzer to 3.9.0 > - > > Key: LUCENE-7973 > URL: https://issues.apache.org/jira/browse/LUCENE-7973 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Andriy Rysin >Assignee: Dawid Weiss >Priority: Minor > Fix For: 7.1 > > > Update morfologik dictionary version to 3.9.0 for Ukrainian analyzer. > There's 60K of new lemmas there along with some other improvements and fixes, > particularly Ukrainian town names have been synchronized with official > standard. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-7973) Update dictionary version for Ukrainian analyzer to 3.9.0
[ https://issues.apache.org/jira/browse/LUCENE-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss updated LUCENE-7973: Fix Version/s: 7.1 > Update dictionary version for Ukrainian analyzer to 3.9.0 > - > > Key: LUCENE-7973 > URL: https://issues.apache.org/jira/browse/LUCENE-7973 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Andriy Rysin >Assignee: Dawid Weiss > Fix For: 7.1 > > > Update morfologik dictionary version to 3.9.0 for Ukrainian analyzer. > There's 60K of new lemmas there along with some other improvements and fixes, > particularly Ukrainian town names have been synchronized with official > standard. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7973) Update dictionary version for Ukrainian analyzer to 3.9.0
[ https://issues.apache.org/jira/browse/LUCENE-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16178880#comment-16178880 ] ASF subversion and git services commented on LUCENE-7973: - Commit 3f42d6721de96ee68e15621df86fa63501c8f3d3 in lucene-solr's branch refs/heads/branch_7x from [~dawid.weiss] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3f42d67 ] LUCENE-7973: Update dictionary version for Ukrainian analyzer to 3.9.0. > Update dictionary version for Ukrainian analyzer to 3.9.0 > - > > Key: LUCENE-7973 > URL: https://issues.apache.org/jira/browse/LUCENE-7973 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Andriy Rysin >Assignee: Dawid Weiss > Fix For: 7.1 > > > Update morfologik dictionary version to 3.9.0 for Ukrainian analyzer. > There's 60K of new lemmas there along with some other improvements and fixes, > particularly Ukrainian town names have been synchronized with official > standard. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-7973) Update dictionary version for Ukrainian analyzer to 3.9.0
[ https://issues.apache.org/jira/browse/LUCENE-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss updated LUCENE-7973: Summary: Update dictionary version for Ukrainian analyzer to 3.9.0 (was: Update dictionary version for Ukrainian analyzer) > Update dictionary version for Ukrainian analyzer to 3.9.0 > - > > Key: LUCENE-7973 > URL: https://issues.apache.org/jira/browse/LUCENE-7973 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Andriy Rysin >Assignee: Dawid Weiss > > Update morfologik dictionary version to 3.9.0 for Ukrainian analyzer. > There's 60K of new lemmas there along with some other improvements and fixes, > particularly Ukrainian town names have been synchronized with official > standard. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-11396) Implement simulated ClusterDataProvider
Andrzej Bialecki created SOLR-11396: Summary: Implement simulated ClusterDataProvider Key: SOLR-11396 URL: https://issues.apache.org/jira/browse/SOLR-11396 Project: Solr Issue Type: Sub-task Security Level: Public (Default Security Level. Issues are Public) Reporter: Andrzej Bialecki Assignee: Andrzej Bialecki Fix For: master (8.0) Implement a simulated {{ClusterDataProvider}} that can simulate per-node data, nodes going down and up, replica placement and operations, etc. It should be also possible to initialize this simulator using real data samples, eg. a {{ClusterState}} instance. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-11368) refactor DistibutedQueue to an interface
[ https://issues.apache.org/jira/browse/SOLR-11368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrzej Bialecki resolved SOLR-11368. -- Resolution: Fixed > refactor DistibutedQueue to an interface > > > Key: SOLR-11368 > URL: https://issues.apache.org/jira/browse/SOLR-11368 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling >Reporter: Noble Paul > Attachments: SOLR-11368.patch > > > This helps simulating many ZK actions without actually starting a ZK -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-11395) Implement simulated DistribStateManager
[ https://issues.apache.org/jira/browse/SOLR-11395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrzej Bialecki resolved SOLR-11395. -- Resolution: Fixed Committed to branch {{jira/solr-11285-sim}}. > Implement simulated DistribStateManager > --- > > Key: SOLR-11395 > URL: https://issues.apache.org/jira/browse/SOLR-11395 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki > Fix For: master (8.0) > > > Implement a simulated Zookeeper-like state manager. > Some requirements: > * using in-memory structures (no support for actual distributed operation) > * support ZK Watcher-s > * support ephemeral and sequential nodes -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-11395) Implement simulated DistribStateManager
Andrzej Bialecki created SOLR-11395: Summary: Implement simulated DistribStateManager Key: SOLR-11395 URL: https://issues.apache.org/jira/browse/SOLR-11395 Project: Solr Issue Type: Sub-task Security Level: Public (Default Security Level. Issues are Public) Reporter: Andrzej Bialecki Assignee: Andrzej Bialecki Fix For: master (8.0) Implement a simulated Zookeeper-like state manager. Some requirements: * using in-memory structures (no support for actual distributed operation) * support ZK Watcher-s * support ephemeral and sequential nodes -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (LUCENE-7973) Update dictionary version for Ukrainian analyzer
[ https://issues.apache.org/jira/browse/LUCENE-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss reassigned LUCENE-7973: --- Assignee: Dawid Weiss > Update dictionary version for Ukrainian analyzer > > > Key: LUCENE-7973 > URL: https://issues.apache.org/jira/browse/LUCENE-7973 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Andriy Rysin >Assignee: Dawid Weiss > > Update morfologik dictionary version to 3.9.0 for Ukrainian analyzer. > There's 60K of new lemmas there along with some other improvements and fixes, > particularly Ukrainian town names have been synchronized with official > standard. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 484 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/484/ Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation Error Message: 1 thread leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) Thread[id=10024, name=jetty-launcher-1437-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) Thread[id=10024, name=jetty-launcher-1437-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037) at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328) at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277) at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244) at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44) at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505) at __randomizedtesting.SeedInfo.seed([C11860C74A9858CC]:0) Build Log: [...truncated 12445 lines...] 
[junit4] Suite: org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation [junit4] 2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestSolrCloudWithSecureImpersonation_C11860C74A9858CC-001/init-core-data-001 [junit4] 2> 1034401 WARN (SUITE-TestSolrCloudWithSecureImpersonation-seed#[C11860C74A9858CC]-worker) [ ] o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=20 numCloses=20 [junit4] 2> 1034401 INFO (SUITE-TestSolrCloudWithSecureImpersonation-seed#[C11860C74A9858CC]-worker) [ ] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) w/NUMERIC_DOCVALUES_SYSPROP=false [junit4] 2> 1034403 INFO (SUITE-TestSolrCloudWithSecureImpersonation-seed#[C11860C74A9858CC]-worker) [ ] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: @org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN) [junit4] 2> 1034404 INFO (SUITE-Te
[jira] [Created] (SOLR-11394) Deploy Solr at root context (/) and remove all context randomization in tests
Cao Manh Dat created SOLR-11394: --- Summary: Deploy Solr at root context (/) and remove all context randomization in tests Key: SOLR-11394 URL: https://issues.apache.org/jira/browse/SOLR-11394 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Reporter: Cao Manh Dat Deploy Solr at root context (/) and remove all context randomization in tests -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-11394) Deploy Solr at root context (/) and remove all context randomization in tests
[ https://issues.apache.org/jira/browse/SOLR-11394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cao Manh Dat reassigned SOLR-11394: --- Assignee: Cao Manh Dat > Deploy Solr at root context (/) and remove all context randomization in tests > - > > Key: SOLR-11394 > URL: https://issues.apache.org/jira/browse/SOLR-11394 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Assignee: Cao Manh Dat > > Deploy Solr at root context (/) and remove all context randomization in tests -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10181) CREATEALIAS and DELETEALIAS commands consistency problems under concurrency
[ https://issues.apache.org/jira/browse/SOLR-10181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16178716#comment-16178716 ] Samuel García Martínez commented on SOLR-10181: --- [~erickerickson] Don't worry, I'll create the patch this week and post it here. > CREATEALIAS and DELETEALIAS commands consistency problems under concurrency > --- > > Key: SOLR-10181 > URL: https://issues.apache.org/jira/browse/SOLR-10181 > Project: Solr > Issue Type: Bug > Security Level: Public (Default Security Level. Issues are Public) > Components: SolrCloud > Affects Versions: 5.3, 5.4, 5.5, 6.4.1 > Reporter: Samuel García Martínez > Assignee: Erick Erickson > Attachments: SOLR-10181_testcase.patch > > > When several CREATEALIAS commands are run at the same time by the OCP it can happen > that, even though the API response is OK, some of those CREATEALIAS changes are lost. > h3. The problem > The problem happens because the CREATEALIAS cmd implementation relies on > _zkStateReader.getAliases()_ to create the map that will be stored in ZK. If > several threads reach that line at the same time, only one write will be > stored correctly and the others will be overwritten. > The code I'm referencing is [this > piece|https://github.com/apache/lucene-solr/blob/8c1e67e30e071ceed636083532d4598bf6a8791f/solr/core/src/java/org/apache/solr/cloud/CreateAliasCmd.java#L65]. > As an example, let's say that the current aliases map has {a:colA, b:colB}. > If two CREATEALIAS requests (one adding c:colC and the other adding d:colD) are > submitted to the _tpe_ and reach that line at the same time, the resulting > maps will look like {a:colA, b:colB, c:colC} and {a:colA, b:colB, d:colD}, and > only one of them will be stored correctly in ZK, resulting in "data loss": the > API returns OK even though it didn't work as expected. 
> On top of this, another concurrency problem can happen when the command > checks whether the alias has been set using the _checkForAlias_ method. If these two > CREATEALIAS ZK writes have run at the same time, the alias check for one of > the threads can time out, since only one of the writes has "survived" and has > been "committed" to the _zkStateReader.getAliases()_ map. > h3. How to fix it > I can post a patch for this if someone gives me directions on how it should be > fixed. As I see it, there are two places where the issue can be fixed: in > the processor (OverseerCollectionMessageHandler) in a generic way, or inside > the command itself. > h5. The processor fix > The locking mechanism (_OverseerCollectionMessageHandler#lockTask_) would be > the place to fix this inside the processor. I thought that adding the > operation name instead of only "collection" or "name" to the locking key > would fix the issue, but I realized that the problem will happen anyway if > the concurrency happens between different operations modifying the same > resource (as CREATEALIAS and DELETEALIAS do). So, if this is the > path to follow, I don't know what should be used as the locking key. > h5. The command fix > Fixing it at the command level (_CreateAliasCmd_ and _DeleteAliasCmd_) would > be relatively easy using optimistic locking, i.e., using the aliases.json ZK > version in the keeper.setData call. To do that, the Aliases class should expose the > aliases version so the commands can forward that version with the update and > retry when it fails.
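The "command fix" described above (optimistic locking via the data version, which is what ZooKeeper's versioned setData enables) can be sketched as a read-modify-write loop. The VersionedStore below is a hypothetical stand-in for aliases.json in ZK, with a boolean return in place of BadVersionException; none of these names are Solr's actual CreateAliasCmd code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// Sketch of optimistic locking for alias updates: re-read and retry on a
// version conflict, so concurrent CREATEALIAS/DELETEALIAS writes can no
// longer silently overwrite each other.
public class OptimisticAliasUpdate {
    /** Minimal versioned store: a write succeeds only at the expected version. */
    static class VersionedStore {
        private Map<String, String> aliases = new HashMap<>();
        private int version = 0;
        synchronized int version() { return version; }
        synchronized Map<String, String> read() { return new HashMap<>(aliases); }
        /** Returns false on version mismatch, mimicking ZK's BadVersionException. */
        synchronized boolean write(Map<String, String> next, int expectedVersion) {
            if (expectedVersion != version) return false;
            aliases = next;
            version++;
            return true;
        }
    }

    /** Read-modify-write with retry; `change` is applied to a fresh copy each attempt. */
    static void updateAliases(VersionedStore store, UnaryOperator<Map<String, String>> change) {
        while (true) {
            int v = store.version();
            Map<String, String> next = change.apply(store.read());
            if (store.write(next, v)) return; // conflict -> re-read and retry
        }
    }
}
```

With this loop, the lost-update example from the comment (c:colC and d:colD racing) resolves itself: the losing writer sees the conflict, re-reads the merged map, and retries.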
[jira] [Commented] (SOLR-11013) remove /v2/c alias in favour of /v2/collections only
[ https://issues.apache.org/jira/browse/SOLR-11013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16178676#comment-16178676 ] Noble Paul commented on SOLR-11013: --- there are only the following prefixes now * {{/collections}} * {{/c}} * {{/cores}} * {{/cluster}} * {{/node}} So, is there any scope for confusion? > remove /v2/c alias in favour of /v2/collections only > > > Key: SOLR-11013 > URL: https://issues.apache.org/jira/browse/SOLR-11013 > Project: Solr > Issue Type: Wish > Components: v2 API >Reporter: Christine Poerschke >Priority: Minor > Attachments: SOLR-11013.patch > > > (Perhaps this has already been considered previously elsewhere or on the > mailing list and I just missed it then and couldn't find it now, in which > case happy to withdraw this ticket.) > Would like propose that {{/v2/c}} be removed in favour of {{/v2/collections}} > only: > * there being two ways to do the same thing is potentially confusing > * {{/v2/c}} is short but _c_ could stand not only for _collections_ but also > for _cores_ or _cluster_ or _config_ or _cloud_ etc. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7970) Add a Geo3d shape that models an exact circle, even when the planet model is not a sphere
[ https://issues.apache.org/jira/browse/LUCENE-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16178650#comment-16178650 ] Karl Wright commented on LUCENE-7970: - [~ivera], I've committed both the original code and your revision. If you have tests that verify behavior it would be great to have those committed too; my tests are clearly inadequate. Thanks for verifying this! > Add a Geo3d shape that models an exact circle, even when the planet model is > not a sphere > - > > Key: LUCENE-7970 > URL: https://issues.apache.org/jira/browse/LUCENE-7970 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/spatial3d > Reporter: Ignacio Vera > Assignee: Karl Wright > Attachments: circle.jpg, LUCENE-7970-exact.diff, LUCENE_7970.patch, > LUCENE-7970.patch, LUCENE-7970-proposed.patch, > LUCENE-7970_testBearingPoint.patch > > > Hi [~Karl wright], > The way circles are currently built does not behave very well when the planet > model is not a sphere. When you are close to the border in WGS84 you might get > false positives or false negatives when checking if a point is WITHIN. I think > the reason is that the points used to generate the circle plane are generated > in a way that assumes a sphere. > My proposal is the following: > Add a new method to PlanetModel: > public GeoPoint pointOnBearing(GeoPoint from, double dist, double bearing); > which uses an algorithm that takes into account that the planet might not be > spherical, for example Vincenty's formulae > (https://en.wikipedia.org/wiki/Vincenty%27s_formulae). > Use this method to generate the points for the circle plane. My experiments > show that this approach removes false negatives in WGS84 while it works > nicely on the sphere. > Does it make sense?
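The pointOnBearing proposal quoted above can be illustrated with Vincenty's direct formula on the WGS84 ellipsoid. This is a self-contained sketch with a plain lat/lon-radians signature, not Geo3d's actual GeoPoint-based API; the class name is hypothetical.

```java
// Vincenty's direct formula: destination point from a start point, a bearing,
// and a distance, on an ellipsoid (so it stays accurate on WGS84, where
// spherical great-circle math produces the WITHIN errors described above).
public class VincentyDirect {
    static final double A = 6378137.0;            // WGS84 semi-major axis (m)
    static final double F = 1.0 / 298.257223563;  // WGS84 flattening
    static final double B = A * (1 - F);          // semi-minor axis (m)

    /** Start (lat1, lon1) in radians, dist in meters, bearing in radians
     *  clockwise from north. Returns {lat2, lon2} in radians. */
    public static double[] pointOnBearing(double lat1, double lon1, double dist, double bearing) {
        double sinAlpha1 = Math.sin(bearing), cosAlpha1 = Math.cos(bearing);
        double tanU1 = (1 - F) * Math.tan(lat1);
        double cosU1 = 1 / Math.sqrt(1 + tanU1 * tanU1), sinU1 = tanU1 * cosU1;
        double sigma1 = Math.atan2(tanU1, cosAlpha1);
        double sinAlpha = cosU1 * sinAlpha1;
        double cosSqAlpha = 1 - sinAlpha * sinAlpha;
        double uSq = cosSqAlpha * (A * A - B * B) / (B * B);
        double bigA = 1 + uSq / 16384 * (4096 + uSq * (-768 + uSq * (320 - 175 * uSq)));
        double bigB = uSq / 1024 * (256 + uSq * (-128 + uSq * (74 - 47 * uSq)));
        double sigma = dist / (B * bigA), sigmaPrev, cos2SigmaM, sinSigma, cosSigma;
        do { // iterate the angular distance until it converges
            cos2SigmaM = Math.cos(2 * sigma1 + sigma);
            sinSigma = Math.sin(sigma);
            cosSigma = Math.cos(sigma);
            double deltaSigma = bigB * sinSigma * (cos2SigmaM + bigB / 4
                * (cosSigma * (-1 + 2 * cos2SigmaM * cos2SigmaM)
                   - bigB / 6 * cos2SigmaM * (-3 + 4 * sinSigma * sinSigma)
                     * (-3 + 4 * cos2SigmaM * cos2SigmaM)));
            sigmaPrev = sigma;
            sigma = dist / (B * bigA) + deltaSigma;
        } while (Math.abs(sigma - sigmaPrev) > 1e-12);
        double tmp = sinU1 * sinSigma - cosU1 * cosSigma * cosAlpha1;
        double lat2 = Math.atan2(sinU1 * cosSigma + cosU1 * sinSigma * cosAlpha1,
            (1 - F) * Math.sqrt(sinAlpha * sinAlpha + tmp * tmp));
        double lambda = Math.atan2(sinSigma * sinAlpha1,
            cosU1 * cosSigma - sinU1 * sinSigma * cosAlpha1);
        double c = F / 16 * cosSqAlpha * (4 + F * (4 - 3 * cosSqAlpha));
        double lonDelta = lambda - (1 - c) * F * sinAlpha
            * (sigma + c * sinSigma * (cos2SigmaM + c * cosSigma * (-1 + 2 * cos2SigmaM * cos2SigmaM)));
        return new double[] {lat2, lon1 + lonDelta};
    }
}
```

Circle points generated from such destinations lie on the ellipsoid surface at the true geodesic distance, which is why the approach removes the false negatives seen with the sphere-based construction.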
[jira] [Commented] (LUCENE-7970) Add a Geo3d shape that models an exact circle, even when the planet model is not a sphere
[ https://issues.apache.org/jira/browse/LUCENE-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16178645#comment-16178645 ] ASF subversion and git services commented on LUCENE-7970: - Commit 9add14a513af620da46bfeeb05f6ccf9af61b1c2 in lucene-solr's branch refs/heads/branch_7x from [~kwri...@metacarta.com] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9add14a ] LUCENE-7970: Correct a misinterpretation of bearing direction
[jira] [Commented] (LUCENE-7970) Add a Geo3d shape that models an exact circle, even when the planet model is not a sphere
[ https://issues.apache.org/jira/browse/LUCENE-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16178643#comment-16178643 ] ASF subversion and git services commented on LUCENE-7970: - Commit bcb2076fe6ee8c1eccb8ec95d53d89408f6150c6 in lucene-solr's branch refs/heads/branch_6x from [~kwri...@metacarta.com] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bcb2076 ] LUCENE-7970: Correct a misinterpretation of bearing direction
[jira] [Commented] (LUCENE-7970) Add a Geo3d shape that models an exact circle, even when the planet model is not a sphere
[ https://issues.apache.org/jira/browse/LUCENE-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16178641#comment-16178641 ] ASF subversion and git services commented on LUCENE-7970: - Commit f8f19562ee359cbf1a7711d13754e3a1c6c61920 in lucene-solr's branch refs/heads/master from [~kwri...@metacarta.com] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f8f1956 ] LUCENE-7970: Correct a misinterpretation of bearing direction