[jira] [Resolved] (SOLR-14225) Upgrade jaegertracing
[ https://issues.apache.org/jira/browse/SOLR-14225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jan Høydahl resolved SOLR-14225.
--------------------------------
    Resolution: Duplicate

Closing this as duplicate of SOLR-14286 (even if this came first :) )

> Upgrade jaegertracing
> ---------------------
>
>                Key: SOLR-14225
>                URL: https://issues.apache.org/jira/browse/SOLR-14225
>            Project: Solr
>         Issue Type: Improvement
>     Security Level: Public (Default Security Level. Issues are Public)
>           Reporter: Jan Høydahl
>           Priority: Major
>
> Upgrade jaegertracing from 0.35.5 to 1.1.0. This will also give us a newer
> libthrift which is more stable and secure

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046302#comment-17046302 ]

Jan Høydahl commented on SOLR-14286:
------------------------------------

This should go in 8.5.0, right?

> Upgrade Jaegar to 1.1.0
> -----------------------
>
>                Key: SOLR-14286
>                URL: https://issues.apache.org/jira/browse/SOLR-14286
>            Project: Solr
>         Issue Type: Improvement
>     Security Level: Public (Default Security Level. Issues are Public)
>           Reporter: Cao Manh Dat
>           Assignee: Cao Manh Dat
>           Priority: Major
>
> Rohit Singh pointed out to me that we are using thrift 0.12.0 (in the
> JaegarTracer-Configurator module), which has several security issues. We
> should upgrade to Jaegar 1.1.0, which is compatible with the current
> version we are using.
[jira] [Commented] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046303#comment-17046303 ]

ASF subversion and git services commented on SOLR-14286:
--------------------------------------------------------

Commit e059455004f76585097a8929cb8f6b06c385eb79 in lucene-solr's branch refs/heads/master from Cao Manh Dat
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e059455 ]

SOLR-14286: Update sha files
[jira] [Commented] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046305#comment-17046305 ]

Cao Manh Dat commented on SOLR-14286:
--------------------------------------

Yes [~janhoy], I'm doing the backporting.
[jira] [Comment Edited] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046305#comment-17046305 ]

Cao Manh Dat edited comment on SOLR-14286 at 2/27/20 8:10 AM:
--------------------------------------------------------------

Yes [~janhoy], I'm doing the backporting. My bad, it should be under 8.5.0 in CHANGES.txt.

was (Author: caomanhdat):
Yes [~janhoy], I'm doing the backporting.
[jira] [Commented] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046310#comment-17046310 ]

ASF subversion and git services commented on SOLR-14286:
--------------------------------------------------------

Commit 0e0aa6e2077559d15d15473c27f4d992ee269993 in lucene-solr's branch refs/heads/branch_8x from Cao Manh Dat
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0e0aa6e ]

SOLR-14286: Update sha files
[jira] [Commented] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046309#comment-17046309 ]

ASF subversion and git services commented on SOLR-14286:
--------------------------------------------------------

Commit 5b4f07ee759b6cc78038cbe68df52ba1d81bcd37 in lucene-solr's branch refs/heads/branch_8x from Cao Manh Dat
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5b4f07e ]

SOLR-14286: Upgrade Jaegar to 1.1.0
[jira] [Commented] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046311#comment-17046311 ]

ASF subversion and git services commented on SOLR-14286:
--------------------------------------------------------

Commit 043a3cf849b9e54f9324328a4f5df9196648b158 in lucene-solr's branch refs/heads/master from Cao Manh Dat
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=043a3cf ]

SOLR-14286: Move entry in CHANGES.txt to 8.5.0
[jira] [Resolved] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cao Manh Dat resolved SOLR-14286.
---------------------------------
    Fix Version/s: 8.5
                   master (9.0)
       Resolution: Fixed
[jira] [Commented] (LUCENE-9236) Having a modular Doc Values format
[ https://issues.apache.org/jira/browse/LUCENE-9236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046339#comment-17046339 ]

juan camilo rodriguez duran commented on LUCENE-9236:
-----------------------------------------------------

[~dsmiley] just throwing the exception wouldn't work, at least if you want to run all tests (and be compliant with the API). And here is the main point of this API: if it is used independently, why should the API force you to support other sub-formats that can't co-exist at the same time for a given field? This same pattern is replicated using the EmptyDocValuesProducer; ideally DocValues#checkField would be simpler if we used only the sub-formats. But still, this is not the point of this PR. The first objective is to simplify code readability by splitting the big DocValuesProducer/Consumer classes into single-responsibility classes, then to refactor toward more symmetric reading and writing classes, and finally, if it is worth it, to refactor some components common to all formats, such as the DISI iterator reading and writing parts.

> Having a modular Doc Values format
> ----------------------------------
>
>                Key: LUCENE-9236
>                URL: https://issues.apache.org/jira/browse/LUCENE-9236
>            Project: Lucene - Core
>         Issue Type: Improvement
>         Components: core/index
>           Reporter: juan camilo rodriguez duran
>           Priority: Minor
>             Labels: docValues
>
> Today the DocValues Consumer/Producer require overriding 5 different
> methods, even if you only want to use one, and a given field can only
> support one doc values type at a time.
>
> In the attached PR I've implemented a new modular version of those classes
> (consumer/producer), each one having a single responsibility and writing to
> the same unique file.
> This is mainly a refactor of the existing format, opening the possibility to
> override or implement only the sub-format you need.
>
> I'll do it in 3 steps:
> # Create a CompositeDocValuesFormat and move the code of
> Lucene80DocValuesFormat into separate classes, without modifying the inner
> code. At the same time I created a Lucene85CompositeDocValuesFormat based on
> these changes.
> # I'll introduce some basic components for writing doc values in general,
> such as:
> ## DocumentIdSetIterator serializer: used in each type of field, based on an
> IndexedDISI.
> ## Document ordinals serializer: used in Sorted and SortedSet to
> deduplicate values using a dictionary.
> ## Document boundaries serializer (optional, used only for multivalued
> fields: SortedNumeric and SortedSet).
> ## TermsEnum serializer: useful to write and read the terms dictionary for
> sorted and sorted set doc values.
> # I'll create the new sub-DocValues formats using the previous components.
>
> PR: [https://github.com/apache/lucene-solr/pull/1282]
[jira] [Commented] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046348#comment-17046348 ]

Jan Høydahl commented on SOLR-14286:
------------------------------------

By creating a PR and allowing a few days for review and automatic precommit checks, you would avoid needing 3 commits for this :)
[jira] [Assigned] (LUCENE-9033) Update Release docs and scripts with new site instructions
[ https://issues.apache.org/jira/browse/LUCENE-9033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jan Høydahl reassigned LUCENE-9033:
-----------------------------------

    Assignee: Jan Høydahl

> Update Release docs and scripts with new site instructions
> ----------------------------------------------------------
>
>                Key: LUCENE-9033
>                URL: https://issues.apache.org/jira/browse/LUCENE-9033
>            Project: Lucene - Core
>         Issue Type: Sub-task
>         Components: general/tools
>           Reporter: Jan Høydahl
>           Assignee: Jan Høydahl
>           Priority: Major
>
> * releaseWizard.py
> * ReleaseTODO page
> * addBackcompatIndexes.py
> * archive-solr-ref-guide.sh
> * createPatch.py
> * publish-solr-ref-guide.sh
> * solr-ref-guide/src/meta-docs/publish.adoc
>
> There may be others
[jira] [Reopened] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jan Høydahl reopened SOLR-14286:
--------------------------------

Precommit is failing with gradle. Seems you failed to update versions for gradle in `versions.props` and `versions.lock`.
[jira] [Commented] (SOLR-7796) Implement a "gather support info" button
[ https://issues.apache.org/jira/browse/SOLR-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046418#comment-17046418 ]

Jan Høydahl commented on SOLR-7796:
-----------------------------------

Now that we have the "Nodes" tab in the UI, we have much of this info already. Of course people could take a screenshot. Or we could add a "Copy to clipboard" button on that screen that would generate some targeted JSON suitable for sharing? Another option is of course to make it easy to share the full /admin/info/system, /admin/metrics and all CLUSTERSTATUS JSONs, and then make another tool for support personnel that parses these, producing nice reports similar to the Nodes screen but richer?

> Implement a "gather support info" button
> ----------------------------------------
>
>                Key: SOLR-7796
>                URL: https://issues.apache.org/jira/browse/SOLR-7796
>            Project: Solr
>         Issue Type: Improvement
>         Components: Admin UI
>           Reporter: Shawn Heisey
>           Priority: Minor
>
> A "gather support info" button in the admin UI would be extremely helpful.
> There are some basic pieces of info that we like to have for problem reports
> on the user list, so there should be an easy way for a user to gather that
> info.
> Some of the more basic bits of info would be easy to include in a single file
> that's easy to cut/paste -- java version, heap info, core/collection names,
> directories, and stats, etc. If available, it should include server info
> like memory, commandline args, ZK info, and possibly disk space.
> There could be two buttons -- one that gathers smaller info into an XML,
> JSON, or .properties structure that can be easily cut/paste into an email
> message, and another that gathers larger info like files for configuration
> and schema along with the other info (grabbing from zookeeper if running in
> cloud mode) and packages it into a .zip file. Because the user list eats
> almost all attachments, we would need to come up with some advice for sharing
> the zipfile. I hate to ask INFRA for a file sharing service, but that might
> not be a bad idea.
[GitHub] [lucene-solr] romseygeek closed pull request #1232: LUCENE-9171: Add BoostAttribute handling to QueryBuilder
romseygeek closed pull request #1232: LUCENE-9171: Add BoostAttribute handling to QueryBuilder
URL: https://github.com/apache/lucene-solr/pull/1232

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] romseygeek commented on issue #1232: LUCENE-9171: Add BoostAttribute handling to QueryBuilder
romseygeek commented on issue #1232: LUCENE-9171: Add BoostAttribute handling to QueryBuilder
URL: https://github.com/apache/lucene-solr/pull/1232#issuecomment-591880765

This was merged as part of 663611c99c7d48dd31d53ea17644fcecd5e0fad7
[jira] [Commented] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046434#comment-17046434 ]

Cao Manh Dat commented on SOLR-14286:
--------------------------------------

Thanks [~janhoy], I should move at a slower pace.
[jira] [Commented] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046457#comment-17046457 ]

Cao Manh Dat commented on SOLR-14286:
--------------------------------------

Hi [~janhoy], I'm seeing this problem when trying to run gradle precommit:

{code}
Execution failed for task ':verifyLocks'.
> Found dependencies whose dependents changed:
  -io.opentracing:opentracing-api:0.33.0 (5 constraints: 4c3c8052)
  +io.opentracing:opentracing-api:0.33.0 (5 constraints: 4d3cfe52)

  -io.opentracing:opentracing-util:0.33.0 (3 constraints: f61f583b)
  +io.opentracing:opentracing-util:0.33.0 (3 constraints: f71f843b)

  -org.slf4j:slf4j-api:1.7.24 (18 constraints: 6ef487eb)
  +org.slf4j:slf4j-api:1.7.24 (18 constraints: 74f4c3f7)

  Please run './gradlew --write-locks'.
{code}

But those dependencies were not changed by this issue? So should I run {{./gradlew --write-locks}}?
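[Editorial note on the error above: the lucene-solr gradle build uses the palantir consistent-versions plugin, which pins direct dependency versions in versions.props and records the fully resolved dependency graph in versions.lock; the `(N constraints: xxxxxxxx)` suffix appears to encode the set of dependents, which is why verifyLocks can fail for an artifact whose own version never changed. A hedged sketch of what the relevant versions.props entries might look like (illustrative only, not copied from the actual commit):

```properties
# versions.props -- direct dependency versions (illustrative entries only)
io.jaegertracing:*=1.1.0
io.opentracing:*=0.33.0
org.slf4j:slf4j-api=1.7.24
```

After editing versions.props, running `./gradlew --write-locks` regenerates versions.lock so that verifyLocks passes again.]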
[jira] [Created] (SOLR-14287) Admin UI Properties screen does not show colons
Jan Høydahl created SOLR-14287:
----------------------------------

             Summary: Admin UI Properties screen does not show colons
                 Key: SOLR-14287
                 URL: https://issues.apache.org/jira/browse/SOLR-14287
             Project: Solr
          Issue Type: Bug
      Security Level: Public (Default Security Level. Issues are Public)
          Components: Admin UI
            Reporter: Jan Høydahl
            Assignee: Jan Høydahl
         Attachments: Skjermbilde 2020-02-27 kl. 11.24.24.png

Instead it seems to replace colons with newlines, see screenshot.

!Skjermbilde 2020-02-27 kl. 11.24.24.png|width=500!
[jira] [Updated] (SOLR-14252) NullPointerException in AggregateMetric
[ https://issues.apache.org/jira/browse/SOLR-14252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrzej Bialecki updated SOLR-14252:
------------------------------------
    Fix Version/s: 8.5

> NullPointerException in AggregateMetric
> ---------------------------------------
>
>                Key: SOLR-14252
>                URL: https://issues.apache.org/jira/browse/SOLR-14252
>            Project: Solr
>         Issue Type: Bug
>     Security Level: Public (Default Security Level. Issues are Public)
>         Components: metrics
>           Reporter: Andy Webb
>           Assignee: Andrzej Bialecki
>           Priority: Major
>            Fix For: 8.5
>
>         Time Spent: 3.5h
> Remaining Estimate: 0h
>
> The {{getMax}} and {{getMin}} methods in
> [AggregateMetric|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/metrics/AggregateMetric.java]
> can throw an NPE if non-{{Number}} values are present in {{values}}, when it
> tries to cast a {{null}} {{Double}} to a {{double}}.
> This PR prevents the NPE occurring:
> [https://github.com/apache/lucene-solr/pull/1265]
> (We've also noticed an error in the documentation - see
> https://github.com/apache/lucene-solr/commit/109d3411cd3866d83273187170dbc5b8b3211d20
> - this could be pulled out into a separate ticket if necessary?)
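[Editorial note, as context for the bug described above: in Java, returning a null `Double` from a method declared to return primitive `double` auto-unboxes and throws `NullPointerException`. A minimal sketch of the failure mode and a null-safe variant (illustrative code only, not the actual `AggregateMetric` implementation; the method names are hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MaxSketch {
    // Failure mode: if no Number is present, max stays null and the
    // implicit Double -> double unboxing on return throws NPE.
    static double maxUnsafe(Map<String, Object> values) {
        Double max = null;
        for (Object o : values.values()) {
            if (o instanceof Number) {
                double d = ((Number) o).doubleValue();
                if (max == null || d > max) {
                    max = d;
                }
            }
        }
        return max; // NullPointerException when values held no Number
    }

    // Null-safe variant: skip non-numeric entries, defined fallback of 0.
    static double maxSafe(Map<String, Object> values) {
        double max = Double.NEGATIVE_INFINITY;
        boolean seen = false;
        for (Object o : values.values()) {
            if (o instanceof Number) {
                max = Math.max(max, ((Number) o).doubleValue());
                seen = true;
            }
        }
        return seen ? max : 0.0;
    }
}
```

The linked PR takes the second approach in spirit: never unbox a `Double` that may be null when non-`Number` values share the map.]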
[jira] [Commented] (SOLR-8776) Support RankQuery in grouping
[ https://issues.apache.org/jira/browse/SOLR-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046493#comment-17046493 ]

Diego Ceccarelli commented on SOLR-8776:
----------------------------------------

Hi all, I'm still fresh on the Solr grouping code in master, and I'll be happy to do the work to fix this if a committer can help me. I would also be happy to throw away the current PR and plan fresh work to fix the issue (I have to admit that the current PR is hard to review, and it might be split into different PRs). If anyone is interested in fixing this issue, please let me know :)

> Support RankQuery in grouping
> -----------------------------
>
>                Key: SOLR-8776
>                URL: https://issues.apache.org/jira/browse/SOLR-8776
>            Project: Solr
>         Issue Type: Improvement
>         Components: search
>   Affects Versions: 6.0
>           Reporter: Diego Ceccarelli
>           Priority: Minor
>        Attachments: 0001-SOLR-8776-Support-RankQuery-in-grouping.patch,
>                     0001-SOLR-8776-Support-RankQuery-in-grouping.patch,
>                     0001-SOLR-8776-Support-RankQuery-in-grouping.patch,
>                     0001-SOLR-8776-Support-RankQuery-in-grouping.patch,
>                     0001-SOLR-8776-Support-RankQuery-in-grouping.patch
>
>         Time Spent: 1h 20m
> Remaining Estimate: 0h
>
> Currently it is not possible to use RankQuery [1] and Grouping [2] together
> (see also [3]). In some situations Grouping can be replaced by Collapse and
> Expand Results [4] (which supports reranking), but i) collapse cannot
> guarantee that at least a minimum number of groups will be returned for a
> query, and ii) in the SolrCloud setting you will have constraints on how to
> partition the documents among the shards.
> I'm going to start working on supporting RankQuery in grouping. I'll start by
> attaching a patch with a test that fails because grouping does not support
> the rank query, and then I'll try to fix the problem, starting from the
> non-distributed setting (GroupingSearch).
> My feeling is that since grouping is mostly performed by Lucene, RankQuery
> should be refactored and moved (or partially moved) there.
> Any feedback is welcome.
> [1] https://cwiki.apache.org/confluence/display/solr/RankQuery+API
> [2] https://cwiki.apache.org/confluence/display/solr/Result+Grouping
> [3] http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201507.mbox/%3ccahm-lpuvspest-sw63_8a6gt-wor6ds_t_nb2rope93e4+s...@mail.gmail.com%3E
> [4] https://cwiki.apache.org/confluence/display/solr/Collapse+and+Expand+Results
[jira] [Commented] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046520#comment-17046520 ]

ASF subversion and git services commented on SOLR-14286:
--------------------------------------------------------

Commit d9c43d9fa380a740aba2915c712899aec478a05a in lucene-solr's branch refs/heads/master from Cao Manh Dat
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d9c43d9 ]

SOLR-14286: Fix gradle precommit
[GitHub] [lucene-solr] sigram commented on issue #1265: SOLR-14252: avoid NPE in metric aggregation
sigram commented on issue #1265: SOLR-14252: avoid NPE in metric aggregation
URL: https://github.com/apache/lucene-solr/pull/1265#issuecomment-591927359

Part of `MetricUtilsTest.testMetrics()` doesn't pass for me - the `aggregate2` metric in compact format has different values than `aggregate1`, so it needed a similar section as in the full format. I'll add this before merging.
[jira] [Created] (SOLR-14288) Search on parent-child by specific key - results returns more than the required document after some attempt
bhavna created SOLR-14288:
-----------------------------

             Summary: Search on parent-child by specific key - results return more than the required document after some attempts
                 Key: SOLR-14288
                 URL: https://issues.apache.org/jira/browse/SOLR-14288
             Project: Solr
          Issue Type: Bug
      Security Level: Public (Default Security Level. Issues are Public)
          Components: SolrCloud
    Affects Versions: 7.4
         Environment: Cloud
            Reporter: bhavna
         Attachments: Solr_Issue.txt

Hi Team,

I am trying to perform a POC in SolrCloud and am quite new to this. I did a bulk ingestion using Java into a Solr collection (the data is in the form of JSON, as highlighted); each parent has more than one child, and that may increase up to 10. Now when I try retrieving the data based on a specific parent key, I get extra children and parents along with the requested parent, whereas my requirement is to return only the parent-child combination for the parent I am requesting.
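[Editorial note for readers hitting the same symptom: the report above does not include the actual query, but results like this often arise when nested (parent/child) documents are queried with a plain field query instead of a block-join parser, so parents and children all match as ordinary documents. Purely as an illustration (the field names `doc_type` and `child_key` below are hypothetical, not taken from the reporter's schema), a block-join request that returns one parent plus only its own children looks like:

```
q={!parent which="doc_type:parent"}child_key:K123
fl=*,[child parentFilter="doc_type:parent"]
```

This assumes the parent and its children were indexed together as a single block; whether it applies here depends on details not included in the report.]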
[jira] [Commented] (SOLR-13411) CompositeIdRouter calculates wrong route hash if atomic update is used for route.field
[ https://issues.apache.org/jira/browse/SOLR-13411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046539#comment-17046539 ]

ASF subversion and git services commented on SOLR-13411:
--------------------------------------------------------

Commit 3befb8be941afb0f6e5bdf757ceda631291e5f60 in lucene-solr's branch refs/heads/master from Mikhail Khludnev
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=3befb8b ]

SOLR-13411: reject incremental update for route.field, uniqueKey and _version_.

> CompositeIdRouter calculates wrong route hash if atomic update is used for
> route.field
> --------------------------------------------------------------------------
>
>                Key: SOLR-13411
>                URL: https://issues.apache.org/jira/browse/SOLR-13411
>            Project: Solr
>         Issue Type: Bug
>         Components: SolrCloud
>   Affects Versions: 7.5
>           Reporter: Niko Himanen
>           Assignee: Mikhail Khludnev
>           Priority: Minor
>        Attachments: SOLR-13411.patch, SOLR-13411.patch
>
> If a collection is created with the router.field parameter to define some
> field other than uniqueKey as the route field, and a document update arrives
> with the route field updated using atomic update syntax (for example
> set=123), the hash for document routing is calculated from "set=123" and not
> from 123, the real value, which may lead to routing the document to the
> wrong shard.
>
> This happens in CompositeIdRouter#sliceHash, where the field value is used
> as-is for hash calculation.
>
> I think there are two possible solutions to fix this:
> a) Allow atomic update also for route.field, but use the real value instead
> of the atomic update syntax to route the document to the right shard.
> b) Deny atomic update for route.field and throw an exception.
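[Editorial note, to make the failure mode above concrete: the router hashes the route-field value to pick a slice, so hashing the literal atomic-update wrapper text ("set=123") instead of the real value ("123") generally lands in a different hash range. The sketch below is illustrative only: CompositeIdRouter actually uses MurmurHash3 and per-slice hash ranges, not String.hashCode, and bucket() is a hypothetical stand-in for the hash-to-slice mapping:

```java
public class RouteHashSketch {
    // Hypothetical stand-in for CompositeIdRouter's hash -> slice mapping.
    // Solr really uses MurmurHash3 over the composite id; the principle
    // (different hash input => possibly different shard) is the same.
    static int bucket(String routeValue, int numShards) {
        // floorMod keeps the bucket non-negative for negative hash codes
        return Math.floorMod(routeValue.hashCode(), numShards);
    }
}
```

With, say, 8 shards, bucket("set=123", 8) and bucket("123", 8) differ, which is exactly the misrouting described: an update hashed by the wrapper text can be sent to a shard other than the one holding the original document.]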
[jira] [Commented] (SOLR-14252) NullPointerException in AggregateMetric
[ https://issues.apache.org/jira/browse/SOLR-14252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046541#comment-17046541 ] ASF subversion and git services commented on SOLR-14252: Commit c6bf8b6cec8e09c7c155419173fcfa8aa5f75d51 in lucene-solr's branch refs/heads/master from Andrzej Bialecki [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c6bf8b6 ] SOLR-14252: NullPointerException in AggregateMetric. > NullPointerException in AggregateMetric > --- > > Key: SOLR-14252 > URL: https://issues.apache.org/jira/browse/SOLR-14252 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: metrics >Reporter: Andy Webb >Assignee: Andrzej Bialecki >Priority: Major > Fix For: 8.5 > > Time Spent: 3h 40m > Remaining Estimate: 0h > > The {{getMax}} and {{getMin}} methods in > [AggregateMetric|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/metrics/AggregateMetric.java] > can throw an NPE if non-{{Number}} values are present in {{values}}, when it > tries to cast a {{null}} {{Double}} to a {{double}}. > This PR prevents the NPE occurring: > [https://github.com/apache/lucene-solr/pull/1265] > (We've also noticed an error in the documentation - see > https://github.com/apache/lucene-solr/commit/109d3411cd3866d83273187170dbc5b8b3211d20 > - this could be pulled out into a separate ticket if necessary?) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
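The failure mode can be reproduced outside Solr in a few lines. The sketch below is not the actual AggregateMetric code, just a minimal illustration of the same bug (auto-unboxing a null Double to double) and a null-safe variant in the spirit of the linked PR.

```java
import java.util.List;

// Standalone illustration of the NPE described above (not AggregateMetric).
public class UnboxingNpeDemo {
    // Mirrors the buggy pattern: assumes every value is a Number.
    static double maxUnsafe(List<Object> values) {
        double max = Double.MIN_VALUE;
        for (Object v : values) {
            Double d = (v instanceof Number) ? ((Number) v).doubleValue() : null;
            max = Math.max(max, d); // NPE here when d is null
        }
        return max;
    }

    // Null-safe variant in the spirit of the fix: skip non-Number values.
    static double maxSafe(List<Object> values) {
        double max = Double.MIN_VALUE;
        for (Object v : values) {
            if (v instanceof Number) {
                max = Math.max(max, ((Number) v).doubleValue());
            }
        }
        return max;
    }
}
```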
[GitHub] [lucene-solr] msokolov commented on a change in pull request #1294: LUCENE-9074: Slice Allocation Control Plane For Concurrent Searches
msokolov commented on a change in pull request #1294: LUCENE-9074: Slice Allocation Control Plane For Concurrent Searches URL: https://github.com/apache/lucene-solr/pull/1294#discussion_r385082544 ## File path: lucene/core/src/java/org/apache/lucene/search/DefaultSliceExecutionControlPlane.java ## @@ -0,0 +1,103 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.lucene.search; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.Executor; +import java.util.concurrent.Future; +import java.util.concurrent.FutureTask; +import java.util.concurrent.RejectedExecutionException; + +/** + * Implementation of SliceExecutionControlPlane with queue backpressure based thread allocation + */ +public class DefaultSliceExecutionControlPlane implements SliceExecutionControlPlane<List<Future>, FutureTask> { + private final Executor executor; + + public DefaultSliceExecutionControlPlane(Executor executor) { +this.executor = executor; + } + + @Override + public List<Future> invokeAll(Collection<FutureTask> tasks) { + +if (tasks == null) { + throw new IllegalArgumentException("Tasks is null"); +} + +if (executor == null) { + throw new IllegalArgumentException("Executor is null"); +} + +List<Future> futures = new ArrayList<>(); + +int i = 0; + +for (FutureTask task : tasks) { + boolean shouldExecuteOnCallerThread = false; + + // Execute last task on caller thread + if (i == tasks.size() - 1) { +shouldExecuteOnCallerThread = true; + } + + processTask(task, futures, shouldExecuteOnCallerThread); + ++i; +} + +return futures; + } + + // Helper method to execute a single task + protected void processTask(FutureTask task, List<Future> futures, + boolean shouldExecuteOnCallerThread) { Review comment: The logic around this boolean is hard to read. I think it would be clearer if we restructure the logic a bit and avoid modifying an incoming boolean parameter This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] msokolov commented on a change in pull request #1294: LUCENE-9074: Slice Allocation Control Plane For Concurrent Searches
msokolov commented on a change in pull request #1294: LUCENE-9074: Slice Allocation Control Plane For Concurrent Searches URL: https://github.com/apache/lucene-solr/pull/1294#discussion_r385082712 ## File path: lucene/core/src/java/org/apache/lucene/search/DefaultSliceExecutionControlPlane.java ## @@ -0,0 +1,103 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.lucene.search; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.Executor; +import java.util.concurrent.Future; +import java.util.concurrent.FutureTask; +import java.util.concurrent.RejectedExecutionException; + +/** + * Implementation of SliceExecutionControlPlane with queue backpressure based thread allocation + */ +public class DefaultSliceExecutionControlPlane implements SliceExecutionControlPlane<List<Future>, FutureTask> { + private final Executor executor; + + public DefaultSliceExecutionControlPlane(Executor executor) { +this.executor = executor; + } + + @Override + public List<Future> invokeAll(Collection<FutureTask> tasks) { + +if (tasks == null) { + throw new IllegalArgumentException("Tasks is null"); +} + +if (executor == null) { + throw new IllegalArgumentException("Executor is null"); +} + +List<Future> futures = new ArrayList<>(); + +int i = 0; + +for (FutureTask task : tasks) { + boolean shouldExecuteOnCallerThread = false; + + // Execute last task on caller thread + if (i == tasks.size() - 1) { +shouldExecuteOnCallerThread = true; + } + + processTask(task, futures, shouldExecuteOnCallerThread); + ++i; +} + +return futures; + } + + // Helper method to execute a single task + protected void processTask(FutureTask task, List<Future> futures, + boolean shouldExecuteOnCallerThread) { +if (task == null) { + throw new IllegalArgumentException("Input is null"); +} + +if (!shouldExecuteOnCallerThread) { + try { +executor.execute(task); + } catch (RejectedExecutionException e) { +// Execute on caller thread +shouldExecuteOnCallerThread = true; + } +} + +if (shouldExecuteOnCallerThread) { + try { +task.run(); + } catch (Exception e) { +throw new RuntimeException(e); + } +} + +if (!shouldExecuteOnCallerThread) { Review comment: else? This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] msokolov commented on a change in pull request #1294: LUCENE-9074: Slice Allocation Control Plane For Concurrent Searches
msokolov commented on a change in pull request #1294: LUCENE-9074: Slice Allocation Control Plane For Concurrent Searches URL: https://github.com/apache/lucene-solr/pull/1294#discussion_r385080416 ## File path: lucene/core/src/java/org/apache/lucene/search/SliceExecutionControlPlane.java ## @@ -0,0 +1,32 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.lucene.search; + +import java.util.Collection; + +/** + * Execution control plane which is responsible + * for execution of slices based on the current status + * of the system and current system load + */ +public interface SliceExecutionControlPlane<C, T> { + /** + * Invoke all slices that are allocated for the query + */ + C invokeAll(Collection<T> tasks); Review comment: I think it does not need to be generic at all since the only use case is for Future and FutureTask. In fact the interface serves no visible purpose; we could simply rename DefaultSliceExecutionControlPlane to SliceExecutionControlPlane and nothing else would change. Since we're on naming, the naming seems rather grandiose for my tastes. I tend to think of a ControlPlane as a component in a distributed system, but I struggle to come up with anything better. Maybe SliceRunner? 
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] msokolov commented on a change in pull request #1294: LUCENE-9074: Slice Allocation Control Plane For Concurrent Searches
msokolov commented on a change in pull request #1294: LUCENE-9074: Slice Allocation Control Plane For Concurrent Searches URL: https://github.com/apache/lucene-solr/pull/1294#discussion_r385083507 ## File path: lucene/core/src/java/org/apache/lucene/search/DefaultSliceExecutionControlPlane.java ## @@ -0,0 +1,103 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.lucene.search; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.List; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.Executor; +import java.util.concurrent.Future; +import java.util.concurrent.FutureTask; +import java.util.concurrent.RejectedExecutionException; + +/** + * Implementation of SliceExecutionControlPlane with queue backpressure based thread allocation + */ +public class DefaultSliceExecutionControlPlane implements SliceExecutionControlPlane<List<Future>, FutureTask> { + private final Executor executor; + + public DefaultSliceExecutionControlPlane(Executor executor) { +this.executor = executor; + } + + @Override + public List<Future> invokeAll(Collection<FutureTask> tasks) { + +if (tasks == null) { + throw new IllegalArgumentException("Tasks is null"); +} + +if (executor == null) { + throw new IllegalArgumentException("Executor is null"); +} + +List<Future> futures = new ArrayList<>(); + +int i = 0; + +for (FutureTask task : tasks) { + boolean shouldExecuteOnCallerThread = false; + + // Execute last task on caller thread + if (i == tasks.size() - 1) { +shouldExecuteOnCallerThread = true; + } + + processTask(task, futures, shouldExecuteOnCallerThread); + ++i; +} + +return futures; + } + + // Helper method to execute a single task + protected void processTask(FutureTask task, List<Future> futures, + boolean shouldExecuteOnCallerThread) { +if (task == null) { + throw new IllegalArgumentException("Input is null"); +} + +if (!shouldExecuteOnCallerThread) { + try { +executor.execute(task); + } catch (RejectedExecutionException e) { +// Execute on caller thread +shouldExecuteOnCallerThread = true; + } +} + +if (shouldExecuteOnCallerThread) { + try { +task.run(); + } catch (Exception e) { +throw new RuntimeException(e); + } +} + +if (!shouldExecuteOnCallerThread) { Review comment: could we add this to the first if block? ie executor.execute() .. futures.add() ...? 
Then use early return and you don't need to modify the incoming parameter This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
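For illustration, one way the reviewer's suggestion might look: group the pool submission and futures.add() in the first branch and use an early return instead of mutating the incoming parameter. Class and method names follow the PR under review, but the body is a hedged sketch, not the committed code.

```java
import java.util.List;
import java.util.concurrent.Executor;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;
import java.util.concurrent.RejectedExecutionException;

// Hedged sketch of the restructuring suggested in the review, not the
// committed implementation.
public class SliceRunnerSketch {
    private final Executor executor;

    public SliceRunnerSketch(Executor executor) {
        this.executor = executor;
    }

    protected void processTask(FutureTask<?> task, List<Future<?>> futures,
                               boolean executeOnCallerThread) {
        if (task == null) {
            throw new IllegalArgumentException("Input is null");
        }

        if (!executeOnCallerThread) {
            try {
                executor.execute(task);
                futures.add(task); // submitted to the pool: done, early return
                return;
            } catch (RejectedExecutionException e) {
                // fall through: run on the caller thread instead
            }
        }

        try {
            task.run(); // caller-thread execution path
            futures.add(task);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```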
[GitHub] [lucene-solr] irvingzhang opened a new pull request #1295: Lucene-9004: bug fix for searching the nearest one neighbor in higher layers
irvingzhang opened a new pull request #1295: Lucene-9004: bug fix for searching the nearest one neighbor in higher layers URL: https://github.com/apache/lucene-solr/pull/1295 `if (dist < f.distance() || results.size() < ef) { Neighbor n = new ImmutableNeighbor(e.docId(), dist); candidates.add(n); results.insertWithOverflow(n); f = results.top(); }` If dist < f.distance() but results.size() >= ef, the "Neighbor n" is still added to "results" ("results" is a sub-type of PriorityQueue). The actual size of "results" can then end up anywhere between "ef" and the queue's max size, while its expected size is "ef". Consider the following situation: `FurthestNeighbors neighbors = new FurthestNeighbors(ef, ep); for (int l = hnsw.topLevel(); l > 0; l--) { visitedCount += hnsw.searchLayer(query, neighbors, 1, l, vectorValues); } visitedCount += hnsw.searchLayer(query, neighbors, ef, 0, vectorValues);` where the max size of "neighbors" ("neighbors" is also a sub-type of PriorityQueue) is ef (assume ef > 1). When searching over a non-zero layer, we are supposed to find only the nearest one neighbor via `hnsw.searchLayer(query, neighbors, 1, l, vectorValues);`, where l is the layer and l > 0, yet the actual size of "neighbors" may be larger than 1. Assuming "results.size() <= ef", I think calling "results.pop();" when "results.size() == ef" solves this problem. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
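The invariant the reporter wants, that the result queue never holds more than ef entries, can be shown with a generic bounded "furthest-first" queue. This sketch uses java.util.PriorityQueue rather than Lucene's Neighbor/FurthestNeighbors classes, so the names and structure here are illustrative assumptions, not the patch itself.

```java
import java.util.PriorityQueue;

// Generic illustration of the ef-bound invariant at issue (not Lucene code):
// a "furthest first" queue of candidate distances that must never hold more
// than ef entries. Evicting the current furthest entry before inserting,
// whenever the queue is already full, keeps the bound.
public class BoundedNeighborQueue {
    private final int ef;
    // Max-heap on distance: the head is the furthest accepted neighbor.
    private final PriorityQueue<Double> results =
        new PriorityQueue<>((a, b) -> Double.compare(b, a));

    public BoundedNeighborQueue(int ef) {
        this.ef = ef;
    }

    public void offer(double dist) {
        if (results.size() < ef) {
            results.add(dist);
        } else if (dist < results.peek()) {
            results.poll();    // evict the current furthest first...
            results.add(dist); // ...so size never exceeds ef
        }
    }

    public int size() {
        return results.size();
    }
}
```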
[jira] [Commented] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046562#comment-17046562 ] Jan Høydahl commented on SOLR-14286: Thanks. Looks good now. Remember to close this issue again, as I reopened it with my previous comment. > Upgrade Jaegar to 1.1.0 > --- > > Key: SOLR-14286 > URL: https://issues.apache.org/jira/browse/SOLR-14286 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Assignee: Cao Manh Dat >Priority: Major > Fix For: master (9.0), 8.5 > > > Rohit Singh pointed out to me that we are using thrift 0.12.0 (in > JaegarTracer-Configurator module) which has several security issues. We > should upgrade to Jaegar 1.1.0, which is compatible with the current version we > are using. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-13579) Create resource management API
[ https://issues.apache.org/jira/browse/SOLR-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046564#comment-17046564 ] Andrzej Bialecki commented on SOLR-13579: - [~dsmiley] this is a bit of an unusual case because there is just a single instance of the component to control; normally the framework manages multiple components that compete for a limited pool of resources. Still, it should be possible to use it, just to benefit from the soft optimization implemented in {{CacheManagerPool}}. This cache would then use its own pre-defined pool, similar to those defined for searcher caches (in {{DefaultResourceManager}}). Not sure it's worth the trouble for a single instance, though: {{TransientSolrCoreCache}} would have to implement {{ManagedComponent}} and either use the "cache" type interface (but then it would have to implement {{SolrCache}} in order to be managed by {{CacheManagerPlugin}}) or define its own special type and implement a specialized subclass of {{ResourceManagerPool}}. So it's probably not worth it just to manage a single instance. In principle it doesn't really matter at what level the component you want to control sits, as long as there's a mechanism to control it and it makes sense to control the total resource usage globally, i.e. at the {{CoreContainer}} level. > Create resource management API > -- > > Key: SOLR-13579 > URL: https://issues.apache.org/jira/browse/SOLR-13579 > Project: Solr > Issue Type: New Feature >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > Attachments: SOLR-13579.patch, SOLR-13579.patch, SOLR-13579.patch, > SOLR-13579.patch, SOLR-13579.patch, SOLR-13579.patch, SOLR-13579.patch, > SOLR-13579.patch, SOLR-13579.patch, SOLR-13579.patch > > Time Spent: 3h 10m > Remaining Estimate: 0h > > Resource management framework API supporting the goals outlined in SOLR-13578. 
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8987) Move Lucene web site from svn to git
[ https://issues.apache.org/jira/browse/LUCENE-8987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046571#comment-17046571 ] Uwe Schindler commented on LUCENE-8987: --- I implemented the last solution to work around the strange "__root" URLs appearing after automatic redirects > Move Lucene web site from svn to git > > > Key: LUCENE-8987 > URL: https://issues.apache.org/jira/browse/LUCENE-8987 > Project: Lucene - Core > Issue Type: Task > Components: general/website >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Attachments: lucene-site-repo.png > > > INFRA just enabled [a new way of configuring website > build|https://s.apache.org/asfyaml] from a git branch, [see dev list > email|https://lists.apache.org/thread.html/b6f7e40bece5e83e27072ecc634a7815980c90240bc0a2ccb417f1fd@%3Cdev.lucene.apache.org%3E]. > It allows for automatic builds of both staging and production site, much > like the old CMS. We can choose to auto publish the html content of an > {{output/}} folder, or to have a bot build the site using > [Pelican|https://github.com/getpelican/pelican] from a {{content/}} folder. > The goal of this issue is to explore how this can be done for > [http://lucene.apache.org|http://lucene.apache.org/] by creating a new > git repo {{lucene-site}}, copying over the site from svn, seeing if it can be > "Pelicanized" easily, and then testing staging. Benefits are that more people > will be able to edit the web site and we can take PRs from the public (with > GitHub preview of pages). > Non-goals: > * Create a new web site or a new graphic design > * Change from Markdown to Asciidoc -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14252) NullPointerException in AggregateMetric
[ https://issues.apache.org/jira/browse/SOLR-14252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046574#comment-17046574 ] ASF subversion and git services commented on SOLR-14252: Commit ad9b2d2e1994f6cc868444eed9c3438408ef6078 in lucene-solr's branch refs/heads/branch_8x from Andrzej Bialecki [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ad9b2d2 ] SOLR-14252: NullPointerException in AggregateMetric. > NullPointerException in AggregateMetric > --- > > Key: SOLR-14252 > URL: https://issues.apache.org/jira/browse/SOLR-14252 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: metrics >Reporter: Andy Webb >Assignee: Andrzej Bialecki >Priority: Major > Fix For: 8.5 > > Time Spent: 3h 40m > Remaining Estimate: 0h > > The {{getMax}} and {{getMin}} methods in > [AggregateMetric|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/metrics/AggregateMetric.java] > can throw an NPE if non-{{Number}} values are present in {{values}}, when it > tries to cast a {{null}} {{Double}} to a {{double}}. > This PR prevents the NPE occurring: > [https://github.com/apache/lucene-solr/pull/1265] > (We've also noticed an error in the documentation - see > https://github.com/apache/lucene-solr/commit/109d3411cd3866d83273187170dbc5b8b3211d20 > - this could be pulled out into a separate ticket if necessary?) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046580#comment-17046580 ] Jan Høydahl commented on SOLR-14286: Another PR of mine now has a failing gradle build due to this: https://github.com/apache/lucene-solr/pull/1288/checks?check_run_id=472329702 {code}
> Task :solr:contrib:jaegertracer-configurator:collectJarInfos
FAILURE: Build failed with an exception.
> Task :solr:contrib:jaegertracer-configurator:validateJarChecksums FAILED

* Where:
Script '/home/runner/work/lucene-solr/lucene-solr/gradle/validation/jar-checks.gradle' line: 195

* What went wrong:
Execution failed for task ':solr:contrib:jaegertracer-configurator:validateJarChecksums'.
> Dependency checksum validation failed:
  - Dependency checksum missing ('io.jaegertracing:jaeger-core:0.35.5'), expected it at: /home/runner/work/lucene-solr/lucene-solr/solr/licenses/jaeger-core-0.35.5.jar.sha1
  - Dependency checksum missing ('io.jaegertracing:jaeger-thrift:0.35.5'), expected it at: /home/runner/work/lucene-solr/lucene-solr/solr/licenses/jaeger-thrift-0.35.5.jar.sha1
  - Dependency checksum missing ('org.apache.thrift:libthrift:0.12.0'), expected it at: /home/runner/work/lucene-solr/lucene-solr/solr/licenses/libthrift-0.12.0.jar.sha1
{code} I have not tried to reproduce; could it be that the GitHub checks just lag behind on their git checkout? > Upgrade Jaegar to 1.1.0 > --- > > Key: SOLR-14286 > URL: https://issues.apache.org/jira/browse/SOLR-14286 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Assignee: Cao Manh Dat >Priority: Major > Fix For: master (9.0), 8.5 > > > Rohit Singh pointed out to me that we are using thrift 0.12.0 (in > JaegarTracer-Configurator module) which has several security issues. 
We > should upgrade to Jaegar 1.1.0, which is compatible with the current version we > are using. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
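For context on what the validateJarChecksums task is looking for: each *.jar.sha1 file under solr/licenses/ holds the hex SHA-1 digest of the corresponding dependency jar. A minimal sketch of computing such a digest follows; file paths here are hypothetical, and this is not the gradle check's own code.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Sha1File {
    // Hex SHA-1 of raw bytes; this is the format stored in *.jar.sha1 files.
    public static String sha1Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-1 unavailable", e); // every JDK ships SHA-1
        }
    }

    // Convenience: digest a jar on disk (path is hypothetical).
    public static String sha1Hex(Path jar) {
        try {
            return sha1Hex(Files.readAllBytes(jar));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```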
[jira] [Commented] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046583#comment-17046583 ] Cao Manh Dat commented on SOLR-14286: - I think so, because I ran precommit with both gradlew and ant multiple times. > Upgrade Jaegar to 1.1.0 > --- > > Key: SOLR-14286 > URL: https://issues.apache.org/jira/browse/SOLR-14286 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Assignee: Cao Manh Dat >Priority: Major > Fix For: master (9.0), 8.5 > > > Rohit Singh pointed out to me that we are using thrift 0.12.0 (in > JaegarTracer-Configurator module) which has several security issues. We > should upgrade to Jaegar 1.1.0, which is compatible with the current version we > are using. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Resolved] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cao Manh Dat resolved SOLR-14286. - Resolution: Fixed > Upgrade Jaegar to 1.1.0 > --- > > Key: SOLR-14286 > URL: https://issues.apache.org/jira/browse/SOLR-14286 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Assignee: Cao Manh Dat >Priority: Major > Fix For: master (9.0), 8.5 > > > Rohit Singh pointed out to me that we are using thrift 0.12.0 (in > JaegarTracer-Configurator module) which has several security issues. We > should upgrade to Jaegar 1.1.0, which is compatible with the current version we > are using. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-13579) Create resource management API
[ https://issues.apache.org/jira/browse/SOLR-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046589#comment-17046589 ] David Smiley commented on SOLR-13579: - Thanks; makes sense. > Create resource management API > -- > > Key: SOLR-13579 > URL: https://issues.apache.org/jira/browse/SOLR-13579 > Project: Solr > Issue Type: New Feature >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > Attachments: SOLR-13579.patch, SOLR-13579.patch, SOLR-13579.patch, > SOLR-13579.patch, SOLR-13579.patch, SOLR-13579.patch, SOLR-13579.patch, > SOLR-13579.patch, SOLR-13579.patch, SOLR-13579.patch > > Time Spent: 3h 10m > Remaining Estimate: 0h > > Resource management framework API supporting the goals outlined in SOLR-13578. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046596#comment-17046596 ] Jan Høydahl commented on SOLR-14286: False alarm, I need to merge your last commit into the PR branch... > Upgrade Jaegar to 1.1.0 > --- > > Key: SOLR-14286 > URL: https://issues.apache.org/jira/browse/SOLR-14286 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Assignee: Cao Manh Dat >Priority: Major > Fix For: master (9.0), 8.5 > > > Rohit Singh pointed out to me that we are using thrift 0.12.0 (in > JaegarTracer-Configurator module) which has several security issues. We > should upgrade to Jaegar 1.1.0, which is compatible with the current version we > are using. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] msokolov commented on a change in pull request #1233: LUCENE-9202: refactor leaf collectors in TopFieldCollector
msokolov commented on a change in pull request #1233: LUCENE-9202: refactor leaf collectors in TopFieldCollector URL: https://github.com/apache/lucene-solr/pull/1233#discussion_r385118273 ## File path: lucene/core/src/java/org/apache/lucene/search/TopFieldCollector.java ## @@ -555,7 +519,7 @@ public static void populateScores(ScoreDoc[] topDocs, IndexSearcher searcher, Qu final void add(int slot, int doc) { bottom = pq.add(new Entry(slot, docBase + doc)); -queueFull = totalHits == numHits; +queueFull = slot == numHits - 1; Review comment: Thanks, I did This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] janhoy merged pull request #1288: SOLR-14281: Make sharedLib configurable through SysProp
janhoy merged pull request #1288: SOLR-14281: Make sharedLib configurable through SysProp URL: https://github.com/apache/lucene-solr/pull/1288 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9202) Refactor TopFieldCollector
[ https://issues.apache.org/jira/browse/LUCENE-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046606#comment-17046606 ] ASF subversion and git services commented on LUCENE-9202: - Commit 294b8d4ee1586ba6e20480aa5072386963056347 in lucene-solr's branch refs/heads/master from Michael Sokolov [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=294b8d4 ] LUCENE-9202: refactor leaf collectors in TopFieldCollector > Refactor TopFieldCollector > -- > > Key: LUCENE-9202 > URL: https://issues.apache.org/jira/browse/LUCENE-9202 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael Sokolov >Priority: Major > Time Spent: 1h 20m > Remaining Estimate: 0h > > While working on LUCENE-8929, I found it difficult to manage all the > duplicated code in {{TopFieldCollector,}} which has many branching > conditionals with slightly different logic across its main leaf subclasses, > {{SimpleFieldCollector}} and {{PagingFieldCollector}}. As I want to introduce > further branching, depending on the early termination strategy, it was > getting to be too much, so first I want to do this no-change refactor. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14281) Make sharedLib configurable through SysProp
[ https://issues.apache.org/jira/browse/SOLR-14281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046609#comment-17046609 ] ASF subversion and git services commented on SOLR-14281: Commit 62f5bd50cdd5ce8dd47267d39c02a0595f944064 in lucene-solr's branch refs/heads/master from Jan Høydahl [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=62f5bd5 ] SOLR-14281: Make sharedLib configurable through SysProp (#1288) > Make sharedLib configurable through SysProp > --- > > Key: SOLR-14281 > URL: https://issues.apache.org/jira/browse/SOLR-14281 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Time Spent: 50m > Remaining Estimate: 0h > > solr.xml has support for configuring a {{sharedLib}} location for where to > look for shared jar files. But there is currently no way to change that > location through {{solr.in.sh}} or through SysProp without first editing > solr.xml.
[GitHub] [lucene-solr] msokolov closed pull request #1233: LUCENE-9202: refactor leaf collectors in TopFieldCollector
msokolov closed pull request #1233: LUCENE-9202: refactor leaf collectors in TopFieldCollector URL: https://github.com/apache/lucene-solr/pull/1233
[GitHub] [lucene-solr] msokolov commented on issue #1233: LUCENE-9202: refactor leaf collectors in TopFieldCollector
msokolov commented on issue #1233: LUCENE-9202: refactor leaf collectors in TopFieldCollector URL: https://github.com/apache/lucene-solr/pull/1233#issuecomment-591966009 pushed via command line
[jira] [Updated] (SOLR-14281) Make sharedLib configurable through SysProp and allow multiple paths
[ https://issues.apache.org/jira/browse/SOLR-14281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Høydahl updated SOLR-14281: --- Summary: Make sharedLib configurable through SysProp and allow multiple paths (was: Make sharedLib configurable through SysProp) > Make sharedLib configurable through SysProp and allow multiple paths > > > Key: SOLR-14281 > URL: https://issues.apache.org/jira/browse/SOLR-14281 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Time Spent: 50m > Remaining Estimate: 0h > > solr.xml has support for configuring a {{sharedLib}} location for where to > look for shared jar files. But there is currently no way to change that > location through {{solr.in.sh}} or through SysProp without first editing > solr.xml.
[jira] [Updated] (SOLR-14281) Make sharedLib configurable through SysProp and allow multiple paths
[ https://issues.apache.org/jira/browse/SOLR-14281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Høydahl updated SOLR-14281: --- Fix Version/s: 8.5 > Make sharedLib configurable through SysProp and allow multiple paths > > > Key: SOLR-14281 > URL: https://issues.apache.org/jira/browse/SOLR-14281 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Fix For: 8.5 > > Time Spent: 50m > Remaining Estimate: 0h > > solr.xml has support for configuring a {{sharedLib}} location for where to > look for shared jar files. But there is currently no way to change that > location through {{solr.in.sh}} or through SysProp without first editing > solr.xml.
[GitHub] [lucene-solr] msokolov commented on issue #1295: Lucene-9004: bug fix for searching the nearest one neighbor in higher layers
msokolov commented on issue #1295: Lucene-9004: bug fix for searching the nearest one neighbor in higher layers URL: https://github.com/apache/lucene-solr/pull/1295#issuecomment-591970917 I believe that in practice the max size of results is always set to ef, so there shouldn't be any real issue. I agree that the interface doesn't make that plain; we should enforce this invariant by API contract
[jira] [Commented] (SOLR-13996) Refactor HttpShardHandler#prepDistributed() into smaller pieces
[ https://issues.apache.org/jira/browse/SOLR-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046629#comment-17046629 ] Shalin Shekhar Mangar commented on SOLR-13996: -- Fair enough, I'll rename the class. > Refactor HttpShardHandler#prepDistributed() into smaller pieces > --- > > Key: SOLR-13996 > URL: https://issues.apache.org/jira/browse/SOLR-13996 > Project: Solr > Issue Type: Improvement >Reporter: Ishan Chattopadhyaya >Assignee: Shalin Shekhar Mangar >Priority: Major > Attachments: SOLR-13996.patch, SOLR-13996.patch > > Time Spent: 0.5h > Remaining Estimate: 0h > > Currently, it is very hard to understand all the various things being done in > HttpShardHandler. I'm starting with refactoring the prepDistributed() method > to make it easier to grasp. It has standalone and cloud code intertwined, and > I wanted to cleanly separate them out. Later, we can even have two separate > methods (one each for standalone and cloud).
[jira] [Commented] (SOLR-14281) Make sharedLib configurable through SysProp and allow multiple paths
[ https://issues.apache.org/jira/browse/SOLR-14281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046636#comment-17046636 ] ASF subversion and git services commented on SOLR-14281: Commit e8922a2299a74ac8ea4f2962ce3412fa8a8e5e75 in lucene-solr's branch refs/heads/branch_8x from Jan Høydahl [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e8922a2 ] SOLR-14281: Make sharedLib configurable through SysProp (#1288) (cherry picked from commit 62f5bd50cdd5ce8dd47267d39c02a0595f944064) > Make sharedLib configurable through SysProp and allow multiple paths > > > Key: SOLR-14281 > URL: https://issues.apache.org/jira/browse/SOLR-14281 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Fix For: 8.5 > > Time Spent: 50m > Remaining Estimate: 0h > > solr.xml has support for configuring a {{sharedLib}} location for where to > look for shared jar files. But there is currently no way to change that > location through {{solr.in.sh}} or through SysProp without first editing > solr.xml.
[jira] [Resolved] (SOLR-14281) Make sharedLib configurable through SysProp and allow multiple paths
[ https://issues.apache.org/jira/browse/SOLR-14281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Høydahl resolved SOLR-14281. Resolution: Fixed > Make sharedLib configurable through SysProp and allow multiple paths > > > Key: SOLR-14281 > URL: https://issues.apache.org/jira/browse/SOLR-14281 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Fix For: 8.5 > > Time Spent: 50m > Remaining Estimate: 0h > > solr.xml has support for configuring a {{sharedLib}} location for where to > look for shared jar files. But there is currently no way to change that > location through {{solr.in.sh}} or through SysProp without first editing > solr.xml.
[jira] [Commented] (SOLR-14281) Make sharedLib configurable through SysProp and allow multiple paths
[ https://issues.apache.org/jira/browse/SOLR-14281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046641#comment-17046641 ] Jan Høydahl commented on SOLR-14281: Merged. I did not put anything in the upgrade notes as it is backward compatible. The only issue would be if someone has a literal "," in their path name, or if someone already configures a custom {{sharedLib}} setting in their solr.xml and at the same time has bogus jars in {{SOLR_HOME/lib/}} which they do *not* want to load. Earlier, a custom setting would remove {{SOLR_HOME/lib}} from the path, but now it is always there. If anyone has a concern about this we can commit updates. > Make sharedLib configurable through SysProp and allow multiple paths > > > Key: SOLR-14281 > URL: https://issues.apache.org/jira/browse/SOLR-14281 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Fix For: 8.5 > > Time Spent: 50m > Remaining Estimate: 0h > > solr.xml has support for configuring a {{sharedLib}} location for where to > look for shared jar files. But there is currently no way to change that > location through {{solr.in.sh}} or through SysProp without first editing > solr.xml.
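The comma-splitting behaviour (and the literal-comma caveat) Jan describes can be sketched in isolation. This is a hypothetical stand-alone illustration of the parsing, not Solr's actual NodeConfig code; the class and method names are invented:

```java
import java.util.Arrays;
import java.util.List;

public class SharedLibPaths {
    // Split a sharedLib setting on "," into individual lib directories.
    // A literal comma inside a directory name would be mis-split here,
    // which is exactly the edge case mentioned in the comment above.
    public static List<String> parse(String sharedLib) {
        return Arrays.asList(sharedLib.split(","));
    }

    public static void main(String[] args) {
        // e.g. started with -Dsolr.sharedLib=/opt/solr/lib,/opt/extra/lib
        System.out.println(parse("/opt/solr/lib,/opt/extra/lib"));
    }
}
```

Because `SOLR_HOME/lib` is now always appended regardless of this setting, users with unwanted jars there would need to move them rather than rely on a custom sharedLib overriding the default.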
[jira] [Updated] (SOLR-14252) NullPointerException in AggregateMetric
[ https://issues.apache.org/jira/browse/SOLR-14252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrzej Bialecki updated SOLR-14252: Resolution: Fixed Status: Resolved (was: Patch Available) Thank you Andy! > NullPointerException in AggregateMetric > --- > > Key: SOLR-14252 > URL: https://issues.apache.org/jira/browse/SOLR-14252 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: metrics >Reporter: Andy Webb >Assignee: Andrzej Bialecki >Priority: Major > Fix For: 8.5 > > Time Spent: 3h 40m > Remaining Estimate: 0h > > The {{getMax}} and {{getMin}} methods in > [AggregateMetric|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/metrics/AggregateMetric.java] > can throw an NPE if non-{{Number}} values are present in {{values}}, when it > tries to cast a {{null}} {{Double}} to a {{double}}. > This PR prevents the NPE occurring: > [https://github.com/apache/lucene-solr/pull/1265] > (We've also noticed an error in the documentation - see > https://github.com/apache/lucene-solr/commit/109d3411cd3866d83273187170dbc5b8b3211d20 > - this could be pulled out into a separate ticket if necessary?)
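The failure mode in SOLR-14252 is a classic unboxing NPE: looking up a missing or non-Number entry yields null, and casting that null to a primitive double throws. A minimal sketch of the guard pattern (illustrative only, not the actual patch; the class and map below are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class SafeAggregate {
    // Returns the max of all Number values, skipping non-Number entries.
    // Without the instanceof guard, ((Number) v).doubleValue() on a null
    // entry throws a NullPointerException.
    public static double getMax(Map<String, Object> values) {
        double max = Double.NEGATIVE_INFINITY;
        for (Object v : values.values()) {
            if (v instanceof Number) { // guard: skip null / non-Number values
                max = Math.max(max, ((Number) v).doubleValue());
            }
        }
        return max;
    }

    public static void main(String[] args) {
        Map<String, Object> values = new HashMap<>();
        values.put("a", 3.5);
        values.put("b", null);           // would have triggered the NPE
        values.put("c", "not-a-number"); // non-Number entry, skipped
        System.out.println(getMax(values));
    }
}
```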
[jira] [Resolved] (LUCENE-3837) A modest proposal for updateable fields
[ https://issues.apache.org/jira/browse/LUCENE-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrzej Bialecki resolved LUCENE-3837. -- Resolution: Won't Fix This functionality is already supported (in a different way) in recent versions of Lucene. > A modest proposal for updateable fields > --- > > Key: LUCENE-3837 > URL: https://issues.apache.org/jira/browse/LUCENE-3837 > Project: Lucene - Core > Issue Type: New Feature > Components: core/index >Affects Versions: 4.0-ALPHA >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > Attachments: LUCENE-3837.patch > > > I'd like to propose a simple design for implementing updateable fields in > Lucene. This design has some limitations, so I'm not claiming it will be > appropriate for every use case, and it's obvious it has some performance > consequences, but at least it's a start... > This proposal uses a concept of "overlays" or "stacked updates", where the > original data is not removed but instead it's overlaid with the new data. I > propose to reuse as much of the existing APIs as possible, and represent > updates as an IndexReader. Updates to documents in a specific segment would > be collected in an "overlay" index specific to that segment, i.e. there would > be as many overlay indexes as there are segments in the primary index. > A field update would be represented as a new document in the overlay index. > The document would consist of just the updated fields, plus a field that > records the id in the primary segment of the document affected by the update. > These updates would be processed as usual via secondary IndexWriter-s, as > many as there are primary segments, so the same analysis chains would be > used, the same field types, etc. > On opening a segment with updates the SegmentReader (see also LUCENE-3836) > would check for the presence of the "overlay" index, and if so it would open > it first (as an AtomicReader? or would it open individual codec format > readers? perhaps it should load the whole thing into memory?), and it would > construct an in-memory map between the primary's docId-s and the overlay's > docId-s. And finally it would wrap the original format readers with "overlay > readers", initialized also with the id map. > Now, when consumers of the 4D API would ask for specific data, the "overlay > readers" would first re-map the primary's docId to the overlay's docId, and > check whether overlay data exists for that docId and this type of data (e.g. > postings, stored fields, vectors) and return this data instead of the > original. Otherwise they would return the original data. > One obvious performance issue with this approach is that the sequential > access to primary data would translate into random access to the overlay > data. This could be solved by sorting the overlay index so that at least the > overlay ids increase monotonically as primary ids do. > Updates to the primary index would be handled as usual, i.e. segment merges > (since the segments with updates would pretend to have no overlays) would just > work as usual; only the overlay index would have to be deleted once the > primary segment is deleted after merge. > Updates to the existing documents that already had some fields updated would > be again handled as usual, only underneath they would open an IndexWriter on > the overlay index for a specific segment. > That's the broad idea. Feel free to pipe in - I started some coding at the > codec level but got stuck using the approach in LUCENE-3836. The approach > that uses a modified SegmentReader seems more promising.
[jira] [Commented] (SOLR-14252) NullPointerException in AggregateMetric
[ https://issues.apache.org/jira/browse/SOLR-14252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046804#comment-17046804 ] Andy Webb commented on SOLR-14252: -- No worries - thank you for picking this up! Andy > NullPointerException in AggregateMetric > --- > > Key: SOLR-14252 > URL: https://issues.apache.org/jira/browse/SOLR-14252 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: metrics >Reporter: Andy Webb >Assignee: Andrzej Bialecki >Priority: Major > Fix For: 8.5 > > Time Spent: 3h 40m > Remaining Estimate: 0h > > The {{getMax}} and {{getMin}} methods in > [AggregateMetric|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/metrics/AggregateMetric.java] > can throw an NPE if non-{{Number}} values are present in {{values}}, when it > tries to cast a {{null}} {{Double}} to a {{double}}. > This PR prevents the NPE occurring: > [https://github.com/apache/lucene-solr/pull/1265] > (We've also noticed an error in the documentation - see > https://github.com/apache/lucene-solr/commit/109d3411cd3866d83273187170dbc5b8b3211d20 > - this could be pulled out into a separate ticket if necessary?)
[GitHub] [lucene-solr] andywebb1975 closed pull request #1265: SOLR-14252: avoid NPE in metric aggregation
andywebb1975 closed pull request #1265: SOLR-14252: avoid NPE in metric aggregation URL: https://github.com/apache/lucene-solr/pull/1265
[GitHub] [lucene-solr] andywebb1975 commented on issue #1265: SOLR-14252: avoid NPE in metric aggregation
andywebb1975 commented on issue #1265: SOLR-14252: avoid NPE in metric aggregation URL: https://github.com/apache/lucene-solr/pull/1265#issuecomment-592071776 Merged in https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c6bf8b6 - thanks @sigram!
[jira] [Created] (LUCENE-9253) Support custom dictionaries in KoreanTokenizer
Namgyu Kim created LUCENE-9253: -- Summary: Support custom dictionaries in KoreanTokenizer Key: LUCENE-9253 URL: https://issues.apache.org/jira/browse/LUCENE-9253 Project: Lucene - Core Issue Type: Improvement Reporter: Namgyu Kim Assignee: Namgyu Kim KoreanTokenizer does not currently support custom dictionaries (system, unknown), even though Nori provides a DictionaryBuilder that creates them. In the current state, it is very difficult for Nori users to use a custom dictionary. Therefore, we need to open a new constructor that accepts one. Kuromoji already supports this (LUCENE-8971), and I referenced it.
[jira] [Created] (SOLR-14289) Solr may attempt to check Chroot after already having connected once
Mike Drob created SOLR-14289: Summary: Solr may attempt to check Chroot after already having connected once Key: SOLR-14289 URL: https://issues.apache.org/jira/browse/SOLR-14289 Project: Solr Issue Type: Task Security Level: Public (Default Security Level. Issues are Public) Components: Server Reporter: Mike Drob Assignee: Mike Drob Attachments: Screen Shot 2020-02-26 at 2.56.14 PM.png On server startup, we will attempt to load the solr.xml from zookeeper if we have the right properties set, and then later when starting up the core container will take time to verify (and create) the chroot even if it is the same string that we already used before. We can likely skip the second short-lived zookeeper connection to speed up our startup sequence a little bit. See this attached image from thread profiling during startup. !Screen Shot 2020-02-26 at 2.56.14 PM.png!
[GitHub] [lucene-solr] danmuzi opened a new pull request #1296: LUCENE-9253: Support custom dictionaries in KoreanTokenizer
danmuzi opened a new pull request #1296: LUCENE-9253: Support custom dictionaries in KoreanTokenizer URL: https://github.com/apache/lucene-solr/pull/1296 KoreanTokenizer does not currently support custom dictionaries (system, unknown), even though Nori provides a DictionaryBuilder that creates them. In the current state, it is very difficult for Nori users to use a custom dictionary. Therefore, we need to open a new constructor that accepts one. JIRA : https://issues.apache.org/jira/browse/LUCENE-9253
[GitHub] [lucene-solr] madrob opened a new pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits
madrob opened a new pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits URL: https://github.com/apache/lucene-solr/pull/1297 # Description There are lots of places in the code where we poll and sleep rather than using proper ZK callbacks. # Solution Replace poll loops with waitForState
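The poll-and-sleep anti-pattern this PR removes, and the callback-style wait that replaces it, can be contrasted with a small generic sketch. This uses plain java.util.concurrent rather than Solr's actual waitForState API, so all names below are illustrative:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class WaitDemo {
    public static void main(String[] args) throws InterruptedException {
        // Instead of:  while (!ready()) Thread.sleep(500);   (wasteful, adds latency)
        // register a callback that fires when the awaited state change happens:
        CountDownLatch ready = new CountDownLatch(1);

        // Stands in for a ZK watch firing when the cluster state updates.
        Thread watcher = new Thread(ready::countDown);
        watcher.start();

        // Blocks until notified, or until the timeout -- no sleep loop.
        boolean ok = ready.await(10, TimeUnit.SECONDS);
        System.out.println(ok ? "state reached" : "timed out");
    }
}
```

The callback version both reacts immediately (no up-to-500ms sleep granularity) and gives a clean timeout path instead of counting loop iterations.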
[jira] [Commented] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046890#comment-17046890 ] Mikhail Khludnev commented on SOLR-14286: - Is this going according to plan? https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/25859/ {code:java} /home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:496: Source checkout is modified!!! Offending files: * solr/licenses/jaeger-core-1.1.0.jar.sha1 * solr/licenses/libthrift-0.13.0.jar.sha1 {code} > Upgrade Jaegar to 1.1.0 > --- > > Key: SOLR-14286 > URL: https://issues.apache.org/jira/browse/SOLR-14286 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Assignee: Cao Manh Dat >Priority: Major > Fix For: master (9.0), 8.5 > > > Rohit Singh pointed out to me that we are using thrift 0.12.0 (in > the JaegarTracer-Configurator module) which has several security issues. We > should upgrade to Jaegar 1.1.0, which is compatible with the current version we > are using.
[jira] [Commented] (SOLR-13411) CompositeIdRouter calculates wrong route hash if atomic update is used for route.field
[ https://issues.apache.org/jira/browse/SOLR-13411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046892#comment-17046892 ] ASF subversion and git services commented on SOLR-13411: Commit 64193f052acbb216d393c719400637648664a1c1 in lucene-solr's branch refs/heads/branch_8x from Mikhail Khludnev [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=64193f0 ] SOLR-13411: reject incremental update for route.field, uniqueKey and _version_. > CompositeIdRouter calculates wrong route hash if atomic update is used for > route.field > -- > > Key: SOLR-13411 > URL: https://issues.apache.org/jira/browse/SOLR-13411 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 7.5 >Reporter: Niko Himanen >Assignee: Mikhail Khludnev >Priority: Minor > Attachments: SOLR-13411.patch, SOLR-13411.patch > > > If a collection is created with the router.field parameter to define some other > field than uniqueField as the route field, and a document update arrives with the > route field updated using atomic update syntax (for example set=123), the hash > for document routing is calculated from "set=123" and not from 123, the real > value, which may lead to routing the document to the wrong shard. > > This happens in CompositeIdRouter#sliceHash, where the field value is used as is > for hash calculation. > > I think there are two possible solutions to fix this: > a) Allow use of atomic update also for route.field, but use the real value > instead of the atomic update syntax to route the document to the right shard. > b) Deny atomic update for route.field and throw an exception.
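The bug is easy to see with a toy hash: routing on the raw atomic-update payload hashes a different string than the actual field value. String.hashCode below stands in for CompositeIdRouter's real MurmurHash-based math, so this is only an illustration of the mechanism:

```java
public class RouteHashDemo {
    public static void main(String[] args) {
        String actualValue = "123";        // the real route.field value
        String atomicPayload = "set=123";  // what sliceHash saw before the fix

        // Any hash function maps these distinct strings to (almost certainly)
        // distinct buckets -- hence the document can land on the wrong shard.
        int h1 = actualValue.hashCode();
        int h2 = atomicPayload.hashCode();

        System.out.println(h1 == h2 ? "same bucket" : "different hash, possibly wrong shard");
    }
}
```

The committed fix took option (b): reject atomic updates on route.field (and uniqueKey, _version_) rather than try to unwrap the payload before hashing.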
[jira] [Commented] (SOLR-14284) Document that you can add a new stream function via add-expressible
[ https://issues.apache.org/jira/browse/SOLR-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046928#comment-17046928 ] David Eric Pugh commented on SOLR-14284: I went ahead and did a first pass of documenting the various `actions` that you can pass to the `/stream` request handler. This PR is ready for review and ideally commit! I haven't ever added a new page to the solr ref guide, so advice appreciated. > Document that you can add a new stream function via add-expressible > --- > > Key: SOLR-14284 > URL: https://issues.apache.org/jira/browse/SOLR-14284 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) > Components: documentation >Affects Versions: 8.5 >Reporter: David Eric Pugh >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > I confirmed that in Solr 8.5 you will be able to dynamically add a Stream > function (assuming the Jar is in the path) via the configset api: > curl -X POST -H 'Content-type:application/json' -d '{ > "add-expressible": { > "name": "dog", > "class": "org.apache.solr.handler.CatStream" > } > }' http://localhost:8983/solr/gettingstarted/config
[jira] [Commented] (SOLR-7796) Implement a "gather support info" button
[ https://issues.apache.org/jira/browse/SOLR-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046944#comment-17046944 ] Cassandra Targett commented on SOLR-7796: - There's a related aspect to this sort of feature that no one has mentioned yet, which is logs. In my experience, getting configs without logs is only getting half the story. Logs are bigger and more complex (so better suited for the .zip option), so maybe that should be a separate effort, but I wanted to mention it. An additional challenge there is getting logs for all the nodes in a cluster, which can be critical for diagnosing performance issues, for example. Being able to anonymize all configs (& logs) will be really huge for some users. Any solutions that don't include that as an option will be less effective for those users. > Implement a "gather support info" button > --- > > Key: SOLR-7796 > URL: https://issues.apache.org/jira/browse/SOLR-7796 > Project: Solr > Issue Type: Improvement > Components: Admin UI >Reporter: Shawn Heisey >Priority: Minor > > A "gather support info" button in the admin UI would be extremely helpful. > There are some basic pieces of info that we like to have for problem reports > on the user list, so there should be an easy way for a user to gather that > info. > Some of the more basic bits of info would be easy to include in a single file > that's easy to cut/paste -- java version, heap info, core/collection names, > directories, and stats, etc. If available, it should include server info > like memory, commandline args, ZK info, and possibly disk space. > There could be two buttons -- one that gathers smaller info into an XML, > JSON, or .properties structure that can be easily cut/paste into an email > message, and another that gathers larger info like files for configuration > and schema along with the other info (grabbing from zookeeper if running in > cloud mode) and packages it into a .zip file. > Because the user list eats > almost all attachments, we would need to come up with some advice for sharing > the zipfile. I hate to ask INFRA for a file sharing service, but that might > not be a bad idea.
[GitHub] [lucene-solr] madrob opened a new pull request #1298: SOLR-14289 Skip ZkChroot check when not necessary
madrob opened a new pull request #1298: SOLR-14289 Skip ZkChroot check when not necessary URL: https://github.com/apache/lucene-solr/pull/1298
[GitHub] [lucene-solr] madrob opened a new pull request #1299: SOLR-14274 Do not register multiple sets of JVM metrics
madrob opened a new pull request #1299: SOLR-14274 Do not register multiple sets of JVM metrics URL: https://github.com/apache/lucene-solr/pull/1299
[jira] [Updated] (SOLR-13411) CompositeIdRouter calculates wrong route hash if atomic update is used for route.field
[ https://issues.apache.org/jira/browse/SOLR-13411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13411: Fix Version/s: 8.5 > CompositeIdRouter calculates wrong route hash if atomic update is used for > route.field > -- > > Key: SOLR-13411 > URL: https://issues.apache.org/jira/browse/SOLR-13411 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 7.5 >Reporter: Niko Himanen >Assignee: Mikhail Khludnev >Priority: Minor > Fix For: 8.5 > > Attachments: SOLR-13411.patch, SOLR-13411.patch > > > If collection is created with router.field -parameter to define some other > field than uniqueField as route field and document update comes containing > route field updated using atomic update syntax (for example set=123), hash > for document routing is calculated from "set=123" and not from 123 which is > the real value which may lead into routing document to wrong shard. > > This happens in CompositeIdRouter#sliceHash, where field value is used as is > for hash calculation. > > I think there are two possible solutions to fix this: > a) Allow use of atomic update also for route.field, but use real value > instead of atomic update syntax to route document into right shard. > b) Deny atomic update for route.field and throw exception.
[GitHub] [lucene-solr] madrob commented on a change in pull request #1191: SOLR-14197 Reduce API of SolrResourceLoader
madrob commented on a change in pull request #1191: SOLR-14197 Reduce API of SolrResourceLoader URL: https://github.com/apache/lucene-solr/pull/1191#discussion_r385359973 ## File path: solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java ## @@ -104,7 +104,7 @@ private RestManager.Registry managedResourceRegistry; /** @see #reloadLuceneSPI() */ - private boolean needToReloadLuceneSPI = false; + private boolean needToReloadLuceneSPI = false; // requires synchronization Review comment: You mentioned in your reply that this needs to be `volatile` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
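[Editor's note] The `volatile` point in the review above is worth spelling out: a field written by one thread and read by another without synchronization has no visibility guarantee, and `volatile` is the lightweight fix when the field is a standalone flag. A minimal sketch, with illustrative names (this is not the actual SolrResourceLoader code):

```java
// Sketch of a cross-thread reload flag. Without volatile, a reader thread
// may never observe the write made by the marking thread.
public class ReloadFlag {
    private volatile boolean needToReload = false;

    public void markForReload() {
        needToReload = true;  // write becomes visible to other threads
    }

    public boolean consumeReloadRequest() {
        if (needToReload) {
            needToReload = false;  // note: this check-then-act is still not
                                   // atomic; use AtomicBoolean if more than
                                   // one thread may consume the flag
            return true;
        }
        return false;
    }
}
```

If two threads can both consume the flag, `volatile` alone is not enough and `AtomicBoolean.compareAndSet` is the safer choice.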
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits
tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits URL: https://github.com/apache/lucene-solr/pull/1297#discussion_r385436856 ## File path: solr/core/src/java/org/apache/solr/cloud/api/collections/OverseerCollectionMessageHandler.java ## @@ -526,57 +518,31 @@ static UpdateResponse softCommit(String url) throws SolrServerException, IOExcep } String waitForCoreNodeName(String collectionName, String msgNodeName, String msgCore) { -int retryCount = 320; -while (retryCount-- > 0) { - final DocCollection docCollection = zkStateReader.getClusterState().getCollectionOrNull(collectionName); - if (docCollection != null && docCollection.getSlicesMap() != null) { -Map slicesMap = docCollection.getSlicesMap(); -for (Slice slice : slicesMap.values()) { - for (Replica replica : slice.getReplicas()) { -// TODO: for really large clusters, we could 'index' on this - -String nodeName = replica.getStr(ZkStateReader.NODE_NAME_PROP); -String core = replica.getStr(ZkStateReader.CORE_NAME_PROP); - -if (nodeName.equals(msgNodeName) && core.equals(msgCore)) { - return replica.getName(); -} - } +AtomicReference coreNodeName = new AtomicReference<>(); +try { + zkStateReader.waitForState(collectionName, 320, TimeUnit.SECONDS, c -> { +String name = ClusterStateMutator.getAssignedCoreNodeName(c, msgNodeName, msgCore); +if (name == null) { + return false; } - } - try { -Thread.sleep(1000); - } catch (InterruptedException e) { -Thread.currentThread().interrupt(); - } +coreNodeName.set(name); +return true; + }); +} catch (TimeoutException | InterruptedException e) { + throw new SolrException(ErrorCode.SERVER_ERROR, "Timeout waiting for collection state", e); Review comment: `Thread.currentThread().interrupt()`? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
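[Editor's note] The pattern under review above combines two idioms: a `waitForState` predicate cannot return a value directly, so an `AtomicReference` carries the matched value out of the lambda, and an `InterruptedException` should restore the thread's interrupt flag before being wrapped. A simplified sketch with a stand-in `WaitFn` in place of `ZkStateReader.waitForState`:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Predicate;

public class WaitSketch {
    // Stand-in for a blocking wait that repeatedly offers states to a predicate.
    public interface WaitFn<T> { void run(Predicate<T> predicate) throws InterruptedException; }

    // Runs the wait, capturing the first non-null state the predicate accepts.
    public static <T> T waitAndCapture(WaitFn<T> wait) {
        AtomicReference<T> result = new AtomicReference<>();
        try {
            wait.run(state -> {
                if (state == null) {
                    return false;      // keep waiting; nothing matched yet
                }
                result.set(state);     // carry the value out of the lambda
                return true;           // stop waiting
            });
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  // restore the interrupt flag
            throw new IllegalStateException("Interrupted while waiting", e);
        }
        return result.get();
    }
}
```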
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits
tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits URL: https://github.com/apache/lucene-solr/pull/1297#discussion_r385440678 ## File path: solr/core/src/java/org/apache/solr/cloud/api/collections/OverseerCollectionMessageHandler.java ## @@ -672,35 +636,35 @@ void cleanupCollection(String collectionName, NamedList results) throws Exceptio commandMap.get(DELETE).call(zkStateReader.getClusterState(), new ZkNodeProps(props), results); } - Map waitToSeeReplicasInState(String collectionName, Collection coreNames) throws InterruptedException { -assert coreNames.size() > 0; -Map result = new HashMap<>(); -TimeOut timeout = new TimeOut(Integer.getInteger("solr.waitToSeeReplicasInStateTimeoutSeconds", 120), TimeUnit.SECONDS, timeSource); // could be a big cluster -while (true) { - DocCollection coll = zkStateReader.getClusterState().getCollection(collectionName); - for (String coreName : coreNames) { -if (result.containsKey(coreName)) continue; -for (Slice slice : coll.getSlices()) { - for (Replica replica : slice.getReplicas()) { -if (coreName.equals(replica.getStr(ZkStateReader.CORE_NAME_PROP))) { - result.put(coreName, replica); - break; + Map waitToSeeReplicasInState(String collectionName, Collection coreNames) { +final Map result = new HashMap<>(); +int timeout = Integer.getInteger("solr.waitToSeeReplicasInStateTimeoutSeconds", 120); // could be a big cluster +try { + zkStateReader.waitForState(collectionName, timeout, TimeUnit.SECONDS, c -> { +// todo this is ugly, but I'm not sure there is a better way to fix it? Review comment: Can't we iterate shards/replicas, and for each one check if coreNames .contains(replica.getStr(ZkStateReader.CORE_NAME_PROP)). Maybe make a set with all the elements in coreNames and remove them as you find them, and break if empty? I guess it depends on how big coreNames will be This is an automated message from the Apache Git Service. 
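[Editor's note] The reviewer's suggestion above can be sketched as follows: put the requested core names in a mutable `Set`, iterate the replicas once, remove names as they are matched, and exit early once the set is empty. The `Map<String, List<String>>` here is a simplified stand-in for Solr's slice/replica structures:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ReplicaMatcher {
    // Maps each found core name to the slice that holds it.
    public static Map<String, String> matchCores(Map<String, List<String>> slices,
                                                 Collection<String> coreNames) {
        Set<String> remaining = new HashSet<>(coreNames);
        Map<String, String> result = new HashMap<>();
        for (Map.Entry<String, List<String>> slice : slices.entrySet()) {
            for (String core : slice.getValue()) {
                if (remaining.remove(core)) {         // O(1) membership test
                    result.put(core, slice.getKey());
                    if (remaining.isEmpty()) {
                        return result;                // early exit: all found
                    }
                }
            }
        }
        return result;  // may be partial if some cores were not found
    }
}
```

As the reviewer notes, whether the `HashSet` is worth it depends on how large `coreNames` gets; for a handful of names a plain `contains` scan is equivalent.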
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits
tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits URL: https://github.com/apache/lucene-solr/pull/1297#discussion_r385438713 ## File path: solr/core/src/java/org/apache/solr/cloud/api/collections/OverseerCollectionMessageHandler.java ## @@ -630,34 +596,32 @@ private void modifyCollection(ClusterState clusterState, ZkNodeProps message, Na overseer.offerStateUpdate(Utils.toJSON(message)); -TimeOut timeout = new TimeOut(30, TimeUnit.SECONDS, timeSource); -boolean areChangesVisible = true; -while (!timeout.hasTimedOut()) { - DocCollection collection = cloudManager.getClusterStateProvider().getClusterState().getCollection(collectionName); - areChangesVisible = true; - for (Map.Entry updateEntry : message.getProperties().entrySet()) { -String updateKey = updateEntry.getKey(); - -if (!updateKey.equals(ZkStateReader.COLLECTION_PROP) -&& !updateKey.equals(Overseer.QUEUE_OPERATION) -&& updateEntry.getValue() != null // handled below in a separate conditional -&& !updateEntry.getValue().equals(collection.get(updateKey))) { - areChangesVisible = false; - break; +try { + zkStateReader.waitForState(collectionName, 30, TimeUnit.SECONDS, c -> { +if (c == null) { Review comment: Can this happen? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits
tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits URL: https://github.com/apache/lucene-solr/pull/1297#discussion_r385436094 ## File path: solr/core/src/java/org/apache/solr/cloud/ZkController.java ## @@ -1684,58 +1685,37 @@ private void doGetShardIdAndNodeNameProcess(CoreDescriptor cd) { } private void waitForCoreNodeName(CoreDescriptor descriptor) { -int retryCount = 320; -log.debug("look for our core node name"); -while (retryCount-- > 0) { - final DocCollection docCollection = zkStateReader.getClusterState() - .getCollectionOrNull(descriptor.getCloudDescriptor().getCollectionName()); - if (docCollection != null && docCollection.getSlicesMap() != null) { -final Map slicesMap = docCollection.getSlicesMap(); -for (Slice slice : slicesMap.values()) { - for (Replica replica : slice.getReplicas()) { -// TODO: for really large clusters, we could 'index' on this - -String nodeName = replica.getStr(ZkStateReader.NODE_NAME_PROP); -String core = replica.getStr(ZkStateReader.CORE_NAME_PROP); - -String msgNodeName = getNodeName(); -String msgCore = descriptor.getName(); - -if (msgNodeName.equals(nodeName) && core.equals(msgCore)) { - descriptor.getCloudDescriptor() - .setCoreNodeName(replica.getName()); - getCoreContainer().getCoresLocator().persist(getCoreContainer(), descriptor); - return; -} - } +log.debug("waitForCoreNodeName >>> look for our core node name"); +try { + zkStateReader.waitForState(descriptor.getCollectionName(), 320, TimeUnit.SECONDS, c -> { +String name = ClusterStateMutator.getAssignedCoreNodeName(c, getNodeName(), descriptor.getName()); +if (name == null) { + return false; } - } - try { -Thread.sleep(1000); - } catch (InterruptedException e) { -Thread.currentThread().interrupt(); - } +descriptor.getCloudDescriptor().setCoreNodeName(name); +return true; + }); +} catch (TimeoutException | InterruptedException e) { + throw new SolrException(ErrorCode.SERVER_ERROR, "Timeout waiting for collection 
state", e); } +getCoreContainer().getCoresLocator().persist(getCoreContainer(), descriptor); } - private void waitForShardId(CoreDescriptor cd) { + private void waitForShardId(final CoreDescriptor cd) { log.debug("waiting to find shard id in clusterstate for " + cd.getName()); -int retryCount = 320; -while (retryCount-- > 0) { - final String shardId = zkStateReader.getClusterState().getShardId(cd.getCollectionName(), getNodeName(), cd.getName()); - if (shardId != null) { -cd.getCloudDescriptor().setShardId(shardId); -return; - } - try { -Thread.sleep(1000); - } catch (InterruptedException e) { -Thread.currentThread().interrupt(); - } +try { + zkStateReader.waitForState(cd.getCollectionName(), 320, TimeUnit.SECONDS, c -> { +if (c == null) return false; +final String shardId = c.getShardId(getNodeName(), cd.getName()); +if (shardId != null) { + cd.getCloudDescriptor().setShardId(shardId); + return true; +} +return false; + }); +} catch (TimeoutException | InterruptedException e) { + throw new SolrException(ErrorCode.SERVER_ERROR, "Could not get shard id for core: " + cd.getName()); Review comment: Same as before, we should probably re set the interruption. Also, did you intentionally not wrap the exception? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits
tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits URL: https://github.com/apache/lucene-solr/pull/1297#discussion_r385439085 ## File path: solr/core/src/java/org/apache/solr/cloud/api/collections/OverseerCollectionMessageHandler.java ## @@ -630,34 +596,32 @@ private void modifyCollection(ClusterState clusterState, ZkNodeProps message, Na overseer.offerStateUpdate(Utils.toJSON(message)); -TimeOut timeout = new TimeOut(30, TimeUnit.SECONDS, timeSource); -boolean areChangesVisible = true; -while (!timeout.hasTimedOut()) { - DocCollection collection = cloudManager.getClusterStateProvider().getClusterState().getCollection(collectionName); - areChangesVisible = true; - for (Map.Entry updateEntry : message.getProperties().entrySet()) { -String updateKey = updateEntry.getKey(); - -if (!updateKey.equals(ZkStateReader.COLLECTION_PROP) -&& !updateKey.equals(Overseer.QUEUE_OPERATION) -&& updateEntry.getValue() != null // handled below in a separate conditional -&& !updateEntry.getValue().equals(collection.get(updateKey))) { - areChangesVisible = false; - break; +try { + zkStateReader.waitForState(collectionName, 30, TimeUnit.SECONDS, c -> { +if (c == null) { + return false; } +for (Map.Entry updateEntry : message.getProperties().entrySet()) { + String updateKey = updateEntry.getKey(); + + if (!updateKey.equals(ZkStateReader.COLLECTION_PROP) + && !updateKey.equals(Overseer.QUEUE_OPERATION) + && updateEntry.getValue() != null // handled below in a separate conditional + && !updateEntry.getValue().equals(c.get(updateKey))) { +return false; + } -if (updateEntry.getValue() == null && collection.containsKey(updateKey)) { - areChangesVisible = false; - break; + if (updateEntry.getValue() == null && c.containsKey(updateKey)) { +return false; + } } - } - if (areChangesVisible) break; - timeout.sleep(100); +return true; + }); +} catch (TimeoutException | InterruptedException e) { + log.debug("modifyCollection(ClusterState=" + 
clusterState + ", ZkNodeProps=" + message + ", NamedList=" + results + ")", e); Review comment: Can we use parametrized logging here? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
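[Editor's note] The point of parametrized logging, as requested above, is that the message is only assembled when the level is enabled; with string concatenation the arguments are stringified even if the log line is discarded. A toy logger mimicking the SLF4J `{}` convention illustrates this (in the patch itself it would be `log.debug("modifyCollection(ClusterState={}, ...)", clusterState, ...)`):

```java
public class MiniLogger {
    private final boolean debugEnabled;
    public int formatCount = 0;  // counts how many messages were actually assembled

    public MiniLogger(boolean debugEnabled) { this.debugEnabled = debugEnabled; }

    public void debug(String template, Object... args) {
        if (!debugEnabled) {
            return;  // args are never stringified; no formatting cost paid
        }
        StringBuilder sb = new StringBuilder();
        int argIdx = 0, from = 0, at;
        while ((at = template.indexOf("{}", from)) >= 0 && argIdx < args.length) {
            sb.append(template, from, at).append(args[argIdx++]);
            from = at + 2;
        }
        sb.append(template.substring(from));
        formatCount++;
        System.out.println(sb);
    }
}
```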
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits
tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits URL: https://github.com/apache/lucene-solr/pull/1297#discussion_r385439336 ## File path: solr/core/src/java/org/apache/solr/cloud/api/collections/OverseerCollectionMessageHandler.java ## @@ -630,34 +596,32 @@ private void modifyCollection(ClusterState clusterState, ZkNodeProps message, Na overseer.offerStateUpdate(Utils.toJSON(message)); -TimeOut timeout = new TimeOut(30, TimeUnit.SECONDS, timeSource); -boolean areChangesVisible = true; -while (!timeout.hasTimedOut()) { - DocCollection collection = cloudManager.getClusterStateProvider().getClusterState().getCollection(collectionName); - areChangesVisible = true; - for (Map.Entry updateEntry : message.getProperties().entrySet()) { -String updateKey = updateEntry.getKey(); - -if (!updateKey.equals(ZkStateReader.COLLECTION_PROP) -&& !updateKey.equals(Overseer.QUEUE_OPERATION) -&& updateEntry.getValue() != null // handled below in a separate conditional -&& !updateEntry.getValue().equals(collection.get(updateKey))) { - areChangesVisible = false; - break; +try { + zkStateReader.waitForState(collectionName, 30, TimeUnit.SECONDS, c -> { +if (c == null) { + return false; } +for (Map.Entry updateEntry : message.getProperties().entrySet()) { + String updateKey = updateEntry.getKey(); + + if (!updateKey.equals(ZkStateReader.COLLECTION_PROP) + && !updateKey.equals(Overseer.QUEUE_OPERATION) + && updateEntry.getValue() != null // handled below in a separate conditional + && !updateEntry.getValue().equals(c.get(updateKey))) { +return false; + } -if (updateEntry.getValue() == null && collection.containsKey(updateKey)) { - areChangesVisible = false; - break; + if (updateEntry.getValue() == null && c.containsKey(updateKey)) { +return false; + } } - } - if (areChangesVisible) break; - timeout.sleep(100); +return true; + }); +} catch (TimeoutException | InterruptedException e) { + log.debug("modifyCollection(ClusterState=" + 
clusterState + ", ZkNodeProps=" + message + ", NamedList=" + results + ")", e); Review comment: reset interruption This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits
tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits URL: https://github.com/apache/lucene-solr/pull/1297#discussion_r385435944 ## File path: solr/core/src/java/org/apache/solr/cloud/ZkController.java ## @@ -1684,58 +1685,37 @@ private void doGetShardIdAndNodeNameProcess(CoreDescriptor cd) { } private void waitForCoreNodeName(CoreDescriptor descriptor) { -int retryCount = 320; -log.debug("look for our core node name"); -while (retryCount-- > 0) { - final DocCollection docCollection = zkStateReader.getClusterState() - .getCollectionOrNull(descriptor.getCloudDescriptor().getCollectionName()); - if (docCollection != null && docCollection.getSlicesMap() != null) { -final Map slicesMap = docCollection.getSlicesMap(); -for (Slice slice : slicesMap.values()) { - for (Replica replica : slice.getReplicas()) { -// TODO: for really large clusters, we could 'index' on this - -String nodeName = replica.getStr(ZkStateReader.NODE_NAME_PROP); -String core = replica.getStr(ZkStateReader.CORE_NAME_PROP); - -String msgNodeName = getNodeName(); -String msgCore = descriptor.getName(); - -if (msgNodeName.equals(nodeName) && core.equals(msgCore)) { - descriptor.getCloudDescriptor() - .setCoreNodeName(replica.getName()); - getCoreContainer().getCoresLocator().persist(getCoreContainer(), descriptor); - return; -} - } +log.debug("waitForCoreNodeName >>> look for our core node name"); +try { + zkStateReader.waitForState(descriptor.getCollectionName(), 320, TimeUnit.SECONDS, c -> { +String name = ClusterStateMutator.getAssignedCoreNodeName(c, getNodeName(), descriptor.getName()); +if (name == null) { + return false; } - } - try { -Thread.sleep(1000); - } catch (InterruptedException e) { -Thread.currentThread().interrupt(); - } +descriptor.getCloudDescriptor().setCoreNodeName(name); +return true; + }); +} catch (TimeoutException | InterruptedException e) { + throw new SolrException(ErrorCode.SERVER_ERROR, "Timeout waiting for collection 
state", e); } +getCoreContainer().getCoresLocator().persist(getCoreContainer(), descriptor); } - private void waitForShardId(CoreDescriptor cd) { + private void waitForShardId(final CoreDescriptor cd) { log.debug("waiting to find shard id in clusterstate for " + cd.getName()); -int retryCount = 320; -while (retryCount-- > 0) { - final String shardId = zkStateReader.getClusterState().getShardId(cd.getCollectionName(), getNodeName(), cd.getName()); - if (shardId != null) { -cd.getCloudDescriptor().setShardId(shardId); -return; - } - try { -Thread.sleep(1000); - } catch (InterruptedException e) { -Thread.currentThread().interrupt(); - } +try { + zkStateReader.waitForState(cd.getCollectionName(), 320, TimeUnit.SECONDS, c -> { +if (c == null) return false; +final String shardId = c.getShardId(getNodeName(), cd.getName()); +if (shardId != null) { + cd.getCloudDescriptor().setShardId(shardId); Review comment: Do we need any synchronization? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits
tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits URL: https://github.com/apache/lucene-solr/pull/1297#discussion_r385434556 ## File path: solr/core/src/java/org/apache/solr/cloud/ZkController.java ## @@ -1684,58 +1685,37 @@ private void doGetShardIdAndNodeNameProcess(CoreDescriptor cd) { } private void waitForCoreNodeName(CoreDescriptor descriptor) { -int retryCount = 320; -log.debug("look for our core node name"); -while (retryCount-- > 0) { - final DocCollection docCollection = zkStateReader.getClusterState() - .getCollectionOrNull(descriptor.getCloudDescriptor().getCollectionName()); - if (docCollection != null && docCollection.getSlicesMap() != null) { -final Map slicesMap = docCollection.getSlicesMap(); -for (Slice slice : slicesMap.values()) { - for (Replica replica : slice.getReplicas()) { -// TODO: for really large clusters, we could 'index' on this - -String nodeName = replica.getStr(ZkStateReader.NODE_NAME_PROP); -String core = replica.getStr(ZkStateReader.CORE_NAME_PROP); - -String msgNodeName = getNodeName(); -String msgCore = descriptor.getName(); - -if (msgNodeName.equals(nodeName) && core.equals(msgCore)) { - descriptor.getCloudDescriptor() - .setCoreNodeName(replica.getName()); - getCoreContainer().getCoresLocator().persist(getCoreContainer(), descriptor); - return; -} - } +log.debug("waitForCoreNodeName >>> look for our core node name"); +try { + zkStateReader.waitForState(descriptor.getCollectionName(), 320, TimeUnit.SECONDS, c -> { +String name = ClusterStateMutator.getAssignedCoreNodeName(c, getNodeName(), descriptor.getName()); +if (name == null) { + return false; } - } - try { -Thread.sleep(1000); - } catch (InterruptedException e) { -Thread.currentThread().interrupt(); - } +descriptor.getCloudDescriptor().setCoreNodeName(name); Review comment: Do we need any synchronization, since this will now be running on a different thread? This is an automated message from the Apache Git Service. 
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits
tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits URL: https://github.com/apache/lucene-solr/pull/1297#discussion_r385437194 ## File path: solr/core/src/java/org/apache/solr/cloud/api/collections/OverseerCollectionMessageHandler.java ## @@ -526,57 +518,31 @@ static UpdateResponse softCommit(String url) throws SolrServerException, IOExcep } String waitForCoreNodeName(String collectionName, String msgNodeName, String msgCore) { -int retryCount = 320; -while (retryCount-- > 0) { - final DocCollection docCollection = zkStateReader.getClusterState().getCollectionOrNull(collectionName); - if (docCollection != null && docCollection.getSlicesMap() != null) { -Map slicesMap = docCollection.getSlicesMap(); -for (Slice slice : slicesMap.values()) { - for (Replica replica : slice.getReplicas()) { -// TODO: for really large clusters, we could 'index' on this - -String nodeName = replica.getStr(ZkStateReader.NODE_NAME_PROP); -String core = replica.getStr(ZkStateReader.CORE_NAME_PROP); - -if (nodeName.equals(msgNodeName) && core.equals(msgCore)) { - return replica.getName(); -} - } +AtomicReference coreNodeName = new AtomicReference<>(); +try { + zkStateReader.waitForState(collectionName, 320, TimeUnit.SECONDS, c -> { +String name = ClusterStateMutator.getAssignedCoreNodeName(c, msgNodeName, msgCore); +if (name == null) { + return false; } - } - try { -Thread.sleep(1000); - } catch (InterruptedException e) { -Thread.currentThread().interrupt(); - } +coreNodeName.set(name); +return true; + }); +} catch (TimeoutException | InterruptedException e) { + throw new SolrException(ErrorCode.SERVER_ERROR, "Timeout waiting for collection state", e); } -throw new SolrException(ErrorCode.SERVER_ERROR, "Could not find coreNodeName"); +return coreNodeName.get(); } - ClusterState waitForNewShard(String collectionName, String sliceName) throws KeeperException, InterruptedException { + ClusterState waitForNewShard(String collectionName, 
String sliceName) { log.debug("Waiting for slice {} of collection {} to be available", sliceName, collectionName); -RTimer timer = new RTimer(); -int retryCount = 320; -while (retryCount-- > 0) { - ClusterState clusterState = zkStateReader.getClusterState(); - DocCollection collection = clusterState.getCollection(collectionName); - - if (collection == null) { -throw new SolrException(ErrorCode.SERVER_ERROR, -"Unable to find collection: " + collectionName + " in clusterstate"); - } - Slice slice = collection.getSlice(sliceName); - if (slice != null) { -log.debug("Waited for {}ms for slice {} of collection {} to be available", -timer.getTime(), sliceName, collectionName); -return clusterState; - } - Thread.sleep(1000); +try { + zkStateReader.waitForState(collectionName, 320, TimeUnit.SECONDS, c -> c != null && c.getSlice(sliceName) != null); +} catch (TimeoutException | InterruptedException e) { + throw new SolrException(ErrorCode.SERVER_ERROR, "Timeout waiting for new slice", e); Review comment: interruption? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits
tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits URL: https://github.com/apache/lucene-solr/pull/1297#discussion_r385434650 ## File path: solr/core/src/java/org/apache/solr/cloud/ZkController.java ## @@ -1684,58 +1685,37 @@ private void doGetShardIdAndNodeNameProcess(CoreDescriptor cd) { } private void waitForCoreNodeName(CoreDescriptor descriptor) { -int retryCount = 320; -log.debug("look for our core node name"); -while (retryCount-- > 0) { - final DocCollection docCollection = zkStateReader.getClusterState() - .getCollectionOrNull(descriptor.getCloudDescriptor().getCollectionName()); - if (docCollection != null && docCollection.getSlicesMap() != null) { -final Map slicesMap = docCollection.getSlicesMap(); -for (Slice slice : slicesMap.values()) { - for (Replica replica : slice.getReplicas()) { -// TODO: for really large clusters, we could 'index' on this - -String nodeName = replica.getStr(ZkStateReader.NODE_NAME_PROP); -String core = replica.getStr(ZkStateReader.CORE_NAME_PROP); - -String msgNodeName = getNodeName(); -String msgCore = descriptor.getName(); - -if (msgNodeName.equals(nodeName) && core.equals(msgCore)) { - descriptor.getCloudDescriptor() - .setCoreNodeName(replica.getName()); - getCoreContainer().getCoresLocator().persist(getCoreContainer(), descriptor); - return; -} - } +log.debug("waitForCoreNodeName >>> look for our core node name"); +try { + zkStateReader.waitForState(descriptor.getCollectionName(), 320, TimeUnit.SECONDS, c -> { +String name = ClusterStateMutator.getAssignedCoreNodeName(c, getNodeName(), descriptor.getName()); +if (name == null) { + return false; } - } - try { -Thread.sleep(1000); - } catch (InterruptedException e) { -Thread.currentThread().interrupt(); - } +descriptor.getCloudDescriptor().setCoreNodeName(name); +return true; + }); +} catch (TimeoutException | InterruptedException e) { + throw new SolrException(ErrorCode.SERVER_ERROR, "Timeout waiting for collection 
state", e); Review comment: We still want to reset interruption, right? Also, am I reading right? before we'd just continue running normally even after a "timeout", while now we throw an exception. Sounds like a good change, but that's intended, right? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
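[Editor's note] The catch block the reviewer is asking for above looks roughly like this: when `InterruptedException` is handled (here split out of the multi-catch), the interrupt flag must be restored before wrapping, otherwise callers up the stack cannot tell the thread was interrupted. The names below loosely mirror the patch but are a sketch, not the Solr code:

```java
import java.util.concurrent.TimeoutException;

public class InterruptAwareWait {
    public interface BlockingTask { void run() throws TimeoutException, InterruptedException; }

    public static void await(BlockingTask task) {
        try {
            task.run();
        } catch (TimeoutException e) {
            throw new IllegalStateException("Timeout waiting for collection state", e);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  // restore interrupt status first
            throw new IllegalStateException("Interrupted waiting for collection state", e);
        }
    }
}
```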
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits
tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits URL: https://github.com/apache/lucene-solr/pull/1297#discussion_r385436558 ## File path: solr/core/src/java/org/apache/solr/cloud/api/collections/OverseerCollectionMessageHandler.java ## @@ -471,29 +471,21 @@ void checkResults(String label, NamedList results, boolean failureIsFata private void migrateStateFormat(ClusterState state, ZkNodeProps message, NamedList results) throws Exception { final String collectionName = message.getStr(COLLECTION_PROP); -boolean firstLoop = true; -// wait for a while until the state format changes -TimeOut timeout = new TimeOut(30, TimeUnit.SECONDS, timeSource); -while (! timeout.hasTimedOut()) { - DocCollection collection = zkStateReader.getClusterState().getCollection(collectionName); - if (collection == null) { -throw new SolrException(ErrorCode.BAD_REQUEST, "Collection: " + collectionName + " not found"); - } - if (collection.getStateFormat() == 2) { -// Done. -results.add("success", new SimpleOrderedMap<>()); -return; - } +ZkNodeProps m = new ZkNodeProps(Overseer.QUEUE_OPERATION, MIGRATESTATEFORMAT.toLower(), COLLECTION_PROP, collectionName); +overseer.offerStateUpdate(Utils.toJSON(m)); - if (firstLoop) { -// Actually queue the migration command. -firstLoop = false; -ZkNodeProps m = new ZkNodeProps(Overseer.QUEUE_OPERATION, MIGRATESTATEFORMAT.toLower(), COLLECTION_PROP, collectionName); -overseer.offerStateUpdate(Utils.toJSON(m)); - } - timeout.sleep(100); +try { + zkStateReader.waitForState(collectionName, 30, TimeUnit.SECONDS, c -> { Review comment: same comment/questions as with the other methods This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits
tflobbe commented on a change in pull request #1297: SOLR-14253 Replace various sleep calls with ZK waits URL: https://github.com/apache/lucene-solr/pull/1297#discussion_r385438210 ## File path: solr/core/src/java/org/apache/solr/cloud/api/collections/OverseerCollectionMessageHandler.java ## @@ -526,57 +518,31 @@ static UpdateResponse softCommit(String url) throws SolrServerException, IOExcep } String waitForCoreNodeName(String collectionName, String msgNodeName, String msgCore) { -int retryCount = 320; -while (retryCount-- > 0) { - final DocCollection docCollection = zkStateReader.getClusterState().getCollectionOrNull(collectionName); - if (docCollection != null && docCollection.getSlicesMap() != null) { -Map slicesMap = docCollection.getSlicesMap(); -for (Slice slice : slicesMap.values()) { - for (Replica replica : slice.getReplicas()) { -// TODO: for really large clusters, we could 'index' on this - -String nodeName = replica.getStr(ZkStateReader.NODE_NAME_PROP); -String core = replica.getStr(ZkStateReader.CORE_NAME_PROP); - -if (nodeName.equals(msgNodeName) && core.equals(msgCore)) { - return replica.getName(); -} - } +AtomicReference coreNodeName = new AtomicReference<>(); +try { + zkStateReader.waitForState(collectionName, 320, TimeUnit.SECONDS, c -> { +String name = ClusterStateMutator.getAssignedCoreNodeName(c, msgNodeName, msgCore); +if (name == null) { + return false; } - } - try { -Thread.sleep(1000); - } catch (InterruptedException e) { -Thread.currentThread().interrupt(); - } +coreNodeName.set(name); +return true; + }); +} catch (TimeoutException | InterruptedException e) { + throw new SolrException(ErrorCode.SERVER_ERROR, "Timeout waiting for collection state", e); } -throw new SolrException(ErrorCode.SERVER_ERROR, "Could not find coreNodeName"); +return coreNodeName.get(); } - ClusterState waitForNewShard(String collectionName, String sliceName) throws KeeperException, InterruptedException { + ClusterState waitForNewShard(String collectionName, 
String sliceName) { log.debug("Waiting for slice {} of collection {} to be available", sliceName, collectionName); -RTimer timer = new RTimer(); -int retryCount = 320; -while (retryCount-- > 0) { - ClusterState clusterState = zkStateReader.getClusterState(); - DocCollection collection = clusterState.getCollection(collectionName); - - if (collection == null) { -throw new SolrException(ErrorCode.SERVER_ERROR, -"Unable to find collection: " + collectionName + " in clusterstate"); - } - Slice slice = collection.getSlice(sliceName); - if (slice != null) { -log.debug("Waited for {}ms for slice {} of collection {} to be available", -timer.getTime(), sliceName, collectionName); -return clusterState; - } - Thread.sleep(1000); +try { + zkStateReader.waitForState(collectionName, 320, TimeUnit.SECONDS, c -> c != null && c.getSlice(sliceName) != null); +} catch (TimeoutException | InterruptedException e) { + throw new SolrException(ErrorCode.SERVER_ERROR, "Timeout waiting for new slice", e); } -throw new SolrException(ErrorCode.SERVER_ERROR, -"Could not find new slice " + sliceName + " in collection " + collectionName -+ " even after waiting for " + timer.getTime() + "ms" -); +// nocommit is there a race condition here since we're not returning the same clusterstate we inspected? Review comment: Isn't that the case with most of this methods? While the predicate is being executed for example, there is no watch in ZooKeeper AFAICT, unless we go back and write in ZooKeeper and use the version. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
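The race noted in the review (the predicate runs against a snapshot, so the state visible afterwards may already be newer than the one that satisfied the predicate) is inherent to any check-then-wait design. A generic, self-contained sketch of the pattern, using illustrative names rather than Solr's actual `waitForState` API:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Predicate;

// Generic check-then-wait helper. The predicate is evaluated once up front and
// again on every published update, so a change that lands between checks is
// not missed; but the state returned to the caller may be newer than the one
// that first satisfied the predicate, which is the race discussed above.
public class StateWaiter<T> {
    private T state;

    public synchronized void publish(T newState) {
        state = newState;
        notifyAll(); // wake waiters so they re-run their predicates
    }

    public synchronized T waitFor(Predicate<T> predicate, long timeout, TimeUnit unit)
            throws InterruptedException, TimeoutException {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        while (!predicate.test(state)) { // check first, then wait
            long remainingNanos = deadline - System.nanoTime();
            if (remainingNanos <= 0) {
                throw new TimeoutException("predicate not satisfied in time");
            }
            TimeUnit.NANOSECONDS.timedWait(this, remainingNanos);
        }
        return state; // possibly a later state than the one the predicate saw
    }

    public static void main(String[] args) throws Exception {
        StateWaiter<String> waiter = new StateWaiter<>();
        new Thread(() -> waiter.publish("active")).start();
        System.out.println(waiter.waitFor("active"::equals, 5, TimeUnit.SECONDS));
    }
}
```

Solr's ZkStateReader layers ZooKeeper watches on top of the same shape, so the return-value caveat in the nocommit comment appears to apply there as well.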
[GitHub] [lucene-solr] andyvuong commented on issue #1223: SOLR-14213: Configuring Solr Cloud to use Shared Storage
andyvuong commented on issue #1223: SOLR-14213: Configuring Solr Cloud to use Shared Storage URL: https://github.com/apache/lucene-solr/pull/1223#issuecomment-592242831 cc @yonik can you merge when you get a chance. Thanks
[GitHub] [lucene-solr] dsmiley commented on a change in pull request #1191: SOLR-14197 Reduce API of SolrResourceLoader
dsmiley commented on a change in pull request #1191: SOLR-14197 Reduce API of SolrResourceLoader URL: https://github.com/apache/lucene-solr/pull/1191#discussion_r385443497 ## File path: solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java ## @@ -104,7 +104,7 @@ private RestManager.Registry managedResourceRegistry; /** @see #reloadLuceneSPI() */ - private boolean needToReloadLuceneSPI = false; + private boolean needToReloadLuceneSPI = false; // requires synchronization Review comment: Yeah I did... but then I found it difficult to reason about the sequence of when it's set to true/false... vs when the actual work is done and so I instead made both methods that touch it synchronized to be clearer/safer.
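The design choice described here (synchronizing both methods that touch the flag instead of reasoning about a bare boolean) can be sketched as follows. The names are illustrative, not the actual SolrResourceLoader code:

```java
// With both methods synchronized on the same monitor, the set-flag step and
// the do-work-then-clear-flag step can never interleave, so the flag cannot
// be lost or the work run twice for one flag flip.
public class SpiReloadSketch {
    private boolean needToReload = false; // guarded by "this"
    private int reloadCount = 0;

    public synchronized void addToClasspath() {
        needToReload = true; // new jars may carry new SPI implementations
    }

    public synchronized void reloadIfNeeded() {
        if (needToReload) {
            reloadCount++;      // stands in for the actual SPI reload work
            needToReload = false;
        }
    }

    public synchronized int reloads() {
        return reloadCount;
    }
}
```

A `volatile` flag would make the reads and writes visible across threads, but it cannot make "check the flag, do the work, clear the flag" atomic, which is the sequencing difficulty the comment describes.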
[jira] [Commented] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17047129#comment-17047129 ] ASF subversion and git services commented on SOLR-14286: Commit 3ad9915547874e713e6da71b6e3e1cb86dab8158 in lucene-solr's branch refs/heads/master from Cao Manh Dat [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=3ad9915 ] SOLR-14286: Try and fix sha1 file. > Upgrade Jaegar to 1.1.0 > --- > > Key: SOLR-14286 > URL: https://issues.apache.org/jira/browse/SOLR-14286 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) > Reporter: Cao Manh Dat > Assignee: Cao Manh Dat > Priority: Major > Fix For: master (9.0), 8.5 > > > Rohit Singh pointed out to me that we are using thrift 0.12.0 (in > the JaegarTracer-Configurator module), which has several security issues. We > should upgrade to Jaeger 1.1.0, which is compatible with the current version we > are using. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17047130#comment-17047130 ] ASF subversion and git services commented on SOLR-14286: Commit f2ac34373f95a8f886f9b325ca408c9d6002d84c in lucene-solr's branch refs/heads/branch_8x from Cao Manh Dat [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=f2ac343 ] SOLR-14286: Try and fix sha1 file.
[jira] [Commented] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17047131#comment-17047131 ] Cao Manh Dat commented on SOLR-14286: - Hoping the above fixes solved the problem. The only difference between "jeger-thrift-1.1.0.jar.sha1" and the above files is a newline at the end of the sha file. This problem seems to have happened before: https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ded726a I think we need to write out all the steps needed to upgrade a library; there are several mistakes that can easily be made.
[jira] [Comment Edited] (SOLR-14286) Upgrade Jaegar to 1.1.0
[ https://issues.apache.org/jira/browse/SOLR-14286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17047131#comment-17047131 ] Cao Manh Dat edited comment on SOLR-14286 at 2/28/20 1:45 AM: -- Hoping the above fixes solved the problem. The only difference between "jeger-thrift-1.1.0.jar.sha1" and the above files is a newline at the end of the sha file. This problem seems to have happened before: https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ded726a I think we need to write out all the steps needed to upgrade a library; there are several mistakes that can easily be made. The above error could not be detected during precommit.
[GitHub] [lucene-solr] yonik merged pull request #1223: SOLR-14213: Configuring Solr Cloud to use Shared Storage
yonik merged pull request #1223: SOLR-14213: Configuring Solr Cloud to use Shared Storage URL: https://github.com/apache/lucene-solr/pull/1223
[jira] [Commented] (SOLR-14213) Configuring Solr Cloud to use Shared Storage
[ https://issues.apache.org/jira/browse/SOLR-14213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17047144#comment-17047144 ] ASF subversion and git services commented on SOLR-14213: Commit d46803f4fe9844f5e09f7b7d4548457e446933f0 in lucene-solr's branch refs/heads/jira/SOLR-13101 from Andy Vuong [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d46803f ] SOLR-14213: Configuring Solr Cloud to use Shared Storage (#1223) * Add SharedStoreConfig for initiating shared store support and refactor tests setup * Add missing condition * Fix test failure * Initialize fields in constructor and fix tests * load shared store manager vs corecontainer * Undo change > Configuring Solr Cloud to use Shared Storage > > > Key: SOLR-14213 > URL: https://issues.apache.org/jira/browse/SOLR-14213 > Project: Solr > Issue Type: Sub-task > Components: SolrCloud >Reporter: Andy Vuong >Priority: Minor > Time Spent: 1.5h > Remaining Estimate: 0h > > Clients can currently create shared collections by sending a collection > admin command such as > *_solr/admin/collections?action=CREATE&name=gettingstarted&sharedIndex=true&numShards=1_* > > There are a set of shared storage specific classes such as > SharedStorageManager that get initialized on startup when the CoreContainer > loads. There are also components that are lazily loaded when shared storage > functionality is needed. This was initially written this way because a Solr > Cloud cluster could spin up and not used shared collections in which case > shared store components wouldn’t need to be loaded. There is also no support > for configuring Solr Cloud to use shared storage via config files. Lazy > loading leads to some poor code and initialization flow that should be > revisited. > This JIRA is for designing the configuration of Solr Cloud to use shared > storage and initializing shared storage components based on this. 
[jira] [Commented] (SOLR-14283) Fix NPE in SolrTestCaseJ4 preventing it from working on external projects
[ https://issues.apache.org/jira/browse/SOLR-14283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17047163#comment-17047163 ] ASF subversion and git services commented on SOLR-14283: Commit 9c9a69c643243caee119666c50434b1cf485d0ca in lucene-solr's branch refs/heads/branch_8x from Gus Heck [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9c9a69c ] SOLR-14283 - fix NPE in SolrTestCaseJ4 > Fix NPE in SolrTestCaseJ4 preventing it from working on external projects > - > > Key: SOLR-14283 > URL: https://issues.apache.org/jira/browse/SOLR-14283 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Affects Versions: 8.5 >Reporter: Gus Heck >Assignee: Gus Heck >Priority: Blocker > Fix For: 8.5 > > Attachments: SOLR-14283.patch > > > Though it's goals were laudable, SOLR-14217 ran afoul of an untested > requirement that the SolrTestCaseJ4 class continue to function even if > ExternalPaths#SOURCE_HOME has been set to null by the logic in > org.apache.solr.util.ExternalPaths#determineSourceHome > The result is that in any almost all usages of the solr test framework > outside of Solr that rely on SolrTestCaseJ4 or it's sub-classes including > SolrCloudTestCase the following exception would be thrown: > {code} > java.lang.NullPointerException > at __randomizedtesting.SeedInfo.seed([7A1D202DDAEA1C07]:0) > at > java.base/java.util.concurrent.ConcurrentHashMap.putVal(ConcurrentHashMap.java:1011) > at > java.base/java.util.concurrent.ConcurrentHashMap.put(ConcurrentHashMap.java:1006) > at java.base/java.util.Properties.put(Properties.java:1337) > at java.base/java.util.Properties.setProperty(Properties.java:225) > at java.base/java.lang.System.setProperty(System.java:895) > at > org.apache.solr.SolrTestCaseJ4.setupTestCases(SolrTestCaseJ4.java:284) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > at > com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) > at > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) > at > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) > at > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) > at > org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) > at > org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) > at > org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) > at > 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) > at java.base/java.lang.Thread.run(Thread.java:834) > {code} > This ticket will fix the issue and provide a test so that we don't break > nearly every user of the test framework other than ourselves in the future.
[jira] [Created] (SOLR-14290) Fix NPE in SolrTestCaseJ4 breaking external usage for master/9.x
Gus Heck created SOLR-14290: --- Summary: Fix NPE in SolrTestCaseJ4 breaking external usage for master/9.x Key: SOLR-14290 URL: https://issues.apache.org/jira/browse/SOLR-14290 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: Tests Affects Versions: master (9.0) Environment: Solr Test Framework when run externally such that ExternalPaths.SOURCE_HOME is null Reporter: Gus Heck Assignee: Gus Heck A fix for this was provided on branch_8x in SOLR-14283, but that same fix won't work in Java 11 due to reflection restrictions that make it impossible (AFAIK) to un-final a final variable. We will likely need to employ PowerMock or our own Java-agent-based solution, or redesign the way ExternalPaths.determineSourceHome and ExternalPaths.SOURCE_HOME work. 8.5 is coming up soon and we can't release with this broken in 8.5, so I'm separating the more complicated 9.x fix into this ticket.
[jira] [Resolved] (SOLR-14283) Fix NPE in SolrTestCaseJ4 preventing it from working on external projects
[ https://issues.apache.org/jira/browse/SOLR-14283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gus Heck resolved SOLR-14283. - Resolution: Fixed
[GitHub] [lucene-solr] irvingzhang commented on issue #1295: Lucene-9004: bug fix for searching the nearest one neighbor in higher layers
irvingzhang commented on issue #1295: Lucene-9004: bug fix for searching the nearest one neighbor in higher layers URL: https://github.com/apache/lucene-solr/pull/1295#issuecomment-592287771 > I believe in practice that results' max size is always set to ef, so there shouldn't be any real issue. I agree that the interface doesn't make that plain; we should enforce this invariant by API contract I agree that the max size is always set to _ef_. According to **Algorithm 5** of the [paper](https://arxiv.org/pdf/1603.09320.pdf), HNSW searches for the nearest one (namely, _ef_=1) neighbor from the top layer down to the 1st layer, and then finds the nearest _ef_ (_ef_=topK) neighbors in layer 0. In the Lucene HNSW implementation, the actual size of the result queue (Line 63, [HNSWGraphReader](https://github.com/apache/lucene-solr/blob/jira/lucene-9004-aknn-2/lucene/core/src/java/org/apache/lucene/util/hnsw/HNSWGraphReader.java)) is set to _ef_=topK when searching from the top layer down to the 1st layer, resulting in finding more neighbors than expected. Even if the parameter _ef_ is set to 1 in Line 66, [HNSWGraphReader](https://github.com/apache/lucene-solr/blob/jira/lucene-9004-aknn-2/lucene/core/src/java/org/apache/lucene/util/hnsw/HNSWGraphReader.java), the code `if (dist < f.distance() || results.size() < ef)` (Line 87, [HNSWGraph](https://github.com/apache/lucene-solr/blob/jira/lucene-9004-aknn-2/lucene/core/src/java/org/apache/lucene/util/hnsw/HNSWGraph.java)) allows inserting more than 1 neighbor into the "results" when `dist < f.distance()`, because the max size of "results" is _ef_=topK. The simplest way to check this problem is to print the actual size of neighbors. For example, add "System.out.println(neighbors.size());" after "visitedCount += hnsw.searchLayer(query, neighbors, 1, l, vectorValues);" (Line 66, [HNSWGraphReader](https://github.com/apache/lucene-solr/blob/jira/lucene-9004-aknn-2/lucene/core/src/java/org/apache/lucene/util/hnsw/HNSWGraphReader.java)), where the nearest one neighbor is expected, but the actual neighbor size would range from 1 to topK. This also applies to [HNSWGraphWriter](https://github.com/apache/lucene-solr/blob/jira/lucene-9004-aknn-2/lucene/core/src/java/org/apache/lucene/util/hnsw/HNSWGraphWriter.java).
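The behavior described in this comment can be reproduced with a toy version of the quoted insertion condition. This is an illustrative sketch, not Lucene's actual HNSWGraph code: the queue's capacity is fixed at topK, so with ef=1 the condition `dist < worst || results.size() < ef` still admits every candidate that beats the current worst distance.

```java
import java.util.PriorityQueue;

// Toy reconstruction of the insertion condition quoted above; the real code
// lives in HNSWGraph, and these names are illustrative.
public class EfSketch {
    // far-first queue: the head is the worst (largest) distance seen so far
    static int searchLayer(float[] candidateDists, int ef, int capacity) {
        PriorityQueue<Float> results = new PriorityQueue<>((a, b) -> Float.compare(b, a));
        for (float dist : candidateDists) {
            float worst = results.isEmpty() ? Float.MAX_VALUE : results.peek();
            if (dist < worst || results.size() < ef) {
                results.add(dist);
                if (results.size() > capacity) {
                    results.poll(); // evict the worst only past *capacity*, not past ef
                }
            }
        }
        return results.size();
    }

    public static void main(String[] args) {
        // strictly improving candidates: each one beats the current worst
        float[] dists = {0.9f, 0.5f, 0.3f, 0.1f};
        // ef=1 was intended to keep only the single nearest neighbor...
        System.out.println(searchLayer(dists, 1, 4)); // capacity topK=4: all 4 admitted
        System.out.println(searchLayer(dists, 1, 1)); // capacity 1: only 1 survives
    }
}
```

With capacity topK, each improving candidate passes the `dist < worst` half of the condition regardless of ef, which is the bug the PR addresses by sizing the queue to ef for the upper layers.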
[GitHub] [lucene-solr] irvingzhang edited a comment on issue #1295: Lucene-9004: bug fix for searching the nearest one neighbor in higher layers
irvingzhang edited a comment on issue #1295: Lucene-9004: bug fix for searching the nearest one neighbor in higher layers URL: https://github.com/apache/lucene-solr/pull/1295#issuecomment-592287771 > I believe in practice that results. max size is always set to ef, so there shouldn't be any real issue. I agree that the interface doesn't make that plain; we should enforce this invariant by API contract I agree that the max size is always set to _ef_, but _ef_ has different values in different layers. According to **Algorithm 5** of [papar](https://arxiv.org/pdf/1603.09320.pdf), HNSW searches the nearest one (namely, _ef_=1) neighbor from top layer to the 1st layer, and then finds the nearest _ef_ (_ef_=topK) neighbors from layer 0. In the implementation of Lucene HNSW, the actual size of result queue (Line 63, [HNSWGraphReader](https://github.com/apache/lucene-solr/blob/jira/lucene-9004-aknn-2/lucene/core/src/java/org/apache/lucene/util/hnsw/HNSWGraphReader.java)) is set to _ef_=topK when searching from top layer to the 1st layer, result in finding more neighbors than expected. Even if the parameter _ef_ is set to 1 in Line 66, [HNSWGraphReader](https://github.com/apache/lucene-solr/blob/jira/lucene-9004-aknn-2/lucene/core/src/java/org/apache/lucene/util/hnsw/HNSWGraphReader.java), the code `if (dist < f.distance() || results.size() < ef)` (Line 87, [HNSWGraph](https://github.com/apache/lucene-solr/blob/jira/lucene-9004-aknn-2/lucene/core/src/java/org/apache/lucene/util/hnsw/HNSWGraph.java)) allows insert more than 1 neighbors to the "results" when `dist < f.distance()` because the max size of "results" is _ef_=topK. The simplest way to check this problem is to print the actual size of neighbors. 
For example, add "System.out.println(neighbors.size());" after "visitedCount += hnsw.searchLayer(query, neighbors, 1, l, vectorValues);" (Line 66, [HNSWGraphReader](https://github.com/apache/lucene-solr/blob/jira/lucene-9004-aknn-2/lucene/core/src/java/org/apache/lucene/util/hnsw/HNSWGraphReader.java)), where the nearest one neighbor is expected, but the actual neighbor size would be range from 1~topK. Which also applies to [HNSWGraphWriter](https://github.com/apache/lucene-solr/blob/jira/lucene-9004-aknn-2/lucene/core/src/java/org/apache/lucene/util/hnsw/HNSWGraphWriter.java). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] irvingzhang edited a comment on issue #1295: Lucene-9004: bug fix for searching the nearest one neighbor in higher layers
irvingzhang edited a comment on issue #1295: Lucene-9004: bug fix for searching the nearest one neighbor in higher layers URL: https://github.com/apache/lucene-solr/pull/1295#issuecomment-592287771 > I believe in practice that results' max size is always set to ef, so there shouldn't be any real issue. I agree that the interface doesn't make that plain; we should enforce this invariant by API contract I agree that the max size is always set to _ef_, but _ef_ has different values in different layers. According to **Algorithm 5** of the [paper](https://arxiv.org/pdf/1603.09320.pdf), HNSW searches for the nearest one (namely, _ef_=1) neighbor from the top layer down to the 1st layer, and then finds the nearest _ef_ (_ef_=topK) neighbors in layer 0. In the Lucene HNSW implementation, the actual max size of the result queue (Line 64, [HNSWGraphReader](https://github.com/apache/lucene-solr/blob/jira/lucene-9004-aknn-2/lucene/core/src/java/org/apache/lucene/util/hnsw/HNSWGraphReader.java)) is set to _ef_=topK when searching from the top layer down to the 1st layer, resulting in finding more neighbors than expected. Even if the parameter _ef_ is set to 1 in Line 66, [HNSWGraphReader](https://github.com/apache/lucene-solr/blob/jira/lucene-9004-aknn-2/lucene/core/src/java/org/apache/lucene/util/hnsw/HNSWGraphReader.java), the code `if (dist < f.distance() || results.size() < ef)` (Line 87, [HNSWGraph](https://github.com/apache/lucene-solr/blob/jira/lucene-9004-aknn-2/lucene/core/src/java/org/apache/lucene/util/hnsw/HNSWGraph.java)) allows inserting more than 1 neighbor into "results" when `dist < f.distance()` but `results.size() >= ef` (here _ef_=1, corresponding to Line 66, [HNSWGraphReader](https://github.com/apache/lucene-solr/blob/jira/lucene-9004-aknn-2/lucene/core/src/java/org/apache/lucene/util/hnsw/HNSWGraphReader.java)), because the max size of "results" is topK, which implies that the actual size of "results" lies in the range [1, topK].
The simplest way to check this problem is to print the actual size of neighbors. For example, add `System.out.println(neighbors.size());` after `visitedCount += hnsw.searchLayer(query, neighbors, 1, l, vectorValues);` (Line 66, [HNSWGraphReader](https://github.com/apache/lucene-solr/blob/jira/lucene-9004-aknn-2/lucene/core/src/java/org/apache/lucene/util/hnsw/HNSWGraphReader.java)), where only the nearest one neighbor is expected, but the actual neighbor count ranges from 1 to topK. The same applies to [HNSWGraphWriter](https://github.com/apache/lucene-solr/blob/jira/lucene-9004-aknn-2/lucene/core/src/java/org/apache/lucene/util/hnsw/HNSWGraphWriter.java).
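The effect of the quoted condition can be reproduced outside Lucene. The following is a minimal, hypothetical sketch (class and method names are invented, and a plain `PriorityQueue` stands in for Lucene's internal furthest-first queue): it applies the insertion rule `dist < furthest || results.size() < ef` with _ef_=1 while the queue's capacity is topK, and shows that more than one neighbor accumulates.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Hypothetical sketch, not the actual Lucene classes: simulates the insertion
// rule from HNSWGraph so that the ef-vs-capacity mismatch is observable.
public class EfBugSketch {
  // Furthest-first queue of distances: head is the furthest result from the query.
  static PriorityQueue<Double> searchLayerSketch(double[] candidateDists, int ef, int capacity) {
    PriorityQueue<Double> results = new PriorityQueue<>(Comparator.reverseOrder());
    for (double dist : candidateDists) {
      double furthest = results.isEmpty() ? Double.POSITIVE_INFINITY : results.peek();
      // The condition under discussion: a candidate is admitted whenever it is
      // closer than the current furthest result, even once results.size() >= ef.
      if (dist < furthest || results.size() < ef) {
        results.add(dist);
        if (results.size() > capacity) { // eviction is bounded by capacity (topK), not ef
          results.poll();                // evict the furthest result
        }
      }
    }
    return results;
  }

  public static void main(String[] args) {
    // ef = 1: only the single nearest neighbor is wanted in this layer,
    // but the queue capacity was sized to topK = 5.
    double[] dists = {0.9, 0.7, 0.5, 0.3}; // each candidate closer than the last
    System.out.println(searchLayerSketch(dists, 1, 5).size()); // prints 4
    // With the capacity tied to ef, as the fix proposes, only 1 survives:
    System.out.println(searchLayerSketch(dists, 1, 1).size()); // prints 1
  }
}
```

In other words, every improving candidate is kept because eviction only triggers at topK; capping the queue at _ef_ restores the "nearest one neighbor" behavior of Algorithm 5.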
[jira] [Updated] (SOLR-13411) CompositeIdRouter calculates wrong route hash if atomic update is used for route.field
[ https://issues.apache.org/jira/browse/SOLR-13411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13411: Resolution: Fixed Status: Resolved (was: Patch Available) Thanks [~osavrasov] > CompositeIdRouter calculates wrong route hash if atomic update is used for > route.field > -- > > Key: SOLR-13411 > URL: https://issues.apache.org/jira/browse/SOLR-13411 > Project: Solr > Issue Type: Bug > Components: SolrCloud >Affects Versions: 7.5 >Reporter: Niko Himanen >Assignee: Mikhail Khludnev >Priority: Minor > Fix For: 8.5 > > Attachments: SOLR-13411.patch, SOLR-13411.patch > > > If a collection is created with the router.field parameter to define some other > field than uniqueField as the route field, and a document update comes in with the > route field updated using atomic update syntax (for example set=123), the hash > for document routing is calculated from "set=123" and not from the real value 123, > which may lead to routing the document to the wrong shard. > > This happens in CompositeIdRouter#sliceHash, where the field value is used as-is > for the hash calculation. > > I think there are two possible solutions to fix this: > a) Allow use of atomic update also for route.field, but use the real value > instead of the atomic update syntax to route the document to the right shard. > b) Deny atomic update for route.field and throw an exception. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
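The misrouting described in the issue can be sketched with a toy example (all names below are invented for illustration and are not Solr's actual code; Solr's router uses a murmur3 hash internally, while `String.hashCode` stands in here): hashing the raw atomic-update payload instead of the real field value generally yields a different route hash, so the document can land on a different shard.

```java
// Hypothetical sketch of the SOLR-13411 failure mode: if sliceHash receives
// the atomic-update payload rather than the real route.field value, the two
// strings hash differently and shard assignment diverges.
public class RouteHashSketch {
  // Stand-in for the router's hash-to-shard mapping.
  static int shardFor(String routeValue, int numShards) {
    return Math.floorMod(routeValue.hashCode(), numShards);
  }

  public static void main(String[] args) {
    String realValue = "123";           // what route.field actually holds
    String atomicPayload = "{set=123}"; // what the router effectively saw
    System.out.println(shardFor(realValue, 4));
    System.out.println(shardFor(atomicPayload, 4));
    // The two shard ids differ for these inputs: the update is misrouted.
  }
}
```

Fix (a) from the issue corresponds to unwrapping the payload and calling `shardFor(realValue, …)`; fix (b) corresponds to rejecting the update before hashing.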
[GitHub] [lucene-solr] iverase commented on issue #1289: LUCENE-9250: Add support for Circle2d#intersectsLine around the dateline.
iverase commented on issue #1289: LUCENE-9250: Add support for Circle2d#intersectsLine around the dateline. URL: https://github.com/apache/lucene-solr/pull/1289#issuecomment-592380092 I will push this shortly, as it is a leftover from the original commit for supporting distance queries over LatLonShapes.