[jira] [Created] (SOLR-13643) ResponseBuilder should provide accessors/setters for analytics response handling
Neal Sidhwaney created SOLR-13643: - Summary: ResponseBuilder should provide accessors/setters for analytics response handling Key: SOLR-13643 URL: https://issues.apache.org/jira/browse/SOLR-13643 Project: Solr Issue Type: Task Security Level: Public (Default Security Level. Issues are Public) Components: Response Writers Affects Versions: 8.1.1 Reporter: Neal Sidhwaney Right now inside o.a.s.h.c.AnalyticsComponent.java, fields inside ResponseBuilder are accessed directly. Since they're in the same package, this is OK at compile time. But when the Solr core and Analytics jars are loaded at runtime by Solr, they are loaded by different classloaders, which causes an IllegalAccessError during request handling. There must be something different about my setup which is why I am running into this, but it seems like a good idea to abstract away the fields behind setters/getters anyway. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
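A minimal sketch of the accessor idea, assuming an illustrative field name (the actual analytics-related fields on ResponseBuilder may differ):

{code:java}
// Sketch only: expose the field through public accessors so a component loaded by a
// different classloader no longer relies on package-private field access.
public class ResponseBuilder {
  // Illustrative field; the real analytics-related fields may have different names/types.
  private Object analyticsRequestManager;

  public Object getAnalyticsRequestManager() {
    return analyticsRequestManager;
  }

  public void setAnalyticsRequestManager(Object analyticsRequestManager) {
    this.analyticsRequestManager = analyticsRequestManager;
  }
}
{code}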
[jira] [Commented] (SOLR-13206) ArrayIndexOutOfBoundsException in org/apache/solr/request/SimpleFacets.java[705]
[ https://issues.apache.org/jira/browse/SOLR-13206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887648#comment-16887648 ] ASF subversion and git services commented on SOLR-13206: Commit 241c44a82d91aa50c2bec0e26f88126a2a7d436c in lucene-solr's branch refs/heads/branch_8x from Munendra S N [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=241c44a ] SOLR-13206: Fix AIOOBE when group.facet is specified with group.query group.facet is supported only for group.field. When group.facet is used with group.query, then return proper error code > ArrayIndexOutOfBoundsException in > org/apache/solr/request/SimpleFacets.java[705] > > > Key: SOLR-13206 > URL: https://issues.apache.org/jira/browse/SOLR-13206 > Project: Solr > Issue Type: Bug >Affects Versions: master (9.0) > Environment: h1. Steps to reproduce > * Use a Linux machine. > * Build commit {{ea2c8ba}} of Solr as described in the section below. > * Build the films collection as described below. > * Start the server using the command {{./bin/solr start -f -p 8983 -s > /tmp/home}} > * Request the URL given in the bug description. > h1. Compiling the server > {noformat} > git clone https://github.com/apache/lucene-solr > cd lucene-solr > git checkout ea2c8ba > ant compile > cd solr > ant server > {noformat} > h1. Building the collection and reproducing the bug > We followed [Exercise > 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from > the [Solr > Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. > {noformat} > mkdir -p /tmp/home > echo '' > > /tmp/home/solr.xml > {noformat} > In one terminal start a Solr instance in foreground: > {noformat} > ./bin/solr start -f -p 8983 -s /tmp/home > {noformat} > In another terminal, create a collection of movies, with no shards and no > replication, and initialize it: > {noformat} > bin/solr create -c films > curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": > {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' > http://localhost:8983/solr/films/schema > curl -X POST -H 'Content-type:application/json' --data-binary > '{"add-copy-field" : {"source":"*","dest":"_text_"}}' > http://localhost:8983/solr/films/schema > ./bin/post -c films example/films/films.json > curl -v "URL_BUG" > {noformat} > Please check the issue description below to find the "URL_BUG" that will > allow you to reproduce the issue reported. 
>Reporter: Marek >Priority: Minor > Labels: diffblue, newdev > > Requesting the following URL causes Solr to return an HTTP 500 error response: > {noformat} > http://localhost:8983/solr/films/select?group=true&group.func=genre&group.facet=true&facet.pivot=_version_&on&facet=true > {noformat} > The error response seems to be caused by the following uncaught exception: > {noformat} > ERROR (qtp689401025-21) [ x:films] o.a.s.h.RequestHandlerBase > java.lang.ArrayIndexOutOfBoundsException: 0 > at > org.apache.solr.request.SimpleFacets.getGroupedCounts(SimpleFacets.java:705) > at > org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:495) > at > org.apache.solr.request.SimpleFacets.getTermCountsForPivots(SimpleFacets.java:414) > at > org.apache.solr.handler.component.PivotFacetProcessor.processSingle(PivotFacetProcessor.java:221) > at > org.apache.solr.handler.component.PivotFacetProcessor.process(PivotFacetProcessor.java:169) > at > org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:279) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340) > [...] > {noformat} > The first element of an empty array of strings, stored in > the member 'org.apache.solr.search.grouping.GroupingSpecification.fields', is accessed. > There is an attempt to put some strings into the array at > org/apache/solr/handler/component/QueryComponent.java[283]; however, the > string "group.field" is not present in params of the processed > org.apache.solr.request.SolrQueryRequest instance. > The cause of the issue seems to be similar to one reported in SOLR-13204. > To set up an environment
[jira] [Resolved] (SOLR-13206) ArrayIndexOutOfBoundsException in org/apache/solr/request/SimpleFacets.java[705]
[ https://issues.apache.org/jira/browse/SOLR-13206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N resolved SOLR-13206. - Resolution: Fixed Assignee: Munendra S N Fix Version/s: 8.3 > ArrayIndexOutOfBoundsException in > org/apache/solr/request/SimpleFacets.java[705] > > > Key: SOLR-13206 > URL: https://issues.apache.org/jira/browse/SOLR-13206 > Project: Solr > Issue Type: Bug >Affects Versions: master (9.0) > Environment: h1. Steps to reproduce > * Use a Linux machine. > * Build commit {{ea2c8ba}} of Solr as described in the section below. > * Build the films collection as described below. > * Start the server using the command {{./bin/solr start -f -p 8983 -s > /tmp/home}} > * Request the URL given in the bug description. > h1. Compiling the server > {noformat} > git clone https://github.com/apache/lucene-solr > cd lucene-solr > git checkout ea2c8ba > ant compile > cd solr > ant server > {noformat} > h1. Building the collection and reproducing the bug > We followed [Exercise > 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from > the [Solr > Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. > {noformat} > mkdir -p /tmp/home > echo '' > > /tmp/home/solr.xml > {noformat} > In one terminal start a Solr instance in foreground: > {noformat} > ./bin/solr start -f -p 8983 -s /tmp/home > {noformat} > In another terminal, create a collection of movies, with no shards and no > replication, and initialize it: > {noformat} > bin/solr create -c films > curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": > {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' > http://localhost:8983/solr/films/schema > curl -X POST -H 'Content-type:application/json' --data-binary > '{"add-copy-field" : {"source":"*","dest":"_text_"}}' > http://localhost:8983/solr/films/schema > ./bin/post -c films example/films/films.json > curl -v "URL_BUG" > {noformat} > Please check the issue description below to find the "URL_BUG" that will > allow you to reproduce the issue reported. 
>Reporter: Marek >Assignee: Munendra S N >Priority: Minor > Labels: diffblue, newdev > Fix For: 8.3 > > > Requesting the following URL causes Solr to return an HTTP 500 error response: > {noformat} > http://localhost:8983/solr/films/select?group=true&group.func=genre&group.facet=true&facet.pivot=_version_&on&facet=true > {noformat} > The error response seems to be caused by the following uncaught exception: > {noformat} > ERROR (qtp689401025-21) [ x:films] o.a.s.h.RequestHandlerBase > java.lang.ArrayIndexOutOfBoundsException: 0 > at > org.apache.solr.request.SimpleFacets.getGroupedCounts(SimpleFacets.java:705) > at > org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:495) > at > org.apache.solr.request.SimpleFacets.getTermCountsForPivots(SimpleFacets.java:414) > at > org.apache.solr.handler.component.PivotFacetProcessor.processSingle(PivotFacetProcessor.java:221) > at > org.apache.solr.handler.component.PivotFacetProcessor.process(PivotFacetProcessor.java:169) > at > org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:279) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340) > [...] > {noformat} > The first element of an empty array of strings, stored in > the member 'org.apache.solr.search.grouping.GroupingSpecification.fields', is accessed. > There is an attempt to put some strings into the array at > org/apache/solr/handler/component/QueryComponent.java[283]; however, the > string "group.field" is not present in params of the processed > org.apache.solr.request.SolrQueryRequest instance. > The cause of the issue seems to be similar to one reported in SOLR-13204. > To set up an environment to reproduce this bug, follow the description in the > 'Environment' field. > We automatically found this issue and ~70 more like this using [Diffblue > Microservices Testing|https://www.diffblue.com/labs/?utm_source=solr-br]. > Find more information on this [fuzz testing > campaign|https://www.diffblue.com/blog/201
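The committed patch is not reproduced in this digest, but the guard described in the commit message amounts to something like the following sketch (the accessor name and error message are illustrative, not the actual code):

{code:java}
// Sketch only: fail fast with a 400 instead of indexing into an empty
// GroupingSpecification.fields array when group.facet is combined with group.query.
String[] groupFields = groupingSpec.getFields(); // hypothetical accessor for the 'fields' member
boolean groupFacet = params.getBool(GroupParams.GROUP_FACET, false);
if (groupFacet && groupFields.length == 0) {
  throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
      "group.facet is supported only when grouping by group.field");
}
{code}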
[GitHub] [lucene-solr] atris commented on issue #794: LUCENE-8769: Introduce Range Query Type With Multiple Ranges
atris commented on issue #794: LUCENE-8769: Introduce Range Query Type With Multiple Ranges URL: https://github.com/apache/lucene-solr/pull/794#issuecomment-512674354 cc @jpountz @iverase This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] atris opened a new pull request #794: LUCENE-8769: Introduce Range Query Type With Multiple Ranges
atris opened a new pull request #794: LUCENE-8769: Introduce Range Query Type With Multiple Ranges URL: https://github.com/apache/lucene-solr/pull/794 Currently, multiple ranges need to be specified as separate PointRangeQueries, which hurts performance when the BKD tree is deep, since each range query needs its own traversal. This commit introduces a new range query type in which multiple ranges are logically connected by OR operators. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
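For context, this is the pattern the PR aims to improve on: today each range is a separate point range query OR'ed together in a BooleanQuery, and each clause traverses the BKD tree on its own (the field name and bounds below are made up):

{code:java}
import org.apache.lucene.document.IntPoint;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;

class MultiRangeToday {
  // Three disjoint ranges over the same point field; each SHOULD clause walks the BKD tree separately.
  static Query priceInAnyRange() {
    BooleanQuery.Builder builder = new BooleanQuery.Builder();
    builder.add(IntPoint.newRangeQuery("price", 0, 10), Occur.SHOULD);
    builder.add(IntPoint.newRangeQuery("price", 50, 60), Occur.SHOULD);
    builder.add(IntPoint.newRangeQuery("price", 100, 110), Occur.SHOULD);
    return builder.build();
  }
}
{code}

The new query type proposed in the PR would express these ranges in a single query so the tree is walked once; its exact API is defined in the pull request and is not shown here.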
[jira] [Commented] (SOLR-13206) ArrayIndexOutOfBoundsException in org/apache/solr/request/SimpleFacets.java[705]
[ https://issues.apache.org/jira/browse/SOLR-13206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887639#comment-16887639 ] ASF subversion and git services commented on SOLR-13206: Commit 1fc416404cbb008172717911a675e9e9113ad75a in lucene-solr's branch refs/heads/master from Munendra S N [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1fc4164 ] SOLR-13206: Fix AIOOBE when group.facet is specified with group.query group.facet is supported only for group.field. When group.facet is used with group.query, then return proper error code > ArrayIndexOutOfBoundsException in > org/apache/solr/request/SimpleFacets.java[705] > > > Key: SOLR-13206 > URL: https://issues.apache.org/jira/browse/SOLR-13206 > Project: Solr > Issue Type: Bug >Affects Versions: master (9.0) > Environment: h1. Steps to reproduce > * Use a Linux machine. > * Build commit {{ea2c8ba}} of Solr as described in the section below. > * Build the films collection as described below. > * Start the server using the command {{./bin/solr start -f -p 8983 -s > /tmp/home}} > * Request the URL given in the bug description. > h1. Compiling the server > {noformat} > git clone https://github.com/apache/lucene-solr > cd lucene-solr > git checkout ea2c8ba > ant compile > cd solr > ant server > {noformat} > h1. Building the collection and reproducing the bug > We followed [Exercise > 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from > the [Solr > Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. > {noformat} > mkdir -p /tmp/home > echo '' > > /tmp/home/solr.xml > {noformat} > In one terminal start a Solr instance in foreground: > {noformat} > ./bin/solr start -f -p 8983 -s /tmp/home > {noformat} > In another terminal, create a collection of movies, with no shards and no > replication, and initialize it: > {noformat} > bin/solr create -c films > curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": > {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' > http://localhost:8983/solr/films/schema > curl -X POST -H 'Content-type:application/json' --data-binary > '{"add-copy-field" : {"source":"*","dest":"_text_"}}' > http://localhost:8983/solr/films/schema > ./bin/post -c films example/films/films.json > curl -v "URL_BUG" > {noformat} > Please check the issue description below to find the "URL_BUG" that will > allow you to reproduce the issue reported. 
>Reporter: Marek >Priority: Minor > Labels: diffblue, newdev > > Requesting the following URL causes Solr to return an HTTP 500 error response: > {noformat} > http://localhost:8983/solr/films/select?group=true&group.func=genre&group.facet=true&facet.pivot=_version_&on&facet=true > {noformat} > The error response seems to be caused by the following uncaught exception: > {noformat} > ERROR (qtp689401025-21) [ x:films] o.a.s.h.RequestHandlerBase > java.lang.ArrayIndexOutOfBoundsException: 0 > at > org.apache.solr.request.SimpleFacets.getGroupedCounts(SimpleFacets.java:705) > at > org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:495) > at > org.apache.solr.request.SimpleFacets.getTermCountsForPivots(SimpleFacets.java:414) > at > org.apache.solr.handler.component.PivotFacetProcessor.processSingle(PivotFacetProcessor.java:221) > at > org.apache.solr.handler.component.PivotFacetProcessor.process(PivotFacetProcessor.java:169) > at > org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:279) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340) > [...] > {noformat} > The first element of an empty array of strings, stored in > the member 'org.apache.solr.search.grouping.GroupingSpecification.fields', is accessed. > There is an attempt to put some strings into the array at > org/apache/solr/handler/component/QueryComponent.java[283]; however, the > string "group.field" is not present in params of the processed > org.apache.solr.request.SolrQueryRequest instance. > The cause of the issue seems to be similar to one reported in SOLR-13204. > To set up an environment to
[jira] [Resolved] (LUCENE-8913) Reproducing failure in various TestLatLon* equals/hashcode tests
[ https://issues.apache.org/jira/browse/LUCENE-8913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ignacio Vera resolved LUCENE-8913. -- Resolution: Fixed Assignee: Ignacio Vera Fix Version/s: 8.3 8.2 master (9.0) > Reproducing failure in various TestLatLon* equals/hashcode tests > - > > Key: LUCENE-8913 > URL: https://issues.apache.org/jira/browse/LUCENE-8913 > Project: Lucene - Core > Issue Type: Bug > Components: core/other >Affects Versions: master (9.0) >Reporter: Gus Heck >Assignee: Ignacio Vera >Priority: Major > Fix For: master (9.0), 8.2, 8.3 > > > Bumped into this while running tests locally > ant clean test -Dtests.seed=41D0C5A80C823307 -Dtests.slow=true > -Dtests.badapples=true -Dtests.locale=es-CL > -Dtests.timezone=Pacific/Rarotonga -Dtests.asserts=true > -Dtests.file.encoding=UTF-8 > reliably produces: > > {code:java} > Tests with failures [seed: 41D0C5A80C823307]: >[junit4] - > org.apache.lucene.document.TestLatLonPointShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonMultiPointShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonLineShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonPolygonShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonMultiPolygonShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonMultiLineShapeQueries.testBoxQueryEqualsAndHashcode{code} > -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8913) Reproducing failure in various TestLatLon* equals/hashcode tests
[ https://issues.apache.org/jira/browse/LUCENE-8913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887631#comment-16887631 ] ASF subversion and git services commented on LUCENE-8913: - Commit 31d7ec7bbfdcd2c4cc61d9d35e962165410b65fe in lucene-solr's branch refs/heads/branch_8_2 from Ignacio Vera [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=31d7ec7 ] LUCENE-8913: Fix test bug in BaseLatLonShapeTestCase#testBoxQueryEqualsAndHashcode > Reproducing failure in various TestLatLon* equals/hashcode tests > - > > Key: LUCENE-8913 > URL: https://issues.apache.org/jira/browse/LUCENE-8913 > Project: Lucene - Core > Issue Type: Bug > Components: core/other >Affects Versions: master (9.0) >Reporter: Gus Heck >Priority: Major > > Bumped into this while running tests locally > ant clean test -Dtests.seed=41D0C5A80C823307 -Dtests.slow=true > -Dtests.badapples=true -Dtests.locale=es-CL > -Dtests.timezone=Pacific/Rarotonga -Dtests.asserts=true > -Dtests.file.encoding=UTF-8 > reliably produces: > > {code:java} > Tests with failures [seed: 41D0C5A80C823307]: >[junit4] - > org.apache.lucene.document.TestLatLonPointShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonMultiPointShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonLineShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonPolygonShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonMultiPolygonShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonMultiLineShapeQueries.testBoxQueryEqualsAndHashcode{code} > -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8913) Reproducing failure in various TestLatLon* equals/hashcode tests
[ https://issues.apache.org/jira/browse/LUCENE-8913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887628#comment-16887628 ] ASF subversion and git services commented on LUCENE-8913: - Commit 0de627ee26a4489e6ee9b333bf4ed16a4aa032f8 in lucene-solr's branch refs/heads/master from Ignacio Vera [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0de627e ] LUCENE-8913: Fix test bug in BaseLatLonShapeTestCase#testBoxQueryEqualsAndHashcode > Reproducing failure in various TestLatLon* equals/hashcode tests > - > > Key: LUCENE-8913 > URL: https://issues.apache.org/jira/browse/LUCENE-8913 > Project: Lucene - Core > Issue Type: Bug > Components: core/other >Affects Versions: master (9.0) >Reporter: Gus Heck >Priority: Major > > Bumped into this while running tests locally > ant clean test -Dtests.seed=41D0C5A80C823307 -Dtests.slow=true > -Dtests.badapples=true -Dtests.locale=es-CL > -Dtests.timezone=Pacific/Rarotonga -Dtests.asserts=true > -Dtests.file.encoding=UTF-8 > reliably produces: > > {code:java} > Tests with failures [seed: 41D0C5A80C823307]: >[junit4] - > org.apache.lucene.document.TestLatLonPointShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonMultiPointShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonLineShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonPolygonShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonMultiPolygonShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonMultiLineShapeQueries.testBoxQueryEqualsAndHashcode{code} > -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8913) Reproducing failure in various TestLatLon* equals/hashcode tests
[ https://issues.apache.org/jira/browse/LUCENE-8913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887630#comment-16887630 ] ASF subversion and git services commented on LUCENE-8913: - Commit 6f2ff2157db802789ea840454068c19c509ba75c in lucene-solr's branch refs/heads/branch_8x from Ignacio Vera [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6f2ff21 ] LUCENE-8913: Fix test bug in BaseLatLonShapeTestCase#testBoxQueryEqualsAndHashcode > Reproducing failure in various TestLatLon* equals/hashcode tests > - > > Key: LUCENE-8913 > URL: https://issues.apache.org/jira/browse/LUCENE-8913 > Project: Lucene - Core > Issue Type: Bug > Components: core/other >Affects Versions: master (9.0) >Reporter: Gus Heck >Priority: Major > > Bumped into this while running tests locally > ant clean test -Dtests.seed=41D0C5A80C823307 -Dtests.slow=true > -Dtests.badapples=true -Dtests.locale=es-CL > -Dtests.timezone=Pacific/Rarotonga -Dtests.asserts=true > -Dtests.file.encoding=UTF-8 > reliably produces: > > {code:java} > Tests with failures [seed: 41D0C5A80C823307]: >[junit4] - > org.apache.lucene.document.TestLatLonPointShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonMultiPointShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonLineShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonPolygonShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonMultiPolygonShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonMultiLineShapeQueries.testBoxQueryEqualsAndHashcode{code} > -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8913) Reproducing failure in various TestLatLon* equals/hashcode tests
[ https://issues.apache.org/jira/browse/LUCENE-8913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887627#comment-16887627 ] Ignacio Vera commented on LUCENE-8913: -- I will fix this one as it is a trivial test bug > Reproducing failure in various TestLatLon* equals/hashcode tests > - > > Key: LUCENE-8913 > URL: https://issues.apache.org/jira/browse/LUCENE-8913 > Project: Lucene - Core > Issue Type: Bug > Components: core/other >Affects Versions: master (9.0) >Reporter: Gus Heck >Priority: Major > > Bumped into this while running tests locally > ant clean test -Dtests.seed=41D0C5A80C823307 -Dtests.slow=true > -Dtests.badapples=true -Dtests.locale=es-CL > -Dtests.timezone=Pacific/Rarotonga -Dtests.asserts=true > -Dtests.file.encoding=UTF-8 > reliably produces: > > {code:java} > Tests with failures [seed: 41D0C5A80C823307]: >[junit4] - > org.apache.lucene.document.TestLatLonPointShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonMultiPointShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonLineShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonPolygonShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonMultiPolygonShapeQueries.testBoxQueryEqualsAndHashcode >[junit4] - > org.apache.lucene.document.TestLatLonMultiLineShapeQueries.testBoxQueryEqualsAndHashcode{code} > -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13565) Node level runtime libs loaded from remote urls
[ https://issues.apache.org/jira/browse/SOLR-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887626#comment-16887626 ] ASF subversion and git services commented on SOLR-13565: Commit 9b3b21eeaf97f51de927df4b7bc2796ed5edb599 in lucene-solr's branch refs/heads/jira/SOLR-13565 from Noble Paul [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9b3b21e ] SOLR-13565: more tests > Node level runtime libs loaded from remote urls > --- > > Key: SOLR-13565 > URL: https://issues.apache.org/jira/browse/SOLR-13565 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > Custom components to be loaded at the CoreContainer level > How to configure this? > {code:json} > curl -X POST -H 'Content-type:application/json' --data-binary ' > { > "add-runtimelib": { > "name": "lib-name" , > "url" : "http://host:port/url/of/jar", > "sha512":"" > } > }' http://localhost:8983/api/cluster > {code} > How to update your jars? > {code:json} > curl -X POST -H 'Content-type:application/json' --data-binary ' > { > "update-runtimelib": { > "name": "lib-name" , > "url" : "http://host:port/url/of/jar", > "sha512":"" > } > }' http://localhost:8983/api/cluster > {code} > This only loads the components used in CoreContainer and does not require a > restart of the Solr node. > The configuration lives in the file {{/clusterprops.json}} in ZK. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
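The {{sha512}} value in the commands above is left empty in the issue description; a sketch of how one could compute it for a jar before posting the command, using only standard JDK APIs (class name is illustrative):

{code:java}
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

// Prints the SHA-512 digest of a jar as a hex string, suitable for the "sha512" field above.
public class Sha512OfJar {
  public static void main(String[] args) throws Exception {
    byte[] jarBytes = Files.readAllBytes(Paths.get(args[0]));
    byte[] digest = MessageDigest.getInstance("SHA-512").digest(jarBytes);
    StringBuilder hex = new StringBuilder();
    for (byte b : digest) {
      hex.append(String.format("%02x", b));
    }
    System.out.println(hex);
  }
}
{code}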
[jira] [Created] (SOLR-13642) Test2BPostingsBytes org.apache.lucene.index.CorruptIndexException: docs out of order (490879719 <= 490879719 )
Daniel Black created SOLR-13642: --- Summary: Test2BPostingsBytes org.apache.lucene.index.CorruptIndexException: docs out of order (490879719 <= 490879719 ) Key: SOLR-13642 URL: https://issues.apache.org/jira/browse/SOLR-13642 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Affects Versions: 8.1.1 Environment: RHEL-7.3 (ppc64le - Power9) kernel 3.10.0-957.21.3.el7.ppc64le 48G vm, 64 core java version "1.8.0_211" Java(TM) SE Runtime Environment (build 8.0.5.37 - pxl6480sr5fp37-20190618_01(SR5 FP37)) IBM J9 VM (build 2.9, JRE 1.8.0 Linux ppc64le-64-Bit Compressed References 20190617_419755 (JIT enabled, AOT enabled) OpenJ9 - 354b31d OMR - 0437c69 IBM - 4972efe) JCL - 20190606_01 based on Oracle jdk8u211-b25 Reporter: Daniel Black 8x branch at commit 081e2ef2c05e017e87a2aef2a4f55067fbba5cb4 while running {{ant -Dtests.filter=(@monster or @slow) and not(@awaitsfix) -Dtests.heapsize=4G -Dtests.jvms=64 test}} {noformat} 2> NOTE: reproduce with: ant test -Dtestcase=Test2BPostingsBytes -Dtests.method=test -Dtests.seed=1C14F78FC0AF1835 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=fr -Dtests.timezone=SystemV/AST4ADT -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [23:54:00.627] ERROR111s J52 | Test2BPostingsBytes.test <<< > Throwable #1: org.apache.lucene.index.CorruptIndexException: docs out of order (490879719 <= 490879719 ) (resource=MockIndexOutputWrapper(FSIndexOutput(path="/home/danielgb /lucene-solr/lucene/build/core/test/J52/temp/lucene.index.Test2BPostingsBytes_1C14F78FC0AF1835-001/2BPostingsBytes3-001/_0_Lucene50_0.doc"))) >at __randomizedtesting.SeedInfo.seed([1C14F78FC0AF1835:9440C8556E5375CD]:0) >at org.apache.lucene.codecs.lucene50.Lucene50PostingsWriter.startDoc(Lucene50PostingsWriter.java:236) >at org.apache.lucene.codecs.PushPostingsWriterBase.writeTerm(PushPostingsWriterBase.java:148) >at org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter$TermsWriter.write(BlockTreeTermsWriter.java:865) >at org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter.write(BlockTreeTermsWriter.java:344) >at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105) >at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.merge(PerFieldPostingsFormat.java:169) >at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:245) >at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:140) >at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2988) >at org.apache.lucene.util.TestUtil.addIndexesSlowly(TestUtil.java:990) >at org.apache.lucene.index.Test2BPostingsBytes.test(Test2BPostingsBytes.java:127) >at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) >at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:90) >at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55) >at java.lang.reflect.Method.invoke(Method.java:508) >at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) >at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) >at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) >at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) >at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) >at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) >at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) >at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) >at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) >at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) >at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) >at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) >at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) >at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) >at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) >at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) >at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(Randomi
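For readers unfamiliar with the failure message: the postings writer requires each document ID within a term's postings to be strictly greater than the previous one, so "490879719 <= 490879719" means the same doc ID reached the writer twice. The tripped invariant has roughly this shape (a paraphrase for illustration, not the actual Lucene50PostingsWriter source):

{code:java}
import org.apache.lucene.index.CorruptIndexException;

class PostingsDocOrderCheck {
  // Paraphrased invariant: doc IDs within a term's postings must be strictly increasing.
  static void checkDocOrder(int docID, int lastDocID) throws CorruptIndexException {
    if (docID <= lastDocID) {
      throw new CorruptIndexException(
          "docs out of order (" + docID + " <= " + lastDocID + " )", "postings writer");
    }
  }
}
{code}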
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-11.0.3) - Build # 8059 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/8059/ Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseSerialGC 6 tests failed. FAILED: org.apache.solr.update.processor.DimensionalRoutedAliasUpdateProcessorTest.testCatTime Error Message: Collection not found: testCatTime__CRA__calico__TRA__2019-07-01 Stack Trace: org.apache.solr.common.SolrException: Collection not found: testCatTime__CRA__calico__TRA__2019-07-01 at __randomizedtesting.SeedInfo.seed([DD5BB3E097BBD0B4:DA50FCE3A848815C]:0) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1084) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:504) at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:460) at org.apache.solr.update.processor.RoutedAliasUpdateProcessorTest.addDocsAndCommit(RoutedAliasUpdateProcessorTest.java:301) at org.apache.solr.update.processor.DimensionalRoutedAliasUpdateProcessorTest.testCatTime(DimensionalRoutedAliasUpdateProcessorTest.java:563) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterM
[jira] [Resolved] (LUCENE-8909) Deprecate getFieldNames from IndexWriter
[ https://issues.apache.org/jira/browse/LUCENE-8909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N resolved LUCENE-8909. -- Resolution: Done Assignee: Munendra S N Fix Version/s: 8.3 > Deprecate getFieldNames from IndexWriter > > > Key: LUCENE-8909 > URL: https://issues.apache.org/jira/browse/LUCENE-8909 > Project: Lucene - Core > Issue Type: Task >Reporter: Munendra S N >Assignee: Munendra S N >Priority: Major > Fix For: 8.3 > > Attachments: LUCENE-8909.patch > > > From SOLR-12368 > {quote}Would be nice to be able to remove IndexWriter.getFieldNames as well, > which was added in LUCENE-7659 only for this workaround.{quote} > Once Solr task resolved, deprecate {{IndexWriter#getFieldNames}} from 8x and > remove it from master -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8909) Deprecate getFieldNames from IndexWriter
[ https://issues.apache.org/jira/browse/LUCENE-8909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887622#comment-16887622 ] ASF subversion and git services commented on LUCENE-8909: - Commit d23da5a951c0ae9b1735c05c52e10f3fa0af2c7b in lucene-solr's branch refs/heads/branch_8x from Munendra S N [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d23da5a ] LUCENE-8909: deprecate IndexWriter#getFieldNames() > Deprecate getFieldNames from IndexWriter > > > Key: LUCENE-8909 > URL: https://issues.apache.org/jira/browse/LUCENE-8909 > Project: Lucene - Core > Issue Type: Task >Reporter: Munendra S N >Priority: Major > Attachments: LUCENE-8909.patch > > > From SOLR-12368 > {quote}Would be nice to be able to remove IndexWriter.getFieldNames as well, > which was added in LUCENE-7659 only for this workaround.{quote} > Once Solr task resolved, deprecate {{IndexWriter#getFieldNames}} from 8x and > remove it from master -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8909) Deprecate getFieldNames from IndexWriter
[ https://issues.apache.org/jira/browse/LUCENE-8909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887614#comment-16887614 ] ASF subversion and git services commented on LUCENE-8909: - Commit 6104f55ac0407258b096dfbf1fb81c20da580e0a in lucene-solr's branch refs/heads/master from Munendra S N [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6104f55 ] LUCENE-8909: remove deprecated IndexWriter#getFieldNames() > Deprecate getFieldNames from IndexWriter > > > Key: LUCENE-8909 > URL: https://issues.apache.org/jira/browse/LUCENE-8909 > Project: Lucene - Core > Issue Type: Task >Reporter: Munendra S N >Priority: Major > Attachments: LUCENE-8909.patch > > > From SOLR-12368 > {quote}Would be nice to be able to remove IndexWriter.getFieldNames as well, > which was added in LUCENE-7659 only for this workaround.{quote} > Once Solr task resolved, deprecate {{IndexWriter#getFieldNames}} from 8x and > remove it from master -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13486) race condition between leader's "replay on startup" and non-leader's "recover from leader" can leave replicas out of sync (TestCloudConsistency)
[ https://issues.apache.org/jira/browse/SOLR-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887575#comment-16887575 ] Cao Manh Dat commented on SOLR-13486: - Thanks [~hossman], it is indeed a serious problem. Should we introduce a lock/flag to prevent /replication if tlog replay is not finished? ([~shalinmangar] wdyt?) > race condition between leader's "replay on startup" and non-leader's "recover > from leader" can leave replicas out of sync (TestCloudConsistency) > > > Key: SOLR-13486 > URL: https://issues.apache.org/jira/browse/SOLR-13486 > Project: Solr > Issue Type: Bug >Reporter: Hoss Man >Priority: Major > Attachments: > apache_Lucene-Solr-BadApples-NightlyTests-master_61.log.txt.gz, > apache_Lucene-Solr-BadApples-Tests-8.x_102.log.txt.gz > > > I've been investigating some jenkins failures from TestCloudConsistency, > which at first glance suggest a problem w/replica(s) recovering after a > network partition from the leader - but in digging into the logs the root > cause actually seems to be a thread race condition when a replica (the > leader) is first registered... > * The {{ZkContainer.registerInZk(...)}} method (which is called by > {{CoreContainer.registerCore(...)}} & {{CoreContainer.load()}}) is typically > run in a background thread (via the {{ZkContainer.coreZkRegister}} > ExecutorService) > * {{ZkContainer.registerInZk(...)}} delegates to > {{ZKController.register(...)}} which is ultimately responsible for checking > if there are any "old" tlogs on disk, and if so handling the "Replaying tlog > for during startup" logic > * Because this happens in a background thread, other logic/requests can be > handled by this core/replica in the meantime - before it starts (or while in > the middle of) replaying the tlogs > ** Notably: *leaders that have not yet replayed tlogs on startup will > erroneously respond to RTG / Fingerprint / PeerSync requests from other > replicas w/incomplete data* > ...In general, it seems scary / fishy to me that a replica can (apparently) > become *ACTIVE* before it has finished its {{registerInZk}} + "Replaying tlog > ... during startup" logic ... particularly since this can happen even for > replicas that are/become leaders. It seems like this could potentially cause > a whole host of problems, only one of which manifests in this particular test > failure: > * *BEFORE* replicaX's "coreZkRegister" thread reaches the "Replaying tlog > ... during startup" check: > ** replicaX can recognize (via zk terms) that it should be the leader(X) > ** this leaderX can then instruct some other replicaY to recover from it > ** replicaY can send RTG / PeerSync / FetchIndex requests to the leaderX > (either of its own volition, or because it was instructed to by leaderX) in > an attempt to recover > *** the responses to these recovery requests will not include updates in the > tlog files that existed on leaderX prior to startup that have not yet been > replayed > * *AFTER* replicaY has finished its recovery, leaderX's "Replaying tlog ... > during startup" can finish > ** replicaY now thinks it is in sync with leaderX, but leaderX has > (replayed) updates the other replicas know nothing about -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
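A rough sketch of the lock/flag idea floated in the comment above, with purely hypothetical names and not committed code: leader-side handlers that serve recovery traffic would wait until the core's startup tlog replay has completed.

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical gate: requests that feed recovery (/replication, RTG, PeerSync)
// would block until the core's startup tlog replay has finished.
class StartupReplayGate {
  private final CountDownLatch replayDone = new CountDownLatch(1);

  // Called by the coreZkRegister thread once "Replaying tlog ... during startup" finishes
  // (or immediately, if there was nothing to replay).
  void markReplayFinished() {
    replayDone.countDown();
  }

  // Called at the start of leader-side recovery handlers before serving data.
  void awaitReplay(long timeout, TimeUnit unit) throws InterruptedException {
    if (!replayDone.await(timeout, unit)) {
      throw new IllegalStateException("Startup tlog replay has not finished yet");
    }
  }
}
{code}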
[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions
[ https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887533#comment-16887533 ] ASF subversion and git services commented on SOLR-13105: Commit 34701a18466fd73d9d4b1f17a562bcb6d9abe5e9 in lucene-solr's branch refs/heads/SOLR-13105-visual from Joel Bernstein [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=34701a1 ] SOLR-13105: Start interpolation viz > A visual guide to Solr Math Expressions and Streaming Expressions > - > > Key: SOLR-13105 > URL: https://issues.apache.org/jira/browse/SOLR-13105 > Project: Solr > Issue Type: New Feature >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > Attachments: Screen Shot 2019-01-14 at 10.56.32 AM.png, Screen Shot > 2019-02-21 at 2.14.43 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, > Screen Shot 2019-03-04 at 7.47.57 PM.png, Screen Shot 2019-03-13 at 10.47.47 > AM.png, Screen Shot 2019-03-30 at 6.17.04 PM.png > > > Visualization is now a fundamental element of Solr Streaming Expressions and > Math Expressions. This ticket will create a visual guide to Solr Math > Expressions and Solr Streaming Expressions that includes *Apache Zeppelin* > visualization examples. > It will also cover using the JDBC expression to *analyze* and *visualize* > results from any JDBC compliant data source. > Intro from the guide: > {code:java} > Streaming Expressions exposes the capabilities of Solr Cloud as composable > functions. These functions provide a system for searching, transforming, > analyzing and visualizing data stored in Solr Cloud collections. > At a high level there are four main capabilities that will be explored in the > documentation: > * Searching, sampling and aggregating results from Solr. > * Transforming result sets after they are retrieved from Solr. > * Analyzing and modeling result sets using probability and statistics and > machine learning libraries. > * Visualizing result sets, aggregations and statistical models of the data. > {code} > > A few sample visualizations are attached to the ticket. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9802) Cannot group by a datefield in SolrCloud
[ https://issues.apache.org/jira/browse/SOLR-9802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887529#comment-16887529 ] Lucene/Solr QA commented on SOLR-9802: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 15s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 3m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 3m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green} 3m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 60m 52s{color} | {color:green} core in the patch passed. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 72m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | SOLR-9802 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12974918/SOLR-9802.patch | | Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns | | uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | ant | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh | | git revision | master / 4b75776 | | ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 | | Default Java | LTS | | Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/495/testReport/ | | modules | C: solr/core U: solr/core | | Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/495/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > Cannot group by a datefield in SolrCloud > > > Key: SOLR-9802 > URL: https://issues.apache.org/jira/browse/SOLR-9802 > Project: Solr > Issue Type: Bug >Reporter: Erick Erickson >Priority: Major > Attachments: SOLR-9802.patch > > > While working on SOLR-5260 I ran across this. It is easily reproducible by > indexing techproducts to a two-shard collection and then > &group=true&group.field=manufacturedate_dt > This works fine stand-alone. > When 5260 gets checked in look in DocValuesNotIndexedTest.java for a > reference to this JIRA and take out the special processing that avoids this > bug for a unit test. 
> Stack trace: > 80770 ERROR (qtp845642178-32) [n:127.0.0.1:50799_solr c:dv_coll s:shard1 > r:core_node2 x:dv_coll_shard1_replica1] o.a.s.h.RequestHandlerBase > org.apache.solr.common.SolrException: > Invalid Date String:'Mon Feb 02 13:40:21 MSK 239906837' > > at org.apache.solr.util.DateMathParser.parseMath(DateMathParser.java:234) > at org.apache.solr.schema.TrieField.readableToIndexed(TrieField.java:530) > at > org.apache.solr.search.grouping.distributed.command.GroupConverter.fromMutable(GroupConverter.java:59) > at > org.apache.solr.search.grouping.distributed.command.SearchGroupsFieldCommand.result(SearchGroupsFieldCommand.java:124) > at > org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:57) > at > org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:36) -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6672) function results' names should not include trailing whitespace
[ https://issues.apache.org/jira/browse/SOLR-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887524#comment-16887524 ] Lucene/Solr QA commented on SOLR-6672: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 0s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 36m 36s{color} | {color:green} core in the patch passed. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 43m 31s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | SOLR-6672 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12974798/SOLR-6672.patch | | Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns | | uname | Linux lucene1-us-west 4.4.0-137-generic #163~14.04.1-Ubuntu SMP Mon Sep 24 17:14:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | ant | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh | | git revision | master / 4b75776 | | ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 | | Default Java | LTS | | Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/496/testReport/ | | modules | C: solr/core U: solr/core | | Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/496/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. 
> function results' names should not include trailing whitespace > -- > > Key: SOLR-6672 > URL: https://issues.apache.org/jira/browse/SOLR-6672 > Project: Solr > Issue Type: Bug > Components: search >Reporter: Mike Sokolov >Priority: Minor > Attachments: SOLR-6672.patch, SOLR-6672.patch > > > If you include a function as a result field in a list of multiple fields > separated by white space, the corresponding key in the result markup includes > trailing whitespace; Example: > {code} > fl="id field(units_used) archive_id" > {code} > ends up returning results like this: > {code} > { > "id": "nest.epubarchive.1", > "archive_id": "urn:isbn:97849D42C5A01", > "field(units_used) ": 123 > ^ > } > {code} > A workaround is to use comma separators instead of whitespace > {code} > fl="id,field(units_used),archive_id" > {code} -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
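A sketch of the kind of normalization the fix needs: split {{fl}} on commas and whitespace alike and trim each entry before it becomes a response key. The attached patch may implement this differently inside Solr's fl parsing code; the snippet below only illustrates the idea with made-up values.

{code:java}
class FlKeyTrim {
  public static void main(String[] args) {
    // Hypothetical illustration: whitespace-separated fl entries should not keep
    // trailing whitespace when they become response keys.
    String fl = "id field(units_used) archive_id";
    for (String entry : fl.split("[,\\s]+")) {
      String key = entry.trim();
      if (!key.isEmpty()) {
        System.out.println("[" + key + "]"); // e.g. [field(units_used)] with no trailing space
      }
    }
  }
}
{code}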
[jira] [Commented] (LUCENE-8908) Specified default value not returned for query() when doc doesn't match
[ https://issues.apache.org/jira/browse/LUCENE-8908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887518#comment-16887518 ] Hoss Man commented on LUCENE-8908: -- [~munendrasn] at first glance this looks good ... but i'm wondering if this actually fixes all of the examples i mentioned when this was opened -- in particular things like {{exists(query($qx,0))}} vs {{exists(query($qx))}} ... those should return different things depending on whether the doc matches $qx or not. IIRC that would require modifying QueryDocValues.exists() to return "true" anytime there is a defVal, but i don't think that's really possible ATM because it's a {{float}} (not a nullable Float) ... and i'm not sure off the top of my head that it would even be the ideal behavior for the QueryDocValues code ? ... maybe the solr ValueSourceParser logic should be changed to put an explicit wrapper around the QueryValueSource when a default is (isn't?) used ... not sure, i haven't looked / thought about this code in a long time. > Specified default value not returned for query() when doc doesn't match > --- > > Key: LUCENE-8908 > URL: https://issues.apache.org/jira/browse/LUCENE-8908 > Project: Lucene - Core > Issue Type: Bug >Reporter: Bill Bell >Priority: Major > Attachments: LUCENE-8908.patch, SOLR-7845.patch, SOLR-7845.patch > > > The 2 arg version of the "query()" was designed so that the second argument > would specify the value used for any document that does not match the query > specified by the first argument -- but the "exists" property of the resulting > ValueSource only takes into consideration whether or not the document matches > the query -- and ignores the use of the second argument. > > The workaround is to ignore the 2 arg form of the query() function, and > instead wrap the query function in def(). > for example: {{def(query($something), $defaultval)}} instead of > {{query($something, $defaultval)}} -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-8.x-MacOSX (64bit/jdk-12.0.1) - Build # 247 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-MacOSX/247/ Java: 64bit/jdk-12.0.1 -XX:-UseCompressedOops -XX:+UseParallelGC 5 tests failed. FAILED: org.apache.solr.handler.export.TestExportWriter.testStringWithCase Error Message: expected:<...:"3"}, {"id":"[1"}, {"id":"4]"}, {"id":"2"}...> but was:<...:"3"}, {"id":"[4"}, {"id":"1]"}, {"id":"2"}...> Stack Trace: org.junit.ComparisonFailure: expected:<...:"3"}, {"id":"[1"}, {"id":"4]"}, {"id":"2"}...> but was:<...:"3"}, {"id":"[4"}, {"id":"1]"}, {"id":"2"}...> at __randomizedtesting.SeedInfo.seed([FF4ABEF2F9A97973:9B01427460247916]:0) at org.junit.Assert.assertEquals(Assert.java:115) at org.junit.Assert.assertEquals(Assert.java:144) at org.apache.solr.handler.export.TestExportWriter.assertJsonEquals(TestExportWriter.java:540) at org.apache.solr.handler.export.TestExportWriter.testStringWithCase(TestExportWriter.java:365) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:567) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
[JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 152 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/152/ No tests ran. Build Log: [...truncated 24989 lines...] [asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [java] Processed 2590 links (2119 relative) to 3405 anchors in 259 files [echo] Validated Links & Anchors via: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr-ref-guide/bare-bones-html/ -dist-changes: [copy] Copying 4 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/changes package: -unpack-solr-tgz: -ensure-solr-tgz-exists: [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked [untar] Expanding: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/solr-8.3.0.tgz into /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked generate-maven-artifacts: resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. 
-ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings ::
[jira] [Updated] (SOLR-11556) Backup/Restore with multiple BackupRepository objects defined results in the wrong repo being used.
[ https://issues.apache.org/jira/browse/SOLR-11556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-11556: Attachment: SOLR-11556.patch > Backup/Restore with multiple BackupRepository objects defined results in the > wrong repo being used. > --- > > Key: SOLR-11556 > URL: https://issues.apache.org/jira/browse/SOLR-11556 > Project: Solr > Issue Type: Bug > Components: Backup/Restore >Affects Versions: 6.3 >Reporter: Timothy Potter >Assignee: Mikhail Khludnev >Priority: Major > Attachments: SOLR-11556.patch, SOLR-11556.patch, SOLR-11556.patch > > > I defined two repos for backup/restore, one local and one remote on GCS, e.g. > {code} > > class="org.apache.solr.core.backup.repository.HdfsBackupRepository" > default="false"> > ... > > class="org.apache.solr.core.backup.repository.LocalFileSystemRepository" > default="false"> > /tmp/solr-backups > > > {code} > Since the CollectionHandler does not pass the "repository" param along, once > the BackupCmd gets the ZkNodeProps, it selects the wrong repo! > The error I'm seeing is: > {code} > 2017-10-26 17:07:27.326 ERROR > (OverseerThreadFactory-19-thread-1-processing-n:host:8983_solr) [ ] > o.a.s.c.OverseerCollectionMessageHandler Collection: product operation: > backup failed:java.nio.file.FileSystemNotFoundException: Provider "gs" not > installed > at java.nio.file.Paths.get(Paths.java:147) > at > org.apache.solr.core.backup.repository.LocalFileSystemRepository.resolve(LocalFileSystemRepository.java:82) > at org.apache.solr.cloud.BackupCmd.call(BackupCmd.java:99) > at > org.apache.solr.cloud.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:224) > at > org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:463) > at > org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:748) > {code} > Notice the Local backup repo is being selected in the BackupCmd even though I > passed repository=hdfs in my backup command, e.g. > {code} > curl > "http://localhost:8983/solr/admin/collections?action=BACKUP&name=foo&collection=foo&location=gs://tjp-solr-test/backups&repository=hdfs"; > {code} > I think the fix here is to include the repository param, see patch. I'll fix > for the next 7.x release and those on 6 can just apply the patch here. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
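The solr.xml snippet quoted in the description above lost its XML tags somewhere along the way; a plausible reconstruction of the two-repository configuration it describes is below (element and attribute names are assumed from the class names shown, not recovered from the original message):

{code:xml}
<backup>
  <repository name="hdfs"
              class="org.apache.solr.core.backup.repository.HdfsBackupRepository"
              default="false">
    <!-- HDFS-specific settings elided in the original message -->
  </repository>
  <repository name="local"
              class="org.apache.solr.core.backup.repository.LocalFileSystemRepository"
              default="false">
    <str name="location">/tmp/solr-backups</str>
  </repository>
</backup>
{code}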
[jira] [Commented] (LUCENE-8924) Remove Fields Order Checks from CheckIndex?
[ https://issues.apache.org/jira/browse/LUCENE-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887340#comment-16887340 ] Atri Sharma commented on LUCENE-8924: - I see. Should we make this more explicit and robust then? For example, since we do not explicitly maintain a sort order but rely on the key set to do the right thing, a change from Collections.unmodifiableSet to Set.copyOf breaks this assertion in CheckIndex (since Set.copyOf explicitly calls out that there is no guarantee in the order of traversal). > Remove Fields Order Checks from CheckIndex? > --- > > Key: LUCENE-8924 > URL: https://issues.apache.org/jira/browse/LUCENE-8924 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Atri Sharma >Priority: Major > > CheckIndex checks the order of fields read from the FieldsEnum for the > postings reader. We do not explicitly sort or use a sorted data > structure to represent keys (at least not explicitly), and no FieldsEnum depends > on the order apart from MultiFieldsEnum, which no longer exists. > > Should we remove the check? -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
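A standalone illustration (not taken from any patch) of the ordering difference pointed out in the comment above: Collections.unmodifiableSet is only a view, so iteration follows the backing set's order, while Set.copyOf makes no promise about iteration order.

{code:java}
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.Set;

public class FieldOrderDemo {
  public static void main(String[] args) {
    // Backing set with a well-defined (insertion) order, e.g. field names as they were registered.
    Set<String> fields = new LinkedHashSet<>();
    fields.add("id");
    fields.add("title");
    fields.add("body");

    // A view over the backing set: iteration order follows LinkedHashSet's insertion order.
    Set<String> view = Collections.unmodifiableSet(fields);

    // An immutable copy: the spec gives no guarantee about its iteration order.
    Set<String> copy = Set.copyOf(fields);

    System.out.println("view: " + view); // always [id, title, body]
    System.out.println("copy: " + copy); // order unspecified, so an order-sensitive check could break
  }
}
{code}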
[jira] [Commented] (LUCENE-8924) Remove Fields Order Checks from CheckIndex?
[ https://issues.apache.org/jira/browse/LUCENE-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887300#comment-16887300 ] Adrien Grand commented on LUCENE-8924: -- We rely on the order for merging, see "MultiFields". > Remove Fields Order Checks from CheckIndex? > --- > > Key: LUCENE-8924 > URL: https://issues.apache.org/jira/browse/LUCENE-8924 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Atri Sharma >Priority: Major > > CheckIndex checks the order of fields read from the FieldsEnum for the > postings reader. We do not explicitly sort or use a sorted data > structure to represent keys (at least not explicitly), and no FieldsEnum depends > on the order apart from MultiFieldsEnum, which no longer exists. > > Should we remove the check? -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8909) Deprecate getFieldNames from IndexWriter
[ https://issues.apache.org/jira/browse/LUCENE-8909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887291#comment-16887291 ] Adrien Grand commented on LUCENE-8909: -- +1 > Deprecate getFieldNames from IndexWriter > > > Key: LUCENE-8909 > URL: https://issues.apache.org/jira/browse/LUCENE-8909 > Project: Lucene - Core > Issue Type: Task >Reporter: Munendra S N >Priority: Major > Attachments: LUCENE-8909.patch > > > From SOLR-12368 > {quote}Would be nice to be able to remove IndexWriter.getFieldNames as well, > which was added in LUCENE-7659 only for this workaround.{quote} > Once Solr task resolved, deprecate {{IndexWriter#getFieldNames}} from 8x and > remove it from master -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8908) Specified default value not returned for query() when doc doesn't match
[ https://issues.apache.org/jira/browse/LUCENE-8908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887290#comment-16887290 ] Adrien Grand commented on LUCENE-8908: -- +1 > Specified default value not returned for query() when doc doesn't match > --- > > Key: LUCENE-8908 > URL: https://issues.apache.org/jira/browse/LUCENE-8908 > Project: Lucene - Core > Issue Type: Bug >Reporter: Bill Bell >Priority: Major > Attachments: LUCENE-8908.patch, SOLR-7845.patch, SOLR-7845.patch > > > The 2 arg version of the "query()" was designed so that the second argument > would specify the value used for any document that does not match the query > specified by the first argument -- but the "exists" property of the resulting > ValueSource only takes into consideration whether or not the document matches > the query -- and ignores the use of the second argument. > > The workaround is to ignore the 2 arg form of the query() function, and > instead wrap the query function in def(). > For example: {{def(query($something), $defaultval)}} instead of > {{query($something, $defaultval)}} -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-NightlyTests-8.x - Build # 153 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/153/ 6 tests failed. FAILED: org.apache.lucene.index.TestIndexWriterDelete.testDeletesOnDiskFull Error Message: Test abandoned because suite timeout was reached. Stack Trace: java.lang.Exception: Test abandoned because suite timeout was reached. at __randomizedtesting.SeedInfo.seed([2CB10673F2E9420B]:0) FAILED: junit.framework.TestSuite.org.apache.lucene.index.TestIndexWriterDelete Error Message: Suite timeout exceeded (>= 720 msec). Stack Trace: java.lang.Exception: Suite timeout exceeded (>= 720 msec). at __randomizedtesting.SeedInfo.seed([2CB10673F2E9420B]:0) FAILED: org.apache.lucene.document.TestXYPolygonShapeQueries.testRandomBig Error Message: Java heap space Stack Trace: java.lang.OutOfMemoryError: Java heap space at __randomizedtesting.SeedInfo.seed([6F6F329B8FF5C104:E8384F141EACBD84]:0) at org.apache.lucene.store.RAMFile.newBuffer(RAMFile.java:84) at org.apache.lucene.store.RAMFile.addBuffer(RAMFile.java:57) at org.apache.lucene.store.RAMOutputStream.switchCurrentBuffer(RAMOutputStream.java:168) at org.apache.lucene.store.RAMOutputStream.writeBytes(RAMOutputStream.java:154) at org.apache.lucene.store.MockIndexOutputWrapper.writeBytes(MockIndexOutputWrapper.java:141) at org.apache.lucene.util.bkd.OfflinePointWriter.append(OfflinePointWriter.java:67) at org.apache.lucene.util.bkd.BKDRadixSelector.offlinePartition(BKDRadixSelector.java:282) at org.apache.lucene.util.bkd.BKDRadixSelector.buildHistogramAndPartition(BKDRadixSelector.java:258) at org.apache.lucene.util.bkd.BKDRadixSelector.select(BKDRadixSelector.java:126) at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1594) at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1611) at org.apache.lucene.util.bkd.BKDWriter.finish(BKDWriter.java:785) at org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.writeField(Lucene60PointsWriter.java:129) at org.apache.lucene.codecs.PointsWriter.mergeOneField(PointsWriter.java:62) at org.apache.lucene.codecs.PointsWriter.merge(PointsWriter.java:191) at org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:143) at org.apache.lucene.codecs.asserting.AssertingPointsFormat$AssertingPointsWriter.merge(AssertingPointsFormat.java:149) at org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:202) at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:162) at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4462) at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4056) at org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40) at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2157) at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1990) at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1939) at org.apache.lucene.document.BaseShapeTestCase.indexRandomShapes(BaseShapeTestCase.java:242) at org.apache.lucene.document.BaseShapeTestCase.verify(BaseShapeTestCase.java:211) at org.apache.lucene.document.BaseShapeTestCase.doTestRandom(BaseShapeTestCase.java:137) at org.apache.lucene.document.TestXYPolygonShapeQueries.testRandomBig(TestXYPolygonShapeQueries.java:119) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) FAILED: org.apache.solr.cloud.RollingRestartTest.test Error Message: 
Address already in use Stack Trace: java.net.BindException: Address already in use at __randomizedtesting.SeedInfo.seed([21AAE082AEE603F0:A9FEDF58001A6E08]:0) at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:433) at sun.nio.ch.Net.bind(Net.java:425) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:342) at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:308) at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80) at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) at org.eclipse.jetty.server.Server.doStart(Server.java:396) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) at org.apache.s
[jira] [Created] (LUCENE-8924) Remove Fields Order Checks from CheckIndex?
Atri Sharma created LUCENE-8924: --- Summary: Remove Fields Order Checks from CheckIndex? Key: LUCENE-8924 URL: https://issues.apache.org/jira/browse/LUCENE-8924 Project: Lucene - Core Issue Type: Improvement Reporter: Atri Sharma CheckIndex checks the order of fields read from the FieldsEnum for the postings reader. We do not explicitly sort or use a sorted data structure to represent keys (at least not explicitly), and no FieldsEnum depends on the order apart from MultiFieldsEnum, which no longer exists. Should we remove the check? -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] atris closed pull request #779: LUCENE-8762: Introduce Specialized Impacts For Doc + Freq
atris closed pull request #779: LUCENE-8762: Introduce Specialized Impacts For Doc + Freq URL: https://github.com/apache/lucene-solr/pull/779 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-13634) ResponseBuilderTest should be in same package as ResponseBuilder
[ https://issues.apache.org/jira/browse/SOLR-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N resolved SOLR-13634. - Resolution: Done Assignee: Munendra S N Fix Version/s: 8.3 Thanks [~nealsidhwaney] > ResponseBuilderTest should be in same package as ResponseBuilder > --- > > Key: SOLR-13634 > URL: https://issues.apache.org/jira/browse/SOLR-13634 > Project: Solr > Issue Type: Task > Security Level: Public (Default Security Level. Issues are Public) > Components: Server >Affects Versions: 8.1.1 >Reporter: Neal Sidhwaney >Assignee: Munendra S N >Priority: Trivial > Fix For: 8.3 > > Attachments: SOLR-13634.patch > > > While playing around with the analytics package, I noticed ResponseBuilder is > in Java package org.apache.solr.handler.component, whereas > ResponseBuilderTest is in org.apache.solr.handler. We should make them > consistent. I'll send a patch to move ResponseBuilderTest into the same > package as ResponseBuilder. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13634) ResponseBuilderTest should be in same package as ResponseBuilder
[ https://issues.apache.org/jira/browse/SOLR-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887273#comment-16887273 ] ASF subversion and git services commented on SOLR-13634: Commit 6899e0520ed3365be50c69a1b2c5f18d9624751b in lucene-solr's branch refs/heads/branch_8x from Munendra S N [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6899e05 ] SOLR-13634:move ResponseBuilderTest to same package as ResponseBuilder > ResponseBuilderTest should be in same package as ResponseBuilder > --- > > Key: SOLR-13634 > URL: https://issues.apache.org/jira/browse/SOLR-13634 > Project: Solr > Issue Type: Task > Security Level: Public (Default Security Level. Issues are Public) > Components: Server >Affects Versions: 8.1.1 >Reporter: Neal Sidhwaney >Priority: Trivial > Attachments: SOLR-13634.patch > > > While playing around with the analytics package, I noticed ResponseBuilder is > in Java package org.apache.solr.handler.component, whereas > ResponseBuilderTest is in org.apache.solr.handler. We should make them > consistent. I'll send a patch to move ResponseBuilderTest into the same > package as ResponseBuilder. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13634) ResponseBuilderTest should be in same package as ResponseBuilder
[ https://issues.apache.org/jira/browse/SOLR-13634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887265#comment-16887265 ] ASF subversion and git services commented on SOLR-13634: Commit 4b75776f5a7962200ae55f0125625890bf7ed1bd in lucene-solr's branch refs/heads/master from Munendra S N [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4b75776 ] SOLR-13634:move ResponseBuilderTest to same package as ResponseBuilder > ResponseBuilderTest should be in same package as ResponseBuilder > --- > > Key: SOLR-13634 > URL: https://issues.apache.org/jira/browse/SOLR-13634 > Project: Solr > Issue Type: Task > Security Level: Public (Default Security Level. Issues are Public) > Components: Server >Affects Versions: 8.1.1 >Reporter: Neal Sidhwaney >Priority: Trivial > Attachments: SOLR-13634.patch > > > While playing around with the analytics package, I noticed ResponseBuilder is > in Java package org.apache.solr.handler.component, whereas > ResponseBuilderTest is in org.apache.solr.handler. We should make them > consistent. I'll send a patch to move ResponseBuilderTest into the same > package as ResponseBuilder. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8908) Specified default value not returned for query() when doc doesn't match
[ https://issues.apache.org/jira/browse/LUCENE-8908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887239#comment-16887239 ] Munendra S N commented on LUCENE-8908: -- [~hossman] [~jpountz] could you please review this? > Specified default value not returned for query() when doc doesn't match > --- > > Key: LUCENE-8908 > URL: https://issues.apache.org/jira/browse/LUCENE-8908 > Project: Lucene - Core > Issue Type: Bug >Reporter: Bill Bell >Priority: Major > Attachments: LUCENE-8908.patch, SOLR-7845.patch, SOLR-7845.patch > > > The 2 arg version of the "query()" was designed so that the second argument > would specify the value used for any document that does not match the query > specified by the first argument -- but the "exists" property of the resulting > ValueSource only takes into consideration whether or not the document matches > the query -- and ignores the use of the second argument. > > The workaround is to ignore the 2 arg form of the query() function, and > instead wrap the query function in def(). > For example: {{def(query($something), $defaultval)}} instead of > {{query($something, $defaultval)}} -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-8908) Specified default value not returned for query() when doc doesn't match
[ https://issues.apache.org/jira/browse/LUCENE-8908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated LUCENE-8908: - Summary: Specified default value not returned for query() when doc doesn't match (was: 2 arg "query()" does not exist for all docs, even though second arg specifies a default value) > Specified default value not returned for query() when doc doesn't match > --- > > Key: LUCENE-8908 > URL: https://issues.apache.org/jira/browse/LUCENE-8908 > Project: Lucene - Core > Issue Type: Bug >Reporter: Bill Bell >Priority: Major > Attachments: LUCENE-8908.patch, SOLR-7845.patch, SOLR-7845.patch > > > The 2 arg version of the "query()" was designed so that the second argument > would specify the value used for any document that does not match the query > specified by the first argument -- but the "exists" property of the resulting > ValueSource only takes into consideration whether or not the document matches > the query -- and ignores the use of the second argument. > > The workaround is to ignore the 2 arg form of the query() function, and > instead wrap the query function in def(). > For example: {{def(query($something), $defaultval)}} instead of > {{query($something, $defaultval)}} -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8909) Deprecate getFieldNames from IndexWriter
[ https://issues.apache.org/jira/browse/LUCENE-8909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887235#comment-16887235 ] Munendra S N commented on LUCENE-8909: -- [^LUCENE-8909.patch] [~jpountz] could you please review the changes.txt? This patch is for master, will deprecate these methods in 8x > Deprecate getFieldNames from IndexWriter > > > Key: LUCENE-8909 > URL: https://issues.apache.org/jira/browse/LUCENE-8909 > Project: Lucene - Core > Issue Type: Task >Reporter: Munendra S N >Priority: Major > Attachments: LUCENE-8909.patch > > > From SOLR-12368 > {quote}Would be nice to be able to remove IndexWriter.getFieldNames as well, > which was added in LUCENE-7659 only for this workaround.{quote} > Once Solr task resolved, deprecate {{IndexWriter#getFieldNames}} from 8x and > remove it from master -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-8909) Deprecate getFieldNames from IndexWriter
[ https://issues.apache.org/jira/browse/LUCENE-8909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated LUCENE-8909: - Attachment: LUCENE-8909.patch > Deprecate getFieldNames from IndexWriter > > > Key: LUCENE-8909 > URL: https://issues.apache.org/jira/browse/LUCENE-8909 > Project: Lucene - Core > Issue Type: Task >Reporter: Munendra S N >Priority: Major > Attachments: LUCENE-8909.patch > > > From SOLR-12368 > {quote}Would be nice to be able to remove IndexWriter.getFieldNames as well, > which was added in LUCENE-7659 only for this workaround.{quote} > Once Solr task resolved, deprecate {{IndexWriter#getFieldNames}} from 8x and > remove it from master -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-11286) First doc Inplace Update, updating whole document.
[ https://issues.apache.org/jira/browse/SOLR-11286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N resolved SOLR-11286. - Resolution: Fixed Fix Version/s: 8.3 > First doc Inplace Update, updating whole document. > -- > > Key: SOLR-11286 > URL: https://issues.apache.org/jira/browse/SOLR-11286 > Project: Solr > Issue Type: Bug > Components: update >Affects Versions: 6.6 > Environment: stored="false" docValues="true"/> > trying inplace update for > stored="false" docValues="true"/> >Reporter: Abhishek Umarjikar >Priority: Major > Fix For: 8.3 > > > I am trying in-place update; for the first doc the whole document is getting > indexed, so in-place update is not working the first time. After that it > works for remaining docs. I am using SolrJ for in-place update. > First doc for in-place update: > *2017-08-24 21:59:14,603 DEBUG org.apache.solr.update.DirectUpdateHandler2 ? > updateDocument(add{_version_=1576617435037958144,id=US9668251B2})* > After first in-place update: > *2017-08-24 22:01:33,109 DEBUG org.apache.solr.update.DirectUpdateHandler2 ? > updateDocValues(add{_version_=1576617580281462784,id=US2014029560A1})* -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] MarcusSorealheis closed pull request #793: testing the PR template
MarcusSorealheis closed pull request #793: testing the PR template URL: https://github.com/apache/lucene-solr/pull/793 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] MarcusSorealheis commented on issue #793: testing the PR template
MarcusSorealheis commented on issue #793: testing the PR template URL: https://github.com/apache/lucene-solr/pull/793#issuecomment-512356262 https://user-images.githubusercontent.com/2353608/61393288-85be6c00-a875-11e9-9d1f-1e7b56ca63d8.png This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12368) in-place DV updates should no longer have to jump through hoops if field does not yet exist
[ https://issues.apache.org/jira/browse/SOLR-12368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated SOLR-12368: Resolution: Done Assignee: Munendra S N Fix Version/s: 8.3 Status: Resolved (was: Patch Available) > in-place DV updates should no longer have to jump through hoops if field does > not yet exist > --- > > Key: SOLR-12368 > URL: https://issues.apache.org/jira/browse/SOLR-12368 > Project: Solr > Issue Type: Improvement >Reporter: Hoss Man >Assignee: Munendra S N >Priority: Major > Fix For: 8.3 > > Attachments: SOLR-12368.patch, SOLR-12368.patch, SOLR-12368.patch, > SOLR-12368.patch > > > When SOLR-5944 first added "in-place" DocValue updates to Solr, one of the > edge cases thta had to be dealt with was the limitation imposed by > IndexWriter that docValues could only be updated if they already existed - if > a shard did not yet have a document w/a value in the field where the update > was attempted, we would get an error. > LUCENE-8316 seems to have removed this error, which i believe means we can > simplify & speed up some of the checks in Solr, and support this situation as > well, rather then falling back on full "read stored fields & reindex" atomic > update -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] MarcusSorealheis opened a new pull request #793: testing the PR template
MarcusSorealheis opened a new pull request #793: testing the PR template URL: https://github.com/apache/lucene-solr/pull/793 # Description Please provide a short description of the changes you're making with this pull request. # Solution Please provide a short description of the approach taken to implement your solution. # Tests Please describe the tests you've developed or run to confirm this patch implements the feature or solves the problem. # Checklist Please review the following and check all that apply: - [ ] I have reviewed the guidelines for [How to Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms to the standards described there to the best of my ability. - [ ] I have created a Jira issue and added the issue ID to my pull request title. - [ ] I am authorized to contribute this code to the ASF and have removed any code I do not have a license to distribute. - [ ] I have developed this patch against the `master` branch. - [ ] I have run `ant precommit` and the appropriate test suite. - [ ] I have added tests for my changes. - [ ] I have added documentation for the Ref Guide (for Solr changes only). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] MarcusSorealheis commented on issue #793: testing the PR template
MarcusSorealheis commented on issue #793: testing the PR template URL: https://github.com/apache/lucene-solr/pull/793#issuecomment-512355940 works This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12368) in-place DV updates should no longer have to jump through hoops if field does not yet exist
[ https://issues.apache.org/jira/browse/SOLR-12368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887215#comment-16887215 ] ASF subversion and git services commented on SOLR-12368: Commit 4c11633c03d302590a95a30af36b743a22fc5340 in lucene-solr's branch refs/heads/branch_8x from Munendra S N [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4c11633 ] SOLR-12368: inplace update for field that doesn't yet exist in any doc If the field is non-stored, non-indexed and docvalue enabled numeric field then inplace update can be done. previously, lucene didn't support docvalue update for field that is not yet present in indexWriter but LUCENE-8316 added support for this. This adds support to update field which satisfies inplace conditions but which doesn't yet exist in any docs > in-place DV updates should no longer have to jump through hoops if field does > not yet exist > --- > > Key: SOLR-12368 > URL: https://issues.apache.org/jira/browse/SOLR-12368 > Project: Solr > Issue Type: Improvement >Reporter: Hoss Man >Priority: Major > Attachments: SOLR-12368.patch, SOLR-12368.patch, SOLR-12368.patch, > SOLR-12368.patch > > > When SOLR-5944 first added "in-place" DocValue updates to Solr, one of the > edge cases thta had to be dealt with was the limitation imposed by > IndexWriter that docValues could only be updated if they already existed - if > a shard did not yet have a document w/a value in the field where the update > was attempted, we would get an error. > LUCENE-8316 seems to have removed this error, which i believe means we can > simplify & speed up some of the checks in Solr, and support this situation as > well, rather then falling back on full "read stored fields & reindex" atomic > update -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
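For illustration, an atomic update that should now qualify for the in-place path described in the commit message above, assuming a hypothetical single-valued numeric field that is non-stored, non-indexed, and docValues-only (the field, document id, and collection names here are made up, not from the patch):

{code}
# assumed schema entry: <field name="popularity" type="pint" indexed="false" stored="false" docValues="true"/>
curl -X POST -H 'Content-type:application/json' \
  'http://localhost:8983/solr/films/update?commit=true' \
  --data-binary '[{"id":"/en/45_2006", "popularity":{"set":42}}]'
{code}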
[jira] [Commented] (LUCENE-8316) Allow DV updates for not existing fields
[ https://issues.apache.org/jira/browse/LUCENE-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887216#comment-16887216 ] ASF subversion and git services commented on LUCENE-8316: - Commit 4c11633c03d302590a95a30af36b743a22fc5340 in lucene-solr's branch refs/heads/branch_8x from Munendra S N [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4c11633 ] SOLR-12368: inplace update for field that doesn't yet exist in any doc If the field is non-stored, non-indexed and docvalue enabled numeric field then inplace update can be done. previously, lucene didn't support docvalue update for field that is not yet present in indexWriter but LUCENE-8316 added support for this. This adds support to update field which satisfies inplace conditions but which doesn't yet exist in any docs > Allow DV updates for not existing fields > > > Key: LUCENE-8316 > URL: https://issues.apache.org/jira/browse/LUCENE-8316 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: 7.4, 8.0 >Reporter: Simon Willnauer >Priority: Major > Fix For: 7.4, 8.0 > > Attachments: LUCENE-8316.patch, LUCENE-8316.patch > > > Today we prevent DV updates for non-existing fields except > of the soft deletes case. Yet, this can cause inconsitent field numbers > etc. since we don't go through the global field number map etc. This > change removes the limitation of updating DVs in docs even if the field > doesn't exists. This also has the benefit that the error messages if > the field type doesn't match is consistent with what DWPT throws. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] erikhatcher merged pull request #781: updated the pull request template to make checkboxes work
erikhatcher merged pull request #781: updated the pull request template to make checkboxes work URL: https://github.com/apache/lucene-solr/pull/781 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-10288) Javascript housekeeping in UI
[ https://issues.apache.org/jira/browse/SOLR-10288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Hatcher reassigned SOLR-10288: --- Assignee: Erik Hatcher > Javascript housekeeping in UI > - > > Key: SOLR-10288 > URL: https://issues.apache.org/jira/browse/SOLR-10288 > Project: Solr > Issue Type: Bug > Components: Admin UI >Affects Versions: 6.4.2 >Reporter: Shawn Heisey >Assignee: Erik Hatcher >Priority: Minor > Time Spent: 1h 40m > Remaining Estimate: 0h > > I noticed a couple of things about the javascript files included in Solr for > the Admin UI: > * There is unnecessary duplication between the "js" and "libs" directories. > * Some of the files are not minified, and for some of those that are, the > non-minified originals are still included in the binary release. > Removing the duplicates entirely and the non-minified files from the binary > release would shave a little bit of size off of the binary download. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-10288) Javascript housekeeping in UI
[ https://issues.apache.org/jira/browse/SOLR-10288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Hatcher resolved SOLR-10288. - Resolution: Fixed dusting off commit privs... may have done the push with more intermediate commits than I should have, so I'm learning. --squash? > Javascript housekeeping in UI > - > > Key: SOLR-10288 > URL: https://issues.apache.org/jira/browse/SOLR-10288 > Project: Solr > Issue Type: Bug > Components: Admin UI >Affects Versions: 6.4.2 >Reporter: Shawn Heisey >Assignee: Erik Hatcher >Priority: Minor > Time Spent: 1h 40m > Remaining Estimate: 0h > > I noticed a couple of things about the javascript files included in Solr for > the Admin UI: > * There is unnecessary duplication between the "js" and "libs" directories. > * Some of the files are not minified, and for some of those that are, the > non-minified originals are still included in the binary release. > Removing the duplicates entirely and the non-minified files from the binary > release would shave a little bit of size off of the binary download. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12368) in-place DV updates should no longer have to jump through hoops if field does not yet exist
[ https://issues.apache.org/jira/browse/SOLR-12368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887207#comment-16887207 ] ASF subversion and git services commented on SOLR-12368: Commit 1ecd02deb504f27d602fdac83862a50e896c2dc6 in lucene-solr's branch refs/heads/master from Munendra S N [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1ecd02d ] SOLR-12368: inplace update for field that doesn't yet exist in any doc If the field is non-stored, non-indexed and docvalue enabled numeric field then inplace update can be done. previously, lucene didn't support docvalue update for field that is not yet present in indexWriter but LUCENE-8316 added support for this. This adds support to update field which satisfies inplace conditions but which doesn't yet exist in any docs > in-place DV updates should no longer have to jump through hoops if field does > not yet exist > --- > > Key: SOLR-12368 > URL: https://issues.apache.org/jira/browse/SOLR-12368 > Project: Solr > Issue Type: Improvement >Reporter: Hoss Man >Priority: Major > Attachments: SOLR-12368.patch, SOLR-12368.patch, SOLR-12368.patch, > SOLR-12368.patch > > > When SOLR-5944 first added "in-place" DocValue updates to Solr, one of the > edge cases thta had to be dealt with was the limitation imposed by > IndexWriter that docValues could only be updated if they already existed - if > a shard did not yet have a document w/a value in the field where the update > was attempted, we would get an error. > LUCENE-8316 seems to have removed this error, which i believe means we can > simplify & speed up some of the checks in Solr, and support this situation as > well, rather then falling back on full "read stored fields & reindex" atomic > update -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] erikhatcher closed pull request #778: SOLR-10288 remove non minified js
erikhatcher closed pull request #778: SOLR-10288 remove non minified js URL: https://github.com/apache/lucene-solr/pull/778 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8316) Allow DV updates for not existing fields
[ https://issues.apache.org/jira/browse/LUCENE-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887208#comment-16887208 ] ASF subversion and git services commented on LUCENE-8316: - Commit 1ecd02deb504f27d602fdac83862a50e896c2dc6 in lucene-solr's branch refs/heads/master from Munendra S N [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1ecd02d ] SOLR-12368: inplace update for field that doesn't yet exist in any doc If the field is non-stored, non-indexed and docvalue enabled numeric field then inplace update can be done. previously, lucene didn't support docvalue update for field that is not yet present in indexWriter but LUCENE-8316 added support for this. This adds support to update field which satisfies inplace conditions but which doesn't yet exist in any docs > Allow DV updates for not existing fields > > > Key: LUCENE-8316 > URL: https://issues.apache.org/jira/browse/LUCENE-8316 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: 7.4, 8.0 >Reporter: Simon Willnauer >Priority: Major > Fix For: 7.4, 8.0 > > Attachments: LUCENE-8316.patch, LUCENE-8316.patch > > > Today we prevent DV updates for non-existing fields except > of the soft deletes case. Yet, this can cause inconsitent field numbers > etc. since we don't go through the global field number map etc. This > change removes the limitation of updating DVs in docs even if the field > doesn't exists. This also has the benefit that the error messages if > the field type doesn't match is consistent with what DWPT throws. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-10377) Improve readability of the explain output for JSON format
[ https://issues.apache.org/jira/browse/SOLR-10377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887184#comment-16887184 ] Munendra S N edited comment on SOLR-10377 at 7/17/19 3:43 PM: -- [^SOLR-10377.patch] This adds {{debug.explain.structured}} parameter in Admin UI. was (Author: munendrasn): [^SOLR-10377.patch] This adds {{debug.explain.structured}} parameter in Admin UI. !Screenshot 2019-07-17 at 6.09.17 PM.png! !Screenshot 2019-07-17 at 6.09.27 PM.png! > Improve readability of the explain output for JSON format > - > > Key: SOLR-10377 > URL: https://issues.apache.org/jira/browse/SOLR-10377 > Project: Solr > Issue Type: Improvement >Reporter: Varun Thacker >Priority: Minor > Attachments: SOLR-10377.patch, Screenshot 2019-07-17 at 6.09.17 > PM.png, Screenshot 2019-07-17 at 6.09.27 PM.png > > > Today when I ask solr for the debug query output In json with indent I get > this: > {code} > 1: " 3.545981 = sum of: 3.545981 = weight(name:dns in 0) [SchemaSimilarity], > result of: 3.545981 = score(doc=0,freq=1.0 = termFreq=1.0 ), product of: > 2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + > 0.5)) from: 2.0 = docFreq 24.0 = docCount 1.54 = tfNorm, computed as (freq * > (k1 + 1)) / (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 1.2 = parameter k1 0.75 = parameter b 7.0 = avgFieldLength > 1.0 = fieldLength ", > 2: " 7.4202514 = sum of: 7.4202514 = sum of: 2.7921112 = weight(name:domain > in 1) [SchemaSimilarity], result of: 2.7921112 = score(doc=1,freq=1.0 = > termFreq=1.0 ), product of: 2.3025851 = idf, computed as log(1 + (docCount - > docFreq + 0.5) / (docFreq + 0.5)) from: 2.0 = docFreq 24.0 = docCount > 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b * > fieldLength / avgFieldLength)) from: 1.0 = termFreq=1.0 1.2 = parameter k1 > 0.75 = parameter b 7.0 = avgFieldLength 4.0 = fieldLength 2.7921112 = > weight(name:name in 1) [SchemaSimilarity], result of: 2.7921112 = > score(doc=1,freq=1.0 = termFreq=1.0 ), product of: 2.3025851 = idf, computed > as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from: 2.0 = docFreq > 24.0 = docCount 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + > k1 * (1 - b + b * fieldLength / avgFieldLength)) from: 1.0 = termFreq=1.0 1.2 > = parameter k1 0.75 = parameter b 7.0 = avgFieldLength 4.0 = fieldLength > 1.8360289 = weight(name:system in 1) [SchemaSimilarity], result of: 1.8360289 > = score(doc=1,freq=1.0 = termFreq=1.0 ), product of: 1.5141277 = idf, > computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from: 5.0 = > docFreq 24.0 = docCount 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / > (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from: 1.0 = > termFreq=1.0 1.2 = parameter k1 0.75 = parameter b 7.0 = avgFieldLength 4.0 = > fieldLength " > {code} > When I run the same query with "wt=ruby" I get a much nicer output > {code} > '2'=>' > 7.4202514 = sum of: > 7.4202514 = sum of: > 2.7921112 = weight(name:domain in 1) [SchemaSimilarity], result of: > 2.7921112 = score(doc=1,freq=1.0 = termFreq=1.0 > ), product of: > 2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / > (docFreq + 0.5)) from: > 2.0 = docFreq > 24.0 = docCount > 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - > b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 > 1.2 = parameter k1 > 0.75 = parameter b > 7.0 = avgFieldLength > 4.0 = fieldLength > 2.7921112 = 
weight(name:name in 1) [SchemaSimilarity], result of: > 2.7921112 = score(doc=1,freq=1.0 = termFreq=1.0 > ), product of: > 2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / > (docFreq + 0.5)) from: > 2.0 = docFreq > 24.0 = docCount > 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - > b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 > 1.2 = parameter k1 > 0.75 = parameter b > 7.0 = avgFieldLength > 4.0 = fieldLength > 1.8360289 = weight(name:system in 1) [SchemaSimilarity], result of: > 1.8360289 = score(doc=1,freq=1.0 = termFreq=1.0 > ), product of: > 1.5141277 = idf, computed as log(1 + (docCount - docFreq + 0.5) / > (docFreq + 0.5)) from: > 5.0 = docFreq > 24.0 = docCount > 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - > b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 > 1.2 = parameter k1 > 0.75 = parameter b >
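For reference, the parameter being wired into the Admin UI above can already be passed directly on a request; a hypothetical example (collection and query are stand-ins) that returns the explain section as structured JSON rather than the flat string shown in the description:

{code}
curl 'http://localhost:8983/solr/films/select?q=name:batman&wt=json&indent=true&debugQuery=true&debug.explain.structured=true'
{code}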
[jira] [Updated] (SOLR-10377) Improve readability of the explain output for JSON format
[ https://issues.apache.org/jira/browse/SOLR-10377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated SOLR-10377: Status: Patch Available (was: Reopened) > Improve readability of the explain output for JSON format > - > > Key: SOLR-10377 > URL: https://issues.apache.org/jira/browse/SOLR-10377 > Project: Solr > Issue Type: Improvement >Reporter: Varun Thacker >Priority: Minor > Attachments: SOLR-10377.patch, Screenshot 2019-07-17 at 6.09.17 > PM.png, Screenshot 2019-07-17 at 6.09.27 PM.png > > > Today when I ask solr for the debug query output In json with indent I get > this: > {code} > 1: " 3.545981 = sum of: 3.545981 = weight(name:dns in 0) [SchemaSimilarity], > result of: 3.545981 = score(doc=0,freq=1.0 = termFreq=1.0 ), product of: > 2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + > 0.5)) from: 2.0 = docFreq 24.0 = docCount 1.54 = tfNorm, computed as (freq * > (k1 + 1)) / (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 1.2 = parameter k1 0.75 = parameter b 7.0 = avgFieldLength > 1.0 = fieldLength ", > 2: " 7.4202514 = sum of: 7.4202514 = sum of: 2.7921112 = weight(name:domain > in 1) [SchemaSimilarity], result of: 2.7921112 = score(doc=1,freq=1.0 = > termFreq=1.0 ), product of: 2.3025851 = idf, computed as log(1 + (docCount - > docFreq + 0.5) / (docFreq + 0.5)) from: 2.0 = docFreq 24.0 = docCount > 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b * > fieldLength / avgFieldLength)) from: 1.0 = termFreq=1.0 1.2 = parameter k1 > 0.75 = parameter b 7.0 = avgFieldLength 4.0 = fieldLength 2.7921112 = > weight(name:name in 1) [SchemaSimilarity], result of: 2.7921112 = > score(doc=1,freq=1.0 = termFreq=1.0 ), product of: 2.3025851 = idf, computed > as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from: 2.0 = docFreq > 24.0 = docCount 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + > k1 * (1 - b + b * fieldLength / avgFieldLength)) from: 1.0 = termFreq=1.0 1.2 > = parameter k1 0.75 = parameter b 7.0 = avgFieldLength 4.0 = fieldLength > 1.8360289 = weight(name:system in 1) [SchemaSimilarity], result of: 1.8360289 > = score(doc=1,freq=1.0 = termFreq=1.0 ), product of: 1.5141277 = idf, > computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from: 5.0 = > docFreq 24.0 = docCount 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / > (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from: 1.0 = > termFreq=1.0 1.2 = parameter k1 0.75 = parameter b 7.0 = avgFieldLength 4.0 = > fieldLength " > {code} > When I run the same query with "wt=ruby" I get a much nicer output > {code} > '2'=>' > 7.4202514 = sum of: > 7.4202514 = sum of: > 2.7921112 = weight(name:domain in 1) [SchemaSimilarity], result of: > 2.7921112 = score(doc=1,freq=1.0 = termFreq=1.0 > ), product of: > 2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / > (docFreq + 0.5)) from: > 2.0 = docFreq > 24.0 = docCount > 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - > b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 > 1.2 = parameter k1 > 0.75 = parameter b > 7.0 = avgFieldLength > 4.0 = fieldLength > 2.7921112 = weight(name:name in 1) [SchemaSimilarity], result of: > 2.7921112 = score(doc=1,freq=1.0 = termFreq=1.0 > ), product of: > 2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / > (docFreq + 0.5)) from: > 2.0 = docFreq > 24.0 = docCount > 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - > 
b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 > 1.2 = parameter k1 > 0.75 = parameter b > 7.0 = avgFieldLength > 4.0 = fieldLength > 1.8360289 = weight(name:system in 1) [SchemaSimilarity], result of: > 1.8360289 = score(doc=1,freq=1.0 = termFreq=1.0 > ), product of: > 1.5141277 = idf, computed as log(1 + (docCount - docFreq + 0.5) / > (docFreq + 0.5)) from: > 5.0 = docFreq > 24.0 = docCount > 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - > b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 > 1.2 = parameter k1 > 0.75 = parameter b > 7.0 = avgFieldLength > 4.0 = fieldLength > ', > '1'=>' > 3.545981 = sum of: > 3.545981 = weight(name:dns in 0) [SchemaSimilarity], result of: > 3.545981 = score(doc=0,freq=1.0 = termFreq=1.0 > ), product of: > 2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / > (docFreq + 0.5)) from: > 2.
[jira] [Commented] (SOLR-10377) Improve readability of the explain output for JSON format
[ https://issues.apache.org/jira/browse/SOLR-10377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887184#comment-16887184 ] Munendra S N commented on SOLR-10377: - [^SOLR-10377.patch] This adds {{debug.explain.structured}} parameter in Admin UI. !Screenshot 2019-07-17 at 6.09.17 PM.png! !Screenshot 2019-07-17 at 6.09.27 PM.png! > Improve readability of the explain output for JSON format > - > > Key: SOLR-10377 > URL: https://issues.apache.org/jira/browse/SOLR-10377 > Project: Solr > Issue Type: Improvement >Reporter: Varun Thacker >Priority: Minor > Attachments: SOLR-10377.patch, Screenshot 2019-07-17 at 6.09.17 > PM.png, Screenshot 2019-07-17 at 6.09.27 PM.png > > > Today when I ask solr for the debug query output In json with indent I get > this: > {code} > 1: " 3.545981 = sum of: 3.545981 = weight(name:dns in 0) [SchemaSimilarity], > result of: 3.545981 = score(doc=0,freq=1.0 = termFreq=1.0 ), product of: > 2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + > 0.5)) from: 2.0 = docFreq 24.0 = docCount 1.54 = tfNorm, computed as (freq * > (k1 + 1)) / (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 1.2 = parameter k1 0.75 = parameter b 7.0 = avgFieldLength > 1.0 = fieldLength ", > 2: " 7.4202514 = sum of: 7.4202514 = sum of: 2.7921112 = weight(name:domain > in 1) [SchemaSimilarity], result of: 2.7921112 = score(doc=1,freq=1.0 = > termFreq=1.0 ), product of: 2.3025851 = idf, computed as log(1 + (docCount - > docFreq + 0.5) / (docFreq + 0.5)) from: 2.0 = docFreq 24.0 = docCount > 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b * > fieldLength / avgFieldLength)) from: 1.0 = termFreq=1.0 1.2 = parameter k1 > 0.75 = parameter b 7.0 = avgFieldLength 4.0 = fieldLength 2.7921112 = > weight(name:name in 1) [SchemaSimilarity], result of: 2.7921112 = > score(doc=1,freq=1.0 = termFreq=1.0 ), product of: 2.3025851 = idf, computed > as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from: 2.0 = docFreq > 24.0 = docCount 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + > k1 * (1 - b + b * fieldLength / avgFieldLength)) from: 1.0 = termFreq=1.0 1.2 > = parameter k1 0.75 = parameter b 7.0 = avgFieldLength 4.0 = fieldLength > 1.8360289 = weight(name:system in 1) [SchemaSimilarity], result of: 1.8360289 > = score(doc=1,freq=1.0 = termFreq=1.0 ), product of: 1.5141277 = idf, > computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from: 5.0 = > docFreq 24.0 = docCount 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / > (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from: 1.0 = > termFreq=1.0 1.2 = parameter k1 0.75 = parameter b 7.0 = avgFieldLength 4.0 = > fieldLength " > {code} > When I run the same query with "wt=ruby" I get a much nicer output > {code} > '2'=>' > 7.4202514 = sum of: > 7.4202514 = sum of: > 2.7921112 = weight(name:domain in 1) [SchemaSimilarity], result of: > 2.7921112 = score(doc=1,freq=1.0 = termFreq=1.0 > ), product of: > 2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / > (docFreq + 0.5)) from: > 2.0 = docFreq > 24.0 = docCount > 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - > b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 > 1.2 = parameter k1 > 0.75 = parameter b > 7.0 = avgFieldLength > 4.0 = fieldLength > 2.7921112 = weight(name:name in 1) [SchemaSimilarity], result of: > 2.7921112 = score(doc=1,freq=1.0 = termFreq=1.0 > ), product of: > 2.3025851 = idf, 
computed as log(1 + (docCount - docFreq + 0.5) / > (docFreq + 0.5)) from: > 2.0 = docFreq > 24.0 = docCount > 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - > b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 > 1.2 = parameter k1 > 0.75 = parameter b > 7.0 = avgFieldLength > 4.0 = fieldLength > 1.8360289 = weight(name:system in 1) [SchemaSimilarity], result of: > 1.8360289 = score(doc=1,freq=1.0 = termFreq=1.0 > ), product of: > 1.5141277 = idf, computed as log(1 + (docCount - docFreq + 0.5) / > (docFreq + 0.5)) from: > 5.0 = docFreq > 24.0 = docCount > 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - > b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 > 1.2 = parameter k1 > 0.75 = parameter b > 7.0 = avgFieldLength > 4.0 = fieldLength > ', > '1'=>' > 3.545981 = sum of: > 3.545981 = weight(name:dns in 0) [SchemaSimilarity], result o
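For illustration, the structured explain output discussed in the comment above can also be requested per query with the {{debug.explain.structured}} parameter. A minimal SolrJ sketch follows; the Solr URL, client setup, and the {{films}} collection name are assumptions for the example, not part of the attached patch:
{code}
// Minimal SolrJ sketch; the Solr URL and collection name are illustrative assumptions.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class StructuredExplainDemo {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrQuery q = new SolrQuery("name:dns");
      q.set("debugQuery", "true");
      // Ask for the explain info as a nested structure instead of one flat string.
      q.set("debug.explain.structured", "true");
      QueryResponse rsp = client.query("films", q);
      System.out.println(rsp.getDebugMap().get("explain"));
    }
  }
}
{code}
With the parameter set, the response carries the same scoring breakdown shown in the quoted output above, but as nested objects rather than a single flattened string.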
[jira] [Updated] (SOLR-10377) Improve readability of the explain output for JSON format
[ https://issues.apache.org/jira/browse/SOLR-10377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated SOLR-10377: Attachment: Screenshot 2019-07-17 at 6.09.27 PM.png Screenshot 2019-07-17 at 6.09.17 PM.png > Improve readability of the explain output for JSON format > - > > Key: SOLR-10377 > URL: https://issues.apache.org/jira/browse/SOLR-10377 > Project: Solr > Issue Type: Improvement >Reporter: Varun Thacker >Priority: Minor > Attachments: SOLR-10377.patch, Screenshot 2019-07-17 at 6.09.17 > PM.png, Screenshot 2019-07-17 at 6.09.27 PM.png > > > Today when I ask solr for the debug query output In json with indent I get > this: > {code} > 1: " 3.545981 = sum of: 3.545981 = weight(name:dns in 0) [SchemaSimilarity], > result of: 3.545981 = score(doc=0,freq=1.0 = termFreq=1.0 ), product of: > 2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + > 0.5)) from: 2.0 = docFreq 24.0 = docCount 1.54 = tfNorm, computed as (freq * > (k1 + 1)) / (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 1.2 = parameter k1 0.75 = parameter b 7.0 = avgFieldLength > 1.0 = fieldLength ", > 2: " 7.4202514 = sum of: 7.4202514 = sum of: 2.7921112 = weight(name:domain > in 1) [SchemaSimilarity], result of: 2.7921112 = score(doc=1,freq=1.0 = > termFreq=1.0 ), product of: 2.3025851 = idf, computed as log(1 + (docCount - > docFreq + 0.5) / (docFreq + 0.5)) from: 2.0 = docFreq 24.0 = docCount > 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b * > fieldLength / avgFieldLength)) from: 1.0 = termFreq=1.0 1.2 = parameter k1 > 0.75 = parameter b 7.0 = avgFieldLength 4.0 = fieldLength 2.7921112 = > weight(name:name in 1) [SchemaSimilarity], result of: 2.7921112 = > score(doc=1,freq=1.0 = termFreq=1.0 ), product of: 2.3025851 = idf, computed > as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from: 2.0 = docFreq > 24.0 = docCount 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + > k1 * (1 - b + b * fieldLength / avgFieldLength)) from: 1.0 = termFreq=1.0 1.2 > = parameter k1 0.75 = parameter b 7.0 = avgFieldLength 4.0 = fieldLength > 1.8360289 = weight(name:system in 1) [SchemaSimilarity], result of: 1.8360289 > = score(doc=1,freq=1.0 = termFreq=1.0 ), product of: 1.5141277 = idf, > computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from: 5.0 = > docFreq 24.0 = docCount 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / > (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from: 1.0 = > termFreq=1.0 1.2 = parameter k1 0.75 = parameter b 7.0 = avgFieldLength 4.0 = > fieldLength " > {code} > When I run the same query with "wt=ruby" I get a much nicer output > {code} > '2'=>' > 7.4202514 = sum of: > 7.4202514 = sum of: > 2.7921112 = weight(name:domain in 1) [SchemaSimilarity], result of: > 2.7921112 = score(doc=1,freq=1.0 = termFreq=1.0 > ), product of: > 2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / > (docFreq + 0.5)) from: > 2.0 = docFreq > 24.0 = docCount > 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - > b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 > 1.2 = parameter k1 > 0.75 = parameter b > 7.0 = avgFieldLength > 4.0 = fieldLength > 2.7921112 = weight(name:name in 1) [SchemaSimilarity], result of: > 2.7921112 = score(doc=1,freq=1.0 = termFreq=1.0 > ), product of: > 2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / > (docFreq + 0.5)) from: > 2.0 = docFreq > 24.0 = docCount > 1.2125984 = tfNorm, 
computed as (freq * (k1 + 1)) / (freq + k1 * (1 - > b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 > 1.2 = parameter k1 > 0.75 = parameter b > 7.0 = avgFieldLength > 4.0 = fieldLength > 1.8360289 = weight(name:system in 1) [SchemaSimilarity], result of: > 1.8360289 = score(doc=1,freq=1.0 = termFreq=1.0 > ), product of: > 1.5141277 = idf, computed as log(1 + (docCount - docFreq + 0.5) / > (docFreq + 0.5)) from: > 5.0 = docFreq > 24.0 = docCount > 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - > b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 > 1.2 = parameter k1 > 0.75 = parameter b > 7.0 = avgFieldLength > 4.0 = fieldLength > ', > '1'=>' > 3.545981 = sum of: > 3.545981 = weight(name:dns in 0) [SchemaSimilarity], result of: > 3.545981 = score(doc=0,freq=1.0 = termFreq=1.0 > ), product of: > 2.3025851 = idf, computed as log(1 +
[jira] [Updated] (SOLR-10377) Improve readability of the explain output for JSON format
[ https://issues.apache.org/jira/browse/SOLR-10377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated SOLR-10377: Attachment: SOLR-10377.patch > Improve readability of the explain output for JSON format > - > > Key: SOLR-10377 > URL: https://issues.apache.org/jira/browse/SOLR-10377 > Project: Solr > Issue Type: Improvement >Reporter: Varun Thacker >Priority: Minor > Attachments: SOLR-10377.patch > > > Today when I ask solr for the debug query output In json with indent I get > this: > {code} > 1: " 3.545981 = sum of: 3.545981 = weight(name:dns in 0) [SchemaSimilarity], > result of: 3.545981 = score(doc=0,freq=1.0 = termFreq=1.0 ), product of: > 2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + > 0.5)) from: 2.0 = docFreq 24.0 = docCount 1.54 = tfNorm, computed as (freq * > (k1 + 1)) / (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 1.2 = parameter k1 0.75 = parameter b 7.0 = avgFieldLength > 1.0 = fieldLength ", > 2: " 7.4202514 = sum of: 7.4202514 = sum of: 2.7921112 = weight(name:domain > in 1) [SchemaSimilarity], result of: 2.7921112 = score(doc=1,freq=1.0 = > termFreq=1.0 ), product of: 2.3025851 = idf, computed as log(1 + (docCount - > docFreq + 0.5) / (docFreq + 0.5)) from: 2.0 = docFreq 24.0 = docCount > 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - b + b * > fieldLength / avgFieldLength)) from: 1.0 = termFreq=1.0 1.2 = parameter k1 > 0.75 = parameter b 7.0 = avgFieldLength 4.0 = fieldLength 2.7921112 = > weight(name:name in 1) [SchemaSimilarity], result of: 2.7921112 = > score(doc=1,freq=1.0 = termFreq=1.0 ), product of: 2.3025851 = idf, computed > as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from: 2.0 = docFreq > 24.0 = docCount 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + > k1 * (1 - b + b * fieldLength / avgFieldLength)) from: 1.0 = termFreq=1.0 1.2 > = parameter k1 0.75 = parameter b 7.0 = avgFieldLength 4.0 = fieldLength > 1.8360289 = weight(name:system in 1) [SchemaSimilarity], result of: 1.8360289 > = score(doc=1,freq=1.0 = termFreq=1.0 ), product of: 1.5141277 = idf, > computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from: 5.0 = > docFreq 24.0 = docCount 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / > (freq + k1 * (1 - b + b * fieldLength / avgFieldLength)) from: 1.0 = > termFreq=1.0 1.2 = parameter k1 0.75 = parameter b 7.0 = avgFieldLength 4.0 = > fieldLength " > {code} > When I run the same query with "wt=ruby" I get a much nicer output > {code} > '2'=>' > 7.4202514 = sum of: > 7.4202514 = sum of: > 2.7921112 = weight(name:domain in 1) [SchemaSimilarity], result of: > 2.7921112 = score(doc=1,freq=1.0 = termFreq=1.0 > ), product of: > 2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / > (docFreq + 0.5)) from: > 2.0 = docFreq > 24.0 = docCount > 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - > b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 > 1.2 = parameter k1 > 0.75 = parameter b > 7.0 = avgFieldLength > 4.0 = fieldLength > 2.7921112 = weight(name:name in 1) [SchemaSimilarity], result of: > 2.7921112 = score(doc=1,freq=1.0 = termFreq=1.0 > ), product of: > 2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / > (docFreq + 0.5)) from: > 2.0 = docFreq > 24.0 = docCount > 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - > b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 > 1.2 = parameter k1 > 0.75 
= parameter b > 7.0 = avgFieldLength > 4.0 = fieldLength > 1.8360289 = weight(name:system in 1) [SchemaSimilarity], result of: > 1.8360289 = score(doc=1,freq=1.0 = termFreq=1.0 > ), product of: > 1.5141277 = idf, computed as log(1 + (docCount - docFreq + 0.5) / > (docFreq + 0.5)) from: > 5.0 = docFreq > 24.0 = docCount > 1.2125984 = tfNorm, computed as (freq * (k1 + 1)) / (freq + k1 * (1 - > b + b * fieldLength / avgFieldLength)) from: > 1.0 = termFreq=1.0 > 1.2 = parameter k1 > 0.75 = parameter b > 7.0 = avgFieldLength > 4.0 = fieldLength > ', > '1'=>' > 3.545981 = sum of: > 3.545981 = weight(name:dns in 0) [SchemaSimilarity], result of: > 3.545981 = score(doc=0,freq=1.0 = termFreq=1.0 > ), product of: > 2.3025851 = idf, computed as log(1 + (docCount - docFreq + 0.5) / > (docFreq + 0.5)) from: > 2.0 = docFreq > 24.0 = docCount > 1.54 = tfNorm, computed as (freq * (k1 + 1)) / (fre
[jira] [Updated] (SOLR-11556) Backup/Restore with multiple BackupRepository objects defined results in the wrong repo being used.
[ https://issues.apache.org/jira/browse/SOLR-11556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-11556: Status: Patch Available (was: Open) > Backup/Restore with multiple BackupRepository objects defined results in the > wrong repo being used. > --- > > Key: SOLR-11556 > URL: https://issues.apache.org/jira/browse/SOLR-11556 > Project: Solr > Issue Type: Bug > Components: Backup/Restore >Affects Versions: 6.3 >Reporter: Timothy Potter >Assignee: Mikhail Khludnev >Priority: Major > Attachments: SOLR-11556.patch, SOLR-11556.patch > > > I defined two repos for backup/restore, one local and one remote on GCS, e.g. > {code} > > class="org.apache.solr.core.backup.repository.HdfsBackupRepository" > default="false"> > ... > > class="org.apache.solr.core.backup.repository.LocalFileSystemRepository" > default="false"> > /tmp/solr-backups > > > {code} > Since the CollectionHandler does not pass the "repository" param along, once > the BackupCmd gets the ZkNodeProps, it selects the wrong repo! > The error I'm seeing is: > {code} > 2017-10-26 17:07:27.326 ERROR > (OverseerThreadFactory-19-thread-1-processing-n:host:8983_solr) [ ] > o.a.s.c.OverseerCollectionMessageHandler Collection: product operation: > backup failed:java.nio.file.FileSystemNotFoundException: Provider "gs" not > installed > at java.nio.file.Paths.get(Paths.java:147) > at > org.apache.solr.core.backup.repository.LocalFileSystemRepository.resolve(LocalFileSystemRepository.java:82) > at org.apache.solr.cloud.BackupCmd.call(BackupCmd.java:99) > at > org.apache.solr.cloud.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:224) > at > org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:463) > at > org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:748) > {code} > Notice the Local backup repo is being selected in the BackupCmd even though I > passed repository=hdfs in my backup command, e.g. > {code} > curl > "http://localhost:8983/solr/admin/collections?action=BACKUP&name=foo&collection=foo&location=gs://tjp-solr-test/backups&repository=hdfs"; > {code} > I think the fix here is to include the repository param, see patch. I'll fix > for the next 7.x release and those on 6 can just apply the patch here. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
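Purely as an illustration of the direction described above (this is not the attached patch; the method and variable names are assumptions): the backup message built by the collections handler needs to carry the client-supplied {{repository}} parameter so that {{BackupCmd}} resolves the intended {{BackupRepository}} instead of the default one.
{code}
// Illustrative sketch only -- not the attached SOLR-11556.patch; names are assumptions.
import java.util.Map;
import org.apache.solr.request.SolrQueryRequest;

class BackupParamSketch {
  /** Copies the "repository" request parameter into the properties that become the overseer message. */
  static void copyRepositoryParam(SolrQueryRequest req, Map<String, Object> props) {
    String repo = req.getParams().get("repository"); // e.g. "hdfs" in the curl example above
    if (repo != null) {
      props.put("repository", repo); // without this, BackupCmd falls back to the default repository
    }
  }
}
{code}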
[jira] [Updated] (SOLR-11556) Backup/Restore with multiple BackupRepository objects defined results in the wrong repo being used.
[ https://issues.apache.org/jira/browse/SOLR-11556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-11556: Attachment: SOLR-11556.patch Status: Open (was: Open) > Backup/Restore with multiple BackupRepository objects defined results in the > wrong repo being used. > --- > > Key: SOLR-11556 > URL: https://issues.apache.org/jira/browse/SOLR-11556 > Project: Solr > Issue Type: Bug > Components: Backup/Restore >Affects Versions: 6.3 >Reporter: Timothy Potter >Assignee: Mikhail Khludnev >Priority: Major > Attachments: SOLR-11556.patch, SOLR-11556.patch > > > I defined two repos for backup/restore, one local and one remote on GCS, e.g. > {code} > > class="org.apache.solr.core.backup.repository.HdfsBackupRepository" > default="false"> > ... > > class="org.apache.solr.core.backup.repository.LocalFileSystemRepository" > default="false"> > /tmp/solr-backups > > > {code} > Since the CollectionHandler does not pass the "repository" param along, once > the BackupCmd gets the ZkNodeProps, it selects the wrong repo! > The error I'm seeing is: > {code} > 2017-10-26 17:07:27.326 ERROR > (OverseerThreadFactory-19-thread-1-processing-n:host:8983_solr) [ ] > o.a.s.c.OverseerCollectionMessageHandler Collection: product operation: > backup failed:java.nio.file.FileSystemNotFoundException: Provider "gs" not > installed > at java.nio.file.Paths.get(Paths.java:147) > at > org.apache.solr.core.backup.repository.LocalFileSystemRepository.resolve(LocalFileSystemRepository.java:82) > at org.apache.solr.cloud.BackupCmd.call(BackupCmd.java:99) > at > org.apache.solr.cloud.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:224) > at > org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:463) > at > org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:748) > {code} > Notice the Local backup repo is being selected in the BackupCmd even though I > passed repository=hdfs in my backup command, e.g. > {code} > curl > "http://localhost:8983/solr/admin/collections?action=BACKUP&name=foo&collection=foo&location=gs://tjp-solr-test/backups&repository=hdfs"; > {code} > I think the fix here is to include the repository param, see patch. I'll fix > for the next 7.x release and those on 6 can just apply the patch here. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8884) Add Directory wrapper to track per-query IO counters
[ https://issues.apache.org/jira/browse/LUCENE-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887154#comment-16887154 ] Michael McCandless commented on LUCENE-8884: Argh!! Not sure how I messed that up ... I’ll fix once I have access to laptop again. Thanks for checking [~jpountz]! > Add Directory wrapper to track per-query IO counters > > > Key: LUCENE-8884 > URL: https://issues.apache.org/jira/browse/LUCENE-8884 > Project: Lucene - Core > Issue Type: Improvement > Components: core/store >Reporter: Michael McCandless >Assignee: Michael McCandless >Priority: Minor > > Lucene's IO abstractions ({{Directory, IndexInput/Output}}) make it really > easy to track counters of how many IOPs and net bytes are read for each > query, which is a useful metric to track/aggregate/alarm on in production or > dev benchmarks. > At my day job we use these wrappers in our nightly benchmarks to catch any > accidental performance regressions. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
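As a rough sketch of the kind of wrapper described above (a simplification, not the actual work on this issue: it only counts how many files are opened per {{Directory}}, whereas the issue proposes per-query IOPs and byte counters):
{code}
// Simplified, illustrative wrapper -- not the patch from LUCENE-8884.
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FilterDirectory;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexInput;

public final class CountingDirectory extends FilterDirectory {
  private final AtomicLong openCount = new AtomicLong();

  public CountingDirectory(Directory in) {
    super(in);
  }

  @Override
  public IndexInput openInput(String name, IOContext context) throws IOException {
    // Counts file opens; a fuller implementation would also wrap the returned IndexInput
    // to accumulate per-read IOPs and bytes, keyed by the running query.
    openCount.incrementAndGet();
    return super.openInput(name, context);
  }

  public long getOpenCount() {
    return openCount.get();
  }
}
{code}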
[jira] [Commented] (LUCENE-8923) Release procedure does not add new version in CHANGES.txt in master
[ https://issues.apache.org/jira/browse/LUCENE-8923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887109#comment-16887109 ] Tomoko Uchida commented on LUCENE-8923: --- bq. I am moving your issues to Lucene 8.3 in master, let me know if it is correct. It's correct, thank you! > Release procedure does not add new version in CHANGES.txt in master > --- > > Key: LUCENE-8923 > URL: https://issues.apache.org/jira/browse/LUCENE-8923 > Project: Lucene - Core > Issue Type: Bug >Reporter: Ignacio Vera >Priority: Minor > Attachments: LUCENE-8923.patch > > > This issue is just to track something that may be missing in the release > procedure. It currently adds a new version on CHANGES.txt in the minor > version branch but it does not do it in master. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] cbuescher opened a new pull request #792: Update Wordnet file format description link
cbuescher opened a new pull request #792: Update Wordnet file format description link URL: https://github.com/apache/lucene-solr/pull/792 The link to the description of the Wordnet prolog database files seems outdated. This change replaces it with a working link. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8920) Reduce size of FSTs due to use of direct-addressing encoding
[ https://issues.apache.org/jira/browse/LUCENE-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887089#comment-16887089 ] Mike Sokolov commented on LUCENE-8920: -- Note: I pushed the old-format Kuromoji dictionary and it seems to have fixed the build > Reduce size of FSTs due to use of direct-addressing encoding > - > > Key: LUCENE-8920 > URL: https://issues.apache.org/jira/browse/LUCENE-8920 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Mike Sokolov >Priority: Major > Time Spent: 20m > Remaining Estimate: 0h > > Some data can lead to worst-case ~4x RAM usage due to this optimization. > Several ideas were suggested to combat this on the mailing list: > bq. I think we can improve the situation here by tracking, per-FST instance, > the size increase we're seeing while building (or perhaps do a preliminary > pass before building) in order to decide whether to apply the encoding. > bq. we could also make the encoding a bit more efficient. For instance I > noticed that arc metadata is pretty large in some cases (in the 10-20 bytes) > which makes gaps very costly. Associating each label with a dense id and > having an intermediate lookup, ie. lookup label -> id and then id->arc offset > instead of doing label->arc directly could save a lot of space in some cases? > Also it seems that we are repeating the label in the arc metadata when > array-with-gaps is used, even though it shouldn't be necessary since the > label is implicit from the address? -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-8914) Small improvement in FloatPointNearestNeighbor
[ https://issues.apache.org/jira/browse/LUCENE-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ignacio Vera resolved LUCENE-8914. -- Resolution: Fixed Assignee: Ignacio Vera Fix Version/s: 8.3 master (9.0) > Small improvement in FloatPointNearestNeighbor > -- > > Key: LUCENE-8914 > URL: https://issues.apache.org/jira/browse/LUCENE-8914 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Ignacio Vera >Assignee: Ignacio Vera >Priority: Minor > Fix For: master (9.0), 8.3 > > Time Spent: 20m > Remaining Estimate: 0h > > Currently the logic to visit inner nodes of the BKD tree in > FloatPointNearestNeighbor is in the custom tree traversing logic instead of > in the IntersectVisitor. This approach is missing the improvement added on > LUCENE-7862 which my experiments shows that for a high number of dimensions > can give a performance improvements of around 10%. > This change proposes to move the logic for discarding inner modes to the > IntersectVisitor. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
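For readers unfamiliar with the API involved: the change moves cell pruning into {{PointValues.IntersectVisitor#compare}}, which the BKD traversal consults before descending into a node. The sketch below is an illustrative single-dimension float range visitor (not the committed nearest-neighbor code) showing how {{compare}} lets whole cells be discarded or accepted without visiting their values:
{code}
// Illustrative visitor, not the committed FloatPointNearestNeighbor change.
import java.io.IOException;
import org.apache.lucene.document.FloatPoint;
import org.apache.lucene.index.PointValues.IntersectVisitor;
import org.apache.lucene.index.PointValues.Relation;

final class FloatRangeVisitor implements IntersectVisitor {
  private final float min, max;

  FloatRangeVisitor(float min, float max) {
    this.min = min;
    this.max = max;
  }

  @Override
  public void visit(int docID) throws IOException {
    collect(docID); // cell fully inside the range: every doc matches, values are not re-checked
  }

  @Override
  public void visit(int docID, byte[] packedValue) throws IOException {
    float v = FloatPoint.decodeDimension(packedValue, 0);
    if (v >= min && v <= max) {
      collect(docID);
    }
  }

  @Override
  public Relation compare(byte[] minPackedValue, byte[] maxPackedValue) {
    float cellMin = FloatPoint.decodeDimension(minPackedValue, 0);
    float cellMax = FloatPoint.decodeDimension(maxPackedValue, 0);
    if (cellMax < min || cellMin > max) {
      return Relation.CELL_OUTSIDE_QUERY; // whole cell discarded without descending further
    }
    if (cellMin >= min && cellMax <= max) {
      return Relation.CELL_INSIDE_QUERY;
    }
    return Relation.CELL_CROSSES_QUERY;
  }

  private void collect(int docID) {
    // e.g. add to a DocIdSetBuilder in a real query implementation
  }
}
{code}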
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-12.0.1) - Build # 8058 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/8058/ Java: 64bit/jdk-12.0.1 -XX:-UseCompressedOops -XX:+UseG1GC 14 tests failed. FAILED: org.apache.solr.cloud.BasicDistributedZkTest.test Error Message: distrib-dup-test-chain-explicit: doc#58 has wrong value for regex_dup_B_s expected: but was: Stack Trace: org.junit.ComparisonFailure: distrib-dup-test-chain-explicit: doc#58 has wrong value for regex_dup_B_s expected: but was: at __randomizedtesting.SeedInfo.seed([43CA208F8151BCDD:CB9E1F552FADD125]:0) at org.junit.Assert.assertEquals(Assert.java:115) at org.apache.solr.cloud.BasicDistributedZkTest.testUpdateProcessorsRunOnlyOnce(BasicDistributedZkTest.java:867) at org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:438) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:567) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.
Re: [JENKINS] Lucene-Solr-NightlyTests-8.2 - Build # 5 - Unstable
Ah, never mind - I found the link in the email, doh On Wed, Jul 17, 2019 at 9:26 AM Michael Sokolov wrote: > > I believe I checked in a fix for this, and saw an email from another > recent 8.2 jenkins build job that seems to have had only a single > failure (something different from this Kuromoji one). I guess this > nightly job started before I committed my fix, but I'd like to check > the status of all the jenkins jobs. I'm not sure how to do that other > than watching this email list - can anyone point me to the Apache > Jenkins UI? > > On Wed, Jul 17, 2019 at 7:24 AM Apache Jenkins Server > wrote: > > > > Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.2/5/ > > > > 110 tests failed. > > FAILED: org.apache.lucene.analysis.ja.TestExtendedMode.testSurrogates2 > > > > Error Message: > > Could not initialize class > > org.apache.lucene.analysis.ja.dict.TokenInfoDictionary$SingletonHolder > > > > Stack Trace: > > java.lang.NoClassDefFoundError: Could not initialize class > > org.apache.lucene.analysis.ja.dict.TokenInfoDictionary$SingletonHolder > > at > > __randomizedtesting.SeedInfo.seed([CE49B77A88E38DF6:5B545AE26E6487EF]:0) > > at > > org.apache.lucene.analysis.ja.dict.TokenInfoDictionary.getInstance(TokenInfoDictionary.java:62) > > at > > org.apache.lucene.analysis.ja.JapaneseTokenizer.(JapaneseTokenizer.java:215) > > at > > org.apache.lucene.analysis.ja.TestExtendedMode$1.createComponents(TestExtendedMode.java:41) > > at > > org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:199) > > at > > org.apache.lucene.analysis.ja.TestExtendedMode.testSurrogates2(TestExtendedMode.java:64) > > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > > at > > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > > at > > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > > at java.lang.reflect.Method.invoke(Method.java:498) > > at > > com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) > > at > > com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) > > at > > com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) > > at > > com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) > > at > > org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) > > at > > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) > > at > > org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) > > at > > org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) > > at > > org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) > > at > > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > > at > > com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) > > at > > com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) > > at > > com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) > > at > > com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) > > at > > com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) > > at > > 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) > > at > > com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) > > at > > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) > > at > > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > > at > > org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) > > at > > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) > > at > > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) > > at > > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > > at > > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > > at > > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > > at > > org.apache.lucene.util.TestRuleAssertions
Re: [JENKINS] Lucene-Solr-NightlyTests-8.2 - Build # 5 - Unstable
I believe I checked in a fix for this, and saw an email from another recent 8.2 jenkins build job that seems to have had only a single failure (something different from this Kuromoji one). I guess this nightly job started before I committed my fix, but I'd like to check the status of all the jenkins jobs. I'm not sure how to do that other than watching this email list - can anyone point me to the Apache Jenkins UI? On Wed, Jul 17, 2019 at 7:24 AM Apache Jenkins Server wrote: > > Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.2/5/ > > 110 tests failed. > FAILED: org.apache.lucene.analysis.ja.TestExtendedMode.testSurrogates2 > > Error Message: > Could not initialize class > org.apache.lucene.analysis.ja.dict.TokenInfoDictionary$SingletonHolder > > Stack Trace: > java.lang.NoClassDefFoundError: Could not initialize class > org.apache.lucene.analysis.ja.dict.TokenInfoDictionary$SingletonHolder > at > __randomizedtesting.SeedInfo.seed([CE49B77A88E38DF6:5B545AE26E6487EF]:0) > at > org.apache.lucene.analysis.ja.dict.TokenInfoDictionary.getInstance(TokenInfoDictionary.java:62) > at > org.apache.lucene.analysis.ja.JapaneseTokenizer.(JapaneseTokenizer.java:215) > at > org.apache.lucene.analysis.ja.TestExtendedMode$1.createComponents(TestExtendedMode.java:41) > at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:199) > at > org.apache.lucene.analysis.ja.TestExtendedMode.testSurrogates2(TestExtendedMode.java:64) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) > at > org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) > at > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) > at > org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) > at > org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) > at > org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) > at > com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) > at > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) > at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) > at > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) > at > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) > at > org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) > at > org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) > at > org.apa
[JENKINS] Lucene-Solr-repro-Java11 - Build # 223 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro-Java11/223/ [...truncated 28 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1901/consoleText [repro] Revision: 2d357c960c13ee3c1370bb1caa8bc3fc18e079bd [repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt [repro] Repro line: ant test -Dtestcase=HdfsAutoAddReplicasIntegrationTest -Dtests.method=testSimple -Dtests.seed=5BBBFB8EAF07AEBA -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=sv-AX -Dtests.timezone=Poland -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: f026053d4d8269c7f7135d8a76ffa21235a05d4b [repro] git fetch [repro] git checkout 2d357c960c13ee3c1370bb1caa8bc3fc18e079bd [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] HdfsAutoAddReplicasIntegrationTest [repro] ant compile-test [...truncated 3315 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.HdfsAutoAddReplicasIntegrationTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.seed=5BBBFB8EAF07AEBA -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=sv-AX -Dtests.timezone=Poland -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [...truncated 5567 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 2/5 failed: org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest [repro] git checkout f026053d4d8269c7f7135d8a76ffa21235a05d4b [...truncated 2 lines...] [repro] Exiting with code 256 [...truncated 6 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8914) Small improvement in FloatPointNearestNeighbor
[ https://issues.apache.org/jira/browse/LUCENE-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887004#comment-16887004 ] ASF subversion and git services commented on LUCENE-8914: - Commit 568dedab6d390f6c39be197ae5f6dfe32cb3f29b in lucene-solr's branch refs/heads/branch_8x from Ignacio Vera [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=568deda ] LUCENE-8914: Move compare logic to IntersectVisitor in FloatPointNearestNeighbor (#783) Move the logic for discarding inner modes to the IntersectVisitor so we take advantage of the change introduced in LUCENE-7862 > Small improvement in FloatPointNearestNeighbor > -- > > Key: LUCENE-8914 > URL: https://issues.apache.org/jira/browse/LUCENE-8914 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Ignacio Vera >Priority: Minor > Time Spent: 20m > Remaining Estimate: 0h > > Currently the logic to visit inner nodes of the BKD tree in > FloatPointNearestNeighbor is in the custom tree traversing logic instead of > in the IntersectVisitor. This approach is missing the improvement added on > LUCENE-7862 which my experiments shows that for a high number of dimensions > can give a performance improvements of around 10%. > This change proposes to move the logic for discarding inner modes to the > IntersectVisitor. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] atris commented on issue #779: LUCENE-8762: Introduce Specialized Impacts For Doc + Freq
atris commented on issue #779: LUCENE-8762: Introduce Specialized Impacts For Doc + Freq URL: https://github.com/apache/lucene-solr/pull/779#issuecomment-512228939 > I ran luceneutil on this patch on my machine too and I'm getting similar results. It's a bit disappointing, I was expecting some gains though I'm rather happy that we can keep things easier to maintain by not having to specialize too much. Sorry for the time you spent on it but I think it's better to keep things the way they are today? No sweat, I understand. Makes sense to keep things straightforward! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7862) Should BKD cells store their min/max packed values?
[ https://issues.apache.org/jira/browse/LUCENE-7862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887001#comment-16887001 ] ASF subversion and git services commented on LUCENE-7862: - Commit f026053d4d8269c7f7135d8a76ffa21235a05d4b in lucene-solr's branch refs/heads/master from Ignacio Vera [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=f026053 ] LUCENE-8914: Move compare logic to IntersectVisitor in FloatPointNearestNeighbor (#783) Move the logic for discarding inner modes to the IntersectVisitor so we take advantage of the change introduced in LUCENE-7862 > Should BKD cells store their min/max packed values? > --- > > Key: LUCENE-7862 > URL: https://issues.apache.org/jira/browse/LUCENE-7862 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Assignee: Ignacio Vera >Priority: Minor > Fix For: 7.5, 8.0 > > Attachments: LUCENE-7862.patch, LUCENE-7862.patch, LUCENE-7862.patch > > > The index of the BKD tree already allows to know lower and upper bounds of > values in a given dimension. However the actual range of values might be more > narrow than what the index tells us, especially if splitting on one dimension > reduces the range of values in at least one other dimension. For instance > this tends to be the case with range fields: since we enforce that lower > bounds are less than upper bounds, splitting on one dimension will also > affect the range of values in the other dimension. > So I'm wondering whether we should store the actual range of values for each > dimension in leaf blocks, this will hopefully allow to figure out that either > none or all values match in a block without having to check them all. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8914) Small improvement in FloatPointNearestNeighbor
[ https://issues.apache.org/jira/browse/LUCENE-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16887000#comment-16887000 ] ASF subversion and git services commented on LUCENE-8914: - Commit f026053d4d8269c7f7135d8a76ffa21235a05d4b in lucene-solr's branch refs/heads/master from Ignacio Vera [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=f026053 ] LUCENE-8914: Move compare logic to IntersectVisitor in FloatPointNearestNeighbor (#783) Move the logic for discarding inner modes to the IntersectVisitor so we take advantage of the change introduced in LUCENE-7862 > Small improvement in FloatPointNearestNeighbor > -- > > Key: LUCENE-8914 > URL: https://issues.apache.org/jira/browse/LUCENE-8914 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Ignacio Vera >Priority: Minor > Time Spent: 20m > Remaining Estimate: 0h > > Currently the logic to visit inner nodes of the BKD tree in > FloatPointNearestNeighbor is in the custom tree traversing logic instead of > in the IntersectVisitor. This approach is missing the improvement added on > LUCENE-7862 which my experiments shows that for a high number of dimensions > can give a performance improvements of around 10%. > This change proposes to move the logic for discarding inner modes to the > IntersectVisitor. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] iverase merged pull request #783: LUCENE-8914: Move compare logic to IntersectVisitor in FloatPointNearestNeighbor
iverase merged pull request #783: LUCENE-8914: Move compare logic to IntersectVisitor in FloatPointNearestNeighbor URL: https://github.com/apache/lucene-solr/pull/783 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] jpountz commented on issue #779: LUCENE-8762: Introduce Specialized Impacts For Doc + Freq
jpountz commented on issue #779: LUCENE-8762: Introduce Specialized Impacts For Doc + Freq URL: https://github.com/apache/lucene-solr/pull/779#issuecomment-512227693 I ran luceneutil on this patch on my machine too and I'm getting similar results. It's a bit disappointing, I was expecting some gains though I'm rather happy that we can keep things easier to maintain by not having to specialize too much. Sorry for the time you spent on it but I think it's better to keep things the way they are today? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] atris commented on issue #789: LUCENE-8915 : Improve Javadocs for RateLimiter and SimpleRateLimiter
atris commented on issue #789: LUCENE-8915 : Improve Javadocs for RateLimiter and SimpleRateLimiter URL: https://github.com/apache/lucene-solr/pull/789#issuecomment-51469 @sigram Thanks, updated the same This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] sigram commented on a change in pull request #789: LUCENE-8915 : Improve Javadocs for RateLimiter and SimpleRateLimiter
sigram commented on a change in pull request #789: LUCENE-8915 : Improve Javadocs for RateLimiter and SimpleRateLimiter URL: https://github.com/apache/lucene-solr/pull/789#discussion_r304360145 ## File path: lucene/core/src/java/org/apache/lucene/store/RateLimiter.java ## @@ -47,7 +49,11 @@ * */ public abstract long pause(long bytes) throws IOException; - /** How many bytes caller should add up itself before invoking {@link #pause}. */ + /** How many bytes caller should add up itself before invoking {@link #pause}. + * NOTE: The value returned by this method may change over time and is not guaranteed + * to be constant throughout the lifetime of the RateLimiter. Users are advised to + * refresh their local values with calls to this method to ensure consistency Review comment: Overall, LGTM, +1. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
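A small usage sketch of the contract being documented in the review above (the chunk source and {{writeChunk}} helper are assumptions for the example): the caller re-reads {{getMinPauseCheckBytes()}} on every iteration rather than caching it, since the limiter's rate may be changed concurrently.
{code}
// Illustrative caller loop for the documented RateLimiter contract.
import java.io.IOException;
import java.util.List;
import org.apache.lucene.store.RateLimiter;

class RateLimiterUsageSketch {
  static void writeAll(List<byte[]> chunks, RateLimiter limiter) throws IOException {
    long bytesSinceLastPause = 0;
    for (byte[] chunk : chunks) {
      writeChunk(chunk);
      bytesSinceLastPause += chunk.length;
      // Refresh the threshold each time: it may change if setMBPerSec() is called concurrently.
      if (bytesSinceLastPause > limiter.getMinPauseCheckBytes()) {
        limiter.pause(bytesSinceLastPause);
        bytesSinceLastPause = 0;
      }
    }
  }

  static void writeChunk(byte[] chunk) {
    // assumed I/O call, stands in for whatever the caller is rate limiting
  }
}
{code}
A {{RateLimiter.SimpleRateLimiter}} constructed with the desired MB/sec rate would be passed in as the limiter.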
[jira] [Updated] (SOLR-13558) Allow dynamic resizing of SolrCache-s
[ https://issues.apache.org/jira/browse/SOLR-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrzej Bialecki updated SOLR-13558: - Attachment: SOLR-13558.patch > Allow dynamic resizing of SolrCache-s > - > > Key: SOLR-13558 > URL: https://issues.apache.org/jira/browse/SOLR-13558 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > Attachments: SOLR-13558.patch, SOLR-13558.patch > > > Currently SolrCache limits are configured statically and can't be > reconfigured without cache re-initialization (core reload), which is costly. > In some situations it would help to be able to dynamically re-size the cache > based on the resource contention (such as the total heap size used for > caching across all cores in a node). > Each cache implementation already knows how to evict its entries when it runs > into configured limits - what is missing is to expose this mechanism using a > uniform API. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
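As a purely hypothetical illustration of what such a uniform API could look like (the names below are assumptions, not the attached SOLR-13558.patch):
{code}
// Hypothetical shape only -- not the attached patch. Each SolrCache implementation would
// back these methods with the eviction logic it already has for its configured limits.
import java.util.Map;

public interface DynamicallySizedCache {
  /** Current limits, e.g. "size" or "maxRamMB" mapped to their configured values. */
  Map<String, Object> getResourceLimits();

  /** Applies a new value for one limit; the cache evicts entries if it now exceeds that limit. */
  void setResourceLimit(String limitName, Object value);
}
{code}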
[jira] [Commented] (LUCENE-8923) Release procedure does not add new version in CHANGES.txt in master
[ https://issues.apache.org/jira/browse/LUCENE-8923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886967#comment-16886967 ] Ignacio Vera commented on LUCENE-8923: -- I added the entries. I leave the issue open so we can clarify if the current procedure needs to be updated to add these entries. > Release procedure does not add new version in CHANGES.txt in master > --- > > Key: LUCENE-8923 > URL: https://issues.apache.org/jira/browse/LUCENE-8923 > Project: Lucene - Core > Issue Type: Bug >Reporter: Ignacio Vera >Priority: Minor > Attachments: LUCENE-8923.patch > > > This issue is just to track something that may be missing in the release > procedure. It currently adds a new version on CHANGES.txt in the minor > version branch but it does not do it in master. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8923) Release procedure does not add new version in CHANGES.txt in master
[ https://issues.apache.org/jira/browse/LUCENE-8923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886966#comment-16886966 ] ASF subversion and git services commented on LUCENE-8923: - Commit 41ae03a9a0dedd41865d5e6200fa1a73c8ee7b7f in lucene-solr's branch refs/heads/master from Ignacio Vera [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=41ae03a ] LUCENE-8923: Add Lucene-8.3 entry in CHANGES.txt > Release procedure does not add new version in CHANGES.txt in master > --- > > Key: LUCENE-8923 > URL: https://issues.apache.org/jira/browse/LUCENE-8923 > Project: Lucene - Core > Issue Type: Bug >Reporter: Ignacio Vera >Priority: Minor > Attachments: LUCENE-8923.patch > > > This issue is just to track something that maybe missing in the release > procedure. It currently adds a new version on CHANGES.txt in the minor > version branch but it does not do it in master. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] jpountz commented on a change in pull request #779: LUCENE-8762: Introduce Specialized Impacts For Doc + Freq
jpountz commented on a change in pull request #779: LUCENE-8762: Introduce Specialized Impacts For Doc + Freq URL: https://github.com/apache/lucene-solr/pull/779#discussion_r304360904 ## File path: lucene/core/src/java/org/apache/lucene/codecs/lucene50/Lucene50PostingsReader.java ## @@ -1761,6 +1763,223 @@ public long cost() { } + final class BlockImpactsDocsEnum extends ImpactsEnum { + +private final byte[] encoded; + +private final int[] docDeltaBuffer = new int[MAX_DATA_SIZE]; +private final int[] freqBuffer = new int[MAX_DATA_SIZE]; + +private int docBufferUpto; + +private final Lucene50ScoreSkipReader skipper; + +final IndexInput docIn; + +final boolean indexHasPos; +final boolean indexHasOffsets; +final boolean indexHasPayloads; +final boolean indexHasFreq; + +private int docFreq; // number of docs in this posting list +private int docUpto; // how many docs we've read +private int doc; // doc we last read +private int accum;// accumulator for doc deltas +private int freq; // freq we last read + +private boolean needsFreq; // true if the caller actually needs frequencies Review comment: This will always be true, it is illegal to read impacts and not ask for term frequencies (it doesn't make much sense). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] sigram commented on a change in pull request #789: LUCENE-8915 : Improve Javadocs for RateLimiter and SimpleRateLimiter
sigram commented on a change in pull request #789: LUCENE-8915 : Improve Javadocs for RateLimiter and SimpleRateLimiter URL: https://github.com/apache/lucene-solr/pull/789#discussion_r304356013

## File path: lucene/core/src/java/org/apache/lucene/store/RateLimiter.java
##
@@ -30,6 +30,8 @@
   /**
    * Sets an updated MB per second rate limit.
+   * A subclass is allowed to perform dynamic updates of the rate limit
+   * during use

Review comment: Period. :) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] sigram commented on a change in pull request #789: LUCENE-8915 : Improve Javadocs for RateLimiter and SimpleRateLimiter
sigram commented on a change in pull request #789: LUCENE-8915 : Improve Javadocs for RateLimiter and SimpleRateLimiter URL: https://github.com/apache/lucene-solr/pull/789#discussion_r304356220

## File path: lucene/core/src/java/org/apache/lucene/store/RateLimiter.java
##
@@ -47,7 +49,11 @@
    *
    */
   public abstract long pause(long bytes) throws IOException;

-  /** How many bytes caller should add up itself before invoking {@link #pause}. */
+  /** How many bytes caller should add up itself before invoking {@link #pause}.
+   * NOTE: The value returned by this method may change over time and is not guaranteed
+   * to be constant throughout the lifetime of the RateLimiter. Users are advised to
+   * refresh their local values with calls to this method to ensure consistency

Review comment: Period. :) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
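As background for these javadoc changes, here is a small usage sketch (arbitrary rate, hypothetical chunk source) showing why callers should re-read getMinPauseCheckBytes() rather than caching it, and that the rate may be updated while the limiter is in use:
{noformat}
import java.io.IOException;
import org.apache.lucene.store.RateLimiter;

// Throttle a stream of writes; 'chunks' is a hypothetical data source.
static void throttledWrite(Iterable<byte[]> chunks, RateLimiter limiter) throws IOException {
  long bytesSinceLastPause = 0;
  for (byte[] chunk : chunks) {
    // ... write the chunk somewhere ...
    bytesSinceLastPause += chunk.length;
    // Re-read the threshold on every iteration: it may change if setMBPerSec is called.
    if (bytesSinceLastPause > limiter.getMinPauseCheckBytes()) {
      limiter.pause(bytesSinceLastPause);
      bytesSinceLastPause = 0;
    }
  }
}
{noformat}
A limiter created with, for example, new RateLimiter.SimpleRateLimiter(20.0) can later be adjusted with setMBPerSec(5.0) without being recreated, which is the "dynamic update during use" the new javadoc wording describes.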
[GitHub] [lucene-solr] atris commented on issue #779: LUCENE-8762: Introduce Specialized Impacts For Doc + Freq
atris commented on issue #779: LUCENE-8762: Introduce Specialized Impacts For Doc + Freq URL: https://github.com/apache/lucene-solr/pull/779#issuecomment-512220733 luceneutil run: https://gist.github.com/atris/1b2c25021ca11138338aa73efde2aa38 wikimedium2m This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8922) Speed up retrieval of top hits of DisjunctionMaxQuery
[ https://issues.apache.org/jira/browse/LUCENE-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886944#comment-16886944 ] Adrien Grand commented on LUCENE-8922: -- Here is a patch. It uses the first clause that has a score greater than or equal to the minimum competitive score to lead iteration of impacts, and propagates min competitive scores when the tie break multiplier is 0. I ran wikibigall with the wikinightly tasks, where I added 4 new tasks:
- DisMaxHighMed: same as OrHighMed but with a DisjunctionMaxQuery and a tie break multiplier of 0.1
- DisMaxHighHigh: same as OrHighHigh but with a DisjunctionMaxQuery and a tie break multiplier of 0.1
- DisMax0HighMed: same as OrHighMed but with a DisjunctionMaxQuery and a tie break multiplier of 0
- DisMax0HighHigh: same as OrHighHigh but with a DisjunctionMaxQuery and a tie break multiplier of 0
{noformat}
            Task      QPS baseline   StdDev     QPS patch   StdDev    Pct diff
          Fuzzy1            177.71  (11.7%)        174.01  (11.2%)   -2.1% ( -22% -   23%)
    SloppyPhrase              6.26   (6.1%)          6.23   (6.2%)   -0.4% ( -12% -   12%)
        SpanNear              2.32   (3.0%)          2.32   (3.4%)   -0.0% (  -6% -    6%)
IntervalsOrdered              0.85   (1.7%)          0.85   (1.8%)    0.0% (  -3% -    3%)
         Prefix3             47.79  (12.6%)         47.85  (12.7%)    0.1% ( -22% -   29%)
      OrHighHigh              9.87   (2.8%)          9.89   (2.8%)    0.2% (  -5% -    5%)
          Phrase             70.88   (3.2%)         71.04   (3.1%)    0.2% (  -5% -    6%)
        Wildcard            128.13   (8.6%)        128.43   (9.0%)    0.2% ( -16% -   19%)
      AndHighMed             65.61   (3.5%)         65.85   (2.9%)    0.4% (  -5% -    6%)
     AndHighHigh             36.41   (3.4%)         36.60   (3.1%)    0.5% (  -5% -    7%)
 AndHighOrMedMed             25.99   (2.0%)         26.13   (1.8%)    0.5% (  -3% -    4%)
       OrHighMed             36.42   (2.7%)         36.61   (2.6%)    0.5% (  -4% -    5%)
          Fuzzy2             92.96  (16.1%)         93.59  (13.7%)    0.7% ( -25% -   36%)
          IntNRQ            132.08  (37.3%)        133.02  (38.0%)    0.7% ( -54% -  121%)
AndMedOrHighHigh             26.80   (2.0%)         27.07   (2.1%)    1.0% (  -3% -    5%)
            Term           1308.93   (3.6%)       1331.58   (3.7%)    1.7% (  -5% -    9%)
   DisMaxHighMed             83.40   (3.1%)        111.26   (3.0%)   33.4% (  26% -   40%)
  DisMaxHighHigh             54.28   (4.8%)         81.35   (4.1%)   49.9% (  39% -   61%)
 DisMax0HighHigh             45.39   (5.7%)        217.70  (20.1%)  379.6% ( 334% -  430%)
  DisMax0HighMed            129.09   (3.9%)        905.16  (16.5%)  601.2% ( 558% -  646%)
{noformat}
> Speed up retrieval of top hits of DisjunctionMaxQuery > - > > Key: LUCENE-8922 > URL: https://issues.apache.org/jira/browse/LUCENE-8922 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > There is a simple optimization that we are not doing in the case that > tieBreakMultiplier is 0: we could propagate the min competitive score to sub > clauses as-is. > Even in the general case, we currently compute the block boundary of the > DisjunctionMaxQuery as the minimum of the block boundaries of its sub > clauses. This generates blocks that have very low score upper bounds but > unfortunately they are also very small, which means that we might sometimes > not make progress quickly enough. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
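For readers unfamiliar with the new tasks, they boil down to queries of the following shape (a sketch with hypothetical field and terms, not the luceneutil task definitions): a DisjunctionMaxQuery over two term clauses, with the tie break multiplier set to 0 for the DisMax0* tasks and 0.1 for the DisMax* tasks, searched for the top hits so that minimum competitive scores can kick in.
{noformat}
import java.io.IOException;
import java.util.Arrays;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.*;

// Sketch only: field and terms are hypothetical.
static TopDocs disMax0(IndexSearcher searcher) throws IOException {
  Query disMax = new DisjunctionMaxQuery(
      Arrays.<Query>asList(
          new TermQuery(new Term("body", "http")),
          new TermQuery(new Term("body", "wikipedia"))),
      0.0f); // tie break multiplier of 0: the score is simply the max of the clause scores
  return searcher.search(disMax, 10); // top-10 search, eligible for score-based skipping
}
{noformat}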
[GitHub] [lucene-solr] jpountz opened a new pull request #791: LUCENE-8922: Better impacts for DisjunctionMaxQuery.
jpountz opened a new pull request #791: LUCENE-8922: Better impacts for DisjunctionMaxQuery. URL: https://github.com/apache/lucene-solr/pull/791 Note that we already have tests that cover impacts for DisjunctionMaxQuery. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8923) Release procedure does not add new version in CHANGES.txt in master
[ https://issues.apache.org/jira/browse/LUCENE-8923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886941#comment-16886941 ] Adrien Grand commented on LUCENE-8923: -- +1 Even if some changes are missing, I think we'd benefit from pushing this rather soon so that developers don't automatically add their changes to 8.2 as the last minor. > Release procedure does not add new version in CHANGES.txt in master > --- > > Key: LUCENE-8923 > URL: https://issues.apache.org/jira/browse/LUCENE-8923 > Project: Lucene - Core > Issue Type: Bug >Reporter: Ignacio Vera >Priority: Minor > Attachments: LUCENE-8923.patch > > > This issue is just to track something that may be missing in the release > procedure. It currently adds a new version on CHANGES.txt in the minor > version branch but it does not do it in master. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-13641) Undocumented and untested "cleanupThread" functionality in LFUCache and FastLRUCache
Andrzej Bialecki created SOLR-13641: Summary: Undocumented and untested "cleanupThread" functionality in LFUCache and FastLRUCache Key: SOLR-13641 URL: https://issues.apache.org/jira/browse/SOLR-13641 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Reporter: Andrzej Bialecki Both LFUCache and FastLRUCache support running evictions asynchronously, in a thread different from the one that executes a {{put(K, V)}} operation. Additionally, these asynchronous evictions can use either a one-off thread created after each put, or a single long-running cleanup thread. However, this functionality is not documented anywhere and it's not tested. It should either be removed, if it's not used, or properly documented and tested. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
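For context, the feature in question is reached from cache configuration. Below is a hedged sketch of what a solrconfig.xml entry might look like, assuming the attribute is indeed named cleanupThread as in the code; the exact name and semantics are precisely what this issue asks to have documented or removed.
{noformat}
<!-- Sketch only: attribute name assumed from the issue title, not from official docs.
     With cleanupThread="true", evictions run on a long-lived cleanup thread instead of
     the thread that performed the put. -->
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="0"
             cleanupThread="true"/>
{noformat}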
[JENKINS] Lucene-Solr-NightlyTests-8.2 - Build # 5 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.2/5/ 110 tests failed. FAILED: org.apache.lucene.analysis.ja.TestExtendedMode.testSurrogates2 Error Message: Could not initialize class org.apache.lucene.analysis.ja.dict.TokenInfoDictionary$SingletonHolder Stack Trace: java.lang.NoClassDefFoundError: Could not initialize class org.apache.lucene.analysis.ja.dict.TokenInfoDictionary$SingletonHolder at __randomizedtesting.SeedInfo.seed([CE49B77A88E38DF6:5B545AE26E6487EF]:0) at org.apache.lucene.analysis.ja.dict.TokenInfoDictionary.getInstance(TokenInfoDictionary.java:62) at org.apache.lucene.analysis.ja.JapaneseTokenizer.(JapaneseTokenizer.java:215) at org.apache.lucene.analysis.ja.TestExtendedMode$1.createComponents(TestExtendedMode.java:41) at org.apache.lucene.analysis.Analyzer.tokenStream(Analyzer.java:199) at org.apache.lucene.analysis.ja.TestExtendedMode.testSurrogates2(TestExtendedMode.java:64) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.lucene.analysis.ja.TestExtendedMode.testSurrogates Error Message: Could not initialize class org.apache.lucene.analysis.ja.dict.TokenInfoDictionary$SingletonHolder Stack Trace: java.lang.NoClassDefFoundError: Could not initialize class org.apache.lucene.analysis.ja.dict.Token
[JENKINS] Lucene-Solr-SmokeRelease-8.2 - Build # 6 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.2/6/ No tests ran. Build Log: [...truncated 24963 lines...] [asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [java] Processed 2587 links (2117 relative) to 3396 anchors in 259 files [echo] Validated Links & Anchors via: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/solr/build/solr-ref-guide/bare-bones-html/ -dist-changes: [copy] Copying 4 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/solr/package/changes package: -unpack-solr-tgz: -ensure-solr-tgz-exists: [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/solr/build/solr.tgz.unpacked [untar] Expanding: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/solr/package/solr-8.2.0.tgz into /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/solr/build/solr.tgz.unpacked generate-maven-artifacts: resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/lucene/top-level-ivy-settings.xml resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. 
-ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.2/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: f
[JENKINS] Lucene-Solr-8.x-Solaris (64bit/jdk1.8.0) - Build # 240 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Solaris/240/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC 8 tests failed. FAILED: org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitStaticIndexReplication Error Message: We expected shard split to succeed on a static index but it didn't. Found state = failed Stack Trace: java.lang.AssertionError: We expected shard split to succeed on a static index but it didn't. Found state = failed at __randomizedtesting.SeedInfo.seed([D1CB35239E84C05A:9B81A1230F2CEF5F]:0) at org.junit.Assert.fail(Assert.java:88) at org.apache.solr.cloud.api.collections.ShardSplitTest.doSplitStaticIndexReplication(ShardSplitTest.java:249) at org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitStaticIndexReplication(ShardSplitTest.java:127) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rul
[GitHub] [lucene-solr] atris commented on issue #779: LUCENE-8762: Introduce Specialized Impacts For Doc + Freq
atris commented on issue #779: LUCENE-8762: Introduce Specialized Impacts For Doc + Freq URL: https://github.com/apache/lucene-solr/pull/779#issuecomment-512198512 @jpountz Updated the PR with lazy loading. Please let me know if it looks fine This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] thomaswoeckinger commented on issue #755: SOLR-13592: Introduce EmbeddedSolrTestBase for better integration tests
thomaswoeckinger commented on issue #755: SOLR-13592: Introduce EmbeddedSolrTestBase for better integration tests URL: https://github.com/apache/lucene-solr/pull/755#issuecomment-512197991 @gerlowskija Are you still working on this? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] thomaswoeckinger commented on issue #665: Fixes SOLR-13539
thomaswoeckinger commented on issue #665: Fixes SOLR-13539 URL: https://github.com/apache/lucene-solr/pull/665#issuecomment-512198034 @gerlowskija Are you still working on this? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8921) IndexSearcher.termStatistics should not require TermStates but docFreq and totalTermFreq
[ https://issues.apache.org/jira/browse/LUCENE-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886895#comment-16886895 ] Alan Woodward commented on LUCENE-8921: --- Can you open any PRs against master? Then we can backport as needed. > IndexSearcher.termStatistics should not require TermStates but docFreq and > totalTermFreq > > > Key: LUCENE-8921 > URL: https://issues.apache.org/jira/browse/LUCENE-8921 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Affects Versions: 8.1 >Reporter: Bruno Roustant >Priority: Major > Fix For: master (9.0) > > > IndexSearcher.termStatistics(Term term, TermStates context) is the way to > create a TermStatistics. It requires a TermStates param although it only > cares about the docFreq and totalTermFreq. > > For customizations that want to create TermStatistics based on docFreq and > totalTermFreq, but that do not have TermStates available, this method forces > the caller to create a TermStates instance (which is not very lightweight) only to pass > two ints. > termStatistics could be modified to the following signature: > termStatistics(Term term, int docFreq, int totalTermFreq) > Since it would change the API, it could be done in master for the next major > release. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
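A hedged sketch of the signature proposed in the issue description, as it might appear inside an IndexSearcher subclass or a revised IndexSearcher; this mirrors the proposal, not a committed API. TermStatistics itself stores the counts as longs, so the ints widen automatically.
{noformat}
// Proposed shape from the issue description; not the actual committed method.
public TermStatistics termStatistics(Term term, int docFreq, int totalTermFreq) throws IOException {
  return new TermStatistics(term.bytes(), docFreq, totalTermFreq);
}
{noformat}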
[jira] [Commented] (LUCENE-8921) IndexSearcher.termStatistics should not require TermStates but docFreq and totalTermFreq
[ https://issues.apache.org/jira/browse/LUCENE-8921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16886889#comment-16886889 ] Bruno Roustant commented on LUCENE-8921: Yes, sure. I could work on a PR for 8.2. > IndexSearcher.termStatistics should not require TermStates but docFreq and > totalTermFreq > > > Key: LUCENE-8921 > URL: https://issues.apache.org/jira/browse/LUCENE-8921 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Affects Versions: 8.1 >Reporter: Bruno Roustant >Priority: Major > Fix For: master (9.0) > > > IndexSearcher.termStatistics(Term term, TermStates context) is the way to > create a TermStatistics. It requires a TermStates param although it only > cares about the docFreq and totalTermFreq. > > For customizations that want to create TermStatistics based on docFreq and > totalTermFreq, but that do not have TermStates available, this method forces > the caller to create a TermStates instance (which is not very lightweight) only to pass > two ints. > termStatistics could be modified to the following signature: > termStatistics(Term term, int docFreq, int totalTermFreq) > Since it would change the API, it could be done in master for the next major > release. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-8923) Release procedure does not add new version in CHANGES.txt in master
[ https://issues.apache.org/jira/browse/LUCENE-8923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ignacio Vera updated LUCENE-8923: - Attachment: LUCENE-8923.patch Status: Open (was: Open) In the meantime I propose to create them manually; I have attached a patch. [~tomoko] I am moving your issues to Lucene 8.3 in master, let me know if that is correct. > Release procedure does not add new version in CHANGES.txt in master > --- > > Key: LUCENE-8923 > URL: https://issues.apache.org/jira/browse/LUCENE-8923 > Project: Lucene - Core > Issue Type: Bug >Reporter: Ignacio Vera >Priority: Minor > Attachments: LUCENE-8923.patch > > > This issue is just to track something that may be missing in the release > procedure. It currently adds a new version on CHANGES.txt in the minor > version branch but it does not do it in master. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-repro - Build # 3442 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/3442/ [...truncated 28 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/152/consoleText [repro] Revision: 2f3451c3b637dee6e39e2c20ca8a1c50e4c17fca [repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt [repro] Repro line: ant test -Dtestcase=HdfsAutoAddReplicasIntegrationTest -Dtests.method=testSimple -Dtests.seed=3D7CBA07C693AD3A -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=no -Dtests.timezone=America/Argentina/Cordoba -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: 19c78ddf98b1cef86f7a1c6d124811af8726b41d [repro] git fetch [repro] git checkout 2f3451c3b637dee6e39e2c20ca8a1c50e4c17fca [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] HdfsAutoAddReplicasIntegrationTest [repro] ant compile-test [...truncated 3577 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.HdfsAutoAddReplicasIntegrationTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.seed=3D7CBA07C693AD3A -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=no -Dtests.timezone=America/Argentina/Cordoba -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [...truncated 2774 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 1/5 failed: org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest [repro] git checkout 19c78ddf98b1cef86f7a1c6d124811af8726b41d [...truncated 2 lines...] [repro] Exiting with code 256 [...truncated 5 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-8923) Release procedure does not add new version in CHANGES.txt in master
Ignacio Vera created LUCENE-8923: Summary: Release procedure does not add new version in CHANGES.txt in master Key: LUCENE-8923 URL: https://issues.apache.org/jira/browse/LUCENE-8923 Project: Lucene - Core Issue Type: Bug Reporter: Ignacio Vera This issue is just to track something that may be missing in the release procedure. It currently adds a new version on CHANGES.txt in the minor version branch but it does not do it in master. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-8922) Speed up retrieval of top hits of DisjunctionMaxQuery
Adrien Grand created LUCENE-8922: Summary: Speed up retrieval of top hits of DisjunctionMaxQuery Key: LUCENE-8922 URL: https://issues.apache.org/jira/browse/LUCENE-8922 Project: Lucene - Core Issue Type: Improvement Reporter: Adrien Grand There is a simple optimization that we are not doing in the case that tieBreakMultiplier is 0: we could propagate the min competitive score to sub clauses as-is. Even in the general case, we currently compute the block boundary of the DisjunctionMaxQuery as the minimum of the block boundaries of its sub clauses. This generates blocks that have very low score upper bounds, but unfortunately they are also very small, which means that we might sometimes not make progress quickly enough. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
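To make the tieBreakMultiplier == 0 observation concrete, here is a conceptual sketch (not the actual Lucene patch) of how a disjunction-max scorer could forward the minimum competitive score: since the document score is the maximum over the clauses, a document can only be competitive if at least one clause alone reaches the threshold, so the value can be passed through unchanged.
{noformat}
// Conceptual sketch; 'clauses' is a hypothetical list of sub-scorers.
@Override
public void setMinCompetitiveScore(float minScore) throws IOException {
  for (Scorer clause : clauses) {
    clause.setMinCompetitiveScore(minScore);
  }
}
{noformat}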