[GitHub] [solr] dsmiley commented on a diff in pull request #1215: DocRouter: strengthen abstraction

2022-12-20 Thread GitBox


dsmiley commented on code in PR #1215:
URL: https://github.com/apache/solr/pull/1215#discussion_r1054005304


##
solr/core/src/java/org/apache/solr/handler/admin/SplitOp.java:
##
@@ -263,8 +263,9 @@ private void handleGetRanges(CoreAdminHandler.CallInfo it, 
String coreName) thro
 DocCollection collection = clusterState.getCollection(collectionName);
 String sliceName = 
parentCore.getCoreDescriptor().getCloudDescriptor().getShardId();
 Slice slice = collection.getSlice(sliceName);
-DocRouter router =
-collection.getRouter() != null ? collection.getRouter() : 
DocRouter.DEFAULT;
+CompositeIdRouter router =

Review Comment:
   Changed the declared type to CompositeIdRouter specifically, because splits in fact 
depend on CompositeIdRouter in particular.  I also added a method or two there that 
SplitOp calls.
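
   For context, a minimal sketch of what the narrowed declaration can look like inside 
`handleGetRanges` (the explicit `instanceof` check and error message below are 
illustrative assumptions, not necessarily the exact PR code):

   ```java
   // Sketch only: shard splitting relies on CompositeIdRouter-specific hash-range math,
   // so the router is declared (and verified) as CompositeIdRouter rather than plain DocRouter.
   DocRouter docRouter = collection.getRouter() != null ? collection.getRouter() : DocRouter.DEFAULT;
   if (!(docRouter instanceof CompositeIdRouter)) {
     throw new SolrException(
         SolrException.ErrorCode.BAD_REQUEST,
         "Shard splitting requires the compositeId router, but found: " + docRouter.getClass().getName());
   }
   CompositeIdRouter router = (CompositeIdRouter) docRouter;
   ```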






[GitHub] [solr] noblepaul commented on a diff in pull request #1215: DocRouter: strengthen abstraction

2022-12-20 Thread GitBox


noblepaul commented on code in PR #1215:
URL: https://github.com/apache/solr/pull/1215#discussion_r1053984023


##
solr/core/src/java/org/apache/solr/handler/admin/SplitOp.java:
##
@@ -263,8 +263,9 @@ private void handleGetRanges(CoreAdminHandler.CallInfo it, 
String coreName) thro
 DocCollection collection = clusterState.getCollection(collectionName);
 String sliceName = 
parentCore.getCoreDescriptor().getCloudDescriptor().getShardId();
 Slice slice = collection.getSlice(sliceName);
-DocRouter router =
-collection.getRouter() != null ? collection.getRouter() : 
DocRouter.DEFAULT;
+CompositeIdRouter router =

Review Comment:
   Is it just a formatting change? Why is this required?






[jira] [Commented] (SOLR-16332) Upgrade Jetty to latest 9.4.x

2022-12-20 Thread Jigar Shah (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-16332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17650051#comment-17650051
 ] 

Jigar Shah commented on SOLR-16332:
---

+1 [~janhoy] and [~krisden], many thanks for the fix!

This fix on the 8_11 branch (for 8.11.3) is a very critical fix, affecting traffic both 
into and out of a SolrCloud cluster. It's a blocker for moving overall to HTTP/2.

Community, are there any release plans for 8.11.3? It's been 6 months since 8.11.2. Many 
thanks for the great work!

> Upgrade Jetty to latest 9.4.x
> -
>
> Key: SOLR-16332
> URL: https://issues.apache.org/jira/browse/SOLR-16332
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 9.0, 8.11.2
>Reporter: Chris Sabelstrom
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 9.1, main (10.0), 8.11.3
>
> Attachments: image-2022-08-09-09-39-43-134.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Fixes Vulnerability CVE-2022-2048 and other known Jetty bugs.
>  
> *User report:*
> A security scanner detected the following vulnerability. Please upgrade to the 
> version noted in the Status column. Please fix this for 8.11 as well as 9.0.
> !image-2022-08-09-09-39-43-134.png!






[GitHub] [solr] dsmiley commented on pull request #1215: DocRouter: strengthen abstraction

2022-12-20 Thread GitBox


dsmiley commented on PR #1215:
URL: https://github.com/apache/solr/pull/1215#issuecomment-1360572392

   I can sympathize somewhat, but the change/PR as it is strengthens a weak 
abstraction, even if it's not pluggable.  _Not all abstractions need to be 
pluggable_.  I could remove some comments added in the PR that suggest the 
possibility of a subclass that isn't actually present.





[GitHub] [solr] noblepaul commented on a diff in pull request #1242: SOLR-16580: Avoid making copies of DocCollection for PRS updates

2022-12-20 Thread GitBox


noblepaul commented on code in PR #1242:
URL: https://github.com/apache/solr/pull/1242#discussion_r1053868722


##
solr/solrj/src/java/org/apache/solr/common/cloud/DocCollection.java:
##
@@ -488,4 +468,35 @@ public interface CollectionStateProps {
 String SHARDS = "shards";
 String PER_REPLICA_STATE = "perReplicaState";
   }
+
+  public static class PrsSupplier implements Supplier<PerReplicaStates> {
+
+protected volatile PerReplicaStates prs;
+
+PrsSupplier() {}
+
+PrsSupplier(PerReplicaStates prs) {
+  this.prs = prs;
+}
+
+@Override
+public PerReplicaStates get() {
+  return prs;
+}
+  }
+
+  public static final ThreadLocal<PrsSupplier> REPLICASTATES_PROVIDER =

Review Comment:
   Yes, there is. The only reason I used the `ThreadLocal` option was to 
minimize the changes. However, I think I should explore that route.
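
   For readers following along, here is a small self-contained sketch (plain Java, 
deliberately not Solr's actual DocCollection/PerReplicaStates classes) contrasting the 
ThreadLocal hand-off shown in the diff with the explicit-supplier alternative being 
discussed:

   ```java
   import java.util.function.Supplier;

   class PrsHandOffSketch {
     // Mirrors the diff's pattern: a supplier published via a ThreadLocal so that deeply
     // nested construction code can pick it up without changing method signatures.
     static final ThreadLocal<Supplier<String>> PROVIDER = new ThreadLocal<>();

     static String buildWithThreadLocal() {
       Supplier<String> s = PROVIDER.get(); // nested code reads the thread-local
       return s != null ? s.get() : "no-prs";
     }

     // The alternative: thread the supplier through explicitly as a parameter.
     static String buildWithExplicitSupplier(Supplier<String> s) {
       return s.get();
     }

     public static void main(String[] args) {
       PROVIDER.set(() -> "prs-v1");
       try {
         System.out.println(buildWithThreadLocal()); // prs-v1
       } finally {
         PROVIDER.remove(); // must be cleared, or state leaks to the thread's next request
       }
       System.out.println(buildWithExplicitSupplier(() -> "prs-v2")); // prs-v2
     }
   }
   ```

   The explicit-supplier route touches more call sites (hence "minimize the changes"), 
but avoids the clean-up and leakage concerns that come with a ThreadLocal.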






[GitHub] [solr] noblepaul commented on pull request #1215: DocRouter: strengthen abstraction

2022-12-20 Thread GitBox


noblepaul commented on PR #1215:
URL: https://github.com/apache/solr/pull/1215#issuecomment-1360554944

   IMHO this PR is mixing the new capability and the impl. If we could just 
have a pluggable option, then only experts would use it, and that should be fine.





[jira] [Created] (SOLR-16594) eDismax should use startOffset when converting per-field to per-term queries

2022-12-20 Thread Rudi Seitz (Jira)
Rudi Seitz created SOLR-16594:
-

 Summary: eDismax should use startOffset when converting per-field 
to per-term queries
 Key: SOLR-16594
 URL: https://issues.apache.org/jira/browse/SOLR-16594
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: query parsers
Reporter: Rudi Seitz


When parsing a multi-term query that spans multiple fields, edismax sometimes 
switches from a "term-centric" to a "field-centric" approach. This creates 
inconsistent semantics for the {{mm}} or "min should match" parameter and may 
have an impact on scoring. The goal of this ticket is to improve the approach 
that edismax uses for generating term-centric queries so that edismax would 
less frequently "give up" and resort to the field-centric approach. 
Specifically, we propose that edismax should create a dismax query for each 
distinct startOffset found among the tokens emitted by the field analyzers. 
Since the relevant code in edismax works with Query objects that contain Terms, 
and since Terms do not hold the startOffset of the Token from which the Term was 
derived, some plumbing work would need to be done to make the startOffsets 
available to edismax.

 

BACKGROUND:

 

If a user searches for "foo bar" with {{{}qf=f1 f2{}}}, a field-centric 
interpretation of the query would contain a clause for each field:

{{  (f1:foo f1:bar) (f2:foo f2:bar)}}

while a term-centric interpretation would contain a clause for each term:

{{  (f1:foo f2:foo) (f1:bar f2:bar)}}

The challenge in generating a term-centric query is that we need to take the 
tokens that emerge from each field's analysis chain and group them according to 
the terms in the user's original query. However, the tokens that emerge from an 
analysis chain do not store a reference to their corresponding input terms. For 
example, if we pass "foo bar" through an ngram analyzer we would get a token 
stream containing "f", "fo", "foo", "b", "ba", "bar". While it may be obvious 
to a human that "f", "fo", and "foo" all come from the "foo" input term, and 
that "b", "ba", and "bar" come from the "bar" input term, there is not always 
an easy way for edismax to see this connection. When {{{}sow=true{}}}, edismax 
passes each whitespace-separated term through each analysis chain separately, 
and therefore edismax "knows" that the output tokens from any given analysis 
chain are all derived from the single input term that was passed into that 
chain. However, when {{{}sow=false{}}}, edismax passes the entire multi-term 
query through each analysis chain as a whole, resulting in multiple output 
tokens that are not "connected" to their source term.

Edismax still tries to generate a term-centric query when {{sow=false}} by 
first generating a boolean query for each field, and then checking whether all 
of these per-field queries have the same structure. The structure will 
generally be uniform if each analysis chain emits the same number of tokens for 
the given input. If one chain has a synonym filter and another doesn’t, this 
uniformity may depend on whether a synonym rule happened to match a term in the 
user's input. 


Assuming the per-field boolean queries _do_ have the same structure, edismax 
reorganizes them into a new boolean query. The new query contains a dismax for 
each clause position in the original queries. If the original queries are 
{{(f1:foo f1:bar) }}and {{(f2:foo f2:bar)}} we can see they have two clauses 
each, so we would get a dismax containing all the first position clauses 
{{(f1:foo f1:bar)}} and another dismax containing all the second position 
clauses {{{}(f2:foo f2:bar){}}}.

We can see that edismax is using clause position as a heuristic to reorganize 
the per-field boolean queries into per-term ones, even though it doesn't know 
for sure which clauses inside those per-field boolean queries are related to 
which input terms. We propose that a better way of reorganizing the per-field 
boolean queries is to create a dismax for each distinct startOffset seen among 
the tokens in the token streams emitted by each field analyzer. The startOffset 
of a token (rather, a PackedTokenAttributeImpl) is "the position of the first 
character corresponding to this token in the source text".

We propose that startOffset is a reasonable way of matching output tokens up 
with the input terms that gave rise to them. For example, if we pass "foo bar" 
through an ngram analysis chain we see that the foo-related tokens all have 
startOffset=0 while the bar-related tokens all have startOffset=4. Likewise, 
tokens that are generated via synonym expansion have a startOffset that points 
to the beginning of the matching input term. For example, if the query "GB" 
generates "GB gib gigabyte gigabytes" via synonym expansion, all of those four 
tokens would have startOffset=0.
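
To make the proposed grouping step concrete, here is a small self-contained sketch using 
the plain Lucene analysis API (illustrative only, not the edismax code itself): run the 
query text through a field's analyzer and bucket the emitted tokens by startOffset. Each 
bucket would then become one dismax clause, once the plumbing described above makes the 
offsets visible to edismax.
{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;

public class StartOffsetGrouping {
  /**
   * Buckets the tokens emitted by a field's analyzer by their startOffset.
   * With an ngram chain and the input "foo bar", the tokens "f", "fo", "foo"
   * land under offset 0 while "b", "ba", "bar" land under offset 4.
   */
  static Map<Integer, List<String>> groupByStartOffset(Analyzer analyzer, String field, String text)
      throws IOException {
    Map<Integer, List<String>> byOffset = new TreeMap<>();
    try (TokenStream ts = analyzer.tokenStream(field, text)) {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      OffsetAttribute offset = ts.addAttribute(OffsetAttribute.class);
      ts.reset();
      while (ts.incrementToken()) {
        byOffset.computeIfAbsent(offset.startOffset(), k -> new ArrayList<>()).add(term.toString());
      }
      ts.end();
    }
    return byOffset;
  }
}
{code}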


[GitHub] [solr] dsmiley commented on pull request #1215: DocRouter: strengthen abstraction

2022-12-20 Thread GitBox


dsmiley commented on PR #1215:
URL: https://github.com/apache/solr/pull/1215#issuecomment-1360449384

   We could; I just chose not to go that far in this PR.  If I did something 
trivial, I'd be inclined to then document it and test it.  If you think I could 
skip some of that (rationale: this is an "expert"-only option), then I'd be willing 
to do that in this PR.





[GitHub] [solr] hiteshk25 commented on a diff in pull request #1242: SOLR-16580: Avoid making copies of DocCollection for PRS updates

2022-12-20 Thread GitBox


hiteshk25 commented on code in PR #1242:
URL: https://github.com/apache/solr/pull/1242#discussion_r1053824406


##
solr/solrj/src/java/org/apache/solr/common/cloud/DocCollection.java:
##
@@ -488,4 +468,35 @@ public interface CollectionStateProps {
 String SHARDS = "shards";
 String PER_REPLICA_STATE = "perReplicaState";
   }
+
+  public static class PrsSupplier implements Supplier<PerReplicaStates> {
+
+protected volatile PerReplicaStates prs;
+
+PrsSupplier() {}
+
+PrsSupplier(PerReplicaStates prs) {
+  this.prs = prs;
+}
+
+@Override
+public PerReplicaStates get() {
+  return prs;
+}
+  }
+
+  public static final ThreadLocal<PrsSupplier> REPLICASTATES_PROVIDER =

Review Comment:
   Quick question: is there any alternative to avoid the ThreadLocal? Could you give 
me some more context on this?






[GitHub] [solr] noblepaul commented on pull request #1215: DocRouter: strengthen abstraction

2022-12-20 Thread GitBox


noblepaul commented on PR #1215:
URL: https://github.com/apache/solr/pull/1215#issuecomment-1360406066

   Why can't we make it properly pluggable?
   
   ```
   "router" : {"class": "fully.qualified.clasName"}
   ```





[GitHub] [solr] dsmiley commented on pull request #1221: SOLR-16577: Always log core load issues

2022-12-20 Thread GitBox


dsmiley commented on PR #1221:
URL: https://github.com/apache/solr/pull/1221#issuecomment-1360080469

   We should also probably set & clear the MDC so that if we log an issue, we 
have more context (e.g. *which* core).
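
   A minimal sketch of the set-and-clear pattern (plain SLF4J MDC shown here; Solr also 
has its own MDCLoggingContext helper for this, and the key name and `loadCore` call 
below are illustrative, not the PR's code):

   ```java
   import org.slf4j.MDC;

   // Put the core name into the MDC while loading, so any log line emitted during the load
   // (including failures) carries the core's identity; always clear it afterwards.
   MDC.put("core", coreName);
   try {
     loadCore(coreName); // hypothetical stand-in for the real core-loading code path
   } finally {
     MDC.remove("core");
   }
   ```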





[GitHub] [solr] dsmiley commented on a diff in pull request #1245: SOLR-16567: KnnQueryParser support for both pre-filters and post-filter

2022-12-20 Thread GitBox


dsmiley commented on code in PR #1245:
URL: https://github.com/apache/solr/pull/1245#discussion_r1053656524


##
solr/core/src/java/org/apache/solr/search/neural/KnnQParser.java:
##
@@ -80,44 +82,68 @@ public Query parse() {
 float[] parsedVectorToSearch = parseVector(vectorToSearch, 
denseVectorType.getDimension());
 
 return denseVectorType.getKnnVectorQuery(
-schemaField.getName(), parsedVectorToSearch, topK, getFilterQuery());
+schemaField.getName(), parsedVectorToSearch, topK, buildPreFilter());
   }
 
-  private Query getFilterQuery() throws SolrException {
-if (!isFilter()) {
+  private Query buildPreFilter() throws SolrException {

Review Comment:
   IMO its name was just fine.  "pre-filter" is not something I've ever heard 
someone say in Solr.



##
solr/core/src/java/org/apache/solr/search/neural/KnnQParser.java:
##
@@ -80,44 +82,60 @@ public Query parse() {
 float[] parsedVectorToSearch = parseVector(vectorToSearch, 
denseVectorType.getDimension());
 
 return denseVectorType.getKnnVectorQuery(
-schemaField.getName(), parsedVectorToSearch, topK, getFilterQuery());
+schemaField.getName(), parsedVectorToSearch, topK, buildPreFilter());
   }
 
-  private Query getFilterQuery() throws SolrException {
-if (!isFilter()) {
+  private Query buildPreFilter() throws SolrException {
+boolean isSubQuery = recurseCount != 0;
+if (!isFilter() && !isSubQuery) {
   String[] filterQueries = req.getParams().getParams(CommonParams.FQ);
   if (filterQueries != null && filterQueries.length != 0) {
-List<Query> filters;
+List<Query> preFilters;
 
 try {
-  filters = QueryUtils.parseFilterQueries(req, true);
+  preFilters = acceptPreFiltersOnly(QueryUtils.parseFilterQueries(req, 
true));
 } catch (SyntaxError e) {
   throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
 }
 
-if (filters.size() == 1) {
-  return filters.get(0);
+if (preFilters.size() == 0) {
+  return null;
+} else if (preFilters.size() == 1) {
+  return preFilters.get(0);
+} else {
+  BooleanQuery.Builder builder = new BooleanQuery.Builder();
+  for (Query query : preFilters) {
+builder.add(query, BooleanClause.Occur.FILTER);
+  }
+  return builder.build();
 }
-
-BooleanQuery.Builder builder = new BooleanQuery.Builder();
-for (Query query : filters) {
-  builder.add(query, BooleanClause.Occur.FILTER);
-}
-
-return builder.build();
   }
 }
-
 return null;
   }
 
+  private List<Query> acceptPreFiltersOnly(List<Query> filters) {

Review Comment:
   Can we name this method `excludePostFilters`?  I've never heard of 
"pre-filter" before.  Any part of a query that isn't a PostFilter is not 
necessarily executed in a defined order (is not "pre"); it's a combined query 
tree with interesting leap-frogging behavior based on various costs.
   
   The logic appears correct but I'd like to recommend a simplification to make 
it even easier to read.  I suggest adding a static utility method on PostFilter 
called `boolean runAsPostFilter(Query q)`.  You could then simply make this a 
method reference.  Perhaps use this elsewhere (e.g. in 
SolrIndexSearcher.getProcessedFilter). 
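
   A sketch of the suggested utility (this method does not exist on PostFilter today; the 
condition roughly mirrors the post-filter check in SolrIndexSearcher.getProcessedFilter, 
where a clause runs as a post filter only when it implements PostFilter, caching is off, 
and cost >= 100):

   ```java
   // Hypothetical static method on org.apache.solr.search.PostFilter, not existing API:
   // true when a query clause would be executed as a post filter rather than in the main query tree.
   static boolean runAsPostFilter(Query q) {
     if (!(q instanceof PostFilter)) {
       return false;
     }
     ExtendedQuery eq = (ExtendedQuery) q; // PostFilter extends ExtendedQuery
     return !eq.getCache() && eq.getCost() >= 100;
   }
   ```

   `acceptPreFiltersOnly` could then shrink to roughly 
`filters.stream().filter(q -> !PostFilter.runAsPostFilter(q)).collect(Collectors.toList())`.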
   
   






[jira] [Commented] (SOLR-16567) java.lang.StackOverflowError when combining KnnQParser and FunctionRangeQParser

2022-12-20 Thread Gabriel Magno (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-16567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17649940#comment-17649940
 ] 

Gabriel Magno commented on SOLR-16567:
--

Not sure if I understood this "threshold" feature correctly, but just to let 
you know that `frange` already covers 3 use cases I had:
 * Filter the results of a KNN query based on its own similarity score
 * Filter the results of an edismax query based on the similarity score of a 
KNN sub-query
 * Filter the results of a query based on a combined score of multiple sources 
(created with function queries)

 

> java.lang.StackOverflowError when combining KnnQParser and 
> FunctionRangeQParser
> ---
>
> Key: SOLR-16567
> URL: https://issues.apache.org/jira/browse/SOLR-16567
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query
>Affects Versions: 9.1
> Environment: Solr Cloud with `solr:9.1` Docker image
>Reporter: Gabriel Magno
>Priority: Major
> Attachments: create_example-solr_9_0.sh, create_example-solr_9_1.sh, 
> error_full.txt, response-error.json, run_query.sh
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Hello there!
> I had a Solr 9.0 cluster running, using the new Dense Vector feature. 
> Recently I have migrated to Solr 9.1. Most of the things are working fine, 
> except for a special case I have here.
> *Error Description*
> The problem happens when I try making an Edismax query with a KNN sub-query 
> and a Function Range filter. For example, I try making this query.
>  * defType=edismax
>  * df=name
>  * q=the
>  * similarity_vector=\{!knn f=vector topK=10}[1.1,2.2,3.3,4.4]
>  * {!frange l=0.99}$similarity_vector
> In other words, I want all the documents matching the term "the" in the 
> "name" field, and I filter to return only documents having a vector 
> similarity of at least 0.99. This query was working fine on Solr 9.0, but on 
> Solr 9.1, I get this error:
>  
> {code:java}
> java.lang.RuntimeException: java.lang.StackOverflowErrorat 
> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:840)at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:641)at 
> org.apache.solr.servlet.SolrDispatchFilter.dispatch(SolrDispatchFilter.java:250)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.lambda/usr/bin/zsh(SolrDispatchFilter.java:218)
> at 
> org.apache.solr.servlet.ServletUtils.traceHttpRequestExecution2(ServletUtils.java:257)
> at 
> org.apache.solr.servlet.ServletUtils.rateLimitRequest(ServletUtils.java:227)  
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:213)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:195)
> at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:201) 
>... (manually supressed for brevity)at 
> java.base/java.lang.Thread.run(Unknown Source)Caused by: 
> java.lang.StackOverflowErrorat 
> org.apache.solr.search.StrParser.getId(StrParser.java:172)at 
> org.apache.solr.search.StrParser.getId(StrParser.java:168)at 
> org.apache.solr.search.QueryParsing.parseLocalParams(QueryParsing.java:100)   
>  at 
> org.apache.solr.search.QueryParsing.parseLocalParams(QueryParsing.java:65)
> at org.apache.solr.search.QParser.getParser(QParser.java:364)at 
> org.apache.solr.search.QParser.getParser(QParser.java:334)at 
> org.apache.solr.search.QParser.getParser(QParser.java:321)at 
> org.apache.solr.search.QueryUtils.parseFilterQueries(QueryUtils.java:244)
> at 
> org.apache.solr.search.neural.KnnQParser.getFilterQuery(KnnQParser.java:93)   
>  at org.apache.solr.search.neural.KnnQParser.parse(KnnQParser.java:83)at 
> org.apache.solr.search.QParser.getQuery(QParser.java:188)at 
> org.apache.solr.search.FunctionQParser.parseValueSource(FunctionQParser.java:384)
> at org.apache.solr.search.FunctionQParser.parse(FunctionQParser.java:94)  
>   at org.apache.solr.search.QParser.getQuery(QParser.java:188)at 
> org.apache.solr.search.FunctionRangeQParserPlugin.parse(FunctionRangeQParserPlugin.java:53)
> at org.apache.solr.search.QParser.getQuery(QParser.java:188)at 
> org.apache.solr.search.QueryUtils.parseFilterQueries(QueryUtils.java:246)
> at 
> org.apache.solr.search.neural.KnnQParser.getFilterQuery(KnnQParser.java:93)   
>  at org.apache.solr.search.neural.KnnQParser.parse(KnnQParser.java:83)at 
> org.apache.solr.search.QParser.getQuery(QParser.java:188)at 
> org.apache.solr.search.FunctionQParser.parseValueSource(FunctionQParser.java:384)
> at org.apache.solr.search.FunctionQParser.parse(FunctionQParser.java:94)  
>   at 

[GitHub] [solr] dsmiley commented on a diff in pull request #1221: SOLR-16577: Always log core load issues

2022-12-20 Thread GitBox


dsmiley commented on code in PR #1221:
URL: https://github.com/apache/solr/pull/1221#discussion_r1043577392


##
solr/CHANGES.txt:
##
@@ -160,6 +160,8 @@ Other Changes
 
 SOLR-16575: splitshard should honour createNodeSet (noble)
 
+* SOLR-16577: Always log core load issues (Haythem Khiri)

Review Comment:
   ```suggestion
   * SOLR-16577: Ensure core load failures are always logged. (Haythem Khiri, David Smiley)
   ```



##
solr/CHANGES.txt:
##
@@ -160,6 +160,8 @@ Other Changes
 
 SOLR-16575: splitshard should honour createNodeSet (noble)
 
+* SOLR-16577: Always log core load issues (Haythem Khiri)

Review Comment:
   I'd also put this under Improvements.






[GitHub] [solr-operator] HoustonPutman merged pull request #508: Fix issue with namespaces in helm charts

2022-12-20 Thread GitBox


HoustonPutman merged PR #508:
URL: https://github.com/apache/solr-operator/pull/508





[jira] [Comment Edited] (SOLR-16556) Solr stream expression: Implement Page Streaming Decorator to allow results to be displayed with pagination.

2022-12-20 Thread Joel Bernstein (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-16556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17649815#comment-17649815
 ] 

Joel Bernstein edited comment on SOLR-16556 at 12/20/22 3:08 PM:
-

[~mnrathod], I did a review of PagingStream this morning and it appears that it 
will always return tuples starting from index 0:

https://github.com/apache/solr/blob/9b1058f003f5dcbbe30ee5a9a8d57fc3af847a5d/solr/solrj-streaming/src/java/org/apache/solr/client/solrj/io/stream/PagingStream.java#L229

I assumed since the start parameter is part of the specification, tuples would 
start flowing from the start param. Let me know if I'm missing something in the 
code.

The reordering is useful but I think it should be optional as in many cases you 
would just want to page over an existing stream and skip all the priority queue 
logic.
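
For readers of the thread, a tiny self-contained sketch (plain Java, not the actual 
PagingStream code) of what honoring the start parameter would look like: tuples before 
start are read and discarded, then at most rows tuples are emitted.
{code:java}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

class PagingSketch {
  /** Skips the first 'start' items, then returns up to 'rows' items from the source. */
  static <T> List<T> page(Iterator<T> source, int start, int rows) {
    for (int skipped = 0; skipped < start && source.hasNext(); skipped++) {
      source.next(); // discarded, not returned
    }
    List<T> out = new ArrayList<>(rows);
    while (out.size() < rows && source.hasNext()) {
      out.add(source.next());
    }
    return out;
  }

  public static void main(String[] args) {
    System.out.println(page(List.of(0, 1, 2, 3, 4, 5, 6, 7).iterator(), 3, 2)); // prints [3, 4]
  }
}
{code}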



was (Author: joel.bernstein):
[~mnrathod], I did a review of PagingStream this morning and it appears that it 
will always return tuples starting from index 0:

https://github.com/apache/solr/blob/9b1058f003f5dcbbe30ee5a9a8d57fc3af847a5d/solr/solrj-streaming/src/java/org/apache/solr/client/solrj/io/stream/PagingStream.java#L229

I assumed since the start parameter is part of the specification, tuples would 
start flowing from the start param. Let me know if I'm missing something in the 
code.

The reording is useful but I think it should be optional as in many cases you 
would just want to page over an existing stream and skip the all the priority 
queue logic.


> Solr stream expression: Implement Page Streaming Decorator to allow results 
> to be displayed with pagination.
> 
>
> Key: SOLR-16556
> URL: https://issues.apache.org/jira/browse/SOLR-16556
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Maulin
>Priority: Major
>  Labels: Streamingexpression, decorator, paging
> Attachments: Page Decorator Performance Reading.xlsx
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Solr stream expression: Implement Page Streaming Decorator to allow results 
> to be displayed with pagination.






[jira] [Comment Edited] (SOLR-16556) Solr stream expression: Implement Page Streaming Decorator to allow results to be displayed with pagination.

2022-12-20 Thread Joel Bernstein (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-16556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17649815#comment-17649815
 ] 

Joel Bernstein edited comment on SOLR-16556 at 12/20/22 3:07 PM:
-

[~mnrathod], I did a review of PagingStream this morning and it appears that it 
will always return tuples starting from index 0:

https://github.com/apache/solr/blob/9b1058f003f5dcbbe30ee5a9a8d57fc3af847a5d/solr/solrj-streaming/src/java/org/apache/solr/client/solrj/io/stream/PagingStream.java#L229

I assumed since the start parameter is part of the specification, tuples would 
start flowing from the start param. Let me know if I'm missing something in the 
code.

The reordering is useful but I think it should be optional, as in many cases you 
would just want to page over an existing stream and skip all the priority queue 
logic.



was (Author: joel.bernstein):
[~mnrathod], I did a review of PagingStream this morning and it appears that it 
will always return tuples starting from index 0:

https://github.com/apache/solr/blob/9b1058f003f5dcbbe30ee5a9a8d57fc3af847a5d/solr/solrj-streaming/src/java/org/apache/solr/client/solrj/io/stream/PagingStream.java#L229

I'm assumed since the start parameter is part of the specification tuples would 
start flowing from the start param. Let me know if I'm missing something in the 
code.

The reording is useful but I think it should be optional as in many cases you 
would just want to page over an existing stream and skip the all the priority 
queue logic.


> Solr stream expression: Implement Page Streaming Decorator to allow results 
> to be displayed with pagination.
> 
>
> Key: SOLR-16556
> URL: https://issues.apache.org/jira/browse/SOLR-16556
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Maulin
>Priority: Major
>  Labels: Streamingexpression, decorator, paging
> Attachments: Page Decorator Performance Reading.xlsx
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Solr stream expression: Implement Page Streaming Decorator to allow results 
> to be displayed with pagination.






[jira] [Comment Edited] (SOLR-16556) Solr stream expression: Implement Page Streaming Decorator to allow results to be displayed with pagination.

2022-12-20 Thread Joel Bernstein (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-16556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17649815#comment-17649815
 ] 

Joel Bernstein edited comment on SOLR-16556 at 12/20/22 3:03 PM:
-

[~mnrathod], I did a review of PagingStream this morning and it appears that it 
will always return tuples starting from index 0:

https://github.com/apache/solr/blob/9b1058f003f5dcbbe30ee5a9a8d57fc3af847a5d/solr/solrj-streaming/src/java/org/apache/solr/client/solrj/io/stream/PagingStream.java#L229

I assumed that since the start parameter is part of the specification, tuples would 
start flowing from the start param. Let me know if I'm missing something in the 
code.

The reordering is useful but I think it should be optional, as in many cases you 
would just want to page over an existing stream and skip all the priority 
queue logic.



was (Author: joel.bernstein):
[~mnrathod], I did a review of PagingStream this morning and it appears that it 
will always return tuples starting from index 0:

https://github.com/apache/solr/blob/9b1058f003f5dcbbe30ee5a9a8d57fc3af847a5d/solr/solrj-streaming/src/java/org/apache/solr/client/solrj/io/stream/PagingStream.java#L229

I'm assumed since the start parameter is part of the specification tuples would 
start flowing from the start param. Let me know if I'm missing something in the 
code.

The reording is useful but I think should be optional as in many cases you 
would just want to page over an existing stream and skip the all the priority 
queue logic.


> Solr stream expression: Implement Page Streaming Decorator to allow results 
> to be displayed with pagination.
> 
>
> Key: SOLR-16556
> URL: https://issues.apache.org/jira/browse/SOLR-16556
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Maulin
>Priority: Major
>  Labels: Streamingexpression, decorator, paging
> Attachments: Page Decorator Performance Reading.xlsx
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Solr stream expression: Implement Page Streaming Decorator to allow results 
> to be displayed with pagination.






[jira] [Comment Edited] (SOLR-16556) Solr stream expression: Implement Page Streaming Decorator to allow results to be displayed with pagination.

2022-12-20 Thread Joel Bernstein (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-16556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17649815#comment-17649815
 ] 

Joel Bernstein edited comment on SOLR-16556 at 12/20/22 3:02 PM:
-

[~mnrathod], I did a review of PagingStream this morning and it appears that it 
will always return tuples starting from index 0:

https://github.com/apache/solr/blob/9b1058f003f5dcbbe30ee5a9a8d57fc3af847a5d/solr/solrj-streaming/src/java/org/apache/solr/client/solrj/io/stream/PagingStream.java#L229

I assumed that since the start parameter is part of the specification, tuples would 
start flowing from the start param. Let me know if I'm missing something in the 
code.

The reordering is useful but I think it should be optional, as in many cases you 
would just want to page over an existing stream and skip all the priority 
queue logic.



was (Author: joel.bernstein):
[~mnrathod], I did a review of PagingStream this morning and it appears that it 
will always return tuples starting from index 0:

https://github.com/apache/solr/blob/9b1058f003f5dcbbe30ee5a9a8d57fc3af847a5d/solr/solrj-streaming/src/java/org/apache/solr/client/solrj/io/stream/PagingStream.java#L229

I'm assuming since the start parameter is part of the specification tuples 
would start flowing from the start param. Let me know if I'm missing something 
in the code.

The reording is useful but I think should be optional as in many cases you 
would just want to page over an existing stream and skip the all the priority 
queue logic.


> Solr stream expression: Implement Page Streaming Decorator to allow results 
> to be displayed with pagination.
> 
>
> Key: SOLR-16556
> URL: https://issues.apache.org/jira/browse/SOLR-16556
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Maulin
>Priority: Major
>  Labels: Streamingexpression, decorator, paging
> Attachments: Page Decorator Performance Reading.xlsx
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Solr stream expression: Implement Page Streaming Decorator to allow results 
> to be displayed with pagination.






[jira] [Commented] (SOLR-16556) Solr stream expression: Implement Page Streaming Decorator to allow results to be displayed with pagination.

2022-12-20 Thread Joel Bernstein (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-16556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17649815#comment-17649815
 ] 

Joel Bernstein commented on SOLR-16556:
---

[~mnrathod], I did a review of PagingStream this morning and it appears that it 
will always return tuples starting from index 0:

https://github.com/apache/solr/blob/9b1058f003f5dcbbe30ee5a9a8d57fc3af847a5d/solr/solrj-streaming/src/java/org/apache/solr/client/solrj/io/stream/PagingStream.java#L229

I'm assuming that since the start parameter is part of the specification, tuples 
would start flowing from the start param. Let me know if I'm missing something 
in the code.

The reordering is useful, but I think it should be optional, as in many cases you 
would just want to page over an existing stream and skip all the priority 
queue logic.


> Solr stream expression: Implement Page Streaming Decorator to allow results 
> to be displayed with pagination.
> 
>
> Key: SOLR-16556
> URL: https://issues.apache.org/jira/browse/SOLR-16556
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Maulin
>Priority: Major
>  Labels: Streamingexpression, decorator, paging
> Attachments: Page Decorator Performance Reading.xlsx
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Solr stream expression: Implement Page Streaming Decorator to allow results 
> to be displayed with pagination.






[jira] [Commented] (SOLR-16567) java.lang.StackOverflowError when combining KnnQParser and FunctionRangeQParser

2022-12-20 Thread Alessandro Benedetti (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-16567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17649702#comment-17649702
 ] 

Alessandro Benedetti commented on SOLR-16567:
-

And by the way, I think the "threshold" use case would need to go in Lucene 
(and then in Solr) as a separate contribution (rather than using frange).
Will keep you updated on this :)


> java.lang.StackOverflowError when combining KnnQParser and 
> FunctionRangeQParser
> ---
>
> Key: SOLR-16567
> URL: https://issues.apache.org/jira/browse/SOLR-16567
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query
>Affects Versions: 9.1
> Environment: Solr Cloud with `solr:9.1` Docker image
>Reporter: Gabriel Magno
>Priority: Major
> Attachments: create_example-solr_9_0.sh, create_example-solr_9_1.sh, 
> error_full.txt, response-error.json, run_query.sh
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Hello there!
> I had a Solr 9.0 cluster running, using the new Dense Vector feature. 
> Recently I have migrated to Solr 9.1. Most of the things are working fine, 
> except for a special case I have here.
> *Error Description*
> The problem happens when I try making an Edismax query with a KNN sub-query 
> and a Function Range filter. For example, I try making this query.
>  * defType=edismax
>  * df=name
>  * q=the
>  * similarity_vector=\{!knn f=vector topK=10}[1.1,2.2,3.3,4.4]
>  * {!frange l=0.99}$similarity_vector
> In other words, I want all the documents matching the term "the" in the 
> "name" field, and I filter to return only documents having a vector 
> similarity of at least 0.99. This query was working fine on Solr 9.0, but on 
> Solr 9.1, I get this error:
>  
> {code:java}
> java.lang.RuntimeException: java.lang.StackOverflowErrorat 
> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:840)at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:641)at 
> org.apache.solr.servlet.SolrDispatchFilter.dispatch(SolrDispatchFilter.java:250)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.lambda/usr/bin/zsh(SolrDispatchFilter.java:218)
> at 
> org.apache.solr.servlet.ServletUtils.traceHttpRequestExecution2(ServletUtils.java:257)
> at 
> org.apache.solr.servlet.ServletUtils.rateLimitRequest(ServletUtils.java:227)  
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:213)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:195)
> at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:201) 
>... (manually supressed for brevity)at 
> java.base/java.lang.Thread.run(Unknown Source)Caused by: 
> java.lang.StackOverflowErrorat 
> org.apache.solr.search.StrParser.getId(StrParser.java:172)at 
> org.apache.solr.search.StrParser.getId(StrParser.java:168)at 
> org.apache.solr.search.QueryParsing.parseLocalParams(QueryParsing.java:100)   
>  at 
> org.apache.solr.search.QueryParsing.parseLocalParams(QueryParsing.java:65)
> at org.apache.solr.search.QParser.getParser(QParser.java:364)at 
> org.apache.solr.search.QParser.getParser(QParser.java:334)at 
> org.apache.solr.search.QParser.getParser(QParser.java:321)at 
> org.apache.solr.search.QueryUtils.parseFilterQueries(QueryUtils.java:244)
> at 
> org.apache.solr.search.neural.KnnQParser.getFilterQuery(KnnQParser.java:93)   
>  at org.apache.solr.search.neural.KnnQParser.parse(KnnQParser.java:83)at 
> org.apache.solr.search.QParser.getQuery(QParser.java:188)at 
> org.apache.solr.search.FunctionQParser.parseValueSource(FunctionQParser.java:384)
> at org.apache.solr.search.FunctionQParser.parse(FunctionQParser.java:94)  
>   at org.apache.solr.search.QParser.getQuery(QParser.java:188)at 
> org.apache.solr.search.FunctionRangeQParserPlugin.parse(FunctionRangeQParserPlugin.java:53)
> at org.apache.solr.search.QParser.getQuery(QParser.java:188)at 
> org.apache.solr.search.QueryUtils.parseFilterQueries(QueryUtils.java:246)
> at 
> org.apache.solr.search.neural.KnnQParser.getFilterQuery(KnnQParser.java:93)   
>  at org.apache.solr.search.neural.KnnQParser.parse(KnnQParser.java:83)at 
> org.apache.solr.search.QParser.getQuery(QParser.java:188)at 
> org.apache.solr.search.FunctionQParser.parseValueSource(FunctionQParser.java:384)
> at org.apache.solr.search.FunctionQParser.parse(FunctionQParser.java:94)  
>   at org.apache.solr.search.QParser.getQuery(QParser.java:188)at 
> org.apache.solr.search.FunctionRangeQParserPlugin.parse(FunctionRangeQParserPlugin.java:53)
> at org.apache.solr.search.QParser.getQuery(QParser.java:188)... 
> (manually supressed 

[jira] [Commented] (SOLR-16567) java.lang.StackOverflowError when combining KnnQParser and FunctionRangeQParser

2022-12-20 Thread Alessandro Benedetti (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-16567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17649701#comment-17649701
 ] 

Alessandro Benedetti commented on SOLR-16567:
-

We are checking the PR; when we are happy with the reviews, I'll merge it!
Thanks again [~gmagno] for raising this!

> java.lang.StackOverflowError when combining KnnQParser and 
> FunctionRangeQParser
> ---
>
> Key: SOLR-16567
> URL: https://issues.apache.org/jira/browse/SOLR-16567
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query
>Affects Versions: 9.1
> Environment: Solr Cloud with `solr:9.1` Docker image
>Reporter: Gabriel Magno
>Priority: Major
> Attachments: create_example-solr_9_0.sh, create_example-solr_9_1.sh, 
> error_full.txt, response-error.json, run_query.sh
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Hello there!
> I had a Solr 9.0 cluster running, using the new Dense Vector feature. 
> Recently I have migrated to Solr 9.1. Most of the things are working fine, 
> except for a special case I have here.
> *Error Description*
> The problem happens when I try making an Edismax query with a KNN sub-query 
> and a Function Range filter. For example, I try making this query.
>  * defType=edismax
>  * df=name
>  * q=the
>  * similarity_vector=\{!knn f=vector topK=10}[1.1,2.2,3.3,4.4]
>  * {!frange l=0.99}$similarity_vector
> In other words, I want all the documents matching the term "the" in the 
> "name" field, and I filter to return only documents having a vector 
> similarity of at least 0.99. This query was working fine on Solr 9.0, but on 
> Solr 9.1, I get this error:
>  
> {code:java}
> java.lang.RuntimeException: java.lang.StackOverflowErrorat 
> org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:840)at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:641)at 
> org.apache.solr.servlet.SolrDispatchFilter.dispatch(SolrDispatchFilter.java:250)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.lambda/usr/bin/zsh(SolrDispatchFilter.java:218)
> at 
> org.apache.solr.servlet.ServletUtils.traceHttpRequestExecution2(ServletUtils.java:257)
> at 
> org.apache.solr.servlet.ServletUtils.rateLimitRequest(ServletUtils.java:227)  
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:213)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:195)
> at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:201) 
>... (manually supressed for brevity)at 
> java.base/java.lang.Thread.run(Unknown Source)Caused by: 
> java.lang.StackOverflowErrorat 
> org.apache.solr.search.StrParser.getId(StrParser.java:172)at 
> org.apache.solr.search.StrParser.getId(StrParser.java:168)at 
> org.apache.solr.search.QueryParsing.parseLocalParams(QueryParsing.java:100)   
>  at 
> org.apache.solr.search.QueryParsing.parseLocalParams(QueryParsing.java:65)
> at org.apache.solr.search.QParser.getParser(QParser.java:364)at 
> org.apache.solr.search.QParser.getParser(QParser.java:334)at 
> org.apache.solr.search.QParser.getParser(QParser.java:321)at 
> org.apache.solr.search.QueryUtils.parseFilterQueries(QueryUtils.java:244)
> at 
> org.apache.solr.search.neural.KnnQParser.getFilterQuery(KnnQParser.java:93)   
>  at org.apache.solr.search.neural.KnnQParser.parse(KnnQParser.java:83)at 
> org.apache.solr.search.QParser.getQuery(QParser.java:188)at 
> org.apache.solr.search.FunctionQParser.parseValueSource(FunctionQParser.java:384)
> at org.apache.solr.search.FunctionQParser.parse(FunctionQParser.java:94)  
>   at org.apache.solr.search.QParser.getQuery(QParser.java:188)at 
> org.apache.solr.search.FunctionRangeQParserPlugin.parse(FunctionRangeQParserPlugin.java:53)
> at org.apache.solr.search.QParser.getQuery(QParser.java:188)at 
> org.apache.solr.search.QueryUtils.parseFilterQueries(QueryUtils.java:246)
> at 
> org.apache.solr.search.neural.KnnQParser.getFilterQuery(KnnQParser.java:93)   
>  at org.apache.solr.search.neural.KnnQParser.parse(KnnQParser.java:83)at 
> org.apache.solr.search.QParser.getQuery(QParser.java:188)at 
> org.apache.solr.search.FunctionQParser.parseValueSource(FunctionQParser.java:384)
> at org.apache.solr.search.FunctionQParser.parse(FunctionQParser.java:94)  
>   at org.apache.solr.search.QParser.getQuery(QParser.java:188)at 
> org.apache.solr.search.FunctionRangeQParserPlugin.parse(FunctionRangeQParserPlugin.java:53)
> at org.apache.solr.search.QParser.getQuery(QParser.java:188)... 
> (manually supressed for brevity){code}
>  
> The backtrace is much bigger, I'm attaching the