[jira] [Commented] (SOLR-17468) Revamp Ref Guide to feature SolrCloud.

2024-10-02 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886483#comment-17886483
 ] 

David Smiley commented on SOLR-17468:
-

See [this relevant dev-list 
thread|https://lists.apache.org/list?d...@solr.apache.org:2024-2] from February, 
which in turn references other threads.  Cassandra, I urge you to reply to 
that thread and thus keep the context; maybe copy-paste your argument here.  
(Sorry, I hate "user managed".)

> Revamp Ref Guide to feature SolrCloud.  
> 
>
> Key: SOLR-17468
> URL: https://issues.apache.org/jira/browse/SOLR-17468
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: main (10.0)
>Reporter: Eric Pugh
>Assignee: Eric Pugh
>Priority: Major
>
> Move SolrCloud mode to the top of each page, and update how we refer to 
> "user-managed" or "single node" to just "standalone".  Standalone is the term 
> we all agreed on as the default, despite attempts to move away, after much 
> bikeshedding!
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Commented] (SOLR-16470) Create V2 equivalent of V1 Replication: Get IndexVersion, Get FileStream, Get File List

2024-10-02 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-16470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886479#comment-17886479
 ] 

David Smiley commented on SOLR-16470:
-

If the API requires certain parameters to respond in a reasonable manner, it 
should be modified to throw a 400 and tell the caller what to do.  It could 
also be modified to use defaults different from the standard ones, like 
org.apache.solr.handler.ExportHandler#handleRequestBody does.
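A minimal sketch of that fail-fast idea, outside of Solr's actual handler API (the Map-based signature, method names, and parameter names here are illustrative assumptions):

```java
import java.util.Map;

public class ParamCheckDemo {
    // Returns an HTTP-style status code: 400 when a required parameter is absent,
    // mirroring "throw a 400 and tell the caller what to do".
    static int handle(Map<String, String> params) {
        if (!params.containsKey("command")) {
            // A real handler would throw an exception carrying this guidance.
            System.out.println("Missing required 'command' parameter; "
                + "expected one of: indexversion, filecontent, filelist");
            return 400;
        }
        return 200;
    }

    public static void main(String[] args) {
        System.out.println(handle(Map.of()));                      // missing param
        System.out.println(handle(Map.of("command", "filelist"))); // valid request
    }
}
```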

> Create V2 equivalent of V1 Replication: Get IndexVersion, Get FileStream, Get 
> File List
> ---
>
> Key: SOLR-16470
> URL: https://issues.apache.org/jira/browse/SOLR-16470
> Project: Solr
>  Issue Type: Sub-task
>  Components: v2 API
>Affects Versions: 9.2
>Reporter: Sanjay Dutt
>Priority: Major
>  Labels: V2, newdev
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Replication Handler has no v2 equivalent. This ticket covers a few 
> endpoints from ReplicationHandler, such as Get IndexVersion, Get FileStream, 
> and Get File List.
> Existing V1
> |-GET /solr/collName/replication?command=indexversion-|
> |GET /solr/collName/replication?command=filecontent|
> |-GET /solr/collName/replication?command=filelist-|
> Proposed API design
> |-GET /api/cores/coreName/replication/indexversion-|
> |GET /api/cores/coreName/replication/files/filePath|
> |-GET /api/cores/coreName/replication/files-|
>  A few other pointers that might be helpful, especially for newcomers:
>  * The v1 logic for this API lives in ReplicationHandler
>  * [Some discussion of how APIs work in Solr (Particularly the "APIs in Solr" 
> section.)|https://github.com/apache/solr/blob/main/dev-docs/apis.adoc#apis-in-solr]
>  * [A step-by-step guide to creating APIs using the preferred v2 API 
> framework|https://github.com/apache/solr/blob/main/dev-docs/apis.adoc#writing-jax-rs-apis]
>  * [A recent PR that adds a v2 API, as an 
> example|https://github.com/apache/solr/pull/2144]
>  






[jira] [Resolved] (SOLR-17285) Move RemoteSolrException to SolrClient in v10

2024-10-02 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-17285.
-
Fix Version/s: main (10.0)
   Resolution: Fixed

Thanks for contributing Samuel!

> Move RemoteSolrException to SolrClient in v10
> -
>
> Key: SOLR-17285
> URL: https://issues.apache.org/jira/browse/SOLR-17285
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrJ
>Reporter: David Smiley
>Priority: Major
>  Labels: newdev, pull-request-available
> Fix For: main (10.0)
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> RemoteSolrException lives in BaseHttpSolrClient.  BaseHttpSolrClient should 
> be deprecated; it's sort of replaced by HttpSolrClientBase.  Even though this 
> exception is only for HTTP, SolrClient is a decent parent class.  Or make it 
> top-level.
> To make this transition from 9x to 10x better, we could simply add new 
> classes without removing the old ones in 9x.  The old can subclass the new.  
> Eventually all of BaseHttpSolrClient will be removed.
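The 9x-to-10x pattern described above (add the new class, keep the old one as a deprecated subclass of it) can be sketched like this; the class names are simplified stand-ins, not the actual SolrJ types:

```java
public class MigrationSketch {
    /** New home for the exception (hypothetical simplified form). */
    static class SolrRemoteException extends RuntimeException {
        SolrRemoteException(String msg) { super(msg); }
    }

    /** Old name, retained in 9.x for compatibility only; subclasses the new type. */
    @Deprecated
    static class RemoteSolrException extends SolrRemoteException {
        RemoteSolrException(String msg) { super(msg); }
    }

    public static void main(String[] args) {
        // Code that catches the new type also catches throws of the old type,
        // so callers can migrate before 10.x removes the old class.
        try {
            throw new RemoteSolrException("legacy");
        } catch (SolrRemoteException e) {
            System.out.println(e.getMessage());
        }
    }
}
```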






[jira] [Updated] (SOLR-17473) Apply formatting to build source files

2024-10-02 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-17473:

Summary: Apply formatting to build source files  (was: Improve formatting 
in infrastructure classes)

> Apply formatting to build source files
> --
>
> Key: SOLR-17473
> URL: https://issues.apache.org/jira/browse/SOLR-17473
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Gradle
>Reporter: Christos Malliaridis
>Priority: Minor
>  Labels: formatting, gradle
>
> Right now, Java classes in the Gradle build are not formatted or checked by 
> tidy and forbidden-apis.
> This can be tackled in a similar way as demonstrated in 
> [Lucene#PR13484|https://github.com/apache/lucene/pull/13484], by moving 
> affected files / classes into a composite included build and running tidy 
> etc. on it via "gradle.includedBuilds".
>  






[jira] [Resolved] (SOLR-17454) ERROR message in logs with multithreaded searches

2024-09-27 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-17454.
-
Resolution: Fixed

Thanks Andrew!

> ERROR message in logs with multithreaded searches
> -
>
> Key: SOLR-17454
> URL: https://issues.apache.org/jira/browse/SOLR-17454
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 9.7
> Environment: Solr 9.7.0
> Ubuntu 22.04 LTS
>Reporter: Andrew Hankinson
>Priority: Minor
>  Labels: docs, error, log-level, multithreaded, 
> pull-request-available
> Fix For: 9.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I sent a message to the Users mailing list about this, but received no 
> response. However, I think it is still a problem. 
> When searching with 9.7.0 and enabled multithreaded search, I now get an 
> ERROR message in my logs: 
> {code:java}
> 2024-09-16 08:32:05.795 ERROR (qtp1756573246-34-null-1) [c: s: r: x:core_name 
> t:null-1] o.a.s.s.MultiThreadedSearcher raw read max=5922019 {code}
> The max number is the total number of documents in the core.
> I've tracked it down to this part of the code:
> [https://github.com/apache/solr/blob/5bc7c1618e05b35bd0fa8471ae09329357a82036/solr/core/src/java/org/apache/solr/search/MultiThreadedSearcher.java#L86-L91]
> I'm not entirely convinced that an ERROR level message is necessary here? 
>  * The query seems to still function;
>  * Once the error condition is logged, the code seems to create a new doc set 
> and continues;
>  * The documentation doesn't suggest anything for how to avoid this? I'm not 
> sure why "needDocSet" is true here, and how it can be anything otherwise?
> Surely an "info" or "warn" log message is more appropriate for these cases? 
> Unless it really is an error condition, but then the docs should be updated 
> to mention what could be done to avoid the error?






[jira] [Commented] (SOLR-17414) multiThreaded=true can result in RejectedExecutionException

2024-09-27 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885462#comment-17885462
 ] 

David Smiley commented on SOLR-17414:
-

(PR is ready)

> multiThreaded=true can result in RejectedExecutionException
> ---
>
> Key: SOLR-17414
> URL: https://issues.apache.org/jira/browse/SOLR-17414
> Project: Solr
>  Issue Type: Bug
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
>  Labels: pull-request-available
> Attachments: build-out2.txt
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Since the new multiThreaded search feature landed, I see a new test
> failure involving "RejectedExecutionException" being thrown 
> [link|https://ge.apache.org/s/5ack462ji4mlu/tests/task/:solr:core:test/details/org.apache.solr.search.TestRealTimeGet/testStressGetRealtime?top-execution=1].
> It is thrown at a low level in Lucene building TermStates
> concurrently.  I doubt the problem is specific to that test
> (TestRealTimeGet) but that test might induce more activity than most
> tests, thus crossing some thresholds like the queue size -- apparently
> 1000.
> *I don't think we should be throwing a RejectedExecutionException
> when running a Search query*






[jira] [Commented] (SOLR-17441) MetricUtils optimization: skip unreadable properties

2024-09-27 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885458#comment-17885458
 ] 

David Smiley commented on SOLR-17441:
-

FYI Solr has a great "benchmark" module, and it's pretty easy to run a request 
like this a bunch of times with detailed output.

> MetricUtils optimization: skip unreadable properties
> 
>
> Key: SOLR-17441
> URL: https://issues.apache.org/jira/browse/SOLR-17441
> Project: Solr
>  Issue Type: Improvement
>Reporter: Haythem Khiri
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 9.8
>
> Attachments: Screenshot 2024-09-27 at 11.28.28.png, Screenshot 
> 2024-09-27 at 11.28.51.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The {{/solr/admin/info/system}} endpoint, used by the Solr Operator for 
> start-up and liveness probes, can fail if response times are excessive. This 
> update optimizes the addMXBeanMetrics method by skipping unreadable 
> properties earlier, minimizing exceptions and boosting performance. This 
> leads to more efficient MBean metric collection, enhancing monitoring and 
> diagnostics.






[jira] [Resolved] (SOLR-17441) MetricUtils optimization: skip unreadable properties

2024-09-27 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-17441.
-
Fix Version/s: 9.8
   Resolution: Fixed

Nice; thanks Haythem!

> MetricUtils optimization: skip unreadable properties
> 
>
> Key: SOLR-17441
> URL: https://issues.apache.org/jira/browse/SOLR-17441
> Project: Solr
>  Issue Type: Improvement
>Reporter: Haythem Khiri
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 9.8
>
> Attachments: Screenshot 2024-09-27 at 11.28.28.png, Screenshot 
> 2024-09-27 at 11.28.51.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The {{/solr/admin/info/system}} endpoint, used by the Solr Operator for 
> start-up and liveness probes, can fail if response times are excessive. This 
> update optimizes the addMXBeanMetrics method by skipping unreadable 
> properties earlier, minimizing exceptions and boosting performance. This 
> leads to more efficient MBean metric collection, enhancing monitoring and 
> diagnostics.






[jira] [Resolved] (SOLR-17448) Audit usage of ExecutorService#submit in Solr codebase

2024-09-27 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-17448.
-
Fix Version/s: 9.8
   Resolution: Fixed

Also backported to 9.8 – 
[24bd803|https://github.com/apache/solr/commit/24bd8039173a7270826301f982342ab1455429e8]

Thanks for contributing Andrey!

> Audit usage of ExecutorService#submit in Solr codebase
> --
>
> Key: SOLR-17448
> URL: https://issues.apache.org/jira/browse/SOLR-17448
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 9.7
>Reporter: Andrey Bozhko
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 9.8
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> There are quite a few places in the Solr codebase where a background task is 
> created by invoking the `ExecutorService#submit(...)` method, but where the 
> reference to the returned future is not retained.
> So if the background task fails for any reason, and the task doesn't itself 
> have a try-catch block to log the failure, the failure will go completely 
> unnoticed.
>  
> This ticket is to review the usage of the ExecutorService#submit method in the 
> codebase, and replace those calls with Executor#execute where appropriate.
>  
> Originally brought up in the dev mailing list: 
> [https://lists.apache.org/thread/5f1965rltcspgw0j8nzcn2qnz9l4s8qm]
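The failure mode can be demonstrated in isolation: a task passed to submit() has its exception captured in the (dropped) Future, while execute() lets it reach the thread's uncaught-exception handler. This is a self-contained sketch, not Solr code:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;

public class SubmitVsExecute {
    /** Runs a failing task; returns true if the uncaught-exception handler saw the failure. */
    static boolean uncaughtHandlerFired(boolean useSubmit) throws InterruptedException {
        CountDownLatch sawIt = new CountDownLatch(1);
        ThreadFactory tf = r -> {
            Thread t = new Thread(r);
            t.setUncaughtExceptionHandler((thread, e) -> sawIt.countDown());
            return t;
        };
        ExecutorService pool = Executors.newSingleThreadExecutor(tf);
        Runnable failing = () -> { throw new RuntimeException("boom"); };
        if (useSubmit) {
            pool.submit(failing);   // exception is captured in the Future we just dropped
        } else {
            pool.execute(failing);  // exception propagates to the uncaught handler
        }
        pool.shutdown();
        // Give the handler up to one second to fire; this times out for submit().
        return sawIt.await(1, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("submit():  handler fired = " + uncaughtHandlerFired(true));
        System.out.println("execute(): handler fired = " + uncaughtHandlerFired(false));
    }
}
```

With submit(), the handler never fires and the failure is invisible unless someone calls get() on the Future; with execute(), it at least reaches the thread's uncaught-exception handler, which is the behavior the ticket prefers where the Future is unused.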






[jira] [Resolved] (SOLR-1990) blockUntilFinished() is called in StreamingUpdateSolrServer more often than it should

2024-09-26 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-1990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-1990.

Fix Version/s: 4.6
   Resolution: Duplicate

Marking as duplicate of 2 other issues that resolved the matter.

> blockUntilFinished() is called in StreamingUpdateSolrServer more often than 
> it should
> -
>
> Key: SOLR-1990
> URL: https://issues.apache.org/jira/browse/SOLR-1990
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 1.4.1
>Reporter: ofer fort
>Priority: Major
> Fix For: 4.6
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> In StreamingUpdateSolrServer.request(), a commit/optimize request is 
> identified by its having no documents...
> {code}
> // this happens for commit...
> if( req.getDocuments()==null || req.getDocuments().isEmpty() ) {
>   blockUntilFinished();
> {code}
> ...but there are other situations where an UpdateRequest will have no 
> documents (delete, updates using stream.url or stream.file, etc...)






[jira] [Commented] (SOLR-17256) Remove SolrRequest.getBasePath setBasePath

2024-09-26 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885197#comment-17885197
 ] 

David Smiley commented on SOLR-17256:
-

Do we assume that the lambda doesn't really need to return anything?  Should 
the lambda actually be a SolrRequest?  If a SolrRequest, then the method returns 
whatever the SolrRequest's response type is.

I hate the ThreadLocal, but it's an implementation detail that could be 
eliminated with more work.

It's unclear whether we need to bother with non-Http2SolrClients, but we could 
do so if someone puts in the time.

> Remove SolrRequest.getBasePath setBasePath
> --
>
> Key: SOLR-17256
> URL: https://issues.apache.org/jira/browse/SOLR-17256
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: David Smiley
>Priority: Minor
>  Labels: newdev, pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> SolrRequest has a getBasePath & setBasePath.  The naming is poor; it's the 
> URL base to the Solr node, like "http://localhost:8983/solr".  It's only 
> recognized by HttpSolrClient; LBSolrClient (used by CloudSolrClient) ignores 
> it and will in fact mutate the passed-in request to its liking, which is 
> rather ugly because it means a request cannot be used concurrently even if the 
> user wants to.  But moreover I think there's a conceptual discordance in 
> placing this concept on SolrRequest, given that some clients want to route 
> requests to nodes *they* choose.  I propose removing this from SolrRequest 
> and instead adding a method specific to HttpSolrClient.  Almost all existing 
> usages of setBasePath immediately execute the request on an HttpSolrClient, 
> so they should be easy to change.
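One hypothetical shape for such a client-specific method, with the base URL scoped to a single call instead of stored on the request object; all names here are illustrative assumptions, not an actual SolrJ API:

```java
import java.util.function.Function;

public class PerCallBaseUrl {
    /** Stand-in for an HTTP client with a per-call base-URL method. */
    static class Client {
        // The base URL is a parameter of this one call and is never stored on a
        // request object, so the same request could be used concurrently
        // against different nodes without mutation.
        <R> R requestWithBaseUrl(String baseUrl, Function<String, R> call) {
            return call.apply(baseUrl + "/admin/ping");
        }
    }

    public static void main(String[] args) {
        Client client = new Client();
        String resolved = client.requestWithBaseUrl("http://localhost:8983/solr", u -> u);
        System.out.println(resolved);
    }
}
```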






[jira] [Commented] (SOLR-17441) MetricUtils optimization: skip unreadable properties

2024-09-25 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884757#comment-17884757
 ] 

David Smiley commented on SOLR-17441:
-

It'd be useful to communicate in the description what Solr operation(s) this 
impacts.

> MetricUtils optimization: skip unreadable properties
> 
>
> Key: SOLR-17441
> URL: https://issues.apache.org/jira/browse/SOLR-17441
> Project: Solr
>  Issue Type: Improvement
>Reporter: Haythem Khiri
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Enhance the property introspection logic by skipping unreadable properties 
> early in the process. By checking if desc.getReadMethod() is null before 
> attempting to access the property, we can avoid unnecessary attempts to 
> access properties that do not have a read method. This will reduce the number 
> of exceptions thrown and the overhead associated with handling those 
> exceptions, improving overall performance and stability.
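The early-skip described above can be illustrated with plain java.beans introspection; this is a standalone sketch, not the actual MetricUtils code:

```java
import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.ArrayList;
import java.util.List;

public class SkipUnreadableDemo {
    /** Example bean: one readable property, one write-only (no getter) property. */
    public static class Bean {
        private String name = "x";
        public String getName() { return name; }
        public void setWriteOnly(String v) { /* no getter: unreadable property */ }
    }

    static List<String> readableProperties(Class<?> cls) throws IntrospectionException {
        List<String> out = new ArrayList<>();
        BeanInfo info = Introspector.getBeanInfo(cls, Object.class);
        for (PropertyDescriptor desc : info.getPropertyDescriptors()) {
            // The optimization: consult getReadMethod() before any invoke attempt,
            // instead of invoking and then handling the resulting exception.
            if (desc.getReadMethod() == null) continue;
            out.add(desc.getName());
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readableProperties(Bean.class));
    }
}
```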






[jira] [Updated] (SOLR-17441) MetricUtils optimization: skip unreadable properties

2024-09-25 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-17441:

Summary: MetricUtils optimization: skip unreadable properties  (was: Skip 
Unreadable Properties Early to Reduce Overhead)

> MetricUtils optimization: skip unreadable properties
> 
>
> Key: SOLR-17441
> URL: https://issues.apache.org/jira/browse/SOLR-17441
> Project: Solr
>  Issue Type: Improvement
>Reporter: Haythem Khiri
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Enhance the property introspection logic by skipping unreadable properties 
> early in the process. By checking if desc.getReadMethod() is null before 
> attempting to access the property, we can avoid unnecessary attempts to 
> access properties that do not have a read method. This will reduce the number 
> of exceptions thrown and the overhead associated with handling those 
> exceptions, improving overall performance and stability.






[jira] [Commented] (SOLR-16503) Switch UpdateShardHandler.getDefaultHttpClient to Jetty HTTP2

2024-09-24 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-16503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884493#comment-17884493
 ] 

David Smiley commented on SOLR-16503:
-

Sanjay is working on this one, but you're welcome to review and maybe switch 
over some usages of it once it's in place (there will be separate PRs instead of 
a mega PR, all on the same JIRA).  The latest PR introduces a CoreContainer 
method since it's used so pervasively.  UpdateShardHandler is a terrible home 
for it.  He and I discussed where it should live quite a bit; we explored 
another option already and settled here.

> Switch UpdateShardHandler.getDefaultHttpClient to Jetty HTTP2
> -
>
> Key: SOLR-16503
> URL: https://issues.apache.org/jira/browse/SOLR-16503
> Project: Solr
>  Issue Type: Sub-task
>Reporter: David Smiley
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2024-03-16 at 9.14.36 PM.png
>
>  Time Spent: 9h
>  Remaining Estimate: 0h
>
> Much of Solr's remaining use of Apache HttpClient (HTTP 1) is due to 
> {{org.apache.solr.update.UpdateShardHandler#getDefaultHttpClient}}, which 
> underlies most Solr-to-Solr connectivity.  This also underlies the 
> {{{}CoreContainer.getSolrClientCache{}}}.  Let's switch to Jetty (HTTP 2).
> 
> In SolrClientCache in particular:
>  * Switch use of CloudLegacySolrClient.Builder to CloudSolrClient.Builder
>  * Switch use of HttpSolrClient.Builder to Http2SolrClient.Builder
>  * Undeprecate all the methods here; they should not have been deprecated in 
> the first place.
>  * The constructor: switch from Apache HttpClient to a Jetty HttpClient.






[jira] [Created] (SOLR-17461) (cleanup) Move ClusterState string "Interner" json parser to Utils

2024-09-24 Thread David Smiley (Jira)
David Smiley created SOLR-17461:
---

 Summary: (cleanup) Move ClusterState string "Interner" json parser 
to Utils
 Key: SOLR-17461
 URL: https://issues.apache.org/jira/browse/SOLR-17461
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley


ClusterState.setStrInternerParser and its functionality are potentially of 
general utility, and it's also a bit distracting in ClusterState.  I also don't 
like CoreContainer being involved in initializing it (CoreContainer is doing 
too much!).  It's done there because ClusterState is in SolrJ, without Caffeine 
being on the classpath.  Instead, imagine a class in solr-core that implements 
this Function.  Utils could then self-initialize via reflection, detecting 
whether that class is available and otherwise gracefully resorting to the 
non-interning mechanism.  No touching CoreContainer, which is way too busy 
doing many things.
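The reflective self-initialization idea might look roughly like this; the probed class name is hypothetical and the fallback is deliberately simplified:

```java
import java.nio.charset.StandardCharsets;
import java.util.function.Function;

public class OptionalImplLoader {
    /**
     * Tries to load an optional parser implementation by name via reflection;
     * if the class (or a dependency of it, e.g. Caffeine) is absent, falls
     * back to a plain non-interning parser. No container involvement needed.
     */
    @SuppressWarnings("unchecked")
    static Function<byte[], Object> loadParser(String implClassName) {
        try {
            Class<?> cls = Class.forName(implClassName);
            return (Function<byte[], Object>) cls.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            // Graceful fallback: a simple parse with no string interning.
            return bytes -> new String(bytes, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) {
        Function<byte[], Object> parser = loadParser("org.example.NoSuchInterningParser");
        System.out.println(parser.apply("{}".getBytes(StandardCharsets.UTF_8)));
    }
}
```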






[jira] [Commented] (SOLR-5951) SolrDispatchFilter no longer displays useful error message on startup when logging jars are missing

2024-09-23 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-5951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884083#comment-17884083
 ] 

David Smiley commented on SOLR-5951:


[~uschindler] , I think the problem of not having the right logging JARs should 
be a non-issue since Solr 5, where Solr insists on Jetty pre-packaged with our 
opinionated choice of logging JARs.  If someone messes with them, they are on 
their own to figure out the right ones.  I'm skeptical a person would really 
need the code change here to figure that out.  I'd like to remove 
CheckLoggingConfiguration and BaseSolrFilter and BaseSolrServlet, which exist 
in service of it.  WDYT?  My motivation is merely a small simplification.

> SolrDispatchFilter no longer displays useful error message on startup when 
> logging jars are missing
> --
>
> Key: SOLR-5951
> URL: https://issues.apache.org/jira/browse/SOLR-5951
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.7, 4.7.1
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: 4.7.2, 4.8, 6.0
>
> Attachments: SOLR-5951.patch
>
>
> We no longer have logging jars in the webapp since SOLR-3706. Because of this 
> we added an extra check in SolrDispatchFilter's ctor to print a nice exception 
> if the logging jars were missing. This check was unfortunately never tested 
> and recently broke:
> The check delays initialization of the Logger instance to inside a try-catch 
> block inside the explicit ctor. If it fails with ClassNotFound, it throws 
> Exception.
> Recently we upgraded to a newer HttpClient version. Unfortunately 
> SolrDispatchFilter also has an implicit field initializer a few lines before 
> the main constructor:
> {code:java}
>   protected final HttpClient httpClient = HttpClientUtil.createClient(new 
> ModifiableSolrParams()); // <-- this breaks the detection
>   
>   private static final Charset UTF8 = StandardCharsets.UTF_8;
>   public SolrDispatchFilter() {
> try {
>   log = LoggerFactory.getLogger(SolrDispatchFilter.class);
> } catch (NoClassDefFoundError e) {
>   throw new SolrException(
>   ErrorCode.SERVER_ERROR,
>   "Could not find necessary SLF4j logging jars. If using Jetty, the 
> SLF4j logging jars need to go in "
>   +"the jetty lib/ext directory. For other containers, the 
> corresponding directory should be used. "
>   +"For more information, see: 
> http://wiki.apache.org/solr/SolrLogging",
>   e);
> }
>   }
> {code}
> The first line above, {{HttpClientUtil.createClient(new 
> ModifiableSolrParams());}}, breaks the whole thing, because it is executed 
> before the declared constructor. The user just sees a ClassNotFoundEx at this 
> line of code; the nice error message is hidden.
> Because this is so easy to break, we should make the whole thing safer 
> (and maybe test it). 2 options:
> # Into the webapp add a fake Servlet (not bound to anything, just loaded 
> first) that does not use any Solr classes at all, nothing only plain java
> # Alternatively add a Superclass between ServletFilter and SolrDispatchFilter 
> (pkg-private). When the servlet container loads SolrDispatchFilter, it has in 
> any case to first load the superclass. And this superclass does the check and 
> throws ServletException or whatever (no Solr Exception) with the message from 
> the current code.
> I tend toward the second approach, because it does not need to modify web-inf. 
> It will also work with other Solr servlets; they must just extend this hidden 
> class. I will provide a patch for that.
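The ordering pitfall itself is easy to reproduce in isolation: instance field initializers run before the constructor body, so a throwing initializer preempts any try/catch written in the constructor. A standalone sketch:

```java
public class InitOrderDemo {
    static Object failingInit() { throw new IllegalStateException("field init ran first"); }

    static class Broken {
        // Field initializers are executed before the constructor body,
        // exactly like the implicit HttpClientUtil.createClient(...) field above.
        final Object resource = failingInit();

        Broken() {
            try {
                // Never reached: the field initializer above already threw,
                // so this try/catch cannot intercept the failure.
            } catch (Exception e) {
                // unreachable for the initializer's exception
            }
        }
    }

    public static void main(String[] args) {
        try {
            new Broken();
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```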






[jira] [Updated] (SOLR-17458) Metrics: switch from DropWizard to OpenTelemetry

2024-09-18 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-17458:

Summary: Metrics: switch from DropWizard to OpenTelemetry  (was: 
OpenTelemetry integration for metrics)

> Metrics: switch from DropWizard to OpenTelemetry
> 
>
> Key: SOLR-17458
> URL: https://issues.apache.org/jira/browse/SOLR-17458
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Matthew Biscocho
>Priority: Major
>
> Solr currently captures metrics with Dropwizard 4. There were some 
> limitations to Dropwizard, the biggest one being metrics without 
> tags/attributes, which makes aggregation difficult and requires the 
> Prometheus Exporter to work with Grafana.
> Creating this to track and explore integrating OpenTelemetry into Solr and 
> possibly replacing Dropwizard, giving exposure to a larger set of 
> observability tools.






[jira] [Updated] (SOLR-17458) OpenTelemetry integration for metrics

2024-09-18 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-17458:

Priority: Major  (was: Minor)

> OpenTelemetry integration for metrics
> -
>
> Key: SOLR-17458
> URL: https://issues.apache.org/jira/browse/SOLR-17458
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Matthew Biscocho
>Priority: Major
>
> Solr currently captures metrics with Dropwizard 4. There were some 
> limitations to Dropwizard, the biggest one being metrics without 
> tags/attributes, which makes aggregation difficult and requires the 
> Prometheus Exporter to work with Grafana.
> Creating this to track and explore integrating OpenTelemetry into Solr and 
> possibly replacing Dropwizard, giving exposure to a larger set of 
> observability tools.






[jira] [Commented] (SOLR-17453) Replace CloudUtil.waitForState and some TimeOut with ZkStateReader.waitForState

2024-09-17 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17882448#comment-17882448
 ] 

David Smiley commented on SOLR-17453:
-

Sounds good Pierre; thanks for volunteering!

_FYI note the issue links to a dev list thread where there's more context and 
even a suggested replacement for one spot I noticed._

> Replace CloudUtil.waitForState and some TimeOut with 
> ZkStateReader.waitForState
> ---
>
> Key: SOLR-17453
> URL: https://issues.apache.org/jira/browse/SOLR-17453
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Minor
>  Labels: newdev
>
> We should universally use ZkStateReader.waitForState when waiting for the 
> ClusterState to change based on a predicate.  SolrCloudTestCase.waitForState 
> is fine since it calls the former.  But CloudUtil.waitForState does not; it 
> should be replaced.  Additionally, TimeOut is used in some places where 
> waitForState ought to be used, like CreateCollectionCmd.
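The predicate-waiting pattern (what waitForState provides, versus hand-rolled TimeOut loops) can be sketched generically; this is an illustration under simplified assumptions, not Solr's implementation, which watches ZooKeeper state rather than polling:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.function.Predicate;
import java.util.function.Supplier;

public class WaitForStateSketch {
    /** Waits until the supplied state satisfies the predicate, or times out. */
    static <T> T waitFor(Supplier<T> state, Predicate<T> predicate,
                         long timeout, TimeUnit unit)
            throws InterruptedException, TimeoutException {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        while (System.nanoTime() < deadline) {
            T current = state.get();
            if (predicate.test(current)) {
                return current;
            }
            Thread.sleep(10); // the real implementation reacts to ZK watches instead
        }
        throw new TimeoutException("predicate not satisfied in time");
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        // Simulated cluster state that becomes "ready" after ~50ms.
        String result = waitFor(
            () -> System.currentTimeMillis() - start > 50 ? "ready" : "starting",
            "ready"::equals, 2, TimeUnit.SECONDS);
        System.out.println(result);
    }
}
```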






[jira] [Commented] (SOLR-17454) ERROR message in logs with multithreaded searches

2024-09-17 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17882388#comment-17882388
 ] 

David Smiley commented on SOLR-17454:
-

It's a sloppy practice but some devs do log.error at dev-time instead of 
log.debug because you don't need to go tweak a log level in a configuration.  
And then it gets forgotten.  An added "// nocommit" comment on our project would 
have prevented it from shipping.  I remember long ago, Oracle embarrassingly 
logged a bunch of stuff on the error log in their published JDBC driver.   
Ooops!

> ERROR message in logs with multithreaded searches
> -
>
> Key: SOLR-17454
> URL: https://issues.apache.org/jira/browse/SOLR-17454
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 9.7
> Environment: Solr 9.7.0
> Ubuntu 22.04 LTS
>Reporter: Andrew Hankinson
>Priority: Minor
>  Labels: docs, error, log-level, multithreaded
> Fix For: 9.8
>
>
> I sent a message to the Users mailing list about this, but received no 
> response. However, I think it is still a problem. 
> When searching with 9.7.0 and enabled multithreaded search, I now get an 
> ERROR message in my logs: 
> {code:java}
> 2024-09-16 08:32:05.795 ERROR (qtp1756573246-34-null-1) [c: s: r: x:core_name 
> t:null-1] o.a.s.s.MultiThreadedSearcher raw read max=5922019 {code}
> The max number is the total number of documents in the core.
> I've tracked it down to this part of the code:
> [https://github.com/apache/solr/blob/5bc7c1618e05b35bd0fa8471ae09329357a82036/solr/core/src/java/org/apache/solr/search/MultiThreadedSearcher.java#L86-L91]
> I'm not entirely convinced that an ERROR level message is necessary here? 
>  * The query seems to still function;
>  * Once the error condition is logged, the code seems to create a new doc set 
> and continues;
>  * The documentation doesn't suggest anything for how to avoid this? I'm not 
> sure why "needDocSet" is true here, and how it can be anything otherwise?
> Surely an "info" or "warn" log message is more appropriate for these cases? 
> Unless it really is an error condition, but then the docs should be updated 
> to mention what could be done to avoid the error?






[jira] [Created] (SOLR-17456) TransactionLog NPE

2024-09-16 Thread David Smiley (Jira)
David Smiley created SOLR-17456:
---

 Summary: TransactionLog NPE
 Key: SOLR-17456
 URL: https://issues.apache.org/jira/browse/SOLR-17456
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley


In an erroneous case, a TransactionLog should throw an exception if an 
unexpected log file exists instead of merely logging a warning in its 
constructor.  The latter leaves the object in a partially constructed state that 
leads to NPEs when it's used later.






[jira] [Commented] (SOLR-17256) Remove SolrRequest.getBasePath setBasePath

2024-09-16 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17882198#comment-17882198
 ] 

David Smiley commented on SOLR-17256:
-

Another approach would be for Http2SolrClient.withBaseUrl to take only the URL 
and _return_ a SolrClient that is scoped to that URL.  Close would no-op; it'd 
hold no resources other than a reference back to the original Http2SolrClient.  
requestAsync should perhaps be a base method on SolrClient.
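A minimal stdlib sketch of that scoped-view pattern (all names invented; this is 
not actual SolrJ code, just the shape being proposed):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// The root client owns the real resources (connection pool, executor, ...).
class RootClient implements AutoCloseable {
    private final AtomicBoolean closed = new AtomicBoolean(false);

    String request(String baseUrl, String path) {
        if (closed.get()) throw new IllegalStateException("root client is closed");
        return "GET " + baseUrl + path; // stand-in for the actual HTTP round trip
    }

    @Override public void close() { closed.set(true); }
}

// A URL-scoped view holds only a reference back to the root, so close() is a
// no-op and one scoped view per node URL is cheap to create.
class ScopedClient implements AutoCloseable {
    private final RootClient root;
    private final String baseUrl;

    ScopedClient(RootClient root, String baseUrl) {
        this.root = root;
        this.baseUrl = baseUrl;
    }

    String request(String path) { return root.request(baseUrl, path); }

    @Override public void close() { /* no resources of its own to release */ }
}
```

Closing a scoped view leaves it (and the root) fully usable; only closing the 
root releases anything.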

> Remove SolrRequest.getBasePath setBasePath
> --
>
> Key: SOLR-17256
> URL: https://issues.apache.org/jira/browse/SOLR-17256
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: David Smiley
>Priority: Minor
>  Labels: newdev, pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> SolrRequest has a getBasePath & setBasePath.  The naming is poor; it's the 
> URL base to the Solr node like "http://localhost:8983/solr";.  It's only 
> recognized by HttpSolrClient; LBSolrClient (used by CloudSolrClient) ignores 
> it and will in fact mutate the passed in request to its liking, which is 
> rather ugly because it means a request cannot be used concurrently if the 
> user wants to.  But moreover I think there's a conceptual discordance of 
> placing this concept on SolrRequest given that some clients want to route 
> requests to nodes *they* choose.  I propose removing this from SolrRequest 
> and instead adding a method specific to HttpSolrClient.  Almost all existing 
> usages of setBasePath immediately execute the request on an HttpSolrClient, 
> so should be easy to change.






[jira] [Created] (SOLR-17453) Replace CloudUtil.waitForState and some TimeOut with ZkStateReader.waitForState

2024-09-15 Thread David Smiley (Jira)
David Smiley created SOLR-17453:
---

 Summary: Replace CloudUtil.waitForState and some TimeOut with 
ZkStateReader.waitForState
 Key: SOLR-17453
 URL: https://issues.apache.org/jira/browse/SOLR-17453
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley


We should universally use ZkStateReader.waitForState when waiting for the 
ClusterState to change based on a predicate.  SolrCloudTestCase.waitForState is 
fine since it calls the former.  But CloudUtil.waitForState does not; it should 
be replaced.  Additionally, TimeOut is used in some places where waitForState 
ought to be used, like CreateCollectionCmd.






[jira] [Commented] (SOLR-17449) bboxField subfield Error in atomic updating

2024-09-14 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17881838#comment-17881838
 ] 

David Smiley commented on SOLR-17449:
-

Thanks for reporting the bug.  docValues isn't necessarily required; it depends 
on what your search requirements are.  If you are ranking (i.e. sorting) in 
some way, then it's required.  But merely filtering -- no.

It'd be helpful to give a basic example of what the atomic update looks like to 
induce the issue.  Then whoever looks into this further could write a test and 
eventually fix it hopefully.

> bboxField subfield Error in atomic updating
> ---
>
> Key: SOLR-17449
> URL: https://issues.apache.org/jira/browse/SOLR-17449
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Schema and Analysis
>Affects Versions: main (10.0)
>Reporter: Seunghan Jung
>Priority: Minor
> Attachments: image-2024-09-13-17-45-17-161.png
>
>
> For fields of type `bboxField`, derived fields such as `minX`, `maxX`, 
> `minY`, `maxY`, etc., are added to the schema and indexed.
> However, this causes issues during atomic updates. During an atomic update, 
> when a new document is moved and indexed, the fields of type `bboxField` are 
> re-indexed with `minX`, `maxX`, `minY`, `maxY` just as they were initially. 
> Since the original document already contains these fields, they are indexed 
> again in the new document. However, because they are already indexed by the 
> `bbox` field, this results in duplicate indexing. If the `numberType` 
> attribute of the bbox field type has `docValues=true`, an error occurs due to 
> the docValues being written twice.
> Here is the error message for this case:
> {code:java}
> Caused by: java.lang.IllegalArgumentException: DocValuesField "bbox__maxX" 
> appears more than once in this document (only one value is allowed per field)
>         at 
> org.apache.lucene.index.NumericDocValuesWriter.addValue(NumericDocValuesWriter.java:53)
>  ~[?:?]
>         at 
> org.apache.lucene.index.IndexingChain.indexDocValue(IndexingChain.java:937) 
> ~[?:?]
>         at 
> org.apache.lucene.index.IndexingChain.processField(IndexingChain.java:723) 
> ~[?:?]
>         at 
> org.apache.lucene.index.IndexingChain.processDocument(IndexingChain.java:576) 
> ~[?:?]
>         at 
> org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:242)
>  ~[?:?]
>         at 
> org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:432)
>  ~[?:?]
>         at 
> org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1545) 
> ~[?:?]
>         at 
> org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1521) 
> ~[?:?]
>         at 
> org.apache.solr.update.DirectUpdateHandler2.updateDocOrDocValues(DirectUpdateHandler2.java:1062)
>  ~[?:?]
>         at 
> org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:421)
>  ~[?:?]
>         at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:374)
>  ~[?:?]
>         at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:311)
>  ~[?:?]{code}
>  
> Of course, setting the docValues="false" for the field type used in 
> numberType resolves the issue.
> However, this is not explained in the[ Solr Reference 
> Guide|https://solr.apache.org/guide/solr/latest/query-guide/spatial-search.html#bboxfield].
>  Instead, the example schema shows docValues="true", which makes it seem like 
> this is how it should be configured.
> !image-2024-09-13-17-45-17-161.png|width=1037,height=233!
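As the description notes, declaring the numberType without docValues works around 
the duplicate-DocValues error.  An illustrative schema fragment (field type names 
are examples, not Solr defaults):

```xml
<fieldType name="bbox" class="solr.BBoxField"
           geo="true" distanceUnits="kilometers" numberType="bbox_double"/>
<!-- docValues="false" avoids the "appears more than once" error during atomic
     updates, at the cost of not being able to sort on the derived sub-fields -->
<fieldType name="bbox_double" class="solr.DoublePointField" docValues="false"/>
```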






[jira] [Commented] (SOLR-13234) Prometheus Metric Exporter Not Threadsafe

2024-09-11 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17881123#comment-17881123
 ] 

David Smiley commented on SOLR-13234:
-

Years later, observing this PR creates an Http SolrClient per node.  This isn't 
necessary; one can be used for all the nodes.

> Prometheus Metric Exporter Not Threadsafe
> -
>
> Key: SOLR-13234
> URL: https://issues.apache.org/jira/browse/SOLR-13234
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - prometheus-exporter, metrics
>Affects Versions: 7.6, 8.0
>Reporter: Danyal Prout
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
>  Labels: metric-collector
> Fix For: 7.7.2, 8.1, 9.0
>
> Attachments: SOLR-13234-branch_7x.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The Solr Prometheus Exporter collects metrics when it receives a HTTP request 
> from Prometheus. Prometheus sends this request, on its [scrape 
> interval|https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config].
>  When the time taken to collect the Solr metrics is greater than the scrape 
> interval of the Prometheus server, this results in concurrent metric 
> collection occurring in this 
> [method|https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SolrCollector.java#L86].
>  This method doesn’t appear to be thread safe, for instance you could have 
> concurrent modifications of a 
> [map|https://github.com/apache/lucene-solr/blob/master/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SolrCollector.java#L119].
>  After a while the Solr Exporter processes becomes nondeterministic, we've 
> observed NPE and loss of metrics.
> To address this, I'm proposing the following fixes:
> 1. Read/parse the configuration at startup and make it immutable. 
>  2. Collect metrics from Solr on an interval which is controlled by the Solr 
> Exporter and cache the metric samples to return during Prometheus scraping. 
> Metric collection can be expensive, for example executing arbitrary Solr 
> searches, it's not ideal to allow for concurrent metric collection and on an 
> interval which is not defined by the Solr Exporter.
> There are also a few other performance improvements that we've made while 
> fixing this, for example using the ClusterStateProvider instead of sending 
> multiple HTTP requests to each Solr node to lookup all the cores.
> I'm currently finishing up these changes which I'll submit as a PR.
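The "collect on our own interval, serve a cached snapshot" fix from point 2 can 
be sketched in plain Java (names and the metric map shape are invented for 
illustration, not the exporter's actual types):

```java
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

public class CachingCollector implements AutoCloseable {
    // Scrapes only ever read the latest immutable snapshot, so overlapping
    // Prometheus requests can never mutate shared state.
    private final AtomicReference<Map<String, Double>> snapshot =
        new AtomicReference<>(Map.of());
    private final Supplier<Map<String, Double>> collect;
    // Single thread: collections happen on our schedule and never concurrently.
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();

    public CachingCollector(Supplier<Map<String, Double>> collect, long periodSeconds) {
        this.collect = collect;
        scheduler.scheduleAtFixedRate(this::refresh, periodSeconds, periodSeconds,
            TimeUnit.SECONDS);
    }

    void refresh() { snapshot.set(Map.copyOf(collect.get())); }

    // What the HTTP scrape endpoint returns: cheap and thread-safe.
    public Map<String, Double> scrape() { return snapshot.get(); }

    @Override public void close() { scheduler.shutdownNow(); }
}
```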






[jira] [Commented] (SOLR-17256) Remove SolrRequest.getBasePath setBasePath

2024-09-11 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17881058#comment-17881058
 ] 

David Smiley commented on SOLR-17256:
-

From the description:

bq. But moreover I think there's a conceptual discordance of placing this 
concept on SolrRequest given that some clients want to route requests to nodes 
they choose. 

CloudSolrClient & LBSolrClient.  It's not a deal breaker... we could say 
LBSolrClient is internal so nobody use it please.  CloudSolrClient, we could 
allow the caller in the request to choose instead of CSC finding the right 
node.  It would then somehow have to skip LBSolrClient.  Not sure how much work 
this is.  Nonetheless need to somehow change LBSolrClient so that it stops 
mutating the request.  Our clients shouldn't modify the requests!

Furthermore the EmbeddedSolrServer can't even handle it.  Could throw 
UnsupportedOperationException, okay :-|

So this is why I'm thinking a special request method.

bq. Didn't we have a builder for SolrRequest earlier?

No.

> Remove SolrRequest.getBasePath setBasePath
> --
>
> Key: SOLR-17256
> URL: https://issues.apache.org/jira/browse/SOLR-17256
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: David Smiley
>Priority: Minor
>  Labels: newdev
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> SolrRequest has a getBasePath & setBasePath.  The naming is poor; it's the 
> URL base to the Solr node like "http://localhost:8983/solr";.  It's only 
> recognized by HttpSolrClient; LBSolrClient (used by CloudSolrClient) ignores 
> it and will in fact mutate the passed in request to its liking, which is 
> rather ugly because it means a request cannot be used concurrently if the 
> user wants to.  But moreover I think there's a conceptual discordance of 
> placing this concept on SolrRequest given that some clients want to route 
> requests to nodes *they* choose.  I propose removing this from SolrRequest 
> and instead adding a method specific to HttpSolrClient.  Almost all existing 
> usages of setBasePath immediately execute the request on an HttpSolrClient, 
> so should be easy to change.






[jira] [Commented] (SOLR-13759) Optimize Queries when query filtering by TRA router.field

2024-09-09 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17880413#comment-17880413
 ] 

David Smiley commented on SOLR-13759:
-

Just observing this hasn't been completed yet -- a big opportunity for TRAs to 
be more useful.

> Optimize Queries when query filtering by TRA router.field
> -
>
> Key: SOLR-13759
> URL: https://issues.apache.org/jira/browse/SOLR-13759
> Project: Solr
>  Issue Type: Sub-task
>Reporter: mosh
>Assignee: Gus Heck
>Priority: Minor
> Attachments: QueryVisitorExample.java, SOLR-13759.patch, 
> SOLR-13759.patch, SOLR-13759.patch, SOLR-13759.patch, SOLR-13759.patch, 
> image-2019-12-09-22-45-51-721.png
>
>
> We are currently testing TRA using Solr 7.7, having >300 shards in the alias, 
> with much growth in the coming months.
> The "hot" data(in our case, more recent) will be stored on stronger 
> nodes(SSD, more RAM, etc).
> A proposal of optimizing queries will be by filtering query by date range, by 
> that we will be able to querying the specific TRA collections taking 
> advantage of the TRA mechanism of partitioning data based on date.






[jira] [Commented] (SOLR-9023) Improve idea solrcloud launch config

2024-09-08 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-9023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17880178#comment-17880178
 ] 

David Smiley commented on SOLR-9023:


The point is ultimately to run Solr in your IDE; in this case in IntelliJ IDEA. 
 We no longer generate IntelliJ configs from the build; instead IntelliJ can 
open Solr naturally which is awesome.  However that means, I think, that 
there's no clear way to share an IntelliJ specific "run configuration" with 
other devs.  I have a run config that does this but I have no idea how to share 
it other than saying here's some XML snippet; edit your .idea/workspace.xml and 
add this under /project/component.  Shrug.  Furthermore... it's not a big deal. 
Without this, run "gw dev", then cd to the right dir and do "bin/solr start -f 
-c -a DEBUG_ARGS_HERE" with IntelliJ's debugger listening in advance.  
It's annoying, admittedly, but whatever.

> Improve idea solrcloud launch config
> 
>
> Key: SOLR-9023
> URL: https://issues.apache.org/jira/browse/SOLR-9023
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 6.0
>Reporter: Scott Blum
>Priority: Minor
>  Labels: easyfix, idea, intellij, newbie
>
> Two main problems:
> 1) The solrcloud launch config requires ant to have been run before it works. 
>  This is tolerable if not ideal.
> 2) It uses the precompiled jars in WEB-INF/lib instead of using the IntelliJ 
> compiled classes that reflect up-to-date IDE edits.  Fixing this would be a 
> big win, but may require some digging into setting up the jetty container 
> properly.






[jira] [Commented] (SOLR-3696) LBHttpSolrServer's aliveCheckExecutor is not closed in RecoveryZkTest (and possibly other tests)

2024-09-06 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-3696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879955#comment-17879955
 ] 

David Smiley commented on SOLR-3696:


Searching for "aliveCheckExecutor" in CI build emails, and based on my 
experience with a test failure today, the most common test exhibiting this is 
CollectionsRepairEventListenerTest.
I reviewed LBSolrClient and I think it's properly handling the lifecycle.  It 
won't be resurrected after closure.  So I suspect the LBSolrClient itself might 
be mismanaged (not closed).  I see LBSolrClient isn't using 
ObjectReleaseTracker so I suggest the next step is to do this and hope 
something comes up.

> LBHttpSolrServer's aliveCheckExecutor is not closed in RecoveryZkTest (and 
> possibly other tests)
> 
>
> Key: SOLR-3696
> URL: https://issues.apache.org/jira/browse/SOLR-3696
> Project: Solr
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Priority: Major
>
> LBHttpSolrServer is never shut down properly and leaks pool threads from 
> aliveCheckExecutor.






[jira] [Commented] (SOLR-17440) Add metrics for /admin/collections and /admin/cores per ACTION

2024-09-06 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879914#comment-17879914
 ] 

David Smiley commented on SOLR-17440:
-

TBD on what such metrics should "look" like in terms of metadata.  

All the /admin/collections ones should probably fall under the OVERSEER 
registry, or maybe only some of them?  BTW some actions are dispatched to the 
Overseer (or happen via distributed cluster processing logic, which we can 
pretend are the same for this comment), and some are processed on the receiving 
node always like LIST.   Hmm.

All the /admin/cores ones can be on the node registry.

> Add metrics for /admin/collections and /admin/cores per ACTION
> --
>
> Key: SOLR-17440
> URL: https://issues.apache.org/jira/browse/SOLR-17440
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: David Smiley
>Priority: Major
>
> Solr has "handler" metrics (i.e. HTTP endpoints) for the most part.  But (A) 
> a number of operations for administering the Solr cluster are on two 
> endpoints that are really dispatching endpoints based on an ACTION to a thing 
> that doesn't have metrics.  And (B) when "async" is used, we really want to 
> measure the operation itself, not the HTTP endpoint, which will return 
> trivially quickly in this case.






[jira] [Created] (SOLR-17440) Add metrics for /admin/collections and /admin/cores per ACTION

2024-09-06 Thread David Smiley (Jira)
David Smiley created SOLR-17440:
---

 Summary: Add metrics for /admin/collections and /admin/cores per 
ACTION
 Key: SOLR-17440
 URL: https://issues.apache.org/jira/browse/SOLR-17440
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: metrics
Reporter: David Smiley


Solr has "handler" metrics (i.e. HTTP endpoints) for the most part.  But (A) a 
number of operations for administering the Solr cluster are on two endpoints 
that are really dispatching endpoints based on an ACTION to a thing that 
doesn't have metrics.  And (B) when "async" is used, we really want to measure 
the operation itself, not the HTTP endpoint, which will return trivially 
quickly in this case.






[jira] [Commented] (SOLR-6122) API to cancel an already submitted/running Collections API call

2024-09-05 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-6122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879670#comment-17879670
 ] 

David Smiley commented on SOLR-6122:


My comment on https://github.com/apache/solr/pull/76 is purely an 
implementation detail to avoid code duplication within SolrCloud.  I could 
elaborate on that but would prefer to create a JIRA issue on such a 
refactoring.  This issue here is not that.

On HTTP API harmony, we should look at this mostly from a V2 API standpoint, 
which is undergoing change throughout the 9x release.  The paint is wet; we can 
change anything.  See this [Google 
Sheet|https://cwiki.apache.org/confluence/display/SOLR/SIP-16%3A+Polish+and+Prepare+v2+APIs+for+v1+Deprecation]
 (linked from 
[SIP-16|https://cwiki.apache.org/confluence/display/SOLR/SIP-16%3A+Polish+and+Prepare+v2+APIs+for+v1+Deprecation]).
  CC [~gerlowskija]

> API to cancel an already submitted/running Collections API call
> ---
>
> Key: SOLR-6122
> URL: https://issues.apache.org/jira/browse/SOLR-6122
> Project: Solr
>  Issue Type: Wish
>  Components: SolrCloud
>Reporter: Anshum Gupta
>Priority: Major
>
> Right now we can trigger a long running task with no way to cancel it 
> cleanly. 
> We should have an API that interrupts the already running/submitted 
> collections API call.






[jira] [Commented] (SOLR-17435) TestStressReorder reproducible failures: RejectedExecutionException from o.a.lucene.search.TaskExecutor

2024-09-04 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879328#comment-17879328
 ] 

David Smiley commented on SOLR-17435:
-

I commented an update to SOLR-17414.  I characterized that JIRA issue such that 
it's fundamentally wrong to throw a RejectedExecutionException for 
multi-threaded search.  The stack trace you share here for TestStressReorder is 
the same use-case.  Fixing it as I proposed would not yield such an exception.

> TestStressReorder reproducible failures: RejectedExecutionException from 
> o.a.lucene.search.TaskExecutor
> ---
>
> Key: SOLR-17435
> URL: https://issues.apache.org/jira/browse/SOLR-17435
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Chris M. Hostetter
>Priority: Major
> Attachments: TEST-org.apache.solr.search.TestStressReorder.xml
>
>
> I'm seeing this on main, reproduces reliably...
> {noformat}
> ERROR: The following test(s) have failed:
>   - org.apache.solr.search.TestStressReorder.testStressReorderVersions 
> (:solr:core)
> Test history: 
> https://ge.apache.org/scans/tests?search.rootProjectNames=solr-root&tests.container=org.apache.solr.search.TestStressReorder&tests.test=testStressReorderVersions
>  
> http://fucit.org/solr-jenkins-reports/history-trend-of-recent-failures.html#series/org.apache.solr.search.TestStressReorder.testStressReorderVersions
> Test output: 
> /home/hossman/lucene/solr/solr/core/build/test-results/test/outputs/OUTPUT-org.apache.solr.search.TestStressReorder.txt
> Reproduce with: ./gradlew :solr:core:test --tests 
> "org.apache.solr.search.TestStressReorder.testStressReorderVersions" 
> -Ptests.jvms=5 "-Ptests.jvmargs=-XX:TieredStopAtLevel=1 -XX:+UseParallelGC 
> -XX:ActiveProcessorCount=1 -XX:ReservedCodeCacheSize=120m" 
> -Ptests.seed=3F1CED560F2D6629 -Ptests.file.encoding=ISO-8859-1
> {noformat}
> First exception i noticed in console...
> {noformat}
> ...
>   2> 5615 INFO  (READER11) [n: c: s: r: x: t:] o.a.s.c.S.Request webapp=null 
> path=null params={qt=/get&ids=49&wt=json} status=0 QTime=0
>   2> 5615 INFO  (READER8) [n: c: s: r: x: t:] o.a.s.c.S.Request webapp=null 
> path=null params={qt=/get&ids=12&wt=json} status=0 QTime=0
>   2> 5603 ERROR (READER24) [n: c: s: r: x: t:] o.a.s.h.RequestHandlerBase 
> Server exception
>   2>   => java.util.concurrent.RejectedExecutionException: Task 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$385/0x000100596c40@4b8dfe1
>  rejected from 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor@215a8ea0[Running,
>  pool size = 1, active threads = 1, queued tasks = 894, completed tasks = 
> 57525]
>   2>at 
> java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2055)
>   2> java.util.concurrent.RejectedExecutionException: Task 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$385/0x000100596c40@4b8dfe1
>  rejected from 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor@215a8ea0[Running,
>  pool size = 1, active threads = 1, queued tasks = 894, completed tasks = 
> 57525]
>   2>at 
> java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2055)
>  ~[?:?]
>   2>at 
> java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:825)
>  ~[?:?]
>   2>at 
> java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1355)
>  ~[?:?]
>   2>at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:430)
>  ~[solr-solrj-10.0.0-SNAPSHOT.jar:10.0.0-SNAPSHOT 
> fbd96cf5d3a2187c587d48f9f8c735493a4a0899 [snapshot build, details omitted]]
>   2>at 
> org.apache.lucene.search.TaskExecutor$TaskGroup.invokeAll(TaskExecutor.java:152)
>  ~[lucene-core-9.11.1.jar:9.11.1 0c087dfdd10e0f6f3f6faecc6af4415e671a9e69 - 
> 2024-06-23 12:31:02]
>   2>at 
> org.apache.lucene.search.TaskExecutor.invokeAll(TaskExecutor.java:76) 
> ~[lucene-core-9.11.1.jar:9.11.1 0c087dfdd10e0f6f3f6faecc6af4415e671a9e69 - 
> 2024-06-23 12:31:02]
>   2>at org.apache.lucene.index.TermStates.build(TermStates.java:116) 
> ~[lucene-core-9.11.1.jar:9.11.1 0c087dfdd10e0f6f3f6faecc6af4415e671a9e69 - 
> 2024-06-23 12:31:02]
>   2>at 
> org.apache.lucene.search.TermQuery.createWeight(TermQuery.java:275) 
> ~[lucene-core-9.11.1.jar:9.11.1 0c087dfdd10e0f6f3f6faecc6af4415e671a9e69 - 
> 2024-06-23 12:31:02]
>   2>at 
> org.apache.lucene.search.IndexSearcher.createWeight(IndexSearcher.java:882) 
> ~[lucene-core-9.11.1.jar:9.11.1 0c087dfdd10e0f6f3f6fa

[jira] [Commented] (SOLR-16295) Modernize and Standardize Solr description across all platforms

2024-09-04 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-16295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879224#comment-17879224
 ] 

David Smiley commented on SOLR-16295:
-

Since your blurb already calls out some of those, "multi-modal" is redundant 
(in addition to ambiguous).

> Modernize and Standardize Solr description across all platforms
> ---
>
> Key: SOLR-16295
> URL: https://issues.apache.org/jira/browse/SOLR-16295
> Project: Solr
>  Issue Type: Bug
>  Components: documentation
>Reporter: Houston Putman
>Assignee: Eric Pugh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently everywhere we have a page on "Solr", we have a short description on 
> what the project/product is. They are all roughly the same, but we should try 
> to improve this language and standardize it everywhere.
> The places I can think of currently are:
> * [solr.apache.org|https://solr.apache.org/]
> * [Ref Guide|https://solr.apache.org/guide/solr/latest/]
> * [Github - Solr|https://github.com/apache/solr]
> * [DockerHub - Solr|https://hub.docker.com/_/solr]
> * [ArtifactHub - Solr|https://artifacthub.io/packages/helm/apache-solr/solr]
> The Solr Operator pages don't really give a Solr description, which is fine.
> Please comment if I forgot any, so that we can have a comprehensive list.
> Once we agree on the standardized language, we can then update it everywhere 
> it needs to go (since the above list are managed in a variety of places).






[jira] [Resolved] (SOLR-17434) Jetty relativeRedirectAllowed should be true

2024-09-03 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-17434.
-
Fix Version/s: 9.7
   Resolution: Fixed

> Jetty relativeRedirectAllowed should be true
> 
>
> Key: SOLR-17434
> URL: https://issues.apache.org/jira/browse/SOLR-17434
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 9.7
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For a minor security benefit, avoiding exposing Solr's host & port number in 
> an obscure case:
> [https://github.com/jetty/jetty.project/issues/11014]
> Assuming Solr main/10 moves on to Jetty 12, this configuration change is only 
> applicable to Solr 9.






[jira] [Commented] (SOLR-17434) Jetty relativeRedirectAllowed should be true

2024-09-03 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17879024#comment-17879024
 ] 

David Smiley commented on SOLR-17434:
-

Before:

{noformat}
 curl -0 -v -H "Host:" http://YOURHOSTNAME:8983/

...
< HTTP/1.1 302 Found
< Location: http://YOURIP:8983/solr/
...
{noformat}

The "YOURIP" isn't great.
Preferably the Location header is relative, just containing "/solr/" for this 
example.
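For context, Jetty 10/11 exposes this as a property on {{HttpConfiguration}}. A sketch of what the Solr 9 change could look like in jetty.xml is below; the "httpConfig" Ref id is an assumption, so match it to the HttpConfiguration element in the actual file:

```xml
<!-- Sketch only: emit relative Location headers on redirects (Jetty 10/11).
     The Ref id "httpConfig" is assumed; use the id from your jetty.xml. -->
<Ref refid="httpConfig">
  <Set name="relativeRedirectAllowed">true</Set>
</Ref>
```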

> Jetty relativeRedirectAllowed should be true
> 
>
> Key: SOLR-17434
> URL: https://issues.apache.org/jira/browse/SOLR-17434
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Minor
>
> For a minor security benefit, avoiding exposing Solr's host & port number in 
> an obscure case:
> [https://github.com/jetty/jetty.project/issues/11014]
> Assuming Solr main/10 moves on to Jetty 12, this configuration change is only 
> applicable to Solr 9.






[jira] [Created] (SOLR-17434) Jetty relativeRedirectAllowed should be true

2024-09-03 Thread David Smiley (Jira)
David Smiley created SOLR-17434:
---

 Summary: Jetty relativeRedirectAllowed should be true
 Key: SOLR-17434
 URL: https://issues.apache.org/jira/browse/SOLR-17434
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley


For a minor security benefit, avoiding exposing Solr's host & port number in an 
obscure case:

[https://github.com/jetty/jetty.project/issues/11014]

Assuming Solr main/10 moves on to Jetty 12, this configuration change is only 
applicable to Solr 9.






[jira] [Commented] (SOLR-6572) lineshift in solrconfig.xml is not supported

2024-09-02 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-6572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17878651#comment-17878651
 ] 

David Smiley commented on SOLR-6572:


We don't normally trim our configuration values at a higher level (code 
interpreting a particular config value); I think it's very haphazard to do it 
on-read (once we do it here and there, then everywhere else we'd wonder whether 
we should do it too; a mess, IMO).

I understand that a leading or trailing space might be pertinent in some edge 
cases.  Couldn't this be addressed with an attribute like trim="false" 
(defaulting to true)?
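A minimal sketch of the suggested opt-out attribute; TrimOnRead and readValue are hypothetical names for this example, and trim="false" is the proposal above, not existing Solr behavior:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;

// Sketch: trim element text on read unless the element says trim="false".
public class TrimOnRead {
    public static String readValue(Element el) {
        String raw = el.getTextContent();
        // getAttribute returns "" when the attribute is absent, so the
        // default is to trim.
        return "false".equals(el.getAttribute("trim")) ? raw : raw.trim();
    }

    // Convenience for the demo: parse a small snippet and read its root.
    public static String readValue(String xml) throws Exception {
        Element root = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)))
            .getDocumentElement();
        return readValue(root);
    }

    public static void main(String[] args) throws Exception {
        // prints localhost:12100/solr
        System.out.println(readValue("<str name=\"shards\">\n  localhost:12100/solr\n</str>"));
    }
}
```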

 

> lineshift in solrconfig.xml is not supported
> 
>
> Key: SOLR-6572
> URL: https://issues.apache.org/jira/browse/SOLR-6572
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.8.1
>Reporter: Fredrik Rodland
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: difficulty-easy, impact-low, solrconfig.xml
> Fix For: 9.7
>
> Attachments: SOLR-6572.patch, SOLR-6572.unittest
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This has been a problem for a long time, and is still a problem at least for 
> SOLR 4.8.1.
> If lineshifts are introduced in some elements in solrconfig.xml SOLR fails to 
> pick up on the values.
> example:
> ok:
> {code}
> <str name="masterUrl">${solr.master.url:http://solr-admin1.finn.no:12910/solr/front-static/replication}</str>
> {code}
> not ok:
> {code}
> <str name="masterUrl">
> ${solr.master.url:http://solr-admin1.finn.no:12910/solr/front-static/replication}
> </str>
> {code}
> Other example:
> ok:
> {code}
> <str name="shards">localhost:12100/solr,localhost:12200/solr,localhost:12300/solr,localhost:12400/solr,localhost:12500/solr,localhost:12530/solr</str>
> {code}
> not ok:
> {code}
> <str name="shards">
> localhost:12100/solr,localhost:12200/solr,localhost:12300/solr,localhost:12400/solr,localhost:12500/solr,localhost:12530/solr
> </str>
> {code}
> IDEs and people tend to introduce lineshifts in xml-files to make them 
> prettier.  SOLR should really not be affected by this.






[jira] [Resolved] (SOLR-17102) VersionBucket not needed

2024-09-02 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-17102.
-
Fix Version/s: 9.8
   Resolution: Fixed

> VersionBucket not needed
> 
>
> Key: SOLR-17102
> URL: https://issues.apache.org/jira/browse/SOLR-17102
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
>  Labels: pull-request-available
> Fix For: 9.8
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> SolrCloud ensures that updates for the same document ID are done in the 
> correct order internally in the face of possible re-orders during replication 
> / log replay.  In order to ensure the updates are applied consecutively, a 
> lock is held on a hash of the ID for the doc.  A hash is used to limit the 
> number of total locks because the locks are pre-created in advance for the 
> core (numVersionBuckets == 65k by default).  The memory is non-negligible 
> with many cores, and it introduces the possibility of collisions, especially 
> at lower bucket counts if you configure it much lower.
> Here I propose doing away with a pre-created hashed bucket strategy.  
> Instead, I propose more simply creating and GC'ing a lock per update being 
> processed, and using a ConcurrentHashMap to hold those in-flight.  This 
> strategy is already used in 
> org.apache.solr.util.OrderedExecutor.SparseStripedLock, more or less.
> Doing this is more tractable now that VersionBucket only holds a lock, not a 
> version anymore – SOLR-17036
> The biggest challenge is that the code calls for the ability to use a 
> Condition to await/notify, which means the solution can't just re-use 
> SparseStripedLock above nor be quite so simple.
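A minimal sketch of the lock-per-in-flight-ID idea described above, assuming the hypothetical class name PerKeyLock (the real change also needs the Condition await/notify support mentioned above, which this sketch omits):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: one lock object per in-flight document ID, created on demand
// and removed from the map when the last holder releases it.
public class PerKeyLock {
    private static final class Entry {
        final Object monitor = new Object();
        // How many threads currently reference this entry; the map entry
        // is GC'd (removed) once this drops to zero.
        final AtomicInteger refCount = new AtomicInteger();
    }

    private final Map<String, Entry> locks = new ConcurrentHashMap<>();

    public void withLock(String id, Runnable action) {
        Entry e = locks.compute(id, (k, v) -> {
            if (v == null) v = new Entry();
            v.refCount.incrementAndGet();
            return v;
        });
        try {
            synchronized (e.monitor) {
                action.run();
            }
        } finally {
            // Atomically drop the entry when the last in-flight holder is done.
            locks.computeIfPresent(id, (k, v) ->
                v.refCount.decrementAndGet() == 0 ? null : v);
        }
    }
}
```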






[jira] [Created] (SOLR-17432) Enable use of OTEL Agent

2024-08-30 Thread David Smiley (Jira)
David Smiley created SOLR-17432:
---

 Summary: Enable use of OTEL Agent
 Key: SOLR-17432
 URL: https://issues.apache.org/jira/browse/SOLR-17432
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: tracing
Reporter: David Smiley
Assignee: David Smiley


The [OpenTelemetry Java Agent|https://opentelemetry.io/docs/zero-code/java/agent/] is really powerful, 
supporting {{WithSpan}} annotations and auto-instrumentation of many libraries 
like the AWS SDK.  It also isolates its transitive dependencies in another 
classloader so as not to conflict with Solr's choices.  Solr currently only 
supports OTEL via Solr itself calling into OTEL to initialize.  This ticket 
proposes _also_ supporting recognizing that the OTEL agent is loaded, and if so 
then using that without any change to solr.xml.

Without this, someone can write a trivial TracerConfigurator and configure it 
in solr.xml but ideally Solr should detect the situation.  If you want to run 
tests with tracing (an overlooked use of tracing!), it's annoying to go touch 
the pertinent solr.xml.
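For reference, attaching the agent today looks roughly like this; the jar path is an assumption, while OTEL_SERVICE_NAME and OTEL_EXPORTER_OTLP_ENDPOINT are the agent's standard environment variables:

```
OTEL_SERVICE_NAME=solr \
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 \
SOLR_OPTS="-javaagent:/path/to/opentelemetry-javaagent.jar" \
bin/solr start
```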






[jira] [Resolved] (SOLR-17421) With overseer node role enabled, overseer may be stopped without giving-up leadership

2024-08-27 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-17421.
-
Fix Version/s: 9.8
   Resolution: Fixed

Merged.

Thanks for contributing!

> With overseer node role enabled, overseer may be stopped without giving-up 
> leadership
> -
>
> Key: SOLR-17421
> URL: https://issues.apache.org/jira/browse/SOLR-17421
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.11, 9.6
>Reporter: Pierre Salagnac
>Priority: Major
>  Labels: pull-request-available
> Fix For: 9.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Overseer may retain the leadership status while the thread pool that is 
> supposed to consume the collection state mutator queue was already shut down.
> Occurrences of this bug are probably not frequent. But when it happens, it 
> has a huge impact. The overseer cluster state updater is stuck and all 
> collection admin requests are very likely to fail. Because of the stuck 
> overseer, all the enqueued operations (collection creation, deletion...) fail 
> and remain in the collection API queue.
> h2. Root cause
> Root cause is the {{QUIT}} command does not cancel overseer election if any 
> error happens while shutting down the state updater thread pool.
> {code:java}
> level:  ERROR
> logger:  org.apache.solr.cloud.Overseer
> message:  Overseer could not process the current clusterstate state 
> update message, skipping the message: {
> "operation":"quit",
> "id":"72073405485023239-_solr-n_000948"}
> node_name:  :8983_solr
> threadId:  281272
> threadName:  
> OverseerStateUpdate-72073405485023239-_solr-n_000948
> thrown:  java.lang.RuntimeException: Timeout waiting for pool 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor@2c1da18d[Shutting
>  down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 
> 0] to shutdown.
> at 
> org.apache.solr.common.util.ExecutorUtil.awaitTermination(ExecutorUtil.java:142)
> at 
> org.apache.solr.common.util.ExecutorUtil.awaitTermination(ExecutorUtil.java:129)
> at 
> org.apache.solr.common.util.ExecutorUtil.shutdownAndAwaitTermination(ExecutorUtil.java:112)
> at 
> org.apache.solr.cloud.OverseerTaskProcessor.close(OverseerTaskProcessor.java:431)
> at 
> org.apache.solr.cloud.Overseer$ClusterStateUpdater.processMessage(Overseer.java:601)
> at 
> org.apache.solr.cloud.Overseer$ClusterStateUpdater.processQueueItem(Overseer.java:450)
> at org.apache.solr.cloud.Overseer$ClusterStateUpdater.run(Overseer.java:377)
> at java.base/java.lang.Thread.run(Thread.java:1583)
> {code}
> h2. Proximate cause
> It seems to me long running operations in the collection API could trigger 
> the bug more frequently. Because of a long running operation, we get an 
> exception when shutting down the thread pool, which has a 60-second timeout.






[jira] [Commented] (SOLR-15748) Create v2 equivalent of v1 'CLUSTERSTATUS' (or document alternatives)

2024-08-26 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-15748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17876891#comment-17876891
 ] 

David Smiley commented on SOLR-15748:
-

I filed SOLR-17422 with a PR

> Create v2 equivalent of v1 'CLUSTERSTATUS' (or document alternatives)
> -
>
> Key: SOLR-15748
> URL: https://issues.apache.org/jira/browse/SOLR-15748
> Project: Solr
>  Issue Type: Sub-task
>  Components: v2 API
>Affects Versions: 9.1
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Major
>  Labels: V2
> Fix For: main (10.0), 9.2
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Solr's 'CLUSTERSTATUS' command under the v1 {{/solr/admin/collections}} 
> endpoint has no v2 equivalent. This should be remedied to inch v2 closer to 
> parity with v1 in preparation for eventual v1 deprecation.






[jira] [Updated] (SOLR-17422) Remove CLUSTERSTATE from v2; redundant

2024-08-26 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-17422:

Description: 
Revert SOLR-15748 – remove v2 /api/cluster being a CLUSTERSTATUS call

The information (e.g. aliases, or the status of one collection, live nodes, 
etc.) should be made available via V2 but we don't need a macro API to return a 
bunch of these things at once, which is what CLUSTERSTATUS is.  I suspect 
CLUSTERSTATUS may have been one of the original SolrCloud level APIs so it 
became a kitchen sink of all things about the state/status.  In hindsight, I 
don't agree with it.  It ends up blowing up in size for clients that only need 
a subset of it, leading to decomposing it – SOLR-17381.

  was:
Revert SOLR-15748 – remove v2 /admin/cluster being a CLUSTERSTATUS call

The information (e.g. aliases, or the status of one collection, live nodes, 
etc.) should be made available via V2 but we don't need a macro API to return a 
bunch of these things at once, which is what CLUSTERSTATUS is.  I suspect 
CLUSTERSTATUS may have been one of the original SolrCloud level APIs so it 
became a kitchen sink of all things about the state/status.  In hindsight, I 
don't agree with it.  It ends up blowing up in size for clients that only need 
a subset of it, leading to decomposing it – SOLR-17381.


> Remove CLUSTERSTATE from v2; redundant
> --
>
> Key: SOLR-17422
> URL: https://issues.apache.org/jira/browse/SOLR-17422
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: v2 API
>Reporter: David Smiley
>Priority: Minor
>
> Revert SOLR-15748 – remove v2 /api/cluster being a CLUSTERSTATUS call
> The information (e.g. aliases, or the status of one collection, live nodes, 
> etc.) should be made available via V2 but we don't need a macro API to return 
> a bunch of these things at once, which is what CLUSTERSTATUS is.  I suspect 
> CLUSTERSTATUS may have been one of the original SolrCloud level APIs so it 
> became a kitchen sink of all things about the state/status.  In hindsight, I 
> don't agree with it.  It ends up blowing up in size for clients that only 
> need a subset of it, leading to decomposing it – SOLR-17381.






[jira] [Created] (SOLR-17422) Remove CLUSTERSTATE from v2; redundant

2024-08-26 Thread David Smiley (Jira)
David Smiley created SOLR-17422:
---

 Summary: Remove CLUSTERSTATE from v2; redundant
 Key: SOLR-17422
 URL: https://issues.apache.org/jira/browse/SOLR-17422
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: v2 API
Reporter: David Smiley


Revert SOLR-15748 – remove v2 /admin/cluster being a CLUSTERSTATUS call

The information (e.g. aliases, or the status of one collection, live nodes, 
etc.) should be made available via V2 but we don't need a macro API to return a 
bunch of these things at once, which is what CLUSTERSTATUS is.  I suspect 
CLUSTERSTATUS may have been one of the original SolrCloud level APIs so it 
became a kitchen sink of all things about the state/status.  In hindsight, I 
don't agree with it.  It ends up blowing up in size for clients that only need 
a subset of it, leading to decomposing it – SOLR-17381.






[jira] [Commented] (SOLR-14370) Refactor bin/solr to allow external override of Jetty modules

2024-08-26 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17876814#comment-17876814
 ] 

David Smiley commented on SOLR-14370:
-

This is a minor improvement.  Anyway, my colleague Andy and I are no longer 
using this.

> Refactor bin/solr to allow external override of Jetty modules
> -
>
> Key: SOLR-14370
> URL: https://issues.apache.org/jira/browse/SOLR-14370
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: Andy Throgmorton
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The bin/solr script currently does not allow for externally overriding the 
> modules passed to Jetty on startup.
> This PR adds the ability to override the Jetty modules on startup by setting 
> {{JETTY_MODULES}} as an environment variable; when passed, bin/solr will pass 
> through (and not clobber) the string verbatim into {{SOLR_JETTY_CONFIG}}. For 
> example, you can now run:
> {{JETTY_MODULES=--module=foo bin/solr start}}
> We've added some custom Jetty modules that can be optionally enabled; this 
> change allows us to keep our logic (regarding which modules to use) in a 
> separate script, rather than maintaining a forked bin/solr.






[jira] [Commented] (SOLR-3913) SimplePostTool optimize does a redundant commit

2024-08-26 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-3913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17876755#comment-17876755
 ] 

David Smiley commented on SOLR-3913:


[PostTool|https://github.com/apache/solr/blob/1084db9477ec33cbe228f44c82e8612dc051222d/solr/core/src/java/org/apache/solr/cli/PostTool.java#L1056].
 Can use {{source.transferTo(dest);}} as we are on JDK 9+. This is really 
unrelated to this issue though... I'll submit a PR that does that everywhere
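A small self-contained illustration of the suggested replacement; TransferDemo and copy are hypothetical names for this example, while {{InputStream.transferTo}} is the real JDK 9+ API:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// JDK 9+: InputStream.transferTo replaces hand-rolled copy loops over a
// small byte[] buffer; it returns the number of bytes copied.
public class TransferDemo {
    public static long copy(InputStream source, OutputStream dest) throws IOException {
        return source.transferTo(dest);
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        long n = copy(new ByteArrayInputStream("hello".getBytes()), out);
        System.out.println(n + " " + out);  // prints: 5 hello
    }
}
```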

> SimplePostTool optimize does a redundant commit
> ---
>
> Key: SOLR-3913
> URL: https://issues.apache.org/jira/browse/SOLR-3913
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Reporter: David Smiley
>Assignee: Eric Pugh
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> At the end of SimplePostTool.execute() there is:
> {code}
> if (commit)   commit();
> if (optimize) optimize();
> {code}
> Each of these calls involves a separate request to Solr.  The thing is, an 
> optimize internally commits, and so the logic should forgo committing if 
> optimize is true.
> And as an aside, I think the 1kb pipe() buffer on line 893 is too small; it 
> should be around 8kb (8192) bytes which is the same value as 
> BufferedInputStream's default.






[jira] [Commented] (SOLR-12429) ZK upconfig throws confusing error when it encounters a symlink

2024-08-23 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-12429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17876347#comment-17876347
 ] 

David Smiley commented on SOLR-12429:
-

Why block symlinks; shouldn't they be supported?

> ZK upconfig throws confusing error when it encounters a symlink
> ---
>
> Key: SOLR-12429
> URL: https://issues.apache.org/jira/browse/SOLR-12429
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCLI
>Affects Versions: 7.3.1
>Reporter: Shawn Heisey
>Assignee: Eric Pugh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> If a configset being uploaded to ZK contains a symlink pointing at a 
> directory, an error is thrown, but it doesn't explain the real problem.  The 
> upconfig should detect symlinks and throw an error indicating that they 
> aren't supported.  If we can detect any other type of file that upconfig 
> can't use (sockets, device files, etc), the error message should be relevant.
> {noformat}
> Exception in thread "main" java.io.IOException: File 
> '/var/solr/mbs/artist/conf/common' exists but is a directory
>   at org.apache.commons.io.FileUtils.openInputStream(FileUtils.java:286)
>   at 
> org.apache.commons.io.FileUtils.readFileToByteArray(FileUtils.java:1815)
>   at 
> org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:391)
>   at 
> org.apache.solr.common.cloud.ZkMaintenanceUtils$1.visitFile(ZkMaintenanceUtils.java:305)
>   at 
> org.apache.solr.common.cloud.ZkMaintenanceUtils$1.visitFile(ZkMaintenanceUtils.java:291)
>   at java.nio.file.Files.walkFileTree(Files.java:2670)
>   at java.nio.file.Files.walkFileTree(Files.java:2742)
>   at 
> org.apache.solr.common.cloud.ZkMaintenanceUtils.uploadToZK(ZkMaintenanceUtils.java:291)
>   at 
> org.apache.solr.common.cloud.SolrZkClient.uploadToZK(SolrZkClient.java:793)
>   at 
> org.apache.solr.common.cloud.ZkConfigManager.uploadConfigDir(ZkConfigManager.java:78)
>   at org.apache.solr.cloud.ZkCLI.main(ZkCLI.java:236)
> {noformat}
> I have not tested whether a symlink pointing at a file works, but I think 
> that an error should be thrown for ANY symlink.
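A sketch of the detection the report asks for, assuming the hypothetical names SymlinkCheck and verifyNoSymlinks; Files.isSymbolicLink and the visitor's attrs.isSymbolicLink() are the real NIO APIs:

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

// Sketch: walk the configset and fail fast with a clear message on any
// symlink, instead of the confusing "exists but is a directory" IOException
// from the report. Without FOLLOW_LINKS, walkFileTree hands symlinks (even
// those pointing at directories) to visitFile.
public class SymlinkCheck {
    public static void verifyNoSymlinks(Path root) throws IOException {
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                if (attrs.isSymbolicLink() || Files.isSymbolicLink(file)) {
                    throw new IllegalArgumentException(
                        "upconfig does not support symlinks: " + file);
                }
                return FileVisitResult.CONTINUE;
            }
        });
    }
}
```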






[jira] [Resolved] (SOLR-17420) Solr does not load some cores occasionally on startup due to waiting on searcher lock

2024-08-22 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-17420.
-
Fix Version/s: 9.4.1
   Resolution: Duplicate

Duplicate of SOLR-17060 fixed in 9.4.1

> Solr does not load some cores occasionally on startup due to waiting on 
> searcher lock
> -
>
> Key: SOLR-17420
> URL: https://issues.apache.org/jira/browse/SOLR-17420
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 9.4
>Reporter: Jerry Chung
>Priority: Critical
> Fix For: 9.4.1
>
>
> We've noticed this issue quite a bit. If it happens, the replica is marked as 
> {{down}}. The workaround seems to be restarting the Solr service, but this is 
> quite random and might not be feasible.
>  
> Today I noticed that it seemed to be hanging while loading the replica. When 
> the service stopped, these messages were logged.
>  
> 2024-08-20 18:27:15.827 INFO  
> (coreLoadExecutor-17-thread-1-processing-ip-100-65-231-167.ec2.internal:8983_solr)
>  [c:1_80084c8562132c
> 47_2d076556_1914ca95bf9__8000I18454740_a2f8_5f48_a1d7_9ecfea41540d s:shard1 
> r:core_node18 x:1_80084c8562132c47_2d076556_1914ca95bf9_
> _8000I18454740_a2f8_5f48_a1d7_9ecfea41540d_shard1_replica_n17] 
> o.a.s.c.SolrCore Interrupted waiting for searcherLock => java.lang.In
> terruptedException
>         at java.base/java.lang.Object.wait(Native Method)
> java.lang.InterruptedException: null
>         at java.lang.Object.wait(Native Method) ~[?:?]
>         at java.lang.Object.wait(Object.java:338) ~[?:?]
>         at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2538) 
> ~[solr-core-9.4.0.jar:9.4.0 71e101bb37497f730078d9afe1991b6
> 0d10bfe96 - stillalex - 2023-10-10 19:10:39]
>         at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1290) 
> ~[solr-core-9.4.0.jar:9.4.0 71e101bb37497f730078d9afe1991b
> 60d10bfe96 - stillalex - 2023-10-10 19:10:39]
>         at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1175) 
> ~[solr-core-9.4.0.jar:9.4.0 71e101bb37497f730078d9afe1991b60d10b
> fe96 - stillalex - 2023-10-10 19:10:39]
>         at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1056) 
> ~[solr-core-9.4.0.jar:9.4.0 71e101bb37497f730078d9afe1991b60d10b
> fe96 - stillalex - 2023-10-10 19:10:39]
>         at 
> org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1705)
>  ~[solr-core-9.4.0.jar:9.4.0 71e101bb3749
> 7f730078d9afe1991b60d10bfe96 - stillalex - 2023-10-10 19:10:39]
>         at 
> org.apache.solr.core.CoreContainer.lambda$loadInternal$12(CoreContainer.java:1043)
>  ~[solr-core-9.4.0.jar:9.4.0 71e101bb37
> 497f730078d9afe1991b60d10bfe96 - stillalex - 2023-10-10 19:10:39]
>         at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:234)
>  ~[metric
> s-core-4.2.20.jar:4.2.20]
>         at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
>  
> There were tons of threads waiting for the lock as well:
>  
> 2024-08-20 18:28:16.012 INFO  (qtp1768242710-971) [] o.a.s.c.SolrCore 
> Interrupted waiting for searcherLock => java.lang.InterruptedException
>         at java.base/java.lang.Object.wait(Native Method)
> java.lang.InterruptedException: null
>         at java.lang.Object.wait(Native Method) ~[?:?]
>         at java.lang.Object.wait(Object.java:338) ~[?:?]
>         at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2538) 
> ~[solr-core-9.4.0.jar:9.4.0 71e101bb37497f730078d9afe1991b60d10bfe96 - 
> stillalex - 2023-10-10 19:10:39]
>         at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2281) 
> ~[solr-core-9.4.0.jar:9.4.0 71e101bb37497f730078d9afe1991b60d10bfe96 - 
> stillalex - 2023-10-10 19:10:39]
>         at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2116) 
> ~[solr-core-9.4.0.jar:9.4.0 71e101bb37497f730078d9afe1991b60d10bfe96 - 
> stillalex - 2023-10-10 19:10:39]
>         at org.apache.solr.core.SolrCore.withSearcher(SolrCore.java:2134) 
> ~[solr-core-9.4.0.jar:9.4.0 71e101bb37497f730078d9afe1991b60d10bfe96 - 
> stillalex - 2023-10-10 19:10:39]
>         at org.apache.solr.core.SolrCore.getSegmentCount(SolrCore.java:539) 
> ~[solr-core-9.4.0.jar:9.4.0 71e101bb37497f730078d9afe1991b60d10bfe96 - 
> stillalex - 2023-10-10 19:10:39]
>         at 
> org.apache.solr.core.SolrCore.lambda$initializeMetrics$11(SolrCore.java:1360) 
> ~[solr-core-9.4.0.jar:9.4.0 71e101bb37497f730078d9afe1991b60d10bfe96 - 
> stillalex - 2023-10-10 19:10:39]
>         at 
> org.apache.solr.util.stats.MetricUtils.convertGauge(MetricUtils.java:656) 
> ~[solr-core-9.4.0.jar:9.4.0 71e101bb37497f730078d9afe1991b60d10bfe96 - 
> stillalex - 2023-10-10 19:10:39]
>

[jira] [Resolved] (SOLR-17408) Calls to COLSTATUS are not optimized

2024-08-21 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-17408.
-
Fix Version/s: 9.8
   Resolution: Fixed

Thanks Mathieu!

> Calls to COLSTATUS are not optimized
> 
>
> Key: SOLR-17408
> URL: https://issues.apache.org/jira/browse/SOLR-17408
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.1
>Reporter: Mathieu Marie
>Priority: Major
>  Labels: pull-request-available
> Fix For: 9.8
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> I found out about it in solr 9.4.1 and is still present in main.
> When calling COLSTATUS, by default, Solr will fetch segment information and 
> return it even if no flag requested that information.
> When there is a huge amount of shards, this can lead to delays and 
> unnecessary traffic (as well as extra information in the returned payload 
> that was not requested/necessary).
> An extra check would be sufficient to prevent that extra work and give a 
> better response time.






[jira] [Created] (SOLR-17415) TimeLimitingCollector isn't adding value and is deprecated; remove

2024-08-21 Thread David Smiley (Jira)
David Smiley created SOLR-17415:
---

 Summary: TimeLimitingCollector isn't adding value and is 
deprecated; remove
 Key: SOLR-17415
 URL: https://issues.apache.org/jira/browse/SOLR-17415
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley


TimeLimitingCollector is deprecated (removed in Lucene 10 in fact) and 
redundant with mechanisms we have in place.  It likely interferes with other 
optimizations.  Remove it.






[jira] [Updated] (SOLR-17415) TimeLimitingCollector isn't adding value and is deprecated; remove

2024-08-21 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-17415:

Labels: newdev  (was: )

> TimeLimitingCollector isn't adding value and is deprecated; remove
> --
>
> Key: SOLR-17415
> URL: https://issues.apache.org/jira/browse/SOLR-17415
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Minor
>  Labels: newdev
>
> TimeLimitingCollector is deprecated (removed in Lucene 10 in fact) and 
> redundant with mechanisms we have in place.  It likely interferes with other 
> optimizations.  Remove it.






[jira] [Commented] (SOLR-17413) UpdateLog Replay can throw ConcurrentModificationException from sharing the request

2024-08-21 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17875506#comment-17875506
 ] 

David Smiley commented on SOLR-17413:
-

I'd prefer to avoid catching this exception to take some other action -- would 
feel too much like a hack.  I'm optimistic a more elegant solution can be found 
without doing this.

Perhaps SynchronousQueue?  Maybe I can rule this out based on a little test I 
did, as it throws a RejectedExecutionException (instead of blocking), which I'd 
rather not catch.

Perhaps a LinkedBlockingQueue (thus an unbounded queue) with an 
ExecutorService subclass that observes the queue size and, once a threshold is 
reached, runs the task directly in the caller's thread?  This way we never 
reject, and the caller threads receive back pressure by doing the work they 
would have done anyway with multiThreaded=false!  There is plenty of precedent 
for an ExecutorService that runs the task in the caller thread -- I'm thinking 
of Lucene's SameThreadExecutorService and Solr's SimpleFacets.directExecutor.
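A rough sketch of that back-pressure idea (the class and its threshold are hypothetical names for illustration, not Solr's actual code):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch: an executor that never rejects. Tasks go to an unbounded
// LinkedBlockingQueue, but once the queue reaches a threshold the caller
// runs the task itself, producing natural back pressure.
public class CallerRunsWhenBusyExecutor {
    private final ThreadPoolExecutor delegate;
    private final int queueThreshold;

    public CallerRunsWhenBusyExecutor(int poolSize, int queueThreshold) {
        this.delegate = new ThreadPoolExecutor(
                poolSize, poolSize, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>()); // unbounded: execute() never throws
        this.queueThreshold = queueThreshold;
    }

    public void execute(Runnable task) {
        // The threshold check is racy, but that only affects *which* thread
        // runs the task, never whether it runs.
        if (delegate.getQueue().size() >= queueThreshold) {
            task.run(); // caller does the work it would have done single-threaded
        } else {
            delegate.execute(task);
        }
    }

    public void shutdownAndWait() throws InterruptedException {
        delegate.shutdown();
        delegate.awaitTermination(30, TimeUnit.SECONDS);
    }
}
```

Every submitted task runs exactly once either way, which is what makes this gentler than rejecting.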

> UpdateLog Replay can throw ConcurrentModificationException from sharing the 
> request
> ---
>
> Key: SOLR-17413
> URL: https://issues.apache.org/jira/browse/SOLR-17413
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Major
>  Labels: newdev
>
> I saw org.apache.solr.cloud.BasicDistributedZkTest fail with a stack trace 
> revealing we have a real issue with UpdateLog replay.  Essentially, a 
> SolrQueryRequest is not threadsafe but we replay logs in parallel sharing the 
> same request instance.  Creating DistributedUpdateProcessor ends up adding to 
> a shared HashMap in req.getContext() that should not be shared.
> {noformat}
>   2> WARNING: Uncaught exception in thread: 
> Thread[replayUpdatesExecutor-590-thread-2,5,TGRP-BasicDistributedZkTest]
>   2> java.util.ConcurrentModificationException
>   2>  at __randomizedtesting.SeedInfo.seed([F2227B12A8FC234]:0)
>   2>  at java.base/java.util.HashMap.computeIfAbsent(HashMap.java:1135)
>   2>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessorFactory.addParamToDistributedRequestWhitelist(DistributedUpdateProcessorFactory.java:46)
>   2>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.(DistributedUpdateProcessor.java:190)
>   2>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.(DistributedUpdateProcessor.java:160)
>   2>  at 
> org.apache.solr.update.processor.DistributedZkUpdateProcessor.(DistributedZkUpdateProcessor.java:114)
>   2>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessorFactory.getInstance(DistributedUpdateProcessorFactory.java:59)
>   2>  at 
> org.apache.solr.update.processor.UpdateRequestProcessorChain.createProcessor(UpdateRequestProcessorChain.java:242)
>   2>  at 
> org.apache.solr.update.processor.UpdateRequestProcessorChain.createProcessor(UpdateRequestProcessorChain.java:214)
>   2>  at 
> org.apache.solr.update.UpdateLog$LogReplayer.lambda$doReplay$0(UpdateLog.java:2103)
>   2>  at 
> java.base/java.lang.ThreadLocal$SuppliedThreadLocal.initialValue(ThreadLocal.java:305)
>   2>  at java.base/java.lang.ThreadLocal.setInitialValue(ThreadLocal.java:195)
>   2>  at java.base/java.lang.ThreadLocal.get(ThreadLocal.java:172)
>   2>  at 
> org.apache.solr.update.UpdateLog$LogReplayer.lambda$execute$2(UpdateLog.java:2342)
>   2>  at 
> org.apache.solr.util.OrderedExecutor.lambda$execute$0(OrderedExecutor.java:68)
>   2>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$1(ExecutorUtil.java:449)
>   2>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   2>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   2>  at java.base/java.lang.Thread.run(Thread.java:829)
> {noformat}






[jira] [Created] (SOLR-17414) multiThreaded=true can result in RejectedExecutionException

2024-08-21 Thread David Smiley (Jira)
David Smiley created SOLR-17414:
---

 Summary: multiThreaded=true can result in 
RejectedExecutionException
 Key: SOLR-17414
 URL: https://issues.apache.org/jira/browse/SOLR-17414
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley


Since the new multiThreaded search feature landed, I see a new test
failure involving "RejectedExecutionException" being thrown 
[link|https://ge.apache.org/s/5ack462ji4mlu/tests/task/:solr:core:test/details/org.apache.solr.search.TestRealTimeGet/testStressGetRealtime?top-execution=1].

It is thrown at a low level in Lucene building TermStates
concurrently.  I doubt the problem is specific to that test
(TestRealTimeGet) but that test might induce more activity than most
tests, thus crossing some thresholds like the queue size -- apparently
1000.

*I don't think we should be throwing a RejectedExecutionException
when running a search query.*






[jira] [Updated] (SOLR-17408) Calls to COLSTATUS are not optimized

2024-08-21 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-17408:

Description: 
I found this in Solr 9.4.1, and it is still present in main.

When calling COLSTATUS, by default, Solr will fetch segment information and 
return it even if no flag requested that information.

When there is a huge amount of shards, this can lead to delays and unnecessary 
traffic (as well as extra information in the returned payload that was not 
requested/necessary).

An extra check would be sufficient to prevent that extra work and give a 
better response time.


  was:
all versions are affected (I would guess).

I found out about it in solr 9.4.1 and is still present in main.

When calling COLSTATUS, by default, Solr will fetch segment information and 
return it even if no flag requested that information.

When there is a huge amount of shards, this can lead to delays and unnecessary 
traffic (as well as extra information in the returned payload that was not 
requested/necessary).

 

And extra check would be sufficient to prevent that extra work and have a 
better response time 

 


> Calls to COLSTATUS are not optimized
> 
>
> Key: SOLR-17408
> URL: https://issues.apache.org/jira/browse/SOLR-17408
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.1
>Reporter: Mathieu Marie
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> I found this in Solr 9.4.1, and it is still present in main.
> When calling COLSTATUS, by default, Solr will fetch segment information and 
> return it even if no flag requested that information.
> When there is a huge amount of shards, this can lead to delays and 
> unnecessary traffic (as well as extra information in the returned payload 
> that was not requested/necessary).
>  An extra check would be sufficient to prevent that extra work and give a 
> better response time.
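The suggested extra check amounts to something like the following (all names are illustrative; this is not Solr's real COLSTATUS code):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the "extra check": only gather per-segment details when a flag
// actually requested them, instead of always doing the expensive fetch.
public class ColStatusSketch {
    static Map<String, Object> collectionStatus(boolean segmentsRequested) {
        Map<String, Object> status = new LinkedHashMap<>();
        status.put("shards", listShards());
        if (segmentsRequested) {
            // Expensive per-shard work, now skipped unless asked for.
            status.put("segments", fetchSegmentInfo());
        }
        return status;
    }

    // Stand-ins for the cheap and expensive lookups, respectively.
    static List<String> listShards() { return List.of("shard1", "shard2"); }

    static List<String> fetchSegmentInfo() { return List.of("_0", "_1"); }
}
```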






[jira] [Updated] (SOLR-17408) Calls to COLSTATUS are not optimized

2024-08-21 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-17408:

Affects Version/s: 8.1
   (was: main (10.0))

> Calls to COLSTATUS are not optimized
> 
>
> Key: SOLR-17408
> URL: https://issues.apache.org/jira/browse/SOLR-17408
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.1
>Reporter: Mathieu Marie
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> all versions are affected (I would guess).
> I found this in Solr 9.4.1, and it is still present in main.
> When calling COLSTATUS, by default, Solr will fetch segment information and 
> return it even if no flag requested that information.
> When there is a huge amount of shards, this can lead to delays and 
> unnecessary traffic (as well as extra information in the returned payload 
> that was not requested/necessary).
>  
> An extra check would be sufficient to prevent that extra work and give a 
> better response time.
>  






[jira] [Commented] (SOLR-17413) UpdateLog Replay can throw ConcurrentModificationException from sharing the request

2024-08-20 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17875329#comment-17875329
 ] 

David Smiley commented on SOLR-17413:
-

This should be somewhat straightforward to fix by creating a copy of the 
request in the ThreadLocal.withInitial lambda.
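The per-thread-copy idea could look roughly like this (`ReplayRequest` is a hypothetical stand-in; it does not claim to model SolrQueryRequest accurately, only the ThreadLocal.withInitial pattern):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed fix: each replay thread gets its own copy of the
// request, so per-request mutable state (like the context map that the
// stack trace shows being mutated) is never shared across threads.
class ReplayRequest {
    final Map<String, Object> context = new HashMap<>();

    ReplayRequest() {}

    ReplayRequest(ReplayRequest other) {
        this.context.putAll(other.context); // independent copy of mutable state
    }
}

public class PerThreadRequest {
    // Each thread's first get() builds its own copy from the shared template;
    // concurrent replay threads then mutate distinct context maps.
    public static ThreadLocal<ReplayRequest> forReplay(ReplayRequest shared) {
        return ThreadLocal.withInitial(() -> new ReplayRequest(shared));
    }
}
```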

> UpdateLog Replay can throw ConcurrentModificationException from sharing the 
> request
> ---
>
> Key: SOLR-17413
> URL: https://issues.apache.org/jira/browse/SOLR-17413
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Major
>
> I saw org.apache.solr.cloud.BasicDistributedZkTest fail with a stack trace 
> revealing we have a real issue with UpdateLog replay.  Essentially, a 
> SolrQueryRequest is not threadsafe but we replay logs in parallel sharing the 
> same request instance.  Creating DistributedUpdateProcessor ends up adding to 
> a shared HashMap in req.getContext() that should not be shared.
> {noformat}
>   2> WARNING: Uncaught exception in thread: 
> Thread[replayUpdatesExecutor-590-thread-2,5,TGRP-BasicDistributedZkTest]
>   2> java.util.ConcurrentModificationException
>   2>  at __randomizedtesting.SeedInfo.seed([F2227B12A8FC234]:0)
>   2>  at java.base/java.util.HashMap.computeIfAbsent(HashMap.java:1135)
>   2>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessorFactory.addParamToDistributedRequestWhitelist(DistributedUpdateProcessorFactory.java:46)
>   2>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.(DistributedUpdateProcessor.java:190)
>   2>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.(DistributedUpdateProcessor.java:160)
>   2>  at 
> org.apache.solr.update.processor.DistributedZkUpdateProcessor.(DistributedZkUpdateProcessor.java:114)
>   2>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessorFactory.getInstance(DistributedUpdateProcessorFactory.java:59)
>   2>  at 
> org.apache.solr.update.processor.UpdateRequestProcessorChain.createProcessor(UpdateRequestProcessorChain.java:242)
>   2>  at 
> org.apache.solr.update.processor.UpdateRequestProcessorChain.createProcessor(UpdateRequestProcessorChain.java:214)
>   2>  at 
> org.apache.solr.update.UpdateLog$LogReplayer.lambda$doReplay$0(UpdateLog.java:2103)
>   2>  at 
> java.base/java.lang.ThreadLocal$SuppliedThreadLocal.initialValue(ThreadLocal.java:305)
>   2>  at java.base/java.lang.ThreadLocal.setInitialValue(ThreadLocal.java:195)
>   2>  at java.base/java.lang.ThreadLocal.get(ThreadLocal.java:172)
>   2>  at 
> org.apache.solr.update.UpdateLog$LogReplayer.lambda$execute$2(UpdateLog.java:2342)
>   2>  at 
> org.apache.solr.util.OrderedExecutor.lambda$execute$0(OrderedExecutor.java:68)
>   2>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$1(ExecutorUtil.java:449)
>   2>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   2>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   2>  at java.base/java.lang.Thread.run(Thread.java:829)
> {noformat}






[jira] [Updated] (SOLR-17413) UpdateLog Replay can throw ConcurrentModificationException from sharing the request

2024-08-20 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-17413:

Labels: newdev  (was: )

> UpdateLog Replay can throw ConcurrentModificationException from sharing the 
> request
> ---
>
> Key: SOLR-17413
> URL: https://issues.apache.org/jira/browse/SOLR-17413
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Major
>  Labels: newdev
>
> I saw org.apache.solr.cloud.BasicDistributedZkTest fail with a stack trace 
> revealing we have a real issue with UpdateLog replay.  Essentially, a 
> SolrQueryRequest is not threadsafe but we replay logs in parallel sharing the 
> same request instance.  Creating DistributedUpdateProcessor ends up adding to 
> a shared HashMap in req.getContext() that should not be shared.
> {noformat}
>   2> WARNING: Uncaught exception in thread: 
> Thread[replayUpdatesExecutor-590-thread-2,5,TGRP-BasicDistributedZkTest]
>   2> java.util.ConcurrentModificationException
>   2>  at __randomizedtesting.SeedInfo.seed([F2227B12A8FC234]:0)
>   2>  at java.base/java.util.HashMap.computeIfAbsent(HashMap.java:1135)
>   2>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessorFactory.addParamToDistributedRequestWhitelist(DistributedUpdateProcessorFactory.java:46)
>   2>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.(DistributedUpdateProcessor.java:190)
>   2>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.(DistributedUpdateProcessor.java:160)
>   2>  at 
> org.apache.solr.update.processor.DistributedZkUpdateProcessor.(DistributedZkUpdateProcessor.java:114)
>   2>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessorFactory.getInstance(DistributedUpdateProcessorFactory.java:59)
>   2>  at 
> org.apache.solr.update.processor.UpdateRequestProcessorChain.createProcessor(UpdateRequestProcessorChain.java:242)
>   2>  at 
> org.apache.solr.update.processor.UpdateRequestProcessorChain.createProcessor(UpdateRequestProcessorChain.java:214)
>   2>  at 
> org.apache.solr.update.UpdateLog$LogReplayer.lambda$doReplay$0(UpdateLog.java:2103)
>   2>  at 
> java.base/java.lang.ThreadLocal$SuppliedThreadLocal.initialValue(ThreadLocal.java:305)
>   2>  at java.base/java.lang.ThreadLocal.setInitialValue(ThreadLocal.java:195)
>   2>  at java.base/java.lang.ThreadLocal.get(ThreadLocal.java:172)
>   2>  at 
> org.apache.solr.update.UpdateLog$LogReplayer.lambda$execute$2(UpdateLog.java:2342)
>   2>  at 
> org.apache.solr.util.OrderedExecutor.lambda$execute$0(OrderedExecutor.java:68)
>   2>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$1(ExecutorUtil.java:449)
>   2>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   2>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   2>  at java.base/java.lang.Thread.run(Thread.java:829)
> {noformat}






[jira] [Created] (SOLR-17413) UpdateLog Replay can throw ConcurrentModificationException from sharing the request

2024-08-20 Thread David Smiley (Jira)
David Smiley created SOLR-17413:
---

 Summary: UpdateLog Replay can throw 
ConcurrentModificationException from sharing the request
 Key: SOLR-17413
 URL: https://issues.apache.org/jira/browse/SOLR-17413
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley


I saw org.apache.solr.cloud.BasicDistributedZkTest fail with a stack trace 
revealing we have a real issue with UpdateLog replay.  Essentially, a 
SolrQueryRequest is not threadsafe but we replay logs in parallel sharing the 
same request instance.  Creating DistributedUpdateProcessor ends up adding to a 
shared HashMap in req.getContext() that should not be shared.

{noformat}
  2> WARNING: Uncaught exception in thread: 
Thread[replayUpdatesExecutor-590-thread-2,5,TGRP-BasicDistributedZkTest]
  2> java.util.ConcurrentModificationException
  2>at __randomizedtesting.SeedInfo.seed([F2227B12A8FC234]:0)
  2>at java.base/java.util.HashMap.computeIfAbsent(HashMap.java:1135)
  2>at 
org.apache.solr.update.processor.DistributedUpdateProcessorFactory.addParamToDistributedRequestWhitelist(DistributedUpdateProcessorFactory.java:46)
  2>at 
org.apache.solr.update.processor.DistributedUpdateProcessor.(DistributedUpdateProcessor.java:190)
  2>at 
org.apache.solr.update.processor.DistributedUpdateProcessor.(DistributedUpdateProcessor.java:160)
  2>at 
org.apache.solr.update.processor.DistributedZkUpdateProcessor.(DistributedZkUpdateProcessor.java:114)
  2>at 
org.apache.solr.update.processor.DistributedUpdateProcessorFactory.getInstance(DistributedUpdateProcessorFactory.java:59)
  2>at 
org.apache.solr.update.processor.UpdateRequestProcessorChain.createProcessor(UpdateRequestProcessorChain.java:242)
  2>at 
org.apache.solr.update.processor.UpdateRequestProcessorChain.createProcessor(UpdateRequestProcessorChain.java:214)
  2>at 
org.apache.solr.update.UpdateLog$LogReplayer.lambda$doReplay$0(UpdateLog.java:2103)
  2>at 
java.base/java.lang.ThreadLocal$SuppliedThreadLocal.initialValue(ThreadLocal.java:305)
  2>at java.base/java.lang.ThreadLocal.setInitialValue(ThreadLocal.java:195)
  2>at java.base/java.lang.ThreadLocal.get(ThreadLocal.java:172)
  2>at 
org.apache.solr.update.UpdateLog$LogReplayer.lambda$execute$2(UpdateLog.java:2342)
  2>at 
org.apache.solr.util.OrderedExecutor.lambda$execute$0(OrderedExecutor.java:68)
  2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$1(ExecutorUtil.java:449)
  2>at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  2>at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  2>at java.base/java.lang.Thread.run(Thread.java:829)
{noformat}






[jira] [Updated] (SOLR-17069) Upgrade Jetty to 12.x

2024-08-14 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-17069:

Fix Version/s: main (10.0)
 Priority: Blocker  (was: Major)

> Upgrade Jetty to 12.x
> -
>
> Key: SOLR-17069
> URL: https://issues.apache.org/jira/browse/SOLR-17069
> Project: Solr
>  Issue Type: Improvement
>  Components: Server
>Reporter: Kevin Risden
>Priority: Blocker
> Fix For: main (10.0)
>
>
> On SOLR-16441 PR it was mentioned that Jetty 12.x supports multiple servlet 
> versions and we could stick with javax.servlet instead of all the changes 
> required to go to Jetty 11.x in SOLR-16441 PR. 
> Jetty 12 requires JDK 17
> References:
> * https://webtide.com/introducing-jetty-12/
> * 
> https://eclipse.dev/jetty/documentation/jetty-12/programming-guide/index.html#pg-migration-11-to-12






[jira] [Created] (SOLR-17410) Gradle up-to-date checks not working with openApiGenerate

2024-08-14 Thread David Smiley (Jira)
David Smiley created SOLR-17410:
---

 Summary: Gradle up-to-date checks not working with openApiGenerate
 Key: SOLR-17410
 URL: https://issues.apache.org/jira/browse/SOLR-17410
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Build
Reporter: David Smiley
Assignee: Houston Putman


If we do {{./gradlew compile}} twice without doing anything in-between, we 
should expect a fully up-to-date Gradle build that does not run any task that 
writes to disk.  Instead, {{openApiGenerate}} outputs a bunch of stuff 
(perhaps it's our noisiest task?), including that it cleaned the output and 
did generation.  My expectation is that it should do nothing and ideally 
print nothing either.

Probably a regression since [https://github.com/apache/solr/pull/2502] 
(cleanOutput=true).






[jira] [Updated] (SOLR-15603) Fix Gradle build cache (still disabled by default)

2024-08-13 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-15603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-15603:

Summary: Fix Gradle build cache (still disabled by default)  (was: Activate 
Gradle build cache)

> Fix Gradle build cache (still disabled by default)
> --
>
> Key: SOLR-15603
> URL: https://issues.apache.org/jira/browse/SOLR-15603
> Project: Solr
>  Issue Type: Improvement
>Reporter: Alexis Tual
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 9.0
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Activate Gradle build cache to avoid re-executing cacheable tasks.
> Make as well some custom tasks cacheable, this effort can be quite large 
> depending on build complexity, so this Jira issue will cover only the 
> straightforward fixes.






[jira] [Commented] (SOLR-15748) Create v2 equivalent of v1 'CLUSTERSTATUS' (or document alternatives)

2024-08-13 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-15748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17873227#comment-17873227
 ] 

David Smiley commented on SOLR-15748:
-

I would argue we didn't need this.  The information (e.g. aliases, the status 
of one collection, live nodes) should be made available via V2, but we don't 
need a macro API that returns a bunch of these things at once, which is what 
CLUSTERSTATUS is.  I suspect CLUSTERSTATUS was one of the original 
SolrCloud-level APIs, so it became somewhat of a kitchen sink for everything 
about cluster state/status.  In hindsight, I don't agree with it.  It ends up 
blowing up in size for clients that only need a subset of it, leading us to 
decompose it.

> Create v2 equivalent of v1 'CLUSTERSTATUS' (or document alternatives)
> -
>
> Key: SOLR-15748
> URL: https://issues.apache.org/jira/browse/SOLR-15748
> Project: Solr
>  Issue Type: Sub-task
>  Components: v2 API
>Affects Versions: 9.1
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Major
>  Labels: V2
> Fix For: main (10.0), 9.2
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Solr's 'CLUSTERSTATUS' command under the v1 {{/solr/admin/collections}} 
> endpoint has no v2 equivalent. This should be remedied to inch v2 closer to 
> parity with v1 in preparation for eventual v1 deprecation.






[jira] [Created] (SOLR-17398) Add Javadoc links from V1 APIs to V2 APIs

2024-08-09 Thread David Smiley (Jira)
David Smiley created SOLR-17398:
---

 Summary: Add Javadoc links from V1 APIs to V2 APIs
 Key: SOLR-17398
 URL: https://issues.apache.org/jira/browse/SOLR-17398
 Project: Solr
  Issue Type: Sub-task
  Components: v2 API
Reporter: David Smiley


The Java code for V1 APIs should have a javadoc comment pointing to equivalent 
v2 APIs when it's not obvious (when V1 is not simply calling V2 and/or they 
occupy different classes).  If V2 doesn't exist yet, a TBD/TODO comment is 
helpful nonetheless.

For example in CollectionsHandler above the enum entry for CLUSTERSTATUS_OP, 
add:
{quote}Superseded by V2: \{@link ClusterAPI} and \{@link ListAliasesApi}
and \{@link CollectionStatusAPI}.
{quote}
In that specific case, the ClusterStatus class itself could be marked 
deprecated, although that deprecation should perhaps happen only in Solr 10, 
as some such classes will not be removed until Solr 11. Nonetheless, a javadoc 
comment saying it will be removed in Solr 11 would be very helpful!

Perhaps in many cases there is no need because V1 tends to call V2.






[jira] [Commented] (SOLR-15751) Create a v2 equivalent for 'COLSTATUS'

2024-08-09 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-15751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17872537#comment-17872537
 ] 

David Smiley commented on SOLR-15751:
-

I agree that for V2 we should have one endpoint instead of V1's dual 
endpoints.  Note there are some peculiarities... notably, CLUSTERSTATUS will 
check liveNodes when returning a replica's state, but COLSTATUS doesn't do 
that.  I do recommend the live-node check, as a replica on a dead node is 
effectively down.

If we agree on the above, then I think this issue could at least mark 
CLUSTERSTATUS's duplicative code as deprecated, pointing to the ColStatus code 
as its successor.

> Create a v2 equivalent for 'COLSTATUS'
> --
>
> Key: SOLR-15751
> URL: https://issues.apache.org/jira/browse/SOLR-15751
> Project: Solr
>  Issue Type: Sub-task
>  Components: v2 API
>Reporter: Jason Gerlowski
>Priority: Major
>  Labels: V2, newdev
>
> Solr's 'COLSTATUS' command under the v1 \{{/solr/admin/collections}} endpoint 
> has no full v2 equivalent. The \{{/v2/collections/}} API has 
> similar (identical?) output as a vanilla COLSTATUS request, but COLSTATUS can 
> also return detailed index information that \{{/v2/collections/}} 
> cannot expose. We should add parameters to this v2 API to expose similar data 
> to achieve parity with the v1 COLSTATUS API.






[jira] [Resolved] (SOLR-17381) Make CLUSTERSTATUS request configurable

2024-08-09 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-17381.
-
Fix Version/s: 9.8
   Resolution: Fixed

Thanks for contributing!

I wondered more about the V2 API situation here.  I found it and it's already 
decomposed as we want (done in SOLR-15748); I wish I had seen this before!  
(face-palm).  Granted, the approach taken here was very backwards compatible so 
that we could modify SolrJ HttpClusterStateProvider to continue to use a V1 API 
that would still function talking to some older Solr server or one that didn't 
have V2 enabled.

> Make CLUSTERSTATUS request configurable
> ---
>
> Key: SOLR-17381
> URL: https://issues.apache.org/jira/browse/SOLR-17381
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Aparna Suresh
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 9.8
>
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> Fetching {{CLUSTERSTATUS}} remotely is resource-intensive and should be done 
> with caution. Currently, if no parameters are specified, the call returns all 
> information, including collections, shards, replicas, aliases, cluster 
> properties, roles, and more. This can have significant performance 
> implications for clients using a Solr cluster with thousands of collections.
> Several performance [issues|https://issues.apache.org/jira/browse/SOLR-14985] 
> have been identified when switching {{CloudSolrClient}} to use HTTP-based 
> CSP, particularly in two instances where the entire cluster state is fetched 
> unnecessarily.
> *Proposal:* Modify the requests to retrieve only the necessary information, 
> such as the cluster status for a specific collection, live nodes, or cluster 
> properties. Ensure these changes maintain backward compatibility. 
> Additionally, update the HTTP CSP to reflect these optimizations.






[jira] [Resolved] (SOLR-17396) Reduce thread contention in ZkStateReader.getCollectionProperties()

2024-08-08 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-17396.
-
Fix Version/s: 9.8
   Resolution: Fixed

Thanks Aparna & Paul!

> Reduce thread contention in ZkStateReader.getCollectionProperties()
> ---
>
> Key: SOLR-17396
> URL: https://issues.apache.org/jira/browse/SOLR-17396
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Aparna Suresh
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 9.8
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Building on [PR #2585|https://github.com/apache/solr/pull/2585], which 
> delegated collection-properties management to 
> {{CollectionPropertiesZkStateReader}}, this PR seeks to minimize thread 
> contention within {{CollectionPropertiesZkStateReader}}.
> Proposal: 
>  * Use collection-level locking where relevant instead of synchronizing on 
> "watchedCollectionProperties"
>  * With the double-checked locking implemented in 
> CollectionPropertiesZkStateReader, the scope of synchronized(this) will be 
> reduced to collection-property operations, and no longer be in contention 
> with the synchronization of unrelated operations on ZkStateReader.
>  






[jira] [Resolved] (SOLR-14985) Slow indexing and search performance when using HttpClusterStateProvider

2024-08-08 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-14985.
-
Fix Version/s: 9.8
   Resolution: Fixed

Thanks Aparna & Shalin!

Note that there are other linked JIRAs to address additional performance 
concerns.  This is not the only one.

> Slow indexing and search performance when using HttpClusterStateProvider
> 
>
> Key: SOLR-14985
> URL: https://issues.apache.org/jira/browse/SOLR-14985
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Reporter: Shalin Shekhar Mangar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 9.8
>
>  Time Spent: 9h 50m
>  Remaining Estimate: 0h
>
> HttpClusterStateProvider fetches and caches Aliases and Live Nodes for 5 
> seconds.
> The BaseSolrCloudClient caches DocCollection for 60 seconds, but only if the 
> DocCollection is not lazy, and all collections returned by 
> HttpClusterStateProvider are lazy, which means they are never cached.
> The BaseSolrCloudClient has a method for resolving aliases which fetches 
> DocCollection for each input collection. This is an HTTP call with no caching 
> when using HttpClusterStateProvider. This resolveAliases method is called 
> twice for each update.
> So overall, at least 3 HTTP calls are made to fetch cluster state for each 
> update request when using HttpClusterStateProvider. There may be more if 
> aliases are involved or if more than one collection is specified in the 
> request. Similar problems exist on the query path as well.
> Due to these reasons, using HttpClusterStateProvider causes horrible 
> latencies and throughput for update and search requests.
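The TTL caching described above can be illustrated generically. The sketch below is not Solr's code; it assumes a plain java.util.function.Supplier stands in for the HTTP fetch, and shows why a short TTL (as used for aliases and live nodes) amortizes fetches while the uncached DocCollection path pays an HTTP call every time:

```java
import java.util.function.Supplier;

// Minimal, generic sketch (not Solr's actual code) of the TTL caching
// described above: aliases/live nodes are cached this way for a few seconds,
// while lazy per-collection state is effectively never cached, so every
// update request pays the loader (HTTP) cost again.
public class TtlCache<V> {
    private final long ttlNanos;
    private final Supplier<V> loader;   // e.g. an HTTP fetch of cluster state
    private V value;
    private long loadedAt;

    public TtlCache(long ttlMillis, Supplier<V> loader) {
        this.ttlNanos = ttlMillis * 1_000_000L;
        this.loader = loader;
    }

    public synchronized V get() {
        long now = System.nanoTime();
        if (value == null || now - loadedAt > ttlNanos) {
            value = loader.get();       // cache miss: pay the fetch cost
            loadedAt = now;
        }
        return value;                   // cache hit: no network round-trip
    }
}
```

Wrapping the per-collection fetch in something like this, even with a short TTL, would collapse the 3+ per-request HTTP calls to roughly one per TTL window.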



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Resolved] (SOLR-17391) Optimize Backup/Restore Operations for Large Collections

2024-08-08 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-17391.
-
Fix Version/s: 9.7
   Resolution: Fixed

> Optimize Backup/Restore Operations for Large Collections
> 
>
> Key: SOLR-17391
> URL: https://issues.apache.org/jira/browse/SOLR-17391
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 9.4, 9.5, 9.4.1, 9.6, 9.6.1
>Reporter: Hakan Özler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 9.7
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> The backup/restore performance issue was first reported on [the users 
> mailing|https://lists.apache.org/thread/ssmzg5nhhxdhgz4980opn1vzxs81o9pk] 
> list.
>  
> We're experiencing performance issues in the recent Solr versions — 9.5.0 and 
> 9.6.1 — regarding backup and restore. In 9.2.1, we could take a backup of 
> 10TB data in just 1 and a half hours. Currently, as of 9.5.0, taking a backup 
> of the collection takes 7 hours! We're unable to make use of disaster 
> recovery effectively and reliably in Solr. Therefore, Solr 9.2.1 still 
> remains the most effective choice among the other 9.x versions for our use.
> It seems that this is the ticket causing this issue:
> 1. https://issues.apache.org/jira/browse/SOLR-16879
> Interestingly, we never encountered a throttling problem during operations 
> when this was introduced to be solved based on this argument on 9.2.1. From a 
> devops perspective, we have some details and metrics on these tasks to 
> distinguish the difference between two versions. The overall IOPS was 150MB 
> on 9.6.1, while IOPS was 500MB on 9.2.1 during the same backup and restore 
> tasks. In the first image below, the peak on the left represents a backup; in 
> contrast, in the 2nd image, the same backup operation in 9.5.0 uses fewer 
> resources. As you may spot, 9.5.0 seems to be using a fifth of the resources 
> of 9.2.1. 
>  
> !https://i.imgur.com/aSrs8OM.png!
> Image 1.
> !https://i.imgur.com/aSrs8OM.png!
> Image 2.
>  
> Apart from that, monitoring some relevant metrics during the operations, I 
> had some difficulty interpreting the following metrics:
> {code:java}
> ADMIN./admin/cores.threadPool.parallelCoreExpensiveAdminExecutor.pool.core: 0,
> ADMIN./admin/cores.threadPool.parallelCoreExpensiveAdminExecutor.pool.max: 5,
> ADMIN./admin/cores.threadPool.parallelCoreExpensiveAdminExecutor.pool.size: 1,
> ADMIN./admin/cores.threadPool.parallelCoreExpensiveAdminExecutor.running: 
> 1,{code}
> The pool size was 1 although the pool max size is 5. Shouldn't the pool size 
> be 5, instead? However, there is always one task running on a single node, 
> not 5 concurrently, if I'm not mistaken. 
> I was also wondering if the max thread size, which is currently 5 in 9.4+, 
> could be configurable with either an environment variable or Java parameter? 
> The part that needs to be changed seems to be in CoreAdminHandler.java on 
> line 446 [1] I've made a small adjustment to add a Solr parameter called 
> `solr.maxExpensiveTaskThreads` for those who want to set a different thread 
> size for expensive tasks. The number given in this parameter must meet the 
> criteria of ThreadPoolExecutor, otherwise IllegalArgumentException will 
> occur. I've generated a patch [2] and I would love to see if someone from the 
> Solr committers would take on this and apply for the upcoming release. Do you 
> think our observation is accurate and would this patch be feasible to 
> implement?
>  
> 1. 
> [https://github.com/apache/solr/commit/82a847f0f9af18d6eceee18743d636db7a879f3e#diff-5bc3d44ca8b189f44fe9e6f75af8a5510463bdba79ff72a7d0ed190973a32533L446]
> 2. [https://gist.github.com/ozlerhakan/e4d11bddae6a2f89d2c212c220f4c965] 
>  
> Following up on this, we managed to back up 3TB of data in 50 minutes with the 
> patch using `solr.maxExpensiveTaskThreads=5`:
>  
> !https://i.imgur.com/oeCrhLn.png|width=626,height=239!
>  
> I also answered the questions from @Kevin Liang , 
> {quote}Was this change tested on a cloud that was also taking active 
> ingest/query requests at the same time as the backup? 
> {quote}
> The test is completed in a SolrCloud 9.6.1 + the patch cluster managed by the 
> official Solr operator on Amazon EKS. The backup strategy is not intended to 
> happen frequently. Instead, we plan to take some backups for a certain period 
> of time, therefore we won't expect intense search traffic in and out during 
> backups.  
>  
> {quote}This performance is really exciting, but I'm curious how much burden 
> it puts on CPU and memory.
> {quote}
> I'd say that Solr was pretty relaxed during the test based on the CPU 

[jira] [Commented] (SOLR-17391) Optimize Backup/Restore Operations for Large Collections

2024-08-07 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17871790#comment-17871790
 ] 

David Smiley commented on SOLR-17391:
-

For collectorExecutor (parallel segment search) – I agree! CC [~cpoerschke] 
[~ishan]
_(an aside – the name of that field is very non-obvious. IMO should have been 
named searcherCollectorExecutor.)_

This should also improve the "replayUpdatesExecutor" in CoreContainer in a 
minor way, since for that one queueSize == threads. Previously, if there were 4 
docs to add and 4 threads, it would have queued them all and not run any in 
parallel. In practice the update log holds many more docs, so all available 
threads are used once the short queue fills.  I enhanced the test on this PR for 
this case.  Come to think of it, this feature could use a nominal queue size of 
1 because it's gated by a semaphore.

> Optimize Backup/Restore Operations for Large Collections
> 
>
> Key: SOLR-17391
> URL: https://issues.apache.org/jira/browse/SOLR-17391
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 9.4, 9.5, 9.4.1, 9.6, 9.6.1
>Reporter: Hakan Özler
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The backup/restore performance issue was first reported on [the users 
> mailing|https://lists.apache.org/thread/ssmzg5nhhxdhgz4980opn1vzxs81o9pk] 
> list.
>  
> We're experiencing performance issues in the recent Solr versions — 9.5.0 and 
> 9.6.1 — regarding backup and restore. In 9.2.1, we could take a backup of 
> 10TB data in just 1 and a half hours. Currently, as of 9.5.0, taking a backup 
> of the collection takes 7 hours! We're unable to make use of disaster 
> recovery effectively and reliably in Solr. Therefore, Solr 9.2.1 still 
> remains the most effective choice among the other 9.x versions for our use.
> It seems that this is the ticket causing this issue:
> 1. https://issues.apache.org/jira/browse/SOLR-16879
> Interestingly, we never encountered a throttling problem during operations 
> when this was introduced to be solved based on this argument on 9.2.1. From a 
> devops perspective, we have some details and metrics on these tasks to 
> distinguish the difference between two versions. The overall IOPS was 150MB 
> on 9.6.1, while IOPS was 500MB on 9.2.1 during the same backup and restore 
> tasks. In the first image below, the peak on the left represents a backup; in 
> contrast, in the 2nd image, the same backup operation in 9.5.0 uses fewer 
> resources. As you may spot, 9.5.0 seems to be using a fifth of the resources 
> of 9.2.1. 
>  
> !https://i.imgur.com/aSrs8OM.png!
> Image 1.
> !https://i.imgur.com/aSrs8OM.png!
> Image 2.
>  
> Apart from that, monitoring some relevant metrics during the operations, I 
> had some difficulty interpreting the following metrics:
> {code:java}
> ADMIN./admin/cores.threadPool.parallelCoreExpensiveAdminExecutor.pool.core: 0,
> ADMIN./admin/cores.threadPool.parallelCoreExpensiveAdminExecutor.pool.max: 5,
> ADMIN./admin/cores.threadPool.parallelCoreExpensiveAdminExecutor.pool.size: 1,
> ADMIN./admin/cores.threadPool.parallelCoreExpensiveAdminExecutor.running: 
> 1,{code}
> The pool size was 1 although the pool max size is 5. Shouldn't the pool size 
> be 5, instead? However, there is always one task running on a single node, 
> not 5 concurrently, if I'm not mistaken. 
> I was also wondering if the max thread size, which is currently 5 in 9.4+, 
> could be configurable with either an environment variable or Java parameter? 
> The part that needs to be changed seems to be in CoreAdminHandler.java on 
> line 446 [1] I've made a small adjustment to add a Solr parameter called 
> `solr.maxExpensiveTaskThreads` for those who want to set a different thread 
> size for expensive tasks. The number given in this parameter must meet the 
> criteria of ThreadPoolExecutor, otherwise IllegalArgumentException will 
> occur. I've generated a patch [2] and I would love to see if someone from the 
> Solr committers would take on this and apply for the upcoming release. Do you 
> think our observation is accurate and would this patch be feasible to 
> implement?
>  
> 1. 
> [https://github.com/apache/solr/commit/82a847f0f9af18d6eceee18743d636db7a879f3e#diff-5bc3d44ca8b189f44fe9e6f75af8a5510463bdba79ff72a7d0ed190973a32533L446]
> 2. [https://gist.github.com/ozlerhakan/e4d11bddae6a2f89d2c212c220f4c965] 
>  
> Following up on this, we managed to back up 3TB of data in 50 minutes with the 
> patch using `solr.maxExpensiveTaskThreads=5`:
>  
> !https://i.imgur.com/oeCrhLn.png|width=626,height=239!
>  
> 

[jira] [Commented] (SOLR-17391) Optimize Backup/Restore Operations for Large Collections

2024-08-06 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17871345#comment-17871345
 ] 

David Smiley commented on SOLR-17391:
-

Ah; this is the same ThreadPoolExecutor gotcha that our project encountered 
weeks ago on the segment multiThreaded search feature (see 
CoreContainer.collectorExecutor).  Using a fixed pool was the solution, which 
seems appropriate for that spot.  It's too bad there isn't a ready-made 
alternative for the use-case here, but a fixed pool would be adequate.
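For readers unfamiliar with the gotcha: a ThreadPoolExecutor with core size 0 and an unbounded queue only spawns extra threads once the queue is full, which never happens, so it effectively runs single-threaded. A standalone sketch (not Solr code) contrasting it with a fixed pool:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolGotchaDemo {

    // Submits 5 blocking tasks and reports how many threads the pool created.
    static int threadsUsed(ThreadPoolExecutor pool) throws Exception {
        CountDownLatch done = new CountDownLatch(1);
        for (int i = 0; i < 5; i++) {
            pool.submit(() -> { done.await(); return null; });
        }
        Thread.sleep(200);               // let the pool spin up threads
        int size = pool.getPoolSize();
        done.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return size;
    }

    public static void main(String[] args) throws Exception {
        // core=0, max=5, unbounded queue: the pool only grows past the core
        // size when the queue is FULL, which an unbounded queue never is, so
        // at most one thread ever runs despite max=5.
        ThreadPoolExecutor growing = new ThreadPoolExecutor(
                0, 5, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        System.out.println("growing pool used: " + threadsUsed(growing)); // 1

        // A fixed pool (core == max) creates a thread per submitted task up
        // to the core size, so all 5 tasks actually run concurrently.
        ThreadPoolExecutor fixed = new ThreadPoolExecutor(
                5, 5, 0, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        System.out.println("fixed pool used:   " + threadsUsed(fixed)); // 5
    }
}
```

This matches the pool.size=1 vs pool.max=5 metrics reported in the issue description.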

> Optimize Backup/Restore Operations for Large Collections
> 
>
> Key: SOLR-17391
> URL: https://issues.apache.org/jira/browse/SOLR-17391
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 9.4, 9.5, 9.4.1, 9.6, 9.6.1
>Reporter: Hakan Özler
>Priority: Major
>
> The backup/restore performance issue was first reported on [the users 
> mailing|https://lists.apache.org/thread/ssmzg5nhhxdhgz4980opn1vzxs81o9pk] 
> list.
>  
> We're experiencing performance issues in the recent Solr versions — 9.5.0 and 
> 9.6.1 — regarding backup and restore. In 9.2.1, we could take a backup of 
> 10TB data in just 1 and a half hours. Currently, as of 9.5.0, taking a backup 
> of the collection takes 7 hours! We're unable to make use of disaster 
> recovery effectively and reliably in Solr. Therefore, Solr 9.2.1 still 
> remains the most effective choice among the other 9.x versions for our use.
> It seems that this is the ticket causing this issue:
> 1. https://issues.apache.org/jira/browse/SOLR-16879
> Interestingly, we never encountered a throttling problem during operations 
> when this was introduced to be solved based on this argument on 9.2.1. From a 
> devops perspective, we have some details and metrics on these tasks to 
> distinguish the difference between two versions. The overall IOPS was 150MB 
> on 9.6.1, while IOPS was 500MB on 9.2.1 during the same backup and restore 
> tasks. In the first image below, the peak on the left represents a backup; in 
> contrast, in the 2nd image, the same backup operation in 9.5.0 uses fewer 
> resources. As you may spot, 9.5.0 seems to be using a fifth of the resources 
> of 9.2.1. 
>  
> !https://i.imgur.com/aSrs8OM.png!
> Image 1.
> !https://i.imgur.com/aSrs8OM.png!
> Image 2.
>  
> Apart from that, monitoring some relevant metrics during the operations, I 
> had some difficulty interpreting the following metrics:
> {code:java}
> ADMIN./admin/cores.threadPool.parallelCoreExpensiveAdminExecutor.pool.core: 0,
> ADMIN./admin/cores.threadPool.parallelCoreExpensiveAdminExecutor.pool.max: 5,
> ADMIN./admin/cores.threadPool.parallelCoreExpensiveAdminExecutor.pool.size: 1,
> ADMIN./admin/cores.threadPool.parallelCoreExpensiveAdminExecutor.running: 
> 1,{code}
> The pool size was 1 although the pool max size is 5. Shouldn't the pool size 
> be 5, instead? However, there is always one task running on a single node, 
> not 5 concurrently, if I'm not mistaken. 
> I was also wondering if the max thread size, which is currently 5 in 9.4+, 
> could be configurable with either an environment variable or Java parameter? 
> The part that needs to be changed seems to be in CoreAdminHandler.java on 
> line 446 [1] I've made a small adjustment to add a Solr parameter called 
> `solr.maxExpensiveTaskThreads` for those who want to set a different thread 
> size for expensive tasks. The number given in this parameter must meet the 
> criteria of ThreadPoolExecutor, otherwise IllegalArgumentException will 
> occur. I've generated a patch [2] and I would love to see if someone from the 
> Solr committers would take on this and apply for the upcoming release. Do you 
> think our observation is accurate and would this patch be feasible to 
> implement?
>  
> 1. 
> [https://github.com/apache/solr/commit/82a847f0f9af18d6eceee18743d636db7a879f3e#diff-5bc3d44ca8b189f44fe9e6f75af8a5510463bdba79ff72a7d0ed190973a32533L446]
> 2. [https://gist.github.com/ozlerhakan/e4d11bddae6a2f89d2c212c220f4c965] 
>  
> Following up on this, we managed to back up 3TB of data in 50 minutes with the 
> patch using `solr.maxExpensiveTaskThreads=5`:
>  
> !https://i.imgur.com/oeCrhLn.png|width=626,height=239!
>  
> I also answered the questions from @Kevin Liang , 
> {quote}Was this change tested on a cloud that was also taking active 
> ingest/query requests at the same time as the backup? 
> {quote}
> The test is completed in a SolrCloud 9.6.1 + the patch cluster managed by the 
> official Solr operator on Amazon EKS. The backup strategy is not intended to 
> happen frequently. Instead, we plan to take some backups for a certain period 
> of time, therefore we won't expect i

[jira] [Commented] (SOLR-17391) Optimize Backup/Restore Operations for Large Collections

2024-08-06 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17871332#comment-17871332
 ] 

David Smiley commented on SOLR-17391:
-

Hello; thanks for reporting in nice detail!  I haven't yet looked at this 
carefully, but did see that your proposed patch replaces the thread pool with a 
fixed-size one (albeit configurable).  Can't we have a cached pool so that we 
don't use any threads if there's nothing to do?  Many servers & tests are 
commonly not doing any of these "expensive" tasks at any one moment.
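One way to get cached-pool behavior without giving up bounded parallelism, sketched here as a plain java.util.concurrent idea rather than a concrete proposal for CoreAdminHandler: a fixed-size pool with allowCoreThreadTimeOut(true), so idle threads expire and an idle server holds none.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class IdleFriendlyFixedPool {

    // A fixed-size pool whose threads expire when idle: allowCoreThreadTimeOut
    // lets even core threads die after the keep-alive, so a server doing none
    // of these "expensive" tasks holds zero threads, while a busy one still
    // gets the full fixed-pool parallelism.
    static ThreadPoolExecutor newPool(int size) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                size, size, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        pool.allowCoreThreadTimeOut(true);
        return pool;
    }

    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = newPool(5);
        System.out.println(pool.getPoolSize()); // 0: no threads until work arrives
        pool.submit(() -> {}).get();            // first task spawns a thread
        pool.shutdown();
    }
}
```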

> Optimize Backup/Restore Operations for Large Collections
> 
>
> Key: SOLR-17391
> URL: https://issues.apache.org/jira/browse/SOLR-17391
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 9.4, 9.5, 9.4.1, 9.6, 9.6.1
>Reporter: Hakan Özler
>Priority: Major
>
> The backup/restore performance issue was first reported on [the users 
> mailing|https://lists.apache.org/thread/ssmzg5nhhxdhgz4980opn1vzxs81o9pk] 
> list.
>  
> We're experiencing performance issues in the recent Solr versions — 9.5.0 and 
> 9.6.1 — regarding backup and restore. In 9.2.1, we could take a backup of 
> 10TB data in just 1 and a half hours. Currently, as of 9.5.0, taking a backup 
> of the collection takes 7 hours! We're unable to make use of disaster 
> recovery effectively and reliably in Solr. Therefore, Solr 9.2.1 still 
> remains the most effective choice among the other 9.x versions for our use.
> It seems that this is the ticket causing this issue:
> 1. https://issues.apache.org/jira/browse/SOLR-16879
> Interestingly, we never encountered a throttling problem during operations 
> when this was introduced to be solved based on this argument on 9.2.1. From a 
> devops perspective, we have some details and metrics on these tasks to 
> distinguish the difference between two versions. The overall IOPS was 150MB 
> on 9.6.1, while IOPS was 500MB on 9.2.1 during the same backup and restore 
> tasks. In the first image below, the peak on the left represents a backup; in 
> contrast, in the 2nd image, the same backup operation in 9.5.0 uses fewer 
> resources. As you may spot, 9.5.0 seems to be using a fifth of the resources 
> of 9.2.1. 
>  
> !https://i.imgur.com/aSrs8OM.png!
> Image 1.
> !https://i.imgur.com/aSrs8OM.png!
> Image 2.
>  
> Apart from that, monitoring some relevant metrics during the operations, I 
> had some difficulty interpreting the following metrics:
> {code:java}
> ADMIN./admin/cores.threadPool.parallelCoreExpensiveAdminExecutor.pool.core: 0,
> ADMIN./admin/cores.threadPool.parallelCoreExpensiveAdminExecutor.pool.max: 5,
> ADMIN./admin/cores.threadPool.parallelCoreExpensiveAdminExecutor.pool.size: 1,
> ADMIN./admin/cores.threadPool.parallelCoreExpensiveAdminExecutor.running: 
> 1,{code}
> The pool size was 1 although the pool max size is 5. Shouldn't the pool size 
> be 5, instead? However, there is always one task running on a single node, 
> not 5 concurrently, if I'm not mistaken. 
> I was also wondering if the max thread size, which is currently 5 in 9.4+, 
> could be configurable with either an environment variable or Java parameter? 
> The part that needs to be changed seems to be in CoreAdminHandler.java on 
> line 446 [1] I've made a small adjustment to add a Solr parameter called 
> `solr.maxExpensiveTaskThreads` for those who want to set a different thread 
> size for expensive tasks. The number given in this parameter must meet the 
> criteria of ThreadPoolExecutor, otherwise IllegalArgumentException will 
> occur. I've generated a patch [2] and I would love to see if someone from the 
> Solr committers would take on this and apply for the upcoming release. Do you 
> think our observation is accurate and would this patch be feasible to 
> implement?
>  
> 1. 
> [https://github.com/apache/solr/commit/82a847f0f9af18d6eceee18743d636db7a879f3e#diff-5bc3d44ca8b189f44fe9e6f75af8a5510463bdba79ff72a7d0ed190973a32533L446]
> 2. [https://gist.github.com/ozlerhakan/e4d11bddae6a2f89d2c212c220f4c965] 
>  
> Following up on this, we managed to back up 3TB of data in 50 minutes with the 
> patch using `solr.maxExpensiveTaskThreads=5`:
>  
> !https://i.imgur.com/oeCrhLn.png|width=626,height=239!
>  
> I also answered the questions from @Kevin Liang , 
> {quote}Was this change tested on a cloud that was also taking active 
> ingest/query requests at the same time as the backup? 
> {quote}
> The test is completed in a SolrCloud 9.6.1 + the patch cluster managed by the 
> official Solr operator on Amazon EKS. The backup strategy is not intended to 
> happen frequently. Instead, we plan to take some backups for a certain period 
> of time, therefore we won't expect i

[jira] [Commented] (SOLR-17392) Reproducing failure in TestExportWriter (bits = null NPE)

2024-08-05 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17871214#comment-17871214
 ] 

David Smiley commented on SOLR-17392:
-

Can we assume the new segment/SolrIndexSearcher multiThreaded thing is off?

> Reproducing failure in TestExportWriter (bits = null NPE)
> -
>
> Key: SOLR-17392
> URL: https://issues.apache.org/jira/browse/SOLR-17392
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Gus Heck
>Priority: Major
>
> While running the test suite locally I came across the NPE below. I notice 
> that there are a couple other issues out there that hit NPE in 
> BitsetIterator as well. There might be some need for static analysis to 
> flag any case where the constructor might be passed a null bitset (The 
> constructor calls .length() blindly on whatever is passed in)
>   2> 4915 INFO  
> (TEST-TestExportWriter.testRandomNumerics-seed#[4D8B4D17DC5E971F]) [n: c: s: 
> r: x: t:] o.a.s.SolrTestCaseJ4 ###Ending testRandomNumerics
>    >     java.io.IOException: java.lang.NullPointerException: Cannot invoke 
> "org.apache.lucene.util.BitSet.length()" because "bits" is null
>    >         at 
> __randomizedtesting.SeedInfo.seed([4D8B4D17DC5E971F:7E9A173E1EE4C6F2]:0)
>    >         at 
> org.apache.solr.handler.export.ExportWriter$SegmentIterator.topDocs(ExportWriter.java:840)
>    >         at 
> org.apache.solr.handler.export.ExportWriter$SegmentIterator.<init>(ExportWriter.java:782)
>    >         at 
> org.apache.solr.handler.export.ExportWriter.getMergeIterator(ExportWriter.java:754)
>    >         at 
> org.apache.solr.handler.export.ExportBuffers.<init>(ExportBuffers.java:97)
>    >         at 
> org.apache.solr.handler.export.ExportWriter.writeDocs(ExportWriter.java:395)
>    >         at 
> org.apache.solr.handler.export.ExportWriter.lambda$_write$1(ExportWriter.java:344)
>    >         at 
> org.apache.solr.common.util.JsonTextWriter.writeIterator(JsonTextWriter.java:150)
>    >         at 
> org.apache.solr.common.util.TextWriter.writeIterator(TextWriter.java:260)
>    >         at 
> org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:86)
>    >         at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:213)
>    >         at 
> org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:48)
>    >         at 
> org.apache.solr.common.util.JsonTextWriter$2.put(JsonTextWriter.java:187)
>    >         at 
> org.apache.solr.handler.export.ExportWriter.lambda$_write$2(ExportWriter.java:344)
>    >         at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:174)
>    >         at 
> org.apache.solr.common.util.TextWriter.writeMap(TextWriter.java:251)
>    >         at 
> org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:88)
>    >         at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:213)
>    >         at 
> org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:48)
>    >         at 
> org.apache.solr.common.util.JsonTextWriter$2.put(JsonTextWriter.java:187)
>    >         at 
> org.apache.solr.handler.export.ExportWriter.lambda$_write$3(ExportWriter.java:339)
>    >         at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:174)
>    >         at 
> org.apache.solr.handler.export.ExportWriter._write(ExportWriter.java:336)
>    >         at 
> org.apache.solr.handler.export.ExportWriter.write(ExportWriter.java:195)
>    >         at org.apache.solr.core.SolrCore$3.write(SolrCore.java:3045)
>    >         at org.apache.solr.util.TestHarness.query(TestHarness.java:361)
>    >         at org.apache.solr.util.TestHarness.query(TestHarness.java:333)
>    >         at 
> org.apache.solr.handler.export.TestExportWriter.doTestQuery(TestExportWriter.java:1483)
>    >         at 
> org.apache.solr.handler.export.TestExportWriter.testRandomNumerics(TestExportWriter.java:1108)
>    >         at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    >         at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>    >         at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>    >         at java.base/java.lang.reflect.Method.invoke(Method.java:569)
>    >         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
>    >         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
>    >         at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
>    >         at 
> com.carrotsear

[jira] [Created] (SOLR-17390) EmbeddedSolrServer should support a ResponseParser

2024-08-04 Thread David Smiley (Jira)
David Smiley created SOLR-17390:
---

 Summary: EmbeddedSolrServer should support a ResponseParser
 Key: SOLR-17390
 URL: https://issues.apache.org/jira/browse/SOLR-17390
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley


By default, a SolrRequest has a null/unspecified ResponseParser; it's handled 
automatically within SolrJ.  But an explicit one communicates that the client 
code needs it, like JsonMapResponseParser, InputStreamResponseParser, or 
NoOpResponseParser (particularly those 3).  EmbeddedSolrServer doesn't look at 
this; the NamedList right out of the core/handler is normalized (via javabin 
round-trip) and returned.  While that makes sense _normally_, a ResponseParser 
should also be supported.  This enables tests that might want to use 
EmbeddedSolrServer but which need to test JSON or XML (for the convenience of 
xpath/json expressions, for example).  Also, the newer V2 API generated clients 
would need this to support EmbeddedSolrServer, as they are currently based on 
InputStreamResponseParser.

Doing this means determining the correct ResponseWriter (not assuming JavaBin 
during normalization).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Resolved] (SOLR-17368) TestPrometheusResponseWriter redesign

2024-08-03 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-17368.
-
Resolution: Fixed

> TestPrometheusResponseWriter redesign
> -
>
> Key: SOLR-17368
> URL: https://issues.apache.org/jira/browse/SOLR-17368
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Chris M. Hostetter
>Priority: Major
>  Labels: pull-request-available
> Fix For: 9.7
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> TestPrometheusResponseWriter currently fails in 100% of all jenkins builds 
> due to how it is designed, and depending on what other tests may run before 
> it in the same JVM.
> This problem only affects the test, not the functionality of the underlying 
> code.
> See SOLR-10654 for background discussions of the problems with this test, and 
> options for improving its design relative to its purpose.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Commented] (SOLR-10654) Expose Metrics in Prometheus format DIRECTLY from Solr

2024-08-03 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-10654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17870766#comment-17870766
 ] 

David Smiley commented on SOLR-10654:
-

[~mbiscocho], the internal classes for this new functionality have names that 
sound like the existing so-called "prometheus exporter" (e.g. 
"SolrPrometheusExporter" !).  Without looking carefully for the location of 
where the source lives or the package, the class names will confuse us 
developers, I think.  Might you suggest alternative names; perhaps ones without 
"Exporter" -- like maybe swap "Format" for it?  (Not sure if we discussed this 
previously but it seems glaring to me now)

> Expose Metrics in Prometheus format DIRECTLY from Solr
> --
>
> Key: SOLR-10654
> URL: https://issues.apache.org/jira/browse/SOLR-10654
> Project: Solr
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Keith Laban
>Priority: Major
> Fix For: 9.7
>
> Attachments: prometheus_metrics.txt
>
>  Time Spent: 7h 20m
>  Remaining Estimate: 0h
>
> Expose metrics via a `wt=prometheus` response type.
> Example scrape_config in prometheus.yml:
> {code:java}
> scrape_configs:
>   - job_name: 'solr'
> metrics_path: '/solr/admin/metrics'
> params:
>   wt: ["prometheus"]
> static_configs:
>   - targets: ['localhost:8983']
> {code}
> [Rationale|https://issues.apache.org/jira/browse/SOLR-11795?focusedCommentId=17261423&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17261423]
>  for having this despite the "Prometheus Exporter".  They have different 
> strengths and weaknesses.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Resolved] (SOLR-17276) Use fixed rate instead of fixed delay in prometheus-exporter

2024-08-02 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-17276.
-
Fix Version/s: 9.7
   Resolution: Fixed

Thanks for contributing Rafal!

> Use fixed rate instead of fixed delay in prometheus-exporter
> 
>
> Key: SOLR-17276
> URL: https://issues.apache.org/jira/browse/SOLR-17276
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - prometheus-exporter
>Reporter: Rafał Harabień
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 9.7
>
> Attachments: image-2024-05-06-18-10-30-739.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Prometheus-exporter is supposed to read metrics from Solr servers every 60 
> seconds (scrape interval can be changed using --scrape-interval argument). 
> But the truth is it does it every 60+X seconds where X is the time needed to 
> read metrics from all Solr servers. In my case X is 1-2 s. If Prometheus 
> scrapes the exporter every 60 seconds it can lead to duplicated samples (e.g. 
> metrics will stay the same for 2 minutes).
> !image-2024-05-06-18-10-30-739.png!
> It's a result of using 
> [scheduler.scheduleWithFixedDelay|https://github.com/apache/solr/blob/2bb2ada0a372f4d101b78df8d43e0fc44c8edbf3/solr/prometheus-exporter/src/java/org/apache/solr/prometheus/collector/SchedulerMetricsCollector.java#L77]
>  instead of 
> scheduler.scheduleAtFixedRate.
> Note: function scheduled with scheduleAtFixedRate can still be started late 
> if previous execution has not finished. There is no risk of overlapping 
> executions.
> I am going to prepare a PR.
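The behavioral difference can be shown in isolation (a standalone sketch, not the exporter's code): with a 50ms task and a 100ms interval, scheduleWithFixedDelay drifts to a ~150ms effective period, while scheduleAtFixedRate keeps starts on the 100ms grid.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SchedulingDemo {

    // Counts how many runs of a 50ms task start within ~500ms under each mode.
    static int runsIn500ms(boolean fixedRate) throws Exception {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        AtomicInteger runs = new AtomicInteger();
        Runnable task = () -> {
            runs.incrementAndGet();
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
        };
        if (fixedRate) {
            // starts are scheduled every 100ms from the initial start; if a
            // run overruns, the next starts late but executions never overlap
            scheduler.scheduleAtFixedRate(task, 0, 100, TimeUnit.MILLISECONDS);
        } else {
            // the next start waits 100ms AFTER the previous run finishes,
            // so the effective period drifts to ~150ms (100ms + 50ms of work)
            scheduler.scheduleWithFixedDelay(task, 0, 100, TimeUnit.MILLISECONDS);
        }
        Thread.sleep(500);
        scheduler.shutdownNow();
        return runs.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("fixedRate  runs: " + runsIn500ms(true));   // typically 5
        System.out.println("fixedDelay runs: " + runsIn500ms(false));  // typically 4
    }
}
```

As the note above says, fixedRate may start a run late if the previous one overran, but with a single-threaded scheduler executions still never overlap.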



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Commented] (SOLR-17368) TestPrometheusResponseWriter redesign

2024-08-02 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17870658#comment-17870658
 ] 

David Smiley commented on SOLR-17368:
-

Bingo -- see DropWizard's {{SharedMetricRegistries.clear()}} method; I 
confirmed this can be used to fix the test!

It's tempting to consider adding this to SolrTestCase's solrClassRules in an 
"afterAlways" (I tried this to verify the technique works), but vanishingly few 
tests should care (just TestPrometheusResponseWriter?), and SolrMetricManager 
documents that registries are shared across CoreContainers, so it's not as if 
the status quo is wrong.  So maybe just this test should call it in a 
BeforeClass and AfterClass.  WDYT [~hossman]?
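A sketch of that approach (assuming JUnit 4 and Dropwizard Metrics on the test classpath; only the `SharedMetricRegistries.clear()` calls are the point, the class skeleton is illustrative):

```java
import com.codahale.metrics.SharedMetricRegistries;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class TestPrometheusResponseWriter /* extends the usual Solr test base class */ {

  // Clear any registries left behind by tests that ran earlier in this JVM,
  // so this test starts from a known-empty SharedMetricRegistries state.
  @BeforeClass
  public static void clearSharedMetricRegistriesBefore() {
    SharedMetricRegistries.clear();
  }

  // Leave a clean state for whatever test runs next in the same JVM.
  @AfterClass
  public static void clearSharedMetricRegistriesAfter() {
    SharedMetricRegistries.clear();
  }
}
```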

> TestPrometheusResponseWriter redesign
> -
>
> Key: SOLR-17368
> URL: https://issues.apache.org/jira/browse/SOLR-17368
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Chris M. Hostetter
>Priority: Major
>  Labels: pull-request-available
> Fix For: 9.7
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> TestPrometheusResponseWriter currently fails in 100% of all jenkins builds 
> due to how it is designed, and depending on what other tests may run before 
> it in the same JVM.
> This problem only affects the test, not the functionality of the underlying 
> code.
> See SOLR-10654 for background discussions of the problems with this test, and 
> options for improving its design relative to its purpose.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Commented] (SOLR-17298) Multithreaded search breaks limits, and possibly other things

2024-08-02 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17870633#comment-17870633
 ] 

David Smiley commented on SOLR-17298:
-

bq. What was your reason for disallowing RankQuery, GraphQuery and JoinQuery? 
Tests seem to pass with the check commented out

Just a guess -- the multiThreaded enablement is often effectively ignored based 
on internal heuristics on the segment count and other factors (available 
processor thread count?).  Thus it's possible those queries you listed do have 
an issue but the tests using them don't tickle the heuristics just right to 
enable them.  

BTW this JIRA issue starts off with a rant.  That rant very much resonates with 
me; I was also annoyed and felt things were done improperly, but I don't think 
a JIRA description is the place to rant.  Just get to the point of what this 
JIRA issue is about.  Rant in a comment on the offending issue or on the dev 
list.

> Multithreaded search breaks limits, and possibly other things
> -
>
> Key: SOLR-17298
> URL: https://issues.apache.org/jira/browse/SOLR-17298
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: main (10.0), 9.7, 9x
>Reporter: Gus Heck
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 9.7
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> https://issues.apache.org/jira/browse/SOLR-13350 was merged to main somewhat 
> unexpectedly, and then back-ported to 9x without any response to feedback 
> from multiple committers, including feedback that
>  * By turning it on by default, it breaks the recently released CPU limits 
> (as shown by changes to unit tests).
>  * Incompatibility with timeAllowed, cpuTimeAllowed, segmentTerminateEarly, 
> GraphQuery, RankQuery and JoinQuery was not clearly documented
>  * The code presents a possibility for users to receive a non-numeric max 
> score ("NaN").
> I have not verified it yet, but I would also be worried about the health of 
> CPU time logging to be added in 
> https://issues.apache.org/jira/browse/SOLR-16986 after this change.
> Given that:
>  * Some of the above issues represent back compatibility breaks or potential 
> back compatibility breaks for released features
>  * The decision to break compatibility within the 9x release series deserves 
> a formal vote (or a fix).
> * There has been no communication/response from the committer who merged 
> these changes since May 6 (aside from the backport to 9x on May 13), so it 
> seems that this state may persist for some time.
> Therefore it appears necessary to file this issue to ensure anything but a 
> 9.6.1 is blocked until the above issues are sorted out. This ticket can serve 
> as a parent ticket to whatever various solutions are agreed upon.
> Multi-threaded search is an awesome feature that has taken a very long time 
> to be realized and is obviously desirable, but we have now placed ourselves 
> in an awkward position by not resolving these last few issues before back 
> porting.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Commented] (SOLR-17320) HttpShardHandler should obey `timeAllowed` parameter in query

2024-07-31 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17870044#comment-17870044
 ] 

David Smiley commented on SOLR-17320:
-

Linking to existing SOLR-17158 as duplicate

> HttpShardHandler should obey `timeAllowed` parameter in query
> -
>
> Key: SOLR-17320
> URL: https://issues.apache.org/jira/browse/SOLR-17320
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query, SolrCloud
>Affects Versions: main (10.0), 9.6.1
>Reporter: Hitesh Khamesra
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> HttpShardHandler should use the `timeAllowed` param in the query as a timeout 
> for any shard response. We have observed that different shards sometimes take 
> different times to process a query. In those cases, if the user has specified 
> timeAllowed, then Solr should use that time to return a partial response. 
> I have added a patch for it: [https://github.com/apache/solr/pull/2493]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Resolved] (SOLR-17368) TestPrometheusResponseWriter redesign

2024-07-31 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-17368.
-
Fix Version/s: 9.7
   Resolution: Fixed

Thanks for resolving, Mathew!

> TestPrometheusResponseWriter redesign
> -
>
> Key: SOLR-17368
> URL: https://issues.apache.org/jira/browse/SOLR-17368
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Chris M. Hostetter
>Priority: Major
>  Labels: pull-request-available
> Fix For: 9.7
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> TestPrometheusResponseWriter currently fails in 100% of all jenkins builds 
> due to how it is designed, and depending on what other tests may run before 
> it in the same JVM.
> This problem only affects the test, not the functionality of the underlying 
> code.
> See SOLR-10654 for background discussions of the problems with this test, and 
> options for improving its design relative to its purpose.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Updated] (SOLR-17383) CLI: Resolve overlapping arguments

2024-07-30 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-17383:

Summary: CLI: Resolve overlapping arguments  (was: Resolve overlapping 
arguments)

> CLI: Resolve overlapping arguments
> --
>
> Key: SOLR-17383
> URL: https://issues.apache.org/jira/browse/SOLR-17383
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christos Malliaridis
>Priority: Major
>
> With the changes from SOLR-16757, arguments were moved to Java source code to 
> simplify and merge shell script logic. We noticed many overlaps in 
> arguments' short and long forms that cause confusion and possibly unexpected 
> results.
> Since commands are executed with "bin/solr", the user has a hard time 
> learning the short form of each argument, and the long form of the same 
> argument may vary too, because it is often context-specific.
> Arguments that have conflicts are:
> ||Short Param||Long Param||Occurrences||
> |{{-c}}|{{--name}}|{{HealthCheckTool, ConfigTool, CreateTool, DeleteTool, 
> LinkConfigTool, PostTool}}|
> |{{-c}}|{{--cloud}}|{{AssertTool, RunExampleTool, bin/solr}}|
> |{{-c}}|{{--collection}}|{{PackageTool, bin/solr}}|
> |{{-c}}|{{--collections}}|? (unknown if there is a short form)|
> | | | |
> |{{-v}}|{{--verbose}}|{{SolrCLI, bin/solr}}|
> |{{-v}}|{{--value}}|{{ConfigTool}}|
> |{{-V (cap)}}|{{--verbose}}|{{bin/solr}}|
> |{{-v}}|{{--version}}|{{SolrCLI}}|
> | |{{ }}| |
> |{{-s}}|{{--shards}}|{{CreateTool}}|
> |{{-s}}|{{--started}}|{{AssertTool}}|
> |{{-s}}|{{--script}}|{{RunExampleTool}}|
> |{{-s}}|{{--solr-url}}|{{bin/solr}}|
> |{{-s}}|{{--solr-home}}|{{bin/solr}}|
> |{{-s}}|{{--scrape-interval}}|{{SolrExporter}}|
> | | | |
> |{{-url}}|{{--solr-url}}|{{SolrCLI}}|
> |{{(no short form)}}|{{--solr-url}}|{{ApiTool, StatusTool}}|
> |{{-url}}|{{--solr-collection-url}}|{{PostLogsTool, ExportTool}}|
> |{{-url}}|{{--solr-update-url}}|{{PostTool}}|
> |{{-b}}|{{--base-url}}|{{SolrExporter}}|
> | | | |
> |{{-d}}|{{--conf-dir}}|{{CreateTool, ConfigSetUploadTool, bin/solr}}|
> |{{-d}}|{{--delete-config}}|{{DeleteTool}}|
> |{{-d}}|{{--delay}}|{{PostTool}}|
> |{{-d}}|{{--server-dir}}|{{RunExampleTool}}|
> |{{-d}}|{{--dir}}|{{bin/solr}}|
> |{{ }}|{{ }}|{{ }}|
> |{{-r}}|{{--recurse}}|{{SolrCLI}}|
> |{{-r}}|{{--root}}|{{AssertTool}}|
> |{{-r}}|{{--recursive}}|{{PostTool}}|
> |{{ }}|{{ }}|{{ }}|
> |{{-m}}|{{--memory}}|{{RunExampleTool, bin/solr}}|
> |{{-m}}|{{--message}}|{{AssertTool}}|
> |{{ }}|{{ }}|{{ }}|
> |{{-t}}|{{--type}}|{{PostTool}}|
> |{{-t}}|{{--timeout}}|{{AssertTool}}|
> |{{-t}}|{{--data-home}}|{{bin/solr}}|
> |{{ }}|{{ }}|{{ }}|
> |{{-e}}|{{--example}}|{{bin/solr, RunExampleTool}}|
> |{{-e}}|{{--exitcode}}|{{AssertTool}}|
> |{{ }}|{{ }}|{{ }}|
> |{{-n}}|{{--no-prompt}}|{{RunExampleTool}}|
> |{{-y}}|{{--no-prompt}}|{{PackageTool}}|
> |{{-noprompt}}|{{--no-prompt}}|{{bin/solr}}|
> |{{-n}}|{{--conf-name}}|{{ConfigSetUploadTool, CreateTool, LinkConfigTool, 
> bin/solr}}|
> |{{-n}}|{{--num-threads}}|{{SolrExporter}}|
> |{{ }}|{{ }}|{{ }}|
> |{{-a}}|{{--addlopts}}|{{RunExampleTool (see also SOLR-16757)}}|
> |{{-a}}|{{--additional-options}}|{{bin/solr}}|
> |{{-a}}|{{-action}}|{{ConfigTool}}|
> |{{ }}|{{ }}|{{ }}|
> |{{-p}}|{{--port}}|{{RunExampleTool, bin/solr, SolrExporter}}|
> |{{-p}}|{{--property}}|{{ConfigTool}}|
> |{{-p}}|{{--param}}|{{PackageTool}}|
> |{{-p}}|{{--params}}|{{PostTool}}|
> |{{ }}|{{ }}|{{ }}|
> |{{-f}}|{{--force}}|{{RunExampleTool, bin/solr}}|
> |{{-f}}|{{--force-delete-config}}|{{DeleteTool}}|
> |{{-f}}|{{--format}}|{{PostTool}}|
> |{{-f}}|{{--foreground}}|{{bin/solr}}|
> |{{-f}}|{{--config-file}}|{{SolrExporter}}|
> |{{ }}|{{ }}|{{ }}|
> |{{-h}}|{{--help}}|{{SolrCLI, bin/solr, SolrExporter}}|
> |{{-h}}|{{--host}}|{{RunExampleTool, bin/solr}}|
> |{{ }}|{{ }}| |
> |{{-u (not obvious)}}|{{--credentials}}|{{SolrCLI, SolrExporter}}|
> Noticable confusions for beginners may be:
>  * 
> {code:java}
> bin/solr start -c -e techproducts # "creates" and starts a solr cloud 
> instance with example data, -c does not receive an argument
> bin/solr create -c mycollection # "creates" a new collection in an existing 
> solr, -c requires a value{code}
>  * 
> {code:java}
> bin/solr create -c mycollection # succeeds
> bin/solr create --collection mycollection (fails?)
> bin/solr create --name mycollection (succeeds){code}
>  * 
> {code:java}
> bin/solr config -c ... --action set-user-property --property 
> update.autoCreateFields -v false # Does this set property to false or execute 
> command in verbose mode, or both{code}
> We should consider for which arguments it is fine to have overlapping short 
> forms, which arguments can be unified and use same short and long-form to 
> improve learnability and which arguments

[jira] [Commented] (SOLR-10255) Large psuedo-stored fields via BinaryDocValuesField

2024-07-30 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17869737#comment-17869737
 ] 

David Smiley commented on SOLR-10255:
-

LOL I didn't know SortableBinaryField even existed.  That one should definitely 
have docValues for sorting.

> Large psuedo-stored fields via BinaryDocValuesField
> ---
>
> Key: SOLR-10255
> URL: https://issues.apache.org/jira/browse/SOLR-10255
> Project: Solr
>  Issue Type: Improvement
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
>  Labels: pull-request-available
> Fix For: 9.7
>
> Attachments: SOLR-10255.patch, SOLR-10255.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> (sub-issue of SOLR-10117)  This is a proposal for a better way for Solr to 
> handle "large" text fields.  Large docs that are in Lucene StoredFields slow 
> requests that don't involve access to such fields.  This is fundamental to 
> the fact that StoredFields are row-stored.  Worse, the Solr documentCache 
> will wind up holding onto massive Strings.  While the latter could be tackled 
> on its own somehow as it's the most serious issue, nevertheless it seems 
> wrong that such large fields are in row-stored storage to begin with.  After 
> all, relational DBs seemed to have figured this out and put CLOBs/BLOBs in a 
> separate place. Here, we do similarly by using Lucene 
> {{BinaryDocValuesField}}.  BDVF isn't well known in the DocValues family as 
> it's not for typical DocValues purposes like sorting/faceting etc.  The 
> default DocValuesFormat doesn't compress these but we could write one that 
> does.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Commented] (SOLR-10255) Large psuedo-stored fields via BinaryDocValuesField

2024-07-30 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17869727#comment-17869727
 ] 

David Smiley commented on SOLR-10255:
-

IMO it doesn't make sense for such a field to be stored & docValues, so I'd say 
"no" to enabling docValues by default on BinaryField.  The new docValues default 
of true only makes sense on "short" fields.  BinaryField generally doesn't 
apply.

> Large psuedo-stored fields via BinaryDocValuesField
> ---
>
> Key: SOLR-10255
> URL: https://issues.apache.org/jira/browse/SOLR-10255
> Project: Solr
>  Issue Type: Improvement
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
>  Labels: pull-request-available
> Fix For: 9.7
>
> Attachments: SOLR-10255.patch, SOLR-10255.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> (sub-issue of SOLR-10117)  This is a proposal for a better way for Solr to 
> handle "large" text fields.  Large docs that are in Lucene StoredFields slow 
> requests that don't involve access to such fields.  This is fundamental to 
> the fact that StoredFields are row-stored.  Worse, the Solr documentCache 
> will wind up holding onto massive Strings.  While the latter could be tackled 
> on its own somehow as it's the most serious issue, nevertheless it seems 
> wrong that such large fields are in row-stored storage to begin with.  After 
> all, relational DBs seemed to have figured this out and put CLOBs/BLOBs in a 
> separate place. Here, we do similarly by using Lucene 
> {{BinaryDocValuesField}}.  BDVF isn't well known in the DocValues family as 
> it's not for typical DocValues purposes like sorting/faceting etc.  The 
> default DocValuesFormat doesn't compress these but we could write one that 
> does.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Commented] (SOLR-17379) ParsingFieldUpdateProcessorsTest failures using CLDR locale provider

2024-07-26 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17868983#comment-17868983
 ] 

David Smiley commented on SOLR-17379:
-

Approving your patch (I did read it) and wanted to thank you for your work.  
Admittedly I don't have more time for this one.

> ParsingFieldUpdateProcessorsTest failures using CLDR locale provider
> 
>
> Key: SOLR-17379
> URL: https://issues.apache.org/jira/browse/SOLR-17379
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Chris M. Hostetter
>Priority: Major
> Attachments: SOLR-17379.test-1.patch, SOLR-17379.test.patch
>
>
> Background: https://lists.apache.org/thread/o7xwz8df6j0bx7w2m3w8ptrp4r7q957n
> Test failures from {{ParsingFieldUpdateProcessorsTest.testAKSTZone}} and 
> {{ParsingFieldUpdateProcessorsTest.testParseFrenchDate}} are seemingly 
> guaranteed on JDK23, due to the removal of the {{COMPAT}} locale provider 
> option.
> On (some) earlier JDKs, these failures can be reproduced using...
> {noformat}
> ./gradlew test --tests ParsingFieldUpdateProcessorsTest  
> -Ptests.jvmargs="-Djava.locale.providers=CLDR -XX:TieredStopAtLevel=1 
> -XX:+UseParallelGC -XX:ActiveProcessorCount=1 -XX:ReservedCodeCacheSize=120m"
> {noformat}
> ...to force the use of {{CLDR}} and exclude the use of {{COMPAT}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Commented] (SOLR-17379) ParsingFieldUpdateProcessorsTest failures using CLDR locale provider

2024-07-25 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17868804#comment-17868804
 ] 

David Smiley commented on SOLR-17379:
-

+1 thanks

> ParsingFieldUpdateProcessorsTest failures using CLDR locale provider
> 
>
> Key: SOLR-17379
> URL: https://issues.apache.org/jira/browse/SOLR-17379
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Chris M. Hostetter
>Priority: Major
> Attachments: SOLR-17379.test.patch
>
>
> Background: https://lists.apache.org/thread/o7xwz8df6j0bx7w2m3w8ptrp4r7q957n
> Test failures from {{ParsingFieldUpdateProcessorsTest.testAKSTZone}} and 
> {{ParsingFieldUpdateProcessorsTest.testParseFrenchDate}} are seemingly 
> guaranteed on JDK23, due to the removal of the {{COMPAT}} locale provider 
> option.
> On (some) earlier JDKs, these failures can be reproduced using...
> {noformat}
> ./gradlew test --tests ParsingFieldUpdateProcessorsTest  
> -Ptests.jvmargs="-Djava.locale.providers=CLDR -XX:TieredStopAtLevel=1 
> -XX:+UseParallelGC -XX:ActiveProcessorCount=1 -XX:ReservedCodeCacheSize=120m"
> {noformat}
> ...to force the use of {{CLDR}} and exclude the use of {{COMPAT}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Commented] (SOLR-13350) Explore collector managers for multi-threaded search

2024-07-24 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17868449#comment-17868449
 ] 

David Smiley commented on SOLR-13350:
-

There is no dedicated test for this functionality, not that there needs to be 
one.  But nonetheless, can someone recommend a particular test I might use that 
exploits the CollectorManager functionality, especially the DocSet construction 
aspect?  [~cpoerschke] maybe you can recommend one as you were working on that.

> Explore collector managers for multi-threaded search
> 
>
> Key: SOLR-13350
> URL: https://issues.apache.org/jira/browse/SOLR-13350
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 9.7
>
> Attachments: SOLR-13350-pre-PR-2508.patch, SOLR-13350.patch, 
> SOLR-13350.patch, SOLR-13350.patch
>
>  Time Spent: 15h 20m
>  Remaining Estimate: 0h
>
> AFAICT, SolrIndexSearcher can be used only to search all the segments of an 
> index in series. However, using CollectorManagers, segments can be searched 
> concurrently and result in reduced latency. Opening this issue to explore the 
> effectiveness of using CollectorManagers in SolrIndexSearcher from latency 
> and throughput perspective.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Closed] (SOLR-5933) BoolField should support docValues

2024-07-24 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-5933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed SOLR-5933.
--

> BoolField should support docValues
> --
>
> Key: SOLR-5933
> URL: https://issues.apache.org/jira/browse/SOLR-5933
> Project: Solr
>  Issue Type: Improvement
>Reporter: Chris M. Hostetter
>Priority: Major
> Fix For: 6.2
>
>
> It appears that {{BoolField}} does not support docValues - but I can't think 
> of any reason why there would be a fundamental limitation to supporting it -- 
> I think it's just an oversight we could remedy.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Resolved] (SOLR-5933) BoolField should support docValues

2024-07-24 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-5933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-5933.

Fix Version/s: 6.2
   Resolution: Duplicate

> BoolField should support docValues
> --
>
> Key: SOLR-5933
> URL: https://issues.apache.org/jira/browse/SOLR-5933
> Project: Solr
>  Issue Type: Improvement
>Reporter: Chris M. Hostetter
>Priority: Major
> Fix For: 6.2
>
>
> It appears that {{BoolField}} does not support docValues - but I can't think 
> of any reason why there would be a fundamental limitation to supporting it -- 
> I think it's just an oversight we could remedy.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Commented] (SOLR-17378) Add a RequestHandler metric for "number of outstanding concurrent requests"

2024-07-23 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17868199#comment-17868199
 ] 

David Smiley commented on SOLR-17378:
-

Sounds useful!

> Add a RequestHandler metric for "number of outstanding concurrent requests"
> ---
>
> Key: SOLR-17378
> URL: https://issues.apache.org/jira/browse/SOLR-17378
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Gibney
>Priority: Minor
>
> "number of outstanding concurrent requests" is an important metric to track, 
> and can have a significant impact on resource usage and throughput. Existing 
> metrics provide a window into request count and request latency, but neither 
> of these is sufficient to supply the desired "concurrency" metric.
> Leveraging request latency and completed request timestamp, it's possible to 
> retroactively compute outstanding concurrent requests, but existing metrics 
> are incapable of capturing this information directly or presenting it in an 
> aggregate format.
> In addition to the implications for performance, it is important to have a 
> window into request concurrency to complement the solr rate limiting feature, 
> whose "slot acquisition" design really limits request concurrency, _not_ (as 
> the name implies) request count/throughput. 
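As a rough illustration of such a gauge (not Solr's metrics code; the names here are made up), the metric can be an atomic counter incremented when a request enters the handler and decremented when it leaves:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OutstandingRequestsGauge {
    // Gauge value: requests currently being handled.
    private static final AtomicInteger OUTSTANDING = new AtomicInteger();

    // Wrap request handling: increment on entry, decrement on exit.
    static void handleRequest(Runnable work) {
        System.out.println("outstanding=" + OUTSTANDING.incrementAndGet());
        try {
            work.run();
        } finally {
            OUTSTANDING.decrementAndGet();
        }
    }

    public static void main(String[] args) {
        // A nested call stands in for two overlapping requests.
        handleRequest(() -> handleRequest(() -> {}));
        System.out.println("after all requests: " + OUTSTANDING.get());
    }
}
```

Unlike request count or latency, this value can be read at any instant, which is what the rate limiter's "slot acquisition" effectively constrains.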



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Resolved] (SOLR-17340) Call to /solr/admin/info/system is abnormally slow

2024-07-22 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-17340.
-
Fix Version/s: 9.7
   Resolution: Fixed

Thanks; well done.

> Call to /solr/admin/info/system is abnormally slow
> --
>
> Key: SOLR-17340
> URL: https://issues.apache.org/jira/browse/SOLR-17340
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 9.6
>Reporter: Pierre Salagnac
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 9.7
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Solr endpoint {{/solr/admin/info/system}} is used to return some generic 
> system metrics (memory, JVM...)
> This endpoint is also used by {{solr-operator}} by default for start-up probe 
> and liveness probe. (details 
> [here|https://github.com/apache/solr-operator/blob/5fec11f8ef181a58b1f72123b44ae6532c49b62d/controllers/util/solr_security_util.go#L44]).
>  Very long runtime can cause failures of the probes.
> Runtime is abnormally slow because of the time spent in introspecting beans 
> to create {{BeanInfo}} instances. Most of the time is spent here:
> {code}
> java.lang.Exception: Stack trace
>   at 
> java.desktop/java.beans.Introspector.getBeanInfo(Introspector.java:279)
>   at 
> org.apache.solr.util.stats.MetricUtils.addMXBeanMetrics(MetricUtils.java:777)
>   at 
> org.apache.solr.util.stats.MetricUtils.addMXBeanMetrics(MetricUtils.java:841)
>   at 
> org.apache.solr.handler.admin.SystemInfoHandler.getSystemInfo(SystemInfoHandler.java:223)
>   at 
> org.apache.solr.handler.admin.SystemInfoHandler.handleRequestBody(SystemInfoHandler.java:156)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:248)
>   at org.apache.solr.handler.admin.InfoHandler.handle(InfoHandler.java:96)
>   at 
> org.apache.solr.handler.admin.InfoHandler.handleRequestBody(InfoHandler.java:84)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:248)
> {code}
> There is no need to execute the bean introspection at each call. We could 
> lazily instantiate them and keep them in memory for efficiency.
> Note: I haven't been able to figure out the exact cause, but the runtime can 
> increase exponentially under heavy querying load.
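A minimal sketch of that caching idea (a stand-in, not SystemInfoHandler's actual code), using a ConcurrentHashMap so Introspector.getBeanInfo runs once per class instead of on every call:

```java
import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class BeanInfoCache {
    // Cache BeanInfo per class so the expensive introspection runs only once.
    private static final Map<Class<?>, BeanInfo> CACHE = new ConcurrentHashMap<>();

    static BeanInfo beanInfo(Class<?> clazz) {
        return CACHE.computeIfAbsent(clazz, c -> {
            try {
                return Introspector.getBeanInfo(c);
            } catch (IntrospectionException e) {
                throw new IllegalStateException(e);
            }
        });
    }

    public static void main(String[] args) {
        BeanInfo first = beanInfo(String.class);
        BeanInfo second = beanInfo(String.class);
        // The second call returns the cached instance; no re-introspection.
        System.out.println("cached: " + (first == second));
    }
}
```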



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Commented] (SOLR-17321) Bump minimum required Java version to 21

2024-07-22 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17867758#comment-17867758
 ] 

David Smiley commented on SOLR-17321:
-

If we bring the URL/URI business to 9x, _ideally_ it would have its own JIRA 
because the Java 21 issue (this issue) isn't going to 9x, yet we'd like this 
part in 9x. The "Fix Version" for this issue will be main (10) but the URL/URI 
could then be 9.7. But this URL/URI business is minor internal stuff that 
should affect nobody so I'm comfortable with less bureaucracy here. At least 
clarify this very much in the commit message to 9x to say this is a *part of* 
SOLR-17321 .  Also, in 9x if any part gives us grief, we can skip those parts.  
We're only partially bringing some aspects to 9x to align the branches more.

> Bump minimum required Java version to 21
> 
>
> Key: SOLR-17321
> URL: https://issues.apache.org/jira/browse/SOLR-17321
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Sanjay Dutt
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 9h 10m
>  Remaining Estimate: 0h
>
> We are upgrading the minimum Java version for the Solr main branch to 21. 
> However, at the same time, it has been suggested not to be so aggressive with 
> SolrJ (and thus solr-api, a dependency) -- setting its Java version to 17.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Commented] (SOLR-17375) Close IndexReader asynchronously on commit for performance

2024-07-21 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17867656#comment-17867656
 ] 

David Smiley commented on SOLR-17375:
-

Ideally this is fixed in a 9x version but it'll be more pressing in Solr 10 as 
it's not avoidable there without other performance compromises.  In Lucene 9 
you can simply set 
{{-Dorg.apache.lucene.store.MMapDirectory.enableMemorySegments=false}}

I'd love to see the problem present itself in the solr/benchmark module so that 
we can see the performance regression and its resolution.  Maybe with a 
modified CloudIndexing.java to do a commit per update request.  It does no 
commits now.

> Close IndexReader asynchronously on commit for performance
> --
>
> Key: SOLR-17375
> URL: https://issues.apache.org/jira/browse/SOLR-17375
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 9.3
>Reporter: David Smiley
>Priority: Critical
>
> Since Lucene 9.5, and with a recent Java VM (19), Lucene uses Java's new 
> MemorySegments API. A negative consequence is that IndexReader.close becomes 
> expensive, particularly when there are many threads as it's 
> {{{}O(threads){}}}. Solr closes the (previous) reader on a SolrIndexSearcher 
> open, which is basically on commit (both soft and hard).  (See Lucene 
> [#13325|https://github.com/apache/lucene/issues/13325])
> Proposal: SolrIndexSearcher.close should perform the {{rawReader.decRef()}} 
> in another thread, probably a global (statically defined) thread pool of one 
> or two in size ([~uschindler] 's  recommendation). The call to 
> {{core.getDeletionPolicy().releaseCommitPoint(cpg)}} which follows it should 
> probably go along with it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Created] (SOLR-17375) Close IndexReader asynchronously on commit for performance

2024-07-21 Thread David Smiley (Jira)
David Smiley created SOLR-17375:
---

 Summary: Close IndexReader asynchronously on commit for performance
 Key: SOLR-17375
 URL: https://issues.apache.org/jira/browse/SOLR-17375
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 9.3
Reporter: David Smiley


Since Lucene 9.5, and with a recent Java VM (19), Lucene uses Java's new 
MemorySegments API. A negative consequence is that IndexReader.close becomes 
expensive, particularly when there are many threads, as it's {{O(threads)}}. 
Solr closes the (previous) reader on a SolrIndexSearcher open, which is 
basically on commit (both soft and hard).  (See Lucene 
[#13325|https://github.com/apache/lucene/issues/13325])

Proposal: SolrIndexSearcher.close should perform the {{rawReader.decRef()}} in 
another thread, probably a global (statically defined) thread pool of one or 
two in size ([~uschindler] 's  recommendation). The call to 
{{core.getDeletionPolicy().releaseCommitPoint(cpg)}} which follows it should 
probably go along with it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Updated] (SOLR-9057) CloudSolrClient should be able to work w/o ZK url

2024-07-19 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-9057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-9057:
---
Fix Version/s: 6.6

Note that there are severe performance problems with this functionality, as 
reported in SOLR-14985 (3 years after it was released); I would be surprised if 
anyone is actually using it.  There's a refreshed PR there -- looking forward 
to feedback!

> CloudSolrClient should be able to work w/o ZK url
> -
>
> Key: SOLR-9057
> URL: https://issues.apache.org/jira/browse/SOLR-9057
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Reporter: Noble Paul
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 6.6
>
>
> It should be possible to pass one or more Solr URLs to SolrJ and it should be 
> able to get started from there. Exposing ZK to users should not be required; 
> it is a security vulnerability.






[jira] [Commented] (SOLR-17373) Shard splitByPrefix should not do so if it would be too imbalanced/inefficient

2024-07-18 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17867017#comment-17867017
 ] 

David Smiley commented on SOLR-17373:
-

Also or separately, I think the computed prefix histogram should be filtered so 
as to ensure that each prefix has at least one non-deleted doc.  This should be 
fairly cheap and simple, and addresses the particularly egregious scenario we 
encountered.
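Roughly what I have in mind, as a sketch (the {{long[]{total, deleted}}} histogram shape and all names here are hypothetical, not the actual SplitByPrefix code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PrefixHistogramFilter {
    // Hypothetical per-prefix counts as {totalDocs, deletedDocs}; drop any
    // prefix whose docs are all deleted, so a split never targets an
    // effectively empty prefix.
    public static Map<String, long[]> filterLive(Map<String, long[]> histogram) {
        Map<String, long[]> out = new LinkedHashMap<>();
        for (Map.Entry<String, long[]> e : histogram.entrySet()) {
            long total = e.getValue()[0], deleted = e.getValue()[1];
            if (total - deleted > 0) { // at least one non-deleted doc
                out.put(e.getKey(), e.getValue());
            }
        }
        return out;
    }
}
```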

> Shard splitByPrefix should not do so if it would be too imbalanced/inefficient
> --
>
> Key: SOLR-17373
> URL: https://issues.apache.org/jira/browse/SOLR-17373
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Major
>
> Shard split "splitByPrefix" exists to reduce the number of shards that a 
> typical prefix is in, thus reducing query fanout in distributed search (assuming 
> the route param is used), and it can isolate indexing activity as well.  
> Sometimes this can result in a very imbalanced (inefficient) shard split 
> that may even quickly lead to another split back-to-back!  (imagine splitting 
> off less than 1%).  Here we propose that if the split would only split off < 
> 20% of docs or so, then it's too inefficient.  Instead, split the middle of 
> the largest key prefix.
> Note: it's also been observed that a prefix might be so under-represented 
> that it's likely those docs are marked deleted as part of a 
> previous shard split (if "link" split method).  Thus this inefficiency can 
> have a cascading badness effect.






[jira] [Created] (SOLR-17373) Shard splitByPrefix should not do so if it would be too imbalanced/inefficient

2024-07-18 Thread David Smiley (Jira)
David Smiley created SOLR-17373:
---

 Summary: Shard splitByPrefix should not do so if it would be too 
imbalanced/inefficient
 Key: SOLR-17373
 URL: https://issues.apache.org/jira/browse/SOLR-17373
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Reporter: David Smiley


Shard split "splitByPrefix" exists to reduce the number of shards that a 
typical prefix is in, thus reducing query fanout in distributed search (assuming 
the route param is used), and it can isolate indexing activity as well.  
Sometimes this can result in a very imbalanced (inefficient) shard split that 
may even quickly lead to another split back-to-back!  (imagine splitting off 
less than 1%).  Here we propose that if the split would only split off < 20% of 
docs or so, then it's too inefficient.  Instead, split the middle of the 
largest key prefix.

Note: it's also been observed that a prefix might be so under-represented that 
it's likely those docs are marked deleted as part of a 
previous shard split (if "link" split method).  Thus this inefficiency can have 
a cascading badness effect.
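The proposed guard is just a fraction check; a sketch, with the 20% threshold passed in as a parameter (the method and names are illustrative, not actual Solr code):

```java
public class SplitByPrefixCheck {
    // Hypothetical helper: reject a prefix-based split that would peel off
    // fewer than minFraction of the shard's docs (0.2 per the proposal),
    // in which case the caller would split the largest prefix down the
    // middle instead.
    public static boolean efficientSplit(long docsInSmallerHalf, long totalDocs,
                                         double minFraction) {
        return totalDocs > 0
            && (double) docsInSmallerHalf / totalDocs >= minFraction;
    }
}
```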






[jira] [Updated] (SOLR-10654) Expose Metrics in Prometheus format DIRECTLY from Solr

2024-07-17 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-10654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-10654:

Fix Version/s: 9.7

> Expose Metrics in Prometheus format DIRECTLY from Solr
> --
>
> Key: SOLR-10654
> URL: https://issues.apache.org/jira/browse/SOLR-10654
> Project: Solr
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Keith Laban
>Priority: Major
> Fix For: 9.7
>
> Attachments: prometheus_metrics.txt
>
>  Time Spent: 7h 20m
>  Remaining Estimate: 0h
>
> Expose metrics via a `wt=prometheus` response type.
> Example scrape_config in prometheus.yml:
> {code:java}
> scrape_configs:
>   - job_name: 'solr'
> metrics_path: '/solr/admin/metrics'
> params:
>   wt: ["prometheus"]
> static_configs:
>   - targets: ['localhost:8983']
> {code}
> [Rationale|https://issues.apache.org/jira/browse/SOLR-11795?focusedCommentId=17261423&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17261423]
>  for having this despite the "Prometheus Exporter".  They have different 
> strengths and weaknesses.






[jira] [Resolved] (SOLR-10255) Large psuedo-stored fields via BinaryDocValuesField

2024-07-17 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-10255.
-
Fix Version/s: 9.7
   Resolution: Fixed

Thanks for contributing Alexey!

> Large psuedo-stored fields via BinaryDocValuesField
> ---
>
> Key: SOLR-10255
> URL: https://issues.apache.org/jira/browse/SOLR-10255
> Project: Solr
>  Issue Type: Improvement
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
>  Labels: pull-request-available
> Fix For: 9.7
>
> Attachments: SOLR-10255.patch, SOLR-10255.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> (sub-issue of SOLR-10117)  This is a proposal for a better way for Solr to 
> handle "large" text fields.  Large docs that are in Lucene StoredFields slow 
> requests that don't involve access to such fields.  This is fundamental to 
> the fact that StoredFields are row-stored.  Worse, the Solr documentCache 
> will wind up holding onto massive Strings.  While the latter could be tackled 
> on it's own somehow as it's the most serious issue, nevertheless it seems 
> wrong that such large fields are in row-stored storage to begin with.  After 
> all, relational DBs seemed to have figured this out and put CLOBs/BLOBs in a 
> separate place.  Here, we do similarly by using, Lucene 
> {{BinaryDocValuesField}}.  BDVF isn't well known in the DocValues family as 
> it's not for typical DocValues purposes like sorting/faceting etc.  The 
> default DocValuesFormat doesn't compress these but we could write one that 
> does.






[jira] [Resolved] (SOLR-17160) Bulk admin operations may fail because of max tracked requests

2024-07-16 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-17160.
-
Fix Version/s: 9.7
   Resolution: Fixed

Merged.  Thanks for contributing!

> Bulk admin operations may fail because of max tracked requests
> --
>
> Key: SOLR-17160
> URL: https://issues.apache.org/jira/browse/SOLR-17160
> Project: Solr
>  Issue Type: Bug
>  Components: Backup/Restore
>Affects Versions: 8.11, 9.5
>Reporter: Pierre Salagnac
>Priority: Minor
> Fix For: 9.7
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> In {{CoreAdminHandler}}, we maintain in-memory the list of in-flight 
> requests and completed/failed requests.
> _Note these are core/replica-level async requests, not top-level requests, 
> which are mostly at the collection level. Top-level requests are tracked by 
> storing the async ID in a Zookeeper node, which is not related to this 
> ticket._
>  
> For completed/failed requests, we only track a maximum of 100 requests by 
> dropping the oldest ones. The typical client in 
> {{CollectionHandlingUtils.waitForCoreAdminAsyncCallToComplete()}} polls 
> status of the submitted requests, with a retry loop until requests are 
> completed. If for some reason we have more than 100 requests that complete or 
> fail on a node before all statuses are polled by the client, the statuses are 
> lost and the client will fail with an unexpected error similar to:
> {{Invalid status request for requestId: '{_}{_}' - 'notfound'. Retried 
> __ times}}
>  
> Instead of having a hard limit for the number of requests we track, we could 
> have time based eviction. I think it makes sense to keep status of a request 
> until a given timeout, and then drop it ignoring how many requests we 
> currently track.
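A sketch of the time-based eviction idea (all names are illustrative; a real implementation might use a background sweep or a caching library rather than eviction-on-lookup):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TimedRequestTracker {
    // Keep a completed request's status until a TTL elapses, regardless of
    // how many statuses are currently tracked (vs. the old hard cap of 100).
    private final Map<String, Long> completedAtMillis = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TimedRequestTracker(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public void markCompleted(String requestId, long nowMillis) {
        completedAtMillis.put(requestId, nowMillis);
    }

    // True if the status is still tracked at 'nowMillis'; lazily evicts
    // expired entries as a side effect of the lookup.
    public boolean isTracked(String requestId, long nowMillis) {
        Long t = completedAtMillis.get(requestId);
        if (t == null) return false;
        if (nowMillis - t > ttlMillis) {
            completedAtMillis.remove(requestId);
            return false;
        }
        return true;
    }
}
```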






[jira] [Commented] (SOLR-11535) Weird behavior of CollectionStateWatcher

2024-07-16 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17866496#comment-17866496
 ] 

David Smiley commented on SOLR-11535:
-

Absolutely no pressure from me for you to improve the comment; updating it is 
annoying, I know.  Mostly I spread this general feedback to help us all do 
better next time in CHANGES.txt: communicate in ways a user might understand.  

I still am not 100% sure what the impact of this is :-)

> Weird behavior of CollectionStateWatcher
> 
>
> Key: SOLR-11535
> URL: https://issues.apache.org/jira/browse/SOLR-11535
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 7.2, 8.0
>Reporter: Andrzej Bialecki
>Assignee: Michael Gibney
>Priority: Major
> Fix For: 9.6
>
> Attachments: test.log
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> While working on SOLR-11320 I noticed a strange behavior in 
> {{ActiveReplicaWatcher}}, which is a subclass of {{CollectionStateWatcher}} - 
> it appears that its {{onStateChanged}} method can be called from multiple 
> threads with exactly the same {{DocCollection}} state, i.e. unchanged between 
> the calls.
> This seems to run contrary to the javadoc, which implies that this method is 
> called only when the state actually changes, and it also doesn't mention 
> anything about the need for thread-safety in the method implementation.
> I attached the log, which has a lot of additional debugging - but the most 
> pertinent part being where a Watcher's hashCode is printed together with the 
> {{DocCollection}} - notice that these overlapping calls both submit an 
> instance of {{DocCollection}} with the same zkVersion.
> [~dragonsinth], [~romseygeek] - could you please take a look at this? If this 
> behavior is expected then the javadoc should be updated to state clearly that 
> multiple calls can be made concurrently, with exactly the same state (which 
> is kind of a weak guarantee for a method called {{onStateChanged}} ;) ).
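Until the javadoc is clarified, a watcher can defend itself by de-duplicating on zkVersion; a sketch (illustrative only, not ActiveReplicaWatcher's actual code):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class DedupingWatcher {
    // Remember the last-seen zkVersion so duplicate or concurrent
    // notifications carrying the same (or an older) state are ignored.
    private final AtomicInteger lastZkVersion = new AtomicInteger(-1);

    // Returns true only for a genuinely newer state; safe to call from
    // multiple threads.
    public boolean onStateChanged(int zkVersion) {
        int prev;
        do {
            prev = lastZkVersion.get();
            if (zkVersion <= prev) return false; // duplicate or stale
        } while (!lastZkVersion.compareAndSet(prev, zkVersion));
        // ...handle the genuinely new state here...
        return true;
    }
}
```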





