Re: [PR] Build: gradlew avoid downloader [solr]

2024-05-22 Thread via GitHub


dsmiley merged PR #2419:
URL: https://github.com/apache/solr/pull/2419


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] Minor: Http2SolrClient cleanups [solr]

2024-05-22 Thread via GitHub


dsmiley merged PR #2453:
URL: https://github.com/apache/solr/pull/2453





[jira] [Assigned] (SOLR-17118) Solr deadlock during servlet container start

2024-05-22 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-17118:
---

Assignee: David Smiley

> Solr deadlock during servlet container start
> 
>
> Key: SOLR-17118
> URL: https://issues.apache.org/jira/browse/SOLR-17118
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 9.2.1
>Reporter: Andreas Hubold
>Assignee: David Smiley
>Priority: Major
>  Labels: deadlock, servlet-context
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> In rare cases, Solr can run into a deadlock when started. The servlet 
> container startup thread gets blocked and there's no other thread that could 
> unblock it:
> {noformat}
> "main" #1 prio=5 os_prio=0 cpu=5922.39ms elapsed=7490.27s 
> tid=0x7f637402ae70 nid=0x47 waiting on condition [0x7f6379488000]
>java.lang.Thread.State: WAITING (parking)
> at jdk.internal.misc.Unsafe.park(java.base@17.0.9/Native Method)
> - parking to wait for  <0x81da8000> (a 
> java.util.concurrent.CountDownLatch$Sync)
> at java.util.concurrent.locks.LockSupport.park(java.base@17.0.9/Unknown 
> Source)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@17.0.9/Unknown
>  Source)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(java.base@17.0.9/Unknown
>  Source)
> at java.util.concurrent.CountDownLatch.await(java.base@17.0.9/Unknown 
> Source)
> at 
> org.apache.solr.servlet.CoreContainerProvider$ContextInitializationKey.waitForReadyService(CoreContainerProvider.java:523)
> at 
> org.apache.solr.servlet.CoreContainerProvider$ServiceHolder.getService(CoreContainerProvider.java:562)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:148)
> at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:133)
> at 
> org.eclipse.jetty.servlet.ServletHandler.lambda$initialize$2(ServletHandler.java:725)
> at 
> org.eclipse.jetty.servlet.ServletHandler$$Lambda$315/0x7f62fc2674b8.accept(Unknown
>  Source)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(java.base@17.0.9/Unknown
>  Source)
> at 
> java.util.stream.Streams$ConcatSpliterator.forEachRemaining(java.base@17.0.9/Unknown
>  Source)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(java.base@17.0.9/Unknown 
> Source)
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:749)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:392)
>  
> {noformat}
> ContextInitializationKey.waitForReadyService should have been unblocked by 
> CoreContainerProvider#init, which calls ServiceHolder#setService. This 
> should work because CoreContainerProvider#init is always called before 
> SolrDispatchFilter#init (ServletContextListeners are initialized before 
> Filters). 
> But there's a problem: CoreContainerProvider#init stores the 
> ContextInitializationKey and the mapped ServiceHolder in 
> CoreContainerProvider#services, and that's a *WeakHashMap*: 
> {code:java}
>   services 
>   .computeIfAbsent(new ContextInitializationKey(servletContext), 
> ServiceHolder::new) 
>   .setService(this); 
> {code}
> The key is not referenced anywhere else, which makes the mapping a candidate 
> for garbage collection. The ServiceHolder value also does not reference the 
> key anymore, because #setService cleared the reference. 
> With bad luck, the mapping is already gone from the WeakHashMap before 
> SolrDispatchFilter#init tries to retrieve it with 
> CoreContainerProvider#serviceForContext. And that method will then create a 
> new ContextInitializationKey and ServiceHolder, which is then used for 
> #waitForReadyService. But such a new ContextInitializationKey has never 
> received a #makeReady call, and #waitForReadyService will block forever.
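
A minimal, standalone sketch of the failure mode described above (not Solr code; the class and variable names are made up): once nothing else holds a strong reference to a WeakHashMap key, the entry can silently disappear, so a later lookup has to build a fresh key that never sees #makeReady.

{code:java}
import java.util.Map;
import java.util.WeakHashMap;

public class WeakKeyDemo {
  public static void main(String[] args) throws InterruptedException {
    Map<Object, String> services = new WeakHashMap<>();
    Object key = new Object();      // stands in for ContextInitializationKey
    services.put(key, "serviceHolder");

    key = null;                     // no strong reference to the key remains
    System.gc();                    // only a hint; collection timing is not guaranteed
    Thread.sleep(100);

    // Once the key is collected, the mapping is gone and a second lookup
    // would have to create a brand-new entry:
    System.out.println("entries left: " + services.size()); // typically 0
  }
}
{code}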



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Resolved] (SOLR-17307) CachingDirectoryFactory uses the wrong separator on Windows

2024-05-22 Thread Houston Putman (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Houston Putman resolved SOLR-17307.
---
Fix Version/s: 9.7, 9.6.1
 Assignee: Houston Putman
   Resolution: Fixed

> CachingDirectoryFactory uses the wrong separator on Windows
> ---
>
> Key: SOLR-17307
> URL: https://issues.apache.org/jira/browse/SOLR-17307
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Houston Putman
>Assignee: Houston Putman
>Priority: Major
> Fix For: 9.7, 9.6.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{CachingDirectoryFactory.isSubPath()}} method uses '/' instead of 
> File.separator. This causes failures in 
> {{CachingDirectoryFactoryTest.reorderingTest}}.
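
For illustration only (this is not the actual patch), a tiny sketch of why a hard-coded '/' prefix check breaks on Windows while File.separator behaves on every platform:

{code:java}
import java.io.File;

public class SeparatorDemo {
  // A '/'-based prefix check fails on Windows, where paths are built with '\'.
  static boolean slashBased(String parent, String child) {
    return child.startsWith(parent + "/");
  }

  static boolean separatorBased(String parent, String child) {
    return child.startsWith(parent + File.separator);
  }

  public static void main(String[] args) {
    String parent = "data" + File.separator + "index";
    String child = parent + File.separator + "segments";
    System.out.println(slashBased(parent, child));     // false on Windows
    System.out.println(separatorBased(parent, child)); // true on every platform
  }
}
{code}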






[jira] [Commented] (SOLR-17307) CachingDirectoryFactory uses the wrong separator on Windows

2024-05-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848782#comment-17848782
 ] 

ASF subversion and git services commented on SOLR-17307:


Commit 91e761edf443cbd62e742d3e1302aef42da96f8f in solr's branch 
refs/heads/branch_9_6 from Houston Putman
[ https://gitbox.apache.org/repos/asf?p=solr.git;h=91e761edf44 ]

SOLR-17307: Use file separator instead of '/' in CachingDirectoryFactory (#2464)

(cherry picked from commit 917e682d269fb4e18f3aeaedb585dd268a489af0)


> CachingDirectoryFactory uses the wrong separator on Windows
> ---
>
> Key: SOLR-17307
> URL: https://issues.apache.org/jira/browse/SOLR-17307
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Houston Putman
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{CachingDirectoryFactory.isSubPath()}} method uses '/' instead of 
> File.separator. This causes failures in 
> {{CachingDirectoryFactoryTest.reorderingTest}}.






[jira] [Commented] (SOLR-17307) CachingDirectoryFactory uses the wrong separator on Windows

2024-05-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848780#comment-17848780
 ] 

ASF subversion and git services commented on SOLR-17307:


Commit a8bd28c7b3f82e0d7b9ec2216894f9fd346c0c6a in solr's branch 
refs/heads/branch_9x from Houston Putman
[ https://gitbox.apache.org/repos/asf?p=solr.git;h=a8bd28c7b3f ]

SOLR-17307: Use file separator instead of '/' in CachingDirectoryFactory (#2464)

(cherry picked from commit 917e682d269fb4e18f3aeaedb585dd268a489af0)


> CachingDirectoryFactory uses the wrong separator on Windows
> ---
>
> Key: SOLR-17307
> URL: https://issues.apache.org/jira/browse/SOLR-17307
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Houston Putman
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{CachingDirectoryFactory.isSubPath()}} method uses '/' instead of 
> File.separator. This causes failures in 
> {{CachingDirectoryFactoryTest.reorderingTest}}.






[jira] [Commented] (SOLR-17307) CachingDirectoryFactory uses the wrong separator on Windows

2024-05-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848779#comment-17848779
 ] 

ASF subversion and git services commented on SOLR-17307:


Commit 917e682d269fb4e18f3aeaedb585dd268a489af0 in solr's branch 
refs/heads/main from Houston Putman
[ https://gitbox.apache.org/repos/asf?p=solr.git;h=917e682d269 ]

SOLR-17307: Use file separator instead of '/' in CachingDirectoryFactory (#2464)



> CachingDirectoryFactory uses the wrong separator on Windows
> ---
>
> Key: SOLR-17307
> URL: https://issues.apache.org/jira/browse/SOLR-17307
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Houston Putman
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{CachingDirectoryFactory.isSubPath()}} method uses '/' instead of 
> File.separator. This causes failures in 
> {{CachingDirectoryFactoryTest.reorderingTest}}.






Re: [PR] SOLR-17307: Use file separator instead of '/' in CachingDirectoryFactory [solr]

2024-05-22 Thread via GitHub


HoustonPutman merged PR #2464:
URL: https://github.com/apache/solr/pull/2464





Re: [PR] SOLR-17118: Simplify CoreContainerProvider initialization [solr]

2024-05-22 Thread via GitHub


dsmiley commented on code in PR #2474:
URL: https://github.com/apache/solr/pull/2474#discussion_r1610784556


##
solr/core/src/test/org/apache/solr/handler/admin/HealthCheckHandlerTest.java:
##
@@ -112,36 +112,6 @@ public void testHealthCheckHandler() throws Exception {
   newJetty.stop();
 }
 
-// add a new node for the purpose of negative testing
-// negative check that if core container is not available at the node
-newJetty = cluster.startJettySolrRunner();
-try (SolrClient solrClient = 
getHttpSolrClient(newJetty.getBaseUrl().toString())) {
-
-  // positive check that our (new) "healthy" node works with direct http 
client
-  assertEquals(CommonParams.OK, 
req.process(solrClient).getResponse().get(CommonParams.STATUS));
-
-  // shutdown the core container of new node
-  newJetty.getCoreContainer().shutdown();

Review Comment:
   I considered this portion of the test invalid since it quite intentionally 
(see the opening comment) shuts down the container without touching Jetty.  
That's simply not how Jetty shuts down; only Jetty's lifecycle initiates an 
orderly shutdown.  I could test attempting a request against a non-existent 
server, but that seemed silly.



##
solr/test-framework/src/java/org/apache/solr/embedded/JettySolrRunner.java:
##
@@ -402,11 +385,20 @@ public synchronized void lifeCycleStarted(LifeCycle arg0) 
{
   .setAttribute(SolrDispatchFilter.PROPERTIES_ATTRIBUTE, 
nodeProperties);
   root.getServletContext()
   .setAttribute(SolrDispatchFilter.SOLRHOME_ATTRIBUTE, 
solrHome);
+
   SSLConfigurationsFactory.current().init(); // normally happens 
in jetty-ssl.xml
-  coreContainerProvider = new CoreContainerProvider();
-  coreContainerProvider.init(root.getServletContext());
+
   log.info("Jetty properties: {}", nodeProperties);
 
+  super.contextInitialized(event);
+}
+  });
+  // TODO the below could be done immediately; not be an event listener

Review Comment:
   I'm holding myself back from scope creep :-). There's definitely more 
needless complexity I see.



##
solr/core/src/test/org/apache/solr/handler/admin/HealthCheckHandlerTest.java:
##
@@ -112,36 +112,6 @@ public void testHealthCheckHandler() throws Exception {
   newJetty.stop();
 }
 
-// add a new node for the purpose of negative testing
-// negative check that if core container is not available at the node
-newJetty = cluster.startJettySolrRunner();
-try (SolrClient solrClient = 
getHttpSolrClient(newJetty.getBaseUrl().toString())) {
-
-  // positive check that our (new) "healthy" node works with direct http 
client
-  assertEquals(CommonParams.OK, 
req.process(solrClient).getResponse().get(CommonParams.STATUS));
-
-  // shutdown the core container of new node
-  newJetty.getCoreContainer().shutdown();
-
-  // api shouldn't unreachable
-  SolrException thrown =
-  expectThrows(
-  SolrException.class,
-  () -> {
-req.process(solrClient).getResponse().get(CommonParams.STATUS);
-fail("API shouldn't be available, and fail at above request");
-  });
-  assertEquals("Exception code should be 404", 404, thrown.code());
-  assertTrue(
-  "Should have seen an exception containing the an error",
-  thrown
-  .getMessage()
-  .contains(
-  "Error processing the request. CoreContainer is either not 
initialized or shutting down."));
-} finally {
-  newJetty.stop();
-}
-

Review Comment:
   I guess I should remove the lower "redundant" part too...






Re: [PR] SOLR-17118: Simplify CoreContainerProvider initialization [solr]

2024-05-22 Thread via GitHub


dsmiley commented on code in PR #2474:
URL: https://github.com/apache/solr/pull/2474#discussion_r1610783362


##
solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java:
##
@@ -73,10 +72,7 @@
 public class SolrDispatchFilter extends BaseSolrFilter implements PathExcluder 
{
   private static final Logger log = 
LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
 
-  // TODO: see if we can get rid of the holder here (Servlet spec actually 
guarantees
-  // ContextListeners run before filter init, but JettySolrRunner that we use 
for tests is
-  // complicated)
-  private ServiceHolder coreService;

Review Comment:
   Can you please recommend the beasting options I should use today/tonight?
   
   I'd like to merge this to main only and then backport after a while.






[jira] [Created] (SOLR-17307) CachingDirectoryFactory uses the wrong separator on Windows

2024-05-22 Thread Houston Putman (Jira)
Houston Putman created SOLR-17307:
-

 Summary: CachingDirectoryFactory uses the wrong separator on 
Windows
 Key: SOLR-17307
 URL: https://issues.apache.org/jira/browse/SOLR-17307
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Houston Putman


The {{CachingDirectoryFactory.isSubPath()}} method uses '/' instead of 
File.separator. This causes failures in 
{{CachingDirectoryFactoryTest.reorderingTest}}.






Re: [PR] SOLR-17118: Simplify CoreContainerProvider initialization [solr]

2024-05-22 Thread via GitHub


dsmiley commented on code in PR #2474:
URL: https://github.com/apache/solr/pull/2474#discussion_r1610765299


##
solr/core/src/java/org/apache/solr/servlet/CoreContainerProvider.java:
##
@@ -93,69 +87,69 @@ public class CoreContainerProvider implements 
ServletContextListener {
   private HttpClient httpClient;
   private SolrMetricManager metricManager;
   private RateLimitManager rateLimitManager;
-  private final CountDownLatch init = new CountDownLatch(1);
   private String registryName;
-  // AFAIK the only reason we need this is to support JettySolrRunner for 
tests. In tests we might
-  // have multiple CoreContainers in the same JVM, but I *think* that doesn't 
happen in a real
-  // server.
-  private static final Map services =
-  Collections.synchronizedMap(new WeakHashMap<>());
-
-  // todo: dependency injection instead, but for now this method and the 
associated map will have
-  // to suffice.
-  // Note that this relies on ServletContext.equals() not implementing 
anything significantly
-  // different than Object.equals for its .equals method (I've found no 
implementation that even
-  // implements it).
-  public static ServiceHolder serviceForContext(ServletContext ctx) throws 
InterruptedException {
-ContextInitializationKey key = new ContextInitializationKey(ctx);
-return services.computeIfAbsent(key, ServiceHolder::new);
+
+  /** Acquires an instance from the context, waiting if necessary. */
+  public static CoreContainerProvider serviceForContext(ServletContext ctx)
+  throws InterruptedException {
+long startWait = System.nanoTime();
+synchronized (ctx) {

Review Comment:
   > Never know when they might also (try to) lock it during server 
initialization.
   
   I'm not concerned; it's only Jetty and us to worry about.  The code here is 
resilient to `wait` being notified inexplicably.  There's a loop and it keeps 
checking for the CoreContainer to show up.  Perhaps it'll log a bit more that 
it's waiting; it's okay.  In practice I don't think it'll wait at all since you 
had pointed out in comments that the ServletContextListener is initialized 
first.
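
A generic sketch of the guarded-wait idiom being referred to here (not the PR's code): waiting in a loop on the monitor means a spurious or unrelated notify simply re-checks the condition and keeps waiting.

{code:java}
// Generic holder, illustration only; not CoreContainerProvider.
class Holder<T> {
  private T value;

  synchronized void set(T v) {
    value = v;
    notifyAll();               // wake any thread blocked in await()
  }

  synchronized T await() throws InterruptedException {
    while (value == null) {    // re-check after every wake-up, spurious or not
      wait();
    }
    return value;
  }
}
{code}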






Re: [PR] SOLR-17118: Simplify CoreContainerProvider initialization [solr]

2024-05-22 Thread via GitHub


dsmiley commented on code in PR #2474:
URL: https://github.com/apache/solr/pull/2474#discussion_r1610754549


##
solr/core/src/java/org/apache/solr/servlet/CoreContainerProvider.java:
##
@@ -93,69 +87,69 @@ public class CoreContainerProvider implements 
ServletContextListener {
   private HttpClient httpClient;
   private SolrMetricManager metricManager;
   private RateLimitManager rateLimitManager;
-  private final CountDownLatch init = new CountDownLatch(1);
   private String registryName;
-  // AFAIK the only reason we need this is to support JettySolrRunner for 
tests. In tests we might
-  // have multiple CoreContainers in the same JVM, but I *think* that doesn't 
happen in a real
-  // server.
-  private static final Map services =
-  Collections.synchronizedMap(new WeakHashMap<>());
-
-  // todo: dependency injection instead, but for now this method and the 
associated map will have
-  // to suffice.
-  // Note that this relies on ServletContext.equals() not implementing 
anything significantly
-  // different than Object.equals for its .equals method (I've found no 
implementation that even
-  // implements it).
-  public static ServiceHolder serviceForContext(ServletContext ctx) throws 
InterruptedException {
-ContextInitializationKey key = new ContextInitializationKey(ctx);
-return services.computeIfAbsent(key, ServiceHolder::new);
+
+  /** Acquires an instance from the context, waiting if necessary. */
+  public static CoreContainerProvider serviceForContext(ServletContext ctx)
+  throws InterruptedException {
+long startWait = System.nanoTime();
+synchronized (ctx) {

Review Comment:
   > Worse yet the use of JettySolrRunner for our tests means that we don't 
have any tests for what start.jar is doing.
   
   BATS integration tests are our answer to that.  JettySolrRunner just needs to 
handle the sequencing realistically (what starts and stops when).  You might 
have also noticed some bolted-on requirements for tests like debug filter, 
proxy port, and more, so I don't see JettySolrRunner going away.  It could be 
simplified further!
   
   I'm not optimistic about that QuickStart thing helping us -- we've already 
tuned Jetty.  Most notably, our web.xml has `metadata-complete="true"` thus 
there is no annotation scanning.






Re: [PR] SOLR-16659: Properly construct V2 base urls instead of replacing substring "/solr" with "/api" [solr]

2024-05-22 Thread via GitHub


AndreyBozhko commented on code in PR #2473:
URL: https://github.com/apache/solr/pull/2473#discussion_r1610734437


##
solr/solrj-zookeeper/src/java/org/apache/solr/common/cloud/ZkStateReader.java:
##
@@ -1331,12 +1331,24 @@ public ConfigData getSecurityProps(boolean getFresh) {
* Returns the baseURL corresponding to a given node's nodeName -- NOTE: 
does not (currently)
* imply that the nodeName (or resulting baseURL) exists in the cluster.
*
-   * @lucene.experimental
+   * @param nodeName name of the node
+   * @return url that looks like {@code https://localhost:8983/solr}
*/
   public String getBaseUrlForNodeName(final String nodeName) {
 return Utils.getBaseUrlForNodeName(nodeName, 
getClusterProperty(URL_SCHEME, "http"));
   }
 
+  /**
+   * Returns the V2 baseURL corresponding to a given node's nodeName -- NOTE: 
does not (currently)
+   * imply that the nodeName (or resulting baseURL) exists in the cluster.
+   *
+   * @param nodeName name of the node
+   * @return url that looks like {@code https://localhost:8983/api}
+   */
+  public String getBaseUrlV2ForNodeName(final String nodeName) {
+return Utils.getBaseUrlForNodeName(nodeName, 
getClusterProperty(URL_SCHEME, "http"), true);

Review Comment:
   I added javadoc to the methods in the Utils class, and refactored the code a 
little in 50c959f
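
A rough sketch of the node-name convention behind these helpers, assuming the usual "host:port_context" form (this is not the Utils implementation, just an illustration of the V1 vs. V2 base URLs the javadoc above describes):

{code:java}
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

class NodeNameDemo {
  // nodeName looks like "localhost:8983_solr"; the part after '_' is the
  // URL-encoded context path. The V2 API lives under /api on the same host:port.
  static String baseUrl(String nodeName, String scheme, boolean v2) {
    int idx = nodeName.indexOf('_');
    String hostPort = nodeName.substring(0, idx);
    String context = URLDecoder.decode(nodeName.substring(idx + 1), StandardCharsets.UTF_8);
    return scheme + "://" + hostPort + "/" + (v2 ? "api" : context);
  }

  public static void main(String[] args) {
    System.out.println(baseUrl("localhost:8983_solr", "https", false)); // https://localhost:8983/solr
    System.out.println(baseUrl("localhost:8983_solr", "https", true));  // https://localhost:8983/api
  }
}
{code}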






[jira] [Commented] (SOLR-17007) TestDenseVectorFunctionQuery reproducible failures

2024-05-22 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848752#comment-17848752
 ] 

David Smiley commented on SOLR-17007:
-

[GE|https://ge.apache.org/scans/tests?search.relativeStartTime=P90D=solr-root=America%2FNew_York=org.apache.solr.search.function.TestDenseVectorFunctionQuery]
 shows sporadic failures as well.  Today [I got several test failures from this 
class|https://github.com/apache/solr/actions/runs/9186812798/job/25263204671?pr=2474]
 on my PR...
{noformat}
Caused by:
java.lang.IllegalArgumentException: no byte vector value is indexed for 
field 'vector_byte_encoding'
at 
org.apache.lucene.queries.function.valuesource.ByteKnnVectorFieldSource.getValues(ByteKnnVectorFieldSource.java:45)
at 
org.apache.lucene.queries.function.valuesource.VectorSimilarityFunction.getValues(VectorSimilarityFunction.java:48)
at 
org.apache.lucene.queries.function.FunctionQuery$AllScorer.(FunctionQuery.java:115)
at 
org.apache.lucene.queries.function.FunctionQuery$FunctionWeight.scorer(FunctionQuery.java:76)
at org.apache.lucene.search.Weight.scorerSupplier(Weight.java:135)
{noformat}
with a number of the tests failing on this class.

> TestDenseVectorFunctionQuery reproducible failures
> --
>
> Key: SOLR-17007
> URL: https://issues.apache.org/jira/browse/SOLR-17007
> Project: Solr
>  Issue Type: Test
>Reporter: Chris M. Hostetter
>Priority: Major
> Attachments: apache_solr_Solr-NightlyTests-main_928.log.txt, 
> apache_solr_Solr-NightlyTests-main_931.log.txt, 
> thetaphi_solr_Solr-main-Linux_14822.log.txt
>
>
> In the past week, the same 5 test methods of TestDenseVectorFunctionQuery 
> have all failed 3 times - in the same 3 jenkins builds (ie: same master seed 
> - which reproduces locally for me) and all of the test (method) failures have 
> the same root cause ... strongly suggesting that some aspect of the static, 
> or test class level, randomization is breaking these methods.
>  
> Recent example...
> {noformat}
> ./gradlew test --tests TestDenseVectorFunctionQuery 
> -Dtests.seed=749AD19AB618219E -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Solr/Solr-NightlyTests-main/test-data/enwiki.random.lines.txt
>  -Dtests.locale=fr-MQ -Dtests.timezone=Asia/Novosibirsk -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> ...
> org.apache.solr.search.function.TestDenseVectorFunctionQuery > 
> floatFieldVectors_missingFieldValue_shouldReturnSimilarityZero FAILED
>     java.lang.RuntimeException: Exception during query
>         at 
> __randomizedtesting.SeedInfo.seed([749AD19AB618219E:E0B29A3AECE5D888]:0)
>         at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:989)
>         at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:947)
>         at 
> org.apache.solr.search.function.TestDenseVectorFunctionQuery.floatFieldVectors_missingFieldValue_shouldReturnSimilarityZero(TestDenseVectorFunctionQuery.java:173)
> ...
>         Caused by:
>         java.lang.IllegalArgumentException: no float vector value is indexed 
> for field 'vector2'
>             at 
> org.apache.lucene.queries.function.valuesource.FloatKnnVectorFieldSource.getValues(FloatKnnVectorFieldSource.java:45)
>             at 
> org.apache.lucene.queries.function.valuesource.VectorSimilarityFunction.getValues(VectorSimilarityFunction.java:48)
>             at 
> org.apache.lucene.queries.function.FunctionQuery$AllScorer.(FunctionQuery.java:115)
>             at 
> org.apache.lucene.queries.function.FunctionQuery$FunctionWeight.scorer(FunctionQuery.java:76)
>             at org.apache.lucene.search.Weight.scorerSupplier(Weight.java:135)
>             at 
> org.apache.lucene.search.BooleanWeight.scorerSupplier(BooleanWeight.java:515)
>             at org.apache.lucene.search.Weight.bulkScorer(Weight.java:165)
>             at 
> org.apache.lucene.search.BooleanWeight.bulkScorer(BooleanWeight.java:368)
>             at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:759)
>             at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:720)
>             at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:549)
>             at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:275)
>             at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1878)
>             at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1695)
>             at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:710)
>             at 
> 

Re: [PR] SOLR-10654: Introduce output of Prometheus metrics for Solr Core registry [solr]

2024-05-22 Thread via GitHub


mlbiscoc commented on code in PR #2405:
URL: https://github.com/apache/solr/pull/2405#discussion_r1610675745


##
solr/core/src/java/org/apache/solr/metrics/prometheus/SolrPrometheusExporter.java:
##
@@ -0,0 +1,203 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.metrics.prometheus;
+
+import com.codahale.metrics.Meter;
+import com.codahale.metrics.Metric;
+import com.codahale.metrics.Timer;
+import io.prometheus.metrics.model.snapshots.CounterSnapshot;
+import io.prometheus.metrics.model.snapshots.GaugeSnapshot;
+import io.prometheus.metrics.model.snapshots.Labels;
+import io.prometheus.metrics.model.snapshots.MetricMetadata;
+import io.prometheus.metrics.model.snapshots.MetricSnapshot;
+import io.prometheus.metrics.model.snapshots.MetricSnapshots;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Base class for all {@link SolrPrometheusExporter} holding {@link 
MetricSnapshot}s. Can export
+ * {@link com.codahale.metrics.Metric} to {@link MetricSnapshot} to be 
outputted for {@link
+ * org.apache.solr.response.PrometheusResponseWriter}
+ */
+public abstract class SolrPrometheusExporter implements PrometheusExporterInfo 
{
+  protected final Map> 
metricCounters;
+  protected final Map> 
metricGauges;
+
+  public SolrPrometheusExporter() {
+this.metricCounters = new HashMap<>();
+this.metricGauges = new HashMap<>();
+  }
+
+  /**
+   * Export {@link Metric} to {@link 
io.prometheus.metrics.model.snapshots.MetricSnapshot} and
+   * registers the Snapshot
+   *
+   * @param dropwizardMetric the {@link Metric} to be exported
+   * @param metricName Dropwizard metric name
+   */
+  public abstract void exportDropwizardMetric(Metric dropwizardMetric, String 
metricName);
+
+  /**
+   * Export {@link Meter} to {@link
+   * 
io.prometheus.metrics.model.snapshots.CounterSnapshot.CounterDataPointSnapshot} 
and collect
+   * datapoint
+   *
+   * @param metricName name of metric after export
+   * @param dropwizardMetric the {@link Meter} to be exported
+   * @param labels label names and values to record
+   */
+  public void exportMeter(String metricName, Meter dropwizardMetric, Labels 
labels) {
+CounterSnapshot.CounterDataPointSnapshot dataPoint =
+createCounterDatapoint((double) dropwizardMetric.getCount(), labels);
+collectCounterDatapoint(metricName, dataPoint);
+  }
+
+  /**
+   * Export {@link com.codahale.metrics.Counter} to {@link
+   * 
io.prometheus.metrics.model.snapshots.CounterSnapshot.CounterDataPointSnapshot} 
and collect
+   * datapoint
+   *
+   * @param metricName name of prometheus metric
+   * @param dropwizardMetric the {@link com.codahale.metrics.Counter} to be 
exported
+   * @param labels label names and values to record
+   */
+  public void exportCounter(
+  String metricName, com.codahale.metrics.Counter dropwizardMetric, Labels 
labels) {
+CounterSnapshot.CounterDataPointSnapshot dataPoint =
+createCounterDatapoint((double) dropwizardMetric.getCount(), labels);
+collectCounterDatapoint(metricName, dataPoint);
+  }
+
+  /**
+   * Export {@link Timer} ands its mean rate to {@link
+   * 
io.prometheus.metrics.model.snapshots.GaugeSnapshot.GaugeDataPointSnapshot} and 
collect
+   * datapoint
+   *
+   * @param metricName name of prometheus metric
+   * @param dropwizardMetric the {@link Timer} to be exported
+   * @param labels label names and values to record
+   */
+  public void exportTimer(String metricName, Timer dropwizardMetric, Labels 
labels) {
+GaugeSnapshot.GaugeDataPointSnapshot dataPoint =
+createGaugeDatapoint(dropwizardMetric.getSnapshot().getMean(), labels);
+collectGaugeDatapoint(metricName, dataPoint);
+  }
+
+  /**
+   * Export {@link com.codahale.metrics.Gauge} to {@link
+   * 
io.prometheus.metrics.model.snapshots.GaugeSnapshot.GaugeDataPointSnapshot} and 
collect to
+   * datapoint. Unlike other Dropwizard metric types, Gauges can have more 
complex types. In the
+   * case of a hashmap, collect each as an individual metric and have its key 
appended as a label to
+   * the metric called "item"
+   *
+   * @param metricName name of prometheus 

Re: [PR] SOLR-16824: Adopt Linux Command line tool pattern of -- for long option commands [solr]

2024-05-22 Thread via GitHub


epugh commented on PR #1768:
URL: https://github.com/apache/solr/pull/1768#issuecomment-2125785227

   Waiting for the upcoming Commons CLI 1.7.1 to come out with nicer handling 
of deprecated options.





Re: [PR] SOLR-10654: Introduce output of Prometheus metrics for Solr Core registry [solr]

2024-05-22 Thread via GitHub


dsmiley commented on code in PR #2405:
URL: https://github.com/apache/solr/pull/2405#discussion_r1610641774


##
solr/core/src/java/org/apache/solr/metrics/prometheus/core/SolrCoreMetric.java:
##
@@ -41,14 +46,30 @@ public SolrCoreMetric(
 this.metricName = metricName;
 labels.put("core", coreName);
 if (cloudMode) {
-  String[] coreNameParsed = coreName.split("_");
-  labels.put("collection", coreNameParsed[1]);
-  labels.put("shard", coreNameParsed[2]);
-  labels.put("replica", coreNameParsed[3] + "_" + coreNameParsed[4]);
+  appendCloudModeLabels();
 }
   }
 
+  public Labels getLabels() {
+return Labels.of(new ArrayList<>(labels.keySet()), new 
ArrayList<>(labels.values()));
+  }
+
   public abstract SolrCoreMetric parseLabels();
 
-  public abstract void toPrometheus(SolrPrometheusCoreExporter 
solrPrometheusCoreRegistry);
+  public abstract void toPrometheus(SolrPrometheusCoreExporter 
solrPrometheusCoreExporter);
+
+  private void appendCloudModeLabels() {
+Pattern p = Pattern.compile("^core_(.*)_(shard[0-9]+)_(replica_n[0-9]+)");

Review Comment:
   Should it end in a dollar sign so that it clearly matches the entire input?
   
   Why compile it each time this is called?
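
A small sketch of what the two suggestions would look like together (compile once in a static final field, anchor with '$'); the pattern itself is copied from the diff above:

{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class CloudCoreNameParser {
  // Compiled once instead of per call, and anchored at both ends.
  private static final Pattern CLOUD_CORE =
      Pattern.compile("^core_(.*)_(shard[0-9]+)_(replica_n[0-9]+)$");

  static void printLabels(String coreName) {
    Matcher m = CLOUD_CORE.matcher(coreName);
    if (m.matches()) {
      System.out.println("collection=" + m.group(1)
          + " shard=" + m.group(2) + " replica=" + m.group(3));
    }
  }
}
{code}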



##
solr/core/src/java/org/apache/solr/metrics/prometheus/SolrPrometheusCoreExporter.java:
##
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.metrics.prometheus;
+
+import com.codahale.metrics.Meter;
+import com.codahale.metrics.Metric;
+import org.apache.solr.metrics.prometheus.core.SolrCoreCacheMetric;
+import org.apache.solr.metrics.prometheus.core.SolrCoreHandlerMetric;
+import org.apache.solr.metrics.prometheus.core.SolrCoreHighlighterMetric;
+import org.apache.solr.metrics.prometheus.core.SolrCoreIndexMetric;
+import org.apache.solr.metrics.prometheus.core.SolrCoreMetric;
+import org.apache.solr.metrics.prometheus.core.SolrCoreNoOpMetric;
+import org.apache.solr.metrics.prometheus.core.SolrCoreSearcherMetric;
+import org.apache.solr.metrics.prometheus.core.SolrCoreTlogMetric;
+
+/**
+ * This class maintains a {@link 
io.prometheus.metrics.model.snapshots.MetricSnapshot}s exported
+ * from solr.core {@link com.codahale.metrics.MetricRegistry}
+ */
+public class SolrPrometheusCoreExporter extends SolrPrometheusExporter {
+  public final String coreName;
+  public final boolean cloudMode;
+
+  public SolrPrometheusCoreExporter(String coreName, boolean cloudMode) {
+super();
+this.coreName = coreName;
+this.cloudMode = cloudMode;
+  }
+
+  /**
+   * Export {@link Metric} to {@link 
io.prometheus.metrics.model.snapshots.MetricSnapshot} and
+   * registers the Snapshot
+   *
+   * @param dropwizardMetric the {@link Meter} to be exported
+   * @param metricName Dropwizard metric name
+   */
+  @Override
+  public void exportDropwizardMetric(Metric dropwizardMetric, String 
metricName) {
+SolrCoreMetric solrCoreMetric = categorizeCoreMetric(dropwizardMetric, 
metricName);
+solrCoreMetric.parseLabels().toPrometheus(this);
+  }
+
+  private SolrCoreMetric categorizeCoreMetric(Metric dropwizardMetric, String 
metricName) {
+String metricCategory = metricName.split("\\.")[0];
+switch (CoreCategory.valueOf(metricCategory)) {
+  case ADMIN:
+  case QUERY:
+  case UPDATE:
+  case REPLICATION:
+{

Review Comment:
   can these redundant brackets in these cases be removed?
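
For comparison, a minimal sketch of the same shape without the extra brace blocks (the category strings here are just placeholders, not the PR's enum):

{code:java}
class CategoryDemo {
  static String bucket(String category) {
    switch (category) {
      case "ADMIN":
      case "QUERY":
      case "UPDATE":
      case "REPLICATION":
        return "requests";   // grouped labels share one body; no braces needed
      default:
        return "other";
    }
  }
}
{code}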



##
solr/core/src/java/org/apache/solr/core/SolrCore.java:
##
@@ -3032,6 +3034,7 @@ public PluginBag 
getResponseWriters() {
 m.put("csv", new CSVResponseWriter());
 m.put("schema.xml", new SchemaXmlResponseWriter());
 m.put("smile", new SmileResponseWriter());
+m.put(PROMETHEUS_METRICS_WT, new PrometheusResponseWriter());

Review Comment:
   Nevermind.  We even have schema xml format, another special purpose one.  
Maybe some day this could be improved.



##
solr/core/src/java/org/apache/solr/metrics/prometheus/SolrPrometheusExporter.java:
##
@@ -0,0 +1,203 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed 

Re: [PR] SOLR-16866: Use file separator instead of '/' in CachingDirectoryFactory [solr]

2024-05-22 Thread via GitHub


HoustonPutman commented on PR #2464:
URL: https://github.com/apache/solr/pull/2464#issuecomment-2125749403

   I need to make a separate JIRA so we can target 9.6.1 with it. I'll do it 
though, thanks for the offer  





Re: [PR] SOLR-17284: Remove deprecated BlobRepository [solr]

2024-05-22 Thread via GitHub


epugh commented on PR #2447:
URL: https://github.com/apache/solr/pull/2447#issuecomment-2125741320

   If this update passes all the tests, then I'm planning on merging it 
tomorrow, May 23rd.





Re: [PR] SOLR-17044: Consolidate SolrJ URL-building logic [solr]

2024-05-22 Thread via GitHub


epugh commented on PR #2455:
URL: https://github.com/apache/solr/pull/2455#issuecomment-2125737828

   Does this overlap with #2473?  Would be nice to get this in.





Re: [PR] SOLR-17289: OrderedNodePlacementPlugin: optimize don't loop collections [solr]

2024-05-22 Thread via GitHub


epugh commented on PR #2459:
URL: https://github.com/apache/solr/pull/2459#issuecomment-2125736089

   One thought I had the other day is that I've seen plenty of APIs that, when 
you list things, have a range. Without the range or size parameter, you 
get X, but you can control that by specifying some other counter. Would 
that allow folks with just a small number of collections to have simplicity, but 
if you have 1000's, well then you want to use a range to work your way through? 
  Kind of like `rows` and `start` ;-)
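
A hypothetical sketch of that idea (the names are invented, not an existing Solr API): a small default page when no paging parameters are given, with start/rows-style parameters for installations that have thousands of collections.

{code:java}
import java.util.List;

// Illustration only: paging a listing the same way query results are paged.
record Page<T>(List<T> items, int start, int rows, long total) {}

interface CollectionLister {
  Page<String> list(int start, int rows);   // e.g. list(0, 100), then list(100, 100), ...
}
{code}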





Re: [PR] SOLR-16866: Use file separator instead of '/' in CachingDirectoryFactory [solr]

2024-05-22 Thread via GitHub


epugh commented on PR #2464:
URL: https://github.com/apache/solr/pull/2464#issuecomment-2125733793

   @HoustonPutman  would you like me to merge this one?





Re: [PR] Admin UI: fixed node list display bug caused by different hostname with same front part (duplicates PR #2449) [solr]

2024-05-22 Thread via GitHub


epugh merged PR #2475:
URL: https://github.com/apache/solr/pull/2475





Re: [PR] Clarify the cache behavior in the Ref Guide [solr]

2024-05-22 Thread via GitHub


epugh merged PR #2472:
URL: https://github.com/apache/solr/pull/2472





Re: [PR] Clarify the cache behavior in the Ref Guide [solr]

2024-05-22 Thread via GitHub


epugh commented on PR #2472:
URL: https://github.com/apache/solr/pull/2472#issuecomment-2125731069

   Okay, I added just a tweak after building the ref guide site and reading 
through it!  Thank you!





Re: [PR] SOLR-10654: Introduce output of Prometheus metrics for Solr Core registry [solr]

2024-05-22 Thread via GitHub


dsmiley commented on code in PR #2405:
URL: https://github.com/apache/solr/pull/2405#discussion_r1610636272


##
solr/core/src/java/org/apache/solr/metrics/prometheus/SolrPrometheusCoreExporter.java:
##
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.metrics.prometheus;
+
+import com.codahale.metrics.Meter;
+import com.codahale.metrics.Metric;
+import org.apache.solr.metrics.prometheus.core.SolrCoreCacheMetric;
+import org.apache.solr.metrics.prometheus.core.SolrCoreHandlerMetric;
+import org.apache.solr.metrics.prometheus.core.SolrCoreHighlighterMetric;
+import org.apache.solr.metrics.prometheus.core.SolrCoreIndexMetric;
+import org.apache.solr.metrics.prometheus.core.SolrCoreMetric;
+import org.apache.solr.metrics.prometheus.core.SolrCoreNoOpMetric;
+import org.apache.solr.metrics.prometheus.core.SolrCoreSearcherMetric;
+import org.apache.solr.metrics.prometheus.core.SolrCoreTlogMetric;
+
+/**
+ * This class maintains a {@link 
io.prometheus.metrics.model.snapshots.MetricSnapshot}s exported
+ * from solr.core {@link com.codahale.metrics.MetricRegistry}
+ */
+public class SolrPrometheusCoreExporter extends SolrPrometheusExporter {
+  public final String coreName;
+  public final boolean cloudMode;
+  public static final String ADMIN = "ADMIN";
+  public static final String QUERY = "QUERY";
+  public static final String UPDATE = "UPDATE";
+  public static final String REPLICATION = "REPLICATION";
+  public static final String TLOG = "TLOG";
+  public static final String CACHE = "CACHE";
+  public static final String SEARCHER = "SEARCHER";
+  public static final String HIGHLIGHTER = "HIGHLIGHTER";
+  public static final String INDEX = "INDEX";
+  public static final String CORE = "CORE";
+
+  public SolrPrometheusCoreExporter(String coreName, boolean cloudMode) {
+super();
+this.coreName = coreName;
+this.cloudMode = cloudMode;
+  }
+
+  /**
+   * Export {@link Metric} to {@link 
io.prometheus.metrics.model.snapshots.MetricSnapshot} and
+   * registers the Snapshot
+   *
+   * @param dropwizardMetric the {@link Meter} to be exported
+   * @param metricName Dropwizard metric name
+   */
+  @Override
+  public void exportDropwizardMetric(Metric dropwizardMetric, String 
metricName) {
+SolrCoreMetric solrCoreMetric = categorizeCoreMetric(dropwizardMetric, 
metricName);
+solrCoreMetric.parseLabels().toPrometheus(this);
+  }
+
+  private SolrCoreMetric categorizeCoreMetric(Metric dropwizardMetric, String 
metricName) {
+String metricCategory = metricName.split("\\.")[0];

Review Comment:
   split takes a "limit" second arg.
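
A quick illustration of the suggestion (the metric name below is hypothetical): passing a limit stops the split after the first separator, which is all that's needed to pull out the category.

{code:java}
class SplitLimitDemo {
  public static void main(String[] args) {
    String metricName = "QUERY.someHandler.requestTimes";  // hypothetical name
    String withoutLimit = metricName.split("\\.")[0];      // splits the whole string
    String withLimit = metricName.split("\\.", 2)[0];      // stops after the first '.'
    System.out.println(withoutLimit + " " + withLimit);    // QUERY QUERY
  }
}
{code}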






Re: [PR] SOLR-16659: Properly construct V2 base urls instead of replacing substring "/solr" with "/api" [solr]

2024-05-22 Thread via GitHub


epugh commented on code in PR #2473:
URL: https://github.com/apache/solr/pull/2473#discussion_r1610621573


##
solr/solrj-zookeeper/src/java/org/apache/solr/common/cloud/ZkStateReader.java:
##
@@ -1331,12 +1331,24 @@ public ConfigData getSecurityProps(boolean getFresh) {
* Returns the baseURL corresponding to a given node's nodeName -- NOTE: 
does not (currently)
* imply that the nodeName (or resulting baseURL) exists in the cluster.
*
-   * @lucene.experimental
+   * @param nodeName name of the node
+   * @return url that looks like {@code https://localhost:8983/solr}
*/
   public String getBaseUrlForNodeName(final String nodeName) {
 return Utils.getBaseUrlForNodeName(nodeName, 
getClusterProperty(URL_SCHEME, "http"));
   }
 
+  /**
+   * Returns the V2 baseURL corresponding to a given node's nodeName -- NOTE: 
does not (currently)
+   * imply that the nodeName (or resulting baseURL) exists in the cluster.
+   *
+   * @param nodeName name of the node
+   * @return url that looks like {@code https://localhost:8983/api}
+   */
+  public String getBaseUrlV2ForNodeName(final String nodeName) {
+return Utils.getBaseUrlForNodeName(nodeName, 
getClusterProperty(URL_SCHEME, "http"), true);

Review Comment:
   Let me know when you decide what you want to do here, and then I look 
forward to merging it!






[jira] [Commented] (SOLR-12813) SolrCloud + 2 shards + subquery + auth = 401 Exception

2024-05-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-12813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848745#comment-17848745
 ] 

ASF subversion and git services commented on SOLR-12813:


Commit 2a84def1cccbac76bc0df791f66458663fe35f9b in solr's branch 
refs/heads/branch_9x from Rudi Seitz
[ https://gitbox.apache.org/repos/asf?p=solr.git;h=2a84def1ccc ]

SOLR-12813 followup -- preserve user Principal in alternate codepath in 
EmbeddedSolrServer (#2429)

EmbeddedSolrServer#request() has two separate codepaths where a 
SolrQueryRequest is created using the _parser.buildRequestFrom() utility 
method. The first codepath is active when the relevant SolrRequestHandler can 
be gotten from the CoreContainer. The second codepath is active when 
coreContainer.getRequestHandler(path) returns null and instead we have to get 
the SolrRequestHandler directly from the SolrCore. This second codepath is the 
one that's used in subquery execution. It was updated in the initial fix for 
SOLR-12813 so that the call to _parser.buildRequestFrom() would now include the 
user Principal. However, the first codepath was left alone because it was not 
found to be involved in subquery execution. In the present commit, the first 
codepath is being updated as well. This change is not needed for addressing the 
issue described in SOLR-12813, but it is being made in the interest of keeping 
the logic as consistent as possible across the two codepaths in 
EmbeddedSolrServer.request()
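
A hypothetical sketch of the invariant the commit restores (the names here are made up; the real change lives in EmbeddedSolrServer#request()): whichever codepath builds the internal request, the caller's Principal should ride along.

{code:java}
import java.security.Principal;

class PrincipalPropagationSketch {
  record InternalRequest(Principal user, String params) {}

  // Codepath 1: handler resolved from the container; carries the Principal.
  static InternalRequest buildViaContainerHandler(Principal user, String params) {
    return new InternalRequest(user, params);
  }

  // Codepath 2: handler resolved from the core; must carry the Principal too.
  static InternalRequest buildViaCoreHandler(Principal user, String params) {
    return new InternalRequest(user, params);
  }
}
{code}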


> SolrCloud + 2 shards + subquery + auth = 401 Exception
> --
>
> Key: SOLR-12813
> URL: https://issues.apache.org/jira/browse/SOLR-12813
> Project: Solr
>  Issue Type: Bug
>  Components: security, SolrCloud
>Affects Versions: 6.4.1, 7.5, 8.11
>Reporter: Igor Fedoryn
>Assignee: Eric Pugh
>Priority: Major
> Fix For: 9.7
>
> Attachments: screen1.png, screen2.png
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Environment: * Solr 6.4.1
>  * Zookeeper 3.4.6
>  * Java 1.8
> Run Zookeeper
> Upload simple configuration wherein the Solr schema has fields for a 
> relationship between parent/child
> Run two Solr instances (2 nodes)
> Create the collection with 1 shard on each Solr node
>  
> Add a parent document to one shard and a child document to another shard.
> The response for 
> /select?q=ChildIdField:VALUE=*,parents:[subqery]=\{!term f=id 
> v=$row.ParentIdsField}
> is correct.
>  
> After that, add Basic Authentication with some user for the collection.
> Restart Solr or reload the Solr collection.
> If a simple request /select?q=*:* with authorization on the Solr server 
> succeeds, then run the previous request
> with authorization on the Solr server and you get the exception: "Solr HTTP 
> error: Unauthorized (401) "
>  
> Screens in the attachment.






[jira] [Commented] (SOLR-12813) SolrCloud + 2 shards + subquery + auth = 401 Exception

2024-05-22 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-12813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848744#comment-17848744
 ] 

ASF subversion and git services commented on SOLR-12813:


Commit 0551589dffb13e25c25d6237914e2b35e2238e98 in solr's branch 
refs/heads/main from Rudi Seitz
[ https://gitbox.apache.org/repos/asf?p=solr.git;h=0551589dffb ]

SOLR-12813 followup -- preserve user Principal in alternate codepath in 
EmbeddedSolrServer (#2429)

EmbeddedSolrServer#request() has two separate codepaths where a 
SolrQueryRequest is created using the _parser.buildRequestFrom() utility 
method. The first codepath is active when the relevant SolrRequestHandler can 
be gotten from the CoreContainer. The second codepath is active when 
coreContainer.getRequestHandler(path) returns null and instead we have to get 
the SolrRequestHandler directly from the SolrCore. This second codepath is the 
one that's used in subquery execution. It was updated in the initial fix for 
SOLR-12813 so that the call to _parser.buildRequestFrom() would now include the 
user Principal. However, the first codepath was left alone because it was not 
found to be involved in subquery execution. In the present commit, the first 
codepath is being updated as well. This change is not needed for addressing the 
issue described in SOLR-12813, but it is being made in the interest of keeping 
the logic as consistent as possible across the two codepaths in 
EmbeddedSolrServer.request()
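
As a minimal sketch of the idea (assuming a buildRequestFrom overload that accepts the 
caller's Principal; this mirrors the commit message rather than quoting the diff verbatim):

```
// Hedged sketch, not the verbatim PR change: both codepaths now forward the
// user Principal when building the SolrQueryRequest, so authorization checks
// made during subquery execution see the authenticated user.
SolrQueryRequest req =
    _parser.buildRequestFrom(
        null,                          // no SolrCore on this codepath
        request.getParams(),
        getContentStreams(request),
        request.getUserPrincipal());   // previously omitted on this codepath
```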


> SolrCloud + 2 shards + subquery + auth = 401 Exception
> --
>
> Key: SOLR-12813
> URL: https://issues.apache.org/jira/browse/SOLR-12813
> Project: Solr
>  Issue Type: Bug
>  Components: security, SolrCloud
>Affects Versions: 6.4.1, 7.5, 8.11
>Reporter: Igor Fedoryn
>Assignee: Eric Pugh
>Priority: Major
> Fix For: 9.7
>
> Attachments: screen1.png, screen2.png
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Environment: * Solr 6.4.1
>  * Zookeeper 3.4.6
>  * Java 1.8
> Run Zookeeper
> Upload simple configuration wherein the Solr schema has fields for a 
> relationship between parent/child
> Run two Solr instances (2 nodes)
> Create the collection with 1 shard on each Solr node
>  
> Add parent document to one shard and child document to another shard.
> The response for: * 
> /select?q=ChildIdField:VALUE=*,parents:[subquery]=\{!term f=id 
> v=$row.ParentIdsField}
> is correct.
>  
> After that add Basic Authentication with some user for collection.
> Restart Solr or reload Solr collection.
> If the simple request /select?q=*:* with authorization on the Solr server 
> succeeds, then running the previous request
> with authorization on the Solr server yields the exception: "Solr HTTP 
> error: Unauthorized (401)"
>  
> Screens in the attachment.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] SOLR-12813 followup -- preserve user Principal in alternate codepath in EmbeddedSolrServer [solr]

2024-05-22 Thread via GitHub


epugh merged PR #2429:
URL: https://github.com/apache/solr/pull/2429


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] SOLR-12813 followup -- preserve user Principal in alternate codepath in EmbeddedSolrServer [solr]

2024-05-22 Thread via GitHub


epugh commented on code in PR #2429:
URL: https://github.com/apache/solr/pull/2429#discussion_r1610618344


##
solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java:
##
@@ -164,7 +164,8 @@ public NamedList request(SolrRequest request, 
String coreName)
 if (handler != null) {
   try {
 SolrQueryRequest req =
-_parser.buildRequestFrom(null, request.getParams(), 
getContentStreams(request));
+_parser.buildRequestFrom(

Review Comment:
   Going to bring it up at the community meeting tomorrow...



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] Admin UI: fixed node list display bug caused by different hostname with same front part [solr]

2024-05-22 Thread via GitHub


epugh closed pull request #2449: Admin UI: fixed node list display bug caused 
by different hostname with same front part
URL: https://github.com/apache/solr/pull/2449


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] Admin UI: fixed node list display bug caused by different hostname with same front part [solr]

2024-05-22 Thread via GitHub


epugh commented on PR #2449:
URL: https://github.com/apache/solr/pull/2449#issuecomment-2125702345

   Due to challenges pushing changes, replacing this one with 
https://github.com/apache/solr/pull/2475.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[PR] Admin UI: fixed node list display bug caused by different hostname with same front part (duplicates PR #2449) [solr]

2024-05-22 Thread via GitHub


epugh opened a new pull request, #2475:
URL: https://github.com/apache/solr/pull/2475

   Having issues pushing CHANGES.txt to 
https://github.com/apache/solr/pull/2449, so opening a new PR.
   
   Sigh.
   
   # Description
   old code:
   ```
   // Checks if this node is the first (alphabetically) for a given host. Used to decide rowspan in table
   $scope.isFirstNodeForHost = function(node) {
     var hostName = node.split(":")[0];
     var nodesInHost = $scope.filteredNodes.filter(function (node) {
       return node.startsWith(hostName);
     });
     return nodesInHost[0] === node;
   };
   ```
   With hostnames like `server-1` and `server-10`, this code causes the node list to
be displayed incorrectly, because the `startsWith` check also counts `server-10`
nodes as belonging to the `server-1` host.
   
   # Solution
   ```
   $scope.isFirstNodeForHost = function(node) {
     var hostName = node.split(":")[0];
     var nodesInHost = $scope.filteredNodes.filter(function (node) {
       return node.split(":")[0] === hostName;
     });
     return nodesInHost[0] === node;
   };
   ```
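   As a quick illustration of the difference (a standalone sketch with hypothetical 
node names, not part of the PR):
   ```
import java.util.List;

// "server-1" is a string prefix of "server-10", so the old startsWith() check
// counts both nodes under the "server-1" host, while comparing the host part
// exactly does not.
public class HostGroupingDemo {
  public static void main(String[] args) {
    List<String> nodes = List.of("server-1:8983_solr", "server-10:7574_solr");
    String host = "server-1";
    long byPrefix = nodes.stream().filter(n -> n.startsWith(host)).count();          // 2 (wrong)
    long byExact = nodes.stream().filter(n -> n.split(":")[0].equals(host)).count(); // 1 (right)
    System.out.println(byPrefix + " vs " + byExact);
  }
}
   ```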
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Commented] (SOLR-17279) consolidate security.json constants in test code

2024-05-22 Thread Eric Pugh (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848743#comment-17848743
 ] 

Eric Pugh commented on SOLR-17279:
--

[~sanjaydutt] so I looked, and I was only going to make this change on main, not 
back-port it. Mostly because many of the places it touched are around 
security for the SolrCLI commands, and basic auth support there is a 10x feature 
that was never backported to 9x.

There are days I regret that decision.

> consolidate security.json constants in test code
> 
>
> Key: SOLR-17279
> URL: https://issues.apache.org/jira/browse/SOLR-17279
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Rudi Seitz
>Assignee: Eric Pugh
>Priority: Trivial
> Fix For: main (10.0)
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Various unit tests declare a SECURITY_JSON constant. Many of these constants 
> are identical (likely introduced via copy/paste from other tests, resulting 
> in code duplication). This ticket is to consolidate the various SECURITY_JSON 
> constants across the tests in a central place.
> This point was raised during discussion of the fix for SOLR-12813. See 
> https://github.com/apache/solr/pull/2404#discussion_r1568012056
> In that discussion, it was agreed to address SECURITY_JSON consolidation in a 
> separate ticket -- this is that ticket. 
> Tagging [~epugh]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] User Behavior Insights implementation for Apache Solr [solr]

2024-05-22 Thread via GitHub


epugh commented on PR #2452:
URL: https://github.com/apache/solr/pull/2452#issuecomment-2125659610

   Oh, and of course, we now have a machine readable schema via Json Schema 
available here https://github.com/o19s/ubi


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] User Behavior Insights implementation for Apache Solr [solr]

2024-05-22 Thread via GitHub


epugh commented on PR #2452:
URL: https://github.com/apache/solr/pull/2452#issuecomment-2125657957

   I figured out how to parse and run a streaming expression that is used to 
write the query analytics data to, well, anywhere we want ;-).   The next area 
to look at is actually integrating the streaming expression INTO the 
component as more than just a one-off.  I gotta figure out how to take the 
data and pass it into the streaming expression...  `input()` maybe???   Also 
think about how to not rebuild/destroy/rebuild the streaming expression for 
every query.   
   
   Then more Ref Guide docs, a BATS integration test maybe, and then a 
discussion about who wants to use it first!  Plus, of course, the ever-critical 
"where does the code live" conversation.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] SOLR-10654: Introduce output of Prometheus metrics for Solr Core registry [solr]

2024-05-22 Thread via GitHub


mlbiscoc commented on code in PR #2405:
URL: https://github.com/apache/solr/pull/2405#discussion_r1610558336


##
solr/core/src/java/org/apache/solr/response/PrometheusResponseWriter.java:
##
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.response;
+
+import io.prometheus.metrics.expositionformats.PrometheusTextFormatWriter;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.Map;
+import org.apache.solr.common.util.NamedList;
+import org.apache.solr.metrics.prometheus.SolrPrometheusCoreExporter;
+import org.apache.solr.request.SolrQueryRequest;
+
+@SuppressWarnings(value = "unchecked")
+public class PrometheusResponseWriter extends RawResponseWriter {

Review Comment:
   Done!



##
solr/core/src/java/org/apache/solr/core/SolrCore.java:
##
@@ -3032,6 +3034,7 @@ public PluginBag 
getResponseWriters() {
 m.put("csv", new CSVResponseWriter());
 m.put("schema.xml", new SchemaXmlResponseWriter());
 m.put("smile", new SmileResponseWriter());
+m.put(PROMETHEUS_METRICS_WT, new PrometheusResponseWriter());

Review Comment:
   Could not figure out a way to do this without registering it. Is there some 
example somewhere where a response writer exists without registering the class?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] SOLR-10654: Introduce output of Prometheus metrics for Solr Core registry [solr]

2024-05-22 Thread via GitHub


mlbiscoc commented on code in PR #2405:
URL: https://github.com/apache/solr/pull/2405#discussion_r1610557545


##
solr/core/src/java/org/apache/solr/metrics/prometheus/core/SolrCoreSearcherMetric.java:
##
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.metrics.prometheus.core;
+
+import static 
org.apache.solr.metrics.prometheus.core.SolrCoreCacheMetric.CORE_CACHE_SEARCHER_METRICS;
+
+import com.codahale.metrics.Counter;
+import com.codahale.metrics.Gauge;
+import com.codahale.metrics.Metric;
+import com.codahale.metrics.Timer;
+import io.prometheus.metrics.model.snapshots.Labels;
+import java.util.ArrayList;
+import org.apache.solr.metrics.prometheus.SolrPrometheusCoreExporter;
+
+/** Dropwizard metrics of name SEARCHER.* */
+public class SolrCoreSearcherMetric extends SolrCoreMetric {
+  public static final String CORE_SEARCHER_METRICS = 
"solr_metrics_core_searcher_documents";
+  public static final String CORE_SEARCHER_TIMES = 
"solr_metrics_core_average_searcher_warmup_time";
+
+  public SolrCoreSearcherMetric(
+  Metric dropwizardMetric, String coreName, String metricName, boolean 
cloudMode) {
+super(dropwizardMetric, coreName, metricName, cloudMode);
+  }
+
+  @Override
+  public SolrCoreMetric parseLabels() {
+String[] parsedMetric = metricName.split("\\.");
+if (!(dropwizardMetric instanceof Counter)) {
+  String type = parsedMetric[2];
+  labels.put("type", type);
+}
+return this;
+  }
+
+  @Override
+  public void toPrometheus(SolrPrometheusCoreExporter 
solrPrometheusCoreRegistry) {

Review Comment:
   Thanks for catching this. Mistakenly did not change the var names after 
changing the classes name. Should all be changed now.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] SOLR-10654: Introduce output of Prometheus metrics for Solr Core registry [solr]

2024-05-22 Thread via GitHub


mlbiscoc commented on code in PR #2405:
URL: https://github.com/apache/solr/pull/2405#discussion_r1610557299


##
solr/core/src/java/org/apache/solr/metrics/prometheus/core/SolrCoreMetric.java:
##
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.metrics.prometheus.core;
+
+import com.codahale.metrics.Metric;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.solr.metrics.prometheus.SolrPrometheusCoreExporter;
+
+/**
+ * Base class is a wrapper to categorize and export {@link 
com.codahale.metrics.Metric} to {@link
+ * io.prometheus.metrics.model.snapshots.DataPointSnapshot} and register to a 
{@link
+ * SolrPrometheusCoreExporter}. {@link com.codahale.metrics.MetricRegistry} 
does not support tags
+ * unlike prometheus. Metrics registered to the registry need to be parsed out 
from the metric name
+ * to be exported to {@link 
io.prometheus.metrics.model.snapshots.DataPointSnapshot}
+ */
+public abstract class SolrCoreMetric {
+  public Metric dropwizardMetric;
+  public String coreName;
+  public String metricName;
+  public Map labels = new HashMap<>();

Review Comment:
   The `PrometheusTextFormatWriter` will take the MetricSnapshots exported and 
just output in lexicographical order. So a `HashMap` to `LinkedHashMap` 
wouldn't really change anything.



##
solr/core/src/java/org/apache/solr/response/PrometheusResponseWriter.java:
##
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.response;
+
+import io.prometheus.metrics.expositionformats.PrometheusTextFormatWriter;
+import java.io.IOException;
+import java.io.OutputStream;
+import java.util.Map;
+import org.apache.solr.common.util.NamedList;
+import org.apache.solr.metrics.prometheus.SolrPrometheusCoreExporter;
+import org.apache.solr.request.SolrQueryRequest;
+
+@SuppressWarnings(value = "unchecked")
+public class PrometheusResponseWriter extends RawResponseWriter {
+  @Override
+  public void write(OutputStream out, SolrQueryRequest request, 
SolrQueryResponse response)
+  throws IOException {
+NamedList prometheusRegistries =
+(NamedList) response.getValues().get("metrics");
+Map registryMap = prometheusRegistries.asShallowMap();

Review Comment:
   Got it, moved to an `Iterator` instead.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] SOLR-10654: Introduce output of Prometheus metrics for Solr Core registry [solr]

2024-05-22 Thread via GitHub


mlbiscoc commented on code in PR #2405:
URL: https://github.com/apache/solr/pull/2405#discussion_r1610557889


##
solr/core/src/java/org/apache/solr/metrics/prometheus/core/SolrCoreMetric.java:
##
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.metrics.prometheus.core;
+
+import com.codahale.metrics.Metric;
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.solr.metrics.prometheus.SolrPrometheusCoreExporter;
+
+/**
+ * Base class is a wrapper to categorize and export {@link 
com.codahale.metrics.Metric} to {@link
+ * io.prometheus.metrics.model.snapshots.DataPointSnapshot} and register to a 
{@link
+ * SolrPrometheusCoreExporter}. {@link com.codahale.metrics.MetricRegistry} 
does not support tags
+ * unlike prometheus. Metrics registered to the registry need to be parsed out 
from the metric name
+ * to be exported to {@link 
io.prometheus.metrics.model.snapshots.DataPointSnapshot}
+ */
+public abstract class SolrCoreMetric {
+  public Metric dropwizardMetric;
+  public String coreName;
+  public String metricName;
+  public Map labels = new HashMap<>();
+
+  public SolrCoreMetric(
+  Metric dropwizardMetric, String coreName, String metricName, boolean 
cloudMode) {
+this.dropwizardMetric = dropwizardMetric;
+this.coreName = coreName;
+this.metricName = metricName;
+labels.put("core", coreName);
+if (cloudMode) {
+  String[] coreNameParsed = coreName.split("_");
+  labels.put("collection", coreNameParsed[1]);

Review Comment:
   Good catch. I created a new function with this regex pattern 
`"^core_(.*)_(shard[0-9]+)_(replica_n[0-9]+)"` for core names looking like 
`core_test_collection_shard1_replica_n1`. Also created a test under 
`SolrCoreMetricTest` to avoid regressions if for some reason the pattern for 
core name changes.
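   For reference, a tiny standalone sketch of how that pattern splits a core name 
into labels (illustrative only; the real parsing lives in the PR and is covered by 
`SolrCoreMetricTest`):
   ```
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CoreNameParseDemo {
  public static void main(String[] args) {
    // Pattern and example core name are quoted from the comment above.
    Pattern p = Pattern.compile("^core_(.*)_(shard[0-9]+)_(replica_n[0-9]+)");
    Matcher m = p.matcher("core_test_collection_shard1_replica_n1");
    if (m.find()) {
      System.out.println("collection=" + m.group(1)); // test_collection
      System.out.println("shard=" + m.group(2));      // shard1
      System.out.println("replica=" + m.group(3));    // replica_n1
    }
  }
}
   ```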



##
solr/core/src/java/org/apache/solr/util/stats/MetricUtils.java:
##
@@ -165,6 +169,71 @@ public static void toSolrInputDocuments(
 });
   }
 
+  /**
+   * Provides a representation of the given Dropwizard metric registry as 
{@link
+   * SolrPrometheusCoreExporter}-s. Only those metrics are converted which 
match at least one of the
+   * given MetricFilter instances.
+   *
+   * @param registry the {@link MetricRegistry} to be converted
+   * @param shouldMatchFilters a list of {@link MetricFilter} instances. A 
metric must match any
+   * one of the filters from this list to be included in the output
+   * @param mustMatchFilter a {@link MetricFilter}. A metric must 
match this filter to be
+   * included in the output.
+   * @param propertyFilter limit what properties of a metric are returned
+   * @param skipHistograms discard any {@link Histogram}-s and histogram parts 
of {@link Timer}-s.
+   * @param skipAggregateValues discard internal values of {@link 
AggregateMetric}-s.
+   * @param compact use compact representation for counters and gauges.
+   * @param consumer consumer that accepts produced {@link 
SolrPrometheusCoreExporter}-s
+   */
+  public static void toPrometheusRegistry(

Review Comment:
   That's a fair point. Moved it to the PrometheusResponseWriter so it's clear it's 
only used for this.



##
solr/core/src/java/org/apache/solr/metrics/prometheus/SolrPrometheusCoreExporter.java:
##
@@ -0,0 +1,104 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package 

Re: [PR] SOLR-10654: Introduce output of Prometheus metrics for Solr Core registry [solr]

2024-05-22 Thread via GitHub


mlbiscoc commented on code in PR #2405:
URL: https://github.com/apache/solr/pull/2405#discussion_r1610556518


##
solr/core/src/java/org/apache/solr/util/stats/MetricUtils.java:
##
@@ -165,6 +169,71 @@ public static void toSolrInputDocuments(
 });
   }
 
+  /**
+   * Provides a representation of the given Dropwizard metric registry as 
{@link
+   * SolrPrometheusCoreExporter}-s. Only those metrics are converted which 
match at least one of the
+   * given MetricFilter instances.
+   *
+   * @param registry the {@link MetricRegistry} to be converted
+   * @param shouldMatchFilters a list of {@link MetricFilter} instances. A 
metric must match any
+   * one of the filters from this list to be included in the output
+   * @param mustMatchFilter a {@link MetricFilter}. A metric must 
match this filter to be
+   * included in the output.
+   * @param propertyFilter limit what properties of a metric are returned
+   * @param skipHistograms discard any {@link Histogram}-s and histogram parts 
of {@link Timer}-s.
+   * @param skipAggregateValues discard internal values of {@link 
AggregateMetric}-s.
+   * @param compact use compact representation for counters and gauges.
+   * @param consumer consumer that accepts produced {@link 
SolrPrometheusCoreExporter}-s
+   */
+  public static void toPrometheusRegistry(
+  MetricRegistry registry,
+  String registryName,
+  List shouldMatchFilters,
+  MetricFilter mustMatchFilter,
+  Predicate propertyFilter,
+  boolean skipHistograms,
+  boolean skipAggregateValues,
+  boolean compact,
+  Consumer consumer) {
+String coreName;
+boolean cloudMode = false;
+Map dropwizardMetrics = registry.getMetrics();
+String[] rawParsedRegistry = registryName.split("\\.");
+List parsedRegistry = new 
ArrayList<>(Arrays.asList(rawParsedRegistry));
+
+if (parsedRegistry.size() == 3) {
+  coreName = parsedRegistry.get(2);
+} else if (parsedRegistry.size() == 5) {
+  coreName = 
parsedRegistry.stream().skip(1).collect(Collectors.joining("_"));
+  cloudMode = true;
+} else {
+  coreName = registryName;
+}
+
+SolrPrometheusCoreExporter solrPrometheusCoreExporter =
+new SolrPrometheusCoreExporter(coreName, cloudMode);
+
+toMaps(
+registry,
+shouldMatchFilters,
+mustMatchFilter,
+propertyFilter,
+skipHistograms,
+skipAggregateValues,
+compact,
+false,
+(metricName, metric) -> {
+  try {
+Metric dropwizardMetric = dropwizardMetrics.get(metricName);

Review Comment:
   A bit confused about where you would prefer this. The metric is retrieved within 
the scope of the `BiConsumer` from `toMaps`. I can't move it to where 
`SolrPrometheusCoreExporter` is declared because that is out of scope of this 
block, and the metrics are iterated over from the registry by `toMaps`.



##
solr/core/src/java/org/apache/solr/metrics/prometheus/core/SolrCoreSearcherMetric.java:
##
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.metrics.prometheus.core;
+
+import static 
org.apache.solr.metrics.prometheus.core.SolrCoreCacheMetric.CORE_CACHE_SEARCHER_METRICS;
+
+import com.codahale.metrics.Counter;
+import com.codahale.metrics.Gauge;
+import com.codahale.metrics.Metric;
+import com.codahale.metrics.Timer;
+import io.prometheus.metrics.model.snapshots.Labels;
+import java.util.ArrayList;
+import org.apache.solr.metrics.prometheus.SolrPrometheusCoreExporter;
+
+/** Dropwizard metrics of name SEARCHER.* */
+public class SolrCoreSearcherMetric extends SolrCoreMetric {
+  public static final String CORE_SEARCHER_METRICS = 
"solr_metrics_core_searcher_documents";
+  public static final String CORE_SEARCHER_TIMES = 
"solr_metrics_core_average_searcher_warmup_time";
+
+  public SolrCoreSearcherMetric(
+  Metric dropwizardMetric, String coreName, String metricName, boolean 
cloudMode) {
+super(dropwizardMetric, coreName, metricName, cloudMode);
+  }
+
+  @Override
+  public SolrCoreMetric parseLabels() {
+String[] parsedMetric = metricName.split("\\.");
+if (!(dropwizardMetric instanceof 

Re: [PR] SOLR-10654: Introduce output of Prometheus metrics for Solr Core registry [solr]

2024-05-22 Thread via GitHub


mlbiscoc commented on code in PR #2405:
URL: https://github.com/apache/solr/pull/2405#discussion_r1610556050


##
solr/core/build.gradle:
##
@@ -153,6 +153,13 @@ dependencies {
   implementation 'org.apache.logging.log4j:log4j-core'
   runtimeOnly 'org.apache.logging.log4j:log4j-slf4j2-impl'
 
+  // Prometheus client

Review Comment:
   Changed accordingly



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] SOLR-16659: Properly construct V2 base urls instead of replacing substring "/solr" with "/api" [solr]

2024-05-22 Thread via GitHub


epugh commented on code in PR #2473:
URL: https://github.com/apache/solr/pull/2473#discussion_r1610274309


##
solr/solrj-zookeeper/src/java/org/apache/solr/common/cloud/ZkStateReader.java:
##
@@ -1331,12 +1331,24 @@ public ConfigData getSecurityProps(boolean getFresh) {
* Returns the baseURL corresponding to a given node's nodeName -- NOTE: 
does not (currently)
* imply that the nodeName (or resulting baseURL) exists in the cluster.
*
-   * @lucene.experimental
+   * @param nodeName name of the node
+   * @return url that looks like {@code https://localhost:8983/solr}
*/
   public String getBaseUrlForNodeName(final String nodeName) {
 return Utils.getBaseUrlForNodeName(nodeName, 
getClusterProperty(URL_SCHEME, "http"));
   }
 
+  /**
+   * Returns the V2 baseURL corresponding to a given node's nodeName -- NOTE: 
does not (currently)
+   * imply that the nodeName (or resulting baseURL) exists in the cluster.
+   *
+   * @param nodeName name of the node
+   * @return url that looks like {@code https://localhost:8983/api}
+   */
+  public String getBaseUrlV2ForNodeName(final String nodeName) {
+return Utils.getBaseUrlForNodeName(nodeName, 
getClusterProperty(URL_SCHEME, "http"), true);

Review Comment:
   yeah...  Or maybe some javadocs to explain it?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Commented] (SOLR-13350) Explore collector managers for multi-threaded search

2024-05-22 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848668#comment-17848668
 ] 

Andrzej Bialecki commented on SOLR-13350:
-

{quote}As of now, the timeAllowed requests are anyway executed without 
multithreading
{quote}
This is based on a {{QueryCommand.timeAllowed}} flag that is set only from the 
{{timeAllowed}} param. However, this concept was extended in SOLR-17138 to 
{{QueryLimits}}, which is now initialized from other params as well. There is 
indeed some inconsistency here that's a left-over from that change, in the 
sense that {{QueryCommand.timeAllowed}} should have been either removed 
completely or replaced with something like {{queryLimits}}, to make sure the 
current SolrRequestInfo is checked for QueryLimits.

In any case, the minimal workaround for this could be to check 
{{QueryLimits.getCurrentLimits().isLimitsEnabled()}} instead of 
{{QueryCommand.timeAllowed}}. But a better fix would be to properly unbreak 
the tracking of the parent {{SolrRequestInfo}} in MT search.
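
A minimal sketch of that workaround (the method names for the two search paths are 
hypothetical; only {{QueryLimits.getCurrentLimits().isLimitsEnabled()}} is taken 
from the comment above):

```
// Hedged sketch, not actual Solr code: gate the multi-threaded
// (CollectorManager) path on the current QueryLimits rather than on the
// legacy QueryCommand.timeAllowed flag.
if (QueryLimits.getCurrentLimits().isLimitsEnabled()) {
  // limits (timeAllowed, cpuAllowed, ...) are active: stay single-threaded so
  // the parent SolrRequestInfo / QueryLimits tracking keeps working
  searchSingleThreaded(cmd);        // hypothetical name
} else {
  searchWithCollectorManager(cmd);  // hypothetical name, multi-threaded path
}
```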

> Explore collector managers for multi-threaded search
> 
>
> Key: SOLR-13350
> URL: https://issues.apache.org/jira/browse/SOLR-13350
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-13350.patch, SOLR-13350.patch, SOLR-13350.patch
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> AFAICT, SolrIndexSearcher can be used only to search all the segments of an 
> index in series. However, using CollectorManagers, segments can be searched 
> concurrently and result in reduced latency. Opening this issue to explore the 
> effectiveness of using CollectorManagers in SolrIndexSearcher from latency 
> and throughput perspective.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] SOLR-16659: Properly construct V2 base urls instead of replacing substring "/solr" with "/api" [solr]

2024-05-22 Thread via GitHub


AndreyBozhko commented on code in PR #2473:
URL: https://github.com/apache/solr/pull/2473#discussion_r1610156568


##
solr/solrj-zookeeper/src/java/org/apache/solr/common/cloud/ZkStateReader.java:
##
@@ -1331,12 +1331,24 @@ public ConfigData getSecurityProps(boolean getFresh) {
* Returns the baseURL corresponding to a given node's nodeName -- NOTE: 
does not (currently)
* imply that the nodeName (or resulting baseURL) exists in the cluster.
*
-   * @lucene.experimental
+   * @param nodeName name of the node
+   * @return url that looks like {@code https://localhost:8983/solr}
*/
   public String getBaseUrlForNodeName(final String nodeName) {
 return Utils.getBaseUrlForNodeName(nodeName, 
getClusterProperty(URL_SCHEME, "http"));
   }
 
+  /**
+   * Returns the V2 baseURL corresponding to a given node's nodeName -- NOTE: 
does not (currently)
+   * imply that the nodeName (or resulting baseURL) exists in the cluster.
+   *
+   * @param nodeName name of the node
+   * @return url that looks like {@code https://localhost:8983/api}
+   */
+  public String getBaseUrlV2ForNodeName(final String nodeName) {
+return Utils.getBaseUrlForNodeName(nodeName, 
getClusterProperty(URL_SCHEME, "http"), true);

Review Comment:
   It's actually a different overload of the method, which accepts a `boolean 
isV2` parameter. But I agree - perhaps it's clearer to also use the same 
overload in line 1338 and pass `isV2 = false` explicitly.
   
   
https://github.com/apache/solr/blob/3837eebcd73e103c083d7b02bdf6904d08fc34b5/solr/solrj/src/java/org/apache/solr/common/util/Utils.java#L719-L738
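   For context, a sketch of how the two overloads are expected to behave for a 
typical node name (an assumption based on this thread and the javadoc above, not a 
quote of the linked source):
   ```
// Hedged sketch: node names look like "localhost:8983_solr"; the extra boolean
// selects the V2 ("/api") base path instead of the default "/solr" one.
String nodeName = "localhost:8983_solr";
String v1 = Utils.getBaseUrlForNodeName(nodeName, "http");        // "http://localhost:8983/solr"
String v2 = Utils.getBaseUrlForNodeName(nodeName, "http", true);  // "http://localhost:8983/api"
   ```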



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] Clarify the cache behavior in the Ref Guide [solr]

2024-05-22 Thread via GitHub


AndreyBozhko commented on code in PR #2472:
URL: https://github.com/apache/solr/pull/2472#discussion_r1610122730


##
solr/solr-ref-guide/modules/configuration-guide/pages/caches-warming.adoc:
##
@@ -86,7 +86,6 @@ The async cache provides most significant improvement with 
many concurrent queri
 However, the async cache will not prevent data races for time-limited queries, 
since those are expected to provide partial results.
 
 All caches can be disabled using the parameter `enabled` with a value of 
`false`.
-Caches can also be disabled on a query-by-query basis with the `cache` 
parameter, as described in the section 
xref:query-guide:common-query-parameters.adoc#cache-local-parameter[cache Local 
Parameter].

Review Comment:
   @epugh The doc does say that caches can be toggled in solrconfig.xml:
   ```
   All caches can be disabled using the parameter `enabled` with a value of 
`false`.
   ```
   
   But the **`cache` local parameter** only works with the filter cache, - so I 
removed all mentions of that parameter from the doc, and only kept one mention 
in the `Filter Cache` section (which redirects to Common Query Parameters).
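   For example (a SolrJ sketch with a hypothetical field name, not taken from the 
PR), the per-query switch applies to a filter query via the `{!cache=false}` local 
parameter:
   ```
import org.apache.solr.client.solrj.SolrQuery;

// Hypothetical field "inStock": only the filter cache honors this per-query
// "cache" local parameter, which is why the other cache sections no longer
// mention it.
SolrQuery q = new SolrQuery("*:*");
q.addFilterQuery("{!cache=false}inStock:true");
   ```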



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Commented] (SOLR-17279) consolidate security.json constants in test code

2024-05-22 Thread Eric Pugh (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848643#comment-17848643
 ] 

Eric Pugh commented on SOLR-17279:
--

Thanks for flagging this...   Let me look again at why I didn't backport 
immediately...   it may have been some conflicts...

> consolidate security.json constants in test code
> 
>
> Key: SOLR-17279
> URL: https://issues.apache.org/jira/browse/SOLR-17279
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Rudi Seitz
>Assignee: Eric Pugh
>Priority: Trivial
> Fix For: main (10.0)
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Various unit tests declare a SECURITY_JSON constant. Many of these constants 
> are identical (likely introduced via copy/paste from other tests, resulting 
> in code duplication). This ticket is to consolidate the various SECURITY_JSON 
> constants across the tests in a central place.
> This point was raised during discussion of the fix for SOLR-12813. See 
> https://github.com/apache/solr/pull/2404#discussion_r1568012056
> In that discussion, it was agreed to address SECURITY_JSON consolidation in a 
> separate ticket -- this is that ticket. 
> Tagging [~epugh]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Commented] (SOLR-17279) consolidate security.json constants in test code

2024-05-22 Thread Sanjay Dutt (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848638#comment-17848638
 ] 

Sanjay Dutt commented on SOLR-17279:


Blocked on SOLR-17300 for the 9x backport.

> consolidate security.json constants in test code
> 
>
> Key: SOLR-17279
> URL: https://issues.apache.org/jira/browse/SOLR-17279
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Rudi Seitz
>Assignee: Eric Pugh
>Priority: Trivial
> Fix For: main (10.0)
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Various unit tests declare a SECURITY_JSON constant. Many of these constants 
> are identical (likely introduced via copy/paste from other tests, resulting 
> in code duplication). This ticket is to consolidate the various SECURITY_JSON 
> constants across the tests in a central place.
> This point was raised during discussion of the fix for SOLR-12813. See 
> https://github.com/apache/solr/pull/2404#discussion_r1568012056
> In that discussion, it was agreed to address SECURITY_JSON consolidation in a 
> separate ticket -- this is that ticket. 
> Tagging [~epugh]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] SOLR-17118: Simplify CoreContainerProvider initialization [solr]

2024-05-22 Thread via GitHub


gus-asf commented on code in PR #2474:
URL: https://github.com/apache/solr/pull/2474#discussion_r1609936991


##
solr/core/src/java/org/apache/solr/servlet/CoreContainerProvider.java:
##
@@ -93,69 +87,69 @@ public class CoreContainerProvider implements 
ServletContextListener {
   private HttpClient httpClient;
   private SolrMetricManager metricManager;
   private RateLimitManager rateLimitManager;
-  private final CountDownLatch init = new CountDownLatch(1);
   private String registryName;
-  // AFAIK the only reason we need this is to support JettySolrRunner for 
tests. In tests we might
-  // have multiple CoreContainers in the same JVM, but I *think* that doesn't 
happen in a real
-  // server.
-  private static final Map services =
-  Collections.synchronizedMap(new WeakHashMap<>());
-
-  // todo: dependency injection instead, but for now this method and the 
associated map will have
-  // to suffice.
-  // Note that this relies on ServletContext.equals() not implementing 
anything significantly
-  // different than Object.equals for its .equals method (I've found no 
implementation that even
-  // implements it).
-  public static ServiceHolder serviceForContext(ServletContext ctx) throws 
InterruptedException {
-ContextInitializationKey key = new ContextInitializationKey(ctx);
-return services.computeIfAbsent(key, ServiceHolder::new);
+
+  /** Acquires an instance from the context, waiting if necessary. */
+  public static CoreContainerProvider serviceForContext(ServletContext ctx)
+  throws InterruptedException {
+long startWait = System.nanoTime();
+synchronized (ctx) {

Review Comment:
   @epugh was the one who mentioned the jetty quickstart mechanism to me back 
when I was doing this. I don't know if he's had thoughts since then.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] SOLR-17118: Simplify CoreContainerProvider initialization [solr]

2024-05-22 Thread via GitHub


gus-asf commented on code in PR #2474:
URL: https://github.com/apache/solr/pull/2474#discussion_r1609932690


##
solr/core/src/java/org/apache/solr/servlet/CoreContainerProvider.java:
##
@@ -93,69 +87,69 @@ public class CoreContainerProvider implements 
ServletContextListener {
   private HttpClient httpClient;
   private SolrMetricManager metricManager;
   private RateLimitManager rateLimitManager;
-  private final CountDownLatch init = new CountDownLatch(1);
   private String registryName;
-  // AFAIK the only reason we need this is to support JettySolrRunner for 
tests. In tests we might
-  // have multiple CoreContainers in the same JVM, but I *think* that doesn't 
happen in a real
-  // server.
-  private static final Map services =
-  Collections.synchronizedMap(new WeakHashMap<>());
-
-  // todo: dependency injection instead, but for now this method and the 
associated map will have
-  // to suffice.
-  // Note that this relies on ServletContext.equals() not implementing 
anything significantly
-  // different than Object.equals for its .equals method (I've found no 
implementation that even
-  // implements it).
-  public static ServiceHolder serviceForContext(ServletContext ctx) throws 
InterruptedException {
-ContextInitializationKey key = new ContextInitializationKey(ctx);
-return services.computeIfAbsent(key, ServiceHolder::new);
+
+  /** Acquires an instance from the context, waiting if necessary. */
+  public static CoreContainerProvider serviceForContext(ServletContext ctx)
+  throws InterruptedException {
+long startWait = System.nanoTime();
+synchronized (ctx) {

Review Comment:
   Hmm, interesting idea, though slightly scary to wait on something Jetty owns. 
Never know when they might also (try to) lock it during server initialization. 
Also I'm unsure if this would be safe if we ever had another 
ServletContextListener do the same thing... Worse yet, the use of JettySolrRunner 
for our tests means that we don't have any tests for what start.jar is doing. 
That dichotomy is a lurking problem, for which there are 2 possible solutions: 
1) make Jetty startup quicker for tests with 
https://eclipse.dev/jetty/documentation/jetty-12/operations-guide/index.html#og-quickstart
 and ditch JettySolrRunner, or 2) ditch start.jar and transition to custom 
programmatic Jetty startup (possibly a variant of JettySolrRunner). As it 
stands, the tests don't do a good job of covering the true startup process.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] SOLR-17118: Simplify CoreContainerProvider initialization [solr]

2024-05-22 Thread via GitHub


gus-asf commented on code in PR #2474:
URL: https://github.com/apache/solr/pull/2474#discussion_r1609932690


##
solr/core/src/java/org/apache/solr/servlet/CoreContainerProvider.java:
##
@@ -93,69 +87,69 @@ public class CoreContainerProvider implements 
ServletContextListener {
   private HttpClient httpClient;
   private SolrMetricManager metricManager;
   private RateLimitManager rateLimitManager;
-  private final CountDownLatch init = new CountDownLatch(1);
   private String registryName;
-  // AFAIK the only reason we need this is to support JettySolrRunner for 
tests. In tests we might
-  // have multiple CoreContainers in the same JVM, but I *think* that doesn't 
happen in a real
-  // server.
-  private static final Map services =
-  Collections.synchronizedMap(new WeakHashMap<>());
-
-  // todo: dependency injection instead, but for now this method and the 
associated map will have
-  // to suffice.
-  // Note that this relies on ServletContext.equals() not implementing 
anything significantly
-  // different than Object.equals for its .equals method (I've found no 
implementation that even
-  // implements it).
-  public static ServiceHolder serviceForContext(ServletContext ctx) throws 
InterruptedException {
-ContextInitializationKey key = new ContextInitializationKey(ctx);
-return services.computeIfAbsent(key, ServiceHolder::new);
+
+  /** Acquires an instance from the context, waiting if necessary. */
+  public static CoreContainerProvider serviceForContext(ServletContext ctx)
+  throws InterruptedException {
+long startWait = System.nanoTime();
+synchronized (ctx) {

Review Comment:
   Hmm, interesting idea, though slightly scary to wait on something Jetty owns. 
Never know when they might also (try to) lock it during server initialization. 
Also I'm unsure if this would be safe if we ever had another 
ServletContextListener do the same thing... Worse yet, the use of JettySolrRunner 
for our tests means that we don't have any tests for what start.jar is doing. 
That dichotomy is a lurking problem, for which there are 2 possible solutions: 
1) make Jetty startup quicker for tests with 
https://eclipse.dev/jetty/documentation/jetty-12/operations-guide/index.html#og-quickstart
 or 2) ditch start.jar and transition to custom programmatic Jetty startup 
(possibly a variant of JettySolrRunner). As it stands, the tests don't do a 
good job of covering the true startup process.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] SOLR-17118: Simplify CoreContainerProvider initialization [solr]

2024-05-22 Thread via GitHub


gus-asf commented on code in PR #2474:
URL: https://github.com/apache/solr/pull/2474#discussion_r1609909368


##
solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java:
##
@@ -73,10 +72,7 @@
 public class SolrDispatchFilter extends BaseSolrFilter implements PathExcluder 
{
   private static final Logger log = 
LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
 
-  // TODO: see if we can get rid of the holder here (Servlet spec actually 
guarantees
-  // ContextListeners run before filter init, but JettySolrRunner that we use 
for tests is
-  // complicated)
-  private ServiceHolder coreService;

Review Comment:
   This was the sticky bit. Before pushing I suggest you beast the tests for a 
good while. My memory is that without this I got a low level of failures and/or 
deadlocks. The current state, while apparently not perfect, had a low enough 
failure frequency that it didn't crop up (though I don't remember exactly how 
long I was beasting things either).



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] SOLR-17304: PKG_VERSIONS not honored when loading the schema plugins [solr]

2024-05-22 Thread via GitHub


epugh commented on PR #2471:
URL: https://github.com/apache/solr/pull/2471#issuecomment-2124630652

   > > It seems like if your test is passing, then you've fixed the bug!
   > 
   > The test does pass - but for completeness, I still want to check what 
happens when the collection is reloaded or when packages are refreshed, just to 
make sure nothing else breaks.
   
   Awesome!   When you are ready, if no one else moves this forward, please 
ping me.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] SOLR-17304: PKG_VERSIONS not honored when loading the schema plugins [solr]

2024-05-22 Thread via GitHub


epugh commented on code in PR #2471:
URL: https://github.com/apache/solr/pull/2471#discussion_r1609830946


##
solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java:
##
@@ -694,9 +694,14 @@ public  boolean addToCoreAware(T obj) {
 }
   }
 
+  void initConfig(SolrConfig config) {
+assert this.config == null || this.config == config;

Review Comment:
   Good question, and honestly when I see the words "resource loader" I run and 
hide!   Not familiar with that.  Maybe ask on the dev list?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



Re: [PR] SOLR-16659: Properly construct V2 base urls instead of replacing substring "/solr" with "/api" [solr]

2024-05-22 Thread via GitHub


epugh commented on code in PR #2473:
URL: https://github.com/apache/solr/pull/2473#discussion_r1609824037


##
solr/core/src/test/org/apache/solr/handler/TestContainerPlugin.java:
##
@@ -572,7 +572,7 @@ private V2Request postPlugin(Object payload) {
 
   public void waitForAllNodesToSync(String path, Map expected) 
throws Exception {
 for (JettySolrRunner jettySolrRunner : cluster.getJettySolrRunners()) {
-  String baseUrl = 
jettySolrRunner.getBaseUrl().toString().replace("/solr", "/api");
+  String baseUrl = jettySolrRunner.getBaseURLV2().toString();

Review Comment:
   so many `.toString()` ;-).   I wonder... should we be working more with URL 
objects and less with String objects when we build our URLs?



##
solr/core/src/test/org/apache/solr/cloud/AliasIntegrationTest.java:
##
@@ -260,7 +259,7 @@ public void testModifyPropertiesV2() throws Exception {
 checkFooAndBarMeta(aliasName, zkStateReader, "baz", "bam");
 
 String aliasPropertyApi =
-String.format(Locale.ENGLISH, "/api/aliases/%s/properties/%s", 
aliasName, "foo");

Review Comment:
   that's some funky logic; it's nice to see it sorted out



##
solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpSolrClient.java:
##
@@ -374,7 +374,7 @@ protected HttpRequestBase createMethod(SolrRequest 
request, String collection
 
 if (request instanceof V2Request) {
   if (System.getProperty("solr.v2RealPath") == null || ((V2Request) 
request).isForceV2()) {
-basePath = baseUrl.replace("/solr", "/api");
+basePath = changeV2RequestEndpoint(baseUrl);

Review Comment:
   unfortunately, until we get `HttpSolrClient` gone, we probably should 
because it is used in so many places.  We really DO need to get the migration 
done in Solr 10 so we can cut down on the surface of changes every time we 
touch an implementation.



##
solr/solrj-zookeeper/src/java/org/apache/solr/common/cloud/ZkStateReader.java:
##
@@ -1331,12 +1331,24 @@ public ConfigData getSecurityProps(boolean getFresh) {
* Returns the baseURL corresponding to a given node's nodeName -- NOTE: 
does not (currently)
* imply that the nodeName (or resulting baseURL) exists in the cluster.
*
-   * @lucene.experimental
+   * @param nodeName name of the node
+   * @return url that looks like {@code https://localhost:8983/solr}
*/
   public String getBaseUrlForNodeName(final String nodeName) {
 return Utils.getBaseUrlForNodeName(nodeName, 
getClusterProperty(URL_SCHEME, "http"));
   }
 
+  /**
+   * Returns the V2 baseURL corresponding to a given node's nodeName -- NOTE: 
does not (currently)
+   * imply that the nodeName (or resulting baseURL) exists in the cluster.
+   *
+   * @param nodeName name of the node
+   * @return url that looks like {@code https://localhost:8983/api}
+   */
+  public String getBaseUrlV2ForNodeName(final String nodeName) {
+return Utils.getBaseUrlForNodeName(nodeName, 
getClusterProperty(URL_SCHEME, "http"), true);

Review Comment:
   is it just me, or does this method look exactly like the one in 
`getBaseUrlForNodeName`???   Where do we do the `/api` conversion?
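
   For context, a rough sketch of what a V2-aware variant could do (this is an 
assumption about the three-argument Utils.getBaseUrlForNodeName, not its actual 
code): node names look like 127.0.0.1:8983_solr, where the part after the 
underscore is the URL-encoded context path, and the extra flag would substitute 
the api context for the encoded one.

    import java.net.URLDecoder;
    import java.nio.charset.StandardCharsets;

    public class NodeNameUrlSketch {
      // Assumption-laden sketch, not Solr's Utils.getBaseUrlForNodeName.
      static String baseUrlForNodeName(String nodeName, String urlScheme, boolean isV2) {
        int split = nodeName.lastIndexOf('_');
        String hostAndPort = nodeName.substring(0, split);
        // the context path is URL-encoded in the node name, e.g. "solr"
        String context = URLDecoder.decode(nodeName.substring(split + 1), StandardCharsets.UTF_8);
        return urlScheme + "://" + hostAndPort + "/" + (isV2 ? "api" : context);
      }

      public static void main(String[] args) {
        System.out.println(baseUrlForNodeName("127.0.0.1:8983_solr", "http", false)); // .../solr
        System.out.println(baseUrlForNodeName("127.0.0.1:8983_solr", "http", true));  // .../api
      }
    }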



##
solr/solrj-zookeeper/src/java/org/apache/solr/common/cloud/ZkStateReader.java:
##
@@ -1331,12 +1331,24 @@ public ConfigData getSecurityProps(boolean getFresh) {
* Returns the baseURL corresponding to a given node's nodeName -- NOTE: 
does not (currently)
* imply that the nodeName (or resulting baseURL) exists in the cluster.
*
-   * @lucene.experimental

Review Comment:
   nice clean up!



##
solr/solrj-zookeeper/src/java/org/apache/solr/common/cloud/ZkStateReader.java:
##
@@ -1331,12 +1331,24 @@ public ConfigData getSecurityProps(boolean getFresh) {
* Returns the baseURL corresponding to a given node's nodeName -- NOTE: 
does not (currently)
* imply that the nodeName (or resulting baseURL) exists in the cluster.
*
-   * @lucene.experimental
+   * @param nodeName name of the node
+   * @return url that looks like {@code https://localhost:8983/solr}
*/
   public String getBaseUrlForNodeName(final String nodeName) {
 return Utils.getBaseUrlForNodeName(nodeName, 
getClusterProperty(URL_SCHEME, "http"));
   }
 
+  /**
+   * Returns the V2 baseURL corresponding to a given node's nodeName -- NOTE: 
does not (currently)
+   * imply that the nodeName (or resulting baseURL) exists in the cluster.
+   *
+   * @param nodeName name of the node
+   * @return url that looks like {@code https://localhost:8983/api}
+   */
+  public String getBaseUrlV2ForNodeName(final String nodeName) {
+return Utils.getBaseUrlForNodeName(nodeName, 
getClusterProperty(URL_SCHEME, "http"), true);

Review Comment:
   Looking up to line 1338



##
solr/core/src/test/org/apache/solr/cloud/TestConfigSetsAPI.java:
##
@@ -1590,8 +1590,7 @@ private long uploadGivenConfigSet(
   Map map =
   

Re: [PR] Clarify the cache behavior in the Ref Guide [solr]

2024-05-22 Thread via GitHub


epugh commented on code in PR #2472:
URL: https://github.com/apache/solr/pull/2472#discussion_r1609814674


##
solr/solr-ref-guide/modules/configuration-guide/pages/caches-warming.adoc:
##
@@ -86,7 +86,6 @@ The async cache provides most significant improvement with 
many concurrent queri
 However, the async cache will not prevent data races for time-limited queries, 
since those are expected to provide partial results.
 
 All caches can be disabled using the parameter `enabled` with a value of 
`false`.
-Caches can also be disabled on a query-by-query basis with the `cache` 
parameter, as described in the section 
xref:query-guide:common-query-parameters.adoc#cache-local-parameter[cache Local 
Parameter].

Review Comment:
   Wow... so, I notice you removed the line. Do we cover elsewhere in the docs 
that the filter cache can be disabled on a query-by-query basis? I wonder if 
this deleted line should be incorporated into the `[TIP]` to highlight the 
query-by-query option, which is very different from the other settings...?
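
   For reference, the query-by-query knob under discussion is the cache local 
parameter; a minimal SolrJ illustration (the field name is only an example) 
could be:

    import org.apache.solr.client.solrj.SolrQuery;

    public class CacheLocalParamSketch {
      public static void main(String[] args) {
        SolrQuery q = new SolrQuery("*:*");
        // Ask Solr not to use the filterCache for this one filter query.
        q.addFilterQuery("{!cache=false}inStock:true");
        System.out.println(q);
      }
    }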



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org



[jira] [Commented] (SOLR-17306) Solr Repeater or Slave loses data after restart when replication is not enabled on leader

2024-05-22 Thread Eric Pugh (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-17306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17848580#comment-17848580
 ] 

Eric Pugh commented on SOLR-17306:
--

Could you put together a reproducible bats style test that demonstrates this 
behavior?  For inspiration, look at [https://github.com/apache/solr/pull/1875] 
as an example.  

 

 

> Solr Repeater or Slave loses data after restart when replication is not 
> enabled on leader
> -
>
> Key: SOLR-17306
> URL: https://issues.apache.org/jira/browse/SOLR-17306
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 9.2, 9.3, 9.4, 9.6, 9.5.0
>Reporter: Peter Kroiss
>Priority: Major
>
> We are testing Solr 9.6.2 in a leader - repeater - follower configuration. There 
> are times when we write to the leader heavily; during that time replication is 
> disabled to save bandwidth.
> If the repeater restarts for some reason while replication is disabled on the 
> leader, it loses all documents and doesn't recover when the leader is opened 
> for replication again.
> The documents are deleted, but the indexVersion and generation properties are 
> set to the leader's values, so the repeater or follower doesn't recover when 
> the leader is opened for replication again.
> It only recovers when there are commits on the leader after replication is 
> re-enabled.
> Log:
> 2024-05-22 06:18:42.186 INFO  (qtp16373883-27-null-23) [c: s: r: x:mycore 
> t:null-23] o.a.s.c.S.Request webapp=/solr path=/replication 
> params={wt=json&command=details} status=0 QTime=10
> 2024-05-22 06:18:46.195 INFO  (indexFetcher-43-thread-1) [c: s: r: x:mycore 
> t:] o.a.s.h.IndexFetcher Leader's generation: 0
> 2024-05-22 06:18:46.195 INFO  (indexFetcher-43-thread-1) [c: s: r: x:mycore 
> t:] o.a.s.h.IndexFetcher Leader's version: 0
> 2024-05-22 06:18:46.195 INFO  (indexFetcher-43-thread-1) [c: s: r: x:mycore 
> t:] o.a.s.h.IndexFetcher Follower's generation: 2913
> 2024-05-22 06:18:46.195 INFO  (indexFetcher-43-thread-1) [c: s: r: x:mycore 
> t:] o.a.s.h.IndexFetcher Follower's version: 1716300697144
> 2024-05-22 06:18:46.195 INFO  (indexFetcher-43-thread-1) [c: s: r: x:mycore 
> t:] o.a.s.h.IndexFetcher New index in Leader. Deleting mine...
>  
> --> there is no new index in the Leader; it is only closed for replication
>  
>  
> We think the problem is in IndexFetcher.
> old: if (IndexDeletionPolicyWrapper.getCommitTimestamp(commit) != 0L) {
> Adding forceReplication to the condition will probably fix the problem:
> new: if (forceReplication && 
> IndexDeletionPolicyWrapper.getCommitTimestamp(commit) != 0L) {
>  
>  
>  
>  
> When investigating the problem we also found some inconsistencies in the 
> details request. There are two leader sections in the response. When the leader 
> is closed for replication, the property leader.replicationEnabled is set to 
> true, while the property follower.leaderDetails.leader.replicationEnabled is 
> correct.
>  
> Example
> curl -s 
> "https://solr9-repeater:8983/solr/mycore/replication?wt=json=details; 
> | jq  '.details |
> { indexSize: .indexSize, indexVersion: .indexVersion, generation: 
> .generation, indexPath: .indexPath, leader: {  replicableVersion: 
> .leader.replicableVersion, replicableGeneration: 
> .leader.replicableGeneration, replicationEnabled: .leader.replicationEnabled }
> ,
> follower: { leaderDetails: { indexSize: .follower.leaderDetails.indexSize, 
> generation: .follower.leaderDetails.generation,
>  indexVersion: .follower.leaderDetails.indexVersion, indexPath: 
> .follower.leaderDetails.indexPath,
> leader:
> { replicableVersion:  .follower.leaderDetails.leader.replicableVersion , 
> replicableGeneration:  .follower.leaderDetails.leader.replicableGeneration, 
> replicationEnabled:  .follower.leaderDetails.leader.replicationEnabled }
>    }}
> }'
>  
> {
>   "indexSize": "10.34 GB",
>   "indexVersion": 1716358708159,
>   "generation": 2913,
>   "indexPath": "/var/solr/data/mycore/data/index.20240522061946262",
>   "leader":
> {     "replicableVersion": 1716358708159,     "replicableGeneration": 2913,   
>   "replicationEnabled": "true"   }
> ,
>   "follower": {
>     "leaderDetails": {
>   "indexSize": "10.34 GB",
>   "generation": 2913,
>   "indexVersion": 1716358708159,
>   "indexPath": "/var/solr/data/mycore/data/restore.20240508131046932",
>   "leader":
> {     "replicableVersion": 1716358708159,     "replicableGeneration": 
> 2913,     "replicationEnabled": "false"   }
>     }
>   }
> }
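
To make the proposed change concrete, here is a simplified stand-in for that 
decision (a sketch only; names and structure are illustrative and are not 
copied from IndexFetcher):

    public class IndexFetcherGuardSketch {
      // Illustrative only: when the leader reports version/generation 0 (e.g. because
      // replication is disabled rather than because the index is genuinely new), only
      // delete the local index if this is a forced replication.
      static boolean shouldDeleteLocalIndex(boolean forceReplication, long localCommitTimestamp) {
        // old behaviour: return localCommitTimestamp != 0L;
        return forceReplication && localCommitTimestamp != 0L; // proposed
      }

      public static void main(String[] args) {
        // Repeater restarts while the leader merely has replication disabled:
        System.out.println(shouldDeleteLocalIndex(false, 1716300697144L)); // false -> index kept
      }
    }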



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org

[jira] [Updated] (SOLR-17306) Solr Repeater or Slave loses data after restart when replication is not enabled on leader

2024-05-22 Thread Peter Kroiss (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-17306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Kroiss updated SOLR-17306:

Description: 
We are testing Solr 9.6.2 in a leader - repeater - follower configuration. There 
are times when we write to the leader heavily; during that time replication is 
disabled to save bandwidth.

If the repeater restarts for some reason while replication is disabled on the 
leader, it loses all documents and doesn't recover when the leader is opened for 
replication again.

The documents are deleted, but the indexVersion and generation properties are set 
to the leader's values, so the repeater or follower doesn't recover when the 
leader is opened for replication again.

It only recovers when there are commits on the leader after replication is 
re-enabled.

Log:

2024-05-22 06:18:42.186 INFO  (qtp16373883-27-null-23) [c: s: r: x:mycore 
t:null-23] o.a.s.c.S.Request webapp=/solr path=/replication 
params={wt=json&command=details} status=0 QTime=10

2024-05-22 06:18:46.195 INFO  (indexFetcher-43-thread-1) [c: s: r: x:mycore t:] 
o.a.s.h.IndexFetcher Leader's generation: 0

2024-05-22 06:18:46.195 INFO  (indexFetcher-43-thread-1) [c: s: r: x:mycore t:] 
o.a.s.h.IndexFetcher Leader's version: 0

2024-05-22 06:18:46.195 INFO  (indexFetcher-43-thread-1) [c: s: r: x:mycore t:] 
o.a.s.h.IndexFetcher Follower's generation: 2913

2024-05-22 06:18:46.195 INFO  (indexFetcher-43-thread-1) [c: s: r: x:mycore t:] 
o.a.s.h.IndexFetcher Follower's version: 1716300697144

2024-05-22 06:18:46.195 INFO  (indexFetcher-43-thread-1) [c: s: r: x:mycore t:] 
o.a.s.h.IndexFetcher New index in Leader. Deleting mine...

 

--> there is no new index in the Leader; it is only closed for replication

 

 

We think the problem is in IndexFetcher.

old: if (IndexDeletionPolicyWrapper.getCommitTimestamp(commit) != 0L) {

Adding forceReplication to the condition will probably fix the problem:

new: if (forceReplication && 
IndexDeletionPolicyWrapper.getCommitTimestamp(commit) != 0L) {

 

 

 

 

When investigating the problem we also found some inconsistencies in the 
details request. There are two leader sections in the response. When the leader 
is closed for replication, the property leader.replicationEnabled is set to 
true, while the property follower.leaderDetails.leader.replicationEnabled is 
correct.

 

Example

curl -s 
"https://solr9-repeater:8983/solr/mycore/replication?wt=json=details; | 
jq  '.details |

{ indexSize: .indexSize, indexVersion: .indexVersion, generation: .generation, 
indexPath: .indexPath, leader: {  replicableVersion: 
.leader.replicableVersion, replicableGeneration: .leader.replicableGeneration, 
replicationEnabled: .leader.replicationEnabled }

,

follower: { leaderDetails: { indexSize: .follower.leaderDetails.indexSize, 
generation: .follower.leaderDetails.generation,

 indexVersion: .follower.leaderDetails.indexVersion, indexPath: 
.follower.leaderDetails.indexPath,

leader:

{ replicableVersion:  .follower.leaderDetails.leader.replicableVersion , 
replicableGeneration:  .follower.leaderDetails.leader.replicableGeneration, 
replicationEnabled:  .follower.leaderDetails.leader.replicationEnabled }

   }}

}'

 

{

  "indexSize": "10.34 GB",

  "indexVersion": 1716358708159,

  "generation": 2913,

  "indexPath": "/var/solr/data/mycore/data/index.20240522061946262",

  "leader":

{     "replicableVersion": 1716358708159,     "replicableGeneration": 2913,     
"replicationEnabled": "true"   }

,

  "follower": {

    "leaderDetails": {

  "indexSize": "10.34 GB",

  "generation": 2913,

  "indexVersion": 1716358708159,

  "indexPath": "/var/solr/data/mycore/data/restore.20240508131046932",

  "leader":

{     "replicableVersion": 1716358708159,     "replicableGeneration": 
2913,     "replicationEnabled": "false"   }

    }

  }

}

  was:
We are testing Solr 9.6.2 in a leader - repeater - follower configuration. We 
have times where we write the leader heavily, in that time replication is 
disabled to save bandwidth.

In the time, when replication is disabled on leader, the repeater restarts for 
some reason, the repeater loses all documents and doesn't recover when the 
leader is opened for replication.

The documents are deleted but indexVersion and generation properties are set to 
the value of the leader, so the repeater or follower doesn't recover when the 
leader is opened for replication again.

It recovers only when there are commits on the leader after opening the 
replication.

Log:

2024-05-22 06:18:42.186 INFO  (qtp16373883-27-null-23) [c: s: r: x:mycore 
t:null-23] o.a.s.c.S.Request webapp=/solr path=/replication 
params={wt=json&command=details} status=0 QTime=10

2024-05-22 06:18:46.195 INFO  (indexFetcher-43-thread-1) [c: s: r: x:mycore t:] 
o.a.s.h.IndexFetcher Leader's generation: 0

2024-05-22 06:18:46.195 INFO  (indexFetcher-43-thread-1) [c: s: r: x:mycore t:] 
o.a.s.h.IndexFetcher Leader's version: 0


[jira] [Created] (SOLR-17306) Solr Repeater or Slave loses data after restart when replication is not enabled on leader

2024-05-22 Thread Peter Kroiss (Jira)
Peter Kroiss created SOLR-17306:
---

 Summary: Solr Repeater or Slave loses data after restart when 
replication is not enabled on leader
 Key: SOLR-17306
 URL: https://issues.apache.org/jira/browse/SOLR-17306
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 9.5.0, 9.6, 9.4, 9.3, 9.2
Reporter: Peter Kroiss


We are testing Solr 9.6.2 in a leader - repeater - follower configuration. There 
are times when we write to the leader heavily; during that time replication is 
disabled to save bandwidth.

If the repeater restarts for some reason while replication is disabled on the 
leader, it loses all documents and doesn't recover when the leader is opened for 
replication again.

The documents are deleted, but the indexVersion and generation properties are set 
to the leader's values, so the repeater or follower doesn't recover when the 
leader is opened for replication again.

It only recovers when there are commits on the leader after replication is 
re-enabled.

Log:

2024-05-22 06:18:42.186 INFO  (qtp16373883-27-null-23) [c: s: r: x:mycore 
t:null-23] o.a.s.c.S.Request webapp=/solr path=/replication 
params={wt=json&command=details} status=0 QTime=10

2024-05-22 06:18:46.195 INFO  (indexFetcher-43-thread-1) [c: s: r: x:mycore t:] 
o.a.s.h.IndexFetcher Leader's generation: 0

2024-05-22 06:18:46.195 INFO  (indexFetcher-43-thread-1) [c: s: r: x:mycore t:] 
o.a.s.h.IndexFetcher Leader's version: 0

2024-05-22 06:18:46.195 INFO  (indexFetcher-43-thread-1) [c: s: r: x:mycore t:] 
o.a.s.h.IndexFetcher Follower's generation: 2913

2024-05-22 06:18:46.195 INFO  (indexFetcher-43-thread-1) [c: s: r: x:mycore t:] 
o.a.s.h.IndexFetcher Follower's version: 1716300697144

2024-05-22 06:18:46.195 INFO  (indexFetcher-43-thread-1) [c: s: r: x:mycore t:] 
o.a.s.h.IndexFetcher New index in Leader. Deleting mine...

 

 

We think the problem is in IndexFetcher.

old: if (IndexDeletionPolicyWrapper.getCommitTimestamp(commit) != 0L) {

Adding forceReplication to the condition will probably fix the problem:

new: if (forceReplication && 
IndexDeletionPolicyWrapper.getCommitTimestamp(commit) != 0L) {

 

 

 

 

When investigating the problem we also found some inconsistencies in the 
details request. There are two leader sections in the response. When the leader 
is closed for replication, the property leader.replicationEnabled is set to 
true, while the property follower.leaderDetails.leader.replicationEnabled is 
correct.

 

Example

curl -s 
"https://solr9-repeater:8983/solr/mycore/replication?wt=json=details; | 
jq  '.details | { indexSize: .indexSize, indexVersion: .indexVersion,

generation: .generation, indexPath: .indexPath,

leader: {  replicableVersion: .leader.replicableVersion, replicableGeneration: 
.leader.replicableGeneration, replicationEnabled: .leader.replicationEnabled },

follower: { leaderDetails: { indexSize: .follower.leaderDetails.indexSize, 
generation: .follower.leaderDetails.generation,

 indexVersion: .follower.leaderDetails.indexVersion, indexPath: 
.follower.leaderDetails.indexPath,

leader: { replicableVersion:  .follower.leaderDetails.leader.replicableVersion 
, replicableGeneration:  .follower.leaderDetails.leader.replicableGeneration,

replicationEnabled:  .follower.leaderDetails.leader.replicationEnabled }

   }}

}'

 

{

  "indexSize": "10.34 GB",

  "indexVersion": 1716358708159,

  "generation": 2913,

  "indexPath": "/var/solr/data/mycore/data/index.20240522061946262",

  "leader": {

    "replicableVersion": 1716358708159,

    "replicableGeneration": 2913,

    "replicationEnabled": "true"

  },

  "follower": {

    "leaderDetails": {

  "indexSize": "10.34 GB",

  "generation": 2913,

  "indexVersion": 1716358708159,

  "indexPath": "/var/solr/data/mycore/data/restore.20240508131046932",

  "leader": {

    "replicableVersion": 1716358708159,

    "replicableGeneration": 2913,

    "replicationEnabled": "false"

  }

    }

  }

}
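
A rough SolrJ equivalent of the curl/jq check above (a sketch, not code from 
this issue; the host and core name are the ones used in this report):

    import org.apache.solr.client.solrj.SolrRequest;
    import org.apache.solr.client.solrj.impl.Http2SolrClient;
    import org.apache.solr.client.solrj.request.GenericSolrRequest;
    import org.apache.solr.common.params.ModifiableSolrParams;
    import org.apache.solr.common.util.NamedList;

    public class ReplicationDetailsSketch {
      public static void main(String[] args) throws Exception {
        // Sketch only: query the replication handler's details command, as in the
        // curl example above, and print what the follower believes about its index.
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("command", "details");
        try (Http2SolrClient client =
            new Http2SolrClient.Builder("https://solr9-repeater:8983/solr/mycore").build()) {
          NamedList<Object> rsp =
              client.request(new GenericSolrRequest(SolrRequest.METHOD.GET, "/replication", params));
          System.out.println("indexVersion = " + rsp.findRecursive("details", "indexVersion"));
          System.out.println("generation   = " + rsp.findRecursive("details", "generation"));
        }
      }
    }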



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@solr.apache.org
For additional commands, e-mail: issues-h...@solr.apache.org