[jira] [Created] (SOLR-9789) ZkCLI throws NullPointerException if zkClient doesn't return data

2016-11-21 Thread Gary Lee (JIRA)
Gary Lee created SOLR-9789:
--

 Summary: ZkCLI throws NullPointerException if zkClient doesn't 
return data
 Key: SOLR-9789
 URL: https://issues.apache.org/jira/browse/SOLR-9789
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 5.5.1
Reporter: Gary Lee
Priority: Minor


We ran into a situation where using ZkCLI to get the Solr clusterstate from 
ZooKeeper always resulted in a NullPointerException. We eventually found that 
the clusterstate was too large (over the 1 MB ZooKeeper znode limit), so the 
zkClient.getData call returned null, but it was confusing to get an NPE 
instead, because ZkCLI assumes the byte data is non-null. Could a check be 
added so that instead of throwing an NPE, a warning is reported?
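
For illustration, a minimal sketch of the kind of null check being asked for, 
placed around the SolrZkClient.getData call (the helper method and message 
below are hypothetical, not the actual ZkCLI code):
{noformat}
import java.nio.charset.StandardCharsets;

import org.apache.solr.common.cloud.SolrZkClient;

// Hypothetical helper showing the requested guard; ZkCLI's real command
// handling is structured differently.
class ZkGetSketch {
  static void printNodeData(SolrZkClient zkClient, String path) throws Exception {
    byte[] data = zkClient.getData(path, null, null, true);
    if (data == null) {
      // Nothing came back (e.g. the clusterstate could not be read because it
      // exceeds ZooKeeper's znode size limit) - warn instead of NPE-ing later.
      System.err.println("Warning: no data returned for " + path);
      return;
    }
    System.out.println(new String(data, StandardCharsets.UTF_8));
  }
}
{noformat}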






[jira] [Commented] (SOLR-9248) HttpSolrClient not compatible with compression option

2016-06-28 Thread Gary Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15353445#comment-15353445
 ] 

Gary Lee commented on SOLR-9248:


Hi Mike, yes, this did work in 5.4. I tested on 5.4 earlier, and there the 
logic closed the response input stream directly (respBody.close()) instead of 
calling EntityUtils.consumeFully, so the GZIPInputStream was getting closed 
properly and we weren't losing connections.

The stack trace is based on 5.5, so it doesn't directly correspond to 5.5.2 - 
sorry if that led to any confusion. But your explanation is correct, and that 
is exactly the problem I see. The exception is now ignored (which is why it's 
no longer straightforward to get a stack trace in the logs), but the end 
result is that the respBody input stream is never closed. I believe respBody 
is the GZIPInputStream that needs to be closed, because I'm seeing that the 
connection stays leased and eventually the httpClient doesn't accept new 
connections anymore.

Your comment on "The GZIPInputStream from the GzipDecompressingEntity was never 
fully constructed" is true when calling EntityUtils.consumeFully, but the 
GZIPInputStream is first constructed at the time we need to read the response, 
and that completes without a problem. It's the next time that we try to do the 
same thing where the error occurs, and the initial GZIPInputStream (respBody) 
never gets closed. Since the GZipDecompressingEntity is providing a new stream 
every time, it essentially ignores the one was previously constructed, and thus 
never achieves the purpose of closing out an input stream.
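
To illustrate the pattern (this mirrors the shape of HttpClientUtil's 
GzipDecompressingEntity, but it is a simplified sketch, not the actual Solr 
source):
{noformat}
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;

import org.apache.http.HttpEntity;
import org.apache.http.entity.HttpEntityWrapper;

// Simplified decompressing wrapper: every getContent() call builds a brand-new
// GZIPInputStream over the underlying response stream.
class GzipDecompressingEntitySketch extends HttpEntityWrapper {
  GzipDecompressingEntitySketch(HttpEntity wrapped) {
    super(wrapped);
  }

  @Override
  public InputStream getContent() throws IOException {
    // The stream handed out by an earlier call is forgotten here; nothing
    // closes it unless the caller does. On a later call, after the response
    // has already been read, the underlying stream is at EOF, so the
    // GZIPInputStream constructor throws EOFException while reading the
    // gzip header.
    return new GZIPInputStream(wrappedEntity.getContent());
  }
}
{noformat}
So the first getContent(), made while reading the response, succeeds; the 
second one, made during cleanup, fails in the constructor, and the stream that 
actually wraps the pooled connection is never closed.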

> HttpSolrClient not compatible with compression option
> -
>
> Key: SOLR-9248
> URL: https://issues.apache.org/jira/browse/SOLR-9248
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 5.5, 5.5.1
>Reporter: Gary Lee
> Attachments: CompressedConnectionTest.java
>
>
> Since Solr 5.5, using the compression option 
> (solrClient.setAllowCompression(true)) causes the HTTP client to quickly run 
> out of connections in the connection pool. After debugging through this, we 
> found that the GZIPInputStream is incompatible with the changes in 5.5 to how 
> the response input stream is closed. At that point the GZIPInputStream throws 
> an EOFException, and while the exception is silently swallowed, the net 
> effect is that the stream is never closed, leaving the connection open. After 
> a number of requests, the pool is exhausted and no further requests can be 
> served.






[jira] [Commented] (SOLR-9248) HttpSolrClient not compatible with compression option

2016-06-27 Thread Gary Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15351752#comment-15351752
 ] 

Gary Lee commented on SOLR-9248:


[~mdrob] Looks like 5.5.2 was just released, but I'm not sure when I'll have a 
chance to integrate it with our application to test this.

However, I did look at the Solr 5.5.2 source code, and based on what I see, I 
don't believe it is resolved yet. I still see the same call in 
HttpSolrClient.executeMethod that closes the entity and the associated 
response input stream using Utils.consumeFully, and that is where the problem 
occurs.


> HttpSolrClient not compatible with compression option
> -
>
> Key: SOLR-9248
> URL: https://issues.apache.org/jira/browse/SOLR-9248
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 5.5, 5.5.1
>Reporter: Gary Lee
>
> Since Solr 5.5, using the compression option 
> (solrClient.setAllowCompression(true)) causes the HTTP client to quickly run 
> out of connections in the connection pool. After debugging through this, we 
> found that the GZIPInputStream is incompatible with the changes in 5.5 to how 
> the response input stream is closed. At that point the GZIPInputStream throws 
> an EOFException, and while the exception is silently swallowed, the net 
> effect is that the stream is never closed, leaving the connection open. After 
> a number of requests, the pool is exhausted and no further requests can be 
> served.






[jira] [Comment Edited] (SOLR-9248) HttpSolrClient not compatible with compression option

2016-06-24 Thread Gary Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348956#comment-15348956
 ] 

Gary Lee edited comment on SOLR-9248 at 6/25/16 12:42 AM:
--

The following is a stack trace we see in the logs (note this is from 5.5; the 
code has since changed in 5.5.1, but the same problem still occurs):
{noformat}
2016-04-26 18:50:28,066 ERROR org.apache.solr.client.solrj.impl.HttpSolrClient  
- Error consuming and closing http response stream. [source:]
java.io.EOFException
at java.util.zip.GZIPInputStream.readUByte(GZIPInputStream.java:268)
at java.util.zip.GZIPInputStream.readUShort(GZIPInputStream.java:258)
at java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:164)
at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:79)
at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:91)
at 
org.apache.solr.client.solrj.impl.HttpClientUtil$GzipDecompressingEntity.getContent(HttpClientUtil.java:356)
at 
org.apache.http.conn.BasicManagedEntity.getContent(BasicManagedEntity.java:87)
at org.apache.http.util.EntityUtils.consume(EntityUtils.java:86)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:594)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:240)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:229)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:957)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.checkAZombieServer(LBHttpSolrClient.java:596)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.access$000(LBHttpSolrClient.java:80)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient$1.run(LBHttpSolrClient.java:671)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}


was (Author: gary.lee):
The following is a stack trace we see in the logs (note this is from 5.5; the 
code has since changed in 5.5.1, but the same problem still occurs):
{noformat}
2016-04-26 18:50:28,066 [aliveCheckExecutor-3-thread-1 
sessionId:F923573474CAEF7 nodeId:node-1 vaultId:119 userId:107 
origReqUri:/ui/proxy/solr/suggestion] ERROR 
org.apache.solr.client.solrj.impl.HttpSolrClient  - Error consuming and closing 
http response stream. [source:]
java.io.EOFException
at java.util.zip.GZIPInputStream.readUByte(GZIPInputStream.java:268)
at java.util.zip.GZIPInputStream.readUShort(GZIPInputStream.java:258)
at java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:164)
at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:79)
at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:91)
at 
org.apache.solr.client.solrj.impl.HttpClientUtil$GzipDecompressingEntity.getContent(HttpClientUtil.java:356)
at 
org.apache.http.conn.BasicManagedEntity.getContent(BasicManagedEntity.java:87)
at org.apache.http.util.EntityUtils.consume(EntityUtils.java:86)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:594)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:240)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:229)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:957)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.checkAZombieServer(LBHttpSolrClient.java:596)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.access$000(LBHttpSolrClient.java:80)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient$1.run(LBHttpSolrClient.java:671)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
        at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
        at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
{noformat}

[jira] [Commented] (SOLR-9248) HttpSolrClient not compatible with compression option

2016-06-24 Thread Gary Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348956#comment-15348956
 ] 

Gary Lee commented on SOLR-9248:


The following is a stack trace we see in the logs (note this is from 5.5; the 
code has since changed in 5.5.1, but the same problem still occurs):
{noformat}
2016-04-26 18:50:28,066 [aliveCheckExecutor-3-thread-1 
sessionId:F923573474CAEF7 nodeId:node-1 vaultId:119 userId:107 
origReqUri:/ui/proxy/solr/suggestion] ERROR 
org.apache.solr.client.solrj.impl.HttpSolrClient  - Error consuming and closing 
http response stream. [source:]
java.io.EOFException
at java.util.zip.GZIPInputStream.readUByte(GZIPInputStream.java:268)
at java.util.zip.GZIPInputStream.readUShort(GZIPInputStream.java:258)
at java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:164)
at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:79)
at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:91)
at 
org.apache.solr.client.solrj.impl.HttpClientUtil$GzipDecompressingEntity.getContent(HttpClientUtil.java:356)
at 
org.apache.http.conn.BasicManagedEntity.getContent(BasicManagedEntity.java:87)
at org.apache.http.util.EntityUtils.consume(EntityUtils.java:86)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:594)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:240)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:229)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:957)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.checkAZombieServer(LBHttpSolrClient.java:596)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.access$000(LBHttpSolrClient.java:80)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient$1.run(LBHttpSolrClient.java:671)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}

> HttpSolrClient not compatible with compression option
> -
>
> Key: SOLR-9248
> URL: https://issues.apache.org/jira/browse/SOLR-9248
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 5.5, 5.5.1
>Reporter: Gary Lee
> Fix For: 5.5.2
>
>
> Since Solr 5.5, using the compression option 
> (solrClient.setAllowCompression(true)) causes the HTTP client to quickly run 
> out of connections in the connection pool. After debugging through this, we 
> found that the GZIPInputStream is incompatible with the changes in 5.5 to how 
> the response input stream is closed. At that point the GZIPInputStream throws 
> an EOFException, and while the exception is silently swallowed, the net 
> effect is that the stream is never closed, leaving the connection open. After 
> a number of requests, the pool is exhausted and no further requests can be 
> served.






[jira] [Created] (SOLR-9248) HttpSolrClient not compatible with compression option

2016-06-24 Thread Gary Lee (JIRA)
Gary Lee created SOLR-9248:
--

 Summary: HttpSolrClient not compatible with compression option
 Key: SOLR-9248
 URL: https://issues.apache.org/jira/browse/SOLR-9248
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 5.5.1, 5.5
Reporter: Gary Lee
 Fix For: 5.5.2


Since Solr 5.5, using the compression option 
(solrClient.setAllowCompression(true)) causes the HTTP client to quickly run 
out of connections in the connection pool. After debugging through this, we 
found that the GZIPInputStream is incompatible with the changes in 5.5 to how 
the response input stream is closed. At that point the GZIPInputStream throws 
an EOFException, and while the exception is silently swallowed, the net effect 
is that the stream is never closed, leaving the connection open. After a 
number of requests, the pool is exhausted and no further requests can be 
served.
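
For illustration, a minimal sketch of the usage pattern that shows the 
behavior (5.5.x SolrJ API; the core URL and request count are placeholders):
{noformat}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class CompressionLeakRepro {
  public static void main(String[] args) throws Exception {
    // Placeholder core URL - point this at any running Solr core.
    HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/techproducts");
    client.setAllowCompression(true); // the option that triggers the leak

    // Each request leaves its gzip response stream unclosed, so the pooled
    // connection stays leased; after enough iterations the pool is exhausted
    // and subsequent requests block or fail.
    for (int i = 0; i < 1000; i++) {
      client.query(new SolrQuery("*:*"));
    }
    client.close();
  }
}
{noformat}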






[jira] [Commented] (SOLR-9244) Lots of "Previous SolrRequestInfo was not closed" in Solr log

2016-06-24 Thread Gary Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15348946#comment-15348946
 ] 

Gary Lee commented on SOLR-9244:


[~tomasflobbe] I don't think I've seen a stack trace - just a lot of these 
messages in our Solr logs:
{noformat}
2016-06-24 23:05:24,173 ERROR org.apache.solr.request.SolrRequestInfo - 
Previous SolrRequestInfo was not closed!  req=commit=true
2016-06-24 23:05:24,173 ERROR org.apache.solr.request.SolrRequestInfo - prev == 
info : false
{noformat}

When I debugged through this once, I remember seeing the error occur in 
HttpSolrCall.call, so I was led to believe that this was possibly the point 
where we do not close it. But as you pointed out, destroy is called, so it is 
getting cleaned up. In that case, the code path in SOLR-8657 is probably the 
same one we're hitting.

> Lots of "Previous SolrRequestInfo was not closed" in Solr log
> -
>
> Key: SOLR-9244
> URL: https://issues.apache.org/jira/browse/SOLR-9244
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.3.1
>Reporter: Gary Lee
>Priority: Minor
> Fix For: 5.3.1
>
>
> After upgrading to Solr 5.3.1, we started seeing a lot of "Previous 
> SolrRequestInfo was not closed" ERROR level messages in the logs. Upon 
> further inspection, it appears this is a sanity check and not an error that 
> needs attention. It appears that the SolrRequestInfo isn't freed in one 
> particular path (no corresponding call to SolrRequestInfo.clearRequestInfo in 
> HttpSolrCall.call), which often leads to a lot of these messages.






[jira] [Comment Edited] (SOLR-9244) Lots of "Previous SolrRequestInfo was not closed" in Solr log

2016-06-23 Thread Gary Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347349#comment-15347349
 ] 

Gary Lee edited comment on SOLR-9244 at 6/23/16 10:39 PM:
--

SOLR-8657 appears to describe another path where the same issue occurs - lots 
of those errors polluting the logs. In our case it seems that HttpSolrCall.call 
isn't properly clearing the SolrRequestInfo. See the comment below marking 
where I would have expected the request info to be cleared:
{noformat}
  public Action call() throws IOException {
MDCLoggingContext.reset();
MDCLoggingContext.setNode(cores);

if (cores == null) {
  sendError(503, "Server is shutting down or failed to initialize");
  return RETURN;
}

if (solrDispatchFilter.abortErrorMessage != null) {
  sendError(500, solrDispatchFilter.abortErrorMessage);
  return RETURN;
}

try {
  init();
...
SolrRequestInfo.setRequestInfo(new SolrRequestInfo(solrReq, 
solrRsp));
execute(solrRsp);
HttpCacheHeaderUtil.checkHttpCachingVeto(solrRsp, resp, reqMethod);
Iterator<Map.Entry<String, String>> headers = solrRsp.httpHeaders();
while (headers.hasNext()) {
  Map.Entry<String, String> entry = headers.next();
  resp.addHeader(entry.getKey(), entry.getValue());
}
QueryResponseWriter responseWriter = 
core.getQueryResponseWriter(solrReq);
if (invalidStates != null) 
solrReq.getContext().put(CloudSolrClient.STATE_VERSION, invalidStates);
writeResponse(solrRsp, responseWriter, reqMethod);
  }
  return RETURN;
default: return action;
  }
} catch (Throwable ex) {
  sendError(ex);
  // walk the entire cause chain to search for an Error
  Throwable t = ex;
  while (t != null) {
if (t instanceof Error) {
  if (t != ex) {
log.error("An Error was wrapped in another exception - please 
report complete stacktrace on SOLR-6161", ex);
  }
  throw (Error) t;
}
t = t.getCause();
  }
  return RETURN;
} finally {
// I WOULD HAVE EXPECTED SolrRequestInfo.clearRequestInfo(); call here
  MDCLoggingContext.clear();
}

  }
{noformat}

So yes, this appears to be the same issue as SOLR-8657, but it involves another 
code path that needs to be addressed.
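
For clarity, the shape of the change being suggested (illustrative only - the 
real HttpSolrCall.call() has more branches, and the eventual fix may look 
different):
{noformat}
public Action call() throws IOException {
  MDCLoggingContext.reset();
  MDCLoggingContext.setNode(cores);
  try {
    // ... dispatch the request; somewhere in here the thread-local is set via
    // SolrRequestInfo.setRequestInfo(new SolrRequestInfo(solrReq, solrRsp)) ...
    return RETURN;
  } finally {
    // Clear the thread-local on every exit path so the next request handled
    // by this (pooled) thread does not see stale state.
    SolrRequestInfo.clearRequestInfo();
    MDCLoggingContext.clear();
  }
}
{noformat}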


was (Author: gary.lee):
SOLR-8657 appears to detail another path in which the same issue occurs - lots 
of those errors polluting the logs. In our case it seems that HttpSolrCall.call 
isn't properly clearing the SolrRequestInfo:
{noformat}
  public Action call() throws IOException {
MDCLoggingContext.reset();
MDCLoggingContext.setNode(cores);

if (cores == null) {
  sendError(503, "Server is shutting down or failed to initialize");
  return RETURN;
}

if (solrDispatchFilter.abortErrorMessage != null) {
  sendError(500, solrDispatchFilter.abortErrorMessage);
  return RETURN;
}

try {
  init();
...
SolrRequestInfo.setRequestInfo(new SolrRequestInfo(solrReq, 
solrRsp));
execute(solrRsp);
HttpCacheHeaderUtil.checkHttpCachingVeto(solrRsp, resp, reqMethod);
Iterator<Map.Entry<String, String>> headers = solrRsp.httpHeaders();
while (headers.hasNext()) {
  Map.Entry<String, String> entry = headers.next();
  resp.addHeader(entry.getKey(), entry.getValue());
}
QueryResponseWriter responseWriter = 
core.getQueryResponseWriter(solrReq);
if (invalidStates != null) 
solrReq.getContext().put(CloudSolrClient.STATE_VERSION, invalidStates);
writeResponse(solrRsp, responseWriter, reqMethod);
  }
  return RETURN;
default: return action;
  }
} catch (Throwable ex) {
  sendError(ex);
  // walk the entire cause chain to search for an Error
  Throwable t = ex;
  while (t != null) {
if (t instanceof Error) {
  if (t != ex) {
log.error("An Error was wrapped in another exception - please 
report complete stacktrace on SOLR-6161", ex);
  }
  throw (Error) t;
}
t = t.getCause();
  }
  return RETURN;
} finally {
// I WOULD HAVE EXPECTED SolrRequestInfo.clearRequestInfo(); call here
  MDCLoggingContext.clear();
}

  }
{noformat}

So yes, this appears to be the same issue as SOLR-8657, but it involves another 
code path that needs to be addressed.

> Lots of "Previous SolrRequestInfo was not closed" in Solr log
> -
>
> Key: SOLR-9244
> URL: https://issues.apache.org/jira/browse/SOLR-9244
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.3.1
>Reporter: Gary Lee
>Priority: Minor
> Fix For: 5.3.1
>
>

[jira] [Commented] (SOLR-9244) Lots of "Previous SolrRequestInfo was not closed" in Solr log

2016-06-23 Thread Gary Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347349#comment-15347349
 ] 

Gary Lee commented on SOLR-9244:


SOLR-8657 appears to detail another path in which the same issue occurs - lots 
of those errors polluting the logs. In our case it seems that HttpSolrCall.call 
isn't properly clearing the SolrRequestInfo:
{noformat}
  public Action call() throws IOException {
MDCLoggingContext.reset();
MDCLoggingContext.setNode(cores);

if (cores == null) {
  sendError(503, "Server is shutting down or failed to initialize");
  return RETURN;
}

if (solrDispatchFilter.abortErrorMessage != null) {
  sendError(500, solrDispatchFilter.abortErrorMessage);
  return RETURN;
}

try {
  init();
...
SolrRequestInfo.setRequestInfo(new SolrRequestInfo(solrReq, 
solrRsp));
execute(solrRsp);
HttpCacheHeaderUtil.checkHttpCachingVeto(solrRsp, resp, reqMethod);
Iterator<Map.Entry<String, String>> headers = solrRsp.httpHeaders();
while (headers.hasNext()) {
  Map.Entry<String, String> entry = headers.next();
  resp.addHeader(entry.getKey(), entry.getValue());
}
QueryResponseWriter responseWriter = 
core.getQueryResponseWriter(solrReq);
if (invalidStates != null) 
solrReq.getContext().put(CloudSolrClient.STATE_VERSION, invalidStates);
writeResponse(solrRsp, responseWriter, reqMethod);
  }
  return RETURN;
default: return action;
  }
} catch (Throwable ex) {
  sendError(ex);
  // walk the entire cause chain to search for an Error
  Throwable t = ex;
  while (t != null) {
if (t instanceof Error) {
  if (t != ex) {
log.error("An Error was wrapped in another exception - please 
report complete stacktrace on SOLR-6161", ex);
  }
  throw (Error) t;
}
t = t.getCause();
  }
  return RETURN;
} finally {
// I WOULD HAVE EXPECTED SolrRequestInfo.clearRequestInfo(); call here
  MDCLoggingContext.clear();
}

  }
{noformat}

So yes, this appears to be the same issue as SOLR-8657, but it involves another 
code path that needs to be addressed.

> Lots of "Previous SolrRequestInfo was not closed" in Solr log
> -
>
> Key: SOLR-9244
> URL: https://issues.apache.org/jira/browse/SOLR-9244
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.3.1
>Reporter: Gary Lee
>Priority: Minor
> Fix For: 5.3.1
>
>
> After upgrading to Solr 5.3.1, we started seeing a lot of "Previous 
> SolrRequestInfo was not closed" ERROR level messages in the logs. Upon 
> further inspection, it appears this is a sanity check and not an error that 
> needs attention. It appears that the SolrRequestInfo isn't freed in one 
> particular path (no corresponding call to SolrRequestInfo.clearRequestInfo in 
> HttpSolrCall.call), which often leads to a lot of these messages.






[jira] [Created] (SOLR-9244) Lots of "Previous SolrRequestInfo was not closed" in Solr log

2016-06-22 Thread Gary Lee (JIRA)
Gary Lee created SOLR-9244:
--

 Summary: Lots of "Previous SolrRequestInfo was not closed" in Solr 
log
 Key: SOLR-9244
 URL: https://issues.apache.org/jira/browse/SOLR-9244
 Project: Solr
  Issue Type: Bug
  Components: Server
Affects Versions: 5.3.1
Reporter: Gary Lee
Priority: Minor
 Fix For: 5.3.1


After upgrading to Solr 5.3.1, we started seeing a lot of "Previous 
SolrRequestInfo was not closed" ERROR level messages in the logs. Upon further 
inspection, it appears this is a sanity check and not an error that needs 
attention. It appears that the SolrRequestInfo isn't freed in one particular 
path (no corresponding call to SolrRequestInfo.clearRequestInfo in 
HttpSolrCall.call), which often leads to a lot of these messages.
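
To show why the message appears, here is a minimal stand-alone illustration of 
the thread-local pattern behind the sanity check (not the actual 
SolrRequestInfo code, which tracks a richer object and logs through its own 
logger):
{noformat}
// Minimal illustration; class and method names here are made up.
final class RequestInfoHolder {
  private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

  static void set(String info) {
    if (CURRENT.get() != null) {
      // A previous request on this thread never called clear() - this is the
      // condition that produces "Previous SolrRequestInfo was not closed!".
      System.err.println("Previous request info was not closed! prev=" + CURRENT.get());
    }
    CURRENT.set(info);
  }

  static void clear() {
    CURRENT.remove();
  }
}
{noformat}
Because servlet threads are pooled and reused, a code path that sets the info 
without clearing it causes the warning on the next request handled by the same 
thread, which is consistent with seeing so many of these messages.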


