Hi Oleg,

Still dealing with these stuck connections. I was able to get traces from HttpClient5,
and we can see all connections are busy:
08:12:56.031    ex-00E1DFB8: acquiring connection with route 
{}->http://10.15.15.14:8082
08:12:56.087    Connection request: [route: {}->http://10.15.15.14:8083][total 
kept alive: 0; route allocated: 108 of 1500; total allocated: 1500 of 1500]

Previously this happened because we were not expanding the buffer capacity in the
'updateCapacity' method. After your clarification, that was fixed:
@Override
public void updateCapacity(CapacityChannel cc) {
    try {
        cc.update(this.capacityToPerform);
    } catch (Exception e) {
        GlobalData.getInstance().getLoggerService().flushCluster("log.internal.error.server.http",
            new Object[] { this.objCall.getNombre(), this.capacityToPerform, Utils.stackToString(e) }, true);
    }
}
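
As a side note, since 'updateCapacity' just forwards this.capacityToPerform, we have also
drafted a defensive variant (only a sketch, not deployed) that falls back to the 32768
default whenever the value is still 0, so we can never signal a zero-byte window to the
reactor:

@Override
public void updateCapacity(CapacityChannel cc) {
    // Fall back to the default buffer size if capacityToPerform was never set,
    // so the exchange cannot stall waiting for capacity that is never granted.
    int increment = this.capacityToPerform > 0
            ? this.capacityToPerform
            : GlobalData.getInstance().getHttpService().getBufferInInterface();   // 32768
    try {
        cc.update(increment);
    } catch (Exception e) {
        GlobalData.getInstance().getLoggerService().flushCluster("log.internal.error.server.http",
            new Object[] { this.objCall.getNombre(), increment, Utils.stackToString(e) }, true);
    }
}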

But now, after several days of running, the pool becomes full again, and the only way 
to solve the issue is to restart it (stopping the http instance and starting it again 
from our app). This happens at 08:14:03.225 in the http log we have in this repository: 
https://github.com/joanbalaguero/HttpClient.git

08:14:03.054    Connection request: [route: {}->http://10.15.15.53:8083][total 
kept alive: 0; route allocated: 107 of 1500; total allocated: 1500 of 1500]
08:14:03.080    Connection request: [route: {}->http://10.15.15.53:8080][total 
kept alive: 0; route allocated: 108 of 1500; total allocated: 1500 of 1500]
08:14:03.107    Connection request: [route: {}->http://10.15.15.53:8082][total 
kept alive: 0; route allocated: 107 of 1500; total allocated: 1500 of 1500]
08:14:03.192    ex-00E1E1D1: preparing request execution
08:14:03.192    Target auth state: UNCHALLENGED
08:14:03.192    Proxy auth state: UNCHALLENGED
08:14:03.192    ex-00E1E1D1: acquiring connection with route 
{}->http://10.15.15.24:8080
08:14:03.197    Shutdown connection pool GRACEFUL
08:14:03.224    Connection request: [route: {}->http://10.15.15.24:8080][total 
kept alive: 0; route allocated: 10 of 1500; total allocated: 1500 of 1500]
08:14:03.225    i/o-0041DB56: Shutdown connection GRACEFUL
08:14:03.226    i/o-0041DB37: Shutdown connection GRACEFUL
08:14:03.226    i/o-0041DB19: Shutdown connection GRACEFUL
08:14:03.242    Connection pool shut down
08:14:03.242    Shutdown GRACEFUL
08:14:03.276    ex-00E1E1D2: preparing request execution
08:14:03.276    Target auth state: UNCHALLENGED
08:14:03.276    Proxy auth state: UNCHALLENGED
08:14:03.276    ex-00E1E1D2: acquiring connection with route 
{}->http://10.15.15.53:8081
08:14:03.277    Connection request: [route: {}->http://10.15.15.53:8081][total 
kept alive: 0; route allocated: 0 of 1500; total allocated: 0 of 1500]
08:14:03.277    Connection leased: [route: {}->http://10.15.15.53:8081][total 
kept alive: 0; route allocated: 1 of 1500; total allocated: 1 of 1500]
08:14:03.277    ep-00DFEDB0: acquired null
08:14:03.277    ex-00E1E1D2: connection acquired
08:14:03.279    ep-00DFEDB0: connected i/o-00423C70
08:14:03.279    Connected to target
08:14:03.279    Route fully established

The point is that the variable 'this.capacityToPerform' we use to enlarge the buffer is 
only set in the 'consumeResponse' method, when we get a non-null EntityDetails (you can 
see it in ServerHttpResponse.java):
@Override
public void consumeResponse(HttpResponse response, EntityDetails entityDetails,
        HttpContext context, FutureCallback<Void> fcback) throws IOException, HttpException
{
    ( . . . . )
    if (entityDetails != null)
    {
        // GlobalData.getInstance().getHttpService().getBufferInInterface() is 32768
        this.capacityToPerform = entityDetails.getContentLength() > 0
                ? (int) entityDetails.getContentLength()
                : GlobalData.getInstance().getHttpService().getBufferInInterface();
        this.vbaos = new VentusByteArrayOutputStream(this.capacityToPerform);
    }
    else
    {
        fcback.completed(this.getResult());
        this.submitTask(null);
    }
}
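
For context, the remaining callbacks of a consumer like ours look roughly like this
(simplified sketch, not our exact code; we keep 'fcback' in a field for the entity case):

@Override
public void consume(ByteBuffer src) throws IOException {
    // Append the received bytes to the buffer created in consumeResponse.
    byte[] chunk = new byte[src.remaining()];
    src.get(chunk);
    this.vbaos.write(chunk, 0, chunk.length);
}

@Override
public void streamEnd(List<? extends Header> trailers) throws HttpException, IOException {
    // Only here, once the whole body has been received, do we complete the
    // callback and hand control back to our app.
    this.fcback.completed(this.getResult());
    this.submitTask(this.vbaos);
}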

So our guess (maybe wrong) is:
- There is some case where the 'updateCapacity' method gets called without 
'consumeResponse' having been called first with a non-null entity, so 
this.capacityToPerform is still 0.
- We are still doing something wrong when the 'EntityDetails' is null. Before this fix 
we did nothing in that case, so control never returned to our app. Now we call 
'this.submitTask' and thus return control to our app (we are not sure if completing the 
FutureCallback is necessary); see the note after this list.
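
Regarding the second point: from what we can tell from the library's own consumer
implementations (e.g. AbstractAsyncResponseConsumer), when EntityDetails is null the
consumer is expected to complete the result callback right away, because consume/streamEnd
will never be called for that exchange. So the branch we have now would be something like
this (sketch of what we believe is expected; whether releaseResources() must also be
called there is exactly what we are not sure about):

else
{
    // No entity: complete the exchange now, otherwise the connection is never released.
    fcback.completed(this.getResult());
    this.submitTask(null);            // hand control back to our app
    // this.releaseResources();       // <-- needed here too? (assumption, not confirmed)
}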

With the classic blocking model one has to release the connection by fully consuming the 
response, but with this asynchronous model we don't explicitly close the HTTP connections, 
so we are not able to see where we are leaking them.
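
Just to illustrate what we mean, in the classic model the release point is explicit,
something like this (generic example, not our code):

// Classic blocking model: the connection goes back to the pool only once
// the response entity has been fully consumed (or the response is closed).
try (CloseableHttpResponse response = httpClient.execute(request)) {
    HttpEntity entity = response.getEntity();
    // ... process the entity ...
    EntityUtils.consume(entity);   // makes the connection reusable
}

In the async model our understanding is that the equivalent release happens automatically
when the exchange completes (result callback completed / streamEnd reached), which is why
we suspect one of the two cases above.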

We also uploaded a histogram showing only the live "org.apache.hc" instances at the 
moment the pool was exhausted.

Any help would be really appreciated.

Thanks,

Joan.


-----Original Message-----
From: Oleg Kalnichevski <[email protected]> 
Sent: Thursday, May 30, 2019 11:40 AM
To: HttpClient User Discussion <[email protected]>
Subject: Re: Response timeout issue

On Wed, 2019-05-29 at 18:56 +0200, Joan grupoventus wrote:
> Hi Oleg,
> 
> I'm still trying to sort this out with our client ... we are not able to 
> reproduce it in our environment.
> 
> In the meantime we have come across another issue. I explain it just to 
> know whether you are aware of anyone else experiencing something similar 
> with HttpClient5. After 7 days of working correctly, the http pool 
> suddenly becomes exhausted and all the asynchronous tasks hang until the 
> container asynchronous timeout is reached. What we see is that the 
> request is sent to the backend but no response is obtained, and finally 
> the asynchronous task finishes due to the container timeout after 150s 
> ... no response timeout is thrown (it is set to 90s).
> 

Hi Joan

No, I am not aware of anyone else experiencing similar issues.

Oleg


