On Tue, 2012-12-18 at 21:49 +0200, Marko Asplund wrote:
> On 2012-12-16 15:04:25 GMT Oleg Kalnichevski wrote:
> 
> > This can happen if you have a pool with the number of concurrent
> > connections much smaller than the number of worker threads (which
> > causes high resource contention) combined with an aggressive timeout value.
> ...
> 
> > Closing the response content stream is perfectly sufficient. Just make
> > sure your code _always_ consumes response entities even for non-200
> > responses.
> 
> Thanks Oleg!
> 
> So, the fragment that submits the HTTP request and consumes the
> response could look something like this?
> 
> // executed repeatedly by multiple concurrent threads
> try {
>  HttpResponse res = hc.execute(rq);
>  StatusLine s = res.getStatusLine();
>  String c = EntityUtils.toString(res.getEntity()); // closes stream + releases connection
>  if (s.getStatusCode() != 200) {
>    throw new BackendException("error ...");
>  }
>  // everything ok, process content here ...
> } catch (IOException e) {
>  throw new BackendException("error ...");
> } finally {
>  // no HC related cleanup needed here
> }
> 
> Previously my code was explicitly closing the content stream in a
> finally block, but because the content stream was only opened for HTTP
> 200 responses, closing never took place in practice for non-200
> responses. I guess this could explain the "connection leakage".
> 
> 
> marko
> 

I would still do EntityUtils.consume(res.getEntity()) in the finally
block just in case.
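
The defensive shape of that pattern can be sketched as below. This is a
stdlib-only stand-in, not real HttpClient code: TrackingStream and the
local consume() helper are hypothetical substitutes for an HttpEntity's
content stream and for HttpClient's EntityUtils.consume(), so the
drain-then-close-in-finally behaviour can be shown self-contained.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

class ConsumeInFinallyDemo {

    // Hypothetical stand-in for an HttpEntity content stream;
    // records whether close() was ever called.
    static class TrackingStream extends ByteArrayInputStream {
        boolean closed = false;
        TrackingStream(byte[] buf) { super(buf); }
        @Override public void close() throws IOException {
            closed = true;
            super.close();
        }
    }

    // Stand-in for EntityUtils.consume(): drain any remaining
    // bytes (so a pooled connection could be reused), then close.
    static void consume(InputStream content) throws IOException {
        try {
            byte[] skip = new byte[1024];
            while (content.read(skip) != -1) {
                // drain
            }
        } finally {
            content.close();
        }
    }

    // Mirrors Marko's fragment: non-200 surfaces as an exception,
    // but the entity is consumed in finally on both paths.
    static boolean handleResponse(int statusCode, TrackingStream content)
            throws IOException {
        try {
            if (statusCode != 200) {
                throw new IOException("backend error, status " + statusCode);
            }
            // everything ok, process content here ...
            return true;
        } finally {
            consume(content); // runs on success and error paths alike
        }
    }

    public static void main(String[] args) throws IOException {
        TrackingStream ok = new TrackingStream("payload".getBytes());
        handleResponse(200, ok);
        System.out.println("closed after 200: " + ok.closed);

        TrackingStream err = new TrackingStream("error body".getBytes());
        try {
            handleResponse(500, err);
        } catch (IOException expected) {
            // non-200 reported to the caller
        }
        System.out.println("closed after 500: " + err.closed);
    }
}
```

The point of the finally block is exactly the failure case Marko hit:
even when the status check throws before the body is read, the entity
still gets drained and closed, so the connection goes back to the pool.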

Oleg



---------------------------------------------------------------------
To unsubscribe, e-mail: httpclient-users-unsubscr...@hc.apache.org
For additional commands, e-mail: httpclient-users-h...@hc.apache.org