RE: java.util.concurrent.TimeoutException with the async client
Hi Oleg,

> > 1. Why, exactly in the same load conditions, is a pool with 600 connections
> > more than enough for the blocking client and not enough for the async one?
>
> I cannot answer this question unless I can analyze and re-run the benchmark
> used to load test both clients.

--> OK, I'll retest it.

> > 2. Regarding the 'ConnectionRequestTimeout', you told me that the reason was
> > probably due to lock contention. Is this lock contention provoked by this
> > timeout of only 1 ms? Do you always recommend a higher value, as Stefan said
> > previously?
>
> What is the point of having such a low request timeout? What is it exactly
> you intend to achieve by making requests fail if the pool manager fails to
> lease a connection after 1 ms?

--> Our software is a proxy. Some of our clients have a requirement that no request may take more than, for example, 10 ms. If just getting a connection from the pool can take 10 ms, that requirement cannot be met. So in certain installations we do not allow waiting at all: we simply prefer to fail fast and return an error to the client like "Maximum number of simultaneous connections exceeded".

-
To unsubscribe, e-mail: httpclient-users-unsubscr...@hc.apache.org
For additional commands, e-mail: httpclient-users-h...@hc.apache.org
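[Editorial note: in HttpClient 4.x this fail-fast behaviour is driven by `RequestConfig.custom().setConnectionRequestTimeout(...)`. The lease semantics Joan describes can be modelled with JDK primitives alone; the sketch below is a hypothetical illustration of fail-fast leasing, not HttpClient's actual pool code.]

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Hypothetical model of a fail-fast connection lease: each permit stands
// for a free pooled connection. If no permit becomes available within the
// timeout, the caller gives up immediately instead of queueing.
public class FailFastLease {
    private final Semaphore permits;

    public FailFastLease(int maxConnections) {
        this.permits = new Semaphore(maxConnections);
    }

    // Returns true if a "connection" was leased within timeoutMillis.
    public boolean tryLease(long timeoutMillis) throws InterruptedException {
        return permits.tryAcquire(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public void release() {
        permits.release();
    }

    public static void main(String[] args) throws InterruptedException {
        FailFastLease pool = new FailFastLease(1);
        System.out.println(pool.tryLease(1)); // true: one permit free
        System.out.println(pool.tryLease(1)); // false: pool exhausted, fail fast
        pool.release();
        System.out.println(pool.tryLease(1)); // true again after release
    }
}
```

With a 1 ms timeout, any transient contention on the pool lock can surface as a lease failure even when connections are free, which is consistent with Oleg's lock-contention remark.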
Re: java.util.concurrent.TimeoutException with the async client
On Thu, 2016-10-27 at 15:13 +0200, Joan Balagueró wrote:
> Hi Oleg,
>
> Sorry, yes, reviewing emails I see I already asked about TimeoutException.
> I've been reading the HttpCore tutorial again because I think I had a
> misunderstanding about how the async client works.
>
> In the previous case we had about 1000 requests per second, and the average
> response time was about 20 ms. So Stefan proposed a pool of 50 connections
> with a pool timeout of 10-20 ms.
>
> In the current case we're having no more than 100 requests per second, but
> the average response time is 1 second.
>
> When a connection is established and no packets are being received, the
> I/O reactor dispatch thread is released to do other work, but the HTTP
> connection stays established. So in the current case, where I have less
> concurrency but much longer response times, I should set up a higher
> maximum number of connections. Right?
>
> If so, just two questions:
>
> 1. Why, exactly in the same load conditions, is a pool with 600 connections
> more than enough for the blocking client and not enough for the async one?

I cannot answer this question unless I can analyze and re-run the benchmark used to load test both clients.

> 2. Regarding the 'ConnectionRequestTimeout', you told me that the reason was
> probably due to lock contention. Is this lock contention provoked by this
> timeout of only 1 ms? Do you always recommend a higher value, as Stefan said
> previously?

What is the point of having such a low request timeout? What is it exactly you intend to achieve by making requests fail if the pool manager fails to lease a connection after 1 ms?

Oleg
RE: java.util.concurrent.TimeoutException with the async client
Hi Oleg,

Sorry, yes, reviewing emails I see I already asked about TimeoutException. I've been reading the HttpCore tutorial again because I think I had a misunderstanding about how the async client works.

In the previous case we had about 1000 requests per second, and the average response time was about 20 ms. So Stefan proposed a pool of 50 connections with a pool timeout of 10-20 ms.

In the current case we're having no more than 100 requests per second, but the average response time is 1 second.

When a connection is established and no packets are being received, the I/O reactor dispatch thread is released to do other work, but the HTTP connection stays established. So in the current case, where I have less concurrency but much longer response times, I should set up a higher maximum number of connections. Right?

If so, just two questions:

1. Why, exactly in the same load conditions, is a pool with 600 connections more than enough for the blocking client and not enough for the async one?

2. Regarding the 'ConnectionRequestTimeout', you told me that the reason was probably due to lock contention. Is this lock contention provoked by this timeout of only 1 ms? Do you always recommend a higher value, as Stefan said previously?

Thanks in advance,
Joan.
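[Editorial note: the sizing intuition in the message above is Little's law: average concurrency ≈ arrival rate × average response time. A minimal sketch of the arithmetic for the two cases quoted in the thread (class and method names are illustrative):]

```java
public class PoolSizing {
    // Little's law: average number of in-flight requests
    // = arrival rate (req/s) * average response time (s).
    // A connection pool needs at least this many connections,
    // plus headroom for bursts.
    static long requiredConnections(double requestsPerSecond, double avgResponseSeconds) {
        return Math.round(requestsPerSecond * avgResponseSeconds);
    }

    public static void main(String[] args) {
        // Previous case: 1000 req/s at ~20 ms -> ~20 concurrent (pool of 50 is ample).
        System.out.println(requiredConnections(1000, 0.020)); // prints 20
        // Current case: 100 req/s at ~1 s -> ~100 concurrent.
        System.out.println(requiredConnections(100, 1.0));    // prints 100
    }
}
```

By this estimate a pool of 600 connections should comfortably cover ~100 concurrent requests, which is why the exhaustion in the async case points at something other than raw sizing.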
-----Original Message-----
From: Oleg Kalnichevski [mailto:ol...@apache.org]
Sent: Thursday, October 27, 2016 13:48
To: HttpClient User Discussion
Subject: Re: java.util.concurrent.TimeoutException with the async client

On Thu, 2016-10-27 at 12:23 +0200, Joan Balagueró wrote:
> Hello,
>
> After working in production with the async client (instead of the
> blocking one) everything seems OK, except that now I can see this error
> intermittently:
>
> java.util.concurrent.TimeoutException
> [stack trace snipped]
>
> It seems that the pool is exhausted at this point.
>
> The setup of both pools is similar. MaxConnections is set to 600,
> 'ConnectionRequestTimeout' is set to 1 (because I don't want to wait
> for a pooled connection to become free), an IdleConnectionsHandler, etc.

This looks wrong to me. I think I already explained that.

> I'm not sure if some connections are not being closed. With the
> blocking client one manually released the connection to the pool, but
> with the async one I'm not closing connections explicitly (so I
> suppose this is done automatically by the async client). [...]
Re: java.util.concurrent.TimeoutException with the async client
On Thu, 2016-10-27 at 12:23 +0200, Joan Balagueró wrote:
> Hello,
>
> After working in production with the async client (instead of the blocking
> one) everything seems OK, except that now I can see this error
> intermittently:
>
> java.util.concurrent.TimeoutException
> [stack trace snipped]
>
> It seems that the pool is exhausted at this point.
>
> The setup of both pools is similar. MaxConnections is set to 600,
> 'ConnectionRequestTimeout' is set to 1 (because I don't want to wait for a
> pooled connection to become free), an IdleConnectionsHandler, etc.

This looks wrong to me. I think I already explained that.

> I'm not sure if some connections are not being closed. With the blocking
> client one manually released the connection to the pool, but with the async
> one I'm not closing connections explicitly (so I suppose this is done
> automatically by the async client).
>
> The point is this error was not present with the blocking client. If I'm not
> wrong, with the async client it should be harder to exhaust the pool,
> because every time the server is not sending TCP packets the thread is
> released and used for other connections.
>
> Well, whatever the reason, I'm going to enable HTTP traces. Since my client
> only permits updates once a week, I want to be sure that I'm enabling the
> HTTP traces correctly (otherwise I will have to wait one more week).
>
> With the blocking client I enabled traces by putting these lines in the
> log4j.xml file (and it worked). Is the same procedure valid with the async
> client?
>
> [log4j.xml snippet stripped by the list archive]

Logging configuration should be the same as for blocking HC.

Oleg
RE: java.util.concurrent.TimeoutException with the async client
Hello,

Sorry, I didn't paste the last part of the XML:

[XML snippet stripped by the list archive]

Thanks,
Joan.

-----Original Message-----
From: Joan Balagueró [mailto:joan.balagu...@grupoventus.com]
Sent: Thursday, October 27, 2016 12:23
To: httpclient-users@hc.apache.org
Subject: java.util.concurrent.TimeoutException with the async client

Hello,

After working in production with the async client (instead of the blocking one) everything seems OK, except that now I can see this error intermittently:

java.util.concurrent.TimeoutException
[stack trace snipped]

It seems that the pool is exhausted at this point.

The setup of both pools is similar. MaxConnections is set to 600, 'ConnectionRequestTimeout' is set to 1 (because I don't want to wait for a pooled connection to become free), an IdleConnectionsHandler, etc.

I'm not sure if some connections are not being closed. With the blocking client one manually released the connection to the pool, but with the async one I'm not closing connections explicitly (so I suppose this is done automatically by the async client).

The point is this error was not present with the blocking client. If I'm not wrong, with the async client it should be harder to exhaust the pool, because every time the server is not sending TCP packets the thread is released and used for other connections.

Well, whatever the reason, I'm going to enable HTTP traces. Since my client only permits updates once a week, I want to be sure that I'm enabling the HTTP traces correctly (otherwise I will have to wait one more week).

With the blocking client I enabled traces by putting these lines in the log4j.xml file (and it worked). Is the same procedure valid with the async client?

[log4j.xml snippet stripped by the list archive]

Thanks,
Joan.
java.util.concurrent.TimeoutException with the async client
Hello,

After working in production with the async client (instead of the blocking one) everything seems OK, except that now I can see this error intermittently:

java.util.concurrent.TimeoutException
    at org.apache.http.nio.pool.AbstractNIOConnPool.processPendingRequest(AbstractNIOConnPool.java:364)
    at org.apache.http.nio.pool.AbstractNIOConnPool.processNextPendingRequest(AbstractNIOConnPool.java:344)
    at org.apache.http.nio.pool.AbstractNIOConnPool.release(AbstractNIOConnPool.java:318)
    at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.releaseConnection(PoolingNHttpClientConnectionManager.java:303)
    at org.apache.http.impl.nio.client.AbstractClientExchangeHandler.releaseConnection(AbstractClientExchangeHandler.java:239)
    at org.apache.http.impl.nio.client.MainClientExec.responseCompleted(MainClientExec.java:387)
    at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:168)
    at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:436)
    at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:326)
    at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
    at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
    at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
    at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
    at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
    at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
    at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:588)
    at java.lang.Thread.run(Thread.java:745)

It seems that the pool is exhausted at this point.

The setup of both pools is similar. MaxConnections is set to 600, 'ConnectionRequestTimeout' is set to 1 (because I don't want to wait for a pooled connection to become free), an IdleConnectionsHandler, etc.

I'm not sure if some connections are not being closed. With the blocking client one manually released the connection to the pool, but with the async one I'm not closing connections explicitly (so I suppose this is done automatically by the async client).

The point is this error was not present with the blocking client. If I'm not wrong, with the async client it should be harder to exhaust the pool, because every time the server is not sending TCP packets the thread is released and used for other connections.

Well, whatever the reason, I'm going to enable HTTP traces. Since my client only permits updates once a week, I want to be sure that I'm enabling the HTTP traces correctly (otherwise I will have to wait one more week).

With the blocking client I enabled traces by putting these lines in the log4j.xml file (and it worked). Is the same procedure valid with the async client?

[log4j.xml snippet stripped by the list archive]

Thanks,
Joan.
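[Editorial note: the log4j.xml snippet was stripped by the archive. For reference, HttpClient 4.x wire logging is normally enabled by raising the `org.apache.http` and `org.apache.http.wire` logger categories to DEBUG; a minimal log4j 1.2 fragment might look like the sketch below. The appender name and conversion pattern are illustrative, not recovered from the thread, and wire logging is extremely verbose, so it should only be on while reproducing the problem.]

```xml
<!-- Hypothetical log4j 1.2 fragment; logger names follow the
     HttpClient 4.x logging conventions. -->
<appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d %-5p [%t] %c - %m%n"/>
  </layout>
</appender>

<!-- Context/header logging for all HttpClient components. -->
<logger name="org.apache.http">
  <level value="DEBUG"/>
</logger>

<!-- Full wire logging: every byte sent and received. -->
<logger name="org.apache.http.wire">
  <level value="DEBUG"/>
</logger>

<root>
  <priority value="INFO"/>
  <appender-ref ref="CONSOLE"/>
</root>
```

As Oleg notes elsewhere in the thread, the same configuration applies to the async client, since both share the same logging categories.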