Re: [POLL] Minimal JRE level as of HttpClient 4.4
[ ] keep Java 1.5 compatibility: no good reason to upgrade.
[ ] upgrade to Java 1.6: one step at a time.
[X] upgrade to Java 1.7: new features are more important.

No point in "upgrading" to a Java version that's been EOL for 6 months now. No need to slow HC development down by imposing restrictions that are relevant only for a small number of HC users. People who need Java 6 compatibility can use an older HC version, or maybe even create their own fork and backport new features from the mainline.

marko
Re: Intercepting and responding to requests on client-side
On Mon, 03 Jun 2013 08:02:31 GMT Oleg Kalnichevski wrote:
> I had something different in mind (see my pull request) but if that
> works then it is certainly not wrong.
>
> https://github.com/marko-asplund/tech-protos/pull/1

Thanks a lot Oleg! I did some further testing and it turned out that my version of the code was actually trying to open a connection to the redirect target before the execute() method got a chance to run. This is a problem because the URLs refer to non-existent targets in my case. Your solution, on the other hand, did not have this problem.

marko

---------------------------------------------------------------------
To unsubscribe, e-mail: httpclient-users-unsubscr...@hc.apache.org
For additional commands, e-mail: httpclient-users-h...@hc.apache.org
Re: Intercepting and responding to requests on client-side
Oleg wrote:
> Marko wrote:
> > I noticed that one problem with this approach was that I'm not able to
> > intercept redirected requests. HC seems to be using DefaultRequestDirector
> > for executing the redirected requests instead of the custom executor.
> >
> > Does using the new execution chain APIs solve this issue?
>
> It should.

ok. I did some experimenting with this using HC 4.3b1, and it does appear to work. My proto code is available here:

https://github.com/marko-asplund/tech-protos/blob/master/hc-proto/src/main/java/fi/markoa/proto/hc/RequestInterceptionDemo.java

Is this the correct way to use the new API? Are any big changes expected in HC 4.3 beta before the final release? Any estimates for the HC 4.3 final release date?

As a side note, try-with-resources doesn't seem to work well with a set of related resources of which some implement AutoCloseable and some don't. For example, it doesn't seem practical to use try-with-resources with CloseableHttpClient, CloseableHttpResponse and HttpEntity: if the first two were closed automatically while the last one was closed in a finally block, the resources would be closed in the wrong order.

marko
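The ordering concern above can be illustrated with plain stand-in classes, no HttpClient needed (Tracked and CloseOrderDemo are hypothetical names made up for this sketch): resources in a try-with-resources header are closed in reverse declaration order, and a finally block runs only after all of them have already been closed.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of try-with-resources close ordering using hypothetical
// AutoCloseable stand-ins (NOT the HttpClient API): the try header
// closes resources in reverse declaration order, and the finally
// block runs only after all header resources are closed.
public class CloseOrderDemo {
    static final List<String> closed = new ArrayList<>();

    static class Tracked implements AutoCloseable {
        final String name;
        Tracked(String name) { this.name = name; }
        @Override public void close() { closed.add(name); }
    }

    public static void main(String[] args) {
        try (Tracked client = new Tracked("client");
             Tracked response = new Tracked("response")) {
            // work with the resources here
        } finally {
            // an entity cleaned up here runs last, even though it
            // ought to be consumed before the response is closed
            closed.add("entity");
        }
        System.out.println(closed); // [response, client, entity]
    }
}
```

So if HttpEntity cleanup lives in a finally block while the client and response sit in the try header, the entity is handled last, which is the wrong order for HC.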
Re: Intercepting and responding to requests on client-side
Oleg wrote:
> Please upgrade to 4.3 and have a look at the new execution chain APIs.
> You can use the caching exec as an example of how HttpClient can be made
> to respond with a response without hitting the target server.

thanks Oleg!

I did some experimenting earlier with HC 4.2 using the following approach:
- create a custom HttpClient subclass with an overridden createRequestExecutor() method
- make createRequestExecutor() return a custom HttpRequestExecutor with an overridden execute() method

I noticed that one problem with this approach was that I'm not able to intercept redirected requests. HC seems to be using DefaultRequestDirector for executing the redirected requests instead of the custom executor.

Does using the new execution chain APIs solve this issue?

marko
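To make the "respond without hitting the target server" idea concrete, here is a toy chain-of-responsibility model of an execution chain. All types here (ExecChain, ExecHandler, ExecChainDemo) are made up for illustration and are NOT HttpClient's actual API; the point is only that each chain element can either delegate onward or short-circuit with a locally synthesized response, which is essentially what the caching exec does.

```java
import java.util.List;

// Toy model of an execution chain (hypothetical types, NOT the
// HttpClient API): each handler can delegate to the rest of the
// chain or short-circuit with a synthesized response.
public class ExecChainDemo {

    interface ExecChain {
        String proceed(String request);
    }

    interface ExecHandler {
        String execute(String request, ExecChain next);
    }

    // Wrap the terminal "network" element in the handlers, last first,
    // so handlers run in list order.
    static ExecChain build(List<ExecHandler> handlers, ExecChain terminal) {
        ExecChain chain = terminal;
        for (int i = handlers.size() - 1; i >= 0; i--) {
            ExecHandler h = handlers.get(i);
            ExecChain next = chain;
            chain = req -> h.execute(req, next);
        }
        return chain;
    }

    public static void main(String[] args) {
        // terminal element: would actually hit the network
        ExecChain terminal = req -> "200 from network for " + req;

        // interceptor that answers locally for a chosen endpoint
        ExecHandler interceptor = (req, next) ->
            req.contains("local-endpoint")
                ? "200 synthesized locally for " + req
                : next.proceed(req);

        ExecChain chain = build(List.of(interceptor), terminal);
        System.out.println(chain.proceed("GET /local-endpoint")); // synthesized
        System.out.println(chain.proceed("GET /other"));          // from network
    }
}
```

Because redirects would also be driven through the same chain, an interceptor placed this way sees redirected requests too, which the overridden-executor approach in 4.2 missed.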
Intercepting and responding to requests on client-side
Hi,

I'd like to be able to intercept requests issued by HttpClient that target certain endpoints, and implement a custom handler for those requests. The handler should be able to generate a response for the requests on the client side without actually connecting to a remote server. Is this possible with HttpClient?

I've looked at the HttpRequestInterceptor and HttpResponseInterceptor APIs, but these interceptors don't seem to be able to generate the actual response.

marko
Re: ConnectionPoolTimeoutException with multi-threaded HttpClient usage
On 2012-12-16 15:04:25 GMT Oleg Kalnichevski wrote:
> This can happen if you have a pool with the number of concurrent
> connections much smaller than the number of work threads (which causes a
> high resource contention) combined with an aggressive timeout value.
...
> Closing the response content stream is perfectly sufficient. Just make
> sure your code _always_ consumes response entities even for non-200
> responses.

Thanks Oleg! So, the fragment that submits the HTTP request and consumes the response could look something like this?

  // executed repeatedly by multiple concurrent threads
  try {
    HttpResponse res = hc.execute(rq);
    StatusLine s = res.getStatusLine();
    String c = EntityUtils.toString(res.getEntity()); // closes stream + releases connection
    if (s.getStatusCode() != 200) {
      throw new BackendException("error ...");
    }
    // everything ok, process content here ...
  } catch (IOException e) {
    throw new BackendException("error ...");
  } finally {
    // no HC related cleanup needed here
  }

Previously my code was explicitly closing the content stream in a finally block, but because the content stream was only opened for HTTP 200 responses, closing never took place in practice for non-200 responses. I guess this could explain the "connection leakage".

marko
ConnectionPoolTimeoutException with multi-threaded HttpClient usage
Hi,

I'm having problems using HttpClient in a multi-threaded environment. When HttpClient.execute is called I occasionally get the following error, even when there probably should be connections available in the pool.

  org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
    at org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(PoolingClientConnectionManager.java:232) [httpclient-4.2.2.jar:4.2.2]
    at org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(PoolingClientConnectionManager.java:199) [httpclient-4.2.2.jar:4.2.2]
    at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:455) [httpclient-4.2.2.jar:4.2.2]
    at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906) [httpclient-4.2.2.jar:4.2.2]
    at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805) [httpclient-4.2.2.jar:4.2.2]
    at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784) [httpclient-4.2.2.jar:4.2.2]
    ...

Should I explicitly release the connection after each HTTP request is executed? Is there something else that should be done to clean up after each request? The HC tutorial recommends passing a per-thread HttpContext object to HttpClient.execute, but is this required? What kind of state is actually stored in the HttpContext?

Below is a simplified version of the code.

  // one-time initialization
  PoolingClientConnectionManager cm = new PoolingClientConnectionManager();
  cm.setDefaultMaxPerRoute(20);
  HttpClient httpClient = new DefaultHttpClient(cm);

  // executed repeatedly by multiple concurrent threads
  HttpGet rq = new HttpGet(uri);
  InputStream is = null;
  try {
    HttpResponse res = httpClient.execute(rq);
    // ... 
    is = res.getEntity().getContent();
  } finally {
    is.close();
  }

  // one-time disposal
  httpClient.getConnectionManager().shutdown();

thanks,
marko
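The failure mode diagnosed later in this thread (a leased connection that is never released eventually starves the pool) can be modeled with a plain java.util.concurrent.Semaphore standing in for the connection pool. This is a sketch, not HttpClient code; PoolLeakDemo and the permit counts are made up for illustration.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Models connection leakage with a Semaphore standing in for the pool
// (a sketch, NOT HttpClient code): a permit that is acquired but never
// released makes later lease attempts time out, analogous to
// ConnectionPoolTimeoutException under an aggressive timeout.
public class PoolLeakDemo {
    public static void main(String[] args) throws InterruptedException {
        Semaphore pool = new Semaphore(1);        // "pool" of one connection

        pool.acquire();                           // lease a connection...
        // ...and "forget" to release it, e.g. a response entity that
        // was never consumed for a non-200 status

        // the next lease attempt with an aggressive timeout fails
        boolean leased = pool.tryAcquire(50, TimeUnit.MILLISECONDS);
        System.out.println("leased: " + leased);  // leased: false

        pool.release();                           // proper cleanup...
        leased = pool.tryAcquire(50, TimeUnit.MILLISECONDS);
        System.out.println("leased: " + leased);  // leased: true
    }
}
```

The analogy also shows why a pool much smaller than the worker thread count plus a short timeout makes the symptom intermittent: whether a lease succeeds depends on how many permits happen to be out at that moment.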