> Try reducing the number of concurrent connections from 20k to, say, 2k 
> and you may be surprised to find out that a smaller number of 
> connections can actually chew through the same workload faster. If the 

Well... no. :) We have an experimental setup with a local proxy generating a
"fake web" that we use to check the speed of the pipeline independently of
the network conditions.

With 1000 parallel DefaultHttpClient instances (different instances, not one
instance with pooling) we download >10000 pages/s.
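To make the setup concrete, here is a minimal self-contained sketch of this kind of benchmark using only the JDK (a com.sun.net.httpserver "fake web" plus a fixed pool of independent blocking workers). It is not our actual harness, and the class/method names and worker counts are illustrative only:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class BlockingBench {
    // Runs `workers` independent blocking fetch loops against a local fake
    // web for `seconds`, returning the total number of pages downloaded.
    static long run(int workers, int seconds) throws Exception {
        byte[] body = "<html>x</html>".getBytes();
        // Tiny local "fake web": every request returns the same small page.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", ex -> {
            ex.sendResponseHeaders(200, body.length);
            ex.getResponseBody().write(body);
            ex.close();
        });
        server.setExecutor(Executors.newFixedThreadPool(4));
        server.start();
        String url = "http://127.0.0.1:" + server.getAddress().getPort() + "/";

        AtomicLong pages = new AtomicLong();
        long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(seconds);
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                byte[] buf = new byte[4096];
                try {
                    // Each worker acts as its own blocking "client instance".
                    while (System.nanoTime() < deadline) {
                        HttpURLConnection c =
                            (HttpURLConnection) new URL(url).openConnection();
                        try (InputStream in = c.getInputStream()) {
                            while (in.read(buf) != -1) { /* drain the page */ }
                        }
                        pages.incrementAndGet();
                    }
                } catch (Exception ignored) { }
            });
        }
        pool.shutdown();
        pool.awaitTermination(seconds + 10L, TimeUnit.SECONDS);
        server.stop(0);
        return pages.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("pages/s ~ " + run(50, 2) / 2);
    }
}
```

The real harness uses 1000 DefaultHttpClient instances instead of HttpURLConnection workers, but the structure (no shared connection pool, one blocking loop per worker) is the same.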

With 1000 parallel requests on a single DefaultHttpAsyncClient we download >500
pages/s, but as soon as we increase the number of parallel requests the
throughput drops to ~100 pages/s, which makes the client unusable for us at
the moment.
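For comparison, the async side of the test keeps a fixed number of requests in flight on a single shared client. The sketch below illustrates the same pattern with the JDK 11 java.net.http.HttpClient rather than DefaultHttpAsyncClient (so it stays self-contained); the class name, semaphore cap, and durations are illustrative assumptions, not our actual code:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class AsyncBench {
    // Keeps up to `inFlight` requests outstanding on one async client for
    // `seconds`, returning the total number of pages downloaded.
    static long run(int inFlight, int seconds) throws Exception {
        byte[] body = "<html>x</html>".getBytes();
        // Same tiny local "fake web" as in the blocking benchmark.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", ex -> {
            ex.sendResponseHeaders(200, body.length);
            ex.getResponseBody().write(body);
            ex.close();
        });
        server.setExecutor(Executors.newFixedThreadPool(4));
        server.start();
        URI uri = URI.create(
            "http://127.0.0.1:" + server.getAddress().getPort() + "/");

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(uri).build();
        Semaphore permits = new Semaphore(inFlight); // caps in-flight requests
        AtomicLong pages = new AtomicLong();
        long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(seconds);

        while (System.nanoTime() < deadline) {
            permits.acquire();
            client.sendAsync(req, HttpResponse.BodyHandlers.ofByteArray())
                  .whenComplete((resp, err) -> {
                      if (err == null) pages.incrementAndGet();
                      permits.release();
                  });
        }
        permits.acquire(inFlight); // wait for outstanding requests to drain
        server.stop(0);
        return pages.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("pages/s ~ " + run(100, 2) / 2);
    }
}
```

Raising `inFlight` here is the analogue of raising the number of parallel requests on the DefaultHttpAsyncClient; in our tests that is exactly the knob that makes throughput collapse.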

Of course this setup is somewhat artificial: in the real world you don't
download at 100 MB/s. But the fact that with 2000 parallel requests you
actually go *slower* is a problem.



--
View this message in context: 
http://httpcomponents.10934.n7.nabble.com/AbstractNIOConnPool-memory-leak-tp18554p18667.html
Sent from the HttpClient-User mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
