On Sun, 2013-01-06 at 23:14 -0800, vigna wrote:
> > Try reducing the number of concurrent connections from 20k to, say, 2k 
> > and you may be surprised to find out that a smaller number of 
> > connections can actually chew through the same workload faster. If the 
> 
> Well... no. :) We have an experimental setup with a local proxy generating a
> "fake web" that we use to check the speed of the pipeline independently of
> the network conditions.
> 
> With 1000 parallel DefaultHttpClient instances (different instances, not one
> instance with pooling) we download >10000 pages/s.
> 
> With 1000 parallel requests on a DefaultHttpAsyncClient we download >500
> pages/s, but as soon as we try to increase the number of parallel requests
> the speed drops to 100 pages/s, which makes the client useless for us at the
> moment.
> 
> Of course this is somewhat artificial: you don't actually download at
> 100 MB/s from the real network. But the fact that with 2000 parallel
> requests you actually go *slower* is a problem.
> 
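As a back-of-the-envelope reading of the figures above: by Little's law, mean concurrency equals throughput times mean latency, so each configuration implies a per-request service time. A quick sketch (the helper name is mine, and the pages/s figures are the rough lower bounds quoted in the thread):

```python
# Little's law: L = X * W, where L is mean concurrency, X is throughput,
# and W is mean per-request latency. Hence W = L / X.
def implied_latency(concurrency, pages_per_sec):
    """Implied mean seconds per request at a given concurrency/throughput."""
    return concurrency / pages_per_sec

# 1000 blocking DefaultHttpClient instances at ~10000 pages/s:
blocking = implied_latency(1000, 10000)   # ~0.1 s per page
# DefaultHttpAsyncClient with 1000 outstanding requests at ~500 pages/s:
async_1k = implied_latency(1000, 500)     # ~2.0 s per page
# DefaultHttpAsyncClient with 2000 outstanding requests at ~100 pages/s:
async_2k = implied_latency(2000, 100)     # ~20.0 s per page

print(blocking, async_1k, async_2k)
```

In other words, the async client's implied per-request latency grows far faster than the added concurrency, which is what makes the slowdown look like contention inside the client rather than load on the (local, fake-web) server.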

I am sorry, but I fail to see how all of that proves your point (or
disproves mine). It sounds entirely unrelated to what I was trying to
tell you. Well, then, let us just agree to disagree.

Oleg



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
