It's certainly easy to get turned around in these sorts of discussions.  
Consider the case where the incoming request rate is fixed and roughly 
independent of the server's response time.  If the latency to provide a 
response to each request is reduced (say, by a more efficient implementation 
of some key algorithm), then the number of requests actively being processed 
by the server at any given instant (i.e., the concurrency) will also be 
reduced.
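
The underlying relationship is Little's Law: concurrency = arrival rate x 
latency.  A minimal Python sketch, with made-up numbers purely for 
illustration:

# Little's Law: L = lambda_ * W
#   L       = average number of requests in flight (the concurrency)
#   lambda_ = arrival rate in requests/second, fixed by the clients
#   W       = average latency per request, in seconds

def concurrency(arrival_rate_rps, latency_s):
    """Average number of requests in flight for a fixed arrival rate."""
    return arrival_rate_rps * latency_s

rate = 1000.0                    # 1000 requests/second, set by the clients
print(concurrency(rate, 0.050))  # 50 ms latency -> 50 requests in flight
print(concurrency(rate, 0.010))  # 10 ms latency -> 10 requests in flight

Cut the latency and you cut the number of requests the server is juggling 
at any one moment, which is exactly the effect described above.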

Turning it around, there's always going to be a level of concurrency that maxes 
out the capabilities of the server.  Amdahl's law covers some of the 
mathematics involved if you're interested in that sort of thing.  Increasing 
the request rate on a server that has already reached the land of diminishing 
returns will necessarily cause the latency observed by the individual clients 
to rise.
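
If you want to play with the numbers, here's a small Python sketch of the 
Amdahl's law formula (the 5% serial fraction is an assumption picked purely 
for illustration):

# Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n)
#   s = fraction of the work that is inherently serial
#   n = number of workers (cores, processes, etc.)

def amdahl_speedup(serial_fraction, workers):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

for n in (1, 2, 4, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.05, n), 2))

# Even with only 5% serial work, the speedup flattens out near
# 1 / 0.05 = 20x no matter how many workers you add; that plateau is
# the land of diminishing returns.

Once the server is sitting on that plateau, pushing the request rate higher 
just builds a queue, and the queueing delay is what shows up to clients as 
rising latency.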

Adam

On Jul 13, 2013, at 1:17 PM, Yves S. Garret <[email protected]> wrote:

> On Sat, Jul 13, 2013 at 1:14 PM, Yves S. Garret
> <[email protected]> wrote:
> 
>> Hello, I was reading about CouchDB here:
>> http://guide.couchdb.org/editions/1/en/why.html
>> 
>> The sentence is just above Figure 2 (which is roughly 2/3 of the way
>> down the page).  What I don't understand is why concurrency would
>> be impacted when latency is reduced.  Wouldn't latency be reduced as
>> more processes are created to do more processing?
>> 
> 
> Or was this meant as a function of increased bandwidth usage: as
> reads/writes increase, the latency increases, yes?
