Yes. Thanks to Deepak for a very good post.

Also, Jörg, in your case the mathematics is even simpler:
Case 1: each thread makes a request every ~30.7 secs
Case 2: each thread makes a request every ~19.6 secs
the interval being the average Timer pause plus the average response time.

So even though your latency increased roughly tenfold, the average interval
per thread is still shorter than before, which is why the throughput went up.
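
For illustration, a rough back-of-the-envelope in Java (the thread count of
100 is just an assumption for the example; it isn't stated in your mails):

// Per-thread interval = avg timer pause + avg response time,
// expected throughput = active threads / interval (rough estimate only).
public class ThroughputEstimate {
    public static void main(String[] args) {
        int threads = 100;                      // assumed for illustration only
        double case1 = threads / (30.0 + 0.7);  // timer 30 s + response 0.7 s  -> ~3.3/sec
        double case2 = threads / (10.0 + 9.61); // timer 10 s + response 9.61 s -> ~5.1/sec
        System.out.printf("Case 1: ~%.1f req/sec%n", case1);
        System.out.printf("Case 2: ~%.1f req/sec%n", case2);
    }
}

which comes out reasonably close to the 3.1/sec and 5.0/sec you measured.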

Cheers,
Felix

On 07/27/2010 10:42 PM, Deepak Goel wrote:
> Hey
> 
> Namaskara~Nalama~Guten Tag
> 
> As you increase the load, both the throughput and the response time
> increase. There is a knee beyond which the throughput remains constant and
> the response time increases exponentially.
> 
> It looks like you haven't reached that knee yet at 3.1 req/sec, so the
> throughput will still increase, up to about 5 req/sec (the knee throughput).
> 
> There are a lot of other operations in this test, such as network time,
> client rendering time, and server socket connections, which might also add
> to the increasing response time in addition to the time taken by your
> application.
> 
> Also, while the response time has increased by about 10 times, the
> throughput has increased by less than 2 times (not much compared to the
> response time).
> 
> You can draw the knee curves, throughput against load and response time
> against load, if you can take more measurements.
> 
> Something like:
> 
> Request/sec    Response Time    Throughput
> 
>         700        3000 ms       3/sec
>        5000       40000 ms       4/sec
>        9600      114300 ms       5/sec   (almost the knee, I think)
>       15000     3337777 ms       5.2/sec
>       30000     7778888 ms       5.3/sec
> 
> If you put your results (the above are only a sample) into an Excel sheet
> and create a curve chart, you will be able to see the response time curve
> (exponential) and the throughput curve (a plateau after a hill).
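> 
> Just as a rough sketch (using the sample numbers above), you could also
> spot the knee programmatically, as the point where extra load stops buying
> much extra throughput:
> 
> // Sketch only: the knee is roughly where throughput stops growing
> // while response time keeps climbing. Data are the sample values above.
> public class KneeFinder {
>     public static void main(String[] args) {
>         int[]    load       = {700, 5000, 9600, 15000, 30000};
>         double[] throughput = {3.0, 4.0, 5.0, 5.2, 5.3};
>         for (int i = 1; i < load.length; i++) {
>             double gain = (throughput[i] - throughput[i - 1]) / throughput[i - 1];
>             if (gain < 0.05) {  // under 5% extra throughput: past the knee
>                 System.out.println("Knee is around load " + load[i - 1]);
>                 break;
>             }
>         }
>     }
> }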
> 
> Deepak
>    --
> Keigu
> 
> Deepak
> +91-9765089593
> deic...@gmail.com
> 
> Skype: thumsupdeicool
> Google talk: deicool
> Blog: http://loveandfearless.wordpress.com
> Facebook: http://www.facebook.com/deicool
> 
> Check out my Work at:
> LinkedIn: http://in.linkedin.com/in/thumsupdeicool
> 
> "Contribute to the world, environment and more : http://www.gridrepublic.org
> "
> 
> 
> On Tue, Jul 27, 2010 at 7:47 PM, Jörg Godau <j.go...@schuetze-berlin.de> wrote:
> 
>> Hi All,
>>
>> we have a fairly simple test that logs in to our application.
>>
>> We've set up a Gaussian Random Timer and are monitoring the results in an
>> Aggregate Report.
>>
>> My question is about the throughput - if we reduce the delays in the Timer,
>> the time taken to log in to the application increases (which makes sense as
>> there is more load on the server).
>>
>> Why is the throughput also increasing? If each request is taking much
>> longer (ca. 10 times as long) when we increase the load - shouldn't the
>> throughput be lower?
>>
>> Some numbers to illustrate:
>> Timer avg 30 sec / deviation 15 sec => average request   700 ms, max   2760 ms, throughput 3.1/sec
>> Timer avg 10 sec / deviation  3 sec => average request  9610 ms, max 114300 ms, throughput 5.0/sec
>>
>> Can someone please explain how this is possible?
