Hi,

yes, it is normal, and this seems confusing at the beginning for most.
It depends very much on how you design the test, the number of virtual users, etc.
If you do not use timers, then each VU issues a request as soon as the previous
one has finished.
So, when response times increase, the throughput drops.
For example, if a request takes 1 second, one VU will issue 60 requests per
minute.
When a request takes 2 seconds, one VU will issue 30 requests per minute. So you
have to increase the number of VUs to compensate.
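
To make the arithmetic concrete, here is a small Python sketch (just an
illustration of the closed-loop behaviour described above, not JMeter code):

# Illustration only: throughput of a closed-loop test, where each VU
# fires its next request as soon as the previous one has finished.

def closed_loop_throughput(virtual_users, response_time_s):
    """Requests per minute achieved when every VU waits for the response."""
    return virtual_users * 60.0 / response_time_s

print(closed_loop_throughput(1, 1.0))  # 60.0 requests/minute
print(closed_loop_throughput(1, 2.0))  # 30.0 requests/minute
print(closed_loop_throughput(2, 2.0))  # 60.0 -> add VUs to compensate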

In order to overcome this problem, the best option IMHO is to use a "Constant 
Throughput Timer".
This makes each VU wait so that a specified throughput is not exceeded.
So in the example above, if you insert a timer limiting the throughput to 30 
requests per minute, you will not exceed this throughput.
Even if the actual request takes 0.5, 1, or 1.8 seconds, the throughput 
will not change.
You can then adjust the number of VUs to obtain the desired load.
Obviously, if a request takes more than 2 seconds, the throughput will drop.
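
As a rough model of what such a timer does (a sketch under the assumptions
above, not JMeter's actual pacing algorithm):

# Rough model of a Constant Throughput Timer: each VU sleeps after a
# request so that the configured rate is not exceeded.
# Illustration only, not JMeter's real implementation.

def throughput_with_timer(virtual_users, response_time_s, target_per_min):
    # What the VUs could deliver on their own, in requests per minute
    uncapped = virtual_users * 60.0 / response_time_s
    # The timer only adds waits, so it can cap the rate but never raise it
    return min(uncapped, target_per_min)

print(throughput_with_timer(1, 0.5, 30))  # 30.0 - timer adds a wait
print(throughput_with_timer(1, 1.8, 30))  # 30.0 - still on target
print(throughput_with_timer(1, 2.5, 30))  # 24.0 - responses too slow, rate drops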

HTH
Sergio

On 07/01/2019 18:52, Marcio Prado wrote:
Good afternoon,

I have a question that may be simple for most.

I'm running some tests on a cloud computing environment with OpenStack.

When network latency increases, the number of HTTP requests completed decreases. Is 
this normal behavior?

I figured that the number of HTTP requests would remain the same as when latency was low, since HTTP requests are generated regardless of response time.

Can anyone explain this behavior?

Thank you!


--

Ing. Sergio Boso




