mkaranth-tech opened a new issue, #6570:
URL: https://github.com/apache/jmeter/issues/6570

   ### Expected behavior
   
   A 429 response should show latency equal to or lower than a 200 OK response, since the service rejects the request without doing the full work.
   
   ### Actual behavior
   
   We are trying to generate 3 million requests/min from JMeter against a backend service. Without rate limiting, the JMeter instances sustain 3 million requests per minute with all successful (200 OK) responses. When we enable rate limiting, throughput drops to roughly a quarter of that. The latency measured at the backend service/ingress looks fine, but JMeter reports response times about 4 times higher. Is there a limitation in how JMeter handles 429s during load generation? Why do 429s reduce throughput and increase latency as measured by JMeter?
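   One way to check whether the extra latency is client-side is to reproduce the 200-vs-429 comparison outside JMeter. The sketch below is only an illustration, not the reporter's setup: it stands up a local stub server (a hypothetical stand-in for the real service) that rejects every other request with 429, then times each request over a fresh connection:

```python
# Hedged sketch: time 200 vs 429 responses outside JMeter.
# The stub server below is an assumption for illustration only;
# it rejects every other request, mimicking a 50% rate limit.
import http.client
import http.server
import threading
import time

class RateLimitedHandler(http.server.BaseHTTPRequestHandler):
    count = 0

    def do_GET(self):
        RateLimitedHandler.count += 1
        # Alternate 200 / 429 to simulate a 50% rejection rate.
        status = 200 if RateLimitedHandler.count % 2 == 1 else 429
        body = b"ok" if status == 200 else b"slow down"
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), RateLimitedHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def timed_get():
    """Issue one GET on a fresh connection; return (status, elapsed seconds)."""
    conn = http.client.HTTPConnection("127.0.0.1", port)
    start = time.perf_counter()
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()
    elapsed = time.perf_counter() - start
    conn.close()
    return resp.status, elapsed

samples = [timed_get() for _ in range(20)]
lat_200 = [t for s, t in samples if s == 200]
lat_429 = [t for s, t in samples if s == 429]
print(f"200 mean: {sum(lat_200) / len(lat_200):.6f}s")
print(f"429 mean: {sum(lat_429) / len(lat_429):.6f}s")
server.shutdown()
```

   If the 429 timings in an isolated client like this match the 200 timings, the gap seen in JMeter likely comes from connection handling on the JMeter side (for example, the server closing rate-limited connections, forcing TCP/TLS re-handshakes that JMeter counts in the sampler's elapsed time) rather than from the service itself.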
   
   ### Steps to reproduce the problem
   
   1. Run a high-volume test against a service where all responses succeed (200 OK).
   2. Repeat the test with rate limiting enabled on the service, configured to reject 50% of the requests with 429.
   3. Observe that JMeter reports response times for the 429 responses about 4x higher than for the 200 OK responses.
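   As a back-of-envelope illustration of how a 4x gap could arise (a hypothesis, not a confirmed diagnosis): if the service or ingress closes the connection on every 429, the next request on that thread pays a full TCP/TLS handshake, and JMeter counts that handshake in the sampler's elapsed time. With made-up numbers:

```python
# Back-of-envelope model with invented numbers (pure assumption):
# a fraction of requests pay a fresh-connection handshake because the
# preceding 429 closed the connection.
def mean_elapsed_ms(service_ms, handshake_ms, reject_rate):
    """Mean client-observed time when `reject_rate` of requests reconnect."""
    return service_ms + reject_rate * handshake_ms

baseline = mean_elapsed_ms(service_ms=5.0, handshake_ms=0.0, reject_rate=0.0)
with_429s = mean_elapsed_ms(service_ms=5.0, handshake_ms=30.0, reject_rate=0.5)
print(baseline, with_429s, with_429s / baseline)  # → 5.0 20.0 4.0
```

   None of these numbers are measured; they only show that a modest per-reconnect handshake cost combined with a 50% rejection rate is enough to produce the reported 4x ratio. Checking whether the service sends `Connection: close` on 429 responses (visible in the response headers in View Results Tree) would confirm or rule this out.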
   
   ### JMeter Version
   
   latest
   
   ### Java Version
   
   latest
   
   ### OS Version
   
   macOS

