On 29/10/2007, Christiaan Lamprecht <[EMAIL PROTECTED]> wrote:
> > > All requests finish ok but I want to enable timeouts as the client
> > > seems to wait for the server before sending more requests. Example:
> >
> > Each user (thread) will wait for a response before sending the next
> > request.
> >
> > Is that what you are referring to, or are you saying that all threads
> > in the JMeter client sometimes wait?
>
> I mean that waiting takes precedence over sending more requests.
But what is waiting here? Each thread will wait for a response before
sending the next request - is that what you mean?

> Perhaps an example would be easier:
>
> Test Plan (Run each Thread group separately)
>   Thread Group (users 1, ramp 1, count 20)
>     HTTP Request HTTPClient (SSL request - KeepAlive enabled)
>     Uniform Random Timer (Offset 100, dev 10)
>
> If the server is busy the 20 requests in the above test plan will not
> be made 100 ms apart (i.e. the request rate will be much lower than
> 10 req/s). I understand that each request has to wait for the server,
> but the requests seem to wait for each other and so the request rate
> is much less than 10 req/s.

What do you mean by "requests seem to wait for each other"?

If you mean that a single thread waits for a response before sending
the next request, then yes, that is how a request-response protocol
works.

If you mean that one thread waits for a response in a different thread
before sending its request, then please explain what data leads you to
that conclusion.

> > > Test Plan (Run each Thread group separately)
> > >   Thread Group (users 100, ramp 100, count 20)
> > >     HTTP Request HTTPClient (SSL request - KeepAlive enabled)
> > >   Thread Group (users 100, ramp 100, count 30)
> > >     HTTP Request HTTPClient (SSL request - KeepAlive enabled)
> > >   Thread Group (users 100, ramp 100, count 40)
> > >     HTTP Request HTTPClient (SSL request - KeepAlive enabled)
> > >   ...
> > >   ...
> > >   ...
> > >   Thread Group (users 100, ramp 100, count 310)
> > >     HTTP Request HTTPClient (SSL request - KeepAlive enabled)
> > >     Uniform Random Timer (Offset 100, dev 10)
> > >
> > > Results for 10 clients (each running a JMeter instance):
> > >
> > > Requested rate (req/s)   Avg time between requests (ms)   Actual avg rate (req/s)
> > >    200                   6.59422971                        151.6477351
> > >    300                   4.348678                          229.9549426
> > >    400                   3.276406                          305.2124798
> > >    500                   2.655733                          376.543877
> > >    600                   2.233653                          447.6971132
> > >    700                   1.90895                           523.8481888
> > >    800                   1.66188                           601.7281633
> > >    900                   1.465338                          682.4364072
> > >   1000                   1.3544235                         738.3215073
> > >   1300                   1.176124                          850.2504838
> > >   1600                   1.139619                          877.4862476
> > >   1900                   1.07916883                        926.6390691
> > >   2200                   1.008209                          991.857839
> > >   2500                   1.0388161                         962.6342911
> > >   2800                   0.97919992                        1021.241914
> > >   3100                   1.04185174                        959.8294667
> > >
> > > As you can see the actual request rate converges at 1000 req/s. To
> > > check if this (100 req/s per client) is a client machine limitation
> > > I ran the test plan against the server using only one client:
> > >
> > >     20                   51.92796398                       19.25744673
> > >     30                   34.40246749                       29.06768244
> > >     40                   26.11477869                       38.29249375
> > >     50                   21.1030206                        47.38658124
> > >     60                   17.75179196                       56.33234111
> > >     70                   15.39391341                       64.96073957
> > >     80                   13.61495186                       73.4486622
> > >     90                   12.24736081                       81.65024412
> > >    100                   11.10651065                       90.03727917
> > >    130                   8.86183552                        112.8434395
> > >    160                   7.44715294                        134.2795036
> > >    190                   6.474603926                       154.4496021
> > >    220                   5.739533615                       174.2301844
> > >    250                   5.2158886                         191.7218861
> > >    280                   4.76088431                        210.0450116
> > >    310                   4.456917965                       224.3702953
> > >
> > > Clearly more than 100 req/s is achieved. So the client does not seem
> > > to be the bottleneck.
> > >
> > > So why does the client wait before sending more requests?
> >
> > What do you mean by this? What evidence is there?
>
> For the 10 client experiment you would expect the last thread group...
....
....
> Thread Group (users 100, ramp 100, count 310)
>   HTTP Request HTTPClient (SSL request - KeepAlive enabled)
>   Uniform Random Timer (Offset 100, dev 10)
>
> ... to maintain about 3100 req/s almost throughout the 100 seconds
> that it runs, but it only does 959.8... and the results above show
> that none of the Thread Groups manage more than about 1000 req/s
> (which is about 100 req/s per machine).

Only if the server can cope with the load.

If the server is slow in responding, then the JMeter threads will be
slowed down also. This is how request-response protocols work - the
thread cannot proceed with another request until the previous response
has arrived.

Also, if you want to maintain a constant load, then you should consider
using the Constant Throughput Timer. This adjusts the waits according
to the current rate. But of course it won't be able to maintain a
throughput greater than that supported by the server or the JMeter
host.

> > > Since SSL is done transparently, perhaps, without knowing it,
> > > JMeter has to wait for the SSL session to be established, which
> > > will take longer if the server is busy, and so it's unknowingly
> > > 'busy' doing that instead of making new requests..?
> >
> > Which version of JMeter are you using?
>
> jakarta-jmeter-2.3RC4

The current version is 2.3. I don't think there are any fixes relevant
to your particular testing, but it would be sensible to update to 2.3.
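For what it's worth, here is a back-of-envelope sketch of the ceiling
that the request-response model puts on each client. It is plain Java,
not JMeter code, and the response-time figures in it are made-up values
purely for illustration:

    // Rough sketch (assumed model, not JMeter internals): with this
    // test plan each thread waits the timer delay, sends a request,
    // then blocks until the response arrives, so it can complete
    // roughly 1 / (averageResponseTime + timerDelay) iterations per
    // second. A Thread Group of N threads is therefore capped at about
    // N / (averageResponseTime + timerDelay) req/s while all N run.
    public class ThroughputCeiling {
        public static void main(String[] args) {
            int threads = 100;          // users per Thread Group (from the test plan)
            double timerDelay = 0.100;  // Uniform Random Timer offset, in seconds
            // Hypothetical average response times, to show how a busy
            // server drags the achievable rate down:
            double[] responseTimes = {0.0, 0.1, 0.5, 0.9};
            for (double rt : responseTimes) {
                double maxRate = threads / (rt + timerDelay);
                System.out.printf("response %4.0f ms -> at most ~%4.0f req/s per client%n",
                        rt * 1000, maxRate);
            }
        }
    }

With 100 threads and a 100 ms timer the ceiling is 1000 req/s per
client even with instant responses, and an average response time of
around 900 ms would be enough to account for the roughly 100 req/s per
client you measured when all 10 clients were loading the server. The
Constant Throughput Timer is subject to the same ceiling.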