On Wed, Feb 17, 2010 at 4:29 PM, Dan Denton <random.da...@gmail.com> wrote:
> On Wed, Feb 17, 2010 at 3:19 PM, Robert Hall <rfh...@berkeley.edu> wrote:
>>
>> On Feb 17, 2010, at 12:50 PM, Dan Denton wrote:
>>
>>> On Wed, Feb 17, 2010 at 2:26 PM, Robert Hall <rfh...@berkeley.edu> wrote:
>>>>
>>>> On Feb 17, 2010, at 12:06 PM, Dan Denton wrote:
>>>>
>>>>> On Mon, Feb 15, 2010 at 12:53 PM, Robert Hall <rfh...@berkeley.edu>
>>>>> wrote:
>>>>>>
>>>>>> Dan,
>>>>>>
>>>>>> On Feb 15, 2010, at 10:37 AM, Dan Denton wrote:
>>>>>>
>>>>>>> Hello all. I’m trying to load test a login page served by Tomcat 6,
>>>>>>> proxied through Apache 2 with mod_proxy. I’m using JMeter 2.3.4 to
>>>>>>> conduct the testing. My thread group consists of 500 sessions, and
>>>>>>> the sample is a GET of a simple login page.
>>>>>>>
>>>>>>> JMeter returns errors for a varying percentage of the samples. The
>>>>>>> errors returned are generally the following:
>>>>>>>
>>>>>>>   at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1037)
>>>>>>>   at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1023)
>>>>>>>   at org.apache.jmeter.threads.JMeterThread.process_sampler(JMeterThread.java:346)
>>>>>>>   at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:243)
>>>>>>>   at java.lang.Thread.run(Unknown Source)
>>>>>>>
>>>>>>> The issues I’m having are twofold. First, I’m having difficulty
>>>>>>> determining whether these errors are coming from JMeter or Tomcat, as
>>>>>>> they’re displayed in the response window of JMeter. The developers
>>>>>>> think the error is coming from JMeter, given the last few lines of
>>>>>>> the trace above.
>>>>>>
>>>>>> The developers are correct.
>>>>>>
>>>>>>> Given that I'm not a programmer, I should probably take their word
>>>>>>> for it, but why would JMeter show this error as the response?
>>>>>>
>>>>>> The system you are running JMeter on isn't able to handle the load.
>>>>>>
>>>>>>> Second, I've tried tweaking my process counts (startservers, maxspare,
>>>>>>> etc.) with no change in the outcome. I can mitigate the issue by
>>>>>>> pointing JMeter directly to Tomcat, but I need this product to go
>>>>>>> through our Apache proxy for SSL.
>>>>>>>
>>>>>>> Any help on this would be greatly appreciated.
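
(For context, the "process counts" mentioned above are the process-management
directives in httpd.conf. A minimal sketch of the knobs involved follows,
assuming the stock prefork MPM; the values are purely illustrative, and under
heavy load MaxClients is usually the ceiling that matters.)

    # httpd.conf -- prefork MPM tuning (example values only)
    <IfModule prefork.c>
        StartServers          10     # children started at startup
        MinSpareServers       10     # idle children kept in reserve
        MaxSpareServers       50
        ServerLimit          512     # hard cap; must be >= MaxClients
        MaxClients           512     # max simultaneous connections served
        MaxRequestsPerChild 5000     # recycle children periodically
    </IfModule>
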
>>>>>>
>>>>>> There must be some JMeter setting that will work; otherwise you would
>>>>>> be unable to access the webapp over SSL from the system that is
>>>>>> hosting JMeter.
>>>>>>
>>>>>> Try reducing everything to a count of 1 in JMeter.
>>>>>>
>>>>>> If that doesn't work, there is a problem with the SSL config in JMeter;
>>>>>> google "jmeter ssl".
>>>>>>
>>>>>> Otherwise, try spreading the load out across several JMeter instances
>>>>>> installed on separate systems.
>>>>>>
>>>>>> - Robert
>>>>>>
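(A rough sketch of that last suggestion: JMeter's stock distributed mode
starts a jmeter-server process on each load-generating machine and drives
them all from one controller in non-GUI mode. The hostnames and file names
below are placeholders.)

    # on each load-generating machine
    ./jmeter-server

    # on the controller: non-GUI run, fanned out to the remote slaves
    ./jmeter -n -t login_test.jmx -R slave1,slave2,slave3,slave4 -l results.jtl
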
>>>>>
>>>>> Thanks for the reply, Robert. I've set up 4 JMeter slaves, each with at
>>>>> least two 2.8 GHz procs and 2 GB of RAM, and regardless of whether it's
>>>>> 1 node simulating 400 sessions or 4 nodes each simulating 100, I still
>>>>> see these errors at 400 sessions or more. Also, when I use multiple
>>>>> slaves to execute the test, the percentage of failures when simulating
>>>>> 400 sessions is greater and the failures happen earlier in the test.
>>>>>
>>>>> This makes me think that this isn't just an issue with the systems
>>>>> running JMeter, but I'm not sure. I've also tried tweaking my SSL
>>>>> session timeout, with no effect. I did this because, watching the
>>>>> mod_status page on this Apache instance, I can see the current session
>>>>> count top out at about 330 every time and then subside. My guess was
>>>>> that SSL sessions were somehow becoming a bottleneck.
>>>>>
>>>>> If anyone has any other suggestions, they would be greatly appreciated.
>>>>>
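(For reference, the two things described above map to mod_status and
mod_ssl's session cache. A minimal httpd.conf sketch follows, with
placeholder paths and values; the assumption is that the "SSL session
timeout" being tweaked is SSLSessionCacheTimeout.)

    # httpd.conf -- server-status page and SSL session cache
    ExtendedStatus On
    <Location /server-status>
        SetHandler server-status
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1
    </Location>

    # mod_ssl session cache and its timeout, in seconds
    SSLSessionCache        shmcb:/var/run/ssl_scache(512000)
    SSLSessionCacheTimeout 300
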
>>>>
>>>> Dan, this sounds like an Apache httpd configuration issue.
>>>>
>>>> Check the values for the 'KeepAliveTimeout' and 'KeepAlive' directives,
>>>> http://httpd.apache.org/docs/2.0/mod/core.html#keepalive
>>>>
>>>> I suggest using "KeepAlive On" and "KeepAliveTimeout 1"; the latter
>>>> probably defaults to 15.
>>>>
>>>> - Robert
>>>>
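(In httpd.conf terms, the suggestion above amounts to something like the
following; 15 seconds is the stock KeepAliveTimeout default.)

    # httpd.conf -- keep persistent connections, but release them quickly
    KeepAlive        On
    KeepAliveTimeout 1
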
>>> Thanks again, Robert. My Apache proxy already has both directives set,
>>> and KeepAliveTimeout was indeed set to 15. I tried running the
>>> 400-session test across 4 slaves with KeepAliveTimeout set to 1 and to
>>> 30, and got the same result: a failure rate of approximately 20 percent.
>>>
>>> Are there any other directives you suggest changing?
>>>
>>> Thanks...
>>>
>>
>> Surprising that KeepAliveTimeout of 1 and 30 produced the same results.
>>
>> Other directives to try (with KeepAliveTimeout of 1):
>>
>> MaxKeepAliveRequests - defaults to 100; try raising it to 500
>>
>> Separately, set KeepAlive Off.
>>
>> Have you looked at the httpd error logs?
>>
>> - Robert
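
(Taken together, the two variations suggested above look roughly like this;
try each separately, restarting httpd in between.)

    # variation 1: keep-alive on, short timeout, more requests per connection
    KeepAlive            On
    KeepAliveTimeout     1
    MaxKeepAliveRequests 500

    # variation 2: no persistent connections at all
    KeepAlive Off
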
>>
>>
>
> Robert,
>
> With KeepAliveTimeout set to 1, and with MaxKeepAliveRequests set to
> either 100 or 500, I get the same results. When using 4 nodes with 100
> sessions apiece, failure rates are between 10 and 30%. With individual
> nodes firing off 400 sessions, failure rates are 10% or less.
>
> With KeepAlive off, again the same results. The numbers vary somewhat from
> run to run, but the failure rates stay in the same range.
>
> There's nothing in the error logs that seems to point to this issue, only
> occasional messages about child processes during the restarts for the
> configuration changes.
>
> So far, none of these changes seem to have had an effect. The failure
> rates are the same regardless.
>
> Thanks for the help...
>

Robert,

Thanks for the reply. There are no firewalls in play here. The
system originally being tested is a development box (not as robust as
our production systems), and the proxy and the Tomcat instance serving
the webapp were on the same server. I moved the proxy to another
machine, and that pushed the session count to around 500 before we see
an error rate of around 3%. Based on this, it's very likely a case of
simple resource limitations on the server.
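
(For reference, the proxy layer being moved around here is an SSL-terminating
mod_proxy virtual host in front of Tomcat. A minimal sketch is below, with
placeholder hostnames, ports, and certificate paths.)

    # httpd.conf -- SSL terminates in Apache, plain HTTP back to Tomcat
    <VirtualHost *:443>
        ServerName            login.example.com
        SSLEngine             on
        SSLCertificateFile    /etc/httpd/conf/server.crt
        SSLCertificateKeyFile /etc/httpd/conf/server.key

        ProxyPass        / http://tomcat-host:8080/
        ProxyPassReverse / http://tomcat-host:8080/
    </VirtualHost>
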

I will be doing more testing next week with hardware more similar to
our production systems, which have about 400% more horsepower than our
dev systems.

Thanks again for your help!

---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
   "   from the digest: users-digest-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org
