(Sorry for the double-post Joel, I accidentally only sent this to you
instead of the mailing list)

Thanks Joel,

I'm working on converting the test to ab (shouldn't take long) and trying
out 1.4, but to answer your questions: RSTavg is average response time.
There's a 500 ms timer in the http response, plus some serialization, and
it's over the local network, so that should be about 550 ms under no load.

Users per second, yes.

I didn't use ab to start because I'm not interested in response time per
se, but in the load at which response time starts to degrade. I don't know an
effective way to do this with ab, partly because it doesn't support stepping
(my test steps through the concurrency levels specified by "users"; I should
rename Usersps to sessions per second, because if a "user" takes less than 1
second it starts again right away). My testing harness lets me write tests
in my application language, blah blah.. you get the idea. But yes, I'll run
ab and see if I get the same results.
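In the meantime, here's a rough sketch of how I'd wrap ab to get the
stepping behavior my harness does natively. The URL and the concurrency
steps below are placeholders, not my actual test values:

```shell
# Dry-run sketch: print the ab invocation for each concurrency step.
# URL and STEPS are placeholders; substitute the real endpoint and levels.
URL="http://appserver:8000/test"      # hypothetical test endpoint
STEPS="100 500 1000 1500 2000"        # concurrency levels to walk through

for c in $STEPS; do
  # -n total requests, -c concurrent clients; printing the commands here,
  # run them by hand (or drop the echo) once the endpoint is set.
  echo "ab -n $((c * 10)) -c $c $URL"
done
```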

I'll also try your changes to the timeouts. Thanks for your help!
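For reference, this is the timeout section I'll test with (the values are
the ones you quoted from the example in the 1.4 docs):

```
defaults
    timeout connect  5000
    timeout client  50000
    timeout server  50000
```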

On Sat, Jan 29, 2011 at 12:57 PM, Joel Krauska <jkrau...@gmail.com> wrote:

> Sean,
>
> I think it would be helpful to further explain your testing scenario.
>
> How do you simulate concurrent users?
>
> What is RSTav?
>
> Usersps is sessions per second??
>
> I think most folks use Apache Bench
> http://httpd.apache.org/docs/2.0/programs/ab.html
> as a fairly common industry standard for HTTP server performance.
>
> Would you consider rerunning your test using ab as well?
>
> Alternatively, you might look at httperf (see the haproxy web page for some
> notes)
>
>
> One tuning thing you might try is dropping down your timeouts.
> You have:
>    timeout connect 10000
>    timeout client 300000
>    timeout server 300000
>
> I typically use an order of magnitude smaller.
> 5000
> 50000
> 50000
> (these are example defaults listed in an example in section 2.3 of the
> HAProxy docs)
> http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
>
>
> Best of luck,
>
> Joel
>
>
>
> On 1/29/11 10:53 AM, Sean Hess wrote:
>
>> I'm performing real-world load tests for the first time, and my results
>> aren't making a lot of sense.
>>
>> Just to make sure I have the test harness working, I'm not testing
>> "real" application code yet, I'm just hitting a web page that simulates
>> an IO delay (500 ms), and then serializes out some json (about 85 bytes
>> of content). It's not accessing the database, or doing anything other
>> than printing out that data. My application servers are written in
>> node.js, on 512 MB VPSes on Rackspace (CentOS 5.5).
>>
>> Here are the results that don't make sense:
>>
>> https://gist.github.com/802082
>>
>> When I run this test against a single application server (bottom one),
>> you can see that it stays pretty flat (about 550 ms response time) until
>> it gets to 1500 simultaneous users, when it starts to error out and slow
>> down.
>>
>> When I run it against an haproxy instance in front of 4 of the same
>> nodes (top one), my performance is worse. It doesn't drop any
>> connections, but the response time edges up much earlier than against a
>> single node.
>>
>> Does this make any sense to you? Does haproxy need more RAM? I was
>> watching the box while the test was running and the haproxy process
>> didn't get higher than 20% CPU and 10% RAM.
>>
>> Please help, thanks!
>>
>
>
