I'm performing real-world load tests for the first time, and my results
aren't making a lot of sense.

Just to make sure I have the test harness working, I'm not testing "real"
application code yet. I'm just hitting a web page that simulates an I/O delay
(500 ms) and then serializes out some JSON (about 85 bytes of content).
It's not accessing the database or doing anything other than returning
that data. My application servers are written in node.js, running on 512 MB
VPSes at Rackspace (CentOS 5.5).
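
For reference, the test endpoint is conceptually equivalent to the minimal
sketch below (the port and payload shape are placeholders, not my exact code):

    // Minimal node.js test endpoint: fake a 500 ms I/O wait,
    // then return a small JSON body and nothing else.
    var http = require('http');

    http.createServer(function (req, res) {
      // Simulate the I/O delay without doing any real work.
      setTimeout(function () {
        var body = JSON.stringify({ status: 'ok', ts: Date.now() });
        res.writeHead(200, {
          'Content-Type': 'application/json',
          'Content-Length': Buffer.byteLength(body)
        });
        res.end(body);
      }, 500);
    }).listen(8000);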

Here are the results that don't make sense:

https://gist.github.com/802082

When I run this test against a single application server (the bottom one), you
can see that the response time stays pretty flat (about 550 ms) until it hits
roughly 1500 simultaneous users, at which point it starts to error out and slow
down.

When I run it against an haproxy instance in front of 4 of the same nodes
(the top one), performance is worse. It doesn't drop any connections, but the
response time starts climbing much earlier than it does against a single node.
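
The haproxy setup is nothing fancy; it's conceptually along the lines of the
config below (backend addresses, ports, and timeouts are placeholders, not my
actual config):

    defaults
        mode http
        timeout connect 5000
        timeout client  30000
        timeout server  30000

    frontend http_in
        bind *:80
        default_backend node_app

    backend node_app
        # Round-robin across the 4 identical node.js instances
        balance roundrobin
        server node1 10.0.0.1:8000 check
        server node2 10.0.0.2:8000 check
        server node3 10.0.0.3:8000 check
        server node4 10.0.0.4:8000 check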

Does this make any sense to you? Does haproxy need more RAM? I was watching
the box while the test was running, and the haproxy process never went above
20% CPU or 10% RAM.

Please help, thanks!
