This might not be much help, but I have been through the same process myself.
Where are you running the ab test from? SSL, latency, etc. all skew the numbers, so run it from a server in the same region. Check your keep-alive settings, and make sure you set your ulimit -n high.

Have you tested whether a static asset gets much better throughput? Drop a file in public/ and run ab against that. It should show much higher throughput, since Rails is not involved at all. Try to eliminate variables until you know where the problem is. A couple of rough sketches below.
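On the keep-alive side, these are the Apache directives I'd check first (in httpd.conf or apache2.conf; the values here are just common starting points, not numbers tuned for your box):

    # Reuse connections, but keep the timeout short so idle
    # keepalives don't pin down prefork workers
    KeepAlive On
    MaxKeepAliveRequests 100
    KeepAliveTimeout 2

Also note that ab only uses keep-alive when you pass it -k, so it's worth testing both ways.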
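And here is roughly what I mean by the static-asset baseline. The hostname and paths are placeholders, so adjust them for your setup:

    # Raise the open-file limit for the shell running ab
    ulimit -n 65536

    # Drop a file in public/ so Apache serves it directly,
    # without touching Passenger or Rails
    echo ok > /var/www/app/current/public/static_test.txt

    # Baseline: static file, Apache only
    ab -n 10000 -c 100 http://your-server/static_test.txt

    # Same load against the Rails endpoint, for comparison
    ab -n 10000 -c 100 http://your-server/your/api/path

If the static file also tops out around 45 req/sec, the problem is Apache or the network, not Rails.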
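For what it's worth, your pool math works out to roughly: 90% of 7.5 GB is about 6900 MB, and 6900 / 230 is about 30 Passenger processes. If the pool really is saturated, 45 req/sec across ~30 processes is well over half a second per request, which is a long time for a single indexed lookup, so the time is probably going somewhere other than raw process capacity.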
--Dan

On Thu, Mar 22, 2012 at 11:29 AM, Kevin Baker <[email protected]> wrote:
> I am working to optimize the speed of a Rails application we have in
> production.
>
> It is built on Apache, Passenger, Rails 2.3.5, and Ubuntu on an m1.large
> EC2 instance (7.5 GB RAM, 4 compute units). We will be switching to nginx
> in the near term, but have some dependencies on Apache right now.
>
> I'm running load tests against it with ab and httperf.
>
> We are consistently seeing 45 req/sec with anywhere from 30 to 200
> concurrent users, for a simple API request that fetches a single record
> from an indexed database table with 100k records. It seems very slow to me.
>
> We are focusing on both application code optimization and server
> configuration optimization.
>
> I am currently focused on server configuration. Here's what I've done so
> far:
>
> Passenger: adjusted PassengerMaxPoolSize to (90% of total memory) /
> (memory per Passenger process, ~230 MB).
> Apache: adjusted MaxClients up to 256.
> I tried each change independently and then together, and saw no impact
> on req/sec.
>
> Another note: this seems to scale linearly by adding servers, so it
> doesn't seem to be a database issue. 2 servers: ~90 req/sec; 3 servers:
> ~120 req/sec; etc.
>
> Any tips? It just seems like we should be getting better performance by
> adding more processes.
>
> --
> SD Ruby mailing list
> [email protected]
> http://groups.google.com/group/sdruby