Bradley,

Thanks for sharing. I had no idea that Ruby 1.9.3 was that much faster. I
would have assumed JRuby would have performed better.

Ryan

On Thu, Mar 22, 2012 at 9:02 PM, bradleyland <[email protected]> wrote:

> I'm the one responsible for this kind of testing in our organization. It
> sounds like you're taking the right approach by drilling down. You have to
> find the bottleneck through isolation. From my experience, until you've
> spent significant time optimizing, or unless you're dealing with a very
> (stupid) simple application, the bottleneck will be your application code,
> not the app server or web server. Apache and Nginx can handle many, many
> more requests than a Rails app that is making DB calls and doing other
> work. Likewise with Passenger, Thin, Unicorn, etc. All the "versus"
> benchmarks you see are making requests to Rails apps that don't do
> anything. This is a necessity to find the maximum throughput of the app
> server, but it's not terribly relevant for those of us with applications
> that do work. You optimize toward your app server's ceiling, then, if you
> need more, you look at caching to assets that can be served by your web
> server directly.
>
> All of the above leads to this simple fact: Swapping out web and app
> servers won't get you any significant performance gains if your application
> code is slow. I benchmarked early iterations of our application on
> Apache/Passenger, Nginx/Passenger, and Nginx/Unicorn, and they all
> performed within 1% of each other in terms of req/s.
>
> A great way to benchmark your app code is to use Passenger stand-alone,
> rather than Passenger behind Apache/Nginx. A single Passenger process will
> tell you the raw throughput of your application code, which is what you're
> really after right now. All the other factors are clouding the test data.
> I'd keep going the way you're headed: start with a request that does
> zero work, then move on to actual app code, then move into your problem
> areas. If you find dramatic differences between a single Passenger process
> on test hardware and your production server setup, you know you have an
> infrastructure stack issue. Your setup is pretty standard though, so I
> don't think you'll find that to be the case.
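For example, that setup might look like this (a sketch; the port, environment, and URL are placeholders for your own app):

```shell
# Run Passenger standalone, with no Apache/Nginx in front of it.
gem install passenger
cd /path/to/your/app
passenger start --port 3000 --environment production

# From another shell, measure the raw throughput of the app itself.
ab -n 1000 -c 10 http://127.0.0.1:3000/
```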
>
> Depending upon your Ruby version, there are different profiling tools
> available. You can use those to dig deep into the slow parts of your app.
> Also, depending upon what kind of work your app is doing, you might see
> some big gains by moving to Ruby 1.9.3. This is particularly true for date
> handling, which is literally orders of magnitude faster than 1.8 or even
> 1.9.2: https://gist.github.com/1352997.
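A quick way to spot-check that on your own interpreters, using only the stdlib (a sketch; the iteration count and date string are arbitrary):

```ruby
require 'benchmark'
require 'date'

# Time repeated Date.parse calls; run this under 1.8.7, 1.9.2,
# and 1.9.3 and compare the results.
n = 10_000
elapsed = Benchmark.realtime do
  n.times { Date.parse("2012-03-22") }
end
puts format("Date.parse x%d: %.3fs", n, elapsed)
```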
>
>
> On Thursday, March 22, 2012 2:59:20 PM UTC-4, kevbaker wrote:
>>
>> Thanks Dan,
>>
>> I have benchmarked a static asset and am getting about 1200 req/sec on
>> that. I am now testing a non-database request against an action that
>> just does some simple math, to establish a baseline that way.
>>
>>
>> On Thu, Mar 22, 2012 at 11:40 AM, Dan Simpson <[email protected]> wrote:
>>
>>> This might not be of much help, but I have been through the process as
>>> well.
>>>
>>> Where are you running the ab test from? SSL, latency, etc.? Run it
>>> from a server in the same region. Check keep-alive settings and make
>>> sure you set your ulimit -n high.
>>> Have you tested to see if a static asset has greater performance?
>>> Drop something in public and ab against that.  It should have a much
>>> higher throughput since rails should not be involved.
>>> Try to eliminate variables until you know where the problem is.
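Those steps might look like this (a sketch; the host and file name are placeholders):

```shell
# Raise the per-process open-file limit for this shell before load testing.
ulimit -n 65536

# Put a static file in public/ and hit it directly; Rails is bypassed,
# so this approximates the web server's ceiling.
echo 'hello' > public/perftest.txt
ab -k -n 10000 -c 100 http://your-server/perftest.txt
```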
>>>
>>> --Dan
>>>
>>> On Thu, Mar 22, 2012 at 11:29 AM, Kevin Baker <[email protected]>
>>> wrote:
>>> > I am working to optimize the speed of a Rails application we have in
>>> > production.
>>> >
>>> > It is built on Apache, Passenger, and Rails 2.3.5, running Ubuntu on
>>> > an m1.large EC2 instance (7.5 GB RAM, 4 Compute Units). We will be
>>> > switching to nginx in the near term, but have some dependencies on
>>> > Apache right now.
>>> >
>>> > I'm running load tests against it with ab and httperf.
>>> >
>>> > We are consistently seeing 45 req/sec with between 30 and 200
>>> > concurrent users for a simple API request that fetches a single record
>>> > from an indexed database table with 100k records. It seems very slow
>>> > to me.
>>> >
>>> > We are focusing on both application code optimization as well as server
>>> > configuration optimization.
>>> >
>>> > I am currently focused on server configuration optimization. Here's
>>> > what I've done so far:
>>> >
>>> > Passenger: adjusted PassengerMaxPoolSize to (90% of total memory) /
>>> > (memory per Passenger process, ~230 MB)
>>> > Apache: adjusted MaxClients to 256 max.
>>> > I tried both independently and together.
>>> >
>>> > I saw no impact on req/sec.
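For reference, the pool-size formula above works out roughly like this (a sketch using the figures quoted in the thread; 230 MB per process is the observed number above):

```ruby
# PassengerMaxPoolSize = (90% of total RAM) / (memory per Passenger process)
total_mb       = 7.5 * 1024   # m1.large: 7.5 GB RAM, in MB
per_process_mb = 230          # observed Passenger process size
pool_size = (total_mb * 0.9 / per_process_mb).floor
puts pool_size                # => 30
```

So the setting lands at roughly 30 Passenger processes.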
>>> >
>>> > Another note: this seems to scale linearly as we add servers, so it
>>> > doesn't seem to be a database issue. 2 servers was 90 req/sec, 3
>>> > servers was around 120 req/sec, etc.
>>> >
>>> > Any tips? It just seems like we should be getting better performance
>>> > from adding more processes.
>>> >
>>> >
>>> > --
>>> > SD Ruby mailing list
>>> > [email protected]
>>> > http://groups.google.com/group/sdruby
>>>
>>>
>>
>

