Interested in some feedback on this (does it sound right?), or maybe it will be of interest to others.
We are launching a new Facebook app in a couple of weeks, and we did some load testing over the weekend on our unicorn web cluster. The servers are 8-way Xeons with 24 GB of RAM. Our app ended up being primarily CPU bound.

So far the sweet spot for the number of unicorns seems to be around 40. That yielded the most requests per second without overloading the server or hitting memory bandwidth issues. The backlog is at the somaxconn default of 128; I'm still not sure if we will bump that up or not.

Increasing the number of unicorns beyond a certain point resulted in a noticeable drop in the requests per second the server could handle. I'm pretty sure the cause is the box running out of memory bandwidth: the load average and resource usage in general (except for memory) kept going down, but so did the requests per second. At 80 unicorns the requests per second dropped by more than half.

I'm going to disable hyperthreading and rerun some of the tests to see what impact that has.

Chris

_______________________________________________
Mongrel-users mailing list
[email protected]
http://rubyforge.org/mailman/listinfo/mongrel-users
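For anyone who wants to experiment with similar settings, here is a minimal unicorn config sketch. The socket path and most values are illustrative assumptions, not our actual config; only the worker count and the 128 backlog figure come from the tests described above.

```ruby
# config/unicorn.rb -- illustrative sketch, not our production config

worker_processes 40          # the sweet spot we found on 8-way Xeons w/ 24 GB

# unicorn's listen backlog defaults to 1024, but the kernel caps the
# effective value at net.core.somaxconn (128 by default on Linux), so
# bumping one without the other has no effect.
listen "/tmp/unicorn.sock", backlog: 128

timeout 30                   # kill workers stuck longer than 30s
preload_app true             # load the app in the master, fork workers from it
```

To go above a backlog of 128 you would also need something like `sysctl -w net.core.somaxconn=1024` on the box itself.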
