On 7/1/2010 9:37 AM, dormando wrote:

> Given the title, the overtly academic content, and the lack of serious
> discussion as to the application of such knowledge, we end up with stupid
> threads like this. Imagine how many people are just walking away with that
> poor idea of "holy shit I should use REDIS^WCassandra^Wetc because
> memcached doesn't scale!" - without anyone to inform them that redis isn't
> even multithreaded and cassandra apparently sucks at scaling down. Google
> is absolutely loaded with people benchmarking various shit vs memcached
> and declaring that "memcached is slower", despite the issue being in the
> *client software* or even the benchmark itself.
>
> People can't understand the difference! Please don't confuse them more!
>
> There are *many* more interesting topics with scalability gotchas, like
> the memory reallocation problem that we're working on.

I have the cache spread across about 20 servers per site, with those servers also doing some other work, and I could add more if the performance of a single server ever became an issue. The question I find more interesting, and probably a lot harder to nail down, is what kind of performance impact there is when one or a few of the servers go down at a busy time. Ignoring the impact on the origin servers, how long does it typically take a client to figure out that a server is gone, and what happens if that client is supposed to be fielding 100k requests/second at the time? And do all the different clients rebalance in the same way once they notice a configured server is dead?
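
To put a rough number on the rebalancing part, here is a little sketch in plain Python (not modeled on any particular memcached client; the server names and key counts are made up) comparing how many keys get remapped when one of 20 servers drops out, under naive modulo hashing versus a ketama-style consistent hash:

# Hypothetical sketch (no real client library involved): how many keys
# land on a different server when one of 20 cache nodes disappears,
# comparing plain modulo hashing against a ketama-style consistent hash.
import bisect
import hashlib

SERVERS = ["cache%02d:11211" % i for i in range(20)]   # made-up names
KEYS = ["user:%d" % i for i in range(100000)]          # made-up key space

def h(value):
    # 32-bit hash derived from md5, roughly in the spirit of ketama
    return int(hashlib.md5(value.encode()).hexdigest()[:8], 16)

def modulo_map(servers):
    return {k: servers[h(k) % len(servers)] for k in KEYS}

def consistent_map(servers, vnodes=160):
    # Build a ring of virtual nodes, then walk clockwise from each key's hash.
    ring = sorted((h("%s-%d" % (s, i)), s) for s in servers for i in range(vnodes))
    points = [p for p, _ in ring]
    def server_for(key):
        idx = bisect.bisect(points, h(key)) % len(ring)
        return ring[idx][1]
    return {k: server_for(k) for k in KEYS}

def moved(before, after):
    return sum(1 for k in KEYS if before[k] != after[k]) / float(len(KEYS))

if __name__ == "__main__":
    survivors = SERVERS[:-1]   # one of the 20 servers goes away
    print("modulo hashing:     %.0f%% of keys remap"
          % (100 * moved(modulo_map(SERVERS), modulo_map(survivors))))
    print("consistent hashing: %.0f%% of keys remap"
          % (100 * moved(consistent_map(SERVERS), consistent_map(survivors))))

With modulo hashing roughly 95% of the keys move, so losing one box effectively cold-starts the whole cache; with consistent hashing only about 1 key in 20 moves. Which of those behaviors you get depends entirely on how the client distributes keys.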

In real-world situations I think client performance matters much more than server performance, especially since the client is what has to deal with the load-balancing and failover logic, and you can always just add more servers if you need them.
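
As for how long it takes a client to notice a dead server, most clients do something along these lines, though the knobs and defaults vary, which is a big part of why they don't all rebalance the same way. A hypothetical sketch, not modeled on any specific client:

# Hypothetical failover sketch: treat a server as dead after
# `failure_limit` consecutive errors, stop sending it traffic for
# `retry_after` seconds, then probe it again. Real clients expose
# similar knobs under different names and defaults.
import socket
import time

class ServerState:
    def __init__(self, host, port, failure_limit=3, retry_after=30.0):
        self.addr = (host, port)
        self.failure_limit = failure_limit
        self.retry_after = retry_after
        self.failures = 0
        self.dead_until = 0.0

    def usable(self):
        # A dead server is skipped until its retry window expires.
        return time.time() >= self.dead_until

    def record_error(self):
        self.failures += 1
        if self.failures >= self.failure_limit:
            self.dead_until = time.time() + self.retry_after
            self.failures = 0

    def record_success(self):
        self.failures = 0

def try_get(state, key, timeout=0.2):
    # Minimal ASCII-protocol "get"; any socket error counts as a failure.
    if not state.usable():
        return None
    try:
        with socket.create_connection(state.addr, timeout=timeout) as s:
            s.sendall(("get %s\r\n" % key).encode())
            data = s.recv(4096)
            state.record_success()
            return data
    except OSError:
        state.record_error()
        return None

The painful window is between the first timeout and the server being marked dead: until the failure limit trips, every request routed to the dead node eats the full connect timeout, which at 100k requests/second hurts far more than any difference in raw server throughput. That's why the timeout and failure-limit settings on the client end up mattering more than the server does.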

--
  Les Mikesell
   lesmikes...@gmail.com
