> @dormando, great response.... this is almost exactly what I had in mind, i.e.
> grouping all of your memcached servers into logical pools so as to
> avoid hitting all of them for every request. In fact, a reasonable design, for
> a very large server installation base, would be to aim for hitting say 10-20%
> of the nodes on any given request (or even less if you can manage it).
>
>  so with the Facebook example, we know there's a point you get to where a
> high node count means all sorts of problems; in that case it was 800, I
> think (correct me if I'm wrong), proving the point that logical groupings
> should be the way to go for large pools -- in fact I would suggest groupings
> with varying levels of granularity, as long as your app keeps a simple and
> clear means of zapping down to a small cross-section of servers
> without losing any intended benefits of caching in the first place.
>
> in short, most of my anxieties have been well addressed (both theoretical and
> practical)...
>
> +1 for posting this in a wiki Dormando.

http://code.google.com/p/memcached/wiki/NewPerformance added to the bottom
of this wiki page... I tried to summarize the notes better, but speak up
with edits if something's missing or unclear.
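To make the pooling idea above concrete, here's a minimal sketch of routing keys to logical pools so a request only ever touches the handful of servers in one pool rather than the whole fleet. The pool names, the prefix-based routing rule, and the plain modulo hashing are illustrative assumptions, not anything from the wiki page; a real client would typically group by data type or service and use consistent hashing within each pool.

```python
# Sketch: route each key to a small logical pool of memcached servers,
# then hash within that pool, so no request fans out to every node.
# Pool names and server addresses are hypothetical.
import hashlib
from typing import Dict, List

POOLS: Dict[str, List[str]] = {
    "user":    ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"],
    "feed":    ["10.0.1.1:11211", "10.0.1.2:11211"],
    "default": ["10.0.2.1:11211", "10.0.2.2:11211"],
}

def pick_pool(key: str) -> List[str]:
    """Route a key to a pool by its namespace prefix (e.g. 'user:42')."""
    prefix = key.split(":", 1)[0]
    return POOLS.get(prefix, POOLS["default"])

def pick_server(key: str) -> str:
    """Pick one server inside the key's pool (simple modulo hashing here;
    a production client would use consistent hashing within the pool)."""
    pool = pick_pool(key)
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return pool[digest % len(pool)]

if __name__ == "__main__":
    for k in ("user:42", "feed:99", "misc:7"):
        print(k, "->", pick_server(k))
```

The point of the sketch is just the two-step lookup: the app narrows to a small cross-section of servers first, so the per-request node count stays low no matter how large the overall installation grows.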
