On Fri, Aug 21, 2009 at 12:10 PM, Bill Moseley<[email protected]> wrote:
> I'm perhaps more curious how people use multiple gets -- or really the
> design of an web application that supports this.

I use it for summing a bunch of per-minute keys.  Or getting a bunch
of object IDs from some other source, then getting all the objects in
one hop from memcached.

You're right that not all keys are on the same server, but you'll
still do n/m hops instead of n, and a good client will do the n/m hops
concurrently.
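To make the n/m point concrete, here's a toy sketch (not any particular client library) of how a multi-get of n keys is planned: keys hash to servers, so one batched request per server covers all of them. The `server_for` mapping is a stand-in for the consistent hashing real clients do.

```python
def server_for(key, servers):
    # Toy key -> server mapping; real clients use consistent hashing.
    return servers[hash(key) % len(servers)]

def plan_multiget(keys, servers):
    """Group keys by the server that owns them: one hop per group."""
    plan = {}
    for k in keys:
        plan.setdefault(server_for(k, servers), []).append(k)
    return plan

servers = ['cache1:11211', 'cache2:11211', 'cache3:11211']
plan = plan_multiget(['obj:%d' % i for i in range(100)], servers)
# len(plan) hops (at most 3) instead of 100 single gets.
```

A good client then issues those per-server batches in parallel, so the wall-clock cost is roughly one round trip.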

> So, I'm curious about the design of an application that supports gathering
> up a number of (perhaps unrelated) cache requests and fetch all at once.
> Then fill in the cache where there are cache-misses.  Am I misunderstanding
> how people use this feature?

Yeah, I'm interested in this use case, too.  I haven't actually done
it yet, but it seems like latency could be cut down by making
concurrent service-oriented requests: register a callable that returns
the source object on a cache miss, gather up all the hit-or-miss
paths, and then do a single big multi-get/function call near the end
to do all the work concurrently.

a = cache.get('x')
if not a:
    a = <expensive>
b = cache.get('y')
if not b:
    b = <expensive>

could become:
def miss_x():
    return <expensive>
def miss_y():
    return <expensive>
mux.reg('x', miss_x)
mux.reg('y', miss_y)
# get_all() actually does the concurrent network hops and expensive
# (threaded?) calls if needed.
ret = mux.get_all()
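A hypothetical fleshing-out of that mux idea, assuming a cache object with `get_multi`/`set` in the style of the Python memcached clients (`DictCache` here is just an in-process stub so the sketch runs standalone):

```python
from concurrent.futures import ThreadPoolExecutor

class DictCache:
    """In-process stand-in for a memcached client."""
    def __init__(self):
        self.d = {}
    def get_multi(self, keys):
        return {k: self.d[k] for k in keys if k in self.d}
    def set(self, k, v):
        self.d[k] = v

class Mux:
    """Register a key plus a miss-callable, then resolve everything
    in one batch: one multi-get, then concurrent miss paths."""
    def __init__(self, cache):
        self.cache = cache
        self.pending = {}   # key -> callable to run on a miss

    def reg(self, key, miss_fn):
        self.pending[key] = miss_fn

    def get_all(self):
        keys = list(self.pending)
        hits = self.cache.get_multi(keys)      # one network round trip
        misses = [k for k in keys if k not in hits]
        # Run the expensive miss paths concurrently (threaded).
        with ThreadPoolExecutor() as pool:
            for k, v in zip(misses, pool.map(lambda k: self.pending[k](), misses)):
                hits[k] = v
                self.cache.set(k, v)           # back-fill the cache
        return hits

cache = DictCache()
cache.set('x', 1)                     # 'x' is already cached
mux = Mux(cache)
mux.reg('x', lambda: 'expensive-x')   # never runs: 'x' is a hit
mux.reg('y', lambda: 'expensive-y')   # runs: 'y' is a miss
ret = mux.get_all()
# ret == {'x': 1, 'y': 'expensive-y'}, and 'y' is now back-filled
```

The names (`reg`, `get_all`) just mirror the pseudocode above; a real version would also want per-key expiry and error handling on the miss paths.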
