This might work for apps with very high load - but from other posts
on this group, it looks like a server instance only needs to be idle
for a couple of seconds before it is brought down.  In an application
without such high traffic, we would essentially be swapping eviction
due to the pressure of unimportant data for eviction due to being
idle.  In a system with low traffic volume, it may be even more
important to keep the important memcache data, since you are also
contending with the cost of starting server instances.  From my
perspective, a key advantage of memcache is that its life-cycle
transcends the stop/start of server instances.

Even in a high-load application, the global dict alternative is not
ideal, as every new server instance has to populate its own cache.
This essentially means the main benefit of high-priority memcache
data - reliably fast performance on all requests - is lost.  I'm not
sure how GAE works re. stop/start of servers, but I wouldn't be
surprised if there was a fair bit of it on all apps, regardless of
traffic.

On Jul 2, 1:55 pm, Mark Wolgemuth <fuma...@gmail.com> wrote:
> It would be interesting to offer different global memcache queues with
> different policies.
>
> In your case, it seems to me if you have things you want to cache
> indefinitely, why don't you just load them in the current interpreter
> into a global dict on first hit? Effectively run your own local cache.
> Not sure how this plays out in GAE arch though, it would only help
> subsequent requests that land on the same interpreter.
>
> On Jun 30, 10:32 am, hawkett <hawk...@gmail.com> wrote:
>
> > Hi,
>
> >    I'm wondering what the memcache architecture is - is it
>
> > a) single instance with a massive pool of memory for everyone, or
> > b) many instances that are each shared by a few apps, or
> > c) an instance per app, or
> > d) something else?
>
> > I'm guessing (a) if there is an intelligent LRU policy that provides
> > separate LRU for each app, preventing a few apps evicting everyone
> > else's data, or (b) if the LRU policy is not intelligent.  (c) would
> > lead to a very poor memory utilisation.
>
> > Apart from being interested, I am wondering about memcache
> > prioritisation - i.e. where the blunt LRU scheme is not suitable.  I
> > might have a small amount of data that I really don't want evicted
> > from memcache (even if it is not used very often), and a large stream
> > of less important data for which the blunt LRU policy is fine.  In the
> > current GAE implementation my important data can be evicted by my less
> > important data.
>
> > It would be great if there was some way to prioritise memcache data -
> > in a stand-alone system, this would be achieved using multiple
> > instances, one for each priority level.
>
> > Assuming we have option (a) with intelligent LRU on a per app basis,
> > then it shouldn't be too hard to provide multiple memcache instances
> > that can be used for prioritisation of cached data.  It should be easy
> > enough to make this backward compatible by providing an optional
> > parameter to specify the instance, defaulting if it is not provided.
>
> > I can see that this would impact memory utilisation, but it would be
> > possible to make the second, non-default instance much smaller than
> > the default one - say 5% - forcing developers to think carefully about
> > using the smaller cache.   Even if no one uses the secondary instance,
> > memory utilisation can still top 95%.
>
> > Interested to understand the issues, as this is forcing me not to
> > cache my 'big data' for fear of constantly evicting my 'little data'.
> > In my situation, my 'little data' is very important for general
> > response times, which are good to keep up even if the data is not used
> > particularly often.  Essentially what I need is a cache with low
> > volatility, and a cache whose volatility isn't so important.  Cheers,
>
> > Colin
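For what it's worth, the backward-compatible API Colin describes might look something like this sketch.  It is purely hypothetical - GAE's memcache has no such parameter - and the key-prefix trick shown here only separates namespaces; a real implementation would need separate memory pools with separate LRU eviction, which prefixing alone cannot provide.

```python
# Hypothetical sketch of memcache calls taking an optional 'instance'
# parameter, defaulting to the existing behaviour when it is omitted.
# _store is a stand-in for the shared cache backend.

_store = {}

DEFAULT_INSTANCE = "default"


def _namespaced(key, instance):
    # Prefix the key with the logical instance name.
    return "%s:%s" % (instance, key)


def cache_set(key, value, instance=DEFAULT_INSTANCE):
    _store[_namespaced(key, instance)] = value


def cache_get(key, instance=DEFAULT_INSTANCE):
    return _store.get(_namespaced(key, instance))
```

Usage would then be e.g. `cache_set("config", cfg, instance="priority")` for the small, low-volatility data, while existing two-argument calls keep working unchanged.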
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.