Thanks for the response Brian - it should be interesting to Google
that I'm not the only one choosing not to cache data for this reason,
because it means that people are using more CPU and datastore
resources than they otherwise would.  Given that the availability of
CPU (especially on a per-request basis) seems to be the biggest
bottleneck in GAE at the moment, there could be a noticeable drop in
the use of this resource if developers were able to use memcache more
effectively.  Certainly I haven't heard any complaints that the
memcache allocation is too small, so a 5% rejig as suggested would
probably go unnoticed.  Sounds like a win-win situation.  Are there
any other issues I'm not considering here?  Thanks,

Colin

On Jun 30, 4:00 pm, bFlood <bflood...@gmail.com> wrote:
> I agree Colin, there is a lot of big data that I just don't cache
> for fear of evicting the smaller, more important chunks.
>
> cheers
> brian
>
> On Jun 30, 10:32 am, hawkett <hawk...@gmail.com> wrote:
>
> > Hi,
>
> >    I'm wondering what the memcache architecture is - is it
>
> > a) single instance with a massive pool of memory for everyone, or
> > b) many instances that are each shared by a few apps, or
> > c) an instance per app, or
> > d) something else?
>
> > I'm guessing (a) if there is an intelligent LRU policy that provides
> > a separate LRU for each app, preventing a few apps from evicting
> > everyone else's data, or (b) if the LRU policy is not intelligent.
> > (c) would lead to very poor memory utilisation.
>
> > Apart from being interested, I am wondering about memcache
> > prioritisation - i.e. cases where the blunt LRU scheme is not
> > suitable.  I
> > might have a small amount of data that I really don't want evicted
> > from memcache (even if it is not used very often), and a large stream
> > of less important data for which the blunt LRU policy is fine.  In the
> > current GAE implementation my important data can be evicted by my less
> > important data.
>
> > It would be great if there were some way to prioritise memcache
> > data - in a stand-alone system, this would be achieved using
> > multiple instances, one for each priority level, as in the sketch
> > below.
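>
> > To illustrate, with the python-memcached client you might route by
> > priority along these lines (just a rough sketch - the server
> > addresses and the simple two-tier split are assumptions for
> > illustration):
>
> > import memcache
> >
> > # Two separate memcached instances, one per priority level
> > # (the addresses here are hypothetical).
> > priority_cache = memcache.Client(['127.0.0.1:11211'])  # small, stable
> > bulk_cache = memcache.Client(['127.0.0.1:11212'])      # large, churning
> >
> > def cache_set(key, value, important=False, time=0):
> >     # Important data goes to its own instance, so the bulk
> >     # stream can never evict it.
> >     client = priority_cache if important else bulk_cache
> >     return client.set(key, value, time)
> >
> > def cache_get(key, important=False):
> >     client = priority_cache if important else bulk_cache
> >     return client.get(key)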
>
> > Assuming we have option (a) with intelligent LRU on a per app basis,
> > then it shouldn't be too hard to provide multiple memcache instances
> > that can be used for prioritisation of cached data.  It should be easy
> > enough to make this backward compatible by providing an optional
> > parameter to specify the instance, defaulting if it is not provided.
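>
> > Purely as a hypothetical sketch of what that could look like (the
> > 'instance' parameter does not exist in the current memcache API -
> > this is only illustrating the proposal):
>
> > from google.appengine.api import memcache
> >
> > big_chunk = 'x' * 500000      # large, low-priority payload
> > settings = {'theme': 'dark'}  # small data that must stay cached
> >
> > # Existing calls keep working against the default instance...
> > memcache.set('big:chunk42', big_chunk)
> >
> > # ...while the optional (hypothetical) parameter selects the small
> > # secondary instance, defaulting to the main one when omitted.
> > memcache.set('settings', settings, instance='priority')
> > memcache.get('settings', instance='priority')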
>
> > I can see that this would impact memory utilisation, but it would be
> > possible to make the second, non-default instance much smaller than
> > the default one - say 5% - forcing developers to think carefully
> > about what goes in the smaller cache.  Even if no one uses the
> > secondary instance, memory utilisation can still reach 95%.
>
> > Interested to understand the issues, as this is forcing me not to
> > cache my 'big data' for fear of constantly evicting my 'little
> > data'.  In my situation, my 'little data' is very important for
> > general response times, which are worth keeping up even if the data
> > isn't accessed particularly often.  Essentially what I need is one
> > cache with low volatility, and another whose volatility isn't so
> > important.  Cheers,
>
> > Colin
