Memcache has other benefits, though.  At high traffic levels you're
sharing a single memcache instance, whereas your app may be replicated across
many JVM processes.  As traffic grows and the number of JVMs increases, a
per-JVM cache will see a lower hit rate.

That said, if you want a two-level cache (first check the heap, then memcache),
make sure your heap usage is bounded by using an LRU cache within the heap.
You're limited to about 100 MB of heap, and if you exceed that you'll get
OutOfMemoryErrors and your cache will end up being flushed anyway.
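
For what it's worth, here's a minimal sketch of that kind of two-level lookup
(class, constant, and key names are just illustrative, not from any particular
library): a LinkedHashMap in access order acting as the heap-level LRU, falling
through to the App Engine MemcacheService when the entry isn't local.

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

public class TwoLevelCache {
    // Illustrative cap; tune it so total heap stays well under the ~100 MB limit.
    private static final int MAX_HEAP_ENTRIES = 1000;

    // LinkedHashMap in access order works as a simple LRU: once the cap is
    // reached, the eldest (least recently accessed) entry is evicted.
    private final Map<String, Object> heapCache = Collections.synchronizedMap(
        new LinkedHashMap<String, Object>(MAX_HEAP_ENTRIES, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
                return size() > MAX_HEAP_ENTRIES;
            }
        });

    private final MemcacheService memcache = MemcacheServiceFactory.getMemcacheService();

    public Object get(String key) {
        Object value = heapCache.get(key);   // level 1: this JVM's heap
        if (value == null) {
            value = memcache.get(key);       // level 2: shared memcache
            if (value != null) {
                heapCache.put(key, value);   // promote into the heap LRU
            }
        }
        return value;
    }

    public void put(String key, Object value) {
        memcache.put(key, value);
        heapCache.put(key, value);
    }
}

The heap level only saves you the memcache round-trip for entries this
particular JVM has seen recently, which is exactly why the hit rate drops as
you add more JVMs.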

On Wed, Mar 10, 2010 at 9:40 AM, Prashant Gupta <nextprash...@gmail.com> wrote:

> yes, you are right.
>
> But, in my case, suppose each request requires 100 entities to be fetched,
> and for any two requests say 90-95 entities are common. So, if I use only
> memcache for caching, 100 memcache fetches will be required per request. Or,
> if I keep the data in the servlet env. for each request, only 5-10 memcache
> fetches will be required per request.
>

