On Nov 17, 8:06 pm, "Barry Hunter" <[EMAIL PROTECTED]>
wrote:
> On Mon, Nov 17, 2008 at 5:52 PM, Anders <[EMAIL PROTECTED]> wrote:
>
> > On Nov 17, 6:30 pm, Jon McAlister <[EMAIL PROTECTED]> wrote:
>
> >> For any particular "key", all instances will talk to the same memcache
> >> backend. Note that we can easily have different keys hosted on
> >> different backends, though, thanks to the simplicity of the memcache
> >> API (i.e. lack of transactions). This is how we can shard one app's
> >> memcache data on to multiple machines.
>
> > Ah! That explains it. This also means that an idea I had about
> > sharding the Memcache will work. Let's say that we have a key named
> > 'indexpage'. We can then shard the Memcache by adding an index, say
> > 0..99 to the key, so instead of just accessing a single key
> > 'indexpage' we can randomly access keys with the index added to it,
> > such as: 'indexpage_32', 'indexpage_7', 'indexpage_85' etc.
>
> Before you go and implement that, do you have any evidence that
> memcache could be a bottle neck?
>
> Otherwise it sounds like a case of possible premature optimization.
> Notwithstanding the fact that, as I understand it, memcache 'shards' by
> hashing the key - but that gives no guarantee that your keys will end
> up on separate instances.
>
> And that also leads to more work, as you now have to generate your page
> 100 times (which, if you are using the cache right, is probably expensive).
>
> From my experience of Memcache (not on App Engine) - it's very quick at
> dealing out the same result multiple times. And if memcache is truly
> distributed on App Engine - and it doesn't do it already - then there is
> always the possibility of edge-caching really hot items (say on the
> machine itself), which your sharding would instantly make less
> effective. (Memcache can itself be cached - which, as Jon points out,
> the datastore can't.)
>
> I guess what I'm trying to say is that, if at all possible, you should
> leave the 'scaling' to the platform; it's only where that is not
> possible (like counters) that you should consider it yourself (as you
> say in your opening post!).
>
>
>
> --
> Barry
>
> -www.nearby.org.uk-www.geograph.org.uk-

No, I'm not even planning to use Memcache at all yet. I think caching
should only be done when actually needed; otherwise it may, as you say,
very well be premature optimization. The sharded Memcache idea was only
for the hypothetical case of truly massive traffic hitting the same key
in the Memcache - millions of users bombarding a single key at the same
time. When only one key is used, all requests are served by the same
Memcache backend, which hypothetically (maybe not in practice) could
become a performance bottleneck. Sharding the key itself could
potentially solve that, because each shard can then be served by a
different machine. But for any site smaller than, say, YouTube :-)
sharding the Memcache will probably not increase performance, and for
ordinary and even big loads it will actually hurt performance because,
as you pointed out, the page has to be generated 100 times (or whatever
the number of shards is) every time the Memcache needs to be refreshed.
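
For what it's worth, the hot-key sharding idea can be sketched in plain
Python. This is just a sketch under assumptions: the cache backend is
faked with a dict so it runs standalone (on App Engine you would use
google.appengine.api.memcache instead), and the names NUM_SHARDS,
set_sharded and get_sharded are my own invention:

```python
import random

NUM_SHARDS = 100  # hypothetical shard count; tune to the actual load

# Stand-in for a memcache client: a plain dict with get/set semantics.
# A real distributed memcache would hash each distinct key, so the
# suffixed keys may land on different backend machines.
cache = {}

def set_sharded(key, value):
    # Write the value under every shard suffix so any shard can serve it.
    for i in range(NUM_SHARDS):
        cache['%s_%d' % (key, i)] = value

def get_sharded(key):
    # Read from one shard chosen at random, spreading the read load
    # across whichever backends the suffixed keys hashed to.
    return cache.get('%s_%d' % (key, random.randrange(NUM_SHARDS)))

set_sharded('indexpage', '<html>...</html>')
print(get_sharded('indexpage'))  # prints '<html>...</html>'
```

Note that the write side makes Barry's refresh-cost point visible: every
update has to touch all NUM_SHARDS copies, so the savings on reads are
paid for on every refresh.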
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appengine@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en
-~----------~----~----~----~------~----~------~--~---
