I think this doesn't strictly relate to the OP's specific questions,
but I just wanted to brain-dump some strategies I'm considering for my
next app to minimise front-end instances.

Requests to my app will, more or less, be doing one of two things:
modifying/creating data, or reading data.

The first type of request, the write, will be put on a task queue, and
the handler will return immediately.

The queue will be handled by a backend instance. That costs a bit
upfront, but the key advantage is that I get a lot of control over the
rate at which my 'write queue' is processed: App Engine won't spin up
new front-end instances to handle it for me, and I decide how fast,
and at what cost, my writes are applied. There may be more of a lag
before writes show up in the datastore, but I think my app can
tolerate that.
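To make that write path concrete, here's a toy sketch in plain Python. A deque stands in for the task queue and a dict for the datastore; none of these names are App Engine APIs (there it would be the Task Queue API plus a backend instance), so treat it as a sketch of the shape, not an implementation.

```python
import collections

# Stand-ins for the real infrastructure (illustrative names only).
write_queue = collections.deque()  # the 'write queue'
datastore = {}                     # where writes eventually land

def handle_write_request(key, value):
    """Front-end handler: enqueue the write and return at once."""
    write_queue.append((key, value))
    return "202 Accepted"  # write is pending, not yet visible

def drain_queue(max_per_batch=10):
    """Backend loop: apply queued writes at a rate we control."""
    for _ in range(max_per_batch):
        if not write_queue:
            break
        key, value = write_queue.popleft()
        datastore[key] = value

status = handle_write_request("meeting:42", {"title": "standup"})
drain_queue()
```

The point is that handle_write_request does constant work, so the front end returns immediately, while drain_queue's batch size is the knob that caps the backend's write rate (and therefore its cost).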

For read requests, I hope to use lots of caching: in-app caching
first, then memcache. I'm also thinking it might be possible to use
the edge cache to cut out a proportion of requests altogether. Say I
have a collection of data at /user/1221/activityfeed. When an (Ajax)
client accesses it, the request will either hit the edge cache (and
not trouble my instances at all) or, if it does reach my instances,
will hopefully be served predominantly from cache.

My only worry is stale data in the edge cache, but I think the read
can be turned into a two-step process. Whenever the activity feed is
updated, a new cache buster for the feed is stored (in the app cache/
memcache/datastore). A reader first asks for the URL to read from
(which includes the cache buster), then fetches that URL directly,
which the edge cache will hopefully serve if someone has already hit
that URL before.
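The two-step read can be sketched the same way: plain dicts stand in for memcache and the edge cache, and a simple counter plays the cache buster. Again, every name here is illustrative, not an App Engine API.

```python
import itertools

memcache = {}    # small: feed id -> current cache buster string
edge_cache = {}  # large: full URL -> cached response body

_busters = itertools.count(1)

def bump_buster(feed_id):
    """Run on every feed update: a new buster means a new URL."""
    memcache[feed_id] = str(next(_busters))

def get_feed_url(feed_id):
    """Step 1: cheap, tiny fetch of the current cache-busted URL."""
    return "/user/%s/activityfeed?v=%s" % (feed_id, memcache[feed_id])

def fetch(url, render_feed):
    """Step 2: repeat hits on the same URL never reach an instance."""
    if url not in edge_cache:
        edge_cache[url] = render_feed()  # only a miss costs instance time
    return edge_cache[url]

bump_buster("1221")
url = get_feed_url("1221")
body = fetch(url, lambda: "full activity feed body")
```

After a write calls bump_buster, step 1 hands out a new URL, so readers never see the stale edge-cached copy; the entry under the old URL simply ages out.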

You might wonder why bother with that first step when it could return
the activity feed directly, perhaps even from cache. But far more
cache busters than full activity feeds will fit in memcache/app cache,
and they are much faster to fetch (tying up an instance for less
time). If the proxy cache holds up, a full feed can be read from it
for free, for the cost of fetching one small string from the app
cache/memcache. This could be particularly good for shared
collections: data being read by multiple users.

Still brewing over all this, but right now that's how I plan to
approach my next app: cut out as many (large) read requests at the
proxy cache as I can, and throttle writes using a backend setup whose
cost I can control.

Still, though, I'm a little worried about Google's quietness on the
somewhat undocumented edge-cache feature!



On Sep 2, 5:59 pm, Barry Hunter <barrybhun...@gmail.com> wrote:
> On Fri, Sep 2, 2011 at 5:46 PM, Joshua Smith  wrote:
> >   Switching to HR is going to be a huge PITA, because my use case is 
> > exactly the one that "eventual consistency" screws up (user posts a 
> > meeting, then expects to see it in the list of meetings; silly users).
>
> Can't Entity groups be used to ensure consistency?
> http://code.google.com/appengine/docs/java/datastore/hr/overview.html
>
> Put all a user's meetings in a single entity group (based off the User
> entity). Then all the meetings will be shown.
>
> Alternatively,
> A trick I used a long time ago to deal with replication lag between
> mysql servers: when someone added a new item, store it in the session
> (as well as writing it to the master). Then, when you run the query
> against the slave (which might be stale), you can just tack the saved
> item onto the end. Within App Engine you would use memcache. Not sure
> if it's an appropriate fix for this issue on App Engine, but it worked
> just fine for me then (and is very little code).
>
> > -Joshua
>
> > On Sep 2, 2011, at 12:31 PM, Barry Hunter wrote:
>
> >> There is another mitigating issue, that hopefully should be making
> >> this all academic anyway.
>
> >> The $50 credit. That should cover costs for about 5 months on a *low
> >> traffic* website.
>
> >> Which in theory should be enough for multi-threading and the scheduler
> >> to be fixed*. At which point the app can return to free quotas :)
>
> >> * So that it will stick to one instance, even for a reasonable amount
> >> of traffic. I'm sure this is possible and feasible.
>
> >> (I'm tempted to sign up for the multi-threading trial, just to see what
> >> sort of QPS a multi-threaded instance can handle.)
>
> >> --
> >> You received this message because you are subscribed to the Google Groups 
> >> "Google App Engine" group.
> >> To post to this group, send email to google-appengine@googlegroups.com.
> >> To unsubscribe from this group, send email to 
> >> google-appengine+unsubscr...@googlegroups.com.
> >> For more options, visit this group
> >> at http://groups.google.com/group/google-appengine?hl=en.
>

