Hi again,

I forgot one point: the overhead is pretty low for the end user. There is
no delay due to additional I/O, just some processing (microseconds?) to
schedule the async request. The user doesn't see the possible delay of
step (e), i.e. the I/O not being finished yet when you try to retrieve it.

So I would see it as acceptable on a per-user basis.
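
For reference, here is a rough sketch in Java of what steps (b) and (e)
could look like with the low-level async datastore API. The class name
ConfigCache, the "Config" entity kind, the check interval and the
compile() step are all placeholders I made up, and thread-safety and
error handling are glossed over; the point is only to show where the
scheduling and the retrieval sit relative to the user request.

import java.util.concurrent.Future;

import com.google.appengine.api.datastore.AsyncDatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.KeyFactory;

public class ConfigCache {

    // pace of (b)/(e), i.e. the time_of_last_request idea mentioned below
    private static final long MIN_CHECK_INTERVAL_MS = 60 * 1000;

    private static volatile Object compiledConfig;  // the in-memory structure
    private static long lastCheckMillis = 0;
    private static Future<Entity> pendingFetch;     // outstanding async get

    // (b) call at the start of the user request: it only schedules the I/O
    public static void scheduleRefresh() {
        long now = System.currentTimeMillis();
        if (pendingFetch != null || now - lastCheckMillis < MIN_CHECK_INTERVAL_MS) {
            return; // throttled: nothing to do this time
        }
        lastCheckMillis = now;
        AsyncDatastoreService ds = DatastoreServiceFactory.getAsyncDatastoreService();
        pendingFetch = ds.get(KeyFactory.createKey("Config", "current"));
    }

    // (e) call after the response has been written: absorb the result
    public static void applyRefreshIfReady() {
        if (pendingFetch == null) {
            return;
        }
        try {
            Entity configEntity = pendingFetch.get(); // may block briefly if not done
            compiledConfig = compile(configEntity);   // rebuild the in-memory structure
        } catch (Exception e) {
            // log and keep the old config
        } finally {
            pendingFetch = null;
        }
    }

    public static Object getCompiledConfig() {
        return compiledConfig;
    }

    private static Object compile(Entity configEntity) {
        // placeholder for the expensive "compile" step
        return configEntity.getProperties();
    }
}

You would call scheduleRefresh() at the top of your request handler and
applyRefreshIfReady() right before returning, once the response is written.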

On a globally aggregated scale, this may have a cost (CPU + I/O to
retrieve the data) if you schedule it billions of times a day, but in
that case you probably have the revenue stream to incur this cost
anyway.

After re-reading my message, I can even propose something simpler: do a
regular synchronous I/O (and construct your in-memory structure), but
only after you have returned the result of the current request. You would
achieve the same result: the next user is served with the new config.
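
A minimal servlet sketch of that ordering (reloadConfigIfChanged() and
handleWithCurrentConfig() are empty placeholders I made up for the
synchronous get + "compile" and for the real request processing; I
haven't verified whether App Engine actually releases the response to
the user before doGet() returns, so that point is worth checking):

import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class UserRequestServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // regular processing with the possibly old config
        resp.setContentType("text/plain");
        resp.getWriter().println(handleWithCurrentConfig(req));
        resp.getWriter().flush();

        // only now pay for the synchronous datastore read + "compile"
        reloadConfigIfChanged();
    }

    private String handleWithCurrentConfig(HttpServletRequest req) {
        return "..."; // placeholder for the real work
    }

    private void reloadConfigIfChanged() {
        // placeholder: synchronous datastore get + rebuild of the
        // in-memory structure would go here
    }
}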

regards

didier

On Jan 13, 7:02 am, Didier Durand <durand.did...@gmail.com> wrote:
> Hi Ken,
>
> To my understanding also, addressing a specific instance (or each and
> every one of the running instances) to make it "do something" is not
> easily possible (impossible?) in App Engine: I tried for other purposes
> and could not achieve it. I didn't find any solution either: cron jobs
> and queued tasks don't give you any guarantee of where they execute.
>
> What you have to do to make an instance do something is stimulate it
> from "the inside". Additionally, in your case, you want to minimize the
> impact on users.
>
> So, what I would suggest is the use of the new async datastore API:
>    - a) your user request starts in a given instance
>    - b) you start an async query to the datastore at a known place to
> obtain the new config if it has been changed
>    - c) you do the regular processing corresponding to the user
> request (with the possibly old config)
>    - d) you return the response to the user
>    - e) you obtain the async response from the datastore and update
> your in-memory config (cache, singleton, etc.) of the current instance
> accordingly
>    - f) the next user request will use the updated config.
>
> Of course, you can adjust the frequency of (b) and (e) by adding a
> time_of_last_request field to your singleton and doing the check at the
> pace you wish.
>
> Your question interests me too, so I look forward to seeing if people
> come up with better proposals that I can use myself!
>
> regards
> didier
>
> On Jan 12, 10:16 pm, "b2csand...@kentraub.com"
> <b2csand...@kentraub.com> wrote:
> > I've got an application that works as follows.
>
> > The main thing it does is handle user requests.  Handling a user
> > request entails doing a bunch of processing, then finally returning
> > the result to the user.  It is important that the total time to
> > process a user request be as small as possible.
>
> > There is also a configuration interface used by admins.  The admin can
> > make a change to the configuration that affects what processing is
> > done on user requests as described above.  The information stored in
> > the config database has to be "compiled" into a fairly elaborate
> > in-memory structure that handles the user request processing;
> > computing this in-memory structure from the configuration info takes
> > time (much longer than processing a user request).
>
> > It is a goal that the time to "compile" the configuration info into
> > the in-memory structure *not* be on the critical path of processing a
> > user request.
>
> > Here's the problem.  When the admin changes the config info, it's easy
> > enough to update the in-memory structure for the instance that
> > happened to process the admin's request.  But in order to avoid start-
> > up latency in processing user requests, I've got three "always on"
> > instances (which is the minimum number of always on instances -- you
> > can't ask for just one).  How can I get the other instances to realize
> > that the config has changed and that their in-memory structure needs
> > to be recalculated?
>
> > Non-solution:  on each user request, check the database to see if the
> > config has changed.  This is not acceptable, because if the database
> > *has* changed then that user request will incur the latency to update
> > the in-memory structure.
>
> > Non-solution:  just like the previous non-solution, but use a
> > memcache.  I believe this has the same latency problem, just delayed
> > until after the memcache entry expires.
>
> > Lousy solution:  redeploy the app after a config change.  That works
> > because it restarts all instances, but it defeats the purpose of
> > making the config something that can be changed by an admin through UI
> > screens rather than wired into the code.
>
> > If I could direct a request to *all* deployed instances (or,
> > equivalently, to a specific instance coupled with the ability to
> > enumerate all of the instance IDs), then it would be easy for me to
> > "kick" each instance after the config change.  But as far as I know,
> > there's no way to do that.
>
> > I'm sure others have encountered a similar scenario.  Any suggestions?
>
> > Thanks in advance.
>
> > Ken Traub
>
>
