Earlier this morning I had a situation where datastore reads were timing out. That's okay, and expected, given that I use an M/S datastore. However, the timeouts were on the order of 50 seconds, which caused nearly 30 front-end instances to be spawned. My usual number of active front-end instances at that time of day is about 5, occasionally peaking at 15. The condition lasted only three minutes or so, so the cost impact was minimal. But I can imagine that if it lasted an hour or more, I would rack up significant costs for as long as the downtime persisted. I'm okay with such downtimes as long as they only mean my customers can't access my site; if they also lead to unnecessary increases in costs, then that calls for further optimization.

So, my loaded question is: how can I handle this with Python 2.5? Is Python 2.7 the only answer? I imagine Python 2.7 will help because a front-end instance can process other requests while it waits for datastore ops to complete. But are there other ways to set specific timeouts on datastore operations? If these operations are taking too long, I'd rather just return an error to the user instead of letting my front-end wait indefinitely.
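To make the intent concrete, here's a rough sketch of the behavior I'm after. I'm not sure whether db.create_rpc()'s deadline parameter is the right mechanism on the 2.5 runtime, and the model and handler below are made up purely for illustration:

    from google.appengine.ext import db
    from google.appengine.ext import webapp

    class Greeting(db.Model):
        # Hypothetical model, just for illustration.
        content = db.StringProperty()

    class MainPage(webapp.RequestHandler):
        def get(self):
            # Give the datastore at most 5 seconds instead of the default deadline.
            rpc = db.create_rpc(deadline=5, read_policy=db.EVENTUAL_CONSISTENCY)
            try:
                greetings = Greeting.all().fetch(20, rpc=rpc)
            except db.Timeout:
                # Datastore is taking too long: fail fast with an error
                # instead of tying up the front-end instance.
                self.error(503)
                self.response.out.write('Datastore is slow right now; please try again later.')
                return
            for g in greetings:
                self.response.out.write(g.content + '<br>')

If something along these lines works on Python 2.5, that would already cap how long a single request can hang on the datastore, even if it doesn't help with concurrency the way 2.7 would.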
