Hi all, I had similar issues with a recently deployed application used with a Google Apps account. I was surprised to see timeout errors on simple requests (insert/delete) against a database with fewer than 10,000 rows, and I wonder how the whole "scalability" story holds up if timeouts become a bottleneck from the start... The worst part I have noticed is that a request that times out sometimes actually performs the action I asked for (while still reporting the usual timeout error), which complicates how I should handle the data. Can I rely on the is_saved() method to avoid an unnecessary retry when a timeout happens?
If anyone has wrapped db.Model with this implementation, please lead the way...

Bruno

On May 22, 10:04 pm, johntray <john.tur...@gmail.com> wrote:
> Has anyone suggested modifying db.get(), db.put(), and db.delete() to
> include an optional argument specifying a number-of-retries-on-timeout?
> I can wrap these functions in my own logic, but I would think nearly
> every application could use this.
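For what it's worth, here is a minimal sketch of the kind of wrapper johntray describes. It is an assumption, not an official API: `Timeout` below is a stand-in class so the example is self-contained, and in a real app you would catch `google.appengine.ext.db.Timeout` instead. Note the caveat Bruno raises still applies: if the datastore actually performed the write before timing out, a blind retry of an insert can duplicate it, so check `is_saved()` (or re-fetch by key) where that matters.

```python
class Timeout(Exception):
    """Stand-in for google.appengine.ext.db.Timeout (assumption:
    swap in the real exception class inside App Engine)."""

def with_retries(fn, retries=3):
    """Wrap fn so that Timeout errors are retried up to `retries`
    times; the last Timeout is re-raised to the caller."""
    def wrapper(*args, **kwargs):
        for attempt in range(retries):
            try:
                return fn(*args, **kwargs)
            except Timeout:
                # Caution: the failed call may still have taken effect
                # server-side, so retries of inserts are not always safe.
                if attempt == retries - 1:
                    raise
    return wrapper

# Hypothetical usage (names assumed, not from the SDK):
#   put_with_retries = with_retries(db.put, retries=5)
#   put_with_retries(entity)
```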