The recipe does cut down the Timeouts dramatically, but there are still a large number which seem to bypass this fix completely. A sample error log entry is attached:
Exception in request:
Traceback (most recent call last):
  File "/base/python_lib/versions/third_party/django-0.96/django/core/handlers/base.py", line 77, in get_response
    response = callback(request, *callback_args, **callback_kwargs)
  File "/base/data/home/apps/kbdlessons/1-01.339729324125102596/views.py", line 725, in newlesson
    productentity = Products.gql("where Name = :1", ProductID).get()
  File "/base/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 1564, in get
    results = self.fetch(1, rpc=rpc)
  File "/base/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 1616, in fetch
    raw = raw_query.Get(limit, offset, rpc=rpc)
  File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 1183, in Get
    limit=limit, offset=offset, prefetch_count=limit, **kwargs)._Get(limit)
  File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 1113, in _Run
    raise _ToDatastoreError(err)
Timeout

Any ideas on how to deal with this class of Timeouts?

On Jan 28, 9:48 am, phtq <pher...@typequick.com.au> wrote:
> Thanks for mentioning this recipe; it worked well in testing and we
> will try it on the user population tomorrow.
>
> On Jan 27, 9:48 am, djidjadji <djidja...@gmail.com> wrote:
> > There is an article series about the datastore. It explains that the
> > Timeouts are inevitable and gives the reason for them: they will
> > always be part of Bigtable and the Datastore of GAE.
> >
> > The only solution is a retry on EVERY read: both gets by id/key and
> > queries. If you do that, very few reads will result in a Timeout.
> > I wait first 3 and then 6 secs between each request, and I log each
> > Timeout. If a read still times out after 3 tries, I raise the
> > exception.
> >
> > The result is very few final read Timeouts. The log shows frequent
> > requests that need a retry, but most of them will succeed with the
> > first.
> >
> > For speed, fetch the Static content object by key_name, where the
> > key_name is the file path.
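For anyone else trying this, djidjadji's retry recipe might look roughly like the sketch below. The names (`read_with_retries`, the local `Timeout` class) are illustrative, not from the SDK; the `Timeout` class stands in for `google.appengine.ext.db.Timeout` so the sketch is self-contained:

```python
import time

class Timeout(Exception):
    """Stand-in for google.appengine.ext.db.Timeout (illustrative only)."""

def read_with_retries(op, delays=(3, 6)):
    """Call op(); on Timeout, wait 3s and retry, then wait 6s and retry.

    If the third attempt also times out, the exception propagates to
    the caller, as in the recipe. In a real app, log each Timeout
    inside the except clause.
    """
    for delay in delays:
        try:
            return op()
        except Timeout:
            # A real handler would log the Timeout here before sleeping.
            time.sleep(delay)
    return op()  # third and final attempt; let Timeout escape
```

Wrapping the query from the traceback above would then look something like `read_with_retries(lambda: Products.gql("where Name = :1", ProductID).get())`.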
> > > 2010/1/26 phtq <pher...@typequick.com.au>:
> > > > Our application error log for the 26th showed around 160 failed http
> > > > requests due to timeouts. That's 160 users being forced to hit the
> > > > refresh button on their browser to get a normal response. A more
> > > > typical day has 20 to 60 timeouts. We have been waiting over a year
> > > > for this bug to get fixed, with no progress at all. It's beginning
> > > > to look like it's unfixable, so perhaps Google could provide some
> > > > workaround. In our case, the issue arises because of the 1,000-file
> > > > limit. We are forced to hold all our .js, .css, .png, .mp3, etc.
> > > > files in the database and serve them from there. The application is
> > > > quite large and there are well over 10,000 files. The Python code
> > > > serving up the files does just one DB fetch and has about 9 lines of
> > > > code, so there is no way it can be magically restructured to make
> > > > the Timeout go away. However, putting all the files on the app
> > > > engine as real files would avoid the DB access and make the problem
> > > > go away. Could Google work towards removing that file limit?
> > > >
> > > > --
> > > > You received this message because you are subscribed to the Google Groups
> > > > "Google App Engine" group.
> > > > To post to this group, send email to google-appeng...@googlegroups.com.
> > > > To unsubscribe from this group, send email to
> > > > google-appengine+unsubscr...@googlegroups.com.
> > > > For more options, visit this group at
> > > > http://groups.google.com/group/google-appengine?hl=en.
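The one-fetch file-serving code phtq describes might be sketched as below. All names here are hypothetical (the original ~9 lines were not posted), and a dict stands in for the datastore entity keyed by file path; in the real app the lookup would be a `get_by_key_name(path)` call, wrapped in the retry recipe from djidjadji's reply:

```python
import mimetypes

# Stand-in for the static-content entities keyed by file path.
# In the real app this would be a db.Model fetched by key_name.
FILE_STORE = {}

def serve(path):
    """Return (content_type, body) for a stored file, or None if absent.

    One lookup per request, mirroring the single DB fetch in the
    original handler; content type is guessed from the file extension.
    """
    body = FILE_STORE.get(path)
    if body is None:
        return None
    ctype = mimetypes.guess_type(path)[0] or 'application/octet-stream'
    return (ctype, body)
```

Because the key_name is the file path itself, the lookup is a direct get rather than a query, which is both faster and less timeout-prone than the GQL query in the traceback above.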