You haven't said whether you're running this in a task or in a front-end request.

If it's a front-end request it will never run to completion, as you have a
limit of 60 seconds.
Also look at your default batch size. You may find that a bigger batch size
speeds things up too.
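To see why batch size matters, here's a self-contained sketch (no App Engine SDK needed) of a stand-in datastore that serves results in pages: the `fetch_page` helper is hypothetical, standing in for the datastore RPC, but in the real db API you can pass a batch size to the query, e.g. `HitModel.all().run(batch_size=1000)`. Same entities either way; far fewer round trips with the bigger batch.

```python
ENTITIES = list(range(85000))  # stand-in for the ~85K HitModel entities

def fetch_page(cursor, batch_size):
    """Return one batch of entities plus the next cursor (None at the end).

    Hypothetical stand-in for a single datastore round trip.
    """
    page = ENTITIES[cursor:cursor + batch_size]
    nxt = cursor + batch_size
    return page, (nxt if nxt < len(ENTITIES) else None)

def iterate_all(batch_size):
    """Iterate every entity, counting round trips to the 'datastore'."""
    cursor, round_trips, seen = 0, 0, 0
    while cursor is not None:
        page, cursor = fetch_page(cursor, batch_size)
        round_trips += 1
        seen += len(page)
    return seen, round_trips

print(iterate_all(20))    # small batches: (85000, 4250) round trips
print(iterate_all(1000))  # bigger batches: (85000, 85) round trips
```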

T


On Wednesday, October 8, 2014 1:50:44 AM UTC+8, Joshua Smith wrote:
>
> Yup, cursors are a good workaround. Here’s my fix…
>
>     hcur = None        # datastore cursor from the previous chunk
>     keepGoing = True
>     while keepGoing:
>       count = 0
>       hall = HitModel.all()
>       if hcur:
>         hall.with_cursor(start_cursor=hcur)  # resume where the last chunk stopped
>       keepGoing = False
>       for h in hall:
>         ...my processing stuff...
>         count += 1
>         if count == 20000:  # stop before the query times out
>           hcur = hall.cursor()
>           keepGoing = True
>           break
>
>
> On Oct 7, 2014, at 11:38 AM, PK <p...@gae123.com> wrote:
>
> I have seen this error as well and had to change my code. My situation is 
> python 2.7/HRD/db.Query() in a module with manual scaling.
>
> q = db.Query(…)
> …
> for ent in q.run():
>      do stuff
>
> The iteration goes well for a large number of entities and then gives up 
> in a similar way. Something seems to time out for long-lived queries. 
> Needless to say, it works fine on the dev server, but there of course I do 
> not have so much data.
>
> Breaking it down with cursors or equivalent works fine, and this is what I 
> am doing as a workaround. I did not even bother to check whether there is an 
> issue filed for it, but I would happily star it if there is one.
>
> PK
> http://www.gae123.com
>
> On October 7, 2014 at 8:26:11 AM, Joshua Smith (mrjoshu...@gmail.com) wrote:
>
> I have an app that once a day does a big data processing task.
>
> Every now and then it would throw a datastore timeout error. But now it’s 
> throwing them constantly.
>
> I thought maybe my data had tripped over some limit on how much you can 
> read, but I just added some instrumentation and it's reading less than 
> half of the entities. If I were tripping over an undocumented limit, I'd 
> think it would read almost all of them (since only a few get added each 
> day).
>
> Basically, the code is simply:
>
>     for h in HitModel.all():
>       # ... collect up info about h ...
>
> and there are about 85K HitModel objects in the database. It’s dying after 
> reading 35,000 of them (which takes about a minute).
>
> It’s on HR data store. Still on Python 2.5.
>
> App ID is “kaon-log”
>
> The error I’m getting is:
>
> 2014-10-07 11:16:46.925
>
> The datastore operation timed out, or the data was temporarily unavailable.
> Traceback (most recent call last):
>   File "/base/data/home/runtimes/python/python_lib/versions/1/google/appengine/ext/webapp/_webapp25.py", line 714, in __call__
>     handler.get(*groups)
>   File "/base/data/home/apps/s~kaon-log/33.379217403803985923/main.py", line 648, in get
>     for h in HitModel.all():
>   File "/base/data/home/runtimes/python/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 2326, in next
>     return self.__model_class.from_entity(self.__iterator.next())
>   File "/base/data/home/runtimes/python/python_lib/versions/1/google/appengine/datastore/datastore_query.py", line 3091, in next
>     next_batch = self.__batcher.next_batch(Batcher.AT_LEAST_OFFSET)
>   File "/base/data/home/runtimes/python/python_lib/versions/1/google/appengine/datastore/datastore_query.py", line 2977, in next_batch
>     batch = self.__next_batch.get_result()
>   File "/base/data/home/runtimes/python/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 612, in get_result
>     return self.__get_result_hook(self)
>   File "/base/data/home/runtimes/python/python_lib/versions/1/google/appengine/datastore/datastore_query.py", line 2710, in __query_result_hook
>     self._batch_shared.conn.check_rpc_success(rpc)
>   File "/base/data/home/runtimes/python/python_lib/versions/1/google/appengine/datastore/datastore_rpc.py", line 1333, in check_rpc_success
>     raise _ToDatastoreError(err)
> Timeout: The datastore operation timed out, or the data was temporarily 
> unavailable.
>
>
> Any ideas?
>
> (Breaking this up into multiple tasks would be really hard.)
>
>
> -Joshua
>
> --
> You received this message because you are subscribed to the Google Groups 
> "Google App Engine" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to google-appengine+unsubscr...@googlegroups.com.
> To post to this group, send email to google-a...@googlegroups.com.
> Visit this group at http://groups.google.com/group/google-appengine.
> For more options, visit https://groups.google.com/d/optout.
>
>
>

