True indeed, it will time out if you try to do them all in the same
request. The trick is to return the key of the last entity in each
batch of 1000 records as part of the response, then query again for
the next 1000 starting after that key. Still, as you mention, there is
a big bandwidth and CPU hit in doing so, so in practice it may not be
a good solution for counting; it is more suited to iterating over a
whole database.

Using the keys_only option in your query can significantly improve
performance, letting you do several 1000-record chunks in one request.
That is better, but still inefficient for counting unless the dataset
stays within a few thousand entities.
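
A minimal sketch of that pattern with the google.appengine.ext.db
API; the MyModel kind and the 1000-entity page size are assumptions,
not anything specific to your app:

from google.appengine.ext import db

class MyModel(db.Model):
    # Hypothetical kind; substitute your own model class.
    pass

def count_all():
    # Walk the whole kind in pages of 1000 keys, using the last key
    # of each page as the starting point for the next query.
    count = 0
    last_key = None
    while True:
        q = db.Query(MyModel, keys_only=True)  # keys only: much cheaper
        if last_key is not None:
            q.filter('__key__ >', last_key)    # resume after previous page
        q.order('__key__')
        keys = q.fetch(1000)
        if not keys:
            return count
        count += len(keys)
        last_key = keys[-1]                    # carry into the next request

In a real handler you would send last_key back to the client (str() on
a db.Key serializes it) and run each page in its own request, which is
exactly how you keep any single request from timing out.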

Sebastien

On Aug 12, 2:12 pm, Neal Walters <nwalt...@sprynet.com> wrote:
> Sboire,
>   There is an "offset" parm on the fetch, so yes, you can get 1000
> records at a time in a loop.
> I believe however this is discouraged because it will eat up your CPU
> quota, and potentially you could hit other limits and quotas.  Imagine
> if you had 5 million records.  Reading 1000 at a time would take 5000
> calls.  Even on a MySQL database with PHP for example, you would
> probably hit the various CPU limits per task reading so many records
> in one round-trip from the client to the server.
>
> Neal Walters
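
For comparison, the offset-based loop Neal describes would look
roughly like this (a sketch, reusing the same hypothetical MyModel);
note the datastore still walks past every skipped result on each
call, which is where the CPU quota goes:

def count_with_offset():
    # Offset paging: each fetch re-skips everything before `offset`,
    # so the per-page cost grows the deeper you go.
    count = 0
    offset = 0
    while True:
        batch = MyModel.all(keys_only=True).fetch(1000, offset=offset)
        if not batch:
            return count
        count += len(batch)
        offset += len(batch)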