I've been desperately trying to optimise my code to get rid of those
'High CPU' requests.  Some changes worked, others didn't; in the end
I've really only gained a marginal improvement.  So I'm now
considering some significant structural changes and am wondering if
anyone has tried something similar and can share their experience.

The app's pretty simple: it just geo-tags data points using the
geohash algorithm, so basically each entry in the table is the geohash
of the given lat/long with some associated metadata.  Queries are then
done by a bounding box that is also geohashed and used as datastore
query filters.  Due to some idiosyncrasies of geohash, any given query
may be split into up to 8 queries (latitude divided at 90, 0, -90;
longitude at 180, 90, 0, -90, 180), but generally the bounds fall into
only one or two divisions and therefore result in only one datastore
query.
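For context, the geohash-and-range-filter approach described above can be sketched in plain Python.  This is only a minimal sketch: `geohash_encode`, `prefix_query`, and the `u'\uffff'` upper-bound trick are my own naming and assumptions, not the app's actual code — on the datastore the range would be a pair of inequality filters on the geohash property.

```python
# Minimal standard base32 geohash encoder, plus an emulation of the
# prefix range query the post describes.  Names are illustrative.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=8):
    """Interleave longitude/latitude bisection bits into a base32 string."""
    lat_rng = [-90.0, 90.0]
    lon_rng = [-180.0, 180.0]
    bits, ch, use_lon = 0, 0, True
    out = []
    while len(out) < precision:
        rng, val = (lon_rng, lon) if use_lon else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2.0
        ch <<= 1
        if val >= mid:
            ch |= 1          # point in upper half -> bit 1
            rng[0] = mid
        else:
            rng[1] = mid     # point in lower half -> bit 0
        use_lon = not use_lon
        bits += 1
        if bits == 5:        # every 5 bits becomes one base32 character
            out.append(BASE32[ch])
            bits, ch = 0, 0
    return "".join(out)

def prefix_query(entries, prefix):
    """Emulate the datastore filter pair:
    geohash >= prefix AND geohash < prefix + u'\uffff'."""
    hi = prefix + u"\uffff"
    return [e for e in entries if prefix <= e["geohash"] < hi]
```

Because geohashes of nearby points share a common prefix, one such string-range filter per division covers one cell of the bounding box; that is what makes the "up to 8 queries" decomposition work.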

All these queries are currently run against the one large table; I'm
wondering if it would be more efficient to break it down into 8
separate tables (all containing the same type) and query only the
table relevant to the current bounding box.
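The 8-way split could be driven by a small routing function.  A minimal sketch, assuming the division is 2 latitude halves times 4 longitude quadrants (the kind names and exact quadrant layout are my assumptions, not the poster's design):

```python
def shard_index(lat, lon):
    """Map a lat/long to one of 8 divisions: 2 latitude halves
    (split at 0) times 4 longitude quadrants (split at -90, 0, 90)."""
    lat_idx = 0 if lat < 0.0 else 1
    # Clamp lon == 180 into the last quadrant so the index stays in 0..3.
    lon_idx = min(int((lon + 180.0) // 90.0), 3)
    return lat_idx * 4 + lon_idx

# Hypothetical per-shard table (kind) names; each bounding-box division
# would then be queried only against SHARD_KINDS[shard_index(lat, lon)].
SHARD_KINDS = ["GeoEntry_%d" % i for i in range(8)]
```

Since each of the up-to-8 sub-queries already lands in one division, routing it to a per-division table is a one-line lookup at query time.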

In summary, I guess what I'm trying to ask is (sorry for the ramble):
does query performance degrade significantly as the size of the
database increases?


You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appengine@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en