Since starting this thread, I've come up with something similar to what
you are suggesting here. Rather than store metadata about both lat and
lng, I store a ListProperty of lngs at differing degrees of accuracy,
and still use an inequality filter for latitude. So if you have a point
with Lat, Lng = 12.12345678, 40.12345678, the stored "lng-list"
property would look like: [40, 40.1, 40.12, 40.123]. This way, based on
zoom level (actually just based on the size of the viewport, calculated
when the query is run), I can query for all of the strips of longitude
within my viewport, and still include a minLat < lat < maxLat filter.
This looks like a single query up front, but runs as multiple queries
behind the scenes, since the viewport spans many longitude strips and
the lng-list is therefore compared against an array of values. I decide
how to divide up the viewport based on the difference in longitude
between its edges, and cap the number of strips at 10 (falling back to
wider strips to stay under that cap). Here is the code for "encoding" a
longitude and for generating the array used by the query:

import decimal
from decimal import Decimal as d
import logging

def LngList(lng):
  """Returns a list of the longitude truncated to 0-3 decimal places,
  as strings."""
  return [__truncate(lng, prec) for prec in range(4)]

def ViewportLngs(minLng, maxLng):
  """Returns the list of truncated lngs that exist in the viewport, at
  the right degree of precision."""
  vislngs = []
  dif = abs(maxLng - minLng)
  if minLng > maxLng:
    # Viewport straddles the 180th meridian, so split it at the dateline.
    logging.info("minLng: %s maxLng: %s" % (minLng, maxLng))
    dif = (180 - minLng) + abs(-180 - maxLng)
    if dif < .01:
      vislngs = __makelist(3, minLng, 180) + __makelist(3, -180, maxLng)
    elif dif < .1:
      vislngs = __makelist(2, minLng, 180) + __makelist(2, -180, maxLng)
    elif dif < 1:
      vislngs = __makelist(1, minLng, 180) + __makelist(1, -180, maxLng)
    else:
      logging.error("Viewing both sides of the dateline, zoomed out.")
  elif dif < .01:
    vislngs = __makelist(3, minLng, maxLng)
  elif dif < .1:
    vislngs = __makelist(2, minLng, maxLng)
  elif dif < 1:
    vislngs = __makelist(1, minLng, maxLng)
  else:
    # Zoomed-out case: more than a degree of longitude is visible.
    logging.error("ViewportLngs is being used on a viewport with a range larger than 1 degree.")
  return vislngs

def __makelist(level, minLng, maxLng):
  """Actually populates the list of lngs, based on the conditions
  passed in, and accounts for extreme near-zero cases."""
  inc = 10 ** -level
  lng = __truncate(minLng, level)
  maxL = __truncate(maxLng, level)
  vislngs = [lng]
  while lng != maxL:
    # Note: __truncate returns strings, so compare numerically here
    # (the bare "lng == 0" string-to-int comparison was always False).
    if float(lng) == 0:
      # Crossing zero at this precision collapses the sign; retry one
      # level finer so the strips stay distinct.
      return __makelist(level + 1, minLng, maxLng)
    lng = __truncate(float(lng) + inc, level)
    vislngs.append(lng)
  return vislngs

def __truncate(num, prec):
  """Truncates the float passed in to _prec_ places after the decimal,
  and returns it as a string."""
  result = unicode(d(str(num)).quantize(d("1e%d" % (-prec)),
                                        decimal.ROUND_DOWN))
  # Normalize negative zero ('-0', '-0.0', '-0.00', ...) so string
  # comparisons against the positive form match. (The old version
  # only handled '-0.0', and its '-0.00' branch used == instead of =.)
  if result.startswith(u'-') and float(result) == 0:
    result = result[1:]
  return result
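
To make the encoding concrete, here is a quick self-contained sketch of
what the truncation produces (it re-implements __truncate with plain
str instead of unicode so it runs standalone):

```python
from decimal import Decimal, ROUND_DOWN

def truncate(num, prec):
    """Truncate num to prec decimal places, returned as a string."""
    s = str(Decimal(str(num)).quantize(Decimal("1e%d" % -prec), ROUND_DOWN))
    # Normalize negative zero so strip strings compare equal.
    if s.startswith('-') and float(s) == 0:
        s = s[1:]
    return s

def lng_list(lng):
    """The stored lng-list: the longitude at 0-3 decimal places."""
    return [truncate(lng, prec) for prec in range(4)]

print(lng_list(40.12345678))   # ['40', '40.1', '40.12', '40.123']
print(truncate(-0.004, 2))     # '0.00', not '-0.00'
```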

This method seems to perform pretty well, though I don't yet have gobs
of data. I am able to run a query that returns 300-400 markers and load
them all up in less than a second. Unfortunately, this doesn't account
for Polylines and Polygons. We're still working on an elegant solution
for those, probably similar to your method of storing the grid boxes
each one overlaps. Exploding indexes, here I come.
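
For anyone curious what the query side looks like, here is a rough
in-memory stand-in (the marker data and names are made up): an IN-style
filter on the list property fans out into one scan per strip value,
which is the "multiple queries behind the scenes" I mentioned.

```python
def query_viewport(markers, strips, min_lat, max_lat):
    """Simulate the fan-out: one sub-query per longitude strip,
    each also applying the latitude inequality filter."""
    results = []
    for strip in strips:                       # one sub-query per strip
        for m in markers:
            if strip in m['lnglist'] and min_lat < m['lat'] < max_lat:
                results.append(m)
    return results

markers = [
    {'lat': 12.1234, 'lnglist': ['40', '40.1', '40.12', '40.123']},
    {'lat': 12.5000, 'lnglist': ['41', '41.0', '41.00', '41.000']},
]
strips = ['40.12', '40.13']                    # viewport ~0.02 deg wide
print(query_viewport(markers, strips, 12.0, 13.0))  # only the first marker
```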

Nevin

On Sep 10, 10:11 am, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
wrote:
> Thanks for the follow-up.
>
> I guess one could still use the "geotoken" (my own term) idea, but
> only save one token (map-graph-square) per saved point.  Then, when
> someone searches on a given point, tokenize the search point, search
> for anything that shares the same token, and then go back for for data
> in the surrounding tokens.  In my example, that means 9 DB calls
> instead of one, but that's pretty manageable...  I don't know how it
> scales if you want to have greater variability in your search range,
> but you could either define your tokens to cover more ground, or
> increase the number of tokens searched.
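
If I understand the geotoken idea right, it reads roughly like this
(the grid size and names here are my own guesses, not yours):

```python
def token(lat, lng, cell=0.01):
    """Map a point to its grid square: one 'geotoken' per saved point."""
    return (int(lat // cell), int(lng // cell))

def search_tokens(lat, lng, cell=0.01):
    """The search point's square plus the 8 surrounding squares."""
    row, col = token(lat, lng, cell)
    return [(row + dr, col + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)]

print(len(search_tokens(12.1234, 40.1234)))   # 9 lookups per search
```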
>
> Looking at the current quota limits, with 2.5 million datastore
> requests per day, I could handle 200,000 searches per day (200,000 * 9
> = 1.8 million), which leaves room for 50,000 posts per day (one
> datastore operation per post), all under quota.  Once this turns into
> a paid service, if I have to pay for more than that, I think I'll be
> happy to do so.
>
> -B
>
> On Sep 9, 2:52 pm, uprise78 <[EMAIL PROTECTED]> wrote:
>
> > The exploding index issue can occur with list properties readily.
>
> > "Properties with multiple values, such as using a list value or a
> > ListProperty model, store each value as a separate entry in an index."
>
> > With a 9 element list you just created 9 index entries.  If you
> > expand the box to get more resolution, such as the 13 box example,
> > you will then have 13 index entries on that property alone.  You can
> > see where this is going.  It gets compounded if you have even one
> > other list property in your model.  If you had another list property
> > with just 2 values you would have a total of 20 index entries (18
> > for the 2 lists and then 2 for the basic indexing).
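
If I'm reading that right, the arithmetic sketches out like this (the
cross-product model of a composite index over list properties is my
assumption about what's being described):

```python
def composite_index_entries(list_sizes, basic=2):
    """Entries one composite index contributes across several list
    properties (the cross product of their sizes), plus the basic
    per-entity index rows."""
    entries = 1
    for size in list_sizes:
        entries *= size
    return entries + basic

print(composite_index_entries([9]))     # 9 + 2 = 11
print(composite_index_entries([9, 2]))  # 18 + 2 = 20, as described above
```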
>
> > This causes a number of issues.  The two you pointed out: updating
> > and data storage.  Updating has to hit each and every index entry,
> > so updates would be a huge bottleneck if the model is updated often.
> > The other issue (and correct me if I'm wrong on this, anyone with
> > more experience) is that because list values are stored as separate
> > index entries, Bigtable has to do multiple queries to return your
> > result set.
>
> > A bit of testing would probably be a good idea to see what the speed
> > is.  If you have a small dataset, you definitely won't have an issue.
>
> > Check out this:  
> > http://code.google.com/appengine/docs/datastore/queriesandindexes.htm...