[google-appengine] Re: Support for Decimal in Datastore?

2008-12-04 Thread lock

There is the FloatProperty; pretty sure that's what you're after.

http://code.google.com/appengine/docs/datastore/typesandpropertyclasses.html#FloatProperty

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appengine@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en
-~--~~~~--~~--~--~---



[google-appengine] Is one big table quicker than 8 smaller tables?

2008-12-04 Thread lock

I've been desperately trying to optimise my code to get rid of those
'High CPU' requests.  Some changes worked, others didn't; in the end
I've really only gained a marginal improvement.  So I'm now
considering some significant structural changes and am wondering if
anyone has tried something similar and can share their experience.

The app's pretty simple: it just geo-tags data points using the geohash
algorithm, so basically each entry in the table is the geohash of the
given lat/long with some associated metadata.  Queries are then done
by a bounding box that is also geohashed and used as datastore query
filters.  Due to some idiosyncrasies of geohash, any given query may
be split into up to 8 queries (by lat 90, 0, -90; by long 180, 90, 0,
-90, 180), but generally the bounds fall into only one or two
divisions and therefore result in only one datastore query.
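
For illustration, that split can be sketched roughly like this (a hypothetical helper, not the app's actual code; the division values are assumptions based on the description above):

```python
# Hypothetical sketch: split a bounding box at the latitude/longitude
# divisions where geohash prefixes change, so each piece can be served
# by a single datastore query.

LAT_DIVS = [0.0]                  # equator
LNG_DIVS = [-90.0, 0.0, 90.0]     # meridians where prefixes break

def split_box(south, west, north, east):
    """Return sub-boxes that each stay inside one geohash division."""
    lat_edges = [south] + [d for d in LAT_DIVS if south < d < north] + [north]
    lng_edges = [west] + [d for d in LNG_DIVS if west < d < east] + [east]
    boxes = []
    for s, n in zip(lat_edges, lat_edges[1:]):
        for w, e in zip(lng_edges, lng_edges[1:]):
            boxes.append((s, w, n, e))
    return boxes

# A box entirely inside one division needs only one query...
print(len(split_box(10, 20, 15, 25)))   # -> 1
# ...while one straddling the equator and the 0 meridian needs four.
print(len(split_box(-5, -5, 5, 5)))     # -> 4
```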

All these queries are currently conducted on the one large datastore
kind.  I'm wondering if it would be more efficient to break this one
datastore down into 8 separate tables (all containing the same type)
and query only the table relevant to the current bounding box.

In summary I guess what I'm trying to ask is (sorry for the ramble),
does the query performance degrade significantly as the size of the
database increases?





[google-appengine] Re: Is one big table quicker than 8 smaller tables?

2008-12-05 Thread lock

Thanks guys, that's what I was hoping to hear; you saved me a couple
of hours trying to prove it for myself (not to mention the frustration).
After I went away and thought about it some more, I figured there must
be some 'smarts' in the database to prevent the query time from
increasing.  Otherwise how could any database scale well...

No merge joins or IN operators in my code, so nothing to worry about
there.

After a _lot_ more testing I'm finding that query time does scale with
the number of fetched _results_, not the DB size.  During early
testing I convinced myself that increasing the DB size was slowing my
query down, when really the number of results was increasing as I
added more data, doh (it was getting late ;-) ).

The overall solution that seems to be working well for me at the
moment is to have different tables for different resolutions.  As the
size of the geometric bounds increases I switch between a few tables,
each one with a lower fidelity, therefore reducing the number of
results that can be returned.  Visually it works similarly to the
Level Of Detail techniques you see in some 3D modelling packages.



[google-appengine] Re: How far can Google take this thing?

2008-12-05 Thread lock

I honestly have no idea what Google's intentions are; I myself am very
new to web 2.0, the cloud, ajax, etc.  But in my limited experience
with app engine so far I'd say it's an excellent platform for small,
specific-purpose applications, say the type of app created by a
developer in their spare time.  I would never have attempted to develop
the app I'm currently working on without knowing that I could host it
for free on the app engine framework.  Would I recommend using it for a
large-scale production system?  Hmmm, probably not yet, but that's
really only due to the CPU quotas.

In future I believe we'll start seeing a lot more small applications
that may not have existed had it not been for free hosting services
such as app engine.

Bouncing off the CPU limits is frustrating, but it does make you focus
on efficient design up front.  Agreed, scalability may not be necessary
for 99% of apps developed, but I feel better knowing that my app can
scale if it ever gets the traffic I hope it will receive.




[google-appengine] Re: Is one big table quicker than 8 smaller tables?

2008-12-06 Thread lock


On Dec 6, 9:46 pm, Nick Johnson <[EMAIL PROTECTED]> wrote:
> On Dec 6, 1:47 am, lock <[EMAIL PROTECTED]> wrote:
>
> > Thanks guys, that's what I was hoping to hear, you saved me a couple
> > hours trying to prove it for myself (not to mention the frustration).
> > After I went away and thought about it some more I figured there must
> > be some 'smarts' in the database to prevent the query time from
> > increasing.  Otherwise how could any database scale well...
>
> > No merge joins or IN operators in my code, so nothing to worry about
> > there.
>
> > After a _lot_ more testing I'm finding that query time does scale with
> > the number of fetched _results_, not the DB size.  During early
> > testing I convinced myself that increasing the DB size was slowing my
> > query down, when really the number of results were increasing as I
> > added more data, doh (it was getting late ;-)  ).
>
> One thing to bear in mind is that the dev_appserver performance is
> _not_ representative of the production performance. The dev_appserver
> holds the entire dataset in memory and does linear scans over an
> entity type for queries, so performance there _will_ degrade with
> respect to the size of an entity type.

Oh really!  That may have also contributed to my initial theory about
DB performance being adversely affected by its size.  Thanks for the
tip, definitely something to keep in mind.
Hopefully in future versions of the SDK the dev server will start to
better mimic the behavior of the actual app engine framework.  It
would be really great if, for example, it gave similar CPU usage
warnings.
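
Nick's point about the linear scan can be seen with a toy contrast (a pure-Python sketch, not the real SDK internals):

```python
# Why dev_appserver timings mislead: it scans every entity of a kind,
# while production walks a sorted index. Illustrative only.

import bisect

entities = [("h%05d" % i, i) for i in range(10000)]   # (geohash, data)
index = [h for h, _ in entities]                      # sorted by hash

def linear_scan(prefix):
    # dev_appserver-style: cost grows with the size of the kind
    return [e for e in entities if e[0].startswith(prefix)]

def index_range(prefix):
    # production-style: cost grows with the number of results
    lo = bisect.bisect_left(index, prefix)
    hi = bisect.bisect_right(index, prefix + "\xff")
    return entities[lo:hi]

print(linear_scan("h00001") == index_range("h00001"))   # -> True
```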

>
>
> > The overall solution that seems to be working well for me at the
> > moment is to have different tables for different resolutions.  As the
> > size of the geometric bounds increases I switch between a few tables,
> > each one with a lower fidelity therefore reducing the number of
> > results that can be returned.  Visually it works similar to Level Of
> > Detail techniques you see in some 3D modeling packages.
>
> I'm curious how you're doing this with only a limited number of
> queries. Geohashing isn't ideally suited to satisfying bounding box
> queries (though it's certainly better than storing plain lat/longs).

Please tell me if I'm wrong, but isn't geohashing the only way you can
do a bounding-box type query with a datastore query?  I must admit
during early development I just assumed I was going to be able to do a
query something like:
'SELECT * WHERE lat < top AND lat > bottom AND long > left AND long <
right ...'
Got a bit of a shock when I found I could only apply inequality
filters to one field.

The only other way I thought of doing it was to query based on
longitude, then just filter the results by lat in a loop afterwards.
Knowing what I do now (queries that return a lot of results chew up
CPU cycles), I'd say this would be the wrong approach.
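
For anyone following along, here's roughly why geohashing fits that restriction: a geohash cell is a string prefix, and a prefix match needs only one inequality pair on a single property. The encoder below is a minimal illustrative geohash, not the actual library the app uses:

```python
# Minimal geohash encoder: interleave longitude/latitude bits,
# 5 bits per base-32 digit. Illustrative sketch only.

BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat, lng, precision=6):
    lat_rng, lng_rng = [-90.0, 90.0], [-180.0, 180.0]
    even, out, bit_count, ch = True, [], 0, 0
    while len(out) < precision:
        rng, val = (lng_rng, lng) if even else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2.0
        ch = ch * 2 + (1 if val >= mid else 0)
        rng[0 if val >= mid else 1] = mid   # narrow the half we chose
        even = not even
        bit_count += 1
        if bit_count == 5:
            out.append(BASE32[ch])
            bit_count, ch = 0, 0
    return "".join(out)

# A cell lookup then needs only the single allowed inequality pair, e.g.
#   WHERE hash >= :prefix AND hash < :prefix + '\xff'
cell = geohash(-31.95, 115.86, precision=3)
nearby = [geohash(-31.95 + d, 115.86 + d) for d in (0.0, 0.001, 0.002)]
print(all(h.startswith(cell) for h in nearby))  # -> True
```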

As for the level-of-detail stuff, it's nothing too sophisticated; I'll
try to elaborate.  It's unrelated to geohash.

My app has 4 tables: 1 contains all data points, the other 3 are of
varying resolutions (LOD tables).

When adding a point (lat/long) it gets put into the table containing
all data points.  Next I start adding this same point to the
appropriate LOD tables: for the 'high res' one I round the lat/long to
2 decimal places and compute the geohash.  If the geohash is present
in the table then this point has been fully added; otherwise it is
added to the 'high res' table and we continue.  The same lat/long is
then rounded to 1 decimal place, its geohash is calculated and checked
against the 'medium res' LOD table; if present then just return.  If
not, then do something similar again for the 'low res' LOD table.

Points are obtained from the app by a bounding box, the lat/long of
the NE and SW corners.  From this we can calculate a rough size unit
for the bounding box; at the moment I'm using the diagonal length in
degrees squared.  From this number we determine which table to query.
For large bounding boxes the 'low res' LOD table is used, for small
boxes the 'high res' LOD table is used.  For even smaller bounding
boxes I just get the results out of the table containing all data
points.

Hope that made sense.
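
A rough sketch of that insert cascade, with plain dicts standing in for the datastore tables (names, the stand-in hash, and the rounding choices are assumptions based on the description, not the app's real code):

```python
# LOD insert cascade: add to the full table, then to coarser tables
# until a cell is already occupied. Illustrative sketch only.

LOD_LEVELS = [("high", 2), ("medium", 1), ("low", 0)]  # (table, decimals)

def fake_geohash(lat, lng):
    return "%.4f,%.4f" % (lat, lng)   # stand-in for a real geohash

def add_point(tables, lat, lng, meta):
    tables["all"][fake_geohash(lat, lng)] = meta
    for name, decimals in LOD_LEVELS:
        h = fake_geohash(round(lat, decimals), round(lng, decimals))
        if h in tables[name]:
            return                    # coarser levels already have this cell
        tables[name][h] = meta

tables = {"all": {}, "high": {}, "medium": {}, "low": {}}
add_point(tables, -31.9512, 115.8614, "report a")
add_point(tables, -31.9513, 115.8615, "report b")  # same cell at every LOD
print(len(tables["all"]), len(tables["high"]))     # -> 2 1
```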

Anyway, if you want to see it in action check out
'bikebingle.appspot.com'.  Please enter as much random data as you
want; all the stuff in there at the moment is just test data and will
be removed soonish.  If you find any bugs while you're there, I'd love
to know about them :-), hopefully BikeBingle will be 'going live' in
the next couple of days.  BTW, I wouldn't click the 'Make random'

[google-appengine] Re: Seemingly undoable to build a simple app

2008-12-07 Thread lock

I'm pretty new to app engine so there may be some
gotchas in what I say, but I think my logic is ok.

It seems you have a pretty simple table with a few
fields (name, brand, description) and you want to be
able to search any one of the words in any of the
fields.

I'd suggest building a search term table. Keep your
current process of adding data to your table, your search
table will reference the data it contains.

The search table will be something like:

from google.appengine.ext import db

class SearchTerms(db.Model):
  value = db.StringProperty()      # a single word from a tokenized field
  item = db.ReferenceProperty()    # the data entity this word came from

(The db API has no KeyProperty; ReferenceProperty stores the
referenced entity's key.)

To build the search table, tokenize the name, brand and
description so that you have a list of separate words.
Add each of these words to the search terms table with the
key that references the item in your data table.

When you want to search via keyword(s) you just query the
value field of the SearchTerms table.  You will, however, need
to do a query for each search term.
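
A minimal sketch of the whole idea, with a dict standing in for the SearchTerms table (in the real thing each word/key pair would be a datastore entity and the lookup a filter on `value`; all names here are illustrative):

```python
# Keyword index sketch: tokenize fields, store one (word -> key)
# entry per word, query one word at a time.

from collections import defaultdict

def tokenize(*fields):
    words = set()
    for field in fields:
        words.update(field.lower().split())
    return words

index = defaultdict(set)   # word -> keys of matching items

def index_item(key, name, brand, description):
    for word in tokenize(name, brand, description):
        index[word].add(key)            # one SearchTerms put per word

def search(word):
    return sorted(index[word.lower()])  # one query per search term

index_item(1, "Road Bike", "Acme", "fast red bike")
index_item(2, "Helmet", "Acme", "keeps your head safe")
print(search("acme"))   # -> [1, 2]
print(search("bike"))   # -> [1]
```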

Hope that helps, cheers



[google-appengine] Re: Seemingly undoable to build a simple app

2008-12-07 Thread lock

Hmmm, you're right.

You could try culling some of the token words, the simple
ones like 'and', 'a', 'or', to limit the number of puts,
but that will only go so far.

I don't think sorting and checking for intersection will
cost you too much.  It's the number of results returned from
the queries that chews up the CPU cycles.
Slicing/pagination doesn't really seem worthwhile if you've
already got all the results (which would occur if you used this
approach).
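
For what it's worth, the cull-plus-intersection step might look something like this (illustrative sketch only; the stopword list and names are assumptions):

```python
# Multi-term search: run one query per non-stopword term, then AND
# the key sets together in memory.

STOPWORDS = {"a", "an", "and", "or", "the"}

def search_all(index, terms):
    """Return keys matching every non-stopword term (AND semantics)."""
    useful = [t.lower() for t in terms if t.lower() not in STOPWORDS]
    if not useful:
        return set()
    return set.intersection(*[index.get(t, set()) for t in useful])

index = {"red": {1, 3}, "bike": {1, 2}}
print(sorted(search_all(index, ["red", "and", "bike"])))  # -> [1]
```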

I'm out of ideas, would be interested to know if you do find
a solution though.  It sounds like something a lot of apps
would need to do.

Good luck ;-)



[google-appengine] Re: handling classes across multiple .py files

2008-12-11 Thread lock

Hmmm, the message
   "KindError: Kind 'Board' is not a subclass of kind 'Board' "

makes me think that maybe you are defining two classes named 'Board',
one in each of your .py files.  If that's the case, it's not what you
want.

I'd agree with djidjadji: it would be good practice to separate your
model classes into their own .py file and reference that module from
your other .py files.
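
The layout djidjadji suggests boils down to one module owning the model class and everything else importing it, so the kind is registered by exactly one class object. A self-contained sketch (file names are illustrative, and the module is written to disk here only so the example runs on its own):

```python
# One models.py, many importers: both handlers see the same class
# object, so there is no duplicate-kind confusion.

import os
import sys
import tempfile

workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "models.py"), "w") as f:
    f.write("# models.py -- the ONLY place Board is defined\n"
            "class Board(object):   # stand-in for db.Model\n"
            "    kind = 'Board'\n")

sys.path.insert(0, workdir)
import models                  # what handler1.py would do...
from models import Board       # ...or handler2.py

print(models.Board is Board)   # -> True: one class object, no KindError
```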

On Dec 12, 11:38 am, djidjadji  wrote:
> For point 1)
>
> Place the model class definitions in 1 or more separate .py files.
> You can put more than 1 model class in a .py file when you use the
> multiple file approach.
>
> e.g. models.py       # all model classes in one file
>
> or   user.py
>       forum.py
>       comment.py
>
> And use import statements like
>
> from models import *
>
> or
>
> from user import *
> from comment import *
>
> in the model files that need other model definitions and in the .py
> files that handle the requests (the ones named in the app.yaml file)
>
> Some people use
>
> import models
>
> and they have to use
>
> customer = models.User(..)
>
> instead of
>
> customer = User(..)
>
> At Python interpreter level it must have a different result, but I
> have no problem using the "from import *" method
>
> For point 2). I use this method and have not encountered the mentioned error.
>
> 2008/12/11 Neo42 :
>
>
>
> > If I am using multiple .py files in my project (each requiring
> > different levels of authentication, login: required, login: admin, and
> > the other is for anonymous access), how should I handle my datastore
> > class definitions?
>
> > I have 2 issues:
>
> > 1) I am afraid of forgetting to update the class definition in
> > file1.py when I update file2.py 's class definition.  Is there a way
> > to ease my concern?  Perhaps using imports?  How do I do this?
>
> > 2) At least on my local server, when I access the the datastore
> > through file1.py, it breaks's file2.py's access to the datastore until
> > I stop and start the local server again.  If I fix issue 1, will it
> > fix issue 2?
>
> > Thanks,
> > -Neo
>
>



[google-appengine] Re: Twitter - rate limit exceeded

2009-03-13 Thread lock

Hmmm.  My next app engine project _was_ going to be an app that relied
on twitter.  This doesn't sound good.  As in your situation, the app
wouldn't hammer twitter: one request to the search API every 5-10
minutes or so.

Given it's not exactly an app engine problem, did you try contacting
twitter to see if they could build more 'smarts' into their rate
limiting?

Would be really interested to see if you end up resolving this issue;
thanks for the heads up.  Sorry I can't help.

Cheers, lock

On Mar 12, 10:43 pm, richyrich  wrote:
> Hi there,
>
> I have been writing a simple little app that uses the Twitter API. It
> works perfectly on my local development server but it fails when I
> upload it because I get this error from Twitter:
>
> error=Rate limit exceeded. Clients may not make more than 100 requests
> per hour.
>
> ...even though my app only makes 1 request. what is happening is that
> other people apps must be using the Twitter API from the same IP
> address. does anyone know a good way around this other than hosting my
> app somewhere else?



[google-appengine] Re: Twitter - rate limit exceeded

2009-03-14 Thread lock

Hi Tim,

Just had a look at Twendly, looks good! I've just got a few quick
questions, if you wouldn't mind...

1. By 'google search API' you actually mean 'twitter search API',
yeah? ;-)

2. How do you go about pulling data from twitter every 5 minutes?
Unless I'm missing something there are no scheduled tasks in
app engine (yet).  Using a cron job on another server to call a
special URL, maybe?

The API key sounds like the proper solution; it would be nice if
there was a solution now, though.

Just an idea that probably won't work for most cases: get the
client (via javascript) to pull data from twitter and send it on to
app engine for processing/storage.  Not real pretty.

Thanks, lock

On Mar 15, 9:16 am, Tim Bull  wrote:
> Interesting,
>
> I have a Twitter app (http://twendly.appspot.com) but I don't seem to be
> having this issue at the moment.  However, while I read information every 5
> minutes from the google search API (which is rate limited differently) I
> only send a few messages (no more than 5 or 6 max and usually only 4) as the
> hour clicks over.  Although ocasionally this drops a message, it's generally
> pretty solid.  Perhaps because of when I'm sending them, I get in at the
> start of the allocation.
>
> As far as scalability goes, I would say GAE is really suited for it's read
> scalability, so if unless your Twitter bot writes are going to massive, then
> scalability shouldn't be an issue if you move these writes over to a
> seperate host.  I guess a (nasty but possible) pattern would be to have the
> Twitter interaction come from your host which could act as a proxy, then use
> App Engine for all the processing and reporting on the data.  At least in my
> application this would be a potential work-around if this becomes an issue.
>
> Cheers
>
> Tim
>
> On Sat, Mar 14, 2009 at 3:57 PM, Richard Bremner wrote:
>
> > Hmmm yes this is a difficult one. Neither Twitter nor Google are being
> > unreasonable, and each GAE developer is probably performing a sane number of
> > Twitter API requests but combined we are ruining it for everyone. Ohhh the
> > solution? I can't think of a good solution Twitter could implement which
> > wouldn't make it easy to circumvent their limit unreasonably. I do happen to
> > have a hosted linux server a I can put a proxy script on, I guess I'm lucky
> > there, but I am using GAE for its scaleability which my server certainly
> > isn't. I don't need to go into all the reasons GAE is more scaleable than my
> > own server :-)
> > If anyone thinks of anything, I'd love to know.
>
> > Rich
>
> > 2009/3/14 lock 
>
> >> Hmmm.  My next app engine project _was_ going to be an app that relied
> >> on twitter.  This doesn't sound good.  As per your situation the app
> >> wouldn't
> >> hammer twitter, one request to the search API every 5-10 minutes or
> >> so.
>
> >> Given its not exactly an app engine problem did you try contacting
> >> twitter to see if they could build more 'smarts' into their rate
> >> limiting?
>
> >> Would be really interested to see if you end up resolving this issue,
> >> thanks
> >> for the heads up.  Sorry I can't help.
>
> >> Cheers, lock
>
> >> On Mar 12, 10:43 pm, richyrich  wrote:
> >> > Hi there,
>
> >> > I have been writing a simple little app that uses the Twitter API. It
> >> > works perfectly on my local development server but it fails when I
> >> > upload it because I get this error from Twitter:
>
> >> > error=Rate limit exceeded. Clients may not make more than 100 requests
> >> > per hour.
>
> >> > ...even though my app only makes 1 request. what is happening is that
> >> > other people apps must be using the Twitter API from the same IP
> >> > address. does anyone know a good way around this other than hosting my
> >> > app somewhere else?



[google-appengine] Re: Twitter - rate limit exceeded

2009-03-16 Thread lock

Thanks Tim.

Think I've managed to convince myself that I can work around the
lack of inbuilt scheduled tasks.  Who knows, by the time I manage
to pull together enough motivation google may have implemented
it already.  Worst case I may be able to use a work server to call
a URL, although it's not a work project.  Hmmm, wonder how that
will go down.  There's also http://schedulerservice.appspot.com/,
which I might try.

It's just the twitter rate limit roulette game I'm worried about now.
Really highlights a lesson from my first project: make sure
you upload and test your app early.
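
The workaround discussed in this thread amounts to exposing the scheduled work behind a plain URL and letting an external cron fetch it. A minimal WSGI stand-in for the webapp handler (the path and job body are assumptions, not anyone's actual app):

```python
# Cron-over-HTTP sketch: the scheduled work lives behind a URL that
# any external scheduler can hit every few minutes.

def scheduled_job():
    # the real work (e.g. polling the twitter search API) would go here
    return "polled twitter"

def app(environ, start_response):
    if environ.get("PATH_INFO") == "/tasks/poll":
        body = scheduled_job()
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [body.encode("utf-8")]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

The external scheduler (webcron, a spare box with a crontab, etc.) then just fetches something like http://yourapp.appspot.com/tasks/poll on its own timer.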



On Mar 16, 7:23 am, Tim Bull  wrote:
> Ahh! Yes, by google search API I meant Twitter search API!
>
> I'm using a CRON job to trigger a special URL every 5 minutes.  Originally I
> had this job on my own webhost, but I breached the terms of service because
> a) sometimes the way I update the trend lists can take a long time and the
> very basic PHP fetch I do was waiting for a return value (which it doesn't
> really need to do) - this caused CPU limits on my cheap host to be exceeded
> and b) my cheap host only allows jobs to be scheduled every 15 minutes!
>
> I ended up with a two part solution:
>
> 1) I use http://www.webcron.org to schedule jobs that call a URL on my
> webhost for longer jobs every 5 minutes, or direct on GAE for shorter jobs.
> Webcron charges by the length of job so sub-30 seconds is cheapest (0.0001
> Euro cents or 1000 jobs per cent)
>
> 2) On my webhost I use cURL instead of a standard PHP fetch (which is how I
> first did it) - this just triggers the job then terminates the script.  GAE
> will happily continue to execute the job even though the listening party has
> terminated. I get what I want and my webhost doesn't get upset.  I need to
> do it in this "2-part" way because webcron won't let you terminate a job
> after calling it - this achieved what I wanted in a fairly cheap way for me.
>
> Here is the PHP script I use
>
> Note the URL doesn't need the HTTP:// part in front of it.
>
> <?php
> $url = "myurl.appspot.com/somejob";
> $ch = curl_init($url);
> curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
> curl_setopt($ch, CURLOPT_TIMEOUT, 2);
> $curl_scraped_page = curl_exec($ch);
> curl_close($ch);
> ?>
>
> On Sun, Mar 15, 2009 at 4:52 PM, lock  wrote:
>
> > Hi Tim,
>
> > Just had a look at Twendly, looks good! I've just got a few quick
> > questions, if you wouldn't mind...
>
> > 1. By 'google search API' you actually mean 'twitter seach API',
> > yeah ? ;-)
>
> > 2. How do you go about pulling data from twitter every 5 minutes?
> > Unless I'm missing something there are no scheduled tasks in
> > app engine (yet).  Using a cron job on another server to call a
> > special URL maybe?
>
> > The API key sounds like the proper solution, would be nice if
> > there was a solution now though.
>
> > Just an idea that probably won't work for most cases.  Get the
> > client (via javascript) to pull data from twitter and send it on to
> > app engine for processing/storage.  Not real pretty.
>
> > Thanks, lock
>
> > On Mar 15, 9:16 am, Tim Bull  wrote:
> > > Interesting,
>
> > > I have a Twitter app (http://twendly.appspot.com) but I don't seem to be
> > > having this issue at the moment.  However, while I read information every
> > 5
> > > minutes from the google search API (which is rate limited differently) I
> > > only send a few messages (no more than 5 or 6 max and usually only 4) as
> > the
> > > hour clicks over.  Although ocasionally this drops a message, it's
> > generally
> > > pretty solid.  Perhaps because of when I'm sending them, I get in at the
> > > start of the allocation.
>
> > > As far as scalability goes, I would say GAE is really suited for it's
> > read
> > > scalability, so if unless your Twitter bot writes are going to massive,
> > then
> > > scalability shouldn't be an issue if you move these writes over to a
> > > seperate host.  I guess a (nasty but possible) pattern would be to have
> > the
> > > Twitter interaction come from your host which could act as a proxy, then
> > use
> > > App Engine for all the processing and reporting on the data.  At least in
> > my
> > > application this would be a potential work-around if this becomes an
> > issue.
>
> > > Cheers
>
> > > Tim



[google-appengine] Re: how to config gwt to work in app engine effective and productive

2009-03-17 Thread lock

I'm a big GWT fan; never done much with javascript and don't
really want to!  It's just a personal preference, not wanting to start
another java vs javascript thread ;-).  Sounds like you're coming
from a java desktop background; there's a bit to learn, but it's
pretty straightforward.

Tim's right, GWT and App Engine are completely different bits
of the puzzle.  GWT is all client side browser stuff, App Engine
is server side.

All the HTML/javascript code on my site is static; it isn't
dynamically generated on the server.  No, not even the URLs the
app requests data from change.

You probably want to look at the REST architecture concept, which
basically states that the client, not the server, should manage
session state.

I use the JSON format to transfer data from the client to the server
and back again.  This is done using a variety of HTTP requests: GET
for requesting data from the server, PUT for changing server data.
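
A tiny sketch of that JSON round trip (the resource path, fields, and handler names are made up for illustration, not the site's actual API):

```python
# GET/PUT JSON round trip: the client fetches a resource, edits it,
# and sends the whole updated state back.

import json

STORE = {"point/1": {"lat": -31.95, "lng": 115.86, "note": "test"}}

def handle_get(path):
    return json.dumps(STORE[path])      # body returned to the client

def handle_put(path, body):
    STORE[path] = json.loads(body)      # client sends back its own state
    return "ok"

payload = json.loads(handle_get("point/1"))   # client GETs the resource
payload["note"] = "updated from the client"
handle_put("point/1", json.dumps(payload))    # client PUTs the change
print(STORE["point/1"]["note"])               # -> updated from the client
```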

Eclipse works really well for GWT projects, and if you install the
PyDev project it does a pretty good job with app engine too.  I use
the same workspace for both client and server code.

There really is an awful lot more to it; I strongly suggest having a
look at some of the great app engine tutes.
http://code.google.com/appengine/docs/python/gettingstarted/


On Mar 17, 8:36 pm, Coonay  wrote:
> GWT's features are really attractive: quickly build and maintain complex yet
> highly performant JavaScript front-end applications in the Java
> programming language, and test your code with JUnit. The example mail
> page is really awesome.
>
> As a many-years java programmer it's not hard to get into GWT, but
> app engine is a different web environment: the static page can be
> served to the browser directly, and
> the hyperlinks in the generated html need to change accordingly.
>
> Could you give me some idea how to make the two work together
> effectively and productively? Thanks so much



[google-appengine] Re: geopt query?

2009-03-23 Thread lock

You can do equality type checks, but that's about it :-( .

A quick search of this group for geohash should point you in
the right direction though.

On Mar 24, 10:41 am, pedepy  wrote:
> I just watched that talk about geo data on youtube (i think it was
> from google IO?)... and it made me really think ... what queries can
> be performed on geopt properties, if any ?
>
> are any kinds of spatial query features planned for gql and the
> datastore?  it seems that right now, these properties are of no
> worth whatsoever ... other than maybe saving us from writing 5 or 6
> lines of code to define them ourselves.



[google-appengine] Re: Every day around 9.10 AM Brussels time, huge drop in GAE performances

2013-03-10 Thread Tony Lock
I too am having the same issues. At approximately 4:00pm West Australian 
time, the application starts to misbehave. I am starting to see latencies 
around the 70,000 ms mark, and lots of deadline exceeded errors. Most of 
these are at simple page load steps, not even doing real processing. The 
application is unusable for about 45 minutes to an hour, every day.

It has been consistently bad for about a month now.


On Tuesday, 5 February 2013 16:18:55 UTC+8, gafal wrote:
>
> I've been experiencing this for almost a week now.
>
> Requests take 10x longer than usual!!
>
> It seems to start around 9 and stop around 9:30...
>
>
> my app id is myagendapro
> Can anyone have a look? 
>

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-appengine+unsubscr...@googlegroups.com.
To post to this group, send email to google-appengine@googlegroups.com.
Visit this group at http://groups.google.com/group/google-appengine?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.