[google-appengine] No longer able to deploy cron.yaml - after 1.3.2

2010-03-26 Thread morten
Hi,

We have a single cron job in our cron.yaml:

cron:
- description: Daily statistics job
  url: /admin/stats?action=runDailyStats
  schedule: every day 06:00 # UTC

When we try to deploy this after the 1.3.2 update it fails with:

Error parsing yaml file:
Unable to assign value 'every day 06:00' to attribute 'schedule':
object.__init__() takes no parameters
  in "/Users/morten/Development/agon-gae/cron.yaml", line 4, column 13

I don't see anything in the release notes regarding the cron feature,
so I hope somebody can tell us what we need to do to make this cron
job work again.
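
For what it's worth, one variation we plan to try (an untested sketch - it
simply moves the inline comment off the schedule line, in case the 1.3.2
parser trips over trailing comments) is:

cron:
- description: Daily statistics job
  url: /admin/stats?action=runDailyStats
  # All times are UTC
  schedule: every day 06:00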

Best regards,
Morten




[google-appengine] Getting the We're sorry... page

2009-08-06 Thread morten

Hi

I'm getting the "We're sorry..." page when I try to access our GAE site
using the mapped Google Apps domain name. When I go through the
normal .appspot.com address everything works as expected.

Any idea about what is going on?

(pretty sure that I don't have a virus that is accessing our app
engine app :))

Best regards,
Morten



[google-appengine] Re: List Property containing keys - performance question

2009-06-20 Thread Morten Bek Ditlevsen
Hi Federico,

Thanks for your answers - I'm just having a bit of a hard time figuring out
which datastore requests happen automatically.

I wondered because I got this error from the datastore:

  File "/base/data/home/apps/grindrservr/26.334331202299577521/main.py",
line 413, in query
if result in meStatus.blocks:
  File "/base/python_lib/versions/1/google/appengine/api/datastore_types.py",
line 472, in __cmp__
for elem in other.__reference.path().element_list():

The 'blocks' property is just like the 'favorites' property described in my
previous mail - and 'result' is one of the values I iterate over in the
results of a keys-only query.

So I guess what I don't understand is why the datastore is in play here. I
know that my results are probably returned as an iterator, but why is a
datastore fetch necessary when I only query for keys?
That's what caused me to think that the error might be related to the
'blocks' list of keys...
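
For context, the code is roughly this shape (a simplified sketch - the real
model, properties and filter are different):

from google.appengine.ext import db

class UserProfile(db.Model):  # stand-in for the real model
    blocks = db.ListProperty(db.Key, indexed=False)
    online = db.BooleanProperty()  # made-up property, just to have a filter

def visible_keys(me):
    # keys_only=True makes the query return db.Key objects, not entities.
    q = db.Query(UserProfile, keys_only=True).filter('online =', True)
    for result in q.fetch(100):
        # 'result in me.blocks' compares db.Key values in memory; on its
        # own it should not cause any datastore fetches.
        if result not in me.blocks:
            yield result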

Sincerely,
/morten


On Sat, Jun 20, 2009 at 10:22 AM, Federico Builes  wrote:

>
> Morten Bek Ditlevsen writes:
>  > Hi there,
>  > I have an entity with a list property containing keys:
>  >
>  >   favorites = db.ListProperty(db.Key, indexed=False)
>  >
>  > I suddenly came to wonder:
>  > If I check if a key is in the list like this:
>  >
>  > if thekey in user.favorites:
>  >
>  > will that by any chance try and fetch any entities in the user.favorites
>  > list?
>  >
>  > I don't think so, but I would like to make sure! :-)
>
> When you do foo in bar it's actually calling Python methods, not datastore
> ops, and since Python sees favorites as a list of keys it should not fetch
> the entities.
>
> If you were to index this and do it on the datastore side ("WHERE favorites
> = thekey") it might have to "un-marshal" the property and do a normal
> lookup, but I don't think the slowdown is noticeable.
>
> --
> Federico
>




[google-appengine] List Property containing keys - performance question

2009-06-20 Thread Morten Bek Ditlevsen
Hi there,
I have an entity with a list property containing keys:

  favorites = db.ListProperty(db.Key, indexed=False)


I suddenly came to wonder:
If I check if a key is in the list like this:

if thekey in user.favorites:


will that by any chance try and fetch any entities in the user.favorites
list?

I don't think so, but I would like to make sure! :-)

Sincerely,
/morten




[google-appengine] Inconsistent memcache behaviour

2009-06-18 Thread morten

Hi

We are seeing unexpected memcache behaviour during high load and have
confirmed it in a small test app.

In the following thread
http://groups.google.com/group/google-appengine/browse_thread/thread/45272062a8e36545/2289806f3f711c09?lnk=gst&q=memcache+atomic#2289806f3f711c09
Ryan Barrett says that:

"as for the datastore, and all other current stored data APIs like
memcache, there is a single, global view of data. we go to great
lengths to ensure that these APIs are strongly consistent."

I interpret this as: "If our application successfully sets a memcache
value for a particular key, then no matter how soon afterwards and no
matter which application instance we access that key from, it will
return the value just set."

If that interpretation is correct then we think that there is a
problem somewhere, because we are seeing very inconsistent behaviour
on memcache (i.e. different values coming back for different requests
for the same memcache key) when we stress test our application with a
lot of concurrent requests.
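
The test app is essentially just this kind of handler (a simplified sketch;
the key and parameter names here are made up). One request sets a fixed
value before the load test starts, and the load test itself only reads the
key back:

from google.appengine.api import memcache
from google.appengine.ext import webapp

KEY = 'consistency-key'

class ConsistencyCheck(webapp.RequestHandler):
    def get(self):
        if self.request.get('action') == 'set':
            # Write a single, fixed value once, before the load test.
            ok = memcache.set(KEY, self.request.get('value', 'expected'))
            self.response.out.write('set ok\n' if ok else 'set failed\n')
        else:
            # Under load we only read; with a single global view of the
            # data, every instance should report the same value here.
            self.response.out.write('%r\n' % memcache.get(KEY))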

If anybody could shed some light on this for us it would be much
appreciated.

Best regards,
Morten Nielsen



[google-appengine] Question about exploding index sizes

2009-04-30 Thread Morten Bek Ditlevsen
Hi Nick,

>
> > Just to make sure I understand this right (well, rather - I don't think I
> > get it 100%):
> >
> > By the inverted index entity you mean a new entity kind where I store
> word
> > lists and then reference this entity from my 'user profile' entity?
>
> Yes. The list of words used for fulltext indexing is called an 'inverted
> index'.
>
> >
> > This will allow me to create a query that fetches a list of entities for
> > which I can find the referring 'user profile' entities.
> >
> > But how can I combine that result with other queries?
>
> You need a (fairly simple) query planner, that can decide to either
> perform a text or bounding box query, then filter (in user code) the
> results for those that match the other filter. You decide which to
> pick based on which you think will have the fewest results (for
> example, a query for a small area will take preference over a search
> for the string 'friendly', whilst a search for the string 'solipsist'
> should probably be used instead of a query for all but the tiniest
> areas).


Ah, I see! Thanks, that's just great!
I must say that App Engine programming is by far the most fun I've had in
programming for a long time - the possibilities and constraints really make
you think differently when solving problems!
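
Just to check my understanding, the planner could be as simple as something
like this (a rough sketch - the models and the selectivity heuristic are
made up):

from google.appengine.ext import db

class UserProfile(db.Model):
    geoboxes = db.StringListProperty()
    words = db.StringListProperty()

class WordIndex(db.Model):          # the inverted index entity
    profile = db.ReferenceProperty(UserProfile)
    words = db.StringListProperty()

def word_is_rarer_than_area(word, geobox):
    # Placeholder heuristic - in reality this would use counts kept elsewhere.
    return len(word) > 6

def search(query_word, query_geobox):
    # Run whichever filter looks more selective in the datastore,
    # then apply the other filter in user code.
    if word_is_rarer_than_area(query_word, query_geobox):
        hits = WordIndex.all().filter('words =', query_word).fetch(500)
        profiles = [h.profile for h in hits]   # dereferences the profiles
        return [p for p in profiles if query_geobox in p.geoboxes]
    else:
        candidates = UserProfile.all().filter('geoboxes =', query_geobox).fetch(500)
        return [p for p in candidates if query_word in p.words]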


>
> > The same goes for the hybrid example. I see how this can be used to give
> me
> > a subset, but can that subset be queried any further?
>
> In that case, you can (hopefully) assume that the number of results
> for your keywords in your geographical area is small enough that you
> can filter them manually, without the need for explicit query
> planning. You can also use 2 or more levels of geographical nesting -
> just fewer than your main index, to keep the index entry count under
> control.
>

Great! Sounds like both solutions would be interesting to try out.


I have a related question about a different way of solving part of this
problem:

Let's imagine that I have an entity that just contains a single word. Then I
have a 'many-to-many' entity that references both the word-entity and my
existing 'user profile' entity.

Is the query for a word, followed by a subsequent .manytomany_set reference
query expensive to do, or is that completely feasible?

What if I extended the many-to-many entity with (say) one geobox string? I
understand that the _set reverse reference gives me a Query object, so I
could run a filter on that query before fetching.

My question is: are there any restrictions/penalties for creating the *_set
queries?
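
Concretely, I mean something like this (a sketch with made-up kind and
property names):

from google.appengine.ext import db

class UserProfile(db.Model):
    pass  # simplified stand-in for the existing profile entity

class Word(db.Model):
    text = db.StringProperty()

class ProfileWord(db.Model):        # the many-to-many entity
    word = db.ReferenceProperty(Word)
    profile = db.ReferenceProperty(UserProfile)
    geobox = db.StringProperty()

word = Word.all().filter('text =', 'friendly').get()
if word:
    # word.profileword_set is just a pre-filtered Query over ProfileWord;
    # building it costs nothing, only fetch()/iteration hits the datastore.
    links = word.profileword_set.filter('geobox =', 'some-geobox').fetch(100)
    profiles = [link.profile for link in links]  # dereferences each profile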


Thanks for your help!
Sincerely,
/morten




[google-appengine] Re: Question about exploding index sizes

2009-04-30 Thread Morten Bek Ditlevsen
Hi Nick,

Thanks a bunch! I'm really amazed that I can just throw out a question like
this and have a googler reply within minutes! :-)


>
> > The application is a location based dating service. Right now location
> > lookups are working ok, but I am wondering whether it is feasible to have
> > location based lookups paired with text search at all - or if some of my
> > processing should be in python code instead of done through table
> queries.
>
> I would suggest refactoring. You can make the inverted index for the
> text a separate entity, or you can do that for the geoboxes,  or you
> can take a Hybrid approach: For example, you could have an entity that
> stores a top-level approximation of the user's location (eg, Country,
> State, or whatever is the highest level at which someone will query if
> they're specifying a geographical bound) in addition to the keywords
> in order to narrow the search down.
>

Just to make sure I understand this right (well, rather - I don't think I
get it 100%):

By the inverted index entity you mean a new entity kind where I store word
lists and then reference this entity from my 'user profile' entity?

This will allow me to create a query that fetches a list of entities for
which I can find the referring 'user profile' entities.

But how can I combine that result with other queries?



The same goes for the hybrid example. I see how this can be used to give me
a subset, but can that subset be queried any further?

Please excuse my ignorance - I feel there's a part I'm failing to
understand.


Sincerely,
/morten




[google-appengine] Question about exploding index sizes

2009-04-30 Thread Morten Bek Ditlevsen
Hi there,

I have an application with an entity containing a list of geoboxes for
geographic querying. I currently generate 28 geobox entries in this list.

Since the list property is being queried along with other properties, this
causes 28 index entries to be updated whenever I update a value that is part
of the index.

My problem is that now I would like to query additional lists at the same
time - causing my index to 'explode'.

My question is: are there any recommendations with regard to how many index
entries a single change should cause to be updated?

I would like to have (pseudo) full text search of a field and thought of
doing this by adding a list of words to be queried.
If this list is 50 items long I will now have to update 28*50 = 1,400 index
entries for each change, right?

Is that possible at all, or should any kind of exploding index sizes be
avoided?
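
To make the numbers concrete, the model I have in mind is roughly this (a
sketch; the property names are made up):

from google.appengine.ext import db

class UserProfile(db.Model):
    geoboxes = db.StringListProperty()  # the 28 geobox strings
    words = db.StringListProperty()     # the ~50 search words I want to add

# A composite index covering both list properties, e.g. in index.yaml:
#
#   - kind: UserProfile
#     properties:
#     - name: geoboxes
#     - name: words
#
# gets one entry per (geobox, word) combination, i.e. 28*50 = 1,400 index
# entries per entity - the 'explosion' I am worried about.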

The application is a location-based dating service. Right now location
lookups are working OK, but I am wondering whether it is feasible to have
location-based lookups paired with text search at all - or if some of my
processing should be done in Python code instead of through datastore queries.

Sincerely,
/morten




[google-appengine] Re: Debugging App Engine errors

2009-04-23 Thread Morten Bek Ditlevsen
Hi Nick,

I get very few of them. In the current logs I have three from the last two
days, and this is with approx. 300,000 requests per day - so really very few.

I have a feeling this suggests that they are caused by external factors, and
that it would then make sense to retry.

Thanks a lot for your explanation!

Sincerely,
/morten


On Thu, Apr 23, 2009 at 11:23 AM, Nick Johnson wrote:

>
> Hi Bek,
>
> Two concurrent transactions on the same entity would not be enough on
> its own to cause this exception: If the two conflict, one of them will
> get in first, and the other will be failed and automatically retry. It
> should take substantially higher contention to cause this. As I
> mentioned, though, it's possible this is happening due to timeouts.
> With what frequency do you see this particular error?
>
> As far as retrying goes, if this is real contention, you'd be best to
> tell the client to come back later, since retrying will just increase
> contention further. If it's due to timeouts and other external
> factors, though, retrying would be preferable.
>
> -Nick Johnson
>




[google-appengine] Re: Debugging App Engine errors

2009-04-23 Thread Morten Bek Ditlevsen
Hi djidjadji,

Thanks a bunch - I'll try to make the code retry the get()!
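
Something like this is what I have in mind (an untested sketch):

import time
from google.appengine.ext import db

def get_with_retry(keys, attempts=3, backoff=0.1):
    # Retry db.get() on a datastore Timeout; re-raise after the last
    # attempt so the error still shows up in the logs.
    for attempt in range(attempts):
        try:
            return db.get(keys)
        except db.Timeout:
            if attempt == attempts - 1:
                raise
            time.sleep(backoff * (2 ** attempt))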

Sincerely,
/morten

On Thu, Apr 23, 2009 at 10:35 AM, djidjadji  wrote:

>
> You get a Timeout error on a get() operation.
> I solved it by doing a retry when I get an exception.
> If the retry also fails I re-raise the exception so it gets logged
>
> 2009/4/22 Morten Bek Ditlevsen :
> >
> > Hi there,
> > I'm in the process of moving the database part of a social networking
> > site to GAE.
> > Each client sends a request to the server every 5 minutes with
> > location information. From the location I calculate a list of geoboxes
> > to store with the client's profile if the location is significantly
> > different from the last request.
> >
> > If I get requests more often than every 5 minutes, I send back an
> > empty response (clients send unique id, so a 5 minute memcache object
> > will tell me if I receive too many requests) so as not to burden the
> > server too much.
> >
> >
> > I already have a server up and running (apache, php, postgres) - and I
> > am experimenting with forwarding requests to GAE to measure load and
> > performance.
> >
> > In general everything looks good, but I am getting the occasional
> > error on the server:
> >
> > One error I get is the datastore timeout. I am quite certain that I
> > write a record at max. every 5 minutes. For that record I only have
> > one composite index, and that index only has one list entry - so I
> > don't have 'exploding indexes'. Although the list may have 30-40
> > entries, so 30-40 index entries must be created.
> >
> > The second error I get - less frequently - is data contention. The
> > record I update (still at max every 5 mins) is unique to each user, so
> > I don't understand where the contention comes from.
> >
> > The last error is actually the one that puzzles me the most. Timeout
> > while doing datastore fetches. Traceback is similar to:
> >
> > Traceback (most recent call last):
> >  File "/base/python_lib/versions/1/google/appengine/ext/webapp/
> > __init__.py", line 501, in __call__
> >handler.get(*groups)
> >  File "/base/data/home/apps/grindrservr/3.332855696947742792/
> > main.py", line 870, in get
> >user = User.get_by_key_name(key_name)
> >  File "/base/python_lib/versions/1/google/appengine/ext/db/
> > __init__.py", line 849, in get_by_key_name
> >return get(*keys)
> >  File "/base/python_lib/versions/1/google/appengine/ext/db/
> > __init__.py", line 1044, in get
> >entities = datastore.Get(keys)
> >  File "/base/python_lib/versions/1/google/appengine/api/
> > datastore.py", line 221, in Get
> >raise _ToDatastoreError(err)
> >  File "/base/python_lib/versions/1/google/appengine/api/
> > datastore.py", line 1965, in _ToDatastoreError
> >raise errors[err.application_error](err.error_detail)
> > Timeout
> >
> > As far as I can tell the get_by_key_name should be so fast that it's
> > basically instant, so I don't understand the timeout here.
> >
> > I should state that out of maybe 300.000 requests only about 30 fail,
> > so that may actually be quite ok. But I would like to understand the
> > errors and if possible program my way around them.
> >
> > Any comments greatly appreciated.
> >
> > Sincerely,
> > /morten
> >




[google-appengine] Re: Debugging App Engine errors

2009-04-23 Thread Morten Bek Ditlevsen
Hi Nick,

Thank you very much for your reply.

The contention is identified in the log:


198.66.245.68 - - [22/Apr/2009:10:19:43 -0700] "GET
/setlocation?ll=40.064929,-75.242851&uid=e5b25e2d512b10fc913c2f8c290f55114550135a&c=28
HTTP/1.1" 500 916 - "gzip(gfe)"

E 04-22 10:19AM 43.004
too much contention on these datastore entities. please try again.
Traceback (most recent call last):
  File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 501, in __call__
    handler.get(*groups)
  File "/base/data/home/apps/grindrservr/3.332855696947742792/main.py", line 921, in get
    user.put()
  File "/base/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 669, in put
    return datastore.Put(self._entity)
  File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 166, in Put
    raise _ToDatastoreError(err)
  File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 1965, in _ToDatastoreError
    raise errors[err.application_error](err.error_detail)
TransactionFailedError: too much contention on these datastore entities. please try again.



But I must admit that although this request can only cause a put() on that
specific record once every five minutes, a put() could also come from
another request. My guess, though, is that at most two puts on the same
record can happen within a short period of time.

Can the contention be caused by just two puts? I guess it could...

Would it be a good idea to catch this exception and retry the put?
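
Something along these lines, perhaps (an untested sketch):

import time
from google.appengine.ext import db

def put_with_retry(entity, attempts=2, backoff=0.1):
    # Retry the put() once on contention, then give up and re-raise so
    # the error still appears in the logs.
    for attempt in range(attempts):
        try:
            return entity.put()
        except db.TransactionFailedError:
            if attempt == attempts - 1:
                raise
            time.sleep(backoff)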

Sincerely,
/morten

On Thu, Apr 23, 2009 at 10:46 AM, Nick Johnson wrote:

>
> Hi Morten,
>
> The likelihood you'll see a datastore timeout bears a relation to how
> expensive your operation is, but external factors - such as transient
> issues on our Bigtable Tabletservers - can have a much larger impact.
> As such, you'll occasionally (hopefully only very occasionally) see
> timeouts even on 'simple' operations like retrieving an entity by its
> key. The best approach is generally to retry the operation in
> circumstances like these.
>
> Regarding contention, how are you identifying it as such? Are you
> using explicit transactions, and getting TransactionFailedError? This
> is usually due to contention, but could also be caused by multiple
> timeouts.
>
> -Nick Johnson
>
> >
>




[google-appengine] Debugging App Engine errors

2009-04-22 Thread Morten Bek Ditlevsen

Hi there,
I'm in the process of moving the database part of a social networking
site to GAE.
Each client sends a request to the server every 5 minutes with
location information. From the location I calculate a list of geoboxes
to store with the client's profile if the location is significantly
different from the last request.

If I get requests more often than every 5 minutes, I send back an
empty response (clients send a unique id, so a 5-minute memcache entry
tells me if I receive too many requests) so as not to burden the
server too much.
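
The throttle is essentially just this (a simplified sketch with a made-up
key prefix):

from google.appengine.api import memcache

def too_frequent(uid):
    # memcache.add() only succeeds if the key does not already exist, so a
    # second request with the same uid within 300 seconds sees add() fail
    # and gets an empty response instead of touching the datastore.
    return not memcache.add('seen:%s' % uid, 1, time=300)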


I already have a server up and running (apache, php, postgres) - and I
am experimenting with forwarding requests to GAE to measure load and
performance.

In general everything looks good, but I am getting the occasional
error on the server:

One error I get is the datastore timeout. I am quite certain that I
write a record at max. every 5 minutes. For that record I only have
one composite index, and that index only has one list entry - so I
don't have 'exploding indexes'. The list may have 30-40 entries, though,
so 30-40 index entries must be created.

The second error I get - less frequently - is data contention. The
record I update (still at max every 5 mins) is unique to each user, so
I don't understand where the contention comes from.

The last error is actually the one that puzzles me the most: a Timeout
while doing datastore fetches. The traceback is similar to:

Traceback (most recent call last):
  File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 501, in __call__
    handler.get(*groups)
  File "/base/data/home/apps/grindrservr/3.332855696947742792/main.py", line 870, in get
    user = User.get_by_key_name(key_name)
  File "/base/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 849, in get_by_key_name
    return get(*keys)
  File "/base/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 1044, in get
    entities = datastore.Get(keys)
  File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 221, in Get
    raise _ToDatastoreError(err)
  File "/base/python_lib/versions/1/google/appengine/api/datastore.py", line 1965, in _ToDatastoreError
    raise errors[err.application_error](err.error_detail)
Timeout

As far as I can tell the get_by_key_name should be so fast that it's
basically instant, so I don't understand the timeout here.

I should state that out of maybe 300,000 requests only about 30 fail,
so that may actually be quite OK. But I would like to understand the
errors and if possible program my way around them.

Any comments greatly appreciated.

Sincerely,
/morten




[google-appengine] Django "Block" feature and Google App Engine

2009-03-09 Thread Morten Bruhn

Hi Guys,

First of all I am a newbie, but I still hope you can help me...


I have created an app with a template - this works! :-)

class MainHandler(webapp.RequestHandler):

  def get(self):
    self.response.out.write(
        template.render('base.html', ''))



In my main.html I have the following:


{% block content %}

{% endblock %}




And then in my content.html I have:

{% extends 'base.html' %}
{% block content %}
some HTML
{% endblock %}


But it just won't show up on my base.html page... Why why??

Any ideas???
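
Or should I maybe be rendering content.html instead of base.html, something
like this (an untested sketch)?

import os
from google.appengine.ext import webapp
from google.appengine.ext.webapp import template

class MainHandler(webapp.RequestHandler):

  def get(self):
    # Render the child template; its {% extends 'base.html' %} pulls in the
    # base, and its {% block content %} fills the block defined there.
    path = os.path.join(os.path.dirname(__file__), 'content.html')
    self.response.out.write(template.render(path, {}))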

Kind Regards

// Morten




[google-appengine] Re: Datastore Timeouts

2009-02-28 Thread morten

I'm seeing this problem as well - it seems to time out well before the
allowed 30 seconds...

/Morten

On Feb 28, 4:53 pm, Alex Popescu wrote:
> On Feb 28, 5:49 pm, Alex Popescu 
> wrote:
>
> > Today starting around 07.42am the datastore has started to throw
> > Timeout exceptions. I haven't read anywhere about an announced
> > maintenance window, so I do consider this a critical issue.
>
> Forgot to mention that I am getting 503 or 502 errors and these are
> NOT coming from my app.
>
> ./alex
> --
> .w( the_mindstorm )p.
>   Alexandru Popescu
>
> My app DailyCloud: http://the.dailycloud.net/



[google-appengine] Re: Calculating Ranks

2009-01-29 Thread morten

Hi

Is it correct that the ranker code results in a tree where all the
nodes in the tree are stored within the same GAE entity group (they all
have the root key as parent), with the result that access to the entire
tree is serialized?

Best regards,
Morten Nielsen

On Jan 26, 11:08 pm, ryan  wrote:
> thanks for pinging us! they were actually ahead of me on this, and
> they published that library a while back:
>
> http://code.google.com/p/google-app-engine-ranklist/
>
> we'll probably post something about it to the app engine blog soon.
