[google-appengine] Re: Free Quota Reduction?

2009-06-23 Thread jianwen

I also noticed that. The free CPU hours have now been reduced to 6.50 per
day, and Outgoing/Incoming Bandwidth has been reduced to 1 GB per day.

On Jun 23, 2:43 pm, conman 
wrote:
> Hi,
>
> I just looked at the dashboard and saw that nearly one third of my
> free CPU quota has been used up for today.
>
> How can that be? My app didn't do significantly more than it did the
> last time I looked (I guess that was the end of last week).
>
> Is this a known monitoring issue, or was there another quota adjustment
> as in February?
>
> Cheers,
> Constantin



[google-appengine] Re: Free Quota Reduction?

2009-06-23 Thread conman

Why didn't they send out a notification about that? I don't like seeing
it happen, but at the very least I would like to be informed.

The "free 5 million page impressions" claim from launch last year is now
history for sure - at least for my application.

Cheers,
Constantin




On 23 Jun., 09:17, jianwen  wrote:
> I also noticed that. The free CPU hours have now been reduced to 6.50 per
> day, and Outgoing/Incoming Bandwidth has been reduced to 1 GB per day.
>
> On Jun 23, 2:43 pm, conman 
> wrote:
>
> > Hi,
>
> > I just looked at the dashboard and saw that nearly one third of my
> > free CPU quota has been used up for today.
>
> > How can that be? My app didn't do significantly more than it did the
> > last time I looked (I guess that was the end of last week).
>
> > Is this a known monitoring issue, or was there another quota adjustment
> > as in February?
>
> > Cheers,
> > Constantin



[google-appengine] Re: Free Quota Reduction?

2009-06-23 Thread Sylvain

It was announced many months ago (here and on the blog):

http://code.google.com/intl/fr/appengine/docs/quotas.html
http://googleappengine.blogspot.com/2009/02/new-grow-your-app-beyond-free-quotas.html


On 23 juin, 09:30, conman 
wrote:
> Why didn't they send out a notification about that? I don't like seeing
> it happen, but at the very least I would like to be informed.
>
> The "free 5 million page impressions" claim from launch last year is now
> history for sure - at least for my application.
>
> Cheers,
> Constantin
>
> On 23 Jun., 09:17, jianwen  wrote:
>
> > I also noticed that. The free CPU hours have now been reduced to 6.50 per
> > day, and Outgoing/Incoming Bandwidth has been reduced to 1 GB per day.
>
> > On Jun 23, 2:43 pm, conman 
> > wrote:
>
> > > Hi,
>
> > > I just looked at the dashboard and saw that nearly one third of my
> > > free CPU quota has been used up for today.
>
> > > How can that be? My app didn't do significantly more than it did the
> > > last time I looked (I guess that was the end of last week).
>
> > > Is this a known monitoring issue, or was there another quota adjustment
> > > as in February?
>
> > > Cheers,
> > > Constantin



[google-appengine] Re: Free Quota Reduction?

2009-06-23 Thread conman

Ah, OK, so this was the reduction that was announced in February.

Thanks

On 23 Jun., 10:04, Sylvain  wrote:
> It was announced many months ago (here and on the blog):
>
> http://code.google.com/intl/fr/appengine/docs/quotas.html
> http://googleappengine.blogspot.com/2009/02/new-grow-your-app-beyond-free-quotas.html
>
> On 23 juin, 09:30, conman 
> wrote:
>
> > Why didn't they send out a notification about that? I don't like seeing
> > it happen, but at the very least I would like to be informed.
>
> > The "free 5 million page impressions" claim from launch last year is now
> > history for sure - at least for my application.
>
> > Cheers,
> > Constantin
>
> > On 23 Jun., 09:17, jianwen  wrote:
>
> > > I also noticed that. The free CPU hours have now been reduced to 6.50
> > > per day, and Outgoing/Incoming Bandwidth has been reduced to 1 GB per day.
>
> > > On Jun 23, 2:43 pm, conman 
> > > wrote:
>
> > > > Hi,
>
> > > > I just looked at the dashboard and saw that nearly one third of my
> > > > free CPU quota has been used up for today.
>
> > > > How can that be? My app didn't do significantly more than it did the
> > > > last time I looked (I guess that was the end of last week).
>
> > > > Is this a known monitoring issue, or was there another quota adjustment
> > > > as in February?
>
> > > > Cheers,
> > > > Constantin



[google-appengine] Re: How do I limit searchable_text_index using SearchableModel?

2009-06-23 Thread ogterran

Hi Ian,

Thanks for the response.
I have one question on number of datastore calls.
How many datastore calls is the query below making?
Is it 1 or 100?

> class Product(db.Model):
>     pid = db.StringProperty(required=True)
>     title = db.StringProperty(required=True)
>     site = db.StringProperty(required=True)
>     url = db.LinkProperty(required=True)
>
> class ProductSearchIndex(search.SearchableModel):
>     product = db.ReferenceProperty(Product)
>     title = db.StringProperty(required=True)

query = ProductSearchIndex.all().search(searchtext)
results = query.fetch(100)
for i, v in enumerate(results):
    print v.product.url

Thanks
Jon




[google-appengine] Remote API security

2009-06-23 Thread hawkett

Hi,

   I have a question about the security of the remote_api - looking
through the source code, I noticed that ConfigureRemoteDatastore takes
a 'secure' parameter, which is False by default.  I assume this means
that any data submitted via remote_api is done in plain text.  What
about the credentials that are obtained using the auth_func() shown in
the example?

   Is the secure option supported?  When I set secure=True (in code
that works fine when it is set to False), I get

'urllib2.HTTPError: HTTP Error 302: Found'

which I assume is a redirect to a login page.  If it is supported,
what is the process for its use?  Thanks,

Colin



[google-appengine] Re: Task Queue API Users

2009-06-23 Thread Nick Johnson (Google)
Hi hawkett,

The bug you found earlier, with Task Queue accesses returning 302s instead
of executing correctly, is definitely a bug in the dev_appserver. Can you
please file a bug on the issue tracker?

On Mon, Jun 22, 2009 at 11:18 PM, hawkett  wrote:

>
> Hi,
>
>   I've deployed an app to do some tests on live app engine, and the
> following code
>
> currentUser = users.get_current_user()
> if currentUser is not None:
>   logging.info("Current User - ID: %s, email: %s, nickname: %s" %
> (currentUser.user_id(), currentUser.email(), currentUser.nickname()))
>
> logging.info("is admin? %s" % users.is_current_user_admin())
>
> yields:  'is admin? False'
>
> as the total log output.  This is code that is run directly from a
> handler whose app.yaml entry specifies 'login: admin'.
>
> This represents a pretty big problem - it means you can't rely on
> 'login:admin' to produce a user that is an admin.


On the contrary - only administrators and the system itself (eg, cron and
task queue services) will be able to access "login: admin" handlers.
However, when access is by a service, no user is specified, so
"is_current_user_admin()" will naturally return False, not because it's not
an admin access, but because there's no current user.
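
A minimal sketch of what that implies for a worker handler (webapp-style; the handler name and log messages are illustrative only, not from the Task Queue docs):

from google.appengine.api import users
from google.appengine.ext import webapp
import logging

class WorkerHandler(webapp.RequestHandler):
    # mapped in app.yaml with 'login: admin'
    def post(self):
        user = users.get_current_user()
        if user is None:
            # No current user: the caller is the system (cron or the
            # task queue), since 'login: admin' keeps outside users out.
            logging.info('invoked by the system')
        else:
            # An actual administrator hit the URL directly.
            logging.info('invoked by admin %s', user.email())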


> I'm guessing that
> the goal of the Task Queue API is to be usable on generic URLs - e.g.
> in a RESTful application, the full CRUD (and more) functionality is
> exposed via a dynamic set of URLs that more than likely are not
> specifically for the Task Queue API - however the above situation
> means you really have to code explicitly for the Task Queue API,
> because the meaning of the directives in app.yaml is not reliable.  It
> looks like cron functionality works like this as well, and that has
> been around for a while.  Use cases such as write-behind outlined in
> Brett's IO talk are significantly limited by being unable to predict
> whether you will get a user or not (especially if you intend to hit
> RESTful URIs that could just as easily be hit by real users).  Sure,
> there are ways to code around it, but it's not pretty.


I'm not sure I see the problem - what user would you expect to see listed
when a webhook is being called by the cron or task queue system?

-Nick Johnson


>
> I've added a defect to the issue tracker here -
> http://code.google.com/p/googleappengine/issues/detail?id=1742
>
> I'm keen to understand how google sees this situation, and whether the
> current situation is here to stay, or something short term to deliver
> the functionality early.  Cheers,
>
> Colin
>
> On Jun 22, 4:31 pm, "Nick Johnson (Google)" 
> wrote:
> > Hi hawkett,
> >
> > My mistake. This sounds like a bug in the SDK - can you please file a
> bug?
> >
> > -Nick Johnson
> >
> >
> >
> > On Mon, Jun 22, 2009 at 4:25 PM, hawkett  wrote:
> >
> > > Hi Nick,
> >
> > > In my SDK (just the normal mac download), I can inspect the queue in
> > > admin console, and have a 'run' and 'delete' button next to each task
> > > in the queue.  When I press 'run', the task fires, my server receives
> > > the request, and returns the 302.
> >
> > > Colin
> >
> > > On Jun 22, 4:15 pm, "Nick Johnson (Google)" 
> > > wrote:
> > > > Hi hawkett,
> >
> > > > In the current release of the SDK, the Task Queue stub simply logs
> tasks
> > > to
> > > > be executed, and doesn't actually execute them. How are you executing
> > > these
> > > > tasks?
> >
> > > > -Nick Johnson
> >
> > > > On Mon, Jun 22, 2009 at 3:46 PM, hawkett  wrote:
> >
> > > > > Hi,
> >
> > > > >   I'm running into some issues trying to use the Task Queue API with
> > > > > restricted-access URLs defined in app.yaml - when a URL is defined as
> > > > > either 'login: admin' or 'login: required', the task receives a 302
> > > > > when it fires - which I assume is a redirect to the login page.  I'm
> > > > > just running this on the SDK at the moment, but I was expecting at
> > > > > least the 'login: admin' URL to work, based on the following comment
> > > > > from this page:
> > > > > http://code.google.com/appengine/docs/python/taskqueue/overview.html
> >
> > > > > 'If a task performs sensitive operations (such as modifying important
> > > > > data), the developer may wish to protect the worker URL to prevent a
> > > > > malicious external user from calling it directly. This is possible by
> > > > > marking the worker URL as admin-only in the app configuration.'
> >
> > > > > I figure I'm probably doing something dumb, but I had expected the
> > > > > tasks to be executed as some sort of system user, so that either
> > > > > 'login: required' or 'login: admin' would work - perhaps even being
> > > > > able to specify the email and nickname of the system user as app.yaml
> > > > > configuration.  Another alternative would be if there was a mechanism
> > > > > to create an auth token to supply when the task is created, e.g.
> > > > > users.current_user_auth_token() to execute the task as the current
> > > > > user.
> >
> 

[google-appengine] Re: Remote API security

2009-06-23 Thread Nick Johnson (Google)
Hi hawkett,

On Tue, Jun 23, 2009 at 10:11 AM, hawkett  wrote:

>
> Hi,
>
>   I have a question about the security of the remote_api - looking
> through the source code, I noticed that ConfigureRemoteDatastore takes
> a 'secure' parameter, which is False by default.  I assume this means
> that any data submitted via remote_api is done in plain text.  What
> about the credentials that are obtained using the auth_func() shown in
> the example?


Authentication is always performed over a secure channel, but the cookie
obtained with authentication is then transmitted in the clear if secure=True
is not specified.
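
For illustration, a minimal sketch of a fully encrypted session (the app id and path are placeholders, and the interactive auth_func is just one possibility):

import getpass
from google.appengine.ext.remote_api import remote_api_stub

def auth_func():
    # hypothetical interactive credentials prompt
    return raw_input('Email: '), getpass.getpass('Password: ')

remote_api_stub.ConfigureRemoteDatastore(
    'your-app-id', '/remote_api', auth_func, secure=True)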


>
>
>   Is the secure option supported?  When I set secure=True (in code
> that works fine when it is set to False), I get
>
> 'urllib2.HTTPError: HTTP Error 302: Found'
>
> which I assume is a redirect to a login page.  If it is supported,
> what is the process for its use?  Thanks,


Did you set "secure: always" or "secure: optional" for the remote_api handler
in app.yaml?
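
For reference, a sketch of what that handler stanza might look like (the path and script are the usual remote_api setup; adjust for your app):

- url: /remote_api
  script: $PYTHON_LIB/google/appengine/ext/remote_api/handler.py
  login: admin
  secure: optional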

-Nick Johnson


>
> Colin
> >
>


-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] DataStore Item Keys and Indexing with a Constant Prefix

2009-06-23 Thread Koen Bok

I want my datastore key names to be UUIDs (hex representation), but they
can start with a digit, which is not allowed. So I figured I'd add a
constant prefix.

Could that have a negative impact on the index the datastore builds?
It shouldn't, right? Just double checking :-)
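
For concreteness, a sketch of the kind of key names in question (the model and the 'k' prefix are arbitrary examples, not a recommendation):

import uuid
from google.appengine.ext import db

class Item(db.Model):
    name = db.StringProperty()

def make_key_name():
    # a constant one-character prefix guarantees the name never
    # starts with a digit
    return 'k' + uuid.uuid4().hex

item = Item(key_name=make_key_name(), name='example')
item.put()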



[google-appengine] Re: How do I limit searchable_text_index using SearchableModel?

2009-06-23 Thread Nick Johnson (Google)
Hi ogterran,

On Tue, Jun 23, 2009 at 9:59 AM, ogterran  wrote:

>
> Hi Ian,
>
> Thanks for the response.
> I have one question on number of datastore calls.
> How many datastore calls is the query below making?
> Is it 1 or 100?
>
> > class Product(db.Model):
> >     pid = db.StringProperty(required=True)
> >     title = db.StringProperty(required=True)
> >     site = db.StringProperty(required=True)
> >     url = db.LinkProperty(required=True)
> >
> > class ProductSearchIndex(search.SearchableModel):
> >     product = db.ReferenceProperty(Product)
> >     title = db.StringProperty(required=True)
>
> query = ProductSearchIndex.all().search(searchtext)
> results = query.fetch(100)
> for i, v in enumerate(results):
>     print v.product.url


Only one query - your search terms are ANDed together.

-Nick Johnson


>
> Thanks
> Jon
>
> >
>


-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: How do I limit searchable_text_index using SearchableModel?

2009-06-23 Thread Ian Lewis
ogterran,

It should do one call for the search and then one for each item in the
search result. If you are worried about the performance of the calls to the
datastore, you can modify this code to make the ProductSearchIndex entity a
child of the Product entity and use a keys-only query to retrieve only the
keys of the search index entities (since we only really care about the
Products anyway).

This will still do the same number of queries but will avoid the overhead of
deserializing the ProductSearchIndex objects (and the associated index list
property, which might be long).

Something like the following should work:

class Product(db.Model):
    pid = db.StringProperty(required=True)
    title = db.StringProperty(required=True)
    site = db.StringProperty(required=True)
    url = db.LinkProperty(required=True)

class ProductSearchIndex(search.SearchableModel):
    # parent == Product
    title = db.StringProperty(required=True)

...
# where you write the Product
product = Product(pid=pid, title=title, site=site, url=url)
product.put()
index = ProductSearchIndex(parent=product, title=title)
index.put()

...
# where you search
keys = ProductSearchIndex.all(keys_only=True).search(query).fetch(100)
for key in keys:
    product = Product.get(key.parent())
    print product.url


On Tue, Jun 23, 2009 at 5:59 PM, ogterran  wrote:

>
> Hi Ian,
>
> Thanks for the response.
> I have one question on number of datastore calls.
> How many datastore calls is the query below making?
> Is it 1 or 100?
>
> > class Product(db.Model):
> >     pid = db.StringProperty(required=True)
> >     title = db.StringProperty(required=True)
> >     site = db.StringProperty(required=True)
> >     url = db.LinkProperty(required=True)
> >
> > class ProductSearchIndex(search.SearchableModel):
> >     product = db.ReferenceProperty(Product)
> >     title = db.StringProperty(required=True)
>
> query = ProductSearchIndex.all().search(searchtext)
> results = query.fetch(100)
> for i, v in enumerate(results):
>     print v.product.url
>
> Thanks
> Jon
>
> >
>


-- 
===
BeProud Inc.  Ian Lewis
150-0012
AIOS Hiroo Bldg. 604, 1-11-2 Hiroo, Shibuya-ku, Tokyo
email: ianmle...@beproud.jp
TEL:03-5795-2707
FAX:03-5795-2708
http://www.beproud.jp/
===




[google-appengine] Re: How do I limit searchable_text_index using SearchableModel?

2009-06-23 Thread Ian Lewis
Nick,

But he is also doing v.product, which will do a get() by key for each
Product entity, will it not?

On Tue, Jun 23, 2009 at 7:14 PM, Nick Johnson (Google) <
nick.john...@google.com> wrote:

> Hi ogterran,
>
> On Tue, Jun 23, 2009 at 9:59 AM, ogterran  wrote:
>
>>
>> Hi Ian,
>>
>> Thanks for the response.
>> I have one question on number of datastore calls.
>> How many datastore calls is the query below making?
>> Is it 1 or 100?
>>
>> > class Product(db.Model):
>> >     pid = db.StringProperty(required=True)
>> >     title = db.StringProperty(required=True)
>> >     site = db.StringProperty(required=True)
>> >     url = db.LinkProperty(required=True)
>> >
>> > class ProductSearchIndex(search.SearchableModel):
>> >     product = db.ReferenceProperty(Product)
>> >     title = db.StringProperty(required=True)
>>
>> query = ProductSearchIndex.all().search(searchtext)
>> results = query.fetch(100)
>> for i, v in enumerate(results):
>>     print v.product.url
>
>
> Only one query - your search terms are ANDed together.
>
> -Nick Johnson
>
>
>>
>> Thanks
>> Jon
>>
>>
>>
>
>
> --
> Nick Johnson, App Engine Developer Programs Engineer
> Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
> 368047
>
>
> >
>


-- 
===
BeProud Inc.  Ian Lewis
150-0012
AIOS Hiroo Bldg. 604, 1-11-2 Hiroo, Shibuya-ku, Tokyo
email: ianmle...@beproud.jp
TEL:03-5795-2707
FAX:03-5795-2708
http://www.beproud.jp/
===




[google-appengine] Re: DataStore Item Keys and Indexing with a Constant Prefix

2009-06-23 Thread Ian Lewis
The longer the keys are, the worse the performance, so I would just add a
single known character as a prefix to your keys.

A single character shouldn't have any noticeable impact on performance.

On Tue, Jun 23, 2009 at 7:14 PM, Koen Bok  wrote:

>
> I want my datastore key names to be UUIDs (hex representation), but they
> can start with a digit, which is not allowed. So I figured I'd add a
> constant prefix.
>
> Could that have a negative impact on the index the datastore builds?
> It shouldn't right? Just double checking :-)
> >
>


-- 
===
BeProud Inc.  Ian Lewis
150-0012
AIOS Hiroo Bldg. 604, 1-11-2 Hiroo, Shibuya-ku, Tokyo
email: ianmle...@beproud.jp
TEL:03-5795-2707
FAX:03-5795-2708
http://www.beproud.jp/
===




[google-appengine] Re: DataStore Item Keys and Indexing with a Constant Prefix

2009-06-23 Thread Nick Johnson (Google)
Hi Koen,

No, having all your keys have the same prefix will not impact performance.
Bigtable is smart enough to handle this. :)

-Nick Johnson

On Tue, Jun 23, 2009 at 11:14 AM, Koen Bok  wrote:

>
> I want my datastore key names to be UUIDs (hex representation), but they
> can start with a digit, which is not allowed. So I figured I'd add a
> constant prefix.
>
> Could that have a negative impact on the index the datastore builds?
> It shouldn't right? Just double checking :-)
> >
>


-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: How do I limit searchable_text_index using SearchableModel?

2009-06-23 Thread Nick Johnson (Google)
2009/6/23 Ian Lewis 

> ogterran,
>
> It should do one call for the search and then one for each item in the
> search result.


Not quite - it will do one _query_, and multiple gets. A get is much, much
cheaper than a query. You're right about the number of round-trips, though.


> If you are worried about the performance of the calls to the datastore,
> you can modify this code to make the ProductSearchIndex entity a child of
> the Product entity and use a keys-only query to retrieve only the keys of
> the search index entities (since we only really care about the Products
> anyway).


Good idea!


>
>
> This will still do the same number of queries but will avoid the overhead
> of deserializing the ProductSearchIndex objects (and the associated index
> list property, which might be long).
>
> Something like the following should work:
>
> class Product(db.Model):
>     pid = db.StringProperty(required=True)
>     title = db.StringProperty(required=True)
>     site = db.StringProperty(required=True)
>     url = db.LinkProperty(required=True)
>
> class ProductSearchIndex(search.SearchableModel):
>     # parent == Product
>     title = db.StringProperty(required=True)
>
> ...
> # where you write the Product
> product = Product(pid=pid, title=title, site=site, url=url)
> product.put()
> index = ProductSearchIndex(parent=product, title=title)
> index.put()
>
> ...
> # where you search
> keys = ProductSearchIndex.all(keys_only=True).search(query).fetch(100)
> for key in keys:
>     product = Product.get(key.parent())
>     print product.url


This can be done much more efficiently:
  keys = ProductSearchIndex.all(keys_only=True).search(query).fetch(100)
  products = db.get(x.parent() for x in keys)

Now you're down to just two round-trips!

-Nick Johnson


>
>
> On Tue, Jun 23, 2009 at 5:59 PM, ogterran  wrote:
>
>>
>> Hi Ian,
>>
>> Thanks for the response.
>> I have one question on number of datastore calls.
>> How many datastore calls is the query below making?
>> Is it 1 or 100?
>>
>> > class Product(db.Model):
>> >     pid = db.StringProperty(required=True)
>> >     title = db.StringProperty(required=True)
>> >     site = db.StringProperty(required=True)
>> >     url = db.LinkProperty(required=True)
>> >
>> > class ProductSearchIndex(search.SearchableModel):
>> >     product = db.ReferenceProperty(Product)
>> >     title = db.StringProperty(required=True)
>>
>> query = ProductSearchIndex.all().search(searchtext)
>> results = query.fetch(100)
>> for i, v in enumerate(results):
>>     print v.product.url
>>
>> Thanks
>> Jon
>>
>>
>>
>
>
> --
> ===
> 株式会社ビープラウド  イアン・ルイス
> 〒150-0012
> 東京都渋谷区広尾1-11-2アイオス広尾ビル604
> email: ianmle...@beproud.jp
> TEL:03-5795-2707
> FAX:03-5795-2708
> http://www.beproud.jp/
> ===
>
> >
>


-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Odd memcache behavior across multiple app instances

2009-06-23 Thread Nick Johnson (Google)
Hi Kim,

It's not clear from your description exactly how you're performing your
tests. Without extra information, the most likely explanation would be that
you're seeing a race condition in your code, where the key is modified
between subsequent requests to the memcache API.
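
To illustrate the shape of such a race, a sketch (the pattern, not your exact code): nothing makes a get-compare-set sequence atomic, so another request can write the key between the get() and the set():

from google.appengine.api import memcache

def check_and_update(key, expected, new_val):
    old = memcache.get(key)         # another request may set `key` here...
    if old is not None and old != expected:
        return 'fail'               # ...making this comparison fail
    memcache.set(key, new_val)      # ...or clobbering this write
    return 'good'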

Also, are you running this against the dev_appserver, or in production?

-Nick Johnson

On Tue, Jun 23, 2009 at 7:18 AM, Kim Riber  wrote:

>
> Just made another test to confirm the behavior I see.
> This example is much simpler: it simply has 10 threads writing random
> values to memcache under the same key.
> I would expect the last value written to be the one left in memcache.
> When, afterwards, 4 threads each read 10 times from that same key,
> they return 2 different values.
> This only happens if, prior to the writing threads, I run some heavy
> tasks to force GAE to spawn more app instances.
> It seems like each server cluster might have its own memcache,
> independent from the others. I hope this is not true. From a thread
> from Ryan
>
> http://groups.google.com/group/google-appengine/browse_thread/thread/45272062a8e36545/2289806f3f711c09?lnk=gst&q=memcache+atomic#2289806f3f711c09
> he states that
>
> >as for the datastore, and all other current stored data APIs like
> >memcache, there is a single, global view of data. we go to great
> >lengths to ensure that these APIs are strongly consistent.
>
> Regards
> Kim
>
> On Jun 17, 8:51 pm, Kim Riber  wrote:
> > To clarify a bit:
> >
> > one thread from our server runs one loop with a unique id.
> > each request stores a value in memcache and returns that value. In
> > the following request, memcache is queried to see whether the value just
> > written is in the cache.
> > This sometimes fails.
> >
> > My fear is that it is due to the requests changing to another app
> > instance and then suddenly getting wrong data.
> >
> > instance 1 +  +
> > instance 2  --
> >
> > Hope this clears out the example above a bit
> >
> > Cheers
> > Kim
> >
> > On Jun 17, 7:52 pm, Kim Riber  wrote:
> >
> > > Hi,
> > > I'm experiencing some rather strange behavior from memcache. I think
> > > I'm getting different data back from memcache using the same key.
> > > The issue I see is that when putting load on our application, even
> > > simple memcache queries are starting to return inconsistent data. When
> > > running the same request from multiple threads, I get different
> > > results.
> > > I've made a very simple example that runs fine on 1-200 threads, but
> > > if I put load on the app (with some heavier requests) just before I
> > > run my test, I see different values coming back from memcache using
> > > the same keys.
> >
> > > def get_new_memcache_value(key, old_value):
> > >     old_val = memcache.get(key)
> > >     new_val = uuid.uuid4().get_hex()
> > >     reply = 'good'
> > >     if old_val and old_value != "":
> > >         if old_val != old_value:
> > >             reply = 'fail'
> > >             new_val = old_value
> > >         else:
> > >             if not memcache.set(key, new_val):
> > >                 reply = 'set_fail'
> > >     else:
> > >         reply = 'new'
> > >         if not memcache.set(key, new_val):
> > >             reply = 'set_fail'
> > >     return (new_val, reply)
> >
> > > and from a server posting requests:
> >
> > > def request_loop(id):
> > >     key = "test:key_%d" % id
> > >     val, reply = get_new_memcache_value(key, "")
> > >     for i in range(20):
> > >         val, reply = get_new_memcache_value(key, val)
> >
> > > Is memcache working locally on a cluster of servers, such that if an
> > > application is spread over more clusters, memcache will not
> > > propagate data to the other clusters?
> >
> > > I hope someone can clarify this, since I can't find any post regarding
> > > this issue.
> >
> > > Is there some way to get the application instance ID, so I can do some
> > > more investigation on the subject?
> >
> > > Thanks
> > > Kim
> >
> >
> >
>


-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: dev_appserver.py throws "SystemError: frexp() result out of range" and subsequenty "ValueError: bad marshal data"

2009-06-23 Thread Nick Johnson (Google)
Hi rraj,

Have you tried clearing your datastore?

-Nick Johnson

On Tue, Jun 23, 2009 at 3:07 AM, rraj  wrote:

> Hi,
> Has anybody encountered the following error, and can you guide me on how
> you fixed it?
>
> When running the development app server, the initial run throws
> "SystemError: frexp() result out of range"...
>
> C:\Program Files\Google\google_appengine>dev_appserver.py
> --datastore_path=C:\gae_data --history_path=C:\gae_data demos\guestbook
> Traceback (most recent call last):
>   File "C:\Program Files\Google\google_appengine\dev_appserver.py", line
> 60, in 
> run_file(__file__, globals())
>   File "C:\Program Files\Google\google_appengine\dev_appserver.py", line
> 57, in run_file
> execfile(script_path, globals_)
>   File "C:\Program
> Files\Google\google_appengine\google\appengine\tools\dev_appserver_main.py",
> line 483, in 
> sys.exit(main(sys.argv))
>   File "C:\Program
> Files\Google\google_appengine\google\appengine\tools\dev_appserver_main.py",
> line 400, in main
> SetGlobals()
>   File "C:\Program
> Files\Google\google_appengine\google\appengine\tools\dev_appserver_main.py",
> line 86, in SetGlobals
> from google.appengine.tools import dev_appserver
>   File "C:\Program
> Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line
> 86, in 
> from google.appengine.api import datastore_file_stub
>   File "C:\Program
> Files\Google\google_appengine\google\appengine\api\datastore_file_stub.py",
> line 38, in 
> import datetime
> SystemError: frexp() result out of range
>
>
>
> Subsequent attempts to run applications throw "ValueError: bad
> marshal data"...
>
>
> C:\Program Files\Google\google_appengine>dev_appserver.py
> --datastore_path=C:\gae_data --history_path=C:\gae_data demos\guestbook
> Traceback (most recent call last):
>   File "C:\Program Files\Google\google_appengine\dev_appserver.py", line
> 60, in 
> run_file(__file__, globals())
>   File "C:\Program Files\Google\google_appengine\dev_appserver.py", line
> 57, in run_file
> execfile(script_path, globals_)
>   File "C:\Program
> Files\Google\google_appengine\google\appengine\tools\dev_appserver_main.py",
> line 483, in 
> sys.exit(main(sys.argv))
>   File "C:\Program
> Files\Google\google_appengine\google\appengine\tools\dev_appserver_main.py",
> line 400, in main
> SetGlobals()
>   File "C:\Program
> Files\Google\google_appengine\google\appengine\tools\dev_appserver_main.py",
> line 86, in SetGlobals
> from google.appengine.tools import dev_appserver
>   File "C:\Program
> Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line
> 86, in 
> from google.appengine.api import datastore_file_stub
> ValueError: bad marshal data
>
>
>
> Python Version :: Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45)
> [MSC v.1310 32 bit Intel)] on win32
>
> GAE Version : 1.2.3 (GoogleAppEngine_1.2.3.msi)
>
> Was running the application with an earlier version of GAE
> (1.2.2/1.2.1), when I encountered the "bad marshal data" problem during a
> restart of my application. Tried moving to the latest version to see if this
> has been handled.
>
> Removing the datastore_file_stub.pyc and running again reproduces
> the problem in the same sequence : "frexp() result out of range" followed by
> "bad marshal data".
>
> Tried moving to Python 2.6.2 - did not help.
>
> Tried repairing GAE 1.2.3 - did not help.
>
> Uninstalled Python 2.6.2, Python 2.5.2 & GAE and then installed Python
> 2.5.2 and GAE 1.2.3 again and tested with demo application and new
> data-store path, when I got the above traces.
>
>
> Not able to run any GAE apps now :-(
> Any tips to get me going again will be appreciated.
>
> Thanks & Regards,
> R.Rajkumar
>
>
> >
>


-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Efficient way to structure my data model

2009-06-23 Thread Nick Johnson (Google)
Hi ecognium,

On Tue, Jun 23, 2009 at 1:35 AM, ecognium  wrote:

>
> Thanks, Nick. Let me make sure I understand your comment correctly.
> Suppose I have the following data:
>
> ID   BlobProp1  BlobProp2-N  Keywords        Categ
> ====================================================================
> 123  blah       blah         tag1,tag2,tag3  Circle, Red, Large, Dotted
> 345  blah       blah         tag3,tag4,tag5  Square, Blue, Small, Solid
> 678  blah       blah         tag1,tag3,tag4  Circle, Blue, Small, Solid
> --------------------------------------------------------------------
>
> The field categ (list) contains four different types - Shape, Color,
> Size and Line Type. Suppose the user wants to retrieve all entities
> that are Small Dotted Blue Circles then the query will be:
>
> Select * From MyModel where categ = "Circle" AND categ = "Small" AND
> categ = "Blue" AND categ = "Dotted"
>
> When I was reading about exploding indexes the example indicated the
> issue was due to Cartesian product of two list elements. I thought the
> same will hold true with one list field when used multiple times in a
> query.


That is indeed true, though it's not quite the cartesian product - the
datastore won't bother indexing (Circle, Circle, Circle, Circle), or
(Dotted, Dotted, Dotted, Dotted) - it only indexes every unique combination,
which is a substantially smaller number than the cartesian product. It's
still only tractable for small lists, though, such as the 4 item lists
you're dealing with.

> Are you saying the above query will not need {Circle, Red, Large, Dotted}
> * {Circle, , , } * {Circle, , , } * {Circle, , , } index entries for
> entity ID=123?


Correct - if you're not specifying a sort order, you can execute the query
without any composite indexes whatsoever. The datastore satisfies
equality-only queries using a merge join strategy.
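
Concretely, a sketch of such an equality-only query against the model from this thread (no composite index needed):

from google.appengine.ext import db

class MyModel(db.Model):
    categ = db.StringListProperty()
    keywords = db.StringListProperty()

query = MyModel.all()
for value in ('Circle', 'Small', 'Blue', 'Dotted'):
    query.filter('categ =', value)
results = query.fetch(20)  # served by a merge join over the built-in index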


> I was getting index errors
> when I was using the categ list property four times in my index
> specification and that's why I was wondering if I should restructure
> things.


How many items did you have in the list you were indexing in that case? If
your list has 4 items and your index specification lists it 4 times, you
should only get one index entry.

> so I am guessing the following spec should not cause any index
> issues in the future?


Again, that depends on the number of entries in the 'categ' list. With 4
entries, this will only generate a single index entry, but the number of
entries will expand exponentially as the list increases in size.

-Nick Johnson


>
> - kind: MyModel
>  properties:
>  - name: categ
>  - name: categ
>  - name: categ
>  - name: categ
>  - name: keywords
>  - name: __key__   # used for paging
>
> Thanks,
> -e
>
>
> On Jun 22, 2:10 am, "Nick Johnson (Google)" 
> wrote:
> > Hi ecognium,
> >
> > If I understand your problem correctly, every entity will have 0-4
> > entries in the 'categ' list, corresponding to the values for each of 4
> > categories (e.g., Color, Size, Shape, etc)?
> >
> > The sample query you give, with only equality filters, will be
> > satisfiable using the merge join query planner, which doesn't require
> > custom indexes, so you won't have high indexing overhead. There will
> > simply be one index entry for each item in each list.
> >
> > If you do need custom indexes, the number of index entries isn't 4^4, as
> > you suggest, but rather smaller. Assuming you want to be able to query
> > with any number of categories from 0 to 4, you'll need 3 or 4 custom
> > indexes (depending on whether the 0-category case requires its own
> > index), and the total number of index entries will be 4C1 + 4C2 + 4C3 +
> > 4C4 = 4 + 6 + 4 + 1 = 15. For 6 categories, the number of entries would
> > be 6 + 15 + 20 + 15 + 6 + 1 = 63, which is still a not-unreasonable
> > number.
> >
> > -Nick Johnson
> >
> >
> >
> > On Mon, Jun 22, 2009 at 8:51 AM, ecognium  wrote:
> >
> > > Hi All,
> >
> > >I would like to get your opinion on the best way to structure my
> > > data model.
> > > My app allows the users to filter the entities by four category types
> > > (say A,B,C,D). Each category can have multiple values (for e.g.,
> > > category type A can have values 1,2,3) but the
> > > user can  choose only one value per category for filtering.  Please
> > > note the values are unique across the category types as well. I could
> > > create four fields corresponding to the four types but it does not
> > > allow me to expand to more categories later easily. Right now, I just
> > > use one list field to store the different values as it is easy to add
> > > more category types later on.
> >
> > > My model (simplified) looks like this:
> >
> > > class Example(db.Model):
> >
> > >categ= db.StringListProperty()
> >
> > >keywords = db.StringListProperty()
> >
> > > The field keywords will have about 

[google-appengine] Re: DataStore Item Keys and Indexing with a Constant Prefix

2009-06-23 Thread Koen Bok

OK, that was stupid, I pasted the wrong quote. I meant this:

"The longer the keys are, the worse the performance, so I would just add a
single known character as a prefix to your keys."

Now I didn't know this, and I can't find anything about that in the
docs. Can anyone verify this?

- Koen

On Jun 23, 12:57 pm, Koen Bok  wrote:
> Great, thanks Nick.
>
> "No, having all your keys have the same prefix will not impact
> performance.
> Bigtable is smart enough to handle this. :) "
>
> Now I didn't know this, and I can't find anything about that in the
> docs. Can anyone verify this?
>
> - Koen
>
> On Jun 23, 12:31 pm, "Nick Johnson (Google)" 
> wrote:
>
>
>
> > Hi Koen,
>
> > No, having all your keys have the same prefix will not impact performance.
> > Bigtable is smart enough to handle this. :)
>
> > -Nick Johnson
>
> > On Tue, Jun 23, 2009 at 11:14 AM, Koen Bok  wrote:
>
> > > I want my datastore key names to be UUIDs (hex representation), but
> > > they can start with a digit, which is not allowed. So I figured I'd add a
> > > constant prefix.
>
> > > Could that have a negative impact on the index the datastore builds?
> > > It shouldn't right? Just double checking :-)
>
> > --
> > Nick Johnson, App Engine Developer Programs Engineer
> > Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
> > 368047



[google-appengine] Re: DataStore Item Keys and Indexing with a Constant Prefix

2009-06-23 Thread Nick Johnson (Google)
Hi Koen,

There isn't a significant performance penalty using longer keys. Ian is
simply referring to the fact that longer strings inevitably require more
processing (copying, memory, etcetera) than shorter ones. This isn't
generally a significant consideration when it comes to entity keys, though.

-Nick Johnson

On Tue, Jun 23, 2009 at 11:59 AM, Koen Bok  wrote:

>
> OK, that was stupid, I pasted the wrong quote. I meant this:
>
> "The longer the keys are, the worse the performance, so I would just add a
> single known character as a prefix to your keys."
>
> Now I didn't know this, and I can't find anything about that in the
> docs. Can anyone verify this?
>
> - Koen
>
> On Jun 23, 12:57 pm, Koen Bok  wrote:
> > Great, thanks Nick.
> >
> > "No, having all your keys have the same prefix will not impact
> > performance.
> > Bigtable is smart enough to handle this. :) "
> >
> > Now I didn't know this, and I can't find anything about that in the
> > docs. Can anyone verify this?
> >
> > - Koen
> >
> > On Jun 23, 12:31 pm, "Nick Johnson (Google)" 
> > wrote:
> >
> >
> >
> > > Hi Koen,
> >
> > > No, having all your keys have the same prefix will not impact
> performance.
> > > Bigtable is smart enough to handle this. :)
> >
> > > -Nick Johnson
> >
> > > On Tue, Jun 23, 2009 at 11:14 AM, Koen Bok 
> wrote:
> >
> > > > I want my datastore key names to be UUIDs (hex representation), but
> > > > they can start with a digit, which is not allowed. So I figured I'd add a
> > > > constant prefix.
> >
> > > > Could that have a negative impact on the index the datastore builds?
> > > > It shouldn't right? Just double checking :-)
> >
> > > --
> > > Nick Johnson, App Engine Developer Programs Engineer
> > > Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
> Number:
> > > 368047
> >
>


-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: DataStore Item Keys and Indexing with a Constant Prefix

2009-06-23 Thread Koen Bok

Great, thanks Nick.

"No, having all your keys have the same prefix will not impact
performance.
Bigtable is smart enough to handle this. :) "

Now I didn't know this, and I can't find anything about that in the
docs. Can anyone verify this?

- Koen

On Jun 23, 12:31 pm, "Nick Johnson (Google)" 
wrote:
> Hi Koen,
>
> No, having all your keys have the same prefix will not impact performance.
> Bigtable is smart enough to handle this. :)
>
> -Nick Johnson
>
> On Tue, Jun 23, 2009 at 11:14 AM, Koen Bok  wrote:
>
> > I want my datastore key names to be UUIDs (hex representation), but they
> > can start with a digit, which is not allowed. So I figured I'd add a
> > constant prefix.
>
> > Could that have a negative impact on the index the datastore builds?
> > It shouldn't right? Just double checking :-)
>
> --
> Nick Johnson, App Engine Developer Programs Engineer
> Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
> 368047



[google-appengine] Re: download_data: Can I get only entities created since some arbitrary point in time?

2009-06-23 Thread Nick Johnson (Google)
Hi Jonathan,

My answer is exactly the same as Tony's. Either add a timestamp field to
your entities, so you can filter on that, or otherwise mark your existing
entities as already downloaded.

All I would add is that you can use remote_api directly to query on only
updated entities, if you wish.
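
A sketch of the timestamp approach (the property name is arbitrary; auto_now_add stamps each entity once, at creation):

import datetime
from google.appengine.ext import db

class MyEntity(db.Model):
    # ... your existing properties ...
    created = db.DateTimeProperty(auto_now_add=True)

# over remote_api or in a handler; the cutoff date is just an example
cutoff = datetime.datetime(2009, 6, 1)
new_entities = MyEntity.all().filter('created >', cutoff).order('created').fetch(1000)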

-Nick Johnson

On Mon, Jun 22, 2009 at 7:38 PM, Jonathan Feinberg wrote:

>
>
>
> On Jun 22, 2:29 pm, Tony  wrote:
> [snip]
> > Hope that's more helpful.
>
> I'm hoping that someone who actually knows the answer--someone with
> google.com in their email address--will contribute to this thread.
>
> >
>


-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Problem administering the apps

2009-06-23 Thread Nick Johnson (Google)
Hi Alex,

Did you create your apps using a Google Apps account? If so, you need to log
in at http://appengine.google.com/a/yourdomain - for example,
http://appengine.google.com/a/navigheaza.ro .

-Nick Johnson

On Mon, Jun 22, 2009 at 7:07 PM, Alex Geo  wrote:

>
> Hello!
>
> I need a bit of help over here regarding the managing of apps. I visit
> http://appspot.com, log in, and I'm redirected to
> http://appengine.google.com/start
> where I can create applications, but I cannot see my existing
> applications and modify their settings.
>
> Has anyone experienced this problem before? Please let me know if you
> have, and how it was fixed.
>
> Best regards,
> Alex
>
> >
>


-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: download_data: How do we deal with blobs?

2009-06-23 Thread Nick Johnson (Google)
Hi Jonathan,

It's not clear what you're asking. On the way out of what?

-Nick Johnson

On Mon, Jun 22, 2009 at 7:02 PM, Jonathan Feinberg wrote:

>
> How should we deal with blobs on the way out? Should we build
> (potentially large) Base64 strings?
> >
>


-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Query with >1000 matches

2009-06-23 Thread herbie


So will this:
query = Foo.all().filter("property_x >", 50).order("property_x").order("-timestamp")
results = query.fetch(200)

..get the latest entities where property_x > 50? Or will it get the
200 entities with the largest 'property_x', which are then ordered
by 'timestamp'? A subtle but important difference.

As I said, I need to make sure I get the latest entities.


On Jun 22, 11:33 pm, Tony  wrote:
> Yes, that is what it means.  I forgot about that restriction.
>
> I see what you mean about changing 'x' values.  Perhaps consider
> keeping two counts - a running sum and a running count (of the # of x
> properties).  If a user modifies an 'x' value, you can adjust the sum
> up or down accordingly.
>
> On Jun 22, 5:40 pm, herbie <4whi...@o2.co.uk> wrote:
>
> > I tried your query below but I get "BadArgumentError: First ordering
> > property must be the same as inequality filter property, if specified
> > for this query;"
> > Does this mean I have to order on 'x' first, then order on 'date'?
> > Will this still return the latest 200 of all entities with x > 50 if
> > I  call query.fetch(200)?
>
> > I take your and Nick's point about keeping a 'running average'.   But in
> > my example the user can change the 'x' value so the average has to be
> > recalculated from the latest entities.
>
> > On Jun 22, 9:46 pm, Tony  wrote:
>
> > > You could accomplish this task like so:
>
> > > xlist = []
> > > query = Foo.all().filter("property_x >", 50).order("-timestamp")
> > > for q in query:
> > >   xlist.append(q.property_x)
> > > avg = sum(xlist) / len(xlist)
>
> > > What Nick is saying, I think, is that fetching 1000 entities is going
> > > to be very resource-intensive, so a better way to do it is to
> > > calculate this data at write-time instead of read-time.  For example,
> > > every time you add an entity, you could update a separate entity that
> > > has a property like "average = db.FloatProperty()" with the current
> > > average, and then you could simply fetch that entity and get the
> > > current running average.
>
> > > On Jun 22, 4:25 pm, herbie <4whi...@o2.co.uk> wrote:
>
> > > > Ok. Say I have many (>1000)  Model entities with two properties 'x'
> > > > and 'date'.    What is the most efficient query to fetch say the
> > > > latest 200 entities  where x > 50.   I don't care what their 'date's
> > > > are as long as I get the latest and x > 50
>
> > > > Thanks again for your help.
>
> > > > On Jun 22, 4:11 pm, "Nick Johnson (Google)" 
> > > > wrote:
>
> > > > > Consider precalculating this data and storing it against another 
> > > > > entity.
> > > > > This will save a lot of work on requests.
>
> > > > > -Nick Johnson
>
> > > > > On Mon, Jun 22, 2009 at 3:55 PM, herbie <4whi...@o2.co.uk> wrote:
>
> > > > > > No the users won't need to read 1000 entities, but I want to 
> > > > > > calculate
> > > > > > the average of a  property from the latest 1000 entities.
>
> > > > > > On Jun 22, 3:30 pm, "Nick Johnson (Google)" 
> > > > > > 
> > > > > > wrote:
> > > > > > > Correct. Are you sure you need 1000 entities, though? Your users 
> > > > > > > probably
> > > > > > > won't read through all 1000.
>
> > > > > > > -Nick Johnson
>
> > > > > > > On Mon, Jun 22, 2009 at 3:23 PM, herbie <4whi...@o2.co.uk> wrote:
>
> > > > > > > > So to be sure to get the latest 1000 entities I should add a 
> > > > > > > > datetime
> > > > > > > > property to my entitie model and filter and sort on that?
>
> > > > > > > > On Jun 22, 1:42 pm, herbie <4whi...@o2.co.uk> wrote:
> > > > > > > > > I know that if there are more than 1000 entities that match a 
> > > > > > > > > query,
> > > > > > > > > then only 1000 will  be return by fetch().  But my question 
> > > > > > > > > is which
> > > > > > > > > 1000? The last 1000 added to the datastore?  The first 1000 
> > > > > > > > > added to
> > > > > > > > > the datastore? Or is it undedined?
>
> > > > > > > > > Thanks
> > > > > > > > > Ian
>
> > > > > > > --
> > > > > > > Nick Johnson, App Engine Developer Programs Engineer
> > > > > > > Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
> > > > > > Number:
> > > > > > > 368047
>
> > > > > --
> > > > > Nick Johnson, App Engine Developer Programs Engineer
> > > > > Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration 
> > > > > Number:
> > > > > 368047



[google-appengine] Re: download_data: How do we deal with blobs?

2009-06-23 Thread Jonathan Feinberg

> It's not clear what you're asking. On the way out of what?

On the way out of App Engine. :)

Is there a preferred "correct" way to encode blobs in an exporter?



[google-appengine] Re: Remote API security

2009-06-23 Thread hawkett

Thanks Nick,

   I hadn't set anything - now I know a bit more about app.yaml :) -
I've got it set to optional now, and it's working fine - cheers

Colin

On Jun 23, 11:13 am, "Nick Johnson (Google)" 
wrote:
> Hi hawkett,
>
> On Tue, Jun 23, 2009 at 10:11 AM, hawkett  wrote:
>
> > Hi,
>
> >   I have a question about the security of the remote_api - looking
> > through the source code, I noticed that ConfigureRemoteDatastore takes
> > a 'secure' parameter, which is False by default.  I assume this means
> > that any data submitted via remote_api is done in plain text.  What
> > about the credentials that are obtained using the auth_func() shown in
> > the example?
>
> Authentication is always performed over a secure channel, but the cookie
> obtained with authentication is then transmitted in the clear if secure=True
> is not specified.
>
>
>
> >   Is the secure option supported?  When I set secure=True (in code
> > that works fine when it is set to False), I get
>
> > 'urllib2.HTTPError: HTTP Error 302: Found'
>
> > which I assume is a redirect to a login page.  If it is supported,
> > what is the process for its use?  Thanks,
>
> Did you set "secure: always" or "secure:optional" for the remote_api handler
> in app.yaml?
>
> -Nick Johnson
>
>
>
> > Colin
>
> --
> Nick Johnson, App Engine Developer Programs Engineer
> Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
> 368047



[google-appengine] Re: download_data: How do we deal with blobs?

2009-06-23 Thread Nick Johnson (Google)
On Tue, Jun 23, 2009 at 12:40 PM, Jonathan Feinberg wrote:

>
> > It's not clear what you're asking. On the way out of what?
>
> On the way out of App Engine. :)
>
> Is there a preferred "correct" way to encode blobs in an exporter?


No, just whatever suits you best. You can write them in binary, or to
separate files, if you want.

-Nick Johnson




-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Query with >1000 matches

2009-06-23 Thread Nick Johnson (Google)
On Tue, Jun 23, 2009 at 12:42 PM, herbie <4whi...@o2.co.uk> wrote:

>
>
> So will this :
> query = Foo.all().filter("property_x >", 50).order("property_x").order
> ("-timestamp")
> results = query.fetch(200)
>
> ..get the latest entities where property_x > 50 ?  Or will it get the
> 200 properties with the largest 'property_x'  which are then ordered
> by 'timestamp' ?   A subtle but important difference.


It will get the 200 entities with the smallest property_x greater than 50
(since you're filtering >50 and ordering first by property_x). If two
entities have the same value for property_x, they will be sorted by
timestamp, descending.

If you need the latest, and your threshold of 50 is a constant, you can add
a BooleanProperty to your entity group encoding the condition 'is greater
than 50', and filter on that using an equality filter.

-Nick Johnson
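
A rough sketch of the BooleanProperty idea, assuming a Foo model with the
property_x and timestamp names used in this thread (the flag name and the
save helper are made up for illustration):

from google.appengine.ext import db

class Foo(db.Model):
    property_x = db.IntegerProperty()
    timestamp = db.DateTimeProperty(auto_now_add=True)
    # Hypothetical flag: encodes the fixed 'property_x > 50' condition,
    # maintained at write time.
    above_50 = db.BooleanProperty(default=False)

def save_foo(value):
    # Hypothetical helper: keeps the flag in sync whenever a Foo is written.
    foo = Foo(property_x=value, above_50=(value > 50))
    foo.put()
    return foo

# The equality filter leaves timestamp free to be the first sort order,
# so this really does return the newest 200 matching entities.
latest = Foo.all().filter('above_50 =', True).order('-timestamp').fetch(200)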


>
> As I said I need to make sure I get the latest entities.
>
>
> On Jun 22, 11:33 pm, Tony  wrote:
> > Yes, that is what it means.  I forgot about that restriction.
> >
> > I see what you mean about changing 'x' values.  Perhaps consider
> > keeping two counts - a running sum and a running count (of the # of x
> > properties).  If a user modifies an 'x' value, you can adjust the sum
> > up or down accordingly.
> >
> > On Jun 22, 5:40 pm, herbie <4whi...@o2.co.uk> wrote:
> >
> > > I tried your query below but I get "BadArgumentError: First ordering
> > > property must be the same as inequality filter property, if specified
> > > for this query;"
> > > Does this mean I have to order on 'x' first, then order on 'date'?
> > > Will this still return the latest 200 of all entities with x > 50 if
> > > I  call query.fetch(200)?
> >
> > > I take your and Nick's point about keeping a 'running average'.   But in
> > > my example the user can change the 'x' value so the average has to be
> > > recalculated from the latest entities.
> >
> > > On Jun 22, 9:46 pm, Tony  wrote:
> >
> > > > You could accomplish this task like so:
> >
> > > > xlist = []
> > > > query = Foo.all().filter("property_x >", 50).order("-timestamp")
> > > > for q in query:
> > > >   xlist.append(q.property_x)
> > > > avg = sum(xlist) / len(xlist)
> >
> > > > What Nick is saying, I think, is that fetching 1000 entities is going
> > > > to be very resource-intensive, so a better way to do it is to
> > > > calculate this data at write-time instead of read-time.  For example,
> > > > every time you add an entity, you could update a separate entity that
> > > > has a property like "average = db.FloatProperty()" with the current
> > > > average, and then you could simply fetch that entity and get the
> > > > current running average.
> >
> > > > On Jun 22, 4:25 pm, herbie <4whi...@o2.co.uk> wrote:
> >
> > > > > Ok. Say I have many (>1000)  Model entities with two properties 'x'
> > > > > and 'date'. What is the most efficient query to fetch say the
> > > > > latest 200 entities  where x > 50.   I don't care what their
> 'date's
> > > > > are as long as I get the latest and x > 50
> >
> > > > > Thanks again for your help.
> >
> > > > > On Jun 22, 4:11 pm, "Nick Johnson (Google)" <
> nick.john...@google.com>
> > > > > wrote:
> >
> > > > > > Consider precalculating this data and storing it against another
> entity.
> > > > > > This will save a lot of work on requests.
> >
> > > > > > -Nick Johnson
> >
> > > > > > On Mon, Jun 22, 2009 at 3:55 PM, herbie <4whi...@o2.co.uk>
> wrote:
> >
> > > > > > > No the users won't need to read 1000 entities, but I want to
> calculate
> > > > > > > the average of a  property from the latest 1000 entities.
> >
> > > > > > > On Jun 22, 3:30 pm, "Nick Johnson (Google)" <
> nick.john...@google.com>
> > > > > > > wrote:
> > > > > > > > Correct. Are you sure you need 1000 entities, though? Your
> users probably
> > > > > > > > won't read through all 1000.
> >
> > > > > > > > -Nick Johnson
> >
> > > > > > > > On Mon, Jun 22, 2009 at 3:23 PM, herbie <4whi...@o2.co.uk>
> wrote:
> >
> > > > > > > > > So to be sure to get the latest 1000 entities I should add
> a datetime
> > > > > > > > > property to my entity model and filter and sort on that?
> >
> > > > > > > > > On Jun 22, 1:42 pm, herbie <4whi...@o2.co.uk> wrote:
> > > > > > > > > > I know that if there are more than 1000 entities that
> match a query,
> > > > > > > > > > then only 1000 will be returned by fetch().  But my
> question is which
> > > > > > > > > > 1000? The last 1000 added to the datastore?  The first
> 1000 added to
> > > > > > > > > > the datastore? Or is it undefined?
> >
> > > > > > > > > > Thanks
> > > > > > > > > > Ian
> >
> > > > > > > > --
> > > > > > > > Nick Johnson, App Engine Developer Programs Engineer
> > > > > > > > Google Ireland Ltd. :: Registered in Dublin, Ireland,
> Registration
> > > > > > > Number:
> > > > > > > > 368047
> >
> > > > > > --
> > > > > > Nick Johnson, App Engine Developer Programs Engineer
> > > > > > Google Ireland Ltd. :: Registered in Dublin, Ireland,
> Registration Number:
>

[google-appengine] Re: Problem with "Verify Your Account by SMS" page

2009-06-23 Thread Nick Johnson (Google)
Hi WaveyGravey,

I've manually activated your account.

-Nick Johnson

On Mon, Jun 22, 2009 at 5:43 PM, WaveyGravey  wrote:

>
> I am having the same problem.  What is the issue with this?  How can I
> get around this?
>
> On Jun 3, 4:17 pm, "Nick Johnson (Google)" 
> wrote:
> > Hi Harry,
> >
> > Your account should now be verified.
> >
> > -Nick Johnson
> >
> >
> >
> > On Wed, Jun 3, 2009 at 10:27 AM, Harry Levinson 
> wrote:
> > > I have a problem with "Verify Your Account by SMS" page.
> >
> > > The system sent me the verification code by SMS to my Verizon phone
> right
> > > away.
> >
> > > However the Verify web page says "There were errors:  Carrier".
> >
> > > How can I get help fixing my account or finding a web page in which to
> > > paste my verification code?
> >
> > > Harry Levinson


-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Free Quoata Reduction?

2009-06-23 Thread WeatherPhilip

Actually, I think that this was the timing of the reduction that they
announced in February. I don't think that the size of the quota
reductions (in particular the data reduction) was announced until very
recently. Even then, it was not publicized well (think Arthur Dent and
the 'Beware of the Leopard' sign).

According to the post I read, the levels were set so that 10% of the
existing free apps at 5 million page views per month would exceed the
free quota. How they define a page view, I have no idea. Certainly
this will screw up any app that serves up any significant amount of
graphical content.

This new constraint means that a page view has to consume an average
of 6000 bytes or less [ 10**9 / (5,000,000 / 30) = 6000 ]. This is
really tough if a page view contains a dynamically generated image, or
if you are serving an image that is unlikely to be cached in the
user's browser (e.g. a photo gallery app). The 6000 byte limit would
be OK if it covered just the HTML you need to serve to render the UI.

I would be interested in Google's methodology.

Philip



On Jun 23, 4:34 am, conman 
wrote:
> Ah, ok, then this was the reduction that has been announced in
> February.
>
> Tx
>
> On 23 Jun., 10:04, Sylvain  wrote:
>
> > It was announced since many months (here and the blog)
>
> >http://code.google.com/intl/fr/appengine/docs/quotas.html
> >http://googl..
>
> > On 23 juin, 09:30, conman 
> > wrote:
>
> > > Why didn't they send out a notification about that... don't like it to
> > > see that happen, but at least I would like to become informed.
>
> > > The "free 5 Mio PI" claim from launch last year is now history for
> > > sure - at least for my application.
>
> > > Cheers,
> > > Constantin
>
> > > On 23 Jun., 09:17, jianwen  wrote:
>
> > > > I also noticed that. The free CPU hours now reduce to 6.50 per day,
> > > > and Outgoing/Incoming Bandwidth reduce to 1GB per day.
>
> > > > On Jun 23, 2:43 pm, conman 
> > > > wrote:
>
> > > > > Hi,
>
> > > > > I just looked at the dashboard and saw that nearly one third of my
> > > > > free CPU quoata has been used up for today.
>
> > > > > How can that be, because my app didn't do significantly more than the
> > > > > other day when I looked last (I guess it was end of last week)
>
> > > > > Is this a known monitoring issue or was there again a quota adjustment
> > > > > as in february?
>
> > > > > Cheers,
> > > > > Constantin
>
>



[google-appengine] Re: Query with >1000 matches

2009-06-23 Thread herbie

Thanks for your help Nick.

No, my threshold value 'x' isn't constant.   I still haven't got my head
round this yet!   Can you tell me how to get the latest entities
(assuming I don't want all of them)   out of the datastore  and filter
on another property?

For example:  Get the latest 200 entities  where x > 50.   I don't
care what their 'date's are as long as I get the latest and x > 50.


On Jun 23, 1:16 pm, "Nick Johnson (Google)" 
wrote:
> On Tue, Jun 23, 2009 at 12:42 PM, herbie <4whi...@o2.co.uk> wrote:
>
> > So will this :
> > query = Foo.all().filter("property_x >", 50).order("property_x").order
> > ("-timestamp")
> > results = query.fetch(200)
>
> > ..get the latest entities where property_x > 50 ?  Or will it get the
> > 200 properties with the largest 'property_x'  which are then ordered
> > by 'timestamp' ?   A subtle but important difference.
>
> It will get the 200 entities with the smallest property_x greater than 50
> (since you're filtering >50 and ordering first by property_x). If two
> entities have the same value for property_x, they will be sorted by
> timestamp, descending.
>
> If you need the latest, and your threshold of 50 is a constant, you can add
> a BooleanProperty to your entity group encoding the condition 'is greater
> than 50', and filter on that using an equality filter.
>
> -Nick Johnson
>
>
>
>
>
> > As I said I need to make sure I get the latest entities.
>
> > On Jun 22, 11:33 pm, Tony  wrote:
> > > Yes, that is what it means.  I forgot about that restriction.
>
> > > I see what you mean about changing 'x' values.  Perhaps consider
> > > keeping two counts - a running sum and a running count (of the # of x
> > > properties).  If a user modifies an 'x' value, you can adjust the sum
> > > up or down accordingly.
>
> > > On Jun 22, 5:40 pm, herbie <4whi...@o2.co.uk> wrote:
>
> > > > I tried your query below but I get "BadArgumentError: First ordering
> > > > property must be the same as inequality filter property, if specified
> > > > for this query;"
> > > > Does this mean I have to order on 'x' first, then order on 'date'?
> > > > Will this still return the latest 200 of all entities with x > 50 if
> > > > I  call query.fetch(200)?
>
> > > > I take your and Nick's point about keeping a 'running average'.   But in
> > > > my example the user can change the 'x' value so the average has to be
> > > > recalculated from the latest entities.
>
> > > > On Jun 22, 9:46 pm, Tony  wrote:
>
> > > > > You could accomplish this task like so:
>
> > > > > xlist = []
> > > > > query = Foo.all().filter("property_x >", 50).order("-timestamp")
> > > > > for q in query:
> > > > >   xlist.append(q.property_x)
> > > > > avg = sum(xlist) / len(xlist)
>
> > > > > What Nick is saying, I think, is that fetching 1000 entities is going
> > > > > to be very resource-intensive, so a better way to do it is to
> > > > > calculate this data at write-time instead of read-time.  For example,
> > > > > every time you add an entity, you could update a separate entity that
> > > > > has a property like "average = db.FloatProperty()" with the current
> > > > > average, and then you could simply fetch that entity and get the
> > > > > current running average.
>
> > > > > On Jun 22, 4:25 pm, herbie <4whi...@o2.co.uk> wrote:
>
> > > > > > Ok. Say I have many (>1000)  Model entities with two properties 'x'
> > > > > > and 'date'.    What is the most efficient query to fetch say the
> > > > > > latest 200 entities  where x > 50.   I don't care what their
> > 'date's
> > > > > > are as long as I get the latest and x > 50
>
> > > > > > Thanks again for your help.
>
> > > > > > On Jun 22, 4:11 pm, "Nick Johnson (Google)" <
> > nick.john...@google.com>
> > > > > > wrote:
>
> > > > > > > Consider precalculating this data and storing it against another
> > entity.
> > > > > > > This will save a lot of work on requests.
>
> > > > > > > -Nick Johnson
>
> > > > > > > On Mon, Jun 22, 2009 at 3:55 PM, herbie <4whi...@o2.co.uk>
> > wrote:
>
> > > > > > > > No the users won't need to read 1000 entities, but I want to
> > calculate
> > > > > > > > the average of a  property from the latest 1000 entities.
>
> > > > > > > > On Jun 22, 3:30 pm, "Nick Johnson (Google)" <
> > nick.john...@google.com>
> > > > > > > > wrote:
> > > > > > > > > Correct. Are you sure you need 1000 entities, though? Your
> > users probably
> > > > > > > > > won't read through all 1000.
>
> > > > > > > > > -Nick Johnson
>
> > > > > > > > > On Mon, Jun 22, 2009 at 3:23 PM, herbie <4whi...@o2.co.uk>
> > wrote:
>
> > > > > > > > > > So to be sure to get the latest 1000 entities I should add
> > a datetime
> > > > > > > > > > property to my entity model and filter and sort on that?
>
> > > > > > > > > > On Jun 22, 1:42 pm, herbie <4whi...@o2.co.uk> wrote:
> > > > > > > > > > > I know that if there are more than 1000 entities that
> > match a query,
> > > > > > > > > > > then only 1000 will be returned by fetch().  But my
> > question is which
> > > > > > > > > > > 1

[google-appengine] Re: Odd memcache behavior across multiple app instances

2009-06-23 Thread Kim Riber

Hi Nick

I run the test in 3 steps (with half a minute in between):
1. Heavy load process to spawn app instances
2. Write a lot of random values to the same key
3. Read the key from multiple threads.
I can repeat the 3rd step and still get the same result (mostly 2
different values)

It seems like I hit 2 different memcache servers with different views
of what is in the cache at that key

> Also, are you running this against the dev_appserver, or in production?
How do I see that?
We are running towards .appspot.com

-Kim

On Jun 23, 12:39 pm, "Nick Johnson (Google)" 
wrote:
> Hi Kim,
>
> It's not clear from your description exactly how you're performing your
> tests. Without extra information, the most likely explanation would be that
> you're seeing a race condition in your code, where the key is modified
> between subsequent requests to the memcache API.
>
> Also, are you running this against the dev_appserver, or in production?
>
> -Nick Johnson
>
>
>
> On Tue, Jun 23, 2009 at 7:18 AM, Kim Riber  wrote:
>
> > Just made another test, to confirm the behavior I see.
> > This example is much simpler, and simply has 10 threads writing random
> > values to memcache to the same key.
> > I would expect the last value written to be the one left in memcache.
> > When afterwards, having 4 threads reading 10 times from that same key,
> > they return 2 different values.
> > This only happens if I prior to the writing threads, run some heavy
> > tasks, to force gae to spawn more app instances.
> > It seems like each server cluster might have its own memcache,
> > independent from each other. I hope this is not true. From a thread
> > from Ryan
>
> >http://groups.google.com/group/google-appengine/browse_thread/thread/...
> > he states that
>
> > >as for the datastore, and all other current stored data APIs like
> > >memcache, there is a single, global view of data. we go to great
> > >lengths to ensure that these APIs are strongly consistent.
>
> > Regards
> > Kim
>
> > On Jun 17, 8:51 pm, Kim Riber  wrote:
> > > To clarify a bit:
>
> > > one thread from our server runs one loop with a unique id.
> > > each request stores a value in memcache and returns that value. In
> > > the following request, the memcache is queried if the value just
> > > written is in the cache.
> > > This sometimes fails.
>
> > > My fear is that it is due to the requests changing to another app
> > > instance and then suddenly getting wrong data.
>
> > > instance 1 +  +
> > > instance 2      --
>
> > > Hope this clears up the example above a bit
>
> > > Cheers
> > > Kim
>
> > > On Jun 17, 7:52 pm, Kim Riber  wrote:
>
> > > > Hi,
> > > > I'm experiencing some rather strange behavior from memcache. I think
> > > > I'm getting different data back from memcache using the same key
> > > > The issue I see is that when putting load on our application, even
> > > > simple memcache queries are starting to return inconsistent data. When
> > > > running the same request from multiple threads, I get different
> > > > results.
> > > > I've made a very simple example, that runs fine on 1-200 threads, but
> > > > if I put load on the app (with some heavier requests) just before I
> > > > run my test, I see different values coming back from memcache using
> > > > the same keys.
>
> > > > def get_new_memcache_value(key, old_value):
> > > >     old_val = memcache.get(key)
> > > >     new_val = uuid.uuid4().get_hex()
> > > >     reply = 'good'
> > > >     if old_val and old_value != "":
> > > >         if old_val != old_value:
> > > >             reply = 'fail'
> > > >             new_val = old_value
> > > >         else:
> > > >             if not memcache.set(key, new_val):
> > > >                 reply = 'set_fail'
> > > >     else:
> > > >         reply = 'new'
> > > >         if not memcache.set(key, new_val):
> > > >             reply = 'set_fail'
> > > >     return (new_val, reply)
>
> > > > and from a server posting requests:
>
> > > > def request_loop(id):
> > > >     key = "test:key_%d" % id
> > > >     val, reply = get_new_memcache_value(key, "")
> > > >     for i in range(20):
> > > >         val, reply = get_new_memcache_value(key, val)
>
> > > > Is memcache working locally on a cluster of servers, and if an
> > > > application is spawned over more clusters, memcache will not
> > > > propagate data to the other clusters?
>
> > > > I hope someone can clarify this, since I can't find any post regarding
> > > > this issue.
>
> > > > Is there some way to get the application instance ID, so I can do some
> > > > more investigation on the subject?
>
> > > > Thanks
> > > > Kim
>
> --
> Nick Johnson, App Engine Developer Programs Engineer
> Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
> 368047

[google-appengine] Re: Query with >1000 matches

2009-06-23 Thread Nick Johnson (Google)
Hi herbie,

If your query includes an inequality (such as x>50), then your first sort
order has to be on the same property as that inequality, which means you
can't (directly) fetch the most recent 200 results with x>50. You either
need to change your query to use only equality filters, or you need to fetch
extra results, then sort them in memory and only take the most recent ones.

-Nick Johnson
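
A sketch of the second option (fetch extra results, then sort in memory);
the model is a stand-in matching the names used in this thread, and the
overfetch size is a guess you would have to tune:

from google.appengine.ext import db

class Foo(db.Model):  # stand-in model, not herbie's actual schema
    property_x = db.IntegerProperty()
    timestamp = db.DateTimeProperty(auto_now_add=True)

OVERFETCH = 1000  # the 1000-result cap makes anything beyond this invisible

candidates = Foo.all().filter('property_x >', 50) \
                      .order('property_x').fetch(OVERFETCH)
# Re-sort the batch by recency in memory and keep the newest 200.
candidates.sort(key=lambda f: f.timestamp, reverse=True)
latest = candidates[:200]
# Caveat: if more than OVERFETCH entities match the inequality, some of
# the truly newest ones may never be fetched at all.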

On Tue, Jun 23, 2009 at 1:44 PM, herbie <4whi...@o2.co.uk> wrote:

>
> Thanks for your help Nick.
>
> No, my threshold value 'x' isn't constant.   I still haven't got my head
> round this yet!   Can you tell me how to get the latest entities
> (assuming I don't want all of them)   out of the datastore  and filter
> on another property?
>
> For example:  Get the latest 200 entities  where x > 50.   I don't
> care what their 'date's are as long as I get the latest and x > 50.
>
>
> On Jun 23, 1:16 pm, "Nick Johnson (Google)" 
> wrote:
> > On Tue, Jun 23, 2009 at 12:42 PM, herbie <4whi...@o2.co.uk> wrote:
> >
> > > So will this :
> > > query = Foo.all().filter("property_x >", 50).order("property_x").order
> > > ("-timestamp")
> > > results = query.fetch(200)
> >
> > > ..get the latest entities where property_x > 50 ?  Or will it get the
> > > 200 properties with the largest 'property_x'  which are then ordered
> > > by 'timestamp' ?   A subtle but important difference.
> >
> > It will get the 200 entities with the smallest property_x greater than 50
> > (since you're filtering >50 and ordering first by property_x). If two
> > entities have the same value for property_x, they will be sorted by
> > timestamp, descending.
> >
> > If you need the latest, and your threshold of 50 is a constant, you can
> add
> > a BooleanProperty to your entity group encoding the condition 'is greater
> > than 50', and filter on that using an equality filter.
> >
> > -Nick Johnson
> >
> >
> >
> >
> >
> > > As I said I need to make sure I get the latest entities.
> >
> > > On Jun 22, 11:33 pm, Tony  wrote:
> > > > Yes, that is what it means.  I forgot about that restriction.
> >
> > > > I see what you mean about changing 'x' values.  Perhaps consider
> > > > keeping two counts - a running sum and a running count (of the # of x
> > > > properties).  If a user modifies an 'x' value, you can adjust the sum
> > > > up or down accordingly.
> >
> > > > On Jun 22, 5:40 pm, herbie <4whi...@o2.co.uk> wrote:
> >
> > > > > I tried your query below but I get "BadArgumentError: First
> ordering
> > > > > property must be the same as inequality filter property, if
> specified
> > > > > for this query;"
> > > > > Does this mean I have to order on 'x' first, then order on 'date'?
> > > > > Will this still return the latest 200 of all entities with x > 50
> if
> > > > > I  call query.fetch(200)?
> >
> > > > > I take your and Nick's point about keeping a 'running average'.   But
> in
> > > > > my example the user can change the 'x' value so the average has to
> be
> > > > > recalculated from the latest entities.
> >
> > > > > On Jun 22, 9:46 pm, Tony  wrote:
> >
> > > > > > You could accomplish this task like so:
> >
> > > > > > xlist = []
> > > > > > query = Foo.all().filter("property_x >", 50).order("-timestamp")
> > > > > > for q in query:
> > > > > >   xlist.append(q.property_x)
> > > > > > avg = sum(xlist) / len(xlist)
> >
> > > > > > What Nick is saying, I think, is that fetching 1000 entities is
> going
> > > > > > to be very resource-intensive, so a better way to do it is to
> > > > > > calculate this data at write-time instead of read-time.  For
> example,
> > > > > > every time you add an entity, you could update a separate entity
> that
> > > > > > has a property like "average = db.FloatProperty()" with the
> current
> > > > > > average, and then you could simply fetch that entity and get the
> > > > > > current running average.
> >
> > > > > > On Jun 22, 4:25 pm, herbie <4whi...@o2.co.uk> wrote:
> >
> > > > > > > Ok. Say I have many (>1000)  Model entities with two properties
> 'x'
> > > > > > > and 'date'. What is the most efficient query to fetch say
> the
> > > > > > > latest 200 entities  where x > 50.   I don't care what their
> > > 'date's
> > > > > > > are as long as I get the latest and x > 50
> >
> > > > > > > Thanks again for your help.
> >
> > > > > > > On Jun 22, 4:11 pm, "Nick Johnson (Google)" <
> > > nick.john...@google.com>
> > > > > > > wrote:
> >
> > > > > > > > Consider precalculating this data and storing it against
> another
> > > entity.
> > > > > > > > This will save a lot of work on requests.
> >
> > > > > > > > -Nick Johnson
> >
> > > > > > > > On Mon, Jun 22, 2009 at 3:55 PM, herbie <4whi...@o2.co.uk>
> > > wrote:
> >
> > > > > > > > > No the users won't need to read 1000 entities, but I want
> to
> > > calculate
> > > > > > > > > the average of a  property from the latest 1000 entities.
> >
> > > > > > > > > On Jun 22, 3:30 pm, "Nick Johnson (Google)" <
> > > nick.john...@google.com>
> > > > > > > > > wrote:
> > > > > > > > 

[google-appengine] Re: Odd memcache behavior across multiple app instances

2009-06-23 Thread Nick Johnson (Google)
Hi Kim,

Are you able to send us the code you use for step 3? And are you certain
nothing is changing the memcache concurrently with step 3?

On Tue, Jun 23, 2009 at 1:44 PM, Kim Riber  wrote:

>
> Hi Nick
>
> I run the test in 3 steps (with half a minute in between):
> 1. Heavy load process to spawn app instances
> 2. Write a lot of random values to the same key
> 3. Read the key from multiple threads.
> I can repeat the 3rd step and still get the same result (mostly 2
> different values)
>
> It seems like I hit 2 different memcache servers with different views
> of what is in the cache at that key
>
> > Also, are you running this against the dev_appserver, or in production?
> How do I see that?
> We are running towards .appspot.com


If you are testing against the local development server, that's the
dev_appserver. If you're testing code running on appspot.com, that's
production.

-Nick Johnson


> 
>
> -Kim
>
> On Jun 23, 12:39 pm, "Nick Johnson (Google)" 
> wrote:
> > Hi Kim,
> >
> > It's not clear from your description exactly how you're performing your
> > tests. Without extra information, the most likely explanation would be
> that
> > you're seeing a race condition in your code, where the key is modified
> > between subsequent requests to the memcache API.
> >
> > Also, are you running this against the dev_appserver, or in production?
> >
> > -Nick Johnson
> >
> >
> >
> > On Tue, Jun 23, 2009 at 7:18 AM, Kim Riber 
> wrote:
> >
> > > Just made another test, to confirm the behavior I see.
> > > This example is much simpler, and simply has 10 threads writing random
> > > values to memcache to the same key.
> > > I would expect the last value written to be the one left in memcache.
> > > When afterwards, having 4 threads reading 10 times from that same key,
> > > they return 2 different values.
> > > This only happens if I prior to the writing threads, run some heavy
> > > tasks, to force gae to spawn more app instances.
> > > It seems like each server cluster might have its own memcache,
> > > independent from each other. I hope this is not true. From a thread
> > > from Ryan
> >
> > >http://groups.google.com/group/google-appengine/browse_thread/thread/.
> ..
> > > he states that
> >
> > > >as for the datastore, and all other current stored data APIs like
> > > >memcache, there is a single, global view of data. we go to great
> > > >lengths to ensure that these APIs are strongly consistent.
> >
> > > Regards
> > > Kim
> >
> > > On Jun 17, 8:51 pm, Kim Riber  wrote:
> > > > To clarify a bit:
> >
> > > > one thread from our server runs one loop with a unique id.
> > > > each request stores a value in memcache and returns that value. In
> > > > the following request, the memcache is queried if the value just
> > > > written is in the cache.
> > > > This sometimes fails.
> >
> > > > My fear is that it is due to the requests changing to another app
> > > > instance and then suddenly getting wrong data.
> >
> > > > instance 1 +  +
> > > > instance 2  --
> >
> > > > Hope this clears up the example above a bit
> >
> > > > Cheers
> > > > Kim
> >
> > > > On Jun 17, 7:52 pm, Kim Riber  wrote:
> >
> > > > > Hi,
> > > > > I'm experiencing some rather strange behavior from memcache. I
> think
> > > > > I'm getting different data back from memcache using the same key
> > > > > The issue I see is that when putting load on our application, even
> > > > > simple memcache queries are starting to return inconsistent data.
> When
> > > > > running the same request from multiple threads, I get different
> > > > > results.
> > > > > I've made a very simple example, that runs fine on 1-200 threads,
> but
> > > > > if I put load on the app (with some heavier requests) just before I
> > > > > run my test, I see different values coming back from memcache using
> > > > > the same keys.
> >
> > > > > def get_new_memcache_value(key, old_value):
> > > > >     old_val = memcache.get(key)
> > > > >     new_val = uuid.uuid4().get_hex()
> > > > >     reply = 'good'
> > > > >     if old_val and old_value != "":
> > > > >         if old_val != old_value:
> > > > >             reply = 'fail'
> > > > >             new_val = old_value
> > > > >         else:
> > > > >             if not memcache.set(key, new_val):
> > > > >                 reply = 'set_fail'
> > > > >     else:
> > > > >         reply = 'new'
> > > > >         if not memcache.set(key, new_val):
> > > > >             reply = 'set_fail'
> > > > >     return (new_val, reply)
> >
> > > > > and from a server posting requests:
> >
> > > > > def request_loop(id):
> > > > >     key = "test:key_%d" % id
> > > > >     val, reply = get_new_memcache_value(key, "")
> > > > >     for i in range(20):
> > > > >         val, reply = get_new_memcache_value(key, val)
> >
> > > > > Is memcache working locally on a cluster of servers, and if an
> > > > > application is spawned over more clusters, memcache will not
> > > > > propagate data to the other clusters?
> >

[google-appengine] Re: download_data: How do we deal with blobs?

2009-06-23 Thread GenghisOne

Hi Jonathan

Here's a sample that might provide some value to you...

http://code.google.com/p/google-app-engine-samples/source/browse/trunk/image_sharing

Cheers

On Jun 23, 5:40 am, Jonathan Feinberg  wrote:
> > It's not clear what you're asking. On the way out of what?
>
> On the way out of App Engine. :)
>
> Is there a preferred "correct" way to encode blobs in an exporter?



[google-appengine] Re: download_data: How do we deal with blobs?

2009-06-23 Thread Jonathan Feinberg

On Jun 23, 8:13 am, "Nick Johnson (Google)" 
wrote:

> No, just whatever suits you best. You can write them in binary, or to
> separate files, if you want.

In reading the documentation for

   appcfg.py download_data

, which is what I'm talking about here, I don't see any sample code or
documentation that describes how to do what you're suggesting. Could
you give me a pointer? Here's the documentation I've seen:
  
http://code.google.com/appengine/docs/python/tools/uploadingdata.html#Downloading_Data_from_App_Engine

Thanks.



[google-appengine] Re: download_data: How do we deal with blobs?

2009-06-23 Thread Jonathan Feinberg

On Jun 23, 8:57 am, GenghisOne  wrote:

> Here's a sample that might provide some value to you...
>
> http://code.google.com/p/google-app-engine-samples/source/browse/trun...

I couldn't find any reference to download_data there; which file
defines the exporter?



[google-appengine] Re: Task Queue API Users

2009-06-23 Thread hawkett

Hi Nick,

  Bug filed - http://code.google.com/p/googleappengine/issues/detail?id=1751

> I'm not sure I see the problem - what user would you expect to see listed
> when a webhook is being called by the cron or task queue system?

The problem is that the handler code needs to have an understanding of
the particular calling client.  This tightly couples the handler code
to the calling mechanism.  It totally wrecks the idea that the protocol
should allow loose coupling of the two end points.  From my
perspective, that's bad architecture.  If I explicitly say I need a
user (admin or otherwise) to access a URI, then the system should make
sure that URI is not accessed unless there is a user.  Once you start
introducing edge cases - 'It's true unless this, or unless that', the
platform becomes 'clunky'. app.yaml is an interface contract, and
currently async invocation breaks that contract. That contract is far more
important than one client's (GAE system) difficulty (which user?)
conforming to it.  My 2c anyway.  Thanks,

Colin
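
To illustrate the behavior Nick describes below, a handler behind 'login:
admin' can tell the two callers apart by the absence of a user (a sketch;
the handler class and helper methods are hypothetical, and it plugs into
the usual webapp wiring):

from google.appengine.api import users
from google.appengine.ext import webapp

class WriteBehindHandler(webapp.RequestHandler):
    def post(self):
        user = users.get_current_user()
        if user is None:
            # No user object: with 'login: admin' in app.yaml, the request
            # can only have come from the system itself (cron or the task
            # queue), which is also why is_current_user_admin() is False.
            self.handle_system_call()
        else:
            # A real administrator signed in through a browser.
            self.handle_admin_call(user)

    def handle_system_call(self):
        pass  # placeholder for the actual task work

    def handle_admin_call(self, user):
        pass  # placeholder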

On Jun 23, 10:46 am, "Nick Johnson (Google)" 
wrote:
> Hi hawkett,
>
> The bug you found earlier, with Task Queue accesses returning 302s instead
> of executing correctly, is definitely a bug in the dev_appserver. Can you
> please file a bug on the issue tracker?
>
>
>
> On Mon, Jun 22, 2009 at 11:18 PM, hawkett  wrote:
>
> > Hi,
>
> >   I've deployed an app to do some tests on live app engine, and the
> > following code
>
> > currentUser = users.get_current_user()
> > if currentUser is not None:
> >   logging.info("Current User - ID: %s, email: %s, nickname: %s" %
> > (currentUser.user_id(), currentUser.email(), currentUser.nickname()))
>
> > logging.info("is admin? %s" % users.is_current_user_admin())
>
> > yields:  'is admin? False'
>
> > as the total log output.  This is code that is run directly from a
> > handler in app.yaml that specified - 'login:admin'
>
> > This represents a pretty big problem - it means you can't rely on
> > 'login:admin' to produce a user that is an admin.
>
> On the contrary - only administrators and the system itself (eg, cron and
> task queue services) will be able to access "login: admin" handlers.
> However, when access is by a service, no user is specified, so
> "is_current_user_admin()" will naturally return False, not because it's not
> an admin access, but because there's no current user.
>
> > I'm guessing that
> > the goal of the Task Queue API is to be usable on generic URLs - e.g.
> > in a RESTful application, the full CRUD (and more) functionality is
> > exposed via a dynamic set of URL's that more than likely are not
> > specifically for the Task Queue API - however the above situation
> > means you really have to code explicitly for the Task Queue API,
> > because the meaning of the directives in app.yaml is not reliable.  It
> > looks like cron functionality works like this as well, and that has
> > been around for a while.  Use cases such as write-behind outlined in
> > Brett's IO talk are significantly limited by being unable to predict
> > whether you will get a user or not (especially if you intend to hit
> > RESTful URI that could just as easily be hit by real users).  Sure,
> > there are ways to code around it, but it's not pretty.
>
> I'm not sure I see the problem - what user would you expect to see listed
> when a webhook is being called by the cron or task queue system?
>
> -Nick Johnson
>
>
>
>
>
> > I've added a defect to the issue tracker here -
> >http://code.google.com/p/googleappengine/issues/detail?id=1742
>
> > I'm keen to understand how google sees this situation, and whether the
> > current situation is here to stay, or something short term to deliver
> > the functionality early.  Cheers,
>
> > Colin
>
> > On Jun 22, 4:31 pm, "Nick Johnson (Google)" 
> > wrote:
> > > Hi hawkett,
>
> > > My mistake. This sounds like a bug in the SDK - can you please file a
> > bug?
>
> > > -Nick Johnson
>
> > > On Mon, Jun 22, 2009 at 4:25 PM, hawkett  wrote:
>
> > > > Hi Nick,
>
> > > > In my SDK (just the normal mac download), I can inspect the queue in
> > > > admin console, and have a 'run' and 'delete' button next to each task
> > > > in the queue.  When I press 'run', the task fires, my server receives
> > > > the request, and returns the 302.
>
> > > > Colin
>
> > > > On Jun 22, 4:15 pm, "Nick Johnson (Google)" 
> > > > wrote:
> > > > > Hi hawkett,
>
> > > > > In the current release of the SDK, the Task Queue stub simply logs
> > tasks
> > > > to
> > > > > be executed, and doesn't actually execute them. How are you executing
> > > > these
> > > > > tasks?
>
> > > > > -Nick Johnson
>
> > > > > On Mon, Jun 22, 2009 at 3:46 PM, hawkett  wrote:
>
> > > > > > Hi,
>
> > > > > >   I'm running into some issues trying to use the Task Queue API
> > with
> > > > > > restricted access URL's defined in app.yaml - when a URL is defined
> > as
> > > > > > either 'login: admin' or 'login: required', when the task fires it
> > is
> > > > > > receiving a 302 -

[google-appengine] Re: Getting copies of all sent mail

2009-06-23 Thread anjs

Is it a limitation from Google that it sends the email to the sender
in all cases?

The Google doc says it sends email to the sender only in the case of
error and bounce messages:

http://code.google.com/appengine/docs/python/mail/overview.html

But the thread discussion above mentions that this is intended
behavior and the email will be sent in all cases.

Can anybody please confirm? We don't want the sender to receive these
emails. Is this possible?

thanks

On Jun 7 2008, 5:05 am, Brian Hartvigsen  wrote:
> Sorry apparently I'm majorly dense today ;-)  I swear I read that.
> Looks like I'll be setting up a filter in Gmail.
>
> -- Brian
>
> On Jun 6, 4:07 pm, "Daniel O'Brien"  wrote:
>
>
>
> > Hi Brian,
>
> > This is the intended behavior, and discussed in the Mail API 
> > Overview:http://code.google.com/appengine/docs/mail/overview.html
>
> > Daniel
>
> > On Jun 6, 10:20 am, Brian Hartvigsen  wrote:
>
> > > So I'm sending out a validation link to users when they submit a new
> > > email address to my app.  However every time an email is sent I'm
> > > getting a copy in my email as well.  This happens even if the address
> > > is valid and the message was properly delivered,
>
> > > The headers of the message sent to the valid email and them message I
> > > receive are almost identical, but not quite..
>
> > > Original Message, properly delivered:
>
> > > Delivered-To: brian.and...@brianandjenny.com
> > > Received: by 10.90.35.15 with SMTP id i15cs27574agi;
> > >         Thu, 5 Jun 2008 17:24:28 -0700 (PDT)
> > > Received: by 10.100.154.19 with SMTP id b19mr3584989ane.
> > > 115.1212711867929;
> > >         Thu, 05 Jun 2008 17:24:27 -0700 (PDT)
> > > Return-Path:
> > > <3u4nisayjdm8eczd8317v36.x97wc3v8.v8yczhwc3v8v8y4z88j@apphosting.bounces.google.com>
> > > Received: from an-out-0910.google.com (an-out-0910.google.com
> > > [209.85.132.190])
> > >         by mx.google.com with ESMTP id d19si8287797and.
> > > 17.2008.06.05.17.24.27;
> > >         Thu, 05 Jun 2008 17:24:27 -0700 (PDT)
> > > Received-SPF: pass (google.com: domain of
> > > 3u4nisayjdm8eczd8317v36.x97wc3v8.v8yczhwc3v8v8y4z88j@apphosting.bounces.google.com
> > > designates 209.85.132.190 as permitted sender) client-
> > > ip=209.85.132.190;
> > > Authentication-Results: mx.google.com; spf=pass (google.com: domain of
> > > 3u4nisayjdm8eczd8317v36.x97wc3v8.v8yczhwc3v8v8y4z88j@apphosting.bounces.google.com
> > > designates 209.85.132.190 as permitted sender)
> > > smtp.mail=3u4nisayjdm8eczd8317v36.x97wc3v8.v8yczhwc3v8v8y4z88j@apphosting.bounces.google.com
> > > Received: by an-out-0910.google.com with SMTP id c35so4004851ana.3
> > >         for ; Thu, 05 Jun 2008
> > > 17:24:27 -0700 (PDT)
> > > MIME-Version: 1.0
> > > Received: by 10.90.79.12 with SMTP id c12mr1313031agb.
> > > 29.1212711867576; Thu,
> > >         05 Jun 2008 17:24:27 -0700 (PDT)
> > > Message-ID: <0016361e886814f06e044ef47...@google.com>
> > > Date: Thu, 05 Jun 2008 17:24:27 -0700
> > > Subject: Confirm email address
> > > From: tre...@gmail.com
> > > To: brian.and...@brianandjenny.com
> > > Content-Type: text/plain; charset=ISO-8859-1; Format=Flowed
> > > Content-Transfer-Encoding: 7bit
>
> > > Copy received at developer account:
>
> > > Delivered-To: tre...@gmail.com
> > > Received: by 10.151.10.15 with SMTP id n15cs34681ybi;
> > >         Thu, 5 Jun 2008 17:24:27 -0700 (PDT)
> > > Received: by 10.90.93.13 with SMTP id q13mr2366598agb.
> > > 78.1212711867663;
> > >         Thu, 05 Jun 2008 17:24:27 -0700 (PDT)
> > > Return-Path:
> > > <3u4nisayjbs8eczd8317v36.x97eczd8317v36@apphosting.bounces.google.com>
> > > Received: from an-out-0910.google.com (an-out-0910.google.com
> > > [209.85.132.184])
> > >         by mx.google.com with ESMTP id 36si4223174aga.
> > > 18.2008.06.05.17.24.27;
> > >         Thu, 05 Jun 2008 17:24:27 -0700 (PDT)
> > > Received-SPF: pass (google.com: domain of
> > > 3u4nisayjbs8eczd8317v36.x97eczd8317v36@apphosting.bounces.google.com
> > > designates 209.85.132.184 as permitted sender) client-
> > > ip=209.85.132.184;
> > > Authentication-Results: mx.google.com; spf=pass (google.com: domain of
> > > 3u4nisayjbs8eczd8317v36.x97eczd8317v36@apphosting.bounces.google.com
> > > designates 209.85.132.184 as permitted sender)
> > > smtp.mail=3u4nisayjbs8eczd8317v36.x97eczd8317v36@apphosting.bounces.google.com
> > > Received: by an-out-0910.google.com with SMTP id c25so3855286anc.6
> > >         for ; Thu, 05 Jun 2008 17:24:27 -0700 (PDT)
> > > MIME-Version: 1.0
> > > Received: by 10.90.79.12 with SMTP id c12mr1313031agb.
> > > 29.1212711867576; Thu,
> > >         05 Jun 2008 17:24:27 -0700 (PDT)
> > > Message-ID: <0016361e886814f06e044ef47...@google.com>
> > > Date: Thu, 05 Jun 2008 17:24:27 -0700
> > > Subject: Confirm email address
> > > From: tre...@gmail.com
> > > To: brian.and...@brianandjenny.com
> > > Content-Type: text/plain; charset=ISO-8859-1; Format=Flowed
> > > Content-Transfer-Encoding: 7bit
>
> > > The sending code is pre

[google-appengine] Re: download_data: How do we deal with blobs?

2009-06-23 Thread GenghisOne

The logic for pulling an image out of the datastore is stored in the
image_sharing.py file


class ImageSharingServeImage(webapp.RequestHandler):
  """Handler for dynamically serving an image from the datastore.

  Very simple - it just pulls the appropriate data out of the datastore
  and serves it.
  """

  def get(self, display_type, pic_key):
    """Dynamically serves a PNG image from the datastore.

    Args:
      display_type: a string describing the type of image to serve
        (image or thumbnail)
      pic_key: the key for a Picture model that holds the image
    """
    image = db.get(pic_key)

    if display_type == 'image':
      self.response.headers['Content-Type'] = 'image/png'
      self.response.out.write(image.data)
    elif display_type == 'thumbnail':
      self.response.headers['Content-Type'] = 'image/png'
      self.response.out.write(image.thumbnail_data)
    else:
      self.error(500)
      self.response.out.write(
          'Couldn\'t determine what type of image to serve.')



On Jun 23, 7:02 am, Jonathan Feinberg  wrote:
> On Jun 23, 8:57 am, GenghisOne  wrote:
>
> > Here's a sample that might provide some value to you...
>
> >http://code.google.com/p/google-app-engine-samples/source/browse/trun...
>
> I couldn't find any reference to download_data there; which file
> defines the exporter?



[google-appengine] Re: download_data: How do we deal with blobs?

2009-06-23 Thread Jonathan Feinberg

On Jun 23, 9:13 am, GenghisOne  wrote:

>   def get(self, display_type, pic_key):

I'm afraid I still don't see how this addresses my question about
download_data.

  
http://code.google.com/appengine/docs/python/tools/uploadingdata.html#Downloading_Data_from_App_Engine



[google-appengine] Re: download_data: How do we deal with blobs?

2009-06-23 Thread Nick Johnson (Google)
On Tue, Jun 23, 2009 at 2:01 PM, Jonathan Feinberg wrote:

>
> On Jun 23, 8:13 am, "Nick Johnson (Google)" 
> wrote:
>
> > No, just whatever suits you best. You can write them in binary, or to
> > separate files, if you want.
>
> In reading the documentation for
>
>   appcfg.py download_data
>
> , which is what I'm talking about here, I don't see any sample code or
> documentation that describes how to do what you're suggesting. Could
> you give me a pointer? Here's the documentation I've seen:
>
> http://code.google.com/appengine/docs/python/tools/uploadingdata.html#Downloading_Data_from_App_Engine


Check out the source code for the Exporter in the bulkloader:
http://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/tools/bulkloader.py#3308.
The docstrings describe how to define your own exporter class that
stores data however you want.

-Nick Johnson


>
> Thanks.


-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Odd memcache behavior across multiple app instances

2009-06-23 Thread Kim Riber

Yes. There is no other activity when running step 3.
The two values I get back are some of the last ones I write to the
cache (I return all the values I write).
So it looks a lot like the write threads are writing to two instances
of memcache, and when I start reading back, I see the 2 different
values from the 2 instances.
After waiting for an hour or so, running step 3 returns only one
value again.
If I run the heavy load task (and spawn more instances), running step
3 will start reporting the 2 values again. It's as if the cluster with
the second memcache instance is getting fired up and reporting its old
value.
The code for step 3 is something like this:

def read_memcache():
    val = memcache.get('my_test_key')
    return val

I could try to set up an application with the 3 steps.

Cheers
Kim
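
Something like this pair of handlers would cover steps 2 and 3 (a sketch
using the classic webapp pattern; the routes and the key name are just
for illustration):

import uuid

from google.appengine.api import memcache
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

KEY = 'my_test_key'  # illustrative key name

class WriteHandler(webapp.RequestHandler):
    def get(self):
        # Step 2: overwrite the key with a fresh random value.
        value = uuid.uuid4().get_hex()
        memcache.set(KEY, value)
        self.response.out.write(value)

class ReadHandler(webapp.RequestHandler):
    def get(self):
        # Step 3: read the key back; clients compare the values they see.
        self.response.out.write(memcache.get(KEY) or 'MISS')

application = webapp.WSGIApplication([('/write', WriteHandler),
                                      ('/read', ReadHandler)])

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()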

On Jun 23, 2:52 pm, "Nick Johnson (Google)" 
wrote:
> Hi Kim,
>
> Are you able to send us the code you use for step 3? And are you certain
> nothing is changing the memcache concurrently with step 3?
>
>
>
> On Tue, Jun 23, 2009 at 1:44 PM, Kim Riber  wrote:
>
> > Hi Nick
>
> > I run the test in 3 steps (with half a minute in between):
> > 1. Heavy load process to spawn app instances
> > 2. Write a lot of random values to the same key
> > 3. Read the key from multiple threads.
> > I can repeat the 3rd step and still get the same result (mostly 2
> > different values)
>
> > It seems like I hit 2 different memcache servers with different views
> > of what is in the cache at that key
>
> > > Also, are you running this against the dev_appserver, or in production?
> > How do I see that?
> > We are running towards .appspot.com
>
> If you are testing against the local development server, that's the
> dev_appserver. If you're testing code running on appspot.com, that's
> production.
>
> -Nick Johnson
>
>
>
> > 
>
> > -Kim
>
> > On Jun 23, 12:39 pm, "Nick Johnson (Google)" 
> > wrote:
> > > Hi Kim,
>
> > > It's not clear from your description exactly how you're performing your
> > > tests. Without extra information, the most likely explanation would be
> > that
> > > you're seeing a race condition in your code, where the key is modified
> > > between subsequent requests to the memcache API.
>
> > > Also, are you running this against the dev_appserver, or in production?
>
> > > -Nick Johnson
>
> > > On Tue, Jun 23, 2009 at 7:18 AM, Kim Riber 
> > wrote:
>
> > > > Just made another test, to confirm the behavior I see.
> > > > This example is much simpler, and simply has 10 threads writing random
> > > > values to memcache to the same key.
> > > > I would expect the last value written to be the one left in memcache.
> > > > When afterwards, having 4 threads reading 10 times from that same key,
> > > > they return 2 different values.
> > > > This only happens if I prior to the writing threads, run some heavy
> > > > tasks, to force gae to spawn more app instances.
> > > > It seems like each server cluster might have its own memcache,
> > > > independent from each other. I hope this is not true. From a thread
> > > > from Ryan
>
> > > >http://groups.google.com/group/google-appengine/browse_thread/thread/.
> > ..
> > > > he states that
>
> > > > >as for the datastore, and all other current stored data APIs like
> > > > >memcache, there is a single, global view of data. we go to great
> > > > >lengths to ensure that these APIs are strongly consistent.
>
> > > > Regards
> > > > Kim
>
> > > > On Jun 17, 8:51 pm, Kim Riber  wrote:
> > > > > To clarify a bit:
>
> > > > > one thread from our server runs one loop with a unique id.
> > > > > each request stores a value in memcache and returns that value. In
> > > > > the following request, the memcache is queried if the value just
> > > > > written is in the cache.
> > > > > This sometimes fails.
>
> > > > > My fear is that it is due to the requests changing to another app
> > > > > instance and then suddenly getting wrong data.
>
> > > > > instance 1 +  +
> > > > > instance 2      --
>
> > > > > Hope this clears up the example above a bit
>
> > > > > Cheers
> > > > > Kim
>
> > > > > On Jun 17, 7:52 pm, Kim Riber  wrote:
>
> > > > > > Hi,
> > > > > > I'm experiencing some rather strange behavior from memcache. I
> > think
> > > > > > I'm getting different data back from memcache using the same key
> > > > > > The issue I see is that when putting load on our application, even
> > > > > > simple memcache queries are starting to return inconsistent data.
> > When
> > > > > > running the same request from multiple threads, I get different
> > > > > > results.
> > > > > > I've made a very simple example, that runs fine on 1-200 threads,
> > but
> > > > > > if I put load on the app (with some heavier requests) just before I
> > > > > > run my test, I see different values coming back from memcache using
> > > > > > the same keys.
>
> > > > > > def get_new_memcache_value(key, old_value):
> > > > > >     old_val = memcache.get(key)
> > > > > >     new_val = uuid.uuid

[google-appengine] Re: download_data: How do we deal with blobs?

2009-06-23 Thread Jonathan Feinberg

On Tue, Jun 23, 2009 at 9:26 AM, Nick Johnson (Google) <
nick.john...@google.com> wrote:

> Check out the source code for the Exporter in the bulkloader:
> http://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/tools/bulkloader.py#3308.
>  The docstrings describe how to define your own exporter class that stores
> data however you want.
>

The docstring on the Exporter class is incorrect; it suggests
overriding a method on the Loader class.

The Exporter class's __init__ method expects a bunch of tuples that
connect entity properties to functions that return strings. There
doesn't seem to be a way to decouple this assumption (that properties
are serialized as strings) from the other initialization machinery
(registering the exporter for a given entity class). Am I missing
something?



[google-appengine] Re: How do I limit searchable_text_index using SearchableModel?

2009-06-23 Thread acuth

Whenever I've tried doing a SearchableModelDescendant.all
(keys_only=True).search(query) construction, it has failed saying it
doesn't understand the keys_only parameter - see 'Using __key__ and
SearchableModel':
http://groups.google.com/group/google-appengine/browse_thread/thread/73dc1dc31bfd497b

Do you know if this was fixed in a recent release?

Adrian


On Jun 23, 11:33 am, "Nick Johnson (Google)" 
wrote:
> 2009/6/23 Ian Lewis 
>
> > ogterran,
>
> > It should do one for the search and then one for each item in the search
> > result.
>
> Not quite - it will do one _query_, and multiple gets. A get is much, much
> cheaper than a query. You're right about the number of round-trips, though.
>
> > If you are worried about performance of the calls to the datastore you can modify
> > this code to make the ProductSearchIndex entity be a child of the Product
> > entity and  use a key only query to retrieve only the keys for the search
> > index entities (since we only really care about the Products anyway).
>
> Good idea!
>
>
>
>
>
> > This will still to the same number of queries but will avoid the overhead
> > of deserializing the ProductSearchIndex objects (and the associated index
> > list property which might be long).
>
> > Something like the following should work:
>
> > class Product(db.Model):
> >     pid = db.StringProperty(required=True)
> >     title = db.StringProperty(required=True)
> >     site = db.StringProperty(required=True)
> >     url = db.LinkProperty(required=True)
>
> > class ProductSearchIndex(search.SearchableModel):
> >     # parent == Product
> >     title = db.StringProperty(required=True)
>
> > ...
> > # where you write the Product
> > product = Product(pid=pid, title=title, site=site, url=url)
> > product.put()
> > index = ProductSearchIndex(parent=product, title=title)
> > index.put()
>
> > ...
> > # where you search
> > keys = ProductSearchIndex.all(keys_only=True).search(query).fetch(100)
> > for key in keys:
> >     product = Product.get(key.parent())
> >     print product.url
>
> This can be done much more efficiently:
>   keys = ProductSearchIndex.all(keys_only=True).search(query).fetch(100)
>   products = db.get(x.parent() for x in keys)
>
> Now you're down to just two round-trips!
>
> -Nick Johnson
>
>
>
>
>
> > On Tue, Jun 23, 2009 at 5:59 PM, ogterran  wrote:
>
> >> Hi Ian,
>
> >> Thanks for the response.
> >> I have one question on number of datastore calls.
> >> How many datastore calls is the query below making?
> >> Is it 1 or 100?
>
> >> > class Product(db.Model):
> >> >pid = db.StringProperty(required=True)
> >> >title = db.StringProperty(required=True)
> >> >site = db.StringProperty(required=True)
> >> >url = db.LinkProperty(required=True)
>
> >> > class ProductSearchIndex(search.SearchableModel):
> >> >product = db.ReferenceProperty(Product)
> >> >title = db.StringProperty(required=True)
>
> >> query = ProductSearchIndex.all().search(searchtext)
> >> results = query.fetch(100)
> >> for i, v in enumerate(results):
> >>print v.product.url
>
> >> Thanks
> >> Jon
>
> > --
> > ===
> > 株式会社ビープラウド  イアン・ルイス
> > 〒150-0012
> > 東京都渋谷区広尾1-11-2アイオス広尾ビル604
> > email: ianmle...@beproud.jp
> > TEL:03-5795-2707
> > FAX:03-5795-2708
> > http://www.beproud.jp/
> > ===
>
> --
> Nick Johnson, App Engine Developer Programs Engineer
> Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
> 368047



[google-appengine] Re: download_data: How do we deal with blobs?

2009-06-23 Thread Nick Johnson (Google)
On Tue, Jun 23, 2009 at 2:54 PM, Jonathan Feinberg wrote:

>
> On Tue, Jun 23, 2009 at 9:26 AM, Nick Johnson (Google) <
> nick.john...@google.com> wrote:
>
> > Check out the source code for the Exporter in the bulkloader:
> >
> http://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/tools/bulkloader.py#3308.
> The docstrings describe how to define your own exporter class that stores
> > data however you want.
> >
>
> The docstring on the Exporter class is incorrect; it suggests
> overriding a method on the Loader class.
>
> The Exporter class's __init__ method expects a bunch of tuples that
> connect entity properties to functions that return strings. There
> doesn't seem to be a way to decouple this assumption (that properties
> are serialized as strings) from the other initialization machinery
> (registering the exporter for a given entity class). Am I missing
> something?


Raw strings (that is, ones that aren't unicode) can contain any data,
including blobs, and all the conversion functions should return raw strings.
You can then override output_entities to output the data to the format of
your choice.

-Nick Johnson


> >
>


-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047
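A rough sketch of what such an exporter might look like. This is only an
illustration: the 'Image' kind, its properties, and the output filenames
are invented, and it assumes the (property_name, converter, default)
tuple form and dict-style entity access described in the bulkloader's
docstrings.

from google.appengine.tools import bulkloader

class ImageExporter(bulkloader.Exporter):
    # Hypothetical kind with a string 'filename' and a blob 'data'.
    def __init__(self):
        # Each tuple maps a property to a converter that returns a raw
        # string; str() on a Blob yields its raw bytes.
        bulkloader.Exporter.__init__(self, 'Image',
                                     [('filename', str, None),
                                      ('data', str, None)])

    def output_entities(self, entity_generator):
        # Instead of the default CSV output, write each blob to its
        # own file, named after the entity's filename property.
        for entity in entity_generator:
            out = open(str(entity['filename']), 'wb')
            out.write(str(entity['data']))
            out.close()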




[google-appengine] Re: How do I limit searchable_text_index using SearchableModel?

2009-06-23 Thread Nick Johnson (Google)
You're quite right. This will be fixed in a future release.

-Nick Johnson
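In the meantime, a possible workaround, sketched from the code earlier in
this thread (it gives up the keys-only saving but keeps the two-round-trip
shape):

  # Fetch the index entities themselves (keys_only is not yet accepted
  # by search()), then batch-get their parent Products in one call.
  entities = ProductSearchIndex.all().search(query).fetch(100)
  products = db.get(e.key().parent() for e in entities)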

2009/6/23 acuth 

>
> Whenever I've tried doing a SearchableModelDescendant.all
> (keys_only=True).search(query) construction, it has failed saying it
> doesn't understand the keys_only parameter - see the 'Using __key__ and
> SearchableModel' thread:
> http://groups.google.com/group/google-appengine/browse_thread/thread/73dc1dc31bfd497b
>
> Do you know if this was fixed in a recent release?
>
> Adrian
>
>
> On Jun 23, 11:33 am, "Nick Johnson (Google)" 
> wrote:
> > 2009/6/23 Ian Lewis 
> >
> > > ogterran,
> >
> > > It should do one for the search and then one for each item in the
> search
> > > result.
> >
> > Not quite - it will do one _query_, and multiple gets. A get is much,
> much
> > cheaper than a query. You're right about the number of round-trips,
> though.
> >
> > > If you are worried performance on the calls to the datastore you can
> modify
> > > this code to make the ProductSearchIndex entity be a child of the
> Product
> > > entity and  use a key only query to retrieve only the keys for the
> search
> > > index entities (since we only really care about the Products anyway).
> >
> > Good idea!
> >
> >
> >
> >
> >
> > > This will still do the same number of queries but will avoid the
> overhead
> > > of deserializing the ProductSearchIndex objects (and the associated
> index
> > > list property which might be long).
> >
> > > Something like the following should work:
> >
> > > class Product(db.Model):
> > >pid = db.StringProperty(required=True)
> > >title = db.StringProperty(required=True)
> > >site = db.StringProperty(required=True)
> > >url = db.LinkProperty(required=True)
> >
> > > class ProductSearchIndex(search.SearchableModel):
> > ># parent == Product
> > >title = db.StringProperty(required=True)
> >
> > > ...
> > > # where you write the Product
> > > product = Product(pid = pid, title=title, site=site, url=url)
> > > product.put()
> > > index = ProductSearchIndex(parent=product, title=title)
> > > index.put()
> >
> > > ...
> > > # where you search
> > > keys = ProductSearchIndex.all(keys_only=True).search(query).fetch(100)
> > > for key in keys:
> > > product = Product.get(key.parent())
> > > print product.url
> >
> > This can be done much more efficiently:
> >   keys = ProductSearchIndex.all(keys_only=True).search(query).fetch(100)
> >   products = db.get(x.parent() for x in keys)
> >
> > Now you're down to just two round-trips!
> >
> > -Nick Johnson
> >
> >
> >
> >
> >
> > > On Tue, Jun 23, 2009 at 5:59 PM, ogterran 
> wrote:
> >
> > >> Hi Ian,
> >
> > >> Thanks for the response.
> > >> I have one question on number of datastore calls.
> > >> How many datastore calls is the query below making?
> > >> Is it 1 or 100?
> >
> > >> > class Product(db.Model):
> > >> >pid = db.StringProperty(required=True)
> > >> >title = db.StringProperty(required=True)
> > >> >site = db.StringProperty(required=True)
> > >> >url = db.LinkProperty(required=True)
> >
> > >> > class ProductSearchIndex(search.SearchableModel):
> > >> >product = db.ReferenceProperty(Product)
> > >> >title = db.StringProperty(required=True)
> >
> > >> query = ProductSearchIndex.all().search(searchtext)
> > >> results = query.fetch(100)
> > >> for i, v in enumerate(results):
> > >>print v.product.url
> >
> > >> Thanks
> > >> Jon
> >
> > > --
> > > ===
> > > 株式会社ビープラウド  イアン・ルイス
> > > 〒150-0012
> > > 東京都渋谷区広尾1-11-2アイオス広尾ビル604
> > > email: ianmle...@beproud.jp
> > > TEL:03-5795-2707
> > > FAX:03-5795-2708
> > > http://www.beproud.jp/
> > > ===
> >
> > --
> > Nick Johnson, App Engine Developer Programs Engineer
> > Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
> Number:
> > 368047
> >
>


-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appengine@googlegroups.com
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en
-~--~~~~--~~--~--~---



[google-appengine] Re: download_data: How do we deal with blobs?

2009-06-23 Thread Jonathan Feinberg

On Jun 23, 9:57 am, "Nick Johnson (Google)" 
wrote:
> Raw strings (that is, ones that aren't unicode) can contain any data,
> including blobs, and all the conversion functions should return raw strings.
> You can then override output_entities to output the data to the format of
> your choice.

Huzzah! Thank you.

Am I right in my understanding that by the time these Exporter methods
are called, the entities have already been transferred in some opaque
(and presumably efficient) form from GAE to the client?



[google-appengine] Re: download_data: How do we deal with blobs?

2009-06-23 Thread Nick Johnson (Google)
On Tue, Jun 23, 2009 at 3:03 PM, Jonathan Feinberg wrote:

>
> On Jun 23, 9:57 am, "Nick Johnson (Google)" 
> wrote:
> > Raw strings (that is, ones that aren't unicode) can contain any data,
> > including blobs, and all the conversion functions should return raw
> strings.
> > You can then override output_entities to output the data to the format of
> > your choice.
>
> Huzzah! Thank you.
>
> Am I right in my understanding that by the time these Exporter methods
> are called, the entities have already been transferred in some opaque
> (and presumably efficient) form from GAE to the client?


Correct - the bulkloader uses remote_api. If you want, of course, you can
write your own importer, exporter, or anything else, using remote_api.

-Nick Johnson


>
> >
>


-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047
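For reference, a minimal sketch of a standalone remote_api client. The app
id, handler path, and the 'Greeting' kind are placeholders, and it assumes
the ConfigureRemoteDatastore helper from the 2009-era SDK, with the
/remote_api handler mapped in the app's app.yaml.

import getpass

from google.appengine.ext import db
from google.appengine.ext.remote_api import remote_api_stub

def auth_func():
    # Credentials of an administrator of the app.
    return raw_input('Email: '), getpass.getpass('Password: ')

remote_api_stub.ConfigureRemoteDatastore('your-app-id', '/remote_api',
                                         auth_func)

# From here on, datastore calls run against the live app.
print db.GqlQuery('SELECT * FROM Greeting').count()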




[google-appengine] Re: download_data: How do we deal with blobs?

2009-06-23 Thread Jonathan Feinberg

On Jun 23, 10:04 am, "Nick Johnson (Google)" 
wrote:

> > Am I right in my understanding that by the time these Exporter methods
> > are called, the entities have already been transferred in some opaque
> > (and presumably efficient) form from GAE to the client?
>
> Correct - the bulkloader uses remote_api. If you want, of course, you can
> write your own importer, exporter, or anything else, using remote_api.

Again, thanks. One final question (I think): is my understanding of
the code correct in that the result_db is not deleted at the
conclusion of a successful transfer, and that, therefore, if I run the
export a couple of weeks later, only the newly added entities will be
transferred?



[google-appengine] Re: download_data: How do we deal with blobs?

2009-06-23 Thread Nick Johnson (Google)
On Tue, Jun 23, 2009 at 3:13 PM, Jonathan Feinberg wrote:

>
> On Jun 23, 10:04 am, "Nick Johnson (Google)" 
> wrote:
>
> > > Am I right in my understanding that by the time these Exporter methods
> > > are called, the entities have already been transferred in some opaque
> > > (and presumably efficient) form from GAE to the client?
> >
> > Correct - the bulkloader uses remote_api. If you want, of course, you can
> > write your own importer, exporter, or anything else, using remote_api.
>
> Again, thanks. One final question (I think): is my understanding of
> the code correct in that the result_db is not deleted at the
> conclusion of a successful transfer, and that, therefore, if I run the
> export a couple of weeks later, only the newly added entities will be
> transferred?


I'm not certain, actually. I would expect that the database is deleted or
marked as completed once everything's downloaded.  Let me know if you find
out! ;)

-Nick Johnson


> >
>


-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Query with >1000 matches

2009-06-23 Thread herbie

Oh, really? That limits my app somewhat. I assume that if I have no
inequality filters and order by date, I will get the latest entities?

I could then filter these in memory for x > threshold.
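That approach might look like the sketch below, using the model and
property names from earlier in the thread; the over-fetch size of 500 is
an arbitrary guess, and if fewer than 200 of the fetched entities pass the
filter, a larger fetch would be needed.

  # Over-fetch the newest entities by date, then apply the inequality
  # in memory and keep the first 200 that pass.
  recent = Foo.all().order('-timestamp').fetch(500)
  matching = [e for e in recent if e.property_x > 50][:200]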


On Jun 23, 1:50 pm, "Nick Johnson (Google)" 
wrote:
> Hi herbie,
>
> If your query includes an inequality (such as x>50), then your first sort
> order has to be on the same property as that inequality, which means you
> can't (directly) fetch the most recent 200 results with x>50. You either
> need to change your query to use only equality filters, or you need to fetch
> extra results, then sort them in memory and only take the most recent ones.
>
> -Nick Johnson
>
>
>
> On Tue, Jun 23, 2009 at 1:44 PM, herbie <4whi...@o2.co.uk> wrote:
>
> > Thanks for your help Nick.
>
> > No, my threshold value 'x' isn't constant. I still haven't got my head
> > round this yet! Can you tell me how to get the latest entities
> > (assuming I don't want all of them)   out of the datastore  and filter
> > on another property?
>
> > For example:  Get the latest 200 entities  where x > 50.   I don't
> > care what their 'date's are as long as I get the latest and x > 50.
>
> > On Jun 23, 1:16 pm, "Nick Johnson (Google)" 
> > wrote:
> > > On Tue, Jun 23, 2009 at 12:42 PM, herbie <4whi...@o2.co.uk> wrote:
>
> > > > So will this :
> > > > query = Foo.all().filter("property_x >", 50).order("property_x") .order
> > > > ("-timestamp")
> > > > results = query.fetch(200)
>
> > > > ..get the latest entities where property_x > 50 ?  Or will it get the
> > > > 200 properties with the largest 'property_x'  which are then ordered
> > > > by 'timestamp' ?   A subtle but important difference.
>
> > > It will get the 200 entities with the smallest property_x greater than 50
> > > (since you're filtering >50 and ordering first by property_x). If two
> > > entities have the same value for property_x, they will be sorted by
> > > timestamp, descending.
>
> > > If you need the latest, and your threshold of 50 is a constant, you can
> > add
> > > a BooleanProperty to your entity group encoding the condition 'is greater
> > > than 50', and filter on that using an equality filter.
>
> > > -Nick Johnson
>
> > > > As I said I need make sure I get the latest entities.
>
> > > > On Jun 22, 11:33 pm, Tony  wrote:
> > > > > Yes, that is what it means.  I forgot about that restriction.
>
> > > > > I see what you mean about changing 'x' values.  Perhaps consider
> > > > > keeping two counts - a running sum and a running count (of the # of x
> > > > > properties).  If a user modifies an 'x' value, you can adjust the sum
> > > > > up or down accordingly.
>
> > > > > On Jun 22, 5:40 pm, herbie <4whi...@o2.co.uk> wrote:
>
> > > > > > I tried your query below but I get "BadArgumentError: First
> > ordering
> > > > > > property must be the same as inequality filter property, if
> > specified
> > > > > > for this query;"
> > > > > > Does this mean I have to order on 'x' first, then order on 'date'?
> > > > > > Will this still return the latest 200 of all entities with x > 50
> > if
> > > > > > I  call query.fetch(200)?
>
> > > > > > > I take your and Nick's point about keeping a 'running average'.   But
> > in
> > > > > > my example the user can change the 'x' value so the average has to
> > be
> > > > > > recalculated from the latest entities.
>
> > > > > > On Jun 22, 9:46 pm, Tony  wrote:
>
> > > > > > > You could accomplish this task like so:
>
> > > > > > > xlist = []
> > > > > > > > query = Foo.all().filter("property_x >", 50).order("-timestamp")
> > > > > > > for q in query:
> > > > > > >   xlist.append(q.property_x)
> > > > > > > avg = sum(xlist) / len(xlist)
>
> > > > > > > What Nick is saying, I think, is that fetching 1000 entities is
> > going
> > > > > > > to be very resource-intensive, so a better way to do it is to
> > > > > > > calculate this data at write-time instead of read-time.  For
> > example,
> > > > > > > every time you add an entity, you could update a separate entity
> > that
> > > > > > > has a property like "average = db.FloatProperty()" with the
> > current
> > > > > > > average, and then you could simply fetch that entity and get the
> > > > > > > current running average.
>
> > > > > > > On Jun 22, 4:25 pm, herbie <4whi...@o2.co.uk> wrote:
>
> > > > > > > > Ok. Say I have many (>1000)  Model entities with two properties
> > 'x'
> > > > > > > > and 'date'.    What is the most efficient query to fetch say
> > the
> > > > > > > > latest 200 entities  where x > 50.   I don't care what their
> > > > 'date's
> > > > > > > > are as long as I get the latest and x > 50
>
> > > > > > > > Thanks again for your help.
>
> > > > > > > > On Jun 22, 4:11 pm, "Nick Johnson (Google)" <
> > > > nick.john...@google.com>
> > > > > > > > wrote:
>
> > > > > > > > > Consider precalculating this data and storing it against
> > another
> > > > entity.
> > > > > > > > > This will save a lot of work on requests.
>
> > > > > > > > > -Nick Johnson
>
> > > > > > > > > On Mon, Jun 22, 20

[google-appengine] Re: Query with >1000 matches

2009-06-23 Thread Nick Johnson (Google)
On Tue, Jun 23, 2009 at 3:27 PM, herbie <4whi...@o2.co.uk> wrote:

>
> Oh, really? That limits my app somewhat. I assume that if I have no
> inequality filters and order by date, I will get the latest entities?
>
> I could then filter these in memory for x > threshold.


Correct.

-Nick Johnson


>
>
>
> On Jun 23, 1:50 pm, "Nick Johnson (Google)" 
> wrote:
> > Hi herbie,
> >
> > If your query includes an inequality (such as x>50), then your first sort
> > order has to be on the same property as that inequality, which means you
> > can't (directly) fetch the most recent 200 results with x>50. You either
> > need to change your query to use only equality filters, or you need to
> fetch
> > extra results, then sort them in memory and only take the most recent
> ones.
> >
> > -Nick Johnson
> >
> >
> >
> > On Tue, Jun 23, 2009 at 1:44 PM, herbie <4whi...@o2.co.uk> wrote:
> >
> > > Thanks for your help Nick.
> >
> > > No, my threshold value 'x' isn't constant. I still haven't got my head
> > > round this yet! Can you tell me how to get the latest entities
> > > (assuming I don't want all of them)   out of the datastore  and filter
> > > on another property?
> >
> > > For example:  Get the latest 200 entities  where x > 50.   I don't
> > > care what their 'date's are as long as I get the latest and x > 50.
> >
> > > On Jun 23, 1:16 pm, "Nick Johnson (Google)" 
> > > wrote:
> > > > On Tue, Jun 23, 2009 at 12:42 PM, herbie <4whi...@o2.co.uk> wrote:
> >
> > > > > So will this :
> > > > > query = Foo.all().filter("property_x >", 50).order("property_x")
> .order
> > > > > ("-timestamp")
> > > > > results = query.fetch(200)
> >
> > > > > ..get the latest entities where property_x > 50 ?  Or will it get
> the
> > > > > 200 properties with the largest 'property_x'  which are then
> ordered
> > > > > by 'timestamp' ?   A subtle but important difference.
> >
> > > > It will get the 200 entities with the smallest property_x greater
> than 50
> > > > (since you're filtering >50 and ordering first by property_x). If two
> > > > entities have the same value for property_x, they will be sorted by
> > > > timestamp, descending.
> >
> > > > If you need the latest, and your threshold of 50 is a constant, you
> can
> > > add
> > > > a BooleanProperty to your entity group encoding the condition 'is
> greater
> > > > than 50', and filter on that using an equality filter.
> >
> > > > -Nick Johnson
> >
> > > > > As I said I need make sure I get the latest entities.
> >
> > > > > On Jun 22, 11:33 pm, Tony  wrote:
> > > > > > Yes, that is what it means.  I forgot about that restriction.
> >
> > > > > > I see what you mean about changing 'x' values.  Perhaps consider
> > > > > > keeping two counts - a running sum and a running count (of the #
> of x
> > > > > > properties).  If a user modifies an 'x' value, you can adjust the
> sum
> > > > > > up or down accordingly.
> >
> > > > > > On Jun 22, 5:40 pm, herbie <4whi...@o2.co.uk> wrote:
> >
> > > > > > > I tried your query below but I get "BadArgumentError: First
> > > ordering
> > > > > > > property must be the same as inequality filter property, if
> > > specified
> > > > > > > for this query;"
> > > > > > > Does this mean I have to order on 'x' first, then order on
> 'date'?
> > > > > > > Will this still return the latest 200 of all entities with x >
> 50
> > > if
> > > > > > > I  call query.fetch(200)?
> >
> > > > > > > I take your and Nick's point about keeping a 'running average'.
> But
> > > in
> > > > > > > my example the user can change the 'x' value so the average has
> to
> > > be
> > > > > > > recalculated from the latest entities.
> >
> > > > > > > On Jun 22, 9:46 pm, Tony  wrote:
> >
> > > > > > > > You could accomplish this task like so:
> >
> > > > > > > > xlist = []
> > > > > > > > query = Foo.all().filter("property_x >",
> 50).order("-timestamp")
> > > > > > > > for q in query:
> > > > > > > >   xlist.append(q.property_x)
> > > > > > > > avg = sum(xlist) / len(xlist)
> >
> > > > > > > > What Nick is saying, I think, is that fetching 1000 entities
> is
> > > going
> > > > > > > > to be very resource-intensive, so a better way to do it is to
> > > > > > > > calculate this data at write-time instead of read-time.  For
> > > example,
> > > > > > > > every time you add an entity, you could update a separate
> entity
> > > that
> > > > > > > > has a property like "average = db.FloatProperty()" with the
> > > current
> > > > > > > > average, and then you could simply fetch that entity and get
> the
> > > > > > > > current running average.
> >
> > > > > > > > On Jun 22, 4:25 pm, herbie <4whi...@o2.co.uk> wrote:
> >
> > > > > > > > > Ok. Say I have many (>1000)  Model entities with two
> properties
> > > 'x'
> > > > > > > > > and 'date'.What is the most efficient query to fetch
> say
> > > the
> > > > > > > > > latest 200 entities  where x > 50.   I don't care what
> their
> > > > > 'date's
> > > > > > > > > are as long as I get the latest and x > 50
> >
> > > > > > > > > Thanks again f

[google-appengine] Re: Performance improvements

2009-06-23 Thread Tony

The quote is real, but unless my English is way off, I'm pretty sure
that sentence does not imply that the reduction in quota is part of
the performance improvements.

On Jun 23, 2:23 am, GenghisOne  wrote:
> cc -- Is that quote for real or are you just using a dramatic device
> to illustrate your point?
>
> On Jun 22, 7:14 pm, cc  wrote:
>
> > "along with many performance improvements, we will be reducing the
> > free quota levels"
>
> > On Jun 22, 7:11 pm, cc  wrote:
>
> > > I think you misread the doublespeak: the reduction in the quota IS the
> > > "performance improvement"
>
> > > On Jun 22, 6:05 am, luddep  wrote:
>
> > > > Hello,
>
> > > > So the free quotas have been reduced today and according to the docs
> > > > (http://code.google.com/appengine/docs/quotas.html#Free_Changes) there
> > > > are going to be some performance improvements as well, will there be
> > > > any information released regarding what actual improvements they are?
> > > > (i.e., datastore related, etc)
>
> > > > Thanks!
> > > > - Ludwig



[google-appengine] Re: Performance improvements

2009-06-23 Thread Peter Recore

I agree that the quote does not imply the quota reduction is part of
the performance improvements.
With all that being said, I too would be interested to hear what the
actual performance improvements were.

On Jun 23, 10:41 am, Tony  wrote:
> The quote is real, but unless my English is way off, I'm pretty sure
> that sentence does not imply that the reduction in quota is part of
> the performance improvements.
>
> On Jun 23, 2:23 am, GenghisOne  wrote:
>
> > cc -- Is that quote for real or are you just using a dramatic device
> > to illustrate your point?
>
> > On Jun 22, 7:14 pm, cc  wrote:
>
> > > "along with many performance improvements, we will be reducing the
> > > free quota levels"
>
> > > On Jun 22, 7:11 pm, cc  wrote:
>
> > > > I think you misread the doublespeak: the reduction in the quota IS the
> > > > "performance improvement"
>
> > > > On Jun 22, 6:05 am, luddep  wrote:
>
> > > > > Hello,
>
> > > > > So the free quotas have been reduced today and according to the docs
> > > > > (http://code.google.com/appengine/docs/quotas.html#Free_Changes) there
> > > > > are going to be some performance improvements as well, will there be
> > > > > any information released regarding what actual improvements they are?
> > > > > (i.e., datastore related, etc)
>
> > > > > Thanks!
> > > > > - Ludwig



[google-appengine] Re: Free Quoata Reduction?

2009-06-23 Thread codingGirl

Here is a similar discussion in the Google Appengine Python Group:
http://groups.google.com/group/google-appengine-python/browse_thread/thread/226f7959cd54705


The Google App Engine team should state, once and for all, what the
minimum free quotas will be in the coming years. App Engine is a total
lock-in for our work, so Google should be clear about what we can expect.



[google-appengine] Re: SMS verification trouble.

2009-06-23 Thread Patipat Susumpow

I still get redirected to http://appengine.google.com/permissions/smssend
whenever I try to create a new application.

How can this be fixed?

Cheers,
Patipat.

On Jun 22, 5:06 pm, "Nick Johnson (Google)" 
wrote:
> Hi Patipat,
>
> I've manually activated your account.
>
> -Nick Johnson
>
> On Sat, Jun 20, 2009 at 10:37 AM, Patipat Susumpow wrote:
>
> > Hi,
>
> > I can't verify my account by SMS from
> > > http://appengine.google.com/permissions/smssend.do, tried many times with
> > friends' mobile phone no, various supported operators in Thailand, but
> > always get "The phone number has been sent too many messages or has already
> > been used to confirm an account." message.
>
> > I'm wondering that I never use this verification method before, but get
> > this error.
>
> > Thanks,
> > Patipat.
>
> --
> Nick Johnson, App Engine Developer Programs Engineer
> Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
> 368047



[google-appengine] Re: Problem administering the apps

2009-06-23 Thread Alex Geo

Yes, I just did that and it worked.

Thank you very much!

On Jun 23, 2:18 pm, "Nick Johnson (Google)" 
wrote:
> Hi Alex,
>
> Did you create your apps using a Google Apps account? If so, you need to log
> in at http://appengine.google.com/a/yourdomain - for
> example, http://appengine.google.com/a/navigheaza.ro.
>
> -Nick Johnson
>
>
>
>
>
> On Mon, Jun 22, 2009 at 7:07 PM, Alex Geo  wrote:
>
> > Hello!
>
> > I need a bit of help over here regarding the managing of apps. I visit
> >http://appspot.com, log in and I`m redirected to
> >http://appengine.google.com/start
> > where I can create applications, but I cannot see my existing
> > applications and modify their settings.
>
> > Has anyone experienced this problem before? Please let me know if it
> > did and it fixed.
>
> > Best regards,
> > Alex
>
> --
> Nick Johnson, App Engine Developer Programs Engineer
> Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
> 368047



[google-appengine] Re: Server Error (500) while uploading index definitions (again)

2009-06-23 Thread Cat

I just tried to deploy a new version and bumped its version number
from 1 to 122 (the svn number).
Now access to 122.latest...appspot.com results in a 500 Server Error
while my 1.latest... version works fine.

Some changes:
The index.yaml has not changed (except reference counts) and there
should not be a need for a new index. Some models do not occur in the
index (they are accessed by keyword & parent only).
I have moved one existing db.Model from the package dbmodels to
iconmanager.
application/octet-stream.
Cloning 88 static files.
Cloning 56 application files.
Uploading 26 files.
Deploying new version.
Checking if new version is ready to serve.
Will check again in 1 seconds.
Checking if new version is ready to serve.
Will check again in 2 seconds.
Checking if new version is ready to serve.
Closing update: new version is ready to start serving.
Uploading index definitions.
Error 500: --- begin server output ---

Server Error (500)
A server error has occurred.
--- end server output ---
Your app was updated, but there was an error updating your indexes.
Please retry later with appcfg.py update_indexes.





[google-appengine] Re: Server Error (500) while uploading index definitions (again)

2009-06-23 Thread Cat

Ok, the error traces back to an import error shown in the "Logs" menu,
probably my fault.
I am investigating, because the local app did not show this error...



[google-appengine] Re: Idempotence for Cron Jobs

2009-06-23 Thread Wooble



On Jun 22, 11:51 pm, MajorProgamming  wrote:
> I understand that TaskQueues have the possibility of running over and
> over again. Does this apply to cron jobs? Do we need to design them to
> be Idempotent as well?

No.  A cron job will run once when it's scheduled to run.  Idempotence
is important for task queue jobs specifically because they will re-run
if they fail at any point during their run, so parts of them may get
run multiple times.
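As an illustration, one common way to design a task for idempotence is to
record a completion marker so retries become no-ops. This is only a
sketch: the model, handler, and do_work function are invented, and the
check-then-write below is not transactional, so it narrows the retry
window rather than closing it.

from google.appengine.ext import db
from google.appengine.ext import webapp

class WorkDone(db.Model):
    # Marker entity; the key_name identifies a completed unit of work.
    pass

class ProcessWork(webapp.RequestHandler):
    def post(self):
        work_id = self.request.get('work_id')
        if WorkDone.get_by_key_name(work_id):
            return  # a retry of an already-finished task: do nothing
        do_work(work_id)  # hypothetical application logic
        WorkDone(key_name=work_id).put()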



[google-appengine] Re: SMS verification trouble.

2009-06-23 Thread Nick Johnson (Google)
What email address are you using? I activated the one you're using to post
here.

-Nick Johnson

On Tue, Jun 23, 2009 at 4:48 PM, Patipat Susumpow  wrote:

>
> I still redirected to http://appengine.google.com/permissions/smssend
> whenever i try to create new application.
>
> How can this be fixed?
>
> Cheers,
> Patipat.
>
> On Jun 22, 5:06 pm, "Nick Johnson (Google)" 
> wrote:
> > Hi Patipat,
> >
> > I've manually activated your account.
> >
> > -Nick Johnson
> >
> > On Sat, Jun 20, 2009 at 10:37 AM, Patipat Susumpow  >wrote:
> >
> > > Hi,
> >
> > > I can't verify my account by SMS from
> > > http://appengine.google.com/permissions/smssend.do, tried many times
> with
> > > friends' mobile phone no, various supported operators in Thailand, but
> > > always get "The phone number has been sent too many messages or has
> already
> > > been used to confirm an account." message.
> >
> > > I'm wondering that I never use this verification method before, but get
> > > this error.
> >
> > > Thanks,
> > > Patipat.
> >
> > --
> > Nick Johnson, App Engine Developer Programs Engineer
> > Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
> Number:
> > 368047
> >
>


-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Over Quota: Datastore Indices Count

2009-06-23 Thread Thomas McKay - www.winebythebar.com

My app is getting "Your application is exceeding a quota: Datastore
Indices Count" after a vacuum. Could I get a reset of indices count
please?
App:  winebythebar


On May 23, 11:50 pm, fedestabile  wrote:
> Thanks Jeff, it's working fine now :)
>
> Cheers,
> Fred
>
> On May 20, 5:48 am, "Jeff S (Google)"  wrote:
>
> > Fred, I've reset the index count so you should be all set. Kaspars,
> > I've reset the index quota for your app as well and it looks like you
> > had some indexes which were stuck in the building state so I've moved
> > them to error.
>
> > Happy coding,
>
> > Jeff
>
> > On May 19, 2:11 am, "kaspars...@gmail.com" 
> > wrote:
>
> > > I've only 8 indexes at the moment and I'm also getting "Your
> > > application is exceeding a quota: Datastore Indices Count". The
> > > application is basically stuck for a couple of days.
>
> > > Could you please reset the quota, app ID is lasi2.
>
> > > Thanks,
> > > Kaspars
>
>



[google-appengine] Re: Server Error (500) while uploading index definitions (again)

2009-06-23 Thread Cat

Ok, when uploading version 2, and after deleting version 122, I could
trace the error back to a circular import that did not show up when
running locally.

On 23 Jun., 18:05, Cat  wrote:
> Ok, the error traces back to an import error on the "Logs" menu
> probably my fault.
> I am investigating because the local app did not show this error ...



[google-appengine] SMS registration

2009-06-23 Thread thebravoman

Hi,
A week ago I registered a new App Engine user with my email
address, and as a result I used my phone number for the SMS
registration. After a week of trying Google App Engine I would like
to start using it for real work along with 2 colleagues.
Now I need to create an application where each of us can deploy.
I tried creating a Google account specific to the application we
are going to develop, and then tried registering a new application
for this Google account. But it is again me with the same phone number,
so I received the "Phone number can be used only once per app
engine account" error.

I have read the FAQ and the answer - "You may only sign up for one
Google App Engine account per mobile phone number."
But I am also not willing to give my colleagues access to my personal
mail so that we could start working on an application where we
could all deploy.
I can understand the restriction (to some extent), so my proposals are:
1. Limit the number of applications per phone number, not the number
of accounts per phone number. At the moment an account can have 10
applications; could a phone number likewise be allowed 10 applications?
This way the restriction would be preserved to a great extent.
2. When a user enters his phone number, warn him with big red warning
messages that there can be only one Google App Engine account per
phone number.

Do you think such a request is appropriate? Should I create a new
issue?




[google-appengine] Best way to get support for Google App Engine issues

2009-06-23 Thread gae123

I have been using the GAE for about a year now. On the positive side,
I had very few problems. Pretty much everything works as described in
the documentation.

As far as I can remember, I had three incidents where I needed support
from Google. At the end all my issues were resolved by the great
Google folks who monitor the group BUT not in a very timely manner.

In the first incident, my problem was resolved in a couple of days. In
the other two incidents it took 5-8 days. In the most recent case,
which took 8 days, we lost valuable time for the following
reasons:

1) The Google engineer who first handled my case was in an overseas
timezone (Nick is in Ireland, I am in California) and the
request/response latency was 24 hours
2) Since I did not want to reveal information about my site in a
public forum, I sent it through private e-mail, which was classified as
spam
3) Another user who had nothing to do with me intervened in the thread
with his issues and confused the situation

So I am wondering,

1. Is there a better way to get support? You know, some application
(probably part of the dashboard) where you file a ticket, describe
the issue and its priority, and get back a ticket number, with an
option to keep it private between Google and you, etc.? In GAE
spirit, developers could even get a few tickets free per month and then
pay if they go over their ticket quotas :-)
2. If not, is something like that, or even better, in the
plans?
3. In the meantime, I described what I did to get support, should I
have done something in a different way?

Thanks

PS: In all three incidents the problem was around indexes. There
seems to be a bug where, if some index quota is exceeded, the indexes
get into some weird state and Google personnel have to intervene.



[google-appengine] Datastore design for high-performance queries / group membership / Venn diagram self joins

2009-06-23 Thread tiburondude

Hi,

I have an app with two entities that need (maybe) to interrelate in
the typical SQL join sense.

Users
-
userId  Long

Matches
initiatingUserId Long
attemptingUserId Long
matched  boolean

From my UI I can get the data populated properly but I end up with
this in one row:

initiatingUserId = 1
attempingUserId = 2
matched = true

What I want is, when user 1 goes to this section of the app, to get a
list of all records with matched = true, where the currently logged-in
user is in EITHER initiatingUserId OR attemptingUserId. This can't be
done in the SQL sense using a logical OR, because that's not supported
in the datastore (it doesn't perform well).

Of course I can do two queries and join the datasets, but this
introduces some serious pain in the workaround regarding paging on a
large dataset. Not to mention that two queries per request just sounds
slow/wrong.

I watched the excellent video from Google I/O 2009 that instructs us to
think of these things as "group membership queries", and to do "venn
diagram self joins".

So I was thinking the solution may be to remodel the entities like so:

- Remove the Matches entity completely
- change the Users entity to this:

Users:
-
List<Long> initiatingUserIds
List<Long> attemptingUserIds

I'm struggling to write the query that would then retrieve this data.

Anyone have any ideas on this design?

Thanks a bunch!
David
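One hedged reading of the list-property approach from that talk, sketched
in Python (the model and property names are illustrative, not David's
actual schema, and current_user_id is assumed to hold the logged-in
user's id): store both participants in a single list property, so
membership on either side becomes one equality filter, and normal query
paging keeps working.

from google.appengine.ext import db

class Match(db.Model):
    # Both participants' user ids live in one list property.
    user_ids = db.ListProperty(long)
    matched = db.BooleanProperty(default=False)

# An equality filter on a list property matches any element, so this
# finds matches where the user is on either side - no OR needed.
q = (Match.all()
     .filter('user_ids =', current_user_id)
     .filter('matched =', True))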




[google-appengine] newbie question: Alternatives of auto_increment

2009-06-23 Thread Captain___nemo

Hi,
I am new to Google App Engine.

I am a PHP/MySQL developer, and I have always used auto-increment
(integer) columns in my databases. They help with total counts, work as
primary keys, and so on. But as far as I can see, auto_increment is not
available in the datastore. I was wondering what alternative the
datastore provides and recommends we use?

Thanks in advance.
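For what it's worth, the usual substitute is the datastore's own numeric
id, assigned automatically when an entity is saved without a key name; a
small sketch follows (the Post model is invented). Note the ids are
unique but not contiguous, so unlike auto_increment they cannot double as
a row count.

from google.appengine.ext import db

class Post(db.Model):
    title = db.StringProperty()

post = Post(title='hello')
post.put()              # the datastore allocates a numeric id
print post.key().id()   # unique integer, but not sequential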




[google-appengine] Re: SMS Verfication Troubles

2009-06-23 Thread felurkinda

Hi Nick:

I have the same problem as Otto. Could you manually activate my account?

Thanks.

=
Best Regards,
Zhao Tengjiao




[google-appengine] Shifting "applications" between google and google apps.

2009-06-23 Thread Matt

Hi there,

I got a small problem that I need to sort out with Google App Engine.

When I first signed up to develop for GAE I used my regular Google
account and dutifully registered  the name of a product my company is
developing.

I've realized that I should have signed up using my company's "Google
Apps" account instead.

So at the moment I've got a registered application under my regular
Google account that I need to "move" across to my company's Google
Apps account.

What should I do? Can I delete the registered application from within
my account and then add it to my apps domain? Will that work?

I don't want to lose the registered application name as it's very
important to us.

I've also read that there is some work in progress to sort out issues
like this. Is that something I should wait for instead?

I'd really appreciate an answer for this as it's potentially holding
back our application deployment.

Thanks for your time.
Matt




[google-appengine] Help me, I have problem to receive sms, when signup for an App Engine account

2009-06-23 Thread saltfactory

I'm a student living in South Korea, and I want to learn and develop
apps on Google App Engine.
I signed up for an App Engine account, but I have a problem: I can't
receive the SMS to verify the account,
even though I am really entering the correct phone number with my
country code. When I send my phone number to Google, I get errors.
Please help me;
I want an account so I can learn Google App Engine, but I don't know
how to get one.




[google-appengine] Re: Server Error (500) while uploading index definitions (again)

2009-06-23 Thread rjportier

Hi Jason,

We needed to vacuum the indexes first and we did so on cpedevtest01,
cpedevtest02 and cpeacc01.
We also reduced the number of indexes to a much smaller set.

On cpedevtest02 the indexes are ok now, but on cpedevtest01 and on
cpeacc01 there are still some indexes stuck in 'Building' state. On
cpeacc01 we also get an error 500 when updating the indexes. So can
you help us out again and nudge them into the Error state? We'll then
vacuum them and update these application IDs also with the smaller
set.

Thanks!
Robert

On Jun 23, 12:55 am, Partac Constantin  wrote:
> Hi Jason,
>
> Thank you for your help. I tried to redeploy the application with just a few
> indexes but those indexes you nudged into error state are remaining in error
> state. I tried erasing all indexes from datastore-indexes.xml but this did
> not help either. What should I do in order to erase the indexes which are in
> error state.
> About the limitation of 100 indexes I did not know that it existed. Is this
> limitation referring to 100 new indexes when deploying the application or
> the total count should not exceed 100 indexes. On the 
> pagehttp://code.google.com/appengine/docs/java/datastore/overview.html#Qu...
>  it mentioned that there is a limitation of 1,000 indexes on an entity.
> Could you detail what you mean by 100 quota.
>
> Thank you,
> Costi
>
> On Mon, Jun 22, 2009 at 20:45, Jason (Google)  wrote:
> > OK, I nudged your indexes into the error state and reset your Datastore
> > Indices Count quota, but make sure not to upload more than 100 indexes or
> > you may see this issue again.
> > - Jason
>
> > On Mon, Jun 22, 2009 at 10:29 AM, Jason (Google) wrote:
>
> >> Hi Costi. How many indexes are you trying to deploy? There is a hard limit
> >> of 100, and it looks like you're very close to this number.
> >> - Jason
>
> >> On Mon, Jun 22, 2009 at 5:44 AM, C Partac  wrote:
>
> >>> I have the same problem Server Error (500)  on my applications
> >>> cpedevtest01 and cpedevtest02 while uploading. The indexes are
> >>> building for 4 days already.
> >>> Could you reset the indexes manually because after deleting the
> >>> indexes from index I still get the error while uploading the
> >>> application.
>
> >>> Thank you
>
> >>> Costi




[google-appengine] Can't enable authentication for a Google Apps Premier domain that is a subdomain

2009-06-23 Thread Dudinha

Hi all,

I can't create an App Engine application to authenticate the users of
a Google Apps Premier Edition domain, g.spread.com.br. It is a
subdomain of spread.com.br.

The strange thing is that if I enter a domain that does not exist, it is accepted!

How can I solve it?

Best regards,

Eduardo Bortoluzzi Junior
Spread Teleinformática Ltda
Google Reseller in Brazil




[google-appengine] GAE + Eclipse Plug-In + Subversion

2009-06-23 Thread stefan77

Hello,

I need some help with a team setup for GAE in Eclipse with
Subversion.
I've created a GAE project in Eclipse and created a repository with
the Eclipse Team function (Subversive plug-in) on a Subversion server.
Creation and commits work.
But I cannot check it out on another workstation (which also has the
Eclipse GAE plugin installed).
I've tried to just check out the project, but then the GAE plugin does
not detect the project as GAE-supported.
And when I first create a GAE project and then check out the sources, it
also does not work, because of the source structure.
Does anyone have experience or ideas on how to get this working?

Regards,
Stefan





[google-appengine] imaplib issue in GAE

2009-06-23 Thread 马不停蹄的猪

In my project based on GAE, I use imaplib to check Gmail email.
Below is the code:

import imaplib

class Mailbox:
    def __init__(self):
        self.m = imaplib.IMAP4_SSL("imap.gmail.com")
        ...

But the error below occurs when the instruction imaplib.IMAP4_SSL
("imap.gmail.com") is executed.

Traceback (most recent call last):
  File "/base/python_lib/versions/1/google/appengine/ext/webapp/
__init__.py", line 501, in __call__
handler.get(*groups)
  File "/base/data/home/apps/srjsimblog/1.334408786956981127/
mailBlog.py", line 82, in get
gmailBox = Mailbox()
  File "/base/data/home/apps/srjsimblog/1.334408786956981127/
mailBlog.py", line 31, in __init__
self.m = imaplib.IMAP4_SSL("imap.gmail.com")
  File "/base/python_dist/lib/python2.5/imaplib.py", line 1128, in
__init__
IMAP4.__init__(self, host, port)
  File "/base/python_dist/lib/python2.5/imaplib.py", line 163, in
__init__
self.open(host, port)
  File "/base/python_dist/lib/python2.5/imaplib.py", line 1139, in
open
self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
AttributeError: 'module' object has no attribute 'socket'

If I use imaplib in a standalone program with the same instruction, it
works fine. Does anybody know why, and how can this issue be resolved?





[google-appengine] Index deletion/building not happening

2009-06-23 Thread Scott

Hi, I'm having a little trouble getting a new index built. It's for
application skrit, and the model is called UserItemSection, and I'm
trying to build a single index for it. Here's how things have gone so
far.

1. I updated the index to the server two weeks ago. The index page
showed it as 'building'.

2. I checked today and it was still 'building'.

3. I tried vacuuming it, but I think App Engine was having index
troubles at the time, because every time I used any of the commands
'vacuum_indexes', 'update_indexes', or 'update', it would error out on
the indexes. For example, the output when I used 'update' was:

--
Uploading index definitions.
Error 500: --- begin server output ---

Server Error (500)
A server error has occurred.
--- end server output ---
Your app was updated, but there was an error updating your indexes.
Please retry later with appcfg.py update_indexes.
--

The other two commands had similar error messages.

4. I successfully ran the 'update_indexes' command, so I guess
appengine got over its hiccup. I then tried 'vacuum_indexes' and told
it to delete the VocabListSection index. It outputted this:

--
Fetching index definitions diff.
This index is no longer defined in your index.yaml file.

kind: VocabListSection
properties:
- name: list
- name: sect


Are you sure you want to delete this index? (N/y/a): y
Deleting selected index definitions.
2009-06-23 11:44:43,198 WARNING appcfg.py:697 An index was not
deleted.  Most likely this is because it no longer exists.

kind: VocabListSection
properties:
- name: list
- name: sect
--

But on the index page on the dashboard, it's still shown as
'building', so no change from before. Running 'update' or
'update_indexes' has no effect.

How can I get this index unstuck and built?

-Scott




[google-appengine] [ANN]: full-text search for app-engine-patch/Django

2009-06-23 Thread Waldemar Kornewald

Hi,
we'd like to announce the immediate availability of our full-text
search package.
It's called gae-search:
http://gae-full-text-search.appspot.com/

See it in action by searching our documentation (which is indexed with
gae-search). We also have a few demos.

Note that gae-search requires app-engine-patch (Django).

Features:
* index only specific properties (instead of all string/text
properties like in SearchableModel)
* Porter stemmers (increase search quality)
* sort your results (at least a little bit) via chain-sorting
* make "DISTINCT" queries using a so-called "values index"
* auto-completion via a jQuery plugin
* key-based pagination (fully unit-tested implementation of Ryan
Barrett's algorithm)
* easy to use views and templates (add search support in just a few lines)

Since it took a lot of effort to implement all these features and make
them easy to use, we can't give this away for free, though. We initially
implemented it for our own projects, but after so many people
complained about the lack of full-text search, we thought we could
provide it to others - for a little compensation.

Bye,
Waldemar Kornewald & Thomas Wanschik (the creators of app-engine-patch)




[google-appengine] Re: dev_appserver.py throws "SystemError: frexp() result out of range" and subsequenty "ValueError: bad marshal data"

2009-06-23 Thread rraj
No, but I gave a different datastore location (using datastore_path and
history_path, as shown in the trace snippet), which I thought was
equivalent enough - is it not?
Regards,
R.Rajkumar
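For what it's worth, pointing at a different path only starts a fresh
file; it does not clean up an existing one. The dev server also accepts
a --clear_datastore flag that wipes the stub's data at startup. Applied
to the command line quoted below, that would look something like:

C:\Program Files\Google\google_appengine>dev_appserver.py --clear_datastore --datastore_path=C:\gae_data --history_path=C:\gae_data demos\guestbook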

On Tue, Jun 23, 2009 at 4:17 PM, Nick Johnson (Google) <
nick.john...@google.com> wrote:

> Hi rraj,
>
> Have you tried clearing your datastore?
>
> -Nick Johnson
>
>
> On Tue, Jun 23, 2009 at 3:07 AM, rraj  wrote:
>
>> Hi,
>> Has anybody encountered the following error, and can you guide me on
>> how you fixed it?
>>
>> When running the development app server, the initial run throws
>> "SystemError: frexp() result out of range"...
>>
>> C:\Program Files\Google\google_appengine>dev_appserver.py
>> --datastore_path=C:\gae_data --history_path=C:\gae_data demos\guestbook
>> Traceback (most recent call last):
>>   File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 60, in
>>     run_file(__file__, globals())
>>   File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 57, in run_file
>>     execfile(script_path, globals_)
>>   File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver_main.py", line 483, in
>>     sys.exit(main(sys.argv))
>>   File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver_main.py", line 400, in main
>>     SetGlobals()
>>   File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver_main.py", line 86, in SetGlobals
>>     from google.appengine.tools import dev_appserver
>>   File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 86, in
>>     from google.appengine.api import datastore_file_stub
>>   File "C:\Program Files\Google\google_appengine\google\appengine\api\datastore_file_stub.py", line 38, in
>>     import datetime
>> SystemError: frexp() result out of range
>>
>>
>>
>> Subsequent attempts to run applications throw "ValueError: bad
>> marshal data"...
>>
>>
>> C:\Program Files\Google\google_appengine>dev_appserver.py
>> --datastore_path=C:\gae_data --history_path=C:\gae_data demos\guestbook
>> Traceback (most recent call last):
>>   File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 60, in
>>     run_file(__file__, globals())
>>   File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 57, in run_file
>>     execfile(script_path, globals_)
>>   File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver_main.py", line 483, in
>>     sys.exit(main(sys.argv))
>>   File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver_main.py", line 400, in main
>>     SetGlobals()
>>   File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver_main.py", line 86, in SetGlobals
>>     from google.appengine.tools import dev_appserver
>>   File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 86, in
>>     from google.appengine.api import datastore_file_stub
>> ValueError: bad marshal data
>>
>>
>>
>> Python Version :: Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45)
>> [MSC v.1310 32 bit Intel)] on win32
>>
>> GAE Version : 1.2.3 (GoogleAppEngine_1.2.3.msi)
>>
>> Was running the application with an earlier version of GAE
>> (1.2.2/1.2.1), when I encountered the "bad marshal data" problem during a
>> restart of my application. Tried moving to the latest version to see if this
>> has been handled.
>>
>> Removing the datastore_file_stub.pyc and running again reproduces
>> the problem in the same sequence : "frexp() result out of range" followed by
>> "bad marshal data".
>>
>> Tried moving to Python 2.6.2 - did not help.
>>
>> Tried repairing GAE 1.2.3 - did not help.
>>
>> Uninstalled Python 2.6.2, Python 2.5.2 & GAE and then installed Python
>> 2.5.2 and GAE 1.2.3 again and tested with demo application and new
>> data-store path, when I got the above traces.
>>
>>
>> Not able to run any GAE apps now :-(
>> Any tips to get me going again will be appreciated.
>>
>> Thanks & Regards,
>> R.Rajkumar
>>
>>
>>
>>
>
>
> --
> Nick Johnson, App Engine Developer Programs Engineer
> Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
> 368047
>
> >
>




[google-appengine] Re: Over Quota: Datastore Indices Count

2009-06-23 Thread Jeff S (Google)
Hi Thomas,
I've reset the index count, apologies for the inconvenience.

Happy coding,

Jeff

On Tue, Jun 23, 2009 at 9:12 AM, Thomas McKay - www.winebythebar.com <
thomasfmc...@gmail.com> wrote:

>
> My app is getting "Your application is exceeding a quota: Datastore
> Indices Count" after a vacuum. Could I get a reset of indices count
> please?
> App:  winebythebar
>
>
> On May 23, 11:50 pm, fedestabile  wrote:
> > Thanks Jeff, it's working fine now :)
> >
> > Cheers,
> > Fred
> >
> > On May 20, 5:48 am, "Jeff S (Google)"  wrote:
> >
> > > Fred, I've reset the index count so you should be all set. Kaspars,
> > > I've reset the index quota for your app as well, and it looks like you
> > > had some indexes which were stuck in the building state, so I've moved
> > > them to error.
> >
> > > Happy coding,
> >
> > > Jeff
> >
> > > On May 19, 2:11 am, "kaspars...@gmail.com" 
> > > wrote:
> >
> > > > I've only 8 indexes at the moment and I'm also getting "Your
> > > > application is exceeding a quota: Datastore Indices Count". The
> > > > application is basically stuck for a couple of days.
> >
> > > > Could you please reset the quota; app ID is lasi2.
> >
> > > > Thanks,
> > > > Kaspars
> >
> >
> >
>




[google-appengine] Is "class Body(db.Model):" ok?

2009-06-23 Thread Jesse Grosjean

I've just added a new model class to my app that's defined like this:

class Body(db.Model):
    content = db.TextProperty()

It seems to be working fine in my server code, but for some reason it
doesn't show up in the list of entities shown by the App Engine
Console Data Viewer. Also when I run a direct query in the console
for:

SELECT * FROM Body

I get a page that reports:

Server Error
A server error has occurred.

Can someone help me figure out what is going on?

Thanks,
Jesse



[google-appengine] Re: Efficient way to structure my data model

2009-06-23 Thread ecognium

Thanks again - this is very helpful. I will let you know if i run into
any future index creation errors as it could have been caused by any
number of other entries - i mistakenly thought it was all these categ
list-based entries.

So if i understand it right even with a 10 element list for keywords,
there will only be 10 rows when 4 categ fields are used. In the event
I use  'categ' only once in my query along with keywords field, it
will have up to 40 rows (10 from keywords and 4C1 from categ list). Am
I adding these up right?

I do not see myself going beyond 6 elements in the categ list at this
point (I guess the max will be 6C3 = 20 in such a situation). The
keyword list will probably go into the 20s, but I do not see it growing
beyond that, and it will always be used only once in the query.

Thanks,
-e
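For reference, the equality-only query discussed in this thread might
look like this in Python (a sketch only; modelling categ and keywords
as StringListProperty is an assumption based on the examples above):

from google.appengine.ext import db

class MyModel(db.Model):
    keywords = db.StringListProperty()
    categ = db.StringListProperty()

# Four equality filters on the same list property; these are served by
# the merge-join strategy, so no composite index is required.
q = (MyModel.all()
     .filter('categ =', 'Circle')
     .filter('categ =', 'Small')
     .filter('categ =', 'Blue')
     .filter('categ =', 'Dotted'))
results = q.fetch(100)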

On Jun 23, 3:53 am, "Nick Johnson (Google)" 
wrote:
> Hi ecognium,
>
>
>
> On Tue, Jun 23, 2009 at 1:35 AM, ecognium  wrote:
>
> > Thanks, Nick. Let me make sure I understand your comment correctly.
> > Suppose I have the following data:
>
> > ID      BlobProp1       BlobProp2-N     Keywords
> >  Categ
> > =
> > 123     blah                    blah                    tag1,tag2,tag3
> >  Circle,
> > Red,  Large, Dotted
> > 345     blah                    blah                    tag3,tag4,tag5
> > Square, Blue, Small, Solid
> > 678     blah                    blah                    tag1,tag3,tag4
> > Circle, Blue, Small, Solid
>
> > -
>
> > The field categ (list) contains four different types - Shape, Color,
> > Size and Line Type. Suppose the user wants to retrieve all entities
> > that are Small Dotted Blue Circles then the query will be:
>
> > Select * From MyModel where categ = "Circle" AND categ = "Small" AND
> > categ = "Blue" AND categ = "Dotted"
>
> > When I was reading about exploding indexes the example indicated the
> > issue was due to Cartesian product of two list elements. I thought the
> > same will hold true with one list field when used multiple times in a
> > query.
>
> That is indeed true, though it's not quite the cartesian product - the
> datastore won't bother indexing (Circle, Circle, Circle, Circle), or
> (Dotted, Dotted, Dotted, Dotted) - it only indexes every unique combination,
> which is a substantially smaller number than the cartesian product. It's
> still only tractable for small lists, though, such as the 4 item lists
> you're dealing with.
>
> Are you saying the above query will not need {Circle, Red,
>
> > Large, Dotted} * {Circle, , , } * {Circle, , , } * {Circle, , , }
> > number of index entities for entity ID=123?
>
> Correct - if you're not specifying a sort order, you can execute the query
> without any composite indexes whatsoever. The datastore satisfies
> equality-only queries using a merge join strategy.
>
> > I was getting index errors
> > when I was using the categ list property four times in my index
> > specification and that's why I was wondering if I should restructure
> > things.
>
> How many items did you have in the list you were indexing in that case? If
> your list has 4 items and your index specification lists it 4 times, you
> should only get one index entry.
>
> so I am guessing the following spec should not cause any index
>
> > issues in the future?
>
> Again, that depends on the number of entries in the 'categ' list. With 4
> entries, this will only generate a single index entry, but the number of
> entries will expand exponentially as the list increases in size.
>
> -Nick Johnson
>
>
>
>
>
> > - kind: MyModel
> >  properties:
> >  - name: categ
> >  - name: categ
> >  - name: categ
> >  - name: categ
> >  - name: keywords
> >  - name: __key__   # used for paging
>
> > Thanks,
> > -e
>
> > On Jun 22, 2:10 am, "Nick Johnson (Google)" 
> > wrote:
> > > Hi ecognium,
>
> > > If I understand your problem correctly, every entity will have 0-4
> > > entries in the 'categ' list, corresponding to the values for each of
> > > 4 categories (eg, Color, Size, Shape, etc)?
>
> > > The sample query you give, with only equality filters, will be
> > > satisfiable using the merge join query planner, which doesn't require
> > > custom indexes, so you won't have high indexing overhead. There will
> > > simply be one index entry for each item in each list.
>
> > > If you do need custom indexes, the number of index entries isn't 4^4,
> > > as you suggest, but rather smaller. Assuming you want to be able to
> > > query with any number of categories from 0 to 4, you'll need 3 or 4
> > > custom indexes (depending on if the 0-category case requires its own
> > > index), and the total number of index entries will be
> > > 4C1 + 4C2 + 4C3 + 4C4 = 4 + 6 + 4 + 1 = 15. For 6 categories, the
> > > number of entries would be 6 + 15 + 20 + 15 + 6 + 1 = 63, which is
> > > still a not-unreasonable number.

[google-appengine] Re: SMS registration

2009-06-23 Thread Wooble

Each developer should have their own Google account, which you can add
as an administrator to your app.  You do not need to share a Google
account.

On Jun 22, 6:31 pm, thebravoman  wrote:
> Hi,
> A week ago I registered a new App Engine user with my email
> address. As a result I used my phone number for the SMS
> registration. After a week of trying Google App Engine, I would like
> to start making more real use of it along with 2 colleagues.
> Now I need to create an application where each of us can deploy.
> I have tried creating a google account specific to the application we
> are going to develop. Then I have tried registering a new application
> for this google account. But it is again me with the same phone number
> so I have received the "Phone number can be used only once per app
> engine account" error.
>
> I have read the FAQ and the answer - "You may only sign up for one
> Google App Engine account per mobile phone number."
> But I am also not willing to give access to my personal mail to my
> colleagues so that we could start working on an application where we
> could all deploy.
> I can understand the restriction (to some extent), so my proposals are:
> 1. Limit the number of applications per phone number, not the number
> of accounts per phone number. At the moment an account can have 10
> applications. Could a phone number be allowed 10 applications?
> This way the restriction would be (to a great extent) preserved.
> 2. When a user enters his phone number, warn him with big red warning
> messages that there can be only one Google App Engine account per
> phone number.
>
> Do you think such a request is appropriate? Should I create a new
> issue?



[google-appengine] Re: imaplib issue in GAE

2009-06-23 Thread Wooble

You can't open socket connections in App Engine.  There's no way at
present to connect to an email server; the workaround is to have
smtp2web post your email messages to your application instead of
having the application try to fetch them from somewhere.

You could also build an app running elsewhere that reads your email
and provides a REST interface to it.
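A minimal sketch of the receiving side of that workaround, assuming
smtp2web is configured to POST each incoming message to a
/incoming_mail URL (the URL and handler name here are illustrative):

from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class IncomingMailHandler(webapp.RequestHandler):
    def post(self):
        # smtp2web pushes each message to the app over HTTP, so there
        # is no need to open an IMAP socket from inside App Engine.
        raw_message = self.request.body
        # ... parse and persist raw_message here ...

application = webapp.WSGIApplication([('/incoming_mail',
                                       IncomingMailHandler)])

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()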

On Jun 23, 11:19 am, 马不停蹄的猪  wrote:
> In my project based on GAE, I use imaplib to check email in Gmail.
> Below is the code:
>
> class Mailbox:
>     def __init__(self):
>         self.m = imaplib.IMAP4_SSL("imap.gmail.com")
>     ...
>
> But the error below occurs when the instruction imaplib.IMAP4_SSL
> ("imap.gmail.com") is executed.
>
> Traceback (most recent call last):
>   File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 501, in __call__
>     handler.get(*groups)
>   File "/base/data/home/apps/srjsimblog/1.334408786956981127/mailBlog.py", line 82, in get
>     gmailBox = Mailbox()
>   File "/base/data/home/apps/srjsimblog/1.334408786956981127/mailBlog.py", line 31, in __init__
>     self.m = imaplib.IMAP4_SSL("imap.gmail.com")
>   File "/base/python_dist/lib/python2.5/imaplib.py", line 1128, in __init__
>     IMAP4.__init__(self, host, port)
>   File "/base/python_dist/lib/python2.5/imaplib.py", line 163, in __init__
>     self.open(host, port)
>   File "/base/python_dist/lib/python2.5/imaplib.py", line 1139, in open
>     self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
> AttributeError: 'module' object has no attribute 'socket'
>
> If I use imaplib in a standalone program with the same instruction, it
> works fine. Does anybody know why, and how can I resolve this issue?



[google-appengine] Re: SMS verification trouble.

2009-06-23 Thread Patipat Susumpow

keng...@gmail

the same one I've posted here.

I'm curious. When I click "Create an Application" from this page,
http://appengine.google.com/start, it still redirects me to
http://appengine.google.com/permissions/smssend

Cheers,
Patipat.

On Jun 23, 11:07 pm, "Nick Johnson (Google)" 
wrote:
> What email address are you using? I activated the one you're using to post
> here.
>
> -Nick Johnson
>
>
>
>
>
> On Tue, Jun 23, 2009 at 4:48 PM, Patipat Susumpow  wrote:
>
> > I'm still redirected to http://appengine.google.com/permissions/smssend
> > whenever i try to create new application.
>
> > How can this be fixed?
>
> > Cheers,
> > Patipat.
>
> > On Jun 22, 5:06 pm, "Nick Johnson (Google)" 
> > wrote:
> > > Hi Patipat,
>
> > > I've manually activated your account.
>
> > > -Nick Johnson
>
> > > On Sat, Jun 20, 2009 at 10:37 AM, Patipat Susumpow wrote:
>
> > > > Hi,
>
> > > > I can't verify my account by SMS from
> > > >http://appengine.google.com/permissions/smssend.do, tried many times
> > with
> > > > friends' mobile phone no, various supported operators in Thailand, but
> > > > always get "The phone number has been sent too many messages or has
> > already
> > > > been used to confirm an account." message.
>
> > > > I'm wondering that I never use this verification method before, but get
> > > > this error.
>
> > > > Thanks,
> > > > Patipat.
>
> > > --
> > > Nick Johnson, App Engine Developer Programs Engineer
> > > Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
> > Number:
> > > 368047
>
> --
> Nick Johnson, App Engine Developer Programs Engineer
> Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
> 368047



[google-appengine] Re: Application names disappear from Admin console, no new ones can be created, but the availability counter is nevertheless decreasing

2009-06-23 Thread stelg

Hi Nick,

Account X is an account I created following the instructions on
http://code.google.com/appengine/
My website is just stelg.appspot.com (no other domain name).
I log in at the "Welcome to Google App Engine" page, http://appengine.google.com/
This does not work: http://appengine.google.com/a/stelg.appspot.com
(where stelg.appspot.com stands in for "").
I can see and work normally with the Admin console for my first created
application, stelg. All the rest are gone, though I was able to view
them in the past...

Regards

Stelg


On 17 jun, 18:33, "Nick Johnson (Google)" 
wrote:
> Hi stelg,
>
> Is account 'X' an Apps for Your Domain account? If so, you need to log in 
> at http://appengine.google.com/a/ to see the apps.
>
> -Nick Johnson
>
>
>
>
>
> On Wed, Jun 17, 2009 at 5:08 PM, stelg  wrote:
>
> > Hi Folks,
> > I experience a strange situation, which needs some explanation as the
> > sequence of steps might have caused this problem.
>
> > Step 1: I created more than one year ago my first application in
> > Google App Engine using a Google App Acount (here) called X.
>
> > Step 2: I created a second, a third and a fourth AppEngine name using
> > account X, so total of 4 applications. All were displayed in the admin
> > console.
>
> > Step 3 : I created a new Google Account (called Y) --> This is a Gmail
> > account, not an App Engine account!
>
> > Step 4: I allowed account Y to become administrator of my first Google
> > App Engine application, So far so good: 2 admins!
>
> > Step 5:  Today I tried to create with account X (is admin) a new
> > Google App Engine application. The system accepts the new application
> > name  but when i finished this process, no new application name
> > appeared in the admin console (the screen you see when you login). The
> > number of available Google App Engine applications went anyhow down
> > from 10 to 9.
>
> > Another big surprise i got: all my other applications made with
> > account X (so the second, third and fourth mentioned in step 2) were
> > gone. As far as i know you cannot delete applications. I have
> > never found or seen such a function. My older application names are
> > not listed anymore. Only 1 application is shown and that was my first
> > from step1 !!
> > The application counter shows that I still have 9 applications
> > available... Weird...strange..strange..strange Lucky me these
> > apps were not the important ones...
>
> > Step 6: I retried to create again a new Google App Engine application
> > with again a new name, but no new things appear in the admin console,
> > nevertheless the counter went down again: The number of available
> > Google App Engine applications went down from 9 to 8.
>
> > Step 7: after 30-60 minutes waiting NO new Google App Engine
> > application name has appeared in the console. So what is happening
> > here?  Logging on with account Y does not show them (which would be
> > even more strange when that would have happened).
>
> > I am stuck here: 3 apps gone and no new one can be created.
> > Help!!!
>
> --
> Nick Johnson, App Engine Developer Programs Engineer
> Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
> 368047



[google-appengine] Re: Retrieving size of collection set

2009-06-23 Thread djidjadji

You can construct a __key__ only query and count them up.
Then you don't have to construct all the A objects just for counting.

# keys_only avoids instantiating the A entities; note that fetch()
# returns at most 1000 results per call.
result = A.all(keys_only=True).filter('refprop =', b.key()).fetch(1000)
numA = len(result)

2009/6/22 Nick Johnson (Google) :
> Hi johntray,
>
> On Sun, Jun 21, 2009 at 5:47 PM, johntray  wrote:
>>
>> Well yes I could add a count property to the B objects, but I'm really
>> trying to understand more about how ReferenceProperty works.
>
> ReferenceProperty's collection attributes are just syntactic sugar for
> creating and executing a query yourself. As such, all the same limitations
> apply. In the case where you call len() on it, I believe this will result in
> a count query being executed, which doesn't require Python to decode all the
> entities, but does require the datastore to fetch all the index rows.
> [snip]




[google-appengine] Re: GAE + Eclipse Plug-In + Subversion

2009-06-23 Thread Tony

I have not used this particular plugin but this might point you in the
right direction: Eclipse stores project information in .project (or
similar) file in your project directory (and maybe others, like
a .settings folder).  My guess is Subversion's default settings
ignore files that start with a . (dot) when committing changes (since
it stores data in .svn folders).  You can either:

a.) commit the Eclipse dot-files and do "import an existing project"
on your second workspace
b.) don't commit the dot-files, and create a new project on your
second workspace (and use the repository checkout as the initial
directory)

That should work, if I understand your question right.
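For option (a), the commands from the project root might look like this
(a sketch; whether a .settings folder exists depends on the project):

svn add .project .classpath .settings
svn commit -m "share Eclipse project metadata"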

On Jun 23, 7:40 am, stefan77  wrote:
> Hello,
>
> i need some help with a team setup for the GAE in Eclipse with
> Subversion.
> I've created a GAE Project in Eclipse and created a repository with
> the Eclipse Team function (Subversive Plug-in) on a Subversion-Server.
> Creation and commits work.
> But i cannot check it out on another workplace (also Eclipse+GAE
> Plugin is installed).
> I've tried to just check out the project but then the GAE Plugin does
> not detect the project as GAE supported.
> And when i first create a GAE Project and then checkout the sources it
> will also not work because of the source structure.
> Has someone experiences or ideas, how do i get it work?
>
> Regards,
> Stefan



[google-appengine] Re: Datastore design for high performance query / group membership / Venn diagram self joins

2009-06-23 Thread Tony

You could do something like this (I'm writing python instead of java
but you get the idea):

class User(db.Model):
  matchesInitiated = db.ListProperty(db.Key)
  matchesAttempted = db.ListProperty(db.Key)

class Match(db.Model):
  interesting_property = db.StringProperty()

The properties on User contain db.Keys of Match entities that the User
relates to.  When you want to get all matches for a user, you just
merge the two properties into one list of keys and call
db.get(list_of_keys).  No queries, even.
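Hypothetical usage, given a fetched User entity named user:

# Both lists hold Match keys, so one batch get returns all of them:
matches = db.get(user.matchesInitiated + user.matchesAttempted)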

On Jun 22, 6:31 pm, tiburondude 
wrote:
> Hi,
>
> I have an app with two entities that need (maybe) to interrelate in
> the typical sql join sense.
>
> Users
> -
> userId  Long
>
> Matches
> initiatingUserId Long
> attemptingUserId Long
> matched  boolean
>
> From my UI I can get the data populated properly but I end up with
> this in one row:
>
> initiatingUserId = 1
> attempingUserId = 2
> matched = true
>
> What I want is when user 1 goes to this section of the app, to get a
> list of all records with matched = true, where the currently logged in
> user is in EITHER initiatingUserId OR attemptingUserId.  This can't be
> done in the sql sense using a logical OR, since that's not supported
> in the datastore since it doesn't perform well.
>
> Of course I can do two queries and join the dataset, but this
> introduces some serious pain in the workaround regarding paging on a
> large dataset.  Not to mention two queries per request just sounds
> slow/wrong.
>
> I watched the excellent video from google IO 2009 that instructs us to
> think of these things as "group membership queries", and to do "venn
> diagram self joins".
>
> So I was thinking the solution may be to remodel the entities like so:
>
> - Remove the Matches entity completely
> - change the Users entity to this:
>
> Users:
> -
> List initiatingUserIds
> List attemptingUserIds
>
> I'm struggling to write the query that would then retrieve this data.
>
> Anyone have any ideas on this design?
>
> Thanks a bunch!
> David



[google-appengine] Re: Is "class Body(db.Model):" ok?

2009-06-23 Thread Tony

Have you added any Body entities to your datastore?  Sometimes I get
HTTP 500 errors in the console viewer for models with no existing
entities.

On Jun 23, 1:09 pm, Jesse Grosjean  wrote:
> I've just added a new model class to my app that's defined like this:
>
> class Body(db.Model):
>         content = db.TextProperty()
>
> It seems to be working fine in my server code, but for some reason it
> doesn't show up in the list of entities shown by the App Engine
> Console Data Viewer. Also when I run a direct query in the console
> for:
>
> SELECT * FROM Body
>
> I get a page that reports:
>
> Server Error
> A server error has occurred.
>
> Can someone help me figure out what is going on?
>
> Thanks,
> Jesse



[google-appengine] Re: Is "class Body(db.Model):" ok?

2009-06-23 Thread Jesse Grosjean

> Have you added any Body entities to your datastore?  Sometimes I get
> HTTP 500 errors in the console viewer for models with no existing
> entities.

Yes I've added a lot, and they seem to be working, I just don't see
them in the console. I also have an "Includes ancestors" index that's
serving Body entities.

Jesse



[google-appengine] Re: Testing Task Queue

2009-06-23 Thread Jeff S (Google)
Hi Stephen,

My last post contained a mistake: flush actually causes the tasks to be
removed, but they are not run. To actually run the tasks, click on the
queue name, then click the run button on the next page.

Thanks for the feedback about documentation on the dev server's admin
console. It is mentioned here but it is quite brief:

http://code.google.com/appengine/docs/python/tools/devserver.html#The_Development_Console

Thank you,

Jeff

On Mon, Jun 22, 2009 at 6:08 PM, Stephen Mayer wrote:

>
> Hey Jeff,
>
> Thanks much for the reply.  I'm wondering why the admin interface in
> my dev server isn't ever mentioned in the docs (that I have found thus
> far) ... it would have come in handy to know of its existence.
> Perhaps you might consider adding a mention of it at some point?
> Looks like i can browse my entities and reset my cache ... nice stuff
> to know about.  Some of these features you can't even do in
> production.
>
> Stephen
>
> On Jun 22, 1:55 pm, "Jeff S (Google)"  wrote:
> > Hi Stephen,
> > In the SDK dev server, the task queue's tasks must be triggered manually.
> > If you visit localhost:8080/_ah/admin/queues, you can see a list of queue
> > names with a "flush" button to cause all enqueued tasks in that queue to
> > be executed. Part of the reason for having a manual trigger for execution
> > is to prevent runaway scenarios as you describe. In the SDK you can step
> > through each "generation" of tasks and watch for endless or exponential
> > triggers.
> >
> > Happy coding,
> >
> > Jeff
> >
> > > On Sun, Jun 21, 2009 at 5:27 PM, Stephen Mayer wrote:
> >
> >
> >
> >
> >
> > > So now that we have the task queue ... how do we test it in our
> > > sandboxes?  Or perhaps I missed that part of the documentation ... can
> > > anyone clue me in on testing it in a place that is not production (I
> > > wouldn't want a queue to start some runaway process in production ...
> > > would much prefer to catch those cases in testing).
> >
> > > Thoughts?
> > > -Stephen
> >
>




[google-appengine] Re: jdo problem with field type BigDecimal and google app engine plugin

2009-06-23 Thread Jeff S (Google)

Hi Ronny,

Your suspicions are correct. At the moment, the datastore doesn't
natively support arbitrary precision floating point values; they are
limited to double precision. If you convert to a String, it may not be
possible to perform the same types of queries. However, if that is not
an issue, you could perform this conversion behind the scenes to store
a BigDecimal as a String using JDOInstanceCallback. Since this is
starting to get into some pretty JDO-for-App-Engine specific
territory, follow up discussion might be better suited to the Google
App Engine for Java discussion group:

http://groups.google.com/group/google-appengine-java

Happy coding,

Jeff

On Jun 22, 3:52 am, Ronny Bubke  wrote:
> When I use BigDecimal as a persistent type with JDO, I get strange
> behaviour.
>
> A value new BigDecimal("1.4") will change after restoring from the
> database to 1.39
>
> I suppose this is a bug because BigDecimal is stored as a float or
> double. But it should be stored as a String.
>
> In the whitelist BigDecimal is supported as persistent type. I don't
> know whether it will work on App Engine Server. Because of unit
> testing it should also work with the plugin.
>
> Maybe somebody can help me.
>
> Thx.



[google-appengine] Do static files count toward outbound traffic?

2009-06-23 Thread Mariano Benitez

I have some large static files, and given that App Engine does not use
ETags or another form of conditional check, it always returns the
entire files.

If static files count as part of the outgoing traffic quota (which I
think is reasonable), then you should either implement ETags or another
method of not returning the entire file when it has not been modified.

My other alternative is to implement my own handlers, move everything
to dynamic serving, and handle the 304 responses myself. I don't like
the idea, but I have no choice.

Regards,
Mariano
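A sketch of that dynamic workaround using the webapp framework (the
path, content type, and ETag value below are illustrative):

from google.appengine.ext import webapp

BIG_FILE_ETAG = '"v1"'  # hypothetical version tag for the file

class BigFileHandler(webapp.RequestHandler):
    def get(self):
        # If the client already holds this version, skip the body.
        if self.request.headers.get('If-None-Match') == BIG_FILE_ETAG:
            self.response.set_status(304)
            return
        self.response.headers['ETag'] = BIG_FILE_ETAG
        self.response.headers['Content-Type'] = 'application/octet-stream'
        self.response.out.write(open('static/big.bin', 'rb').read())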



[google-appengine] best way to know the number of rows for an entity kind

2009-06-23 Thread alf

hi,

assuming you have more than 1000 rows and no counter stored in an
entity, what is the best way to know how many rows an entity kind has?

I think it is better to request the number of rows per entity kind from
time to time and keep an incremental counter, always keeping inserts
and deletes in mind.

many thanks.



[google-appengine] Do cron jobs take more time to init than normal handlers?

2009-06-23 Thread Mariano Benitez

Hello,

Now that I got cron, I moved something I used to do in a normal
handler to use a cache and refresh every 5 minutes.

What I discovered now is that what used to take 400ms in the normal
handler is now taking 800+ms in the cron handler. (I do the exact same
thing, really)

I don't know whether cron handlers are being cached, or whether, since
I run it infrequently, I have to pay that price.

Thanks



[google-appengine] Using the Humanize part of Django templates

2009-06-23 Thread MajorProgamming

I am currently running an app that is ONLY using Django templates (not
the whole framework). I was wondering how I can use the
django.contrib.humanize package so I can use its specialized filters.
It mentions something about INSTALLED_APPS, but I have no clue how to
do that.

Thanks,
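One commonly suggested approach with the standalone webapp template
module is shown below (a sketch; it assumes the SDK's bundled Django
can import django.contrib.humanize):

from google.appengine.ext.webapp import template

# Make humanize's filters (intcomma, apnumber, ordinal, ...) available
# in templates rendered through webapp, without the full framework:
template.register_template_library(
    'django.contrib.humanize.templatetags.humanize')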



[google-appengine] Task Queue Back-off Times

2009-06-23 Thread Tony

I've looked through the source in the SDK but I can't find any code
relating to the back-off times when tasks fail (I think because the
SDK doesn't actually process queues automatically?).

Does anyone know what the back-off/retry times are for failed
attempts?  The documentation says "at worst, once a day" but I was
hoping for something a little more specific.  Thanks!



[google-appengine] Re: Task Queue API Users

2009-06-23 Thread Tony

I think (correct me if I'm wrong) that what Colin is saying is that if
User A is logged in, and performs an action on a page which enqueues a
task, and the task hits a webhook, the webhook should be able to
operate just as if User A had logged in, and hit the webhook url (so
users.get_current_user() should return the user that enqueued the
task).

The workaround seems pretty easy, though: just pass the required
information in the payload, e.g. "if user is None: user =
db.get(request.get('userkey'))" or "if user is None: username =
request.get('username')", or what have you.

Or maybe he's just saying you should be able to assign more granular
permissions like:

- url: /hook
  login: [admin, cron]

Or maybe I'm missing his point entirely :P
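A sketch of that payload workaround (using the labs import path from
this SDK generation; the /hook URL and 'email' parameter are
illustrative):

from google.appengine.api import users
from google.appengine.api.labs import taskqueue

# At enqueue time, capture the acting user explicitly:
user = users.get_current_user()
taskqueue.add(url='/hook', params={'email': user.email()})

# In the webhook, fall back to the payload when the task queue system
# (which carries no logged-in user) makes the request:
def acting_email(request):
    user = users.get_current_user()
    return user.email() if user else request.get('email')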


On Jun 23, 9:02 am, hawkett  wrote:
> Hi Nick,
>
>   Bug filed - http://code.google.com/p/googleappengine/issues/detail?id=1751
>
> > I'm not sure I see the problem - what user would you expect to see listed
> > when a webhook is being called by the cron or task queue system?
>
> The problem is that the handler code needs to have an understanding of
> the particular calling client.  This tightly couples the handler code
> to the calling mechanism.  It totally wrecks the idea that the protocol
> should allow loose coupling of the two end points.  From my
> perspective, that's bad architecture.  If I explicitly say I need a
> user (admin or otherwise) to access a URI, then the system should make
> sure that URI is not accessed unless there is a user.  Once you start
> introducing edge cases - 'It's true unless this, or unless that', the
> platform becomes 'clunky'. app.yaml is an interface contract, and
> currently asynch breaks that contract. That contract is far more
> important than one client's (GAE system) difficulty (which user?)
> conforming to it.  My 2c anyway.  Thanks,
>
> Colin
>
> On Jun 23, 10:46 am, "Nick Johnson (Google)" 
> wrote:
>
>
>
> > Hi hawkett,
>
> > The bug you found earlier, with Task Queue accesses returning 302s instead
> > of executing correctly, is definitely a bug in the dev_appserver. Can you
> > please file a bug on the issue tracker?
>
> > On Mon, Jun 22, 2009 at 11:18 PM, hawkett  wrote:
>
> > > Hi,
>
> > >   I've deployed an app to do some tests on live app engine, and the
> > > following code
>
> > > currentUser = users.get_current_user()
> > > if currentUser is not None:
> > >   logging.info("Current User - ID: %s, email: %s, nickname: %s" %
> > > (currentUser.user_id(), currentUser.email(), currentUser.nickname()))
>
> > > logging.info("is admin? %s" % users.is_current_user_admin())
>
> > > yields:  'is admin? False'
>
> > > as the total log output.  This is code that is run directly from a
> > > handler in app.yaml that specified - 'login:admin'
>
> > > This represents a pretty big problem - it means you can't rely on
> > > 'login:admin' to produce a user that is an admin.
>
> > On the contrary - only administrators and the system itself (eg, cron and
> > task queue services) will be able to access "login: admin" handlers.
> > However, when access is by a service, no user is specified, so
> > "is_current_user_admin()" will naturally return False, not because it's not
> > an admin access, but because there's no current user.
>
> > > I'm guessing that
> > > the goal of the Task Queue API is to be usable on generic URLs - e.g.
> > > in a RESTful application, the full CRUD (and more) functionality is
> > > exposed via a dynamic set of URLs that more than likely are not
> > > specifically for the Task Queue API - however the above situation
> > > means you really have to code explicitly for the Task Queue API,
> > > because the meaning of the directives in app.yaml is not reliable.  It
> > > looks like cron functionality works like this as well, and that has
> > > been around for a while.  Use cases such as write-behind outlined in
> > > Brett's IO talk are significantly limited by being unable to predict
> > > whether you will get a user or not (especially if you intend to hit
> > > RESTful URI that could just as easily be hit by real users).  Sure,
> > > there are ways to code around it, but it's not pretty.
>
> > I'm not sure I see the problem - what user would you expect to see listed
> > when a webhook is being called by the cron or task queue system?
>
> > -Nick Johnson
>
> > > I've added a defect to the issue tracker here -
> > >http://code.google.com/p/googleappengine/issues/detail?id=1742
>
> > > I'm keen to understand how google sees this situation, and whether the
> > > current situation is here to stay, or something short term to deliver
> > > the functionality early.  Cheers,
>
> > > Colin
>
> > > On Jun 22, 4:31 pm, "Nick Johnson (Google)" 
> > > wrote:
> > > > Hi hawkett,
>
> > > > My mistake. This sounds like a bug in the SDK - can you please file a
> > > bug?
>
> > > > -Nick Johnson
>
> > > > On Mon, Jun 22, 2009 at 4:25 PM, hawkett  wrote:
>
> > > > > Hi Nick,
>
> > > > > In my SDK (just the normal mac download), I can inspect the queue

[google-appengine] Re: newbie question: Alternatives of auto_increment

2009-06-23 Thread Tony

It depends on what exactly you need it for.  If you need it for the
total count, you can use the count() method on queries (slow) or
maintain a counter on another entity (good), or on a collection of
entity shards (better).  If you want to know how many entities of a
kind you've created ever (even if some are deleted), you can use the
key_name property to implement your own "auto_increment" policy.
Something like this (mind you this code is probably terrible :P):

class Counter(db.Model):
  count = db.IntegerProperty(default=0)

class Item(db.Model):
  some_prop = db.StringProperty()
  created_at = db.DateTimeProperty(auto_now_add=True)

  @classmethod
  def new_item(cls):
    def increment():
      # Run in a transaction so two callers can't get the same number.
      counter = Counter.get_by_key_name("item_counter")
      if counter is None:
        counter = Counter(key_name="item_counter")
      counter.count += 1
      counter.put()
      return counter.count
    count = db.run_in_transaction(increment)

    ## or, something like this, which wouldn't require you to maintain
    ## a counter (but requires a keys-only query and an index):
    # last = Item.all(keys_only=True).order('-created_at').get()
    # count = int(last.name().split("_")[1]) + 1 if last else 1

    return cls(key_name="num_" + str(count))
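Hypothetical usage; new_item() only constructs the entity, so the
caller still saves it:

item = Item.new_item()
item.some_prop = 'first item'
item.put()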


On Jun 22, 9:17 pm, Captain___nemo  wrote:
> Hi,
> I am new to Google App Engine.
>
> I am a PHP-MySQL developer. I have always used auto increment (integer
> type) in my databases. It helps with total counts, works as the primary
> key, and so on. But as far as I can see, auto_increment is not available
> in the datastore. I was wondering what alternative the datastore
> provides and recommends us to use?
>
> Thanks in advance.


