[google-appengine] Building Index stuck

2009-06-22 Thread dan

Hi, I've got a problem while building a new index for my application.
I need some help.
My application ID is deardayz.
Thanks.



[google-appengine] Efficient way to structure my data model

2009-06-22 Thread ecognium

Hi All,

I would like to get your opinion on the best way to structure my
data model.
My app allows users to filter entities by four category types
(say A, B, C, D). Each category type can have multiple values (e.g.,
category type A can have values 1, 2, 3), but the user can choose only
one value per category when filtering. Please note the values are
unique across the category types as well. I could create four fields
corresponding to the four types, but that would not let me easily
expand to more categories later. Right now, I just use one list field
to store the different values, as it is easy to add more category
types later on.

My model (simplified) looks like this:



class Example(db.Model):
    categ = db.StringListProperty()
    keywords = db.StringListProperty()


The keywords field will have about 10-20 values for each entity. In
the above example, categ will have up to 4 values. Since I allow
filtering on 4 category types, the index table gets large with
unnecessary values. The filtering logic looks like:
keyword = 'k' AND categ = '1' AND categ = '9' AND categ = '14' AND
categ = '99'
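In code, that filter might look like the following minimal sketch
(using the Example model above; the specific values are illustrative):

    from google.appengine.ext import db

    # Equality-only filters, including repeated filters on the same
    # list property; 'k', '1', '9', '14', '99' are illustrative values.
    q = Example.all()
    q.filter('keywords =', 'k')
    for value in ('1', '9', '14', '99'):
        q.filter('categ =', value)
    results = q.fetch(100)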

 Since there are 4 values in the categ list property, there will be
4^4 rows created in the index table (most of them will never be hit
due to the uniqueness guaranteed by design). Multiply that by the
number of values in the keywords list, and the index table gets large
very quickly.

I would like to avoid creating multiple fields if possible, because
when I want to increase the number of category types to six, I would
have to change the underlying model and all the filtering code. Any
suggestions on how to construct the model so that it allows for easy
expansion in category types yet does not create large index tables? I
know there is a CategoryProperty, but I'm not sure whether it provides
any specific benefit here.

Thanks!
-e



[google-appengine] Re: Efficient way to structure my data model

2009-06-22 Thread Nick Johnson (Google)
Hi ecognium,

If I understand your problem correctly, every entity will have 0-4 entries
in the 'categ' list, corresponding to the values for each of 4 categories
(e.g., Color, Size, Shape)?

The sample query you give, with only equality filters, will be satisfiable
using the merge join query planner, which doesn't require custom indexes, so
you won't have high indexing overhead. There will simply be one index entry
for each item in each list.

If you do need custom indexes, the number of index entries isn't 4^4, as
you suggest, but rather smaller. Assuming you want to be able to query with
any number of categories from 0 to 4, you'll need 3 or 4 custom indexes
(depending on whether the 0-category case requires its own index), and the
total number of index entries will be 4C1 + 4C2 + 4C3 + 4C4 = 4 + 6 + 4 + 1
= 15. For 6 categories, the number of entries would be 6 + 15 + 20 + 15 + 6
+ 1 = 63, which is still not an unreasonable number.

-Nick Johnson




-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Building Index stuck

2009-06-22 Thread Nick Johnson (Google)
Hi dan,

All your indexes are in 'serving' state. What problem are you having?

-Nick Johnson

On Mon, Jun 22, 2009 at 7:42 AM, dan danlee...@gmail.com wrote:


 Hi, I've got a problem while building a new index for my application.
 I need some help.
 My application ID is deardayz.
 Thanks.
 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Django Performance Version

2009-06-22 Thread Nick Johnson (Google)
Hi Stephen,

On Mon, Jun 22, 2009 at 4:21 AM, Stephen Mayer stephen.ma...@gmail.comwrote:


 If I want to use the new Django 1.x support do I replace the django
 install in the app engine SDK  ... or do I add it to my app as a
 module?  If I add it ... how do I prevent it from being uploaded with
 the rest of the app?


For how to use Django 1.0 in App Engine, see here:
http://code.google.com/appengine/docs/python/tools/libraries.html#Django
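In practice that boils down to a couple of lines at the top of your
handler script, before anything imports django (a sketch per the
linked doc; the version string is an assumption):

    from google.appengine.dist import use_library
    use_library('django', '1.0')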

I'm also wondering about Django performance.  Here was my test case:
 create a very basic app Django Patch ... display a page (no db
 reads ... just display a template)
 ... point mon.itor.us at it every 30 minutes ... latency is about
 1500-2000ms.  I assume it's because Django Patch zips up django into a
 package and the package adds overhead ... the first time it's hit the
 app server has to unzip it (or is it every time it's hit?)  Woah ...
 that seemed a bit high for my taste ... I want my app to be reasonably
 performant ... and that's not reasonable.


The first request to a runtime requires that the runtime be initialized, all
the modules loaded, etcetera. On top of that, as you point out, Django
itself has to be zipimported, which increases latency substantially. If the
ping every 30 minutes is the only traffic to your app, what you're seeing is
the worst-case latency, every single request. Using the built-in Django will
decrease latency substantially, but more significantly, requests that hit an
existing runtime (the vast majority of them, for a popular app) will see far
superior latencies, since they don't need to load anything.



 Try 2:
 create a very basic app displaying a template, use the built in django
 template engine but without any of the other django stuff ... use the
 GAE webapp as my framework.  response time is now down to 100-200ms on
 average, according to mon.itor.us.  I assume this would come down
 further if my app proved popular enough to keep it on a server for any
 length of time.

 I'm brand new to python, app engine and django ... I have about 10
 years of experience with PHP and am a pretty good developer in the PHP
 space.  I would like to work on GAE with some sense of what the best
 practices are for scalable and performant apps.

 Here are my conclusions based on my very simple research thus far:
 1) Django comes at a cost ... especially if you don't use the default
 install that comes built with the SDK.
 2) Best practice is probably to pick and choose Django components on
 GAE but use webapp as your primary framework.


This depends on what you want to achieve, and on personal preference.

-Nick Johnson



 Thoughts?  Am I off here?
 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Retrieving size of collection set

2009-06-22 Thread Nick Johnson (Google)
Hi johntray,

On Sun, Jun 21, 2009 at 5:47 PM, johntray john.tur...@gmail.com wrote:


 Well yes I could add a count property to the B objects, but I'm really
 trying to understand more about how ReferenceProperty works.


ReferenceProperty's collection attributes are just syntactic sugar for
creating and executing a query yourself. As such, all the same limitations
apply. In the case where you call len() on it, I believe this will result in
a count query being executed, which doesn't require Python to decode all the
entities, but does require the datastore to fetch all the index rows.
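A sketch of the alternatives being compared (model names assumed from
the thread):

    from google.appengine.ext import db

    class B(db.Model):
        pass

    class A(db.Model):
        b = db.ReferenceProperty(B)   # gives each B an 'a_set' query

    some_b = B.all().get()                    # any B, for illustration
    n_fetch = len(some_b.a_set.fetch(1000))   # decodes every A entity
    n_count = some_b.a_set.count()            # count query: index rows only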



 Nonetheless, there are a couple reasons I don't want to add a count
 property to the B objects:

  -- Every time I create or delete an A object, I would also have to
 make two additional datastore calls to read, modify, and write object
 B.

  -- And to make sure the counter stays in sync, I would need to use a
 datastore transaction for these operations. But right now, my A and B
 objects are not in the same entity group, which means (if I understand
 correctly) transactions are not supported. So I would have to do a
 full update to the existing datastore in order to add a reliable
 counter to object B.


If you want to be able to count objects without O(n) work, your only option
is to store a precalculated count of them.

-Nick Johnson
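A minimal sketch of such a precalculated count, assuming the A entities
are created as children of their B so that a single transaction covers
both writes (names assumed):

    from google.appengine.ext import db

    class B(db.Model):
        a_count = db.IntegerProperty(default=0)

    class A(db.Model):
        b = db.ReferenceProperty(B)

    def create_a(b_key):
        def txn():
            b = db.get(b_key)
            b.a_count += 1
            b.put()
            # parent=b_key keeps the new A in B's entity group, which
            # the transaction requires:
            A(parent=b_key, b=b_key).put()
        db.run_in_transaction(txn)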






 On Jun 21, 12:31 pm, Sylvain sylvain.viv...@gmail.com wrote:
  Could you add an int property that is updated with the len?
  So you only check this property?
 
  On 21 juin, 17:51, johntray john.tur...@gmail.com wrote:
 
 
 
   If I have two datastore object types, I'll call A and B, and A
   includes a ReferenceProperty to B, then objects of type B will have a
   back-reference property with default name a_set. Now if I just want
   the size of some B object's a_set, I could call len(a_set). To execute
   this, my understanding is that GAE will do a datastore query to
   retrieve the relevant A objects and pass them to the len() function.
   Since I don't need the A object contents, would it be more efficient
   to set up my own Query object and call Query.count() instead?
 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: 30 second request limit - a killer?

2009-06-22 Thread Nick Johnson (Google)
Hi Dominik,

As I understand it, the Compass support for App Engine currently stores the
entire index in local instance memory, which makes it impractical for use in
a production environment: you would have to rebuild your index for every new
runtime instance.

-Nick Johnson

On Sat, Jun 20, 2009 at 3:57 AM, Dominik Steiner 
dominik.j.stei...@googlemail.com wrote:


 Hi there,

 I have made my first steps with GAE on Java and it had been a pleasure
 to develop with the eclipse plugin for GWT and GAE. As the JDO query
 implementation of GAE is quite reduced, I used the Compass framework
 to work around that and it looked like it could get my app going.

 But as you can read in the following forum post

 http://forum.compass-project.org/thread.jspa?messageID=298249#298249

 I have run into problems that my data in the GAE database and the
 Compass cache is running out of sync. The solution from Compass side
 to trigger an indexing of the Compass cache is failing because that
 operation is taking more than 30 seconds and thus is throwing an
 error.

 So my questions are: have others run into the same problem and could
 fix it? what would be a workaround of the 30 second limit?

 I really would love to see my app running on GAE, but right now that
 problem is killing it.

 Anybody with some hints or ideas?

 Thanks

 Dominik
 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Model design for wiki like index pages

2009-06-22 Thread Nick Johnson (Google)
Hi Jesse,

On Sat, Jun 20, 2009 at 10:49 PM, Jesse Grosjean
je...@hogbaysoftware.comwrote:


 I have a wiki like app.

 The basic model is a page which has title(StringProperty) and a body
 (TextProperty) properties. It all seems to work well, but I'm not sure
 how well my model will scale. The problem I see is that I want to have
 an Index page, which lists all other pages.


Showing every single page in the wiki isn't going to scale very well from
either a datastore point of view, or a user-interface one - a list of
thousands of entries is not generally very useful to users. :)



 My concern is that when a model object is loaded in GAE, all property
 fields are also loaded from the store at the same time. That would
 seem to pose a problem for my app on index pages, because it would
 mean that when someone visits the index page, both the title (which I
 want) and the body (which I don't need) for all pages would need to be
 loaded from the store. Loading the body in this case seems wasteful,
 and possibly very problematic performance-wise on a site with many pages.


If you use the key_name functionality of the datastore, you can name Page
entities after their titles, which will allow you to retrieve pages with a
get operation instead of a query. It'll also mean you can do keys-only
queries to retrieve a list of pages matching a particular query, without
having to retrieve the page contents.
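A sketch of both ideas (the Page model and title are assumptions):

    from google.appengine.ext import db

    class Page(db.Model):          # key_name holds the page title
        body = db.TextProperty()

    page = Page.get_by_key_name('FrontPage')   # a get, not a query
    keys = db.GqlQuery("SELECT __key__ FROM Page").fetch(1000)
    titles = [k.name() for k in keys]          # bodies never loaded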



 My questions:

 1. Is this a problem that other people are worrying about; should I
 worry about it? I could solve the problem by dividing my page model
 into two separate models: one that contains the title and a reference
 to another model that contains the page body. That should make the
 index page scale, but it complicates the rest of the app. I'd prefer
 to avoid that route if possible.


That's also a possible approach, especially if you have other metadata you
often want to retrieve without retrieving the body of the page.




 2. Is there, or will there be in future, a way to specify that certain
 fields in a model are lazy-loaded, i.e. not fetched and returned in the
 initial query?


This is unlikely, since the entity is stored in the datastore as a single
encoded Protocol Buffer.

-Nick Johnson




 Thanks,
 Jesse
 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: List Property containing keys - performance question

2009-06-22 Thread Nick Johnson (Google)
Hi Morten,

On Sat, Jun 20, 2009 at 11:02 AM, Morten Bek Ditlevsen morten@gmail.com
 wrote:

 Hi Federico,

 Thanks for your answers - I'm just having a bit of a hard time figuring out
 which data store requests happen automatically.


The only case in which a datastore get or query is automatically executed
is when first dereferencing a ReferenceProperty. The collection property
that ReferenceProperty creates on the referenced object returns a query,
which you execute yourself (explicitly with .get() or .fetch(), or
implicitly by iterating over it or calling len() on it).
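A sketch of those execution paths (entity and collection names assumed;
each line below runs its own datastore query):

    q = referenced_entity.somekind_set   # building the query: no RPC yet
    one = q.get()                        # explicit: fetch a single result
    page = q.fetch(20)                   # explicit: fetch up to 20 results
    for item in q:                       # implicit: iteration executes it
        pass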



 I wondered because I had an error in the datastore:

   File "/base/data/home/apps/grindrservr/26.334331202299577521/main.py", line 413, in query
     if result in meStatus.blocks:
   File "/base/python_lib/versions/1/google/appengine/api/datastore_types.py", line 472, in __cmp__
     for elem in other.__reference.path().element_list():

 The 'blocks' property is just like the 'favorites' described in my previous
 mail - and 'result' is a value iterated over the results from a 'keys only'
 query.


The code there is iterating over the elements of each key and comparing
them, rather than iterating over results.

-Nick Johnson




 So I guess what I don't understand is why the datastore is in play here. I
 know that my results are probably from an iterator, but why is this
 necessary when you just query for keys?
 That's what caused me to think that the error might be related to the
 'blocks' list of keys...

 Sincerely,
 /morten



 On Sat, Jun 20, 2009 at 10:22 AM, Federico Builes 
 federico.bui...@gmail.com wrote:


 Morten Bek Ditlevsen writes:
   Hi there,
   I have an entity with a list property containing keys:
  
 favorites = db.ListProperty(db.Key, indexed=False)
  
   I suddenly came to wonder:
   If I check if a key is in the list like this:
  
   if thekey in user.favorites:
  
   will that by any chance try and fetch any entities in the
 user.favorites list?
  
   I don't think so, but I would like to make sure! :-)

 When you do 'foo in bar' it's actually calling Python methods, not
 datastore ops, and since Python sees favorites as a list of keys it
 should not fetch the entities.

 If you were to index this and do it on the datastore side (WHERE
 favorites = thekey) it might have to un-marshal the property and do a
 normal lookup, but I don't think the slowdown is noticeable.

 --
 Federico




 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: SMS verification trouble.

2009-06-22 Thread Nick Johnson (Google)
Hi Patipat,

I've manually activated your account.

-Nick Johnson

On Sat, Jun 20, 2009 at 10:37 AM, Patipat Susumpow keng...@gmail.comwrote:

 Hi,

 I can't verify my account by SMS from
 http://appengine.google.com/permissions/smssend.do. I have tried many
 times with friends' mobile phone numbers on various supported operators
 in Thailand, but I always get the "The phone number has been sent too
 many messages or has already been used to confirm an account." message.

 What puzzles me is that I have never used this verification method
 before, yet I get this error.

 Thanks,
 Patipat.
 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: No way to delete error data entry

2009-06-22 Thread Nick Johnson (Google)
Hi Charlie,

What do you mean by "not totally working"? Also, you may have better luck
asking this in the google-appengine-java group.

-Nick Johnson

On Sat, Jun 20, 2009 at 3:36 PM, Charlie Zhu zh.char...@gmail.com wrote:


 Thank you, Nick,

 I have written the code below with the low-level API to delete the
 entries. It runs without error but doesn't seem to be fully working.
 Thank goodness the data suddenly appeared in the Data Viewer and the
 problem resolved itself.

 The code is pasted here in the hope it is useful for others:

 import com.google.appengine.api.datastore.DatastoreService;
 import com.google.appengine.api.datastore.DatastoreServiceFactory;
 import com.google.appengine.api.datastore.Entity;
 import com.google.appengine.api.datastore.Query;

 public void doGet(HttpServletRequest req, HttpServletResponse resp)
         throws IOException {
     String tbname = req.getParameter("tbname");
     if (tbname != null) {
         DatastoreService datastore =
                 DatastoreServiceFactory.getDatastoreService();

         // Perform a query over the given kind and delete each entity.
         Query query = new Query(tbname);
         for (Entity taskEntity : datastore.prepare(query).asIterable()) {
             datastore.delete(taskEntity.getKey());
         }
     }
 }


 Regards,
 Charlie

 On Jun 17, 11:58 pm, Nick Johnson (Google) nick.john...@google.com
 wrote:
  Hi Charlie,
 
  Your easiest option here is probably to upload an alternate major version
 of
  your app with the old schema, and use that to retrieve and fix the faulty
  entit(y|ies). Alternate approaches include using the low level datastore
  API, or uploading a Python version that uses the low level API or
  db.Expando.
 
  -Nick Johnson
 
 
 
 
 
  On Wed, Jun 17, 2009 at 9:15 AM, Charlie Zhu zh.char...@gmail.com
 wrote:
 
   Hi,
 
   I have tried all ways I known to delete some schema changing caused
   error Entities and failed.
 
   1. Delete in the Data Viewer on the console.
   Data Viewer shows "No Data Yet".
 
   2. Delete by code.
   Below is part of the code:
      Query q = pm.newQuery(CDKFingerprint.class);
      List<CDKFingerprint> results2;
      results2 = (List<CDKFingerprint>) q.execute();
      pm.deletePersistentAll(results2);
   But that cause server error:
   java.lang.NullPointerException: Datastore entity with kind
   CDKFingerprint and key CDKMol(c=cc=cc=c)/CDKFingerprint(1) has a null
   property named bits_count.  This property is mapped to
   cdkhelper.CDKFingerprint.bits_count, which cannot accept null values.
   ...
   at org.datanucleus.jdo.JDOPersistenceManager.deletePersistentAll
   (JDOPersistenceManager.java:795)
   ...
 
   3. Assign values to the NULL field, then delete.
   The code:
      for (CDKFingerprint r : results2) {
          r.bits_count = 0;
          pm.makePersistent(r);
      }
   And server error again
   java.lang.NullPointerException: Datastore entity with kind
   CDKFingerprint and key CDKMol(c=cc=cc=c)/CDKFingerprint(1) has a null
   property named bits_count.  This property is mapped to
   cdkhelper.CDKFingerprint.bits_count, which cannot accept null values.
   ...
   at org.datanucleus.store.appengine.query.StreamingQueryResult
   $AbstractListIterator.hasNext(StreamingQueryResult.java:205)
   ...
 
   I'm out of ideas and hoping for help.
 
   Regards,
   Charlie
 
  --
  Nick Johnson, App Engine Developer Programs Engineer
  Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
 Number:
  368047
 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Tuples, Exploding Index, IO talk

2009-06-22 Thread Nick Johnson (Google)
Hi hawkett,

On Sat, Jun 20, 2009 at 3:05 PM, hawkett hawk...@gmail.com wrote:


 Hi,

   I was watching Brett's IO talk re. using 'Relational Index Tables',
 and there were a few hints of things in there, and I just wanted to
 check I got it all correctly -

 1.  Lists are good for tuples - a use case I see is an entity being
 tagged, and having a state within that tag - so the tuples might be
 ('tagA', 'PENDING') , ('tagB', 'ACCEPTED'), ('tagC', 'DENIED') etc. -
 so the list structures would be

 class Thing(db.Model):
  name = db.StringProperty()
  tags = db.ListProperty(str, default=[])
  states = db.ListProperty(str, default=[])

 with their contents tags = ['tagA', 'tagB', 'tagC'], states =
 ['PENDING', 'ACCEPTED', 'DENIED']

 and as data comes and goes you maintain both lists to ensure you
 record the correct state for the correct tag by matching their list
 position.


A much better approach is to use a single ListProperty, and serialize your
tuples to it - using Pickle, JSON, CSV, etc - whatever suits. If you want,
you can easily write a custom Datastore property class to make this easier.
This allows you to do everything you outlined below without extra effort.

-Nick Johnson
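A minimal sketch of that single-list approach, serializing each
(tag, state) pair to one string (a simple delimiter here; Pickle, JSON
or CSV work the same way; model and property names assumed):

    from google.appengine.ext import db

    class Thing(db.Model):
        name = db.StringProperty()
        tag_states = db.StringListProperty()   # items like 'tagA|PENDING'

    Thing(name='x', tag_states=['tagA|PENDING', 'tagB|ACCEPTED']).put()

    # One equality filter matches the whole tuple as a single index row:
    pending = Thing.all().filter('tag_states =', 'tagA|PENDING').fetch(20)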



 2.  Relational Index Tables are good for exploding index problems - so
 the query here might be -
 get me all the 'Things' which have 'tagA' and which are 'PENDING' in
 that tag - i.e. all records with the tuple ('tagA, 'PENDING'), which
 would be a composite index over two list properties - an exploding
 index.

 So assuming I've got the above right, I'm trying to work out a few
 things

 a.  Without relational index tables, what is the best way to construct
 the query - e.g.

 things = db.GqlQuery(
     "SELECT * FROM Thing "
     "WHERE tags = :1 AND states = :2", 'tagA', 'PENDING')

 which would get me anything that had 'tagA' at any point in the tags
 list, and anything that had a 'PENDING' at any point in the states
 list. This is potentially many more records than those that match the
 tuple.  So then I have to do an in-memory cull of those records
 returned and work out which ones actually conform to the tuple?  Just
 wondering if I am missing something here, because it seems like a
 great method for storing a tuple, but complex to query for that same
 tuple?

 b.  If I am going to use relational index tables, to avoid the
 exploding index that the above query could generate -

 class Thing(db.Model):
  name = db.StringProperty()

 class ThingTagIndex(db.Model):
  tags = db.ListProperty(str, default=[])

 class ThingStateIndex(db.Model):
  states = db.ListProperty(str, default=[])

 then am I right in thinking that my query would be performed as

 tagIndexKeys = db.GqlQuery(
     "SELECT __key__ FROM ThingTagIndex "
     "WHERE tags = :1", 'tagA')

 # All the things that have 'tagA' in their tags list
 thingTagKeys = [k.parent() for k in tagIndexKeys]

 stateIndexKeys = db.GqlQuery(
     "SELECT __key__ FROM ThingStateIndex "
     "WHERE states = :1 AND ANCESTOR IN :2", 'PENDING', thingTagKeys)

 # All the things that have both 'tagA' and 'PENDING' (but not
 necessarily as a tuple)
 thingKeys = [k.parent() for k in stateIndexKeys]

 things = db.get(thingKeys)

 # Oops - I need the lists to do the culling part of my tuple query
 from (a)

 So I have avoided the exploding index by performing two separate
 queries, but I could have achieved much the same result without the
 index tables - i.e. by performing separate queries and avoiding the
 composite index.  Just wondering if I am seeing the tuple situation
 correctly - i.e. there is no way to query them that doesn't require
 some in-memory culling?  Thanks,

 Colin

 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Is it possible to delete or rename a named task queue?

2009-06-22 Thread Jon McAlister

At the moment, no, but we will certainly be adding this eventually. In
the meantime I would encourage you to file this on our issue tracker.

On Mon, Jun 22, 2009 at 12:22 AM, czczer...@gmail.com wrote:

 I tried just removing the queue name from queue.yaml but the queue
 still seems to exist with a rate and bucket size set to 'paused'.
 If it isn't now, will it be possible to delete or rename queues in the
 future? The reason I ask is because I created a test queue with a
 crazy name and would rather not have to use it in the future.

 thanks,
 - Claude
 





[google-appengine] Re: file system links cause security exceptions

2009-06-22 Thread Nick Johnson (Google)
Hi Josh,

You'll probably have more luck asking this in the google-appengine-java
group.

-Nick Johnson

On Sat, Jun 20, 2009 at 2:07 AM, Josh Moore joshsmo...@gmail.com wrote:

 Hi,
 I am using JRuby and App Engine to run Rails. Because it is all Ruby
 code there is no need to compile anything, and all Rails developers are
 used to not needing to restart the server when code changes. We are also
 used to working with a different directory structure than the WAR file
 structure. So I thought I would be tricky and use a file system link to
 link my source code folder into the WAR file system I use to deploy on
 App Engine. However, this is where I run into a problem. With the source
 code in a linked directory I get this exception when Rails tries to
 reload the Ruby files:

 java.security.AccessControlException: access denied (java.io.FilePermission
 /Users/joshmoore/code/rails_turbine/app/views/layouts/** read)
  at
 java.security.AccessControlContext.checkPermission(AccessControlContext.java:323)
 at
 java.security.AccessController.checkPermission(AccessController.java:546)
  at java.lang.SecurityManager.checkPermission(SecurityManager.java:532)
 at
 com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager.checkPermission(DevAppServerFactory.java:122)
  at java.lang.SecurityManager.checkRead(SecurityManager.java:871)
 at java.io.File.isDirectory(File.java:752)
 at org.jruby.util.Dir.glob_helper(Dir.java:641)
 ...

 However, if I remove the linked directory and put the source code in a
 regular folder it runs fine and will happily reload the Ruby files when
 there is a change. I am wondering: is this expected behavior, or is
 this a bug in the dev_appserver?

 Thanks,

 Josh

 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Just released: Python SDK 1.2.3

2009-06-22 Thread Jon McAlister

On Sat, Jun 20, 2009 at 5:00 PM, Thomaswinning...@gmail.com wrote:

 Some initial thoughts using the task queue api:

 1. It is very easy to create a chain reaction if you don't know what
 you are doing :P

Indeed it is :-P

 2. Using the queues with dev_appserver.py is very nice, since you can
 test things out and see how things get queued.

 3. Would like to see flush queue option (or something) in the
 production server, as well as to look at the queue.

We don't have this right now, but will certainly add it eventually.
In the meantime I would encourage you to file this on the issue
tracker.

 4. My (horrible) first try at queues with production data spawned a
 lot of tasks, most of which now I wish I could just remove and start
 over.

One thing you can do is pause the queue. Another is to push a new
version of your app that has a very simple handler for the URL the
tasks are using; that way it will quickly eat through all of the
dangling tasks.
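A sketch of such a drain version (framework and URL pattern assumed):

    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class DrainHandler(webapp.RequestHandler):
        def post(self):
            pass  # return 200 so each task is marked done and dropped

    # '/tasks/.*' is a hypothetical pattern; use your tasks' URL.
    application = webapp.WSGIApplication([('/tasks/.*', DrainHandler)])

    def main():
        run_wsgi_app(application)

    if __name__ == '__main__':
        main()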

 5. It seemed like I generated 10x the tasks I was expecting; not sure
 if that is my mistake, but it didn't seem to have this order of
 magnitude when I tried with development data, so I am not sure if that
 is my fault or what.

 6. Currently my queue is stuck and not progressing, again, not sure if
 that is my fault or not.

 Thanks again, the API itself it drop dead simple and fun.

Glad to hear!




[google-appengine] Re: Reference Properties in GQL Queries

2009-06-22 Thread Nick Johnson (Google)
Hi Alfonso,

No, this is not possible, as it would require an implicit join, which the
datastore does not support. If the name is unique, though, you can use it as
your key_name for the entities in question, in which case you can do a
simple get instead of a query.

-Nick Johnson
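A sketch of that pattern (kind and property names taken from the quoted
question below; 'Referenced' is an assumed name):

    from google.appengine.ext import db

    class Referenced(db.Model):        # key_name holds the unique name
        pass

    class SomeTable(db.Model):
        referenceprop = db.ReferenceProperty(Referenced)

    ref = Referenced.get_by_key_name('yay')   # a get, not a query
    rows = []
    if ref:
        rows = SomeTable.all().filter('referenceprop =', ref).fetch(20)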

On Fri, Jun 19, 2009 at 8:37 PM, Alfonso Lopez leb...@gmail.com wrote:


 Is there any way to give a WHERE conditional values from a reference
 property? Something to this effect:

 SELECT * FROM some_table WHERE referenceprop.name='yay'

 Where 'name' is a property of the referenced Kind?

 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Getting the ID of an entity in PostCallHook

2009-06-22 Thread Nick Johnson (Google)
Hi Frederico,

In the post-call hook, the response variable will be filled in with the RPC
response, which in the case of a Datastore put will be a list of keys of the
stored entities. If you want to step through them in sync with the entities,
this code may prove useful:

  def post_hook(service, call, request, response):
if call == 'Put':
  for key, entity in zip(response.key_list(), request.entity_list()):
# Do something with key, entity.

-Nick Johnson

On Fri, Jun 19, 2009 at 7:52 PM, Federico Builes
federico.bui...@gmail.comwrote:


 I'm using a post hook for datastore puts and I'd like to be able to access
 the id of an entity after
 it's saved. Although I can access all the properties of the entity from the
 PB, I don't have any
 idea of how to access the key, and the only relevant document I've found
 (http://code.google.com/appengine/articles/hooks.html) does not really go
 deep enough into the
 subject.
 Any suggestions? Below is the code for the hook I'm using:

 def post_hook(service, call, request, response):
     if call == 'Put':
         for entity in request.entity_list():
             # I'd like to get the key of the entity here

 apiproxy_stub_map.apiproxy.GetPostCallHooks().Append('post_hook',
     post_hook, 'datastore_v3')

 --
 Federico Builes

 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: howTo write tag with EL support?

2009-06-22 Thread Serega.Sheypak

I can't do that, because the App Engine web.xml declares the 2.3 DTD
version, and it's impossible to use the 2.4 version in the Java App
Engine environment.
I will try these solutions, found on forum.sun.com:
http://forums.sun.com/thread.jspa?threadID=625802

P.S.
Configuration is bad. Convention is better... :(

On Jun 21, 7:44 pm, Serega.Sheypak serega.shey...@gmail.com wrote:
 Hello. I have a problem writing a custom JSP tag with an attribute
 which accepts EL (Expression Language).

 Here is the TLD:
 <?xml version="1.0" encoding="UTF-8"?>
 <taglib version="2.0" xmlns="http://java.sun.com/xml/ns/j2ee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee web-jsptaglibrary_2_0.xsd">

         <tlib-version>1.0</tlib-version>
         <short-name>PrizeTags</short-name>
         <uri>PrizeTags</uri>

         <tag>
                 <name>error</name>
                 <tagclass>ru.develbureau.server.tag.OutTag</tagclass>
                 <bodycontent>empty</bodycontent>
                 <info>Prints if something exists</info>
                 <attribute>
                         <name>value</name>
                         <required>true</required>
                         <rtexprvalue>true</rtexprvalue>
                 </attribute>
         </tag>
 </taglib>

 Here is the tag code:
 public class OutTag extends SimpleTagSupport {
         private static final long serialVersionUID = 1L;
         String val = null;

         public void doTag() throws JspException {
                 try {
                         PageContext pageContext = (PageContext) getJspContext();
                         JspWriter out = pageContext.getOut();
                         if (val != null) {
                                 out.println(val);
                                 System.out.println("val - [" + val + "]");
                         }
                 } catch (Exception e) {
                         System.out.println("doStartTag - [" + e.getMessage() + "]");
                 }
         }

         public void setValue(Object value) {
                 System.out.println("setValue - [" + value + "]");
                 if (value != null && value instanceof String) {
                         String t = (String) value;
                         if (t.trim().length() > 3) {
                                 val = t;
                         }
                 }
         }
 }

 Here is the output:
 setValue - [${pageScope.clientRequest.name}]
 val - [${pageScope.clientRequest.name}]

 setValue - [${clientRequest.name}]
 val - [${clientRequest.name}]

 So it doesn't want to evaluate the incoming EL.

 Here is the usage:
   <jsp:useBean id="clientRequest"
                scope="page"
                type="ru.develbureau.client.model.ClientRequestTO"
                class="ru.develbureau.client.model.ClientRequestTO">
         <jsp:setProperty name="clientRequest" property="*" />
   </jsp:useBean>

 <!-- some code... -->
 <input type="text" class="wideInput" name="name"
        value="<prize:out value='${pageScope.clientRequest.name}' />" />
 OR
 <input type="text" class="wideInput" name="name"
        value="<prize:out value='${clientRequest.name}' />" />

 NOTHING HELPS.

 It just prints ${clientRequest.name}; it doesn't want to evaluate the
 expression.



[google-appengine] Re: Request scope values

2009-06-22 Thread Nick Johnson (Google)
Hi cgcoder,

If you're using the built-in framework, the RequestHandler subclass that
gets invoked to handle your request is instantiated only for the current
request, so you can use a class member to store any per-request information.
You can also use a global variable, if you wish, as long as you clean it out
at the end (or beginning) of the request.

-Nick Johnson
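A sketch of the per-instance approach (handler and attribute names
assumed):

    from google.appengine.ext import webapp

    class MyHandler(webapp.RequestHandler):
        def get(self):
            # Instance attributes live only for this request, since
            # webapp instantiates a fresh handler per request:
            self.error = None
            self.do_work()
            if self.error:
                self.response.out.write('Error: %s' % self.error)

        def do_work(self):
            self.error = 'something went wrong'   # illustrative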

On Fri, Jun 19, 2009 at 3:15 PM, cgcoder gopinath.ch...@gmail.com wrote:


 Hi all,

   In a Java web application, we can have request-scoped variables,
 which exist only during the lifetime of the request. Is there
 something similar to that in the App Engine (webapp/Python) framework?

   Basically, what I am trying to do is create a generic error page
 which needs to display the error based on what is set in the request
 variable. I am not sure how I can do this with Python on App Engine.

  Please help me out.

 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: App exceeding a quota: Workflow Backend Index Task Count

2009-06-22 Thread vivpuri

Checking on the Billing Settings, my Free Quota for CPU is showing up
as 6.5 hours. I think it was higher than that earlier.

-V



[google-appengine] Re: App exceeding a quota: Workflow Backend Index Task Count

2009-06-22 Thread vivpuri

Okay, the Google docs page on Quotas
(http://code.google.com/appengine/docs/quotas.html) says that every app
should have 46 CPU-hours of free CPU quota. Is this reduction in free
CPU hours a planned/announced change?

-V



[google-appengine] Re: App exceeding a quota: Workflow Backend Index Task Count

2009-06-22 Thread Nick Johnson (Google)
Hi vivpuri,

Yes, the reduction is planned. There's a note at the top of the page you
linked to stating that, and a section at the bottom that lists the new
limits.

-Nick Johnson

On Mon, Jun 22, 2009 at 12:17 PM, vivpuri vivpu...@gmail.com wrote:


  Okay, the Google docs page on Quotas
  (http://code.google.com/appengine/docs/quotas.html) says that every app
  should have 46 CPU-hours of free CPU quota. Is this reduction in free
  CPU hours a planned/announced change?

 -V
 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: App exceeding a quota: Workflow Backend Index Task Count

2009-06-22 Thread vivpuri

Thanks for the update Nick.

Unfortunately, this change almost kills my budget :(

-V

On Jun 22, 7:22 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi vivpuri,

 Yes, the reduction is planned. There's a note at the top of the page you
 linked to stating that, and a section at the bottom that lists the new
 limits.

 -Nick Johnson

 On Mon, Jun 22, 2009 at 12:17 PM, vivpuri vivpu...@gmail.com wrote:

   Okay, the Google docs page on Quotas
   (http://code.google.com/appengine/docs/quotas.html) says that every app
   should have 46 CPU-hours of free CPU quota. Is this reduction in free
   CPU hours a planned/announced change?

  -V

 --
 Nick Johnson, App Engine Developer Programs Engineer
 Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
 368047



[google-appengine] Re: Tuples, Exploding Index, IO talk

2009-06-22 Thread hawkett

Thanks Nick,

   I just wanted to make sure I wasn't missing something wrt tuple
handling and lists - your solution sounds good - cheers,

Colin

On Jun 22, 11:10 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi hawkett,




 A much better approach is to use a single ListProperty, and serialize your
 tuples to it - using Pickle, JSON, CSV, etc - whatever suits. If you want,
 you can easily write a custom Datastore property class to make this easier.
 This allows you to do everything you outlined below without extra effort.

 -Nick Johnson








[google-appengine] Query with 1000 matches

2009-06-22 Thread herbie

I know that if there are more than 1000 entities that match a query,
then only 1000 will be returned by fetch(). But my question is: which
1000? The last 1000 added to the datastore? The first 1000 added to
the datastore? Or is it undefined?

Thanks
Ian




[google-appengine] Re: Tuples, Exploding Index, IO talk

2009-06-22 Thread hawkett

This actually leads to another question that has been on my mind - how
reliable is a key's string representation?  If one part of my tuple is
a key, can I store its hash and reliably turn that into the actual
key for the object at a (much) later date, or do I need to store the
key_name and path?  Put another way, does Google guarantee that the
Key hashing algorithm will never change?  Is the hash produced for a
given key_name and path guaranteed to be the same on both the SDK and
the live environment?

Just wanting to get a clear idea of the best approach to serialising
keys.  Thanks,

Colin

On Jun 22, 11:10 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi hawkett,




 A much better approach is to use a single ListProperty, and serialize your
 tuples to it - using Pickle, JSON, CSV, etc - whatever suits. If you want,
 you can easily write a custom Datastore property class to make this easier.
 This allows you to do everything you outlined below without extra effort.

 -Nick Johnson





  2.  Relational Index Tables are good for exploding index problems - so
  the query here might be -
  get me all the 'Things' which have 'tagA' and which are 'PENDING' in
  that tag - i.e. all records with the tuple ('tagA, 'PENDING'), which
  would be a composite index over two list properties - an exploding
  index.

  So assuming I've got the above right, I'm trying to work out a few
  things

  a.  Without relational index tables, what is the best way to construct
  the query - e.g.

  things = db.GqlQuery(
   SELECT * FROM Thing 
   WHERE tags = :1 AND states = :2, 'tagA', 'PENDING')

  which would get me anything that had 'tagA' at any point in the tags
  list, and anything that had a 'PENDING' at any point in the states
  list. This is potentially many more records than those that match the
  tuple.  So then I have to do an in-memory cull of those records
  returned and work out which ones actually conform to the tuple?  Just
  wondering if I am missing something here, because it seems like a
  great method for storing a tuple, but complex to query for that same
  tuple?

  b.  If I am going to use relational index tables, to avoid the
  exploding index that the above query could generate -

  class Thing(db.Model):
      name = db.StringProperty()

  class ThingTagIndex(db.Model):
      tags = db.ListProperty(str, default=[])

  class ThingStateIndex(db.Model):
      states = db.ListProperty(str, default=[])

  then am I right in thinking that my query would be performed as

  tagIndexKeys = db.GqlQuery(
      "SELECT __key__ FROM ThingTagIndex "
      "WHERE tags = :1", 'tagA')

  # All the things that have 'tagA' in their tags list
  thingTagKeys = [k.parent() for k in tagIndexKeys]

  stateIndexKeys = db.GqlQuery(
      "SELECT __key__ FROM ThingStateIndex "
      "WHERE states = :1 AND ANCESTOR IN :2", 'PENDING', thingTagKeys)

  # All the things that have both 'tagA' and 'PENDING' (but not
  necessarily as a tuple)
  thingKeys = [k.parent() for k in stateIndexKeys]

  things = db.get(thingKeys)

  # Oops - I need the lists to do the culling part of my tuple query
  from (a)

  So I have avoided the exploding index by performing two separate
  queries, but I could have achieved much the same result without the
  index tables - i.e. by performing separate queries and avoiding the
  composite index.  Just wondering if I am seeing the tuple situation
  correctly - i.e. there is no way to query them that doesn't require
  some in-memory culling?  Thanks,

  Colin

 --
 Nick Johnson, App Engine Developer Programs Engineer
 Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
 368047

[google-appengine] Re: Tuples, Exploding Index, IO talk

2009-06-22 Thread Nick Johnson (Google)
Hi hawkett,

I presume by key hash, you mean the string obtained by stringifying a Key
object (eg, str(key)). This is not a hash, but rather a base64 encoding of
the Key Protocol Buffer. We don't necessarily guarantee this encoding scheme
will not change (though it's rather unlikely), but we do guarantee that
db.Key(str(key)) == key - that is, that you'll always be able to reconstruct
a key from its string form.

-Nick Johnson
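
For example, a minimal sketch of that guaranteed round trip (the kind
and key_name here are made up):

from google.appengine.ext import db

key = db.Key.from_path('Thing', 'thing1')
serialized = str(key)          # base64-encoded Key protocol buffer
restored = db.Key(serialized)  # reconstruct at any later date
assert restored == key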

On Mon, Jun 22, 2009 at 1:48 PM, hawkett hawk...@gmail.com wrote:


 This actually leads to another question that has been on my mind - how
 reliable is a key's string representation?  If one part of my tuple is
 a key, can I store its hash and reliably turn that into the actual
 key for the object at a (much) later date, or do I need to store the
 key_name and path?  Put another way, does Google guarantee that the
 Key hashing algorithm will never change?  Is the hash produced for a
 given key_name and path guaranteed to be the same on both the SDK and
 the live environment?

 Just wanting to get a clear idea of the best approach to serialising
 keys.  Thanks,

 Colin

 On Jun 22, 11:10 am, Nick Johnson (Google) nick.john...@google.com
 wrote:
  Hi hawkett,
 
 
 
  On Sat, Jun 20, 2009 at 3:05 PM, hawkett hawk...@gmail.com wrote:
 
   Hi,
 
 I was watching Brett's IO talk re. using 'Relational Index Tables',
   and there were a few hints of things in there, and I just wanted to
   check I got it all correctly -
 
   1.  Lists are good for tuples - a use case I see is an entity being
   tagged, and having a state within that tag - so the tuples might be
   ('tagA', 'PENDING') , ('tagB', 'ACCEPTED'), ('tagC', 'DENIED') etc. -
   so the list structures would be
 
   class Thing(db.Model):
name = db.StringProperty()
tags = db.ListProperty(str, default=[])
states = db.ListProperty(str, default=[])
 
   with their contents tags = ['tagA', 'tagB', 'tagC'], states =
   ['PENDING', 'ACCEPTED', 'DENIED']
 
   and as data comes and goes you maintain both lists to ensure you
   record the correct state for the correct tag by matching their list
   position.
 
  A much better approach is to use a single ListProperty, and serialize
 your
  tuples to it - using Pickle, JSON, CSV, etc - whatever suits. If you
 want,
  you can easily write a custom Datastore property class to make this
 easier.
  This allows you to do everything you outlined below without extra effort.
 
  -Nick Johnson
 
 
 
 
 
   2.  Relational Index Tables are good for exploding index problems - so
   the query here might be -
   get me all the 'Things' which have 'tagA' and which are 'PENDING' in
   that tag - i.e. all records with the tuple ('tagA, 'PENDING'), which
   would be a composite index over two list properties - an exploding
   index.
 
   So assuming I've got the above right, I'm trying to work out a few
   things
 
   a.  Without relational index tables, what is the best way to construct
   the query - e.g.
 
    things = db.GqlQuery(
        "SELECT * FROM Thing "
        "WHERE tags = :1 AND states = :2", 'tagA', 'PENDING')
 
   which would get me anything that had 'tagA' at any point in the tags
   list, and anything that had a 'PENDING' at any point in the states
   list. This is potentially many more records than those that match the
   tuple.  So then I have to do an in-memory cull of those records
   returned and work out which ones actually conform to the tuple?  Just
   wondering if I am missing something here, because it seems like a
   great method for storing a tuple, but complex to query for that same
   tuple?
 
   b.  If I am going to use relational index tables, to avoid the
   exploding index that the above query could generate -
 
    class Thing(db.Model):
        name = db.StringProperty()

    class ThingTagIndex(db.Model):
        tags = db.ListProperty(str, default=[])

    class ThingStateIndex(db.Model):
        states = db.ListProperty(str, default=[])
 
   then am I right in thinking that my query would be performed as
 
    tagIndexKeys = db.GqlQuery(
        "SELECT __key__ FROM ThingTagIndex "
        "WHERE tags = :1", 'tagA')

    # All the things that have 'tagA' in their tags list
    thingTagKeys = [k.parent() for k in tagIndexKeys]

    stateIndexKeys = db.GqlQuery(
        "SELECT __key__ FROM ThingStateIndex "
        "WHERE states = :1 AND ANCESTOR IN :2", 'PENDING', thingTagKeys)
 
   # All the things that have both 'tagA' and 'PENDING' (but not
   necessarily as a tuple)
   thingKeys = [k.parent() for k in stateIndexKeys]
 
   things = db.get(thingKeys)
 
   # Oops - I need the lists to do the culling part of my tuple query
   from (a)
 
   So I have avoided the exploding index by performing two separate
   queries, but I could have achieved much the same result without the
   index tables - i.e. by performing separate queries and avoiding the
   composite index.  Just wondering if I am seeing the tuple situation
   correctly - i.e. there is no way to query them that doesn't require
   some in-memory culling? 

[google-appengine] Re: Query with 1000 matches

2009-06-22 Thread Nick Johnson (Google)
Hi herbie,

The first 1000 results of a query are the ones returned. If you do not
specify a sort order, entities are returned sorted by their keys.

-Nick Johnson

On Mon, Jun 22, 2009 at 1:42 PM, herbie 4whi...@o2.co.uk wrote:


 I know that if there are more than 1000 entities that match a query,
 then only 1000 will be returned by fetch().  But my question is: which
 1000? The last 1000 added to the datastore?  The first 1000 added to
 the datastore? Or is it undefined?

 Thanks
 Ian

 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Query with 1000 matches

2009-06-22 Thread herbie


So to be sure to get the latest 1000 entities, I should add a datetime
property to my entity model and filter and sort on that?
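
Something like this sketch, say, with a hypothetical Reading model:

from google.appengine.ext import db

class Reading(db.Model):
    value = db.FloatProperty()
    created = db.DateTimeProperty(auto_now_add=True)

# the 1000 most recently added entities:
latest = Reading.all().order('-created').fetch(1000)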



On Jun 22, 1:42 pm, herbie 4whi...@o2.co.uk wrote:
 I know that if there are more than 1000 entities that match a query,
 then only 1000 will be returned by fetch().  But my question is: which
 1000? The last 1000 added to the datastore?  The first 1000 added to
 the datastore? Or is it undefined?

 Thanks
 Ian



[google-appengine] Re: Query with 1000 matches

2009-06-22 Thread Nick Johnson (Google)
Correct. Are you sure you need 1000 entities, though? Your users probably
won't read through all 1000.

-Nick Johnson

On Mon, Jun 22, 2009 at 3:23 PM, herbie 4whi...@o2.co.uk wrote:



 So to be sure to get the latest 1000 entities, I should add a datetime
 property to my entity model and filter and sort on that?



 On Jun 22, 1:42 pm, herbie 4whi...@o2.co.uk wrote:
  I know that if there are more than 1000 entities that match a query,
  then only 1000 will be returned by fetch().  But my question is: which
  1000? The last 1000 added to the datastore?  The first 1000 added to
  the datastore? Or is it undefined?
 
  Thanks
  Ian
 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Query with 1000 matches

2009-06-22 Thread herbie

No, the users won't need to read 1000 entities, but I want to calculate
the average of a property from the latest 1000 entities.


On Jun 22, 3:30 pm, Nick Johnson (Google) nick.john...@google.com
wrote:
 Correct. Are you sure you need 1000 entities, though? Your users probably
 won't read through all 1000.

 -Nick Johnson



 On Mon, Jun 22, 2009 at 3:23 PM, herbie 4whi...@o2.co.uk wrote:

  So to be sure to get the latest 1000 entities, I should add a datetime
  property to my entity model and filter and sort on that?

  On Jun 22, 1:42 pm, herbie 4whi...@o2.co.uk wrote:
   I know that if there are more than 1000 entities that match a query,
   then only 1000 will be returned by fetch().  But my question is: which
   1000? The last 1000 added to the datastore?  The first 1000 added to
   the datastore? Or is it undefined?

   Thanks
   Ian

 --
 Nick Johnson, App Engine Developer Programs Engineer
 Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
 368047



[google-appengine] Re: Query with 1000 matches

2009-06-22 Thread Nick Johnson (Google)
Consider precalculating this data and storing it against another entity.
This will save a lot of work on requests.

-Nick Johnson
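
For instance, a sketch using a hypothetical Stats entity updated at
write time; an exact "latest 1000" average would need more bookkeeping
than this simple running total:

from google.appengine.ext import db

class Stats(db.Model):
    total = db.FloatProperty(default=0.0)
    count = db.IntegerProperty(default=0)

def record(value):
    def txn():
        stats = Stats.get_by_key_name('global')
        if stats is None:
            stats = Stats(key_name='global')
        stats.total += value
        stats.count += 1
        stats.put()
    db.run_in_transaction(txn)

# on read: stats.total / stats.count, with no 1000-entity fetch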

On Mon, Jun 22, 2009 at 3:55 PM, herbie 4whi...@o2.co.uk wrote:


 No, the users won't need to read 1000 entities, but I want to calculate
 the average of a property from the latest 1000 entities.


 On Jun 22, 3:30 pm, Nick Johnson (Google) nick.john...@google.com
 wrote:
  Correct. Are you sure you need 1000 entities, though? Your users probably
  won't read through all 1000.
 
  -Nick Johnson
 
 
 
  On Mon, Jun 22, 2009 at 3:23 PM, herbie 4whi...@o2.co.uk wrote:
 
   So to be sure to get the latest 1000 entities, I should add a datetime
   property to my entity model and filter and sort on that?
 
   On Jun 22, 1:42 pm, herbie 4whi...@o2.co.uk wrote:
I know that if there are more than 1000 entities that match a query,
then only 1000 will be returned by fetch().  But my question is: which
1000? The last 1000 added to the datastore?  The first 1000 added to
the datastore? Or is it undefined?
 
Thanks
Ian
 
  --
  Nick Johnson, App Engine Developer Programs Engineer
  Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
 Number:
  368047
 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Task Queue API Users

2009-06-22 Thread Nick Johnson (Google)
Hi hawkett,

In the current release of the SDK, the Task Queue stub simply logs tasks to
be executed, and doesn't actually execute them. How are you executing these
tasks?

-Nick Johnson

On Mon, Jun 22, 2009 at 3:46 PM, hawkett hawk...@gmail.com wrote:


 Hi,

   I'm running into some issues trying to use the Task Queue API with
  restricted-access URLs defined in app.yaml - when a URL is defined as
  either 'login: admin' or 'login: required', the task receives a 302
  when it fires - which I assume is a redirect to the login page.  I'm
 just running this on the SDK at the moment, but I was expecting at
 least the 'login: admin' url to work, based on the following comment
 from this page
 http://code.google.com/appengine/docs/python/taskqueue/overview.html

 'If a task performs sensitive operations (such as modifying important
 data), the developer may wish to protect the worker URL to prevent a
 malicious external user from calling it directly. This is possible by
 marking the worker URL as admin-only in the app configuration.'

 I figure I'm probably doing something dumb, but I had expected the
 tasks to be executed as some sort of system user, so that either
 'login: required' or 'login: admin' would work - perhaps even being
 able to specify the email and nickname of the system user as app.yaml
 configuration.  Another alternative would be if there was a mechanism
 to create an auth token to supply when the task is created.  e.g.
 users.current_user_auth_token() to execute the task as the current
 user.

 So I guess the broader question is - where does the task queue get the
 'run_as' user, or if there isn't one, what's the mechanism for hitting
 a 'login: admin' worker URL?

 Most apps should be able to expect a call to users.get_current_user()
 to return a user object in code protected by 'login: admin'.

 Thanks,

 Colin
 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047
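
The protection the quoted docs describe would look something like this
app.yaml sketch (the worker path and script name are made up):

handlers:
- url: /tasks/worker
  script: worker.py
  login: admin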




[google-appengine] Re: Task Queue API Users

2009-06-22 Thread Nick Johnson (Google)
Hi hawkett,

My mistake. This sounds like a bug in the SDK - can you please file a bug?

-Nick Johnson

On Mon, Jun 22, 2009 at 4:25 PM, hawkett hawk...@gmail.com wrote:


 Hi Nick,

 In my SDK (just the normal mac download), I can inspect the queue in
 admin console, and have a 'run' and 'delete' button next to each task
 in the queue.  When I press 'run', the task fires, my server receives
 the request, and returns the 302.

 Colin

 On Jun 22, 4:15 pm, Nick Johnson (Google) nick.john...@google.com
 wrote:
  Hi hawkett,
 
  In the current release of the SDK, the Task Queue stub simply logs tasks
 to
  be executed, and doesn't actually execute them. How are you executing
 these
  tasks?
 
  -Nick Johnson
 
 
 
  On Mon, Jun 22, 2009 at 3:46 PM, hawkett hawk...@gmail.com wrote:
 
   Hi,
 
 I'm running into some issues trying to use the Task Queue API with
   restricted access URL's defined in app.yaml - when a URL is defined as
   either 'login: admin' or 'login: required', when the task fires it is
   receiving a 302 - which I assume is a redirect to the login page.  I'm
   just running this on the SDK at the moment, but I was expecting at
   least the 'login: admin' url to work, based on the following comment
   from this page
  http://code.google.com/appengine/docs/python/taskqueue/overview.html
 
   'If a task performs sensitive operations (such as modifying important
   data), the developer may wish to protect the worker URL to prevent a
   malicious external user from calling it directly. This is possible by
   marking the worker URL as admin-only in the app configuration.'
 
   I figure I'm probably doing something dumb, but I had expected the
   tasks to be executed as some sort of system user, so that either
   'login: required' or 'login: admin' would work - perhaps even being
   able to specify the email and nickname of the system user as app.yaml
   configuration.  Another alternative would be if there was a mechanism
   to create an auth token to supply when the task is created.  e.g.
   users.current_user_auth_token() to execute the task as the current
   user.
 
   So I guess the broader question is - where does the task queue get the
   'run_as' user, or if there isn't one, what's the mechanism for hitting
   a 'login: admin' worker URL?
 
   Most apps should be able to expect a call to users.get_current_user()
   to return a user object in code protected by 'login: admin'.
 
   Thanks,
 
   Colin
 
  --
  Nick Johnson, App Engine Developer Programs Engineer
  Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
 Number:
  368047
 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Task Queue API Users

2009-06-22 Thread hawkett

Hi Nick,

In my SDK (just the normal mac download), I can inspect the queue in
admin console, and have a 'run' and 'delete' button next to each task
in the queue.  When I press 'run', the task fires, my server receives
the request, and returns the 302.

Colin

On Jun 22, 4:15 pm, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi hawkett,

 In the current release of the SDK, the Task Queue stub simply logs tasks to
 be executed, and doesn't actually execute them. How are you executing these
 tasks?

 -Nick Johnson



 On Mon, Jun 22, 2009 at 3:46 PM, hawkett hawk...@gmail.com wrote:

  Hi,

    I'm running into some issues trying to use the Task Queue API with
  restricted access URL's defined in app.yaml - when a URL is defined as
  either 'login: admin' or 'login: required', when the task fires it is
  receiving a 302 - which I assume is a redirect to the login page.  I'm
  just running this on the SDK at the moment, but I was expecting at
  least the 'login: admin' url to work, based on the following comment
  from this page
 http://code.google.com/appengine/docs/python/taskqueue/overview.html

  'If a task performs sensitive operations (such as modifying important
  data), the developer may wish to protect the worker URL to prevent a
  malicious external user from calling it directly. This is possible by
  marking the worker URL as admin-only in the app configuration.'

  I figure I'm probably doing something dumb, but I had expected the
  tasks to be executed as some sort of system user, so that either
  'login: required' or 'login: admin' would work - perhaps even being
  able to specify the email and nickname of the system user as app.yaml
  configuration.  Another alternative would be if there was a mechanism
  to create an auth token to supply when the task is created.  e.g.
  users.current_user_auth_token() to execute the task as the current
  user.

  So I guess the broader question is - where does the task queue get the
  'run_as' user, or if there isn't one, what's the mechanism for hitting
  a 'login: admin' worker URL?

  Most apps should be able to expect a call to users.get_current_user()
  to return a user object in code protected by 'login: admin'.

  Thanks,

  Colin

 --
 Nick Johnson, App Engine Developer Programs Engineer
 Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
 368047



[google-appengine] Re: Task Queue API Users

2009-06-22 Thread hawkett

Sure - just before I do, the following may indicate that this isn't a
bug -

If I am also logged in to the application in another tab, as an
administrator, then when I hit 'run' the task fires successfully, or
at least the stub fires and records a 200.  So it would appear I need
to also be logged in as an admin.  While this makes some sense, it
doesn't really mirror the behaviour on GAE, as the task queue won't
have the benefit of this authentication cookie - what user does the
live system use to execute protected URLs?

Colin

On Jun 22, 4:31 pm, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi hawkett,

 My mistake. This sounds like a bug in the SDK - can you please file a bug?

 -Nick Johnson



 On Mon, Jun 22, 2009 at 4:25 PM, hawkett hawk...@gmail.com wrote:

  Hi Nick,

  In my SDK (just the normal mac download), I can inspect the queue in
  admin console, and have a 'run' and 'delete' button next to each task
  in the queue.  When I press 'run', the task fires, my server receives
  the request, and returns the 302.

  Colin

  On Jun 22, 4:15 pm, Nick Johnson (Google) nick.john...@google.com
  wrote:
   Hi hawkett,

   In the current release of the SDK, the Task Queue stub simply logs tasks
  to
   be executed, and doesn't actually execute them. How are you executing
  these
   tasks?

   -Nick Johnson

   On Mon, Jun 22, 2009 at 3:46 PM, hawkett hawk...@gmail.com wrote:

Hi,

  I'm running into some issues trying to use the Task Queue API with
restricted access URL's defined in app.yaml - when a URL is defined as
either 'login: admin' or 'login: required', when the task fires it is
receiving a 302 - which I assume is a redirect to the login page.  I'm
just running this on the SDK at the moment, but I was expecting at
least the 'login: admin' url to work, based on the following comment
from this page
   http://code.google.com/appengine/docs/python/taskqueue/overview.html

'If a task performs sensitive operations (such as modifying important
data), the developer may wish to protect the worker URL to prevent a
malicious external user from calling it directly. This is possible by
marking the worker URL as admin-only in the app configuration.'

I figure I'm probably doing something dumb, but I had expected the
tasks to be executed as some sort of system user, so that either
'login: required' or 'login: admin' would work - perhaps even being
able to specify the email and nickname of the system user as app.yaml
configuration.  Another alternative would be if there was a mechanism
to create an auth token to supply when the task is created.  e.g.
users.current_user_auth_token() to execute the task as the current
user.

So I guess the broader question is - where does the task queue get the
'run_as' user, or if there isn't one, what's the mechanism for hitting
a 'login: admin' worker URL?

Most apps should be able to expect a call to users.get_current_user()
to return a user object in code protected by 'login: admin'.

Thanks,

Colin

   --
   Nick Johnson, App Engine Developer Programs Engineer
   Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
  Number:
   368047

 --
 Nick Johnson, App Engine Developer Programs Engineer
 Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
 368047



[google-appengine] Change db.IntegerProperty() to db.FloatProperty()?

2009-06-22 Thread Savraj

Hi App Engine-ers,

So I've got a ton of data stored in my db -- and I've got a particular
field, let's call it 'value' set as 'db.IntegerProperty()' in my model
definition.  If I change this to 'db.FloatProperty()', what happens?

I would imagine that the existing values in the db remain Integers,
while the new ones coming in are floats, and that should be fine for
my purposes. But will it work?  I suppose the only way to know is to try,
but I don't want to mangle my database, which has quite a bit of data
in it.

What will happen in this case?

-s




[google-appengine] Re: Change db.IntegerProperty() to db.FloatProperty()?

2009-06-22 Thread Nick Johnson (Google)
Hi Savraj,

If you change your property from db.IntegerProperty to db.FloatProperty, all
your existing entities will fail to validate, and throw exceptions when
loaded. If you want to make this change, you need to transition all your
existing entities before making the update.

-Nick Johnson
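
A rough sketch of one way to do that transition, assuming a
hypothetical Expando-based kind named Thing so that int and float
values can coexist while the backfill runs (db.Expando sidesteps
per-property validation):

from google.appengine.ext import db

class Thing(db.Expando):
    pass

def migrate(batch_size=100):
    last_key = None
    while True:
        q = Thing.all().order('__key__')
        if last_key is not None:
            q.filter('__key__ >', last_key)
        batch = q.fetch(batch_size)
        if not batch:
            break
        for entity in batch:
            if isinstance(entity.value, (int, long)):
                entity.value = float(entity.value)  # coerce stored ints
        db.put(batch)
        last_key = batch[-1].key()

Once every entity holds a float, the model definition can safely switch
to db.FloatProperty.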

On Mon, Jun 22, 2009 at 5:04 PM, Savraj sav...@gmail.com wrote:


 Hi App Engine-ers,

 So I've got a ton of data stored in my db -- and I've got a particular
 field, let's call it 'value' set as 'db.IntegerProperty()' in my model
 definition.  If I change this to 'db.FloatProperty()', what happens?

 I would imagine that the existing values in the db remain Integers,
 while the new ones coming in are floats, and that should be fine for
 my purposes. But will it work?  I suppose the only way to know is try,
 but I don't want to mangle my database, which has quite a bit of data
 in it.

 What will happen in this case?

 -s

 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] memcache and api_cpu_ms

2009-06-22 Thread John Tantalo

I recently attempted to improve the responsiveness of one of my app's
more elementary handlers by using memcache to cache the datastore
lookups. According to my logs, this has had a positive effect on my
api_cpu_ms, reducing this time to 72 ms. However, the cpu_ms has not
seen a similar decrease, and hovers around 1000ms.

Do memcache gets count towards api_cpu_ms or cpu_ms? Do I need to
worry about performance issues around deserializing model instances in
memcache?

My caching strategy looks like this:

response = dict() # (might not be empty)
cached = memcache.get(__CACHE_KEY)
if cached:
  response.update(cached)
  return
else:
  # datastore calls
  foo = get_foo()
  bar = get_bar()
  # build cache object
  cached = dict(foo=foo, edits=bar)
  response.update(cached)
  # cache
  memcache.set(__CACHE_KEY, cached)
  return



[google-appengine] Image List Viewer

2009-06-22 Thread SH

Hi,

I am trying to build an E-Greeting application on GAE using Java.
One of the prerequisites is an Image List Viewer to display the
images (maybe Macromedia Flex, JPG, GIF and other image formats).  I
am currently thinking of using Picasa to host the images, but am
clueless as to how I can build this Image List Viewer component.
Does anyone have a sample code base where I can start off?

Any clues will be appreciated as well.

Thanks.

SH




[google-appengine] Re: Web Selling and Credit Card Authentication on App Engine

2009-06-22 Thread stephenp

I have my appengine servlet using URLFetch to call out to PayPal to do
credit card authorization, and I've already seen a few deadline (5 sec
timeout) errors. I am new to PayPal (and appengine) and I was
wondering if someone here with more experience with PayPal might know
whether or not it is typical for a request to take longer than the 5
secs allowed by appengine. If PayPal can usually do the auth in under
5 sec, then I can just retry when it fails and hope the next time it
can do it in less time. I know there are all sorts of things that
factor in - I am just asking for what your experience tells you.

Stephen





[google-appengine] ReferenceProperty performance question

2009-06-22 Thread Paddy Foran

Suppose I have the following database:

class Model1(db.Model):
  att1 = db.IntegerProperty()

class Model2(db.Model):
  model1 = db.ReferenceProperty(Model1, collection_name='model2s')
  string = db.StringProperty()

Suppose I create a few Model1s and several Model2s, then have the
following code:

model1s = Model1.all()
for model1 in model1s:
  for model2 in model1.model2s:
    self.response.out.write(model2.string)

Does model1.model2s cause a query to be run? I would imagine so. Is
this query run every time I ask for model1.model2s, or is it
automatically cached?

Any information on this would be helpful.

Thanks,
Paddy Foran




[google-appengine] The check_login decorator can only be used for GET requests

2009-06-22 Thread Felipe Cavalcante

Hi,

Does somebody know why The check_login decorator can only be used for
GET requests?

The message above appears when I put @login_required in the post
method of a webapp.RequestHandler class.


-
Felipe




[google-appengine] jdo problem with field type BigDecimal and google app engine plugin

2009-06-22 Thread Ronny Bubke

When I use BigDecimal as a persistent type with JDO I get strange
behaviour.

A value new BigDecimal(1.4) will change after restoring from the
database to 1.39

I suppose this is a bug because BigDecimal is stored as a float or
double, but it should be stored as a String.

In the whitelist, BigDecimal is supported as a persistent type. I don't
know whether it will work on the App Engine server. Because of unit
testing, it should also work with the plugin.

Maybe somebody can help me.

Thx.




[google-appengine] Entity per User

2009-06-22 Thread Felipe Cavalcante

Hi,

I want to create an application that has three tables (Categories,
Account, Config). Every table has a property (user = db.UserProperty)
to separate the content for a specific user.

My question is: Is that the best way to separate information for
every user? Is there any way to create a set of entities for every
single user?

I think I don't understand (or have faith in) the concepts of BigTable
yet. :)

Any comment is welcome!

Felipe




[google-appengine] moin moin

2009-06-22 Thread Peppe83

Hi, I am a newbie to Google App Engine. I have a question for you: is
it possible to use the MoinMoin wiki (written in Python) with GAE? Thanks




[google-appengine] Performance improvements

2009-06-22 Thread luddep

Hello,

So the free quotas have been reduced today, and according to the docs
(http://code.google.com/appengine/docs/quotas.html#Free_Changes) there
are going to be some performance improvements as well. Will there be
any information released regarding what the actual improvements are
(i.e., datastore related, etc.)?

Thanks!
- Ludwig




[google-appengine] Re: Server Error (500) while uploading index definitions (again)

2009-06-22 Thread C Partac



I have the same problem (Server Error (500)) on my applications
cpedevtest01 and cpedevtest02 while uploading. The indexes have been
building for 4 days already.
Could you reset the indexes manually? Even after deleting the indexes
from the index definitions, I still get the error while uploading the
application.

Thank you

Costi




[google-appengine] Using either Google Apps or Google account cookies

2009-06-22 Thread Mark Ellul

Hi,

I have seen an App Engine demo which takes you to the
https://www.google.com/a/UniversalLogin url to do the authentication.
The page shows all the logged-in accounts and lets you choose which
account to authorise.

I am using the Python Libraries and would love to do the same thing
with my application, but cannot figure out how.

Any ideas or pointers would be very much appreciated.

Regards

Mark




[google-appengine] Re: Problem with Verify Your Account by SMS page

2009-06-22 Thread WaveyGravey

I am having the same problem.  What is the issue with this?  How can I
get around this?

On Jun 3, 4:17 pm, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi Harry,

 Your account should now be verified.

 -Nick Johnson



 On Wed, Jun 3, 2009 at 10:27 AM, Harry Levinson aku...@gmail.com wrote:
  I have a problem with Verify Your Account by SMS page.

  The system sent me the verification code by SMS to my Verizon phone right
  away.

  However the Verify web page says There were errors:  Carrier.

  How can I get help fixing my account or finding a web page in which to
  paste my verification code?

  Harry Levinson




[google-appengine] Re: ReferenceProperty performance question

2009-06-22 Thread Nick Johnson (Google)
On Sun, Jun 21, 2009 at 7:33 PM, Paddy Foran foran.pa...@gmail.com wrote:


 Suppose I have the following database:

 class Model1(db.Model):
  att1 = db.IntegerProperty()

 class Model2(db.Model):
  model1 = db.ReferenceProperty(Model1, collection_name='model2s')
  string = db.StringProperty()

 Suppose I create a few Model1s and several Model2s, then have the
 following code:

 model1s = Model1.all()
 for model1 in model1s:
  for model2 in model1.model2s:
     self.response.out.write(model2.string)

 Does model1.model2s cause a query to be run?


model1.model2s _is_ a query object. Iterating over it causes it to be
executed in the same manner that iterating over the model1s query object
causes that to be executed. No caching is performed, for the same reason.

-Nick Johnson
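
So when the same reference set is needed more than once, it can be
materialized explicitly, e.g. this sketch based on the code above:

for model1 in Model1.all():
    model2s = list(model1.model2s)   # executes the query once
    for model2 in model2s:
        self.response.out.write(model2.string)
    # model2s can now be reused without another datastore query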

I would imagine so. Is
 this query run every time I ask for model1.model2s, or is it
 automatically cached?

 Any information on this would be helpful.

 Thanks,
 Paddy Foran

 



-- 
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
368047




[google-appengine] Re: Server Error (500) while uploading index definitions (again)

2009-06-22 Thread Jason (Google)
Hi Costi. How many indexes are you trying to deploy? There is a hard limit
of 100, and it looks like you're very close to this number.
- Jason

On Mon, Jun 22, 2009 at 5:44 AM, C Partac cpar...@gmail.com wrote:




 I have the same problem Server Error (500)  on my applications
 cpedevtest01 and cpedevtest02 while uploading. The indexes are
 building for 4 days already.
 Could you reset the indexes manually because after deleting the
 indexes from index I still get the error while uploading the
 application.

 Thank you

 Costi

 





[google-appengine] Re: Using either Google Apps or Google account cookies

2009-06-22 Thread Tony

Could you post a link to the demo you're referring to?  I'm not
exactly sure what you're asking but if I see this demo I can probably
figure it out.

On Jun 22, 11:18 am, Mark Ellul mark.el...@gmail.com wrote:
 Hi,

 I have seen an App Engine demo which takes you to the
 https://www.google.com/a/UniversalLogin url to do the authentication.
 The page shows all the logged in accounts and a choice is allowed of
 account to authorise.

 I am using the Python Libraries and would love to do the same thing
 with my application, but cannot figure out how.

 Any ideas or pointers would be very much appreciated.

 Regards

 Mark



[google-appengine] Re: Server Error (500) while uploading index definitions (again)

2009-06-22 Thread Jason (Google)
OK, I nudged your indexes into the error state and reset your Datastore
Indices Count quota, but make sure not to upload more than 100 indexes or
you may see this issue again.
- Jason

On Mon, Jun 22, 2009 at 10:29 AM, Jason (Google) apija...@google.comwrote:

 Hi Costi. How many indexes are you trying to deploy? There is a hard limit
 of 100, and it looks like you're very close to this number.
 - Jason


 On Mon, Jun 22, 2009 at 5:44 AM, C Partac cpar...@gmail.com wrote:




 I have the same problem Server Error (500)  on my applications
 cpedevtest01 and cpedevtest02 while uploading. The indexes are
 building for 4 days already.
 Could you reset the indexes manually because after deleting the
 indexes from index I still get the error while uploading the
 application.

 Thank you

 Costi

 






[google-appengine] Re: Entity per User

2009-06-22 Thread Tony

First, remember that db.UserProperty gives you access only to a very
limited set of information about a user with a Google Account.  In
other words, you won't be able to manipulate the user that you're
pointing to in the datastore - you can only access certain properties
like email, username, and user_id, and you can't change any of those
properties (because they are pulled from the individual's Google
Account).  So if you have additional info you'll want to store about a
user (first name, lastname, birthday, whatever) you will probably want
something like:

class User(db.Model):
  first_name = db.StringProperty()
  last_name = db.StringProperty()
  user = db.UserProperty()

As for the rest of your question, it depends on how the data is going
to be accessed by your application.  For something like Account or
Config I imagine you're looking at a one-to-one relationship - one
Account entity per User entity.  So you might want to add a property
on your User class like account = db.ReferenceProperty(Account).
This is cheaper than having to do a query like
Account.all().filter("user =", users.get_current_user()) every time
you want to get the current user's account info.  Same with Config.

For Category, it depends on your usage pattern.  For example, if a
user is going to be a part of just a few categories, and you mostly
just want to grab those Category entities based on the currently
logged in user, you could add a property like this to your User model:
categories = db.ListProperty(db.Key).  Then if you want to get all of
a user's categories, you do something like
db.get(current_user.categories), which doesn't require a query and is
pretty quick/cheap.  And if you want to query for all the users who
are in category X, you can do this query:
User.all().filter("categories =", category_x_key).  Pretty handy.

Recommended reading:
http://code.google.com/appengine/articles/modeling.html
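
A sketch pulling those pieces together (all names are illustrative):

from google.appengine.api import users
from google.appengine.ext import db

class Category(db.Model):
    name = db.StringProperty()

class User(db.Model):
    user = db.UserProperty()
    categories = db.ListProperty(db.Key)   # keys of Category entities

current = User.all().filter('user =', users.get_current_user()).get()
cats = db.get(current.categories)   # batch get by key, no query needed

# all users in a given category (assumes the user has at least one):
members = User.all().filter('categories =', cats[0].key()).fetch(20)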

On Jun 22, 7:13 am, Felipe Cavalcante felipexcavalca...@gmail.com
wrote:
 Hi,

 I want to create an application that has three tables (Categories,
 Account, Config). Every table has a property (user = db.UserProperty)
 to separate the content for a specific user.

 My question is: Is that the better form to separate information to
 every user? Is there any way to create a set of entities for every
 single user?

 I think I dont undersand (or have faith in) the concepts of BigTable
 yet. :)

 Any comment is welcome!

 Felipe



[google-appengine] Re: The check_login decorator can only be used for GET requests

2009-06-22 Thread Tony

My wild guess: the @login_required decorator automatically forwards
not-logged-in users to a login url (something like
http://your-app.appspot.com/_ah/login?continue=http://your-app.appspot.com/original-url?q=params).
Once the user logs in, they get to the url they were originally
seeking and all is well.  With a POST request, the parameters
contained in the request would be lost, which would create unexpected
behavior for the application and user.

If you are using Google Account authentication and want to ensure a
POST request is authenticated, use something like this:

from google.appengine.api import users

def post(self):
  user = users.get_current_user()
  if user is None:
    self.error(403)  # user isn't logged in; reject the request
  else:
    pass  # user is logged in; handle the POST normally

On Jun 21, 6:27 pm, Felipe Cavalcante felipexcavalca...@gmail.com
wrote:
 Hi,

 Does somebody know why The check_login decorator can only be used for
 GET requests ?

 The message above appears when I put @login_required in the post
 method of a webapp.RequestHandler class.

 -
 Felipe



[google-appengine] download_data: Can I get only entities created since some arbitrary point in time?

2009-06-22 Thread Jonathan Feinberg

I'm moving my application off of GAE.

I'd like to download all of my data, develop and deploy the new
implementation, then get the rest of the data, which had been created
in the meantime. Is there a way to do so?



[google-appengine] Re: Change db.IntegerProperty() to db.FloatProperty()?

2009-06-22 Thread Tony

http://code.google.com/appengine/articles/update_schema.html describes
one technique for updating your schema.

On Jun 22, 12:13 pm, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi Savraj,

 If you change your property from db.IntegerProperty to db.FloatProperty, all
 your existing entities will fail to validate, and throw exceptions when
 loaded. If you want to make this change, you need to transition all your
 existing entities before making the update.

 -Nick Johnson



 On Mon, Jun 22, 2009 at 5:04 PM, Savraj sav...@gmail.com wrote:

  Hi App Engine-ers,

  So I've got a ton of data stored in my db -- and I've got a particular
  field, let's call it 'value' set as 'db.IntegerProperty()' in my model
  definition.  If I change this to 'db.FloatProperty()', what happens?

  I would imagine that the existing values in the db remain Integers,
  while the new ones coming in are floats, and that should be fine for
  my purposes. But will it work?  I suppose the only way to know is try,
  but I don't want to mangle my database, which has quite a bit of data
  in it.

  What will happen in this case?

  -s

 --
 Nick Johnson, App Engine Developer Programs Engineer
 Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
 368047



[google-appengine] download_data: How do we deal with blobs?

2009-06-22 Thread Jonathan Feinberg

How should we deal with blobs on the way out? Should we build
(potentially large) Base64 strings?



[google-appengine] Re: memcache and api_cpu_ms

2009-06-22 Thread Tony

Without knowing more about your app, I can't say for sure, but it
seems likely that whatever processing takes place in response.update
(object) is using your cpu time, which is why you don't see much of a
speedup via caching here.  I would suggest profiling the operation to
determine what function call(s) are specifically taking the most
resources.  In my experience, you won't notice a large difference in
cpu usage between serializing model instances to memcache vs. adding
identifier information (like db keys) for fetching later.  My entities
are small, however, so your mileage may vary.  I find that the primary
tradeoff in serializing large amounts of info to memcache is
increased memory pressure, and thus a lower memcache hit rate and
higher datastore access.
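
For the profiling step, one approach on App Engine is to wrap the
handler's main function with cProfile and log the stats, roughly like
this sketch (real_main stands for the existing entry point):

import cProfile
import logging
import pstats
import StringIO

def main():
    prof = cProfile.Profile()
    prof.runcall(real_main)   # real_main: the existing request handler
    stream = StringIO.StringIO()
    stats = pstats.Stats(prof, stream=stream)
    stats.sort_stats('cumulative').print_stats(20)
    logging.info('profile data:\n%s', stream.getvalue())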

On Jun 22, 12:48 pm, John Tantalo john.tant...@gmail.com wrote:
 I recently attempted to improve the responsiveness of one of my app's
 more elementary handlers by using memcache to cache the datastore
 lookups. According to my logs, this has had a positive effect on my
 api_cpu_ms, reducing this time to 72 ms. However, the cpu_ms has not
 seen a similar decrease, and hovers around 1000ms.

 Do memcache gets count towards api_cpu_ms or cpu_ms? Do I need to
 worry about performance issues around deserializing model instances in
 memcache?

 My caching strategy looks like this:

 response = dict() # (might not be empty)
 cached = memcache.get(__CACHE_KEY)
 if cached:
   response.update(cached)
   return
 else:
   # datastore calls
   foo = get_foo()
   bar = get_bar()
   # build cache object
   cached = dict(foo=foo, edits=bar)
   response.update(cached)
   # cache
   memcache.set(__CACHE_KEY, cached)
   return



[google-appengine] Re: download_data: Can I get only entities created since some arbitrary point in time?

2009-06-22 Thread Tony

If you add a property such as created_at =
db.DateTimeProperty(auto_now_add=True) to the models you are moving,
you could then query
on that property to find entities created during any timeframe.  It
won't retroactively apply to entities created before you changed the
model, but this doesn't appear to be necessary for what you're trying
to do.
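
A minimal sketch of what that looks like, assuming a hypothetical
Thing model that has gained the property above:

import datetime
from google.appengine.ext import db

class Thing(db.Model):
    created_at = db.DateTimeProperty(auto_now_add=True)

# everything created after some checkpoint:
checkpoint = datetime.datetime(2009, 6, 22)
newer = Thing.all().filter('created_at >', checkpoint).fetch(1000)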

On Jun 22, 1:58 pm, Jonathan Feinberg e.e.c...@gmail.com wrote:
 I'm moving my application off of GAE.

 I'd like to download all of my data, develop and deploy the new
 implementation, then get the rest of the data, which had been created
 in the meantime. Is there a way to do so?



[google-appengine] Re: download_data: Can I get only entities created since some arbitrary point in time?

2009-06-22 Thread Jonathan Feinberg

On Jun 22, 2:18 pm, Tony fatd...@gmail.com wrote:
 If you add a property such as: created_at = db.DateTimeProperty
 (auto_now_add=True) to the models you are moving, you could then query
 on that property to find entities created during any timeframe.

That's true. What does that have to do with appcfg.py download_data?



[google-appengine] Problem administering the apps

2009-06-22 Thread Alex Geo

Hello!

I need a bit of help over here regarding the managing of apps. I visit
http://appspot.com, log in, and I'm redirected to
http://appengine.google.com/start
where I can create applications, but I cannot see my existing
applications or modify their settings.

Has anyone experienced this problem before? Please let me know if you
have, and how it was fixed.

Best regards,
Alex




[google-appengine] Re: memcache and api_cpu_ms

2009-06-22 Thread John Tantalo

Tony,

The update call is the standard dict.update[1], which should be
plenty fast for my purposes.

My data is actually under a kilobyte, so I am quite confused why it
would take nearly 1000ms in CPU.

Here's an example of the data (in yaml format) with some personally
identifying information stripped out:

http://emend.appspot.com/?yaml

The actual data being cached is slightly larger, but not by much.

[1] http://docs.python.org/library/stdtypes.html#dict.update

On Jun 22, 11:06 am, Tony fatd...@gmail.com wrote:
 Without knowing more about your app, I can't say for sure, but it
 seems likely that whatever processing takes place in response.update
 (object) is using your cpu time, which is why you don't see much of a
 speedup via caching here.  I would suggest profiling the operation to
 determine what function call(s) are specifically taking the most
 resources.  In my experience, you won't notice a large difference in
 cpu usage between serializing model instances to memcache vs. adding
 identifier information (like db keys) for fetching later.  My entities
 are small, however, your mileage my vary.  I find that the primary
 tradeoff in serializing large amounts of info to memcache is in
 increased memory pressure and thus lower memcache hit rate, higher
 datastore access.

 On Jun 22, 12:48 pm, John Tantalo john.tant...@gmail.com wrote:

  I recently attempted to improve the responsiveness of one of my app's
  more elementary handlers by using memcache to cache the datastore
  lookups. According to my logs, this has had a positive effect on my
  api_cpu_ms, reducing this time to 72 ms. However, the cpu_ms has not
  seen a similar decrease, and hovers around 1000ms.

  Do memcache gets count towards api_cpu_ms or cpu_ms? Do I need to
  worry about performance issues around deserializing model instances in
  memcache?

  My caching strategy looks like this:

  response = dict() # (might not be empty)
  cached = memcache.get(__CACHE_KEY)
  if cached:
    response.update(cached)
    return
  else:
    # datastore calls
    foo = get_foo()
    bar = get_bar()
    # build cache object
    cached = dict(foo=foo, edits=bar)
    response.update(cached)
    # cache
    memcache.set(__CACHE_KEY, cached)
    return





[google-appengine] Re: download_data: Can I get only entities created since some arbitrary point in time?

2009-06-22 Thread Tony

Having re-read your original question, I see now that my answer
doesn't apply at all :P

To do this with appcfg.py download_data, I'd see what you can do with
the exporter class that you're defining for each model that you're
downloading.  You could add a timestamp field as I suggested, and
define a function in your exporter class that tests that this field
exists and/or is within a particular range.  From there, you could
either set a boolean value and use it to pare down your result set in
post-processing, or raise an exception.  You're probably not going to
avoid having to download every entity (without patching
download_data.py) as the functionality you're looking for doesn't
appear to exist.

Hope that's more helpful.

On Jun 22, 2:19 pm, Jonathan Feinberg e.e.c...@gmail.com wrote:
 On Jun 22, 2:18 pm, Tony fatd...@gmail.com wrote:

  If you add a property such as: created_at = db.DateTimeProperty
  (auto_now_add=True) to the models you are moving, you could then query
  on that property to find entities created during any timeframe.

 That's true. What does that have to do with appcfg.py download_data?



[google-appengine] Re: Testing Task Queue

2009-06-22 Thread Jeff S (Google)
Hi Stephen,
In the SDK dev server, the task queue's tasks must be triggered manually. If
you visit localhost:8080/_ah/admin/queues, you can see a list of queue names
with a flush button to cause all enqueued tasks in that queue to be
executed. Part of the reason for having a manual trigger for execution is to
prevent runaway scenarios as you describe. In the SDK you can step through
each generation of tasks and watch for endless or exponential triggers.
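
As a quick illustration, a task enqueued with the labs API sits in the
queue until you flush it from that admin page (the URL and params here
are hypothetical):

from google.appengine.api.labs import taskqueue

taskqueue.add(url='/work', params={'key': 'abc'})  # goes to the 'default' queue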

Happy coding,

Jeff

On Sun, Jun 21, 2009 at 5:27 PM, Stephen Mayer stephen.ma...@gmail.comwrote:


 So now that we have the task queue ... how do we test it in our
 sandboxes?  Or perhaps I missed that part of the documentation ... can
 anyone clue me in on testing it in a place that is not production (I
 wouldn't want a queue to start some runaway process in production ...
 would much prefer to catch those cases in testing).

 Thoughts?
 -Stephen
 





[google-appengine] Re: download_data: Can I get only entities created since some arbitrary point in time?

2009-06-22 Thread Tony

I'll ignore your rudeness and answer the question in case someone else
finds this thread and has a similar question:

class ModelNameExporter(bulkloader.Exporter):
  def handle_entity(self, entity):
    # pseudocode condition made concrete; assumes the created_at
    # property suggested earlier, with datetime bounds x and y in scope
    if x <= entity['created_at'] <= y:
      return entity
    else:
      return None

You may ignore my post and wait for someone with a @google.com email
address to answer your question.

On Jun 22, 2:38 pm, Jonathan Feinberg e.e.c...@gmail.com wrote:
 On Jun 22, 2:29 pm, Tony fatd...@gmail.com wrote:
 [snip]

  Hope that's more helpful.

 I'm hoping that someone who actually knows the answer--someone with
 google.com in their email address--will contribute to this thread.



[google-appengine] Re: memcache and api_cpu_ms

2009-06-22 Thread John Tantalo

Thanks, Tony. I'll try the profiling and post again if I discover
anything interesting.

On Jun 22, 11:36 am, Tony fatd...@gmail.com wrote:
 I see, I didn't realize you were just calling the dict method.  In
 that case, 1000ms seems unusually high.  Still, it seems unlikely that
 memcache usage is causing it.  Your best bet is to profile requests
 (http://code.google.com/appengine/kb/commontasks.html#profiling) to
 this handler and see where the cpu time is being spent - you might
 have some large imports or something elsewhere that's causing a
 performance drop.

 On Jun 22, 2:29 pm, John Tantalo john.tant...@gmail.com wrote:

  Tony,

  The update call is the standard dict.update[1], which should be
  plenty fast for my purposes.

  My data is actually under a kilobyte, so I am quite confused why it
  would take nearly 1000ms in CPU.

  Here's an example of the data (in yaml format) with some personally
  identifying information stripped out:

 http://emend.appspot.com/?yaml

  The actual data being cached is slightly larger, but not by much.

  [1]http://docs.python.org/library/stdtypes.html#dict.update

  On Jun 22, 11:06 am, Tony fatd...@gmail.com wrote:

   Without knowing more about your app, I can't say for sure, but it
   seems likely that whatever processing takes place in response.update
   (object) is using your cpu time, which is why you don't see much of a
   speedup via caching here.  I would suggest profiling the operation to
   determine what function call(s) are specifically taking the most
   resources.  In my experience, you won't notice a large difference in
   cpu usage between serializing model instances to memcache vs. adding
   identifier information (like db keys) for fetching later.  My entities
   are small, however, your mileage my vary.  I find that the primary
   tradeoff in serializing large amounts of info to memcache is in
   increased memory pressure and thus lower memcache hit rate, higher
   datastore access.

   On Jun 22, 12:48 pm, John Tantalo john.tant...@gmail.com wrote:

I recently attempted to improve the responsiveness of one of my app's
more elementary handlers by using memcache to cache the datastore
lookups. According to my logs, this has had a positive effect on my
api_cpu_ms, reducing this time to 72 ms. However, the cpu_ms has not
seen a similar decrease, and hovers around 1000ms.

Do memcache gets count towards api_cpu_ms or cpu_ms? Do I need to
worry about performance issues around deserializing model instances in
memcache?

My caching strategy looks like this:

response = dict() # (might not be empty)
cached = memcache.get(__CACHE_KEY)
if cached:
  response.update(cached)
  return
else:
  # datastore calls
  foo = get_foo()
  bar = get_bar()
  # build cache object
  cached = dict(foo=foo, edits=bar)
  response.update(cached)
  # cache
  memcache.set(__CACHE_KEY, cached)
  return





[google-appengine] Re: Vacuum Indexes - Datastore Indices Count

2009-06-22 Thread Phil Peters

Thanks!

On Jun 20, 4:29 pm, Jeff S (Google) j...@google.com wrote:
 Hi Phil,

 Apologies for the inconvenience. I've reset the index count for your
 app. The speedup you saw from creating indexes on an empty datastore
 is expected.

 Happy coding,

 Jeff

 On Jun 20, 6:16 am, Phil phil.pet...@taters.co.uk wrote:



  Hi,

  I've come across the issue regardng vacuuming of indexes not correctly
  releasing resources creating the following exception: Your
  application is exceeding a quota: Datastore Indices Count

  Can someone please reset the quota on my application 5starlivesbeta.

  Also, I found it was much quicker removing all indexes before clearing
  down my test data (which I did with vacuum indexes) and then recreated
  indexes once the datastore was empty - is this the recommended
  approach?

   Cheers!



[google-appengine] 403 Application Over Quota Problem - Not True!

2009-06-22 Thread Devel63

All of a sudden, my app is returning 403 application over quota
whenever I do anything a bit strenuous.

All of the quotas are WAY under, but things that used to work fine are
now triggering this message.

A guess is that the budgeting process has become much more fine-
grained, and is mistakenly extrapolating from one request that may do
a number of DB writes and take 10 seconds.  But these are extremely
rare.

The app name is judysapps-qa.





[google-appengine] Re: How to connect appengine to Google sites

2009-06-22 Thread Jeff S (Google)
Hi Bob,
I assume that the API you are looking for is for reading data from Google
Sites. Sending email from App Engine is described here:

http://code.google.com/appengine/docs/python/mail/overview.html
http://code.google.com/appengine/docs/java/mail/overview.html
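
As a quick sketch of the mail half (addresses hypothetical; the sender
must be an administrator of the app or the currently signed-in user):

from google.appengine.api import mail

mail.send_mail(sender='admin@example.com',
               to='user@example.com',
               subject='Site update',
               body='Contents derived from the site data...')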

There is not
currently a Google Data API for Google Sites, but I think you are not
the first person to ask for one :-)

The list of available Google Data APIs can be found here:

http://code.google.com/apis/gdata/

Happy coding,

Jeff

On Sun, Jun 21, 2009 at 7:05 PM, Bob bobzs...@gmail.com wrote:


 Hi,

 I want to read some data from a Google Sites and send email to the
 users according to the contents. Is there any API?

 Thanks,
 Bob
 





[google-appengine] Re: Query with 1000 matches

2009-06-22 Thread herbie

OK. Say I have many (>1000) Model entities with two properties, 'x'
and 'date'. What is the most efficient query to fetch, say, the latest
200 entities where x > 50? I don't care what their 'date' values are,
as long as I get the latest entities with x > 50.

Thanks again for your help.


On Jun 22, 4:11 pm, Nick Johnson (Google) nick.john...@google.com
wrote:
 Consider precalculating this data and storing it against another entity.
 This will save a lot of work on requests.

 -Nick Johnson



 On Mon, Jun 22, 2009 at 3:55 PM, herbie 4whi...@o2.co.uk wrote:

  No the users won't need to read 1000 entities, but I want to calculate
  the average of a  property from the latest 1000 entities.

  On Jun 22, 3:30 pm, Nick Johnson (Google) nick.john...@google.com
  wrote:
   Correct. Are you sure you need 1000 entities, though? Your users probably
   won't read through all 1000.

   -Nick Johnson

   On Mon, Jun 22, 2009 at 3:23 PM, herbie 4whi...@o2.co.uk wrote:

So to be sure to get the latest 1000 entities I should add a datetime
property to my entitie model and filter and sort on that?

On Jun 22, 1:42 pm, herbie 4whi...@o2.co.uk wrote:
 I know that if there are more than 1000 entities that match a query,
 then only 1000 will  be return by fetch().  But my question is which
 1000? The last 1000 added to the datastore?  The first 1000 added to
 the datastore? Or is it undedined?

 Thanks
 Ian

   --
   Nick Johnson, App Engine Developer Programs Engineer
   Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
  Number:
   368047

 --
 Nick Johnson, App Engine Developer Programs Engineer
 Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
 368047



[google-appengine] Re: Query with 1000 matches

2009-06-22 Thread Tony

You could accomplish this task like so:

xlist = []
query = Foo.all().filter('property_x >', 50).order('-timestamp')
for q in query:
  xlist.append(q.property_x)
avg = sum(xlist) / len(xlist)

What Nick is saying, I think, is that fetching 1000 entities is going
to be very resource-intensive, so a better way to do it is to
calculate this data at write-time instead of read-time.  For example,
every time you add an entity, you could update a separate entity that
has a property like average = db.FloatProperty() with the current
average, and then you could simply fetch that entity and get the
current running average.
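
A minimal sketch of that write-time approach, with a hypothetical
Stats entity holding the running totals (the update runs in a
transaction so concurrent writes don't lose increments):

from google.appengine.ext import db

class Stats(db.Model):
    total = db.FloatProperty(default=0.0)
    count = db.IntegerProperty(default=0)

def add_foo(x):
    Foo(property_x=x).put()
    def txn():
        stats = Stats.get_by_key_name('global') or Stats(key_name='global')
        stats.total += x
        stats.count += 1
        stats.put()
    db.run_in_transaction(txn)
    # the running average is then stats.total / stats.count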

On Jun 22, 4:25 pm, herbie 4whi...@o2.co.uk wrote:
 Ok. Say I have many (1000)  Model entities with two properties 'x'
 and 'date'.    What is the most efficient query to fetch say the
 latest 200 entities  where x  50.   I don't care what their 'date's
 are as long as I get the latest and x  50

 Thanks again for your help.

 On Jun 22, 4:11 pm, Nick Johnson (Google) nick.john...@google.com
 wrote:

  Consider precalculating this data and storing it against another entity.
  This will save a lot of work on requests.

  -Nick Johnson

  On Mon, Jun 22, 2009 at 3:55 PM, herbie 4whi...@o2.co.uk wrote:

   No the users won't need to read 1000 entities, but I want to calculate
   the average of a  property from the latest 1000 entities.

   On Jun 22, 3:30 pm, Nick Johnson (Google) nick.john...@google.com
   wrote:
Correct. Are you sure you need 1000 entities, though? Your users 
probably
won't read through all 1000.

-Nick Johnson

On Mon, Jun 22, 2009 at 3:23 PM, herbie 4whi...@o2.co.uk wrote:

 So to be sure to get the latest 1000 entities I should add a datetime
 property to my entitie model and filter and sort on that?

 On Jun 22, 1:42 pm, herbie 4whi...@o2.co.uk wrote:
  I know that if there are more than 1000 entities that match a query,
  then only 1000 will  be return by fetch().  But my question is which
  1000? The last 1000 added to the datastore?  The first 1000 added to
  the datastore? Or is it undedined?

  Thanks
  Ian

--
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
   Number:
368047

  --
  Nick Johnson, App Engine Developer Programs Engineer
  Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
  368047



[google-appengine] Re: Query with 1000 matches

2009-06-22 Thread Tony

I should clarify, I'm not saying move the 1000-entity fetch to the
write process, but instead keep a running total of sum and count that
you can increment and use to calculate the average, rather than having
to fetch entities.  This doesn't solve the 'average of the last x
entities' use case, though (I assumed you'd prefer the average of 1000
entities if possible) - for that you could use a list property of
length x as a queue and use the sum() and len() functions to get the
average.
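
A sketch of that fixed-length queue idea, with a hypothetical Recent
entity holding the last `size` values:

from google.appengine.ext import db

class Recent(db.Model):
    values = db.ListProperty(float)

def push_value(v, size=200):
    def txn():
        rec = Recent.get_by_key_name('window') or Recent(key_name='window')
        rec.values = (rec.values + [v])[-size:]   # keep only the newest
        rec.put()
    db.run_in_transaction(txn)

# average of the retained window:
# avg = sum(rec.values) / len(rec.values)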

On Jun 22, 4:46 pm, Tony fatd...@gmail.com wrote:
 You could accomplish this task like so:

 xlist = []
 query = Foo.all().filter(property_x  50).order(-timestamp)
 for q in query:
   xlist.append(q.property_x)
 avg = sum(xlist) / len(xlist)

 What Nick is saying, I think, is that fetching 1000 entities is going
 to be very resource-intensive, so a better way to do it is to
 calculate this data at write-time instead of read-time.  For example,
 every time you add an entity, you could update a separate entity that
 has a property like average = db.FloatProperty() with the current
 average, and then you could simply fetch that entity and get the
 current running average.

 On Jun 22, 4:25 pm, herbie 4whi...@o2.co.uk wrote:

  Ok. Say I have many (1000)  Model entities with two properties 'x'
  and 'date'.    What is the most efficient query to fetch say the
  latest 200 entities  where x  50.   I don't care what their 'date's
  are as long as I get the latest and x  50

  Thanks again for your help.

  On Jun 22, 4:11 pm, Nick Johnson (Google) nick.john...@google.com
  wrote:

   Consider precalculating this data and storing it against another entity.
   This will save a lot of work on requests.

   -Nick Johnson

   On Mon, Jun 22, 2009 at 3:55 PM, herbie 4whi...@o2.co.uk wrote:

No the users won't need to read 1000 entities, but I want to calculate
the average of a  property from the latest 1000 entities.

On Jun 22, 3:30 pm, Nick Johnson (Google) nick.john...@google.com
wrote:
 Correct. Are you sure you need 1000 entities, though? Your users 
 probably
 won't read through all 1000.

 -Nick Johnson

 On Mon, Jun 22, 2009 at 3:23 PM, herbie 4whi...@o2.co.uk wrote:

  So to be sure to get the latest 1000 entities I should add a 
  datetime
  property to my entitie model and filter and sort on that?

  On Jun 22, 1:42 pm, herbie 4whi...@o2.co.uk wrote:
   I know that if there are more than 1000 entities that match a 
   query,
   then only 1000 will  be return by fetch().  But my question is 
   which
   1000? The last 1000 added to the datastore?  The first 1000 added 
   to
   the datastore? Or is it undedined?

   Thanks
   Ian

 --
 Nick Johnson, App Engine Developer Programs Engineer
 Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
Number:
 368047

   --
   Nick Johnson, App Engine Developer Programs Engineer
   Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
   368047



[google-appengine] Re: Django Performance Version

2009-06-22 Thread Nash-t

I understand the need to have 0.96 available for applications that
want/prefer it, but at some point, couldn't google make 1.0 the
preloaded default and require applications to zip load 0.96 if they
want it?

On Jun 22, 2:17 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi Stephen,

 On Mon, Jun 22, 2009 at 4:21 AM, Stephen Mayer stephen.ma...@gmail.comwrote:



  If I want to use the new Django 1.x support do I replace the django
  install in the app engine SDK  ... or do I add it to my app as a
  module?  If I add it ... how do I prevent it from being uploaded with
  the rest of the app?

 For how to use Django 1.0 in App Engine, see 
 here:http://code.google.com/appengine/docs/python/tools/libraries.html#Django

 I'm also wondering about Django performance.  Here was my test case:

  create a very basic app Django Patch ... display a page (no db
  reads ... just display a template)
  ... point mon.itor.us at it every 30 minutes ... latency is about
  1500-2000ms.  I assume it's because Django Patch zips up django into a
  package and the package adds overhead ... the first time it's hit the
  app server has to unzip it (or is it every time it's hit?)  Woah ...
  that seemed a bit high for my taste ... I want my app to be reasonably
  performant ... and that's not reasonable.

 The first request to a runtime requires that the runtime be initialized, all
 the modules loaded, etcetera. On top of that, as you point out, Django
 itself has to be zipimported, which increases latency substantially. If the
 ping every 30 minutes is the only traffic to your app, what you're seeing is
 the worst-case latency, every single request. Using the built-in Django will
 decrease latency substantially, but more significantly, requests that hit an
 existing runtime (the vast majority of them, for a popular app) will see far
 superior latencies, since they don't need to load anything.





  Try 2:
  create a very basic app displaying a template, use the built in django
  template engine but without any of the other django stuff ... use the
  GAE webapp as my framework.  response time is now down to 100-200ms on
  average, according to mon.itor.us.  I assume this would come down
  further if my app proved popular enough to keep it on a server for any
  length of time.

  I'm brand new to python, app engine and django ... I have about 10
  years of experience with PHP and am a pretty good developer in the PHP
  space.  I would like to work on GAE with some sense of what the best
  practices are for scalable and performant apps.

  Here are my conclusions based on my very simple research thus far:
  1) Django comes at a cost ... especially if you don't use the default
  install that comes built with the SDK.
  2) Best practices is probably to pick and choose django components on
  GAE but use webapp as your primary framework.

 This depends on what you want to achieve, and on personal preference.

 -Nick Johnson



  Thoughts?  Am I off here?

 --
 Nick Johnson, App Engine Developer Programs Engineer
 Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
 368047



[google-appengine] Re: 403 Application Over Quota Problem - Not True!

2009-06-22 Thread Tony

According to this post
(http://googleappengine.blogspot.com/2009/02/skys-almost-limit-high-cpu-is-no-more.html),
you shouldn't get over-quota messages for individual requests using
high CPU, but that sounds exactly like what's happening to your app.
Unless they reimplemented the high-CPU-requests quota with the new
decreased free quota levels, this sounds like a bug
(http://code.google.com/p/googleappengine/issues/list).

On Jun 22, 3:19 pm, Devel63 danstic...@gmail.com wrote:
 All of a sudden, my app is returning 403 application over quota
 whenever I do anything a bit strenuous.

 All of the quotas are WAY under, but things that used to work fine are
 now triggering this message.

 A guess is that the budgeting process has become much more fine-
 grained, and is mistakenly extrapolating from one request that may do
 a number of DB writes and take 10 seconds.  But these are extremely
 rare.

 The app name is judysapps-qa.



[google-appengine] Re: Query with 1000 matches

2009-06-22 Thread herbie

I tried your query below, but I get: BadArgumentError: First ordering
property must be the same as inequality filter property, if specified
for this query.
Does this mean I have to order on 'x' first, then order on 'date'?
Will this still return the latest 200 of all entities with x > 50 if
I call query.fetch(200)?

I take your point, and Nick's, about keeping a 'running average'. But
in my example the user can change the 'x' value, so the average has to
be recalculated from the latest entities.


On Jun 22, 9:46 pm, Tony fatd...@gmail.com wrote:
 You could accomplish this task like so:

 xlist = []
 query = Foo.all().filter(property_x  50).order(-timestamp)
 for q in query:
   xlist.append(q.property_x)
 avg = sum(xlist) / len(xlist)

 What Nick is saying, I think, is that fetching 1000 entities is going
 to be very resource-intensive, so a better way to do it is to
 calculate this data at write-time instead of read-time.  For example,
 every time you add an entity, you could update a separate entity that
 has a property like average = db.FloatProperty() with the current
 average, and then you could simply fetch that entity and get the
 current running average.

 On Jun 22, 4:25 pm, herbie 4whi...@o2.co.uk wrote:

  Ok. Say I have many (1000)  Model entities with two properties 'x'
  and 'date'.    What is the most efficient query to fetch say the
  latest 200 entities  where x  50.   I don't care what their 'date's
  are as long as I get the latest and x  50

  Thanks again for your help.

  On Jun 22, 4:11 pm, Nick Johnson (Google) nick.john...@google.com
  wrote:

   Consider precalculating this data and storing it against another entity.
   This will save a lot of work on requests.

   -Nick Johnson

   On Mon, Jun 22, 2009 at 3:55 PM, herbie 4whi...@o2.co.uk wrote:

No the users won't need to read 1000 entities, but I want to calculate
the average of a  property from the latest 1000 entities.

On Jun 22, 3:30 pm, Nick Johnson (Google) nick.john...@google.com
wrote:
 Correct. Are you sure you need 1000 entities, though? Your users 
 probably
 won't read through all 1000.

 -Nick Johnson

 On Mon, Jun 22, 2009 at 3:23 PM, herbie 4whi...@o2.co.uk wrote:

  So to be sure to get the latest 1000 entities I should add a 
  datetime
  property to my entitie model and filter and sort on that?

  On Jun 22, 1:42 pm, herbie 4whi...@o2.co.uk wrote:
   I know that if there are more than 1000 entities that match a 
   query,
   then only 1000 will  be return by fetch().  But my question is 
   which
   1000? The last 1000 added to the datastore?  The first 1000 added 
   to
   the datastore? Or is it undedined?

   Thanks
   Ian

 --
 Nick Johnson, App Engine Developer Programs Engineer
 Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
Number:
 368047

   --
   Nick Johnson, App Engine Developer Programs Engineer
   Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
   368047



[google-appengine] Actually Using Offset

2009-06-22 Thread MajorProgamming

I need to perform a simple ordering query (descending rank) on my
datastore. The default query looks like:

query.order('-rank')

However, I now need to perform paging. I attempted doing:

query.filter('__key__ >', bookmark)
query.order('-rank')
query.order('__key__')

This won't work, though, because the datastore expects the ordering on
the inequality property (__key__) to come first.

So I modified it to look like:

query.filter('__key__ >', bookmark)
query.order('__key__')
query.order('-rank')

However, this doesn't produce the desired results.

So my question is, how can I get around this? And if there is no way,
should I just use offset and limit my users to 1000? What should I do?



[google-appengine] Re: 403 Application Over Quota Problem - Not True!

2009-06-22 Thread Mike Wesner

Enabling billing seems to have sped things up and so far has stopped
the 403s.

I still think something is fishy, since we had no warnings in the
appspot dashboard and are way under the free quotas.




On Jun 22, 4:58 pm, Mike Wesner m...@konsole.net wrote:
 Several of our appspot instances are having this exact same issue.

 We are way under quota, hardly hitting the appid at all and we see 403
 on static files and other things.  Random 500 errors too.

 We are enabling billing on a few of our test instances which we hope
 will help, but I can't see how it will make a difference since we are
 so far under quota/usage rates.

 ANY GOOGLERS READING THIS?  This is a serious issue and we get ZERO
 information or support from google.

 How can a company use this stuff when its so flakey?

 On Jun 22, 2:19 pm, Devel63 danstic...@gmail.com wrote:

  All of a sudden, my app is returning 403 application over quota
  whenever I do anything a bit strenuous.

  All of the quotas are WAY under, but things that used to work fine are
  now triggering this message.

  A guess is that the budgeting process has become much more fine-
  grained, and is mistakenly extrapolating from one request that may do
  a number of DB writes and take 10 seconds.  But these are extremely
  rare.

  The app name is judysapps-qa.





[google-appengine] Re: Django Performance Version

2009-06-22 Thread Wooble



On Jun 22, 5:09 pm, Nash-t timna...@gmail.com wrote:
 I understand the need to have 0.96 available for applications that
 want/prefer it, but at some point, couldn't google make 1.0 the
 preloaded default and require applications to zip load 0.96 if they
 want it?

It would be a Bad Idea to break existing apps that aren't modified to
request the old version.



[google-appengine] Re: Task Queue API Users

2009-06-22 Thread hawkett

Hi,

   I've deployed an app to do some tests on live app engine, and the
following code

import logging
from google.appengine.api import users

currentUser = users.get_current_user()
if currentUser is not None:
    logging.info("Current User - ID: %s, email: %s, nickname: %s" %
        (currentUser.user_id(), currentUser.email(), currentUser.nickname()))

logging.info("is admin? %s" % users.is_current_user_admin())

yields:  'is admin? False'

as the total log output.  This is code that is run directly from a
handler in app.yaml that specified - 'login:admin'
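
(For concreteness, a minimal handler entry of the kind meant here; the
path and script name are hypothetical:)

handlers:
- url: /tasks/.*
  script: tasks.py
  login: admin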

This represents a pretty big problem - it means you can't rely on
'login: admin' to produce a user that is an admin.  I'm guessing the
goal of the Task Queue API is to be usable on generic URLs - e.g. in a
RESTful application, the full CRUD (and more) functionality is exposed
via a dynamic set of URLs that more than likely are not specifically
for the Task Queue API.  However, the above situation means you really
have to code explicitly for the Task Queue API, because the meaning of
the directives in app.yaml is not reliable.  It looks like cron
functionality works like this as well, and that has been around for a
while.  Use cases such as the write-behind pattern outlined in Brett's
IO talk are significantly limited by being unable to predict whether
you will get a user or not (especially if you intend to hit RESTful
URIs that could just as easily be hit by real users).  Sure, there are
ways to code around it, but it's not pretty.

I've added a defect to the issue tracker here -
http://code.google.com/p/googleappengine/issues/detail?id=1742

I'm keen to understand how google sees this situation, and whether the
current situation is here to stay, or something short term to deliver
the functionality early.  Cheers,

Colin

On Jun 22, 4:31 pm, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi hawkett,

 My mistake. This sounds like a bug in the SDK - can you please file a bug?

 -Nick Johnson



 On Mon, Jun 22, 2009 at 4:25 PM, hawkett hawk...@gmail.com wrote:

  Hi Nick,

  In my SDK (just the normal mac download), I can inspect the queue in
  admin console, and have a 'run' and 'delete' button next to each task
  in the queue.  When I press 'run', the task fires, my server receives
  the request, and returns the 302.

  Colin

  On Jun 22, 4:15 pm, Nick Johnson (Google) nick.john...@google.com
  wrote:
   Hi hawkett,

   In the current release of the SDK, the Task Queue stub simply logs tasks
  to
   be executed, and doesn't actually execute them. How are you executing
  these
   tasks?

   -Nick Johnson

   On Mon, Jun 22, 2009 at 3:46 PM, hawkett hawk...@gmail.com wrote:

Hi,

  I'm running into some issues trying to use the Task Queue API with
restricted access URL's defined in app.yaml - when a URL is defined as
either 'login: admin' or 'login: required', when the task fires it is
receiving a 302 - which I assume is a redirect to the login page.  I'm
just running this on the SDK at the moment, but I was expecting at
least the 'login: admin' url to work, based on the following comment
from this page
   http://code.google.com/appengine/docs/python/taskqueue/overview.html

'If a task performs sensitive operations (such as modifying important
data), the developer may wish to protect the worker URL to prevent a
malicious external user from calling it directly. This is possible by
marking the worker URL as admin-only in the app configuration.'

I figure I'm probably doing something dumb, but I had expected the
tasks to be executed as some sort of system user, so that either
'login: required' or 'login: admin' would work - perhaps even being
able to specify the email and nickname of the system user as app.yaml
configuration.  Another alternative would be if there was a mechanism
to create an auth token to supply when the task is created.  e.g.
users.current_user_auth_token() to execute the task as the current
user.

So I guess the broader question is - where does the task queue get the
'run_as' user, or if there isn't one, what's the mechanism for hitting
a 'login: admin' worker URL?

Most apps should be able to expect a call to users.get_current_user()
to return a user object in code protected by 'login: admin'.

Thanks,

Colin

   --
   Nick Johnson, App Engine Developer Programs Engineer
   Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
  Number:
   368047

 --
 Nick Johnson, App Engine Developer Programs Engineer
 Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
 368047

[google-appengine] Re: Query with 1000 matches

2009-06-22 Thread Tony

Yes, that is what it means.  I forgot about that restriction.

I see what you mean about changing 'x' values.  Perhaps consider
keeping two counts - a running sum and a running count (of the # of x
properties).  If a user modifies an 'x' value, you can adjust the sum
up or down accordingly.
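
A sketch of that adjustment, reusing the hypothetical Stats entity
from the write-time approach earlier in the thread:

from google.appengine.ext import db

def update_x(foo, new_x):
    delta = new_x - foo.property_x
    foo.property_x = new_x
    foo.put()
    def txn():
        stats = Stats.get_by_key_name('global')
        stats.total += delta      # count is unchanged
        stats.put()
    db.run_in_transaction(txn)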

On Jun 22, 5:40 pm, herbie 4whi...@o2.co.uk wrote:
 I tried your query below but I get BadArgumentError: First ordering
 property must be the same as inequality filter property, if specified
 for this query;
 Does this mean I have to order on 'x' first, then order on 'date'?
 Will this still return the latest 200 of all entities with x  50 if
 I  call query.fetch(200)?

 I take your's and Nick's about keeping a 'running average'.   But in
 my example the user can change the 'x' value so the average has to be
 recalculated from the latest entities.

 On Jun 22, 9:46 pm, Tony fatd...@gmail.com wrote:



  You could accomplish this task like so:

  xlist = []
  query = Foo.all().filter(property_x  50).order(-timestamp)
  for q in query:
    xlist.append(q.property_x)
  avg = sum(xlist) / len(xlist)

  What Nick is saying, I think, is that fetching 1000 entities is going
  to be very resource-intensive, so a better way to do it is to
  calculate this data at write-time instead of read-time.  For example,
  every time you add an entity, you could update a separate entity that
  has a property like average = db.FloatProperty() with the current
  average, and then you could simply fetch that entity and get the
  current running average.

  On Jun 22, 4:25 pm, herbie 4whi...@o2.co.uk wrote:

   Ok. Say I have many (1000)  Model entities with two properties 'x'
   and 'date'.    What is the most efficient query to fetch say the
   latest 200 entities  where x  50.   I don't care what their 'date's
   are as long as I get the latest and x  50

   Thanks again for your help.

   On Jun 22, 4:11 pm, Nick Johnson (Google) nick.john...@google.com
   wrote:

Consider precalculating this data and storing it against another entity.
This will save a lot of work on requests.

-Nick Johnson

On Mon, Jun 22, 2009 at 3:55 PM, herbie 4whi...@o2.co.uk wrote:

 No the users won't need to read 1000 entities, but I want to calculate
 the average of a  property from the latest 1000 entities.

 On Jun 22, 3:30 pm, Nick Johnson (Google) nick.john...@google.com
 wrote:
  Correct. Are you sure you need 1000 entities, though? Your users 
  probably
  won't read through all 1000.

  -Nick Johnson

  On Mon, Jun 22, 2009 at 3:23 PM, herbie 4whi...@o2.co.uk wrote:

   So to be sure to get the latest 1000 entities I should add a 
   datetime
   property to my entitie model and filter and sort on that?

   On Jun 22, 1:42 pm, herbie 4whi...@o2.co.uk wrote:
I know that if there are more than 1000 entities that match a 
query,
then only 1000 will  be return by fetch().  But my question is 
which
1000? The last 1000 added to the datastore?  The first 1000 
added to
the datastore? Or is it undedined?

Thanks
Ian

  --
  Nick Johnson, App Engine Developer Programs Engineer
  Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
 Number:
  368047

--
Nick Johnson, App Engine Developer Programs Engineer
Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration 
Number:
368047



[google-appengine] Re: Server Error (500) while uploading index definitions (again)

2009-06-22 Thread Partac Constantin
Hi Jason,

Thank you for your help. I tried to redeploy the application with just
a few indexes, but the indexes you nudged into the error state remain
in that state. I tried erasing all indexes from datastore-indexes.xml,
but this did not help either. What should I do to erase the indexes
that are stuck in the error state?
I did not know about the 100-index limitation. Does it refer to 100
new indexes when deploying the application, or must the total count
not exceed 100? The page
http://code.google.com/appengine/docs/java/datastore/overview.html#Quotas_and_Limits
mentions a limitation of 1,000 indexes on an entity. Could you detail
what you mean by the 100-index quota?

Thank you,
Costi

On Mon, Jun 22, 2009 at 20:45, Jason (Google) apija...@google.com wrote:

 OK, I nudged your indexes into the error state and reset your Datastore
 Indices Count quota, but make sure not to upload more than 100 indexes or
 you may see this issue again.
 - Jason


 On Mon, Jun 22, 2009 at 10:29 AM, Jason (Google) apija...@google.comwrote:

 Hi Costi. How many indexes are you trying to deploy? There is a hard limit
 of 100, and it looks like you're very close to this number.
 - Jason


 On Mon, Jun 22, 2009 at 5:44 AM, C Partac cpar...@gmail.com wrote:




 I have the same problem Server Error (500)  on my applications
 cpedevtest01 and cpedevtest02 while uploading. The indexes are
 building for 4 days already.
 Could you reset the indexes manually because after deleting the
 indexes from index I still get the error while uploading the
 application.

 Thank you

 Costi





 





[google-appengine] Re: Change db.IntegerProperty() to db.FloatProperty()?

2009-06-22 Thread Savraj

Thanks Nick and Tony for your help and clarification.  I will look
into the potential migration options in the article you sent, Tony.
Never use int for something that might be a float! ;)

-savraj
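
For anyone landing here later, one hedged sketch of that transition,
using an Expando view of the kind so old integer values load without
tripping FloatProperty validation (names hypothetical, batching
omitted):

from google.appengine.ext import db

class MyModelLoader(db.Expando):
    @classmethod
    def kind(cls):
        return 'MyModel'   # read/write the existing kind

def migrate(batch=100):
    for ent in MyModelLoader.all().fetch(batch):
        ent.value = float(ent.value)   # re-put stores a float
        ent.put()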

On Jun 22, 2:00 pm, Tony fatd...@gmail.com wrote:
 http://code.google.com/appengine/articles/update_schema.htmldescribes
 one technique for updating your schema.

 On Jun 22, 12:13 pm, Nick Johnson (Google) nick.john...@google.com
 wrote:

  Hi Savraj,

  If you change your property from db.IntegerProperty to db.FloatProperty, all
  your existing entities will fail to validate, and throw exceptions when
  loaded. If you want to make this change, you need to transition all your
  existing entities before making the update.

  -Nick Johnson

  On Mon, Jun 22, 2009 at 5:04 PM, Savraj sav...@gmail.com wrote:

   Hi App Engine-ers,

   So I've got a ton of data stored in my db -- and I've got a particular
   field, let's call it 'value' set as 'db.IntegerProperty()' in my model
   definition.  If I change this to 'db.FloatProperty()', what happens?

   I would imagine that the existing values in the db remain Integers,
   while the new ones coming in are floats, and that should be fine for
   my purposes. But will it work?  I suppose the only way to know is try,
   but I don't want to mangle my database, which has quite a bit of data
   in it.

   What will happen in this case?

   -s

  --
  Nick Johnson, App Engine Developer Programs Engineer
  Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
  368047



[google-appengine] Re: Urllib.urlfetch with HTTPS giving IOError error

2009-06-22 Thread Ritesh Nadhani

Anybody?

On Fri, Jun 19, 2009 at 12:53 PM, Ritesh Nadhanirite...@gmail.com wrote:
 Hi

 Hi, I am trying to access an https url specifically:
 https://api-3t.sandbox.paypal.com/nvp and I get the traceback:
 http://paste.pocoo.org/show/124055/

 If I access the same thing from shell using urllib, everything works
 and I get the correct response. http://paste.pocoo.org/show/124067/

 Both the code send the same exact parameter to urllib.urlopen(). Is
 GAE urlib() different from the standard Python?

 What could be the issue?:

 --
 Ritesh
 http://www.riteshn.com




-- 
Ritesh
http://www.riteshn.com




[google-appengine] Re: Efficient way to structure my data model

2009-06-22 Thread ecognium

Thanks, Nick. Let me make sure I understand your comment correctly.
Suppose I have the following data:

ID  Blob1 Blob2-N Keywords        Categ
--------------------------------------------------------------
123 blah  blah    tag1,tag2,tag3  Circle, Red,   Large, Dotted
345 blah  blah    tag3,tag4,tag5  Square, Blue,  Small, Solid
678 blah  blah    tag1,tag3,tag4  Circle, Blue,  Small, Solid
--------------------------------------------------------------

The field categ (list) contains four different types - Shape, Color,
Size and Line Type. Suppose the user wants to retrieve all entities
that are Small Dotted Blue Circles then the query will be:

SELECT * FROM MyModel WHERE categ = 'Circle' AND categ = 'Small' AND
categ = 'Blue' AND categ = 'Dotted'

When I was reading about exploding indexes the example indicated the
issue was due to Cartesian product of two list elements. I thought the
same will hold true with one list field when used multiple times in a
query. Are you saying the above query will not need {Circle, Red,
Large, Dotted} * {Circle, , , } * {Circle, , , } * {Circle, , , }
index entries for entity ID=123? I was getting index errors when I
used the categ list property four times in my index specification,
which is why I was wondering if I should restructure things. So I am
guessing the following spec should not cause any index issues in the
future?

- kind: MyModel
  properties:
  - name: categ
  - name: categ
  - name: categ
  - name: categ
  - name: keywords
  - name: __key__   # used for paging
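
(For reference, the equality-only query Nick says is merge-join-servable
looks like this in the db API, assuming the Example model from my
original post:)

q = Example.all()
for value in ('Circle', 'Small', 'Blue', 'Dotted'):
    q.filter('categ =', value)
results = q.fetch(20)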

Thanks,
-e


On Jun 22, 2:10 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi ecognium,

 If I understand your problem correctly, every entity will have 0-4 entries
 in the 'categ' list, corresponding to the values for each of 4 categories
 (eg, Color, Size, Shape, etc)?

 The sample query you give, with only equality filters, will be satisfiable
 using the merge join query planner, which doesn't require custom indexes, so
 you won't have high indexing overhead. There will simply be one index entry
 for each item in each list.

 If you do need custom indexes, the number of index entries, isn't 4^4, as
 you suggest, but rather smaller. Assuming you want to be able to query with
 any number of categories from 0 to 4, you'll need 3 or 4 custom indexes
 (depending on if the 0-category case requires its own index), and the total
 number of index entries will be 4C1 + 4C2 + 4C3 + 4C4 = 4 + 6 + 4 + 1 = 15.
 For 6 categories, the number of entries would be 6 + 15 + 20 + 15 + 6 + 1 =
 63, which is still a not-unreasonable number.

 -Nick Johnson



 On Mon, Jun 22, 2009 at 8:51 AM, ecognium ecogn...@gmail.com wrote:

  Hi All,

     I would like to get your opinion on the best way to structure my
  data model.
  My app allows the users to filter the entities by four category types
  (say A,B,C,D). Each category can have multiple values (for e.g.,
  category type A can have values 1,2,3) but the
  user can  choose only one value per category for filtering.  Please
  note the values are unique across the category types as well. I could
  create four fields corresponding to the four types but it does not
  allow me to expand to more categories later easily. Right now, I just
  use one list field to store the different values as it is easy to add
  more category types later on.

  My model (simplified) looks like this:

  class Example(db.Model):

     categ        = db.StringListProperty()

     keywords = db.StringListProperty()

  The field keywords will have about 10-20 values for each entity. For
  the above example, categ will have up to 4 values. Since I allow for
  filtering on 4 category types, the index table gets large with
  unnecessary values. The filtering logic looks like:
  keyword = 'k' AND categ = '1' AND categ = '9' AND categ = '14' AND
  categ = '99'

   Since there are 4 values in the categ list property, there will be
  4^4 rows created in the index table (most of them will never be hit
  due to the uniqueness guaranteed by design). Multiply it by the number
  of values in the keywords table, the index table gets large very
  quickly.

  I would like to avoid creating multiple fields if possible because
  when I want to make the number of category types to six, I would have
  to change the underlying model and all the filtering code. Any
  suggestions on how to construct the model such that it will allow for
  ease of expansion in category types yet still not create large index
  tables? I know there is a Category Property but not sure if it really
  provides any specific benefit here.

  Thanks!
  -e

 --
 Nick Johnson, App Engine Developer Programs Engineer
 Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
 368047

[google-appengine] Re: Efficient way to structure my data model

2009-06-22 Thread ecognium

Thanks, Nick. Let me make sure I understand your comment correctly.
Suppose I have the following data:

ID  Blob1 Blob2-N Keywords  Categ

123 blah  blah  tag1,tag2,tag3  Circle,Red,  Large, Dotted
345 blah  blah  tag3,tag4,tag5  Square, Blue, Small, Solid
678 blah  blah  tag1,tag3,tag4  Circle, Blue, Small, Solid
--

The field categ (list) contains four different types - Shape, Color,
Size and Line Type. Suppose the user wants to retrieve all entities
that are Small Dotted Blue Circles then the query will be:

Select * From MyModel where categ = Circle AND categ = Small AND
categ = Blue AND categ = Dotted

When I was reading about exploding indexes, the example indicated the
issue was due to the Cartesian product of two list properties. I thought
the same would hold true for one list field used multiple times in a
query. Are you saying the above query will not need the Cartesian
product of the categ list with itself four times (4^4 = 256 index rows)
for entity ID=123? I was getting index errors when I was using the
categ list property four times in my index specification, and that's
why I was wondering if I should restructure things. So I am guessing
the following spec should not cause any index issues in the future?

- kind: MyModel
  properties:
  - name: categ
  - name: categ
  - name: categ
  - name: categ
  - name: keywords
  - name: __key__   # used for paging

Thanks,
-e

On Jun 22, 2:10 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi ecognium,

 If I understand your problem correctly, every entity will have 0-4 entries
 in the 'categ' list, corresponding to the values for each of 4 categories
 (eg, Color, Size, Shape, etc)?

 The sample query you give, with only equality filters, will be satisfiable
 using the merge join query planner, which doesn't require custom indexes, so
 you won't have high indexing overhead. There will simply be one index entry
 for each item in each list.

 If you do need custom indexes, the number of index entries isn't 4^4, as
 you suggest, but rather smaller. Assuming you want to be able to query with
 any number of categories from 0 to 4, you'll need 3 or 4 custom indexes
 (depending on whether the 0-category case requires its own index), and the
 total number of index entries will be 4C1 + 4C2 + 4C3 + 4C4 = 4 + 6 + 4 + 1
 = 15. For 6 categories, the number of entries would be 6 + 15 + 20 + 15 + 6
 + 1 = 63, which is still not an unreasonable number.

 -Nick Johnson




[google-appengine] Re: Testing Task Queue

2009-06-22 Thread Stephen Mayer

Hey Jeff,

Thanks much for the reply.  I'm wondering why the admin interface in
my dev server isn't ever mentioned in the docs (that I have found thus
far) ... it would have come in handy to know of its existence.
Perhaps you might consider adding a mention of it at some point?
Looks like I can browse my entities and reset my cache ... nice stuff
to know about.  Some of these features you can't even do in
production.

Stephen

On Jun 22, 1:55 pm, Jeff S (Google) j...@google.com wrote:
 Hi Stephen,
 In the SDK dev server, the task queue's tasks must be triggered manually. If
 you visit localhost:8080/_ah/admin/queues, you can see a list of queue names
 with a flush button to cause all enqueued tasks in that queue to be
 executed. Part of the reason for having a manual trigger for execution is to
 prevent runaway scenarios as you describe. In the SDK you can step through
 each generation of tasks and watch for endless or exponential triggers.

 Happy coding,

 Jeff

 On Sun, Jun 21, 2009 at 5:27 PM, Stephen Mayer stephen.ma...@gmail.com wrote:





  So now that we have the task queue ... how do we test it in our
  sandboxes?  Or perhaps I missed that part of the documentation ... can
  anyone clue me in on testing it in a place that is not production (I
  wouldn't want a queue to start some runaway process in production ...
  would much prefer to catch those cases in testing).

  Thoughts?
  -Stephen
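
For anyone looking for the enqueueing side, it is just a couple of lines.
A minimal sketch, assuming the 1.2.3 SDK, where the Task Queue API still
lives under labs; the /worker URL and the payload are placeholders:

from google.appengine.api.labs import taskqueue

# Enqueue a task; on the dev server it sits in the default queue
# until you hit the flush button at /_ah/admin/queues.
taskqueue.add(url='/worker', params={'note': 'hello'})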



[google-appengine] Re: Django Performance Version

2009-06-22 Thread Stephen Mayer

Thanks, Nick!  I would love to see an article from Google's
perspective on best practices for app development.  Or perhaps some of
the developers out here can help put together some great guidelines.

Stephen

On Jun 22, 4:17 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi Stephen,

  On Mon, Jun 22, 2009 at 4:21 AM, Stephen Mayer stephen.ma...@gmail.com wrote:



  If I want to use the new Django 1.x support do I replace the django
  install in the app engine SDK  ... or do I add it to my app as a
  module?  If I add it ... how do I prevent it from being uploaded with
  the rest of the app?

 For how to use Django 1.0 in App Engine, see 
 here: http://code.google.com/appengine/docs/python/tools/libraries.html#Django
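
 The relevant incantation from that page, as a sketch (assuming you want
 the bundled 1.0 release):

 from google.appengine.dist import use_library
 use_library('django', '1.0')

 # Import Django modules only after the call above, so the chosen
 # version is the one that gets loaded.
 from django.template import Template, Context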

 I'm also wondering about Django performance.  Here was my test case:

  create a very basic app Django Patch ... display a page (no db
  reads ... just display a template)
  ... point mon.itor.us at it every 30 minutes ... latency is about
  1500-2000ms.  I assume it's because Django Patch zips up django into a
  package and the package adds overhead ... the first time it's hit the
  app server has to unzip it (or is it every time it's hit?)  Woah ...
  that seemed a bit high for my taste ... I want my app to be reasonably
  performant ... and that's not reasonable.

 The first request to a runtime requires that the runtime be initialized, all
 the modules loaded, etcetera. On top of that, as you point out, Django
 itself has to be zipimported, which increases latency substantially. If the
 ping every 30 minutes is the only traffic to your app, what you're seeing is
 the worst-case latency, every single request. Using the built-in Django will
 decrease latency substantially, but more significantly, requests that hit an
 existing runtime (the vast majority of them, for a popular app) will see far
 superior latencies, since they don't need to load anything.







  Try 2:
  create a very basic app displaying a template, use the built in django
  template engine but without any of the other django stuff ... use the
  GAE webapp as my framework.  response time is now down to 100-200ms on
  average, according to mon.itor.us.  I assume this would come down
  further if my app proved popular enough to keep it on a server for any
  length of time.

  I'm brand new to python, app engine and django ... I have about 10
  years of experience with PHP and am a pretty good developer in the PHP
  space.  I would like to work on GAE with some sense of what the best
  practices are for scalable and performant apps.

  Here are my conclusions based on my very simple research thus far:
  1) Django comes at a cost ... especially if you don't use the default
  install that comes bundled with the SDK.
  2) Best practice is probably to pick and choose django components on
  GAE but use webapp as your primary framework.

 This depends on what you want to achieve, and on personal preference.

 -Nick Johnson



  Thoughts?  Am I off here?

 --
 Nick Johnson, App Engine Developer Programs Engineer
 Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
 368047



[google-appengine] Re: Urllib.urlfetch with HTTPS giving IOError error

2009-06-22 Thread Tony

I don't believe urllib supports https requests.  Try using urllib2 or
Google's urlfetch module.
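
A minimal urlfetch sketch (the URL is just a placeholder):

from google.appengine.api import urlfetch

# urlfetch supports https for outgoing requests from the sandbox.
result = urlfetch.fetch('https://www.example.com/')
if result.status_code == 200:
    print result.content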



[google-appengine] dev_appserver.py throws SystemError: frexp() result out of range and subsequently ValueError: bad marshal data

2009-06-22 Thread rraj
Hi,
Has anybody encountered the following error, and can you guide me on how
you fixed it?

When running the development app server, the initial run throws
"SystemError: frexp() result out of range":

C:\Program Files\Google\google_appengine>dev_appserver.py
--datastore_path=C:\gae_data --history_path=C:\gae_data demos\guestbook
Traceback (most recent call last):
  File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 60,
in <module>
    run_file(__file__, globals())
  File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 57,
in run_file
    execfile(script_path, globals_)
  File "C:\Program
Files\Google\google_appengine\google\appengine\tools\dev_appserver_main.py",
line 483, in <module>
    sys.exit(main(sys.argv))
  File "C:\Program
Files\Google\google_appengine\google\appengine\tools\dev_appserver_main.py",
line 400, in main
    SetGlobals()
  File "C:\Program
Files\Google\google_appengine\google\appengine\tools\dev_appserver_main.py",
line 86, in SetGlobals
    from google.appengine.tools import dev_appserver
  File "C:\Program
Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line
86, in <module>
    from google.appengine.api import datastore_file_stub
  File "C:\Program
Files\Google\google_appengine\google\appengine\api\datastore_file_stub.py",
line 38, in <module>
    import datetime
SystemError: frexp() result out of range



Subsequent attempts to run applications throw "ValueError: bad marshal
data":


C:\Program Files\Google\google_appengine>dev_appserver.py
--datastore_path=C:\gae_data --history_path=C:\gae_data demos\guestbook
Traceback (most recent call last):
  File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 60,
in <module>
    run_file(__file__, globals())
  File "C:\Program Files\Google\google_appengine\dev_appserver.py", line 57,
in run_file
    execfile(script_path, globals_)
  File "C:\Program
Files\Google\google_appengine\google\appengine\tools\dev_appserver_main.py",
line 483, in <module>
    sys.exit(main(sys.argv))
  File "C:\Program
Files\Google\google_appengine\google\appengine\tools\dev_appserver_main.py",
line 400, in main
    SetGlobals()
  File "C:\Program
Files\Google\google_appengine\google\appengine\tools\dev_appserver_main.py",
line 86, in SetGlobals
    from google.appengine.tools import dev_appserver
  File "C:\Program
Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line
86, in <module>
    from google.appengine.api import datastore_file_stub
ValueError: bad marshal data



Python Version :: Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45)
[MSC v.1310 32 bit (Intel)] on win32

GAE Version: 1.2.3 (GoogleAppEngine_1.2.3.msi)

Was running the application with an earlier version of GAE
(1.2.2/1.2.1) when I encountered the "bad marshal data" problem during a
restart of my application. Tried moving to the latest version to see if this
has been handled.

Removing the datastore_file_stub.pyc file and running again reproduces
the problem in the same sequence: "frexp() result out of range" followed by
"bad marshal data".
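
In case other stale .pyc files are lurking as well, a blunt sweep (a
sketch, assuming the default install path; adjust as needed):

import os

# "bad marshal data" usually means a stale or corrupt .pyc; remove them
# all and let Python regenerate them on the next run.
sdk_root = r'C:\Program Files\Google\google_appengine'
for root, dirs, files in os.walk(sdk_root):
    for name in files:
        if name.endswith('.pyc'):
            os.remove(os.path.join(root, name))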

Tried moving to Python 2.6.2 - did not help.

Tried repairing GAE 1.2.3 - did not help.

Uninstalled Python 2.6.2, Python 2.5.2 & GAE, then installed Python 2.5.2
and GAE 1.2.3 again and tested with the demo application and a new data-store
path, which is when I got the above traces.


Not able to run any GAE apps now :-(
Any tips to get me going again will be appreciated.

Thanks & Regards,
R.Rajkumar




[google-appengine] Re: 403 Application Over Quota Problem - Not True!

2009-06-22 Thread cc

Yep, same problem here: 403s everywhere! Try using the Remote API: it is
now useless with the new quota. The problem seems to be that the burst
quota has also dropped by a factor of 10 along with the daily quota.
Google has the burst quota set to try to help keep you under the daily
quota, spreading the quota over a 24-hour period. This feature seems
to render most apps unusable now. Anyone trying out App Engine under
the free quota is quickly going to decide not to use App Engine for
their web app. Kind of defeats the purpose of even offering the free
version. Google should have kept the burst quota constant and only
reduced the daily quota. This is just further proof that the
accountants are now running the show at Google.

On Jun 22, 3:06 pm, Mike Wesner m...@konsole.net wrote:
 enabling billing seems to have sped things up and so far has stopped
 the 403's.

 I still think something is fishy, since we had no warnings in the
 appspot dashboard and are way under the free quotas.

 On Jun 22, 4:58 pm, Mike Wesner m...@konsole.net wrote:

  Several of our appspot instances are having this exact same issue.

  We are way under quota, hardly hitting the appid at all and we see 403
  on static files and other things.  Random 500 errors too.

  We are enabling billing on a few of our test instances which we hope
  will help, but I can't see how it will make a difference since we are
  so far under quota/usage rates.

  ANY GOOGLERS READING THIS?  This is a serious issue and we get ZERO
  information or support from google.

  How can a company use this stuff when it's so flaky?

  On Jun 22, 2:19 pm, Devel63 danstic...@gmail.com wrote:

   All of a sudden, my app is returning 403 application over quota
   whenever I do anything a bit strenuous.

   All of the quotas are WAY under, but things that used to work fine are
   now triggering this message.

   A guess is that the budgeting process has become much more fine-
   grained, and is mistakenly extrapolating from one request that may do
   a number of DB writes and take 10 seconds.  But these are extremely
   rare.

   The app name is judysapps-qa.



[google-appengine] Re: Performance improvements

2009-06-22 Thread cc

I think you misread the doublespeak: the reduction in the quota IS the
performance improvement.

On Jun 22, 6:05 am, luddep lud...@gmail.com wrote:
 Hello,

 So the free quotas have been reduced today and according to the docs
 (http://code.google.com/appengine/docs/quotas.html#Free_Changes) there
 are going to be some performance improvements as well, will there be
 any information released regarding what actual improvements they are?
 (i.e., datastore related, etc)

 Thanks!
 - Ludwig



[google-appengine] Re: Performance improvements

2009-06-22 Thread cc

"along with many performance improvements, we will be reducing the
free quota levels"

On Jun 22, 7:11 pm, cc c...@gamegiants.net wrote:
 I think you misread the doublespeak: the reduction in the quota IS the
 performance improvement.




[google-appengine] Idempotence for Cron Jobs

2009-06-22 Thread MajorProgamming

I understand that Task Queue tasks have the possibility of running over
and over again. Does this apply to cron jobs? Do we need to design them
to be idempotent as well?

For more info on what I'm talking about:
http://en.wikipedia.org/wiki/Idempotence
http://www.youtube.com/watch?v=o3TuRs9ANhs
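
Not an official answer, but the defensive pattern is cheap either way.
A sketch of one common approach, with a hypothetical JobRun model and
do_work() standing in for the real job body:

from google.appengine.ext import db

class JobRun(db.Model):
    completed = db.BooleanProperty(default=False)

def run_job(run_id):
    # Key the run by a deterministic name so a re-run finds the same
    # record instead of starting the work over.
    marker = JobRun.get_or_insert('cron-' + run_id)
    if marker.completed:
        return  # already done; repeat invocations become no-ops
    do_work()  # hypothetical: the actual job body
    marker.completed = True
    marker.put()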