[google-appengine] Re: Datastore is slow on queries involving many entities, but a smallish dataset

2009-12-01 Thread yejun
Does MessageS have a relation? Those related fields may be getting
converted to separate queries, because there is no table join.
In JPA you can mark those fields as lazy-fetched to avoid that.

On Dec 1, 4:55 am, Eric Rannaud eric.rann...@gmail.com wrote:
 At http://www.911pagers.org/, a relatively simple query such
 as http://www.911pagers.org/#rangeId0-128, which does:

     SELECT * FROM MessageS where id >= 0 && id < 128 order by id

 and fetches 128 entities, takes anywhere between 1s and 4s, usually
 around +2.5s, measured on the server, in wall-clock time, with the
 following code:

     Calendar c = Calendar.getInstance();
     long t0 = c.getTimeInMillis();
     qmsgr = (List<MessageS>) qmsg.execute(lo, hi);
     System.err.println("getCMIdRange:qmsg: " + (System.currentTimeMillis() - t0));

 id is not the primary key of MessageS, but a 'long' field, _unique_ in
 each entity (so it maps 1-to-1 with the entities). Note: such a query
 doesn't require a specific index.

 There are about 500,000 MessageS entities in the datastore, for a
 total of 140 MB, excluding metadata (of which there are about 85 MB).

 Repeating the same request several times in a row doesn't improve
 response time. The delay has been pretty consistent over several days.
 Note again that the delay is measured on the server, the latency of my
 connection is not a factor.

 I am aware App Engine is a preview and all, but 2.5s, really?

 I can find the data faster in a text file split across 4 machines over
 a shared Ethernet 10Mb with a bunch of Python for loops running on
 OLPC XOs, for $DEITY's sake.

 Eric.

--

You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: Datastore is slow on queries involving many entities, but a smallish dataset

2009-12-01 Thread yejun


On Dec 1, 4:55 am, Eric Rannaud eric.rann...@gmail.com wrote:
 At http://www.911pagers.org/, a relatively simple query such
 as http://www.911pagers.org/#rangeId0-128, which does:

     SELECT * FROM MessageS where id >= 0 && id < 128 order by id

 and fetches 128 entities, takes anywhere between 1s and 4s, usually
 around +2.5s, measured on the server, in wall-clock time, with the
 following code:

     Calendar c = Calendar.getInstance();
     long t0 = c.getTimeInMillis();
     qmsgr = (List<MessageS>) qmsg.execute(lo, hi);
     System.err.println("getCMIdRange:qmsg: " + (System.currentTimeMillis() - t0));

 id is not the primary key of MessageS, but a 'long' field, _unique_ in
 each entity (so it maps 1-to-1 with the entities). Note: such a query
 doesn't require a specific index.

Why doesn't such a query require a specific index?
There's no such thing as a unique constraint on the Google datastore.


 There are about 500,000 MessageS entities in the datastore, for a
 total of 140 MB, excluding metadata (of which there are about 85 MB).

 Repeating the same request several times in a row doesn't improve
 response time. The delay has been pretty consistent over several days.
 Note again that the delay is measured on the server, the latency of my
 connection is not a factor.

 I am aware App Engine is a preview and all, but 2.5s, really?

 I can find the data faster in a text file split across 4 machines over
 a shared Ethernet 10Mb with a bunch of Python for loops running on
 OLPC XOs, for $DEITY's sake.

 Eric.





[google-appengine] Re: Datastore is slow on queries involving many entities, but a smallish dataset

2009-12-01 Thread yejun
Did you enable auto-indexing in datastore-indexes.xml, and is the
generated datastore-indexes-auto.xml correct?

On Dec 1, 6:33 pm, Eric Rannaud eric.rann...@gmail.com wrote:
 On Tue, Dec 1, 2009 at 3:21 PM, yejun yej...@gmail.com wrote:
  id is not the primary key of MessageS, but a 'long' field, _unique_ in
  each entity (so it maps 1-to-1 with the entities). Note: such a query
  doesn't require a specific index.

  Why such a query doesn't require a specific index?

 I mean, according to the documentation, you can execute such a query
 without building an index for it, as App Engine provides automatic
 indexes in such a case.

 App Engine provides automatic indexes for the following forms of queries:
   - queries using only equality and ancestor filters
   - queries using only inequality filters (which can only be of a
 single property)
   - queries with no filters and only one sort order on a property,
 either ascending or descending
   - queries using equality filters on properties and inequality or
 range filters on keys

 My query is of the second type.

  There's no such thing as unique constraint on google data store.

 I don't understand what you mean.

This is related to issue #178 
http://code.google.com/p/googleappengine/issues/detail?id=178





[google-appengine] Re: Datastore is slow on queries involving many entities, but a smallish dataset

2009-12-01 Thread yejun
You still need to enable auto index.
http://code.google.com/appengine/docs/java/config/indexconfig.html
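For reference, the linked page enables automatic index generation with a single attribute; a minimal datastore-indexes.xml sketch (placed under WEB-INF/):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- autoGenerate="true" tells the dev server to record any composite
     indexes your queries need in datastore-indexes-auto.xml -->
<datastore-indexes autoGenerate="true">
</datastore-indexes>
```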

On Dec 1, 7:32 pm, Eric Rannaud eric.rann...@gmail.com wrote:
 On Tue, Dec 1, 2009 at 4:21 PM, yejun yej...@gmail.com wrote:
  Did you enable auto index in datastore-indexes.xml and generated
  datastore-indexes-auto.xml is correct?

 Yes. But that's irrelevant, as an index is not needed. And by the way,
 you can try to define an index over a single property, but it will be
 rejected by appcfg, as App Engine provides those implicitly.

   There's no such thing as unique constraint on google data store.

  I don't understand what you mean.

  This is related to issue #178:
  http://code.google.com/p/googleappengine/issues/detail?id=178

 Oh I see. I never talked about a unique(ness) constraint. I merely
 stated that the value of the field id *happens* to be unique across
 all MessageS entities.

 Thanks,
 Eric.





[google-appengine] Re: is gaeutilities sessions the only 3rd party session manager?

2009-01-24 Thread yejun

Maybe store a secure token locally in Gears or Flash, then send a
one-time token via JavaScript. But the initial token still needs to be
delivered over SSL.





[google-appengine] Re: is gaeutilities sessions the only 3rd party session manager?

2009-01-24 Thread yejun

HTTP digest auth is another option. But without SSL, I can't see any
practical reason to elevate the session security level.

On Jan 24, 1:37 pm, bowman.jos...@gmail.com
bowman.jos...@gmail.com wrote:
 The problems I see with that approach are:

  - A one-time token can be sniffed. We have limited SSL support on
 App Engine, which is why the client-side session token needs to change.
  - Relying on Gears, Flash, or even JavaScript creates client-side
 dependencies. gaeutilities already has a dependency on cookies; because
 it operates at a low enough level, creating a way to append the session
 token to all requests for all applications wasn't really possible.
 Though I do have plans to expose the session token via some method to
 give people an opportunity to do that, adding more dependencies is
 something I want to avoid.

 On Jan 24, 12:57 pm, yejun yej...@gmail.com wrote:

   Maybe store a secure token locally in Gears or Flash, then send a
   one-time token via JavaScript. But the initial token still needs to
   be delivered over SSL.



[google-appengine] Re: Store model as different name to class?

2009-01-04 Thread yejun

Will this work?

mymodel().put()

mymodel.kind = lambda x: u'newmodel'

mymodel().put()

Will the first instance be saved to the mymodel table and the second
instance saved to the newmodel table?

On Dec 30 2008, 8:13 pm, ryan ryanb+appeng...@google.com wrote:
 actually, the recommended way to do this is to override the Model.kind
 () class method:

 http://code.google.com/appengine/docs/datastore/modelclass.html#Model...



[google-appengine] Re: How to sort enties making use of sharded counter?

2008-12-27 Thread yejun


 high write contention actually is, i.e. how many writes per minute/
 second you'd have to hit to have write conflicts, i.e. when you should
 consider a sharded approach.


The datastore can handle about 10 writes/s at a constant interval. So if
writes occur randomly at an average of 2/s, there is about a 1%
probability that a transaction takes an excessive time (~300 ms).
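As a rough sanity check on figures like this, one can model random writes as a Poisson process (a sketch under stated assumptions; the 100 ms commit window is derived from the 10/s capacity, and a conflict is taken to mean two or more writes landing in the same window):

```python
import math

# Rough model (assumptions, not measurements): writes arrive as a Poisson
# process at 2/s, and the 10 writes/s capacity means each commit occupies
# a ~100 ms window. A conflict needs at least 2 writes in the same window.
rate = 2.0            # average writes per second
window = 1.0 / 10.0   # seconds per commit slot, from the 10/s capacity
lam = rate * window   # expected writes per window

p_conflict = 1 - math.exp(-lam) * (1 + lam)  # P(N >= 2) for Poisson(lam)
print(round(p_conflict, 4))  # 0.0175, i.e. roughly 1-2%
```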





[google-appengine] Re: Store model as different name to class?

2008-12-27 Thread yejun

NewTest = type('NewTest', (Test,), {})

On Dec 27, 11:43 am, Anthony acorc...@gmail.com wrote:
 Hi, Is it possible to store a model as a different name to the class
 name?

 I thought this would work but it's still stored as a Test...

 class Test(db.Expando):
         test = db.StringProperty()

 class MainHandler(webapp.RequestHandler):
         def get(self):
                 NewTest = Test
                 nt = NewTest(test='hello')
                 nt.put()
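The one-liner above works because type() builds a subclass whose __name__ (which db.Model uses as the default kind) differs from the parent's; a plain-Python illustration:

```python
# Plain-Python sketch: type(name, bases, dict) creates a subclass whose
# __name__ differs from the original class, which is why the dynamically
# created NewTest is stored under a different kind.
class Test(object):
    pass

NewTest = type('NewTest', (Test,), {})

print(Test.__name__)              # Test
print(NewTest.__name__)           # NewTest
print(issubclass(NewTest, Test))  # True
```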



[google-appengine] Re: Use AppEngine on Vista x64

2008-12-22 Thread yejun

Did you install x86 or amd64 python?

On Dec 22, 2:58 pm, Chen Harel chook.ha...@gmail.com wrote:
 Hi, I've installed Python 2.5.2 and AppEngine 1.1.7 on Vista x64
 Ultimate... (Admin user - NO UTC)
 both dev_appserver.py and appcfg.py gives me the help screen (Like no
 options were included) every single time..
 Please help me help you help me ... What more information do you need



[google-appengine] Re: Your app is probably incorrect! [vote up issue 313].

2008-12-22 Thread yejun


 Is there any particular reason why distributed transactions don't have
 a higher priority on the GAE TODO list?

What makes you think it should be a higher priority? In my opinion it is
not.

Do you think a blog, a todo list, or a calculator requires distributed
transactions?




[google-appengine] Re: Your app is probably incorrect! [vote up issue 313].

2008-12-22 Thread yejun


 Why would you want to limit GAE apps to those sorts of things?  What
 about all those apps where there is an interaction between users?


Developers are still struggling to complete some simple tasks, such as
paging through comments, with the current datastore API. Do you think it
is necessary to have a luxury toilet while living in a shelter?





[google-appengine] Re: Reading app.yaml from App Engine -- possible?

2008-12-22 Thread yejun

import os
os.environ.get('APPLICATION_ID')

On Dec 22, 4:09 pm, Aral a...@aralbalkan.com wrote:
 I need to get to the app name at runtime when running on the
 deployment environment.

 I tried open('app.yaml').read() + a little regexp. Works locally, but
 it can't find the file on the deployment environment.

 Is it possible to read in app.yaml on deployment?

 If not, is there a way to get at the app name?

 Thanks,
 Aral
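The two-line answer above can be sketched with a local-development fallback (the 'dev-app' default is illustrative):

```python
import os

# On App Engine the runtime exposes the app name as APPLICATION_ID in the
# environment, so there is no need to read app.yaml at runtime. Outside
# the deployed environment the variable may be missing, hence the default.
app_id = os.environ.get('APPLICATION_ID', 'dev-app')
print(app_id)
```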



[google-appengine] Re: use stored string key to get an entity

2008-12-22 Thread yejun

Key-to-str conversion is not reversible.

On Dec 22, 5:09 pm, Shay Ben Dov shay.ben...@gmail.com wrote:
 Hi this is the issue

 I have a Transaction model

 I do:
 transaction=Transaction()
 transaction.put()

 tr_key = str(transaction.key())

 I want to store it in a second mode

 class Tag(db.Model):

   transaction = db.Key(encoded=None)
   tag = db.StringProperty(required=True)
   user = db.UserProperty(required=True)

 I do:

 tag = Tag(transaction=tr_key,)
 tag.put()

 later on in the application I retrieve the tag entity and I want to
 get the transaction back

 I do:
 tr_key = tag.transaction
 transaction = Transaction.get(tr_key)

 and I get an error message:
 BadKeyError: Key datastore_types.Key.from_path(_app=u'') is not
 complete.

 Every help or example is very appreciated.

 Thanks,

 Shay



[google-appengine] Re: Use AppEngine on Vista x64

2008-12-22 Thread yejun

I have no problem with the x86 version.
I had both Python and the SDK added to the system path during installation.

On Dec 22, 5:29 pm, Chen Harel chook.ha...@gmail.com wrote:
 Hey, I'm on Intel Core 2 Duo, so I've installed the x86...

 On Dec 22, 10:26 pm, yejun yej...@gmail.com wrote:

  Did you install x86 or amd64 python?

  On Dec 22, 2:58 pm, Chen Harel chook.ha...@gmail.com wrote:

   Hi, I've installed Python 2.5.2 and AppEngine 1.1.7 on Vista x64
   Ultimate... (Admin user - NO UTC)
   both dev_appserver.py and appcfg.py gives me the help screen (Like no
   options were included) every single time..
   Please help me help you help me ... What more information do you need



[google-appengine] Re: filesystem virtualization on top of datastore

2008-12-19 Thread yejun

I think you can monkey-patch the builtin functions and the os module to
enable existing code to run on that kind of VFS.
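A minimal sketch of that idea (illustrative only: a dict stands in for the datastore-backed filesystem, and only open() is patched; a real shim would also cover the os module):

```python
import builtins  # the 2008-era Python 2 equivalent is __builtin__
import io

# Fake backing store standing in for a datastore-backed VFS.
_vfs = {'/virtual/hello.txt': 'hello from the virtual filesystem'}
_real_open = builtins.open

def vfs_open(path, mode='r', *args, **kwargs):
    # Serve known virtual paths from memory; delegate everything else.
    if path in _vfs:
        return io.StringIO(_vfs[path])
    return _real_open(path, mode, *args, **kwargs)

builtins.open = vfs_open
try:
    print(open('/virtual/hello.txt').read())
finally:
    builtins.open = _real_open  # always restore the real builtin
```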

On Dec 19, 1:43 pm, jeremy jeremy.a...@gmail.com wrote:
 yu: this is interesting, but will it provide seamless virtualization
 of a filesystem environment? or will existing libraries that make use
 of file and os module functions have to be rewritten?

 ross: i'm encouraged to hear that, but i still don't see how. there
 doesn't seem to be a way to mount a vfs on the appengine platform, and
 there doesn't seem to be a way to wrap existing python filesystem
 functionality (there was some discussion on python forums about
 providing a filesystem api for just such virtualization - the existing
 os filesystem functionality would be reimplemented through this api)



[google-appengine] Re: Concurrency Control in the datastore

2008-12-12 Thread yejun

get_or_insert is very annoying to use. First, it doesn't tell you
whether it did a get or an insert in the end. Second, it cannot be
encapsulated in another transaction.

On Dec 12, 4:21 pm, Sharp-Developer.Net
alexander.trakhime...@gmail.com wrote:
 And there is a get_or_insert() method.

 I guess it's waiting for active transactions to be completed and could
 be used for synchronization.
 --
 Alex
 http://sharp-developer.net/

 On Dec 12, 8:07 pm, ryan ryanb+appeng...@google.com wrote:

  if you create an entity with the same key as an existing entity, e.g.
  you do Foo(key_name='bar').put() in one request, then you do the same
  thing in another request, the second put() will overwrite the first.

  if this happens in two concurrent transactions and they collide, one
  of the transactions will retry and do its put() again, overwriting the
  first one.

  if the put()s are outside transactions, it's much less likely that
  they'll collide. it's still possible that the datastore could attempt
  to perform those writes at the same time, but it's much less likely.



[google-appengine] Re: Django 1.0 - any update on plans to replace 0.96 ?

2008-12-12 Thread yejun

It would be nice if GAE always kept one warmed-up spare process.

On Dec 12, 4:47 pm, Sharp-Developer.Net
alexander.trakhime...@gmail.com wrote:
 Thanks Marzia,

 Has the GAE team thought about providing some warm-up URL?

 So if GAE sees the load increasing (and could probably predict that a
 new request handler will be needed soon), the engine would call some
 URL specified in app.yaml (for example), and a request handler instance
 would be created and kept in cache for a while.

 That way we do not waste time when a user request comes.

 I am not talking about 100% accuracy, but some sort of proactive
 warming would be beneficial.

 Maybe you could even disable calls to DB/memcache/etc. so it is just
 module initialization.
 --
 Alex
 http://sharp-developer.net/

 On Dec 12, 6:49 pm, Marzia Niccolai ma...@google.com wrote:

  Hi Alex,

   We are definitely interested in offering Django 1.0 with App Engine in
   the future. However, it seems likely that including Django 1.0 as the
   default Django version with App Engine would need to be part of an API
   version change, since such a change would likely break existing apps.

   In terms of the high CPU warnings, we are generally working on a
   solution that will lessen the effect of such warnings on applications,
   so we hope we can address this soon, not just for this case but in
   general.

  As for the time concern, there isn't much right now that can be done.  But
  as your application increases in popularity, it's more likely people will
  see an already warm interpreter and thus not have to wait for a new
  initialization.

  -Marzia

  On Fri, Dec 12, 2008 at 6:14 AM, Sharp-Developer.Net 

  alexander.trakhime...@gmail.com wrote:

   Hi,

   Any plans to update Django 0.96 to Django 1.0 ?

   The 1.0 has been released on Sep 3 and still not there.

   I use local version of Django 1.0 and it's OK but my concern is
   performance and CPU usage.

   Whenever new instance of handler is created by GAE it takes some time
   and I guess CPU cycles.

   I've noticed Django loading and appengine_django helper installation
   takes solid amount of time (~0.8 sec) and I guess CPU cycles.

   See log extract from main.py :
   
   # 12-11 10:42AM 27.040 - Loading django helper...
   # 12-11 10:42AM 27.862 - Successfully loaded django helper
   

    It does not affect my CPU quota, but it harms user experience and
    makes me nervous when checking logs in the admin console :).

    I assume that if we load the Django provided by GAE it will be
    pre-loaded, and thus no CPU/time penalties for the app. Is my
    assumption correct?

   Would be really great if we could get the Django version 1.0 provided
   by default.

   The latest reply on this is
  http://groups.google.com/group/google-appengine/msg/c3bb71cd63d8d32f

   May be we should fill a feature request to speed it up?

    P.S. I wonder whether Django could be provided pre-patched as an
    option?

    P.P.S. I know this is too specific a request for a general platform
    like GAE, so I would be happy to get any advice on speeding up cold
    requests. For example, I already do not load Django if the page
    content is found in memcache. Still, I would prefer to have Django
    pre-loaded, maybe by specifying this in app.yaml?
   --
   Alex
    http://sharp-developer.net/



[google-appengine] Re: Concurrency Control in the datastore

2008-12-11 Thread yejun

Basically, I think that during concurrent transactions the first one to
commit will win and the rest will all fail. A non-transactional
operation will always succeed. A write is always transactional.
Am I right?

On Dec 11, 2:46 pm, DXD dungd...@gmail.com wrote:
 Wonderful! Great design!!! Thanks Ryan.

 David.



[google-appengine] Re: Concurrency Control in the datastore

2008-12-11 Thread yejun

Actually I still have a question.
What actually happens if a new entity is created in a transaction? And
what if two concurrent transactions create an entity with the same key
path?
On Dec 11, 5:08 pm, ryan ryanb+appeng...@google.com wrote:
 On Dec 11, 1:18 pm, yejun yej...@gmail.com wrote:

   Basically, I think that during concurrent transactions the first one
   to commit will win and the rest will all fail. A non-transactional
   operation will always succeed. A write is always transactional.

 correct. the python API retries transactions when they collide,
 though, and the datastore itself retries writes when they collide, so
 your app will generally only see collisions (in the form of
 TransactionFailedError exceptions) during periods of very high
 contention when all retries fail.



[google-appengine] Re: Superclass.get_by_key_name(subclass_instance.key_name) doesn't work

2008-12-10 Thread yejun

On Dec 10, 3:38 pm, theillustratedlife [EMAIL PROTECTED] wrote:
 Thanks Andy.  It's nice to know all the hidden features.  =)

 If the datastore doesn't know about subclasses, I'm not sure they're
 worth using.  Couldn't I give Question all the properties I might
 need, but only provide values to the ones each instance uses?  Is
 there any sort of penalty for declaring properties in a model, but not
 using them?

Any indexable property will cost a couple million CPU cycles when you
put a new entity. You should declare them as Text or Blob to avoid the
indexing penalty.



[google-appengine] Re: AE for a bussiness app

2008-12-09 Thread yejun

http://code.google.com/appengine/docs/roadmap.html

On Dec 9, 6:44 am, Selva [EMAIL PROTECTED] wrote:
 I have a roadmap for deploying a datastore-featured application on AE.
 The main design is to store a huge amount of static data and fetch
 parts of it as required. There will not be any major data added in
 production. The only issue is maintaining the huge amount of data
 (around 1000k entities) in the datastore, and before that, getting it
 into the datastore. I have completed the full application but I
 couldn't upload it because of the above limitations. Can anyone
 clarify whether the GAE team has a clear roadmap regarding the
 datastore, timeout, and pricing issues?



[google-appengine] Re: Is there hope that content-encoding will ever be allowed?

2008-12-09 Thread yejun

On Dec 9, 5:10 am, jago [EMAIL PROTECTED] wrote:
 Well partially. Obviously I would prefer compressing the file by a
 factor 10 and then send less data, without setting the right content-
 encoding this is impossible though.


The GAE front-end server always gzips files whenever possible.
In the meantime, you can always serve big static files from other
services such as Amazon S3 or SimpleCDN.





[google-appengine] Re: Creating unique key names

2008-12-09 Thread yejun

You don't have to use a global data entity. For example, use a
datastore-backed global count as a base number.
Your unique id can then be generated as that count multiplied by a big
number, plus a local count.
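A sketch of that scheme (STRIDE and the helper name are illustrative; base_count would come from a datastore-backed counter incremented once per process):

```python
# Each process reserves a block of ids: one datastore increment of the
# global base counter buys STRIDE locally-assignable ids, so most id
# generation needs no db.put at all.
STRIDE = 1 << 32  # ids per reserved block (an assumed, generous size)

def make_id(base_count, local_count):
    assert 0 <= local_count < STRIDE
    return base_count * STRIDE + local_count

print(make_id(1, 0))  # 4294967296
print(make_id(1, 1))  # 4294967297
print(make_id(2, 0))  # 8589934592
```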

On Dec 9, 11:57 am, Andy Freeman [EMAIL PROTECTED] wrote:
  The later
  solution only requires one put during any given processes lifetime so
  it shouldn't be a perf problem.

 It introduces a clean-up problem: I can't delete such an object until
 after I delete all entity groups named using said object's key.
 (GAE is free to reuse generated keys.)

 I shouldn't have mentioned the overhead of puts.  The real problem is
 cleanup and consistency, a problem that transactions are designed to
 solve.

 On Nov 24, 7:18 pm, Josh Heitzman [EMAIL PROTECTED] wrote:

  Like Jon McAlister said either use a random number or create a new
  entity when one of your modules is loaded and treat that entity's key
  as the globally unique process ID (i.e. MAC address + pid).  The later
  solution only requires one put during any given processes lifetime so
  it shouldn't be a perf problem.

  On Nov 22, 6:38 pm, Andy Freeman [EMAIL PROTECTED] wrote:

Yes, I understand transactions and entity groups. Why do you need to
create an entity group *atomically*?

   For the same reason that transactions are useful - incomplete groups
   are wrong (in my application) and I'd rather not deal with them.

If you create a new entity, it will automatically be assigned a unique
key at the datastore level. What's wrong with just using that?

   Each db.put has significant overhead.  If I can generate a unique name
   without a db.put, I can reduce the number of db.puts that my
   application does by a factor of 2.

   On Nov 22, 5:07 pm, David Symonds [EMAIL PROTECTED] wrote:

On Sun, Nov 23, 2008 at 8:50 AM, Andy Freeman [EMAIL PROTECTED] wrote:
  Suppose that I want to atomically create an entity group with two
  nodes, one the parent of the other.
 But *why* exactly do you want to do this?

 Because I want a set of one or more entities that can be manipulated
 in a single transaction. Entity group relationships tell App Engine to
 store several entities in the same part of the distributed network. A
 transaction sets up datastore operations for an entity group, and all
 of the operations are applied as a group, or not at all if the
 transaction fails.

Yes, I understand transactions and entity groups. Why do you need to
create an entity group *atomically*?

 The fact that GAE uses many machines concurrently is why the full
 hostname, IP, or MAC address or some other machine identifier is
 useful in creating a unique identifier on GAE.  (If my application
 always ran on the same machine, the process id and time would be
 sufficient.)

If you create a new entity, it will automatically be assigned a unique
key at the datastore level. What's wrong with just using that?

Dave.
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appengine@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en
-~--~~~~--~~--~--~---



[google-appengine] Re: Is there hope that content-encoding will ever be allowed?

2008-12-09 Thread yejun

Unsigned Java applets have supported crossdomain.xml since this May.

On Dec 9, 1:05 pm, jago [EMAIL PROTECTED] wrote:
  The GAE front-end server always gzips files whenever possible.

 You probably don't know how applets work. You can compress the applet
 jar file dramatically by using pack200+gzip. For that, the client (the
 Java plugin in your browser) needs to know what you did (send Content-
 Encoding: pack200-gzip).

 Obviously I have other webspace from where I could serve the jars with
 whatever content-encoding I want. The problem is sandboxed Applets
 only allow loading files from the 'path' (Domain) their jar-file was
 loaded from. So if I want to read data from the appengine I have to
 load it from the appengine :)

  In the meantime, you can always serve big static files from other
  services such as Amazon S3 or SimpleCDN.



[google-appengine] Re: Configuring Friend Connect

2008-12-08 Thread yejun

Is it possible to fetch user information after a user logs in with
Friend Connect?

On Dec 4, 7:57 am, Andi Albrecht [EMAIL PROTECTED]
wrote:
 On Thu, Dec 4, 2008 at 1:24 PM, kang [EMAIL PROTECTED] wrote:
  GAE itself has the google account model, and users can log in... but if they
  want to post through FriendConnect, they need to log in a second time;
  it's not a good user experience

 You're right, I think it's very confusing or maybe even annoying if a
 user has to login two times on two different Googlish-looking login
 pages...

 And from a developer's point of view it would be much easier to spice
 up an already existing appengine application with some social features
 if one could just use a Python API to Friend Connect with a user
 object already present within appengine. [just dreaming... ;-)]



  On Thu, Dec 4, 2008 at 8:17 PM, Andi Albrecht [EMAIL PROTECTED]
  wrote:

  On Thu, Dec 4, 2008 at 10:50 AM, kang [EMAIL PROTECTED] wrote:
   right
   i've configured for my app
   but what do you think of Friend Connect? I think an appengine
   website
   need not use it...

  Why not? Could be a nice extra for some applications... But it would
  be great to have an easy option to re-use a login on an appengine
  website for Friend Connect (any hints? maybe I've just missed
  something...)

   On Thu, Dec 4, 2008 at 6:28 AM, Andi Albrecht
   [EMAIL PROTECTED]
   wrote:

   Hi Rajiv,

   here's how I did it... I copied the two files in a directory called
   static (where all my CSS and images live). Then I'd added the
   following lines to app.yaml:

   - url: /rpc_relay.html
     static_files: static/rpc_relay.html
     upload: static/rpc_relay.html

   - url: /canvas.html
     static_files: static/canvas.html
     upload: static/canvas.html

   I'm not sure if you can omit the upload directive. But the relevant
   part is to point the urls to the files you've downloaded from the
   Friend Connect page.

   Hope that helps,

   Andi

   On Wed, Dec 3, 2008 at 8:37 PM, Rajiv R [EMAIL PROTECTED] wrote:

I am trying to use Friend Connect for my app engine application. I am
told to place 2 .html files in the app's home url i.e.
http://appname.appspot.com/filename.html.

I tried a few options in app.yaml but they all result in 404-Not
Found.

any suggestions?

Cheers!
Rajiv

   --
   Stay hungry,Stay foolish.

  --
  Stay hungry,Stay foolish.



[google-appengine] Re: IP to location in GAE

2008-12-06 Thread yejun

The Google AJAX API has city-level IP lookup.

On Dec 6, 5:13 am, Ubaldo Huerta [EMAIL PROTECTED] wrote:
 I wonder if anyone has tackled the problem of guessing country, city
 for a given IP. Of all the server side solutions that I'm aware of,
 maxmind, seems to have the best data. I know that there are REST, RPC,
 etc based solutions, but given GAE restrictions on duration of
 requests, I wouldn't go that route due to latency, time outs, etc

 So, the issue is how to move maxmind data to GAE. Note that geoLite
 city CSV is pretty large (~ 100 MB) (btw,  it will make a nice dent in
 the allowed quota )

 http://www.maxmind.com/app/geolitecity

  Anyhow, I wonder what would be the best strategy to upload the data:
 bulk uploader vs. AppRocket. For AppRocket, which I just saw in the
 blog post, I'll need to load the data to a SQL database

 Another solution is to do IP to location on the client side, but I
 don't know of any services that are reliable and fast.

 Ideas?



[google-appengine] Re: Is one big table quicker than 8 smaller tables?

2008-12-05 Thread yejun

Query speed doesn't depend on your data size, since the entire GAE
cluster is stored in one single table.

On Dec 5, 12:54 am, lock [EMAIL PROTECTED] wrote:
 I've been desperately trying to optimise my code to get rid of those
 'High CPU' requests.  Some changes worked others didn't, in the end
 I've really only gained a marginal improvement.  So I'm now
 considering some significant structural changes and are wondering if
 anyone has tried something similar and can share their experience.

 The app's pretty simple: it just geo-tags data points using the geoHash
 algorithm, so basically each entry in the table is the geoHash of the
 given lat/long with some associated meta data.  Queries are then done
 by a bounding box that is also geohashed and used as datastore query
 filters.  Due to some idiosyncrasies with using geoHash, any given
 query may be split into up to 8 queries (by lat 90,0,-90   by long 180
 90, 0, -90, 180), but generally the bounds fall into only one/two
 division(s) and therefore only result in one datastore query.

 All these queries are currently conducted on the one large datastore,
 I'm wondering if it would be more efficient to break down this one
 datastore into 8 separate tables (all containing the same type) and
 query the table relevant to the current bounding box.

 In summary I guess what I'm trying to ask is (sorry for the ramble),
 does the query performance degrade significantly as the size of the
 database increases?
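
For reference, each of those bounding-box lookups can be phrased as a single
inequality filter over the geohash string. A minimal sketch of the
prefix-to-range conversion (the `Point` model and `geohash` property names
are hypothetical, not from the post):

```python
def geohash_bounds(prefix):
    # Entities whose geohash starts with `prefix` sort lexicographically
    # between `prefix` and `prefix` followed by a character above the
    # geohash alphabet, so one range filter per box is enough.
    return prefix, prefix + '\xff'

# In a handler (needs the SDK, so shown as a comment):
#   lo, hi = geohash_bounds('9q8y')
#   points = Point.gql('WHERE geohash >= :1 AND geohash < :2', lo, hi)
```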



[google-appengine] Re: List of items in memcache?

2008-12-05 Thread yejun

It's impossible.

On Dec 5, 6:45 pm, Adam [EMAIL PROTECTED] wrote:
 I'm trying to figure out if there is a way to get a list of all keys
 stored in memcache.  I am storing values with somewhat unpredictable
 names -- posts_page16, posts_page2 -- and I need to have code that
 will not know the numerical values be able to delete those items from
 the cache.

 I've scoured the docs pretty closely, and nothing jumped out at me.
 Any ideas?
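
One workaround, since memcache can't enumerate keys, is to fold a generation
number into every key name; bumping the generation orphans the whole
`posts_page*` family at once and memcache evicts the stale entries on its
own. A sketch (a plain dict stands in for the memcache API; the names are
made up):

```python
cache = {}  # stand-in for google.appengine.api.memcache

def versioned_key(base):
    # Every cached item embeds the current generation in its key name.
    gen = cache.get('posts_gen', 1)
    return '%s_gen%d' % (base, gen)

def invalidate_posts():
    # Bumping the generation makes every old posts_page* key unreachable
    # in one write; no listing or per-key delete is needed.
    cache['posts_gen'] = cache.get('posts_gen', 1) + 1
```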



[google-appengine] Re: Is one big table quicker than 8 smaller tables?

2008-12-05 Thread yejun

I think the max number of queries for one location should be 4 search
boxes. I could be wrong.

On Dec 5, 8:47 pm, lock [EMAIL PROTECTED] wrote:
 Thanks guys, that's what I was hoping to hear, you saved me a couple
 hours trying to prove it for myself (not to mention the frustration).
 After I went away and thought about it some more I figured there must
 be some 'smarts' in the database to prevent the query time from
 increasing.  Otherwise how could any database scale well...

 No merge joins or IN operators in my code, so nothing to worry about
 there.

 After a _lot_ more testing I'm finding that query time does scale with
 the number of fetched _results_, not the DB size.  During early
 testing I convinced myself that increasing the DB size was slowing my
 query down, when really the number of results were increasing as I
 added more data, doh (it was getting late ;-)  ).

 The overall solution that seems to be working well for me at the
 moment is to have different tables for different resolutions.  As the
 size of the geometric bounds increases I switch between a few tables,
 each one with a lower fidelity therefore reducing the number of
 results that can be returned.  Visually it works similar to Level Of
 Detail techniques you see in some 3D modeling packages.



[google-appengine] Re: GAE will hurt Linux unless...

2008-12-03 Thread yejun

There's no such thing as directly using linux.

On Dec 3, 5:09 pm, Amir Michail [EMAIL PROTECTED] wrote:
 On Wed, Dec 3, 2008 at 5:06 PM, yejun [EMAIL PROTECTED] wrote:

  GAE runs on Linux, so I can't see any reason Linux would be hurt.

 GAE will cut down the number of people familiar with and directly
 using Linux even if it ends up increasing the number of Linux web
 servers.

 Amir





  On Dec 3, 3:23 pm, Amir  Michail [EMAIL PROTECTED] wrote:
  Hi,

  I suspect that the Google App Engine (and cloud computing more
  generally) will have the unintended effect of significantly reducing
  usage of Linux among web developers.

  The solution of course is for Google to release a user-friendly and
  slick Google OS built on top of Linux that makes the web the main
  source of apps.

  But even that's not enough since many people (even web devs) play
  games on Windows... and linux cannot compete at all in that regard.
  One could argue that the shift towards game consoles is making Windows
  less important for games, but that will take time...

  Amir

 -- http://b4utweet.com http://chatbotgame.com http://numbrosia.com http://twitter.com/amichail



[google-appengine] Re: What is a proper way to cache values on per request basis?

2008-12-02 Thread yejun

GAE is a single-threaded server.

On Dec 2, 2:50 am, Sharp-Developer.Net
[EMAIL PROTECTED] wrote:
 Wouldn't that be thread-unsafe?

 As I understand modules are loaded once per machine instance and then
 cached/reused by different requests.

 I'm not sure what is the __main__ module. I don't have such.

 So I'm afraid that if I cache user-specific information in a module it
 will become visible to another thread of my app running on the same box.
 --
 Alexander Trakhimenok, http://sharp-developer.net/

 On Dec 2, 2:04 am, yejun [EMAIL PROTECTED] wrote:

  For example,

  import __main__

  class yourhandler(webapp.RequestHandler):
      def __init__(self):
          __main__.cache = {}

  On Dec 1, 9:00 pm, yejun [EMAIL PROTECTED] wrote:

   You can save it to a global variable as cache.
   You can use a module level variable as cache and clear it in the
   handler's __init__.

   On Dec 1, 2:28 pm, Sharp-Developer.Net

   [EMAIL PROTECTED] wrote:
Hi,

I have to retrieve some entities by key multiple times during single
request.

 I do use memcache but am getting quite high CPU usage and lots of
warnings.

 As I retrieve the same entity by its key multiple (many) times during a
 request, I wonder whether I could improve my code by caching results on a
 per-request handler instance basis? I'm sure I could, but as a newbie in
 Python I'm not sure what is the best place and way to do that.

I could add variable to a request object (I use Django) but that will
require to pass it to every place where I need to use it. It's too
complicated.

I wonder is there such a thing like a HttpContext.Current in C#? In
ASP.NET if I want to store/retrieve an object on per request basis
I'll simply do next:

   HttpContext.Current.Items[key] = value;
   var value = HttpContext.Current.Items[key];

 Is there anything similar in AppEngine/Python?

 Again, as a Python newbie I will appreciate a working code sample.

I think this question could be interesting to many people.



[google-appengine] Re: using custom domain

2008-12-02 Thread yejun

Obviously wrong CNAME

dig www.maithilparinay.com

www.maithilparinay.com.  1120  IN  CNAME  p2p.geo.vip.sp1.yahoo.com.

On Dec 2, 5:41 pm, Yogi [EMAIL PROTECTED] wrote:
 Hi All,
          I have deployed a google appengine application
 at http://maithilparinay.appspot.com, over which I have used custom domain
 forwarding.
 I have successfully created the CNAME record as directed but I am not
 able to access the application over my custom domain
 address http://www.maithilparinay.com

 The application keeps on loading without giving any error message.
 There is no message shown and the browser keeps on trying loading the
 application (even a time out is not happening)

 Does any one else over here has faced similar kind of issue. An input
 would be greatly appreciated.

 Thanks in Advance
 -Yogendra Jha



[google-appengine] Re: Crafting a raw HTTP response header - Server injecting its own status code

2008-12-01 Thread yejun

GAE programs run on a CGI server if you decide to bypass the WSGI gateway,
so you should use CGI headers (a Status: line) instead of raw HTTP headers.
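
Concretely, in the CGI model the status is communicated through a `Status:`
header before the blank line, and the server turns it into the real HTTP
status line. A sketch (plain Python, no SDK required; the function name is
made up):

```python
import sys

def respond_not_found(body):
    # CGI convention: a Status header, not an "HTTP/1.1 404" status line.
    # The front-end server translates this into the actual status line.
    sys.stdout.write('Status: 404 Not Found\r\n')
    sys.stdout.write('Content-Type: text/plain\r\n')
    sys.stdout.write('\r\n')  # blank line ends the headers
    sys.stdout.write(body)
```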

On Dec 1, 3:48 pm, Brian [EMAIL PROTECTED] wrote:
 I've decided not to use WSGI for my application, and instead I'm
 crafting my HTTP responses by hand, much like the Hello World
 example in the GAE documentation:

 print 'Content-Type: text/plain'
 print ''
 print 'Hello, world!'

 It seems that when this method is used, the output is buffered and
 then altered by the server (I assume both development and production),
 injecting additional headers -- server, content-length, cache-control,
 etc.  This is all well and good, for the most part, as it doesn't seem
 to overwrite any headers I've crafted.

 Now on to my problem:
 The server also injects a status line into my crafted response.  Any
 requested URL that matches a URL pattern in app.yaml will result in a
 200 OK status.  So, taking the Hello World example above, I would like
 to do the following:

 print 'HTTP/1.1 404 Not Found'
 print 'Content-Type: text/plain'
 print ''
 print 'Hello world wasn\'t found!'

 When I do this, the server doesn't recognize the HTTP status code,
 injects its own status code and headers, then sends by my original
 status code and headers as response content.

 I assume this would be considered a bug, unless I'm doing something
 wrong.  Any help would be appreciated, otherwise I will file a bug
 report.

 Thanks

 Brian



[google-appengine] Re: What is a proper way to cache values on per request basis?

2008-12-01 Thread yejun

You can save it to a global variable as cache.
You can use a module level variable as cache and clear it in the
handler's __init__.

On Dec 1, 2:28 pm, Sharp-Developer.Net
[EMAIL PROTECTED] wrote:
 Hi,

 I have to retrieve some entities by key multiple times during single
 request.

 I do use memcache but am getting quite high CPU usage and lots of
 warnings.

 As I retrieve the same entity by its key multiple (many) times during a
 request, I wonder whether I could improve my code by caching results on a
 per-request handler instance basis? I'm sure I could, but as a newbie in
 Python I'm not sure what is the best place and way to do that.

 I could add variable to a request object (I use Django) but that will
 require to pass it to every place where I need to use it. It's too
 complicated.

 I wonder is there such a thing like a HttpContext.Current in C#? In
 ASP.NET if I want to store/retrieve an object on per request basis
 I'll simply do next:

    HttpContext.Current.Items[key] = value;
    var value = HttpContext.Current.Items[key];

 Is there anything similar in AppEngine/Python?

 Again, as a Python newbie I will appreciate a working code sample.

 I think this question could be interesting to many people.



[google-appengine] Re: What is a proper way to cache values on per request basis?

2008-12-01 Thread yejun

For example,

import __main__

class yourhandler(webapp.RequestHandler):
    def __init__(self):
        __main__.cache = {}

On Dec 1, 9:00 pm, yejun [EMAIL PROTECTED] wrote:
 You can save it to a global variable as cache.
 You can use a module level variable as cache and clear it in the
 handler's __init__.

 On Dec 1, 2:28 pm, Sharp-Developer.Net

 [EMAIL PROTECTED] wrote:
  Hi,

  I have to retrieve some entities by key multiple times during single
  request.

  I do use memcache but am getting quite high CPU usage and lots of
  warnings.

  As I retrieve the same entity by its key multiple (many) times during a
  request, I wonder whether I could improve my code by caching results on a
  per-request handler instance basis? I'm sure I could, but as a newbie in
  Python I'm not sure what is the best place and way to do that.

  I could add variable to a request object (I use Django) but that will
  require to pass it to every place where I need to use it. It's too
  complicated.

  I wonder is there such a thing like a HttpContext.Current in C#? In
  ASP.NET if I want to store/retrieve an object on per request basis
  I'll simply do next:

     HttpContext.Current.Items[key] = value;
     var value = HttpContext.Current.Items[key];

  Is there anything similar in AppEngine/Python?

  Again, as a Python newbie I will appreciate a working code sample.

  I think this question could be interesting to many people.
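
Fleshing out the module-level cache idea above into a per-request memoizer
(a sketch with hypothetical names; `fetch` stands in for whatever
`db.get`/memcache lookup you already do):

```python
# Module-level dict reused across requests within one process; safe on
# GAE because each process handles one request at a time.
_request_cache = {}

def reset_request_cache():
    # Call this from the handler's __init__ so each request starts clean.
    _request_cache.clear()

def get_entity(key, fetch):
    # Memoize expensive lookups for the duration of the request.
    if key not in _request_cache:
        _request_cache[key] = fetch(key)
    return _request_cache[key]
```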



[google-appengine] Re: conversion of pricing into page impressions

2008-11-25 Thread yejun

I think 1 million views will cost roughly $10 if each view uses 0.1 s of
cpu and 100 KB of bandwidth.
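
As a back-of-envelope check (the per-unit rates below are illustrative
assumptions in the ballpark of the prices announced in 2008, not official
figures):

```python
CPU_DOLLARS_PER_HOUR = 0.10      # assumed CPU rate
EGRESS_DOLLARS_PER_GB = 0.12     # assumed outgoing-bandwidth rate

views = 10 ** 6
cpu_hours = views * 0.1 / 3600.0           # 0.1 s of CPU per view
gb_out = views * 100.0 / (1024.0 ** 2)     # 100 KB per view

cost = cpu_hours * CPU_DOLLARS_PER_HOUR + gb_out * EGRESS_DOLLARS_PER_GB
# With these assumed rates the total lands in the $10-$15 range,
# dominated by bandwidth rather than CPU.
```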

On Nov 25, 5:55 am, conman [EMAIL PROTECTED]
wrote:
 Ok, I guess I need to clarify my question :)

 Who has a running App Engine application with some amount of daily
 traffic and can make an assumption about how much he will have to pay
 if the pricing model kicks in?

 I need a rough conversion between the page views an application can
 deliver and the burned resources, which have to be paid for.

 Thanks!
 Constantin

 On 22 Nov., 16:40, conman [EMAIL PROTECTED]
 wrote:

  Hello,

  I know about the official pricing model for app engine[1], but where I
  need help is a rough conversion of this values (for example cpu
  cycles) into served page impressions. I know that this value depends
  heavily on the application, but what ranges are to be expected here?

  And also I'd like to know which of the pricing variables (cpu, http
  requests, db storage) is likely to be the most expensive one?

  Regards,Constantin

  [1]http://www.google.com/intl/en/press/annc/20080527_google_io.html



[google-appengine] Re: best practice for using JS and AJAX with GAE

2008-11-25 Thread yejun

I'll just add one more con: Ajax pages are usually not SEO-friendly.

On Nov 25, 5:05 pm, Jeff S [EMAIL PROTECTED] wrote:
 Hi Scott,

 This is by no means an exhaustive list of pros and cons for using
 Ajax, but I can think of a few off the top of my head:

 Pros:
 - avoid reloading an entire page, use less bandwidth, more responsive
 - dynamically update page content (usually by polling the server)
 - offload expensive operations to the user's computer
 - using JavaScript/Ajax requests can possibly increase resistance to
 Cross-site Request Forgery

 Cons:
 - users must have JavaScript enabled. Some browsers (text based) lack
 JS support, reduces accessibility
 - can be difficult to navigate to content: loss of linkability, back
 and forward browser buttons can be less useful without additional
 plumbing
 - depending on your design, more HTTP requests to the server

 Overall, I find that Ajax enables more usable web apps. Most of the
 above limitations can be overcome with, well, you guessed it,
 JavaScript (except the first). I'm sure others have opinions on this
 topic as well. What do you think?

 Thank you,

 Jeff

 On Nov 23, 8:34 pm, Scott Mahr [EMAIL PROTECTED] wrote:

  I am pretty new to web development in general, in fact to any kind of
  CS development.  Through my process of learning what I need to know I
  think sometimes I miss the big picture, and that hurts me later.  I
  started using GAE with basic html, and then moved to using Jquery.
  Now it seems to me that have few pages for an app, and using ajax to
  update and change content would be a good move.  My question is this,
  what are the pros and cons of using a lot of javascript and/or ajax.
  One example recently, I needed to parse some text from a quiz that my
  users submits, I didn't really know if it was better to parse in JS,
  then use ajax to send it to my python script and store it in the
  datastore, or send the raw data and parse it in Python before sending
  it to the datastore.

  I have read many of the posts on optimal data structure and the like
  and have learned a lot from them, so this is meant as a pretty open
  ended question for best practices.

  Thanks,

  Scott



[google-appengine] Re: Creating unique key names

2008-11-23 Thread yejun

There's currently no available method to identify the process and machine.
Your best bet is a random ID generated during module initialization.
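
For instance, a random instance ID drawn once at import time plus a counter
gives key names that are unique across processes with overwhelming
probability (a sketch; `make_key_name` is a made-up helper):

```python
import uuid

# Drawn once when the module is first imported; identifies this
# runtime instance much like a MAC address + pid would.
INSTANCE_ID = uuid.uuid4().hex
_counter = 0

def make_key_name(prefix):
    # Counter makes successive names unique within this process;
    # INSTANCE_ID makes them unique across processes and machines.
    global _counter
    _counter += 1
    return '%s-%s-%d' % (prefix, INSTANCE_ID, _counter)
```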

On Nov 22, 9:46 pm, Andy Freeman [EMAIL PROTECTED] wrote:
  os.getpid isn't available

 Thanks.

  nor unique across processes.

 Huh?  During the life of a process A on a machine B, no other process
 on B will have the same process id as A.

 Two different processes on a given machine may have the same id if
 their lifetimes are disjoint and two processes on different machines
 may have the same process id at the same time, but the latter is just
 why some sort of machine identifier is important.

 On Nov 22, 5:43 pm, yejun [EMAIL PROTECTED] wrote:

   UUID should be ok, which uses the system urandom as its seed by default.
   os.getpid isn't available, nor unique across processes.

  On Nov 22, 8:07 pm, David Symonds [EMAIL PROTECTED] wrote:

   On Sun, Nov 23, 2008 at 8:50 AM, Andy Freeman [EMAIL PROTECTED] wrote:
 Suppose that I want to atomically create an entity group with two
 nodes, one the parent of the other.
But *why* exactly do you want to do this?

Because I want a set of one or more entities that can be manipulated
in a single transaction. Entity group relationships tell App Engine to
store several entities in the same part of the distributed network. A
transaction sets up datastore operations for an entity group, and all
of the operations are applied as a group, or not at all if the
transaction fails.

   Yes, I understand transactions and entity groups. Why do you need to
   create an entity group *atomically*?

The fact that GAE uses many machines concurrently is why the full
hostname, IP, or MAC address or some other machine identifier is
useful in creating a unique identifier on GAE.  (If my application
always ran on the same machine, the process id and time would be
sufficient.)

   If you create a new entity, it will automatically be assigned a unique
   key at the datastore level. What's wrong with just using that?

   Dave.



[google-appengine] Re: Setting the HTTP status code without webapp

2008-11-22 Thread yejun

Just printing a 'Status: 200' or 'Status: 500' header line before the blank
line will work.

On Nov 22, 7:06 pm, Rodrigo Moraes [EMAIL PROTECTED] wrote:
 duh, how i missed the without webapp part of the subject?

 reformulating, without webapp you'd need to pass the status code and
 message in the response header, together with other headers, and not
 in the body as you did.

 a response object is advised, and you can use the one from WebOb which
 is simple enough.

 take a look at google.appengine.ext.webapp.__init__ (in
 Response.wsgi_write()) to see how it does it.

 hope this helps.

 -- rodrigo



[google-appengine] Re: Creating unique key names

2008-11-22 Thread yejun

A UUID should be ok; it uses the system urandom as its seed by default.
os.getpid isn't available, nor unique across processes.

On Nov 22, 8:07 pm, David Symonds [EMAIL PROTECTED] wrote:
 On Sun, Nov 23, 2008 at 8:50 AM, Andy Freeman [EMAIL PROTECTED] wrote:
   Suppose that I want to atomically create an entity group with two
   nodes, one the parent of the other.
  But *why* exactly do you want to do this?

  Because I want a set of one or more entities that can be manipulated
  in a single transaction. Entity group relationships tell App Engine to
  store several entities in the same part of the distributed network. A
  transaction sets up datastore operations for an entity group, and all
  of the operations are applied as a group, or not at all if the
  transaction fails.

 Yes, I understand transactions and entity groups. Why do you need to
 create an entity group *atomically*?

  The fact that GAE uses many machines concurrently is why the full
  hostname, IP, or MAC address or some other machine identifier is
  useful in creating a unique identifier on GAE.  (If my application
  always ran on the same machine, the process id and time would be
  sufficient.)

 If you create a new entity, it will automatically be assigned a unique
 key at the datastore level. What's wrong with just using that?

 Dave.



[google-appengine] Re: Creating app.yaml

2008-11-22 Thread yejun

Copy it from google_appengine\new_project_template and change the app
name.

On Nov 22, 12:31 pm, Grady [EMAIL PROTECTED] wrote:
 I am trying to create my first GAE app by following a screencast.
 In the screencast the author simply creates a new .txt file and names
 it app.yaml.  XP recognizes this as a .yaml file and changes the
 file type under description and the icon.  I feel like a loser, but
 when I do these same steps Vista does not change its type from a txt
 to yaml.

 Because of this when I run appcfg.py i get the following appcfg.py:
 error: Directory does not contain an app.yaml configuration file.

 What do I have to do to create a .yaml file?

 Thanks!
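For reference, a minimal app.yaml along the lines of the new_project_template copy suggested above (the application name and script are placeholders; use your own registered application ID):

```yaml
# Minimal app.yaml for a Python runtime app.
# 'myappname' and 'main.py' are placeholders.
application: myappname
version: 1
runtime: python
api_version: 1

handlers:
- url: /.*
  script: main.py
```

Saving this as plain text named exactly app.yaml is enough; the file-type icon Windows shows is irrelevant to appcfg.py.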



[google-appengine] Is AUTH_DOMAIN supposed to be different from app?

2008-11-18 Thread yejun

It seems I always get gmail.com.





[google-appengine] Re: using GAE to serve largely static files for an AJAX based application.

2008-11-12 Thread yejun

XHR requires the host name to be exactly the same, so you have to serve
your page and your XML off the same server.

On Nov 12, 4:49 am, bryan rasmussen [EMAIL PROTECTED]
wrote:
 As per the subject:

 I have an ajax based application, I figured I could offload some of
 the static XML files I need to serve via XHR off to GAE, then when I
 am doing a new XHR request check for a file, if that file does not
 return 200 I am currently not accessing GAE so switch to some other
 service. Does this seem reasonable. Another problem is obviously that
 I will need to do some DNS settings to make sure that I can serve my
 initial request from my domain and then get further requests via  XHR
 from app engine. Has anyone tried a similar setup or have suggestions,
 note problems with the approach?

 Thanks,
 Bryan Rasmussen



[google-appengine] Re: Any way to create dynamic file

2008-11-12 Thread yejun

If pyExcelerator can save data to a file, I guess it can probably save
to a StringIO as well.

On Nov 12, 12:51 pm, Calvin Spealman [EMAIL PROTECTED] wrote:
 Consider looking at anything the docs api provides to generate spreadsheets,
 from which your users could export a number of formats (or possibly you
 could) or just give them csv

 On Nov 12, 2008 12:34 PM, Masa [EMAIL PROTECTED] wrote:

 I'd like to experiment with a site creating some statistical figures
 on the fly. People could see them on the website, but for offline
 analysis it'd be extremely important also to offer the statistics as
 downloadable, customized files.

 I've already solved how the create the information content. The
 problem I have at the moment is how to provide the users a way to
 download the statistics as an Excel file. I've experimented with
 pyExcelrator and can get a desired outcome with it. The problem with
 GAE is that I can't save the outcome Excel to be downloaded as file.

 If I could someway, get the result of pyExcelrator to Blob instead of
 file, I could easily offer it as a downloadable file. Do you think
 that is possible to do somehow?
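The in-memory-buffer idea above can be sketched with the standard library alone; here csv stands in for pyExcelerator, since any writer that accepts a file-like object can target an in-memory buffer instead of a file on disk (function and data names are illustrative):

```python
import csv
import io

def build_csv_in_memory(rows):
    """Write rows to an in-memory buffer instead of a file on disk.

    On App Engine (no writable filesystem) the resulting string can be
    served directly as the body of a download response.
    """
    buf = io.StringIO()          # file-like object held in memory
    writer = csv.writer(buf)
    for row in rows:
        writer.writerow(row)
    return buf.getvalue()

data = build_csv_in_memory([("name", "score"), ("alice", 10)])
```

The same pattern applies to any library whose save routine accepts a file-like object rather than only a filename.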



[google-appengine] Re: Exploding Indexes - trying to understand

2008-11-12 Thread yejun

It only explodes if you query them together, but I guess you don't
need to query credit_cards and country_clubs at the same time, since
credit_cards will be unique.

On Nov 12, 12:49 pm, johnP [EMAIL PROTECTED] wrote:
 Let's say I have the following model:

 class Person(db.Model):
     name = db.StringProperty()
     country_clubs = db.ListProperty(db.Key)
     credit_cards = db.ListProperty(db.Key)

 Where the person can belong to 10 different country clubs, and can
 have 15 different credit cards.  Does this lead to exploding indexes?
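A back-of-envelope check of the exploding-index concern: a composite index over two list properties stores one row per combination of list values, so for the model above the per-entity cost multiplies rather than adds (the counts are the ones from the question):

```python
# Index rows per Person entity for the model quoted above.
country_clubs = 10   # list values in country_clubs
credit_cards = 15    # list values in credit_cards

# A composite index covering both list properties stores one row
# per (club, card) combination:
rows_combined = country_clubs * credit_cards      # 150 rows per entity

# Indexing each list property on its own stays linear:
rows_separate = country_clubs + credit_cards      # 25 rows per entity
```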



[google-appengine] Re: How to: Variable assignment in django template.

2008-11-11 Thread yejun



 Cannot be done? Then django is lousy. If so, could someone point me to
 a better python template technology?

Mako template can include any arbitrary python code.





[google-appengine] Re: counter use pattern

2008-11-11 Thread yejun

Every operation that transfers a value between memcache and the
datastore is non-atomic. The original design caches a value from the
datastore in memcache; an equivalent implementation would be a
transaction that randomly throws exceptions.

I feel a non-sharded counter plus memcache would be the more robust
solution.

A reasonably low update frequency from memcache to the datastore
completely eliminates the need for a sharded implementation, while a
high-frequency memcache-to-datastore implementation will be very
unreliable because of the many non-atomic operations.

On Nov 11, 4:12 pm, Bill [EMAIL PROTECTED] wrote:
 I probably should've piped up on the thread earlier.  I'm currently
 looking at yejun's fork and will merge pending some questions I have
 on his optimizations.

 Here's one:
 My old code stored the counter name in each shard so I could get all
 shards with a single fetch.  If you have 20 shards, you could have any
 number of actually created shards.  In a very high transaction system,
 probably all 20 shards exist.
 In yejun's optimization, he's iterating through each shard using
 get_by_key_name and checking if it exists.  Which is likely to be
 faster?

 A nice optimization by yejun is making count a TextProperty.  This
 will prevent indexing and would probably save some cycles.

 Josh, you said lines 144-145 should be under the try: on line 137.
 That way, the delayed counter count won't get reset to zero even in
 the case of a failed transaction.

 I thought any datastore errors are handled via the db.Error exception
 which forces a return before the delayed counter count is reset.
 (db.Error is defined in appengine/api/datastore_errors.py)

 On Josh's memcached buffering scheme, I can definitely see the utility
 if you're willing to sacrifice some reliability (we're down to one
 point of failure -- memcache) for possibly a lot of speed.  Using
 memcache buffers for counters makes sense because it's easy to
 accumulate requests while for other model puts, like comments or other
 text input, the amount of memcache buffer could grow pretty large
 quickly.

 Does it make sense to use a sharded backend to a memcache buffer?
 Depends on the frequency of the final datastore writes, as mentioned
 above.  (I'm not as concerned with complexity as yejun because I think
 this buffering is reasonably simple for counters.)   So I think this
 would be a good thing to add onto the sharded counter code through a
 fork.  Then people who want that kind of speed can opt for it.

 -Bill
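A toy model of the memcache-buffered counter being debated here, with plain dicts standing in for memcache and the datastore (the real code would use memcache.incr() and a transactional datastore put; the flush threshold is illustrative). The flush step is exactly the non-atomic window discussed above:

```python
FLUSH_EVERY = 100    # buffered increments before a datastore write

memcache_stub = {}   # stands in for App Engine memcache
datastore_stub = {}  # stands in for the persistent counter entity

def increment(name):
    """Buffer increments in 'memcache'; flush to the 'datastore' once
    enough accumulate. If the process died between the pop and the
    datastore write, those buffered counts would be lost -- that is the
    reliability trade-off being discussed."""
    memcache_stub[name] = memcache_stub.get(name, 0) + 1
    if memcache_stub[name] >= FLUSH_EVERY:
        pending = memcache_stub.pop(name)
        datastore_stub[name] = datastore_stub.get(name, 0) + pending

def total(name):
    # Current value = persisted count + not-yet-flushed buffer.
    return datastore_stub.get(name, 0) + memcache_stub.get(name, 0)

for _ in range(250):
    increment("hits")
```

With 250 increments and a threshold of 100, two flushes persist 200 and the remaining 50 sit in the buffer, so a single counter entity absorbs the write rate of 250 requests with only 2 datastore writes.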



[google-appengine] Re: Getting host of application..?

2008-11-10 Thread yejun

self.request.host_url should return the URL without the path.

On Nov 10, 4:13 pm, jago [EMAIL PROTECTED] wrote:
 I wonder how to get the 'host' of an application.

 As 'host' I mean http://localhost:8080 or http://www.example.com/?

 With self.request.url I have the current host + path. Is there a clean
 way to only get the host without splitting strings?
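If host_url were not available, the clean standard-library way to get the host without manual string splitting is urlsplit (a sketch; the sample URLs are illustrative):

```python
try:
    from urllib.parse import urlsplit   # Python 3
except ImportError:
    from urlparse import urlsplit       # Python 2, as on App Engine at the time

def host_url(url):
    """Return scheme://netloc for a full URL -- the 'host' part asked
    about above, without the path or query string."""
    parts = urlsplit(url)
    return "%s://%s" % (parts.scheme, parts.netloc)

print(host_url("http://www.example.com/a/b?q=1"))  # http://www.example.com
```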



[google-appengine] Re: Getting host of application..?

2008-11-10 Thread yejun

Webapp's request is a thin wrapper around the WebOb request object.

On Nov 10, 9:49 pm, jago [EMAIL PROTECTED] wrote:
 Are there more Request attributes I don't know about? Google App
 Engine's docs don't list self.request.host_url. Where can I find
 the complete docs?

 On Nov 11, 2:17 am, yejun [EMAIL PROTECTED] wrote:

  self.request.host_url should return url without path.

  On Nov 10, 4:13 pm, jago [EMAIL PROTECTED] wrote:

   I wonder how to get the 'host' of an application.

    As 'host' I mean http://localhost:8080 or http://www.example.com/?

   With self.request.url I have the current host + path. Is there a clean
   way to only get the host without splitting strings?



[google-appengine] Re: Permanent Unique User Identifier

2008-11-07 Thread yejun

Google OpenID already supports a persistent ID independent of user name
or email address.

http://code.google.com/apis/accounts/docs/OpenID.html

On Nov 7, 5:56 pm, Alexander Kojevnikov [EMAIL PROTECTED]
wrote:
 Ryan,

 I see now. It looks like Google is going to fix this in the future,
 see the last paragraph on this
 page: http://code.google.com/appengine/docs/users/userobjects.html

 I couldn't find an issue for this, adding one would speed this up...

 Alex

 On Nov 8, 6:09 am, Ryan Lamansky [EMAIL PROTECTED] wrote:

  Alexander Kojevnikov: Although your plan would function, it offers no
  protection from email address changes.  If the user changes their
  email address, they would no longer match their original UserProfile
  (because UserProperty is just a fancy name for string, as it stores
  only the email address), thus cutting them off from all referenced
  data.

  -Ryan



[google-appengine] Re: Purchased domain via Google Apps. How to host my appengine app on this domain?

2008-11-07 Thread yejun

I guess part of the reason is that DNS does not support a CNAME on a naked
domain name. Even google.com itself is just a redirect to www.google.com.


On Nov 7, 8:33 pm, jago [EMAIL PROTECTED] wrote:
 Why did I register this domain if I then can't host my app there? Has
 Google gone insane?

 I was possible for a long time! I cannot believe they did this :(((

 Any idea if it will ever be possible to host the app directly on a
 naked domain?

 On Nov 8, 2:02 am, Marzia Niccolai [EMAIL PROTECTED] wrote:

  Hi,

   This is currently not
   possible: http://code.google.com/appengine/kb/commontasks.html#naked_domain

  -Marzia

  On Fri, Nov 7, 2008 at 4:59 PM, jago [EMAIL PROTECTED] wrote:

    I'd like to map my app to http://myurl.com (also known as a naked
   domain).

   I purchased a domain via Google Apps. As far as I can tell I only
   found the option to host my AppEngine app on a subdomain of the
   purchased domain!

   Please please please tell me I can also host it directly on the naked
    domain, i.e. something like http://myurl.com

   thx...jago



[google-appengine] Re: Purchased domain via Google Apps. How to host my appengine app on this domain?

2008-11-07 Thread yejun

Just set the URL to www in your app settings.

On Nov 7, 9:20 pm, jago [EMAIL PROTECTED] wrote:
 okok...I know. Google calls them 'access URLs' but a common although
 wrong term is subdomain.

 let me rephrase my question: Is it possible, if I purchased www.example.com
 from Google, to directly make my AppEngine app accessible when people
 type www.example.com in their URL field? They should not be
 re-directed to www.myapp.example.com but stay directly at www.example.com,
 where the AppEngine app is.

 Is this possible? How?

 So far Google Apps only allows me to make my AppEngine app accessible
 at: www.myapp.example.com

 Thanks...jago

 On Nov 8, 3:06 am, Roberto Saccon [EMAIL PROTECTED] wrote:

  I think you don't know what a subdomain is:

 www.example.com is a subdomain



[google-appengine] Re: indexes for and queries

2008-11-07 Thread yejun

In ext/search, i.e. google.appengine.ext.search in the SDK.

On Nov 8, 1:24 am, Andy Freeman [EMAIL PROTECTED] wrote:
 Where is Searchable defined?  (Windows explorer search won't look at
 contents of .py files)

 On Nov 6, 2:30 am, dobee [EMAIL PROTECTED] wrote:

  if i do this to find any entities that match house and dog i do

  Searchable.all().filter('content_type =', 'something').filter('words
  =', 'house').filter('words =', 'dog'). order('c_time')

  is it right that i need an index for every number of words? so if i
  want to support searches for cats dogs pets i need an additional
  index?

  get_data failed no matching index found.
  This query needs this index:
  - kind: Searchable
    properties:
    - name: content_type
    - name: words
    - name: words
    - name: c_time
      direction: desc

  thx, in advance, bernd



[google-appengine] Re: Entites which must not exist

2008-11-06 Thread yejun

In practice, you only need to search the few clubs geographically close
to each other, so a brute-force scan is the simplest way. A query
across the globe seems pretty meaningless except for statistical
purposes.

On Nov 6, 8:14 am, Ian Bambury [EMAIL PROTECTED] wrote:
 2008/11/6 powera [EMAIL PROTECTED]



  Can't you create an index on a Bookings entity based on time, so you
  can do a query SELECT * FROM Bookings WHERE time = 1PM ?  If you can
  keep the list of courts in memcache, it should be easy to compare this
  list to the list of courts to find open listings at any given time.
  With an index, you shouldn't have to worry about the 1000-item limit
  unless you envision having more than 1000 reservations for a given
  time.

 If someone searches for a free bowling lane for tomorrow evening between 5
 and 10pm and say the average is 20 lanes per business, then I only need 10
 bowling alleys and I'm at the limit

 If someone searches for a free lane any evening next week, then 3 bowling
 alleys will break the limit.



[google-appengine] Re: Entites which must not exist

2008-11-06 Thread yejun

Get all the relevant bookings from those clubs. The 1000-record limit only
applies to a single query result. You can always add more query conditions
to reduce the result set below 1000, such as subdividing courts by type,
bookings into morning/evening, or clubs into women's/men's, or even
inserting a random number just for the purpose of subdividing the data.

On Nov 6, 10:57 am, Ian Bambury [EMAIL PROTECTED] wrote:
 How does that help the 1000-record limit? Do you mean get the nearby clubs
 and just do all the rest manually?

 Ian

 http://examples.roughian.com

 2008/11/6 yejun [EMAIL PROTECTED]



   In practice, you only need to search the few clubs geographically close
   to each other, so a brute-force scan is the simplest way. A query
   across the globe seems pretty meaningless except for statistical
   purposes.

  On Nov 6, 8:14 am, Ian Bambury [EMAIL PROTECTED] wrote:
   2008/11/6 powera [EMAIL PROTECTED]

Can't you create an index on a Bookings entity based on time, so you
can do a query SELECT * FROM Bookings WHERE time = 1PM ?  If you can
keep the list of courts in memcache, it should be easy to compare this
list to the list of courts to find open listings at any given time.
With an index, you shouldn't have to worry about the 1000-item limit
unless you envision having more than 1000 reservations for a given
time.

   If someone searches for a free bowling lane for tomorrow evening between
  5
   and 10pm and say the average is 20 lanes per business, then I only need
  10
   bowling alleys and I'm at the limit

   If someone searches for a free lane any evening next week, then 3 bowling
   alleys will break the limit.



[google-appengine] Re: How to set a property is unique key in app engine?

2008-11-06 Thread yejun

On Nov 6, 12:13 pm, Andy Freeman [EMAIL PROTECTED] wrote:
 Can't this be done with a carefully constructed key_name?


There are a couple of differences.
A unique key constraint means an exception is thrown on duplication,
whereas saving with a duplicate key_name silently overwrites the existing
entity. You have to use get_or_insert to test uniqueness and avoid
overwriting.
You also cannot use an inequality filter on key_name.
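A toy in-memory model of the difference described above (plain dicts, not the real datastore API): put() with a duplicate key_name silently overwrites, while get_or_insert() keeps the existing entity.

```python
store = {}  # stands in for the datastore, keyed by key_name

def put(key_name, entity):
    # Like db.Model.put() with a key_name: silently overwrites.
    store[key_name] = entity

def get_or_insert(key_name, **kwds):
    # Like db.Model.get_or_insert(): returns the existing entity if
    # present, otherwise inserts a new one and returns it.
    if key_name not in store:
        store[key_name] = dict(kwds)
    return store[key_name]

put("alice", {"email": "a@example.com"})
put("alice", {"email": "b@example.com"})           # no error: overwritten
kept = get_or_insert("alice", email="c@example.com")
# kept is the previously stored entity; get_or_insert did not replace it
```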





[google-appengine] Re: html form with file upload, and download files from GAE

2008-11-05 Thread yejun

You can use a POST request to upload a file to S3 directly. S3 will
redirect the user back on success but not on failure.

On Nov 5, 1:13 pm, Dado [EMAIL PROTECTED] wrote:
 Shoot! How about file upload time then... I can always store the data
 on S3, but if GAE quits on a request that takes a long time because of
 large file size (I am thinking in the range 1-10Mb) then it is pretty
 useless for what I would like to do.

 Thanx,

 Dado

 On Nov 5, 9:45 am, Jeff S [EMAIL PROTECTED] wrote:

  Hi Dado,

  The 1MB file size might be increased at some point in the future, but
  this change won't be tied to the availability of billing.

  Thank you,

  Jeff

  On Nov 4, 10:40 am, Dado [EMAIL PROTECTED] wrote:

   I have the same issue with GAE in regards to the entity size limit.
   Could someone on the GAE team tell us if this limitation will be
   lifted for future paying customers? Also, is there a similar
   limitation on the size of a file upload (post)/download(get) request,
   given that the docs say request/response cycle cannot last longer than
   a few seconds?

   Thnx,

   Dado

   On Nov 2, 2:20 pm, cm_gui [EMAIL PROTECTED] wrote:

hi All

We want to implement our company Intranet (currently using XOOPS) on
Google Apps Sites (we are a Google Apps Premier customer).

We have a lot of online forms on our Intranet.   Forms like expense
reimbursement where our staff enter their expenses and attach receipts
to the form and then submit the form.  The form data will then be
email to our accounting staff with download links to the receipts
(which are stored our webserver).

It is not possible to implement this in Google Apps Sites, and so we
are thinking of creating an application on Google App Engine and then
packaging it into a Gadget and inserting the Gadget into a Dashboard
in Google Apps Sites.   It seems that this is the only way to
incorporate a Google App Engine application into Google Apps Sites,
i.e. via Google Gadget.

But it seems like there is not flat file system on GAE to store the
uploaded file.  And the file has to be stored as a blogtype entry in
Google datastore.   But Google Datastore has a limit of 1MB per
entry.   Our users sometimes upload files as big as 100MB (for CAD
files).   Also, after storing the file in the Datastore, we need it to
be immediately available for download as link in the form email
notification.    For example, after our staff submit  the expense
reimbursement form, an email will be sent to our accounting staff and
the email will contain a link to the receipt/invoice file uploaded,
and the accounting staff can download the receipt file -- this is what
we are doing now using our Xoops Intranet.

I got the information that there's no flat file storing in GAE and
that there is a 1MB limit on DataStore entries from this site:

   http://stackoverflow.com/questions/81451/upload-files-in-google-app-e...

Thank you.



[google-appengine] Re: GAE vs. EC2

2008-11-05 Thread yejun



On Nov 5, 1:55 pm, Barry Hunter [EMAIL PROTECTED] wrote:
 On Wed, Nov 5, 2008 at 6:39 PM, bFlood wrote:

  EBS - from the amazon docs:
  A volume can only be attached to one instance at a time, but many
  volumes can be attached to a single instance. 
 http://aws.amazon.com/ebs/

 Isn't this a inherent limitation of the technology, not a Amazon one?
 Trying to have multiple Operating Systems working on one actual disk
 is ripe for disaster (the volumes are low level block devices). How
 will locking and such work? never heard of connecting one physical
 disk to multiple machines.


NFS is neither scalable nor reliable. There are lots of other options,
such as iSCSI, GFS, Lustre, etc. Some are software, some are hardware;
in the end the problem comes down to performance, reliability, and
scalability. There is currently no watchdog or hardware-synchronizer
implementation on EC2, AFAIK, so most solutions will be hard to make
reliable and scalable. Of course Amazon could implement a better native
solution, but I think that will only be done on users' demand.










[google-appengine] Re: html form with file upload, and download files from GAE

2008-11-05 Thread yejun

You can't for now, because a Python variable is probably limited to 1MB
in size.

On Nov 5, 2:41 pm, Dado [EMAIL PROTECTED] wrote:
 Sure I can do that, but what if I want to verify or extract
 information from the file before storing it?

 On Nov 5, 10:49 am, yejun [EMAIL PROTECTED] wrote:

  You can use POST request to upload file to S3 directly. S3 will
  redirect user back on success but not on failure.

  On Nov 5, 1:13 pm, Dado [EMAIL PROTECTED] wrote:

   Shoot! How about file upload time then... I can always store the data
   on S3, but if GAE quits on a request that takes a long time because of
   large file size (I am thinking in the range 1-10Mb) then it is pretty
   useless for what I would like to do.

   Thanx,

   Dado

   On Nov 5, 9:45 am, Jeff S [EMAIL PROTECTED] wrote:

Hi Dado,

The 1MB file size might be increased at some point in the future, but
this change won't be tied to the availability of billing.

Thank you,

Jeff

On Nov 4, 10:40 am, Dado [EMAIL PROTECTED] wrote:

 I have the same issue with GAE in regards to the entity size limit.
 Could someone on the GAE team tell us if this limitation will be
 lifted for future paying customers? Also, is there a similar
 limitation on the size of a file upload (post)/download(get) request,
 given that the docs say request/response cycle cannot last longer than
 a few seconds?

 Thnx,

 Dado

 On Nov 2, 2:20 pm, cm_gui [EMAIL PROTECTED] wrote:

  hi All

  We want to implement our company Intranet (currently using XOOPS) on
  Google Apps Sites (we are a Google Apps Premier customer).

  We have a lot of online forms on our Intranet.   Forms like expense
  reimbursement where our staff enter their expenses and attach 
  receipts
  to the form and then submit the form.  The form data will then be
  email to our accounting staff with download links to the receipts
  (which are stored our webserver).

  It is not possible to implement this in Google Apps Sites, and so we
  are thinking of creating an application on Google App Engine and 
  then
  packaging it into a Gadget and inserting the Gadget into a Dashboard
  in Google Apps Sites.   It seems that this is the only way to
  incorporate a Google App Engine application into Google Apps Sites,
  i.e. via Google Gadget.

  But it seems like there is not flat file system on GAE to store the
  uploaded file.  And the file has to be stored as a blogtype entry in
  Google datastore.   But Google Datastore has a limit of 1MB per
  entry.   Our users sometimes upload files as big as 100MB (for CAD
  files).   Also, after storing the file in the Datastore, we need it 
  to
  be immediately available for download as link in the form email
  notification.    For example, after our staff submit  the expense
  reimbursement form, an email will be sent to our accounting staff 
  and
  the email will contain a link to the receipt/invoice file uploaded,
  and the accounting staff can download the receipt file -- this is 
  what
  we are doing now using our Xoops Intranet.

  I got the information that there's no flat file storing in GAE and
  that there is a 1MB limit on DataStore entries from this site:

 http://stackoverflow.com/questions/81451/upload-files-in-google-app-e...

  Thank you.



[google-appengine] Re: GAE vs. EC2

2008-11-05 Thread yejun

Just try it, and you will see. I am tired of arguing here. It makes little
sense to compare them hypothetically; they operate at completely
different levels of a real problem.

On Nov 5, 2:27 pm, sal [EMAIL PROTECTED] wrote:
 On Nov 5, 1:46 pm, yejun [EMAIL PROTECTED] wrote:

  But you can use a notepad to write a scalable web application on GAE
  just in 5 minutes and run it in seconds. On EC2 the development
   process will take a minimum of days.

 I'm not sure I agree with the quicker-development-cycle argument...
 see the threads on how to write a simple web counter with GAE, or do
 simple database queries.  Things very easily done using other
 technologies on an EC2 type of solution.  You just have to 1) agree to
 10 cents an hour to test your app 2) pick images you wouldn't have to
 choose with GAE.  You can even use Python on EC2 so thats not really a
 'pro' for GAE either.



[google-appengine] Re: more complicated counters/ratings (sorting?)

2008-11-04 Thread yejun

You keep the total rating value and the number of users who have rated in
two sharded counters that do not belong to your product's entity group.
Then update the average rating in your product's entity group
periodically, say once every 100 or 1,000 requests, depending on your
needs. That way the concurrency level is determined only by the number of
shards, not by the number of users or products.

Even though transferring data from the shard counters to your entity
group is non-transactional, the updates to the raw records are
transactional, so the data will be corrected the next time you update the
average rating. An update only takes a dozen or so datastore reads and
one put.

On Nov 4, 11:40 am, Jay Freeman \(saurik\) [EMAIL PROTECTED]
wrote:
 If I don't use transactions (with the ratings in the same entity groups as
 the shards they are being served by) then I can't be guaranteed I don't
 accidentally drop or double count ratings in the case of errors. :( Is the
 idea that you are recommending I just say oh well, its a drop in the
 bucket? -J

 --
 From: Alexander Kojevnikov [EMAIL PROTECTED]
 Sent: Tuesday, November 04, 2008 1:12 AM
 To: Google App Engine google-appengine@googlegroups.com
 Subject: [google-appengine] Re: more complicated counters/ratings (sorting?)

 ... I would use shards to track the ratings (but without using
  transactions) and from time to time re-calculate the average from the
  shards and keep it with the product. This would allow indexing by the
  average rating, without many sacrifices.

 ...



[google-appengine] Re: more complicated counters/ratings (sorting?)

2008-11-04 Thread yejun

You can save each individual rating anywhere; you just need it for the
record. But update your total rating and total user count in
transactional shard counters. For the average rating you only need the
rating total and the user counter, not the individual records.

On Nov 4, 12:15 pm, Jay Freeman \(saurik\) [EMAIL PROTECTED]
wrote:
 Ok, but this model still involves having the products in the same entity
 group as the sharded ratings, right? That isn't what I read from use shards
 to track the ratings (but without using transactions). I can easily see how
 you don't need a transaction to sum the shards into the total, but you
 definitely need a transaction to add the new rating and add its weight to
 the shard. -J

 --
 From: yejun [EMAIL PROTECTED]
 Sent: Tuesday, November 04, 2008 9:11 AM
 To: Google App Engine google-appengine@googlegroups.com
 Subject: [google-appengine] Re: more complicated counters/ratings (sorting?)



  You keep total rating values and number of users rated both in two
  shard counters which are not belong to your product entity group. Then
  update your average rating for your product's entity group
  periodically like once every 100 request or 1000 request depends on
  your need. So the concurrency level is only determined by the number
  of shards not by number of users or products.

  Even though the transfer data from shard counter to your entity group
  is non-transactional, but the updating of raw records are
  transactional, the data will be corrected next time when you update
  average rate. Updating will only takes the time of a dozen of
  datastore read and 1 put.

  On Nov 4, 11:40 am, Jay Freeman \(saurik\) [EMAIL PROTECTED]
  wrote:
  If I don't use transactions (with the ratings in the same entity groups
  as
  the shards they are being served by) then I can't be guaranteed I don't
  accidentally drop or double count ratings in the case of errors. :( Is
  the
  idea that you are recommending I just say oh well, its a drop in the
  bucket? -J

  --
  From: Alexander Kojevnikov [EMAIL PROTECTED]
  Sent: Tuesday, November 04, 2008 1:12 AM
  To: Google App Engine google-appengine@googlegroups.com
  Subject: [google-appengine] Re: more complicated counters/ratings
  (sorting?)

  ... I would use shards to track the ratings (but without using
   transactions) and from time to time re-calculate the average from the
   shards and keep it with the product. This would allow indexing by the
   average rating, without many sacrifices.

  ...



[google-appengine] Re: GAE vs. EC2

2008-11-04 Thread yejun

I feel this comparison is similar to raw meat vs cooked dinner.

On Nov 4, 12:31 pm, sal [EMAIL PROTECTED] wrote:
 Just curious to hear some opinions on this - especially from anyone
 who has experience with Amazon's EC2 as well as GAE.

 I just read a blog saying you can be up and running with EC2's
 cheapest offering with no upfront cost and 79$ a month.  You get a
 'real' virtualized Linux machine with 1.7GB of ram.  And by clicking a
 button (there are free graphical admin tools now), as many more
 instances/images as you need will pop up instantly using a system
 image that you create to handle whatever load you have. (Your bill
 goes just up as you click into more resources).

 There are loads of 'public' images to pick from, some include Python
 already. (Others have Java, PHP, etc).  By choosing one of these
 images you'll have Python running, with full root access to a server
 online that you can do whatever you like with.  I guess technically,
 someone could just put the GAE SDK up on an EC2 box, with some tweaks,
 and you could almost have your GAE app running there unmodified as
 well?

 I'm using GAE because of the zero, upfront cost currently... this is
 great for toying around with neat ideas - but for 'real world',
 demanding applications... you'll eventually have to pay even for GAE.
 What do we have offered that something like EC2 doesn't?

 Google has announced another language coming in a few months - but
 again EC2 allows to use whichever is installed in your machine image
 already - any language you can use in linux I suppose... not sure if
 its enough to keep me onboard once my app goes over its quotas and I
 have to start to pay for more.

 looking forward to hear thoughts!



[google-appengine] Re: more complicated counters/ratings (sorting?)

2008-11-04 Thread yejun

When someone changes their rating, you just need to compute the delta
and update the total rating accordingly.

An individual record and the average rating cannot be kept in atomic
sync. Entity groups need to be small, because a read or write on an
entity group causes the entire group to be serialized and deserialized.

Also, the average will have at most 3 significant figures, so one wrong
update out of a million records won't affect the average value at all.
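The delta update described above can be sketched as follows (names are hypothetical; in the real app the running total would live in the shard counters):

```python
# When a user changes an existing rating, only the difference is applied
# to the running total; the rating count stays the same.
def change_rating(total, count, old_value, new_value):
    delta = new_value - old_value
    return total + delta, count

total, count = 17, 4          # e.g. existing ratings 4, 5, 3, 5
total, count = change_rating(total, count, 3, 5)  # a user bumps a 3 to a 5
print(total / count)  # 4.75
```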

On Nov 4, 1:01 pm, Jay Freeman \(saurik\) [EMAIL PROTECTED] wrote:
 I'm sorry, I said products in the same, I meant individual ratings in the
 same, although I think you got it.

 I definitely need to have the individual ratings in the same entity group as
 the sharded counts or I lost safety updating them. Example: I might commit a
 rating (so the user sees his rating if he looks) but fail to update the
 count. If he later goes and changes the rating I would be removing the old
 total (which never got counted) and adding the new one, which is obviously
 wrong.

 This is the comment I was making earlier: these examples of using sharded
 counters seem to be completely missing on the idea that the thing you are
 counting has to be in the same entity group as the sharded count or your
 risk double or zero counting it. The actual example from Brett is immune
 because he doesn't actually have anything he's counting (just button
 pushes), but if it were blog posts or comments then those items need to be
 in the same entity group so you can update the shard's count and add the
 post as a single, atomic transaction.

 -J

 --
 From: yejun [EMAIL PROTECTED]
 Sent: Tuesday, November 04, 2008 9:52 AM
 To: Google App Engine google-appengine@googlegroups.com
 Subject: [google-appengine] Re: more complicated counters/ratings (sorting?)



  You can save your individual rating anywhere, you just need it for the
  record.
  Put you update your total rating and total user record in a
  transaction counter. For average rating you only need total rating and
  total user counter, you don't need individual record.

  On Nov 4, 12:15 pm, Jay Freeman \(saurik\) [EMAIL PROTECTED]
  wrote:
  Ok, but this model still involves having the products in the same entity
  group as the sharded ratings, right? That isn't what I read from use
  shards
  to track the ratings (but without using transactions). I can easily see
  how
  you don't need a transaction to sum the shards into the total, but you
  definitely need a transaction to add the new rating and add its weight to
  the shard. -J

  --
  From: yejun [EMAIL PROTECTED]
  Sent: Tuesday, November 04, 2008 9:11 AM
  To: Google App Engine google-appengine@googlegroups.com
  Subject: [google-appengine] Re: more complicated counters/ratings
  (sorting?)

   You keep total rating values and number of users rated both in two
   shard counters which are not belong to your product entity group. Then
   update your average rating for your product's entity group
   periodically like once every 100 request or 1000 request depends on
   your need. So the concurrency level is only determined by the number
   of shards not by number of users or products.

   Even though the transfer data from shard counter to your entity group
   is non-transactional, but the updating of raw records are
   transactional, the data will be corrected next time when you update
   average rate. Updating will only takes the time of a dozen of
   datastore read and 1 put.

   On Nov 4, 11:40 am, Jay Freeman \(saurik\) [EMAIL PROTECTED]
   wrote:
   If I don't use transactions (with the ratings in the same entity
   groups
   as
   the shards they are being served by) then I can't be guaranteed I
   don't
   accidentally drop or double count ratings in the case of errors. :( Is
   the
   idea that you are recommending I just say oh well, its a drop in the
   bucket? -J

   --
   From: Alexander Kojevnikov [EMAIL PROTECTED]
   Sent: Tuesday, November 04, 2008 1:12 AM
   To: Google App Engine google-appengine@googlegroups.com
   Subject: [google-appengine] Re: more complicated counters/ratings
   (sorting?)

   ... I would use shards to track the ratings (but without using
transactions) and from time to time re-calculate the average from
the
shards and keep it with the product. This would allow indexing by
the
average rating, without many sacrifices.

   ...



[google-appengine] Re: GAE vs. EC2

2008-11-04 Thread yejun

Of course, you can have someone cook the raw meat into a dinner. There's
no real difference in the end.

The difficulty with EC2 for a small project is the scaling part: you need
to either buy or write your own management code for what is almost a real
cluster, minus the hardware. You need to monitor server load, start new
EC2 instances when load gets high, and terminate extra unused ones.
You have to handle far more possible failure cases than on GAE.

On Nov 4, 1:39 pm, sal [EMAIL PROTECTED] wrote:
 Point taken, in the scenario that you might have to make your own
 image, possibly...

 But assume that someone signs up for EC2, and just chooses an existing
 image with Python in it.  Really there isn't much cooking involved
 correct?  You should have a working server up pretty quickly...

 (a few other considerations: within GAE your serverside RAM can be
 invalidated at-random, as well as the memcache... and we're limited to
 using a sortof limited Datastore, rather than the full RDBMS you could
 have in an EC2 image)  Maybe a bit like a free dinner without a fork?
 =)

 On Nov 4, 1:19 pm, yejun [EMAIL PROTECTED] wrote:

  I feel this comparison is similar to raw meat vs cooked dinner.

  On Nov 4, 12:31 pm, sal [EMAIL PROTECTED] wrote:

   Just curious to hear some opinions on this - especially from anyone
   who has experience with Amazon's EC2 as well as GAE.

   I just read a blog saying you can be up and running with EC2's
   cheapest offering with no upfront cost and 79$ a month.  You get a
   'real' virtualized Linux machine with 1.7GB of ram.  And by clicking a
   button (there are free graphical admin tools now), as many more
   instances/images as you need will pop up instantly using a system
   image that you create to handle whatever load you have. (Your bill
   goes just up as you click into more resources).

   There are loads of 'public' images to pick from, some include Python
   already. (Others have Java, PHP, etc).  By choosing one of these
   images you'll have Python running, with full root access to a server
   online that you can do whatever you like with.  I guess technically,
   someone could just put the GAE SDK up on an EC2 box, with some tweaks,
   and you could almost have your GAE app running there unmodified as
   well?

   I'm using GAE because of the zero, upfront cost currently... this is
   great for toying around with neat ideas - but for 'real world',
   demanding applications... you'll eventually have to pay even for GAE.
   What do we have offered that something like EC2 doesn't?

   Google has announced another language coming in a few months - but
   again EC2 allows to use whichever is installed in your machine image
   already - any language you can use in linux I suppose... not sure if
   its enough to keep me onboard once my app goes over its quotas and I
   have to start to pay for more.

   looking forward to hear thoughts!



[google-appengine] Re: GAE vs. EC2

2008-11-04 Thread yejun

Has anyone compared SDB and google datastore?

On Nov 4, 2:29 pm, Andrew Badera [EMAIL PROTECTED] wrote:
 Or EBS for that matter too (S3, SimpleDB, EBS)

 On Tue, Nov 4, 2008 at 2:28 PM, Andrew Badera [EMAIL PROTECTED] wrote:
  You don't have to use S3 with EC2 ... ... what are you talking about?

  You CAN use S3 ... or SimpleDB ... or any third party storage service ...

  There are plenty of third-party tools (Rightscale comes to mind) that make
  scaling EC2 a breeze.

  Thanks-
  - Andy Badera
  - [EMAIL PROTECTED]
  - (518) 641-1280

  -http://higherefficiency.net/
  -http://changeroundup.com/

  -http://flipbitsnotburgers.blogspot.com/
  -http://andrew.badera.us/

  - Google me:http://www.google.com/search?q=andrew+badera

  On Tue, Nov 4, 2008 at 2:25 PM, Arash [EMAIL PROTECTED] wrote:

  There is a point which you are missing here. Firing up more images in
  EC2 does not makes your application scalable. There is lots and lots
  of other issues here. With EC2 you have to use S3 etc etc.
  there might be some point to consider working with GAE but in short I
  think there is much more to do if you want a scalable application in
  EC2.

  On Nov 4, 2:10 pm, sal [EMAIL PROTECTED] wrote:
Of course, you can have someone cook the raw meat to dinner. There's
no actual difference in the end.

   These were my thoughts too... if its the same difference in the end...
   I'm looking for reasons as to why one would stick with GAE long-term.

The difficulty to EC2 for small project is the scaling part, you need
either buy or write your own management code for an almost real
cluster minus hardware. You need to monitor server load, and start new
EC2 instance when load gets high and terminate extra unused servers.
You need to take care way more possible exceptions then GAE.

   It seems there are images you can choose for EC2 which automatically
   load balance/scale when you boot new instances...

On Nov 4, 1:39 pm, sal [EMAIL PROTECTED] wrote:

 Point taken, in the scenario that you might have to make your own
 image, possibly...

 But assume that someone signs up for EC2, and just chooses an
  existing
 image with Python in it.  Really there isn't much cooking involved
 correct?  You should have a working server up pretty quickly...

 (a few other considerations: within GAE your serverside RAM can be
 invalidated at-random, as well as the memcache... and we're limited
  to
 using a sortof limited Datastore, rather than the full RDBMS you
  could
 have in an EC2 image)  Maybe a bit like a free dinner without a
  fork?
 =)

 On Nov 4, 1:19 pm, yejun [EMAIL PROTECTED] wrote:

  I feel this comparison is similar to raw meat vs cooked dinner.

  On Nov 4, 12:31 pm, sal [EMAIL PROTECTED] wrote:

   Just curious to hear some opinions on this - especially from
  anyone
   who has experience with Amazon's EC2 as well as GAE.

   I just read a blog saying you can be up and running with EC2's
   cheapest offering with no upfront cost and 79$ a month.  You get
  a
   'real' virtualized Linux machine with 1.7GB of ram.  And by
  clicking a
   button (there are free graphical admin tools now), as many more
   instances/images as you need will pop up instantly using a
  system
   image that you create to handle whatever load you have. (Your
  bill
   goes just up as you click into more resources).

   There are loads of 'public' images to pick from, some include
  Python
   already. (Others have Java, PHP, etc).  By choosing one of these
   images you'll have Python running, with full root access to a
  server
   online that you can do whatever you like with.  I guess
  technically,
   someone could just put the GAE SDK up on an EC2 box, with some
  tweaks,
   and you could almost have your GAE app running there unmodified
  as
   well?

   I'm using GAE because of the zero, upfront cost currently...
  this is
   great for toying around with neat ideas - but for 'real world',
   demanding applications... you'll eventually have to pay even for
  GAE.
   What do we have offered that something like EC2 doesn't?

   Google has announced another language coming in a few months -
  but
   again EC2 allows to use whichever is installed in your machine
  image
   already - any language you can use in linux I suppose... not
  sure if
   its enough to keep me onboard once my app goes over its quotas
  and I
   have to start to pay for more.

   looking forward to hear thoughts!

[google-appengine] Re: GAE vs. EC2

2008-11-04 Thread yejun

On EC2 you can make your own image, but that's not the point. I believe
no one in their right mind would use a public image for their production
system; it is just not safe by any means.

You need to do a lot more than just launch an image. You need to manage
it like a real operating system: security auditing, patching, debugging.
Of course, you can also simply terminate any image that is causing
problems.

I think it's still too early to say which cloud computing product will
succeed in the end, though. They may coexist forever, just like the
grocery store and the restaurant.

On Nov 4, 2:57 pm, sal [EMAIL PROTECTED] wrote:
  I think the scaling issue here is understated. Compared to traditional
  scaling strategies that organizations use to perform, GAE provides
  alot of transparency. The premise of GAE is to put focus on
  development of applications. Thus, GAE is more developer focused. EC2
  is a more general solution. Furthermore, I imagine instantiating more
  VMs is a form of network administration that doesn't exist in GAE ...

 I wouldn't assume this yet, as I'm sure you'll have to perform some
 sort of verification/configuration to 'scale up' with GAE also.
 Google likely wont just let your app spike though all the funds in
 your bank account - you'll have to login to some kind of console to
 configure how much resources you want to pay for.  EC2 has the same
 thing basically - you just go to a web page and control how many
 instances are running.

  unless your application is so advanced that it comes with logic to
  efficiently instantiate and shutdown VMs on its own.

 There are already free utilities that do this with EC2 it seems.



  On Nov 4, 11:20 am, sal [EMAIL PROTECTED] wrote:

EC2 also has a lot other usage than hosting a web site. You can use it
for scientific computing, video transcoding, data mining and etc.

   I agree - you have a little more freedom / computing power / resources
   than you do with GAE, and its pretty cheap.  A quick lookthrough on
   Amazon's site shows EC2's lowend costing $0.10 per hour (ten cents an
   hour) to use.  And you can shut it down/start it up whenever you want
   so you don't incur much cost while 'playing around' in the beginning.

   I did like being able to 'dive in' to GAE just using my Google login
   and start playing around - but EC2 seems more practical for real world
   use yet.  There needs to be more to make GAE something viable... or
   maybe Google's not really aiming to compete on the 'high end' cloud
   computing arena, more just to give a place for people to create Google
   Gadgets?  (In that case it should be named 'Google Gadget Engine'!!)
   But I don't think that's the case, I must be missing something =)



[google-appengine] Re: user login problem

2008-11-04 Thread yejun

Naked domains are no longer supported by Google. I think you'd better
redirect anyone using the naked domain to www.

On Nov 4, 3:13 pm, Richie [EMAIL PROTECTED] wrote:
 Hello,

 my application is http://www.eaglefeed.me

 Login from there works like a charm, but login from http://eaglefeed.me
 does not.

 from my sourcecode users.get_current_user() seem to return None after
 login.

 what can I do about this?

 any help is appreciated. maybe someone could tell me, how to redirect
 my users as a work around.

 Thanks in advance,

 Richie



[google-appengine] Re: Indexing does not work

2008-11-04 Thread yejun

Only compound queries need a custom index; single properties are already
implicitly indexed.
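For reference, a compound query is one that filters or sorts on more than one property; a hand-written index.yaml entry for such a query looks roughly like this (the kind and property names here are hypothetical, not taken from the thread):

```yaml
indexes:
- kind: Message
  properties:
  - name: author
  - name: created
    direction: desc
```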

On Nov 4, 2:54 pm, Richie [EMAIL PROTECTED] wrote:
 Hello,

 my user base is increasing that's why I wanted to index my tables.

 index.yaml is not updated automatically on my windows machine (I tried
 a lot of things, including deleting it, etc.) that's why I updated it
 manually.

 I uploaded my app (did not change the version) but appspot still
 states You have not created indexes for this application. 

 Could someone please tell me how to index my datastore?

 Thanks in advance,

 Richie



[google-appengine] Re: GAE vs. EC2

2008-11-04 Thread yejun

EC2 also has many uses other than hosting a web site. You can use it
for scientific computing, video transcoding, data mining, etc.

On Nov 4, 1:39 pm, sal [EMAIL PROTECTED] wrote:
 Point taken, in the scenario that you might have to make your own
 image, possibly...

 But assume that someone signs up for EC2, and just chooses an existing
 image with Python in it.  Really there isn't much cooking involved
 correct?  You should have a working server up pretty quickly...

 (a few other considerations: within GAE your serverside RAM can be
 invalidated at-random, as well as the memcache... and we're limited to
 using a sortof limited Datastore, rather than the full RDBMS you could
 have in an EC2 image)  Maybe a bit like a free dinner without a fork?
 =)

 On Nov 4, 1:19 pm, yejun [EMAIL PROTECTED] wrote:

  I feel this comparison is similar to raw meat vs cooked dinner.

  On Nov 4, 12:31 pm, sal [EMAIL PROTECTED] wrote:

   Just curious to hear some opinions on this - especially from anyone
   who has experience with Amazon's EC2 as well as GAE.

   I just read a blog saying you can be up and running with EC2's
   cheapest offering with no upfront cost and 79$ a month.  You get a
   'real' virtualized Linux machine with 1.7GB of ram.  And by clicking a
   button (there are free graphical admin tools now), as many more
   instances/images as you need will pop up instantly using a system
   image that you create to handle whatever load you have. (Your bill
   goes just up as you click into more resources).

   There are loads of 'public' images to pick from, some include Python
   already. (Others have Java, PHP, etc).  By choosing one of these
   images you'll have Python running, with full root access to a server
   online that you can do whatever you like with.  I guess technically,
   someone could just put the GAE SDK up on an EC2 box, with some tweaks,
   and you could almost have your GAE app running there unmodified as
   well?

   I'm using GAE because of the zero, upfront cost currently... this is
   great for toying around with neat ideas - but for 'real world',
   demanding applications... you'll eventually have to pay even for GAE.
   What do we have offered that something like EC2 doesn't?

   Google has announced another language coming in a few months - but
   again EC2 allows to use whichever is installed in your machine image
   already - any language you can use in linux I suppose... not sure if
   its enough to keep me onboard once my app goes over its quotas and I
   have to start to pay for more.

   looking forward to hear thoughts!



[google-appengine] Re: user login problem

2008-11-04 Thread yejun

Google's official word is that you should leave the redirect to your
domain registrar.
But if your domain already arrives naked, I think you can test
os.environ['HTTP_HOST'] in your request handler to see whether the host
is naked or not, then do an HTTP 301 redirect.
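A minimal sketch of that check (the helper name is an assumption; in a real handler you would return a 301 with this URL as the Location header):

```python
def www_redirect(environ):
    """Return the www redirect target, or None if the host is already www."""
    host = environ.get('HTTP_HOST', '')
    if host.startswith('www.'):
        return None
    path = environ.get('PATH_INFO', '/')
    return 'http://www.%s%s' % (host, path)

# Naked host: redirect; www host: serve normally.
print(www_redirect({'HTTP_HOST': 'eaglefeed.me', 'PATH_INFO': '/feed'}))
# http://www.eaglefeed.me/feed
```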

On Nov 4, 5:19 pm, Richie [EMAIL PROTECTED] wrote:
 do you have any information on how to do this?

 Richie

 On 4 Nov., 21:14, yejun [EMAIL PROTECTED] wrote:

  Naked domain is no longer supported by google. I think you'd better
  redirect anyone use naked domain to www.

  On Nov 4, 3:13 pm, Richie [EMAIL PROTECTED] wrote:

   Hello,

   my application is http://www.eaglefeed.me

   Login from there works like a charm, but login from http://eaglefeed.me
   does not.

   from my sourcecode users.get_current_user() seem to return None after
   login.

   what can I do about this?

   any help is appreciated. maybe someone could tell me, how to redirect
   my users as a work around.

   Thanks in advance,

   Richie



[google-appengine] Re: Help: Problem using WSGIApplication to automatically generate error page

2008-11-04 Thread yejun

Put get = post in your not_found handler, so GET requests are served by
the same method as POST.
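The trick can be sketched with a plain class standing in for webapp.RequestHandler (names are illustrative):

```python
class NotFoundHandler(object):
    def post(self):
        return 'custom 404 page'

    # Alias so GET requests fall through to the same code as POST.
    get = post

h = NotFoundHandler()
print(h.get())   # custom 404 page
print(h.post())  # custom 404 page
```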


On Nov 4, 7:57 pm, Alok [EMAIL PROTECTED] wrote:
 Here is my app.yaml:

 application: proshortsetf
 version: 1
 runtime: python
 api_version: 1

 handlers:
 - url: /blueprint
   static_dir: blueprint
 - url: /candlestick
   static_dir: candlestick
 - url: /.*
   script: main.py

 I have also posted the main.py code here:
 http://pastie.textmate.org/private/hqnkofw3tnzmfy45wbdpw

 On Nov 4, 7:27 pm, Alexander Kojevnikov [EMAIL PROTECTED]
 wrote:

  Your code looks fine to me. Could you paste here your app.yaml file?



[google-appengine] Re: pages displaying after logout

2008-11-03 Thread yejun

Use the HTTP headers

Pragma: no-cache
Cache-Control: no-store, no-cache, must-revalidate, max-age=0

and the browser will not store the POST response.
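A small sketch of applying those exact headers (the helper is hypothetical; with CherryPy or webapp you would set them on the framework's response object):

```python
NO_CACHE_HEADERS = {
    'Pragma': 'no-cache',
    'Cache-Control': 'no-store, no-cache, must-revalidate, max-age=0',
}

def apply_no_cache(headers):
    """Merge the no-cache headers into an existing response-header dict."""
    headers.update(NO_CACHE_HEADERS)
    return headers

resp = apply_no_cache({'Content-Type': 'text/html'})
print(resp['Cache-Control'])  # no-store, no-cache, must-revalidate, max-age=0
```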

On Nov 3, 3:33 am, saurabh [EMAIL PROTECTED] wrote:
 Hi,

    i am developing an application in cherrypy. I am new to
 python . i got a problem that after logout if i press back
 button(of browser) the pages are still displaying .. how can i
 resolve this ...

 Regards,

 Saurabh Dwivedi



[google-appengine] Re: External Storage Options?

2008-11-03 Thread yejun

You can use Amazon S3 storage for now, which supports query-string
authorization. I believe Google will offer large-file storage in the
near future.
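Query-string authorization here refers to S3's legacy signed-URL scheme: sign an expiring GET request with your secret key and hand the resulting URL to the browser, so it can fetch a private object directly. A rough sketch of that scheme (bucket, key, and credentials are placeholders):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def presign_s3_get(bucket, key, access_key, secret_key, expires):
    """Build an expiring signed GET URL for an S3 object (legacy scheme)."""
    # Canonical string for a plain GET with no extra headers.
    string_to_sign = 'GET\n\n\n%d\n/%s/%s' % (expires, bucket, key)
    sig = base64.b64encode(
        hmac.new(secret_key.encode(), string_to_sign.encode(),
                 hashlib.sha1).digest()).decode()
    return ('https://%s.s3.amazonaws.com/%s'
            '?AWSAccessKeyId=%s&Expires=%d&Signature=%s'
            % (bucket, key, access_key, expires, quote(sig, safe='')))

url = presign_s3_get('my-bucket', 'photo.jpg', 'AKIDEXAMPLE', 'secret',
                     1230000000)
print(url)
```

The GAE app would generate such a URL per request, so only users who pass your Google-account check ever receive a (short-lived) link to the image.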

On Nov 3, 11:19 am, jivany [EMAIL PROTECTED] wrote:
 Part of me thinks this is a stupid question and I'm over-complicating
 the solution but...

 I have a server with a large amount of storage space available.  I'd
 like to use that space to serve up images through a GAE app.  The
 reason I want a GAE app is so I can integrate with my Google Apps on
 my domain. The reason I want to use the other server is because it's
 paid for and has a much higher storage limit than the 500MB on GAE.

 I think the easiest way is to just have the GAE app get the browser to
 pull the images from the external server.  My concern is finding a way
 to secure the external images so that they can only be accessed with a
 valid Google account (as if they were on GAE).

 Can anyone point me in the right direction to do something like this?
 Am I over-complicating this?



[google-appengine] Re: JSONP Gateway

2008-11-03 Thread yejun

I think JSON only uses UTF-8 encoding.
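Whatever the default, the practical fix is to emit the JSON body as UTF-8 and declare that in the Content-Type header, so the browser does not fall back to ISO-8859-1. A sketch (the handler shape is hypothetical):

```python
import json

def json_response(data):
    """Encode data as UTF-8 JSON and declare the charset explicitly."""
    body = json.dumps(data, ensure_ascii=False).encode('utf-8')
    headers = {'Content-Type': 'application/json; charset=utf-8'}
    return headers, body

headers, body = json_response({'title': 'n\u00e9gociation priv\u00e9e'})
print(body.decode('utf-8'))  # {"title": "négociation privée"}
```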

On Nov 3, 2:22 pm, Sylvain [EMAIL PROTECTED] wrote:
 I think that the current encoding is set to : ISO-8859-1 and the
 response is UTF-8.

 So if i change my default browser encoding to UTF-8, all accented
 chars are well displayed.

 Regards

 On 3 nov, 20:13, Malte Ubl [EMAIL PROTECTED] wrote:

  Hi, this looks exactly like expected. Was it clear that the output of
  the web service is to be comsumed by a JSONP web service client like
  jQuery?

  On Mon, Nov 3, 2008 at 1:43 PM, Sylvain [EMAIL PROTECTED] wrote:

   Google Group raises an error, so I've copy/past here :

  http://paste.blixt.org/1982

   Regards

   On 3 nov, 11:52, Malte Ubl [EMAIL PROTECTED] wrote:
   OK,

   could you paste contents when you View Source the message?

   On Mon, Nov 3, 2008 at 9:31 AM, Sylvain [EMAIL PROTECTED] wrote:

The one you give in the first post

   http://jsonpgateway.appspot.com/?url=http%3A//search.yahooapis.com/Im...

And no, it is not html

On 3 nov, 07:49, Malte Ubl [EMAIL PROTECTED] wrote:
Which URL is that? Looks like your browser might have rendered the 
JSON as html.

On Sun, Nov 2, 2008 at 10:39 PM, Sylvain [EMAIL PROTECTED] wrote:

 here is a screenshot 
 :http://testgapp.appspot.com/o/npdu8j2AZYsK8QK0MSx6VqtnMu8t2h.jpg

 On 2 nov, 22:35, Sylvain [EMAIL PROTECTED] wrote:
 Yes, I don't why but Google Group changes the encoding.

 If you look at your results, it was something like that :
 n@gociation priv@e

 On 2 nov, 20:25, Malte Ubl [EMAIL PROTECTED] wrote:

  What may be an encoding issue? Do you see an error?

  On Sun, Nov 2, 2008 at 12:33 PM, Sylvain [EMAIL PROTECTED] 
  wrote:

   maybe an encoding issue ?

   négociation pour 12 millions de dollars et un autre privée

   On 2 nov, 11:28, Malte Ubl [EMAIL PROTECTED] wrote:
   Hey,

   you can provide a callback parameter that may include [] so 
   you can
   write something like this callback=jsonpCall[13]

   Bye,

   Malte

   On Sat, Nov 1, 2008 at 2:01 AM, Jean-Lou Dupont

   [EMAIL PROTECTED] wrote:

Just a suggestion: you could add a context parameter for
applications where multiple dispatch is required.
Jean-Lou.

On Oct 31, 3:00 pm, Malte Ubl [EMAIL PROTECTED] wrote:
Hey,

I've just released a very simple application that can be 
used to turn
any JSON webservice into a JSONP webservice.
It takes an url parameter, fetches that url, validates the 
json and
then returns it wrapped in a callback function.

Is this ok with the terms of service? Do you think the 
problems with
respect to security are too serious to leave this online?
An example is 
here:http://jsonpgateway.appspot.com/?url=http%3A//search.yahooapis.com/Im...

Bye
Malte



[google-appengine] Re: External Storage Options?

2008-11-03 Thread yejun

I don't think urlfetch is a good way to fetch remote objects,
especially large ones like images: you will be double-charged for the
network usage. I think an HTTP 302 redirect with URL-based
authentication would be a more economical solution.
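The redirect-plus-URL-authentication idea can be sketched with an HMAC-signed, expiring URL that the image server verifies (the hostname and shared secret are placeholders):

```python
import hashlib
import hmac
import time

# Assumption: this secret is shared between the GAE app and the
# external image server, which recomputes and checks the signature.
SECRET = b'shared-with-the-image-server'

def signed_image_url(path, user, ttl=300):
    # The GAE app 302-redirects the browser here instead of
    # proxying the image bytes itself.
    expires = int(time.time()) + ttl
    msg = ('%s|%s|%d' % (path, user, expires)).encode()
    sig = hmac.new(SECRET, msg, hashlib.sha1).hexdigest()
    return ('http://images.example.com%s?user=%s&expires=%d&sig=%s'
            % (path, user, expires, sig))

url = signed_image_url('/photos/1.jpg', 'alice')
```

The image bytes never transit GAE, so only the tiny redirect response counts against the app's bandwidth.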

On Nov 3, 12:45 pm, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
 A solution based off of this 
 :http://stuporglue.org/restrict-access-jpg-htaccess.php
 on your remote server.

 Use urlfetch to return and display the images. You'll need to
 determine the best method for passing off to the remote server that
 the request is from a validated GAE user, but that should be simple
 enough for you to figure out without posting the information in a
 group which is indexed by search engines allowing people to easily
 bypass whatever you put in place.

 On Nov 3, 12:28 pm, yejun [EMAIL PROTECTED] wrote:

  You can use amazon S3 storage for now which support query string
  authorization. I believe google will offer large file storage in the
  near future.

  On Nov 3, 11:19 am, jivany [EMAIL PROTECTED] wrote:

   Part of me thinks this is a stupid question and I'm over-complicating
   the solution but...

   I have a server with a large amount of storage space available.  I'd
   like to use that space to serve up images through a GAE app.  The
   reason I want a GAE app is so I can integrate with my Google Apps on
   my domain. The reason I want to use the other server is because it's
   paid for and has a much higher storage limit than the 500MB on GAE.

   I think the easiest way is to just have the GAE app get the browser to
   pull the images from the external server.  My concern is finding a way
   to secure the external images so that they can only be accessed with a
   valid Google account (as if they were on GAE).

   Can anyone point me in the right direction to do something like this?
   Am I over-complicating this?



[google-appengine] Re: App Engine vs. Python vs. OOP terminology?

2008-11-03 Thread yejun

One thing really puzzles me about Python: if you change a class
definition after you create an instance of that class, how should the
instance behave? It seems a Python instance makes its own copy of a
class attribute only on write.
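That copy-on-write behavior can be demonstrated directly (a minimal sketch, nothing App Engine specific):

```python
class C:
    x = 1  # class attribute, shared by all instances

a = C()
b = C()

C.x = 2           # changing the class is visible through instances...
assert a.x == 2

a.x = 10          # ...until an instance writes, which creates a
C.x = 3           # shadowing instance attribute
assert a.x == 10  # a now reads its own copy
assert b.x == 3   # b still reads through to the class
```

Reads fall through to the class until the first assignment on the instance; only then does the instance get its own attribute.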

On Nov 3, 11:40 am, pr3d4t0r [EMAIL PROTECTED] wrote:
 On Nov 3, 8:31 am, David Symonds [EMAIL PROTECTED] wrote:

   Kind -- it's really a class, a specialization of the db.Model or
   db.Expando classes.

  Not quite. An entity may look like an instance of a Kind when accessed
  through the db.Model layer, but it isn't fundamentally a specific
  Kind. For instance, the Key-space is across all entities of your
  datastore, not of a particular kind. Also, if you change your
  db.Model, existing entities aren't changed automatically.

 Coolio and agreed - I've stuck with the term entity for that
 reason.  What about other places where App Engine seems to have its
 own terminology for Python/OO concepts?

 In general I'm going with:

 1. If it's clear that this is a something specific to App Engine (like
 your example), use App Engine terms
 2. If it's what appears to be a generic Python term renamed, use the
 Python term (e.g. decorator instead of annotation unless the decorator
 actually annotates the code!)
 3. If it's neither of the above, find a generic OOP/OOD term that
 applies depending on context and intended meaning

 Thanks Dave and have a great day!

 Ehttp://www.istheserverup.nethttp://www.teslatestament.com



[google-appengine] Re: counter use pattern

2008-11-03 Thread yejun

 To solve this, I'm also
 planning to destroy and recreate the memcache object upon successful
 datastore write (and associated memcache delay being reset to zero).

You shouldn't do this. It completely negates the reason you use this
counter class.
If you just need a counter for a local object, you should just save
the counter with the object itself.

This counter class should only be used when the same named counter
needs to be incremented more than 10 times per second by different
requests/users concurrently.
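For reference, the sharded-counter pattern under discussion can be sketched in plain Python (a dict stands in for the datastore; the shard count and counter names are illustrative):

```python
import random

NUM_SHARDS = 20

def shard_key(counter_name):
    # Picking a random shard spreads write contention across
    # NUM_SHARDS independent entities instead of one hot row.
    return '%s-%d' % (counter_name, random.randint(0, NUM_SHARDS - 1))

store = {}  # stands in for the datastore: shard key -> count

def increment(name):
    k = shard_key(name)
    store[k] = store.get(k, 0) + 1

def total(name):
    # Reading the total requires summing every shard of the counter.
    return sum(v for k, v in store.items()
               if k.rsplit('-', 1)[0] == name)

for _ in range(1000):
    increment('TotalRequests')
```

Every increment still hits storage; sharding only removes the single-entity write bottleneck, which is exactly the point being made above.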





[google-appengine] Re: counter use pattern

2008-11-03 Thread yejun

I believe the memcached eviction algorithm keeps the most-used
objects, not just the oldest by creation time.
The reason you use a sharded counter is that you want the increment
operation to always hit the disk. Otherwise you don't need shards,
because you don't hit the datastore very often if writes are cached in
memcache. Sharding is used here to improve write concurrency.

On Nov 3, 3:43 pm, josh l [EMAIL PROTECTED] wrote:
 Yejun,

 Thanks for the updated code example.  Since a counter is such a common
 need, I think it might be helpful if we all worked together on the
 same codebase, rather than forking each time, or at least if we could
 be specific about the changes we made (I know I can do a diff, but if
 you noted what exactly you changed, and why, that would be awesome for
 all future users who are curious).

 Moving on, I think I didn't explain myself well regarding the
 destruction of the memcache object.  Imagine an app where
  1) There will definitely be at least 10 counter requests/sec (for
 same named counter.  let's call it the TotalRequests counter, and is
 referred to by some Stats model)
  2) Lots and lots of other entities get written to memcache (millions
 of entities in the system, and each gets cached upon initial request)

 In this situation, it is guaranteed objects in our memcache will
 disappear after some use, since we have less memcache total avail than
 the size of items that will be cached over a few days of use.  Now,
 which items get removed?  In this case, our counter is the first
 created item in memcache, and definitely one of the first items to be
 just nuked from memcache when we hit the max storage limit for our
 apps' memcache.  To ensure it never gets nuked due to it being the
 'oldest object in memcache', then we could 'occasionally' destroy/
 recreate it.  Maybe, for example, I could also have a time_created on
 it, and if it's older than a few hours, then nuke/recreate upon
 resetting it.  I figured might as well do this every time, but anyway
 hopefully you see my point as to why I was thinking about the need to
 destroy/reuse.

 Much more important than this very occasional mis-count for a
 destroyed memcache item, tho, is my general idea of just not even
 attempting to write to a shard entity unless we've had a few (10?,
 50?) counter increments.  I am getting ~350ms/request average due to
 the time it takes writing to the shards (multiple counters/request),
 and this is my main concern with the current code.

 I will diff your code (thanks again) and check it out this afternoon.

   -Josh

 On Nov 3, 12:22 pm, yejun [EMAIL PROTECTED] wrote:

   To solve this, I'm also
   planning to destroy and recreate the memcache object upon successful
   datastore write (and associated memcache delay being reset to zero).

  You shouldn't do this. It the completely negates the reason why you
  use this counter class.
  If you just need a counter for a local object, you should just save
  the counter with object itself.

  This counter  class should only be used in situation when the same
  named counter need to be increment more than 10 times per second by
  different requests/users concurrently.



[google-appengine] Re: counter use pattern

2008-11-03 Thread yejun

The reason not to combine a sharded counter with memcache-buffered
writes is that with memcache you will only write to the datastore once
per second, or once every 10 seconds. I see no reason why you need a
sharded counter at that kind of frequency.
And when a memcache reset happens, losing 1/10 of a second of data or
a full second of data seems like similar reliability to me.
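The buffered-write approach being debated can be sketched like this (dicts stand in for memcache and the datastore; the flush threshold is illustrative):

```python
FLUSH_EVERY = 10  # only 1 in 10 increments touches the datastore

memcache = {}   # volatile buffer: counter name -> pending increments
datastore = {}  # durable store: counter name -> committed count

def increment(name):
    n = memcache.get(name, 0) + 1
    if n >= FLUSH_EVERY:
        # Flush the buffered increments in one datastore write.
        datastore[name] = datastore.get(name, 0) + n
        memcache[name] = 0
    else:
        memcache[name] = n

def total(name):
    return datastore.get(name, 0) + memcache.get(name, 0)

for _ in range(1000):
    increment('TotalRequests')
for _ in range(7):
    increment('other')
```

The trade-off is exactly as described: a memcache eviction loses up to FLUSH_EVERY - 1 pending increments, in exchange for an order of magnitude fewer datastore writes.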

On Nov 3, 4:32 pm, josh l [EMAIL PROTECTED] wrote:
 Yejun,

 I've been told that the memcached framework on GAE uses a first-
 created, first-deleted algorithm, and that we have finite (a few
 hundred megs, but maybe even up to a GB) of memcache for any specific
 app.  This means that once you hit your limit, older objects WILL get
 deleted.  And my app will definitely be going over this max limit.
 This is not a huge deal to me (probably won't happen that my counter
 gets deleted that often, and it's ok if it's a little bit off) but I
 figured my counter might want as well handle that small case anyway.

 Again, the much bigger case:  Not writing to the datastore each time
 you want to increment.  And yes, I am aware of why to use the sharded
 counter.  The point is what if you have about 50QPS coming in (so you
 need a sharded counter with a lot of shards for sure), and every
 single request is writing to ~3 different counters.  Now each request
 is taking a while because of the datastore writes that it attempts
 each time, even with no Transaction collisions on shard-writes.  And
 also I believe there is a GAE watchdog looking to see if over a period
 of time your average request is 300ms.

 So I am simply saying, why not try cut down on the total datastore
 writes, and write to a shard only 1/10 times, but still get the
 correct totals?  This is the reasoning for my arguments above.  So now
 you have an order of magnitude less datastore writes, and the average
 response time is way down.  This sounds good to me, and I am sure
 others who plan to write apps that have a number of sharded counter
 increments / avg. request, might feel similarly.  Am I missing
 something obvious here?

   -Josh

 On Nov 3, 12:54 pm, yejun [EMAIL PROTECTED] wrote:

  I believe mecached algorithm will keep most used object not just by
  creation time.
  The reason you use a sharded count is that you want increment
  operation always hit the disk. Otherwise you don't need shard because
  you don't access it very often if write is cached in memcache. Shard
  is used here is to improve write concurrency.

  On Nov 3, 3:43 pm, josh l [EMAIL PROTECTED] wrote:

   Yejun,

   Thanks for the updated code example.  Since a counter is such a common
   need, I think it might be helpful if we all worked together on the
   same codebase, rather than forking each time, or at least if we could
   be specific about the changes we made (I know I can do a diff, but if
   you noted what exactly you changed, and why, that would be awesome for
   all future users who are curious).

   Moving on, I think I didn't explain myself well regarding the
   destruction of the memcache object.  Imagine an app where
    1) There will definitely be at least 10 counter requests/sec (for
   same named counter.  let's call it the TotalRequests counter, and is
   referred to by some Stats model)
    2) Lots and lots of other entities get written to memcache (millions
   of entities in the system, and each gets cached upon initial request)

   In this situation, it is guaranteed objects in our memcache will
   disappear after some use, since we have less memcache total avail than
   the size of items that will be cached over a few days of use.  Now,
   which items get removed?  In this case, our counter is the first
   created item in memcache, and definitely one of the first items to be
   just nuked from memcache when we hit the max storage limit for our
   apps' memcache.  To ensure it never gets nuked due to it being the
   'oldest object in memcache', then we could 'occasionally' destroy/
   recreate it.  Maybe, for example, I could also have a time_created on
   it, and if it's older than a few hours, then nuke/recreate upon
   resetting it.  I figured might as well do this every time, but anyway
   hopefully you see my point as to why I was thinking about the need to
   destroy/reuse.

   Much more important than this very occasional mis-count for a
   destroyed memcache item, tho, is my general idea of just not even
   attempting to write to a shard entity unless we've had a few (10?,
   50?) counter increments.  I am getting ~350ms/request average due to
   the time it takes writing to the shards (multiple counters/request),
   and this is my main concern with the current code.

   I will diff your code (thanks again) and check it out this afternoon.

     -Josh

   On Nov 3, 12:22 pm, yejun [EMAIL PROTECTED] wrote:

 To solve this, I'm also
 planning to destroy and recreate the memcache object upon successful
 datastore write (and associated memcache delay being reset to zero).

You shouldn't do

[google-appengine] Re: counter use pattern

2008-11-03 Thread yejun

I am not saying they are the same. But you are degrading a 100%
reliable solution to 99% or 99.9%. To me, 99.9% and 99% have similar
reliability compared to 100%. And the complexity of combining memcache
writes with sharded writes may itself make the solution somewhat
unreliable.

On Nov 3, 5:11 pm, josh l [EMAIL PROTECTED] wrote:
 Why do you say with memcache I would only write to the datastore once
 per second, or every 10 seconds?  This doesn't make any sense... If I
 only attempt write 1/10 of my counter increments to the datastore, I
 will still be writing a few times/second at least (I am writing this
 to expect 30QPS, but likely it will be more).

 Also, my issue/question is not regarding Transaction Collisions.  Of
 course these are avoidable by just using many shards, and adding more
 shards until collisions are extremely infrequent.  My issue is
 regarding total request/response time.  Attempting to write anything
 to the datastore takes time.  It just does.  The cpu cycles may not
 count against us (in $cost), but the time does (or seems to, at
 least).  If a request ends up needing to write to multiple counters
 (and all of mine do), quickly the total response is in the many
 hundreds of ms.  I don't want this, I want quicker.

 My testing shows memcache to be fairly reliable.  I am not worried
 about the very occasional (I hope) memcache issue.  I am worried about
 total time per request.  I believe I can drastically shorten this time
 by not attempting to write to the datastore as often, and having the
 counter use just memcache most of the time for increments.  I can't
 imagine I am the only person to think of this?

   -Josh

 On Nov 3, 1:56 pm, yejun [EMAIL PROTECTED] wrote:

  The reason why not use shard count with memcache cache write is that
  with memcache you will only write datastore once per second or every
  10 seconds. I see no reason why you need a shard counter for that kind
  frequency.
  Because when a memcache reset happends, losing 1/10 second of data or
  1 second seems have the similar reliability to me.

  On Nov 3, 4:32 pm, josh l [EMAIL PROTECTED] wrote:

   Yejun,

   I've been told that the memcached framework on GAE uses a first-
   created, first-deleted algorithm, and that we have finite (a few
   hundred megs, but maybe even up to a GB) of memcache for any specific
   app.  This means that once you hit your limit, older objects WILL get
   deleted.  And my app will definitely be going over this max limit.
   This is not a huge deal to me (probably won't happen that my counter
   gets deleted that often, and it's ok if it's a little bit off) but I
   figured my counter might want as well handle that small case anyway.

   Again, the much bigger case:  Not writing to the datastore each time
   you want to increment.  And yes, I am aware of why to use the sharded
   counter.  The point is what if you have about 50QPS coming in (so you
   need a sharded counter with a lot of shards for sure), and every
   single request is writing to ~3 different counters.  Now each request
   is taking a while because of the datastore writes that it attempts
   each time, even with no Transaction collisions on shard-writes.  And
   also I believe there is a GAE watchdog looking to see if over a period
   of time your average request is 300ms.

   So I am simply saying, why not try cut down on the total datastore
   writes, and write to a shard only 1/10 times, but still get the
   correct totals?  This is the reasoning for my arguments above.  So now
   you have an order of magnitude less datastore writes, and the average
   response time is way down.  This sounds good to me, and I am sure
   others who plan to write apps that have a number of sharded counter
   increments / avg. request, might feel similarly.  Am I missing
   something obvious here?

     -Josh

   On Nov 3, 12:54 pm, yejun [EMAIL PROTECTED] wrote:

I believe mecached algorithm will keep most used object not just by
creation time.
The reason you use a sharded count is that you want increment
operation always hit the disk. Otherwise you don't need shard because
you don't access it very often if write is cached in memcache. Shard
is used here is to improve write concurrency.

On Nov 3, 3:43 pm, josh l [EMAIL PROTECTED] wrote:

 Yejun,

 Thanks for the updated code example.  Since a counter is such a common
 need, I think it might be helpful if we all worked together on the
 same codebase, rather than forking each time, or at least if we could
 be specific about the changes we made (I know I can do a diff, but if
 you noted what exactly you changed, and why, that would be awesome for
 all future users who are curious).

 Moving on, I think I didn't explain myself well regarding the
 destruction of the memcache object.  Imagine an app where
  1) There will definitely be at least 10 counter requests/sec (for
 same named counter.  let's call

[google-appengine] Re: counter use pattern

2008-11-03 Thread yejun

Of course nothing can be 100% reliable. Sorry I can't express my
opinion very clearly.
The problem is that you are making the counter so complex that the
effort you put into it may not be worth the man-hours.

On Nov 3, 5:26 pm, josh l [EMAIL PROTECTED] wrote:
 How is your counter 100% reliable?  It could have a delayed
 transaction (or a bunch) and then a memcache failure.

 I agree my write-to-memcache-first counter would be (slightly) more
 complex, and like yours, it _could_ have issues if memcache failed for
 one reason or another.  In that (rare) case it is likely mine _would_
 lose some counts, and yours likely would not -- but I think it is  a
 rare case and I think the very occasional loss of some counts is more
 than made up for by an order of magnitude (or more) less writes, and
 drastically faster response times.

 According to guide, in this 
 videohttp://www.youtube.com/watch?v=CmyFcChTc4Meurl=http://www.technorati...
 (21:10), you will hit a watchdog if your avg. requests are 300ms, and
 I have seen it happen.  I need to avoid it, and I don't see another
 way  to do that when the app requirement is for multiple counter
 incrememts/request, except with my counter example (far) above.

   -Josh

 On Nov 3, 2:17 pm, yejun [EMAIL PROTECTED] wrote:

  I am not say they are same.  But you are degrading a 100% reliable
  solution to 99% or 99.9%. To me 99.9% and 99% have similar reliability
  comparing to 100%. And the complexity of combining memcache write and
  sharded write also possibly make the problem somewhat unreliable.

  On Nov 3, 5:11 pm, josh l [EMAIL PROTECTED] wrote:

   Why do you say with memcache I would only write to the datastore once
   per second, or every 10 seconds?  This doesn't make any sense... If I
   only attempt write 1/10 of my counter increments to the datastore, I
   will still be writing a few times/second at least (I am writing this
   to expect 30QPS, but likely it will be more).

   Also, my issue/question is not regarding Transaction Collisions.  Of
   course these are avoidable by just using many shards, and adding more
   shards until collisions are extremely infrequent.  My issue is
   regarding total request/response time.  Attempting to write anything
   to the datastore takes time.  It just does.  The cpu cycles may not
   count against us (in $cost), but the time does (or seems to, at
   least).  If a request ends up needing to write to multiple counters
   (and all of mine do), quickly the total response is in the many
   hundreds of ms.  I don't want this, I want quicker.

   My testing shows memcache to be fairly reliable.  I am not worried
   about the very occasional (I hope) memcache issue.  I am worried about
   total time per request.  I believe I can drastically shorten this time
   by not attempting to write to the datastore as often, and having the
   counter use just memcache most of the time for increments.  I can't
   imagine I am the only person to think of this?

     -Josh

   On Nov 3, 1:56 pm, yejun [EMAIL PROTECTED] wrote:

The reason why not use shard count with memcache cache write is that
with memcache you will only write datastore once per second or every
10 seconds. I see no reason why you need a shard counter for that kind
frequency.
Because when a memcache reset happends, losing 1/10 second of data or
1 second seems have the similar reliability to me.

On Nov 3, 4:32 pm, josh l [EMAIL PROTECTED] wrote:

 Yejun,

 I've been told that the memcached framework on GAE uses a first-
 created, first-deleted algorithm, and that we have finite (a few
 hundred megs, but maybe even up to a GB) of memcache for any specific
 app.  This means that once you hit your limit, older objects WILL get
 deleted.  And my app will definitely be going over this max limit.
 This is not a huge deal to me (probably won't happen that my counter
 gets deleted that often, and it's ok if it's a little bit off) but I
 figured my counter might want as well handle that small case anyway.

 Again, the much bigger case:  Not writing to the datastore each time
 you want to increment.  And yes, I am aware of why to use the sharded
 counter.  The point is what if you have about 50QPS coming in (so you
 need a sharded counter with a lot of shards for sure), and every
 single request is writing to ~3 different counters.  Now each request
 is taking a while because of the datastore writes that it attempts
 each time, even with no Transaction collisions on shard-writes.  And
 also I believe there is a GAE watchdog looking to see if over a period
 of time your average request is 300ms.

 So I am simply saying, why not try cut down on the total datastore
 writes, and write to a shard only 1/10 times, but still get the
 correct totals?  This is the reasoning for my arguments above.  So now
 you have an order of magnitude less datastore writes, and the average

[google-appengine] Re: Changing overriding the method(HTTP verb) of the request by extending RequestHandler

2008-11-03 Thread yejun

Or you can modify self:

class MyRequestHandler(webapp.RequestHandler):
    def initialize(self, request, response):
        m = request.get('_method')
        if m:
            # Bind the handler for the requested verb (e.g. 'put') to
            # the verb actually sent on the wire (e.g. 'post'). Note:
            # handler methods live on the class and are lower-case, so
            # use getattr/setattr rather than self.__dict__.
            setattr(self, request.method.lower(), getattr(self, m.lower()))
        webapp.RequestHandler.initialize(self, request, response)
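The verb-override dispatch being discussed can also be sketched framework-free (the handler and method names here are illustrative, not webapp's API):

```python
class Handler:
    def post(self):
        return 'created'
    def put(self):
        return 'updated'

def dispatch(handler, wire_method, override=None):
    # Browsers (and prototype.js on IE) can only reliably send
    # GET/POST; an optional `_method` parameter names the verb the
    # client actually intended.
    method = (override or wire_method).lower()
    return getattr(handler, method)()

# A plain POST runs post(); POST with _method=PUT runs put().
assert dispatch(Handler(), 'POST') == 'created'
assert dispatch(Handler(), 'POST', override='PUT') == 'updated'
```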


On Nov 3, 8:17 pm, Jeff S [EMAIL PROTECTED] wrote:
 Is there a reason why you can't call a different request handler
 method based on the request URL's _method parameter. For example, this
 seems like it would work:

 class MyRequestHandler(webapp.RequestHandler):

   def post(self):
     if self.request.get('_method') == 'PUT':
       self.put()
     else:
       # Continue with an actual POST.

   def put(self):
     # Idempotent PUT logic.

 Happy coding,

 Jeff

 On Oct 31, 2:32 pm, airportyh [EMAIL PROTECTED] wrote:

  I want to changed the request verb of a request in the case a
  parameter _method is provided, for example if a POST request comes in
  with a parameter _method=PUT, I need to change the request to call the
  put method of the handler. This is to cope with the way prototype.js
  works with verbs like PUT and DELETE(workaround for IE). Here is my
  first attempt:

  class MyRequestHandler(webapp.RequestHandler):
      def initialize(self, request, response):
          m = request.get('_method')
          if m:
              request.method = m.upper()
          webapp.RequestHandler.initialize(self, request, response)

  The problem is, for some reason whenever the redirect is done, the
  self.request.params are emptied by the time the handling method(put or
  delete) is called, even though they were populated when initialize is
  called. Anyone have a clue why this is? As a workaround I thought I
  could clone the params at initialize() time, but .copy() did not work,
  and I haven't found a way to do that either.



[google-appengine] Re: Permanent Unique User Identifier

2008-11-03 Thread yejun

Allow multiple email addresses per user.
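The multiple-addresses suggestion amounts to an indirection table; a minimal sketch (dicts stand in for datastore entities, and the names are placeholders):

```python
accounts = {}     # account_id -> set of known email addresses
email_index = {}  # email -> account_id, for key-name lookups

def register_email(account_id, email):
    # Any verified address becomes an alias for the stable account ID.
    accounts.setdefault(account_id, set()).add(email)
    email_index[email] = account_id

def lookup(email):
    return email_index.get(email)

register_email('acct-1', 'old@example.com')
register_email('acct-1', 'new@example.com')
```

Content is keyed on the stable `account_id`, so an address change adds an alias instead of orphaning the user's data (though, as noted below, verifying ownership of the old address still needs its own flow).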

On Nov 3, 5:46 pm, Ryan Lamansky [EMAIL PROTECTED] wrote:
 Is there a good way to establish a permanent unique user identifier?

 All we have now is the email address.  If the user changes this, they
 lose access to all content they're associated with.

 Ideally, I'd like some kind of user ID so that I can incorporate it
 into a key name, thus providing efficient, quick, and dependable
 access...

 -Ryan



[google-appengine] Re: Permanent Unique User Identifier

2008-11-03 Thread yejun

Why not just let user choose a user name?

On Nov 3, 10:35 pm, Ryan Lamansky [EMAIL PROTECTED] wrote:
 Mahmoud: The problem is that won't recognize if the user changes their
 email address; it'll create a new entity and the user will lose
 everything.

 yejun: I thought about that, but there's no way to automatically know
 that a new email address was changed from some old one.  User
 interaction is required.  This requires an additional security system
 to be built to verify that the user did indeed have the old email
 address originally.  There's also a possibility that the user will
 start creating new content before realizing there's a problem,
 creating the hassle of trying to merge the accounts.



[google-appengine] Re: How query reference property?

2008-11-02 Thread yejun

You can't query on a reference's property.
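The usual workaround is a two-step query: fetch the referenced kind first, then filter the referring kind by those keys. A plain-Python sketch of the pattern (dicts stand in for Profile and BoxInfo entities):

```python
# Step 1: query Profile for the condition on the referenced kind.
profiles = [
    {'key': 'p1', 'backup_date': None},
    {'key': 'p2', 'backup_date': '2008-11-01'},
]
boxes = [
    {'key': 'b1', 'profile': 'p1'},
    {'key': 'b2', 'profile': 'p2'},
]

never_backed_up = {p['key'] for p in profiles
                   if p['backup_date'] is None}

# Step 2: filter BoxInfo by the keys found in step 1 (in the real
# datastore this would be a query per key, or an IN filter).
result = [b for b in boxes if b['profile'] in never_backed_up]
```

If this query is frequent, denormalizing `backup_date` onto BoxInfo avoids the second step entirely.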

On Nov 1, 11:31 pm, Rodrigo Lopes [EMAIL PROTECTED] wrote:
 Hi there! I'm new to python and app engine, I need some help with this:

 *class* Profile (db.Model):
     name = db.StringProperty()
     backup_date = db.DateProperty()

 *class* BoxInfo (db.Model):
     source = db.TextProperty()
     index  = db.IntegerProperty()
     profile= db.ReferenceProperty(Profile, collection_name='boxes')

 *How can I do this?*
 query = model.BoxInfo.all().filter(profile.backup_date =, None)

 *I have also tried this way, with no sucess:*
 query = db.GqlQuery(select * from BoxInfo where profile in (select * form
 Profile where backup_date = :1), None)

 Which gives me:
   BadQueryError: Parse Error: Parameter list requires literal or reference
 parameter at symbol select

 Thanks

 --
 Rodrigo Lopes



[google-appengine] Data view disappeared after I changed model

2008-11-02 Thread yejun

After I changed a BlobProperty to a TextProperty, all tables disappeared
from the data view of the admin console.
I can still query the data from code, though.





[google-appengine] Re: Possible bug in urlfetch_stub.py

2008-11-01 Thread yejun

HTTP PUT is usually used to push a file-like object, not key/value
pairs.

On Nov 1, 4:24 am, Paul [EMAIL PROTECTED] wrote:
 The following are lines 134 and 135 of urlfetch_stub.py:

 if method == 'POST' and payload:
    adjusted_headers['Content-Type'] = 'application/x-www-form-
 urlencoded'

 I believe the condition should be:

 if (method == 'POST' or method == 'PUT') and payload:

 Or am I missing something?  Is there an appropriate place to file
 bugs?

 Thanks,
 ~paul
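The proposed condition can be sketched as a small standalone helper (the names here are illustrative; the real change would live inside urlfetch_stub.py):

```python
def adjust_headers(method, payload, headers=None):
    # Sketch of the fix proposed above: default the Content-Type for
    # both POST and PUT bodies, not POST alone, and only when the
    # caller hasn't already set one.
    adjusted = dict(headers or {})
    if method in ('POST', 'PUT') and payload and 'Content-Type' not in adjusted:
        adjusted['Content-Type'] = 'application/x-www-form-urlencoded'
    return adjusted
```

A caller-supplied Content-Type is left untouched, which matters for PUT since its payload is often not form-encoded at all.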



[google-appengine] Re: Datastore API - max. keyname length ?

2008-11-01 Thread yejun

500 unicode characters, I think.

On Nov 1, 1:39 pm, Roberto Saccon [EMAIL PROTECTED] wrote:
 When I define datastore entities with a given keyname, what is the
 max. size of such a keyname ?

 regards
 Roberto



[google-appengine] Re: Set content-type and content-encoding for files served statically???

2008-11-01 Thread yejun

Set it on the static file handler in app.yaml. Note that GAE does not
allow setting Content-Encoding for either static or dynamic files.

handlers:
- url: /favicon\.ico
  static_files: static/favicon.ico
  mime_type: image/vnd.microsoft.icon
  upload: static/favicon.ico



On Nov 1, 6:01 am, jago [EMAIL PROTECTED] wrote:
 I have a certain file in my appspot. Serving it is no problem.

 If this file is requested by a client I want to be able to serve this
 file with a specific content-type and content-encoding.

 Is this possible? How? Some setting in my index.yaml file?
 Will it ever be possible?

 Cheers,
 jago



[google-appengine] Re: Confusions on transactions and concurrent writes to entity groups

2008-11-01 Thread yejun

It is not really a lock. The first put to finish wins; the other
concurrent puts will fail and retry.

On Oct 31, 8:37 pm, jeremy [EMAIL PROTECTED] wrote:
 All writes are transactional.

 Does this mean updating values on a single entity will lock the entire
 group when put() is called?

 On Sep 22, 10:52 am, Jeff S [EMAIL PROTECTED] wrote:

  To further clarify. All writes are transactional. Details on how the
  transactions work can be found in ryan's presentation from Google 
  I/O: http://snarfed.org/space/datastore_talk.html The section on
  transactions specifically begins at slide 49. You can also watch the
  video 
  here: http://sites.google.com/site/io/under-the-covers-of-the-google-app-en...

  Cheers,

  Jeff

  On Sep 22, 10:36 am, Jeff S [EMAIL PROTECTED] wrote:

   Hi David,

   Even if a put request to the datastore is not run in a transaction,
   the operation is automatically retried. Contention is not unique to
   transactions. The benefit of using transactions, is that if one write
   in the transaction times out (due to too much contention or some other
   issue) the other parts of the transaction will not be applied. For
   more details 
   see:http://code.google.com/appengine/docs/datastore/transactions.html#Usi...

   Happy coding,

   Jeff

   On Sep 18, 6:25 pm, DXD [EMAIL PROTECTED] wrote:

I appreciate any clarifications on my situation as follows. I have an
entity group whose the root entity is called root. When a particular
URL is requested, a new entity is added to this group as a direct
child of root. The code looks similar to this:

def insert():
  root = Root.get_by_key_name('key_name')
  child = Child(parent=root)
  child.put()

Note that the insert() function is not run in a transaction (not
called by db.run_in_transaction()).

I spawned many concurrent requests to this URL. The log shows that
there are many failed requests with either TransactionFailedError:
too much contention on these datastore entities. please try again or
DeadlineExceededError. Since I'm still a bit unclear about the
internal working of the datastore, these are my explanations for what
happened. Pls correct me where I'm wrong:

1. when one child entity is being inserted, it locks the entire group.
All other concurrent requests are blocked, and their child.put()
statement exclusively is retried a number of times. Say the limit
number of retry is r.

2. If child.put() is retried r times but still doesn't go through, it
gives up and yields the too much contention error.

3. If child.put() does not yet reach r times of retry, but its session
already reaches the time limit t, then it fails yielding the
DeadlineExceededError.

If my explanations are correct, isn't it true that the insert()
function is exactly equivalent to this version?:

def insert():
  root = Root.get_by_key_name('key_name')
  child = Child(parent=root)
  def txn()
    child.put()
  db.run_in_transaction(txn)

Or more generally, is it true that all API operations that write to
the datastore have exactly the same effect with transaction
(automatically retried if failed, and so on)?

Thanks for clarifications,
David.
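The retry behavior discussed in this thread (the first commit wins, losers retry up to a limit, then give up with a contention error) can be sketched in plain Python. The names are illustrative, not the SDK's internals:

```python
class TransactionFailedError(Exception):
    """Raised when a write keeps losing to concurrent commits."""

def run_with_retries(txn, retries=3):
    # Optimistic concurrency: attempt the commit; on contention,
    # retry, and re-raise after the final failed attempt.
    for attempt in range(retries):
        try:
            return txn()
        except TransactionFailedError:
            if attempt == retries - 1:
                raise

attempts = []

def txn():
    # Simulated contention: fail twice, then succeed on the third try.
    attempts.append(1)
    if len(attempts) < 3:
        raise TransactionFailedError("too much contention")
    return "committed"
```

Whether bare puts get exactly this treatment or something subtler is the open question above; the sketch only illustrates the retry-then-fail shape being described.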



[google-appengine] Re: Modeling Hierarchical Data

2008-10-31 Thread yejun

I think I read somewhere in the docs that entity groups need to be
kept small.

On Oct 31, 6:34 am, Anthony [EMAIL PROTECTED] wrote:
 Store all the parent ids as a hierarchy in a string...

 h=id1/id2/
 h=id1/id2/id3
 h=id1/id2/id3/id4
 h=id5/
 h=id5/id6

 You can then filter WHERE h > 'id1/' and h < 'id5/' to get all
 children of id1.

 Or you can use entity groups & ancestors.

 More details 
 here...http://groups.google.com/group/google-appengine/browse_thread/thread/...

 On Oct 31, 8:31 am, Chris Tan [EMAIL PROTECTED] wrote:

  I'm wondering how other people have been modeling hierarchical
  information
  such as nested comments.

  My first thought was to do something like this:

  class Comment(db.Model):
      reply_to = db.SelfReferenceProperty(collection_name="replies")
      author = db.UserProperty()
      content = db.TextProperty()

  However, this isn't scalable, as a query is made for every
  comment in the thread.

  Any ideas?
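The materialized-path trick above works because path strings sort lexicographically, so an ancestor's descendants form a contiguous range in the index. A pure-Python sketch of the equivalent range filter:

```python
def children_of(paths, lower, upper):
    # Equivalent of WHERE h > 'id1/' AND h < 'id5/' on an indexed
    # string property: a lexicographic range scan, since every
    # descendant path sorts between its ancestor's prefix and the
    # next root's prefix.
    return [p for p in sorted(paths) if lower < p < upper]

# The example hierarchy from the reply above.
paths = ["id1/id2/", "id1/id2/id3", "id1/id2/id3/id4", "id5/", "id5/id6"]
```

When there is no convenient next sibling to use as the upper bound, a common variant is `prefix + '\uffff'`, which sorts after every path sharing the prefix.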



[google-appengine] Re: Will there ever be a DictProperty in datastore?

2008-10-30 Thread yejun

There's a difference between property and datastore type.

On Oct 30, 4:30 pm, luismgz [EMAIL PROTECTED] wrote:
 There are other options too, such as pickling a dictionary into a blob
 property, or saving its string representation into a StringProperty
 and then use eval() to get it back alive.
 However, all these have issues and I guess that performance-wise they
 are suboptimal...

 On Oct 30, 1:16 pm, Anthony [EMAIL PROTECTED] wrote:

  I don't know if this will help but I've built a custom property for
  dealing with basic dict items. It's based on StringListProperty, so
  you still have some indexing for searching on keys, or key:value
  pairs.

  The code has not been tested much yet...

  class DictListProperty(db.StringListProperty):
          def __init__(self, *args, **kwds):
                  #cache reads so we only process list once
                  self._cache = None
                  super(DictListProperty, self).__init__(*args, **kwds)

          def get_value_for_datastore(self, model_instance):
                  value = super(DictListProperty,
  self).get_value_for_datastore(model_instance)
                  if value is None:
                          return None
                  else:
                          #convert dict to list of key:value
                          l=[]
                          for k, v in value.items():
                                  #expand any lists out (for tag lists etc)
                                  if isinstance(v,list):
                                          l.append(k) #add empty key with no value so we know this is a list
                                          for i in v:
                                                  l.append(k + ":" + str(i))
                                  else:
                                          l.append(k + ":" + str(v))
                          return self.data_type(l)

          def validate(self, value):
                  return value

          def make_value_from_datastore(self, value):
                  if self._cache is None:
                          if value is None:
                                  return None
                          elif isinstance(value, list):
                                  self._cache = {}
                                  #split list of key:values back into dict
                                  for v in value:
                                          s = v.split(":", 1)
                                          if len(s) == 1 and self._cache.has_key(s[0]): #special case for single item list
                                                  self._cache[s[0]] = list(self._cache[s[0]])
                                          elif len(s) == 1: #special case for empty list
                                                  self._cache[s[0]] = []
                                          elif self._cache.has_key(s[0]): #add to list
                                                  self._cache[s[0]] = list(self._cache[s[0]])
                                                  self._cache[s[0]].append(s[1])
                                          else:
                                                  self._cache[s[0]] = s[1]
                                  return self._cache
                          else:
                                  return None
                  else:
                          return self._cache

  #Tests..

  class TestModel(db.Model):
          d = DictListProperty()

  #Main Test
  t = TestModel()
  t.d = {"a":1, "b":"2test", "c":"3:test'here'withsemicolon+junkin,string"} #basic key:value structure, non-strings saved as string
  t.d["list"] = [0,1,2,3] #lists are also supported, expanded and stored as "list", "list:0", "list:1", "list:2" etc., so they can be indexed
  t.d["empty"] = [] #an empty list comes back as an empty list, stored as just the key
  t.put()

  t = TestModel.get(t.key())
  #t = TestModel.all().filter("d = ", "b:2test").get() #for equality, combine the full key:value as one string
  #t = TestModel.all().filter("d = ", "list:2").get() #lists are expanded for indexing
  #t = TestModel.all().filter("d > ", "c:").get() #inequalities can be used to test for just keys
  self.response.out.write(t.d) #returns a dict
  self.response.out.write(t.d["list"]) #lists are re-combined



[google-appengine] Re: Django + GAE

2008-10-29 Thread yejun

All of them are monkey patches, which means they may break when you do
an upgrade.
For a simple project I think the built-in webapp framework should work
as well.

On Oct 29, 12:24 pm, Daniel Larkin [EMAIL PROTECTED] wrote:
 Hi all,

 I'd like to use Django on GAE for a small project. Ideally I'd like to
 use version 1.0 of Django rather than 0.96, and I'm aware there are
 various patches and helper scripts etc to make this possible. Yet,
 these approaches seem less than straightforward (perhaps I'm
 incorrect there? I haven't actually tried them), and such patches may
 break with newer versions of GAE. After initially deciding to
 use Django 1.0, I'm now considering just using the built-in 0.96
 version. Would this be such a bad idea for someone moving from
 php-land to an elegant python MVC design-pattern-based solution?

 Any comments would be greatly appreciated!
 thanks



[google-appengine] Re: Filter by first letter? LIKE Statements?

2008-10-29 Thread yejun

http://code.google.com/appengine/docs/datastore/queriesandindexes.html

Read the first tip section.

On Oct 29, 6:38 pm, Kenchu [EMAIL PROTECTED] wrote:
 How do you filter things by for example their first letter? Take this
 model for example:

 class Song(db.Model):
   title = db.StringProperty()

 How would I get all the songs beginning with the letter A?
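The tip referenced in the reply is the standard prefix-range trick: since string indexes sort lexicographically, "begins with A" becomes a half-open range, e.g. two filters `title >= 'A'` and `title < 'B'`. A pure-Python sketch of the same idea:

```python
def songs_starting_with(titles, letter):
    # Half-open range [letter, next letter): in lexicographic order,
    # everything >= 'A' and < 'B' begins with 'A'. Note this is
    # case-sensitive, just like the datastore's string index.
    lo, hi = letter, chr(ord(letter) + 1)
    return [t for t in sorted(titles) if lo <= t < hi]

titles = ["Abbey Road", "Angie", "Black Dog", "alive"]
```

Lowercase "alive" is not matched by the 'A'..'B' range; a case-insensitive search typically requires storing a normalized (e.g. lowercased) copy of the property.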



[google-appengine] Re: Will there ever be a DictProperty in datastore?

2008-10-28 Thread yejun

Dict is not indexable.

On Oct 28, 5:11 pm, luismgz [EMAIL PROTECTED] wrote:
 Is there any reason for not having implemented a DictProperty in
 datastore?
 Are there plans to implement it?
 I believe it would be great to have native dictionaries in datastore,
 and that it would greatly simplify many development tasks.
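One of the workarounds mentioned in this thread is pickling the dict into a blob. A minimal sketch of the round trip; on App Engine the bytes would be assigned to a db.BlobProperty, which is unindexed, consistent with the reply that dicts can't be indexed:

```python
import pickle

# A dict you might want to attach to an entity.
settings = {"theme": "dark", "page_size": 25}

# Serialize to bytes (what would be stored in the BlobProperty)...
blob = pickle.dumps(settings)

# ...and restore it on read.
restored = pickle.loads(blob)
```

The trade-off is exactly the one raised above: the data is opaque to queries, so any key you need to filter on must also be stored as a separate indexed property.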



[google-appengine] Re: File download from datastore

2008-10-27 Thread yejun

Check section 4 of RFC 2184. It seems you need to specify the encoding
type on the value itself, because Content-Disposition itself only
supports US-ASCII encoding.

http://www.ietf.org/rfc/rfc2183
http://www.ietf.org/rfc/rfc2184


On Oct 27, 10:08 pm, Sergey Klevtsov [EMAIL PROTECTED] wrote:
 Well, I tested on 14 files of different types (doc txt zip gif jpg pdf
 xls). 7 of them, which contained only ascii-characters, were
 downloaded with Content-Disposition header. 7 other, which included
 non-ascii (cyrillic, specifically) letters - without the header. So
 this seems to be the problem (I encoded names with utf-8, have also
 tried utf-16, but things are even worse then). Well, it's not a very
 important issue, and it's not urgent for me either, but if this could
 be fixed easily, that would be great.

 p.s. the files I tested on are still there: http://s-klevzoff.appspot.com/files

 On Oct 27, 20:11, Marzia Niccolai [EMAIL PROTECTED] wrote:

  Hi,

  Can you give an example of the types of filenames with which this is
  occurring so I can try to replicate it?

  We should allow you to set the content-disposition header, so if it's not
  being included, it may be that we incorrectly think it's malformed in some
  way.

  -Marzia

  On Mon, Oct 27, 2008 at 8:50 AM, Sergey Klevtsov [EMAIL PROTECTED]wrote:

   Ok, I sniffed the traffic between my browser and my app on gae, this
   is what returned on file request:

   HTTP/1.1 200 OK
   Cache-Control: no-cache
   Content-Type: application/octet-stream; charset=utf-8
   Date: Mon, 27 Oct 2008 15:39:00 GMT
   Server: Google Frontend
   Content-Length: 2022

    The Google server deletes the Content-Disposition header from the response :( but
    only for some files, for example .doc and .txt... GIF files are
    downloaded correctly and the header is not deleted. Anyone know what
    can be done about this?
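Per RFC 2184 (since superseded by RFC 2231), a non-ASCII filename is carried in an extended `filename*` parameter with an explicit charset, while plain ASCII names can use the ordinary quoted form. A sketch of building such a header with the stdlib (the helper name is illustrative):

```python
from urllib.parse import quote

def content_disposition(filename):
    # ASCII names use the plain parameter; non-ASCII names use the
    # RFC 2184/2231 extended form: filename*=charset''percent-encoded.
    try:
        filename.encode('ascii')
        return 'attachment; filename="%s"' % filename
    except UnicodeEncodeError:
        return "attachment; filename*=utf-8''%s" % quote(filename)
```

As noted in the follow-up message, browser support for the extended form was patchy at the time, so serving an ASCII-transliterated fallback name is a common belt-and-braces approach.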



[google-appengine] Re: File download from datastore

2008-10-27 Thread yejun

I just tested RFC 2184. It seems only Firefox supports it.

On Oct 27, 10:24 pm, yejun [EMAIL PROTECTED] wrote:
 Check section 4 of RFC 2184. It seems you need to specify the encoding
 type on the value itself, because Content-Disposition itself only
 supports US-ASCII encoding.

 http://www.ietf.org/rfc/rfc2183http://www.ietf.org/rfc/rfc2184

 On Oct 27, 10:08 pm, Sergey Klevtsov [EMAIL PROTECTED] wrote:

  Well, I tested on 14 files of different types (doc txt zip gif jpg pdf
  xls). 7 of them, which contained only ascii-characters, were
  downloaded with Content-Disposition header. 7 other, which included
  non-ascii (cyrillic, specifically) letters - without the header. So
  this seems to be the problem (I encoded names with utf-8, have also
  tried utf-16, but things are even worse then). Well, it's not a very
  important issue, and it's not urgent for me either, but if this could
  be fixed easily, that would be great.

  p.s. the files I tested on are still there: http://s-klevzoff.appspot.com/files

  On Oct 27, 20:11, Marzia Niccolai [EMAIL PROTECTED] wrote:

   Hi,

   Can you give an example of the types of filenames with which this is
   occurring so I can try to replicate it?

   We should allow you to set the content-disposition header, so if it's not
   being included, it may be that we incorrectly think it's malformed in some
   way.

   -Marzia

   On Mon, Oct 27, 2008 at 8:50 AM, Sergey Klevtsov [EMAIL PROTECTED]wrote:

Ok, I sniffed the traffic between my browser and my app on gae, this
is what returned on file request:

HTTP/1.1 200 OK
Cache-Control: no-cache
Content-Type: application/octet-stream; charset=utf-8
Date: Mon, 27 Oct 2008 15:39:00 GMT
Server: Google Frontend
Content-Length: 2022

 The Google server deletes the Content-Disposition header from the response :( but
 only for some files, for example .doc and .txt... GIF files are
 downloaded correctly and the header is not deleted. Anyone know what
 can be done about this?



[google-appengine] Re: Datastore Entity

2008-10-25 Thread yejun

I think this should work as well. I only tried db.Model though.

On Oct 25, 3:36 am, Koren [EMAIL PROTECTED] wrote:
 Thanks!
 so can I do something like:
 Newkind = type("KindName", (db.Expando,))

 obj1 = Newkind()

 ?

 On Oct 25, 12:14 am, yejun [EMAIL PROTECTED] wrote:

  I think you mean kind.

  Newkind = type("KindName", (db.Model,), dict(p1=db.Property(), p2...))

  On Oct 24, 5:45 pm,Koren[EMAIL PROTECTED] wrote:

   hi,

   is it possible to create a datastore entity at runtime?
   i have the situation that i need to create new Entities (and not
   records) during runtime. This corresponds to setting up a database
   table during runtime instead of just adding rows to the same table.

   thanks,

  Koren
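The type() trick from the reply works for any base class; here is a pure-Python illustration, with `object` standing in for db.Model / db.Expando (which only exist inside the App Engine runtime):

```python
# Build a class at runtime, as the reply suggests for db.Model:
# NewKind = type("KindName", (db.Model,), dict(p1=db.Property(), ...))
NewKind = type("KindName", (object,), {"p1": None, "p2": None})

# Instantiate via the runtime binding (NewKind), not the string name
# "KindName" -- the string is only the class's __name__.
obj1 = NewKind()
obj1.p1 = "value"
```

On the datastore side it is the class's name that determines the kind, so each dynamically built class becomes a distinct kind.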


