[google-appengine] Re: Chat Time transcript for April 7, 2010

2010-04-20 Thread Andy Freeman
BTW, the task queue uses the datastore for tasks that are bigger than
10k or so.  (The SDK code tries to add the task queue entry, but that
fails for large task queue items, so it stores them in the datastore
and then writes a task queue entry that reads from the datastore and
deletes what it stored when the task succeeds.)

In other words, you can't necessarily use the task queue to avoid
making the user wait for datastore writes.

On Apr 20, 2:07 pm, Jason (Google) apija...@google.com wrote:
 The high-level summary and complete transcript of the April 7th
 edition of the IRC office hours is pasted below. Please join us on the
 first and third Wednesday of every month in the #appengine channel on
 irc.freenode.net. On the first Wednesday, we meet in the channel from
 7:00-8:00 p.m. PST (evening hours), and on the third Wednesday (e.g.
 TOMORROW, 4/21), we're available from 9:00-10:00 a.m. PST (morning
 hours).

 - Jason

 --SUMMARY---
 - Discussion of existing App Engine-compatible OpenID libraries and
 forthcoming built-in support for OpenID and OAuth [7:03-7:09]

 - Remember, back up your code or use a secure version control system
 -- we cannot help you recover your code in the event of corruption or
 theft, so back up early and often. Also, see
 http://stackoverflow.com/questions/2479087/can-i-restore-my-source-co
 [7:11-7:16]

 - Q: Is it feasible to push every write to the task queue instead of
 making the user wait for the write to complete inside a handler? A:
 While this should work, there are two things to keep in mind: 1) the
 number of task insertions is currently limited to 1M per day -- if you
 make more than one million writes each day, this solution will fail.
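
 A minimal sketch of the pattern from that Q&A (the worker URL,
 handler names, and model are hypothetical; the import path matches
 the labs-era taskqueue API):

 from google.appengine.api.labs import taskqueue
 from google.appengine.ext import db, webapp

 class UserAction(db.Model):  # hypothetical model
     payload = db.TextProperty()

 class FrontHandler(webapp.RequestHandler):
     def post(self):
         # Enqueue the write instead of doing it inline; remember the
         # 1M-insertions-per-day limit noted above.
         taskqueue.add(url='/tasks/write',
                       params={'payload': self.request.get('payload')})
         self.response.out.write('queued')

 class WriteWorker(webapp.RequestHandler):  # mapped to /tasks/write
     def post(self):
         # Runs later under the task queue and is retried on failure.
         UserAction(payload=self.request.get('payload')).put()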

-- 
You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.



[google-appengine] Re: Appengine down! how to notify my app's users?

2010-02-26 Thread Andy Freeman
See http://code.google.com/p/googleappengine/issues/detail?id=2083
http://code.google.com/p/googleappengine/issues/detail?id=1578
http://code.google.com/p/googleappengine/issues/detail?id=915
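
For reference, a minimal sketch of catching the error those issues
discuss (handler and model names are hypothetical):

from google.appengine.ext import db, webapp
from google.appengine.runtime import apiproxy_errors

class Note(db.Model):  # hypothetical model
    body = db.TextProperty()

class SaveHandler(webapp.RequestHandler):
    def post(self):
        try:
            Note(body=self.request.get('body')).put()
        except apiproxy_errors.CapabilityDisabledError:
            # Raised while datastore writes are disabled for maintenance.
            self.response.set_status(503)
            self.response.out.write('Saving is temporarily unavailable.')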



On Feb 24, 11:26 am, Waleed Abdulla wal...@ninua.com wrote:
 I second the request to having the ability to test these cases locally.



 On Wed, Feb 24, 2010 at 11:19 AM, Kwame Iwegbue iweg...@gmail.com wrote:
  Any idea how to simulate this error on localhost. I wouldn't want to wait
  until the next great appengine outage, to find out if it
  (CapabilityDisabledError) works!

  On Feb 24, 2010, at 11:52 AM, Flips p...@script-network.com wrote:

   CapabilityDisabledError: Datastore writes are temporarily unavailable

  On Feb 24, 5:48 pm, Kwame iweg...@gmail.com wrote:

  What exception hadling code can I add to my app to notify me and my
  users when appengine is down, so I can post an appropriate alert
  message?


-- 
You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.



[google-appengine] Re: 1.3.1 SDK Prerelease - help us verify

2010-02-16 Thread Andy Freeman
 Furthermore, it
 seems highly probable that as things are, many people will obliviously
 write public webapps that take a raw cursor as a parameter.  This
 could be the new SQL injection attack.

Can you comment a bit more on the security issues?

AFAIK, cursors cannot be used to write anything.  The cursor still
has to match the query with its parameters, so I don't see how an
attacker could synthesize a cursor to see anything they haven't
already seen (replay) or that they'd see by requesting more and more
pages (skip ahead).

The cursor may, as part of its "is this the right query" content,
reveal something about the query.

Hmm - the latter seems somewhat serious.  It isn't data modification,
but it is a data reveal.

What information can someone extract from a production cursor?  Does
it contain the parameters (bad), or signatures (okay, provided someone
can't derive one parameter given the others)?

-andy
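
For concreteness, here's a sketch of the raw-cursor-in-a-URL pattern
under discussion (model and handler are illustrative, not from this
thread):

from google.appengine.ext import db, webapp

class Comment(db.Model):
    created_at = db.DateTimeProperty(auto_now_add=True)
    words = db.StringListProperty()

class CommentsHandler(webapp.RequestHandler):
    def get(self):
        q = (Comment.all().filter('words =', self.request.get('q'))
                          .order('-created_at'))
        cursor = self.request.get('cursor')
        if cursor:
            q.with_cursor(cursor)  # opaque, client-supplied value
        results = q.fetch(20)
        next_cursor = q.cursor()  # echo back as ?cursor=... for page 2
        # ... render results and the next-page link ...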


On Feb 9, 9:02 am, Jeff Schnitzer j...@infohazard.org wrote:
 Still, a slightly modified version of the original request does not
 seem unreasonable.  He would have to formulate his URLs something like
 this:

 myblog.com/comments/?q=the&first=1234

 or maybe:

 myblog.com/comments/?q=the&after=1234

 I could see this being really useful, since encrypting (or worse,
 storing on the server) the cursor is pretty painful.  Furthermore, it
 seems highly probable that as things are, many people will obliviously
 write public webapps that take a raw cursor as a parameter.  This
 could be the new SQL injection attack.

 Jeff

 2010/2/9 Alkis Evlogimenos ('Αλκης Ευλογημένος) evlogime...@gmail.com:
  If the cursor had to skip entries by using an offset, its performance would
  depend on the size of the offset. This is what the current Query.fetch() api
  is doing when you give it an offset. A cursor is a pointer to the entry from
  which the next query will start. It has no notion of offset.
  On Tue, Feb 9, 2010 at 4:07 PM, Nickolas Daskalou n...@daskalou.com wrote:

  Does the production cursor string contain information about the app id,
  kind, any filter()s or order()s, and (more importantly) some sort of
  numerical value that indicates how many records the next query should
  skip? If so, and if we could extract this information (and then use it
  again to reconstruct the cursor), that would make for much cleaner,
  safer and more intuitive URLs than including the entire cursor string (or some
  sort of encrypted/encoded cursor string replacement).

  2010/2/10 Nick Johnson (Google) nick.john...@google.com

  Hi Nickolas,

  2010/2/9 Nickolas Daskalou n...@daskalou.com

  I'd want to do this so that I could include parts of the cursor (such as
  the offset) into a URL without including other parts (eg. the model kind 
  and
  filters). I could then reconstruct the cursor on the server side based on
  what was passed into the URL.

  The offset argument you're talking about is specific to the
  dev_appserver's implementation of cursors. In production, offsets are not
  used, so this won't work.
  -Nick Johnson

  For example, if I was searching for blog comments that contained the
  word "the" (with the default order being the creation time, descending),
  the URL might look like this:

  myblog.com/comments/?q=the

  With model:

  class Comment(db.Model):
    
    created_at = db.DateTimeProperty(auto_now_add=True)
    words = db.StringListProperty() # A list of all the words in a comment
  (forget about exploding indexes for now)
    ...

  The query object for this URL might look something like:

  
  q =
  Comment.all().filter('words',self.request.get('q')).order('-created_at')
  

  To get to the 1001st comment, it'd be good if the URL looked something
  like this:

  myblog.com/comments/?q=the&skip=1000

  instead of:

  myblog.com/comments/?q=the&cursor=[something ugly]

  so that when the request comes in, I can do this:

  
  q =
  Comment.all().filter('words',self.request.get('q')).order('-created_at')
  cursor_template = q.cursor_template()
  cursor =
  db.Cursor.from_template(cursor_template,offset=int(self.request.get('skip')))
  
  (or something along these lines)

  Does that make sense?

  On 10 February 2010 01:03, Nick Johnson (Google)
  nick.john...@google.com wrote:

  Hi Nickolas,

  2010/2/9 Nickolas Daskalou n...@daskalou.com

  Will we be able to construct our own cursors much the same way that we
  are able to construct our own Datastore keys (Key.from_path())?

  No, not practically speaking.

  Also along the same lines, will we be able to deconstruct a cursor
  to get its components (offset, start_inclusive etc.), as we can now do 
  with
  keys (key.name(), key.id(), key.kind() etc.)?

  While you could do this, there's no guarantees that it'll work (or
  continue to work), as you'd be digging into internal implementation 
  details.
  Why do you want to do this?
  -Nick Johnson

  2010/2/9 Nick Johnson (Google) nick.john...@google.com

  2010/2/9 Stephen 

[google-appengine] Re: memcache set succeeds but immediate get fails. Pls help

2010-02-09 Thread Andy Freeman
  memcache.set() does not set if id already present.

Huh?  I don't see that in the documentation.  Why do you think that it
is true?

memcache.set is described as "Sets a key's value, regardless of
previous contents in cache."
memcache.add is described as "Sets a key's value, if and only if the
item is not already in memcache."

http://code.google.com/appengine/docs/python/memcache/functions.html
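
A quick sketch of the difference, using the documented semantics
above:

from google.appengine.api import memcache

memcache.set('k', 1)           # unconditional write: True
memcache.set('k', 2)           # overwrites: True
added = memcache.add('k', 3)   # False: the key is already present
assert memcache.get('k') == 2  # add() left the existing value alone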

On Feb 7, 1:32 pm, observer247 prem...@gmail.com wrote:
 Thanks Eli ! The cache time was the issue.

 memcache.set() does not set if id already present. So I am using
 delete and add.
 I cannot be sure id is present, memcache could be deleted because of
 memory pressure from app engine, right ?

 On Feb 7, 10:18 am, Eli Jones eli.jo...@gmail.com wrote:



  One minor thing I noticed.. why not use memcache.set() instead of
  memcache.delete(), memcache.add()?

  On Sun, Feb 7, 2010 at 6:22 AM, observer247 prem...@gmail.com wrote:
   This is my code:

                   ret = memcache.add(key=mykey, value=qList, time=60*60*24*30)
                   logging.critical("Created cache batch %s Passed %s" % (mykey, str(ret)))

                  qList = memcache.get(mykey)

   For some reason, qList is None ! I have logged all values and qList is
   a non empty list. Check code below where I print a lot of info in the
   logs.

   Detailed code here:

    def MY_QC_MAX(): return 3
    def MY_QC_SIZE(): return 200

    def createBatchMyModels():
           import random
           for n in range(MY_QC_MAX()):
                   bnum = n + 1
                   mykey = "qkey_batch_" + str(bnum)
                   qQ = MyModel.all(keys_only=True).filter('approved', True)
                   if bnum > 1:
                           qQ = qQ.filter('__key__ >', last_key)
                   rows = qQ.fetch(MY_QC_SIZE())
                   tot = len(rows)
                   if tot < MY_QC_SIZE():
                           logging.critical("Not enough MyModels for batch %u, got %u" % (bnum, tot))
                           if tot == 0:
                                   return
                   last_key = rows[tot - 1]
                   # create the qList
                   qList = list()
                   logging.critical("Added %u rows into key %s" % (tot, mykey))
                   tmpc = 0
                   for r in rows:
                           if tmpc == 0:
                                   logging.critical("elem %u into key %s" % (r.id(), mykey))
                                   tmpc = tmpc + 1
                           qList.append(r.id())

                   for elem in qList:
                           logging.info("key %s elem is %u" % (mykey, elem))
                   memcache.delete(mykey)
                   ret = memcache.add(key=mykey, value=qList, time=60*60*24*30)
                   logging.critical("Created cache batch %s Passed %s" % (mykey, str(ret)))

                   qList = memcache.get(mykey)
                   if qList is None:
                           logging.critical(".. getNextMyModel: Did not find key %s" % mykey)
                   else:
                           logging.critical(".. LEN : %u" % len(qList))

   Sample log:
   .
   02-07 03:15AM 05.240 key qkey_batch_1 elem is 13108
   C 02-07 03:15AM 05.250 Created cache batch qkey_batch_1 Passed True
   C 02-07 03:15AM 05.253 .. getNextQuestion: Did not find key
   qkey_batch_1
   C 02-07 03:15AM 05.339 Added 200 rows into key qkey_batch_2
   ...

   Can anyone pls help !


-- 
You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.



[google-appengine] Re: Memcache: Should we save Model instance or protocol buffer?

2010-02-05 Thread Andy Freeman
I'm pretty sure that the protocol buffer saves just the key.

However, I'm virtually certain that pickle will save both the instance
and the key if a reference property has been used.  And, if that
instance has used reference properties, their instances are also saved
by pickle.  I mention pickle because that's what memcache uses to
store instances.

It's an interesting choice: pickling instances as is won't change
their auto-now datetime properties, but it will save the instances
that any reference properties have already loaded.  Memcaching a
protocol buffer will update auto-now datetime properties but won't
save instances associated with reference properties.
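
A sketch of the protocol buffer route, following the pattern in the
notdot.net post linked below (the Encode()/EntityProto round trip is
from that post, not this thread):

from google.appengine.api import memcache
from google.appengine.datastore import entity_pb
from google.appengine.ext import db

def cache_entity(key, entity):
    # Serialize to a compact protobuf string; see the auto-now caveat above.
    memcache.set(key, db.model_to_protobuf(entity).Encode())

def get_cached_entity(key):
    data = memcache.get(key)
    if data is None:
        return None
    return db.model_from_protobuf(entity_pb.EntityProto(data))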



On Feb 4, 10:27 pm, Nickolas Daskalou n...@daskalou.com wrote:
 Thanks Andy. Do you know what happens to a ReferenceProperty that has already
 had the referenced entity loaded on the Model instance? Does the
 referenced entity also get saved in the ProtocolBuffer, or is only its key
 saved?

 On 5 February 2010 16:54, Andy Freeman ana...@earthlink.net wrote:



  Note that memcaching a protocol buffer has interesting consequences.
  One is that the auto-now datetime properties are updated when the
  protocol buffer is created.  This update is just on the protocol
  buffer - it doesn't affect the datastore copy.

 http://code.google.com/p/googleappengine/issues/detail?id=2402

  On Feb 4, 8:11 am, Sylvain sylvain.viv...@gmail.com wrote:
   I think the answer (and more) is here:
 http://blog.notdot.net/2009/9/Efficient-model-memcaching

   On Feb 4, 10:43 am, Nickolas Daskalou n...@daskalou.com wrote:

Is it better/safer to store a Model instance into Memcache directly
  (Method
1 below), or should we convert it to a protocol buffer first, then
  store it
(Method 2 below)?

Method 1:

memcache.set(cache_key, entity)
...
entity = memcache.get(cache_key)

Method 2:

memcache.set(cache_key, db.model_to_protobuf(entity))
...
 entity = db.model_from_protobuf(memcache.get(cache_key))

I'm assuming Method 2 results in a smaller Memcache footprint, yes?

 Nick


-- 
You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.



[google-appengine] Re: Memcache: Should we save Model instance or protocol buffer?

2010-02-04 Thread Andy Freeman
Note that memcaching a protocol buffer has interesting consequences.
One is that the auto-now datetime properties are updated when the
protocol buffer is created.  This update is just on the protocol
buffer - it doesn't affect the datastore copy.

http://code.google.com/p/googleappengine/issues/detail?id=2402

On Feb 4, 8:11 am, Sylvain sylvain.viv...@gmail.com wrote:
  I think the answer (and more) is here:
  http://blog.notdot.net/2009/9/Efficient-model-memcaching

 On Feb 4, 10:43 am, Nickolas Daskalou n...@daskalou.com wrote:



  Is it better/safer to store a Model instance into Memcache directly (Method
  1 below), or should we convert it to a protocol buffer first, then store it
  (Method 2 below)?

  Method 1:

  memcache.set(cache_key, entity)
  ...
  entity = memcache.get(cache_key)

  Method 2:

  memcache.set(cache_key, db.model_to_protobuf(entity))
  ...
   entity = db.model_from_protobuf(memcache.get(cache_key))

  I'm assuming Method 2 results in a smaller Memcache footprint, yes?

   Nick

-- 
You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.



[google-appengine] Re: auto run tasks using dev sdk

2010-01-20 Thread Andy Freeman
 Any other suggestions on how to easily test with background tasks
 using the SDK so they run automatically with hopefully more than 1
 running concurrently?

The development server does not support concurrent execution.

On Jan 13, 3:25 pm, Philip phili...@gmail.com wrote:
 There really needs to be an easier way to do local testing with queues
 and tasks doing what they do when an app is deployed.

 Any other suggestions on how to easily test with background tasks
 using the SDK so they run automatically with hopefully more than 1
 running concurrently?

 On Jan 13, 1:13 pm, Wesley Chun (Google) wesc+...@google.com
 wrote:



  greetings!

  you are correct. the Task queues in the development server are
  controlled by a POST that you send from the Task Queues page of the
  admin console. this is (mostly) desired by developers because they
  have more control over when tasks get executed.

  in order to run them automatically, you'll need to do some wizardry to
  simulate those requests. in the Python world, you could probably do
  one of these two things:

  - a record-n-playback macro either via a tool like Selenium or
  Windmill
  - if you want a pure command-line script, check out the Mechanize
  package which simulates a browser

  let us know how you end up implementing it. does anyone else out there
  have a different way of auto-executing tasks with the dev server?

  cheers,
  -- wesley
  - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
  Core Python Programming, Prentice Hall, (c)2007,2001
  Python Fundamentals, Prentice Hall, (c)2009
     http://corepython.com

  wesley.j.chun :: wesc+...@google.com
  developer relations :: google app engine

  On Jan 11, 9:36 pm, Philip phili...@gmail.com wrote:

   I've read some articles on auto running tasks using the developer sdk
   and python. The articles I've found don't provide a good workaround. I
   am POSTing tasks using the Task class with parameters. I would like to
   run concurrent background tasks on my dev machine. If I can only run
   them single threaded, I guess that's OK for now.

   What type of shell script or equivalent should I write that will
   automatically run tasks by first discovering them in the queues and
   then submitting them as POSTs to the appropriate queue URLs with the
   original parameters?

    Note: The GAE SDK is running on OS X.
-- 
You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: EntityProto instance to Model instance?

2010-01-17 Thread Andy Freeman
Search in google/appengine/ext/db/__init__.py for protobuf functions.
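
A hedged sketch of what the hook below might look like using those
functions (the CopyFrom() write-back is my assumption, not from this
thread):

from google.appengine.ext import db

def hook(service, call, request, response):
    assert service == 'datastore_v3'
    if call == 'Put':
        for entity_proto in request.entity_list():
            model = db.model_from_protobuf(entity_proto)  # EntityProto -> Model
            model.pre_put()
            # Write the possibly modified model back into the request.
            entity_proto.CopyFrom(db.model_to_protobuf(model))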


On Jan 16, 9:21 pm, Nickolas Daskalou n...@daskalou.com wrote:
 Does anyone have an answer for this? Google guys?

 2010/1/15 Kapil Kaisare kksm19820...@gmail.com



  As an aside: what is an EntityProto, and is there a link in the GAE
  documentation for it?

  Regards,
  Kaisare, Kapil Sadashiv

  On Fri, Jan 15, 2010 at 09:37, Nickolas Daskalou n...@daskalou.comwrote:

  How can I convert an EntityProto to a Model instance?

  I have a Model method, pre_put(), that I want to call on each Model
  instance before it's Put into the Datastore, using hooks (eg:
 http://code.google.com/appengine/articles/hooks.html).

  My hook code looks like this:

  def hook(service, call, request, response):
      assert service == 'datastore_v3'
      if call == 'Put':
          for entity in request.entity_list():
              entity.pre_put()

  When the hook is called and runs, I get this error:

  AttributeError: EntityProto instance has no attribute 'pre_put'

  Is there any way to convert the entities in request.entity_list() to their
  original Models, manipulate them, and then convert them back to an
  EntityProto instance? Or even better, if the EntityProto instances have
  references to the actual Model instances which we can access and 
  manipulate?

-- 
You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: EntityProto instance to Model instance?

2010-01-17 Thread Andy Freeman
I asked that question wrt db.class_for_kind and was told that it was
safe to assume that it would always be there.

I'd guess that model_from_protobuf should be safe as well.

File a bug requesting that it be added to the documented function
list.

On Jan 17, 6:26 pm, Nickolas Daskalou n...@daskalou.com wrote:
 Since these are not documented online anywhere, is it possible these
 functions may change in the future (and hence break my code)?

 2010/1/18 Andy Freeman ana...@earthlink.net



  Search in google/appengine/ext/db/__init__.py for protobuf functions.

  On Jan 16, 9:21 pm, Nickolas Daskalou n...@daskalou.com wrote:
   Does anyone have an answer for this? Google guys?

   2010/1/15 Kapil Kaisare kksm19820...@gmail.com

As an aside: what is an EntityProto, and is there a link in the GAE
documentation for it?

Regards,
Kaisare, Kapil Sadashiv

On Fri, Jan 15, 2010 at 09:37, Nickolas Daskalou n...@daskalou.com
  wrote:

How can I convert an EntityProto to a Model instance?

I have a Model method, pre_put(), that I want to call on each Model
instance before it's Put into the Datastore, using hooks (eg:
   http://code.google.com/appengine/articles/hooks.html).

My hook code looks like this:

def hook(service, call, request, response):
    assert service == 'datastore_v3'
    if call == 'Put':
        for entity in request.entity_list():
            entity.pre_put()

When the hook is called and runs, I get this error:

AttributeError: EntityProto instance has no attribute 'pre_put'

Is there any way to convert the entities in request.entity_list() to
  their
original Models, manipulate them, and then convert them back to an
EntityProto instance? Or even better, if the EntityProto instances
  have
references to the actual Model instances which we can access and
  manipulate?


-- 
You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: Application specific configuration vars

2010-01-09 Thread Andy Freeman
  Does anyone else agree that the AppEngine environment could provide
 this as a feature, so it's not part of the application but rather a
 part of the environment?

If you make your configuration a db.Model subclass instance, you can
modify it in production without loading a new version.

That is:

class ApplicationConfiguration(db.Model):
    debug_mode = db.BooleanProperty()
    admin_email = db.StringProperty()

and so on.
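
A minimal sketch of reading such a configuration entity, with a short
memcache TTL so production edits show up quickly (the key name and
TTL are arbitrary choices):

from google.appengine.api import memcache

def get_config():
    config = memcache.get('app_config')
    if config is None:
        # get_or_insert creates the singleton entity on first use.
        config = ApplicationConfiguration.get_or_insert('singleton')
        memcache.set('app_config', config, time=60)
    return config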




On Jan 4, 6:30 pm, Devraj Mukherjee dev...@gmail.com wrote:
 Hi all,

 My AppEngine application is written in Python. As the application code
 base becomes larger, we are experiencing the need to use configuration
 variables for our own applications. Examples: are we running in Debug
 mode or not, admin email addresses (used to send internal
 notifications), etc.

 Currently we are maintaining this in a common Python file as variables.

 Does anyone else agree that the AppEngine environment could provide
 this as a feature, so it's not part of the application but rather a
 part of the environment?

 Would appreciate comments.

 --
 The secret impresses no-one, the trick you use it for is everything
  - Alfred Borden (The Prestige)
-- 
You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: Google App Engine

2010-01-02 Thread Andy Freeman
Why would developers be better off discussing there than here?

On Dec 31 2009, 7:09 pm, Vadivel saravanan1...@gmail.com wrote:
 Google App Engine lets you run your web applications on Google's
 infrastructure. App Engine applications are easy to build, easy to
 maintain, and easy to scale as your traffic and data storage needs
 grow. This is the general discussion group for Google App Engine.
 Please join and participate. Click the link below..

 =
  page:http://www.123maza.com/website-designing/businesz
 =

--

You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] ereporter problems

2010-01-01 Thread Andy Freeman
I'm seeing some strange behavior from ereporter in the development
server.  It looks like reloading modules is confusing things.

Here's the log.  Comments are in []s.

INFO 2010-01-01 19:44:53,108 appengine_rpc.py:157] Server: appengine.google.com
INFO 2010-01-01 19:44:53,108 appcfg.py:348] Checking for updates to the SDK.
INFO 2010-01-01 19:44:53,640 appcfg.py:362] The SDK is up to date.
INFO 2010-01-01 19:44:53,921 dev_appserver_main.py:399] Running application application on port 80: http://0.0.0.0:80
[Now I visit a page that is guaranteed to throw an exception.]
 registering ereporter
[That message comes from my code that loads ereporter as specified in
google/appengine/ext/ereporter/ereporter.py .]
ERROR 2010-01-01 19:45:11,250 __init__.py:388] first pass
Traceback (most recent call last):
  [irrelevant trace lines deleted]
  File "C:\appengine\google_appengine_1.3.0\application\msg_page.py", line 84, in render
    assert False, 'first pass'
AssertionError: first pass
[Ereporter behaves as expected - there's an appropriate
ExceptionRecord in the datastore.]
[Now I edit msg_page.py, which is a handler, and revisit the same
url.]
INFO 2010-01-01 19:45:11,265 dev_appserver.py:3243] "GET /msg/send HTTP/1.1" 500 -
INFO 2010-01-01 19:45:11,280 dev_appserver_index.py:205] Updating C:\appengine\google_appengine_1.3.0\application\index.yaml
 registering ereporter
[That message is curious.  The file that loads ereporter wasn't
changed.  That file is, however, loaded by the handler which was
changed.]
ERROR 2010-01-01 19:45:42,655 __init__.py:388] second pass
Traceback (most recent call last):
  [irrelevant trace lines deleted]
  File "C:\appengine\google_appengine_1.3.0\application\msg_page.py", line 84, in render
    assert False, 'second pass'
AssertionError: second pass
[The changed assertion verifies that the development server reloaded
the handler.]
[Now things get strange - ereporter fails.]
Traceback (most recent call last):
  File "C:\appengine\google_appengine_1.3.0\google\appengine\ext\ereporter\ereporter.py", line 216, in emit
    signature = self.__GetSignature(record.exc_info)
  File "C:\appengine\google_appengine_1.3.0\google\appengine\ext\ereporter\ereporter.py", line 163, in __GetSignature
    frames = traceback.extract_tb(trace)
AttributeError: 'NoneType' object has no attribute 'extract_tb'
INFO 2010-01-01 19:45:42,671 dev_appserver.py:3243] "GET /msg/send HTTP/1.1" 500 -

Interestingly enough, the ExceptionRecord in the datastore was updated
appropriately - it now has a count of 2.

It's no big deal if the fix is "restart the development server after
any source file change", as long as everything works in production
with saved handlers.  (I have multiple handler files, each with their
own main.)

Thanks,
-andy

--

You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: How to update a part of application?

2009-12-31 Thread Andy Freeman
Instead of thinking that it will take a lot of time to check the
files, you could measure how much time it takes.

That way, you'll know how much time you saved when you screw up an
update because you didn't correctly specify which files changed.

I'm guessing that the amount of time that you will save is far less
important than messing up an update.  Yes, you will mess up an update
with your scheme.  You'll forget that you changed a file.


On Dec 31, 7:04 am, thanhnv vietthanh.ngu...@gmail.com wrote:
 @djidjadji: thank you for your answer. But I think it will take much
 time to check all files and collect changed files while I know
 exactly what folders (or files) are modified. So, I have edited a
 little of appcfg.py's code to solve this issue. I hope it does not
 infringe the Google license.
 And, I hope they will add this feature on next version of SDK :)

 thanks

--

You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: Forking new process from within app engine?

2009-12-30 Thread Andy Freeman
However, the development server is single-threaded, which makes
debugging this interesting.
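
For reference, a small sketch of the async urlfetch fan-out mb
describes below (the URLs are hypothetical):

from google.appengine.api import urlfetch

rpcs = []
for path in ('/work/part1', '/work/part2'):
    rpc = urlfetch.create_rpc(deadline=10)
    urlfetch.make_fetch_call(rpc, 'http://myapp.appspot.com' + path)
    rpcs.append(rpc)

for rpc in rpcs:
    result = rpc.get_result()  # blocks until that sub-request finishes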

On Dec 29, 12:04 pm, mb doit...@gmail.com wrote:
 Even though App Engine doesn't allow spawning new threads, you could
 expose each subprocess on its own URI and run them asynchronously with
  urlfetch: http://code.google.com/appengine/docs/python/urlfetch/asynchronousreq...

 On Dec 28, 9:08 pm, AL amle...@gmail.com wrote:



  Hi everybody.

  I need to invoke an external process in one of my app engine pages
  like so:

  import subprocess
   p = subprocess.Popen("ls -la", shell=True, stdout=subprocess.PIPE,
   stderr=subprocess.STDOUT)
  for line in p.stdout.readlines():
    self.response.out.write(line)
  retval = p.wait()

  This code works fine in regular python but app engine says
  AttributeError: 'module' object has no attribute 'Popen'

  wondering if calling an external application or forking a process is
  possible. I'm doubtful because of the security implications, but im
  also a newbie at python.

   Thanks Y'all

--

You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: Cannot redirect

2009-12-30 Thread Andy Freeman
Is that the whole post method?

I ask because calling self.redirect does not cause the post method to
terminate.  If there's code afterwards, it can undo the redirect.
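
That is, something like this sketch; the early return is the point:

from google.appengine.ext import webapp

class Handler(webapp.RequestHandler):
    def post(self):
        if self.request.get('go'):
            self.redirect('http://www.google.com')
            return  # skip the normal response below
        # ...code here would otherwise run and could undo the redirect...
        self.response.out.write('no redirect')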


On Dec 29, 3:02 pm, Pancho yfa...@gmail.com wrote:
 Hi Wesley,

 Indeed the extra ) is a typo in the message but the code does
 compile.
  I am calling it using POST and the log message "Redirecting to
  google.com" is actually displayed, but the redirection doesn't happen.

 Any idea?

 Thanks,

 Pancho

 On Dec 29, 2:04 am, Wesley Chun wesc+...@google.com wrote:



  greetings!

  how are you calling into your code? based on the code you posted
  below, it does not look like it will even compile. (there is an extra
  ) at the end of your class definition.)

  also, you have created a post() method. did you invoke your
  application using POST (or GET)? if the latter, then you need to put
  your code into a get() method.

  hope this helps!
  -wesley

  On Wed, Dec 23, 2009 at 3:07 PM, Pancho yfa...@gmail.com wrote:
   Hi Everybody!

   You may have come across this issue that I have with redirecting from
   within a post:

   class Handler(webapp.RequestHandler):)

    def post(self):
      logging.info('Redirecting to google.com')
      self.redirect('http://www.google.com')

   For a strange reason this doesn't work. The message is correctly
   logged but there no redirection.

   Please help!

   Cheers.

  --
  - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
  Core Python Programming, Prentice Hall, (c)2007,2001
  Python Fundamentals, Prentice Hall, (c)2009
     http://corepython.com

  wesley.j.chun :: wesc+...@google.com
  developer relations :: google app engine

--

You received this message because you are subscribed to the Google Groups 
Google App Engine group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: how to delete a table/entity?

2009-12-25 Thread Andy Freeman
  your method is right but the limit is there what ever methods we use.

There's always going to be a limit for scalable applications -
appengine just exposes it.

 because the offset is limited to 1000.
 i can not sort data by fields in results more than some limited items

Don't sort.  Use indices.  They can handle multiple fields.

Indices are the only way to build scalable applications.


On Dec 24, 9:16 pm, ajaxer calid...@gmail.com wrote:
 your method is right but the limit is there what ever methods we use.

 and you didn't get what i mean.

 because the offset is limited to 1000.
 i can not sort data by fields in results more than some limited items

 with out the offset limit, we can do it easily.

  On Dec 24, 2:10 am, Andy Freeman ana...@earthlink.net wrote:



   Any application that requires fetching an unbounded amount of data for a
   single page view is not scalable, no matter what technology you use to
   build it, so this problem is not appengine specific.

  If you need aggregations (average, median, total, etc), you have to
  compute them incrementally or with an off-line process.

    when even with the datetime >= you still get a big set, how can you
    handle it?

  We're talking about paging through a dataset, presenting n (for small
  n) elements at a time to a user.

   If we're paging through by the value of a field with distinct values and
   we want to present 20 results per page, the query for the first page
   is "order by field" with limit 20.  That query has a last result.
   The query for the next page is "field > {last result's field value}
   order by field", again with limit 20.  That query also has a last
   result, so the form of subsequent queries should be obvious.  (If
   you've got other conditions, such as user id or key, you need to add
   those as well.)

   Suppose that entities can have the same field value.  If you don't
   care how those entities are ordered, the first query's order by clause
   can be "order by field, __key__", again limit 20.  The next query
   tries to pick up entities with the same field as the last result from
   the previous query.  It looks like "field = {last result's field's
   value} and __key__ > {last result's key} order by __key__" and you
   keep using it until it fails.  You then use a query like the "next
   page" query from the previous case.  (I stopped mentioning limit
   because the value depends on what you need to fill the current page.)

  On Dec 22, 8:50 pm, ajaxer calid...@gmail.com wrote:

    when even with the datetime >= you still get a big set, how can you
    handle it?
   for example you get 1 item with the most specific filtering sql.
   and on this filtering sql, you should have a statistic info. like how
   many item it is .

   how do you expect the appengine to handle this problem?
   how about at one request with many these actions?

   On Dec 21, 11:09 pm, Andy Freeman ana...@earthlink.net wrote:

What statistics are you talking about?

You're claiming that one can't page through an entity type without
fetching all instances and sorting them.  That claim is wrong because
the order by constraint does exactly that.

 For example, suppose that you want to page through by a date/time
 field named datetime.  The query for the first page uses "order by
 datetime" while queries for subsequent pages have a "datetime >="
 clause for the last datetime value from the previous page and continue
 to "order by datetime".

What part of that do you think doesn't work?

 Do you think that Nick was wrong when he said that the time to
 execute such a query depends on the number of entities?

You can even do random access by using markers that are added/
maintained by a sequential process like the above.

On Dec 20, 7:34 pm, ajaxer calid...@gmail.com wrote:

 You misunderstand.
 if not show me a site with statistics on many fields.
 with more than 1000 pages please.
 thanks.

 On Dec 21, 9:06 am, Andy Freeman ana...@earthlink.net wrote:

  You misunderstand.

  If you have an ordering based on one or more indexed properties, you
  can page efficiently wrt that ordering, regardless of the number of
  data items.  (For the purposes of this discussion, __key__ is an
  indexed property, but you don't have to use it or can use it just to
  break ties.)

  If you're fetching a large number of items and sorting so you can 
  find
  a contiguous subset, you're doing it wrong.

  On Dec 19, 10:26 pm, ajaxer calid...@gmail.com wrote:

   obviously, if you have to page a data set more than 5 items 
   which
   is not ordered by __key__,

   you may find that the __key__  is of no use, because the filtered 
   data
   is ordered not by key.
   but by the fields value, and for that reason you need to loop 
   query as
   you may like to do.

   but you will encounter a timeout exception before you really 
   finished

[google-appengine] Re: how to delete a table/entity?

2009-12-23 Thread Andy Freeman
Any application that requires fetching an unbounded amount of data for
a single page view is not scalable, no matter what technology you use
to build it, so this problem is not appengine specific.

If you need aggregations (average, median, total, etc), you have to
compute them incrementally or with an off-line process.
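
For example, a running total can be maintained incrementally (a
sketch using a single counter entity; under heavy write load you
would shard it):

from google.appengine.ext import db

class RunningTotal(db.Model):
    total = db.IntegerProperty(default=0)
    count = db.IntegerProperty(default=0)

def record_value(value):
    def txn():
        agg = (RunningTotal.get_by_key_name('stats') or
               RunningTotal(key_name='stats'))
        agg.total += value
        agg.count += 1
        agg.put()
    db.run_in_transaction(txn)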

 when even with the datetime >= you still get a big set, how can you
 handle it?

We're talking about paging through a dataset, presenting n (for small
n) elements at a time to a user.

If we're paging through by the value of a field with distinct values and
we want to present 20 results per page, the query for the first page
is "order by field" with limit 20.  That query has a last result.
The query for the next page is "field > {last result's field value}
order by field", again with limit 20.  That query also has a last
result, so the form of subsequent queries should be obvious.  (If
you've got other conditions, such as user id or key, you need to add
those as well.)

Suppose that entities can have the same field value.  If you don't
care how those entities are ordered, the first query's order by clause
can be "order by field, __key__", again limit 20.  The next query
tries to pick up entities with the same field as the last result from
the previous query.  It looks like "field = {last result's field's
value} and __key__ > {last result's key} order by __key__" and you
keep using it until it fails.  You then use a query like the "next
page" query from the previous case.  (I stopped mentioning limit
because the value depends on what you need to fill the current page.)
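
In code, the scheme looks roughly like this (a sketch; Item and field
are stand-ins for your kind and indexed property, and at least one
entity is assumed to exist):

from google.appengine.ext import db

class Item(db.Model):
    field = db.DateTimeProperty()  # any indexed, orderable property

PAGE = 20

# First page.
results = Item.all().order('field').order('__key__').fetch(PAGE)
last = results[-1]

# Next page: first drain entities sharing last.field...
page = (Item.all().filter('field =', last.field)
                  .filter('__key__ >', last.key())
                  .order('__key__').fetch(PAGE))
# ...then, if that comes up short, advance past last.field.
if len(page) < PAGE:
    page += (Item.all().filter('field >', last.field)
                       .order('field').order('__key__')
                       .fetch(PAGE - len(page)))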



On Dec 22, 8:50 pm, ajaxer calid...@gmail.com wrote:
 when even with the datetime >= you still get a big set, how can you
 handle it?
 for example you get 1 item with the most specific filtering sql.
 and on this filtering sql, you should have a statistic info. like how
 many item it is .

 how do you expect the appengine to handle this problem?
 how about at one request with many these actions?

 On Dec 21, 11:09 pm, Andy Freeman ana...@earthlink.net wrote:



  What statistics are you talking about?

  You're claiming that one can't page through an entity type without
  fetching all instances and sorting them.  That claim is wrong because
  the order by constraint does exactly that.

  For example, suppose that you want to page through by a date/time
  field named datetime.  The query for the first page uses "order by
  datetime" while queries for subsequent pages have a "datetime >="
  clause for the last datetime value from the previous page and continue
  to "order by datetime".

  What part of that do you think doesn't work?

  Do you think that Nick was wrong when he said that the time to
  execute such a query depends on the number of entities?

  You can even do random access by using markers that are added/
  maintained by a sequential process like the above.

  On Dec 20, 7:34 pm, ajaxer calid...@gmail.com wrote:

   You misunderstand.
   if not show me a site with statistics on many fields.
   with more than 1000 pages please.
   thanks.

   On Dec 21, 9:06 am, Andy Freeman ana...@earthlink.net wrote:

You misunderstand.

If you have an ordering based on one or more indexed properties, you
can page efficiently wrt that ordering, regardless of the number of
data items.  (For the purposes of this discussion, __key__ is an
indexed property, but you don't have to use it or can use it just to
break ties.)

If you're fetching a large number of items and sorting so you can find
a contiguous subset, you're doing it wrong.

On Dec 19, 10:26 pm, ajaxer calid...@gmail.com wrote:

 obviously, if you have to page a data set more than 5 items which
 is not ordered by __key__,

 you may find that the __key__  is of no use, because the filtered data
 is ordered not by key.
 but by the fields value, and for that reason you need to loop query as
 you may like to do.

 but you will encounter a timeout exception before you really finished
 the action.

 On Dec 19, 8:26 am, Andy Freeman ana...@earthlink.net wrote:

if the type of data is larger than 1 items, you need 
reindexing
   for this result.
   and recount each time for getting the proper item.

   What kind of reindexing are you talking about?

   Global reindexing is only required when you change the indices in
   app.yaml.  It doesn't occur when you add more entities and/or have
   big entities.

   Of course, when you change an entity, it gets reindexed, but that's
   a constant cost.

  Surely you're not planning to change all your entities fairly often,
  are you?  (You're going to have problems if you try to maintain
  sequence numbers and do insertions, but that doesn't scale anyway.)

it seems you have not encountered such a problem.
   on this situation, the indexes on the fields helps nothing for the
   bulk of  data you have

[google-appengine] Re: how to delete a table/entity?

2009-12-21 Thread Andy Freeman
What statistics are you talking about?

You're claiming that one can't page through an entity type without
fetching all instances and sorting them.  That claim is wrong because
the order by constraint does exactly that.

For example, suppose that you want to page through by a date/time
field named datetime.  The query for the first page uses "order by
datetime" while queries for subsequent pages have a "datetime >="
clause for the last datetime value from the previous page and continue
to "order by datetime".

What part of that do you think doesn't work?

Do you think that Nick was wrong when he said that the time to
execute such a query depends on the number of entities?

You can even do random access by using markers that are added/
maintained by a sequential process like the above.

On Dec 20, 7:34 pm, ajaxer calid...@gmail.com wrote:
 You misunderstand.
 if not show me a site with statistics on many fields.
 with more than 1000 pages please.
 thanks.

 On Dec 21, 9:06 am, Andy Freeman ana...@earthlink.net wrote:



  You misunderstand.

  If you have an ordering based on one or more indexed properties, you
  can page efficiently wrt that ordering, regardless of the number of
  data items.  (For the purposes of this discussion, __key__ is an
  indexed property, but you don't have to use it or can use it just to
  break ties.)

  If you're fetching a large number of items and sorting so you can find
  a contiguous subset, you're doing it wrong.

  On Dec 19, 10:26 pm, ajaxer calid...@gmail.com wrote:

   obviously, if you have to page a data set more than 5 items which
   is not ordered by __key__,

   you may find that the __key__  is of no use, because the filtered data
   is ordered not by key.
   but by the fields value, and for that reason you need to loop query as
   you may like to do.

   but you will encounter a timeout exception before you really finished
   the action.

   On Dec 19, 8:26 am, Andy Freeman ana...@earthlink.net wrote:

  if the type of data is larger than 1 items, you need reindexing
 for this result.
 and recount each time for getting the proper item.

 What kind of reindexing are you talking about?

Global reindexing is only required when you change the indices in
 app.yaml.  It doesn't occur when you add more entities and/or have big
entities.

Of course, when you change an entity, it gets reindexed, but that's a
constant cost.

Surely you're not planning to change all your entities fairly often,
are you?  (You're going to have problems if you try to maintain
sequence numbers and do insertions, but that doesn't scale anyway.)

  it seems you have not encountered such a problem.
 on this situation, the indexes on the fields helps nothing for the
 bulk of  data you have to be sorted is really big.

Actually I have.  I've even done difference and at-least-#
(intersection and union are special cases - at-least-# also handles
majority), at-most-# (binary xor is the only common case that I came
up with), and combinations thereof on paged queries.

Yes, I know that offset is limited to 1000 but that's irrelevant
because the paging scheme under discussion doesn't use offset.  It
keeps track of where it is using __key__ and indexed data values.

On Dec 16, 7:56 pm, ajaxer calid...@gmail.com wrote:

 of course the time is related to the type data you are fetching by one
 query.

 if the type of data is larger than 1 items, you need reindexing
 for this result.
 and recount each time for getting the proper item.

 it seems you have not encountered such a problem.
 on this situation, the indexes on the fields helps nothing for the
 bulk of  data you have to be sorted is really big.

 On Dec 17, 12:20 am, Andy Freeman ana...@earthlink.net wrote:

    it still can result in timeout if the data is really big

  How so?  If you don't request too many items with a page query, it
  won't time out.  You will run into runtime.DeadlineExceededErrors if
  you try to use too many page queries for a given request, but 

   of no much use to most of us if we really have big data to sort 
   and
   page.

  You do know that the sorting for the page queries is done with the
  indexing and not user code, right?  Query time is independent of the
  total amount of data and depends only on the size of the result set.
  (Indexing time is constant per inserted/updated entity.)

  On Dec 16, 12:13 am, ajaxer calid...@gmail.com wrote:

   it is too complicated for most of us.
    and it still can result in timeout if the data is really big

   of no much use to most of us if we really have big data to sort 
   and
   page.

   On Dec 15, 11:35 pm, Stephen sdea...@gmail.com wrote:

On Dec 15, 8:04 am, ajaxer calid...@gmail.com wrote:

    also the 1000 index limit makes it not possible to fetch older

[google-appengine] Re: how to delete a table/entity?

2009-12-20 Thread Andy Freeman
You misunderstand.

If you have an ordering based on one or more indexed properties, you
can page efficiently wrt that ordering, regardless of the number of
data items.  (For the purposes of this discussion, __key__ is an
indexed property, but you don't have to use it or can use it just to
break ties.)

If you're fetching a large number of items and sorting so you can find
a contiguous subset, you're doing it wrong.

On Dec 19, 10:26 pm, ajaxer calid...@gmail.com wrote:
 obviously, if you have to page a data set more than 5 items which
 is not ordered by __key__,

 you may find that the __key__  is of no use, because the filtered data
 is ordered not by key.
 but by the fields value, and for that reason you need to loop query as
 you may like to do.

 but you will encounter a timeout exception before you really finished
 the action.

 On Dec 19, 8:26 am, Andy Freeman ana...@earthlink.net wrote:



if the type of data is larger than 1 items, you need reindexing
   for this result.
   and recount each time for getting the proper item.

  What kind of reindexing are you talking about?

  Global reindexing is only required when you change the indices in
  app.yaml.  It doesn't occur when you add more entities and/or have big
  entities.

  Of course, when you change an entity, it gets reindexed, but that's a
  constant cost.

  Surely you're not planning to change all your entities fairly often,
  are you?  (You're going to have problems if you try to maintain
  sequence numbers and do insertions, but that doesn't scale anyway.)

it seems you have not encountered such a problem.
   on this situation, the indexes on the fields helps nothing for the
   bulk of  data you have to be sorted is really big.

  Actually I have.  I've even done difference and at-least-#
  (intersection and union are special cases - at-least-# also handles
  majority), at-most-# (binary xor is the only common case that I came
  up with), and combinations thereof on paged queries.

  Yes, I know that offset is limited to 1000 but that's irrelevant
  because the paging scheme under discussion doesn't use offset.  It
  keeps track of where it is using __key__ and indexed data values.

  On Dec 16, 7:56 pm, ajaxer calid...@gmail.com wrote:

   of course the time is related to the type data you are fetching by one
   query.

   if the type of data is larger than 1 items, you need reindexing
   for this result.
   and recount each time for getting the proper item.

   it seems you have not encountered such a problem.
   on this situation, the indexes on the fields helps nothing for the
   bulk of  data you have to be sorted is really big.

   On Dec 17, 12:20 am, Andy Freeman ana...@earthlink.net wrote:

  it still can result in timeout if the data is really big

How so?  If you don't request too many items with a page query, it
won't time out.  You will run into runtime.DeadlineExceededErrors if
you try to use too many page queries for a given request, but 

 of no much use to most of us if we really have big data to sort and
 page.

You do know that the sorting for the page queries is done with the
indexing and not user code, right?  Query time is independent of the
total amount of data and depends only on the size of the result set.
(Indexing time is constant per inserted/updated entity.)

On Dec 16, 12:13 am, ajaxer calid...@gmail.com wrote:

 it is too complicated for most of us.
    and it still can result in timeout if the data is really big

 not of much use to most of us if we really have big data to sort and
 page.

 On Dec 15, 11:35 pm, Stephen sdea...@gmail.com wrote:

  On Dec 15, 8:04 am, ajaxer calid...@gmail.com wrote:

   also the 1000 index limit makes it not possible to fetch older data
   when paging.

   for if we need an indexed page beyond the first 1,000 items,
   it would cost us a lot of cpu time to calculate the base for GQL
   to fetch the data with index less than 1000.

 http://code.google.com/appengine/articles/paging.html






[google-appengine] Re: how to delete a table/entity?

2009-12-18 Thread Andy Freeman
  if the type of data is larger than 1,000 items, you need reindexing
 for this result,
 and a recount each time to get the proper item.

What kind of reindexing are you talking about?

Global reindexing is only required when you change the indices in
index.yaml.  It doesn't occur when you add more entities and/or have big
entities.

Of course, when you change an entity, it gets reindexed, but that's a
constant cost.

Surely you're not planning to change all your entities fairly often,
are you?  (You're going to have problems if you try to maintain
sequence numbers and do insertions, but that doesn't scale anyway.)

  it seems you have not encountered such a problem.
 in this situation, the indexes on the fields help nothing, for the
 bulk of data you have to sort is really big.

Actually I have.  I've even done difference and at-least-#
(intersection and union are special cases - at-least-# also handles
majority), at-most-# (binary xor is the only common case that I came
up with), and combinations thereof on paged queries.

Yes, I know that offset is limited to 1000 but that's irrelevant
because the paging scheme under discussion doesn't use offset.  It
keeps track of where it is using __key__ and indexed data values.




On Dec 16, 7:56 pm, ajaxer calid...@gmail.com wrote:
 of course the time is related to the type of data you are fetching in one
 query.

 if the type of data is larger than 1,000 items, you need reindexing
 for this result,
 and a recount each time to get the proper item.

 it seems you have not encountered such a problem.
 in this situation, the indexes on the fields help nothing, for the
 bulk of data you have to sort is really big.

 On Dec 17, 12:20 am, Andy Freeman ana...@earthlink.net wrote:



   it still can result in a timeout if the data is really big

  How so?  If you don't request too many items with a page query, it
  won't time out.  You will run into runtime.DeadlineExceededErrors if
  you try to use too many page queries for a given request, but 

    not of much use to most of us if we really have big data to sort and
    page.

  You do know that the sorting for the page queries is done with the
  indexing and not user code, right?  Query time is independent of the
  total amount of data and depends only on the size of the result set.
  (Indexing time is constant per inserted/updated entity.)

  On Dec 16, 12:13 am, ajaxer calid...@gmail.com wrote:

   it is too complicated for most of us.
    and it still can result in a timeout if the data is really big

    not of much use to most of us if we really have big data to sort and
    page.

   On Dec 15, 11:35 pm, Stephen sdea...@gmail.com wrote:

On Dec 15, 8:04 am, ajaxer calid...@gmail.com wrote:

 also the 1000 index limit makes it not possible to fetch older data when
 paging.

 for if we need an indexed page beyond the first 1,000 items,
 it would cost us a lot of cpu time to calculate the base for GQL
 to fetch the data with index less than 1000.

    http://code.google.com/appengine/articles/paging.html






[google-appengine] Re: pickle problems - ImportError: No module named __builtin__

2009-12-18 Thread Andy Freeman
 pickle should be pretty much as you would expect except that it's
 really cPickle, as stated in the
 docs: http://code.google.com/appengine/kb/general.html

If pickle is really cPickle, then why are the cPickle-specific
features not available?


On Dec 17, 1:25 am, Wesley Chun w...@google.com wrote:
 scott,

 how is the variable 'path' created? since App Engine executes in a
 restricted environment, you may not have access to the necessary
 files. a call to os.path.join() should have a flavor of
 os.path.join(os.path.dirname(__file__), ...

 pickle should be pretty much as you would expect except that it's
 really cPickle, as stated in the docs:
 http://code.google.com/appengine/kb/general.html and
 http://code.google.com/appengine/docs/python/runtime.html

 best regards,
 -- wesley
 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
 Core Python Programming, Prentice Hall, (c)2007,2001
 Python Fundamentals, Prentice Hall, (c)2009
    http://corepython.com

 wesley.j.chun : w...@google.com
 developer relations, google app engine



 On Wed, Dec 16, 2009 at 7:18 PM, Scott hillma...@gmail.com wrote:
  I am trying to execute the following code:

  path = os.path.join(os.getcwd(), 'applications', 'default', 'modules')

  word_set_a = pickle.load(open(os.path.join(path, 'word_set_a.txt'),
  'r'))

  But I am getting the following error:

  Traceback (most recent call last):
    File "/base/data/home/apps/hillmanwork/1.338501969210326694/gluon/restricted.py", line 173, in restricted
      exec ccode in environment
    File "/base/data/home/apps/hillmanwork/1.338501969210326694/applications/default/controllers/boggle.py:findWords", line 18, in <module>
    File "/base/data/home/apps/hillmanwork/1.338501969210326694/gluon/globals.py", line 96, in <lambda>
      self._caller = lambda f: f()
    File "/base/data/home/apps/hillmanwork/1.338501969210326694/applications/default/controllers/boggle.py:findWords", line 6, in findWords
    File "/base/data/home/apps/hillmanwork/1.338501969210326694/applications/default/modules/pyBoggle.py", line 88, in <module>
      word_set_a = pickle.load(open(os.path.join(path, 'word_set_a.txt'), 'r'))
    File "/base/python_dist/lib/python2.5/pickle.py", line 1363, in load
      return Unpickler(file).load()
    File "/base/python_dist/lib/python2.5/pickle.py", line 852, in load
      dispatch[key](self)
    File "/base/python_dist/lib/python2.5/pickle.py", line 1084, in load_global
      klass = self.find_class(module, name)
    File "/base/python_dist/lib/python2.5/pickle.py", line 1117, in find_class
      __import__(module)
  ImportError: No module named __builtin__

  Everything works just fine on the local test server, but once uploaded I
  have no such luck.

  Are there limitations with the pickle module that I am not aware of?





[google-appengine] Re: Java or Python; Which should I recommend to others who are very sensitive to costs?

2009-12-15 Thread Andy Freeman
 Having said that, today it turns out for me that Java runtime is much
 more cost effective than Python runtime in some cases

The question is not whether the Java or Python runtime is more cost
effective in some cases; it's which is more cost effective
in your cases.

Suppose that your application does one datastore operation for each
page and that datastore operation and other code takes the same amount
of time in both Python and Java.  Datastore operations are so much
slower than startup that this alone would make the startup difference
almost unnoticeable.
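
To put rough numbers on it (the 50 cpu_ms per datastore op is an
assumption for illustration, not a measurement):

# Assumed: ~50 cpu_ms of api time per datastore op, plus the 1 cpu_ms
# (Java) vs 6 cpu_ms (Python) handler costs quoted below.
java_ms = 1 + 50
python_ms = 6 + 50
print python_ms / float(java_ms)  # ~1.1x, nowhere near 6x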

And, as someone else pointed out, development time is a cost too.

On Dec 15, 10:02 am, Takashi Matsuo matsuo.taka...@gmail.com wrote:
 Hello,

 Today I noticed that the App Engine Java environment became much faster
 than before. The spin-up cost is about 700 cpu_ms with the simplest
 servlet. Additionally, when it comes to serving with a hot instance,
 the cost reduces to 0-2 cpu_ms, while the python environment takes about
 5-7 cpu_ms even with the simplest handler.

 To make it simple here, let's say Java takes 1 cpu_ms while Python takes
 6 cpu_ms to serve a very simple page.
 How many requests can they serve with 1 cpu hour (3,600,000 cpu_ms)?

 Java: 3,600,000 requests/cpu hour
 Python: 600,000 requests/cpu hour

 This is a big difference; 6 times! If your app exceeds the free quota, this
 difference can impact the total cost significantly. I'm a big
 Python fan and I have believed that the appengine Python runtime is
 superior to the Java runtime, so I've been trying to persuade others to
 use Python rather than Java for now.

 Having said that, today it turns out for me that the Java runtime is much
 more cost effective than the Python runtime in some cases, so should I
 recommend others to use App Engine Java if they are very sensitive to
 cpu costs?

 I'd appreciate it if anyone could share their thoughts/experiences on this.

 TIA

 --
 Takashi Matsuo
 Kay's daddy





[google-appengine] Re: CancelledError: The API call taskqueue.Add() was explicitly cancelled.

2009-12-12 Thread Andy Freeman
 I don't think you can do this. When you get a DeadlineExceededError
 you only have a very short amount of cpu time left -- enough to return
 a simple response to the client, but not enough to queue a new task.

I remember seeing some code by Nick that did queue a task after
catching a DeadlineExceededError but I can't find it now.  (I think
that it actually used deferred, but that's the same thing if the task is
small enough.  deferred does a datastore instance write if the task is
too big, and there may not be enough time to do that.)


On Dec 12, 4:23 am, Stephen sdea...@gmail.com wrote:
 On Dec 10, 11:44 am, Alex Popescu

 the.mindstorm.mailingl...@gmail.com wrote:

  Here is the scenario in which I'm seeing this error:

  - I have a set of tasks that are executed

  - the tasks are expensive so sometimes they may reach the
  DeadlineExceededError

  - in case the DeadlineExceededError occurs, I am attempting to create
  a new task to signal that processing was not completed and should
  continue at a later moment

 I don't think you can do this. When you get a DeadlineExceededError
 you only have a very short amount of cpu time left -- enough to return
 a simple response to the client, but not enough to queue a new task.

  While I could probably code around this issue, it will definitely make
  my app code more messy and complex. Right now things are clear:

  - there is a list of objects that must be processed

  - once new objects get appended to that list a new task is created for
  taking care of them

  - if the task cannot empty the list of objects to be processed it is
  scheduling a new task to continue the processing later

 You could try making your batch size smaller.

 If a task does not return a 200 success response, it will be retried.
 You could code your task so that if it only manages to process some of
 the list, when it is run again after returning a non-200 response it
 picks up where it left off and processes the remaining items in the
 list.

  Can anyone explain the meaning of the CancelledError? I read the
  documentation and I must confess that I'm not very sure what triggers
  it (at least I don't agree it is explicitly).

 This is probably your second task being cancelled after your
 DeadlineExceededError.





[google-appengine] Re: CancelledError: The API call taskqueue.Add() was explicitly cancelled.

2009-12-12 Thread Andy Freeman
 I don't think you can do this. When you get a DeadlineExceededError
 you only have a very short amount of cpu time left -- enough to return
 a simple response to the client, but not enough to queue a new task.

http://code.google.com/appengine/articles/deferred.html contains
sample code that defers a task in response to a
DeadlineExceededError.

Note that this example is not a big task, i.e., one that would cause the
deferred code to generate a model instance containing the data.  It's
one that fits within the task queue's size limit (IIRC, 10k).
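
Reduced to a sketch, the pattern looks like this (process and do_one
are hypothetical stand-ins; the article's Mapper class is the real
thing):

from google.appengine.ext import deferred
from google.appengine.runtime import DeadlineExceededError

def do_one(item):
    pass  # stand-in for the real per-item work

def process(items, start=0):
    # 'items' must be small enough to pickle into the task payload,
    # or deferred falls back to writing a datastore entity.
    i = start
    try:
        while i < len(items):
            do_one(items[i])
            i += 1
    except DeadlineExceededError:
        # Queue a continuation from where we left off, then return
        # normally so the current task isn't retried from the start.
        deferred.defer(process, items, start=i)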

On Dec 12, 4:23 am, Stephen sdea...@gmail.com wrote:
 On Dec 10, 11:44 am, Alex Popescu

 the.mindstorm.mailingl...@gmail.com wrote:

  Here is the scenario in which I'm seeing this error:

  - I have a set of tasks that are executed

  - the tasks are expensive so sometimes they may reach the
  DeadlineExceededError

  - in case the DeadlineExceededError occurs, I am attempting to create
  a new task to signal that processing was not completed and should
  continue at a later moment

 I don't think you can do this. When you get a DeadlineExceededError
 you only have a very short amount of cpu time left -- enough to return
 a simple response to the client, but not enough to queue a new task.

  While I could probably code around this issue, it will definitely make
  my app code more messy and complex. Right now things are clear:

  - there is a list of objects that must be processed

  - once new objects get appended to that list a new task is created for
  taking care of them

  - if the task cannot empty the list of objects to be processed it is
  scheduling a new task to continue the processing later

 You could try making your batch size smaller.

 If a task does not return a 200 success response, it will be retried.
 You could code your task so that if it only manages to process some of
 the list, when it is run again after returning a non-200 response it
 picks up where it left off and processes the remaining items in the
 list.

  Can anyone explain the meaning of the CancelledError? I read the
  documentation and I must confess that I'm not very sure what triggers
  it (at least I don't agree it is explicitly).

 This is probably your second task being cancelled after your
 DeadlineExceededError.





[google-appengine] Re: CancelledError: The API call taskqueue.Add() was explicitly cancelled.

2009-12-12 Thread Andy Freeman
 I don't think you can do this. When you get a DeadlineExceededError
 you only have a very short amount of cpu time left -- enough to return
 a simple response to the client, but not enough to queue a new task.


http://code.google.com/appengine/articles/deferred.html contains
sample code that queues a task in response to a
DeadlineExceededError.  (defer is a task queue wrapper.)

Not only does this example queue a task after a DeadlineExceededError,
it also does a db.put and a db.delete; look at _batch_write.  I'm not
surprised that there's enough time to queue a task but I am
surprised that there's also enough time to do a put and a delete.


On Dec 12, 4:23 am, Stephen sdea...@gmail.com wrote:
 On Dec 10, 11:44 am, Alex Popescu

 the.mindstorm.mailingl...@gmail.com wrote:

  Here is the scenario in which I'm seeing this error:

  - I have a set of tasks that are executed

  - the tasks are expensive so sometimes they may reach the
  DeadlineExceededError

  - in case the DeadlineExceededError occurs, I am attempting to create
  a new task to signal that processing was not completed and should
  continue at a later moment

 I don't think you can do this. When you get a DeadlineExceededError
 you only have a very short amount of cpu time left -- enough to return
 a simple response to the client, but not enough to queue a new task.

  While I could probably code around this issue, it will definitely make
  my app code more messy and complex. Right now things are clear:

  - there is a list of objects that must be processed

  - once new objects get appended to that list a new task is created for
  taking care of them

  - if the task cannot empty the list of objects to be processed it is
  scheduling a new task to continue the processing later

 You could try making your batch size smaller.

 If a task does not return a 200 success response, it will be retried.
 You could code your task so that if it only manages to process some of
 the list, when it is run again after returning a non-200 response it
 picks up where it left off and processes the remaining items in the
 list.

  Can anyone explain the meaning of the CancelledError? I read the
  documentation and I must confess that I'm not very sure what triggers
  it (at least I don't agree it is explicitly).

 This is probably your second task being cancelled after your
 DeadlineExceededError.





[google-appengine] Re: using google.appengine.ext.deferred

2009-12-10 Thread Andy Freeman
I misspoke - I meant the python.exe window.  (I double-click on .py
files.)

That window ignores clicks and the like.

On Dec 9, 1:43 pm, Eli Jones eli.jo...@gmail.com wrote:
 If you have a windows command prompt open, right click on the black area of
 the window and select Mark.

 Then, left-click and drag to highlight the text you wish to copy.. once it
 is highlighted, then right-click.

 Whatever you highlighted is now copied to the clipboard.



 On Wed, Dec 9, 2009 at 4:39 PM, Andy Freeman ana...@earthlink.net wrote:
  The windows command window doesn't support copy or cut so I can't grab
  a trace.  (I don't run inside the launcher or with a programming
  environment.)

  the import error was on line 27 of handler.py, specifically from
  google.appengine.ext import deferred.  It said that it couldn't find
  deferred.  Occasionally import google.appengine.ext.deferred as
  deferred would work.

  I also had problems with ereporter/ereporter.py, specifically line
  163, frames = traceback.extract_tb(trace).  traceback was None even
  though it is imported on line 78.

  Yes, I tried restarting everything and deleting all of the .pyc files.

  However, all is well today with both.  Grr.  I'll retry just to
  verify.

  Argh.  I spoke too soon - I can trigger what looks like the same
  problem with ereporter/report_generator.py.

  <type 'exceptions.ImportError'>: cannot import name ereporter
       args = ('cannot import name ereporter',)
       message = 'cannot import name ereporter'

  The next thing up the call stack is dev_appserver.py,
  ExecuteOrImportScript, line 2121, exec module_code in
  script_module.__dict__.  Above that is ExecuteCGI, line 2225.  Above
  that is Dispatch, line 2315.

  This is being triggered by ereporter/report_generator.py, line 48,
  which is from google.appengine.ext import ereporter.  It's right
  after from google.appengine.ext import db so I don't suspect a path
  problem.

  I can fairly reliably trigger this exception by going to the ereporter
  page before I visit any other page.  If I go to some other page before
  going to the ereporter page, the import error doesn't happen.

  Which reminds me, using "debug" as an ereporter option is unfortunate
  - the development environment sometimes throws up an overlay when it
  sees that option.  (Ereporter could also do with a "today" or "all" date
  option.)

  Thanks,
  -andy

  On Dec 9, 4:33 am, Nick Johnson (Google) nick.john...@google.com
  wrote:
   Hi Andy,

   What import errors do you get, and under what circumstances?

   On Tue, Dec 8, 2009 at 11:51 PM, Andy Freeman ana...@earthlink.net
  wrote:
As of 1.2.8, the handler in google/appengine/ext/deferred/deferred.py
(imported by google/appengine/ext/deferred/__init__.py) says to use
google/appengine/ext/deferred/handler.py as the handler instead,
citing possible import errors.

I get import errors in my development environment (windows xp, python
2.5.2 (r252:60911), win32) when I use handler.py but not when I use
deferred.py.

Any ideas what I'm doing differently?  Is this going to bite me in
production?

Thanks,
-andy


   --
   Nick Johnson, Developer Programs Engineer, App Engine
   Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
  Number:
    368047






[google-appengine] implementation of in queries

2009-12-10 Thread Andy Freeman
I know that both IN and != queries are implemented with multiple
queries.

Are those multiple queries executed in the datastore or in the
application?

I'm interested in whether an IN query has less latency than the
corresponding sequence of queries.  (For my specific application, the
sequence is more convenient and requires no de-duping, but if IN has
less latency, I'll use it.)
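
For concreteness, the two forms in question (a sketch; Article and
'tag' are made-up names):

from google.appengine.ext import db

class Article(db.Model):
    tag = db.StringProperty()

tags = ['python', 'java', 'go']

# The IN form: the API fans this out into one equality query per value
# and merges the results.
in_results = Article.gql('WHERE tag IN :1', tags).fetch(100)

# The hand-rolled form: issue the equality queries yourself.  Whether
# the IN fan-out happens app-side or datastore-side is exactly the
# question above.
seq_results = []
for t in tags:
    seq_results.extend(Article.gql('WHERE tag = :1', t).fetch(100))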

ps - This should be documented somewhere

Thanks,
-andy





[google-appengine] using google.appengine.ext.deferred

2009-12-08 Thread Andy Freeman
As of 1.2.8, the handler in google/appengine/ext/deferred/deferred.py
(imported by google/appengine/ext/deferred/__init__.py) says to use
google/appengine/ext/deferred/handler.py as the handler instead,
citing possible import errors.

I get import errors in my development environment (windows xp, python
2.5.2 (r252:60911), win32) when I use handler.py but not when I use
deferred.py.

Any ideas what I'm doing differently?  Is this going to bite me in
production?

Thanks,
-andy





[google-appengine] Re: GAE for CPU intensive time important application

2009-11-28 Thread Andy Freeman
 Another question, you both recommended Python for some of its
 features, but isn't Python much slower than Java?

Maybe, maybe not, but it may not matter.  What fraction of your
run-time actually depends on language speed?

On Nov 28, 9:13 am, Eric shel...@gmail.com wrote:
 Another question, you both recommended Python for some of its
 features, but isn't Python much slower than Java? So wouldn't that
 necessitate many more instances/CPUs to keep with the query load?

 On Nov 28, 9:45 am, Niklas Rosencrantz teknik...@gmail.com wrote:



   1) pricing

  absolutely seems so. gae apps are ~1/20 the cost of previous
  hosting methods (servers).

   2) latency resulting from slow CPU, JIT compiles, etc.

  that's for the latency-oriented group, which we don't focus on here:
  http://groups.google.com/group/make-the-web-faster
  in the long run, yes. you can compare to a dedicated physical server:
  much more difficult to configure, compiling modules specific to the
  physical architecture; you get superior response time with C++ server
  pages that output hello world, but in the best projects security and
  convenience are kings. latency is the lowest priority, still important.

  python is good; the same thing in python is 1/10 the code compared to
  java, no XML, and yaml is very neat. java's strong point: more ways to
  solve the same problem.

  On Sat, Nov 28, 2009 at 6:35 AM, 风笑雪 kea...@gmail.com wrote:
   The White House hosted an online town hall meeting on GAE with GWT,
   and received 700 hits per second at its peak.
  http://google-code-updates.blogspot.com/2009/04/google-developer-prod...

    But more than 1000 queries a second has never been tested.

    I think Java is not a good choice in your case. When your users
    suddenly increase, starting a new Java instance may cost more than 5
    seconds, while Python needs less than 1 second.

   2009/11/27 Eric shel...@gmail.com:

   Hi,

   I wish to set up a CPU-intensive time-important query service for
   users on the internet.
   Is GAE with Java the right choice? (as compared to other clouds, or
   non-cloud architecture)
   Specifically, in terms of:
   1) pricing
   2) latency resulting from slow CPU, JIT compiles, etc..
   3) latency resulting from communication of processes inside the cloud
   (e.g. a queuing process and a calculation process)
   4) latency of communication between cloud and end user

   A usage scenario I am expecting is:
   - a typical user sends a query (XML of size around 1K) once every 30
   seconds on average,
   - Each query requires a numerical computation of average time 0.2 sec
   and max time 1 sec (on a 1 GHz Pentium). The computation requires no
   data other than the query itself.
   - The delay a user experiences between sending a query and receiving a
   response should be on average no more than 2 seconds and in general no
   more than 5 seconds.
   - A background save to a DB of the response should occur (not time
   critical)
    - There can be up to 30,000 simultaneous users - i.e., on average 1000
    queries a second, each requiring an average 0.2 sec calculation, so
    that would necessitate around 200 CPUs.

   Is this feasible on GAE Java?
   If so, where can I learn about the correct design methodology for such
   a project on GAE?

   If this is the wrong forum to ask this, I'd appreciate redirection.

   Thanks,

   Eric







[google-appengine] Re: GAE for CPU intensive time important application

2009-11-28 Thread Andy Freeman
 Maybe I don't understand something, but why should the 5 second setup
 on a new instance bother me? A new instance should be created when
 other instances are near capacity, and not when they exceed it, right?
 So once initialized it can be dummy-run internally and only
 available 5 seconds later while the existing instance continue to take
 care of the incoming queries.

What makes you think that the request that causes the creation of a
new instance doesn't wait for the creation of said instance?  (The
scheme you suggest is plausible, but why do you think that it's how
appengine works?)

On Nov 28, 9:03 am, Eric shel...@gmail.com wrote:
 Thanks for the response.

 Maybe I don't understand something, but why should the 5 second setup
 on a new instance bother me? A new instance should be created when
 other instances are near capacity, and not when they exceed it, right?
 So once initialized it can be dummy-run internally and only
 available 5 seconds later while the existing instance continue to take
 care of the incoming queries.

 Also, do you think the latency requirements are realistic with GAE?
  That is, in the ordinary case, could the response be consistently
  served back to the querying user with a delay of at most 3 seconds?

 On Nov 28, 8:35 am, 风笑雪 kea...@gmail.com wrote:



  The White House hosted an online town hall meeting on GAE with GWT,
  and received 700 hits per second at its peak.
  http://google-code-updates.blogspot.com/2009/04/google-developer-prod...

  But more than 1000 queries a second has never been tested.

  I think Java is not a good choice in your case. When your users
  suddenly increase, starting a new Java instance may cost more than 5
  seconds, while Python needs less than 1 second.

  2009/11/27 Eric shel...@gmail.com:

   Hi,

   I wish to set up a CPU-intensive time-important query service for
   users on the internet.
   Is GAE with Java the right choice? (as compared to other clouds, or
   non-cloud architecture)
   Specifically, in terms of:
   1) pricing
   2) latency resulting from slow CPU, JIT compiles, etc..
   3) latency resulting from communication of processes inside the cloud
   (e.g. a queuing process and a calculation process)
   4) latency of communication between cloud and end user

   A usage scenario I am expecting is:
   - a typical user sends a query (XML of size around 1K) once every 30
   seconds on average,
   - Each query requires a numerical computation of average time 0.2 sec
   and max time 1 sec (on a 1 GHz Pentium). The computation requires no
   data other than the query itself.
   - The delay a user experiences between sending a query and receiving a
   response should be on average no more than 2 seconds and in general no
   more than 5 seconds.
   - A background save to a DB of the response should occur (not time
   critical)
    - There can be up to 30,000 simultaneous users - i.e., on average 1000
    queries a second, each requiring an average 0.2 sec calculation, so
    that would necessitate around 200 CPUs.

   Is this feasible on GAE Java?
   If so, where can I learn about the correct design methodology for such
   a project on GAE?

   If this is the wrong forum to ask this, I'd appreciate redirection.

   Thanks,

   Eric






[google-appengine] Re: Creating ancestor to handle transactions

2009-11-17 Thread Andy Freeman
You need to create the parent's key, but you don't need to create a
parent entity in the datastore.

You can use k = models.Parent(key_name='parent_key') to create a
parent node and then simply use k as you've been using it (without the
put or get).  (When you specify a key-name, the model instance has a
key even though it hasn't been put.)

You can also create the parent key directly with parent_key =
db.Key.from_path(models.Parent.kind(), 'parent_key') and use
parent_key where you've been using k.  (The documentation for db.Model
says that the value of parent can be an instance that has a valid key
or a key.)

Yes, ancestor queries with such keys works even though there's no
corresponding entity in the datastore.
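
Pulling the two options together (a sketch; the names follow the
original post below, and the Event creation is illustrative):

import datetime

from google.appengine.ext import db

class Parent(db.Model):
    pass  # never put(); exists only to name an entity group

class Event(db.Model):
    date = db.DateTimeProperty(required=True)
    name = db.StringProperty(required=True)

# Build the group's root key with no datastore round trip.
parent_key = db.Key.from_path(Parent.kind(), 'parent_key')

def txn():
    # The ancestor query works even though no Parent entity was stored.
    events = db.GqlQuery('SELECT * WHERE ANCESTOR IS :1',
                         parent_key).fetch(10)
    e = Event(date=datetime.datetime.now(), name='example',
              parent=parent_key)
    e.put()
    return e

db.run_in_transaction(txn)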

On Nov 11, 1:31 pm, Simo Salminen ssalm...@gmail.com wrote:
 I have database entities that I want to change in transaction. For
 this I have to create an entity group. However, the entities don't
 have any natural parent entity, so I have created a dummy parent
 entity. Is there another good way to do this?

 Currently I am using the system below. First, I have following data
 model.

 class Event(db.Model):
     date = db.DateTimeProperty(required=True)
     name = db.StringProperty(required=True)

 I have created a dummy parent:
 class Parent(db.Model):
     pass

 Creating parent and defining the parent for the entity (this code is
 inside the transaction):
 try:
         k = db.get(parent_key)
 except db.BadKeyError:
         k = models.Parent(key_name=parent_key)
         k.put()

 ancestor_key = k.key()
 q = db.GqlQuery('SELECT * WHERE ANCESTOR IS :1', ancestor_key)
 # (using the query results here, code clipped)
 e = models.Event(date=date, name=name, parent=k)
 e.put()

  Can I avoid creating this dummy Parent model and satisfy the
  entity group/transaction requirement in a more elegant way?





[google-appengine] Re: How does the App Engine deployment work?

2009-10-30 Thread Andy Freeman

It's probably dead simple.

Each cluster can have a db that says what apps and versions it can
handle.  For each app and version that it handles, it has a copy of
the code in its local GFS.  That gives every server in that cluster
access to the code.  Since this code is read-only, each server can
cache.

When you deploy a new application or new version, it goes to a cluster
which updates its db and local GFS.  It then starts telling other
clusters about the new application/version.

Routing for a specific version is easy.  If said router doesn't know
about that version/application, it asks clusters for that application/
version until it finds one that has it and caches that information for
a while.  (Note - I'm not using "router" to refer to an Internet
router, but to refer to the thingie that figures out where to send
requests for a given application.)

However, routing for the "current" version is tricky - the problem is
that different clusters can think that different versions are
"current".  (Deleting old versions can be tricky unless you can tell
the routers that a version is dead.  If you can, you tell all the
routers that an application/version is dead, delete that version from
all the clusters, and then remove the "is dead" notice from the
routers.)

The easy way to deal with that is to simply not let a version become
"current" until it has been copied to all appropriate clusters.

Note that the naive implementation of this idea has a race.  You can
move the race into the routers by telling them what version is
current.  However, there's still the problem of keeping multiple
routers consistent, especially if they're relying on what clusters
tell them.  One way is to temporarily tell all but one router to route
current requests for a given application to said router until said
router is ready to atomically perform the transition.

There could be a set of dedicated routers that are used only for
applications that are transitioning from one current version to
another.  With this scheme, a transition router owns the definition of
"current" wrt a given application until the clusters' dbs' definitions
of "current" are updated wrt that application.  (Of course, the file
copies and initial cluster db entries can be done before the
transition starts.)

When you're not doing a transition, the above allows an arbitrary
number of routers and clusters.  (Notice that a given version need not
be on every cluster.)  Of course, such routers can also spread the
load for a given application/version across multiple clusters and
there are some tricks to speed up the search for a cluster that has a
given application/version.

During an application's transition, a single router has to handle all
the routing load for said application, but there are very few
applications that will overwhelm a dedicated router.  In fact, most
applications won't strain a dedicated router, so you actually want to
use a given transition router to transition multiple applications
whenever possible.

On Oct 30, 5:20 am, tav t...@espians.com wrote:
 Hey App Engine team,

 I was wondering if you could share a quick high-level summary of how
 the app engine deployment works internally? I've been trying to figure
 out how it works so as to mimic the behaviour for my own framework...

 All the ways that I can think of are nowhere near the elegance of what
 App Engine offers:

 * Using a SAN for the app code and putting the code into versioned
 directories. Whilst simple, this has the downsides of cost — both in
 terms of money and latency.

 * Using something like Capistrano/Fabric to do parallel updates to
 many servers. But this doesn't really scale so well and requires a lot
 of administrative overhead.

 * Putting the app code into a distributed data store. But this has the
 downsides of having to do a datastore lookup before serving every
 request — not to mention the additional time it takes to get the code
 for a cold start...

 Would love to know how you guys do it — thanks!

 --
 love, tav

 plex:espians/tav | t...@espians.com | +44 (0) 7809 569 369
 http://tav.espians.com | http://twitter.com/tav | skype:tavespian



[google-appengine] Re: DeadlineExceededError error rate increased significantly recently

2009-10-30 Thread Andy Freeman

See http://code.google.com/p/googleappengine/issues/detail?id=1298 .

On Oct 30, 5:24 am, Stephen sdea...@gmail.com wrote:
 On Oct 29, 11:20 pm, Tim Hoffman zutes...@gmail.com wrote:

  Hi

  I am not using DJango, and they were asking for automagic recycling.

 It's not django specific. Unusual latency anywhere in the system,
 including Google's api calls, might trigger a DeadlineExceededError.
 The various scenarios are too byzantine to expect people to get this
 100% right. And the failure mode is pretty bad.

  I would prefer to see the ability to explicitly shut down the instance
  (and therefore let a new one respawn) if you detect the problem.

 But that's not a bad idea also.

 I haven't tested this, but you could try provoking the system into
 killing your instance. Something like:

 __too_much_memory = []

 def suicide():
     global __too_much_memory
     for n in xrange(1000 * 1000 * 1000 * 1000):
         __too_much_memory.append('If we blow our memory limit, maybe google will kill us...')



[google-appengine] Re: Memcache - Values may not be more than 1000000 bytes in length

2009-10-29 Thread Andy Freeman

You might want to look at/star
http://code.google.com/p/googleappengine/issues/detail?id=1084 .
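
One way to pre-check before setting, as the quoted question below
asks (a sketch; the pickle-based estimate only approximates what the
API measures, so leave some slack):

import pickle

from google.appengine.api import memcache

MAX_VALUE_SIZE = 1000000  # the limit named in the error below

def set_if_small(key, value):
    if len(pickle.dumps(value, 2)) > MAX_VALUE_SIZE - 1024:
        return False  # caller can fall back to splitting or skipping
    return memcache.set(key, value)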

On Oct 29, 10:52 am, jago java.j...@gmail.com wrote:
 How can I best prevent against the following error? Can I check the
 length of items before putting them in the memcache?

    File "/base/python_lib/versions/1/google/appengine/api/memcache/__init__.py", line 635, in _set_with_policy
      stored_value, flags = _validate_encode_value(value, self._do_pickle)
    File "/base/python_lib/versions/1/google/appengine/api/memcache/__init__.py", line 180, in _validate_encode_value
      'received %d bytes' % (MAX_VALUE_SIZE, len(stored_value)))
  ValueError: Values may not be more than 1000000 bytes in length;
  received 1861357 bytes



[google-appengine] Re: Datastore property type bug?

2009-10-20 Thread Andy Freeman

Note that fetching and re-storing runs into some subtle issues if
you're deleting properties.

See 
http://groups.google.com/group/google-appengine/browse_thread/thread/5a00e44cc56ae0d6/08f6b0ab02ce6f10#08f6b0ab02ce6f10

and http://code.google.com/p/googleappengine/issues/detail?id=2251 .

On Oct 20, 6:40 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi,
 Changing the model definition will not update entities already written to
 the datastore. If you need the existing entities indexed on this field, you
 will need to iterate through them all, fetching and re-storing them.

 -Nick Johnson





 On Mon, Oct 19, 2009 at 9:48 PM, tamakun f...@ecksor.com wrote:

  Can anyone tell me if they've run into a similar problem to the
  following?

  I had an entity with a TextProperty originally, i wanted to add a
  StringProperty so I updated the model and added new indexes for
  querying based on the new StringProperty.

  It seems that assigning a StringProperty property with a TextProperty
  property will change the type of that property to db.Text so that it
  doesn't show up in queries/filters because those aren't indexable, but
  there doesn't seem to be a way to change it back.  I've tried
  assigning None and doing put() on these entities but getting the same
  result (don't show up in queries that filter based on the new
  StringProperty).

 --
 Nick Johnson, Developer Programs Engineer, App Engine
 Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
  368047



[google-appengine] Re: Heavy vs. Light entities

2009-10-14 Thread Andy Freeman

  So now I am thinking of breaking up the WriteUps class into several
  classes, putting all the lightweight data that I need to access
  frequently in one class and all the heavy data that I rarely need to
  access in another and using ancestor relationships and transactions to
  make sure everything stays together.
  Does this make any sense, or I am fundamentally misunderstanding how
  app engine fetches data?

 Yes, that sounds like a good idea.

Where's the tradeoff point?

For example, suppose that the heavy data is accessed 90% of the time,
but all of the accesses are via a single db.get and the heavy data
instance is always in the same entity group.  ("Via a single db.get"
means that if the heavy data is in a separate instance and it's
needed, the same db.get is used for both the light data and the heavy
data instances.)
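
Concretely, the single-db.get case could look like this (a sketch;
storing Heavy as a child of its Light with the same key_name is an
assumption, made only so both keys are derivable without a query):

from google.appengine.ext import db

class Light(db.Model):
    location = db.StringProperty()

class Heavy(db.Model):
    content = db.TextProperty()

def get_both(key_name):
    light_key = db.Key.from_path(Light.kind(), key_name)
    heavy_key = db.Key.from_path(Light.kind(), key_name,
                                 Heavy.kind(), key_name)
    # One batch get, one round trip, both halves.
    return db.get([light_key, heavy_key])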


On Oct 13, 3:49 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi Shailen,





 On Sun, Oct 11, 2009 at 4:09 PM, Shailen shailen.t...@gmail.com wrote:

  Consider 2 classes:

  class Light(db.Model):
   location = db.StringProperty()

  class Heavy(db.Model):
   location = db.StringProperty()
   content = db.TextProperty()
   image1 = db.BlobProperty()
    image2 = db.BlobProperty()
   image3 = db.BlobProperty()
   ..., etc., etc. ...

  I have 2 choices when getting information from the datastore: I can
  either fetch just the keys, or I can
  fetch the entities themselves. There isn't any way to fetch only a
  part of an entity (say just the location property in entities of type
  Heavy). Assuming this is correct, I have a couple of questions:

  1) If I want to access the location property in both Light and Heavy,
  is it fair to assume that accessing Light will be speedier than
  accessing Heavy, since Heavy has to fetch all sorts of data that I
  don't need but that is part of the entity?

 Yes.



  2) Would it make sense to break down Heavy into more lightweight
  classes for more efficient lookup?

 In the example above, definitely, yes.



  This question is probably best clarified by discussing what's
  happening in my actual application where I allow an Author to create
  several WriteUps. The WriteUps have numerous properties:  a heading, a
  sub-heading, a lot of text and, optionally, several images, and a few
  more properties. When displaying the contents of a WriteUp, I need to
  access all these properties. But sometimes, I need to access only 1 or
  2 lightweight properties; for instance, on a page summarizing the
  WriteUps done by an author, I need to only list the heading and sub-
  heading; I do not need to access the text or Blob data. But there is
  no way to access *just* the heading or sub-heading, right? I end up
  fetching ALL of the entity, whether I need all of it or not. Is this
  correct? Or does the datastore have some way of optimizing this?

 Entities are always fetched in their entirety.







  Reading the In the documentation (in the Queries and Indexes section),
  we are told:

  # The query is not executed until results are accessed.
  results = q.fetch(5)
  for p in results:
    print "%s %s, %d inches tall" % (p.first_name, p.last_name, p.height)

  When are results 'accessed', when fetch() is called, or when
  p.first_name, p.last_name, etc. are used? I am assuming that when
  fetch(5) is called, all 5 entities (assuming they exist) are fetched,
  and if the entities contain properties other than first_name,
  last_name or height, they are loaded up too. Is that correct?

 Yes.



  So now I am thinking of breaking up the WriteUps class into several
  classes, putting all the lightweight data that I need to access
  frequently in one class and all the heavy data that I rarely need to
  access in another and using ancestor relationships and transactions to
  make sure everything stays together.
  Does this make any sense, or I am fundamentally misunderstanding how
  app engine fetches data?

 Yes, that sounds like a good idea.

 -Nick Johnson



  - Shailen

 --
 Nick Johnson, Developer Programs Engineer, App Engine
 Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
  368047



[google-appengine] Re: What are your top three issues in appengine?

2009-10-14 Thread Andy Freeman

Also, naked domains

On Oct 14, 8:19 am, Andy Freeman ana...@earthlink.net wrote:
  2) Allow resizing images that are bigger than 1 Mb

 http://code.google.com/p/googleappengine/issues/detail?id=1422

 Yes!  There's no point in adding image support to my application
 without the ability to reduce the size of images that are over 1MByte.

 Also

 Support for in-instance caching (some way for an instance to
 understand its memory usage and a way to cause a borked instance to
 commit suicide).  See
 http://code.google.com/p/googleappengine/issues/detail?id=1298 .

 and

 Incremental conflict checking for transactions - see
 http://code.google.com/p/googleappengine/issues/detail?id=1298 .

 Of course, faster and more reliable datastore operations, but that
 doesn't need more votes.

 On Oct 6, 7:09 am, Mike mickn...@gmail.com wrote:



  My top top issues are...

   1) SSL/HTTPS Support on Google Apps domains:
   http://code.google.com/p/googleappengine/issues/detail?id=792

  I couldn't agree more. This is a huge problem for people who want to
  develop real e-commerce applications. If Google wants to encourage and
  grow their paid AppEngine customers, being able to actually collect
  money from end users is a huge priority.

  I'm writing a fairly sophisticated e-Commerce store on AppEngine at
  the moment, but I'm having to use PayPal, which is far from ideal.

   2) Allow resizing images that are bigger than 1 Mb:
   http://code.google.com/p/googleappengine/issues/detail?id=1422

  Most average users are hard pressed enough to work out how to upload
  an image, let alone understand why/how to reduce the sizes of their
  images. The 1 MB limit hurts so much I've actually opted to remove it
  from my application rather than have to support people who don't know
   how to resize.



[google-appengine] Re: Will reducing model size improve performance?

2009-10-12 Thread Andy Freeman

 There's no need to use a new model name: You can simply create new entities
 to replace the old ones, under the current model name. If you're using key
 names, you can construct a new entity with the same values as the old ones,
 and store that.

Note the precise wording.  You can't just put() the instance that you
read from the datastore (the instance that no longer has the properties
that you've deleted); you have to get(), make a new db.Model instance
with the same key, populate its properties from the instance that you
got, and put the new instance.  If you're not using key names, you
can't create that new db.Model instance (as of 1.2.5) because you
can't create an instance with a specified id.

The problem is in db.Model._to_entity() (and maybe
db.Expando._to_entity()).  If the instance was created from a protocol
buffer, put() tries to reuse said protocol buffer, and it still
contains values for properties that you've deleted.  These values are
not deleted by _to_entity() so they end up being sent back to the
datastore.

I've filed
http://code.google.com/p/googleappengine/issues/detail?id=2251 .
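
A sketch of the workaround via the low-level API that Nick mentions
below ('old_property' is this thread's example name):

from google.appengine.api import datastore

def scrub(key, dead_properties=('old_property',)):
    # Low-level entities are dict-like, so a property that no longer
    # exists on the model can be deleted explicitly before re-putting.
    entity = datastore.Get(key)
    for name in dead_properties:
        if name in entity:
            del entity[name]
    datastore.Put(entity)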


On Oct 10, 1:29 pm, Nick Johnson (Google) nick.john...@google.com
wrote:
 On Sat, Oct 10, 2009 at 6:27 PM, Jason Smith 
 j...@proven-corporation.comwrote:







  Thanks for the help guys. I think this is an important matter to have
  cleared up.

  It's bedtime here (GMT+7) however tomorrow I think I will do some
  benchmarks along the lines of the example I wrote up in the SO
  question.

  At this point I would think the safest thing would be to completely
  change the model name, thereby guaranteeing that you will be writing
  entities with fresh keys. However I suspect it's not necessary to go
  that far. I'm thinking that on the production datastore, changing the
  model definition and then re-put()ing the entity will be what's
  required to realize a speed benefit when reducing the number of
  properties on a model. But the facts will speak for themselves.

 There's no need to use a new model name: You can simply create new entities
 to replace the old ones, under the current model name. If you're using key
 names, you can construct a new entity with the same values as the old ones,
 and store that.

 You can also use the low-level API in google.appengine.api.datastore; this
 provides a dict-like interface from which you can delete unwanted fields.

 -Nick Johnson





  On Oct 11, 12:17 am, Andy Freeman ana...@earthlink.net wrote:
In other words: if I want to reduce the size of my entities, is
it necessary to migrate the old entities to ones with the new
definition?

   I'm pretty sure that the answer to that is yes.

             If so, is it sufficient to re-put() the entity, or must I
save under a wholly new key?

   I think that it should be sufficient re-put() but decided to test that
   hypothesis.

   It isn't sufficient in the SDK - the SDK admin console continues to
   show values for properties that you've deleted from the model
   definition after the re-put().  Yes, I checked to make sure that those
   properties didn't have values before the re-put().

   I did the get and re-put() in a transaction, namely:

   def txn(key):
       obj = Model.get(key)
       obj.put()
   assert db.run_in_transaction(txn, key)

   I tried two things to get around this problem.  The first was to add
   db.delete(obj.key()) right before obj.put().  (You can't do obj.delete
   because that trashes the obj.)

   The second was to add obj.old_property = None right before the
   obj.put() (old_property is the name of the property that I deleted
   from Model's definition.)

   Neither one worked.  According to the SDK's datastore viewer, existing
   instances of Model continued to have values for old_property after I
   updated them with that transaction even with the two changes, together
   or separately.

   If this is also true of the production datastore, this is a big deal.

   On Oct 10, 4:44 am, Jason Smith j...@proven-corporation.com wrote:

Hi, group. My app's main cost (in dollars and response time) is in the
db.get([list, of, keys, here]) call in some very high-trafficked code.
I want to pare down the size of that model to the bare minimum with
the hope of reducing the time and CPU fee for this very common
activity. Many users who are experiencing growth in the app popularity
probably have this objective as well.

I have two questions that hopefully others are thinking about too.

1. Can I expect the API time of a db.get() with several hundred keys
to reduce roughly linearly as I reduce the size of the entity?
Currently the entity has the following data attached: 9 String, 9
Boolean, 8 Integer, 1 GeoPt, 2 DateTime, 1 Text (avg size ~100 bytes
FWIW), 1 Reference, 1 StringList (avg size 500 bytes). The goal is to
move the vast majority of this data to related classes so that the
core fetch of the main model will be quick

[google-appengine] Re: Using transactions to avoid stale memcache entries.

2009-10-10 Thread Andy Freeman

 Update memcache after the transaction completes. There's still the
 possibility that your script could fail between the two events,

Updating memcache after the transaction completes can result in
persistently inconsistent memcache data even if there's no script
failure.  Consider:

def txn(key):
    a = db.get(key)
    if not a:
        return None
    a.count += 1
    a.put()
    return a

a = db.run_in_transaction(txn, key)
if a:
    memcache.set(str(a.key()), a)

Even if there are no script failures, the order that different
processes finish the transaction is not guaranteed to be the same as
the order that those processes do the memcache.set.  That
inconsistency lasts until the memcache data times out.  (IIRC, there's
actually no guarantee that memcache data is flushed when the timeout
expires.)
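
One partial mitigation (a sketch, not a fix): pass an expiry to
memcache.set so an out-of-order set can only serve stale data until the
entry expires.  The 30-second bound is illustrative:

    MAX_STALENESS_SECS = 30  # illustrative staleness bound

    a = db.run_in_transaction(txn, key)
    if a:
        # The race above can still happen, but a losing (stale) set
        # is only served until the entry expires.
        memcache.set(str(a.key()), a, time=MAX_STALENESS_SECS)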

 but there's
 no avoiding that without transactional semantics between the datastore and
 memcache.

While such transactional semantics between memcache and datastore
would be sufficient, I don't think that they're necessary to satisfy
my requirement.  My existence argument that the requirement can be
satisfied without transactional semantics is the implementation that I
provided.  It only requires consistency checks at datastore operations
and that I address three specific script failures.  (Note that all
datastore operations after the one that runs into the conflict will be
rolled back/ignored, so there's a cost to delaying the check until
commit.  That said, I don't know if doing the consistency check once
at commit is significantly cheaper than doing it incrementally at each
datastore operation.)

The script failures that I need to address are machine, deadline, or
programming problems after/during the memcache.set and before the
commit.  The last problem is under my control and I think that I've
got a handle on deadlines.  I have to live with machine errors
everywhere else, so 

Datastore transactions are the only tool that I have to constrain the
order of operations in different processes.  I'd like them to be as
powerful as possible.


On Oct 9, 9:53 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi Andy,

 On Fri, Oct 9, 2009 at 5:08 PM, Andy Freeman ana...@earthlink.net wrote:

   They are raised inside a transaction, when a conflict is detected with
   another concurrent transaction. The transaction infrastructure will catch
   and retry these several times, and only raise it in the external code if
  it
   was unable to execute the transaction after several retries.

  Yes, but when are conflicts checked?  Specifically, is the error
  always raised by the statement in the user function that runs into the
  conflict or can it be raised later, say during transaction commit?

 Any datastore operation inside a transaction could raise this exception. It
 would be a bad idea to rely on _where_ this exception will be raised.







  I've looked at the SDK's implementation of
  RunInTransactionCustomRetries (in google/appengine/api/datastore.py).
  The except clause that catches the CONCURRENT_TRANSACTION exception protects
  the commit and not the execution of the user function.  That suggests
  that the user function is run to completion regardless of conflicts
  and that the conflict isn't acted upon until a commit is tried.

  However, your description and the documentation suggest the real
  implementation detects and acts on conflicts while running the user
  function.

  Here's a user function which demonstrates the difference.  (Yes, I
  picked an example that I care about.  I'm trying to ensure that
  memcache data is not too stale.)

  def txn():
     ...
     a.put()
     memcache.set('a', a.field)
     return a

  If the CONCURRENT_TRANSACTION exception is raised while txn is being
  run, specifically during a.put(), the memcache.set won't happen when
  db.run_in_transaction(txn) fails.  If that exception is raised after
  txn has exited and during commit (as the SDK code suggests), the
  memcache.set will happen whether or not db.run_in_transaction(txn)
  fails.

  If my understanding of the SDK code is correct and the real
  implementation works the same way, namely that conflicts are detected
  after the user function completes, how can I ensure that memcache data
  is not too stale?  (One way is to have that data expire reasonably
  quickly, but that reduces the value of memcache.)

 Update memcache after the transaction completes. There's still the
 possibility that your script could fail between the two events, but there's
 no avoiding that without transactional semantics between the datastore and
 memcache.

  Also, what's the definition of conflict?  Clearly there's a conflict
  between a user function that reads a given data store entity and one
  that writes the same entity.  However, what about the following?

  def txn1(a, b):
     # notice - no read for a or b
     a.put()
     b.put()
     return True

  Does the conflict detection system detect the conflict between
  transactions

[google-appengine] Re: Deleting / Hoarding of Application Ids

2009-10-10 Thread Andy Freeman

Note that yadayada.appspot.com is reserved for the owner of
yaday...@gmail.com.  This association seems reasonable, but means that
any name recycling/reclaiming for appengine must also address gmail
and perhaps other google accounts.

On Oct 8, 10:09 pm, Donzo para...@gmail.com wrote:
 Are there plans to enable deleting Application Ids?  Are there plans
 to eventually expire (auto-delete) Application Ids reserved but
 unused???

 I'm just getting into GAE and am extremely interested in using it for
 several web sites we now operate.  One issue is that almost all the
 GAE application ids corresponding to domains we own are already
 taken ... but none have a live site running.  For example, we own
 yadayada.com (not really), but I've found that yadayada.appspot.com is
 already reserved but doesn't have an active web site running.  This is
 true for almost all of my domains, some of them rather unique so I
 wonder if appspot.com is not seeing a lot of hoarding of good names
 since it's free.

 This is important to me because of the current inability to use https
 (ssl) AND my domain name for a GAE hosted web site.  I do need to
 direct my users to https for some pages ... and am willing to do so if
 I can get yadayada.appspot.com.  But since that's already taken, I
 would have to use something else ... which will spook some of my users
 into thinking it's an impostor web site (asking for id and password no
 less!!!).



[google-appengine] Re: Will reducing model size improve performance?

2009-10-10 Thread Andy Freeman

 In other words: if I want to reduce the size of my entities, is
 it necessary to migrate the old entities to ones with the new
 definition?

I'm pretty sure that the answer to that is yes.

  If so, is it sufficient to re-put() the entity, or must I
 save under a wholly new key?

I think that it should be sufficient to re-put(), but decided to test that
hypothesis.

It isn't sufficient in the SDK - the SDK admin console continues to
show values for properties that you've deleted from the model
definition after the re-put().  Yes, I checked to make sure that those
properties didn't have values before the re-put().

I did the get and re-put() in a transaction, namely:

def txn(key):
    obj = Model.get(key)
    obj.put()
    return obj
assert db.run_in_transaction(txn, key)

I tried two things to get around this problem.  The first was to add
db.delete(obj.key()) right before obj.put().  (You can't do obj.delete
because that trashes the obj.)

The second was to add obj.old_property = None right before the
obj.put() (old_property is the name of the property that I deleted
from Model's definition.)

Neither one worked.  According to the SDK's datastore viewer, existing
instances of Model continued to have values for old_property after I
updated them with that transaction even with the two changes, together
or separately.

If this is also true of the production datastore, this is a big deal.
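
For concreteness, a sketch of the replacement approach Nick suggests
elsewhere in this thread, assuming key names are in use (the model and
property names are illustrative):

    def rewrite(key_name):
        def txn():
            old = Model.get_by_key_name(key_name)
            # A freshly constructed instance carries no leftover
            # protocol buffer, so properties dropped from the model
            # definition are not written back.  Copy only the
            # properties that survive.
            Model(key_name=key_name,
                  kept_property=old.kept_property).put()
        db.run_in_transaction(txn)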


On Oct 10, 4:44 am, Jason Smith j...@proven-corporation.com wrote:
 Hi, group. My app's main cost (in dollars and response time) is in the
 db.get([list, of, keys, here]) call in some very high-trafficked code.
 I want to pare down the size of that model to the bare minimum with
 the hope of reducing the time and CPU fee for this very common
 activity. Many users who are experiencing growth in the app popularity
 probably have this objective as well.

 I have two questions that hopefully others are thinking about too.

 1. Can I expect the API time of a db.get() with several hundred keys
 to reduce roughly linearly as I reduce the size of the entity?
 Currently the entity has the following data attached: 9 String, 9
 Boolean, 8 Integer, 1 GeoPt, 2 DateTime, 1 Text (avg size ~100 bytes
 FWIW), 1 Reference, 1 StringList (avg size 500 bytes). The goal is to
 move the vast majority of this data to related classes so that the
 core fetch of the main model will be quick.

 2. If I do not change the name of the entity (i.e. just delete all the
 db.*Property definitions in the model), will I still incur the same
 high cost fetching existing entities? The documentation says that all
 properties of a model are fetched simultaneously. Will the old
 unneeded properties still transfer over RPC on my dime and while users
 wait? In other words: if I want to reduce the size of my entities, is
 it necessary to migrate the old entities to ones with the new
 definition? If so, is it sufficient to re-put() the entity, or must I
 save under a wholly new key?

 Thanks very much to anyone who knows about this matter!



[google-appengine] Re: Using transactions to avoid stale memcache entries.

2009-10-09 Thread Andy Freeman

 They are raised inside a transaction, when a conflict is detected with
 another concurrent transaction. The transaction infrastructure will catch
 and retry these several times, and only raise it in the external code if it
 was unable to execute the transaction after several retries.

Yes, but when are conflicts checked?  Specifically, is the error
always raised by the statement in the user function that runs into the
conflict or can it be raised later, say during transaction commit?

I've looked at the SDK's implementation of
RunInTransactionCustomRetries (in google/appengine/api/datastore.py).
The except clause that catches the CONCURRENT_TRANSACTION exception protects
the commit and not the execution of the user function.  That suggests
that the user function is run to completion regardless of conflicts
and that the conflict isn't acted upon until a commit is tried.

However, your description and the documentation suggest the real
implementation detects and acts on conflicts while running the user
function.

Here's a user function which demonstrates the difference.  (Yes, I
picked an example that I care about.  I'm trying to ensure that
memcache data is not too stale.)

def txn():
    ...
    a.put()
    memcache.set('a', a.field)
    return a

If the CONCURRENT_TRANSACTION exception is raised while txn is being
run, specifically during a.put(), the memcache.set won't happen when
db.run_in_transaction(txn) fails.  If that exception is raised after
txn has exited and during commit (as the SDK code suggests), the
memcache.set will happen whether or not db.run_in_transaction(txn)
fails.

If my understanding of the SDK code is correct and the real
implementation works the same way, namely that conflicts are detected
after the user function completes, how can I ensure that memcache data
is not too stale?  (One way is to have that data expire reasonably
quickly, but that reduces the value of memcache.)

Also, what's the definition of conflict?  Clearly there's a conflict
between a user function that reads a given data store entity and one
that writes the same entity.  However, what about the following?

def txn1(a, b):
    # notice - no read for a or b
    a.put()
    b.put()
    return True

Does the conflict detection system detect the conflict between
transactions with txn1 for the same datastore entities?  (The intent
of transactions with txn1 is to ensure that a and b are mutually
consistent in the datastore.)

Speaking of definitions of conflict, suppose that conflicts actually
are detected/handled while the user function is being run, so that txn/
txnw can not leave the datastore and memcache inconsistent for very
long.  Are txnw and txnr (below) seen as conflicting given the same
key?  (They're not conflicting as far as the datastore is concerned,
but remember - I'm trying to keep memcache consistent as well.)

def txnw(key, new_value):
    v = db.get(key)
    v.field = new_value
    db.put(v)
    memcache.set(str(key), v.field)
    return True

def txnr(key):
    v = db.get(key)
    memcache.set(str(key), v.field)
    return True

Thanks,
-andy



On Oct 9, 4:45 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi Andy,

 On Tue, Oct 6, 2009 at 8:45 PM, Andy Freeman ana...@earthlink.net wrote:

  Short version.

  When, exactly, are apiproxy_errors.ApplicationErrors
  with .application_error ==  datastore_pb.Error.CONCURRENT_TRANSACTION
  raised.

 They are raised inside a transaction, when a conflict is detected with
 another concurrent transaction. The transaction infrastructure will catch
 and retry these several times, and only raise it in the external code if it
 was unable to execute the transaction after several retries.

 -Nick Johnson



 --
 Nick Johnson, Developer Programs Engineer, App Engine
 Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
 368047



[google-appengine] Re: Using transactions to avoid stale memcache entries.

2009-10-06 Thread Andy Freeman

Short version.

When, exactly, are apiproxy_errors.ApplicationErrors
with .application_error == datastore_pb.Error.CONCURRENT_TRANSACTION
raised?





[google-appengine] Using transactions to avoid stale memcache entries.

2009-10-06 Thread Andy Freeman

# I'd like to use memcache to store db.Model instances but I'd like to
# guarantee that memcache entries do not stay stale for very long.  At
# the same time, I'd like to use long memcache timeouts, significantly
# longer than the staleness constraint.

# For the purposes of discussion, I've defined five models, T1-T5,
# with different implementations of upd(), the method that updates the
# datastore for a given instance, and latest, which is supposed to get
# the latest version of such an instance from either memcache or the
# datastore.

# I defined these models because they're useful in describing the
# problems that I've run into.  Each one comes with some questions
# about how transactions work.

# T1 and T3 do not satisfy my "not too stale" requirement due to race
# conditions which I describe in-line.  T2, T4, and T5 are my attempts
# to eliminate those races.

# Note - caching all db.get() results violates my "not very stale"
# requirement.  The discussion for T3 below shows why/how.

# I'm reasonably confident that T2 satisfies my "not too stale"
# requirement but it isn't as effective at keeping entries in memcache
# as T4 or T5.  However, I don't know if T4 satisfies that
# requirement.  If T4 doesn't satisfy that requirement, I don't know
# if it's possible to have something that both satisfies that
# requirement and is more effective than T2 without doing something
# like T5.

# As I discuss below, T5 may not work either and isn't applicable in
# many circumstances.

# FWIW, the specific data types in these examples are just to help me
# describe the issues.  Hacks which use characteristics of said data
# types to meet the requirement are cool and everything, but the data
# types that I care about are different, so 

from google.appengine.ext import db
from google.appengine.api import memcache

# Common code for T1-T4.
class TBase(db.Model):
    num_updates = db.IntegerProperty(default=0)

    @classmethod
    def run_txn(cls, fn):
        # This succeeds or a deadline error happens.  Let's ignore the
        # latter.
        return db.run_in_transaction_custom_retries(1 << 20, fn)

    @classmethod
    def memcache_key(cls, key):
        return str(key)

    @classmethod
    def from_cache(cls, key):
        return memcache.get(cls.memcache_key(key))

    _memcache_key = property(lambda self: self.memcache_key(self.key()))

    def from_ds(self):
        return self.get(self.key())

    def cache(self):
        # I assume that any previous entry is deleted if memcache.set
        # fails.  If that's not true, uncomment the next line.
        # self.flush()
        memcache.set(self._memcache_key, self)

    def flush(self):
        memcache.delete(self._memcache_key)


# T1 doesn't work because there's a race in upd() after the
# transaction.
class T1(TBase):
    # Yes, self is stale after upd().
    def upd(self):
        def txn():
            # ... and self may be stale before upd, so ...
            s = self.from_ds()
            s.num_updates += 1
            s.put()
            return s
        # Calls to cache() don't necessarily complete in the same
        # order as calls to the corresponding put().
        self.run_txn(txn).cache()

    # Want instance with latest num_updates for key.
    @classmethod
    def latest(cls, key):
        obj = cls.from_cache(key)
        if not obj:
            obj = cls.get(key)
        return obj


# T2 works if consistency related exceptions only occur during
# datastore operations and continue to occur until a transaction
# succeeds.  In other words, T2 works if any transaction that
# completes its datastore put() is guaranteed to complete the cache()
# before any other instance completes its datastore put().

# However, T2 only writes newly updated entries into memcache.  Once
# entries expire, they're not cached again.

class T2(TBase):
    # self is still stale after upd().
    def upd(self):
        def txn():
            s = self.from_ds()
            s.num_updates += 1
            s.put()
            # Are these calls to cache() guaranteed to occur in the
            # same order as successful calls to put()?
            s.cache()
        self.run_txn(txn)

    @classmethod
    def latest(cls, key):
        obj = cls.from_cache(key)
        if not obj:
            obj = cls.get(key)
        return obj


# Suppose that T2 works, but we want to cache only those entries that
# are being actively read.  (Refreshing the cache from a datastore
# read also helps with entries that fall out of memcache for some
# reason.)  One way to satisfy that want is to memcache instances
# that were read from the datastore outside of upd().

# T3 is a naive attempt to implement that want.  It doesn't work
# because there's a race between upd() and latest().

# We could use timeouts for the memcache write in latest() to limit
# the lifetime of potentially stale entries.  (This assumes that
# memcache timeouts actually work.)  However, the tighter the limit on
# 

[google-appengine] Re: normalisation in datastore

2009-09-30 Thread Andy Freeman

 This is said because the datastore has cheap diskspace and there isn't
 good support for joins here.

Actually, that's not why folks say "denormalize" on app engine.

They say "denormalize" because the app engine infrastructure makes a lot
of normalization difficult.  That difficulty is a consequence of an
architecture designed for absurd amounts of scalability and a
read-mostly assumption.

Note that this is true regardless of the cost of doing the required
multiple updates.  There are applications that can't run on app engine
because those updates take too long.  (Some of that update cost is a
consequence of the app engine consistency model, which also has costs
and benefits.)

Normalization is not a free lunch - the general cases have inherent
costs.  By supporting normalization only in very specific cases, app
engine avoided paying those costs and got some benefits.  If
normalization is worth more to you than those other benefits, app
engine is not for you.  If those other benefits are worth more than
normalization, app engine may be an option.

On Sep 29, 3:48 am, clay clay.lenh...@searchlatitude.com wrote:
 This is said because the datastore has cheap diskspace and there isn't
 good support for joins here.

 However, we'll relearn why normalization is good.

 Denormalization means that what would normally modify a single cell
 will have to modify many rows.  This isn't ideal in a relational
 database, nor in the datastore.

 I wouldn't drink the "normalization isn't for the datastore" Kool-Aid
 too much. ;)

 On Sep 28, 6:06 pm, Wooble geoffsp...@gmail.com wrote:



  The datastore isn't a relational database, so articles about
  relational databases for the most part don't apply.

  On Sep 28, 7:54 am, jerry ramphisa jere...@gmail.com wrote:

   Hi there,

   I hear normalizing the database is bad if you are using google
   datastore. On other other hand, most articles mention people should
   normalize databases, it doesn't matter what kind.

    Can someone give me light here.



[google-appengine] Secure bandwidth charges and quotas

2009-09-28 Thread Andy Freeman

http://code.google.com/appengine/docs/quotas.html#Resources ,
http://code.google.com/appengine/docs/java/urlfetch/overview.html#Quotas_and_Limits
, and 
http://code.google.com/appengine/docs/python/urlfetch/overview.html#Quotas_and_Limits
all mention "Secure Outgoing Bandwidth" and "Secure Incoming
Bandwidth" but there's no documentation of the free vs. billing-enabled
quota for secure bandwidth on 
http://code.google.com/appengine/docs/quotas.html#Resources
and no mention of the charges on 
http://code.google.com/appengine/docs/billing.html#Billable_Quota_Unit_Cost
.

What are the quotas on secure bandwidth and what are the charges?

Thanks,
-andy





[google-appengine] Re: PreCalculate datastore size

2009-09-22 Thread Andy Freeman

See http://code.google.com/p/googleappengine/issues/detail?id=1084,
especially the May 12 comment.
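
For measuring an entity's encoded size directly, a sketch using
db.model_to_protobuf, which I believe is the documented spelling of the
call Nick describes below:

    from google.appengine.ext import db

    class Space(db.Model):
        x = db.IntegerProperty()
        y = db.IntegerProperty()
        z = db.IntegerProperty()

    def encoded_size(instance):
        # Length of the entity's protocol buffer encoding; index
        # entries and other storage overhead are not included.
        return len(db.model_to_protobuf(instance).Encode())

    print encoded_size(Space(x=1, y=2, z=3))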

On Sep 22, 2:48 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi Neves,
 Entities are stored in Protocol Buffer format, which is detailed 
 here: http://code.google.com/apis/protocolbuffers/docs/encoding.html

 You can get the encoded Protocol Buffer for an entity by calling
 db.entity_to_proto(x).Encode(), which will return a binary string - the
 length of this is the size of it in the datastore. Unfortunately, there's
 not currently any way to reliably measure index size, or total overhead.

 -Nick Johnson





 On Fri, Sep 18, 2009 at 6:17 AM, Neves marcos.ne...@gmail.com wrote:

  How can I calculate how much data my model will use?
  for example, a long field is 8 bytes long?
  A key field hash is always a 40 bytes length?
  What about indexes?

  Supose I have a model like this, with no special index:
  class Space(db.Model):
   x = db.IntegerProperty()
   y = db.IntegerProperty()
   z = db.IntegerProperty()

  1000 records like this would have:

  1000 * (3*8) * 40 = 960,000 bytes

  What about the index size?

 --
 Nick Johnson, Developer Programs Engineer, App Engine
 Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
  368047



[google-appengine] Re: Request file size limit?

2009-09-03 Thread Andy Freeman

It is odd to suggest that maximum request size implies something about
scalability.

For example, the Google search api (the one that uses the search
engine) has a 2KByte size limit.  The search box itself has similar
limits.

If I write a simple text comparison program that allows 1MByte
queries, would you say that said program scales more than Google
search?

If Google changed the search limit to 10KByte, would you say that
they'd made search more scalable?  Would changing it to 500 bytes mean
that it was less scalable?

On Sep 2, 8:55 am, angelod101 angelod...@gmail.com wrote:
 I read on stackoverflow that the maximum file size you can request
 with urlfetch is 1MB, is this true? That doesn't seem very large for a
 platform made to scale. I have a zip file that can be > 15MB and I was
 looking at app engine to save/unzip the file, parse it into the data
 store and build a web service to expose the data. Will I be able to
 download such a large file with app engine? Thanks.



[google-appengine] Re: Request Slow Down

2009-08-26 Thread Andy Freeman

Is there any reason to prefer multiple app.yaml entries over a few?

That is, one can match /pages/.*, /users/.*, /tasks/.*, /foo/.*, etc
with separate app.yaml entries, followed by a catch-all app.yaml
entry, each with its own handler file, each file with its own wsgi
application, or with a single app.yaml entry (/.*) and a handler file
with a wsgi application that has a clause for each of those cases.
(Assume that each handler file defines main() so it will be cached.)

Is there any difference between these two approaches wrt reuse or other
implementation issues?

For example, if an instance is created for one handler, will it be
used for another?  (The documentation says that handler files can be
cached like any other module, but doesn't say how that interacts with
instance reuse.)
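
To make the comparison concrete, a sketch of the single-entry-point
variant: one catch-all app.yaml entry (/.*) pointing at one handler
file whose wsgi application has a clause per prefix (the handler
classes are illustrative):

    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class PageHandler(webapp.RequestHandler):
        def get(self):
            self.response.out.write('pages')

    class UserHandler(webapp.RequestHandler):
        def get(self):
            self.response.out.write('users')

    # One wsgi application with a clause for each prefix; the
    # alternative is a separate app.yaml entry and handler file per
    # prefix.
    application = webapp.WSGIApplication([
        ('/pages/.*', PageHandler),
        ('/users/.*', UserHandler),
    ])

    def main():
        # Defining main() lets the runtime cache this handler module.
        run_wsgi_app(application)

    if __name__ == '__main__':
        main()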

Thanks,
-andy

On Aug 26, 4:31 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi PubliusX,

 On Tue, Aug 25, 2009 at 8:38 PM, PubliusX arjun.va...@gmail.com wrote:

  Hey, thanks.. but I figured out the problem.  Apparently if there are
  too many requesthandlers (referenced in the handler script), then
  appengine doesn't cache the handler script.. At least thats what I
  think.. because I reduced the number by removing an arbitrary 5-6
  classes and its back to the old time.

 That's definitely not the case. App Engine doesn't even know how many
 RequestHandlers you're using, or even check what framework you're using. You
 were probably just getting a new instance on each request.

 -Nick Johnson



 --
 Nick Johnson, Developer Programs Engineer, App Engine




[google-appengine] Multiple custom sub domains?

2009-08-14 Thread Andy Freeman

http://a.tinythread.com is actually serviced by http://tinythread.appspot.com
.

The use of "a" instead of "www" made me wonder if it's possible to set
up things so that b.tinythread.com also works.

In short, can one set up multiple custom subdomains for a given App
Engine application?  For example, could the owner of tinythread.com
and tinythread.appspot.com have set things up so that
a.tinythread.com, b.tinythread.com, ..., many.tinythread.com all use
tinythread.appspot.com ?

I suppose that someone else might be interested in whether one can use
a different app engine application for different custom subdomains.

Thanks,
-andy




[google-appengine] Re: Basic Event Tracking Question

2009-08-12 Thread Andy Freeman

I just realized that I should have phrased this as a question.

Is it true that updating a single entity more than once a second is
problematic?

How often can one update a single entity (using a transaction) with a
low likelihood of contention?

If the answer depends on the number of indices it's in, what are some
reasonable rules of thumb?
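
For reference, a sketch of the single-entity pattern whose update rate
is at issue: a transactional get-or-create keyed by name (the model
and property names are illustrative):

    from google.appengine.ext import db

    class Interaction(db.Model):
        count = db.IntegerProperty(default=0)

    def record(metadata):
        # Key naming turns the lookup into a get() instead of a query.
        def txn():
            obj = Interaction.get_by_key_name(metadata)
            if obj is None:
                obj = Interaction(key_name=metadata)
            obj.count += 1
            obj.put()
        db.run_in_transaction(txn)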

Thanks,
-andy

On Aug 10, 6:10 am, Andy Freeman ana...@earthlink.net wrote:
  Finally, do look at the info on sharded counters if you expect
  individual counters to be updated more than about once a second.

 That is fairly disturbing advice as it seems to suggest that updating
 a simple entity more than once a second is problematic.

 I would have thought that it would be safe to update a given entity
 10-15 times/second and maybe even 20x/second. (10x/second is 100ms per
 update, 15 is over 65ms per update, 20 is 50ms per update.)  Frankly,
 I'm surprised that 5 times/second is too fast. (5x/second is 200ms per
 update.)

 On Aug 10, 5:24 am, Nick Johnson (Google) nick.john...@google.com
 wrote:



  On Sun, Aug 9, 2009 at 4:58 PM, Markitecht markite...@gmail.com wrote:

   Thanks Nick, makes complete sense.

   I'll write it so that for each Interaction, it finds the appropriate
   entity, increments the counter and stores it; if looking for the
   entity turns nothing up, i make a new one with a counter set to 1.

   right?

  Right. Just make sure to do it inside a transaction if you need exact
  counts. And use key naming to avoid the need to do queries.

  Finally, do look at the info on sharded counters if you expect
  individual counters to be updated more than about once a second.

  -Nick Johnson

   thanks again for the quick and kind attention.

   best,
   Christopher

   On Aug 7, 6:02 am, Nick Johnson (Google) nick.john...@google.com
   wrote:
   Hi Markitecht,

   It sounds like your best option is to have a single Interaction entity
   for each unique string. You can use the key name to ensure uniqueness.
   Then, to record a new interaction, in a transaction fetch the existing
   one (if any), increment the count, and store it.

   If you expect some interactions to be very popular (more than a few
   updates a second), you should probably look into sharded counters.

   -Nick Johnson

   On Wed, Aug 5, 2009 at 7:15 PM, Markitecht markite...@gmail.com wrote:

I am writing a dirt-simple tracking API.

For the sake of explanation, i will over-simplify my question even
further.

I have an endpoint that accepts one item of string metadata, and saves
a new instance of an Interaction object.

(the interaction object also saves the user and the date created)

How do i query Interaction to return the most popular
'interactions' (using those string metadata values), with a count for
each?

This seems *so* simple, but i just can't figure out how to do it on
AE.

Thanks,
Christopher

   --
   Nick Johnson, Developer Programs Engineer, App Engine

  --
  Nick Johnson, Developer Programs Engineer, App Engine



[google-appengine] Re: Basic Event Tracking Question

2009-08-10 Thread Andy Freeman

 Finally, do look at the info on sharded counters if you expect
 individual counters to be updated more than about once a second.

That is fairly disturbing advice as it seems to suggest that updating
a simple entity more than once a second is problematic.

I would have thought that it would be safe to update a given entity
10-15 times/second and maybe even 20x/second. (10x/second is 100ms per
update, 15 is over 65ms per update, 20 is 50ms per update.)  Frankly,
I'm surprised that 5 times/second is too fast. (5x/second is 200ms per
update.)
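
For rates beyond roughly one write per second per entity, a sketch of
the sharded-counter approach Nick points to below (the shard count is
illustrative):

    import random

    from google.appengine.ext import db

    NUM_SHARDS = 20  # illustrative; more shards spread writes further

    class CounterShard(db.Model):
        name = db.StringProperty(required=True)
        count = db.IntegerProperty(default=0)

    def increment(name):
        # Each write lands on a random shard, so no single entity has
        # to sustain more than about rate / NUM_SHARDS updates/second.
        index = random.randint(0, NUM_SHARDS - 1)
        key_name = '%s-%d' % (name, index)
        def txn():
            shard = CounterShard.get_by_key_name(key_name)
            if shard is None:
                shard = CounterShard(key_name=key_name, name=name)
            shard.count += 1
            shard.put()
        db.run_in_transaction(txn)

    def total(name):
        # Reads fan out over the shards and may be slightly stale.
        return sum(s.count
                   for s in CounterShard.all().filter('name =', name))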

On Aug 10, 5:24 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 On Sun, Aug 9, 2009 at 4:58 PM, Markitecht markite...@gmail.com wrote:

  Thanks Nick, makes complete sense.

  I'll write it so that for each Interaction, it finds the appropriate
  entity, increments the counter and stores it; if looking for the
  entity turns nothing up, i make a new one with a counter set to 1.

  right?

 Right. Just make sure to do it inside a transaction if you need exact
 counts. And use key naming to avoid the need to do queries.

 Finally, do look at the info on sharded counters if you expect
 individual counters to be updated more than about once a second.

 -Nick Johnson







  thanks again for the quick and kind attention.

  best,
  Christopher

  On Aug 7, 6:02 am, Nick Johnson (Google) nick.john...@google.com
  wrote:
  Hi Markitecht,

  It sounds like your best option is to have a single Interaction entity
  for each unique string. You can use the key name to ensure uniqueness.
  Then, to record a new interaction, in a transaction fetch the existing
  one (if any), increment the count, and store it.

  If you expect some interactions to be very popular (more than a few
  updates a second), you should probably look into sharded counters.

  -Nick Johnson

  On Wed, Aug 5, 2009 at 7:15 PM, Markitecht markite...@gmail.com wrote:

   I am writing a dirt-simple tracking API.

   For the sake of explanation, i will over-simplify my question even
   further.

   I have an endpoint that accepts one item of string metadata, and saves
   a new instance of an Interaction object.

   (the interaction object also saves the user and the date created)

   How do i query Interaction to return the most popular
   'interactions' (using those string metadata values), with a count for
   each?

   This seems *so* simple, but i just can't figure out how to do it on
   AE.

   Thanks,
   Christopher

  --
  Nick Johnson, Developer Programs Engineer, App Engine

 --
 Nick Johnson, Developer Programs Engineer, App Engine



[google-appengine] Re: Proposal: __index__ support to make certain datastore/query operations more efficient

2009-08-08 Thread Andy Freeman

 Would require an additional check in the if/else clause on write of
 the related entity which should raise an error if the entity it points
 to doesn't exist...

That's not enough - the __index__ entity could be deleted afterwards.

The relevant check in deletes is very expensive (it's a query) and
it's unclear what one can do when it occurs because you can't query
for entities with __index__ values.  (What Kind table?)

Requiring explicit key_names and/or unnecessary transaction groups
isn't really a solution.

I think that a better approach would be a different kind of query, one
that returned the entity referenced by the reference property
specified in the query.  Note that this would allow a given entity to
have the equivalent of multiple __index__ properties.

This doesn't eliminate the problem with referenced entities that don't
exist.

One possible solution is a sentinel value.  None is a not-horrible
choice.  (For db.Query.fetch(), None is unambiguous - there would be
None's in the list if the query succeeded but the entity fetch
failed.  For db.Query.get(), None would be ambiguous.  One possibility is
to throw an exception if the query succeeded but the entity fetch
failed.)

On Aug 7, 10:42 am, tav t...@espians.com wrote:
 Hey Andy,

  Cute, but there's no way to guarantee that there's an object with that
  key.

 Very good point!

 Would require an additional check in the if/else clause on write of
 the related entity which should raise an error if the entity it points
 to doesn't exist...

 And if all entities that were __index__'ed were kept in a separate
 table -- much like the Kind table, then perhaps that could be checked
  when an entity is about to be deleted and if a referring related
  entity is still around, an Exception could be raised?

 Thoughts?

  Also, redirecting queries this way means that there's no way to get
  the key (or entity) via a query so the entity can be updated, deleted,
  etc.

 Sure. But one should always be able to get the entity directly using
  its key name -- i tend to have predictable key_names for my related
 entities -- and perhaps even use the transaction descendant queries
 that landed yesterday in 1.2.4?

 --
 love, tav

  plex:espians/tav | t...@espians.com | +44 (0) 7809 569 369
  http://tav.espians.com | http://twitter.com/tav | skype:tavespian



[google-appengine] Re: Proposal: __index__ support to make certain datastore/query operations more efficient

2009-08-07 Thread Andy Freeman

 With my proposal, when an entity is indexed, the key defined in its
 __index__ property will be used in place of its own key when the
 indexes are constructed...

Cute, but there's no way to guarantee that there's an object with that
key.

Also, redirecting queries this way means that there's no way to get
the key (or entity) via a query so the entity can be updated, deleted,
etc.
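
For contrast, a sketch of the existing two-step relation-index pattern
that Julian reformulates in the quoted text below (the model and
property names are illustrative):

    from google.appengine.ext import db

    class Message(db.Model):
        body = db.TextProperty()

    class MessageIndex(db.Model):
        # Stored as a child of its Message, so key.parent() recovers
        # the message key without fetching the index entity itself.
        receivers = db.StringListProperty()

    def inbox_messages(me):
        keys = [k.parent() for k in
                MessageIndex.all(keys_only=True).filter('receivers =', me)]
        return db.get(keys)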

On Aug 7, 8:16 am, tav t...@espians.com wrote:
 Hey Julian,

  That's an interesting idea, for that particular use case.

  If I reformulate the relation index entities query as I understand
  it:
   1. [datastore] Traverse the index and return entity keys
   2. [app engine] Transform returned keys to retrieve parent entity
   3. [datastore] Batch get of entities

  Your idea:
   1. [datastore] Traverse the index to get entity keys
  3. [datastore] Batch get of entities

  So you would save step 2, a roundtrip app engine - datastore, not sure
  that's substantial though.

 Nice reformulation! However, my proposal would be just a single step:

 1. [datastore] Traverse the index and return entities

 The only additional cost should be an additional if/else clause when
  *writing* entities -- the datastore would have to look to see if the
 entity has an __index__ property. Quite a minimal cost given the
 savings otherwise!

 Why should it just be a single step? Because of the way that the
 datastore works (at least according to the presentation that Ryan gave
 last year):

 *http://snarfed.org/space/datastore_talk.html

 According to it (slide 21):

 * Each row in an index table includes the index data and entity key

 With my proposal, when an entity is indexed, the key defined in its
 __index__ property will be used in place of its own key when the
 indexes are constructed...

 Of course these special related entities would still be accessible
 by using their key names...

 Hope that makes sense.

 --
 love, tav

  plex:espians/tav | t...@espians.com | +44 (0) 7809 569 369
  http://tav.espians.com | http://twitter.com/tav | skype:tavespian



[google-appengine] Re: Predict and Limit Index Size

2009-08-05 Thread Andy Freeman

This is somewhat related to

http://code.google.com/p/googleappengine/issues/detail?id=917
and
http://code.google.com/p/googleappengine/issues/detail?id=1084

On Aug 4, 2:11 am, Koen Bok k...@madebysofa.com wrote:
 Is there any way to reliably calculate how large an index for an item
 will be? That way I could make sure my fulltext wordlists will never
 get the "exploding index" problem.

 My current index is

 - kind: SearchableIndex
   properties:
   - name: parentKind
   - name: userKey
   - name: words
   - name: words
   - name: sortKey

 Now let's say the words property has 10 words in it. Would my index
 then be 1 * 1 * 10 * 10 * 1 = 100? And could I just compare that
 against datastore._MAX_INDEXED_PROPERTIES?

 - Koen



[google-appengine] Re: Globally monotonic counter

2009-07-17 Thread Andy Freeman

You're mis-parsing the sentence.  Note that they even tell you what
they mean by "take care of messy details."

Let's look at another example.  MS's Visual {whatever} documentation
claims that it makes programming easy.  Do you think that that claim
implies that using said product will let anyone produce an O(N)
solution to an NP-complete problem?

  I worked in a distributed systems group for many years, so I know that
 many of these problems are simply inherent to distributed systems.  It
 doesn't disturb me that they exist.

You're complaining that GAE doesn't solve them.

 What bothers me is the way these
 issues are broadly *ignored* by GAE's documentation.

GAE documentation doesn't teach you how to program.  It doesn't teach
you how to make money with a web site.  It doesn't even tell you how
to do things with other systems so you can compare.  It tells you how
to do things with GAE.

Sure, I'd like better tutorials.  But, if I had a choice between
documentation that makes it possible to use a new subsystem and more
information on distributed systems programming with a GAE twist, I'll
take the former every time.




On Jul 16, 11:40 pm, n8gray n8g...@gmail.com wrote:
 On Jul 16, 10:35 pm, Andy Freeman ana...@earthlink.net wrote:

    I'm starting to think that the "GAE takes
    care of the messy details of distributed systems programming" claim is
    a bit overstated...

  Global clock consistency requires very expensive clocks accessible
  from every server with known latency (and even that's a bit dodgy).
  AFAIK, GAE doesn't provide that, but who does?

  GAE doesn't do the impossible, but also doesn't say that it does.  WRT
  the latter, would you really prefer otherwise?

 But that's just it -- in many places it's claimed that GAE makes it
 all a cakewalk.  From the datastore docs:

 
 Storing data in a scalable web application can be tricky. A user could
 be interacting with any of dozens of web servers at a given time, and
 the user's next request could go to a different web server than the
 one that handled the previous request. All web servers need to be
 interacting with data that is also spread out across dozens of
 machines, possibly in different locations around the world.

 Thanks to Google App Engine, you don't have to worry about any of
 that. App Engine's infrastructure takes care of all of the
 distribution, replication and load balancing of data behind a simple
 API—and you get a powerful query engine and transactions as well.
 

  You could argue that that's not claiming to do the impossible, but
  "you don't have to worry about any of that" is certainly not true.
 Nowhere in the documentation is there a discussion of the kinds of
 subtle gotchas that you need to be aware of when programming for this
  kind of system.  It's all just "golly, isn't this so gosh-darn easy!"
 You have to go digging to find the article on transaction isolation
 where you find out that your queries can return results that, um,
 don't match your queries.  And AFAICT you *do* have to worry about
 subsequent requests being handled by different servers, since there
 doesn't seem to be any guarantee that the datastore writes made in one
 request will be seen in the next.  Memcache doesn't have transactions,
 so it seems like guaranteeing coherence with the datastore is tricky.

 I worked in a distributed systems group for many years, so I know that
 many of these problems are simply inherent to distributed systems.  It
 doesn't disturb me that they exist.  What bothers me is the way these
 issues are broadly *ignored* by GAE's documentation.  If I wasn't a
 bit savvy about distributed systems I probably wouldn't have realized
 that clock skew could cause problems, and nothing I read in GAE's docs
 would have helped me figure it out.  So no, I don't want GAE to claim
 to do the impossible, I want them to *stop* claiming to do the
 impossible.  I would love to see some articles about the pitfalls of
 the system and how to avoid them or mitigate them.  The transaction
 isolation article is great in that respect -- I hope people at Google
 are planning more along those lines.

 Cheers,
 -n8



[google-appengine] Re: Globally monotonic counter

2009-07-16 Thread Andy Freeman

  I think I can live without uniqueness as
 long as timestamps don't go backwards.

Timestamps on a given server probably don't go backwards, but there's
no guarantee about the relationship between clocks on different
servers.
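
For what it's worth, a sketch of the transactional global counter
Albert suggests in the quoted text below: it is strictly monotonic
because every increment serializes through one entity, which is also
why it caps throughput (the kind and key name are illustrative):

    from google.appengine.ext import db

    class GlobalCounter(db.Model):
        value = db.IntegerProperty(default=0)

    def next_value():
        # All writers contend on this one entity, so expect on the
        # order of one update per second before retries pile up.
        def txn():
            counter = GlobalCounter.get_by_key_name('global')
            if counter is None:
                counter = GlobalCounter(key_name='global')
            counter.value += 1
            counter.put()
            return counter.value
        return db.run_in_transaction(txn)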

On Jul 13, 11:16 pm, n8gray n8g...@gmail.com wrote:
 Hi Albert,

 Thanks for the suggestion.  I think I can live without uniqueness as
 long as timestamps don't go backwards.  But I think my problem still
 exists with a sharded counter.  Having a counter is certainly better
 than using datetime.utcnow() and lets me assign an order to all events
 that have been generated, but that's not really the problem.  The
 tricky part is deciding, on the client end, which events to request
 based on the events you've received.  When the client asks for all
 events after time T it gets some last event with a new timestamp
 S.  But I don't think you can trust this S because there might be some
 other event in some corner of GAE with an earlier timestamp that
 hasn't yet been observed by the server that answered the client's
 request.

 I guess the root of the problem is that I know that transactions on
 entity groups give me the ACID properties but when it comes to
 operations outside of transactions I have no idea what the consistency
 model is.  Has this been described somewhere?

 Thanks,
 -n8

 On Jul 13, 7:06 pm, Albert albertpa...@gmail.com wrote:



  Hi!

  This is a quick suggestion.

  How about using a global counter (just like your title suggests). You
  can use a sharded global counter to facilitate your counting.

  And use that counter as a timestamp / bookmark.

  On every event, you read from the global counter, use that value as
  your timestamp, and then increment the global counter.

  I'm not sure of it's implications, though. I'm not also sure if it
  actually guarantees uniqueness of timestamps when two events happen
  almost at the same time.

  Perhaps you can get an idea from this.

   Enjoy!



[google-appengine] Re: Globally monotonic counter

2009-07-16 Thread Andy Freeman

  I'm starting to think that the "GAE takes
 care of the messy details of distributed systems programming" claim is
 a bit overstated...

Global clock consistency requires very expensive clocks accessible
from every server with known latency (and even that's a bit dodgy).
AFAIK, GAE doesn't provide that, but who does?

GAE doesn't do the impossible, but also doesn't say that it does.  WRT
the latter, would you really prefer otherwise?


On Jul 16, 3:47 pm, n8gray n8g...@gmail.com wrote:
 Thanks for the advice, Nick.  I'd still like to know more about the
 consistency model though.  For example, I wonder if there's any
 guarantee that two transactions on different entity groups executed by
 one process in a given order will be observed in the same order.  I
 suspect the answer is no.  I'm starting to think that the "GAE takes
 care of the messy details of distributed systems programming" claim is
 a bit overstated...

 Cheers,
 -n8

 On Jul 14, 2:27 am, Nick Johnson (Google) nick.john...@google.com
 wrote:



  Hi Nathan,

  Your best options are either to keep track of one event stream per
  game, or to use system time, and 'rewind' the timestamp a bit to
  capture any missed events, as you suggest. Global monotonic counters
  aren't very practical in large distributed systems.

  -Nick Johnson



[google-appengine] Re: Eating one's own dog food

2009-07-16 Thread Andy Freeman

 If GAE had existed
 when Larry and Sergey were developing their PageRank
 algorithm, would they have used GAE for evaluation?

No, but that has nothing to do with quotas.  GAE is pretty far from a
reasonable platform for a search engine.

 They would quickly reach quota limits,
 way before they knew if they had a viable idea.

If GAE's free quota is not sufficient to evaluate your idea, perhaps
you should evaluate it using a service that provides sufficient free
quota.

On Jul 15, 10:20 am, richard emberson richard.ember...@gmail.com
wrote:
 I understand that BigTable is behind GAE, but my concern is
 more with GAE performance and quotas. If GAE had existed
 when Larry and Sergey were developing their PageRank
 algorithm, would they have used GAE for evaluation?
 I have my doubts. They would quickly reach quota limits,
 way before they knew if they had a viable idea.

 Richard





 Tony wrote:
  Though I realize this is not exactly what you're asking, the concept
  of GAE is that it exposes some of the infrastructure that all Google
  applications rely on (i.e. Datastore) for others to use.  So, in a
  sense, Google's various applications were using App Engine before App
  Engine existed.  As far as I know, every Google service runs on the
  same homogeneous infrastructure, which is part of what makes it so
  reliable (and why the only available languages are Python and Java,
  languages used internally at Google).

  But I don't work there, so maybe I'm completely off-base.

  On Jul 15, 12:53 pm, richard emberson richard.ember...@gmail.com
  wrote:
  Eating one's own dog food:
  http://en.wikipedia.org/wiki/Eating_one's_own_dog_food
  or in this case:
  Using one's own cloud.

  Amazon' cloud is based upon the IT technology they use
  within Amazon.
  Salesforce.com's Force.com offering is what they used to
  build their CRM system.

  These cloud vendors Eat their own dog food.

  If a cloud vendor does not use their cloud offering for
  their other products and/or internal systems, one
  would have to assume that the cloud is viewed as
  a technology ghetto within their own corporation - good
  enough for others but not for ourselves.

  So, concerning the Google App Engine, are other groups
  within Google clamoring to port or build their offerings
  on top of the App Engine? If so, please be specific, what
  Google products and infrastructure and what are the schedules
  for their hosting on GAE?

  Is the GAE group supporting the Google Docs group as they
  move to use GAE? How about gmail, will the Google Gmail
  group be relying on GAE support? I have not seen emails
  from either of those internal Google groups on the GAE
  mailing list. Lastly, when will Google search be supported
  by the GAE group;

  Will those groups have to live under the same quota restrictions
  while they evaluate using GAE?  If not, why not? If they
  are unreasonable for an internal evaluation, what makes them
  reasonable for an external evaluation?

  Evaluating whether or not GAE should be used for a particular
  application is not FREE even if one gets a very small slice
  of GAE resources with which to do the evaluation.
  Tens or hundreds of hours go into determining whether GAE has
  the right characteristics, and quotas that limit how fast one
  can work make it worse. (Yes, one can pay $$ for higher quotas,
  but during the evaluation phase $$ is out of the question.)

  Richard Emberson

  --
  Quis custodiet ipsos custodes

 --
 Quis custodiet ipsos custodes



[google-appengine] Re: Eating one's own dog food

2009-07-16 Thread Andy Freeman

   
  If you need
 more, try to create multiple accounts, schedule a roster among them, and sync your
 data among them. That is the solution. If the terms and conditions of use of
 GAE allow it, there will be open source projects for GAE clustering.

I'm pretty sure that the GAE TOS forbid that solution.


On Jul 16, 7:26 pm, Juguang XIAO jugu...@gmail.com wrote:
 Dogfooding, as Wikipedia names it, is ideological and may not be practical.
 People, as well as companies, live practically: surviving first, then
 chasing dreams.

 I have seen the positive movement from Google, which has done so much for
 developers. People may take it for granted, thinking the leader should do
 more. I am content with GAE, as it offers some free and exciting stuff. I
 cannot ask Google to give more unless it decides to. People can be happy
 when they are grateful.

 Technically speaking, I do not think Google offers its best to developers.
 It would be too costly to do so. Their mainstream businesses need to be
 maintained, and I guess each business unit has its own authority and freedom
 to do things in its own way. Core businesses and technologies need to be
 protected. If you are not happy, go for Microsoft. ;-)

 I do not believe people can run a serious business without serious pay.
 6.5 hours per day of CPU time is enough for casual applications. If you need
 more, try to create multiple accounts, schedule a roster among them, and sync your
 data among them. That is the solution. If the terms and conditions of use of
 GAE allow it, there will be open source projects for GAE clustering.

 Juguang





 On Fri, Jul 17, 2009 at 10:03 AM, GenghisOne mdkach...@gmail.com wrote:

  So it looks like there's an updated Google App Engine roadmap and
  guess what...no mention of full-text search.

  Doesn't that strike anyone as a bit odd? How can an emerging cloud
  computing platform not effectively address full-text search? And
  what's really odd is the absolute silence from Google...quite frankly,
  I don't get it.

  On Jul 16, 12:28 pm, Bryan bj97...@gmail.com wrote:
   This is a very interesting discussion.  I would like to see some input
   from Google.

   On Jul 15, 10:20 am, richard emberson richard.ember...@gmail.com
   wrote:

I understand that BigTable is behind GAE, but my concern is
more with GAE performance and quotas. If GAE had existed
when Larry and Sergey were developing their pagerack
algorithm, would they have used GEA for evaluation?
I have my doubts. They would quickly reach quota limits,
way before they knew if they had a viable idea.

Richard

Tony wrote:
 Though I realize this is not exactly what you're asking, the concept
 of GAE is that it exposes some of the infrastructure that all Google
 applications rely on (i.e. Datastore) for others to use.  So, in a
 sense, Google's various applications were using App Engine before App
 Engine existed.  As far as I know, every Google service runs on the
 same homogeneous infrastructure, which is part of what makes it so
 reliable (and why the only available languages are Python and Java,
 languages used internally at Google).

 But I don't work there, so maybe I'm completely off-base.

 On Jul 15, 12:53 pm, richard emberson richard.ember...@gmail.com
 wrote:
 Eating one's own dog food
 http://en.wikipedia.org/wiki/Eating_one's_own_dog_food
 or in this case:
 Using one's own cloud.

 Amazon's cloud is based upon the IT technology they use
 within Amazon.
 Salesforce.com's Force.com offering is what they used to
 build their CRM system.

 These cloud vendors Eat their own dog food.

 If a cloud vendor does not use their cloud offering for
 their other products and/or internal systems, one
 would have to assume that the cloud is viewed as
 a technology ghetto within their own corporation - good
 enough for others but not for ourselves.

 So, concerning the Google App Engine, are other groups
 within Google clamoring to port or build their offerings
 on top of the App Engine? If so, please be specific, what
 Google products and infrastructure and what are the schedules
 for their hosting on GAE?

 Is the GAE group supporting the Google Docs group as they
 move to use GAE? How about gmail, will the Google Gmail
 group be relying on GAE support? I have not seen emails
 from either of those internal Google groups on the GAE
 mailing list. Lastly, when will Google search be supported
 by the GAE group?

 Will those groups have to live under the same quota restrictions
 while they evaluate using GAE?  If not, why not? If they
 are unreasonable for an internal evaluation, what makes them
 reasonable for an external evaluation?

 Evaluating whether or not GAE should be used for a particular
 application is not FREE even if one 

[google-appengine] Re: CapabilityDisabledError - How to simulate and test locally?

2009-07-03 Thread Andy Freeman

You might want to comment on and star

http://code.google.com/p/googleappengine/issues/detail?id=915 .
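
In the meantime, a minimal sketch of one way to fake an outage in the
SDK, using the API proxy pre-call hooks described in
http://code.google.com/appengine/articles/hooks.html (the hook name and
the choice of which RPCs to fail are made up):

from google.appengine.api import apiproxy_stub_map
from google.appengine.runtime import apiproxy_errors

def fail_datastore_writes(service, call, request, response):
    # Simulate a read-only maintenance window: reject write RPCs the
    # way production does when datastore writes are disabled.
    if call in ('Put', 'Delete'):
        raise apiproxy_errors.CapabilityDisabledError(
            'Datastore writes are temporarily unavailable')

# Register for datastore RPCs only; remove the hook to end the "outage".
apiproxy_stub_map.apiproxy.GetPreCallHooks().Append(
    'simulate_outage', fail_datastore_writes, 'datastore_v3')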

On Jul 3, 8:19 am, Charlie charlieev...@mac.com wrote:
 Indeed.

 I was thinking it would be great to be able to disable the APIs from
 the app engine admin console, myself, so I could do testing of the
 deployed app.

 On Jul 2, 1:26 pm, mckoss mck...@gmail.com wrote:



  Unfortunately, we're all dealing with the AppEngine outage today with
  even the System Status page down!

 http://code.google.com/status/appengine

  Thank goodness for the Downtime forum - it seems to be the only
  official notification about this from Google:

 http://groups.google.com/group/google-appengine-downtime-notify/brows...

  My question is, I'd like to make my application more robust in the
  face of Service outages.  I know about the CapabilityDisabledError
  exception.  I'd like to test my application in the local environment
  and simulate failures of the datastore, memcache, etc.

  Is there an easy way to simulate service failures locally?  Anyone
  have any techniques for doing this if it's not directly supported in
  the SDK?

  Thanks!
   Mike



[google-appengine] Re: Transactionally updating multiple entities over 1MB

2009-06-27 Thread Andy Freeman

  Does that mean that db.put((e1, e2, e3,)) where all of the entities
  are 500kb will fail?

 Yes.

Thanks.

I'll take this opportunity to promote a couple of related feature
requests.

(1) We need a way to estimate entity sizes
http://code.google.com/p/googleappengine/issues/detail?id=1084

(2) We need a way to help predict when datastore operations will fail
http://code.google.com/p/googleappengine/issues/detail?id=917

I assume that db.get((k1, k2,)) can fail because of size reasons when
db.get(k1) followed by db.get(k2) will succeed.  Does db.get((k1,
k2,)) return at least one entity in that case?
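
For reference, a minimal sketch of the workaround Nick describes below
for large transactional writes -- several puts, each its own sub-1MB API
call, inside one transaction -- assuming e1, e2, and e3 are large
entities that share a single entity group:

from google.appengine.ext import db

def update_all(e1, e2, e3):
    # Each put() is a separate API call, so each stays under the 1MB
    # limit, but the transaction commits or rolls back as a unit.
    for entity in (e1, e2, e3):
        db.put(entity)

db.run_in_transaction(update_all, e1, e2, e3)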



On Jun 26, 9:36 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 On Fri, Jun 26, 2009 at 4:42 PM, Andy Freeman ana...@earthlink.net wrote:

    the 1MB limit applies only to single API calls

  Does that mean that db.put((e1, e2, e3,)) where all of the entities
  are 500kb will fail?

 Yes.



  Where are limits on the total size per call documented?

 http://code.google.com/appengine/docs/python/datastore/overview.html#...
  only mentions a limit on the size of individual entities and the total
  number of entities for batch methods.  The batch method documentation
  (http://code.google.com/appengine/docs/python/datastore/functions.html
   and http://code.google.com/appengine/docs/python/memcache/functions.html)
  does not mention any limits.

 You're right - we need to improve our documentation in that area. The 1MB
 limit applies to _all_ API calls.



  Is there a documented limit on the number of entities per memcache
  call?

 No.



  BTW - There is a typo in
 http://code.google.com/appengine/docs/python/memcache/overview.html#Q...
  .
  It says "In addition to quotas, the following limits apply to the use
  of the Mail service:" instead of "Memcache service".

 Thanks for the heads-up.

 -Nick Johnson







  On Jun 26, 7:28 am, Nick Johnson (Google) nick.john...@google.com
  wrote:
   Hi tav,

   Batch puts aren't transactional unless all the entities are in the
   same entity group. Transactions, however, _are_ transactional, and the
   1MB limit applies only to single API calls, so you can make multiple
   puts to the same entity group in a transaction.

   -Nick Johnson

   On Fri, Jun 26, 2009 at 8:53 AM, tavt...@espians.com wrote:

Hey guys and girls,

I've got a situation where I'd have to transactionally update
multiple entities which would cumulatively be greater than the 1MB
datastore API limit... is there a decent solution for this?

For example, let's say that I start off with entities E1, E2, E3 which
are all about 400kb each. All the entities are specific to a given
User. I grab them all on a remote node and do some calculations on
them to yield new computed entities E1', E2', and E3'.

Any failure of the remote node or the datastore is recoverable except
when the remote node tries to *update* the datastore... in that
situation, it'd have to batch the update into 2 separate .put() calls
to overcome the 1MB limit. And should the remote node die after the
first put(), we have a messy situation =)

My solution at the moment is to:

1. Create a UserRecord entity which has a 'version' attribute
corresponding to the latest versions of the related entities for any
given User.

2. Add a 'version' attribute to all the entities.

3. Whenever the remote node creates the computed new set of
entities, it creates them all with a new version number -- applying
the same version for all the entities in the same transaction.

4. These new entities are actually .put() as totally separate and new
entities, i.e. they do not overwrite the old entities.

5. Once a remote node successfully writes new versions of all the
entities relating to a User, it updates the UserRecord with the latest
version number.

6. From the remote node, delete all Entities related to a User which
don't have the latest version number.

7. Have a background thread check and do deletions of invalid versions
in case a remote node had died whilst doing step 4, 5 or 6...

I've skipped out the complications caused by multiple remote nodes
working on data relating to the same User -- but, overall, the
approach is pretty much the same.

Now, the advantage of this approach (as far as I can see) is that data
relating to a User is never *lost*. That is, data is never lost before
there is valid data to replace it.

However, the disadvantage is that for (unknown) periods of time, there
would be duplicate data sets for a given User... All of which is
caused by the fact that the datastore calls cannot exceed 1MB. =(

So queries will yield duplicate data -- gah!!

Is there a better approach to try at all? Thanks!

--
love, tav

plex:espians/tav | t...@espians.com | +44 (0) 7809 569 369
   http://tav.espians.com | http://twitter.com/tav

[google-appengine] Re: Transactionally updating multiple entities over 1MB

2009-06-26 Thread Andy Freeman

  the 1MB limit applies only to single API calls

Does that mean that db.put((e1, e2, e3,)) where all of the entities
are 500kb will fail?

Where are limits on the total size per call documented?
http://code.google.com/appengine/docs/python/datastore/overview.html#Quotas_and_Limits
only mentions a limit on the size of individual entities and the total
number of entities for batch methods.  The batch method documentation
(http://code.google.com/appengine/docs/python/datastore/functions.html
and http://code.google.com/appengine/docs/python/memcache/functions.html)
does not mention any limits.

Is there a documented limit on the number of entities per memcache
call?

BTW - There is a typo in 
http://code.google.com/appengine/docs/python/memcache/overview.html#Quotas_and_Limits.
It says "In addition to quotas, the following limits apply to the use
of the Mail service:" instead of "Memcache service".

On Jun 26, 7:28 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi tav,

 Batch puts aren't transactional unless all the entities are in the
 same entity group. Transactions, however, _are_ transactional, and the
 1MB limit applies only to single API calls, so you can make multiple
 puts to the same entity group in a transaction.

 -Nick Johnson





 On Fri, Jun 26, 2009 at 8:53 AM, tavt...@espians.com wrote:

  Hey guys and girls,

  I've got a situation where I'd have to transactionally update
  multiple entities which would cumulatively be greater than the 1MB
  datastore API limit... is there a decent solution for this?

  For example, let's say that I start off with entities E1, E2, E3 which
  are all about 400kb each. All the entities are specific to a given
  User. I grab them all on a remote node and do some calculations on
  them to yield new computed entities E1', E2', and E3'.

  Any failure of the remote node or the datastore is recoverable except
  when the remote node tries to *update* the datastore... in that
  situation, it'd have to batch the update into 2 separate .put() calls
  to overcome the 1MB limit. And should the remote node die after the
  first put(), we have a messy situation =)

  My solution at the moment is to:

  1. Create a UserRecord entity which has a 'version' attribute
  corresponding to the latest versions of the related entities for any
  given User.

  2. Add a 'version' attribute to all the entities.

  3. Whenever the remote node creates the computed new set of
  entities, it creates them all with a new version number -- applying
  the same version for all the entities in the same transaction.

  4. These new entities are actually .put() as totally separate and new
  entities, i.e. they do not overwrite the old entities.

  5. Once a remote node successfully writes new versions of all the
  entities relating to a User, it updates the UserRecord with the latest
  version number.

  6. From the remote node, delete all Entities related to a User which
  don't have the latest version number.

  7. Have a background thread check and do deletions of invalid versions
  in case a remote node had died whilst doing step 4, 5 or 6...

  I've skipped out the complications caused by multiple remote nodes
  working on data relating to the same User -- but, overall, the
  approach is pretty much the same.

  Now, the advantage of this approach (as far as I can see) is that data
  relating to a User is never *lost*. That is, data is never lost before
  there is valid data to replace it.

  However, the disadvantage is that for (unknown) periods of time, there
  would be duplicate data sets for a given User... All of which is
  caused by the fact that the datastore calls cannot exceed 1MB. =(

  So queries will yield duplicate data -- gah!!

  Is there a better approach to try at all? Thanks!

  --
  love, tav

  plex:espians/tav | t...@espians.com | +44 (0) 7809 569 369
 http://tav.espians.com | http://twitter.com/tav | skype:tavespian

 --
 Nick Johnson, App Engine Developer Programs Engineer
 Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration
  Number: 368047



[google-appengine] Re: why there is no way to build a key id?

2009-06-24 Thread Andy Freeman

see also

http://groups.google.com/group/google-appengine/browse_thread/thread/3f8cfeaf7dc2eb72/d5d599180fe47e02?lnk=gstq=ryan+allocate#d5d599180fe47e02
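
For the mutually-referencing case quoted below, a minimal sketch of the
key_name workaround Jeff S describes (the Foo model and the uuid naming
scheme are made up):

import uuid
from google.appengine.ext import db

class Foo(db.Model):
    ref = db.SelfReferenceProperty()

# Choosing key_names up front fixes both keys before anything is
# stored, so the mutual references need only one batch put.
a = Foo(key_name='f-' + uuid.uuid4().hex)
b = Foo(key_name='f-' + uuid.uuid4().hex)
a.ref = b.key()   # key() is already known when key_name is set
b.ref = a.key()
db.put((a, b))

(The batch put is not transactional across entity groups, but both
entities do go out in a single call.)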


On Jun 24, 11:33 am, Jeff S (Google) j...@google.com wrote:
 Hi Jeff,

 This is an idea that we're aware of as well. If you could reserve the next
 ID in advance, then you could actually do this in one put since multiple
 entities could be sent in one batch :-) The workaround available now is to
 use the key_name, but the difficulty becomes ensuring that the key_name is
 unique. Here is a feature request which I'm aware of which is along these
 lines (though the approach differs slightly):

 http://code.google.com/p/googleappengine/issues/detail?id=1003

 Thank you,

 Jeff

 2009/6/24 Jeff Enderwick jeff.enderw...@gmail.com





  Hey Jeff - sorry for the confusion. The idea was that one would be
   able to get the unique id from GOOG and then do a db.put with that
  id as an arg. For example, let's say I want to create two entities,
  with each referring to each other. I need to do three db.put
  operations:

  a = Foo()
  db.put(a)
  b = Foo()
  b.ref = a.key()
  db.put(b)
  a.ref = b.key()
  db.put(a)

  One would hope to be able to do this with two db.puts.

  Thanks,
  Jeff

  2009/6/16 Jeff S (Google) j...@google.com:

   The datastore does not allow key_names which begin with a digit in
   order to avoid confusion with an ID, which is numerical. If you want
   to use numeric key names, you could add a one letter prefix :-)

   Happy coding,

   Jeff

   On Jun 16, 1:17 am, cryb cbuti...@gmail.com wrote:
   Hi Jeff.
   Thanks for your reply.. I really hope that in the near future
   appengine will support setting key ids for entities.
   You mentioned that I can use hooks in order to achieve my goal..
   However I was more interested in a solution based on appengine java
   sdk, and not on python hooks. Does appengine java sdk provide hooks or
   some other similar mechanism?
   It seems that for the moment I'll stick to generating key names.
    One more question: I've tried to generate some entities with key names
    on my local devappserver and I got a strange exception stating that I
    can't create key names that start with a digit (?!?)... does this hold on
    Google App Engine production servers too, or is it just a bug in the
    devappserver?

   Thanks

   On Jun 16, 2:45 am, Jeff S (Google) j...@google.com wrote:

Hi cryb,

As you noted, we do not currently allow the ID for a key to be set, as
  we
ensure that the ID is unique for each existing entity. I recommend
  using a
 key name instead of an ID, as Antoniov suggested, if possible.

It is technically possible to modify the key of an entity as it is
  being
converted to a protocol buffer message before it is sent to the
  datastore.
You could do this using hooks in the API proxy as described in this
  article: http://code.google.com/appengine/articles/hooks.html  Also it is
  possible to
construct the key for the desired object if you know the ID in
  advance.

class X(db.Model):
  pass

# If you've already created the entity so you have the ID.
x_id = X().put().id()

# Instead of getting by ID, you can create the key manually.
k = db.Key.from_path('X', x_id)

Now you have the desired key without having fetched the object, but
  the part
which the model class does not allow is setting the key yourself. So
  you
could modify the protocol buffer message before it is written to the
datastore, but I don't recommend it.

The decision to allow setting key_names but not IDs is something we
  may
revisit.

Happy coding,

Jeff

2009/6/12 cryb cbuti...@gmail.com

 Hi.. that is to build key names... What I asked was why I can't
  build
 a key ID..

 On Jun 12, 5:35 am, Antoniov nio@gmail.com wrote:
  Use the code:
   s = Story(key_name="xzy123")
  Then you create an entity with the key name xzy123.

  Check this:

 http://code.google.com/intl/en-US/appengine/docs/python/datastore/key...

  On 6月12日, 上午1时28分, cryb cbuti...@gmail.com wrote:

   Does anyone know why it is possible to build a key name but NOT
  a key
   id? I know key IDs are used as autoincrements, but why can't I
  just
   override this mechanism and build my own key id?
    Suppose I want to overwrite an existing entry in my table that
  has a
   key id I know, and also I want to keep that key id after
  update...
   because I can't just build a key id, I am forced to fetch that
  entity,
   modify it and write it back, instead of just write the updated
  entity
   with the key id I already know - so an additional read to the
   datastore.
   Is there an obscure reason for that? (both key names and key ids
  are
   prefixed with appid/kind as far as I know so there is no chance
  of
   collision with other apps/kinds)

[google-appengine] Re: why there is no way to build a key id?

2009-06-16 Thread Andy Freeman

 The decision to allow setting key_names but not IDs is something we may
 revisit.

I hope that you're also considering some way to request and allocate
an unused id for a given path prefix.  (That way we can get unique key
ids to specify.)



On Jun 15, 4:45 pm, Jeff S (Google) j...@google.com wrote:
 Hi cryb,

 As you noted, we do not currently allow the ID for a key to be set, as we
 ensure that the ID is unique for each existing entity. I recommend using a
 key name instead of an ID, as Antoniov suggested, if possible.

 It is technically possible to modify the key of an entity as it is being
 converted to a protocol buffer message before it is sent to the datastore.
 You could do this using hooks in the API proxy as described in this 
 article: http://code.google.com/appengine/articles/hooks.html  Also it is
 possible to
 construct the key for the desired object if you know the ID in advance.

 class X(db.Model):
   pass

 # If you've already created the entity so you have the ID.
 x_id = X().put().id()

 # Instead of getting by ID, you can create the key manually.
 k = db.Key.from_path('X', x_id)

 Now you have the desired key without having fetched the object, but the part
 which the model class does not allow is setting the key yourself. So you
 could modify the protocol buffer message before it is written to the
 datastore, but I don't recommend it.

 The decision to allow setting key_names but not IDs is something we may
 revisit.

 Happy coding,

 Jeff

 2009/6/12 cryb cbuti...@gmail.com





  Hi.. that is to build key names... What I asked was why I can't build
  a key ID..

  On Jun 12, 5:35 am, Antoniov nio@gmail.com wrote:
   Use the code:
    s = Story(key_name="xzy123")
   Then you create an entity with the key name xzy123.

   Check this:
 http://code.google.com/intl/en-US/appengine/docs/python/datastore/key...

   On 6月12日, 上午1时28分, cryb cbuti...@gmail.com wrote:

Does anyone know why it is possible to build a key name but NOT a key
id? I know key IDs are used as autoincrements, but why can't I just
override this mechanism and build my own key id?
 Suppose I want to overwrite an existing entry in my table that has a
key id I know, and also I want to keep that key id after update...
because I can't just build a key id, I am forced to fetch that entity,
modify it and write it back, instead of just write the updated entity
with the key id I already know - so an additional read to the
datastore.
Is there an obscure reason for that? (both key names and key ids are
prefixed with appid/kind as far as I know so there is no chance of
 collision with other apps/kinds)



[google-appengine] Re: alternative to self.redirect(users.create_login_url(self.request.uri))

2009-06-11 Thread Andy Freeman

 The URL you supply needs to be absolute 
 ('http://mysite.com/choose_user.html'), not relative ('choose_user.html' or
 '/choose_user.html').

Is the documentation wrong?

From http://code.google.com/appengine/docs/python/users/functions.html
dest_url can be full URL or a path relative to your application's
domain.

The examples in 
http://code.google.com/appengine/docs/python/users/loginurls.html
use /.
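
Either way, you can build an absolute dest_url without hardcoding the
domain. A minimal sketch using Brian's page name and the request's
host_url attribute:

from google.appengine.api import users
from google.appengine.ext import webapp

class ChooseUser(webapp.RequestHandler):
    def get(self):
        user = users.get_current_user()
        if not user:
            # host_url is e.g. 'http://myapp.appspot.com', so this
            # yields an absolute URL without hardcoding the domain.
            dest = self.request.host_url + '/choose_user.html'
            self.redirect(users.create_login_url(dest))
            return
        self.response.out.write('Hello, ' + user.nickname())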



On Jun 10, 10:16 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi Brian,

 The URL you supply needs to be absolute 
 ('http://mysite.com/choose_user.html'), not relative ('choose_user.html' or
 '/choose_user.html'). You may find the self.request.host and
 self.request.host_url members helpful in constructing an absolute URL from a
 relative one without hardcoding your site's domain.

 -Nick Johnson

 On Wed, Jun 10, 2009 at 6:10 PM, thebrianschott schott.br...@gmail.com wrote:





  Nick,

  Thanks.

  Would the new code be something like this if I sent the user to a
  signin?
  alternative to self.redirect(users.create_login_url(choose_user.html))

   Brian in Atlanta



[google-appengine] Re: A question about google app engine - thread id for performance profiling

2009-06-11 Thread Andy Freeman

 While one instance has only one thread, there could be many instances
serving the same number of clients simultaneously.

Yes, and those separate instances are not threads, they're processes.
More important,
they may be running on different machines, so neither thread ids nor
process ids are guaranteed to be unique.  That's why using thread or
process ids as identifiers is guaranteed to be wrong in any system
that uses multiple machines, such as app engine.

Depending on how you're aggregating, one of the uuid variants might be
appropriate.

I tried to store some data at the thread local objects pool (just as I
   do with Java)

You're assuming persistence in an environment where persistence is
unlikely.

However, to the extent that there is persistence, you can simply store
stuff in memory because a given process may be used to run a number of
instances, one after another.  (Not simultaneously - simultaneous
instances run in different processes, possibly on different
machines.)  While an instance is running, it can see what other
instances that ran in the same process left in memory after they
finished.

However, said stuff in memory for a given process is only accessible
to the currently running instance.
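
Concretely, by a uuid variant I mean something like the sketch below:
stamp each request with its own id at the first checkpoint and tag
every profiling record with it, so records from different instances can
be grouped later (the class and labels are made up):

import logging
import uuid

class RequestProfiler(object):
    def __init__(self):
        # One id per request: unique across processes and machines,
        # unlike thread or process ids.
        self.request_id = uuid.uuid4().hex

    def checkpoint(self, label):
        logging.info('profile %s %s', self.request_id, label)

profiler = RequestProfiler()            # checkpoint 1, at the top of get()
profiler.checkpoint('before urlfetch')  # checkpoint 2, and so on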

On Jun 11, 6:58 am, Liang Han liang@gmail.com wrote:
 Nick,

 While one instance has only one thread, there could be many instances
 serving the same number of clients simultaneously.
 Thus, the metric from the multiple instances will be messed up,
 because the metric logs are recorded in the global area, such as
 memcache, or as the debug log.  (is there some area to store data per
 instance? or is there a way to get the instance id?)

 There shouldn't be any barrier for me to release the source code, but
 firstly, it has to work.

  App Engine apps run in a single-threaded environment. That is, a given
  interpreter instance will only ever be processing one request at any given
  time. Thus, you don't need to worry about concurrent invocations.

 On Jun 11, 6:15 pm, Nick Johnson (Google) nick.john...@google.com
 wrote:



  Hi Liang Han,

  (Replies inline)

  On Thu, Jun 11, 2009 at 2:15 AM, Liang Han liang@gmail.com wrote:

   A simple application, which is running at the google app engine,
   visits the datastore, and uses the urlfetch service as well.    I want
   to add the 6 hooks into the code to get the timestamp when code is
   running at that checkpoint, thus the transactions performance
   profiling can be done by calculating with the timestamps.

   Here's the flow in pseudo code:

   class MainPage(webapp.RequestHandler):
    def get(self):
      -- 1. hook at program starting
      do something ...
      -- 2. hook before fetching url
      result = urlfetch.fetch(url="http://...")
      -- 3. hook after fetching url
      do something ...
      -- 4. hook before visiting datastore
      visit datastore
      -- 5. hook after visiting datastore
      do something
      -- 6. finish hook

   Although it's straightforward to just add the hook code in the
   original code, however, in order to minimize the modification to
   existing code, I use the following approaches:

   1. Use apiproxy_stub_map.apiproxy.GetPreCallHooks().Append to add
   hooks to get data at checkpoint 2, 3, 4, 5.   I referenced the article
  http://code.google.com/appengine/articles/hooks.html

   2. Monkeypatch google.appengine.ext.webapp.WSGIApplication (assuming
   the original code is written with webapp framework) by overwritting
   def __call__(self, environ, start_response).  I just add my code
   before and after calling the real method.   Therefore, I can get data
    at checkpoints 1 and 6.     The original code just needs to be modified to
    use the subclass of WSGIApplication.

  Cool stuff you're working on! Are you planning on releasing it for others to
  use?

   Here's the problem I run into.
   Since multiple users might be visiting the same application at the
    same time, in order not to mess up data from different users, we must
   be able to get a same id, such as thread id for the 6 checkpoints of
   each user.  However, I cannot achieve this.

  App Engine apps run in a single-threaded environment. That is, a given
  interpreter instance will only ever be processing one request at any given
  time. Thus, you don't need to worry about concurrent invocations.

  -Nick Johnson

   I have tried to use thread.get_ident(), but it always returns -1.  (I
   know google app engine doesn't allow create new thread, but it doesn't
   make any sense to return -1 for threads created by google engine
   itself ...)

   I tried to store some data at the thread local objects pool (just as I
   do with Java), but seems the thread.local is not the same with the one
   in Java.    I'm not familiar with python anyway, maybe I miss
   something or misunderstand something.

    Do you guys have any idea about this?    Thanks very much in advance!

[google-appengine] Re: ReferenceProperty put() does not update the existing entity

2009-05-30 Thread Andy Freeman

 If an instance has been stored before, the put() method updates the
 existing entity.

As I wrote, Message(handle=handle,owner=person.key()) makes a new
instance.  It's a constructor - it makes a new instance regardless of
what the arguments are.  A new instance, by definition, has not been
stored before.  (No, putting an instance made by calling the
constructor with the same arguments does not change what db.Model
subclass constructors do.  Constructors make new instances.  The only
exception to that rule doesn't apply here.)

If you use key_name and parent, the constructor will make a new
instance with a specific key.  If there is a stored entity with that
key, putting said instance will overwrite the previously stored
entity.

However, the sentence quoted above does not apply in that
situation because a new instance is not an instance that has been
stored before.
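
In code, the difference looks roughly like this (a sketch reusing the
Message model from Brian's code later in this thread; the 'msg-'
key_name scheme is made up):

# Without key_name: a brand-new entity with a fresh id on every put().
m1 = Message(handle=handle, owner=person.key())
m1.put()

# With key_name: still a new *instance*, but its key is fixed, so
# put() creates or overwrites the stored entity with that key.
m2 = Message(key_name='msg-' + handle, handle=handle, owner=person.key())
m2.put()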


On May 29, 8:51 pm, thebrianschott schott.br...@gmail.com wrote:
 Andy,

 I am basing my approach on the following sentence at the
 following link.

 If an instance has been stored before, the put() method updates the
 existing entity.

 http://code.google.com/appengine/docs/python/datastore/creatinggettin...

 What am I missing?

 Brian in Atlanta

 On May 29, 11:27 pm, Andy Freeman ana...@earthlink.net wrote:



         message = Message(handle=handle,owner=person.key())

  That line always creates a new Message instance.  Why do you think
  that it should do anything else?  (The only way to get copies of
   existing instances is with get() and queries.)



[google-appengine] Re: ReferenceProperty put() does not update the existing entity

2009-05-29 Thread Andy Freeman

   message = Message(handle=handle,owner=person.key())

That line always creates a new Message instance.  Why do you think
that it should do anything else?  (The only way to get copies of
existing instances is with get() and queries.)
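
To update rather than duplicate, fetch the stored entity first. A
minimal sketch, assuming (owner, handle) identifies the message in
Brian's model below:

# Fetch the existing entity, then modify it and put it back.
message = (Message.all()
           .filter('owner =', person)
           .filter('handle =', handle)
           .get())
if message is None:
    message = Message(handle=handle, owner=person.key())
message.comment = comment
message.put()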

On May 29, 8:05 pm, thebrianschott schott.br...@gmail.com wrote:
 Wooble,

 I don't understand the difference between using a key, like I think
 my example shows I am doing, and a key_name. Can you
 direct me further, please?

 On May 29, 10:46 pm, Wooble geoffsp...@gmail.com wrote:



  No, of course not.  You can have multiple Message entities that all
  have the same owner.  If you want to update an existing entity, get
  it, modify it, and put it again, or use a key_name.

  On May 29, 9:20 pm, thebrianschott schott.br...@gmail.com wrote:

   When I use the code below, a new message with the same Person
   ReferenceProperty (and same handle) results in a new db instance,
   rather than updating the existing entity as I think it should
   according to
   the following link.  Isn't it supposed to just update the existing
   instance,
   without creating another?

  http://code.google.com/appengine/docs/python/datastore/creatinggettin...

   Thanks,

   Brian in Atlanta

   ***code excerpt below

   class Person(db.Model):
       user = db.UserProperty()
       address = db.EmailProperty()
       skillName = db.StringProperty()

   class Message(db.Model):
       owner = db.ReferenceProperty(Person)
       comment = db.TextProperty()
       handle =  db.StringProperty()

   class SendMessage(webapp.RequestHandler):
       def post(self):
           skillName_id = self.request.get('skillName')
           handle = self.request.get('handle')
           comment = self.request.get('comment')
           key = db.Key.from_path(Person, skillName_id)
           person = Person.get(key)
           message = Message(handle=handle,owner=person.key())
           message.comment = comment
           message.handle = handle
            message.put()



[google-appengine] Re: time and date sync service in google cluster

2009-05-28 Thread Andy Freeman

 So my question is: does google cluster guarantees time and date sync
 among its nodes?

IIRC, the last time this came up, the answer was no.  They expect the
times on different nodes will be somewhat synchronized and reasonably
close to correct, but make no guarantees.



On May 28, 7:39 am, cryb cbuti...@gmail.com wrote:
 Hello.
 Although it should be obvious, the Google cluster should provide a
 time and date synchronization service for all its nodes.
 I've done some search on the internet and I went through appengine
 docs, but I didn't find any reference that states this.
 I know that this should be common sense, but I just want to make sure.
 So my question is: does google cluster guarantees time and date sync
 among its nodes?
 I ask this because I need it for expiration primitives that won't work
 properly if such a service is not in place.
 Thanks.



[google-appengine] Re: Datastore usage ~ 80 times more than expected (Add your vote to a datastore usage accounting feature)

2009-05-13 Thread Andy Freeman

Argh!

This means that one form (db.Key) is smaller than the other
(comparable string) for the datastore while the reverse is true for
memcache.

How about defining a __getstate__ and __setstate__ for db.Key that is
smaller than the string equivalent?  This will help for memcaching any
db.Model instance whose .key() is defined.
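
Anyone who wants to check the sizes can use a quick measurement sketch
like this (run under the SDK, with any saved entity's key):

import pickle

def compare_sizes(key):
    # Measures the claim above: a db.Key currently pickles much larger
    # than the equivalent str(key), even though it stores smaller.
    return (len(pickle.dumps(key, 2)), len(pickle.dumps(str(key), 2)))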

On May 13, 11:41 am, Jason (Google) apija...@google.com wrote:
 Hi Andy. In this case, the list of Key objects will be smaller than the list
 of key strings. Even though the pickled db.Key object is larger, it is a
 binary-encoded protocol buffer form that gets stored, which is smaller than
 the pickled string. That said, I doubt it would make a tremendous difference
 unless you have a lot of these entities or these lists have a lot of values.

 - Jason



 On Mon, May 11, 2009 at 10:38 PM, Andy Freeman ana...@earthlink.net wrote:

  Since index space can be significant, can we get some additional
  information?

  For example, does an indexed db.ListProperty(db.Key) with three
  elements take significantly more or less space than an indexed
  db.StringListProperty with three elements whose value is str() of the
  same keys?  (The pickle of keys seems to be significantly larger than
  the pickle of the equivalent strings.)

  On May 11, 5:04 pm, Jason (Google) apija...@google.com wrote:
   Hi Anthony. I'm very sorry for the late reply, and thank you for bearing
   with me. I've discussed this with the datastore team and it's evident
  that
   the CSV file's size is not a great indicator of how much storage your
   entities will consume. On top of the size of the raw data, each entity
  has
   associated metadata, as you've already mentioned, but I'd bet that the
   indexes are consuming the greatest space. If you don't ever query on one
  or
   more of these 15 string properties, you may consider changing their
  property
   types to Text or declaring indexed=false in your model. If you can do
  this
   with one of your properties and re-build your indexes, I'd be interested
  in
   seeing how much your storage usage decreases since you'll need one less
   index.

   (Note that single-property indexes are present but not listed in the
  Admin
   Console.)

   - Jason

   On Sat, May 9, 2009 at 4:34 PM, Kugutsumen kugutsu...@gmail.com wrote:

Two weeks ago, I've sent my applications ID to both you and Nick and I
haven't heard from you since then.

 Thanks



[google-appengine] Re: Datastore usage ~ 80 times more than expected (Add your vote to a datastore usage accounting feature)

2009-05-13 Thread Andy Freeman

Argh!

This means that one form (db.Key) is smaller than the other
(comparable string) for the datastore while the reverse is true for
memcache.

I've created an issue (
http://code.google.com/p/googleappengine/issues/detail?id=1538
) requesting a __getstate__ and __setstate__ for db.Key that is smaller
than the string equivalent.  In addition to eliminating the
inconsistency between the datastore and memcache sizes, it will reduce
the size of every memcache'd db.Model instance whose .key() is
defined.

On May 13, 11:41 am, Jason (Google) apija...@google.com wrote:
 Hi Andy. In this case, the list of Key objects will be smaller than the list
 of key strings. Even though the pickled db.Key object is larger, it is a
 binary-encoded protocol buffer form that gets stored, which is smaller than
 the pickled string. That said, I doubt it would make a tremendous difference
 unless you have a lot of these entities or these lists have a lot of values.

 - Jason



 On Mon, May 11, 2009 at 10:38 PM, Andy Freeman ana...@earthlink.net wrote:

  Since index space can be significant, can we get some additional
  information?

  For example, does an indexed db.ListProperty(db.Key) with three
  elements take significantly more or less space than an indexed
  db.StringListProperty with three elements whose value is str() of the
  same keys?  (The pickle of keys seems to be significantly larger than
  the pickle of the equivalent strings.)

  On May 11, 5:04 pm, Jason (Google) apija...@google.com wrote:
   Hi Anthony. I'm very sorry for the late reply, and thank you for bearing
   with me. I've discussed this with the datastore team and it's evident
  that
   the CSV file's size is not a great indicator of how much storage your
   entities will consume. On top of the size of the raw data, each entity
  has
   associated metadata, as you've already mentioned, but I'd bet that the
   indexes are consuming the greatest space. If you don't ever query on one
  or
   more of these 15 string properties, you may consider changing their
  property
   types to Text or declaring indexed=false in your model. If you can do
  this
   with one of your properties and re-build your indexes, I'd be interested
  in
   seeing how much your storage usage decreases since you'll need one less
   index.

   (Note that single-property indexes are present but not listed in the
  Admin
   Console.)

   - Jason

   On Sat, May 9, 2009 at 4:34 PM, Kugutsumen kugutsu...@gmail.com wrote:

Two weeks ago, I've sent my applications ID to both you and Nick and I
haven't heard from you since then.

 Thanks



[google-appengine] Re: Datastore usage ~ 80 times more than expected (Add your vote to a datastore usage accounting feature)

2009-05-11 Thread Andy Freeman

Since index space can be significant, can we get some additional
information?

For example, does an indexed db.ListProperty(db.Key) with three
elements take significantly more or less space than an indexed
db.StringListProperty with three elements whose value is str() of the
same keys?  (The pickle of keys seems to be significantly larger than
the pickle of the equivalent strings.)

On May 11, 5:04 pm, Jason (Google) apija...@google.com wrote:
 Hi Anthony. I'm very sorry for the late reply, and thank you for bearing
 with me. I've discussed this with the datastore team and it's evident that
 the CSV file's size is not a great indicator of how much storage your
 entities will consume. On top of the size of the raw data, each entity has
 associated metadata, as you've already mentioned, but I'd bet that the
 indexes are consuming the greatest space. If you don't ever query on one or
 more of these 15 string properties, you may consider changing their property
 types to Text or declaring indexed=false in your model. If you can do this
 with one of your properties and re-build your indexes, I'd be interested in
 seeing how much your storage usage decreases since you'll need one less
 index.

 (Note that single-property indexes are present but not listed in the Admin
 Console.)

 - Jason



 On Sat, May 9, 2009 at 4:34 PM, Kugutsumen kugutsu...@gmail.com wrote:

  Two weeks ago, I've sent my applications ID to both you and Nick and I
  haven't heard from you since then.

  Thanks



[google-appengine] Re: Django 1.0

2009-05-08 Thread Andy Freeman

Native django 1.0 may not be on the roadmap, but it is a very popular
request.

To join that crowd, star
http://code.google.com/p/googleappengine/issues/detail?id=872 .
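
As Nick notes below, bundling just means shipping the django package
inside your app and putting it on sys.path before anything imports it.
A minimal sketch, assuming you copy the Django 1.0 source tree into a
lib/ directory of your own under the application root:

import os
import sys

# Make the bundled copy importable before any Django import runs.
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'lib'))

import django   # now resolves to lib/django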


On May 8, 1:59 am, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi little_pea,

 It's not currently on our roadmap, so don't count on it in the near
 future. You can use Django 1.0 in your App Engine apps, you just have
 to bundle it and upload it with your app.

 -Nick Johnson



 On Fri, May 8, 2009 at 12:41 AM, little_pea evgeny.demche...@gmail.com 
 wrote:

   Does anybody know if Django 1.0 support is planned for App Engine?



[google-appengine] Re: Django 1.0

2009-05-08 Thread Andy Freeman

Note that Django 1.1 is coming soon (
http://www.djangoproject.com/weblog/2009/may/07/django-1-1-update/ )
so it would be nice if app engine made the leap.

On May 8, 5:33 am, Andy Freeman ana...@earthlink.net wrote:
 Native django 1.0 may not be on the roadmap, but it is a very popular
 request.

 To join that crowd, star
 http://code.google.com/p/googleappengine/issues/detail?id=872 .

 On May 8, 1:59 am, Nick Johnson (Google) nick.john...@google.com
 wrote:



  Hi little_pea,

  It's not currently on our roadmap, so don't count on it in the near
  future. You can use Django 1.0 in your App Engine apps, you just have
  to bundle it and upload it with your app.

  -Nick Johnson

  On Fri, May 8, 2009 at 12:41 AM, little_pea evgeny.demche...@gmail.com 
  wrote:

    Does anybody know if Django 1.0 support is planned for App Engine?



[google-appengine] Re: documentation for memcache namespaces?

2009-05-06 Thread Andy Freeman

That sentence is in the section that lists the functions that have no
meaning/for compatibility.

In the listing of functions that have meaning, various arguments are
listed as having no meaning/for compatibility but namespace isn't one
of those arguments.

On May 5, 4:00 pm, djidjadji djidja...@gmail.com wrote:
 It says: "To be compatible with other memcache implementations they
 allow parameters and functions that have no meaning"



[google-appengine] Re: documentation for memcache namespaces?

2009-05-05 Thread Andy Freeman

I mentioned that link.  It doesn't provide any useful information
about what namespaces do.

Consider the documentation for set_multi. "key_prefix - Prefix to
prepend to all keys. ..."  "namespace - An optional namespace for the
keys."  What functionality does namespace provide that key_prefix
doesn't provide?
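
For reference, here is how the two arguments are used; the calls below
match the function docs mentioned above, but what namespace adds over
key_prefix is exactly the open question:

from google.appengine.api import memcache

values = {'1': 'one', '2': 'two'}

# key_prefix is prepended to every key on the wire ('user:1', 'user:2')
# but is stripped again by get_multi, so callers never see it.
memcache.set_multi(values, key_prefix='user:')
print memcache.get_multi(['1', '2'], key_prefix='user:')

# namespace is passed as a separate argument and never appears in the
# key itself.
memcache.set_multi(values, namespace='user')
print memcache.get_multi(['1', '2'], namespace='user')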


On May 5, 1:15 am, djidjadji djidja...@gmail.com wrote:
 http://code.google.com/appengine/docs/python/memcache/clientclass.htm...

 2009/5/4 Andy Freeman ana...@earthlink.net:





  Where are memcache namespaces documented? (They're mentioned in
 http://code.google.com/appengine/docs/python/memcache/functions.html
   and http://code.google.com/appengine/docs/python/memcache/clientclass.html
  .)

  Note that http://code.google.com/p/memcached/wiki/FAQ says that
  memcached does not support namespaces



[google-appengine] documentation for memcache namespaces?

2009-05-04 Thread Andy Freeman

Where are memcache namespaces documented? (They're mentioned in
http://code.google.com/appengine/docs/python/memcache/functions.html
and http://code.google.com/appengine/docs/python/memcache/clientclass.html
.)

Note that http://code.google.com/p/memcached/wiki/FAQ says that
memcached does not support namespaces



[google-appengine] Re: Concurrency Threading...

2009-05-03 Thread Andy Freeman

It would help if you told us something about the concurrency mechanism
that you're trying to test.

Is it related to datastore consistency in the face of concurrent
access?

On Apr 26, 3:26 pm, eli elliott.rosent...@gmail.com wrote:
 Thanks for the answers guys.
 I basically just need to test that my concurrency mechanism is
 working. How can I do that without being able to spawn two threads
 simultaneously?

 Regards,
 Elliott

 On Apr 20, 3:09 pm, Andy Freeman ana...@earthlink.net wrote:



    Unfortunately I can't seem to get app engine to run two threads
   concurrently to simulate two users taking action at the same time.

  Are you concerned with the development server or the production
  server?

  The development server is single threaded and there doesn't seem to be
  any way to fix that.  (You can run multiple instances, but each
  instance will have its own datastore.)

  Application instances are, by design, single threaded.  The production
  server is supposed to fire up as many instances as needed - are you
  not seeing that behavior?  Note that the datastore actions of these
  separate instances will be interleaved.  Is that the problem that
  you're trying to address?

  On Apr 20, 4:11 am, eli elliott.rosent...@gmail.com wrote:

   Hi guys,

   I have built a program that is (hopefully) concurrency safe. I would
   like to test this thoroughly using a testing module that I have
   written.
    Unfortunately I can't seem to get app engine to run two threads
   concurrently to simulate two users taking action at the same time.

   Currently I am using .start() on a number of threads, each of which
   should then go off and access certain datastructures to test their
   safety. Unfortunately I can only seem to get app engine to serve each
   threads requests, from start to finish, in the order they were
   created.

   Could somebody please give me some pointers. Thanks for the help in
    advance.

   Regards,
  ElliottR



[google-appengine] Re: Datastore usage ~ 80 times more than expected

2009-04-22 Thread Andy Freeman

How are you estimating the size?  For example, do you think that
strings are stored using one byte per character or two?  (I don't
know, but I do know that they're interpreted as unicode.)

I've asked for mechanisms to help estimate size - see
http://code.google.com/p/googleappengine/issues/detail?id=1084

On Apr 21, 8:24 am, Amir  Michail amich...@gmail.com wrote:
 Hi,

 A rough estimate shows that App Engine is using 80 times more storage
 than one might expect given the data stored there.

 Any reasons why this might be so?  Is there a way I can accurately
 predict storage given the various data types (e.g., text vs string)?

 Amir



[google-appengine] Re: Concurrency Threading...

2009-04-20 Thread Andy Freeman

 Unfortunately I can't seem to get app engine to run two threads
 concurrently to simulate two users taking action at the same time.

Are you concerned with the development server or the production
server?

The development server is single threaded and there doesn't seem to be
any way to fix that.  (You can run multiple instances, but each
instance will have its own datastore.)

Application instances are, by design, single threaded.  The production
server is supposed to fire up as many instances as needed - are you
not seeing that behavior?  Note that the datastore actions of these
separate instances will be interleaved.  Is that the problem that
you're trying to address?
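
If so, note that the usual way to make a read-modify-write safe against
interleaved instances is a transaction, which the datastore retries on
contention. A minimal sketch with a made-up Counter model:

from google.appengine.ext import db

class Counter(db.Model):
    count = db.IntegerProperty(default=0)

def increment(key):
    # The read-modify-write runs atomically inside the transaction; on
    # a concurrent commit to the same entity group it is retried.
    counter = db.get(key)
    counter.count += 1
    counter.put()

counter_key = Counter.get_or_insert('shared').key()
db.run_in_transaction(increment, counter_key)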

On Apr 20, 4:11 am, eli elliott.rosent...@gmail.com wrote:
 Hi guys,

 I have built a program that is (hopefully) concurrency safe. I would
 like to test this thoroughly using a testing module that I have
 written.
  Unfortunately I can't seem to get app engine to run two threads
 concurrently to simulate two users taking action at the same time.

 Currently I am using .start() on a number of threads, each of which
 should then go off and access certain datastructures to test their
 safety. Unfortunately I can only seem to get app engine to serve each
 threads requests, from start to finish, in the order they were
 created.

 Could somebody please give me some pointers. Thanks for the help in
 advanced.

 Regards,
 Elliott R



[google-appengine] Re: GQL Query with IN operator Issue (bug or am i making a mistake?)

2009-04-20 Thread Andy Freeman

db.TextProperty is not an indexable property.  That means that it's
not queryable either.

It would be nice to get an exception or some other indication of
what's going on.

However, note that indexing is something that happens in the
datastore when an instance is stored.  If you change a property from
StringProperty to TextProperty or the reverse, strange things will
probably happen.  (If you put some instances with StringProperty, I
suspect that you can still successfully query for those instances
using that property after you've switched to TextProperty.)
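
A minimal sketch of the behavior under discussion (illustrative names;
the silent zero-result query is the point):

from google.appengine.ext import db

class Note(db.Model):
    title = db.StringProperty()  # indexed, so it is queryable
    body = db.TextProperty()     # unindexed, so filters on it match nothing

Note(title='hello', body='long text').put()

print db.GqlQuery("SELECT * FROM Note WHERE title = 'hello'").count()  # 1
print db.GqlQuery("SELECT * FROM Note WHERE body = 'long text'").count()  # 0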

On Apr 19, 1:24 am, ecognium ecogn...@gmail.com wrote:
 Hello everyone, I noticed an odd behavior with GQL query when it has
 two IN operators and a regular condition. Below is some basic code to
 reproduce the problem:

 class DummyData(db.Model):
     x = db.StringListProperty()
     y = db.TextProperty()

 class Dummy(webapp.RequestHandler):
     def get(self):
         d = DummyData()
         d.x = ['a', 'b', 'c']
         d.y = "test"
         d.put()
         d = DummyData()
         d.x = ['c', 'd', 'e']
         d.y = "test2"
         d.put()

         # 10 instead of 2? - useful if you run the test multiple times
         q = db.GqlQuery("SELECT * FROM DummyData where x in ('c') and x in ('a')")
         results = q.fetch(10)
         for r in results:
             self.response.headers['Content-Type'] = "text/plain"
             self.response.out.write("x = " + ",".join(r.x) + " y = " + r.y + "\n")

 When you run the above code you will see the following output:
 x = a,b,c y = test

 However when I replace the above query with the one below, I do not
 get any  results (even though it should return the same result as
 above):

 # Note the addition of y = 'test'
 q = db.GqlQuery("SELECT * FROM DummyData where y = 'test' and x in ('c') and x in ('a')")

 Although here the IN conditions are the same as '=', my application
 actually uses multiple list values; I am just presenting a simpler
 example.

 If someone can confirm the issue, I can open a bug report for this.

 Thanks!



[google-appengine] Re: Images API still limited to 1MB

2009-04-19 Thread Andy Freeman

 For a processing API call
 like an image API call, there should be no dependence on BigTable or
 any sort of clustering

It has been said that the image API is actually executed on an image
processing cluster, not the application servers.
http://code.google.com/appengine/docs/python/images/overview.html
mentions an "Images service" in several locations.
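
For anyone probing the limit, a hedged sketch of catching the failure on
oversized input (exception names as in the Python SDK's API-proxy layer):

from google.appengine.api import images
from google.appengine.runtime import apiproxy_errors

def safe_thumbnail(data):
    # images.resize sends the whole image in one API call, so input over
    # the API-call size limit fails before any processing happens.
    try:
        return images.resize(data, 200, 200)
    except apiproxy_errors.RequestTooLargeError:
        return None  # too large for the images API call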

On Apr 18, 10:12 pm, Anuraag Agrawal anura...@gmail.com wrote:
 Indeed the entire API is limited to 1MB, but the point of discussion
 I'd like to make is that for data-related API calls like the datastore
 and memcache, it's easy to come up with hardware/implementation
 constraints that would warrant such a limit, and there are usually
 relatively simple ways to work around them.  For a processing API call
 like an image API call, there should be no dependence on BigTable or
 any sort of clustering, so the limit seems a little arbitrary,
 especially in light of the request limit increase to 10MB which does
 not seem to have any value for image requests.  I'm honestly hoping
 it's just an oversight that will be fixed in the short term.

 In the interim, using an external image API seems to be the best
 solution indeed.  Right now, I'm looking into the Imageshack API which
 seems to offer enough functionality, but can you go into how you use
 the Picasa API?  It seems to be very well suited to importing a user's
 photos, but to upload and manage photos as a website would require
 using ClientLogin, and since ClientLogin uses a captcha, it doesn't
 work well on app engine.  Or at least, that's what's written in the
 App Engine Data API docs.

 Thanks.

 On Apr 18, 10:55 pm, Tim Hoffman zutes...@gmail.com wrote:



  Why don't you stick the images in Picasa and just manage them through
  App Engine?

  That's what I am doing

  T

  On Apr 18, 4:28 pm, Anuraag Agrawal anura...@gmail.com wrote:

   When App Engine raised its request size limit from 1MB to 10MB, it
   seemed like we would finally be able to use it for an image sharing
   website as while reasonably sized digital camera images over 1MB are
   very likely, it'd take an extremely professional camera image to break
   the 10MB limit, which seemed like an acceptable limit to place on a
   user's file uploads since those users are probably knowledgeable
   enough to resize the images themselves anyways.  API calls were still
   limited to 1MB, and as the examples listed on the blog post were
   memcache and datastore, it seemed to make sense since App Engine is
   probably still designed to place a 1MB limit on its datastore
   entries.   This seemed like it'd be ok since it should be possible to
   use the images API to resize any input images to less than 1MB before
   storing them in the datastore, completely acceptable for our task.
   However, after trying this and looking into some server errors, it
   seems the images API is also limited to 1MB input files (which fits
   with the 1MB limit on API calls, the fact just didn't register at
   first).  At least, that's how I'm interpreting the RequestTooLargeError
   ("The request to API call images.Transform() was too large") I get when
   submitting a 1.5MB file.

   Is the limit on the images API by design/constraint?  I imagine image
   API calls aren't split across computers in a cluster or anything and
   are run in place, with possibly some temp memory that's cleared
   immediately, which makes having a limit smaller than the request size
   seem a little strange to me.  A 1MB limit on image files makes it hard
   to support user submitted image uploads in a practical setting.  I
   know it's possible to split the image over datastore entries just to
   store them, but we also need to be able to resize them to generate
   thumbnails, etc.

   And if anyone's come up with a workaround splitting the input file
   into parts to resize in parts, it'd be nice to hear.  While PNG uses
   DEFLATE and might not work, JPEG as far as I know cosine transforms
   blocks independently so it seems like it could be possible. Though
   it'd probably increase the load on the servers more than just having a
   1MB API call limit.

   Thanks.



[google-appengine] Re: testing application error handling

2009-04-18 Thread Andy Freeman

Sounds like http://code.google.com/p/googleappengine/issues/detail?id=915

Please star and comment as appropriate.
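
In the meantime, a minimal sketch of the handler-level guard people use
(assuming the SDK's exception names; CapabilityDisabledError covers
read-only maintenance windows, db.Timeout the sporadic datastore errors):

from google.appengine.ext import db
from google.appengine.runtime import apiproxy_errors

def guarded_put(entity):
    # True on success; False when the datastore is read-only or the
    # write timed out, so the handler can show a friendly message.
    try:
        entity.put()
        return True
    except apiproxy_errors.CapabilityDisabledError:
        return False  # writes temporarily disabled
    except db.Timeout:
        return False  # sporadic timeout; consider retrying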

On Apr 18, 1:34 pm, iceanfire iceanf...@gmail.com wrote:
 According to the Google Maps team, we have to expect at least some
 [small] amounts of datastore errors.

 Is there any way to get the SDK to throw these errors, so I can better
 test how my application responds when a specific query is denied (i.e.,
 make sure the right code kicks in and handles the error)?

 Or, a broader question: how do you ensure that your application can
 recover from errors?



[google-appengine] Re: Need an article to handle properly a datastore timeout

2009-04-15 Thread Andy Freeman

 And even with 3 entities of 3 different kinds you have to manually
 roll it back because a transaction works only with 1 kind.

Where is it documented that a transaction only works with one kind?

The SDK is quite happy to do transactions involving db.Model instances
of different kinds.
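
A minimal sketch, assuming only the documented constraint that every
entity touched belongs to one entity group (kind is irrelevant; names
are illustrative):

from google.appengine.ext import db

class Account(db.Model):
    balance = db.IntegerProperty(default=0)

class AuditEntry(db.Model):  # a different kind, same entity group
    delta = db.IntegerProperty()

def credit(account_key, amount):
    # Two kinds written atomically: the child AuditEntry shares the
    # Account's entity group, which is what transactions require.
    account = db.get(account_key)
    account.balance += amount
    entry = AuditEntry(parent=account, delta=amount)
    db.put([account, entry])

# db.run_in_transaction(credit, some_account_key, 10)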

On Apr 15, 7:47 am, Sylvain sylvain.viv...@gmail.com wrote:
 Maybe, but my app needs it. So I will not change it for that.

 And even with 3 entities of 3 different kinds you have to manually
 roll it back because a transaction works only with 1 kind. So if the
 timeout happens on the second write, you have to manually roll back
 the first put() and it can be very complicated if the number of
 entities/kinds increases.

 Another thing: you can have a timeout on a simple get_by_key_name,
 fetch(50), etc., so the number of writes is not the only/main
 issue, and a request can wait 30s so that should not be an issue.

 I understand that in order to remove timeouts we have to change things,
 but an article that explains what to do and how could be very useful.

 During the first app chat (a long time ago), I asked for it and it
 seems the Google team agreed with it. So
 :)http://groups.google.com/group/google-appengine/browse_frm/thread/ced...

 Regards.

 On 15 Apr, 16:00, 风笑雪 kea...@gmail.com wrote:



  It's not recommended to write more than 20 entities in one request, and I
  don't think you will exceed that in a common request. The datastore has such
  a limitation on writing; you can't treat it as a relational db. (In my
  test, it takes about 1 second to put 100 simple entities into the db.)

  However, sometimes we may meet this situation.
  My suggestion is to limit the number of entities as much as possible, and use
  db.run_in_transaction to keep entities correctly and completely saved to the db.
  If possible, AJAX calls can help you break a big transaction into several
  small transactions.
  And you can set how many times the transaction should retry when an operation
  fails, so you can give a failure message to the user before the timeout.

  2009/4/15 Sylvain sylvain.viv...@gmail.com

   Hi,

   Currently, I think 0.5% of my Datastore operations result in a
   datastore timeout.
   I don't know why... It can be raised on very simple or very
   complicated operations.

   For my app, for example, the problem is that it can occur during a big
   request where I need to create 50-100 entities of 3 different kinds.
   So I need to manually roll back everything and it can be difficult.

   Having a big transaction could be a solution, but GAE is too limited
   (only 1 kind, ...).

   It seems that datastore timeouts will always be there and we have to
   manage them. So I think we need an article that explains how to handle
   them properly with different scenarios. It will be very appreciated.

   Regards.



[google-appengine] Re: Will Google support a relational database in the future?

2009-04-12 Thread Andy Freeman

 In the business environment
 it's often not known, and the flexibility in this regard provided by
 relational databases is part of what has made them so popular.

Only if the schema makes it possible to get what you want.  And, even
if it does, the cost may be excessive.

 If your only window
 to your business knowledge is accessible via GAE, you're in a serious
 problem each time your requirements do not fall into the realm of what
 you anticipated.

Nope.  The difference is that GAE's query language is weaker so you
have to do more in user code, such as joins and aggregations.
Relational databases write this code for you.  However, regardless of
who writes this code, it has to be run.  Maybe the query optimizer
will write better code than you do, but maybe it won't.
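
A minimal sketch of such a user-code "join" with the Python db API
(kinds and properties here are illustrative):

from google.appengine.ext import db

class Customer(db.Model):
    name = db.StringProperty()

class Order(db.Model):
    customer = db.ReferenceProperty(Customer)
    total = db.FloatProperty()

# Fetch a page of orders, then batch-get the referenced customers
# instead of dereferencing them one by one.
orders = Order.all().fetch(20)
keys = [Order.customer.get_value_for_datastore(o) for o in orders]
for order, customer in zip(orders, db.get(keys)):
    print customer.name, order.total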

On Apr 12, 1:03 pm, Consultuning consultun...@gmail.com wrote:
  Pushing calculations from read time to write time makes sense in that reads
  seriously out number writes for most web applications.

 Pushing calculations at write time instead of read time has some other
 very very strong implication that you're probably missing: that you
 know in advance what you want to calculate. For very narrow
 application domains this is probably true. In the business environment
 it's often not known, and the flexibility in this regard provided by
 relational databases is part of what has made them so popular.

 For Google's own apps this is probably not a problem, since they have
 other means of processing massive amounts of data. If your only window
 to your business knowledge is accessible via GAE, you're in a serious
 problem each time your requirements do not fall into the realm of what
 you anticipated.



[google-appengine] Re: Embedded Django 0.96 versus Django 1.0

2009-04-10 Thread Andy Freeman

See http://code.google.com/p/googleappengine/issues/detail?id=872
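
Depending on the SDK release in use, there is also a version-selection
hook; a hedged sketch (use_library, where available, picks among the
SDK-bundled copies and must run before anything imports Django;
otherwise the zip approach mentioned in the quoted post applies):

# At the top of main.py, before any Django imports.
from google.appengine.dist import use_library
use_library('django', '1.0')

import django
print django.VERSION  # confirms which copy was selected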


On Apr 10, 11:08 am, Devel63 danstic...@gmail.com wrote:
 We've been using Django 0.96 (since it's already available within App
 Engine), mostly just to get i18n support.

 What are the drawbacks of going all the way to the latest version?
 - We have to upload all the files (and I understand there's a zip way
 to do that)
 - Startup times are even slower when instantiated on a new machine
 (how much?)
 - Other?

 Any key advantages?
 - Gaebar requires 1.0
 - The registration module requires 1.0
 - Other?

 Any sense of when/if Google will make this migration themselves?



[google-appengine] Re: Running dev_appserver.py under a mod_python

2009-04-07 Thread Andy Freeman

Your application probably isn't written to be multi-threaded and any
effort to make it so is wasted because it won't be executed as multi-
threaded in production.

You could get around application threading issues by forking but then
you'll discover that the dev server's datastore is a pile of code with
some file handles and can't handle concurrent access.

You could write a proxy that distributed requests across multiple
instances of the dev server, but they'd have separate and independent
datastores.
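
A hedged sketch of such a proxy (Python 2.6 stdlib, GET only for
brevity; assumes two dev_appserver.py instances already listening on
ports 8081 and 8082, each with its own independent datastore):

import BaseHTTPServer
import itertools
import urllib2

BACKENDS = itertools.cycle(['http://localhost:8081',
                            'http://localhost:8082'])

class RoundRobinProxy(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward each request to the next dev server in turn.
        upstream = urllib2.urlopen(BACKENDS.next() + self.path)
        body = upstream.read()
        self.send_response(upstream.getcode())
        self.send_header('Content-Type',
                         upstream.headers.get('Content-Type', 'text/plain'))
        self.end_headers()
        self.wfile.write(body)

BaseHTTPServer.HTTPServer(('localhost', 8080),
                          RoundRobinProxy).serve_forever()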

On Apr 6, 4:59 pm, Khai khaitd...@gmail.com wrote:
 Perhaps I didn't explain my problem clearly.  This problem applies
 only to the development server (dev_appserver.py or SDK).  The problem
 is that it is single-threaded, so when multiple requests arrive at the
 same time, they have to be served serially (one after the other), so
 one of the requests gets timed out by the OpenSocial agent.  What can I
 do to service multiple requests concurrently in the development
 environment?

 On Apr 4, 2:28 pm, Alkis Evlogimenos ('Αλκης Ευλογημένος)



 evlogime...@gmail.com wrote:
  That's a very bad idea: dev_appserver is not secure, is snail slow if you
  add more than a couple of thousand entities in it and it is uncertain if it
  can share a datastore across multiple instances.
  Why do you want to do this? What is wrong with hosting on GAE?

  On Sat, Apr 4, 2009 at 11:04 PM, Khai khaitd...@gmail.com wrote:

   Before I try something crazy I want to know if someone has tried it or
   whether it is too crazy to try.

   The problem is dev_appserver.py is single-threaded, and I need to have
   multiple instances running.  I am developing an OpenSocial application
   which makes three asynchronous requests to my server.  Because
   dev_appserver.py is single-threaded, the last request serviced by
   dev_appserver.py took more than 5 seconds and got timed out by
   OpenSocial.  So I need to have multiple processes of dev_appserver.py
   running.

   I've searched this group, and so far I've only found that someone ran
   multiple dev_appserver.py processes using different ports, which is not
   practical for my problem.  I've also searched this group for mod_python,
   but did not find any relevant result.  I want to run dev_appserver.py
   as a mod_python script with Apache prefork MPM (multiple processes).

   Is this possible?  What is the degree of difficulty?  I am a novice
   with GAE, and I have never done anything with mod_python.  Has anyone
   tried this before?  Would anyone be willing to try it and share it with
   the group?

   Any responses/advice would be greatly appreciated.

   Khai

  --

  Alkis



[google-appengine] Re: Should I take my website somewhere else? - blocked in China

2009-04-07 Thread Andy Freeman

 Some user reported a problem and wanted to know if Google had any plan
 to solve it. That equates to wanting a guarantee in your world?  Some
 kind of twisted world you live in there.

When considering a plan to solve a problem, I think that it's
reasonable to consider whether said plan will actually solve the
problem.  Why?  Because if a plan doesn't solve the problem, the
problem still exists.

I am willing to assume that Google is doing what it can reasonably do
about this.  The continued complaints suggest that the results of
those efforts are inadequate.  And, we've seen "threats" regarding
what will happen if Google doesn't come through.  Maybe those people
will be satisfied by something short of a guarantee, but ...

And, as has been noted, a Google representative posted a solution and
was ignored.


On Apr 6, 11:33 pm, Andy selforgani...@gmail.com wrote:
  Yes, I do.

 I'm glad you finally learned the word "obligation". Too bad you didn't
 learn it earlier when you spewed your nonsense that obligation can
 only come from laws and contracts.

 Feel free to consult a dictionary first next time when you find
 yourself once again tempted to use a big word you don't understand.

  I'm not angry.

 Good for you. Definitely worth reporting back to your anger management
 counselor.

  I'm merely pointing out that Google's
  capabilities in this area are limited, that they need to take their
  complaints elsewhere if they want guarantees.

 Who's talking about guarantees?

 Some user reported a problem and wanted to know if Google had any plan
 to solve it. That equates to wanting a guarantee in your world? Some
 kind of twisted world you live in there.

 In fact the only person who even brought up the word "guarantee" is
 you.

 Do you always argue against your own strawman like that?

  Do you really believe that Google can honor a promise that a given
  site won't be blocked if the Chinese govt wants to block said site?
  (Feel free to assume that the site is hosted in China.)

 Who's talking about a promise that a given site won't be blocked other
 than you?

 Once again you're the only person to use words like "guarantees" and
 "promise".

 You must be really busy arguing with your own strawman like that...

  I merely pointed out that Google can't do as they ask

 And you're the spokesperson of Google? Self-appointed?

 This is what the OP asked: "Does Google have a plan for dealing with
 this?"
 No different than any other threads that are also about reporting
 problems and asking for solutions.

 For whatever reason such a simple question bothers you tremendously.
 To such a degree that you felt compelled to spew nonsense such as
 "Google can't do as they ask", when in fact you have
 no standing to speak for Google on what they can or cannot do.

 So the real question is: why does the simple question "Does Google have
 a plan for dealing with this?" bother you so much?



[google-appengine] Re: Should I take my website somewhere else? - blocked in China

2009-04-06 Thread Andy Freeman

 One of them is to offer hosting at Google's data center in China.

Since China can block sites hosted in China ...  (I thought that it
was common knowledge that China imposed controls on in-China sites.)

There's nothing wrong with wanting guarantees, but Google isn't in a
position to give a guarantee wrt blocking that it can honor because
China can block no matter what Google does.

On Apr 6, 12:57 am, Andy selforgani...@gmail.com wrote:
 what can Google do to stop the Chinese
  govt from blocking?
 one can argue that Google needs the Chinese govt to not block,
 but that doesn't imply that Google can do anything to stop the Chinese
 govt from blocking.

 As I've told you, there are plenty of solutions.

 One of them is to offer hosting at Google's data center in China. The
 biggest objection you could muster up to that solution was a pointless
 "Google may or may not offer that solution" piffle. Yes indeed. And
 Google may or may not offer any solutions to any problems reported
 in this forum. What's your point?

 What solutions Google may or may not offer is certainly not for you to
 say. The only strange thing in this entire thread is what caused you
 to take so much offense to some users who are merely doing what
 everyone else is doing in this forum: reporting problems and asking
 for Google's help in solving such problems.



[google-appengine] Re: Should I take my website somewhere else? - blocked in China

2009-04-06 Thread Andy Freeman

 Do you see the words "social" and "moral" in addition to "legal" in
 the definition for "obligation"? Do you even understand what those
 words mean?

Yes, I do.  And I also understand how social and moral obligations
work.  If you feel that Google is violating a social or moral
obligation, you don't have any recourse other than to shun Google and
to try to convince others to do likewise.  You're not satisfied with
"shun", so either you don't understand social or moral obligations or
you think that Google has some other type of obligation.  Since the
only remaining one is legal ...

 Some users reported a problem and wanted to know if Google has any
 plan to address the problem. For whatever reason that makes you very
 angry,

Huh?  I'm not angry.  I'm merely pointing out that Google's
capabilities in this area are limited, that they need to take their
complaints elsewhere if they want guarantees.

Do you really believe that Google can honor a promise that a given
site won't be blocked if the Chinese govt wants to block said site?
(Feel free to assume that the site is hosted in China.)

 The same basis that compelled you to feel justified in ordering others
 not to report problems and not to ask for Google's help in solving
 their problems.

Except that I didn't order anyone to do anything, which may explain
why you didn't quote any such order.  I merely pointed out that Google
can't do as they ask and pointed them to someone who can.

Why does that bother you so much?



On Apr 6, 12:52 am, Andy selforgani...@gmail.com wrote:
  It's the plain meaning of the word.  I apologise for not knowing that
  you didn't know what it meant when you wrote that Google had an
  obligation to make GAE available in China.  Are there other statements
  that you made without understanding their meaning?

 If you think obligation only refers to legal obligation, you are,
 as usual, very mistaken.

 From the dictionary:
 Obligation:
 A social, legal, or moral requirement

 Do you see the words "social" and "moral" in addition to "legal" in
 the definition for "obligation"? Do you even understand what those
 words mean? Feel free to look them up. I have an obligation to provide
 the best service to my clients. Does that mean I'm legally required to
 do that? Of course not. But is it still an obligation? Absolutely. Can
 you keep driveling on about things you know nothing about? I wouldn't
 bet against that.

 I apologize for not knowing that you didn't know what it meant when
 you wrote that obligation only means legal obligation.  Are there
 other statements that you made without understanding their meaning?

 Next time when you find yourself once again talking about something
 you clearly know nothing about, I highly recommend consulting a
 dictionary.

 That or keep your mouth closed. "Better to keep your mouth shut and be
 thought a fool than to open it and remove all doubt."

  China availability issue is one of the few issues where folks claim
  that/act like Google has an obligation

 Let me get this straight:

 Some users reported a problem and wanted to know if Google has any
 plan to address the problem. For whatever reason that makes you very
 angry,

 Newsflash: this entire forum is full of threads reporting problems
 with GAE and asking about Google's plan to fix those problems. Are you
 going to post your "Google has no legal obligation to solve your
 problem!" response to every single one of those threads?

  And the basis for this order is...

 The same basis that compelled you to feel justified in ordering others
 not to report problems and not to ask for Google's help in solving
 their problems. So you tell me.



[google-appengine] Re: Should I take my website somewhere else? - blocked in China

2009-04-06 Thread Andy Freeman

It's even funnier that you quoted someone who isn't banging on Google
to respond.

On Apr 6, 12:03 am, Paddy Foran foran.pa...@gmail.com wrote:
 I'd just like to point out how funny it is that people keep banging on
 for Google to respond, and in their banging on for Google to respond,
 they missed Google's actual response.

  Is there any google staff who is responsible for GAE promotion and
  technology to say something here?

  How can I access to my Google Apps via my own domain directly, e.g.
  how can access via mail.my_domain.com instead of mail.google.com/a/
  my_domain.com?

 One way to address this is to run a proxy server elsewhere, which will
 allow your site to have its own unique IP, rather than the shared IPs
 of Google.

 -Brett
 App Engine Team

 Please note the "App Engine Team" signature. That means Brett (at
 least claims he) is from Google.

 Poor Brett was ignored, as people clamoured for Brett to comment.

 This is why I love the internet. It amuses me to no end.

 On Apr 6, 12:48 am, Andy Freeman ana...@earthlink.net wrote:



   No company is willing to be a pawn in the game of politics between
   Google and China.

  That sounds reasonable, but what can Google do to stop the Chinese
  govt from blocking?

  (1) Google can't tell the Chinese govt what to do.

  (2) The Chinese govt appears to be technically competent and controls
  the relevant connections, both from the outside and from internal
  datacenters.

   (3) Google can propose agreements, but China is a sovereign entity
   and can do what it pleases wrt internal matters.  (Other posters have
  suggested that buying dinner for the appropriate official would cause
  the blocking to go away.  I don't see why the Chinese govt would find
  such an agreement binding.)

  Yes, one can argue that Google needs the Chinese govt to not block,
  but that doesn't imply that Google can do anything to stop the Chinese
  govt from blocking.  Google's needs do not obligate the Chinese govt.

  On Apr 5, 3:16 pm, WallyDD shaneb...@gmail.com wrote:

   Google is more or less obligated to solve this issue.

   No company is willing to be a pawn in the game of politics between
   Google and China.
   Name a single company (that has any international presence) who would
   be willing to use GAE knowing full well that it is blocked in its
   current form?
   This issue has nothing to do with the Chinese government and there is
   no way Google will point the finger at them.

    Perhaps Google can also take on all the other countries that are
    blocking GAE and while they are at it they can point fingers at
    corporate America and their firewalls?
   You have to remember that at the moment this is a preview release.

   I don't really understand why you persist with this argument. You have
   raised some valid points which should be looked at and considered in
   the scheme of things but most of the diatribe you present here seems
    aimed at China/Chinese Government. I have always found prejudices
    cloud people's judgement.

    To summarise how this problem will probably be viewed:
    Google created a DNS-based system (for GAE addressing) which puts
    everything through ghs.google.com. This system works really well and
    from my experience it was very clever and efficient. However it has an
    issue with firewalls that got overlooked. Google has just recently
    been made aware of this problem.

   On Apr 5, 12:53 pm, Andy Freeman ana...@earthlink.net wrote:

 Feel free to hair-split the word obligation.

It's the plain meaning of the word.  I apologise for not knowing that
you didn't know what it meant when you wrote that Google had an
obligation to make GAE available in China.  Are there other statements
that you made without understanding their meaning?

The China availability issue is one of the few issues where folks claim
that/act like Google has an obligation even though it's an issue where
Google has very little capability to change things.

 That's why I want to hear from a Google representative on their plan.

I predict that if Google says anything, it will be roughly equivalent
to "we're doing what we can."  At that point, you'll have to decide if
the results, which will vary with the whim of the Chinese govt, are
adequate for your purposes.

Of course, if you're better at dealing with the Chinese govt than
Google is ...

 Now just accept that fact and act accordingly.

And the basis for this order is...

On Apr 4, 6:11 pm, Andy selforgani...@gmail.com wrote:

  I'm someone who understands that obligations come from laws and
  contracts.  Feel free to point to the relevant chapter and verse that ...

  However, absent a contract and/or a law, Google isn't obligated to
  make GAE applications visible in China.

 Feel free to hair-split the word obligation.

 Does Google have the legal obligation to solve this problem? No. Just
 like

[google-appengine] Re: 308K CSV file cost 20% GAE data space!

2009-04-05 Thread Andy Freeman

Something is wrong here - it should not generate multiple indices per
tag, regardless of the length of the list.  (The query won't work if
the length is greater than 30, but that's a different problem.)

see Ryan's message in

http://groups.google.com/group/google-appengine/browse_thread/thread/1285c272c0e1b62a

That time, the problem turned out to be some user error, but I didn't
see an explanation.



On Apr 3, 8:58 am, 秦锋 feng.w@gmail.com wrote:
 Alkis:
 I re-checked my code and found that if I use a list property in GQL like
 the following:

 WHERE tags = :1, inputTags (it's a list)

 If there are 3 tags in inputTags, the index will be:

 name:tag
 name:tag
 name:tag

 Does this cost huge space?

 But it's really cool for multi-tag searching in an app!

 On Apr 3, 8:50 pm, Alkis Evlogimenos ('Αλκης Ευλογημένος)



 evlogime...@gmail.com wrote:
  What do your models look like?

  On Fri, Apr 3, 2009 at 2:00 PM, 秦锋 feng.w@gmail.com wrote:

   My App: cndata4u.appspot.com
   Now I have imported about 2500 records there, with only THREE
   entity kinds. But I have found that these data take up 20% of my
   datastore quota, about 200M!
   My original CSV files are only 308K!

   Any idea?

  --

   Alkis



[google-appengine] Re: Should I take my website somewhere else? - blocked in China

2009-04-05 Thread Andy Freeman

 Feel free to hair-split the word obligation.

It's the plain meaning of the word.  I apologise for not knowing that
you didn't know what it meant when you wrote that Google had an
obligation to make GAE available in China.  Are there other statements
that you made without understanding their meaning?

The China availability issue is one of the few issues where folks claim
that/act like Google has an obligation even though it's an issue where
Google has very little capability to change things.

 That's why I want to hear from a Google representative on their plan.

I predict that if Google says anything, it will be roughly equivalent
to "we're doing what we can."  At that point, you'll have to decide if
the results, which will vary with the whim of the Chinese govt, are
adequate for your purposes.

Of course, if you're better at dealing with the Chinese govt than
Google is ...

 Now just accept that fact and act accordingly.

And the basis for this order is...


On Apr 4, 6:11 pm, Andy selforgani...@gmail.com wrote:
  I'm someone who understands that obligations come from laws and
  contracts.  Feel free to point to the relevant chapter and verse that ...

  However, absent a contract and/or a law, Google isn't obligated to
  make GAE applications visible in China.

 Feel free to hair-split the word obligation.

 Does Google have the legal obligation to solve this problem? No. Just
 like Google doesn't have any legal obligation to improve this service
 or add any new features. Does that mean users should stop posting any
 thread that's about improving GAE?

 Does that mean you're going to start polluting every single thread in
 this forum by posting your 'Google has no legal obligation to do this'
 drivel?

  Good for you.  And Google may, or may not, offer such an option.  Note
  may not - they're under no obligation to do so.  (I don't presume to
  know the risks and costs of offering such an option.  After all, China
  can block at the edge of the data centers, impose conditions, or even
  shut them down.)

 Another zero-value drivel.

 Yes Google may or may not offer that solution, just like they may or
 may not offer any solution to any other problems raised in this forum.

 That's why I want to hear from a Google representative on their plan.
 Your speculation on what Google may or may not do is just that,
 worthless speculation that serves no purpose in this discussion.

 You're right to not presume to know though, seeing how you don't
 know anything in this matter.

 Now just accept that fact and act accordingly.



[google-appengine] Re: 308K CSV file cost 20% GAE data space!

2009-04-05 Thread Andy Freeman

 But 秦锋 I think is referring to AND; the code in the quoted post is
 only pseudo code (and/or GQL expands to 'and' internally - sorry, I
 don't know enough Python)

Why do you think that "WHERE tags = :1, inputTags (it's a list)" is
pseudo code?  It looks to me like an excerpt from query construction
code together with a description.

The :1 isn't Python-specific; it's a convention in the API for query
construction.  It means the first argument after the query string.
If the query string ended with ":1", the comma would separate it from
the first argument, which I take to be the value of inputTags, which
is described as being a list.

Where are you finding something relevant in GQL which expands
internally into an AND?

Maybe 秦锋 will post the actual code.
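
For reference, a minimal sketch of the binding convention (illustrative
model; each value after the query string fills :1, :2, ... in order):

from google.appengine.ext import db

class DummyData(db.Model):
    tags = db.StringListProperty()

input_tags = ['a', 'c']

# The list bound to :1 comes from the first argument after the string.
q = db.GqlQuery("SELECT * FROM DummyData WHERE tags IN :1", input_tags)
for entity in q.fetch(10):
    print entity.tags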

On Apr 5, 1:04 pm, Barry Hunter barrybhun...@googlemail.com wrote:
 That thread is referring to 'in', which effectively is an OR - which, as
 pointed out by Ryan, expands to three (or multiple) queries under the
 hood.

 But 秦锋 I think is referring to AND; the code in the quoted post is
 only pseudo code (and/or GQL expands to 'and' internally - sorry, I
 don't know enough Python).

 A query of WHERE tag = 'tag1' AND tag = 'tag2' AND tag = 'tag3'
 will require *three* indexes on `tag`.

  - This only makes sense for ListProperties, but it is a valid query. For
 lists, tag = 'tag1' could be written like ('tag1' IN LIST `tag`)
  - which is different from the SQL/GQL IN keyword.

 On 05/04/2009, Andy Freeman ana...@earthlink.net wrote:







   Something is wrong here - it should not generate multiple indices per
   tag, regardless of the length of the list.  (The query won't work if
   the length is greater than 30, but that's a different problem.)

   see Ryan's message in

   http://groups.google.com/group/google-appengine/browse_thread/thread/...

   That time, the problem turned out to be some user error, but I didn't
   see an explanation.

   On Apr 3, 8:58 am, 秦锋 feng.w@gmail.com wrote:
    Alkis:
    I re-checked my code and found that if I use a list property in GQL like
    the following:

    WHERE tags = :1, inputTags (it's a list)

    If there are 3 tags in inputTags, the index will be:

    name:tag
    name:tag
    name:tag

    Does this cost huge space?

    But it's really cool for multi-tag searching in an app!

    On Apr 3, 8:50 pm, Alkis Evlogimenos ('Αλκης Ευλογημένος)

    evlogime...@gmail.com wrote:
     What do your models look like?

     On Fri, Apr 3, 2009 at 2:00 PM, 秦锋 feng.w@gmail.com wrote:

      My App: cndata4u.appspot.com
      Now I have imported about 2500 records there, with only THREE
      entity kinds. But I have found that these data take up 20% of my
      datastore quota, about 200M!
      My original CSV files are only 308K!

      Any idea?

     --

Alkis

 --
 Barry

 -www.nearby.org.uk-www.geograph.org.uk-



[google-appengine] Re: Should I take my website somewhere else? - blocked in China

2009-04-04 Thread Andy Freeman

 You have
 nothing of value to contribute to the discussion except to badger the
 people who reported this problem.

On the contrary.  I've pointed out how to actually solve the problem.

  Throughout this, you've acted like Google has some obligation to make
  GAE applications visible in China.  It doesn't.

 And who are you to say it doesn't?

I'm someone who understands that obligations come from laws and
contracts.  Feel free to point to the relevant chapter and verse that
obligates Google to make GAE applications visible in China.  If
there's no such law or contract, what is the basis of the obligation
that you think exists?  (Yes, you're free to not use GAE if Google
can't solve the China access problem, but your freedom to go
elsewhere doesn't obligate Google.  After all, you're free to not use
GAE for any reason, such as not painting their buildings pink, yet no
one thinks that Google is obligated to paint its buildings pink.)

Google may choose to try to make GAE applications visible in China.
(I'm willing to give them the benefit of the doubt and assume that
they're doing what's reasonable.  Feel free to provide evidence to the
contrary.  And no, failing isn't evidence that they're not doing what
they can.)  Google may also suggest work-arounds.

However, absent a contract and/or a law, Google isn't obligated to
make GAE applications visible in China.

 I want the option to host my app in Google's China data center

Good for you.  And Google may, or may not, offer such an option.  Note
may not - they're under no obligation to do so.  (I don't presume to
know the risks and costs of offering such an option.  After all, China
can block at the edge of the data centers, impose conditions, or even
shut them down.)

  Actually, I do know about those and lots of other bandaids.  However,
  I also know how they all fail.

 You know they all fail? How?

Reread what I actually wrote.  I know HOW they all fail, that is, what
the Chinese have done to thwart such bandaids in the past.  This isn't
the first time that China has blocked stuff, so we can look at what
they've done before and see some of what they can and will do.  (I
don't assume that they're unwilling/unable to do things that they
haven't done before.)  While it's possible that this time they won't
use techniques that have worked before, I wouldn't bet that they
won't.  Of course, you're free to act as if they won't.

On Apr 3, 3:35 pm, Andy selforgani...@gmail.com wrote:
  Sure there is - unless you know how to fix the problem.  (Surely
  you're not going to argue that you're reporting an unknown problem.)
  After all, you complained about someone else's posting with This is a
  forum for people to share information on GAE and solve problems.

 Indeed this is a forum for people to share information on GAE and
 solve problems. The original poster reported a problem. You have
 nothing of value to contribute to the discussion except to badger the
 people who reported this problem. Therefore I ask you to either
 contribute or shut up.

 Just like if someone started shouting in a library I'd also ask him to
 cut it out. So in your world that's also pot and kettle huh?

  Throughout this, you've acted like Google has some obligation to make
  GAE applications visible in China.  It doesn't.

 And who are you to say it doesn't?
 I want to hear from a Google employee on their position on this. I'm
 not interested in your misguided opinion on what Google might or might
 not consider is their responsibility or your political/moral drivel.

  That's assuming that the Chinese want appengine apps to get through.
  Since they're blocking, I'm pretty sure that they want to block at
  least some app engine apps and are willing to block them all to block
  the ones that they don't want.
  What are the odds that they haven't tried that?

 Much higher than the odds that you actually know what you're talking
 about.

  Since China is blocking app engine because it doesn't like certain app
  engine apps and those apps are the most likely to want to use
  such an option ...

 Who are you to say which apps are most likely to want to use such an
 option?

 I want the option to host my app in Google's China data center and my
 app would have no problem getting approval.

  Actually, I do know about those and lots of other bandaids.  However,
  I also know how they all fail.

 You know they all fail? How? Have you actually tried the solutions?

 Oh wait you couldn't have actually tried them because you don't even
 work for google. You're just another Internet riffraff out to make a
 fool of himself and waste everyone else's time.

[google-appengine] Re: Should I take my website somewhere else? - blocked in China

2009-04-03 Thread Andy Freeman

 So no, there's no pot and kettle here at all.

Sure there is - unless you know how to fix the problem.  (Surely
you're not going to argue that you're reporting an unknown problem.)
After all, you complained about someone else's posting with "This is a
forum for people to share information on GAE and solve problems."

Throughout this, you've acted like Google has some obligation to make
GAE applications visible in China.  It doesn't.

 And no, there's no need for google to subvert the great firewall in
 order to solve this problem.

That's assuming that the Chinese want appengine apps to get through.
Since they're blocking, I'm pretty sure that they want to block at
least some app engine apps and are willing to block them all to block
the ones that they don't want.

 Google could talk to the authorities in
 China to see what can be done to get unblocked.

What are the odds that they haven't tried that?

 It could give App Engine users the option to move their sites to
 Google's data centers in China.

Since China is blocking app engine because it doesn't like certain app
engine apps and those apps are the most likely to want to use
such an option ...

 Plenty of solutions - just because you don't know about them doesn't mean they
 don't exist.

Actually, I do know about those and lots of other bandaids.  However,
I also know how they all fail.

The fix to the problem is China.  If you're not working on that,
you're just flapping your gums.

On Apr 2, 11:10 pm, Andy selforgani...@gmail.com wrote:
 I want to to hear from Google whether it has done anything to solve
 this problem or whether it has any plan to do so.

 I don't want to hear a pompous speech from a self-appointed non-Google
 spokesperson on his political/moral drivel and that he encourages
 me to take my business elsewhere.

 So no, there's no pot and kettle here at all.

 And no, there's no need for Google to subvert the Great Firewall in
 order to solve this problem. Google could talk to the authorities in
 China to see what can be done to get unblocked. It could give App
 Engine users the option to move their sites to Google's data centers
 in China. It could start selling static IP hosting.  Plenty of
 solutions - just because you don't know about them doesn't mean they
 don't exist.

 On Apr 3, 1:54 am, Andy Freeman ana...@earthlink.net wrote:



    This is a forum for people to share information on GAE and solve
    problems.

  Pot, kettle and all that unless you know how Google can subvert the
  great firewall.

  On Apr 2, 8:48 pm, Andy selforgani...@gmail.com wrote:

   No one is interested in hearing your political/moral preaching.

   This is a forum for people to share information on GAE and solve
   problems. If you have anything of value to add to the discussion, feel
   free to add your bits. If not, you won't be missed.

   So you encourage me to take my business elsewhere?

   Who are you - are you the spokesperson of Google? Is that the Google
   official position on this matter?

   Or was that just another failed attempt of you at self-aggrandizement?

   On Apr 2, 7:53 pm, Joe Bowman bowman.jos...@gmail.com wrote:

China and the other countries block content that they deem
unacceptable for their citizens. In order to get appengine off the
blacklist, they would have to disallow people to create applications
which would be deemed offensive to those countries.

First, looking at it from the pure technical/business view, this would
require that applications no longer post immediately, and be under
review at each update at a minimum. This would potentially decrease
the amount of applications served (thus decreasing revenue) while
increasing costs to support the system.

From the political/moral view, Google has been a staunch supporter of
rights to speech, and it wasn't that long ago that they were chastised
for bending their own rules to support China at all by allowing the
filtering of search results. Further expansion of their products
having such filtering imposed by them would lead to more reputation
damage. Reputation damage also costs money.

So really, from two different perspectives, there's no business sense
in worrying about if appengine applications are being firewalled by 6
out of the 150+ countries that exist in the world. As a customer you
have every right to take your business elsewhere, and if making your
application available in those 6 countries is of the importance that
you need to, I encourage you to do so. Not every web application is
going to be appropriate for appengine.

There's 6 countries that block appengine, and you can only write
programs in Python. Which is really the limiting factor of the
application environment?

On Apr 2, 7:16 pm, Andy Freeman ana...@earthlink.net wrote:

  Why shouldn't this be google's problem?

 Suppose that I sold raincoats and you wanted to buy one of my

[google-appengine] Re: Google App Engine and China access

2009-04-02 Thread Andy Freeman

 Bound domains could not be reached because ghs.google.com is blocked.

Shouldn't you be addressing your complaint to the folks doing the
blocking?

On Apr 1, 10:10 pm, 秦锋 feng.w@gmail.com wrote:
 Bound domains could not be reached because ghs.google.com is blocked.

 I strongly want Google to deploy ghs.google.cn!

  On Apr 2, 11:11 am, Andy selforgani...@gmail.com wrote:



  Every now and then I see posts on App Engine being blocked by China.

  Several workarounds have been suggested, which seem to work some of
  the time but not others.

  What's the current state -- is App Engine accessible from China?

  Is it a good platform to use for apps that want to be accessible to
   Chinese users? What are your thoughts?



[google-appengine] Re: Should I take my website somewhere else? - blocked in China

2009-04-02 Thread Andy Freeman

 Why shouldn't this be google's problem?

Suppose that I sold raincoats and you wanted to buy one of my
raincoats.  If someone else got between us and stopped me from
delivering raincoats to you, who would you hold responsible?

Google isn't doing the blocking.

Yes, Google may be able to make more money if it can get around the
blocking, but that doesn't change the fact that the blocks are not
under Google's control.

In other words, blocking may be a problem, that is, an issue, for
Google, but it isn't Google's problem, that is, something that Google
has some obligation to act upon.


On Apr 2, 3:38 pm, Andy selforgani...@gmail.com wrote:
 Why shouldn't this be google's problem?

 Google's hosting platform is being blocked by the country with the
 largest internet population in the world. You think that's not a major
 problem?

 I've used plenty of hosting sites that are perfectly accessible from
 China. So obviously this is a problem for Google.

 On Apr 2, 11:18 am, Barry Hunter barrybhun...@googlemail.com wrote:



  And why is this Google's problem?



[google-appengine] Re: Should I take my website somewhere else? - blocked in China

2009-04-02 Thread Andy Freeman

  If A times 10 > B then fix it.

You're assuming that Google can "fix" it.  Since Google isn't doing
the blocking, this is an interesting assumption.


On Apr 2, 7:05 pm, WallyDD shaneb...@gmail.com wrote:
 Paying extra money for a static IP address is something that I would
 happily cough up money for. Could Google create the functionality?

 I am no expert on Firewalls and security but is this same type of
 blocking done with some corporate firewalls? I was under the
 impression that these countries buy their firewalls from the same
 companies which outfit corporate America.

 It is unfortunate that some politicians/employers choose to block
 their citizens/employees from viewing certain websites. Denying access
 to a whole portion of the web to people simply because of some poorly
 implemented IT policy is something that Google needs to deal with.

 How much AdSense revenue is Google losing from this per year? = A
 How much would it cost to fix (or work around) the problem? = B

 If A times 10 > B then fix it.

 Any chance of a response from Google?

 On Apr 2, 7:53 pm, Joe Bowman bowman.jos...@gmail.com wrote:



  China and the other countries block content that they deem
  unacceptable for their citizens. In order to get appengine off the
  blacklist, they would have to disallow people to create applications
  which would be deemed offensive to those countries.

  First, looking at it from the pure technical/business view, this would
  require that applications no longer post immediately, and be under
  review at each update at a minimum. This would potentially decrease
  the amount of applications served (thus decreasing revenue) while
  increasing costs to support the system.

  From the political/moral view, Google has been a staunch supporter of
  rights to speech, and it wasn't that long ago that they were chastised
  for bending their own rules to support China at all by allowing the
  filtering of search results. Further expansion of their products
  having such filtering imposed by them would lead to more reputation
  damage. Reputation damage also costs money.

  So really, from two different perspectives, there's no business sense
  in worrying about if appengine applications are being firewalled by 6
  out of the 150+ countries that exist in the world. As a customer you
   have every right to take your business elsewhere, and if making your
  application available in those 6 countries is of the importance that
  you need to, I encourage you to do so. Not every web application is
  going to be appropriate for appengine.

   There's 6 countries that block appengine, and you can only write
   programs in Python. Which is really the limiting factor of the
   application environment?

  On Apr 2, 7:16 pm, Andy Freeman ana...@earthlink.net wrote:

Why shouldn't this be google's problem?

   Suppose that I sold raincoats and you wanted to buy one of my
   raincoats.  If someone else got between us and stopped me from
   delivering raincoats to you, who would you hold responsible?

   Google isn't doing the blocking.

   Yes, Google may be able to make more money if it can get around the
   blocking, but that doesn't change the fact that the blocks are not
   under Google's control.

   In other words, blocking may be a problem, that is, an issue, for
   Google, but it isn't Google's problem, that is, something that Google
   has some obligation to act upon.

   On Apr 2, 3:38 pm, Andy selforgani...@gmail.com wrote:

Why shouldn't this be google's problem?

Google's hosting platform is being blocked by the country with the
largest internet population in the world. You think that's not a major
problem?

I've used plenty of hosting sites that are perfectly accessible from
China. So obviously this is a problem for Google.

On Apr 2, 11:18 am, Barry Hunter barrybhun...@googlemail.com wrote:

 And why is this Google's problem?


