[google-appengine] Re: Unit testing with webtest problem

2012-09-11 Thread Richard Arrano
What I'm asking more specifically is what does App Engine do, if anything, 
to os.putenv? Because using the App Engine console, I can do things like:
 
os.environ["x"] = ndb.Model()
 
and access it with os.environ["x"] without a problem. However, when I pull 
up the Python interpreter outside of App Engine and attempt to put anything 
other than a string into os.environ, it balks:
 
>>> import os
>>> os.environ["abc"] = {}
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\os.py", line 420, in __setitem__
    putenv(key, item)
TypeError: must be string, not dict

 
So it seems like App Engine does something to the putenv call so that it 
serializes model instances into strings first, and if that's the case it must 
also do a similar operation on os.getenv to deserialize them. What is App 
Engine doing, if anything, to the getenv/putenv calls?
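
For what it's worth, a unit test can sidestep the putenv() restriction without 
knowing what the runtime does internally. A minimal sketch, assuming all the 
test needs is a dict-like os.environ that accepts arbitrary objects (FakeEnviron 
and the helper names below are made up for illustration, not part of any SDK):

import os

class FakeEnviron(dict):
    """Dict-like stand-in for os.environ that never calls putenv()."""

def install_fake_environ():
    # Copy the real string entries, then swap the whole mapping out.
    original = os.environ
    os.environ = FakeEnviron(original)
    return original

def restore_environ(original):
    os.environ = original

A webtest/nose setUp() could call install_fake_environ() and tearDown() could 
restore the original mapping, so handlers that stash objects in os.environ no 
longer hit os.py's __setitem__.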
 
Thanks,
Richard

On Tuesday, September 11, 2012 6:09:16 AM UTC-7, Guido van Rossum wrote:

 You'll probably get more help from StackOverflow.com. You'll need to 
 provide more info; nobody can help you debug this with just that traceback 
 information unless they're psychic.

 On Sunday, September 9, 2012 10:06:33 AM UTC-7, Richard Arrano wrote:

 Hello,
 I've been using webtest to unit test my application and I've encountered 
 a strange issue. I wrap many of my get/post handlers in decorators and in 
 some of those decorators, I put ndb.Model instances in os.environ for later 
 use in the subsequent handler. This works on my local dev server and in 
 production. However, when I run nosetests it always gives me an error:
  
 os.environ["user"] = user
   File "C:\Python27\lib\os.py", line 420, in __setitem__
     putenv(key, item)
 TypeError: must be string, not User
  
 Any ideas on how to mitigate this so my tests won't error out at this 
 point?
  
 Thanks,
 Richard






[google-appengine] Re: Backends always fail on the dev server

2012-09-11 Thread Richard Arrano
I hate to disappoint you, but I never did figure it out. Instead, I worked 
around it by doing all of my backend testing on the production 
server. I also started using the background_thread functionality, which 
ONLY works on the production server. So unfortunately my only 
recommendation to both of you is to just test on production. Sorry guys.
 
-Richard

On Wednesday, August 1, 2012 9:40:01 PM UTC-7, Max Lieblich wrote:

 Hi,

 Did you ever figure this out? I am running into problems when I try to run 
 dev_appserver.py --backends; I always hit the error below, whether I request 
 the frontend or the backend. It just doesn't work.

    File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\apiproxy_stub_map.py", line 570, in check_success
      self.__rpc.CheckSuccess()
    File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\apiproxy_rpc.py", line 156, in _WaitImpl
      self.request, self.response)
    File "C:\Program Files (x86)\Google\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py", line 189, in MakeSyncCall
      self._MakeRealSyncCall(service, call, request, response)
    File "C:\Program Files (x86)\Google\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py", line 201, in _MakeRealSyncCall
      encoded_response = self._server.Send(self._path, encoded_request)
    File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 366, in Send
      f = self.opener.open(req)
    File "C:\Python27\lib\urllib2.py", line 400, in open
      response = self._open(req, data)
    File "C:\Python27\lib\urllib2.py", line 418, in _open
      '_open', req)
    File "C:\Python27\lib\urllib2.py", line 378, in _call_chain
      result = func(*args)
    File "C:\Python27\lib\urllib2.py", line 1207, in http_open
      return self.do_open(httplib.HTTPConnection, req)
    File "C:\Python27\lib\urllib2.py", line 1177, in do_open
      raise URLError(err)
  URLError: <urlopen error [Errno 10104] getaddrinfo failed>


 On Wednesday, August 3, 2011 10:09:16 PM UTC-7, Richard Arrano wrote:

 Hello, 
 I've been working with the Backends API and tried to run the 
 counter_demo and am always unsuccessful. Each instance spits out an 
 error about the connection being refused to /_ah/start. Here's one 
 example for an instance of loadtest: 

 [Backend Instance] [loadtest.7] [dev_appserver.py:4248] INFO GET /_ah/start HTTP/1.1 500 -
 [Backend Instance] [counter.0] [dev_appserver.py:4201] ERROR Exception encountered handling request
 Traceback (most recent call last):
   File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 4134, in _HandleRequest
     dev_appserver_index.SetupIndexes(config.application, root_path)
   File "C:\Program Files\Google\google_appengine\google\appengine\tools\dev_appserver_index.py", line 304, in SetupIndexes
     existing_indexes = datastore_admin.GetIndices(app_id)
   File "C:\Program Files\Google\google_appengine\google\appengine\api\datastore_admin.py", line 53, in GetIndices
     resp = _Call('GetIndices', req, resp)
   File "C:\Program Files\Google\google_appengine\google\appengine\api\datastore_admin.py", line 102, in _Call
     result = apiproxy_stub_map.MakeSyncCall('datastore_v3', call, req, resp)
   File "C:\Program Files\Google\google_appengine\google\appengine\api\apiproxy_stub_map.py", line 94, in MakeSyncCall
     return stubmap.MakeSyncCall(service, call, request, response)
   File "C:\Program Files\Google\google_appengine\google\appengine\api\apiproxy_stub_map.py", line 308, in MakeSyncCall
     rpc.CheckSuccess()
   File "C:\Program Files\Google\google_appengine\google\appengine\api\apiproxy_rpc.py", line 156, in _WaitImpl
     self.request, self.response)
   File "C:\Program Files\Google\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py", line 252, in MakeSyncCall
     response)
   File "C:\Program Files\Google\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py", line 178, in MakeSyncCall
     self._MakeRealSyncCall(service, call, request, response)
   File "C:\Program Files\Google\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py", line 190, in _MakeRealSyncCall
     encoded_response = self._server.Send(self._path, encoded_request)
   File "C:\Program Files\Google\google_appengine\google\appengine\tools\appengine_rpc.py", line 365, in Send
     f = self.opener.open(req)
   File "C:\Program Files\Python\lib\urllib2.py", line 381, in open
     response = self._open(req, data)
   File "C:\Program Files\Python\lib\urllib2.py", line 399, in _open
     '_open', req)
   File "C:\Program Files\Python\lib\urllib2.py", line 360, in _call_chain
     result = func(*args)
   File "C:\Program Files\Python\lib\urllib2.py", line 1107, in http_open
     return self.do_open(httplib.HTTPConnection, req)
   File "C:\Program Files\Python\lib\urllib2.py", line 1082, in do_open

[google-appengine] Unit testing with webtest problem

2012-09-09 Thread Richard Arrano
Hello,
I've been using webtest to unit test my application and I've encountered a 
strange issue. I wrap many of my get/post handlers in decorators and in 
some of those decorators, I put ndb.Model instances in os.environ for later 
use in the subsequent handler. This works on my local dev server and in 
production. However, when I run nosetests it always gives me an error:
 
os.environ["user"] = user
  File "C:\Python27\lib\os.py", line 420, in __setitem__
    putenv(key, item)
TypeError: must be string, not User
 
Any ideas on how to mitigate this so my tests won't error out at this point?
 
Thanks,
Richard




Re: [google-appengine] Change to blobstore/get_serving_url?

2012-08-22 Thread Richard Arrano
Frank,
Thanks for the tip and for debugging that issue. I was completely lost. 
However, when I try your workaround, even though my IDE can see that os 
has a function called putenv, for some reason App Engine does not. I get 
this error trying it that way:
 

os.putenv(datastore._ENV_KEY, "")
AttributeError: 'module' object has no attribute 'putenv'

 

Any idea on how to mitigate this? I have imported os but this is extremely 
strange. I'm on 2.7.3 and am able to use putenv in my own interpreter.

 

Thanks,

Richard


On Tuesday, August 21, 2012 10:25:30 PM UTC-7, Frank VanZile wrote:

 Here is my stack of the same problem.

  File "C:\work\twist.2\server\backend\lib\images\image_helper.py", line 18, in get_serving_url
    return images.get_serving_url(image_blob_key)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\images\__init__.py", line 1793, in get_serving_url
    return rpc.get_result()
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\apiproxy_stub_map.py", line 604, in get_result
    return self.__get_result_hook(self)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\images\__init__.py", line 1889, in get_serving_url_hook
    rpc.check_success()
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\apiproxy_stub_map.py", line 570, in check_success
    self.__rpc.CheckSuccess()
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\apiproxy_rpc.py", line 156, in _WaitImpl
    self.request, self.response)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\apiproxy_stub.py", line 160, in MakeSyncCall
    method(request, response)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\images\images_stub.py", line 296, in _Dynamic_GetUrlBase
    datastore.Put(entity_info)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\datastore.py", line 579, in Put
    return PutAsync(entities, **kwargs).get_result()
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\api\datastore.py", line 556, in PutAsync
    return _GetConnection().async_put(config, entities, local_extra_hook)
  File "C:\Program Files (x86)\Google\google_appengine\google\appengine\datastore\datastore_rpc.py", line 1534, in async_put
    pbs = [self.__adapter.entity_to_pb(entity) for entity in entities]
  File "C:\work\twist.2\server\backend\lib\external\ndb\model.py", line 561, in entity_to_pb
    pb = ent._to_pb()
 AttributeError: 'Entity' object has no attribute '_to_pb'


 I have run into the same type of problem before. The problem is the SDK 
 tries to cache the datastore_rpc connection; see _GetConnection in 
 datastore.py:

 def _GetConnection():
   """Retrieve a datastore connection local to the thread."""

   connection = None
   if os.getenv(_ENV_KEY):
     try:
       connection = _thread_local.connection
     except AttributeError:
       pass
   if connection is None:
     connection = datastore_rpc.Connection(adapter=_adapter)
     _SetConnection(connection)
   return connection

 An incorrect datastore connection is being used for that call because of this 
 connection cache.

 You can hack around it by adding this before the get_serving_url call:

 import os
 from google.appengine.api import datastore
 os.environ[datastore._ENV_KEY] = ''
 os.putenv(datastore._ENV_KEY, '')
 datastore._thread_local.connection = None
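
 The same reset can be folded into a small helper so it runs before every 
 get_serving_url call; this variant skips the os.putenv call (which fails in 
 the sandbox, per Richard's note above). get_serving_url_safe is an invented 
 name used only for this sketch:

 import os
 from google.appengine.api import datastore, images

 def get_serving_url_safe(blob_key):
     # Drop the cached datastore_rpc connection so the images stub's
     # datastore.Put gets a fresh one, then call the real API.
     os.environ[datastore._ENV_KEY] = ''
     datastore._thread_local.connection = None
     return images.get_serving_url(blob_key)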



 On Tuesday, August 21, 2012 6:59:25 PM UTC-7, Takashi Matsuo (Google) 
 wrote:


 Can you show me the whole stacktrace?
 What is your app-id?


 On Wed, Aug 22, 2012 at 7:14 AM, Richard Arrano ricka...@gmail.com wrote:

 Hello,
 I have a function that takes a user image upload and saves both the blob 
 key and a serving URL. After upgrading to 1.7.1, I now get an error trying 
 to save the updated entity to the datastore. The property pictures is an 
 ndb.StringProperty(repeated=True, indexed=False) and the property 
 picture_keys is an ndb.BlobKeyProperty(repeated=True) where the former will 
 hold the image serving URLs for the blob keys in the latter. In a 
 transaction, I read the uploaded file and generate a serving URL and save 
 the blob key and the serving URL. This worked last night before upgrading 
 to 1.7.1, but now I get an error on:
  

 article.pictures.append(get_serving_url(blob_info.key()))

  

 AttributeError: 'Entity' object has no attribute '_to_pb'
  
 Without fail on the development server, this line now generates that 
 error. Was there a change to the API and I should be doing something 
 differently? If so, what?
  
 Thanks,
 Richard
  

[google-appengine] Change to blobstore/get_serving_url?

2012-08-21 Thread Richard Arrano
Hello,
I have a function that takes a user image upload and saves both the blob 
key and a serving URL. After upgrading to 1.7.1, I now get an error trying 
to save the updated entity to the datastore. The property pictures is an 
ndb.StringProperty(repeated=True, indexed=False) and the property 
picture_keys is an ndb.BlobKeyProperty(repeated=True) where the former will 
hold the image serving URLs for the blob keys in the latter. In a 
transaction, I read the uploaded file and generate a serving URL and save 
the blob key and the serving URL. This worked last night before upgrading 
to 1.7.1, but now I get an error on:
 

article.pictures.append(get_serving_url(blob_info.key()))

 

AttributeError: 'Entity' object has no attribute '_to_pb'
 
Without fail on the development server, this line now generates that error. 
Was there a change to the API, and should I be doing something differently? 
If so, what?
 
Thanks,
Richard




[google-appengine] Task Guarantee?

2012-07-11 Thread Richard Arrano
Hello,
I have been designing my app with the notion in mind that even named
tasks may execute more than once, but I only recently came to realize
that a task may not execute at all. I have a task that operates on a
subset of my entities and it's absolutely imperative that all members
of this subset get processed and saved. I originally thought named
tasks would help accomplish this, but this does not seem to be the
case. Is there any way to guarantee that I process these entities? I
also considered a cron job that runs every couple of minutes to
check for unprocessed entities (since a cron job will kick off the
initial task), but I was hoping for a slightly more elegant solution.

Thanks,
Richard




[google-appengine] Re: Task Guarantee?

2012-07-11 Thread Richard Arrano
I also have not seen tasks fail to run, but I found this thread:
 
http://stackoverflow.com/questions/5583813/google-app-engine-added-task-goes-missing
 
Specifically, the part that says: "Tasks are not guaranteed to be executed 
in the order they arrive, and they are not guaranteed to be executed 
exactly once. In some cases, a single task may be executed *more than once 
or not at all*."
 
I haven't seen the behavior myself, and perhaps the commenter is not 
correct, but it occurred to me that I need to account for the 
possibility. I believe I will have a status flag a la Per's 
suggestion. Thanks!
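
A minimal sketch of that status-flag-plus-cron-sweep pattern (the Game model, 
its processed flag, and the task URL are illustrative names only):

from google.appengine.api import taskqueue
from google.appengine.ext import ndb
import webapp2

class Game(ndb.Model):
    processed = ndb.BooleanProperty(default=False)

class SweepUnprocessed(webapp2.RequestHandler):
    def get(self):
        # Cron hits this every few minutes; keys-only keeps the read cost low.
        keys = Game.query(Game.processed == False).fetch(500, keys_only=True)
        for key in keys:
            taskqueue.add(url='/tasks/process_game',
                          params={'key': key.urlsafe()})

The processing task flips processed to True as part of its real work, so an 
entity whose task never ran (or died partway through) simply gets picked up by 
the next sweep.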
 
-Richard 




[google-appengine] Re: NDB Caching Question

2012-04-25 Thread Richard Arrano
Do you mean rather than pull my 2500 entities, use a task to keep the
2500 updated in a single JSON property and then use it to sort on a
desired property as necessary? I was considering doing this as an
alternative. It seemed wasteful in my usage scenario to pull 2500
entities just to give the user back 50 or so, but to do it with
indexes caused a huge explosion in storage costs. Did you guys do any
experiments to see what was faster in your case?
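
Roughly what I mean by the single-JSON-summary alternative, for concreteness 
(the model and field names here are just illustrative):

from google.appengine.ext import ndb

class ProductSummary(ndb.Model):
    rows = ndb.JsonProperty(compressed=True)  # list of small dicts

def rebuild_summary():
    # Run from a task every so often to refresh the cached view.
    rows = [{'name': p.name, 'score': p.score}
            for p in Product.query().fetch(2500)]
    ProductSummary(id='singleton', rows=rows).put()

def top_rows(n=50, field='score'):
    summary = ndb.Key(ProductSummary, 'singleton').get()
    return sorted(summary.rows, key=lambda r: r[field], reverse=True)[:n]

That would be one entity read per request instead of 2500, at the cost of the 
summary being a little stale between rebuilds.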

Thanks,
Richard

On Apr 25, 1:17 pm, Alexander Trakhimenok
alexander.trakhime...@gmail.com wrote:
 Richard, I would advise going with the JSON property. In our project
 we use JSON intensively and update it in task queues & backends.
 Actually we have a rule - every page should make just 3-5 DB requests.
 In the future we may consider moving from JSON to ProtoBuf, but not for
 now.

 Also we've moved some rarely changed dictionaries (like geo locations
 - e.g. all cities in the world) into the Python code. That pushed us
 to use F2 instances due to higher memory demand but resulted in lower
 latency and almost the same costs. It's cheaper to upload a new version of
 the app when needed.
 --
 Alexander Trakhimenok
 Dev lead at http://www.myclasses.org/project

 On Apr 24, 6:07 pm, Richard Arrano rickarr...@gmail.com wrote:



  Thank you for the quick and very informative reply. I wasn't even
  aware this was possible with NDB. How would those x.yref.get() calls
  show up in AppStats? Or would they at all if it's just pulling it from
  memory?

  Thank you Kaan as well, I will actually experiment with the
  PickleProperty and see what's faster. I like that solution because the
  X kind is not one I expect to be heavily cached so I don't mind
  actually caching the pickled instance as I expect them to be evicted
  within a relatively short amount of time.

  I also wanted to ask: I saw someone did a speed test with NDB and I
  noticed he was pulling 500 entities of 40K and in the worst-case 0%
  cache hit scenario, it took something like 8-10 seconds. I was
  actually planning to have a piece of my application regularly query
  and cache ~2500 entities(of 2500) and sort on it to avoid a huge
  amount of indices(and a NOT IN filter that would really slow things
  down). Is this feasible or would you expect his results to scale, i.e.
  500 entities with 0% cache hits * 5 ~= 40-50s in my usage scenario? Or
  was there something unique to his situation with his indices and large
  amount of data? In mine each entity has about 10 properties with zero
  indices. If this is the case I'll probably copy the entities into a
  JsonProperty that occasionally gets updated and simply query/cache
  that since I don't expect the 2500 entities to change very often.

  Thanks,
  Richard

  On Apr 24, 12:59 pm, Guido van Rossum gu...@google.com wrote:

   On Monday, April 23, 2012 10:21:26 PM UTC-7, Richard Arrano wrote:

I'm switching from db to ndb and I have a question regarding caching:

In the old db, I would have a class X that contains a reference to a
class Y. The Y type would be accessed most frequently and rarely
change. So when I would query an X and retrieve the Y type it points
to, I would store X in the memcache with the actual instance Y rather
than the key. If X is invalidated in the memcache, then so is the Y
instance but otherwise I would skip the step of querying Y upon re-
retrieving X from the memcache. Is there any way to do this in ndb? Or
must I re-query each Y type even if it is from memcache or context?

   If you leave the caching to NDB, you probably needn't worry about this
   much. It's going to be an extra API call to retrieve Y (e.g. y =
   x.yref.get()) but that will generally be a memcache roundtrip. If you are
   retrieving a lot of Xes in one query, there's a neat NDB idiom to prefetch
   all the corresponding Ys in one roundtrip:

   xs = MyModel.query(...).fetch()
   _ = ndb.get_multi([x.yref for x in xs])

   This effectively throws away the ys, but populates them in the context
   cache. After this, for any x in xs, the call x.yref.get() will use the
   context cache, which is a Python dict in memory. (Its lifetime is one
   incoming HTTP request.)

   You can even postpone waiting for the ys, using an async call:

   xs = MyModel.query(...).fetch()
   _ = ndb.get_multi_async([x.yref for x in xs])

   Now the first time you reference some x.yref.get() it will block for the
   get_multi_async() call to complete, and after that all subsequent
   x.yref.get() calls will be satisfied from memory (no server roundtrip at
   all).




[google-appengine] Re: NDB Caching Question

2012-04-24 Thread Richard Arrano
Thank you for the quick and very informative reply. I wasn't even
aware this was possible with NDB. How would those x.yref.get() calls
show up in AppStats? Or would they at all if it's just pulling it from
memory?

Thank you Kaan as well, I will actually experiment with the
PickleProperty and see what's faster. I like that solution because the
X kind is not one I expect to be heavily cached so I don't mind
actually caching the pickled instance as I expect them to be evicted
within a relatively short amount of time.

I also wanted to ask: I saw someone did a speed test with NDB and I
noticed he was pulling 500 entities of 40K and in the worst-case 0%
cache hit scenario, it took something like 8-10 seconds. I was
actually planning to have a piece of my application regularly query
and cache ~2500 entities (of 2500) and sort on it to avoid a huge
number of indices (and a NOT IN filter that would really slow things
down). Is this feasible, or would you expect his results to scale, i.e.
500 entities with 0% cache hits * 5 ~= 40-50s in my usage scenario? Or
was there something unique to his situation with his indices and large
amount of data? In mine each entity has about 10 properties with zero
indices. If this is the case I'll probably copy the entities into a
JsonProperty that occasionally gets updated and simply query/cache
that since I don't expect the 2500 entities to change very often.

Thanks,
Richard

On Apr 24, 12:59 pm, Guido van Rossum gu...@google.com wrote:
 On Monday, April 23, 2012 10:21:26 PM UTC-7, Richard Arrano wrote:

  I'm switching from db to ndb and I have a question regarding caching:

  In the old db, I would have a class X that contains a reference to a
  class Y. The Y type would be accessed most frequently and rarely
  change. So when I would query an X and retrieve the Y type it points
  to, I would store X in the memcache with the actual instance Y rather
  than the key. If X is invalidated in the memcache, then so is the Y
  instance but otherwise I would skip the step of querying Y upon re-
  retrieving X from the memcache. Is there any way to do this in ndb? Or
  must I re-query each Y type even if it is from memcache or context?

 If you leave the caching to NDB, you probably needn't worry about this
 much. It's going to be an extra API call to retrieve Y (e.g. y =
 x.yref.get()) but that will generally be a memcache roundtrip. If you are
 retrieving a lot of Xes in one query, there's a neat NDB idiom to prefetch
 all the corresponding Ys in one roundtrip:

 xs = MyModel.query(...).fetch()
 _ = ndb.get_multi([x.yref for x in xs])

 This effectively throws away the ys, but populates them in the context
 cache. After this, for any x in xs, the call x.yref.get() will use the
 context cache, which is a Python dict in memory. (Its lifetime is one
 incoming HTTP request.)

 You can even postpone waiting for the ys, using an async call:

 xs = MyModel.query(...).fetch()
 _ = ndb.get_multi_async([x.yref for x in xs])

 Now the first time you reference some x.yref.get() it will block for the
 get_multi_async() call to complete, and after that all subsequent
 x.yref.get() calls will be satisfied from memory (no server roundtrip at
 all).




[google-appengine] NDB Caching Question

2012-04-23 Thread Richard Arrano
Hello,
I'm switching from db to ndb and I have a question regarding caching:

In the old db, I would have a class X that contains a reference to a
class Y. The Y type would be accessed most frequently and rarely
change. So when I would query an X and retrieve the Y type it points
to, I would store X in the memcache with the actual instance Y rather
than the key. If X is invalidated in the memcache, then so is the Y
instance but otherwise I would skip the step of querying Y upon re-
retrieving X from the memcache. Is there any way to do this in ndb? Or
must I re-query each Y type even if it is from memcache or context?

Thanks,
Richard




[google-appengine] Re: Understanding Data Writes

2012-02-09 Thread Richard Arrano
Hi Robert,
Thanks for the quick response! I gathered that the admin would be
incurring some overhead, but does it seem reasonable that it could
account for my estimate having been off by nearly a factor of 10? It
seems like in that case, it would be far cheaper to just write a
custom entity-delete task that does a keys-only query and calls
db.delete on them.
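
A sketch of what I mean by that custom delete, using the old db API (delete_all 
is just an illustrative helper name):

from google.appengine.ext import db

def delete_all(model_class, batch=500):
    # Keys-only query plus batched deletes: 2 writes + 2 writes per indexed
    # property value per entity, with no extra bookkeeping entities.
    while True:
        keys = model_class.all(keys_only=True).fetch(batch)
        if not keys:
            break
        db.delete(keys)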

Thanks for the article; I think I understand. To clarify, in the
example I mentioned, would adding a non-composite EntitiesByProperty
ASC type index then require 12,000 * (size of EntitiesByProperty ASC
i.e. 3 str + key + property value) on top of the original storage
space?

Thanks,
Rick

On Feb 8, 11:18 pm, Robert Kluin robert.kl...@gmail.com wrote:
 Hi Richard,
   The datastore admin also incurs some overhead.  At the minimum it
 will be querying to find the keys that need deleted, so you'll have
 *at least* 1 additional small operation per entity deleted.  In
 addition you'll have overhead from the queries, and the shards getting
 / writing their status entities -- so several more datastore
 operations per task that was run.  All those add up pretty fast.

   You can see how many write operations an entity needs in the SDK's
 datastore view.  There's not a really good way to easily determine the
  storage used by an index; I've got a feature request to at least
 provide details on what existing indexes are using:
  http://code.google.com/p/googleappengine/issues/detail?id=2740

  There's also an article where this is discussed; note that the
 article is a little out of date.  It doesn't account for namespaces,
 so you'll need to factor those in if you're using them.  I usually do
 my estimates in a spreadsheet.
    http://code.google.com/appengine/articles/storage_breakdown.html

 Robert







 On Thu, Feb 9, 2012 at 00:44, Richard Arrano rickarr...@gmail.com wrote:
  Hello,
  I'm having some trouble understanding the billing figures for when I
  perform data writes. I had 1300 entities with 1 property indexed and I
  kicked off a job via the Datastore Admin to delete them all. Given
  that:

  Entity Delete (per entity)      2 Writes + 2 Writes per indexed property
  value + 1 Write per composite index value

  It seems like in that case, for each entity there would be 2 writes +
  2 writes for the 1 indexed property = 4 writes per entity, so 4 * 1300
  = 5200 writes used, or ~10% of the daily quota for writes consumed.
  However, within seconds, the data was indeed deleted but 100% of my
  quota had been consumed(0% had been consumed prior). How was I somehow
  off by a factor of 10?

  On a related note, is there any tool out there for estimating how much
  space will be consumed by a certain object? I.e. I have a huge amount
  of one object, around 12,000 and I would love to see how much space
  would be consumed with one property indexed as opposed to two.

  Thanks,
  Rick





[google-appengine] Understanding Data Writes

2012-02-08 Thread Richard Arrano
Hello,
I'm having some trouble understanding the billing figures for when I
perform data writes. I had 1300 entities with 1 property indexed and I
kicked off a job via the Datastore Admin to delete them all. Given
that:

Entity Delete (per entity): 2 Writes + 2 Writes per indexed property
value + 1 Write per composite index value

It seems like in that case, for each entity there would be 2 writes +
2 writes for the 1 indexed property = 4 writes per entity, so 4 * 1300
= 5200 writes used, or ~10% of the daily quota for writes consumed.
However, within seconds, the data was indeed deleted but 100% of my
quota had been consumed (0% had been consumed prior). How was I somehow
off by a factor of 10?
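
Spelling the estimate out (the 50,000 daily figure is only back-derived from 
the "~10% of the daily quota" statement above, not an official number):

entities = 1300
writes_per_entity = 2 + 2 * 1   # 2 base writes + 2 per indexed property value
total_writes = entities * writes_per_entity            # 5200
assumed_daily_free_writes = 50000                       # implied by the ~10%
fraction_used = float(total_writes) / assumed_daily_free_writes   # ~0.10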

On a related note, is there any tool out there for estimating how much
space will be consumed by a certain object? I.e. I have a huge number
of one kind of entity, around 12,000, and I would love to see how much space
would be consumed with one property indexed as opposed to two.

Thanks,
Rick




[google-appengine] Quick Pricing Clarification

2012-02-07 Thread Richard Arrano
Hello,
I just was wondering in the pricing model when it says:
Query: 1 Read + 1 Read per entity returned

Suppose that it's a get_by_key_name type query and I supply key names that
do not exist, i.e. ones that will return None in Python. Do those count toward
the 1 Read per entity returned or not?
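
For concreteness, this is the kind of call I mean (UserAccount is just an 
example kind):

from google.appengine.ext import db

accounts = UserAccount.get_by_key_name(['alice', 'bob', 'no_such_name'])
# The result list lines up with the key names; missing entities come back
# as None, e.g. [<UserAccount>, <UserAccount>, None].
found = [a for a in accounts if a is not None]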

Thanks,
Rick




[google-appengine] Bulkloader / datastore quota

2012-01-05 Thread Richard Arrano
Hello,
I've been uploading some data from the development to the production
server. This first upload was ~450 entities with 3 indexed properties
each. In one upload (batching at 10 per post), I burned 60% of the free
quota of writes. Is this normal? If not, any idea on what I'm doing
wrong? My concern is that I have a few files with several thousand
entities and I was able to upload them before the pricing change, but
it looks like now I won't be able to. Will I just have to split it up
into tons of little pieces and upload one each day while burning the
quota?

Thanks,
Richard




[google-appengine] PIL Import Error

2012-01-01 Thread Richard Arrano
Hello,
I've been attempting to work with the Python Imaging Library in my
app, and for the most part I can do so. In my app.yaml I have:
libraries:
- name: PIL
  version: latest

However, I wanted to use ImageMath.eval and I found this error:

 line 11, in <module>
    import ImageMath
  File "/base/python27_runtime/python27_lib/versions/third_party/PIL-1.1.7/PIL/ImageMath.py", line 19, in <module>
    import _imagingmath

It seems like I should be able to use ImageMath since it is in the
directory on App Engine's side, but there's a problem finding the
SO file. Is this something that's deliberately disabled for us?

Thanks,
Rick




[google-appengine] Using Threads in Backends

2011-12-29 Thread Richard Arrano
Hello,
I've been writing a game manager backend whose purpose is to check
every couple of seconds for games that are ready to be launched, call
a task to launch them, and rinse and repeat. I have a handler for a
front-end instance to URLFetch from the backend and retrieve its state
to display to users, and I put the main loop into a thread that calls
time.sleep. The problem is that it seems like when I run the main loop
function in the thread, absolutely nothing happens. It's of this form:

def start(self):
    _id = thread.start_new_thread(self.check_current_games, ())
    logging.info("Thread id: %s" % _id)

When start is called, it seems like it just totally locks up and
nothing I output to the logs shows up. The logging line about the
thread id is never shown. Does anyone know why this might be and how I
can go about fixing it?

Also, when I worked on the backend starting and stopping it many
times, I used up the full 9 hour backend quota in ~2 hours. Is this
because it's keeping one idle and when I immediately click start again
I'm running several instances at the same time? It never seems to
indicate I have more than one instance active, so I'm not sure why
this would be the case.

Thanks,
Rick




[google-appengine] Re: Using Threads in Backends

2011-12-29 Thread Richard Arrano
An addendum: I made a small mistake and it actually does output to the
logs. However, in the main loop, which is of the form:

def check_current_games(self):
    while True:
        # do work
        time.sleep(10)

it always hangs and the loop never performs a second iteration. I also
noticed in the logs that it's full of requests to /_ah/stop. Is App
Engine attempting to kill my backend? Is there some other way I should
be doing this? All I want is a thread that loops and checks game
states, launches games that are ready, and then goes to sleep and
handles any other requests involving backend state information.
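
For completeness, the background_thread API mentioned earlier in this digest is 
the production-backend way to run a loop that outlives the request that started 
it; a sketch using the same method names as above, worth double-checking 
against the SDK docs for your version:

import logging
from google.appengine.api import background_thread

def start(self):
    # Unlike thread.start_new_thread, a background thread may keep running
    # after this request returns (production backends only).
    background_thread.start_new_background_thread(self.check_current_games, [])
    logging.info("Started game-checking background thread")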




[google-appengine] Multithreading question

2011-11-22 Thread Richard Arrano
Hello,
Quick question regarding multithreading in Python 2.7:
I have some requests that call 2-3 functions that call the memcache in
each function. It would be possible but quite complicated to just use
get_multi, and I was wondering if I could simply put each function
into a thread and run the 2-3 threads to achieve some parallelism.
Would this work, or have I misunderstood what we can and cannot do
with regards to multithreading in 2.7?
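
A sketch of the thread-per-lookup idea, in case it helps frame the question 
(the function and key names are illustrative):

import threading
from google.appengine.api import memcache

def fetch_parallel(keys_by_name):
    """Run one memcache.get per entry of {name: cache_key}, each in a thread."""
    results = {}
    def worker(name, key):
        results[name] = memcache.get(key)
    threads = [threading.Thread(target=worker, args=(n, k))
               for n, k in keys_by_name.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results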

Thanks,
Richard




[google-appengine] Re: Multithreading question

2011-11-22 Thread Richard Arrano
@Brandon:
This is true but it just would take a lot of rewriting that may or may
not be worth it.

@Brian
Thanks for the tip, I didn't even realize that (I haven't been using
AppStats, shame on me). Would the savings be worth it, in your
opinion, when the values aren't present in the cache and I have to fall
back to 3 gets of varying size?

On Nov 22, 12:37 pm, Brian Quinlan bquin...@google.com wrote:
 Hi Richard,

 On Wed, Nov 23, 2011 at 7:18 AM, Richard Arrano rickarr...@gmail.com wrote:
  Hello,
  Quick question regarding multithreading in Python 2.7:
  I have some requests that call 2-3 functions that call the memcache in
  each function. It would be possible but quite complicated to just use
  get_multi, and I was wondering if I could simply put each function
  into a thread and run the 2-3 threads to achieve some parallelism.
  Would this work or am I misunderstood about what we can and cannot do
  with regards to multithreading in 2.7?

 This will certainly work but I'm not sure that it would be worth the 
 complexity.

 Fetching a value from memcache usually takes 5ms so parallelizing 3
 memcache gets is going to save you ~10ms.

 Cheers,
 Brian







  Thanks,
  Richard





[google-appengine] Re: Multithreading question

2011-11-22 Thread Richard Arrano
I see, I'm guessing it probably isn't worth it to optimize this
particular area but it's good to know that the multithreading ability
would work in a more complex instance where I truly needed the
parallelism.

One last question on the topic, having to do with thread safety: the
function that I was referring to was actually a decorator that checks
certain permissions and that I apply to a large number of handlers.
It also stores the returned objects via self.permissions for example.
Is there a possibility of a race condition on self.permissions or does
it function in such a manner that this is impossible?
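
For context, a stripped-down sketch of the kind of decorator I mean (the names 
requires_permissions and load_permissions are made up for illustration):

import functools

def requires_permissions(handler_method):
    @functools.wraps(handler_method)
    def wrapper(self, *args, **kwargs):
        # Stored on the handler instance, which webapp creates per request,
        # rather than in anything module-level that requests could share.
        self.permissions = load_permissions(self.request)
        if not self.permissions:
            self.error(403)
            return
        return handler_method(self, *args, **kwargs)
    return wrapper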

Thanks,
Richard

On Nov 22, 1:56 pm, Brian Quinlan bquin...@google.com wrote:
 On Wed, Nov 23, 2011 at 8:48 AM, Richard Arrano rickarr...@gmail.com wrote:
  @Brandon:
  This is true but it just would take a lot of rewriting that may or may
  not be worth it.

  @Brian
  Thanks for the tip, I didn't even realize that(I haven't been using
  AppStats, shame on me). Would the savings be worth it, in your
  opinion, when they're not present in the cache and have to resort to 3
  gets of varying size?

 It's hard to give advice on this kind of complexity vs. performance
 trade-off without really understanding the application.

 Datastore gets are slower than memcache gets but are still pretty quick.

 Cheers,
 Brian







  On Nov 22, 12:37 pm, Brian Quinlan bquin...@google.com wrote:
  Hi Richard,

  On Wed, Nov 23, 2011 at 7:18 AM, Richard Arrano rickarr...@gmail.com 
  wrote:
   Hello,
   Quick question regarding multithreading in Python 2.7:
   I have some requests that call 2-3 functions that call the memcache in
   each function. It would be possible but quite complicated to just use
   get_multi, and I was wondering if I could simply put each function
   into a thread and run the 2-3 threads to achieve some parallelism.
   Would this work or am I misunderstood about what we can and cannot do
   with regards to multithreading in 2.7?

  This will certainly work but I'm not sure that it would be worth the 
  complexity.

  Fetching a value from memcache usually takes 5ms so parallelizing 3
  memcache gets is going to save you ~10ms.

  Cheers,
  Brian

   Thanks,
   Richard





[google-appengine] Re: Keep it short: Who is forced to leave GAE?

2011-08-31 Thread Richard Arrano
I'll be leaving if some of the prices aren't tweaked, particularly the
channels. I was banking on being able to use a large amount of
channels, likely in the thousands per day. I did a double take when I
realized the new price was per hundred rather than per thousand,
particularly when channels expire after two hours and need to be re-
created. Does anyone have a good alternative to the Channel API using
Amazon's solutions?

-Richard

On Aug 31, 6:05 pm, Raymond C. windz...@gmail.com wrote:
 I am not asking who is not happy with the new pricing (virtually all GAE
 users).

 I am just asking who is FORCED to leave GAE because you cannot afford to
 keep running on GAE under the new pricing model.  Please (if possible) state
 the monthly price change as well.

 And what options you are considering?




[google-appengine] Re: Keep it short: Who is forced to leave GAE?

2011-08-31 Thread Richard Arrano
Thanks Martin, that looks like a great alternative. Do you know
anything about Amazon's SNS? Is it applicable as a Channel replacement
or am I misunderstanding it? Either way, looks like a good way to
replace the now extremely expensive Channel API. Google appears to be
pricing themselves out of the cloud computing business. And I agree
with the views of a poster in another thread who mentioned that
working with App Engine and their changing models is like trying to
hit a moving target. Developers can't and won't spend all their time
reworking their applications to avoid incurring huge charges when
Google changes pricing around.

Thanks,
Richard

On Aug 31, 7:51 pm, Martin Ceperley mar...@ceperley.com wrote:
 Richard, a good alternative to the Channel API is Beacon Push 
 (http://beaconpush.com/); we have been using it and it's dead simple and works 
 flawlessly. It supports broadcast messaging (which the Channel API does not 
 support out of the box) as well as per-user messaging. Also extremely 
 affordable: 3 million messages for $3.29.

 -Martin

 On Aug 31, 2011, at 10:33 PM, Richard Arrano wrote:







  I'll be leaving if some of the prices aren't tweaked, particularly the
  channels. I was banking on being able to use a large amount of
  channels, likely in the thousands per day. I did a double take when I
  realized the new price was per hundred rather than per thousand,
  particularly when channels expire after two hours and need to be re-
  created. Does anyone have a good alternative to the Channel API using
  Amazon's solutions?

  -Richard

  On Aug 31, 6:05 pm, Raymond C. windz...@gmail.com wrote:
  I am not asking who is not happy with the new pricing (virtually most of 
  GAE
  users).

  I am just asking who is FORCED to leave GAE because you cannot afford to
  keep running on GAE under the new pricing model.  Please (if possible) 
  state
  the monthly price change as well.

  And what options you are considering?





[google-appengine] Re: Keep it short: Who is forced to leave GAE?

2011-08-31 Thread Richard Arrano
As far as I can tell on the new billing page, it says 100 under "Free
Quota" for "Channels Created" and then a rate of $0.01 for every 100
more channels created. I could be misinterpreting it, but it seems
clear cut.

PubNub also looks like a great alternative to Channels, I'll have to
look at the two and weigh them. On a related note, if my application
is written for webapp and Django, does anyone know if it would be a
relatively simple task to set up Django/webapp on EC2 and transition
my code? Obviously I'd have to change the database to use SimpleDB,
but again, how arduous would this task be?

-Richard

On Aug 31, 8:05 pm, Srirangan sriran...@gmail.com wrote:
   I did a double take when I realized the new price was per hundred
  rather than per thousand, particularly when
  channels expire after two hours and need to be re-created.

 Correct me if I am wrong but doesn't this charge apply only after you exceed
  your free quota of 8000-odd created channels per day?




[google-appengine] Datastore index overhead?

2011-08-25 Thread Richard Arrano
Hello,
I recently uploaded quite a bit of data using the bulkloader. I
noticed that my stored data quota has gone up by a significant
amount. The data statistics tell me that I'm using 33MBytes for the
size of all entities. I realize there's some overhead for indexes,
but the type that I uploaded ~40,000 instances of is only indexed on
two properties. I'm using 30% of the 1.00 GB storage now. So I'm
wondering a) does metadata not include the storage for indexes and
b) does the discrepancy make sense given my situation?

Thanks,
Richard Arrano




[google-appengine] Re: Datastore index overhead?

2011-08-25 Thread Richard Arrano
And for the record, I did the upload before the datastore statistics
had been refreshed, so those stats are indeed current.

-Richard

On Aug 25, 8:30 pm, Richard Arrano rickarr...@gmail.com wrote:
 Hello,
 I recently uploaded quite a bit of data using the bulkloader. I
 noticed that my stored data quota has gone up by a significant
 amount. The data statistics tell me that I'm using 33MBytes for the
 size of all entities. I realize there's some overhead for indexes,
 but the type that I uploaded ~40,000 instances of is only indexed on
 two properties. I'm using 30% of the 1.00 GB storage now. So I'm
 wondering a) does metadata not include the storage for indexes and
 b) does the discrepancy make sense given my situation?

 Thanks,
 Richard Arrano




[google-appengine] Bulkloader Problem

2011-08-18 Thread Richard Arrano
Hello,
I'm attempting to upload some data in CSV format to the App Engine
servers. I filled out the first row specifying the property names,
including key. However, no matter what I do, including
import_transform: transform.create_foreign_key('UserAccount',
key_is_id=False), but it always has the same problem where the key id
is not the name I specify in the CSV data, rather an automatically
generated one. Oddly, they always seem to be ids around 38000 even
after several deletes and reuploads. Nonetheless, they are never the
names I give them. Any ideas on how to fix this?

Thanks,
Richard




[google-appengine] Re: Development Server Performance Issue

2011-07-31 Thread Richard Arrano
I haven't changed anything in ages, so if it used to be default then
yes. How can I change this?

-Richard

On Jul 30, 2:21 am, Tim Hoffman zutes...@gmail.com wrote:
 Are you using the sqlite backend? Maybe you aren't, and your datastore size is
 growing; the default datastore performs terribly when it gets big.

 Rgds

 T




[google-appengine] Re: Development Server Performance Issue

2011-07-31 Thread Richard Arrano
Thanks Tim, you nailed it. I attributed it to 1.5.2 but I realized I
created a moderate amount of data around the same time, hence the
slowdown. Using SQLite fixed it completely. Thanks!

-Richard

On Jul 31, 4:52 pm, Tim Hoffman zutes...@gmail.com wrote:
 --use_sqlite

 If your datastore gradually grows and you keep adding to it then it will get
 really slow.
 In 1.5.2, unless you use the --default_partition argument, you end up with a
 different namespace, so the datastore may appear empty (you can't find any
 data); you chuck some more data in and boom, your datastore is twice as big
 as it was before.

 Just a guess mind you.

 Rgds

 Tim




[google-appengine] Re: Development Server Performance Issue

2011-07-30 Thread Richard Arrano
It was on the Python dev server. I should refine my statement and say
that previously, it would sometimes hit 400 MB of memory used but now
it starts at 600 and usually balloons to 800+. The write times are
horrendous for db.put() and db.delete().

On Jul 29, 1:40 pm, Ikai Lan (Google) ika...@google.com wrote:
 Is this the Python or Java dev server? Has anyone else experienced similar
 issues?

 --
 Ikai Lan
 Developer Programs Engineer, Google App Engine
 plus.ikailan.com | twitter.com/ikai

  On Fri, Jul 29, 2011 at 6:13 AM, Richard Arrano rickarr...@gmail.com wrote:

  Hello,
  I've noticed that ever since I installed 1.5.2, the performance of my
  development server has degraded terribly. It used to use ~250 MB of
  memory and now, without any major changes to my application, it
  consistently uses ~600-800 MB. Writing to the local datastore has now
  become incredibly slow; writing ~100 entries now takes ~3-5 minutes.
  Any idea why this might be? Some flag I need to set? Any help is much
  appreciated it, it's majorly hampering my development.

  Thanks,
  Richard





[google-appengine] Development Server Performance Issue

2011-07-29 Thread Richard Arrano
Hello,
I've noticed that ever since I installed 1.5.2, the performance of my
development server has degraded terribly. It used to use ~250 MB of
memory and now, without any major changes to my application, it
consistently uses ~600-800 MB. Writing to the local datastore has now
become incredibly slow; writing ~100 entries now takes ~3-5 minutes.
Any idea why this might be? Some flag I need to set? Any help is much
appreciated; this is majorly hampering my development.

Thanks,
Richard




[google-appengine] how to avoid an IN query

2011-05-21 Thread Richard Arrano
Hello,
I'm working on a problem that at the moment seems to require an
expensive IN query, which I'd like to avoid. Basically, each group of
users (and there may be thousands of such groups) draws product data
from a subset of currently 8 providers (though it could reach ~16). The
subset must contain at least 1, and can contain as many as the number
of providers. Each user's inventory contains a reference property to
the product, and the product has a reference to its provider. What I'd
like to do is create a view, for each group, of the products available
given the providers that group is drawing from (which can and will vary
from user group to user group). So one way to do this is a query like:

Product.all().filter('provider IN', providers)

Where providers is a list representing the subset of providers that the
user group is drawing from. But this of course is quite slow. Given
that the number of providers is relatively small, is there any way to
change my approach, or to model the data in such a way that I can
create this view in a speedy manner?
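
For what it's worth, the naive alternative I've been considering is to
run one equality query per provider and merge in memory, since the
provider count is small - roughly what IN does under the hood anyway,
but doing it explicitly would let each provider's list be cached on its
own (untested sketch, using the Product model above):

def products_for_providers(providers):
    # One equality query per provider (at most ~8-16 of them), merged in
    # memory; each provider's result list could also be memcached separately.
    results = []
    for provider in providers:
        results.extend(Product.all().filter('provider =', provider).fetch(1000))
    return results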

Thanks,
Richard




[google-appengine] Re: How can I test backends with my free app

2011-05-16 Thread Richard Arrano
What I think he was referring to, and a problem I've encountered, is
working with Backends in the development server. The dev server will
not acknowledge the presence of Backends - the link is listed in the
admin console, but it won't list any backends you've configured in the
yaml file. Are these meant to only work on the production server? I
actually was trying to experiment with the backends-io you linked to
and I could not get it to work on the dev server at all.

-Richard

On May 16, 2:46 pm, Greg Darke (Google) darke+goo...@google.com
wrote:
 To upload this backend to App Engine, you need to perform a backends update:

 appcfg.py backends update path/to/application/

 On 12 May 2011 10:04, Sergey tvoys...@gmail.com wrote:

  I want to test the new feature - backends.
  As I understand it, it's enough to create backends.yaml and describe the
  parameters of my backends there.
  I've done that:

  backends:
  - name: testbe
    class: B1
    instances: 1

  I ran this app on the dev server. At the admin console I see "This
  application doesn't define any backends. See the documentation for more."

  I tried to deploy this into GAE - same effect.

  What am I doing wrong?
  Is there some code examples to better understand the idea of backends?







[google-appengine] Re: Backends clarifications

2011-05-13 Thread Richard Arrano
Ah, that's great news then, I didn't realize that.

Greg: Thanks for the link, I misunderstood a bit and didn't see it
in the App Engine Samples.

One thing that's intrigued me about the reserved backend, and it's
mentioned in the documentation, is keeping track of game states. I was
hoping someone could tell me if this setup would make sense: if I need
to update possibly hundreds of game states, and by update I mean
sending out actual updates via the Channel API, I thought I could do
this with two separate instances. One of them holds the data (there
isn't much data to keep track of, 128MB may be sufficient); the other
runs a while loop that only exits when all games have completed,
iterating every second or every other second using time.sleep(1) or
time.sleep(2). It sends out updates to each client in each game that
requires updates, and gets the data from the other backend instance.
Then it would sleep until the next iteration. Is this a sensible use of
backends, or is this actually more suited to tasks? I was originally
using tasks that enqueue a new task for each following second to
accomplish this. Which one would be better and why?
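
For the two-instance idea, I assume the looping backend would reach the
data-holding one over HTTP, something like this (rough sketch only;
'gamedata' is a hypothetical backend name and the /state handler is
assumed to exist, and I'm assuming backends.get_url works the way I
think it does):

from google.appengine.api import backends, urlfetch

def read_game_state(game_id):
    # Ask the data-holding backend for one game's state over HTTP instead
    # of touching the datastore on every iteration of the loop.
    url = backends.get_url('gamedata') + '/state?game=' + game_id
    return urlfetch.fetch(url).content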

Thanks,
Richard

On May 12, 11:10 am, Gregory D'alesandre gr...@google.com wrote:
 On Thu, May 12, 2011 at 3:32 AM, Richard Arrano rickarr...@gmail.com wrote:

  Hello,
  I have a few questions about the new Backends feature:
  It seems like a reserved backend allows us to have access to a sort of
  unevictable memcache, is this correct? If I were doing something like
  a game, would it be reasonable to have a single instance keep track of
  a few pieces of vital information about each game? The sum of which
  might be around 1MB of information at any given time.

  How would I go about this anyway? Address the reserved instance and
  how would I set a value in its memory?

  And just to be clear, there's no free version of Backends, correct?

 Actually, there is!  You get $0.72 worth of backends free per day (this can
 be used in any configuration you'd like: 9 hours of a B1, or 1 hour of a B8 +
 1 hour of a B1, etc.).

 Greg



  Thanks,
  Richard







[google-appengine] Backends clarifications

2011-05-12 Thread Richard Arrano
Hello,
I have a few questions about the new Backends feature:
It seems like a reserved backend allows us to have access to a sort of
unevictable memcache, is this correct? If I were doing something like
a game, would it be reasonable to have a single instance keep track of
a few pieces of vital information about each game? The sum of which
might be around 1MB of information at any given time.

How would I go about this anyway? Address the reserved instance and
how would I set a value in its memory?
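
To make the question concrete, what I'm imagining is something like a
tiny handler on the reserved backend that keeps values in a module-level
dict (purely a sketch - the /store URL and handler are hypothetical, and
the dict obviously disappears if the instance restarts, so anything
vital would still need the datastore):

from google.appengine.ext import webapp

_STORE = {}  # lives only as long as this resident backend instance does

class StoreHandler(webapp.RequestHandler):
    # GET reads a key, POST writes one; no eviction, but no durability either.
    def get(self):
        self.response.out.write(_STORE.get(self.request.get('key'), ''))

    def post(self):
        _STORE[self.request.get('key')] = self.request.get('value')

application = webapp.WSGIApplication([('/store', StoreHandler)])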

And just to be clear, there's no free version of Backends, correct?

Thanks,
Richard




[google-appengine] Channel use/reuse question

2011-05-12 Thread Richard Arrano
Hello,
I've been using the Channel API and for each chat room, I have a fixed
number of users. So for each room, prior to the launch, I call
create_channel for each user and store it in a dictionary that I save
in a TextProperty. When I want to broadcast, I read the TextProperty
and convert it back to a dictionary, then call channel.send_message on
each. This works well initially. However, when a user exits the
session and comes back, it invariably causes this error each time:

  File C:\Program Files\Google\google_appengine\google\appengine\api
\channel\channel_service_stub.py, line 127, in
_Dynamic_SendChannelMessage
self._channel_messages[client_id].append(request.message())
KeyError: '21024-21026-185804764220139124118'

Where the first two numbers are some identifiers and the last part is
the user id. Does something get "used up," so to speak, when the client
opens the initial connection? I've seen people suggest reusing tokens
instead of calling create_channel each time, which is what I'm trying
to do. Does anyone have any ideas why this would occur?
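
For reference, the broadcast side is essentially this (sketch). One
detail that may matter: as I understand it, send_message takes the
client_id that was passed to create_channel, not the token string that
create_channel returned, and a returning user would need a fresh token
from create_channel for that same client_id:

from google.appengine.api import channel

def broadcast(client_ids, message):
    # send_message is keyed by client_id, not by the token handed to the
    # browser; the token is only for the JS client to open the channel.
    for client_id in client_ids:
        channel.send_message(client_id, message)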

Thanks,
Richard




[google-appengine] Re: Channel use/reuse question

2011-05-12 Thread Richard Arrano
Hm, well, this defeats the purpose, because the point is to not cut into
the rather low number of channels GAE affords us daily. Can anyone
from Google confirm that this is the case and that the only way to do
it is to call create_channel again?

Thanks,
Richard

On May 12, 8:59 am, Westmark fredrik.westm...@gmail.com wrote:
 Hello Richard,

 I had the same problem and could only resolve it by generating a new
 token for the reconnecting user, i.e. calling create_channel again.
 You can use the same user id though. I believe I read in the docs
 somewhere that a token is only good for one sustained connection,
 afterwards it's discarded.

 BR // Fredrik

 On May 12, 1:57 pm, Richard Arrano rickarr...@gmail.com wrote:

  Hello,
  I've been using the Channel API and for each chat room, I have a fixed
  number of users. So for each room, prior to the launch, I call
  create_channel for each user and store it in a dictionary that I save
  in a TextProperty. When I want to broadcast, I read the TextProperty
  and convert it back to a dictionary, then call channel.send_message on
  each. This works well initially. However, when a user exits the
  session and comes back, it invariably causes this error each time:

    File C:\Program Files\Google\google_appengine\google\appengine\api
  \channel\channel_service_stub.py, line 127, in
  _Dynamic_SendChannelMessage
      self._channel_messages[client_id].append(request.message())
  KeyError: '21024-21026-185804764220139124118'

  Where the first two numbers are some identifiers and the last part is
  the user id. Does something get used up so to speak when the client
  opens the initial connection? I've seen people suggesting reusing
  tokens instead of calling create_channel each time, which is what I'm
  trying to do. Does anyone have any ideas why this would occur?

  Thanks,
  Richard






[google-appengine] Clarification about entity group write rate

2011-03-15 Thread Richard Arrano
Hello,
I was just hoping for some clarification: when the documentation says
that writes to entity groups are limited to 1 per second, does this
mean any write regardless of how much data is being written, or each
individual row being written? Basically, what if I have a list of
15-20 model objects of the same entity group I want to commit with a
db.put(my_list). Do I need to split up the writes to only occur once
per second or is it okay because I'm doing it in one fell swoop with
db.put?
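
For concreteness, the batch write I'm talking about is roughly this
(sketch; it assumes the 15-20 entities really do share one entity group
and fit in a single transaction):

from google.appengine.ext import db

def commit_batch(entities):
    # Batch put: one call for the whole list. Wrapping it in a transaction
    # means the entity group sees a single atomic commit for the batch,
    # rather than being touched once per entity (as far as I understand it).
    db.run_in_transaction(db.put, entities)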

Thanks,
Richard




[google-appengine] Server Locality/Memcache key life issues

2011-03-13 Thread Richard Arrano
Hello,
I had a few items I've been taking for granted and I was wondering if
anyone could clear them up for me. First of all, I did some searching and I
realize Google employees can't say exactly where the servers are.
However, one issue that affects me is I've been counting on
performance being equal among European and American users. I assumed
that if an American user logs on, they will be talking to an American
server serving my application and if a European user logs on, they'll
be talking to a European server. Is this the case? If so, if I have
American and European users intermingling in a live environment, is my
application a good candidate for HR? Perhaps I don't fully understand
the Master Slave/HR distinction and someone could shed some light upon
this.

Additionally, I realize that I shouldn't be using memcache for storage
I can count on without writing to the datastore. I've read that even
within a minute the key can be evicted. However, what about within,
say, 3-5 seconds? Could I count on a key being written in the memcache
and not being evicted within such a small timeframe with high
confidence? Perhaps another question in the same vein would be, what
is the average key life when the memcache is under pressure?
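
In case it matters, the pattern I'd fall back to if short-lived keys
can't be trusted is the usual cache-aside one, i.e. memcache purely as a
cache in front of the datastore (sketch; GameState is a hypothetical
model I'm using just for illustration):

from google.appengine.api import memcache
from google.appengine.ext import db

class GameState(db.Model):       # hypothetical model for the sketch
    payload = db.TextProperty()

def get_game_state(key_name):
    # Fall back to the datastore whenever the key has been evicted,
    # even if it was set only seconds ago.
    payload = memcache.get(key_name)
    if payload is None:
        state = GameState.get_by_key_name(key_name)
        if state is None:
            return None
        payload = state.payload
        memcache.set(key_name, payload, time=60)  # short TTL, best effort
    return payload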

Thanks,
Richard




[google-appengine] Re: Attempting to guarantee a write

2011-03-03 Thread Richard Arrano
Hello Robert,
I don't necessary even need to do it by tasks - what I'm doing is
something along the lines of a live ebay type auction, where I need to
guarantee that when user X wins the bid for product Y, I will a)
record this bid, b) be able to access it for the next item so I can
know that product Y is no longer available and c) I would like to be
able for the users to see which items have been previously sold(i.e.
some sort of history). I can do this in the normal way with the
datastore, but my concern is when the datastore has issues due to
latency and put() fails. I can't have this, and it's a very low rate
of writing with a low amount of data that needs to be guaranteed.
That's why I thought the task and encoding it in the payload might
work, but if I can't access what I previously wrote to the payload
then it won't.

What about modifying it so that rather than attempting to access the
payload data, I give the task a name along the format of 'auctionId-
itemId' and in the payload, the bidder and the price. That way the
item cannot be sold twice, because we can't have two tasks with the
same name, and it will eventually get written if datastore writing is
down because the task will eventually be executed. What do you think?
I'm also a bit confused about whether or not a task is guaranteed to
eventually execute - is it only transactional tasks that must eventually
be executed and don't get tombstoned? In that case this won't work,
because I can't name them.
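
What I have in mind for the named-task idea is roughly this (sketch; the
/tasks/record_sale handler is hypothetical, and I believe task names are
only remembered for a limited time after execution, so this wouldn't be
a permanent record by itself):

from google.appengine.api import taskqueue

def try_to_sell(auction_id, item_id, bidder, price):
    # One task name per item, ever: if the name was already used (or is
    # tombstoned), the item has already been sold.
    task_name = '%s-%s' % (auction_id, item_id)
    try:
        taskqueue.add(name=task_name,
                      url='/tasks/record_sale',   # hypothetical handler
                      params={'bidder': bidder, 'price': str(price)})
        return True
    except (taskqueue.TaskAlreadyExistsError,
            taskqueue.TombstonedTaskError):
        return False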

Thanks,
Richard

On Mar 2, 9:04 am, Robert Kluin robert.kl...@gmail.com wrote:
 Hi Richard,
   Data in the task queue is not accessible in some fashion when the
 user inputs it.  There is currently no way to query / retrieve tasks
  from a queue; it's a one-way street, so to speak.

   It sounds like you're implementing an autosave type feature?  If you
 use memcache, remember it is just a cache and a given key (or the
 whole thing) could be flushed at any time.  If you use the taskqueue
 (or deferred), remember that the tasks may not be run in the order
  they were inserted (particularly true if a queue backs up at all). If
  possible, you'll want to keep some type of revision count so you don't
 overwrite new with old.

   If you provide more info someone can probably offer additional pointers.

 Robert

 On Wed, Mar 2, 2011 at 07:28, Richard Arrano rickarr...@gmail.com wrote:
  Hello,
  I was reading the thread regarding wanting to guarantee a put()
  (http://groups.google.com/group/google-appengine/browse_thread/thread/
  8280d73d09dc64ee/1cf8c5539155371a?lnk=raotpli=1) and I've found
  myself desiring to do the same. It seems to me that using the deferred
  task queue with  10k of data will allow us to guarantee the data to
  be committed at some point in time, regardless of datastore latency/
  availability. The scenario that interests me is when I have some data
  I'd like to make sure gets committed at some later time(when exactly
  doesn't matter), but it must be recorded and accessible in some
  fashion when the user inputs it. I was thinking about using the
  deferred task queue, but the problem is that although it's  10k of
  data, it will grow as the user inputs more data(they won't be able to
  input everything at once). Could this be solved by retrieving the task
  from the deferred queue and editing its payload. Is this possible to
  do? Is there another solution that will fit what I'm looking to do?

  Thanks,
  Richard







[google-appengine] Re: Attempting to guarantee a write

2011-03-03 Thread Richard Arrano
Hi Steve,
I would certainly agree about some sort of ability to guarantee low-
volume, high-importance storage. I had an idea that it might be
implemented as something along the lines of 1MB of unevictable memcache
storage; we wouldn't be able to store much there, but we would be
absolutely guaranteed that whatever we store there can be retrieved
until we flush it or delete the keys. I also like your idea of some sort
of task queue with certain guarantees attached to it, at the cost of it
being called rarely and with smaller payloads. Alternatively, I wouldn't
mind seeing some sort of pay option where you could pay for guarantees
on latency and write availability. This might be covered in the Business
SLA, though my cursory impression was that it doesn't actually guarantee
anything; it simply reimburses you if they have significant outages.

Something that occurred to me that might help you: I was actually
considering removing the datastore as a single point of failure by
additionally sending those really high-importance writes via XMPP to
some other server. The other server could be EC2, or it could even be a
private home server (it's not going to get hammered, since we're talking
very low volume). It would then back up those records even if the
datastore is having latency/write issues, and they could be processed in
the near future. So in this sense, you could even query the private
server via XMPP and get a response regarding the payment record without
GAE having put the record to the datastore yet. I'm not sure about this
solution; it just occurred to me and I thought it might have some
benefits. Your solution sounds good too, though I suppose you might even
be able to combine them. Thanks for the tips!
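
The XMPP backup idea would be something as simple as this on the App
Engine side (sketch; the backup JID is made up and the receiving
listener on the other server is assumed to exist):

from google.appengine.api import xmpp

BACKUP_JID = 'backup-listener@example.com'  # hypothetical external account

def backup_record(payload):
    # Fire-and-forget copy of a high-importance record to an external
    # listener, independent of datastore latency or write availability.
    xmpp.send_message(BACKUP_JID, payload)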

Robert: regarding memcache, I do plan to use it in this sense to see
what's been sold and what hasn't been, but I assume that because of
memcache's volatility there's no way to rely on it absolutely for this
check. Which of course goes back to my prior point that I'd love to see
some very constrained, low-volume segment of memcache be non-volatile
for purposes like this. Alternatively, what about the ability to at
least see some record of even just the names of the tasks yet to be
executed in the task queue? That way I could construct a record of
what's been sold regardless of datastore performance and whether or not
memcache has evicted it.

-Richard

On Mar 3, 10:53 am, stevep prosse...@gmail.com wrote:
 Hi Richard,

 Thought I'd comment since I started this thread. Think Wim and Robert
 have helped already more than I could. So just some thoughts from me.

 First: I hope any GAE engineers reading this thread might have a
  thought sparked about task queue features needed for highly important,
 lower-volume tasks such as yours.

 My problem is that I've got to record a one-time purchase payment.
  Yours is more complicated. My issue was all the steps needed to
  set up and maintain a payment record for a PayPal Express
  Checkout purchase (digital good).

 Rather than risk any loss of the update in the handler functions
 (which already has to deal with sending urlfetch calls to Paypal), I
 send the responses from Paypal to a high-priority task queue for the
 put()s.

 This high-priority queue is pretty strict about very lightweight
 imports, and record updates (i.e. no complex indexes on records being
 updated). Hopefully this keeps issues related to slower task handlers
 low. Look to Wim's link in this thread for many helpful points about
 handler weight control.

 Since I can't move forward with Express Checkout until the payment
 record has been put() by the high-priority queue, the client starts to
 send a request to the on-line handler once every second (with a nice
 dialog explaining the delay for the customer). If the new payment
 record has not been put() after 10 client calls, then I go into damage
 control on the client side. Hopefully this happens very infrequently.

 The alternative for me is to have the on-line handler send the
 urlfetch call to Paypal, put() the new payment record upon PP's
 response, and then respond back to the client with result=OK. If all
 this happens, then the client knows to proceed because the initial
 payment put() has been done. I just felt that there was a greater risk
 of failure having all this functionality in the one on-line handler
 function. Perhaps with HR, this is unfounded, but I'm not sure there's
 enough anecdotal evidence to support this yet.

 This of course encumbers the user with an additional dialog and a bit
 more wait time, but I don't think it is too much as many purchase
 processes on-line take a bit of time. Plus, if we can't get this all
 done within the allotted time, the damage control is very favorable to
 the customer and penalizes me which is as it should be (Google of
 course getting its cut either way -- actually a bit more should GAE be
 running slowly thereby 

[google-appengine] Mobile Integration and the Channel API

2011-02-08 Thread Richard Arrano
Hello,
I was thinking about how to integrate my App Engine app with mobile
devices; my favorite choices so far are PhoneGap and Titanium, and it
seems relatively simple to port things to these platforms. However,
the sticking point has been that my application uses the Channel API.
What I'm wondering is, what exactly does a mobile platform like
PhoneGap or Titanium need in order to interface properly with the
Channel API? I know WebSockets, but is there anything more than that?
If anyone has any experience integrating their app with mobile devices
and using Channel, I'd appreciate some ideas on how to do so. I realize
this could be seen as a PhoneGap/Titanium question rather than an App
Engine one, but I'm particularly interested in what App Engine's JS is
doing client-side so I can perhaps find some mobile-platform-specific
hacks (for instance, I've found some that enable WebSockets in
PhoneGap). Any help is much appreciated.

Thanks,
Richard Arrano




[google-appengine] Re: silent-auction type application

2011-01-20 Thread Richard Arrano
Thanks for the info Nick and Robert; I actually have been using that
exact talk to tweak my app and it's been extremely useful.

One more thing I wanted to ask: the way I've been structuring the
auction type setup is through a Task object that creates a custom
class I've built that basically follows this pattern:

while auction.notFinished():
    # do auction work
    # send updates to users in the auction via channel.send_message(...)
    time.sleep(0.5)

For initial development I've just set notFinished() to check against a
counter within the class that increments up to some arbitrary number
like 50. I've set the user page to connect to the channel, and it does
receive messages, but only after the while loop terminates. This is
being run in a task and NOT in the handler for the page being served.
This perplexed me - I tested it on the actual servers and it doesn't
suffer from this problem, but it is still very laggy - I set it to echo
back a user's message sent to the channel and it takes quite a few
iterations of the loop for it to arrive. Is there a better way to
structure this?
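
One alternative structure I've been considering is to drop the sleep
loop entirely and have each tick re-enqueue itself with a one-second
countdown, so each task request stays short and the channel messages
flush promptly (sketch; load_auction, client_ids, status_message,
finished and the /tasks/auction_tick handler are all hypothetical):

from google.appengine.api import channel, taskqueue
from google.appengine.ext import webapp

class AuctionTickHandler(webapp.RequestHandler):
    def post(self):
        auction = load_auction(self.request.get('auction_id'))  # hypothetical
        # Do one iteration of auction work, then push one round of updates.
        for client_id in auction.client_ids:
            channel.send_message(client_id, auction.status_message())
        if not auction.finished():
            # Schedule the next tick instead of sleeping inside the request.
            taskqueue.add(url='/tasks/auction_tick',
                          params={'auction_id': self.request.get('auction_id')},
                          countdown=1)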

Thanks,
Richard Arrano

On Jan 16, 7:14 pm, Nick Johnson (Google) nick.john...@google.com
wrote:
 Hi Richard,

 On Sat, Jan 15, 2011 at 12:05 PM, Richard Arrano rickarr...@gmail.com wrote:

  Thanks for the response! I was thinking it over and I have a question
  - so if a timestamp with its monotonically increasing index causes a
  performance hit at a high write rate, would updating the high bid do
  so as well ? The high bid itself will be monotonically increasing - it
  will never go down, but perhaps I misunderstood something about how
  indices work.

 See the reply I just sent - while it's true that something like the
 increasing bid would cause the same issue, it's not going to be a problem at
 the sort of write rate you're likely to see here.



  And I guess a broader question that I have is(and I assume the answer
  is yes) - does the size of what we're writing affect the performance?
  As in, writing a simple update to an integer property as opposed to
  creating new complex objects and writing them.

 To a small degree, yes, the size of the object will affect the overhead, but
 it's not the largest factor - other things like the number of index rows you
 have per entity will have a much bigger impact. Note that the size that's
 significant is the size of your whole entity, not the amount of data you're
 changing.

 -Nick Johnson



  Thanks,
  Richard

  On Jan 14, 10:05 am, Ikai Lan (Google) 
   ikai.l+gro...@google.com

  wrote:
   You can certainly write to Memcache, but I don't think your application
  can
   tolerate any kind of volatility. Persistence is the price you have to
  pay.
   Fortunately, I think this can be done pretty cheaply. Just be aware of
   monotonically increasing indexes like timestamps: if you have an
  application
   with a high write rate that has a timestamp, this will cause the
  persistence
   unit storing the indexes to be unable to be autosplit easily across
  multiple
   hardware instances and you will take a performance hit. The solution here
   is, again, to shard the timestamp by prefixing a value to distribute the
   writes.

   --
   Ikai Lan
   Developer Programs Engineer, Google App Engine
    Blogger: http://googleappengine.blogspot.com
    Reddit: http://www.reddit.com/r/appengine
    Twitter: http://twitter.com/app_engine

   On Fri, Jan 14, 2011 at 6:06 AM, Richard Arrano rickarr...@gmail.com
  wrote:

I'm looking to make a silent-auction type of application where you
have 20-30 users bidding on an item at a time, with potentially
hundreds or thousands of auctions happening simultaneously. As soon as
a high bid is made, it updates this information and sends it via the
Channel API to the other users in the auction. I see two potential
difficulties:

   1. The limit on updating an entity group about once per second - I
believe this can be solved with sharding bids amongst users in the
auction and querying all shards to find the maximum bid at any given
time, correct?

   2. The nature of the auction lends itself to a heavy amount of
writing to the datastore - this itself eats up CPU and I’m trying to
figure out if it can be avoided. Is this just inevitable in this type
of application? Does it matter that I would only be only updating a
single IntegerProperty() in any given write? Is there some clever
solution that we can apply that avoids the hammering of the datastore?

Any tips or suggestions would be appreciated.

Thank you,
Richard


[google-appengine] silent-auction type application

2011-01-14 Thread Richard Arrano
I'm looking to make a silent-auction type of application where you
have 20-30 users bidding on an item at a time, with potentially
hundreds or thousands of auctions happening simultaneously. As soon as
a high bid is made, it updates this information and sends it via the
Channel API to the other users in the auction. I see two potential
difficulties:

1. The limit of updating an entity group only about once per second - I
believe this can be solved by sharding bids amongst the users in the
auction and querying all shards to find the maximum bid at any given
time, correct? (A rough sketch of what I mean is below.)

2. The nature of the auction lends itself to a heavy amount of
writing to the datastore - this itself eats up CPU and I’m trying to
figure out if it can be avoided. Is this just inevitable in this type
of application? Does it matter that I would only be only updating a
single IntegerProperty() in any given write? Is there some clever
solution that we can apply that avoids the hammering of the datastore?
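
For question 1, the sharding I have in mind would look roughly like this
(untested sketch; the BidShard model and the shard count are made up for
illustration):

from google.appengine.ext import db

NUM_SHARDS = 20  # arbitrary for the sketch

class BidShard(db.Model):          # hypothetical model
    auction_id = db.StringProperty()
    high_bid = db.IntegerProperty(default=0)

def record_bid(auction_id, user_id, amount):
    # Each user hashes to its own shard, so no single entity group takes
    # more than a small fraction of the auction's total write rate.
    shard_name = '%s-%d' % (auction_id, hash(user_id) % NUM_SHARDS)

    def txn():
        shard = BidShard.get_by_key_name(shard_name)
        if shard is None:
            shard = BidShard(key_name=shard_name, auction_id=auction_id)
        if amount > shard.high_bid:
            shard.high_bid = amount
            shard.put()

    db.run_in_transaction(txn)

def current_high_bid(auction_id):
    # Read all shards and take the maximum; with ~20 shards this is cheap.
    shards = BidShard.all().filter('auction_id =', auction_id).fetch(NUM_SHARDS)
    return max([s.high_bid for s in shards] or [0])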

Any tips or suggestions would be appreciated.

Thank you,
Richard




[google-appengine] Re: silent-auction type application

2011-01-14 Thread Richard Arrano
Thanks for the response! I was thinking it over and I have a question
- so if a timestamp with its monotonically increasing index causes a
performance hit at a high write rate, would updating the high bid do
so as well ? The high bid itself will be monotonically increasing - it
will never go down, but perhaps I misunderstood something about how
indices work.

And I guess a broader question that I have is(and I assume the answer
is yes) - does the size of what we're writing affect the performance?
As in, writing a simple update to an integer property as opposed to
creating new complex objects and writing them.

Thanks,
Richard

On Jan 14, 10:05 am, Ikai Lan (Google) ikai.l+gro...@google.com
wrote:
 You can certainly write to Memcache, but I don't think your application can
 tolerate any kind of volatility. Persistence is the price you have to pay.
 Fortunately, I think this can be done pretty cheaply. Just be aware of
 monotonically increasing indexes like timestamps: if you have an application
 with a high write rate that has a timestamp, this will cause the persistence
 unit storing the indexes to be unable to be autosplit easily across multiple
 hardware instances and you will take a performance hit. The solution here
 is, again, to shard the timestamp by prefixing a value to distribute the
 writes.

 --
 Ikai Lan
 Developer Programs Engineer, Google App Engine
 Blogger: http://googleappengine.blogspot.com
 Reddit: http://www.reddit.com/r/appengine
 Twitter: http://twitter.com/app_engine

 On Fri, Jan 14, 2011 at 6:06 AM, Richard Arrano rickarr...@gmail.com wrote:

  I'm looking to make a silent-auction type of application where you
  have 20-30 users bidding on an item at a time, with potentially
  hundreds or thousands of auctions happening simultaneously. As soon as
  a high bid is made, it updates this information and sends it via the
  Channel API to the other users in the auction. I see two potential
  difficulties:

     1. The limit on updating an entity group about once per second - I
  believe this can be solved with sharding bids amongst users in the
  auction and querying all shards to find the maximum bid at any given
  time, correct?

     2. The nature of the auction lends itself to a heavy amount of
  writing to the datastore - this itself eats up CPU and I’m trying to
  figure out if it can be avoided. Is this just inevitable in this type
  of application? Does it matter that I would only be only updating a
  single IntegerProperty() in any given write? Is there some clever
  solution that we can apply that avoids the hammering of the datastore?

  Any tips or suggestions would be appreciated.

  Thank you,
  Richard



