Re: [google-appengine] Re: Python 2.7: Instance memory limitations with concurrent requests

2012-02-24 Thread Andrin von Rechenberg
It turns out that the requests with high RAM requirements are so well
distributed that I'm not running out of memory. So my initial concerns
(the first mail in this thread) can be forgotten. YAY.

-Andrin



On Wed, Feb 22, 2012 at 9:08 AM, Robert Kluin robert.kl...@gmail.com wrote:

 Hey Alex,
  I should have stated this better: memory is not always
 handled well.  For example, the ext.db code keeps many copies of the
 data in various forms.  This can cause rapid and unexpected memory
 blowups, and the result is something that looks like a memory
 leak.  As Brian noted, this is partially caused by Python's handling
 of memory.

  However, there are a number of scenarios where you get real memory
 leaks.  For example, there have recently been several posts / issues
 from people seeing the blobstore leak memory.  Some of
 these are quite detailed and the repro code is very simple.  If I'm
 not mistaken, we've observed this happening with heavy
 datastore use in the past as well, though I don't have simple repro
 cases for those.



 Robert





 On Tue, Feb 21, 2012 at 16:13, alex a...@cloudware.it wrote:
  The whole point of this topic was the python27 runtime, multithreading and
  concurrent requests. Your specific case, plus Python 2.5, doesn't
  necessarily mean memory leaks in the runtime itself. I'd profile my code
  that handles the most frequently accessed URLs to start off.
 
  --
  You received this message because you are subscribed to the Google Groups
  Google App Engine group.
  To view this discussion on the web visit
  https://groups.google.com/d/msg/google-appengine/-/ZyB9_mGBPDQJ.
 
  To post to this group, send email to google-appengine@googlegroups.com.
  To unsubscribe from this group, send email to
  google-appengine+unsubscr...@googlegroups.com.
  For more options, visit this group at
  http://groups.google.com/group/google-appengine?hl=en.







Re: [google-appengine] Re: Python 2.7: Instance memory limitations with concurrent requests

2012-02-22 Thread Robert Kluin
Hey Alex,
  I should have stated this better: memory is not always
handled well.  For example, the ext.db code keeps many copies of the
data in various forms.  This can cause rapid and unexpected memory
blowups, and the result is something that looks like a memory
leak.  As Brian noted, this is partially caused by Python's handling
of memory.

  However, there are a number of scenarios where you get real memory
leaks.  For example, there have recently been several posts / issues
from people seeing the blobstore leak memory.  Some of
these are quite detailed and the repro code is very simple.  If I'm
not mistaken, we've observed this happening with heavy
datastore use in the past as well, though I don't have simple repro
cases for those.



Robert





On Tue, Feb 21, 2012 at 16:13, alex a...@cloudware.it wrote:
 The whole point of this topic was the python27 runtime, multithreading and
 concurrent requests. Your specific case, plus Python 2.5, doesn't
 necessarily mean memory leaks in the runtime itself. I'd profile my code
 that handles the most frequently accessed URLs to start off.





[google-appengine] Re: Python 2.7: Instance memory limitations with concurrent requests

2012-02-21 Thread Mike Wesner
I know that, but I thought it might still be relevant.

On Feb 21, 3:13 pm, alex a...@cloudware.it wrote:
 The whole point of this topic was the python27 runtime, multithreading and
 concurrent requests. Your specific case, plus Python 2.5, doesn't
 necessarily mean memory leaks in the runtime itself. I'd profile my code
 that handles the most frequently accessed URLs to start off.
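
(A minimal sketch of that kind of profiling. It uses tracemalloc, which is Python 3.4+, so it isn't available on the 2.5/2.7 runtimes discussed in this thread, but the idea carries over; the handler below is hypothetical.)

```python
import tracemalloc

def handler():
    # Hypothetical request handler standing in for a frequently
    # accessed URL's code path: builds a large intermediate list.
    rows = [{"id": i, "payload": "x" * 100} for i in range(10000)]
    return len(rows)

tracemalloc.start()
handler()
# current = bytes allocated right now, peak = high-water mark since start()
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
```

The peak tells you how much memory one request actually needed, even though the list is freed by the time the handler returns.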




Re: [google-appengine] Re: Python 2.7: Instance memory limitations with concurrent requests

2012-02-21 Thread Brian Quinlan
Hi Mike,

On Wed, Feb 22, 2012 at 7:22 AM, Mike Wesner mike.wes...@webfilings.com wrote:
 I would love to learn more from Google on Python memory handling.   As
 Robert mentioned, we have observed that memory is not released/garbage
 collected on Python 2.5 instances.  It seems to just hold on to it.
 This works because the instances don't live forever and eventually get
 shut down and new ones take their place.

We haven't changed Python's built-in memory management in either of
our Python runtimes.

Note that, even excluding cyclic garbage collection, Python and the C
allocator do not always release memory eagerly - they maintain their
own memory pools for performance reasons.

So, for example, you can't assume that an instance consuming 100MB of
memory only has 28MB remaining to handle requests.

Cheers,
Brian
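
(A small sketch of the cyclic-garbage part of this: objects in a reference cycle are invisible to reference counting and sit around until the cyclic collector runs, which is one way memory growth can look like a leak without being one. The Node class is just for illustration.)

```python
import gc

class Node(object):
    def __init__(self):
        self.ref = None

def make_cycle():
    a, b = Node(), Node()
    a.ref, b.ref = b, a  # a <-> b: unreachable after return, refcounts stay > 0

gc.collect()          # clear any pre-existing garbage first
make_cycle()
found = gc.collect()  # the cyclic collector finds and reclaims the pair
```

Even after collection, CPython's allocator may keep the freed memory in its own pools rather than return it to the OS, which is Brian's point about pools above.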




 -Mike

 On Feb 21, 10:33 am, alex a...@cloudware.it wrote:
  One of the biggest issues is that the Python runtime leaks memory like mad

 Could you elaborate on this? Or give some references that support your
 claims (provided the app code is written according to best practices, etc.)?







 On Tuesday, February 21, 2012 5:10:16 PM UTC+1, Robert Kluin wrote:

  To be clear, the scheduler can still dispatch multiple requests for
  that URL to the same instance.  Only one request will execute at a
  time, though.  One of the biggest issues is that the Python runtime
  leaks memory like mad, so in this case you may wind up with 1)
  increased latency and 2) still blowing mem limits fast.

  Robert

  On Tue, Feb 21, 2012 at 09:36, Jeff Schnitzer wrote:
   On Mon, Feb 20, 2012 at 5:33 AM, Johan Euphrosine wrote:

   On Mon, Feb 20, 2012 at 11:25 AM, Andrin von Rechenberg
   wrote:

   I guess that's the same solution as just deploying two different
   versions: a threadsafe one and a non-threadsafe one. Or did
   I misunderstand you?

   appcfg.py provides commands to help you manage your backends deployment
  and
   configuration, with `backends.yaml` and `appcfg backends update`.

   That makes the solution more convenient than using multiple versions on
   frontend instances.

   This adds the significant downside of no auto-scaling.  I don't see how
   that's more convenient :-)

   Andrin:  In javaland I would simply synchronize the people-search
  function
   so that at most one thread can execute that routine in a single instance
  at
   once.  It means all people-search requests in that instance will queue
  up in
   serial, which could cause undesirable waits if the function takes
   significant time, but if the requests are spread out among enough
  instances
   it probably won't be an issue.

   I don't know what the python equivalent of 'synchronized' is.

   Jeff
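
(For what it's worth, the usual Python analogue is a module-level threading.Lock guarding the routine; people_search below is a hypothetical stand-in for the expensive function.)

```python
import threading

# One lock shared by all request threads in the instance.
_search_lock = threading.Lock()

def people_search(query):
    # Closest Python equivalent of Java's 'synchronized': only one
    # thread runs the body at a time; the rest queue up on the lock.
    with _search_lock:
        names = ["alice", "bob", "carol"]  # stand-in for the real search
        return [n for n in names if query in n]
```

As Jeff notes, requests serialize on the lock, so this only works well when the guarded function is fast or the traffic is spread across enough instances.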


