[sqlalchemy] Re: queuepool timeouts with mod_wsgi under heavier load

2010-01-15 Thread Graham Dumpleton
FWIW, thread on mod_wsgi list about this is at:

  http://groups.google.com/group/modwsgi/browse_frm/thread/c6e65603c8e75a30

Graham

On Jan 15, 4:10 am, Michael Bayer mike...@zzzcomputing.com wrote:
 Damian wrote:
  Hi,

  Every few days, when we experience higher load, we get SQLAlchemy's

  TimeoutError: QueuePool limit of size 5 overflow 10 reached,
  connection timed out, timeout 30

  Along with that I see an increase (2-3 a minute) in:

  (104)Connection reset by peer: core_output_filter: writing data to the
  network

  and

  (32)Broken pipe: core_output_filter: writing data to the network

  in my apache error logs.

  Having checked over my Pylons code a few times, Session.remove()
  should always be called.  I'm worried that the broken pipe or
  connection reset by peer errors mean that remove() isn't being called.

 sounds more like the higher load means more concurrent DB activity; PG
 then takes more time to return results, more requests then come in, and
 connections aren't available since they're all busy.   AFAIK the broken
 pipe stuff doesn't kill off the Python threads in the middle of their
 work...if they threw an exception, hopefully you have Session.remove()
 happening within a finally:.   Since you're on PG, just do a ps listing
 on your database machine, or better yet use pgtop, and you'll see just
 what happens when a spike comes in.
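
 A minimal sketch of the Session.remove()-in-finally pattern (the DSN
 and handler name are hypothetical, not taken from the original post):

     from sqlalchemy import create_engine
     from sqlalchemy.orm import scoped_session, sessionmaker

     engine = create_engine('postgresql://user:pw@localhost/mydb',
                            pool_size=5, max_overflow=10, pool_timeout=30)
     Session = scoped_session(sessionmaker(bind=engine))

     def handle_request(environ, start_response):
         try:
             # ... all ORM work for this request goes here ...
             start_response('200 OK', [('Content-Type', 'text/plain')])
             return [b'ok']
         finally:
             # Runs even if the client disconnected mid-write (broken
             # pipe), so the connection always goes back to the pool.
             Session.remove()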

  The server is running mod_wsgi with Apache's mpm_worker with the
  following config:

  <IfModule mpm_worker_module>
      StartServers         16
      MaxClients          480
      MinSpareThreads      50
      MaxSpareThreads     300
      ThreadsPerChild      30
      MaxRequestsPerChild   0
  </IfModule>

  and using mod_wsgi's daemon mode:

    WSGIDaemonProcess somename user=www-data group=www-data processes=4 threads=32

  Is this somehow overkill?  The server is a well-specced quad core
  with 8 GB of RAM and fast hard drives.

  It also runs the database server (postgres).
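
  For scale, a sketch of the arithmetic this config implies, assuming
  the default pool settings shown in the traceback above (pool_size=5,
  max_overflow=10):

      # Per daemon process: 32 threads may each want a DB connection,
      # but the pool caps out at pool_size + max_overflow = 5 + 10 = 15,
      # so in a spike up to 17 threads per process can block and then
      # hit the QueuePool timeout after 30 seconds.
      processes, threads = 4, 32
      pool_size, max_overflow = 5, 10
      per_process_cap = pool_size + max_overflow            # 15
      total_db_connections = processes * per_process_cap    # 60 at most
      threads_at_risk = threads - per_process_cap           # 17 per process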

  Has anyone else experienced this kind of problem?  I've cross-posted
  this to both the mod_wsgi and sqlalchemy mailing lists - hope that's
  ok as I believe this may be relevant to both groups.

  Thanks,
  Damian




[sqlalchemy] Re: web application too slow

2008-04-06 Thread Graham Dumpleton

Independently of database access, what webserver/module are you using
to host the application?

Graham

On Apr 7, 9:24 am, tux21b [EMAIL PROTECTED] wrote:
 Hi everybody,

 I know that this problem isn't related directly to SQLAlchemy, but
 maybe somebody can help me though.  We have developed a big web
 application which also includes a bulletin board.  There are some heavy
 SQL queries for loading and eager-loading the forum structure, but
 luckily everything is cached using memcached.  So there is only one
 query left, for determining the permissions, which doesn't take long.

 Unfortunately the forum still remains extremely slow with production
 data in use (since we are planning a migration, there is already more
 than enough data available).  By extremely slow I mean that it can
 currently handle about 1 request per second.

 Here is a link to the profile stats; maybe one of you is experienced
 in optimizing Python applications (with SQLAlchemy objects in the
 background) and can give me some hints.

 Profile stats: http://paste.pocoo.org/show/38235/
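
 A minimal sketch of how stats like those might be gathered, wrapping a
 WSGI callable with cProfile (the application name is hypothetical):

     import cProfile
     import io
     import pstats

     def profiled(application):
         def wrapper(environ, start_response):
             profiler = cProfile.Profile()
             try:
                 return profiler.runcall(application, environ, start_response)
             finally:
                 out = io.StringIO()
                 stats = pstats.Stats(profiler, stream=out)
                 # Top 30 entries by cumulative time, like the paste above.
                 stats.sort_stats('cumulative').print_stats(30)
                 print(out.getvalue())
         return wrapper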

 Thanks in advance for every single tip. :)

 Regards,
 Christoph Hack



[sqlalchemy] Re: Strange caching problem with Apache

2008-03-28 Thread Graham Dumpleton

On Mar 29, 11:02 am, Michael Bayer [EMAIL PROTECTED] wrote:
 On Mar 28, 2008, at 7:56 PM, john spurling wrote:

  I added debugging to get id(session) and len(list(session)). The
  session id is unique every time, and len(list(session)) is 0. I also
  called session.clear() before using it. Unfortunately, the caching
  behavior persists.

  Any other suggestions? Thank you very much for your time and help!

 If it's zero, then the Session isn't caching.  Something is going on
 HTTP/process-wise.

Could it be that, because Apache is a multi-process web server (on
UNIX), the OP is getting confused by subsequent requests actually
hitting a different process?

That said, SQLAlchemy as I understood it was meant to deal with that,
i.e., a change made from one process should be reflected in another
process straight away upon a new query, i.e., stale cached data should
be replaced.  Is it possible that whatever ensures that has been
disabled?

The OP should perhaps print out os.getpid() so they know which process
is handling the request each time.  This may help to explain what is
going on.
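
A minimal sketch of that kind of debugging (the handler and session
names are hypothetical); anything written to stderr ends up in
Apache's error log:

    import os
    import sys

    def debug_request(session):
        # Shows which Apache child served this request and whether the
        # Session already holds any objects.
        sys.stderr.write('pid=%d session=%d objects=%d\n'
                         % (os.getpid(), id(session), len(list(session))))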

Graham




[sqlalchemy] Re: Number of opened connections with mod_python

2007-10-08 Thread Graham Dumpleton

Damn it. I hate it when Google Groups appears to lose my messages.
The post I just sent to this group appeared really quickly, but the
much longer message I sent earlier still hasn't arrived.

Anyway, what my original longer post (which appears to have vanished)
pointed out was that because the OP is using mod_python.publisher,
their problems with consumption of resources may be getting
exacerbated by mod_python's module reloading mechanism. In short, the
engine object will be recreated every time the code file is changed
and reloaded. If this doesn't automatically release the connections
still in the pool, it may result in many pools being created, with
older ones becoming unusable and just consuming resources.

Even if the settings are the bigger problem, the module reloading
could still make it worse, depending on how SQLAlchemy works. Thus the
OP should probably also move creation of the engine out into a
separate Python module somewhere on the PythonPath, and not in the
document tree. That way mod_python will not reload it.
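
A minimal sketch of that layout (module names and the DSN are
hypothetical):

    # myapp_db.py -- on the PythonPath, outside the document tree, so
    # the publisher's reloader never touches it; the engine is created
    # exactly once per Apache child process.
    from sqlalchemy import create_engine

    engine = create_engine('mysql://user:pw@localhost/mydb', pool_size=5)

    # handler.py -- inside the document tree; safe to reload, since it
    # only imports the long-lived engine instead of creating a new one.
    from sqlalchemy import text
    from myapp_db import engine

    def index(req):
        conn = engine.connect()
        try:
            return str(conn.execute(text('SELECT 1')).scalar())
        finally:
            conn.close()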

For more details of mod_python module reloader see import_module()
documentation in:

  http://www.modpython.org/live/current/doc-html/pyapi-apmeth.html

The end of the documentation for that function briefly mentions the
issue of increasing resource consumption when reloading occurs.
Since mod_python.publisher uses that function, it is affected.

Sorry if this is duplicated information due to my original post
finally turning up.

Graham

On Oct 9, 9:54 am, Michael Bayer [EMAIL PROTECTED] wrote:
 On Oct 8, 2007, at 4:35 PM, Karl Pflästerer wrote:

  Hi,
  I try to use SA together with mod_python and its publisher handler.
  How can I limit the number of opened connections?
  I use MySQL as the db, Apache 2.2.6 prefork, mod_python and SA.
  There are only 8 Apache children running, but a lot more connections
  open.

  I have the problem that connections are never closed, so after some
  time the MySQL server doesn't handle any more requests.
  A simple script like:

  engine = create_engine('mysql://[EMAIL PROTECTED]/pflaesterer?'
                         'use_unicode=True&charset=utf8',
                         encoding='utf8', convert_unicode=False,
                         pool_recycle=1, pool_size=1, echo_pool=True,
                         strategy='threadlocal')
  meta = MetaData()
  meta.bind = engine

  def index(req):
      conn = engine.connect()
      conn.close()
      return 'hier'

  called ten times gives me 10 opened connections.

 If I run the app from a shell, using a loop and calling index() 10
 times, only one connection is ever opened, since it's immediately
 returned to the pool and then reused for the next iteration.  But
 that's also because the program runs in under a second; since you
 have pool_recycle set to only 1 second, it's going to open a brand
 new connection every second (and close the previous one).  If I add
 time.sleep(1) to my loop then this behavior becomes apparent (but
 still, only one connection is opened at a time).
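
 A sketch of that experiment (the DSN is hypothetical):

     import time
     from sqlalchemy import create_engine

     # pool_recycle=1 discards any pooled connection older than a second,
     # so each iteration closes the stale connection and opens a fresh one.
     engine = create_engine('mysql://user:pw@localhost/test',
                            pool_size=1, max_overflow=0,
                            pool_recycle=1, echo_pool=True)

     for _ in range(10):
         conn = engine.connect()
         conn.close()      # returned to the pool, not closed outright
         time.sleep(1)     # let the recycle interval elapse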

 Keep in mind there's also a max_overflow setting at work here as
 well, which defaults to 10, so the above engine is capable of
 pooling 11 simultaneous connections if you weren't calling close()
 and the connection was left open.  To narrow it down to exactly 1
 in all cases, max_overflow would need to be set to 0.
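
 That is, roughly (a sketch; the DSN is hypothetical):

     from sqlalchemy import create_engine

     # At most one connection ever; recycle hourly rather than every second.
     engine = create_engine('mysql://user:pw@localhost/mydb',
                            pool_size=1, max_overflow=0, pool_recycle=3600)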

 However, even if you set pool_size=1 and max_overflow=0 (and
 pool_recycle to something sane like 3600), none of this will help
 control the total number of connections when using mod_python with
 Apache prefork.   First off, the threadlocal setting is meaningless
 since prefork does not use multiple threads per process.  Secondly,
 each new Apache child process will represent a brand new Python
 interpreter with its own engine and connection pool.  So in this
 case, to control the number of concurrent database connections
 opened, you'd have to control the number of Apache processes; if
 Apache opened 60 child processes, you'd have 60 connection pools.
 Apache's default settings for total number of child processes are
 also quite high for a dynamic scripting application; each child
 process can take many megs of space and you can quickly run out of
 physical memory unless you set this number to be pretty low (or you
 have many gigs of physical memory).

 There's no way to have a single in-Python connection pool shared
 amongst child processes, since that's not prefork's execution model.
 The threaded MPM would produce the threaded behavior you're looking
 for, but I think in most cases, in order to have a single, threaded
 app server, most people use a technology like FastCGI or mod_proxy
 which communicates with a separate, multithreaded Python process.

