On 02/03/2016 05:47 PM, Daniel Cochran wrote:
I recently switched to SQLAlchemy to take advantage of connection
pooling, but I notice that my SQL server has anywhere from 30-60
open connections throughout the day ("threads_connected"), and
sometimes the count randomly climbs to the max of 2000 and the server
completely falls over. The climb has nothing to do with queries per
second; sometimes it happens at 2 AM when traffic is very low.

The obvious culprit for the high number of connections is that one of my API
endpoints that connects to the SQL instance is not closing connections,
though I'm not sure where that would be happening. Here is my configuration:

|
import sqlalchemy as sql
import sqlalchemy.event  # makes sql.event available
import sqlalchemy.exc    # makes sql.exc available
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import scoped_session, sessionmaker


class SqlAlchemy(object):
  """A light abstraction for SQLAlchemy."""

  def __init__(self, connection):
    base = automap_base()
    self.engine = sql.create_engine(connection,
                                    max_overflow=2,
                                    pool_size=10,
                                    pool_recycle=120,
                                    pool_timeout=15)

    base.prepare(self.engine, reflect=True)

    # These models reflect the tables in your database.
    self.models = base.classes
    self.Session = scoped_session(sessionmaker(bind=self.engine))

    # Ping the connection to make sure it's valid. Per:
    # http://docs.sqlalchemy.org/en/latest/core/pooling.html
    @sql.event.listens_for(self.engine, 'engine_connect')
    def ping_connection(connection, branch):  # pylint: disable=unused-variable
      if branch:
        # "branch" refers to a sub-connection of a connection;
        # we don't want to bother pinging on these.
        return

      try:
        # Run a SELECT 1.  Use a core select() so that
        # the SELECT of a scalar value without a table is
        # appropriately formatted for the backend.
        connection.scalar(sql.select([1]))
      except sql.exc.DBAPIError as err:
        # Catch SQLAlchemy's DBAPIError, which is a wrapper
        # for the DBAPI's exception.  It includes a .connection_invalidated
        # attribute which specifies if this connection is a "disconnect"
        # condition, which is based on inspection of the original exception
        # by the dialect in use.
        if err.connection_invalidated:
          # Run the same SELECT again - the connection will re-validate
          # itself and establish a new connection.  The disconnect detection
          # here also causes the whole connection pool to be invalidated
          # so that all stale connections are discarded.
          connection.scalar(sql.select([1]))
        else:
          raise

  def insert(self, table, data):
    """Insert one record into a SQL table.

    Args:
      table: A string representing a table name that exists in self.engine.
      data: A dict of key/value pairs representing the inserted record.

    Returns:
      A dict of the newly inserted record.
    """
    session = self.get_session()
    record_as_dict = {}

    try:
      model = getattr(self.models, table)
      record = model(**data)
      session.add(record)
      session.commit()
      record_as_dict = to_dict(record)
    except Exception:
      # Re-raise as-is; wrapping in a new Exception would lose the traceback.
      raise
    finally:
      self.close_session()

    return record_as_dict

  def close_session(self):
    """Close a SQLAlchemy Session."""
    return self.Session.remove()

  def get_session(self):
    """Returns a SQLAlchemy Session."""
    return self.Session()
|
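
(Aside: the to_dict helper used in insert() isn't shown in the post. A
plausible minimal version, assuming the automap models expose their columns
through the standard __table__ attribute, might look like this; it's a
sketch, not the poster's actual helper:)

|
def to_dict(record):
  """Hypothetical stand-in for the to_dict helper used above: turn a
  mapped record into a plain dict of column name -> value."""
  return {col.name: getattr(record, col.name)
          for col in record.__table__.columns}
|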


In my API, I then do something like this on POST, where
get_sqlAlchemy returns a singleton instance of the class above via the
webapp2 registry:

|
def post(self):
  # self.request.body is a JSON string; insert() expects a dict.
  record = get_sqlAlchemy().insert('MyTable', json.loads(self.request.body))
  self.response.set_status(201)
  self.response.out.write(json.dumps(record))
|


Everything looks fine, except: what does get_sqlAlchemy() do? If it creates a new SqlAlchemy instance each time, then you're making a new connection pool on every request. Even then, when those pools are garbage collected, the underlying database connections would be closed.
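
If the intent is one pool per process, the accessor should construct the SqlAlchemy object once and cache it. A minimal sketch of what that could look like with the webapp2 registry mentioned above (CONNECTION_STRING is a placeholder, not from the original post):

|
import webapp2


def get_sqlAlchemy():
  """Return one shared SqlAlchemy instance per process (sketch).

  The webapp2 application registry is just a per-application dict, so the
  engine and its pool get created once and reused across requests rather
  than rebuilt on every POST.  CONNECTION_STRING is a placeholder.
  """
  app = webapp2.get_app()
  instance = app.registry.get('sqlalchemy')
  if instance is None:
    instance = SqlAlchemy(CONNECTION_STRING)
    app.registry['sqlalchemy'] = instance
  return instance
|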

Using a single engine with your settings, you won't get more than 12 connections at a time per Python process: pool_size=10 plus max_overflow=2 (note this limit does *not* extend to a child subprocess, e.g. via fork() or Python multiprocessing).
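
To see where the 12 comes from and to watch the pool directly, here's a standalone sketch (the connection URL is a placeholder, not the poster's configuration):

|
import sqlalchemy as sql

# pool_size=10 persistent connections plus max_overflow=2 burst
# connections = at most 12 concurrent connections per engine/process.
engine = sql.create_engine('mysql://user:pass@host/db',  # placeholder URL
                           pool_size=10, max_overflow=2)

# QueuePool.status() reports the pool's current state, e.g. how many
# connections are checked out versus sitting idle in the pool.
print(engine.pool.status())
|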

I may just be wildly misunderstanding how connection pooling works, but
my desire would be that no more than 12 connections are ever open to
the server, since I'm using Google App Engine and I believe the max
number of concurrent connections is 12:

https://cloud.google.com/appengine/docs/python/cloud-sql/#Python_Connect_to_your_database

How does my configuration above fail to limit the connections?

Not enough information here to tell. Your program may be creating multiple engines, or it may be sharing one engine across process boundaries, either of which will defeat the limits within the pool itself.
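
If forking is in play (e.g. a worker process inheriting the engine from its parent), the usual fix per the SQLAlchemy pooling docs is to give each child its own pool by disposing the inherited one. A minimal sketch of that pattern (placeholder URL):

|
import os

import sqlalchemy as sql

engine = sql.create_engine('mysql://user:pass@host/db')  # placeholder URL

pid = os.fork()
if pid == 0:
  # Child process: the engine object and its pooled sockets were inherited
  # from the parent.  dispose() discards the inherited pool so the child
  # lazily opens fresh connections of its own instead of two processes
  # sharing the same sockets.  Note that each process then maintains its
  # own pool, so the process count multiplies the 12-connection ceiling.
  engine.dispose()
  # ... the child now uses the engine normally ...
|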

