I recently switched to SQLAlchemy to take advantage of connection pooling, but I notice that my SQL server will have anywhere from 30-60 open connections throughout the day ("threads_connected"), and sometimes it randomly climbs to the max of 2000 and completely falls over. The climb has nothing to do with queries per second; sometimes it happens at 2 AM when traffic is very low.
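For what it's worth, one way I've been trying to rule things out is by logging the pool's own view of its connections and comparing it against threads_connected. Here is a rough sketch of that (SQLite stands in for my actual Cloud SQL engine, and QueuePool is forced explicitly, but the status call is the same):

```python
import sqlalchemy as sa
from sqlalchemy.pool import QueuePool

# Hypothetical stand-in engine; in my app this is the MySQL engine
# configured below with pool_size=10, max_overflow=2.
engine = sa.create_engine("sqlite:///diagnostic_example.db",
                          poolclass=QueuePool,
                          pool_size=10, max_overflow=2)

# QueuePool.status() reports the configured pool size plus the
# checked-in, checked-out, and overflow counts -- handy to log
# periodically alongside the server's threads_connected.
print(engine.pool.status())
```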
The obvious culprit for the high number of connections is that one of my API endpoints that connects to the SQL instance is not closing connections, though I'm not sure where that would be happening. Here is my configuration:

    class SqlAlchemy(object):
        """A light abstraction for SQL Alchemy."""

        def __init__(self, connection):
            base = automap_base()
            self.engine = sql.create_engine(connection,
                                            max_overflow=2,
                                            pool_size=10,
                                            pool_recycle=120,
                                            pool_timeout=15)
            base.prepare(self.engine, reflect=True)
            # These models reflect the tables in your database.
            self.models = base.classes
            self.Session = scoped_session(sessionmaker(bind=self.engine))

            # Ping the connection to make sure it's valid. Per:
            # http://docs.sqlalchemy.org/en/latest/core/pooling.html
            @sql.event.listens_for(self.engine, 'engine_connect')
            def ping_connection(connection, branch):  # pylint: disable=unused-variable
                if branch:
                    # "branch" refers to a sub-connection of a connection;
                    # we don't want to bother pinging on these.
                    return
                try:
                    # Run a SELECT 1. Use a core select() so that the SELECT
                    # of a scalar value without a table is appropriately
                    # formatted for the backend.
                    connection.scalar(sql.select([1]))
                except sql.exc.DBAPIError as err:
                    # Catch SQLAlchemy's DBAPIError, which is a wrapper for
                    # the DBAPI's exception. It includes a
                    # .connection_invalidated attribute which specifies if
                    # this connection is a "disconnect" condition, based on
                    # inspection of the original exception by the dialect
                    # in use.
                    if err.connection_invalidated:
                        # Run the same SELECT again - the connection will
                        # re-validate itself and establish a new connection.
                        # The disconnect detection here also causes the whole
                        # connection pool to be invalidated so that all stale
                        # connections are discarded.
                        connection.scalar(sql.select([1]))
                    else:
                        raise

        def insert(self, table, data):
            """Insert one record into a SQL table.

            Args:
                table: A string representing a table name that exists in
                    self.engine.
                data: A dict of key/value pairs representing the inserted
                    record.

            Returns:
                A dict of the newly entered log.
            """
            session = self.get_session()
            record_as_dict = {}
            try:
                model = getattr(self.models, table)
                record = model(**data)
                session.add(record)
                session.commit()
                record_as_dict = to_dict(record)
            except Exception, err:
                raise Exception(err)
            finally:
                self.close_session()
            return record_as_dict

        def close_session(self):
            """Close a SQL Alchemy Session."""
            return self.Session.remove()

        def get_session(self):
            """Returns a SQL Alchemy Session."""
            return self.Session()

In my API, I then do something like this on POST, where get_sqlAlchemy returns a singleton instance of the class above via the webapp2 registry:

    def post(self):
        record = get_sqlAlchemy().insert('MyTable', self.request.body)
        self.response.set_status(201)
        self.response.out.write(json.EncodeJSON(record))

I may just be wildly misunderstanding how connection pooling works, but my expectation is that no more than 12 connections (pool_size=10 plus max_overflow=2) are ever open to the server, as I'm using Google App Engine and I believe the max concurrent connections is 12: https://cloud.google.com/appengine/docs/python/cloud-sql/#Python_Connect_to_your_database

How does my configuration above fail to limit the connections?
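To sanity-check my understanding of the cap itself, I put together this small sketch (SQLite stands in for Cloud SQL, QueuePool is forced explicitly, and the pool is shrunk so it runs fast). With pool_size=2 and max_overflow=1, a fourth simultaneous checkout times out rather than opening a fourth connection, which is the behavior I expected from my 10+2 config:

```python
import sqlalchemy as sa
from sqlalchemy.pool import QueuePool

# Stand-in engine with a shrunken pool: at most
# pool_size + max_overflow = 3 connections, and a checkout that
# can't be satisfied within pool_timeout raises TimeoutError.
engine = sa.create_engine("sqlite:///pool_cap_example.db",
                          poolclass=QueuePool,
                          pool_size=2, max_overflow=1, pool_timeout=1)

held = [engine.connect() for _ in range(3)]  # fills pool + overflow

try:
    engine.connect()  # 4th checkout: nothing free, overflow exhausted
    capped = False
except sa.exc.TimeoutError:
    capped = True

for conn in held:
    conn.close()

print(capped)  # True: the engine never opens a 4th connection
```

If that holds, the 12-connection limit should be enforced per engine, which makes me wonder whether multiple engine instances (one per App Engine instance, say) could explain the higher totals I'm seeing on the server.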