On Jun 10, 5:41 pm, Oveek <mov...@gmail.com> wrote:
> So my rocky trip through database land continues.

Ah. Yes. I'll do some responding here before moving on into your newer
message.

> Every single request to TiddlyWeb is creating a new connection to
> Postgres and the connections are rapidly eating up available memory. I
> did pretty much all my testing in Postgres, but a quick check using
> MySQL revealed the same behavior.

Ah. Yes. This is unfortunate but not surprising. As you've figured out,
SQLAlchemy's connection pool is not being reused. I feared something
like this might happen, because a new engine is created every time
__init__ is called (again, as you've figured out). I was
optimistically hoping that SQLAlchemy would just handle it through
some kind of magic (it has plenty of other magic), but no. I hoped
that when self.engine went out of scope things would be cleaned up.
Clearly that is not the case, but I bet we can fix it with a bit of
jiggery-pokery.

> After skimming the sqlalchemy docs on Sessions a bit, I think there
> are a couple options. One option I'm confident will work is to clean
> up the session created in each request by calling Session.remove()
> after the request is done. An alternative (not sure if it's possible)
> would be to try to connect a session to an existing connection created
> by a previous request.

One option would be for the WSGI middleware that establishes the
tiddlyweb.store entry in the environ to do session removal on the way
back up the stack.
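A minimal sketch of that middleware, written with a generic cleanup callable so it doesn't depend on TiddlyWeb internals (the names SessionCleanup and cleanup are hypothetical; in practice cleanup would be something like a SQLAlchemy scoped_session's remove method):

```python
class SessionCleanup(object):
    """Hypothetical WSGI middleware that runs a cleanup callable after
    the wrapped application has produced its response."""

    def __init__(self, application, cleanup):
        self.application = application
        self.cleanup = cleanup  # e.g. a scoped_session's .remove

    def __call__(self, environ, start_response):
        try:
            return self.application(environ, start_response)
        finally:
            # Fires on the way back up the stack, even if the app raised,
            # so the session's connection goes back to the pool.
            self.cleanup()
```

One caveat: the finally clause fires before the response iterable is consumed, which is fine for apps that return a list but would need a wrapping iterator for streaming responses.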

Another would be to establish some module-level variables that hold
the shared engine and session machinery.

> I thought it might be
> possible to use a destructor, a __del__ method, to do it, but I found
> the destructor is not called after each request. That itself raises
> some questions.

It certainly does. This episode is going to be very productive in
exposing issues in the system. I'm glad we did this while still
pre-1.0. When I was writing the first few rounds of sql.py and adjusting
the tests to work with it, I found all sorts of problems with how
things are scoped, so here we seem to have another one.

What ought to happen is that the store object stays in scope all the
way out to the top of the WSGI stack and then gets finalized. But
it wouldn't surprise me if a reference is being kept somewhere.

Are you doing your tests in mod_python, mod_wsgi, or the built-in
server? They may each handle the WSGI environment differently.

> Anyway before getting into too much detail about any possible
> remedies, I'll wait for your take on the situation.

The perfect solution is one where a single connection pool of suitable
size is created and used by all the subsequent requests. This should
be possible with proper scoping of some variables. A singleton of some
type holding onto session and engine stuff is probably the way to go.
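One way to get that scoping, sketched here without any SQLAlchemy dependency: a module-level registry keyed by database URL, so every store instance that __init__ builds reuses the same engine and its pool (get_engine and _ENGINES are hypothetical names; the factory argument would be something like lambda: create_engine(db_url)):

```python
# Hypothetical module-level registry: one engine (and therefore one
# connection pool) per database URL, shared by all subsequent requests.
_ENGINES = {}

def get_engine(db_url, factory):
    """Return the cached engine for db_url, calling factory() only the
    first time that URL is seen, e.g. factory=lambda: create_engine(db_url)."""
    if db_url not in _ENGINES:
        _ENGINES[db_url] = factory()
    return _ENGINES[db_url]
```

Because the dict lives at module level, it survives individual requests, which is exactly the scoping the per-instance self.engine was missing.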

> Interestingly I'm getting the impression that you're not getting this
> problem using the SQLite database on peermore, because the site seems
> to be doing okay. I think I saw somewhere that SQLite, being a single
> file, handles multiple connections differently so that might account
> for it. Either that or your server has a lot of memory and the
> incremental increase in memory usage hasn't had an effect yet. If you
> are getting new connections on each request you may be in for an
> unpleasant surprise somewhere down the line.

I am using sqlite on peermore and I think it basically doesn't use
connections at all: it's just file operations with fancy semaphores
for concurrency handling. I think. The server has 168MB of RAM so it's
not that there's tons of space.

More in the next message.
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"TiddlyWikiDev" group.
To post to this group, send email to TiddlyWikiDev@googlegroups.com
To unsubscribe from this group, send email to 
tiddlywikidev+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/TiddlyWikiDev?hl=en
-~----------~----~----~----~------~----~------~--~---
