I have a web page that makes a couple of hundred ajax calls when it
loads. But the calls are recursive: the response from one call
generates a couple more calls, the responses from those generate
others, and so on. So this is not a blast of 200 simultaneous calls
to the server. In most cases the count of active database connections
never gets over 10-15 at a time. I have the max count on the connection
pool set to 125.
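For reference, the relevant pool settings look roughly like this (a
sketch of a context.xml Resource, not my actual config; the resource
name is made up, and the attribute names assume the DBCP/tomcat-jdbc
pool with the values described above):

```xml
<!-- Sketch only: "jdbc/myDB" is a placeholder name; attribute names
     assume the Commons DBCP / tomcat-jdbc connection pool. -->
<Resource name="jdbc/myDB" auth="Container" type="javax.sql.DataSource"
          maxActive="125"
          maxWait="60000"
          removeAbandoned="true"
          removeAbandonedTimeout="5"
          logAbandoned="true" />
```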
The server has very low traffic averaging a few pages an hour.
Yesterday, I was testing the page described above and started
getting errors that no connections were available in the pool. My
abandoned-connection timeout is very short, about 5 seconds, and the
wait time for a free connection is 60 seconds, I believe. So theoretically,
even if there were connection leaks, they should be returned well before
waiting connections timed out, right? I went to the AWS RDS console to
look at the connections graph. It showed that in the previous couple of
minutes the database went from 3 connections to the max of 125. The
page had timeout errors, but it had finished; there was no more
activity on this webapp. Yet the database monitor continued to show
125 connections. I kept refreshing for about 10 minutes, expecting the
connection pool to finally drop them all. Nothing. It still showed
125 connections, so I bounced Tomcat, and the connection count on the
database immediately fell to zero. BTW, I have logAbandoned set to
true, and I'm not getting any
abandon log entries.
The problem occurred twice, requiring a Tomcat restart each time. I
enabled some JMX code in my web app that logs the connection count each
time I request/return a connection. Of course, now that I've got
logging on, I can't reproduce the problem. I can reload that page a
hundred times now, and my AWS RDS monitor shows a bump up to 10-15
connections for a brief time, then drops back, exactly how I believe it
should work.
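In case it helps, the JMX logging I added amounts to something like
this (a sketch, not my exact code; the object name and the attribute
name are assumptions -- check jconsole for what your pool actually
registers):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch: read a numeric attribute from an MBean on the platform
// MBean server. For a pool you would pass its ObjectName and an
// attribute like "NumActive" -- both need to be verified in jconsole.
public class PoolStats {
    public static int readInt(String objectName, String attribute) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        Object value = mbs.getAttribute(new ObjectName(objectName), attribute);
        return ((Number) value).intValue();
    }
}
```

Polling that on each request/return is what produces the counts in my
log.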
I'm not a big fan of problems that just go away. I really want to
figure out what is happening here, even if it's random. I don't
believe this is a problem in the mainline code. Otherwise I think it
would be more consistent. And again, even if I were not returning
connections, they should be timing out and getting logged as
abandoned. Another possibility
is that I have a rogue thread that is sucking up all of the connections
and holding them. But this problem only starts when I load this page.
And I'm not aware of code that starts new threads in that area. Is
there anything I could be doing to the connection pool that would
prevent it from closing connections and returning them to the pool?
Or is there anything the database could be doing that would cause
closing a connection to fail?
I guess I'm just grasping at straws. Is there any type of low-level
logging that I can enable that will tell me each time a connection is
requested and returned (and possibly the call stack each time)?
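Failing that, I suppose I could wrap the DataSource myself. Something
like the following (a minimal sketch, all names made up; it assumes a
plain javax.sql.DataSource and uses a dynamic proxy to intercept
close()) would log every borrow and return, with a call stack captured
at the borrow site:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import javax.sql.DataSource;

// Sketch of a tracking wrapper (hypothetical). It counts active
// connections and keeps the stack trace captured at each borrow, so
// anything still checked out can be dumped with its borrow site.
public class TrackingDataSource {
    private final DataSource pool;  // the real pooled DataSource
    private final AtomicInteger active = new AtomicInteger();
    private final Set<Throwable> borrowSites = ConcurrentHashMap.newKeySet();

    public TrackingDataSource(DataSource pool) { this.pool = pool; }

    public Connection getConnection() throws Exception {
        Connection real = pool.getConnection();
        Throwable site = new Throwable("connection borrowed here");
        borrowSites.add(site);
        System.out.println("borrow: active=" + active.incrementAndGet());
        // Hand back a proxy so close() updates the bookkeeping too.
        InvocationHandler handler = (proxy, method, args) -> {
            if (method.getName().equals("close")) {
                borrowSites.remove(site);
                System.out.println("return: active=" + active.decrementAndGet());
            }
            return method.invoke(real, args);
        };
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class }, handler);
    }

    public int activeCount() { return active.get(); }

    // Print the borrow-site stack of every connection still checked out.
    public void dumpOutstanding() {
        for (Throwable site : borrowSites) site.printStackTrace();
    }
}
```

Then when the pool "fills up" again, dumpOutstanding() should point at
whoever is holding the connections.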
I've got a feeling this one is going to bite me some time soon and take
a client site down with it. I really need to understand this one.
Thanks as always,
Jerry
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org