All,
Recently, there was a post on the list regarding connection pool leakage from Tomcat (ref: http://www.nabble.com/Right-way-to-close-database-connection-pool-td24832197.html). I'm sure this question has come up before as well. I contributed a listener class that shuts down a JNDI DataSource, and I installed it in my development environment. Satisfied that I was protected from the evils of JDBC connection pool leakage, I moved on with my life.

This morning, as I did a live build-and-redeploy of my webapp, I saw the JNDIDataSourceShutdownListener doing its thing, and I got to thinking: I redeploy my development webapp over and over again during the day, and I've never run out of database connections. It's pretty rare that I have to really bounce Tomcat, so I decided to see what happened when I disabled my listener and let the alleged leak occur.

My environment:

  Linux 2.6.14-gentoo-r5 #2 PREEMPT Sat Dec 17 16:30:55 EST 2005 i686 AMD Athlon(tm) XP 1700+ AuthenticAMD GNU/Linux
  java version "1.6.0_13"
  Java(TM) SE Runtime Environment (build 1.6.0_13-b03)
  Java HotSpot(TM) Client VM (build 11.3-b02, mixed mode, sharing)
  Tomcat 5.5.26
  MySQL Connector/J 5.1.7 (installed into CATALINA_HOME/common/lib)
  MySQL Server 5.0.70

I deploy my webapp by creating a directory in my appBase and filling it with files, including my META-INF/context.xml file (note the "&" in the URLs has to be escaped as "&amp;" since this is XML):

  <Context reloadable="true">
    <WatchedResource>WEB-INF/struts-config.xml</WatchedResource>
    <Resource name="jdbc/myDataSource"
              auth="Container"
              type="javax.sql.DataSource"
              maxActive="1"
              maxIdle="1"
              maxWait="10000"
              url="jdbc:mysql://localhost.localdomain/myDb?characterEncoding=utf8&amp;dumpQueriesOnException=true"
              username="scott"
              password="tiger"
              driverClassName="com.mysql.jdbc.Driver"
              removeAbandoned="true"
              removeAbandonedTimeout="30"
              logAbandoned="true"
              testOnBorrow="true"
              validationQuery="/* ping */ SELECT 1" />
    <Resource name="jdbc/myOtherDataSource"
              auth="Container"
              type="javax.sql.DataSource"
              maxActive="1"
              maxIdle="1"
              maxWait="10000"
              url="jdbc:mysql://localhost.localdomain/myOtherDb?characterEncoding=latin1&amp;dumpQueriesOnException=true"
              username="scott"
              password="tiger"
              driverClassName="com.mysql.jdbc.Driver"
              removeAbandoned="true"
              removeAbandonedTimeout="30"
              logAbandoned="true"
              testOnBorrow="true"
              validationQuery="/* ping */ SELECT 1" />
  </Context>

Yes, I have two JNDI DataSources.

I re-deploy my webapp by simply replacing whatever files need to be updated. If I touch a .class file in WEB-INF/classes, Tomcat will reload my webapp. I disabled my JNDIDataSourceShutdownListener and repeatedly recompiled a class and updated my webapp's deployment directory. Tomcat redeployed my web application each time, and I ran "mysqladmin -u root processlist" every second or so during the restart to see what was happening on the server.

If I was lucky, I could see both the "old" connection (there's only one) from my primary DataSource (the first one listed) and the "new" one at the same time (my webapp reads a bunch of stuff from the db on startup, so a JDBC connection is immediately created and used from the primary DataSource). The secondary DataSource is rarely used, and no connections were ever established via that DataSource during my tests. After a second or two, the "old" connection would disappear from the MySQL server's process list. Sometimes I could get it to hang around for as much as 30 seconds after the redeploy, but it always dies. I suspect that the delay has to do with GC scheduling.

Finally, after several restarts, I tried checking the heap:

  $ ./jmap -histo:live 16816 | grep BasicDataSource
   369:             2            240  o.a.t.dbcp.dbcp.BasicDataSource

Hmm... maybe there are a bunch of dead ones lying around:

  $ ./jmap -histo 16816 | grep BasicDataSource
   369:             2            240  o.a.t.dbcp.dbcp.BasicDataSource

Nope. So it appears that, on my setup, this leak does not occur. (!)

I have a theory about this. In all my years of using MySQL, I have found it to be very developer-friendly.
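(A quick aside, for reference: the shape of the listener I mentioned, stripped down so it stands alone. In this sketch, "Pool" stands in for o.a.t.dbcp.dbcp.BasicDataSource, the map stands in for JNDI lookups under java:comp/env, and contextDestroyed() is the ServletContextListener callback Tomcat fires on undeploy. All names here are mine, not the exact code I posted.)

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch only: the container types are mocked out so this compiles on
// its own. The real listener looks each DataSource up via
// new InitialContext().lookup("java:comp/env/jdbc/...") and casts it
// to BasicDataSource before calling close().
public class ShutdownListenerSketch {

    /** Stand-in for BasicDataSource: something with a close(). */
    static class Pool {
        boolean closed = false;
        void close() { closed = true; } // real close() drains the pool
    }

    // Stand-in for the JNDI context: resource name -> pool.
    private final Map<String, Pool> jndi = new LinkedHashMap<String, Pool>();

    void bind(String name, Pool pool) {
        jndi.put(name, pool);
    }

    /** What contextDestroyed() does: close every pool we know about. */
    void contextDestroyed() {
        for (Map.Entry<String, Pool> entry : jndi.entrySet()) {
            try {
                entry.getValue().close();
            } catch (RuntimeException e) {
                // Real code logs via ServletContext.log() and keeps going,
                // so one bad pool doesn't leave the others open.
                System.err.println("Could not close " + entry.getKey());
            }
        }
    }

    public static void main(String[] args) {
        ShutdownListenerSketch listener = new ShutdownListenerSketch();
        Pool primary = new Pool();
        Pool secondary = new Pool();
        listener.bind("jdbc/myDataSource", primary);
        listener.bind("jdbc/myOtherDataSource", secondary);
        listener.contextDestroyed(); // what an undeploy would trigger
        System.out.println(primary.closed && secondary.closed); // prints "true"
    }
}
```

End of aside; back to MySQL being developer-friendly.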
That is, it is very tolerant of abuse: you can write the sloppiest JDBC code imaginable and never run out of resources on the server side. Oracle, on the other hand, I have observed to be very consistently adamant about managing every little detail of your connections, statements, cursors, etc.: if you forgot to close a ResultSet somewhere, you were screwed (in spite of the fact that the JDBC spec requires that a ResultSet be freed when its Statement is closed, but that's another matter). I seem to recall open statements surviving a JVM shutdown, too, which meant that restarting your app server wouldn't clear the connections: they had to be killed on the server side to free them up.

My guess is that Tomcat allows the old DataSource to go out of scope, and the whole connection pool, along with any connections still in it, is GC'd. As part of that process, the socket connections to the database are closed. MySQL, being the smart, developer-friendly database that it is, sees those closed connections on the server side and cleans everything up. If I were to repeat these tests on a particularly whiny version of Oracle, I'd have open connections on the server coming out my ears.

Can anyone confirm my hypothesis? I'm glad that code such as my cleanup listener might be useful to someone, but it's certainly not useful to me at the present time.

Thanks,
-chris