Hi,
More information about Tomcat shutdown and object swapping probably belongs
on the development list. It is quite a bit of work to extend DBCP and write
extensions to Tomcat, and at the end of the day I would consider most of
those problems bugs. DBCP in particular cannot be easily extended; we had
to use the original source files with some modifications, my least favorite
option.

MySQL documentation covers their JDBC client configuration very well. The
number of options is a little overwhelming, but after some time and a few
tests it is possible to find a configuration that matches the system
resources.
See:
http://dev.mysql.com/doc/refman/5.1/en/connector-j-reference-configuration-properties.html
The configuration is version-specific, so find your version before reading.
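For reference, driver-side statement caching in Connector/J is controlled
through URL properties. A sketch of a URL that enables it (the property names
come from the Connector/J documentation; the host, database, and cache sizes
are placeholders to tune against your own memory budget):

```
jdbc:mysql://dbhost:3306/mydb?cachePrepStmts=true&prepStmtCacheSize=250&prepStmtCacheSqlLimit=2048&cacheResultSetMetadata=true
```

cachePrepStmts turns on the prepared statement cache, prepStmtCacheSize
bounds how many statements are kept per connection, and prepStmtCacheSqlLimit
skips caching for very long SQL strings.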

About the database cleaning up dangling connections, you will have to read
the database documentation. I hate to say RTFM, but there is nothing else I
can say here :)

To disable caching in DBCP, use configuration option:
poolPreparedStatements=false

I don't remember exactly where you specify it, since our extension just
passes a property file to DBCP. This is actually something I would like to
see built into Tomcat/DBCP (property-file configuration). Your system
admin will thank you.
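For a stock Tomcat setup (without our extension), the DBCP options are
usually set as attributes on the JNDI Resource element in context.xml. A
sketch with placeholder resource name, URL, and credentials:

```xml
<!-- Hypothetical resource definition; adjust name, url, and credentials -->
<Resource name="jdbc/MyDB" auth="Container" type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://dbhost:3306/mydb"
          username="appuser" password="secret"
          maxActive="20" maxIdle="5"
          poolPreparedStatements="false"/>
```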

To troubleshoot where caching is being done, your best bet is stepping
through the code with a debugger. Make sure the configuration of the test
system mirrors the production system.

It may not be possible to reuse the MySQL caching algorithms, since they may
use features that are not publicly available through JDBC and are part of
their protocol. I don't know the source well enough to comment on how it
works internally.

Specific example of a leak:
1.    Tomcat shutdown begins.
2.    The connection pool is removed.
3.    A request comes in to the war file (the one that is in the closing
process).
4.    The request gets processed.
5.    The servlet needs a connection from DBCP.
6.    Tomcat does not find the DBCP pool in JNDI, so it recreates it.
7.    The newly created connection pool will not be used after this request,
and will not be closed. If the pool pings the connections, they will stay open.
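The steps above can be condensed into a small simulation (plain Java with
made-up names, not Tomcat's actual code): a map stands in for JNDI, shutdown
removes and closes the pool, and a late lookup quietly recreates a pool that
no one will ever close.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal simulation of the leak path above. All names are hypothetical;
// this is not Tomcat's real shutdown code.
public class PoolLeakDemo {

    static class Pool {
        boolean closed = false;
        void close() { closed = true; }
    }

    // Stand-in for the JNDI registry.
    static final Map<String, Pool> jndi = new HashMap<>();

    // Steps 1-2: shutdown begins, the pool is closed and removed.
    static void beginShutdown() {
        Pool p = jndi.remove("jdbc/MyDB");
        if (p != null) p.close();
    }

    // Steps 3-6: a request arriving mid-shutdown does a lookup; it fails,
    // so the code path recreates the pool on the fly.
    static Pool lookupOrCreate() {
        return jndi.computeIfAbsent("jdbc/MyDB", k -> new Pool());
    }

    public static void main(String[] args) {
        jndi.put("jdbc/MyDB", new Pool()); // normal startup
        beginShutdown();                   // steps 1-2
        Pool recreated = lookupOrCreate(); // steps 3-6
        // Step 7: nothing will ever close this second pool.
        System.out.println("leaked pool still open: " + !recreated.closed);
    }
}
```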

I believe a similar path exists when hot-deploying a war file, which also
exhibits the problem of handling requests in the middle of a shutdown.

It has been a while since we worked on this, so the details may vary from
what is listed above. However, one thing I do remember is that after some
objects were shut down, Tomcat was still processing requests that required
those objects, and there was more than one way to reproduce it.

After all this work, we just shut down Tomcat when we update war files.

It is possible to fix this. Two things I would start with:
1.    Standardize the server shutdown order. This is too much information
for this list.
2.    Call the shutdown callbacks of the different APIs and give the
developer an option to deal with it. For example, the JMX shutdown process
does not happen correctly.
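One shape the second point could take (a sketch under my own assumptions,
not an API Tomcat actually offers): keep shutdown callbacks in a LIFO stack
so resources close in the reverse of their creation order, and stop
accepting new work before any callback runs.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical shutdown registry: callbacks run last-registered-first,
// and a flag lets request handling refuse work once shutdown starts.
public class ShutdownOrder {

    private final Deque<Runnable> callbacks = new ArrayDeque<>();
    private volatile boolean shuttingDown = false;

    // Components register cleanup in the order they were created.
    public void onShutdown(Runnable cb) { callbacks.push(cb); }

    // Request entry points check this before doing any work.
    public boolean acceptRequest() { return !shuttingDown; }

    public void shutdown() {
        shuttingDown = true;           // refuse new requests first
        while (!callbacks.isEmpty()) {
            callbacks.pop().run();     // LIFO: reverse of registration
        }
    }
}
```

With something like this, a pool created after a servlet is closed before
that servlet, and a request arriving mid-shutdown is rejected instead of
recreating resources behind the container's back.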

You may be able to fix this via listeners at various levels. We came to the
conclusion that shutting down objects in Tomcat was a little chaotic, and no
one could say with confidence that one listener API would work better than
another without spending a few days tracing the code and doing lots of
tests. The problem is that in a simple invocation everything seems to work
fine, but if you throw in a few threads and methods that take some time to
respond, then tracing becomes complicated and time intensive. From my past
experience, multithreaded debugging can be time intensive, especially when
it involves as many components and possible paths as a full-blown
application container.

To make a long story short:
1.    Disable caching in DBCP.
2.    Enable caching in your JDBC client.
3.    Shut down Tomcat when you replace war files or change configuration.

Elli

On Mon, Oct 26, 2009 at 6:54 PM, Martin Gainty <mgai...@hotmail.com> wrote:

>
> mg>good work
>
> > From: e...@sustainlane.com
> > To: users@tomcat.apache.org
> >
> > Hi,
> > I did not follow this thread from the beginning, but I can provide a
> > few tips about using connection pools in tomcat.
> > 1. DBCP has quite a few problems in the area of scalability. We have
> > our own modified version of the source that makes it more concurrent,
> > and I believe some of those changes were integrated into DBCP. Use the
> > latest stable version of DBCP from the commons project. It is a jar
> > file in tomcat that you can easily replace. The fixes do not require
> > using concurrent, that part of the code is not causing the problems.
> > 2. If your JDBC driver supports caching of prepared statements and
> > metadata, do it in the driver and disable this in DBCP. IMO DBCP does
> > a poor job at best in caching. We use mysql and its JDBC driver is
> > doing an excellent job.
> mg>is it possible to port the working (MySQL) Driver caching algorithms to
> DBCP?
>
> > 3. Your JDBC driver may already be caching metadata that DBCP is
> > caching. In this case you are caching the same data twice. Make sure
> > it does not happen, it is a big memory overload on the JVM.
> mg>how to determine which metadata is being cached?
>
> > 4. Tomcat has a problem doing a clean shutdown of DBCP, and other JNDI
> > resources. I traced in the debugger dangling connection pools that are
> > created during the shutdown process. If your pool is configured to ping
> > the connections once in a while, they can stay open for a long time,
> > possibly forever. Our solution is a custom extension that cleans up
> > pools, which works in conjunction with our extended implementation of
> > DBCP.
> mg>any way to factor the cleanup code to DBCP?
>
> > 5. The connection pool leak is caused mostly when war files are
> > replaced under load. If you are experiencing a problem of leaks in
> > those conditions, then some common options are:
> > A. Write custom extension to the pooling mechanism as we did. This is
> > not a 100% solution.
> mg>any specific examples?
>
> > B. Avoid hot deployment of apps by shutting down tomcat before
> > updates. This is safer, but also not 100% clean.
> mg>any way to factor the cleanup code to TC hot deployment code?
>
> > C. Block Tomcat during the update. If you have a load balancer,
> > redirect traffic to other tomcat instances, and then do the update
> > while tomcat has no load. This reduces the problems significantly.
> mg>tough to request the ops people to block all TC instances to be updated
> mg>personnel would have to be allocated to showup on off hours..
> mg>this could be the most resource-intensive (most expensive) of all
> options
> >
> > When you do a full tomcat shutdown, there will still be connections
> > that are not closed, but the process itself will finish, and the
> > database will clean up the connections after some time.
> mg>how is the time span assigned?
>
> >This is of course not as clean as closing all the connections on server
> shutdown,
> > but I don't know of any better option. I believe our custom cleanup
> > code does close most connections on shutdown, but I have no 100%
> > certainty or evidence that this is actually true. However it does do a
> > lot of closing that did not happen before.
> mg>if your algorithm is based on any events such as ContainerDestroy i'm
> thinking a listener could accomplish the objective?
>
> > I am not aware of any way to completely avoid dangling connection pool
> > after hot deployment under load. We tried to fix this but it got too
> > complicated, it is much easier to restart Tomcat and swallow the
> > bitter pill. You can still do hot deployment of war files that do not
> > access the database, though it is possible that the same leaks will
> > leave lots of hanging objects of other types (like email clients, JMS
> > clients, thread pools, HttpClient, etc).
> mg>anything you can suggest to clean up these orphaned resources will help
> everyone
>
> > E
> mg>many thanks for the thoughtful commentary
> mg>ccing the commons-users list as I'm sure they would be very interested
> in your solution
> mg>Martin
>
