Artur,

> -----Original Message-----
> From: Artur Tomusiak - Hannon Hill <artur.tomus...@hannonhill.com>
> Sent: Tuesday, February 21, 2023 4:31 PM
> To: users@tomcat.apache.org
> Subject: Database related performance degradation after upgrading from
> Tomcat 9.0.33 to Tomcat 9.0.69
> 
> After upgrading from Tomcat 9.0.33 to Tomcat 9.0.69, jobs in our application
> that execute lots of quick database queries end up noticeably slower - about
> 16% slower on average in a typical setup where the database is on a local
> area network. Here is additional information we have confirmed:
> 
>    - This is not specific to the operating system (tested on Linux and MacOS)
>    - This is not specific to the database vendor (tested on MySQL and Oracle)
>    - This is not related to our software (identical code runs on the
>    different versions of Tomcat)
>    - This is related to database connection latency rather than to the speed
>    of the database or the app: the longer the connection latency, the more
>    significant the slowdown. When testing with a local database on the same
>    machine, there is no performance difference between the two versions of
>    Tomcat. When testing with a database on another network across the
>    Internet (very high latency), the job running on Tomcat 9.0.69 is about
>    50% slower than on Tomcat 9.0.33.
>    - This might be coincidental, but based on the number of queries a job
>    executes and the database connection latency, it appears as if each
>    database query required 2 additional network trips to the database on
>    Tomcat 9.0.69 as compared to Tomcat 9.0.33.
>       - For example, if a job executes about 37,000 queries and the
>       database connection latency is about 0.15 ms, the job ends up about
>       11 seconds slower on Tomcat 9.0.69 than on Tomcat 9.0.33 in our
>       tests. This is consistent with 2 extra network trips per query:
>       37,000 queries * 0.15 ms/trip * 2 extra trips/query = 11,100 ms =
>       11.1 s.
>       - Another example: when testing with a database connection over the
>       Internet (25 ms latency) and a job that executes 1,231 queries, that
>       job is roughly 60 seconds slower on Tomcat 9.0.69 than on Tomcat
>       9.0.33 in our tests. Again, assuming 2 extra trips to the database
>       per query, the numbers add up: 1,231 queries * 25 ms/trip * 2 extra
>       trips/query = 61,550 ms = 61.55 s.
> 
> We suspect the slowness occurs around getting a database connection from the
> connection pool, though we have spotted occasional slowness around
> transaction commits as well.
> 
> Here is our database connection configuration in our context.xml file:
> 
>   <Resource
>     name="jdbc/CascadeDS"
>     auth="Container"
>     type="javax.sql.DataSource"
>     username="@{dbusername}"
>     password="@{dbpassword}"
>     driverClassName="com.mysql.jdbc.Driver"
>     url="jdbc:mysql://@{dbhostport}/@{dbname}?useUnicode=true&amp;characterEncoding=UTF-8&amp;useSSL=false"
>     maxTotal="250"
>     maxIdle="10"
>     maxWaitMillis="3000"
>     removeAbandonedOnBorrow="true"
>     removeAbandonedOnMaintenance="true"
>     removeAbandonedTimeout="300"
>     logAbandoned="true"
>     testOnBorrow="true"
>     testOnCreate="true"
>   />
> 
> Is this a known issue? If not, is there any additional information I could
> provide to help troubleshoot or replicate the problem?
> 
> Thank you,
> Artur Tomusiak

I don't know anything about the differences between those two Tomcat versions,
but I do know a thing or two about JDBC and network round trips.

Assuming the connections are already open and sitting in the pool, here are the
possible round trips that I can think of (each is marked in the sketch after
the list):

1. Check the health of the connection upon checkout (testOnBorrow=true)
2. Disable autocommit
3. Prepare JDBC statement
4. Execute statement & read initial results
5. Read additional results
6. Commit/roll back transaction
7. Re-enable autocommit
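
To make those concrete, here is a minimal sketch of one unit of work in plain
JDBC, annotated with where each trip can occur. The DataSource wiring, table,
and query are hypothetical:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    class RoundTripSketch {
        // One pooled unit of work; the comments map to the list above.
        static void runUnitOfWork(DataSource ds) throws SQLException {
            try (Connection conn = ds.getConnection()) {      // #1: validation query (testOnBorrow=true)
                conn.setAutoCommit(false);                    // #2: a trip on most drivers
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT name FROM widget WHERE id = ?")) { // #3: a trip for server-side prepares
                    ps.setLong(1, 42L);
                    try (ResultSet rs = ps.executeQuery()) {  // #4: execute + first batch of rows
                        while (rs.next()) {                   // #5: more trips once the prefetched
                            System.out.println(rs.getString(1)); //  rows run out
                        }
                    }
                }
                conn.commit();                                // #6: commit
            }   // connection returned to the pool; #7 if the pool re-enables autocommit
        }
    }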

Statement caching should help with #3.
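
With MySQL Connector/J, for example, the driver-side statement cache can be
turned on through URL properties - appending something like this to the url in
your Resource (property names from Connector/J; double-check them for your
driver version):

    &amp;cachePrepStmts=true&amp;prepStmtCacheSize=250&amp;prepStmtCacheSqlLimit=2048

Note that Connector/J prepares statements client-side by default (no extra
trip); useServerPrepStmts=true switches to server-side prepares, where the
cache matters much more.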

#4 and #5 are a little mushy. I know that with Oracle, the first 10 rows by
default come back in the response to the query itself. If there are more
results, they require additional round trips unless you've increased the JDBC
prefetch size.
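
If your result sets regularly exceed the prefetch, you can raise it per
statement with the standard JDBC hint. How it is honored is driver-specific:
Oracle maps it onto its row prefetch (also settable connection-wide via the
defaultRowPrefetch property), while MySQL Connector/J largely ignores it
unless cursor fetch or streaming is enabled. Reusing the ps from the sketch
above:

    // Hint that up to 100 rows should come back per round trip
    // instead of the driver's default.
    ps.setFetchSize(100);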

#7 might surprise you, but pools typically store connections with autocommit
enabled, because that is the default state for a connection. Some pools let
you disable this behavior; in other words, the autocommit state then doesn't
change unless you explicitly change it.
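
If your application always runs with autocommit off, one way to avoid the
toggling is to have the pool store connections that way. Assuming Tomcat's
default Commons DBCP 2 pool (which your Resource appears to use - verify the
attribute names against the DBCP version bundled with your Tomcat), that would
look like this in the Resource:

    defaultAutoCommit="false"
    autoCommitOnReturn="false"

That trades round trips #2 and #7 for a fixed connection state, at the cost of
every borrower having to commit explicitly.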

Also, you need to think about what's going on at the TCP level. If there are
enough packets in the response, the DB can't just send them all at once; it
might have to pause and wait for ACKs from the client. So even if the row
count is low, wide rows might introduce extra pauses. As long as the data was
the same in your tests, though, I don't think this should be an issue.

Hope this helps.
