-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Kevin A. Burton wrote:
> I'm experimenting with using ConnectorJ 3.1.7 to connect to MySQL slaves 
> to handle load balancing queries.
> 
> The environment would either include a hardware loadbalancer or round 
> robin DNS (or static configuration).  The main component though is 
> having the JDBC driver continually reconnect (to actually balance the load)
> 
> The major problem is that we run a LOT of small queries to generate our 
> pages so per-query latency is very important. 

[snip]
> So with our DB pool each query only takes .217 ms but without it it's 
> taking 3.173 ms.  This doesn't sound like much, but if you multiply that 
> by 1000x queries it's significant and can seriously screw with total 
> throughput.
> 
> I've also tried removing all queries and testing JUST the connection 
> (without a SELECT 1), and it still takes 2.8 ms.
> 
> Here's my JDBC connection string:
> 
> jdbc:mysql://127.0.0.1/mysql?useUnicode=true&characterEncoding=UTF-8&useOldUTF8Behavior=true&holdResultsOpenOverStatementClose=true&cacheServerConfiguration=true&elideSetAutoCommits=true
> 
> I've also increased thread_cache_size on the server, and Threads_created is 
> a static value.  This only had a slight performance advantage.
> 
> Any ideas?
> 
> Kevin
> 

Kevin,

You're caching just about everything that can be cached on the client
side, so the client side is as fast as it's ever going to get. Since
you're caching the server configuration as well as not sending
auto-commits when they're not needed, the only thing that's happening is
TCP/IP connection setup, followed by the MySQL authentication protocol
(unless you're calling setTransactionIsolation() or other session-level
stuff before you issue queries).

You might consider running a connection pool, but with _really_short_
lifetimes (on the order of a few seconds to a few tens of seconds). That
would spread the TCP/IP establishment/authentication overhead out over a
few of your 'transactions', while still allowing load balancing at a
slightly less granular level than 'single-shot' connections.
Alternatively, use a ThreadLocal and hold the same connection for the
duration of your pageview.
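The ThreadLocal idea above can be sketched as a small generic holder. This is a hypothetical helper (not part of Connector/J): each thread lazily acquires one resource, e.g. a JDBC Connection, and reuses it until the pageview releases it.

```java
import java.util.function.Supplier;

/**
 * Hedged sketch of "one connection per pageview": each thread lazily
 * creates one instance via the factory and reuses it until release().
 */
public class ThreadLocalHolder<T> {
    private final Supplier<T> factory;
    private final ThreadLocal<T> current = new ThreadLocal<>();

    public ThreadLocalHolder(Supplier<T> factory) {
        this.factory = factory;
    }

    /** Lazily create (or reuse) the instance bound to the calling thread. */
    public T get() {
        T value = current.get();
        if (value == null) {
            value = factory.get();  // e.g. DriverManager.getConnection(url)
            current.set(value);
        }
        return value;
    }

    /** Drop the thread's instance so the next pageview reconnects
        (and gets re-load-balanced). Caller closes the Connection first. */
    public void release() {
        current.remove();
    }
}
```

In a servlet filter you would construct this with something like `() -> DriverManager.getConnection(jdbcUrl)` (wrapping the checked SQLException, since Supplier cannot throw it) and call release() in a finally block at the end of the request.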

A simple benchmark I just threw together is telling me that you're
probably going to use up .5-.7 ms just to set up a vanilla TCP/IP
connection over 100mbit (localhost was actually _slower_ on my
development machine, even to a generic ServerSocket instance), so I'm
not sure you'll ever see anything faster if you always create a new
connection.
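The kind of benchmark described above might look like the following sketch: time raw TCP connection setup against a generic ServerSocket on loopback. Nothing here is MySQL-specific, and the numbers will vary by machine and network.

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

/** Rough micro-benchmark of bare TCP connection setup cost. */
public class ConnectBench {
    public static long averageConnectNanos(int iterations) throws IOException {
        // Port 0 = let the OS pick a free ephemeral port on loopback.
        try (ServerSocket server = new ServerSocket(0, 50,
                InetAddress.getLoopbackAddress())) {
            long total = 0;
            for (int i = 0; i < iterations; i++) {
                long start = System.nanoTime();
                try (Socket s = new Socket(server.getInetAddress(),
                        server.getLocalPort())) {
                    total += System.nanoTime() - start;
                }
                server.accept().close();  // drain the queued connection
            }
            return total / iterations;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.printf("avg connect: %.3f ms%n",
                averageConnectNanos(100) / 1_000_000.0);
    }
}
```

Note this measures only the TCP handshake; a real MySQL connection additionally pays for the authentication round trips on top of it.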

The only other obvious thing I can think of, since you're using
short-lived connections, is to make sure you're not running reverse-DNS
lookups inside the MySQL server (--skip-name-resolve), although that's
usually one of the first things a really high-volume installation does.
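For reference, this can be set either on the server command line as shown above or persistently in my.cnf:

```ini
# my.cnf -- skip reverse-DNS lookups on each new connection
[mysqld]
skip-name-resolve
```

One caveat: with name resolution disabled, Host values in the grant tables must be IP addresses (or localhost), since hostnames can no longer be matched.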

Hopefully someone from the non-Java camp, where they're used to dealing
with short-lived connections, will have some more pointers for you.

        -Mark

- --
Mark Matthews
MySQL AB, Software Development Manager - Client Connectivity
Office: +1 408 213 6557
www.mysql.com
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.6 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org

iD8DBQFB/EcHtvXNTca6JD8RAi3jAJ9NmX3NuzmDMI9nWDEeWtWYAZs0IACeJgGW
1zZjrtA1eq0YIJWYtfp2sN8=
=nrh8
-----END PGP SIGNATURE-----

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
