Connection latency reduction attempt for load-balancing MySQL slaves.

2005-01-29 Thread Kevin A. Burton
I'm experimenting with using Connector/J 3.1.7 to connect to MySQL slaves 
to handle load-balancing queries.

The environment would either include a hardware load balancer or 
round-robin DNS (or static configuration).  The main component, though, 
is having the JDBC driver continually reconnect (to actually balance 
the load).

The major problem is that we run a LOT of small queries to generate our 
pages, so per-query latency is very important.

After benchmarking continual reconnects against my laptop running MySQL 
4.1.9, and comparing against our DB connection pool, I'm seeing a ~14x 
performance gap (with manual reconnects still losing).

Here are the results:
   // Reconnecting per query:
   // Total parse count: 1000
   // Total duration: 3173 milliseconds
   // Total avg duration: 3.173 milliseconds
   // Total per second: 315.15915
   //
   // Testing method: test2 (with our DB pool):
   // Total parse count: 1000
   // Total duration: 217 milliseconds
   // Total avg duration: 0.217 milliseconds
   // Total per second: 4608.295
So with our DB pool each query takes only 0.217 ms, but without it each 
one takes 3.173 ms.  That doesn't sound like much, but multiplied across 
1,000 queries it's roughly 3.2 seconds versus 0.2 seconds, which can 
seriously hurt total throughput.

I've also tried removing the query entirely and testing JUST the 
connection (no SELECT 1), and it still takes 2.8 ms.
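
Roughly, the connect-only loop is just this (a minimal sketch, not our 
real harness; the driver class is Connector/J's, but the URL credentials 
are illustrative):

    import java.sql.Connection;
    import java.sql.DriverManager;

    // Minimal sketch of the connect-only test (illustrative; credentials
    // are placeholders). Opens and closes a fresh connection each
    // iteration and reports the average latency -- no query is run.
    public class ConnectBench {
        public static void main(String[] args) throws Exception {
            Class.forName("com.mysql.jdbc.Driver"); // register Connector/J
            String url = "jdbc:mysql://127.0.0.1/mysql?user=test&password=test";
            int iterations = 1000;

            long start = System.currentTimeMillis();
            for (int i = 0; i < iterations; i++) {
                Connection conn = DriverManager.getConnection(url);
                conn.close(); // pure setup/teardown
            }
            long duration = System.currentTimeMillis() - start;

            System.out.println("Total duration: " + duration + " milliseconds");
            System.out.println("Avg per connection: "
                + (duration / (double) iterations) + " ms");
        }
    }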

Here's my JDBC connection string:
jdbc:mysql://127.0.0.1/mysql?useUnicode=true&characterEncoding=UTF-8&useOldUTF8Behavior=true&holdResultsOpenOverStatementClose=true&cacheServerConfiguration=true&elideSetAutoCommits=true
I've also increased thread_cache_size on the server, and Threads_created 
is now static.  This only gave a slight performance improvement.

Any ideas?
Kevin



Re: Connection latency reduction attempt for load-balancing MySQL slaves.

2005-01-29 Thread Mark Matthews

Kevin A. Burton wrote:
> I'm experimenting with using Connector/J 3.1.7 to connect to MySQL slaves
> to handle load-balancing queries.
>
> The environment would either include a hardware load balancer or
> round-robin DNS (or static configuration).  The main component, though,
> is having the JDBC driver continually reconnect (to actually balance
> the load).
>
> The major problem is that we run a LOT of small queries to generate our
> pages, so per-query latency is very important.

[snip]

> So with our DB pool each query takes only 0.217 ms, but without it each
> one takes 3.173 ms.  That doesn't sound like much, but multiplied across
> 1,000 queries it's roughly 3.2 seconds versus 0.2 seconds, which can
> seriously hurt total throughput.
>
> I've also tried removing the query entirely and testing JUST the
> connection (no SELECT 1), and it still takes 2.8 ms.
>
> Here's my JDBC connection string:
>
> jdbc:mysql://127.0.0.1/mysql?useUnicode=true&characterEncoding=UTF-8&useOldUTF8Behavior=true&holdResultsOpenOverStatementClose=true&cacheServerConfiguration=true&elideSetAutoCommits=true
>
> I've also increased thread_cache_size on the server, and Threads_created
> is now static.  This only gave a slight performance improvement.
>
> Any ideas?
>
> Kevin

Kevin,

You're caching just about everything that can be cached on the client
side, so the client side is as fast as it's ever going to get. Since
you're caching the server configuration and not sending auto-commits
when they're not needed, the only per-connection work left is TCP/IP
connection setup followed by the MySQL authentication handshake (unless
you're calling setTransactionIsolation() or doing other session-level
setup before you issue queries).

You might consider running a connection pool, but with _really_short_
connection lifetimes (on the order of a few seconds to a few tens of
seconds). That would spread the TCP/IP establishment and authentication
overhead over a few of your 'transactions' while still allowing load
balancing, just at a slightly less granular level than 'single-shot'
connections. Alternatively, use a ThreadLocal and hold the same
connection for the duration of your pageview.
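
Something along these lines (a rough sketch only; the class and method
names here are made up, and error handling is omitted):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    // Rough sketch: pin one connection to the current thread for the
    // duration of a pageview, then release it so the next pageview
    // reconnects (and gets re-balanced).
    public class PerPageConnection {
        private static final ThreadLocal<Connection> CURRENT =
            new ThreadLocal<Connection>();

        // Call at the start of a pageview: open one fresh connection
        // (through your load balancer / round-robin DNS) and hold it.
        public static void open(String url) throws SQLException {
            CURRENT.set(DriverManager.getConnection(url));
        }

        // Every query during the pageview shares this one connection.
        public static Connection get() {
            return CURRENT.get();
        }

        // Call at the end of the pageview: close and unpin.
        public static void close() throws SQLException {
            Connection conn = CURRENT.get();
            CURRENT.set(null);
            if (conn != null) {
                conn.close();
            }
        }
    }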

A simple benchmark I just threw together tells me that you're probably
going to spend 0.5-0.7 ms just to set up a vanilla TCP/IP connection
over 100mbit (localhost was actually _slower_ on my development machine,
even to a generic ServerSocket instance), so I'm not sure you're ever
going to see anything much faster if you always create a new connection.
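
The benchmark was essentially the following (a quick sketch; the port is
arbitrary, no MySQL is involved, and you'd point the client Socket at
another host to test over the wire rather than loopback):

    import java.net.ServerSocket;
    import java.net.Socket;

    // Quick sketch: measure bare TCP/IP setup/teardown cost with no
    // protocol on top. Port number is arbitrary.
    public class TcpConnectBench {
        public static void main(String[] args) throws Exception {
            final ServerSocket server = new ServerSocket(9999);

            // Accept and immediately discard connections in the background.
            Thread acceptor = new Thread() {
                public void run() {
                    try {
                        while (true) {
                            server.accept().close();
                        }
                    } catch (Exception e) {
                        // server socket closed; exit
                    }
                }
            };
            acceptor.setDaemon(true);
            acceptor.start();

            int iterations = 1000;
            long start = System.currentTimeMillis();
            for (int i = 0; i < iterations; i++) {
                Socket s = new Socket("127.0.0.1", 9999);
                s.close();
            }
            long duration = System.currentTimeMillis() - start;
            System.out.println("Avg TCP setup/teardown: "
                + (duration / (double) iterations) + " ms");
            server.close();
        }
    }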

The only other obvious thing I can think of, since you're using
short-lived connections, is to make sure you're not running reverse-DNS
lookups inside the MySQL server (--skip-name-resolve), although that's
usually one of the first things a really high-volume installation does.
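
That is, either start mysqld with --skip-name-resolve or put it in my.cnf:

    [mysqld]
    skip-name-resolve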

Hopefully someone from the non-Java camp, where they're used to dealing
with short-lived connections, will have some more pointers for you.

-Mark

--
Mark Matthews
MySQL AB, Software Development Manager - Client Connectivity
Office: +1 408 213 6557
www.mysql.com
