Re: my.cnf optimization question ..
On Tue, Sep 11, 2001 at 03:17:47PM -0500, Weslee Bilodeau wrote:

> > > They average around 500 connections/second at any given time, two
> > > have a master/slave setup.
> >
> > Connections/sec or Queries/sec? That's a lot of connections per
> > second. Can you use persistent connections? It would save a lot of
> > overhead.
>
> Half and half, at the moment .. We had a problem that when we
> connected to multiple database servers from PHP using persistent
> connections, it would get confused. By confused I mean:
>
> 1.) It would 'leak' MySQL connections. Somehow it 'forgot' it had an
> open connection.
>
> 2.) It would pick the wrong server. Each server has specific
> information on it, in separate databases/tables. The persistent
> connection code would actually hand back the wrong database handle
> and the queries would fail.
>
> We had to disable persistent connections because of this.

Ouch. I hadn't heard about those problems before. It's probably worth
testing again, 'cause it will give you a nice boost--especially at that
connection rate.

> Our current my.cnf file:
>
> [mysqld]
> skip-locking
> set-variable= key_buffer=384M
> set-variable= max_allowed_packet=1M
> set-variable= table_cache=400
> set-variable= sort_buffer=10M
> set-variable= record_buffer=10M
> set-variable= thread_cache=8

You might increase the thread_cache quite a bit. It'll bypass the need
to create/destroy threads all the time. It may not give you a big
improvement, but it can't hurt.

> The reason I'm asking is because of the MySQL page on Linux-specific
> notes, which basically said you can have lots of threads, providing
> your key-cache is low.

Wonder what that really means... or at least what the rationale behind
it is.

Have you experimented much with the key_buffer? Any noticeable
difference if you double it or cut it in half?

> I'm trying to get as many threads as I can, and get MySQL to be
> RAM-happy for speed as well.

A good plan. ;-)

> > > Any changes for this many connections/RAM that would be
> > > suggested?
> >
> > How are things looking in SHOW STATUS?
> > Any red flags that you've noticed? Any slow queries?
>
> Slow queries are actually watched with a fine-tooth comb. If I catch
> anything slow, I first smack the programmer alongside the head, then
> throw Paul's book at them. I attempt to optimize it where possible,
> or just drop their table and force them to do it over again.

Hahhahahah... Well, that's good to hear, I suppose.

> The queries themselves seem to be pretty well optimized, I just
> wanna make sure the config I've got can scale. :)

I know the feeling.

> I'm used to servers with a few hundred connections, with 512 MB of
> RAM .. Not what they want, which is a few thousand, with 2 GB of RAM.

Having not broken the 1,000 mark myself, I don't have a lot else here
to say.

Jeremy
--
Jeremy D. Zawodny, [EMAIL PROTECTED]
Technical Yahoo - Yahoo Finance
Desk: (408) 349-7878  Fax: (408) 349-5454  Cell: (408) 685-5936

MySQL 3.23.41-max: up 7 days, processed 146,377,980 queries (241/sec. avg)

---------------------------------------------------------------------
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/           (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
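As an aside, the thread_cache suggestion above can be checked with numbers rather than by feel: SHOW STATUS exposes the Threads_created and Connections counters, and their ratio says how often a new connection missed the thread cache. A minimal sketch of that arithmetic (the counter values below are made up for illustration, not from the servers in this thread):

```python
# Sketch: judge thread_cache effectiveness from SHOW STATUS counters.
# 'Threads_created' counts threads built from scratch; 'Connections'
# counts all connection attempts. A ratio near 1 means almost every
# connection paid the thread-creation cost, so thread_cache is too small.
def thread_cache_miss_rate(status):
    created = status["Threads_created"]
    connections = status["Connections"]
    return 100.0 * created / connections if connections else 0.0

# Hypothetical values for illustration:
status = {"Threads_created": 95_000, "Connections": 100_000}
print(f"thread cache miss rate: {thread_cache_miss_rate(status):.1f}%")
# prints "thread cache miss rate: 95.0%"
```

At a rate like that, raising thread_cache well above 8 would be the obvious experiment.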
Re: my.cnf optimization question ..
> > They average around 500 connections/second at any given time, two
> > have a master/slave setup.
>
> Connections/sec or Queries/sec? That's a lot of connections per
> second. Can you use persistent connections? It would save a lot of
> overhead.

Half and half, at the moment .. We had a problem that when we connected
to multiple database servers from PHP using persistent connections, it
would get confused. By confused I mean:

1.) It would 'leak' MySQL connections. Somehow it 'forgot' it had an
open connection.

2.) It would pick the wrong server. Each server has specific
information on it, in separate databases/tables. The persistent
connection code would actually hand back the wrong database handle and
the queries would fail.

We had to disable persistent connections because of this. We last
tested about 5 months ago, so the problem could very well be gone. It
was, however, a true pain in the rear to replicate. Right now,
persistent connections are off; I really hope it's fixed, but I'm
unsure.

> Care to share the config file? With that much RAM, there are probably
> some things you can tweak to make optimal use of the 2GB RAM. But if
> you've done a lot of that, I won't bother pointing them out
> (obviously).
Our current my.cnf file:

[mysqld]
skip-locking
set-variable= key_buffer=384M
set-variable= max_allowed_packet=1M
set-variable= table_cache=400
set-variable= sort_buffer=10M
set-variable= record_buffer=10M
set-variable= thread_cache=8
set-variable= thread_concurrency=4
set-variable= myisam_sort_buffer_size=64M
set-variable= max_connections=3072
set-variable= open_files_limit=6144
set-variable= long_query_time=4
tmpdir=/data/tmp
log-bin
user=mysql
binlog-do-db=blahblahblah
binlog-do-db=blahblahblahblah
server-id=1
log=/usr/local/var/mysqld.log
log-slow-queries=/usr/local/var/slow-queries.log
pid-file=/usr/local/var/mysqld.pid

[mysql]
no-auto-rehash

[isamchk]
set-variable= key_buffer=256M
set-variable= sort_buffer=256M
set-variable= read_buffer=2M
set-variable= write_buffer=2M

[myisamchk]
set-variable= key_buffer=256M
set-variable= sort_buffer=256M
set-variable= read_buffer=2M
set-variable= write_buffer=2M

[mysqlhotcopy]
interactive-timeout

[safe_mysqld]
err-log=/usr/local/var/mysqld.err

The reason I'm asking is because of the MySQL page on Linux-specific
notes, which basically said you can have lots of threads, providing
your key-cache is low.

I'm trying to get as many threads as I can, and get MySQL to be
RAM-happy for speed as well.

> > Any changes for this many connections/RAM that would be suggested?
>
> How are things looking in SHOW STATUS? Any red flags that you've
> noticed? Any slow queries?

Slow queries are actually watched with a fine-tooth comb. If I catch
anything slow, I first smack the programmer alongside the head, then
throw Paul's book at them. I attempt to optimize it where possible, or
just drop their table and force them to do it over again.

The queries themselves seem to be pretty well optimized, I just wanna
make sure the config I've got can scale. :)

I'm used to servers with a few hundred connections, with 512 MB of
RAM .. Not what they want, which is a few thousand, with 2 GB of RAM.
> Jeremy

Thanks for any help,
Weslee
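One thing worth double-checking in the config above: the comments in the stock my-huge.cnf suggest budgeting roughly key_buffer plus the per-thread buffers (sort_buffer, record_buffer) multiplied by the number of concurrent threads. With both per-thread buffers at 10M and max_connections at 3072, the worst case blows far past 2 GB of RAM. A back-of-the-envelope check of that arithmetic (a rule-of-thumb sketch, not MySQL's exact accounting, since not every connection allocates both buffers at once):

```python
# Worst-case memory budget for the posted my.cnf, using the rule of
# thumb from the my-huge.cnf comments:
#   key_buffer + (per-thread buffers) * (concurrent threads)
MB = 1024 * 1024
key_buffer      = 384 * MB
sort_buffer     = 10 * MB   # allocated per thread that sorts
record_buffer   = 10 * MB   # allocated per thread doing sequential scans
max_connections = 3072

worst_case = key_buffer + (sort_buffer + record_buffer) * max_connections
print(f"worst case: {worst_case / (1024 ** 3):.1f} GB")
# prints "worst case: 60.4 GB" -- versus 2 GB of physical RAM
```

In practice nowhere near all 3072 connections sort at once, but the gap suggests either shrinking the per-thread buffers or lowering max_connections before chasing a few thousand simultaneous connections.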
my.cnf optimization question ..
Basic (maybe?) question on some optimal variables for MySQLd's my.cnf
configuration ..

Right now, I have three MySQL servers, each with 2 GB RAM and dual
1 GHz P3 CPUs. They average around 500 connections/second at any given
time; two have a master/slave setup.

MySQL is pretty much the only thing running, nice'd at -20 on a
patched Linux 2.4.9 along with a patched glibc 2.2.4 (LinuxThreads
patches). It ran smoothly when I tested it with ~4000 connections and
didn't skip a beat.

The question, basically: I'm expecting to get around 1-2,000
connections/second on average on the systems, using a modified
'my-huge.cnf' from the distribution. No InnoDB, BDB, or overly huge
blobs being used. On average each query joins about 4 tables on
primary/unique indexes where possible.

Any changes for this many connections/RAM that would be suggested?

Thanks,
Weslee
Re: my.cnf optimization question ..
On Mon, Sep 10, 2001 at 04:54:57PM -0500, Weslee Bilodeau wrote:

> Basic (maybe?) question on some optimal variables for MySQLd's
> my.cnf configuration ..
>
> Right now, I have three MySQL servers, each with 2 GB RAM and dual
> 1 GHz P3 CPUs.

Nice. :-)

> They average around 500 connections/second at any given time, two
> have a master/slave setup.

Connections/sec or Queries/sec? That's a lot of connections per
second. Can you use persistent connections? It would save a lot of
overhead.

> MySQL is pretty much the only thing running, nice'd at -20 on a
> patched Linux 2.4.9 along with a patched glibc 2.2.4 (LinuxThreads
> patches). It ran smoothly when I tested it with ~4000 connections
> and didn't skip a beat.
>
> The question, basically: I'm expecting to get around 1-2,000
> connections/second on average on the systems, using a modified
> 'my-huge.cnf' from the distribution.

Care to share the config file? With that much RAM, there are probably
some things you can tweak to make optimal use of the 2GB RAM. But if
you've done a lot of that, I won't bother pointing them out
(obviously).

> On average each query joins about 4 tables on primary/unique indexes
> where possible.
>
> Any changes for this many connections/RAM that would be suggested?

How are things looking in SHOW STATUS? Any red flags that you've
noticed? Any slow queries?

Jeremy
--
Jeremy D. Zawodny, [EMAIL PROTECTED]
Technical Yahoo - Yahoo Finance
Desk: (408) 349-7878  Fax: (408) 349-5454  Cell: (408) 685-5936

MySQL 3.23.41-max: up 4 days, processed 103,895,494 queries (249/sec. avg)
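For the key_buffer side of the SHOW STATUS question, the relevant counters are Key_read_requests versus Key_reads: index block requests served from the key cache versus those forced to hit disk. A small sketch of the ratio, with made-up counter values rather than anything from the servers in this thread:

```python
# Sketch: key-cache efficiency from SHOW STATUS counters. 'Key_reads'
# are index reads that missed the key_buffer and went to disk; a hit
# rate well above 99% usually means the key_buffer is large enough.
def key_cache_hit_rate(key_read_requests, key_reads):
    if key_read_requests == 0:
        return 100.0  # no index reads yet, nothing to miss
    return 100.0 * (1 - key_reads / key_read_requests)

# Hypothetical counters for illustration:
print(f"key cache hit rate: {key_cache_hit_rate(1_000_000, 2_500):.2f}%")
# prints "key cache hit rate: 99.75%"
```

Watching how this number moves when the 384M key_buffer is doubled or halved would answer the experimentation question above more directly than memory totals alone.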