On Thu, Sep 11, 2003 at 11:30:20AM -0700, Mark Kaufer wrote:
> > By the way, Jeremy's original answer was more correct than mine, because
> > he noted where I did not that the sort buffer is a per-client resource
> > and is allocated once for each client -- or at least for each client that
> > issues queries requiring sorting, such as those with ORDER BY clauses.
>
> Thanks Paul and Jeremy for the replies. That helped out a great deal and
> I very well may recompile the MySQL installs on my FreeBSD boxes using
> LinuxThreads.
You'll probably find (as we have) that LinuxThreads/MySQL on FreeBSD easily
outperforms FreeBSD's native threads.

> Now I'm wondering just how optimised (or non-optimised as the case may be)
> my configurations are. Below are some settings specified in the my.cnf of
> a linux box with 2Gb of memory that I'd say roughly 75%-80% of its purpose
> in life is dedicated to MySQL:
>
> set-variable = key_buffer=256M
> set-variable = max_allowed_packet=1M
> set-variable = table_cache=64
> set-variable = sort_buffer=2M
> set-variable = net_buffer_length=8K
> set-variable = myisam_sort_buffer_size=2M
> set-variable = max_connections=1000
> set-variable = thread_concurrency=10
> innodb_data_file_path = ibdata1:10M:autoextend
> set-variable = innodb_buffer_pool_size=40M
> set-variable = innodb_additional_mem_pool_size=40M
> set-variable = innodb_log_file_size=5M
> set-variable = innodb_log_buffer_size=5M
> innodb_flush_log_at_trx_commit=1
> set-variable = innodb_lock_wait_timeout=50
>
> Am I allocating too little memory to table_cache, sort_buffer_size, and
> innodb_buffer_pool_size given the 2Gb of memory?

That depends... How much data do you have in InnoDB and MyISAM tables? How
many tables do you have in each?

> So I can also adjust these settings on other boxes, is there a way to
> mathematically determine what percentage of total memory to set these
> variables to?

Sort of, but it's not as scientific as you'd probably like.

Take, for example, the InnoDB buffer pool. That's the chunk of memory that
InnoDB reserves for caching data and indexes. You'd generally want to make
it as big as you can without adversely affecting overall system performance.
But if you only have 100MB of InnoDB data, there's little point in a 1GB
buffer pool. And you may find that even with 100GB in InnoDB, the buffer
pool never gets heavily used, because the actual working set of data is
quite small.
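To make that reasoning concrete, here is a rough, hypothetical sketch (in
Python, just for illustration -- this is not an official formula) of the
trade-off described above: the buffer pool is capped by the memory left
after the key_buffer and the per-client sort buffers, and a pool bigger
than your actual InnoDB data buys you nothing. All the inputs, including
the 600MB data size, are made-up example numbers.

```python
def suggest_buffer_pool_mb(total_ram_mb, key_buffer_mb, sort_buffer_mb,
                           concurrent_clients, innodb_data_mb,
                           mysql_share=0.75):
    """Very rough buffer pool sizing sketch, in MB.

    mysql_share is the fraction of RAM given over to MySQL overall
    (the poster estimated 75-80% for his box).
    """
    # sort_buffer is a per-client allocation, so budget for however
    # many clients you expect to be sorting at the same time.
    client_mb = sort_buffer_mb * concurrent_clients
    budget_mb = int(total_ram_mb * mysql_share) - key_buffer_mb - client_mb
    # Caching more than the data you actually have is pointless.
    return max(0, min(budget_mb, innodb_data_mb))

# 2GB box, 256MB key_buffer, 2MB sort_buffer, ~100 concurrently
# sorting clients, and (hypothetically) 600MB of InnoDB data:
print(suggest_buffer_pool_mb(2048, 256, 2, 100, 600))  # -> 600
```

Note what happens if you plug in the full max_connections=1000 instead of a
realistic concurrency estimate: 1000 clients times a 2MB sort buffer eats
the whole budget, which is exactly why the per-client nature of sort_buffer
matters.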
For adjusting the table_cache, I tend to look at the SHOW STATUS output to
see how often MySQL is opening and closing tables. If it's happening a lot,
then I'd think about increasing the value. What's "a lot"? Well, that also
depends. On a low-volume system, 10,000 might be "a lot", but on a really
busy system that's been up for half a year, maybe 100,000 is.

There's no magic formula with this stuff. It's more often a case of making
simple, careful changes and watching how they affect system performance,
queries per second, etc.

You might also look at an interactive tool that can give you a slightly
different view of what's going on. I wrote mytop partly for that purpose.

Jeremy
--
Jeremy D. Zawodny     |  Perl, Web, MySQL, Linux Magazine, Yahoo!
<[EMAIL PROTECTED]>   |  http://jeremy.zawodny.com/

MySQL 4.0.15-Yahoo-SMP: up 4 days, processed 164,286,296 queries (451/sec. avg)

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
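[Editor's illustration of the SHOW STATUS approach described in the reply
above: take two snapshots of the Opened_tables counter some hours apart and
look at the rate rather than the raw total. The sketch below is hypothetical
Python with made-up counter values, not anything from the original thread.]

```python
def opens_per_hour(opened_before, opened_after, hours):
    """Rate of table opens between two SHOW STATUS snapshots.

    A rate that stays high after warm-up suggests table_cache is too
    small; an absolute Opened_tables total means little by itself.
    """
    return (opened_after - opened_before) / hours

# Hypothetical snapshots of Opened_tables taken 12 hours apart:
rate = opens_per_hour(opened_before=120_000, opened_after=123_600, hours=12)
print(rate)  # -> 300.0
```

As the reply says, whether 300 opens/hour is "a lot" depends entirely on the
workload; the point is to watch the trend after each careful change rather
than chase a magic number.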