While the advice on the problem may be correct, I think their explanation of the culprit is incorrect. Applications don't care about the block size; that is abstracted away by the file system. The block size typically defaults to 4k, but it can be changed across a wide range to suit your data. If you are using very large files (e.g. video), you would want to increase it to maybe 128k or larger. If you have lots of tiny files, you may want to reduce it to 2k or less to better fit the file size.

What happens is that a disk is accessed a block at a time. If the file system is set up for 4k blocks, a very large file will be accessed 4k at a time. That can be a lot of reads or writes. A tiny file, say 1k, is also accessed 4k at a time. That's 3k wasted.
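
To put some rough numbers on that, here's a little back-of-the-envelope Python (the file sizes are made up purely for illustration):

block_sizes = [2 * 1024, 4 * 1024, 128 * 1024]   # 2k, 4k, 128k

def io_count(file_size, block_size):
    # number of block-sized operations needed to touch the whole file
    return -(-file_size // block_size)           # ceiling division

def wasted_bytes(file_size, block_size):
    # slack left in the last, partially filled block
    return io_count(file_size, block_size) * block_size - file_size

one_gb = 1024 ** 3   # a large video file
one_k = 1024         # a tiny file

for bs in block_sizes:
    print("%4dk blocks: %6d I/Os for a 1GB file, %6d bytes wasted on a 1k file"
          % (bs // 1024, io_count(one_gb, bs), wasted_bytes(one_k, bs)))

The 4k line is the 3k-wasted case above; bigger blocks cut the I/O count on the large file but waste more space on the tiny one.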

Now, database tables and indexes are really just large files, but they are typically accessed randomly. The ideal is to have your block size match your record size. This is rarely possible, since different tables have different record sizes. And of course, some tables have variable-length records.

Unfortunately, there is no simple formula or tool to tell you the optimal block size for your data. You need to run tests. For starters, figure out the average record length for your busy tables. If your records are large, you may want to increase the block size; if they are small, maybe decrease it. BUT, you also need to keep in mind how your data is accessed. If you are typically doing full table scans, then your record length shouldn't carry as much weight; you would want to access the entire file as quickly as you can, which means larger block sizes.
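
MySQL will actually tell you the average: SHOW TABLE STATUS reports an Avg_row_length column, or you can divide Data_length by Rows yourself. Here's a rough Python sketch of the arithmetic, with made-up numbers standing in for what SHOW TABLE STATUS would report on one of your busy tables:

# Made-up figures standing in for SHOW TABLE STATUS output on a busy table.
data_length = 512 * 1024 * 1024     # Data_length: bytes of data in the table
row_count = 3500000                 # Rows

avg_row_length = data_length / float(row_count)
print("average row length: about %d bytes" % avg_row_length)

# How many whole records fit in a block of each size, and how much of the
# block is left over (wasted on every random read of a single record).
for block_size in (2048, 4096, 8192, 16384):
    rows_per_block = int(block_size // avg_row_length)
    leftover = block_size - rows_per_block * avg_row_length
    print("%5d byte blocks: ~%d rows per block, ~%d bytes left over"
          % (block_size, rows_per_block, leftover))
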
Getting the optimal setup will eventually come down to trial-and-error testing. You don't have to test on a high-end system, just something that won't bottleneck at the CPU or RAM. Partition a disk with various block sizes and move the database around, running tests on each partition.
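
Here's a bare-bones sketch in Python of the kind of test I mean: time random reads of a few different sizes against a big file on whichever partition you're trying. The path and sizes are placeholders, and the test file should be much larger than RAM, otherwise the OS cache will hide exactly the thing you're trying to measure:

import os
import random
import time

test_file = "/path/to/testfile"            # put one of these on each partition you try
read_sizes = [2048, 4096, 16384, 131072]   # read sizes to compare
reads_per_run = 10000

def time_random_reads(path, read_size, count):
    # issue `count` reads of `read_size` bytes at random offsets and time them
    file_size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    start = time.time()
    for _ in range(count):
        offset = random.randrange(0, max(file_size - read_size, 1))
        os.lseek(fd, offset, os.SEEK_SET)
        os.read(fd, read_size)
    os.close(fd)
    return time.time() - start

for size in read_sizes:
    elapsed = time_random_reads(test_file, size, reads_per_run)
    print("%6d byte reads: about %.0f reads/sec" % (size, reads_per_run / elapsed))

Run the same thing with sequential reads and with writes, and repeat on partitions built with different block sizes; whatever wins for your access pattern is your answer.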


Regardless, you should first double-check that the bottleneck really is I/O. The bottleneck in any system is always RAM, CPU, network, or I/O (disk). Use the various "stat" tools (iostat, vmstat, netstat, etc.) to figure out which one yours is; I'm not sure if they are named something else on Solaris. Also, see if you can get your hands on a copy of dtrace. I've never used it, but it's supposed to be an amazing performance profiling tool from Sun for Solaris, though it may be only for Solaris 10.
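
If it's any use, here's a trivial Python wrapper that just grabs a few iostat samples to eyeball. The flags shown are what I'd use on Linux; on Solaris something like iostat -xn is probably closer, so treat the exact arguments as a guess and check the man page:

import subprocess

# "-x" asks for extended per-device statistics; take three 5-second samples.
# Flag names and output format differ between Linux and Solaris, so adjust as needed.
result = subprocess.run(["iostat", "-x", "5", "3"],
                        capture_output=True, text=True, check=True)
print(result.stdout)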

Keep in mind that I/O isn't necessarily restricted to the disk; it could be the I/O card or even the bus on the logic board. Apple had this issue in their late-model G4 systems, where the CPU and RAM were faster than the bus between them. Two gigabit Ethernet cards can theoretically saturate a typical PCI bus.
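
That last claim is easy to sanity-check with a little arithmetic (the PCI figure below is the theoretical ceiling for plain 32-bit/33MHz PCI, which the whole bus shares):

# Two gigabit NICs versus the theoretical ceiling of plain 32-bit/33MHz PCI.
gigabit_nic = 1000 * 1000 * 1000 / 8.0      # ~125 MB/sec per card, one direction
two_nics = 2 * gigabit_nic                  # ~250 MB/sec
pci_bus = 32 * 33.33 * 1000 * 1000 / 8.0    # ~133 MB/sec for the whole bus

print("two gigabit cards: ~%.0f MB/sec" % (two_nics / 1e6))
print("32-bit/33MHz PCI:  ~%.0f MB/sec" % (pci_bus / 1e6))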


On Jan 4, 2005, at 1:47 PM, Bruce Dembecki wrote:

Hi! We're setting up a Solaris system which is behaving poorly, with very slow
disk performance. The nice folks at Sun have suggested a mismatch between
the block size MySQL is writing to disk and the block size the Operating
System is writing to disk.


While we can see the logic in the argument, I have over the years not been
able to find anywhere that this information was available or controllable, so
my assumption has been that mysql gives the data to the operating system to
write to disk and doesn't deal with block sizes.


So here's the questions...

In MySQL 4.0.23 or 4.1.8 for Solaris SPARC 64-bit, what is the block size
MySQL uses for writes to disk, and is there a way to control that?


Best Regards, Bruce





--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577

