I don't think there would be any benefit to using InnoDB, at least not from a transaction-support standpoint, since you aren't using transactions.

After your nightly optimize/repair, are you also doing a flush? That may help release the memory MySQL is holding afterward.
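As a sketch of what that maintenance SQL could look like; the table names and the mysql invocation here are placeholders, not taken from your setup:

```shell
# Hypothetical nightly maintenance SQL (table names are made up).
# FLUSH TABLES closes the table handles that OPTIMIZE/REPAIR touched,
# so MySQL can release buffers instead of holding them for hours.
SQL="OPTIMIZE TABLE users, orders; FLUSH TABLES;"
echo "$SQL"   # in practice: echo "$SQL" | mysql -u root -p dbname
```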

I haven't seen any direct comparisons between HFS+ and the file systems supported by Linux. I would expect Linux to be faster, since Linux tends to be geared toward performance first rather than usability. But you shouldn't rely on disk caching alone: the disks still need to be read in order to fill the cache, so you want the best disk performance you can get.

Based on your other email, it looks like you are using individual disks to store your data. I understand what you were trying to do by separating your data onto different disks, but you would get far better performance by combining the disks in a RAID, even a software RAID.
With software RAID you need to choose between mirroring and striping. Both will give you better read speeds, but mirroring will slow down writes. If you stripe, the more drives you use the better the performance, although I wouldn't put more than 4 drives on a single SCSI card.
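As a rough illustration of why striping helps reads; the per-disk number below is an assumption for the sake of the arithmetic, not a measurement:

```shell
# Best-case sequential read throughput scales with stripe width.
PER_DISK_MBS=40   # assumed sustained MB/s for a single SCSI disk
DISKS=4           # number of stripe members
STRIPE_MBS=$(( PER_DISK_MBS * DISKS ))
echo "${STRIPE_MBS} MB/s"   # 160 MB/s best case; real-world will be lower
```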
I think you can use Apple's RAID software for your SCSI disks, but SoftRAID (softraid.com) would give you more options. Moving to RAID should improve things across the board and give you the best bang for your buck (SoftRAID is $129). Personally, I think you should always use some form of RAID on servers.



On Jan 26, 2004, at 5:41 PM, Adam Goldstein wrote:


I have added these settings to my newer my.cnf, including replacing key_buffer=1600M with 768M... It was a touch late today to see whether it has a big effect during the heavy-load period (~3am to 4pm EST; the site has mostly European users).
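For reference, a minimal my.cnf fragment with the settings being discussed; only the 768M key_buffer value comes from this thread, the query_cache_size value is an assumed example:

```ini
# Hypothetical my.cnf fragment; only key_buffer = 768M is from this thread.
[mysqld]
key_buffer       = 768M
query_cache_size = 64M   # assumed value; tune to your workload
```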

I did not have any of these settings explicitly set in my latest my.cnf trials, except key_buffer, and I omitted the InnoDB ones, as we are not (currently) using InnoDB... Would there be any benefit? Transactions are not a priority, so says my client, so he does not use them.
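One way to judge whether key_buffer is sized well is the key cache miss rate derived from the SHOW STATUS counters Key_read_requests and Key_reads; the counter values below are made-up samples just to show the arithmetic:

```shell
# Sample values standing in for: SHOW STATUS LIKE 'Key_read%';
KEY_READ_REQUESTS=1000000   # Key_read_requests: index block reads requested
KEY_READS=25000             # Key_reads: requests that had to hit disk
MISS_PCT=$(( 100 * KEY_READS / KEY_READ_REQUESTS ))
echo "miss rate: ${MISS_PCT}%"   # integer math: 2% here; low single digits is good
```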

I see the query_cache_size is rather large here, but I am unsure what the default size would be. I do not know yet how large I can/should make either setting, but it does appear to work without malloc/memory errors appearing in the log. Note: while it bitched in the logs about the malloc setting, the server did not crash but kept running, obviously with an undetermined amount of cache.

I cannot seem to find any good way to know how much RAM (cache/buffer/other) MySQL uses, as the top output from OS X is not very appealing... not that Linux top tells me much more either. On average, on the old system (all on one box) MySQL was said to be using about 350MB in top... except after the nightly optimize/repair script, which left it using 1.2G of RAM for hours and made all queries rather slow.
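A back-of-envelope way to bound MySQL's memory use from the config is global buffers plus per-connection buffers times max_connections; every value below is an assumption for illustration, not an actual setting from this thread:

```shell
# Rough ceiling = global buffers + per-thread buffers * max_connections.
KEY_BUFFER=768     # MB, key_buffer (global)
QUERY_CACHE=64     # MB, query_cache_size (global, assumed)
MAX_CONN=100       # max_connections (assumed)
PER_THREAD=3       # MB per connection: sort/read/tmp buffers, roughly (assumed)
TOTAL=$(( KEY_BUFFER + QUERY_CACHE + MAX_CONN * PER_THREAD ))
echo "${TOTAL} MB"   # 1132 MB with these numbers
```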

Also, a more G5-specific question: since MySQL is supposed to gain much from OS disk caching, how does OS X/HFS+ compare to other *nix filesystems, such as Linux 2.4 with ReiserFS?

--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]


