I have had Linux on soft-RAID5 (6x18G, 8x9G, 4x18G) systems, and the load was even higher... The explanation for this could be that at high I/O rates the data is not 100% synced across the spindles, so smaller files (i.e., files smaller than the chunk size on each physical disk) must wait to be passed under the heads on all the disks... Larger chunk sizes may help with this, but I'm not sure. A large RAM buffer and read-ahead on a dedicated RAID system is more likely to work in that case, but that would require either yet another fileserver (fairly expensive) or a dedicated hardware RAID server (much more expensive), like the Xraid, which did not produce any real difference in the MySQL bench results previously posted here. In fact, going by those simple benchmarks alone, my box already beat the Xserve/Xraid system in most of the tests.
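(For what it's worth, on the Linux side the read-ahead can at least be raised per device; whether that actually helps this workload is exactly the part I'm unsure about. A minimal sketch, assuming a hypothetical /dev/md0 array:)

    # Raise read-ahead on the array to 1024 sectors (~512KB); device name is hypothetical
    blockdev --setra 1024 /dev/md0
    # Check the value currently in effect
    blockdev --getra /dev/md0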

Of course, the validity or relevance of those tests to a real-world, heavily used server may be in question. :) I am also having trouble finding comparable benchmark data for other good power systems (i.e., I would like to see how this stacks up against an 8+GB dual/quad Xeon or SPARC, etc.).

I will ensure his nightly optimize/repair scripts include the flush.

But none of this yet explains why testing from the Linux box against the remote G5 MySQL server (over only a 100Mbit switch) gives better results than testing directly on the server.

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net


On Jan 27, 2004, at 9:45 AM, Brent Baisley wrote:


I don't think there would be any benefit to using InnoDB, at least not from a transaction-support standpoint.

After your nightly optimize/repair, are you also doing a flush? That may help.
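A minimal sketch of what that nightly pass could look like, assuming mysqlcheck and mysqladmin are in the path and credentials come from the usual option files (adjust for your actual script):

    #!/bin/sh
    # Optimize all tables in all databases (MyISAM tables are rebuilt/defragmented)
    mysqlcheck --all-databases --optimize
    # Flush the table cache so the rebuilt tables are closed and reopened cleanly
    mysqladmin flush-tables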

I haven't seen any direct comparisons between HFS+ and the file systems supported by Linux. I would expect Linux to be faster, since Linux tends to be geared towards performance first rather than usability. But you shouldn't rely on disk caching alone. The disks still need to be read in order to fill the cache, so you want to get the best disk performance you can. Based on your other email, it looks like you are using individual disks for storing your data. While I understand what you were trying to do by separating your data onto different disks, you would get far better performance by combining your disks into a RAID, even a software RAID.
If you are using software-based RAID, you need to choose between mirroring and striping. Both will give you better read speeds; mirroring will slow down writes. If you are striping, the more drives you use the better the performance you'll get, although I wouldn't put more than 4 drives on a single SCSI card.
I think you can use Apple's RAID software for your SCSI disks, but SoftRAID (softraid.com) would give you more options. Moving to RAID should improve things across the board and will give you the best bang for your buck (SoftRAID is $129). Personally, I think you should always use some form of RAID on all servers.
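If the striping ends up being done on the Linux box instead, a rough sketch with mdadm would look something like the following; the device names, chunk size, filesystem, and mount point are placeholders, not a recommendation for the actual hardware:

    # Stripe four disks into one RAID-0 array (hypothetical device names)
    mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=64 \
          /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
    # Put a filesystem on the array and mount it where the MySQL data lives
    mkfs -t reiserfs /dev/md0
    mount /dev/md0 /var/lib/mysql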



On Jan 26, 2004, at 5:41 PM, Adam Goldstein wrote:


I have added these settings to my newer my.cnf, including replacing the key_buffer=1600M with this 768M... It was a touch late today to see whether it has a big effect during the heavy load period (~3am to 4pm EST; the site has mostly European users).
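Roughly, the relevant part of the [mysqld] section now looks like the sketch below; only the 768M key_buffer is a value actually mentioned above, the rest are placeholders to show the format rather than the real settings in use:

    [mysqld]
    key_buffer        = 768M   # was 1600M
    # the lines below are placeholders, not the actual values in use
    table_cache       = 512
    sort_buffer_size  = 2M
    query_cache_size  = 32M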

I did not have any of these settings explicitly set in my latest my.cnf trials, except key_buffer, and I omitted the InnoDB ones, as we are not (currently) using InnoDB... would there be any benefit? Transactions are not a priority, so says my client, so he does not use them.

I see the query_cache_size is rather large here, but I am unsure what the default size would be. I do not know yet how large I can/should make either setting, but it does appear to work without malloc/memory errors appearing in the log. Note: while it bitched in the logs about the malloc setting, the server did not crash but kept running, obviously with an undetermined amount of cache.

I cannot seem to find any good way to tell how much RAM (cache/buffer/other) MySQL is using, as the top output from OS X is not very appealing... not that Linux top tells me much more either. On average, on the old system (all on one box), MySQL was said to be using about 350MB in top... except after the nightly optimize/repair script, which left it using 1.2G of RAM for hours and made all queries rather slow.
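One rough way to see how much of the configured buffers MySQL is actually using is the standard SHOW counters (exact counter names vary a little between versions), for example:

    # configured key buffer size, in bytes
    mysql -e "SHOW VARIABLES LIKE 'key_buffer_size'"
    # key cache blocks currently in use (roughly 1KB each on 3.23/4.0)
    mysql -e "SHOW STATUS LIKE 'Key_blocks_used'"
    # memory still free in the query cache, if it is enabled
    mysql -e "SHOW STATUS LIKE 'Qcache_free_memory'"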

Also, a more G5-specific question: since MySQL is supposed to gain a lot from OS disk caching, how does OS X/HFS+ compare to other *nix filesystems, such as Linux 2.4 with ReiserFS?

--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577

