RAID 5 is just as common in software as any other RAID level, and on my other boxes it presents no problem at all... I have seen excellent test results for software RAID 5, and many contend that software RAID 5 on a high-powered system is faster than hardware RAID 5 using the same disks -- I haven't seen proof of this, however. I have also seen the CPUs used on many hardware RAID 5 cards, and they are surprisingly slow (averaging around 33 MHz).

The record sizes in our database are completely random, so a single lookup would likely require multiple disk reads, each of which may then have to wait on the spindles, etc. (I am not aware of anyone syncing spindles anymore, or whether it would have any effect if we did).
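To put rough numbers on that (and on the stripe-size point Brent makes below), here is a quick sketch; the chunk size and record sizes are made-up placeholders, not measurements from our box:

    # How many chunk-sized reads one record needs, given where it
    # happens to start. CHUNK and the sample sizes are assumptions.
    CHUNK = 64 * 1024  # 64 KB stripe unit (chunk) per disk

    def reads_for_record(offset, record_size, chunk=CHUNK):
        """Count the chunk reads needed to cover one record."""
        first = offset // chunk
        last = (offset + record_size - 1) // chunk
        return last - first + 1

    # A 4 KB record aligned on a chunk boundary costs 1 read; the same
    # record straddling a boundary costs 2, i.e. two spindles to wait on.
    print(reads_for_record(0, 4096))             # -> 1
    print(reads_for_record(CHUNK - 2048, 4096))  # -> 2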

We are almost ready to switch to Gbit Ethernet; however, I am unsure it will help either... According to my graphs, internal traffic (to/from the mysql/G5 server) averages only ~1.3 Mbit/s and 1.0 Mbit/s, with peaks of 5.7/5.0 Mbit/s (I don't know if the graph below will make it through the list...). The graph is from the Apache/PHP server.

[graph: internal traffic to/from the mysql/G5 server, as seen from the Apache/PHP server]
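For reference, a minimal sketch of the sort of sampling behind those numbers, assuming a Linux box and the standard /proc/net/dev layout; "eth0" stands in for whatever interface faces the mysql server:

    import time

    def rx_tx_bytes(iface):
        """Return (rx_bytes, tx_bytes) for iface from /proc/net/dev."""
        for line in open('/proc/net/dev'):
            if line.strip().startswith(iface + ':'):
                fields = line.split(':', 1)[1].split()
                return int(fields[0]), int(fields[8])
        raise ValueError('interface not found: ' + iface)

    rx1, tx1 = rx_tx_bytes('eth0')
    time.sleep(10)  # sample window in seconds
    rx2, tx2 = rx_tx_bytes('eth0')
    print('rx %.2f Mbit/s, tx %.2f Mbit/s' % (
        (rx2 - rx1) * 8 / 10.0 / 1e6,
        (tx2 - tx1) * 8 / 10.0 / 1e6))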
--
Adam Goldstein
White Wolf Networks
http://whitewlf.net


On Jan 28, 2004, at 11:33 AM, Brent Baisley wrote:


The split setup may be faster because you don't have contention for resources. Depending on how much data is being moved over the network connection, switching to Gb Ethernet may speed things up further.

In a RAID, ideally the stripe size would match the record size in your database, so that one record equals one read. Stripe sizes that are too small require multiple reads per record; stripe sizes that are too large cause extraneous data to be read. Read-ahead often doesn't work well with databases, since the access pattern is totally random -- unless you are accessing the database in the same order the records were written.

Did you have a software-based RAID 5 setup on the Linux box? I had never heard of implementing RAID 5 in software, and I'm not sure what the CPU overhead would be, especially with 8 disks. So what exactly is your current setup (computers, disks, RAM, software, database locations, etc.)?



On Jan 27, 2004, at 10:48 PM, Adam Goldstein wrote:

I have had Linux on soft-RAID 5 (6x18G, 8x9G, 4x18G) systems, and the load was even higher... The explanation for this could be that at high I/O rates the data is not 100% synced across the spindles, so smaller files (i.e., files smaller than the chunk size on each physical disk) must wait to pass under the heads on all the disks. Larger chunk sizes may help this, but I'm not sure. A large RAM buffer with read-ahead on a dedicated RAID system is more likely to work in that case, but that would require either yet another file server (fairly expensive) or a dedicated hardware RAID server (much more expensive), like the Xraid -- which did not produce any real difference in the mysql bench results previously posted here. In fact, going by those simple benchmarks alone, my box already beat the Xserve/Xraid system in most of the tests.

Of course, the validity or relevance of those tests to a real-world, heavily used server may be in question. :) I am also having trouble finding comparable benchmark data for other high-powered systems (i.e., I would like to see how this stacks up against an 8+GB dual/quad Xeon or SPARC, etc.).

I will make sure his nightly optimize/repair scripts include the flush.

But none of this yet explains why testing from the Linux box against the remote G5/mysql server (over only a 100 Mbit switch) gives better results than testing directly on the server.
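To quantify that gap, here is a minimal timing sketch one could run from each box, assuming Python with the MySQLdb module; the host names, credentials, table, and query are placeholders, not our real setup:

    import time
    import MySQLdb  # assumes the MySQL-python module is installed

    def secs_per_query(host, sql, n=100):
        """Average wall-clock time for n runs of sql against host."""
        conn = MySQLdb.connect(host=host, user='bench',
                               passwd='secret', db='test')
        cur = conn.cursor()
        start = time.time()
        for _ in range(n):
            cur.execute(sql)
            cur.fetchall()
        elapsed = time.time() - start
        conn.close()
        return elapsed / n

    # 'g5.example.com' is a stand-in for the remote G5/mysql server.
    for host in ('localhost', 'g5.example.com'):
        print('%s: %.4f s/query'
              % (host, secs_per_query(host, 'SELECT * FROM t LIMIT 100')))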

--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577

