And lo, Geof Goodrum saith unto me:
> 
> On Wed, 14 Oct 1998 [EMAIL PROTECTED] wrote:
> 
> > It seems like Red Hat 5.1's kernel (at least, the updated one I have)
> > already contains older RAID patches; I get nearly-endless:
> > 
> > /usr/src/linux/include/linux/modules/md.ver:13: warning: `md_update_sb' redefined
> > /usr/src/linux/include/linux/modules/ksyms.ver:224: warning: this is the location of the previous definition
> > 
> > when I make modules (several symbols included).  Is this a sign that I needed
> > to somehow unpatch the previous RAID stuff, or should I remove the symbols
> > from md.ver or ksyms.ver?
> 
> I just finished upgrading my RedHat 5.0 setup with 0.41 RAID arrays to
> RedHat 5.1 with the new 0.90 alpha RAID and the raid0145-19981005-C-2.0.35
> patch.
> 
> I considered patching the kernel-source-2.0.35-2 RPM, but decided to stick
> with a plain vanilla kernel.  You do need to add the following line to
> md.c after patching the kernel:
Given that that wasn't the only problem, I'm glad I went ahead and ignored
the warnings.  It works fine, except that it spammed my syslog (with read
errors on /dev/md0) and then crashed when I tried to mke2fs the RAID array
before the reconstruction was done (NumLock still worked, but Ctrl-Alt-Del
didn't, nor did the telnet session I'd run mke2fs from).  Then, when I
rebooted, *two* of the disks were deemed "bad" because they were unclean,
and the array wouldn't reconstruct.
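In hindsight, the safe sequence is to let the reconstruction finish before
running mke2fs.  A minimal sketch of the idea (the raid_busy helper and its
polling loop are my own illustration, not part of the raid tools; the 0.90
md driver reports rebuild progress in /proc/mdstat):

```shell
#!/bin/sh
# Sketch: poll /proc/mdstat and only mke2fs once reconstruction is done.
# MDSTAT is overridable so the check can be exercised against a sample file.
MDSTAT=${MDSTAT:-/proc/mdstat}

raid_busy() {
    # An in-progress rebuild shows up as "resync" (or "recovery") in mdstat.
    grep -qE 'resync|recovery' "$MDSTAT" 2>/dev/null
}

while raid_busy; do
    sleep 30        # keep waiting; mke2fs during the rebuild hung my box
done
# Array is clean now; safe to make the filesystem:
# mke2fs /dev/md0
```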

So I redid mkraid, let it finish the reconstruction, and then did the mke2fs
on the 5-disk array; between the RAID processes and mke2fs, it took 100%
of the CPU time.  Here are some bonnie observations, taken after loading the
same data onto the array as was there for the pre-0.90 RAID build:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
RAID5.5   100  3136 42.8  3453 32.6  1863 30.7  4469 49.9  6073 40.9 150.0 10.7
RAID5.5   100  3266 43.6  3423 30.8  1818 31.9  4548 52.0  6067 39.9 147.0 10.6
RAID5.5   256  3048 43.1  3261 31.4  1852 31.3  5195 54.7  6155 41.0 103.0  8.7
TM1280S   100  2989 44.1  2763 26.2  1177 19.4  2294 23.3  2707 13.2  43.5  3.6
TM1280S   256  2500 37.0  2640 25.8  1151 18.5  2500 24.3  2710 13.5  25.3  2.9
NEWRAID5 1024  3610 50.8  4140 41.1  2489 43.4  7141 80.8  8482 67.5  66.0  5.7

Throughput has improved significantly with the new RAID patches, but at a
serious CPU cost, and that's despite the P2-MMX checksum code benchmarking
at 550M/s.  I'm waiting for a bonnie run over 100baseTX NFS to see whether
that fares any better.  Note this run also used a chunk size of 16, versus
the size 32 blocks in the old RAID config file...
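For reference, the chunk size is one of the per-array settings in /etc/raidtab
with the new raid tools.  A sketch of the relevant stanza (device names, disk
count, and the other options shown are illustrative, not my actual config):

```
raiddev /dev/md0
    raid-level              5
    nr-raid-disks           5
    persistent-superblock   1
    chunk-size              16
    device                  /dev/sda1
    raid-disk               0
    ...
```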

        Keith 


-- 
"The avalanche has already started; |Linux: http://www.linuxhq.com     |"Zooty,
it is too late for the pebbles to   |KDE:   http://www.kde.org         | zoot
vote." Kosh, "Believers", Babylon 5 |Keith: [EMAIL PROTECTED]      | zoot!"
 www.midwinter.com/lurk/lurker.html |http://www.enteract.com/~kwrohrer | --Rebo
