>                   input      output     random
>                 MB/s  %cpu MB/s  %cpu   /s   %cpu
> 
> 1drive-jbod     19.45 16.3 17.99 16.4 153.90 4.0
> raid0           48.49 42.1 25.48 23.1 431.00 7.4
> raid01          53.23 41.4 21.22 19.0 313.10 9.5
> raid5           52.47 39.3 21.35 19.8 365.60 11.2
> raid5-degraded  20.23 15.5 21.86 20.3 277.90 7.8

So in most cases you read data much faster than you wrote it?
Or am I misinterpreting your table?

> I hacked my copy of bonnie in two ways: to skip the per-char tests,
> which I'm not interested in, and to call fdatasync() after each write
> test (but that change didn't really make much difference).  I use
> an awk script to pick the best results from multiple runs, and to
> convert KB/s to MB/s.

Sounds quite useful :) Are you willing to put it somewhere?
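
In the meantime, here's a rough Python sketch of the same idea (best
value per column across runs, KB/s shown as MB/s).  It is not the
actual awk script; the whitespace-separated layout and which column
positions hold KB/s figures are assumptions you'd have to adjust to
your bonnie output:

#!/usr/bin/env python
"""Keep the best value per column across several bonnie runs.

Sketch only: assumes one whitespace-separated line of numbers per run
on stdin (machine name already stripped), and that KBPS_COLS are the
KB/s columns to report as MB/s.
"""
import sys

KBPS_COLS = {1, 3, 5}          # assumed 0-based positions of KB/s columns

best = []
for line in sys.stdin:
    fields = line.split()
    if not fields:
        continue
    try:
        values = [float(f) for f in fields]
    except ValueError:
        continue               # skip headers or other non-numeric lines
    if not best:
        best = values
    else:                      # column-wise "best of N runs"
        best = [max(a, b) for a, b in zip(best, values)]

print("  ".join(
    "%.2f" % (v / 1024.0) if i in KBPS_COLS else "%.1f" % v
    for i, v in enumerate(best)
))

Run it as e.g. "python best_run.py < runs.txt" where runs.txt holds one
result line per bonnie run (the file name is just an example).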

> I don't know why your sw over hw raid numbers are so poor by comparison.
> Did you try plain hw raid, as above?

My current 40MB/sec (s/w RAID 5 over h/w RAID 0) and 43MB/sec (s/w RAID 0
over h/w RAID 0) numbers are at least getting closer, and I hope to keep
digging into the SCSI drive configuration for better performance.

If my DAC1164P didn't have a bad channel on it, I'd be testing
over 3 channels, which should help performance immensely based
on my previous results.

> My hypothesis is that the mylex itself, or the kernel + driver,
> is limited in the number of requests/s that can be handled.

I don't see any way to tune that in the DAC960 driver ... any ideas?
I've flashed the card with all the latest software from mylex.com.

DAC960: ***** DAC960 RAID Driver Version 2.2.2 of 3 July 1999 *****
DAC960: Copyright 1998-1999 by Leonard N. Zubkoff <[EMAIL PROTECTED]>
DAC960#0: Configuring Mylex DAC1164P PCI RAID Controller
DAC960#0:   Firmware Version: 5.07-0-79, Channels: 3, Memory Size: 32MB
DAC960#0:   PCI Bus: 11, Device: 8, Function: 0, I/O Address: Unassigned
DAC960#0:   PCI Address: 0xFE010000 mapped at 0xC0800000, IRQ Channel: 9
DAC960#0:   Controller Queue Depth: 128, Maximum Blocks per Command: 128
DAC960#0:   Driver Queue Depth: 127, Maximum Scatter/Gather Segments: 33
DAC960#0:   Stripe Size: 8KB, Segment Size: 8KB, BIOS Geometry: 128/32

Changing the stripe and segment sizes hasn't made any difference in performance for me.
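
To put rough numbers on that requests/s hypothesis, here's a
back-of-the-envelope Python sketch.  The assumption that every request
is a single 8KB segment is entirely mine (in practice requests get
merged into much larger I/Os), so treat it as an upper bound on the
request rate, not a measurement:

# Requests/s implied by a given throughput if every request were one
# 8KB segment (simplifying assumption -- adjacent requests are normally
# merged into larger I/Os before they reach the controller).

SEGMENT_KB = 8   # segment size reported in the DAC960 boot messages above

def requests_per_sec(mb_per_sec, request_kb=SEGMENT_KB):
    """Requests/s needed to sustain mb_per_sec at a fixed request size."""
    return mb_per_sec * 1024.0 / request_kb

for label, mbps in [("s/w 5 over h/w 0", 40.0), ("s/w 0 over h/w 0", 43.0)]:
    print("%-18s ~%5.0f requests/s at %dKB each"
          % (label, requests_per_sec(mbps), SEGMENT_KB))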

As soon as my shipment of more 1164Ps comes in, I'll be spreading the
drives across many more channels.  My 20-drives-over-2-channels numbers
equalling my 10-drives-over-2-channels numbers certainly suggest I should :)

I'm still trying to figure out the *HUGE* difference in performance
I saw between s/w raid on partitions vs. s/w raid on whole drives :)

Thanks,

James
-- 
Miscellaneous Engineer --- IBM Netfinity Performance Development
