On Mon, Apr 24, 2000 at 09:13:20PM -0400, Scott M. Ransom wrote:
> 
> Then I moved back to kernel 2.2.15-pre18 with the RAID and IDE patches
> and here are my results:
> 
>   RAID0 on Promise Card 2.2.15-pre18 (1200MB test)
> ----------------------------------------------------------
>  -------Sequential Output-------- ---Sequential Input-- --Random--
>  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
>  K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
>   6833 99.2 42532 44.4 18397 42.2  7227 98.3 47754 33.0 182.8  1.5
>             *****                            *****
> 
> When doing _actual_ work (I/O bound reads on huge data sets), I often
> see sustained read performance as high as 50MB/s.
> 
> Tests on the individual drives show 28+ MB/s.

   What stripe size, CPU, and memory are used here?  I have a similar setup
(2.2.15pre19, IDE+RAID patches): four IDE Deskstars, each a master on its own
channel, split across the onboard VIA and an offboard Promise controller,
with a K6-2/500 and 256MB of RAM.  I see 22 MB/s from a single drive but a
maximum of 28 MB/s from the array no matter what stripe size I try (a sketch
of my raidtab follows).  Running dd or hdparm -t on multiple drives
simultaneously appears to show complete contention between the separate
chains; the hdparm numbers are below the raidtab sketch.
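
(The raidtab driving the array looks roughly like this; the chunk-size value
and device names here are only illustrative, since I've tried a range of
chunk sizes:)

raiddev /dev/md0
        raid-level              0
        nr-raid-disks           4
        persistent-superblock   1
        chunk-size              64
        device                  /dev/hda1
        raid-disk               0
        device                  /dev/hdc1
        raid-disk               1
        device                  /dev/hde1
        raid-disk               2
        device                  /dev/hdg1
        raid-disk               3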

hdparm -t /dev/hde
19.16 MB/sec

( hdparm -t /dev/hde &) ; (hdparm -t /dev/hdg &)
11.43 MB/sec
10.47 MB/sec
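
(The same kind of parallel run extends to all four drives; something like the
following, where the device names are assumptions about which drives sit on
which channel:)

for d in hda hdc hde hdg ; do dd if=/dev/$d of=/dev/null bs=1024k count=256 & done ; wait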

   With all four drives running at once, throughput per drive drops below
7 MB/sec.  With RAID0 across all four drives I get 28 MB/sec according to
bonnie, versus 22 MB/sec on a single drive.  I had been attributing that to
the non-reentrant IDE driver in 2.2, but your results make me think there's
some other reason I don't see a linear speedup.  Is this a dual-CPU system,
perhaps?  Something unusual about the interrupt handling?  UDMA33 vs. UDMA66
(I'm using 40-conductor cables; perhaps I need the 80-conductor ones)?
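
(On the UDMA33/66 question, the negotiated mode should be visible with
something like the following; if I remember hdparm's output right, the
active mode is the one marked with a '*' on the "UDMA modes" line:)

hdparm -i /dev/hde | grep -i modes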
