> I believe you concluded that since 10+10 drives didn't get much
> better performance than 4+4 drives, the SCSI bus was being nearly
> saturated by 4 drives.
> 
> My experience is that 8 drives on 1 bus wasn't much worse than
> 3+3+2 drives on 3 busses.  On that basis, I concluded that
> SCSI bus saturation is not a problem for 8 drives on 1 bus,
> and that something else is limiting performance.
> 
> Do you have comparable results for 8 drives on one bus?

hw_raid0_4+4
     -------Sequential Output-------- ---Sequential Input-- --Random--
     -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
  MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
2047 20878 92.6 51166 40.8 21132 46.8 25705 94.6 49572 70.6 597.1  6.3

hw_raid0_8_on_1
     -------Sequential Output-------- ---Sequential Input-- --Random--
     -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
  MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
2047 21774 98.5 29975 24.0 13814 30.4 25131 92.2 27747 39.1 607.1  5.6

My block operations definitely got MUCH worse :)
I believe the only reason it wasn't a full 2x difference is that the
4 drives on each channel weren't fully saturating their channel; it
looks like it takes around 4.5 Cheetah-3's to saturate each 80MB/sec
channel.
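
Spelling that estimate out (the per-drive number is just 80 divided by
4.5, i.e. an assumed ~18MB/sec sustained per Cheetah, not something I
measured here):

   80 MB/sec channel / ~18 MB/sec per drive = ~4.5 drives per channel
    4 drives x ~18 MB/sec = ~72 MB/sec   (fits on one channel)
    8 drives x ~18 MB/sec = ~144 MB/sec  (well past one channel)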

Since I'm here... why is p5_mmx faster than pII_mmx?

   pII_mmx   :  1123.188 MB/sec 
   p5_mmx    :  1170.051 MB/sec 
   8regs     :   859.155 MB/sec 
   32regs    :   505.206 MB/sec 

Just curious... I'm not sure how the SIMD ops differ between the two.
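
For anyone else curious, those figures look like the output of the
kernel's RAID5 xor calibration, which times each candidate xor routine
over a couple of in-memory buffers and reports MB/sec.  Here's a rough
userspace sketch of that measurement -- not the kernel's code; the
buffer size, loop count, and the plain C loop are stand-ins for the
real 8regs/32regs/p5_mmx/pII_mmx routines:

/*
 * Time repeated XOR passes over a pair of buffers and report MB/sec,
 * roughly what the raid5 "measuring checksumming speed" test does.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

#define BUFSIZE (32 * 1024)     /* assumed: something that fits in cache */
#define LOOPS   20000           /* assumed: enough iterations to time */

static void xor_block(unsigned long *dst, const unsigned long *src,
                      size_t bytes)
{
        size_t i, words = bytes / sizeof(unsigned long);

        for (i = 0; i < words; i++)
                dst[i] ^= src[i];       /* one parity XOR pass */
}

int main(void)
{
        unsigned long *a = malloc(BUFSIZE);
        unsigned long *b = malloc(BUFSIZE);
        struct timeval t0, t1;
        double secs, mbytes;
        int i;

        memset(a, 0x5a, BUFSIZE);
        memset(b, 0xa5, BUFSIZE);

        gettimeofday(&t0, NULL);
        for (i = 0; i < LOOPS; i++)
                xor_block(a, b, BUFSIZE);
        gettimeofday(&t1, NULL);

        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        mbytes = (double)BUFSIZE * LOOPS / (1024.0 * 1024.0);
        printf("plain C xor : %8.3f MB/sec\n", mbytes / secs);

        free(a);
        free(b);
        return 0;
}

Since the test buffers are small and stay cache-warm, the spread between
the routines presumably reflects unrolling and instruction scheduling more
than anything else -- as far as I know both MMX variants use the same
64-bit pxor, just arranged differently.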

James
-- 
Miscellaneous Engineer --- IBM Netfinity Performance Development
