>   On an Intel architecture machine you'll never get more than about 80MB/s
>   regardless of the number of SCSI busses or the speed of the disks.  The
>   PCI bus becomes a bottleneck at this point.
> 
> Another consideration of course. But I think his problem was that he
> couldn't get any higher than 30MB/s, let alone 80. :)
>   
> What about 64-bit PCI? A lot of Intel, Compaq, Dell and Alpha boards
> have those slots, and IntraServer for one has 64-bit PCI SCSI
> controllers. Then there's the even rarer 66MHz PCI. Wonder how they
> would affect the benchmarks.

FWIW, the eXtremeRAID 1100 cards are 64-bit PCI only (as are the ServeRAID
cards in my previous testing).  Other testing I've done has shown many
situations where even our quad P6/200 machines (PC Server 704) could
sustain 40MB/sec over the 32-bit 33-MHz PCI bus, so I'm really hoping
that I can do better with the 64-bit 33-MHz bus :)
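
Back-of-the-envelope, the theoretical bus peaks (width times clock,
before any arbitration or protocol overhead) work out roughly to:

   32-bit x 33MHz PCI:  4 bytes x 33M transfers/sec  ~ 133 MB/s
   64-bit x 33MHz PCI:  8 bytes x 33M transfers/sec  ~ 266 MB/s
   64-bit x 66MHz PCI:  8 bytes x 66M transfers/sec  ~ 533 MB/s

so even allowing for real-world overhead, the wider bus should have
headroom well past what we've been seeing in 32-bit slots.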

>   I missed the start of this thread, so I don't know what RAID level you're
>   using.  I did some RAID-0 tests with the new Linux RAID code back in March
>   on a dual 450MHz Xeon box.  Throughput on a single LVD bus appears to peak
>   at about 55MB/s - you can get 90% of this with four 7,200RPM Barracudas.
>   With two LVD busses, write performance peaks at just over 70MB/s
>   (diminishing returns after six disks).

Could you describe the set-up?  When I switched to s/w raid0 over
h/w raid0 just for testing, my block write rate in bonnie only went
up to 43 MB/sec.  All the best performance has come with the smallest
chunk-size that makes sense (4k), which was a significant improvement
over 64k chunk-sizes.
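
For comparison, the raidtab for my s/w raid0 layer looks roughly like
this (spelling from memory; the two devices are the two h/w raid0 sets,
one per channel, and the 64k runs just change the chunk-size line):

   raiddev /dev/md0
       raid-level              0
       nr-raid-disks           2
       persistent-superblock   1
       chunk-size              4
       device                  /dev/rd/c0d0
       raid-disk               0
       device                  /dev/rd/c0d1
       raid-disk               1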

> which ties in with why James only sees 30MB/s - 10 drives per
> channel.

I'll try pulling 5 drives out of each drive enclosure, which will
leave 5 10k RPM Cheetah-3 LVD drives on each of the two LVD channels.
So the theory is that there's too much bus contention, and my numbers
should improve over 10 of the same drives on each of the two channels?
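
Rough numbers, with the per-drive sequential rate just a guess on my
part rather than a measurement:

   Ultra2 LVD channel:         ~80 MB/s max
   10 drives x ~15-20 MB/s:  ~150-200 MB/s offered  (about 2x oversubscribed)
    5 drives x ~15-20 MB/s:   ~75-100 MB/s offered  (roughly fills the bus)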

Out of sheer curiosity, I ran this one on the raw drives, not
using the partitions...
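
The sequence for each run is basically the same; a rough sketch, with
/mnt/test just standing in for whatever mount point I actually used:

   mkraid /dev/md0              # build the md device from /etc/raidtab
   mke2fs /dev/md0              # fresh ext2 filesystem each run
   mount /dev/md0 /mnt/test
   bonnie -d /mnt/test -s 2047  # 2047MB working set, matching the tables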

On the raw drives, s/w raid0 over 2 h/w raid0's (each channel separate)
     -------Sequential Output-------- ---Sequential Input-- --Random--
     -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
  MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
2047  8658 43.3 12563 16.5  3235 10.2  4598 18.6  4729  8.4 141.7  1.8

Afterwards, I umount, raidstop, confirm dead with /proc/mdstat and get:
Re-read table failed with error 16: Device or resource busy.

Since I didn't wanna go through a reboot to clear this, I simply added
the "p1" extensions to the raidtab entries, even though fdisk said:

Disk /dev/rd/c0d0p1 doesn't contain a valid partition table
Disk /dev/rd/c0d1p1 doesn't contain a valid partition table

And it actually let me mkraid that and mdstat showed it active!
mke2fs and bonnie later, I get these results:

On partitions(?), s/w raid0 over 2 h/w raid0's (each channel separate)
     -------Sequential Output-------- ---Sequential Input-- --Random--
     -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
  MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
2047 22127 98.9 58031 46.4 21058 47.0 23962 89.2 43068 62.1 648.0  7.8

This represents the same block-write rate I was getting with 10 drives
on each channel, so I'd have to agree that the channels are definitely
in the way.

Any ideas on the bizarre results with raid on the raw drive block devs?
For that matter, mkraid letting me raid partitions that fdisk said
weren't even there?

Thanks!

James Manning
-- 
Miscellaneous Engineer --- IBM Netfinity Performance Development
