I disagree here.  I have a PowerEdge 6300 with 4x36GB drives in a
RAID-5 array using the MegaRAID card.  For testing, I made this a
3x36 RAID-5 array with the 4th drive set up as a hot spare.  I copied
a bunch of files to this thing, pulled one of the three drives out while
the system was up, and watched.  It started beeping.  I could still
access all the files.  I hit the power button, and then powered back
up.  The beeping continued.  I continued to be able to access the drive
(configured as one huge partition).  After about 45 minutes, the beeping
stopped.  I powered down again, went into the RAID BIOS, checked the
status, and found my hot spare was now re-mapped as the 'missing' drive.
I plugged in the old drive, made it the hot spare, booted, and
everything still ran fine.  No errors.  No hangs.

This card appears to have 128MB of DRAM, and so far, it seems to be
very fast...

I don't want my CPUs busy chugging away computing parity and repairing
the data when a drive goes down.  I'd rather let the hardware controller
handle that.

Just an opinion, but for me hardware RAID seems to work fine.


Robert Hyatt                    Computer and Information Sciences
[EMAIL PROTECTED]               University of Alabama at Birmingham
(205) 934-2213                  115A Campbell Hall, UAB Station 
(205) 934-5473 FAX              Birmingham, AL 35294-1170

On Tue, 2 May 2000, Ard van Breemen wrote:

> On Mon, May 01, 2000 at 03:23:42PM -0600, Anders Engle wrote:
> > We just got a PowerEdge 4300 from Dell. It came with the PERC2/SC RAID
> > card, and that has to be THE slowest RAID card ever. I've tried updating
> > to the newest drivers and so forth and it was still pretty bad. So I tried
> > a software RAID solution using the on-board controller. Now, that kicked
> > the PERC card's butt. However, I'm still thinking that a hardware solution
> > would be preferable in this case.
> > I've read what I could about hardware RAID cards that support Linux, but
> > some real-life experience would be invaluable. So does anyone dare to
> > recommend a card? Price is less of a concern than performance.
> In my experience hardware RAID is always significantly slower than software
> RAID.
> Imagine a quad Xeon sitting idle because the RAID can only handle about
> 7MB/s with 4 disks in RAID 5.
> Now imagine that software RAID on a single processor alone (PII-450)
> achieves a throughput of at least 20MB/s read/write with PLENTY of CPU to
> spare!
> (Ehh, I imagine I just loaded the wrong SCSI driver, one that had a
> bottleneck at 20MB/s or so?)
> 
> Besides the difference in speed: some brands of RAID controllers (I mean
> those cards for SCSI) are really in alpha or something, looking at the
> headaches they are causing. Locking up, a bad (eh, actually nonexistent)
> scripting interface, and most of all: when you need them (bad disk), they
> fail to boot because of that bad disk... Still these are sold as the best
> RAID controllers ever...
> 
> No, my choice is almost definite: software RAID can easily beat hardware
> RAID, and there is the source code... (which could have been really helpful
> in the case of the hardware RAID)...
> --
>  intel1: 7:59am up 9:22, 1 user, load average: 0.00, 0.00, 0.00

-
Linux SMP list: FIRST see FAQ at http://www.irisa.fr/prive/dmentre/smp-howto/
To Unsubscribe: send "unsubscribe linux-smp" to [EMAIL PROTECTED]
