Can anyone relate their experiences with IDE RAID in Linux?  I'm thinking of
getting a Promise IDE RAID card.

The NT people seem to abhor the idea of software RAID or IDE RAID.  If SCSI
software RAID is so good, maybe it's not worth buying the $350 SCSI RAID
card.  On the other hand, if IDE RAID works equally well, I'd rather spend
$700 on IDE than $4000 on SCSI.



-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of D. Stark - eSN
Sent: Wednesday, January 24, 2001 4:53 AM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: RE: [expert] RAID


First off, any SCSI card will do, though some won't do well with hot swap.
Look into each one separately and make your decision.

Second, under Linux, Mylex cards are the BOMB. They've had some availability
issues lately, though. We use DPT cards, but since Adaptec bought DPT, we
aren't expecting any more kernel updates to their software. So we'll
probably go Mylex for any new machines, and for the current boxen when the
time is right to upgrade.

We use software RAID here in our shop on a good number of machines as well,
mainly the mirroring RAID 1 variety. This is with RH 6.2, but there
shouldn't be many differences, if any, between RH and Mdk.

It's almost scary how well it works. After using the RedHat GUI installer to
do the basic setup, we were more or less done. It just worked. This was on
an HP LPR, but we've had good success with Dell 2450s as well.

We tried some very NASTY things with it. We pulled one of the two drives
out while it was running. The kernel spit out some ugly warnings all over
the screen (mainly SCSI bus errors), but the machine continued to function
perfectly. We shut the machine down, pulled /dev/sda, and it booted off of
sdb like nothing was wrong. Then we took an identical, unformatted drive and
tossed it in hot. I created the proper fdisk partitions and used the hotadd
commands to rebuild the array. We shut down the machine, pulled the original
sdb, and booted off the freshly created drive. Again, it worked.
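
For the archives, the rebuild step went roughly like this. This is a sketch
with raidtools on a stock RH 6.2 kernel; the device names and md number are
examples from our box, not gospel:

    # partition the new drive to match the survivor; inside fdisk, set
    # the partition type to "fd" (Linux raid autodetect)
    fdisk /dev/sdb

    # hot-add the new partition to the degraded mirror
    raidhotadd /dev/md0 /dev/sdb1

    # watch the resync progress
    cat /proc/mdstat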

Setup is important. Don't make one huge RAID partition. Make a bunch of
small /dev/md devices, one for each partition. You'll have less chance of
data corruption should one drive or the other go down in some strange way.
But then, that same advice applies to *any* server implementation.
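
To make that concrete, an /etc/raidtab for that kind of layout looks
something like the following. Treat it as a sketch; the layout (md0 for /,
md1 for /var) is just an example:

    # /dev/md0 -- mirrored pair for /
    raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        chunk-size              4
        persistent-superblock   1
        device                  /dev/sda1
        raid-disk               0
        device                  /dev/sdb1
        raid-disk               1

    # /dev/md1 -- mirrored pair for /var
    raiddev /dev/md1
        raid-level              1
        nr-raid-disks           2
        chunk-size              4
        persistent-superblock   1
        device                  /dev/sda2
        raid-disk               0
        device                  /dev/sdb2
        raid-disk               1

mkraid /dev/md0 (and md1) builds the arrays; after that, each md device gets
its own mke2fs and fstab entry, same as a plain partition.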

The long and short of it is, the software RAID for SCSI is VERY mature. If
you recompile your kernel, there are certain things you NEED to have
compiled in, and certain things that NEED to be in the initial ramdisk. Just
read the docs that come with the package. I will lose no sleep at night
because of the software RAID running at the office.
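
For what it's worth, the things I mean are along these lines. The option
names are from the 2.2 kernel config and the mkinitrd flag is RedHat's, so
take this as a sketch, not a checklist; the kernel version string is just an
example:

    # in the kernel config (Block devices section), if you mirror root,
    # these want to be built in, not modules:
    #   CONFIG_BLK_DEV_MD=y
    #   CONFIG_MD_RAID1=y

    # if raid1 is built as a module instead, it has to land in the
    # initial ramdisk so the root mirror can come up at boot:
    mkinitrd --with=raid1 /boot/initrd-2.2.17.img 2.2.17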

This is by no means an endorsement on either my behalf or my employer's. My
advice is to set up software RAID on the machine before it enters production
and play with it as we did. Find out how fault tolerant it is, and what
you'll need to do to recover.

Derek Stark
IT / Linux Admin
eSupportNow
xt 8952

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Homer Shimpsian
Sent: Tuesday, January 23, 2001 8:36 PM
To: [EMAIL PROTECTED]
Subject: [expert] RAID


Can someone tell me about their RAID experiences in Linux?

How would hot swapping work?  Do you need specific software with a SCSI RAID
controller to handle this?

Can anyone recommend a SCSI RAID controller for use with Linux Mandrake 7.1?


Thanks for your advice.  HA, load balancing, redundancy, fail-over... oh yeah




