> > If the bad drive is put in by itself, after a while the disk is
> > failed and it tries to boot by floppy.
>
> Does LILO ever appear? Or does the BIOS fall back to asking for a
> floppy disk? If "LILO Loading Linux..." never appears, then your HD is
> never going to make it as a root partition holder.
Nope, LILO never gets loaded. Because hda is not working, the BIOS gets
stuck trying to access it continually. I can hear it grinding away, but
nothing happens.
>
> > cable btw. The BIOS had the usual settings allowing me to set the
> > boot order (floppy first, CD-ROM next, hard disk 0, then network; no,
> > I can't put hard disk 1, I wish I could), and finally had "Boot other
> > devices" set to yes.
>
> What would happen if you plug the faulty drive in as the second HD
> instead of the first one, so that LILO boots?
>
Yep... if you plug the faulty drive in as hdc (you know what I mean) and
the working one in as hda, then it boots.
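For reference, recent LILO versions (22.x) can write the boot record to
both halves of a mirror so either disk is bootable on its own. A minimal
lilo.conf sketch, assuming root on a two-disk md RAID-1 at /dev/md0
(device names taken from this thread, kernel path assumed):

```
# Sketch only: assumes LILO with raid-extra-boot support and root on md RAID-1.
boot=/dev/md0
raid-extra-boot=/dev/hda,/dev/hdc   # write the boot record to both mirrors
root=/dev/md0
image=/vmlinuz
        label=linux
        read-only
```

Note this only helps if the BIOS actually gives up on the dead hda and
moves on to hdc; if the BIOS itself hangs probing hda, as described
above, no boot-loader placement will save you.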
> > My question: if this was hardware RAID 1... would this have happened?
> > Would the hardware RAID controller recognise the problem, and only
> > stop briefly, then try the second disk automatically and
> > transparently?
>
> In my experience (ICP-Vortex fibrechannel and SCSI), yes, the hardware
> RAID does spot the faulty drive and kicks in with the sane one
> immediately. The OS is alerted that a drive in the array is at fault,
> but apart from that everything runs smoothly.
>
> Depending on your syslog configuration, it whines that you should
> replace the faulty drive with a good one until you do.
That's what I want then... ;-)
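For what it's worth, the Linux md driver does expose the same kind of
health information that a hardware controller's monitor would; you just
have to look at /proc/mdstat yourself (or have a cron job do it). A
sketch, assuming a two-disk md RAID-1; the mdstat line below is a
made-up sample, not output from a real box:

```shell
# Sketch: how mirror health shows up in /proc/mdstat (Linux software RAID).
# "[UU]" means both members are up; an underscore, e.g. "[_U]", marks a
# failed member. On a real system you would read the file itself, e.g.:
#   line=$(grep '^md0' /proc/mdstat)
line='md0 : active raid1 hdc1[1] hda1[0] 4194240 blocks [2/2] [UU]'
case "$line" in
  *'[_'*|*'_]'*) echo "array degraded - replace the failed disk" ;;
  *'[UU]'*)      echo "array healthy" ;;
esac
```

The same check wired into syslog or mail gets you the "whine until you
replace it" behaviour described above for the hardware controllers.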
> > Case 2)
> > I simulated errors by connecting a flaky IDE cable to one of the
> > drives. I was hoping the software RAID would either compensate by
> > doing most of its reading from the good drive (with a good cable) or
> > label the flaky cable/drive as bad, but instead it started slowing
> > down: writing to the array was taking much longer, and strange errors
> > started occurring during writes.
> >
> > My question: would hardware RAID have handled this situation any
> > better?
>
>
> Again, in my experience: definitely yes.
No choice but hardware RAID then... despite the extra cost.
> >
> > And as for hardware IDE RAID, which is better: Promise or HighPoint?
> > Promise seems to be better supported in the kernel, but I'm not so
> > sure. What happens when (for example) a disk in the array fails? How
> > do you control the hardware RAID so you can manage a rebuild? And for
> > Promise, HighPoint, etc., what are the devices going to be called
> > (/dev/hde? or maybe /dev/raid/array1?)
>
> Dunno about IDE RAID, but with ICP-Vortex, both FC and SCSI, you get a
> nifty little console application (icpcon) which lets you manage every
> feature of the hardware: you can add/remove/modify arrays, change RAID
> levels in an array, monitor I/O and cache on physical/host/array
> drives, rescan the bus for new disks/devices, change cluster/non-shared
> settings, and a big etcetera. Actually icpcon does everything the
> controller BIOS allows, with the same 'interface' but from the shell.
>
> I assume that Promise or whoever would have an application that lets
> you mangle the arrays or at least monitor them... but then again, if
> you don't have hot-swap capability there isn't much you can change once
> the system is up and running.
>
> Although I think at Comdex I saw some IDE RAID boxes with hot-swap
> bays; I don't know how commercial those might be as opposed to
> SCA/hot-swap SCSI, which seems to be everywhere now.
The 3ware 7xxx IDE RAID cards and their associated bays are hot-swap...
at a price. WYPIWYG (what you pay is what you get) in this case.
--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]