I want to second the recommendation for Areca controllers.  We have two
systems: the first uses an 1160 (16-port PCI-X) with 16 400GB drives,
the second uses the newer 1261ML card (16-port PCI Express with
mini-SAS connectors) and 16 500GB drives.  Comments below:

> 1) Do these controllers, from a BIOS level, permit SMART commands
>    to be sent directly to the drives (via pass(4)) so you can
>    monitor drives for potential upcoming failures and perform
>    drive tests, via smartctl?

The cli32 utility will give you SMART attributes, but no testing while
the drive is part of a RaidSet.  The controllers do support pass-through
with JBOD; see below for more information.
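Reading the attributes through the CLI looks something like this -- I'm
going from memory on the exact command names, so check the built-in help
inside cli32 before relying on them:

    ~ $ cli32
    CLI> disk info          (lists the drives the controller sees)
    CLI> disk info drv=1    (per-drive detail, including SMART attributes)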

> 2) Regardless of performance, have you actually tried a hard failure
>    with these controllers and seen what both the controller and the
>    OS do?  A good example is to pull the SATA power plug out of one
>    of the drives in the array while it's powered on and see what
>    happens, both from a controller perspective and what FreeBSD does.
>    The same question applies to hot-swapping.

Both our systems are using RAID 6, and I've tested them thoroughly.
I pulled one drive out, and the OS didn't really care.  The 16-port
cards have their own Ethernet interface which will send email for any
events.  Put the drive back in and the controller automatically starts
rebuilding.  While it was rebuilding, I pulled a different drive; again,
FreeBSD happily went on.  Put that drive back in and the controller
rebuilt the second drive after finishing the first.  During all of this,
FreeBSD could do anything it wanted with the array.

I then dropped power to the entire system.  On power-up, the rebuild
resumed once FreeBSD started to load (the controller needs an OS driver
loaded before it resumes a background rebuild; otherwise you have to go
into the BIOS utility and run the rebuild in the foreground).  Booting,
fsck, or anything else worked just fine while the system was rebuilding.
You can set the background task priority to 5%, 20%, 50%, or 80%.
Obviously, 80% rebuilds quicker but slows down access to the system.  I
didn't feel that it slowed down TOO much, but that's a matter of
opinion.  At 5% or 20% you should see little or no change in
performance.
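If you'd rather watch a rebuild from the OS instead of waiting for the
card's email notifications, the CLI can show the array state and the
controller's event log.  Again, command names here are from memory, so
verify them against cli32's help:

    ~ $ cli32
    CLI> rsf info      (RaidSet state -- shows rebuilding and percent done)
    CLI> event info    (dumps the controller's event log)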

> 3) Does Areca provide any form of carriage/enclosure medium, such as
>    an enclosure which supports 4 drives, allows hot-swapping, and
>    allows you to query the enclosure for statistics (fan RPM,
>    thermals, and so on)?

Areca supports individual HD activity/failure LED connections and the
I2C standard.  I don't believe Areca sells its own external drive cage,
but the cards will work with many third-party ones.  I'm not sure about
querying drive cage thermals or fans, but the controller monitors
everything about the drives themselves.  We handle case thermals via
the motherboard.  The controllers we have don't have any external
connectors, so it would all be internal.

> 4) string'ing the cli32 binary returns some references to SMART, but
>    the monitoring is generally retarded (literally, not slang) -- it
>    looks as if it just wants to use SMART to say "drive bad" or "drive
>    good".  This is not an effective use of SMART, and does nothing
>    for those wanting to monitor drives properly (read: temperature,
>    excessive ECC, perform SMART tests for bad blocks, etc.).

In RAID mode, I don't think it allows you to test each drive directly.
It will monitor the SMART attributes and generate warnings (for
instance, if a drive gets too hot) or alert you to other problems.  If
you're looking at using the controller in JBOD mode, or running an
individual drive as a pass-through disk, I would assume that gives you
direct access, but I haven't tested it.
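If you do try it, I'd expect a pass-through disk to show up as a da(4)
device under the arcmsr driver, at which point the usual smartctl
invocations would be the thing to try.  The device name below is just a
guess, and whether ATA SMART commands survive the controller's SCSI
emulation is exactly the part I haven't tested:

    ~ $ smartctl -a /dev/da2        (full identity + SMART readout)
    ~ $ smartctl -t short /dev/da2  (kick off a short self-test)
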
> 
> 5) Is there native FreeBSD 6.x binaries for administrative utilities?
>    It doesn't look like it, but maybe I'm looking at the wrong utility:
>    ~/V1.5_50930 $ file cli32
>    cli32: ELF 32-bit LSB executable, Intel 80386, version 1, for
>    FreeBSD 4.2, statically linked, not stripped

While the utility is built for FreeBSD 4.2, it's statically linked, so
it has worked fine on every later version of FreeBSD I've tested (your
kernel needs the COMPAT_FREEBSD4 option for the old syscalls, but
that's in GENERIC).

> I'm sorry if I sound bitter, but I must have gone through 4 different
> brands of SATA RAID controllers before saying "screw this" and going
> with non-RAID or using geom.  I don't have anything against Areca
> (I've never used their hardware), but I have no desire to use hardware
> which does not support the above things -- which in 2007 should be
> standard by all means.

The only thing the Areca cards may not give you is the ability to run
SMART tests on a drive.  I think the idea is that Areca just monitors
the drive, and if the drive reports any sort of instability, it takes
the drive offline and notifies you.  In a case like that, I would be
more inclined to pull the drive, replace it with a new one, and run my
tests on the old (possibly bad) drive outside the RAID array; I don't
want to take chances testing a drive while the array is in a degraded
state.  Another possibility would be to set aside one port that you
don't use and create a pass-through for that port to run your tests.
Running additional tests while leaving an array degraded is (again, my
opinion) asking for trouble.  A sketch of testing the pulled drive is
below.
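With the suspect drive on a plain onboard SATA port (the device name
here is just an example; yours will differ), smartctl can run the full
self-tests that the controller won't:

    ~ $ smartctl -t long /dev/ad4      (start the long surface self-test)
    ~ $ smartctl -l selftest /dev/ad4  (read the results once it finishes)
    ~ $ smartctl -A /dev/ad4           (dump the attribute table)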

Hope that helps!


Jaime Bozza
Qlinks Media Group
