You might consider Steve Gibson's SpinRite 6.0 by Gibson Research Corporation
(grc.com).
Many of his customers do just that - they run SpinRite on NEW drives (even
though it's more often used as a preventive maintenance or disaster recovery
utility). It puts the drives through their paces and analyzes them thoroughly.
I believe it does mark marginal sectors as "bad" if it cannot revitalize them.
I actually used SpinRite (I think v1.0!) regularly on my first hard drive
(30MB RLL!). The product has been around for about 30 years and is
legitimate. He is currently rewriting it from the ground up, and if you buy
the current version (6.0), upgrades to 6.x and 7.0 (the new, rewritten one)
will be free. I think the price is around $89.00.
He also has a great weekly podcast called Security Now! you might check out.
I have no formal affiliation with grc.com - this is just a personal
recommendation.
Cheers,
Rick Quendun
KMUZ Engineering
________________________________
From: nathan lawson <nathan...@gmail.com>
To: Jim Stewart <jstew...@paceaudio.com>
Cc: "rivendell-dev@lists.rivendellaudio.org"
<rivendell-dev@lists.rivendellaudio.org>
Sent: Friday, February 14, 2014 2:33 PM
Subject: Re: [RDD] Redundant Hard Drive/Backup
Just be aware that software RAID has its settings saved in the OS, so it can
be a right mare to recover from. That's when I decided to start looking at
even the lower-end hardware cards...
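(For what it's worth, Linux md also writes a superblock to each member drive,
so an array can often be reassembled from the disks themselves on a rescue
system. A rough sketch - device names and the config path are examples, not
from this thread:)

```shell
# Scan all member drives for md superblocks and list the arrays they describe
mdadm --examine --scan

# Append the discovered arrays to the new system's config
# (path varies by distro: /etc/mdadm.conf or /etc/mdadm/mdadm.conf)
mdadm --examine --scan >> /etc/mdadm.conf

# Assemble the array from its members (device names are examples)
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
```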
Regards
On Fri, Feb 14, 2014 at 10:32 PM, Jim Stewart <jstew...@paceaudio.com> wrote:
Here is what I will add to this RAID discussion:
>
>1) The best description of the difference between hardware and software
>RAID is “which CPU is doing the RAID”. A true hardware RAID doesn’t tax the
>host system for RAID functions. That said, on many workloads (like I’d expect
>most Rivendell ones), there are likely plenty of spare CPU cycles to do RAID
>while the CPU is mostly just waiting for I/O anyway. Video rendering is
>likely a different story!
>2) People do get a false sense of security with RAID. As previously
>mentioned, it does not protect you from corruption, accidental deletions, etc.
>More than that, people often don’t consider how they are going to have to
>deal with an actual RAID system failure. Consider the following:
>a. I’ve seen it many times: someone has a fancy, high-priced hardware
>RAID system on a mission-critical system so that they can sleep nights
>feeling pretty protected. Suddenly the RAID hardware goes down! Did they
>think about having a spare RAID box lying around? No! They have to get a new
>one flown in at great expense, only to have that cost dwarfed by the expense
>of the actual downtime!
>b. Okay, so you have one of those motherboard BIOS-based software RAID
>setups (which a lot of people *think* are hardware RAID). In this case you
>typically get the worst parts of both software and hardware RAID: not only
>are you still stealing main CPU cycles, but when the motherboard goes down
>you have to find another one that does that system’s particular way of doing
>RAID!
>c. Now consider Linux software RAID. You can have all the hardware
>failures you want and simply boot your Linux + RAID setup on new hardware,
>and you are up and running again! The only drawback here is you’re stuck
>with running Linux (LOL) to operate your RAID.
>3) Linux RAID also seems less picky about the choice of hard drives: you can
>mix and match (although that’s typically not the greatest idea for
>performance reasons) and all is fine. Also, I don’t know about those BIOS
>RAID solutions, but if you have hot-swappable drives, you shouldn’t have to
>shut down your system to replace and rebuild drives. Granted, most good
>hardware RAID systems give you this too.
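(To illustrate point c and the hot-swap rebuild in point 3 with Linux md, a
rough sketch, assuming a mirror at /dev/md0 with /dev/sdb1 as the failed
member - device names are examples, not from this thread:)

```shell
# Mark the ailing member as failed, then remove it from the array
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1

# Hot-swap the physical disk, partition it to match, then add it back;
# the kernel rebuilds the mirror in the background while the system runs
mdadm /dev/md0 --add /dev/sdb1

# Watch the rebuild progress
cat /proc/mdstat
```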
>
>I’ve also seen another advantage of RAID firsthand: I’ve recently had lots of
>trouble with modern hard drives that have “not-quite-defective” sectors. This
>has been a real pain for me, as the whole system stalls out while the hard
>drive struggles to read data in a “not officially bad” sector, which it
>eventually does, but only after a system slowdown. I wish someone would write
>a good disk tester that has really short time-outs, so as to mark these
>marginal areas bad and be done with it! Anyway, with RAID mirroring, it seems
>like the system runs just fine (as long as you are reading, not writing), as
>any bad spots on one drive are read instead from the other one in the mirror
>set. Yeah, I know: why am I messing with bad drives? The truth is I can’t
>seem to find any that don’t do this these days; I think I’ve tried all the
>(few remaining) hard drive makes/models there are. I’ve been told that if I
>go with some sort of “high-end” drives, like SAS-interface ones, the QC is
>higher on them and I probably won’t have the problem. It would be too bad if
>this is what it takes.
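(On the marginal-sector complaint: a user-space tester can't shorten the
drive's internal retry time-outs, but it can at least flag the slow regions.
A minimal sketch - the chunk size, threshold, and device path are all made-up
examples, not from this thread:)

```python
import sys
import time

CHUNK = 1024 * 1024          # read 1 MiB at a time (example size)
SLOW_SECS = 0.5              # flag any chunk slower than this (example)

def scan(path, chunk=CHUNK, slow_secs=SLOW_SECS):
    """Read `path` sequentially; return byte offsets of suspiciously
    slow chunks (candidate marginal regions)."""
    slow = []
    with open(path, "rb", buffering=0) as f:
        offset = 0
        while True:
            start = time.monotonic()
            data = f.read(chunk)
            elapsed = time.monotonic() - start
            if not data:
                break
            if elapsed > slow_secs:
                slow.append(offset)
            offset += len(data)
    return slow

if __name__ == "__main__" and len(sys.argv) > 1:
    # e.g.  sudo python3 slowscan.py /dev/sdb   (device name is an example)
    for off in scan(sys.argv[1]):
        print(f"slow read at byte offset {off}")
```

(Overwriting a flagged region usually forces the drive to reallocate the
sector, though that destroys the data there. Some drives also support SCT
Error Recovery Control - see smartctl's scterc log - to cap retry time, which
is essentially the short time-out being asked for here.)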
>_______________________________________________
>Rivendell-dev mailing list
>Rivendell-dev@lists.rivendellaudio.org
>http://caspian.paravelsystems.com/mailman/listinfo/rivendell-dev
>
>
--
Nathan Lawson