On 08/29/14 16:04, Evan Root wrote:
> It seems that after reading the backblaze and google papers about drive
> reliability that there is no statistically obvious difference. It's too
> close to call. Both papers end with hedges and further questions.  Even if
> enterprise drives are more reliable it isn't by more than a single percent
> and it isn't consistent enough to matter either.
> 
> Evan Root, CCNA

There's more to it than just the drives, though.
You have to count the interface, drivers, and everything else in the disk
subsystem.  When you do that, I think there is a very clear winner: IDE
and SATA disks on dull, boring interfaces.  (For what it's worth, I'm not
sure AHCI qualifies as "dull and boring" yet.  Hopefully it will soon;
I'm looking forward to being able to call it that, because AHCI rocks.)

Just about everyone gets IDE and SATA right in software.  I've seen
problems with SCSI, SAS, and RAID drivers in almost every OS.

When it comes to reliability, simplicity rocks.  Simple systems have
simple problems.  Complex systems have all the problems of simple
systems, plus complex problems of their own.  SCSI and SAS hot-swap
backplanes are a good example: ever see one fail on your server?  If you
tend more than a dozen, I bet you have.  Ever see them fail on a desktop
with a SATA or IDE drive?  Of course not.  Can't fail if it isn't there!
(Actually, SCSI/SAS backplane failures are kind of amazing if you
think about it -- there's hardly an active part on them!)  What's more,
if your SATA cable goes bad, you can get a new one in minutes by
stripping a less critical machine, or in an hour if you have to go buy
one, vs. having to track down the precise part for YOUR server.

To skew the results even more, consider the manufacturers who lock their
systems so only THEIR drives can be used with their machines.  They will
give you a line of bullshit about "oh, we've tested them and they are
more reliable!".  I have come to the conclusion that these people are
either idiots or liars, and I don't give a rat's ass which, either way,
their products don't belong in my data center.  How many people have
seen those manufacturers change hardware suppliers, and suddenly you need
firmware or driver upgrades to support the new devices...with catastrophic
failures if you don't install them BEFORE the new disk?

When's the last time you saw a firmware or driver update for a SATA or
IDE device, vs. what's the FIRST thing IBM, Dell or HP will tell you
when your expensive server crashes? ("Update everything, call me if it
happens again").

When's the last time your old server went wonky because the cache
battery was declared bad...and yet you couldn't just go down to
Batteries-R-Us and pick up a new $5 battery, you had to buy YOUR BRAND's
replacement battery with a three-digit price tag (and a stupid ID chip
to tell the cache controller that the idiots/liars blessed this battery)?

Yes, they'll give you reasons for all those design decisions, but the
reality is, they cause you failures and downtime that are 100% preventable.

I've done statistically interesting, if not significant, comparisons
between simple workstations using simple hardware with mirrored SATA
disks and high-end servers at 4+ times the price, running the same
application and the same software in the same company, and from a
reliability standpoint it was a clear victory -- for the cheap
workstations with consumer-grade disks (it wasn't just the disks, either).
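
For illustration, here's a rough sketch (Python with scipy; the failure
counts below are made up for the example, not my actual numbers) of how
you'd check whether a gap like that is statistically significant:

    # Hypothetical counts -- substitute your own fleet's numbers.
    from scipy.stats import fisher_exact

    # [failures, survivors] over the same observation period
    cheap_sata = [4, 196]   # 4 of 200 cheap SATA workstations failed
    big_iron   = [9, 91]    # 9 of 100 expensive servers failed

    oddsratio, p = fisher_exact([cheap_sata, big_iron])
    print("odds ratio %.2f, p-value %.3f" % (oddsratio, p))
    # A small p-value (say < 0.05) suggests the gap isn't just luck;
    # a large one means "interesting, if not significant".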

Nick.
