At 03:45 PM 7/12/2006, Jon Frisby wrote:
This REALLY should be a purely academic concern.  Either you have a system
that can tolerate the failure of a drive, or you do not.  The frequency of
failures is pretty much irrelevant: you can train incredibly non-technical
(inexpensive) people to respond to a pager and hot-swap a bad drive.
If you are in the position where the typical failure rate of a class of
drive concerns you, then either: A) You have a different problem causing
all your drives to fail ultra-fast (heat, electrical noise, etc.), or
B) You haven't adequately designed your storage subsystem.


It all depends on how valuable your uptime is. If you can double or triple the time between hard disk failures, most people would pay extra for that, so they buy SCSI drives. You wouldn't take your family car and race it in the Indy 500, would you? After a few laps at 150 mph (if you can even get it going that fast), it will seize up, so you go into the pit stop and do what? Get another family car and drive that? And keep doing that until you finish the race? Downtime is extremely expensive and embarrassing. Just talk to the guys at FastMail, who have had two outages even with hardware RAID in place. Recovery doesn't always work as smoothly as you think it should.

Save yourself the headache, and just set up a RAID10 PATA/SATA array with a
hot spare.  Not sure if Linux/FreeBSD/et al. support hot-swapping drives
when using software RAID, but if they do, then you don't even need to spend
a few hundred bucks on a RAID controller.
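(As it happens, Linux's md driver does handle hot spares and hot-adding a
replacement through mdadm; a rough sketch of a four-disk RAID10 with one
hot spare, using placeholder device names:

    mdadm --create /dev/md0 --level=10 --raid-devices=4 --spare-devices=1 \
          /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

    # If a member dies, md rebuilds onto the spare on its own.  After
    # physically replacing the dead disk, hot-add the new one:
    mdadm /dev/md0 --remove /dev/sdc1
    mdadm /dev/md0 --add /dev/sdg1

Whether the disk itself can be pulled while the box is running depends on
the controller and backplane supporting hot-swap, not on md.)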

Software RAID? Are you serious? No way!

Mike




-JF


On Jul 12, 2006, at 12:11 PM, mos wrote:

At 12:42 PM 7/12/2006, you wrote:
On Tuesday 11 July 2006 19:26, mos wrote:
> SCSI drives are also designed to run 24/7 whereas IDE drives are more
> likely to fail if used on a busy server.

This used to be the case.  But there are SATA drives out there now being
made for "enterprise class," 100% duty cycle operations.  See, for example:
http://www.westerndigital.com/en/products/Products.asp?DriveID=238&Language=en
No, I am not affiliated with WD, just had good experience with these
drives.  1.2 million hours MTBF at 100% duty cycle and a five-year
warranty.  Not bad.

That's good to hear, but MTBF is really a pie-in-the-sky estimate. I had an
expensive HP tape drive that had something like a 20,000-hour MTBF. Both of
my units failed at under 70 hours. HP's estimate was based on power-on hours
(the unit powered on and doing nothing), and did NOT include hours when the
tape was actually in motion. Sheesh.

To get the MTBF estimate, the manufacturer will power on 100 drives (or
more) and time how long it takes for the first one to fail. If it fails at
1,000 hours, then the MTBF is 100 x 1,000 hours, or 100,000 hours. This is
far from accurate because, as we all know, the older the drive, the more
likely it is to fail. (Especially after the warranty period has expired,
the failure rate is quite high. <g>)
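In other words, the quoted number is just total accumulated drive-hours
divided by observed failures, roughly:

    MTBF ~= (drives tested x hours each ran) / (failures observed)
         = (100 x 1,000) / 1 = 100,000 hours

which is how a drive can carry a million-hour MTBF without any unit ever
having been tested for anything close to that long, and why the figure
says nothing about wear-out near the end of the drive's life.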

I am hoping the newer SATA II drives will provide SCSI performance at a
reasonable price. It would be interesting to see if anyone has polled ISPs
to find out what they're using. I know they charge more (or at least they
used to) for SCSI drives if you are renting a server from them. It would
also be interesting to see what their failure rate is on IDE vs. SCSI vs.
SATA.

Mike

