This REALLY should be an academic concern. Either you have a system that can tolerate the failure of a drive, or you do not. The exact failure rate is pretty much irrelevant: you can train non-technical (read: inexpensive) people to respond to a pager and hot-swap a bad drive.

If you are in a position where the typical failure rate of a class of drive concerns you, then either: A) you have some other problem causing all your drives to fail unusually fast (heat, electrical noise, etc.), or B) you haven't adequately designed your storage subsystem.

Save yourself the headache and just set up a RAID10 PATA/SATA array with a hot spare. I'm not sure whether Linux/FreeBSD/et al. support hot-swapping drives under software RAID, but if they do, you don't even need to spend a few hundred bucks on a hardware RAID controller.
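For what it's worth, on Linux the software-RAID setup above can be sketched with mdadm roughly like this (device names /dev/sd[b-f] are placeholders, not from the original message; adapt to your hardware):

```shell
# Create a 4-disk RAID10 array with one hot spare (Linux md software RAID).
# /dev/sdb..sdf are placeholder device names -- substitute your own.
mdadm --create /dev/md0 --level=10 --raid-devices=4 --spare-devices=1 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# When a member fails, md rebuilds onto the spare automatically.
# After physically swapping the dead drive, cycle it back in:
mdadm /dev/md0 --remove /dev/sdb   # drop the failed member
mdadm /dev/md0 --add /dev/sdb      # replacement becomes the new spare

# Watch rebuild progress:
cat /proc/mdstat
```

Whether the hot-swap itself works without a reboot depends on the controller and driver, not on md.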

-JF


On Jul 12, 2006, at 12:11 PM, mos wrote:

At 12:42 PM 7/12/2006, you wrote:
On Tuesday 11 July 2006 19:26, mos wrote:
> SCSI drives are also designed to run 24/7 whereas IDE drives are more
> likely to fail if used on a busy server.

This used to be the case. But there are SATA drives out there now being made for "enterprise class," 100% duty cycle operation. See, for example, http://www.westerndigital.com/en/products/Products.asp?DriveID=238&Language=en No, I am not affiliated with WD, just had good experience with these drives. 1.2 million hours MTBF at 100% duty cycle and a five-year warranty. Not bad.

That's good to hear, but MTBF is really a pie-in-the-sky estimate. I had an expensive HP tape drive rated at something like 20,000 hours MTBF. Both of my units failed at under 70 hours. HP's estimate was for power-on hours (unit powered on and doing nothing) and did NOT include hours when the tape was in motion. Sheesh.

To get the MTBF estimate, the manufacturer will power on 100 drives (or more) and time how long it takes for the first one to fail. If it fails in 1,000 hours, then the MTBF is 100 x 1,000 hrs, or 100,000 hours. This is far from accurate because, as we all know, the older the drive, the more likely it is to fail. (Especially after the warranty period has expired, the failure rate is quite high<g>).
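The arithmetic behind that estimate can be sketched in a few lines (my illustration of the calculation described above, not the manufacturer's actual test procedure):

```python
# MTBF as described above: total accumulated drive-hours across the
# test population divided by the number of failures observed.
def mtbf_hours(drives, hours_to_first_failure, failures=1):
    """Estimate MTBF from an accelerated population test."""
    return drives * hours_to_first_failure / failures

# 100 drives, first failure at 1,000 hours -> 100 * 1000 = 100,000 hr MTBF
print(mtbf_hours(100, 1000))
```

The weakness is obvious from the formula: a short test on many young drives says nothing about wear-out failures years later.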

I am hoping the newer SATA II drives will provide SCSI performance at a reasonable price. It would be interesting to see if anyone has polled ISPs about what they're using. I know they charge more (or at least they used to) for SCSI drives if you are renting a server from them. It would also be interesting to see what their failure rates are on IDE vs. SCSI vs. SATA.

Mike

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?[EMAIL PROTECTED]



