Indeed. 

Sent from my Verizon Wireless BlackBerry

-----Original Message-----
From: "Martin Blackstone" <mblackst...@gmail.com>
Date: Sat, 17 Oct 2009 16:18:45 
To: NT System Admin Issues<ntsysadmin@lyris.sunbelt-software.com>
Subject: RE: "Why RAID 5 stops working in 2009 | Storage Bits | ZDNet.com"

Not yet. The write speed of SSDs still isn't that good. It's on reads that
they're great.

They still have high failure rates, and the cost is still extravagant.

 

They are great in places like banks and stock brokerages, where data is read
constantly and one second of delay can cost a million bucks.

 

Companies like EMC and NetApp are just starting to come out with them.
Today, to get them you have to buy truly tier-one storage.

 

From: Andrew Levicki [mailto:and...@levicki.me.uk] 
Sent: Saturday, October 17, 2009 11:13 AM
To: NT System Admin Issues
Subject: Re: "Why RAID 5 stops working in 2009 | Storage Bits | ZDNet.com"

 

In my opinion, we're on the cusp of solid state storage becoming the norm,
and we will be able to put hard drives out to pasture, or use them for
backups in place of tapes.

 

Although hard disks are faster now than ever, it's amazing that we are still
at the mercy of such mechanical devices for our mission- and
business-critical data. Solid state FTW.

 

Regards,

 

Andrew

2009/10/17 Angus Scott-Fleming <angu...@geoapps.com>

Scaremongering, or legitimate things to worry about?  Lots of the "Talkback"
comments are that ZDNet is over the top these days, but it seems to me he's
got some legitimate points.

------- Included Stuff Follows -------
Why RAID 5 stops working in 2009 | Storage Bits | ZDNet.com

 Disks fail
   While disks are incredibly reliable devices, they do fail. Our best data -
   from CMU and Google - finds that over 3% of drives fail each year in the
   first three years of drive life, and then failure rates start rising fast.

   With 7 brand new disks, you have a ~20% chance of seeing a disk failure
   each year. Factor in the rising failure rate with age, and over 4 years
   you are almost certain to see a disk failure during the life of those
   disks.
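
   [Editor's note: the ~20% figure checks out with a quick back-of-the-envelope
   calculation; this is a sketch assuming failures are independent at the 3%
   annual rate quoted above.]

```python
# Probability of at least one disk failure per year in a 7-disk array,
# assuming each drive fails independently at a 3% annual rate.
annual_failure_rate = 0.03
disks = 7

p_any_failure = 1 - (1 - annual_failure_rate) ** disks
print(f"{p_any_failure:.1%}")  # roughly 19%, i.e. the ~20% quoted
```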

   But you're protected by RAID 5, right? Not in 2009.

 Reads fail
   SATA drives are commonly specified with an unrecoverable read error (URE)
   rate of 1 in 10^14, which means that once every 100,000,000,000,000 bits,
   the disk will very politely tell you that, so sorry, but I really, truly
   can't read that sector back to you.

   One hundred trillion bits is about 12 terabytes. Sound like a lot? Not in
   2009.

 Disk capacities double
   Disk drive capacities double every 18-24 months. We have 1 TB drives now,
   and in 2009 we'll have 2 TB drives.

   With a disk failure in a 7-drive RAID 5, you'll have 6 remaining 2 TB
   drives. As the RAID controller is busily reading through those 6 disks to
   reconstruct the data from the failed drive, it is almost certain to see
   a URE.
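
   [Editor's note: the odds of hitting a URE during that rebuild can be
   sketched the same way, assuming the quoted 1-in-10^14 bit error rate and
   independent errors per bit.]

```python
# Probability of at least one unrecoverable read error (URE) while reading
# the 6 surviving 2 TB drives (12 TB total) during a RAID 5 rebuild,
# assuming a 1-in-10^14 per-bit URE rate and independent errors.
ure_rate = 1e-14              # unrecoverable read errors per bit
bits_to_read = 6 * 2e12 * 8   # 6 drives x 2 TB x 8 bits/byte = 9.6e13 bits

p_ure = 1 - (1 - ure_rate) ** bits_to_read
print(f"{p_ure:.0%}")  # roughly 62%: the rebuild is more likely to hit a URE than not
```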

   So the read fails. And when that happens, you are one unhappy camper. The
   message "we can't read this RAID volume" travels up the chain of command
   until an error message is presented on the screen. 12 TB of your carefully
   protected - you thought! - data is gone. Oh, you didn't back it up to
   tape? Bummer!

--------- Included Stuff Ends ---------
More here with links: http://blogs.zdnet.com/storage/?p=162


--
Angus Scott-Fleming
GeoApps, Tucson, Arizona
1-520-290-5038
+-----------------------------------+




~ Finally, powerful endpoint security that ISN'T a resource hog! ~
~ <http://www.sunbeltsoftware.com/Business/VIPRE-Enterprise/>  ~

 

 

 
