On 01/08/2013 03:40 PM, Jacob Albretsen wrote:
On Tuesday, January 08, 2013 01:11:09 PM Robert LeBlanc wrote:
On Mon, Jan 7, 2013 at 9:56 PM, Phillip Hellewell <[email protected]> wrote:
On Mon, Jan 07, 2013 at 11:06:30AM -0700, Michael Torrie wrote:
Unless you want to spend significant cash, I don't see any reason to go
with SSD, frankly.  The affordable ones have extremely high failure
rates from the reviews I've read.  And they often fail suddenly and
spectacularly, with no warning.  At least the ones you see advertised on
newegg (OCZ, etc).
Hmm, this really worries me.  I can't afford to have my server die
suddenly and unexpectedly.  And I've heard you're "not supposed to" use
RAID with SSDs.  Maybe SSD is a better choice with laptops than servers.
I don't know of any technical reason for this other than some people
complaining that RAID does not pass down TRIM. The fact is, TRIM support is
not fully implemented anyway. I would have no problem RAIDing SSDs, and I
consider it as safe a practice as with any disk that can fail (I'm pretty
sure that is all of them).
Not that this hands-down discounts "not supposed to", but I have run a busy
database server on which we tried two SSDs in a RAID1, and it's worked just
dandy for over a year.
I'm going to go out on a limb here and say that the suggestion not to RAID SSDs is about avoiding premature wear from partial-stripe and multiple-disk writes for a single block update, specifically when using parity RAID (RAID 2/3/4/5/6), and especially if the RAID chunks/stripes are not aligned properly.
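To put a rough number on that wear concern, here's a toy sketch (my own illustration, not anything from the thread) of the classic RAID5 small-write penalty: a single block update costs two reads and two writes, so the array issues twice the writes a lone disk would, and the SSDs absorb twice the wear.

```python
# Toy illustration: I/O cost of a small (partial-stripe) update on RAID5
# via read-modify-write, vs. a plain single-disk write.
def raid5_small_write_ios(blocks_updated=1):
    # Read old data block + old parity block, then write new data + new
    # parity: 2 reads and 2 writes per updated block.
    reads = 2 * blocks_updated
    writes = 2 * blocks_updated
    return reads, writes

reads, writes = raid5_small_write_ios(1)
print(f"RAID5: {reads} reads + {writes} writes per block update")
print("Single disk: 0 reads + 1 write per block update")
```

So for every logical write, parity RAID lands two physical writes on flash, which is the "premature wearing" being guessed at above; a full-stripe (aligned) write avoids the reads but still multiplies the writes.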

Yes, many SSDs have wear-leveling, and you will probably replace your server long before the drive wears out, under normal load. But also remember that spinning rust has MTBFs of over 100 years, as I recall, and still those disks can and do fail. I myself had a string of 7 SAS drives on one server fail on 3 separate occasions, due to somebody in the Philippines being handed the wrong spindle lubricant during a specific 6-month period. So just because an SSD is reported to be extremely unlikely to fail doesn't mean the wrong doping compound wasn't used on the silicon your drive was made from. Or you may have bought a far more failure-prone MLC-based SSD instead of an SLC-based one.

Also, not all wear-leveling mechanisms are implemented the same. Some have RAM on the drive where the block updates are buffered, merged with the rest of the original page, and the full page rewritten to another page somewhere else. How the new pages are chosen, how and when failing pages are detected, how many extra pages are held in reserve to re-map the page, how the current page map is implemented, etc., are all things that could be done improperly, or at least in a way that won't work well with RAID workloads.
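The remapping described above can be sketched as a tiny page-mapped translation layer. This is a made-up toy (all names invented, no resemblance to any real firmware): each update of a logical page lands on a fresh physical page, the map is rewritten, and the old page goes back to a free pool to be erased and reused.

```python
# Toy page-mapped wear-leveling sketch: updates never overwrite in place;
# they claim a fresh physical page and remap the logical address.
from collections import deque

class ToyFTL:
    def __init__(self, physical_pages=8):
        self.free = deque(range(physical_pages))  # free + spare pages
        self.map = {}                    # logical page -> physical page
        self.erases = [0] * physical_pages  # per-page wear counter

    def write(self, logical_page):
        new = self.free.popleft()        # least-recently-freed page first
        old = self.map.get(logical_page)
        self.map[logical_page] = new     # remap logical -> new physical
        if old is not None:
            self.erases[old] += 1        # old page erased before reuse
            self.free.append(old)
        return new

ftl = ToyFTL(4)
# Hammering one logical page spreads the wear across physical pages:
touched = [ftl.write(0) for _ in range(3)]
print(touched)       # three different physical pages
print(ftl.erases)    # erase counts stay low per page
```

Every decision in here (which free page to pick, when to count a page as worn, how big the reserve pool is) is exactly the kind of thing the paragraph says a vendor could get wrong.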

On the performance end, a single SSD may be able to bury the performance of a given RAID config. Why pay the extra money for an SSD if the RAID hardware or software is going to be a major bottleneck anyway? Let's say an SSD page is 4MB (pulling numbers out of thin air here) and your RAID5 chunk/stripe is only 256KB. A single 4MB write would pay a repeated RMW-cycle penalty compared to a single, in-place page allocation on one SSD. There goes the benefit of streamed writes to your RAID/SSD. On the other hand, stream-writing more data than a single SSD can handle to a RAID5 could improve performance, assuming each SSD's RAM cache is too small to absorb the stream alone, and that the pages can complete their update before the write stream comes back around to the first saturated SSD.
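Running the thread's own made-up numbers (4MB SSD page, 256KB RAID5 chunks, plus an assumed 4 data disks that the email doesn't specify) shows how chopped up that single write gets:

```python
# Back-of-the-envelope math for the example above. The 4MB page and
# 256KB chunk are the thread's invented figures; the 4 data disks
# (i.e. a 5-disk RAID5) are my added assumption.
PAGE = 4 * 1024 * 1024      # hypothetical SSD page size
CHUNK = 256 * 1024          # RAID5 chunk size
DATA_DISKS = 4              # assumed data disks per stripe

chunks_touched = PAGE // CHUNK
stripe_width = DATA_DISKS * CHUNK
full_stripes = PAGE // stripe_width

print(f"{chunks_touched} chunks touched")
print(f"stripe width: {stripe_width // 1024} KB")
print(f"{full_stripes} full stripes per 4MB page")
```

One SSD-page-sized write becomes 16 chunks spread over 4 stripes, each stripe needing its own parity update, versus a single page allocation on a lone SSD; any unaligned edge chunks pay the read-modify-write penalty on top.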

Long post short (too late), I can see serious benefits to having RAID1 (and maybe RAID10) on SSDs: the block/page updates might go to different pages on each drive, reducing the risk of a poor wear-leveling algorithm causing both drives to fail at once; there's less downtime WHEN one of the drives fails; and a partial page write causes no more harm to the SSD than a single SSD would experience. The biggest risk is saturating the bus, cutting your total performance to about half what it would be with a single drive.

Grazie,
Daniel Fussell

Disclaimer:  I've never had a true SSD, but I've seen one on TV...

--------------------
BYU Unix Users Group
http://uug.byu.edu/

The opinions expressed in this message are the responsibility of their
author.  They are not endorsed by BYU, the BYU CS Department or BYU-UUG.
___________________________________________________________________
List Info (unsubscribe here): http://uug.byu.edu/mailman/listinfo/uug-list