On 02/04/2012 23:55, Dan McAllister wrote:
From an earlier post:

    Of course your hardware configuration can make a difference too.
    You mentioned 2x2TB drives. I hope you have them configured as
    raid-1 mirrors (software raid should do fine for raid-1). raid-1
    will give you a little slower write, but faster read performance.

I'm a little bit of a stickler on RAID assumptions... so I am going to stick my nose into this (where it probably does not belong).
Indulge me or not -- I've been up-front about what I'm doing here! :-)

There are *significant* PERFORMANCE differences between hardware and software RAID -- even RAID-1. There are several criteria to consider for each technology -- specifically, read and write performance under both normal and degraded conditions -- so here goes:

  * Normal RAID-1 Read Times:

    There is no benefit in software RAID-1 for read-times on good
    (healthy) arrays -- whether you read from drive 1 or drive 0 won't
    matter, except for the potential overlapping of seek times, which
    compared to hardware RAID-1 is minimal. There is only the one I/O
    channel on a standard controller, and so you can actually accept
    the data (I/O from drive to controller) from only 1 drive at a time.

    There /can be/ a considerable performance *increase* in using
    HARDWARE RAID-1, assuming your controller supports parallel reads
    (see below on "/cheap/" RAID cards). On small reads, the increase
    can be anywhere from 25-40%; on larger file reads, the increase
    can approach a /theoretical/ max of 50%! (By increase, I mean
    performance increase -- as measured by /decreases/ in wait times
    from the moment a read request is received to the moment it is
    satisfied back to the calling routine.) There is a
    back-of-the-envelope sketch of these numbers just after this list.

  * Degraded RAID-1 Read Times:

    In software RAID-1, a failed drive can essentially bring down a
    server as the controller continually tries to access a failing or
    failed drive (assuming the drive isn't unrecognizable) and blocks
    other I/O as a result. The only fix is to reboot (often in single
    user mode) and mark the bad drive as down or removed. MOST
    software RAID (e.g. embedded motherboard controllers) cannot (or
    more correctly, WILL not) do this automatically -- they just
    continue to flail themselves against the failing drive, leaving
    system response times measured in minutes (vs. milliseconds). Odd
    side-note: Linux MD RAID-1 performs MUCH better in degraded mode,
    because it marks a bad drive as down much faster, thus ending the
    I/O blocks that otherwise cripple the system.

    In hardware RAID-1, a failed drive is usually taken offline
    /automatically/ by the controller (similar to the response time of
    the Linux MD software solution, but NOT similar to the cheap or
    motherboard RAID solutions that try repeatedly to access a bad
    drive) -- so read times essentially revert to JBOD (single disk)
    read performance levels. There can be some hiccups while the card
    determines the good/bad status of the suspect drive, but better
    hardware RAID cards do this while continuing to read from the
    "good" drive (issuing new read requests while they "work on" the
    bad drive). During this time, read performance can be impacted,
    but usually only slightly, and it does NOT block the entire server
    behind I/O waits. Then, once the drive testing is done and the
    drive is marked down, performance essentially equals JBOD
    performance (as with the software solution, once the bad drive is
    removed -- logically or physically).

  * Normal RAID-1 Write Times:

    There is a *write-penalty* in software RAID-1, even on good arrays
    -- because all data has to be written twice, and the controller
    has to "talk" to each drive independently (that is, send the write
    data to each drive, one drive at a time). NCQ does offset this
    somewhat (assuming you enabled it), but not during heavy load
    times, because there is really only one I/O channel on the drive
    controller (even on SATA RAID controllers). The sketch after this
    list shows the worst case: a 2x penalty when the two copies are
    serialized.

    There is usually *no write-penalty* in hardware RAID-1, because
    the controllers are designed to "talk" to each drive
    independently. (NOTE: Some cheaper RAID cards use non-RAID
    optimized controllers, in which case the performance is the same
    as with the software RAID, except you're not spending the
    processing time on the main system, you're using the on-board RAID
    processor on the card.) If you're interested in write-performance
    improvements, INSIST on a *REAL* hardware RAID solution!

  * Degraded RAID-1 Write Times:

    Again, in software RAID-1, a failed drive can essentially bring
    down a server as the controller continues to try to write to a
    failing or failed drive (assuming the drive isn't unrecognizable).
    As with reads, the only reasonably fast fix is to reboot in
    single-user mode and mark the bad drive down. You often cannot do
    this with the system live because the system is practically
    unresponsive due to the I/O blocks on the bad drive. Also, this
    may be difficult on a remote server: if the failed array contains
    the root partition, you'll hit those same I/O blocks even while
    booting into single-user mode!

    In hardware RAID-1, a failed drive is usually taken offline
    entirely automatically, so write times revert to JBOD performance.
    There can be some hiccups while the card determines the good/bad
    status of the bad drive (as noted earlier), but better hardware
    RAID cards do this while continuing to write to the "good" drive.
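
To put rough numbers behind the read and write points above, here is a back-of-the-envelope sketch in Python. The seek and transfer figures are assumptions I picked purely for illustration (not benchmarks of any particular drive or card); the point is only that parallel mirror reads approach the theoretical 50% improvement under load, while serialized mirrored writes approach a 2x penalty.

# Back-of-the-envelope model of mirrored read/write timing (illustrative only;
# the seek and transfer figures below are made-up assumptions, not benchmarks).

SEEK_MS = 8.0          # assumed seek + rotational latency per request
TRANSFER_MB_S = 120.0  # assumed sustained transfer rate of a single drive

def request_ms(size_mb):
    """Time for one drive to satisfy a single read or write request."""
    return SEEK_MS + size_mb / TRANSFER_MB_S * 1000.0

def read_queue_ms(n_requests, size_mb, mirrors_serving):
    """Wall-clock time to drain a queue of independent reads when the
    controller dispatches to 1 drive (one I/O channel) or to both mirrors."""
    per_drive = -(-n_requests // mirrors_serving)   # ceiling division
    return per_drive * request_ms(size_mb)

def mirrored_write_ms(size_mb, parallel):
    """One mirrored write: both copies at once vs. one drive after the other."""
    return request_ms(size_mb) * (1 if parallel else 2)

one_drive = read_queue_ms(100, 0.064, mirrors_serving=1)   # 100 queued 64 KB reads
both      = read_queue_ms(100, 0.064, mirrors_serving=2)
print(f"read queue drained {1 - both / one_drive:.0%} sooner with parallel mirror reads")
print(f"write penalty when the two copies are serialized: "
      f"{mirrored_write_ms(1.0, False) / mirrored_write_ms(1.0, True):.1f}x")

In practice, command overhead, cache behavior, and uneven request streams keep real-world gains below those ideals, which is consistent with the 25-40% range quoted above.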

NOTE: I do make assumptions here -- notably that the "hardware" RAID solution you purchase is a card that really does implement RAID, vs. /simulating/ RAID (like the software versions do). A $50 RAID card is not a /REAL/ RAID card... most $200+ RAID cards are -- but read the specs carefully! (This is kind of like the old "real modem" vs. "win-modem" debates... just as back then, you want the "hardware accelerated" RAID solution -- which is more expensive, but well worth it!)

To me, the performance issues during the degraded state alone make the extra hardware purchase well worth the extra $$$. I personally like both the Adaptec and AMCC/3-Ware (now LSI) solutions for SATA RAID arrays. After all, the whole point of a RAID-1 array is to protect the overall system WHEN A DRIVE FAILS... if all you wanted was faster performance, you'd use RAID-0 (no protection, just striping!)
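
Since the degraded state is where software RAID hurts the most, it helps to notice a degraded Linux MD array early rather than discovering it when the box crawls. Below is a minimal sketch (Linux-only, and assuming the usual /proc/mdstat layout) that flags arrays running with a missing member; actually failing out the bad disk is still done with mdadm (e.g. mdadm /dev/md0 --fail /dev/sdb1, then --remove -- device names here are just examples).

# Minimal sketch: flag degraded Linux MD arrays by parsing /proc/mdstat.
# Assumes the usual status-line format, e.g. "[2/1] [U_]" where the second
# number is the count of active members and "_" marks a missing one.
import re

def degraded_md_arrays(mdstat_path="/proc/mdstat"):
    with open(mdstat_path) as f:
        text = f.read()
    degraded = []
    # Each array stanza starts with a line like "md0 : active raid1 sdb1[1] sda1[0]"
    for stanza in re.split(r"\n(?=md\d+ :)", text):
        name = re.match(r"(md\d+) :", stanza)
        status = re.search(r"\[(\d+)/(\d+)\]", stanza)   # [total/active]
        if name and status and int(status.group(2)) < int(status.group(1)):
            degraded.append(name.group(1))
    return degraded

if __name__ == "__main__":
    bad = degraded_md_arrays()
    print("degraded md arrays:", ", ".join(bad) if bad else "none")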

One last point -- back in the 1990's, RAID-3 and RAID-5 (and later, RAID-6) were popular because storage was _/expensive/_ (and most RAID arrays were using *SCSI* drives, which were significantly _more_ expensive than PATA or SATA drives). But times have changed and 2TB SATA (or even SAS) drives are _cheap and common_. The only RAID technology that out-PERFORMS a hardware RAID-1 array is a hardware RAID-10 array (which is a striped mirror)... and I personally only use those for EXTREMELY large RAID-1 arrays!

OK... so this is the end of my RAID tutorial... I hope you learned something! :-)

Dan McAllister
IT4SOHO

PS: I learned all of this when I worked as a management consultant to EMC back in the early 2000's... back when hardware RAID-1 and RAID-5 were at or near the "tipping point" in cost/benefit, so it was often a "use case" issue as to which technology was more effective for a particular client. We were often up against "homemade" external RAID systems or humongous RAID5 arrays from IBM or Hitachi -- we were early adopters of RAID-1 for performance results! Of course, it didn't hurt that RAID-1 requires more physical drives! :-)


The world is grey sometimes, somewhere between the white of HW RAID and the black of SW RAID.

I am a fan of hardware RAID, and we mostly use HP SmartArray cards, as we feel they are very reliable and fast.

But the use of ZFS in recent installations has put us in front of a new way of thinking: ZFS recommends disabling hardware RAID and using JBOD instead -- ZFS will take care of the RAID itself. So, wanting to use this new FS, we have no choice but to keep HW RAID on the boot disks (with UFS) and JBOD on the other disks (with ZFS).

At the same time, while we currently use RAID10 and RAID5, I still see very serious reasons to use RAID6 in some environments.

Disks are generally cheap, but serious disks are expensive, despite their huge sizes. So I don't blame someone for putting 12 x 2TB disks in a RAID6 configuration for slow or dedicated storage (like backups or seldom-used archives). 12 disks in RAID10 means 5 disks of usable capacity (5 mirrored pairs, plus 2 hot spares); 12 disks in RAID6 means 9 disks of usable capacity (2 for RAID6 parity, plus 1 hot spare). The difference in usable storage is large. I agree RAID10 is a lot faster, but in some environments that may not be important, while price and reliability may matter more.
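
Just to make that capacity arithmetic explicit -- this little sketch assumes nothing beyond the disk counts above and 2TB per disk:

# Usable capacity for the 12 x 2TB example above (plain arithmetic, no vendor specifics).
def usable_tb(disks, disk_tb, level, hot_spares):
    in_array = disks - hot_spares
    if level == "raid10":
        return (in_array // 2) * disk_tb      # half the array is mirror copies
    if level == "raid6":
        return (in_array - 2) * disk_tb       # two disks' worth of parity
    raise ValueError(level)

print("RAID10, 2 hot spares:", usable_tb(12, 2, "raid10", 2), "TB")  # 5 usable disks -> 10 TB
print("RAID6,  1 hot spare :", usable_tb(12, 2, "raid6", 1), "TB")   # 9 usable disks -> 18 TB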

Regards,

Tonino



--
------------------------------------------------------------
        Inter@zioni            Interazioni di Antonio Nati
   http://www.interazioni.it      to...@interazioni.it
------------------------------------------------------------
