On Wed, 11 Oct 2000, Bill Carlson wrote:
> Depending on how the RAID5 is setup, it can actually be slower than a
> single disk! Especially on data writes. RAID5 is a compromise between data
> safety, i/o speed and price. A decent RAID1 or RAID 1+0 will cost a little
> more (depending on your needs), but the performance will be better as
> well.

This is only true for cheap hardware and/or small numbers of drives.

> You mention 20GB drive, something tells me these are IDE. Again, a price
> vs performance tradeoff as a good SCSI setup will leave the top of the
> line IDE setup in the dust, but will also cost a pretty penny.

The real issues are platter speed and not maxing out your controller
bandwidth.  Get 10Krpm SCSI drives and don't plug too many into one
controller (no more than 7 active on a U160 controller, for example, and
preferably fewer).
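
For a rough feel of why 7 is about the ceiling, here's a back-of-the-envelope
sketch in Python; the per-drive rate is an assumed figure for a 10Krpm drive,
not a spec:

    # Seven-ish drives at a rough sustained rate already approach the
    # shared 160 MB/s limit of a U160 bus.  The per-drive figure is an
    # assumption, not a measurement.
    U160_MBS = 160
    PER_DRIVE_MBS = 22        # assumed sustained rate for a 10Krpm SCSI drive

    for n in range(5, 9):
        total = n * PER_DRIVE_MBS
        print(f"{n} drives: ~{total} MB/s on a {U160_MBS} MB/s bus")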

> Either way, I'd move the queue to a RAID1 setup instead of RAID5.

Note that most decent hardware or software RAID implementations stripe
reads from RAID-1, so a RAID-1+0 of n drives would get roughly the same
read performance as a RAID-0 of the same number of drives or a RAID-5 of
n+1 drives.

Unfortunately, writes to RAID-1 cannot be striped (every block has to be
written to both halves of the mirror), so you will theoretically get half
the write speed of an n-drive RAID-0 or an (n+1)-drive RAID-5.  This
assumes RAID-5 parity calculation adds no latency (which an SMP server
with cycles to spare for software RAID, or a modern hardware RAID
controller with a large cache, can usually manage) and that you're not
maxing out your controller or bus bandwidth.
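
To put numbers on that, here's an idealized sketch in Python; the per-drive
throughput is a made-up placeholder and the model ignores parity math, bus
contention and seek behaviour, exactly as assumed above:

    # Idealized sequential throughput for the three layouts discussed.
    DRIVE_MBS = 30.0            # assumed per-drive rate, not a measurement

    def raid0(n):               # reads and writes stripe across all n drives
        return n * DRIVE_MBS, n * DRIVE_MBS

    def raid10(n):              # n drives as n/2 mirrored pairs: reads stripe
        return n * DRIVE_MBS, (n // 2) * DRIVE_MBS   # over all, writes hit both

    def raid5(n):               # n drives, one drive's worth of parity,
        return (n - 1) * DRIVE_MBS, (n - 1) * DRIVE_MBS   # parity calc "free"

    for label, (r, w) in [("RAID-0,   6 drives", raid0(6)),
                          ("RAID-1+0, 6 drives", raid10(6)),
                          ("RAID-5,   7 drives", raid5(7))]:
        print(f"{label}: read ~{r:.0f} MB/s, write ~{w:.0f} MB/s")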

The real downside to RAID-5 these days is degraded-mode performance.  That
is, when a drive dies the controller or kernel has to "recreate" the
missing data on the fly from the parity on the other drives.  RAID-1 (and
thus RAID-0+1) doesn't have this problem, as the data on each drive is
copied exactly to its mate and thus doesn't need to be "recreated" for
reads when the array is crippled.

My personal favorite for performance, reliability, price and size is
RAID-5+5, but that gets into some very *large* array sizes... (16x18GB
SCSI drives spread across 4 controllers yield 9x18GB worth of usable
space, but you can lose an entire controller, or up to one drive off each
controller, and still have a working array; the arithmetic is sketched
below).  And at those sizes price is usually no object, so companies tend
to rely on outsourced-management BCV 3-way RAID-0+1 anyway...
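
The capacity arithmetic for that example, as a sketch: RAID-5 inside each
controller, then RAID-5 across the controller sets.

    # RAID-5+5 usable space for the 16x18GB / 4-controller example above.
    controllers = 4
    drives_per_controller = 4
    drive_gb = 18

    per_controller_usable = (drives_per_controller - 1) * drive_gb   # 3 x 18GB
    array_usable = (controllers - 1) * per_controller_usable         # 3 x 54GB
    print(f"usable: {array_usable} GB "
          f"= {array_usable // drive_gb} x {drive_gb}GB drives' worth")
    # Survives: one whole controller (the outer RAID-5 absorbs it), or
    # up to one drive on each controller (each inner RAID-5 absorbs its own).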

But this is a religious argument that has been hashed out repeatedly on
many mailing lists I've been on over the years.  The RAID-1 crowd and the
RAID-5 crowd never gain converts, as neither seems able to produce
controlled benchmarks convincing enough to sway the other.  My suggestion
is to test, test, test under your environment, or the best approximation
you can fabricate, and pick what works best for you.

--------------------------------------------------------------------------
Jeremy Stanley, Information Security Specialist         Foveon Corporation
--------------------------------------------------------------------------

