"scott.marlowe" <[EMAIL PROTECTED]> writes:

> If you are writing 4k out to a RAID5 of 10 disks, this is what happens:
> 
> (assuming 64k stripes...)
> READ data stripe (64k read)
> READ parity stripe (64k read)
> make changes to data stripe
> XOR the old data and the new data with the old parity stripe to get the new parity stripe
> write new parity stripe (64k)
> write new data stripe (64k)
> 
> So it's not as bad as you might think.  

The main negative for RAID5 is that it has to do that extra READ. If you're
doing lots of tiny updates, the extra latency of having to read the parity
block before the new parity block can be written out is a real killer. For
that reason people prefer 0+1 for OLTP systems.
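
To make the read-modify-write concrete, here is a minimal sketch in Python of
the parity update for one chunk (the function name and the byte-wise XOR
representation are just illustrative assumptions, not how any particular
controller implements it):

    # RAID5 small-write (read-modify-write), sketch only.
    # new_parity = old_parity XOR old_data XOR new_data, which is why the old
    # data and old parity chunks must both be read before anything is written.
    def raid5_small_write(old_data: bytes, old_parity: bytes, new_data: bytes):
        assert len(old_data) == len(old_parity) == len(new_data)
        new_parity = bytes(d0 ^ p ^ d1
                           for d0, p, d1 in zip(old_data, old_parity, new_data))
        return new_data, new_parity  # both chunks then get written back

The two reads up front are the latency cost; the XOR itself is cheap.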

But you have to test your setup in practice to see whether it hurts. A big
data warehousing system will be faster under RAID5 than under RAID1+0 because,
for the same number of disks, more of them end up in the stripeset, and the
more disks in the stripeset the more bandwidth you get.
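
As a rough back-of-the-envelope comparison (assuming N identical disks, one
disk's worth of parity for RAID5, and straight mirroring for RAID1+0; real
arrays will differ):

    # Effective data spindles in the stripeset, crude model only.
    def data_spindles(n_disks: int) -> dict:
        return {
            "raid5": n_disks - 1,     # one disk's worth of capacity is parity
            "raid1+0": n_disks // 2,  # half the disks are mirror copies
        }

    print(data_spindles(10))  # {'raid5': 9, 'raid1+0': 5}

With ten disks that's nine data spindles to stripe across instead of five.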

Even for OLTP systems, whether I've had success with RAID5 has depended largely
on the quality of the implementation. The Hitachi systems were amazing: they
had enough battery-backed cache that the extra latency of the parity read/write
cycle never really showed up at all. But that was a lot more than 128MB; I
think it had 1GB and could be expanded.

-- 
greg

