I just ran a handful of tests on a 14-disk array on a SCSI hardware
RAID card.

From some quick bonnie++ runs, it appears that RAID5 across all 14
disks is a bit faster than RAID50 and noticeably faster than RAID10...

Sample numbers for a 10 GB test file (speeds in KB/s):

                   RAID5    RAID50    RAID10
sequential write:  39728     37568     23533
read/write file:   13831     13289     11400
sequential read:   52184     51529     54222
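
In case anyone wants to reproduce this, an invocation along these
lines should produce comparable numbers (the mount point and user
below are placeholders, not the exact command I ran):

    # run as root, dropping privileges to nobody; 10240 MB = 10 GB test file
    bonnie++ -d /mnt/raid -s 10240 -u nobody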


Hardware is a Dell 2650 with dual Xeons, 4 GB of RAM, and a PERC3/DC
RAID card driving 14 external U320 15k RPM SCSI drives.  Software is
FreeBSD 4.8 with the default newfs settings.

The RAID arrays were configured with a 32k stripe size.  From informal
tests, going to a 64k stripe on the RAID10 didn't seem to make much
difference in the bonnie++ results (I didn't test it with RAID5 or
RAID50).  The usual advice is a larger stripe size for sequential
access and a smaller one for random access.

My concern is speed.  Any RAID config on this system has more disk
space than I will need for a LOOONG time.

My Postgres load is a heavy mix of selects, updates, and inserts;
i.e., it is a very actively updated and read database.
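
For what it's worth, a pgbench run against each config would probably
model this workload better than bonnie++ does, since its default
TPC-B-style transaction is a select/update/insert mix.  A rough
sketch, assuming pgbench from PostgreSQL's contrib is built (scale
factor and client counts here are arbitrary, not tuned values):

    createdb bench
    # initialize at scale factor 100 (roughly 1.5 GB of table data)
    pgbench -i -s 100 bench
    # 10 concurrent clients, 1000 transactions each
    pgbench -c 10 -t 1000 bench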

The conventional wisdom has been to use RAID10, but with 14 disks, I'm
kinda leaning toward RAID50 or perhaps just RAID5.

Has anyone else done similar tests of different RAID levels?  What
were your conclusions?

Raw output from bonnie++ available upon request.
