I ran some tests of hardware RAID versus all-software (ZFS) RAID and didn't see 
any differences that made either a clear winner.  For production platforms you're 
just as well off with JBODs.

This was with bonnie++ on a V240 running Solaris 10u3, against a 3511 array fully 
populated with 12 380-gig drives, single controller with 1 gig of cache and 415G 
firmware.
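
For reference, the bonnie++ runs looked roughly like this (the pool name and 
directory are just examples; the file size should be at least twice the box's 
RAM so the cache doesn't mask the disks):

    # run against the pool under test; -d is the target directory,
    # -s the total file size in MB (8 GB here), -u the user to run as
    bonnie++ -d /tank/bench -s 8192 -u nobody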

I tried:
HW RAID-5 (11 disks one hot spare)
2 HW RAID-5 LUNs, then ZFS mirroring on top of that

Then I made each disk an individual LUN and ran:
RAIDZ2 (11 disks in set, 1 hot spare)
RAID-10 across all 12 disks
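
Roughly, the ZFS-side layouts were built along these lines (the cXtYdZ names are 
placeholders for whatever the 3511 LUNs show up as on your box):

    # ZFS mirror on top of the two hardware RAID-5 LUNs
    zpool create tank mirror c2t0d0 c2t1d0

    # raidz2 over 11 single-disk LUNs plus one hot spare
    zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
        c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 \
        spare c2t11d0

    # "RAID-10": six striped mirror pairs across all 12 disks
    zpool create tank mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0 \
        mirror c2t4d0 c2t5d0 mirror c2t6d0 c2t7d0 \
        mirror c2t8d0 c2t9d0 mirror c2t10d0 c2t11d0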

Yeah, they weren't PRECISELY comparable setups, but they were useful tests of the 
sort of configurations you might run if handed random older hardware.

The ZFS RAID-10 came out ahead, particularly on reads; if I had to pick, I would 
go with that.  I understand the current ZIL cache-flush behavior means the HW RAID 
controller's cache is essentially wasted.  I've read this is being worked on, but 
it's not clear whether the fixes are in nv69 or not.  Perhaps once that is sorted 
out it will make sense again to use controller hardware you already have.  I have 
an nv69 box; if I can swipe a 3310 or 3511 array for it, I'll run bonnie++ on that 
later in the week.
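
In the meantime, if the array cache is battery-backed, one workaround that gets 
discussed is telling ZFS not to issue cache-flush requests at all.  This is a 
sketch only, assuming the zfs:zfs_nocacheflush tunable is present in your build 
(it isn't in every release, and it's only safe with non-volatile cache):

    # /etc/system -- skip ZFS cache-flush requests; requires a reboot,
    # and is only safe when the controller cache is battery-backed
    set zfs:zfs_nocacheflush = 1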
 
 