Solor Vox wrote, On 04/09/2010 09:16 AM:
So for argument's sake, let's say that of the
usable 4.5TB, 4TB is for large files of 8GB and up.  I also plan on
using either ext4 or xfs.

Another variable here is fsck time. We found JFS to have the most consistent fsck times (not the shortest, but never the longest). However, that was for backup drives with lots of files.
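
If fsck time matters for your workload, it is easy to measure on a scratch array before committing. A minimal sketch, assuming a test array at /dev/md0 (hypothetical device name) that has already been formatted, populated with a representative file set, and unmounted:

    #!/usr/bin/env python3
    """Time a read-only consistency check on a candidate filesystem.

    Assumes /dev/md0 is a scratch md array, already formatted,
    populated with files resembling the real workload, and unmounted.
    """
    import subprocess
    import time

    # Read-only check commands for the filesystems under consideration.
    # e2fsck -f -n forces a full check without modifying anything;
    # xfs_repair -n is the equivalent dry-run check for XFS.
    CHECKS = {
        "ext4": ["e2fsck", "-f", "-n", "/dev/md0"],
        "xfs":  ["xfs_repair", "-n", "/dev/md0"],
    }

    def time_fsck(fstype):
        start = time.time()
        # A non-zero exit just means the checker found issues; we only want the elapsed time.
        subprocess.run(CHECKS[fstype], check=False)
        return time.time() - start

    if __name__ == "__main__":
        fstype = "ext4"  # set to whichever filesystem the array currently holds
        print(f"{fstype} check took {time_fsck(fstype):.1f}s")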

While this all may seem like a bit much, getting it right can mean an
extra 30-50 MB/s or more from the array.  So, has anyone done this type
of optimization?  I'd really rather not spend a week or more testing
different values, as 6TB arrays can take several hours to build.

You've really got no option but to test.

I suggest you create a test regime that creates and destroys RAID arrays and tests them. Your tests don't need to be full sized, but you'd have to wait for the md to finish syncing.
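
As a rough illustration of that kind of regime, here is a minimal sketch in Python. It assumes four expendable test partitions (/dev/sdb1 through /dev/sde1, hypothetical names) and a free /dev/md0, builds a RAID5 array at each candidate chunk size, waits for the initial sync, and runs a crude sequential-write test with dd; substitute whatever benchmark reflects your real workload (fio, bonnie++, etc.):

    #!/usr/bin/env python3
    """Sketch of a create/test/destroy loop for md RAID chunk-size tuning.

    Assumptions: /dev/sdb1-/dev/sde1 are expendable test partitions and
    /dev/md0 is free.  Everything on them is destroyed.  Run as root.
    """
    import subprocess

    DEVICES = ["/dev/sdb1", "/dev/sdc1", "/dev/sdd1", "/dev/sde1"]
    CHUNK_SIZES_KB = [64, 128, 256, 512, 1024]

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def build_array(chunk_kb):
        run(["mdadm", "--create", "/dev/md0", "--level=5",
             f"--raid-devices={len(DEVICES)}", f"--chunk={chunk_kb}",
             "--run"] + DEVICES)
        # Block until the initial resync finishes so results are comparable.
        run(["mdadm", "--wait", "/dev/md0"])

    def destroy_array():
        run(["mdadm", "--stop", "/dev/md0"])
        for dev in DEVICES:
            run(["mdadm", "--zero-superblock", dev])

    def sequential_write_test():
        # Crude throughput test: 4GB of direct sequential writes to the raw array.
        # dd reports throughput on stderr; replace with fio/bonnie++ for real numbers.
        run(["dd", "if=/dev/zero", "of=/dev/md0", "bs=1M", "count=4096",
             "oflag=direct", "conv=fsync"])

    if __name__ == "__main__":
        for chunk in CHUNK_SIZES_KB:
            print(f"=== chunk size {chunk}K ===")
            build_array(chunk)
            sequential_write_test()
            destroy_array()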

We go with RAID1, with some minor exceptions which are RAID5. For those, we found the defaults are "good enough" unless you push the numbers right out to the extremes, where performance drops off massively.

Even the default settings should be enough to saturate gig ethernet.
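
For rough context on what "saturate" means here (back-of-envelope numbers, not measurements):

    # Gigabit ethernet tops out around 125 MB/s on the wire, and roughly
    # 110-118 MB/s of usable payload after TCP/IP and ethernet overhead.
    # Even a modestly tuned array comfortably exceeds that.
    wire_rate_MBps = 1_000_000_000 / 8 / 1_000_000      # 125.0
    payload_rate_MBps = wire_rate_MBps * 0.95           # ~119, assuming ~5% protocol overhead at 1500 MTU
    print(wire_rate_MBps, round(payload_rate_MBps, 1))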


--
Craig Falconer
