I should clarify that the first test mentioned below used the same gstripe setup as the later one, but did not specify a newfs blocksize (so it used the default):

#gstripe label -v -s 128k test /dev/mfid0 /dev/mfid2
#newfs -U /dev/stripe/test

sorry,
Ben



----------------------------------------
Hello,

I think this might be useful information, and am also hoping for a
little input.

We've been doing some FreeBSD benchmarking on Dell PE2950 systems with
Perc6 controllers (dual-quad Xeon, 16GB, Perc6=LSI card, mfi driver,
7.0-RELEASE).  There are two controllers in each system, and each has
two MD1000 disk shelves attached via its two 4x SAS interfaces (so 30 PD
available to each controller, 60 PD on the system).

My baseline was this - on linux 2.6.20 we're doing 800MB/s write and
greater read with this configuration: two raid6 volumes striped
into a raid0 volume using linux software raid, XFS filesystem.  Each
raid6 is a volume on one controller using 30 PD.  We've spent time
tuning this, more than I have with FreeBSD so far.
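For anyone wanting to compare, the Linux-side striping described above would look roughly like this. This is a hypothetical reconstruction, not our exact commands - the device names are placeholders and the real setup may have used different mdadm options:

```shell
# Hypothetical sketch of the Linux baseline: two hardware raid6 volumes
# (one per Perc6 controller) striped into a software raid0, with XFS on top.
# /dev/sdb and /dev/sdc stand in for the two controller volumes; the
# --chunk value is an assumption, not our measured-best setting.
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=128 /dev/sdb /dev/sdc
mkfs.xfs /dev/md0
mount /dev/md0 /test
```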

Initially I was getting strangely poor read results.  Here is one
example (before launching into quicker dd tests, I already had similarly
bad results from some more complete iozone tests):

time dd if=/dev/zero of=/test/deletafile bs=1M count=10240
10737418240 bytes transferred in 26.473629 secs (405589209 bytes/sec)
time dd if=/test/deletafile of=/dev/null bs=1M count=10240
10737418240 bytes transferred in 157.700367 secs (68087465 bytes/sec)
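As a quick sanity check on those numbers (just arithmetic, not part of the test run itself), dd's reported rate is simply total bytes divided by elapsed time:

```shell
# Reproduce dd's reported bytes/sec from the raw figures above.
bytes=10737418240          # 10 GiB transferred in each direction
awk -v b="$bytes" 'BEGIN {
    printf "write: %.0f bytes/sec\n", b / 26.473629    # ~405 MB/s
    printf "read:  %.0f bytes/sec\n", b / 157.700367   # ~68 MB/s
}'
```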

To make a very long story short, much better results were achieved in the
end simply by increasing the filesystem blocksize to the maximum (same dd
commands).  I'm running a more thorough test on this setup using iozone:

#gstripe label -v -s 128k test /dev/mfid0 /dev/mfid2
#newfs -U -b 65536 /dev/stripe/test

#write:  19.240875 secs (558052492 bytes/sec)
#read:  20.000606 secs (536854644 bytes/sec)
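If anyone wants to reproduce this, one way to confirm what block size newfs actually used (not something I showed above) is to read the superblock back with dumpfs:

```shell
# FreeBSD-only; assumes the gstripe/newfs commands above have been run.
# dumpfs prints the superblock fields; bsize should show 65536 after
# newfs -b 65536 (and 16384 for a default newfs).
dumpfs /dev/stripe/test | grep -m1 bsize
```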

I also set these in /boot/loader.conf - they didn't affect much in any
test, but the settings seemed reasonable so I kept them:
kern.geom.stripe.fast=1
vfs.hirunningspace=5242880
vfs.read_max=32
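In case it helps anyone checking the same tunables, the values can be read back at runtime with standard sysctl usage (nothing beyond the three settings above):

```shell
# Verify the loader.conf tunables took effect after reboot (FreeBSD).
sysctl kern.geom.stripe.fast vfs.hirunningspace vfs.read_max
```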

Any other suggestions to get best throughput?  There is also HW RAID
stripe size to adjust larger or smaller.  ZFS is also on the list for
testing.  Should I perhaps be running -CURRENT or -STABLE to get the best
results with ZFS?

-Ben

--
Benjeman Meekhof - UM ATLAS/AGLT2 Computing
[EMAIL PROTECTED]


_______________________________________________
freebsd-performance@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-performance
