Steven Hartland wrote:
Scott, I've sent this to you since, from reading around, it looks like you did the
original driver conversion and so may have an idea of the areas
I could look at; I hope you don't mind.
OK, something really strange is going on: write performance is ~140MB/s:
gstat:
dT: 0.505 flag_I 500000us sizeof 240 i -1
 L(q)  ops/s    r/s   kBps   ms/r    w/s    kBps   ms/w   %busy Name
    0   1100      4     63   13.2   1096  140313    1.2    57.8| da0
    0   1100      4     63   13.3   1096  140313    1.3    59.3| da0s1
whereas read is only ~42MB/s:
gstat:
dT: 0.505 flag_I 500000us sizeof 240 i -1
 L(q)  ops/s    r/s   kBps   ms/r    w/s    kBps   ms/w   %busy Name
    1    335    335  42836    2.8      0       0    0.0    93.3| da0
    1    335    335  42836    2.8      0       0    0.0    93.6| da0s1
First of all, you're only sending roughly 3GB of data through. Since
you have 2GB of RAM, you're likely getting a lot of write caching
from the OS. If you want more representative numbers, either test
with a much larger data set or with a much smaller amount of RAM.

Second, since you're going through the filesystem, there is a very
good chance that the filesystem blocks don't align well with the
array blocks. This hurts quite a bit on any controller, and I can
imagine it being extremely bad on a controller like this one. Try
doing your dd test straight to the device node. In my local testing
I was able to get about 400MB/sec across 6 disks in RAID-0. RAID-5
read should be almost as fast in a similar configuration (unless the
RAID stack checks parity on read, a question I can't answer). I would
expect RAID-5 write to be significantly slower due to the extra CPU
and memory bus overhead of the parity calculations. This will, of
course, also depend on the speed of your drives and of your PCI and
memory buses.
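
For example, to take the OS write cache largely out of the picture
while still going through the filesystem, you can size the test file
well above your 2GB of RAM. Something like this (the mount point
/mnt/testfile and the count of 8192, i.e. 8GB, are just illustrative
values, pick whatever fits your setup):

# write 8GB through the filesystem, then read it back
dd if=/dev/zero of=/mnt/testfile bs=1m count=8192
dd if=/mnt/testfile of=/dev/null bs=1m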
The dd commands that I usually use:
dd if=/dev/zero of=/dev/da0 bs=1m
dd if=/dev/da0 of=/dev/null bs=1m
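
Keep in mind that the first command writes over the raw device and
will destroy whatever filesystem is on da0, so only run it against an
array you can afford to wipe. If you'd rather not write or read the
entire device, dd's count= flag bounds the run; the 16384 (16GB)
below is just an illustrative value well above your RAM size:

dd if=/dev/zero of=/dev/da0 bs=1m count=16384
dd if=/dev/da0 of=/dev/null bs=1m count=16384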