----- Original Message ----- From: "Scott Long" <[EMAIL PROTECTED]>
Ok, something really strange is going on. Write performance is ~140MB/s:
gstat:
dT: 0.505  flag_I 500000us  sizeof 240  i -1
L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
   0   1100      4     63   13.2   1096 140313    1.2   57.8| da0
   0   1100      4     63   13.3   1096 140313    1.3   59.3| da0s1

whereas read is ~42MB/s:
gstat:
dT: 0.505  flag_I 500000us  sizeof 240  i -1
L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
   1    335    335  42836    2.8      0      0    0.0   93.3| da0
   1    335    335  42836    2.8      0      0    0.0   93.6| da0s1
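(For context, here is a sketch of how figures like the above are typically gathered; the interval and the device filter are assumptions chosen to match the output shown, not the exact command used:)

# Watch the array while the dd load runs in another terminal,
# sampling every 0.5s and showing only the da0 devices.
gstat -I 500000us -f '^da0'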

First of all, you're only sending roughly 3GB of data through. Since you have 2GB of RAM, you're likely getting a lot of write caching from the OS. If you want more representative numbers, either test with a much larger data set or with a much smaller RAM size.

Second, since you're going through the filesystem, there is a very good chance that the filesystem blocks are not aligning well with the array blocks. This hurts quite a bit on any controller, and I can imagine it being extremely bad on a controller like this one. Try doing your DD test straight to the device node.

In my local testing I was able to get about 400MB/sec across 6 disks in RAID-0. RAID-5 read should be almost as good in a similar configuration (unless the RAID stack is checking parity on read, a question that I cannot answer). I would expect RAID-5 write to be significantly lower due to the extra CPU and memory bus overhead of doing the parity calculations. This will of course also depend on the speed of your drives and the speed of your PCI and memory bus.

The DD commands that I usually use:

dd if=/dev/zero of=/dev/da0 bs=1m
dd if=/dev/da0 of=/dev/null bs=1m
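(Two sketches for the suggestions above; the 512M value and the device names are examples only, not taken from this setup. Capping usable RAM via a loader tunable reduces the write-cache effect without needing a bigger data set, and the slice/partition start sectors show whether filesystem blocks line up with the array's stripe size:)

# Cap usable RAM for the next boot (example value), in /boot/loader.conf:
hw.physmem="512M"

# Check where the slice and its partitions start; a start sector that
# is not a multiple of the array stripe size means filesystem blocks
# straddle stripe boundaries.
fdisk da0
bsdlabel da0s1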


Just retried with a ~10GB data set:
Write to FS:
dd if=/dev/zero of=.testfile bs=1m count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 92.517222 secs (113338466 bytes/sec)

Read from FS:
dd if=.testfile of=/dev/null bs=1m count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 225.723348 secs (46454034 bytes/sec)


Read from device:
dd if=/dev/da0 of=/dev/null bs=1m count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 200.723666 secs (52239779 bytes/sec)

N.B. I didn't do the direct write to the device, as the array is
the system drive and I don't think it would take kindly to that :)
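(If a raw write test is ever wanted without touching the live filesystems, one possibility, assuming a swap partition exists on the array, with da0s1b here being purely hypothetical, is to disable swap and write to that partition only:)

# Hypothetical sketch: only safe if da0s1b really is swap and nothing
# else uses it.
swapoff /dev/da0s1b
dd if=/dev/zero of=/dev/da0s1b bs=1m
swapon /dev/da0s1b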

So it doesn't seem like caching is the issue, and since others are seeing
similar performance problems on other RAID controllers it could well
not be a driver issue, but I'm not ruling that out just yet as it could be
the same problem in each respective driver.
So far the Highpoint 1820a and 3ware ?? ( Pete, can you fill in the blank here )
are exhibiting the same issue.
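(One thing that might be worth checking, given that the reads above sit at L(q) 1 and ~335 ops/s while writes push ~1096 ops/s: how many tagged commands the driver keeps outstanding, and how much read-ahead the filesystem is doing. A sketch, assuming the standard CAM tools apply to this controller:)

# Number of concurrent tagged commands the driver is using for da0.
camcontrol tags da0 -v

# Current cluster read-ahead setting for the filesystem layer.
sysctl vfs.read_max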

   Steve


