Benjeman J. Meekhof wrote:

> My baseline was this - on linux 2.6.20 we're doing 800MB/s write and
> greater read with this configuration:  2 raid6 volumes striped
> into a raid0 volume using linux software raid, XFS filesystem.  Each
> raid6 is a volume on one controller using 30 PD.  We've spent time
> tuning this, more than I have with FreeBSD so far.
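
For reference, the Linux baseline described above could be set up roughly
like this (a sketch only; /dev/sdb and /dev/sdc are placeholders for the
two hardware RAID6 volumes exported by the controllers):

  # stripe the two controller volumes into one md RAID0 device
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
  # put XFS on the stripe and mount it where the dd test runs
  mkfs.xfs /dev/md0
  mkdir -p /test
  mount /dev/md0 /test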

> time dd if=/dev/zero of=/test/deletafile bs=1M count=10240
> 10737418240 bytes transferred in 26.473629 secs (405589209 bytes/sec)
>  time dd if=/test/deletafile of=/dev/null bs=1M count=10240
> 10737418240 bytes transferred in 157.700367 secs (68087465 bytes/sec)

I had a similar ratio of results when comparing FreeBSD+UFS to most
high-performance Linux file systems (XFS is really great!), so I'd guess
it's about as fast as you can get with this combination.

> Any other suggestions to get best throughput?  There is also HW RAID
> stripe size to adjust larger or smaller.  ZFS is also on the list for
> testing.  Should I perhaps be running -CURRENT or -STABLE to get the best
> results with ZFS?

ZFS can be up to 50% faster on tests such as yours, so you should
definitely try it. Unfortunately, it's not yet stable and you probably
don't want to use it in production. AFAIK there are no significant
differences between ZFS in -CURRENT and -STABLE.
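
If you do get to the ZFS test, a minimal run along the same lines might
look like this (the pool name and the da0/da1 device names are
assumptions; substitute whatever volumes your controllers expose):

  # stripe a pool across the two controller volumes; it mounts at /test by default
  zpool create test da0 da1
  # repeat the same dd write/read test
  dd if=/dev/zero of=/test/deletafile bs=1M count=10240
  dd if=/test/deletafile of=/dev/null bs=1M count=10240
  zpool destroy test

If the controllers can export the disks individually, you could also let
ZFS handle redundancy itself with raidz2 vdevs instead of the hardware
RAID6 volumes.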


