On 14/02/2010 17:28, Jonathan Belson wrote:
> After reading some earlier threads about zfs performance, I decided to test my
> own server.  I found the results rather surprising...

Thanks to everyone who responded. I experimented with my loader.conf settings, ending up with the following:

vm.kmem_size="1280M"
vfs.zfs.prefetch_disable="1"

That kmem_size seems quite big for a machine with only (!) 2GB of RAM, but I wanted to see if it gave better results than 1024MB (it did: an extra ~5MB/s).

The rest of the settings are defaults:

vm.kmem_size_scale: 3
vm.kmem_size_max: 329853485875
vm.kmem_size_min: 0
vm.kmem_size: 1342177280
vfs.zfs.arc_min: 104857600
vfs.zfs.arc_max: 838860800
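
(For anyone wanting to compare with their own box, those values can be read back at runtime with sysctl, e.g.:

# sysctl vm.kmem_size vfs.zfs.arc_min vfs.zfs.arc_max

The OID names are as they appear on my system; they may differ slightly between FreeBSD releases.)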


My numbers are a lot better with these settings:

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 63.372441 secs (33092492 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 60.647568 secs (34579326 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 68.241539 secs (30731312 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 68.722902 secs (30516057 bytes/sec)

Writing a 200MB file to a UFS partition gives around 37MB/s, so the ZFS overhead is costing me a few MB per second. I'm guessing that the hard drives themselves have rather sucky performance (I used to use Samsung SpinPoints, but receiving three faulty ones in a row put me off them).
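
(If you want to repeat the UFS comparison yourself, a similar dd run against any UFS-backed path should do; the output path below is just an example, use whatever UFS mount you have spare:

# dd if=/dev/zero of=/usr/tmp/zerofile.000 bs=1M count=200
)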


Reading from a raw device:

# dd if=/dev/ad4s1a of=/dev/null bs=1M count=2000
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 11.286550 secs (95134635 bytes/sec)

# dd if=/dev/ad4s1a of=/dev/null bs=1M count=2000
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 11.445131 secs (93816473 bytes/sec)

# dd if=/dev/ad4s1a of=/dev/null bs=1M count=2000
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 11.284961 secs (95148032 bytes/sec)


Reading from a ZFS file:

# dd if=/tank/test/zerofile.000 of=/dev/null bs=1M count=4000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 25.643737 secs (81780281 bytes/sec)

# dd if=/tank/test/zerofile.000 of=/dev/null bs=1M count=4000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 25.444214 secs (82421567 bytes/sec)

# dd if=/tank/test/zerofile.000 of=/dev/null bs=1M count=4000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 25.572888 secs (82006851 bytes/sec)
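
(One caveat when repeating the read tests: later runs can be served partly from the ARC rather than the disks. To be sure you're measuring the drives, export and re-import the pool, or reboot, between runs, e.g.:

# zpool export tank && zpool import tank
)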


So, the arc_max value recommended by the ZFS tuning wiki seems to have been the main brake on performance.
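
(A handy way to watch how big the ARC actually grows while testing is the arcstats sysctls, e.g.:

# sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max

Again, the exact OID names may vary between releases.)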

Cheers,

--Jon