Re: More zfs benchmarks

2010-02-15 Thread Boris Samorodov
On Sun, 14 Feb 2010 22:58:58 + Jonathan Belson wrote:
 On 14 Feb 2010, at 21:15, Joshua Boyd wrote:

  Here are my relevant settings:
  
  vfs.zfs.prefetch_disable=0
^^ [1]
 I already had prefetch disabled, but ...

Just a note: prefetch is not disabled here [1].
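
(Just to spell it out, actually disabling prefetch would look like this in
/boot/loader.conf, with a sysctl to verify the live value after reboot --
shown only as a sketch:)

vfs.zfs.prefetch_disable=1
# sysctl vfs.zfs.prefetch_disable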

-- 
WBR, Boris Samorodov (bsam)
Research Engineer, http://www.ipt.ru Telephone & Internet SP
FreeBSD Committer, http://www.FreeBSD.org The Power To Serve


Re: More zfs benchmarks

2010-02-15 Thread Lorenzo On The Lists

On 14.02.10 18:28, Jonathan Belson wrote:


The machine is a Dell SC440, dual core 2GHz E2180, 2GB of RAM and ICH7 SATA300 
controller.  There are three Hitachi 500GB drives (HDP725050GLA360) in a raidz1 
configuration (version 13).  I'm running amd64 7.2-STABLE from 14th Jan.

First of all, I tried creating a 200MB file on / (the only non-zfs partition):



..snip..

Hi,

FYI,

I just made the same tests on FreeBSD 8.0-STABLE #4: Thu Dec  3 
19:00:06 CET 2009, 4GB RAM, zpool comprised of:


NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
tank  1.81T  1.57T   251G    86%  ONLINE  -

NAME        STATE     READ WRITE CKSUM
tank        ONLINE       0     0     0
  mirror    ONLINE       0     0     0
    ad10    ONLINE       0     0     0
    ad12    ONLINE       0     0     0

zpool upgrade
This system is currently running ZFS pool version 13.

I'm getting near-hardware performance on all tests, i.e. 94MB/s 
minimum.  I tried with 200MB, 2000MB, 4000MB and 8000MB files repeatedly.  
All wonderful.


E.g.:

dd if=/tank/testfs/testfile.dat bs=1m of=/dev/null count=8000
8388608000 bytes transferred in 83.569786 secs (100378479 bytes/sec)

marx# dd if=/tank/testfs/testfile.dat bs=1m of=/dev/null count=8000
8388608000 bytes transferred in 78.234149 secs (107224378 bytes/sec)
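
(The write side of the test was presumably the mirror-image command, along
these lines -- illustrative, not copied from the box:)

# dd if=/dev/zero of=/tank/testfs/testfile.dat bs=1m count=8000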

Did repeated writing and reading. I have NO ZFS-related tunables at all 
in /boot/loader.conf. All left to self-tuning and defaults as advised 
since 8.0.
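
(A quick way to double-check that nothing ZFS-related is being set, for
anyone reproducing this -- illustrative:)

# grep -i zfs /boot/loader.conf
# sysctl vfs.zfs.arc_max vfs.zfs.prefetch_disable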


kstat.zfs.misc.arcstats.memory_throttle_count: 0 at all times.

Regards,

Lorenzo



Re: More zfs benchmarks

2010-02-15 Thread Jeremy Chadwick
On Sun, Feb 14, 2010 at 05:28:28PM +, Jonathan Belson wrote:
 Hiya
 
 After reading some earlier threads about zfs performance, I decided to test 
 my own server.  I found the results rather surprising...

Below are my results from my home machine.  Note that my dd size and
count differ from what the OP provided.

I should note that powerd(8) is in effect on this box; I probably should
have disabled it and forced the CPU frequency to be at max before doing
these tests.
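
(For the record, pinning the CPU at full speed for the duration of a run
would look roughly like this -- the frequency value depends entirely on the
hardware, so treat it as a sketch:)

# /etc/rc.d/powerd stop
# sysctl dev.cpu.0.freq_levels
# sysctl dev.cpu.0.freq=3000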

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |

# uname -a
FreeBSD icarus.home.lan 8.0-STABLE FreeBSD 8.0-STABLE #0: Sat Jan 16 17:48:04 
PST 2010 r...@icarus.home.lan:/usr/obj/usr/src/sys/X7SBA_RELENG_8_amd64  
amd64

# uptime
 6:51AM  up 29 days, 12:48, 2 users, load averages: 0.06, 0.04, 0.01

# sysctl hw.machine hw.model hw.ncpu hw.physmem hw.usermem hw.realmem 
hw.pagesizes
hw.machine: amd64
hw.model: Intel(R) Core(TM)2 Duo CPU E8400  @ 3.00GHz
hw.ncpu: 2
hw.physmem: 4285317120
hw.usermem: 3520425984
hw.realmem: 5100273664
hw.pagesizes: 4096 2097152 0

# sysctl vm.kmem_size vm.kmem_size_min vm.kmem_size_max vm.kmem_size_scale
vm.kmem_size: 1378439168
vm.kmem_size_min: 0
vm.kmem_size_max: 329853485875
vm.kmem_size_scale: 3

# dmesg | egrep '(ata[57]|atapci0)'
atapci0: <Intel ICH9 SATA300 controller> port 0x1c50-0x1c57,0x1c44-0x1c47,0x1c48-0x1c4f,0x1c40-0x1c43,0x18e0-0x18ff mem 0xdc000800-0xdc000fff irq 17 at device 31.2 on pci0
atapci0: [ITHREAD]
atapci0: AHCI called from vendor specific driver
atapci0: AHCI v1.20 controller with 6 3Gbps ports, PM supported
ata2: ATA channel 0 on atapci0
ata3: ATA channel 1 on atapci0
ata4: ATA channel 2 on atapci0
ata5: ATA channel 3 on atapci0
ata5: [ITHREAD]
ata6: ATA channel 4 on atapci0
ata7: ATA channel 5 on atapci0
ata7: [ITHREAD]
ad10: 953869MB <WDC WD1001FALS-00J7B1 05.00K05> at ata5-master UDMA100 SATA 3Gb/s
ad14: 953869MB <WDC WD1001FALS-00J7B1 05.00K05> at ata7-master UDMA100 SATA 3Gb/s

# egrep '^[a-z]' /boot/loader.conf
kern.maxdsiz=1536M
kern.dfldsiz=1536M
kern.maxssiz=256M
hint.sio.1.disabled=1
vm.pmap.pg_ps_enabled=1
vfs.zfs.prefetch_disable=1
debug.cpufreq.lowest=1500

# zpool status
  pool: storage
 state: ONLINE
 scrub: scrub stopped after 0h27m with 0 errors on Fri Feb 12 10:55:49 2010
config:

NAME        STATE     READ WRITE CKSUM
storage     ONLINE       0     0     0
  mirror    ONLINE       0     0     0
    ad10    ONLINE       0     0     0
    ad14    ONLINE       0     0     0

errors: No known data errors

kstat.zfs.misc.arcstats before tests
==
kstat.zfs.misc.arcstats.hits: 102892520
kstat.zfs.misc.arcstats.misses: 1043985
kstat.zfs.misc.arcstats.demand_data_hits: 100502054
kstat.zfs.misc.arcstats.demand_data_misses: 1010714
kstat.zfs.misc.arcstats.demand_metadata_hits: 2390466
kstat.zfs.misc.arcstats.demand_metadata_misses: 33271
kstat.zfs.misc.arcstats.prefetch_data_hits: 0
kstat.zfs.misc.arcstats.prefetch_data_misses: 0
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 0
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 0
kstat.zfs.misc.arcstats.mru_hits: 4675003
kstat.zfs.misc.arcstats.mru_ghost_hits: 11655
kstat.zfs.misc.arcstats.mfu_hits: 98217517
kstat.zfs.misc.arcstats.mfu_ghost_hits: 15079
kstat.zfs.misc.arcstats.deleted: 2456180
kstat.zfs.misc.arcstats.recycle_miss: 6236
kstat.zfs.misc.arcstats.mutex_miss: 1238
kstat.zfs.misc.arcstats.evict_skip: 0
kstat.zfs.misc.arcstats.hash_elements: 5753
kstat.zfs.misc.arcstats.hash_elements_max: 23704
kstat.zfs.misc.arcstats.hash_collisions: 643164
kstat.zfs.misc.arcstats.hash_chains: 229
kstat.zfs.misc.arcstats.hash_chain_max: 5
kstat.zfs.misc.arcstats.p: 839285616
kstat.zfs.misc.arcstats.c: 841024368
kstat.zfs.misc.arcstats.c_min: 107690560
kstat.zfs.misc.arcstats.c_max: 861524480
kstat.zfs.misc.arcstats.size: 96783432
kstat.zfs.misc.arcstats.hdr_size: 1196624
kstat.zfs.misc.arcstats.l2_hits: 258
kstat.zfs.misc.arcstats.l2_misses: 0
kstat.zfs.misc.arcstats.l2_feeds: 3337
kstat.zfs.misc.arcstats.l2_rw_clash: 0
kstat.zfs.misc.arcstats.l2_writes_sent: 576
kstat.zfs.misc.arcstats.l2_writes_done: 576
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 6
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 2
kstat.zfs.misc.arcstats.l2_evict_reading: 0
kstat.zfs.misc.arcstats.l2_free_on_write: 797
kstat.zfs.misc.arcstats.l2_abort_lowmem: 14
kstat.zfs.misc.arcstats.l2_cksum_bad: 0
kstat.zfs.misc.arcstats.l2_io_error: 0
kstat.zfs.misc.arcstats.l2_size: 0
kstat.zfs.misc.arcstats.l2_hdr_size: 0
kstat.zfs.misc.arcstats.memory_throttle_count: 8929



test #1 (327,680,000 bytes)
=
# dd if=/dev/zero of=/storage/test01 bs=64k count=5000
5000+0 records in
5000+0 

Re: More zfs benchmarks

2010-02-15 Thread Jonathan Belson

On 14/02/2010 17:28, Jonathan Belson wrote:

After reading some earlier threads about zfs performance, I decided to test my 
own server.  I found the results rather surprising...


Thanks to everyone who responded.  I experimented with my loader.conf settings, 
leaving me with the following:


vm.kmem_size=1280M
vfs.zfs.prefetch_disable=1

That kmem_size seems quite big for a machine with only (!) 2GB of RAM, but I 
wanted to see if it gave better results than 1024MB (it did, an extra ~5MB/s).


The rest of the settings are defaults:

vm.kmem_size_scale: 3
vm.kmem_size_max: 329853485875
vm.kmem_size_min: 0
vm.kmem_size: 1342177280
vfs.zfs.arc_min: 104857600
vfs.zfs.arc_max: 838860800


My numbers are a lot better with these settings:

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 63.372441 secs (33092492 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 60.647568 secs (34579326 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 68.241539 secs (30731312 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 68.722902 secs (30516057 bytes/sec)

Writing a 200MB file to a UFS partition gives around 37MB/s, so the zfs overhead 
is costing me a few MB per second.  I'm guessing that the hard drives themselves 
have rather sucky performance (I used to use Spinpoints, but receiving three 
faulty ones in a row put me off them).
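
(A non-destructive way to gauge what the drives themselves can manage, if
anyone is curious -- the device name is just an example:)

# diskinfo -tv /dev/ad4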



Reading from a raw device:

# dd if=/dev/ad4s1a of=/dev/null bs=1M count=2000
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 11.286550 secs (95134635 bytes/sec)

# dd if=/dev/ad4s1a of=/dev/null bs=1M count=2000
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 11.445131 secs (93816473 bytes/sec)

# dd if=/dev/ad4s1a of=/dev/null bs=1M count=2000
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 11.284961 secs (95148032 bytes/sec)


Reading from zfs file:

# dd if=/tank/test/zerofile.000 of=/dev/null bs=1M count=4000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 25.643737 secs (81780281 bytes/sec)

# dd if=/tank/test/zerofile.000 of=/dev/null bs=1M count=4000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 25.444214 secs (82421567 bytes/sec)

# dd if=/tank/test/zerofile.000 of=/dev/null bs=1M count=4000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 25.572888 secs (82006851 bytes/sec)


So, the value of arc_max from the zfs tuning wiki seemed to be the main brake on 
performance.
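
(Put differently: dropping the wiki's vfs.zfs.arc_max=100M line and letting
the system pick its ~800MB default was the big win.  A quick sanity check of
the live values, just as a sketch:)

# sysctl vfs.zfs.arc_min vfs.zfs.arc_max vm.kmem_size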


Cheers,

--Jon


Re: More zfs benchmarks

2010-02-15 Thread Freddie Cash
On Mon, Feb 15, 2010 at 9:51 AM, Jonathan Belson j...@witchspace.com wrote:

 On 14/02/2010 17:28, Jonathan Belson wrote:

 After reading some earlier threads about zfs performance, I decided to
 test my own server.  I found the results rather surprising...


 Thanks to everyone who responded.  I experimented with my load.conf
 settings, leaving me with the following:

 vm.kmem_size=1280M
 vfs.zfs.prefetch_disable=1

 That kmem_size seems quite big for a machine with only (!) 2GB of RAM, but
 I wanted to see if it gave better results than 1024MB (it did, an extra
 ~5MB/s).


For a system with 2 GB of RAM and possibly slow hard drives, consider adding
a cache vdev (L2ARC).  The 4 GB and larger USB flash drives are getting to
be pretty fast for reads (which is what the L2ARC is for).
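
(Adding one is a single command; the pool and device names below are only
examples and would need to match the actual pool and whatever the USB stick
attaches as:)

# zpool add tank cache da0
# zpool iostat -v tank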

On my home system, which is a 32-bit FreeBSD 8-STABLE box with a 3.0 GHz P4
and 2 GB of RAM, adding a 4 GB Transcend JetFlash has done wonders for
improving stability and read speed.  Most of my apps now load from the USB
stick instead of the slow raidz1 vdev (3x 120 GB SATA drives).

Haven't done any real benchmarks yet (still upgrading to KDE 4.4), but
things feel smoother, and it hasn't locked up since adding the USB stick.
 On this box, running ktorrent 24/7 used to lock up the box after 3-5 days
(can't even toggle numlock).

This box uses a kmem_max of 1 GB and an arc_max of 512 MB, with a 4 GB
L2ARC.  :)
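
(Expressed as /boot/loader.conf lines, that would be roughly the following
-- values as described above, not copied from the machine:)

vm.kmem_size_max="1024M"
vfs.zfs.arc_max="512M"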

-- 
Freddie Cash
fjwc...@gmail.com


Re: More zfs benchmarks

2010-02-15 Thread Miroslav Lachman

Jeremy Chadwick wrote:

On Sun, Feb 14, 2010 at 05:28:28PM +, Jonathan Belson wrote:

Hiya

After reading some earlier threads about zfs performance, I decided to test my 
own server.  I found the results rather surprising...


Below are my results from my home machine.  Note that my dd size and
count differ from what the OP provided.

I should note that powerd(8) is in effect on this box; I probably should
have disabled it and forced the CPU frequency to be at max before doing
these tests.


I did the same tests as you on my backup storage server, an HP ML110 G5 with 
4x 1TB Samsung drives in RAIDZ.


Unfortunately there is no kstat.zfs.misc.arcstats.memory_throttle_count 
sysctl on FreeBSD 7.2.


I can run this test on a Sun Fire X2100 with 4GB RAM and 2x 500GB Hitachi 
drives in a ZFS mirror on FreeBSD 7.2 (let me know if somebody is 
interested in the results for comparison).



r...@kiwi ~/# uname -a
FreeBSD kiwi.codelab.cz 7.2-RELEASE-p4 FreeBSD 7.2-RELEASE-p4 #0: Fri 
Oct  2 08:22:32 UTC 2009 
r...@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64


r...@kiwi ~/# uptime
 6:46PM  up 6 days,  7:30, 1 user, load averages: 0.00, 0.00, 0.00

r...@kiwi ~/# sysctl hw.machine hw.model hw.ncpu hw.physmem hw.usermem 
hw.realmem hw.pagesizes

hw.machine: amd64
hw.model: Intel(R) Pentium(R) Dual  CPU  E2160  @ 1.80GHz
hw.ncpu: 2
hw.physmem: 5219966976
hw.usermem: 801906688
hw.realmem: 5637144576
sysctl: unknown oid 'hw.pagesizes'

r...@kiwi ~/# sysctl vm.kmem_size vm.kmem_size_min vm.kmem_size_max 
vm.kmem_size_scale

vm.kmem_size: 1684733952
vm.kmem_size_min: 0
vm.kmem_size_max: 3865468109
vm.kmem_size_scale: 3

r...@kiwi ~/# dmesg | egrep '(ata[01]|atapci0)'
atapci0: <Intel ICH9 SATA300 controller> port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0x1c10-0x1c1f,0x1c00-0x1c0f at device 31.2 on pci0

ata0: ATA channel 0 on atapci0
ata0: [ITHREAD]
ata1: ATA channel 1 on atapci0
ata1: [ITHREAD]
ad0: 953869MB <SAMSUNG HD103UJ 1AA01113> at ata0-master SATA300
ad1: 953869MB <SAMSUNG HD103UJ 1AA01113> at ata0-slave SATA300
ad2: 953869MB <SAMSUNG HD103UJ 1AA01113> at ata1-master SATA300
ad3: 953869MB <SAMSUNG HD103UJ 1AA01113> at ata1-slave SATA300

r...@kiwi ~/# egrep '^[a-z]' /boot/loader.conf
hw.bge.allow_asf=1

r...@kiwi ~/# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
  raidz1ONLINE   0 0 0
ad0 ONLINE   0 0 0
ad1 ONLINE   0 0 0
ad2 ONLINE   0 0 0
ad3 ONLINE   0 0 0

errors: No known data errors


before tests
r...@kiwi ~/# sysctl kstat.zfs.misc.arcstats
kstat.zfs.misc.arcstats.hits: 350294273
kstat.zfs.misc.arcstats.misses: 8369056
kstat.zfs.misc.arcstats.demand_data_hits: 4336959
kstat.zfs.misc.arcstats.demand_data_misses: 135936
kstat.zfs.misc.arcstats.demand_metadata_hits: 267825050
kstat.zfs.misc.arcstats.demand_metadata_misses: 6177625
kstat.zfs.misc.arcstats.prefetch_data_hits: 138128
kstat.zfs.misc.arcstats.prefetch_data_misses: 400434
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 77994136
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 1655061
kstat.zfs.misc.arcstats.mru_hits: 158218094
kstat.zfs.misc.arcstats.mru_ghost_hits: 9777
kstat.zfs.misc.arcstats.mfu_hits: 114654575
kstat.zfs.misc.arcstats.mfu_ghost_hits: 244807
kstat.zfs.misc.arcstats.deleted: 9904481
kstat.zfs.misc.arcstats.recycle_miss: 2855906
kstat.zfs.misc.arcstats.mutex_miss: 9362
kstat.zfs.misc.arcstats.evict_skip: 1483848
kstat.zfs.misc.arcstats.hash_elements: 0
kstat.zfs.misc.arcstats.hash_elements_max: 553646
kstat.zfs.misc.arcstats.hash_collisions: 8012499
kstat.zfs.misc.arcstats.hash_chains: 15382
kstat.zfs.misc.arcstats.hash_chain_max: 16
kstat.zfs.misc.arcstats.p: 1107222849
kstat.zfs.misc.arcstats.c: 1263550464
kstat.zfs.misc.arcstats.c_min: 52647936
kstat.zfs.misc.arcstats.c_max: 1263550464
kstat.zfs.misc.arcstats.size: 1263430144


test #1 (327,680,000 bytes) [~412MB/s - buffered]
=
r...@kiwi ~/# dd if=/dev/zero of=/tank/test01 bs=64k count=5000
5000+0 records in
5000+0 records out
327680000 bytes transferred in 0.758220 secs (432170107 bytes/sec)

test #1 (kstat.zfs.misc.arcstats)
===
r...@kiwi ~/# sysctl kstat.zfs.misc.arcstats
kstat.zfs.misc.arcstats.hits: 350294422
kstat.zfs.misc.arcstats.misses: 8369059
kstat.zfs.misc.arcstats.demand_data_hits: 4337042
kstat.zfs.misc.arcstats.demand_data_misses: 135936
kstat.zfs.misc.arcstats.demand_metadata_hits: 267825116
kstat.zfs.misc.arcstats.demand_metadata_misses: 6177628
kstat.zfs.misc.arcstats.prefetch_data_hits: 138128
kstat.zfs.misc.arcstats.prefetch_data_misses: 400434
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 77994136
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 1655061
kstat.zfs.misc.arcstats.mru_hits: 158218145
kstat.zfs.misc.arcstats.mru_ghost_hits: 9777

More zfs benchmarks

2010-02-14 Thread Jonathan Belson
Hiya

After reading some earlier threads about zfs performance, I decided to test my 
own server.  I found the results rather surprising...

The machine is a Dell SC440, dual core 2GHz E2180, 2GB of RAM and ICH7 SATA300 
controller.  There are three Hitachi 500GB drives (HDP725050GLA360) in a raidz1 
configuration (version 13).  I'm running amd64 7.2-STABLE from 14th Jan.


First of all, I tried creating a 200MB file on / (the only non-zfs partition):

# dd if=/dev/zero of=/root/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 6.158355 secs (34053769 bytes/sec)

# dd if=/dev/zero of=/root/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 5.423107 secs (38670674 bytes/sec)

# dd if=/dev/zero of=/root/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 6.113258 secs (34304982 bytes/sec)


Next, I tried creating a 200MB file on a zfs partition:

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 58.540571 secs (3582391 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 46.867240 secs (4474665 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 21.145221 secs (9917853 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 19.387938 secs (10816787 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 21.378161 secs (9809787 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 23.774958 secs (8820844 bytes/sec)

Ouch!  Ignoring the first result, that's still over three times slower than the 
non-zfs test.


With a 2GB test file:

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 547.901945 secs (3827605 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 595.052017 secs (3524317 bytes/sec)

# dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 517.326470 secs (4053827 bytes/sec)

Even worse :-(


Reading 2GB from a raw device:

dd if=/dev/ad4s1a of=/dev/null bs=1M count=2000
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 13.914145 secs (77169084 bytes/sec)

 
Reading 2GB from a zfs partition (unmounting each time):

dd if=/tank/test/zerofile.000 of=/dev/null bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 29.905155 secs (70126772 bytes/sec)

dd if=/tank/test/zerofile.000 of=/dev/null bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 32.557361 secs (64414066 bytes/sec)

dd if=/tank/test/zerofile.000 of=/dev/null bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 34.137874 secs (61431828 bytes/sec)

For reading, there seems to be much less of a disparity in performance.

I notice that one drive is on atapci0 and the other two are on atapci1, but 
surely it wouldn't make this much of a difference to write speeds?
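
(A quick way to confirm which controller and channel each disk is attached
to, in case anyone wants to rule that out:)

# atacontrol list
# dmesg | grep '^ad[0-9]'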

Cheers,

--Jon



Re: More zfs benchmarks

2010-02-14 Thread Artem Belevich
Can you check if kstat.zfs.misc.arcstats.memory_throttle_count sysctl
increments during your tests?
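
(Something like this in a second terminal while the dd runs would show it
-- just a sketch:)

# while true; do sysctl kstat.zfs.misc.arcstats.memory_throttle_count; sleep 5; done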

ZFS self-throttles writes if it thinks the system is running low on
memory.  Unfortunately, on FreeBSD the 'free' list is a *very*
conservative indication of available memory, so ZFS often starts
throttling before it's really needed.  With only 2GB in the system,
that's probably what slows you down.

The code is in arc_memory_throttle() in
sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c, if anyone's
curious.

--Artem



On Sun, Feb 14, 2010 at 9:28 AM, Jonathan Belson j...@witchspace.com wrote:
 Hiya

 After reading some earlier threads about zfs performance, I decided to test 
 my own server.  I found the results rather surprising...

 The machine is a Dell SC440, dual core 2GHz E2180, 2GB of RAM and ICH7 
 SATA300 controller.  There are three Hitachi 500GB drives (HDP725050GLA360) 
 in a raidz1 configuration (version 13).  I'm running amd64 7.2-STABLE from 
 14th Jan.


 First of all, I tried creating a 200MB file on / (the only non-zfs partition):

 # dd if=/dev/zero of=/root/zerofile.000 bs=1M count=200
 200+0 records in
 200+0 records out
 209715200 bytes transferred in 6.158355 secs (34053769 bytes/sec)

 # dd if=/dev/zero of=/root/zerofile.000 bs=1M count=200
 200+0 records in
 200+0 records out
 209715200 bytes transferred in 5.423107 secs (38670674 bytes/sec)

 # dd if=/dev/zero of=/root/zerofile.000 bs=1M count=200
 200+0 records in
 200+0 records out
 209715200 bytes transferred in 6.113258 secs (34304982 bytes/sec)


 Next, I tried creating a 200MB file on a zfs partition:

 # dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=200
 200+0 records in
 200+0 records out
 209715200 bytes transferred in 58.540571 secs (3582391 bytes/sec)

 # dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=200
 200+0 records in
 200+0 records out
 209715200 bytes transferred in 46.867240 secs (4474665 bytes/sec)

 # dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=200
 200+0 records in
 200+0 records out
 209715200 bytes transferred in 21.145221 secs (9917853 bytes/sec)

 # dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=200
 200+0 records in
 200+0 records out
 209715200 bytes transferred in 19.387938 secs (10816787 bytes/sec)

 # dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=200
 200+0 records in
 200+0 records out
 209715200 bytes transferred in 21.378161 secs (9809787 bytes/sec)

 # dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=200
 200+0 records in
 200+0 records out
 209715200 bytes transferred in 23.774958 secs (8820844 bytes/sec)

 Ouch!  Ignoring the first result, that's still over three times slower than 
 the non-zfs test.


 With a 2GB test file:

 # dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
 2000+0 records in
 2000+0 records out
 2097152000 bytes transferred in 547.901945 secs (3827605 bytes/sec)

 # dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
 2000+0 records in
 2000+0 records out
 2097152000 bytes transferred in 595.052017 secs (3524317 bytes/sec)

 # dd if=/dev/zero of=/tank/test/zerofile.000 bs=1M count=2000
 2000+0 records in
 2000+0 records out
 2097152000 bytes transferred in 517.326470 secs (4053827 bytes/sec)

 Even worse :-(


 Reading 2GB from a raw device:

 dd if=/dev/ad4s1a of=/dev/null bs=1M count=2000
 1024+0 records in
 1024+0 records out
 1073741824 bytes transferred in 13.914145 secs (77169084 bytes/sec)


 Reading 2GB from a zfs partition (unmounting each time):

 dd if=/tank/test/zerofile.000 of=/dev/null bs=1M count=2000
 2000+0 records in
 2000+0 records out
 2097152000 bytes transferred in 29.905155 secs (70126772 bytes/sec)

 dd if=/tank/test/zerofile.000 of=/dev/null bs=1M count=2000
 2000+0 records in
 2000+0 records out
 2097152000 bytes transferred in 32.557361 secs (64414066 bytes/sec)

 dd if=/tank/test/zerofile.000 of=/dev/null bs=1M count=2000
 2000+0 records in
 2000+0 records out
 2097152000 bytes transferred in 34.137874 secs (61431828 bytes/sec)

 For reading, there seems to be much less of a disparity in performance.

 I notice that one drive is on atapci0 and the other two are on atapci1, but 
 surely it wouldn't make this much of a difference to write speeds?

 Cheers,

 --Jon




Re: More zfs benchmarks

2010-02-14 Thread Jonathan Belson
On 14 Feb 2010, at 19:13, Artem Belevich wrote:
 Can you check if kstat.zfs.misc.arcstats.memory_throttle_count sysctl
 increments during your tests?
 
 ZFS self-throttles writes if it thinks system is running low on
 memory. Unfortunately on FreeBSD the 'free' list is a *very*
 conservative indication of available memory so ZFS often starts
 throttling before it's really needed. With only 2GB in the system,
 that's probably what slows you down.

I tested a number of times during a 2GB write to a zfs partition and the count 
stayed at 0.

Cheers,

--Jon



Re: More zfs benchmarks

2010-02-14 Thread Jonathan Belson

On 14 Feb 2010, at 20:26, Jonathan Belson wrote:

 On 14 Feb 2010, at 19:13, Artem Belevich wrote:
 Can you check if kstat.zfs.misc.arcstats.memory_throttle_count sysctl
 increments during your tests?
 
 ZFS self-throttles writes if it thinks system is running low on
 memory. Unfortunately on FreeBSD the 'free' list is a *very*
 conservative indication of available memory so ZFS often starts
 throttling before it's really needed. With only 2GB in the system,
 that's probably what slows you down.
 
 I tested a number of times during a 2GB write to a zfs partition and the 
 count stayed at 0.

Oh, I should add that I use the following settings from the zfs tuning guide:

vm.kmem_size=1024M
vm.kmem_size_max=1024M
vfs.zfs.arc_max=100M

Cheers,

--Jon



Re: More zfs benchmarks

2010-02-14 Thread Michael Loftis



--On Sunday, February 14, 2010 5:28 PM + Jonathan Belson 
j...@witchspace.com wrote:



Hiya

After reading some earlier threads about zfs performance, I decided to
test my own server.  I found the results rather surprising...



You really need to test with at least 4GB of data, otherwise you're just testing 
caching speeds on writes.  Use a test suite like bonnie++ and you'll see 
just how poor ZFS performance is, especially with multiple readers on 
the same file, at least in 8.0.
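
(For example, a run sized at roughly twice RAM -- the target path is only
an example:)

# bonnie++ -d /tank/test -s 8192 -n 64 -u root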



Re: More zfs benchmarks

2010-02-14 Thread Joshua Boyd
Repeated the same tests on my AMD64 dual core 4GB system with 5 HD103SI 1T
drives in raidz1 on a Supermicro PCI-E controller, running 8-STABLE.

foghornleghorn# dd if=/dev/zero of=/usr/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 4.246402 secs (49386563 bytes/sec)
foghornleghorn# dd if=/dev/zero of=/usr/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 3.913826 secs (53583169 bytes/sec)
foghornleghorn# dd if=/dev/zero of=/usr/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 4.436917 secs (47265975 bytes/sec)

foghornleghorn# dd if=/dev/zero of=/tank/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 0.377800 secs (555095486 bytes/sec)
foghornleghorn# dd if=/dev/zero of=/tank/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 0.140478 secs (1492869742 bytes/sec)
foghornleghorn# dd if=/dev/zero of=/tank/zerofile.000 bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 0.140452 secs (1493143431 bytes/sec)

foghornleghorn# dd if=/dev/zero of=/tank/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 8.117563 secs (258347487 bytes/sec)
foghornleghorn# dd if=/dev/zero of=/tank/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 8.251862 secs (254142882 bytes/sec)
foghornleghorn# dd if=/dev/zero of=/tank/zerofile.000 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 8.307188 secs (252450287 bytes/sec)

foghornleghorn# dd if=/dev/da0 of=/dev/null bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 18.958791 secs (110616336 bytes/sec)
foghornleghorn# dd if=/dev/da0 of=/dev/null bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 18.924833 secs (110814822 bytes/sec)
foghornleghorn# dd if=/dev/da0 of=/dev/null bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 18.893001 secs (111001529 bytes/sec)

foghornleghorn# dd if=/tank/zerofile.000 of=/dev/null bs=1M
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 5.156406 secs (406708089 bytes/sec)
foghornleghorn# dd if=/tank/zerofile.000 of=/dev/null bs=1M
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 5.126920 secs (409047148 bytes/sec)
foghornleghorn# dd if=/tank/zerofile.000 of=/dev/null bs=1M
2000+0 records in
2000+0 records out
2097152000 bytes transferred in 5.145461 secs (407573211 bytes/sec)

Here are my relevant settings:

vfs.zfs.prefetch_disable=0
vfs.zfs.zil_disable=1

Other than that, I'm trusting FreeBSD's default settings, and they seem to
be working pretty well.

On Sun, Feb 14, 2010 at 3:34 PM, Michael Loftis mlof...@wgops.com wrote:



 --On Sunday, February 14, 2010 5:28 PM + Jonathan Belson 
 j...@witchspace.com wrote:

  Hiya

 After reading some earlier threads about zfs performance, I decided to
 test my own server.  I found the results rather surprising...


 You really need to test with at least 4GB of data, else you're just testing
 caching speeds on writing.  Use a test suite like bonnie++ and you'll see
 just how poor the ZFS performance is, especially with multiple readers on
 the same file, atleast in 8.0.





-- 
Joshua Boyd
JBipNet

E-mail: boy...@jbip.net

http://www.jbip.net


Re: More zfs benchmarks

2010-02-14 Thread Jonathan Belson
On 14 Feb 2010, at 21:15, Joshua Boyd wrote:

 Repeated the same tests on my AMD64 dual core 4GB system with 5 HD103SI 1T
 drives in raidz1 on a Supermicro PCI-E controller, running 8-STABLE.

[ snip results ]

I was hoping I'd get something closer to these figures...

 Here are my relevant settings:
 
 vfs.zfs.prefetch_disable=0
 vfs.zfs.zil_disable=1

I already had prefetch disabled, but retrying with zil disabled made no 
difference.

What is your arc_min and arc_max set to?
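
(i.e. the output of something like the following -- just the live values:)

# sysctl vfs.zfs.arc_min vfs.zfs.arc_max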

 
 On Sun, Feb 14, 2010 at 3:34 PM, Michael Loftis mlof...@wgops.com wrote:
 
 You really need to test with at least 4GB of data, else you're just testing
 caching speeds on writing.  Use a test suite like bonnie++ and you'll see

I'd expect to get more than 4MB/s if I was just measuring cache speed :-)

Cheers,

--Jon



Re: More zfs benchmarks

2010-02-14 Thread Joshua Boyd
On Sun, Feb 14, 2010 at 6:12 PM, Joshua Boyd boy...@jbip.net wrote:



 On Sun, Feb 14, 2010 at 5:58 PM, Jonathan Belson j...@witchspace.comwrote:

 On 14 Feb 2010, at 21:15, Joshua Boyd wrote:

  Repeated the same tests on my AMD64 dual core 4GB system with 5 HD103SI
 1T
  drives in raidz1 on a Supermicro PCI-E controller, running 8-STABLE.

 [ snip results ]

 I was hoping I'd get something closer to these figures...

  Here are my relevant settings:
 
  vfs.zfs.prefetch_disable=0
  vfs.zfs.zil_disable=1

 I already had prefetch disabled, but retrying with zil disabled made no
 difference.


 That setting actually enables prefetch ... I have 4GB of RAM, so I felt
 safe turning it on.



 What is your arc_min and arc_max set to?


 System Memory:
  Physical RAM: 4015 MB
  Free Memory : 2130 MB

 ARC Size:
  Current Size: 783 MB (arcsize)
  Target Size (Adaptive):   783 MB (c)
  Min Size (Hard Limit):101 MB (zfs_arc_min)
  Max Size (Hard Limit):808 MB (zfs_arc_max)

 ARC Size Breakdown:
  Most Recently Used Cache Size:  97% 764 MB (p)
  Most Frequently Used Cache Size:   2% 18 MB (c-p)

 ARC Efficency:
  Cache Access Total: 148383
  Cache Hit Ratio:  66% 98434   [Defined State for buffer]
  Cache Miss Ratio: 33% 49949   [Undefined State for Buffer]
  REAL Hit Ratio:   65% 96744   [MRU/MFU Hits Only]

  Data Demand   Efficiency:99%
  Data Prefetch Efficiency: 0%

 CACHE HITS BY CACHE LIST:
   Anon:                         1%     1515              [ New Customer, First Cache Hit ]
   Most Recently Used:          28%    27931 (mru)        [ Return Customer ]
   Most Frequently Used:        69%    68813 (mfu)        [ Frequent Customer ]
   Most Recently Used Ghost:     0%      175 (mru_ghost)  [ Return Customer Evicted, Now Back ]
   Most Frequently Used Ghost:   0%        0 (mfu_ghost)  [ Frequent Customer Evicted, Now Back ]
 CACHE HITS BY DATA TYPE:
   Demand Data:                 12%    12369
   Prefetch Data:                0%        0
   Demand Metadata:             85%    84375
   Prefetch Metadata:            1%     1690
 CACHE MISSES BY DATA TYPE:
   Demand Data:                  0%        6
   Prefetch Data:               96%    47994
   Demand Metadata:              3%     1580
   Prefetch Metadata:            0%      369
 -




 
  On Sun, Feb 14, 2010 at 3:34 PM, Michael Loftis mlof...@wgops.com
 wrote:
 
  You really need to test with at least 4GB of data, else you're just
 testing
  caching speeds on writing.  Use a test suite like bonnie++ and you'll
 see

 I'd expect to get more than 4MB/s if I was just measuring cache speed :-)

 Cheers,

 --Jon




 --
 Joshua Boyd
 JBipNet

 E-mail: boy...@jbip.net

 http://www.jbip.net




-- 
Joshua Boyd
JBipNet

E-mail: boy...@jbip.net

http://www.jbip.net


Re: More zfs benchmarks

2010-02-14 Thread Joshua Boyd
Here's my bonnie++ results:

foghornleghorn# bonnie++ -s 8192 -d. -n64 -uroot
Using uid:0, gid:0.
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
foghornleghorn.r 8G   184  99 191627  43 150484  35   416  98 388853  49 103.3   3
Latency             47480us    1645ms    1545ms   53943us     186ms    2449ms
Version  1.96       ------Sequential Create------ --------Random Create--------
foghornleghorn.res. -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 64 22256  80 37111  99 17478  88 22686  81 37208  99 15053  90
Latency             28364us     336us     274us   27001us     321us   80123us
1.96,1.96,foghornleghorn.res.openband.net,1,1266194600,8G,,184,99,191627,43,150484,35,416,98,388853,49,103.3,3,64,22256,80,37111,99,17478,88,22686,81,37208,99,15053,90,47480us,1645ms,1545ms,53943us,186ms,2449ms,28364us,336us,274us,27001us,321us,80123us


On Sun, Feb 14, 2010 at 6:23 PM, Joshua Boyd boy...@jbip.net wrote:



 On Sun, Feb 14, 2010 at 6:12 PM, Joshua Boyd boy...@jbip.net wrote:



 On Sun, Feb 14, 2010 at 5:58 PM, Jonathan Belson j...@witchspace.comwrote:

 On 14 Feb 2010, at 21:15, Joshua Boyd wrote:

  Repeated the same tests on my AMD64 dual core 4GB system with 5 HD103SI
 1T
  drives in raidz1 on a Supermicro PCI-E controller, running 8-STABLE.

 [ snip results ]

 I was hoping I'd get something closer to these figures...

  Here are my relevant settings:
 
  vfs.zfs.prefetch_disable=0
  vfs.zfs.zil_disable=1

 I already had prefetch disabled, but retrying with zil disabled made no
 difference.


 That setting actually enables prefetch ... I have 4GB of RAM, so I felt
 safe turning it on.



 What is your arc_min and arc_max set to?


 System Memory:
  Physical RAM: 4015 MB
  Free Memory : 2130 MB

 ARC Size:
  Current Size: 783 MB (arcsize)
  Target Size (Adaptive):   783 MB (c)
  Min Size (Hard Limit):101 MB (zfs_arc_min)
  Max Size (Hard Limit):808 MB (zfs_arc_max)

 ARC Size Breakdown:
  Most Recently Used Cache Size:  97% 764 MB (p)
  Most Frequently Used Cache Size:   2% 18 MB (c-p)

 ARC Efficency:
  Cache Access Total: 148383
  Cache Hit Ratio:  66% 98434   [Defined State for buffer]
  Cache Miss Ratio: 33% 49949   [Undefined State for
 Buffer]
  REAL Hit Ratio:   65% 96744   [MRU/MFU Hits Only]

  Data Demand   Efficiency:99%
  Data Prefetch Efficiency: 0%

 CACHE HITS BY CACHE LIST:
   Anon:1%  1515[ New
 Customer, First Cache Hit ]
   Most Recently Used: 28%  27931 (mru)  [ Return
 Customer ]
   Most Frequently Used:   69%  68813 (mfu)  [ Frequent
 Customer ]
   Most Recently Used Ghost:0%  175 (mru_ghost)[ Return
 Customer Evicted, Now Back ]
   Most Frequently Used Ghost:  0%  0 (mfu_ghost)[ Frequent
 Customer Evicted, Now Back ]
 CACHE HITS BY DATA TYPE:
   Demand Data:12%  12369
   Prefetch Data:   0%  0
   Demand Metadata:85%  84375
   Prefetch Metadata:   1%  1690
 CACHE MISSES BY DATA TYPE:
   Demand Data: 0%  6
   Prefetch Data:  96%  47994
   Demand Metadata: 3%  1580
   Prefetch Metadata:   0%  369
 -




 
  On Sun, Feb 14, 2010 at 3:34 PM, Michael Loftis mlof...@wgops.com
 wrote:
 
  You really need to test with at least 4GB of data, else you're just
 testing
  caching speeds on writing.  Use a test suite like bonnie++ and you'll
 see

 I'd expect to get more than 4MB/s if I was just measuring cache speed :-)

 Cheers,

 --Jon




 --
 Joshua Boyd
 JBipNet

 E-mail: boy...@jbip.net

 http://www.jbip.net




 --
 Joshua Boyd
 JBipNet

 E-mail: boy...@jbip.net

 http://www.jbip.net




-- 
Joshua Boyd
JBipNet

E-mail: boy...@jbip.net
Cell: (513) 375-0157

http://www.jbip.net