Re: [zfs-discuss] Fileserver performance tests

2007-10-11 Thread Thomas Liesner
Hi,
compression is off.
I've checked r/w performance with 20 simultaneous cp processes, started with the following...

#!/usr/bin/bash
for ((i=1; i<=20; i++))
do
  cp lala$i lulu$i &
done

(lala1-lala20 are 2 GB files)

...and ended up with 546 MB/s. Not too bad at all.
 
 


Re: [zfs-discuss] Fileserver performance tests

2007-10-10 Thread eric kustarz
Since you were already using filebench, you could use the  
'singlestreamwrite.f' and 'singlestreamread.f' workloads (with  
nthreads set to 20, iosize set to 128k) to achieve the same things.
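
For example, from the filebench prompt a run along those lines might look
roughly like this (just a sketch; it assumes the two workload files expose
$dir, $nthreads and $iosize variables as suggested above, and it reuses the
test directory from earlier in the thread):

load singlestreamwrite
set $dir=/zfs_raid10_16_disks/test
set $nthreads=20
set $iosize=128k
run 60

and the same again with 'load singlestreamread' for the read case.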

With the latest version of filebench, you can then use the '-c'  
option to compare your results in a nice HTML friendly way.

eric

On Oct 9, 2007, at 9:25 AM, Thomas Liesner wrote:

 I wanted to test some simultaneous sequential writes and wrote this
 little snippet:

 #!/bin/bash
 for ((i=1; i<=20; i++))
 do
   dd if=/dev/zero of=lala$i bs=128k count=32768 &
 done

 While the script was running I watched zpool iostat and measured
 the time between starting and stopping of the writes (usually I saw
 bandwidth figures around 500 MB/s...).
 The result was 409 MB/s in writes. Not too bad at all :)

 Now the same with sequential reads:

 #!/bin/bash
 for ((i=1; i<=20; i++))
 do
   dd if=lala$i of=/dev/zero bs=128k &
 done

 Again I checked with zpool iostat, seeing even higher numbers around
 850 MB/s, and the result was 910 MB/s...

 Wow, that all looks quite promising :)

 Tom




Re: [zfs-discuss] Fileserver performance tests

2007-10-10 Thread Thomas Liesner
Hi Eric,

Are you talking about the documentation at:
http://sourceforge.net/projects/filebench
or:
http://www.opensolaris.org/os/community/performance/filebench/
and:
http://www.solarisinternals.com/wiki/index.php/FileBench
?

I was talking about the solarisinternals wiki. I can't find any documentation
at the sourceforge site, and the opensolaris site refers to solarisinternals for
more detailed documentation. The INSTALL document within the distribution
refers to solarisinternals and pkgadd, which of course doesn't work without
a package being provided ;)

This is the output of make within filebench/filebench:

[EMAIL PROTECTED] # make
make: Warning: Can't find `../Makefile.cmd': No such file or directory
make: Fatal error in reader: Makefile, line 27: Read of include file
`../Makefile.cmd' failed


Before looking at the results, decide if that really *is* your 
expected workload

Sure enough I have to dig deeper into the filebench workloads and create my own
workload to represent my expected workload even better, but the tasks within
the fileserver workload are already quite representative (I could skip the
append test though...)

Regards,
Tom
 
 


Re: [zfs-discuss] Fileserver performance tests

2007-10-10 Thread Luke Lonergan
Hi Eric,

On 10/10/07 12:50 AM, eric kustarz [EMAIL PROTECTED] wrote:

 Since you were already using filebench, you could use the
 'singlestreamwrite.f' and 'singlestreamread.f' workloads (with
 nthreads set to 20, iosize set to 128k) to achieve the same things.

Yes, but once again we see the utility of the 'zero software needed'
approach to benchmarking! The dd test rules for a general audience on
the mailing lists, IMO.

The other good thing about the dd test is that the results are
indisputable because dd is baked into the OS.

That all said - we don't have a simple dd benchmark for random seeking.

 With the latest version of filebench, you can then use the '-c'
 option to compare your results in a nice HTML friendly way.

That's worth the effort.

- Luke




Re: [zfs-discuss] Fileserver performance tests

2007-10-10 Thread Spencer Shepler

On Oct 10, 2007, at 8:41 AM, Luke Lonergan wrote:

 Hi Eric,

 On 10/10/07 12:50 AM, eric kustarz [EMAIL PROTECTED] wrote:

 Since you were already using filebench, you could use the
 'singlestreamwrite.f' and 'singlestreamread.f' workloads (with
 nthreads set to 20, iosize set to 128k) to achieve the same things.

 Yes, but once again we see the utility of the 'zero software needed'
 approach to benchmarking! The dd test rules for a general audience on
 the mailing lists, IMO.

 The other good thing about the dd test is that the results are
 indisputable because dd is baked into the OS.

And filebench will be in the next build in the same way.

Spencer


 That all said - we don't have a simple dd benchmark for random  
 seeking.

 With the latest version of filebench, you can then use the '-c'
 option to compare your results in a nice HTML friendly way.

 That's worth the effort.

 - Luke




Re: [zfs-discuss] Fileserver performance tests

2007-10-10 Thread Spencer Shepler

On Oct 10, 2007, at 2:56 AM, Thomas Liesner wrote:

 Hi Eric,

 Are you talking about the documentation at:
 http://sourceforge.net/projects/filebench
 or:
 http://www.opensolaris.org/os/community/performance/filebench/
 and:
 http://www.solarisinternals.com/wiki/index.php/FileBench
 ?

 I was talking about the solarisinternals wiki. I can't find any
 documentation at the sourceforge site, and the opensolaris site
 refers to solarisinternals for more detailed documentation. The
 INSTALL document within the distribution refers to solarisinternals
 and pkgadd, which of course doesn't work without a package being
 provided ;)

 This is the output of make within filebench/filebench:

 [EMAIL PROTECTED] # make
 make: Warning: Can't find `../Makefile.cmd': No such file or directory
 make: Fatal error in reader: Makefile, line 27: Read of include file
 `../Makefile.cmd' failed

I am working to clean that up and will be posting binaries as well.

Spencer



 Before looking at the results, decide if that really *is* your
 expected workload

 Sure enough I have to dig deeper into the filebench workloads and
 create my own workload to represent my expected workload even
 better, but the tasks within the fileserver workload are already
 quite representative (I could skip the append test though...)

 Regards,
 Tom




Re: [zfs-discuss] Fileserver performance tests

2007-10-10 Thread eric kustarz

 That all said - we don't have a simple dd benchmark for random  
 seeking.

Feel free to try out randomread.f and randomwrite.f - or combine them  
into your own new workload to create a random read and write workload.
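
For illustration, a combined random read/write workload might look roughly
like the sketch below (untested, modeled on the flowop syntax of the
fileserver.f definition quoted earlier in this thread; the file size, iosize
and all of the names are placeholders):

set $dir=/zfs_raid10_16_disks/test
set $filesize=2g
set $iosize=8k
set $nthreads=20

define file name=largefile1,path=$dir,size=$filesize,prealloc,reuse

define process name=randomrw,instances=1
{
  thread name=randomrwthread,memsize=10m,instances=$nthreads
  {
    flowop read name=rand-read1,filename=largefile1,iosize=$iosize,random
    flowop write name=rand-write1,filename=largefile1,iosize=$iosize,random
  }
}

Saved as, say, randomrw.f, it could then be loaded and started with 'load
randomrw' and 'run 60' as in the sessions shown elsewhere in this thread.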

eric



Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Thomas Liesner
Hi again,

I did not want to compare the filebench test with the single mkfile command.
Still, I was hoping to see similar numbers in the filebench stats.
Any hints on what I could do to further improve the performance?
Would a RAID 1 over two stripes be faster?

TIA,
Tom
 
 


Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Dick Davies
Hi Thomas,

The point I was making was that you'll see low performance figures
with 100 concurrent threads. If you set nthreads to something closer
to your expected load, you'll get a more accurate figure.

Also, there's a new filebench out now, see

 http://blogs.sun.com/erickustarz/entry/filebench

It will be integrated into Nevada in b76, according to Eric.

On 09/10/2007, Thomas Liesner [EMAIL PROTECTED] wrote:
 Hi again,

 I did not want to compare the filebench test with the single mkfile command.
 Still, I was hoping to see similar numbers in the filebench stats.
 Any hints on what I could do to further improve the performance?
 Would a RAID 1 over two stripes be faster?

 TIA,
 Tom





-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/


Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Thomas Liesner
Hi,

I checked with $nthreads=20, which roughly represents the expected load, and
these are the results:

IO Summary: 7989 ops 7914.2 ops/s, (996/979 r/w) 142.7mb/s, 255us cpu/op, 0.2ms latency

BTW, smpatch is still running and further tests will get done when the system 
is rebooted.

The figures published at...
http://blogs.sun.com/timthomas/feed/entries/atom?cat=%2FSun+Fire+X4500
...made me expect to see higher rates with my setup.

I have seen the new filebench at sourceforge, but did not manage to install it.
It's a source distribution now and the wiki and READMEs are not updated yet. A
simple make didn't do the trick though ;)

Thanks again,
Tom
 
 


Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread eric kustarz

On Oct 9, 2007, at 4:25 AM, Thomas Liesner wrote:

 Hi,

 I checked with $nthreads=20, which roughly represents the expected
 load, and these are the results:

Note, here is the description of the 'fileserver.f' workload:

define process name=filereader,instances=1
{
  thread name=filereaderthread,memsize=10m,instances=$nthreads
  {
    flowop openfile name=openfile1,filesetname=bigfileset,fd=1
    flowop appendfilerand name=appendfilerand1,iosize=$meaniosize,fd=1
    flowop closefile name=closefile1,fd=1
    flowop openfile name=openfile2,filesetname=bigfileset,fd=1
    flowop readwholefile name=readfile1,fd=1
    flowop closefile name=closefile2,fd=1
    flowop deletefile name=deletefile1,filesetname=bigfileset
    flowop statfile name=statfile1,filesetname=bigfileset
  }
}


Each thread in 'nthreads' is executing the above:
- open
- append
- close
- open
- read
- close
- delete
- stat

You have 20 parallel threads doing the above.

Before looking at the results, decide if that really *is* your  
expected workload.


 IO Summary: 7989 ops 7914.2 ops/s, (996/979 r/w) 142.7mb/s, 255us cpu/op, 0.2ms latency

 BTW, smpatch is still running and further tests will get done when  
 the system is rebooted.

 The figures published at...
 http://blogs.sun.com/timthomas/feed/entries/atom?cat=%2FSun+Fire+X4500
 ...made me expect to see higher rates with my setup.

 I have seen the new filebench at sourceforge, but did not manage to
 install it. It's a source distribution now and the wiki and READMEs
 are not updated yet. A simple make didn't do the trick though ;)

Are you talking about the documentation at:
http://sourceforge.net/projects/filebench
or:
http://www.opensolaris.org/os/community/performance/filebench/
and:
http://www.solarisinternals.com/wiki/index.php/FileBench
?

I'll figure out why make isn't working.

eric


Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Thomas Liesner
I wanted to test some simultaneous sequential writes and wrote this little
snippet:

#!/bin/bash
for ((i=1; i<=20; i++))
do
  dd if=/dev/zero of=lala$i bs=128k count=32768 &
done

While the script was running I watched zpool iostat and measured the time
between starting and stopping of the writes (usually I saw bandwidth figures
around 500 MB/s...).
The result was 409 MB/s in writes. Not too bad at all :)

Now the same with sequential reads:

#!/bin/bash
for ((i=1; i<=20; i++))
do
  dd if=lala$i of=/dev/zero bs=128k &
done

Again I checked with zpool iostat, seeing even higher numbers around 850 MB/s,
and the result was 910 MB/s...

Wow, that all looks quite promising :)

Tom
 
 


Re: [zfs-discuss] Fileserver performance tests

2007-10-09 Thread Anton B. Rang
Do you have compression turned on? If so, dd'ing from /dev/zero isn't very 
useful as a benchmark. (I don't recall if all-zero blocks are always detected 
if checksumming is turned on, but I seem to recall that they are, even if 
compression is off.)
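
For reference, both settings can be checked with a standard property query,
using the pool name from earlier in this thread, e.g.:

zfs get compression,checksum zfs_raid10_16_disks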
 
 


[zfs-discuss] Fileserver performance tests

2007-10-08 Thread Thomas Liesner
Hi all,

I want to replace a bunch of Apple Xserves with Xraids and HFS+ (brr) with Sun
x4200s with SAS JBODs and ZFS. The application will be the Helios UB+ fileserver
suite.
I installed the latest Solaris 10 on an x4200 with 8 GB of RAM and two Sun SAS
controllers, attached two SAS JBODs with 8 SATA HDDs each, and created a ZFS
pool as a RAID 10 by doing something like the following:

zpool create zfs_raid10_16_disks mirror c3t0d0 c4t0d0 mirror c3t1d0 c4t1d0 \
  mirror c3t2d0 c4t2d0 mirror c3t3d0 c4t3d0 mirror c3t4d0 c4t4d0 \
  mirror c3t5d0 c4t5d0 mirror c3t6d0 c4t6d0 mirror c3t7d0 c4t7d0

Then I set noatime and ran the following filebench tests:



[EMAIL PROTECTED] # ./filebench
filebench> load fileserver
12746: 7.445: FileServer Version 1.14 2005/06/21 21:18:52 personality 
successfully loaded
12746: 7.445: Usage: set $dir=<dir>
12746: 7.445:        set $filesize=<size>     defaults to 131072
12746: 7.445:        set $nfiles=<value>      defaults to 1000
12746: 7.445:        set $nthreads=<value>    defaults to 100
12746: 7.445:        set $meaniosize=<value>  defaults to 16384
12746: 7.445:        set $meandirwidth=<size> defaults to 20
12746: 7.445: (sets mean dir width and dir depth is calculated as log (width, nfiles)
12746: 7.445:
12746: 7.445:        run runtime (e.g. run 60)
12746: 7.445: syntax error, token expected on line 43
filebench> set $dir=/zfs_raid10_16_disks/test
filebench> run 60
12746: 47.198: Fileset bigfileset: 1000 files, avg dir = 20.0, avg depth = 2.3, 
mbytes=122
12746: 47.218: Removed any existing fileset bigfileset in 1 seconds
12746: 47.218: Creating fileset bigfileset...
12746: 60.222: Preallocated 1000 of 1000 of fileset bigfileset in 14 seconds
12746: 60.222: Creating/pre-allocating files
12746: 60.222: Starting 1 filereader instances
12751: 61.228: Starting 100 filereaderthread threads
12746: 64.228: Running...
12746: 65.238: Run took 1 seconds...
12746: 65.266: Per-Operation Breakdown
statfile1          988ops/s   0.0mb/s   0.0ms/op    22us/op-cpu
deletefile1        991ops/s   0.0mb/s   0.0ms/op    48us/op-cpu
closefile2         997ops/s   0.0mb/s   0.0ms/op     4us/op-cpu
readfile1          997ops/s 139.8mb/s   0.2ms/op   175us/op-cpu
openfile2          997ops/s   0.0mb/s   0.0ms/op    28us/op-cpu
closefile1        1081ops/s   0.0mb/s   0.0ms/op     6us/op-cpu
appendfilerand1    982ops/s  14.9mb/s   0.1ms/op    91us/op-cpu
openfile1          982ops/s   0.0mb/s   0.0ms/op    27us/op-cpu

12746: 65.266:
IO Summary: 8088 ops 8017.4 ops/s, (997/982 r/w) 155.6mb/s, 508us cpu/op, 0.2ms
12746: 65.266: Shutting down processes
filebench>

I expected to see some higher numbers, really...
A simple 'time mkfile 16g lala' gave me something like 280 MB/s.

Would anyone comment on this?

TIA,
Tom
 
 


Re: [zfs-discuss] Fileserver performance tests

2007-10-08 Thread johansen
 statfile1          988ops/s   0.0mb/s   0.0ms/op    22us/op-cpu
 deletefile1        991ops/s   0.0mb/s   0.0ms/op    48us/op-cpu
 closefile2         997ops/s   0.0mb/s   0.0ms/op     4us/op-cpu
 readfile1          997ops/s 139.8mb/s   0.2ms/op   175us/op-cpu
 openfile2          997ops/s   0.0mb/s   0.0ms/op    28us/op-cpu
 closefile1        1081ops/s   0.0mb/s   0.0ms/op     6us/op-cpu
 appendfilerand1    982ops/s  14.9mb/s   0.1ms/op    91us/op-cpu
 openfile1          982ops/s   0.0mb/s   0.0ms/op    27us/op-cpu

 IO Summary: 8088 ops 8017.4 ops/s, (997/982 r/w) 155.6mb/s, 508us cpu/op, 0.2ms

 I expected to see some higher numbers, really...
 A simple 'time mkfile 16g lala' gave me something like 280 MB/s.

mkfile isn't an especially realistic test for performance.  You'll note
that the fileserver workload is performing stats, deletes, closes,
reads, opens, and appends.  Mkfile is a write benchmark.  You might
consider trying the singlestreamwrite benchmark, if you're looking for
a single-threaded write performance test.
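
From the filebench prompt that would be something along these lines (a
sketch, reusing the test directory from earlier in the thread and leaving
the workload's defaults in place):

load singlestreamwrite
set $dir=/zfs_raid10_16_disks/test
run 60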

-j