Thanks for that script output. I'm not sure why it is trying to export rpool rather than testpool, but notice that it was NOT allowed to do so. On the other hand, it does appear that /testpool isn't being found. Is /testpool where the filesystem is mounted?
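If it is, and you would rather keep the flush but have it act on testpool instead of whatever pool it is picking now, one option is to make the flush script look up the pool from the directory it is handed. The following is only a rough sketch of that idea, not the actual fs_flush code; the zfs-list lookup and the variable names are my own illustration:

#!/usr/bin/perl
#
# Sketch only: flush (export/import) the pool that actually contains
# the benchmark directory, instead of assuming a particular pool.
#
$fs = $ARGV[0];
$dir = $ARGV[1];

# nothing to do for non-zfs filesystems
exit(0) if ($fs !~ m/^zfs$/);

# find the dataset whose mountpoint matches $dir; the pool name is the
# part of the dataset name before the first "/"
$pool = "";
foreach $line (`zfs list -H -o name,mountpoint`) {
    chomp($line);
    ($name, $mnt) = split(/\t/, $line);
    if ($mnt eq $dir) {
        ($pool) = split(/\//, $name);
        last;
    }
}

if ($pool eq "") {
    print "no zfs dataset mounted at $dir, skipping flush\n";
    exit(0);
}

print "'zpool export $pool'\n";
system("zpool export $pool");
print "'zpool import $pool'\n";
system("zpool import $pool");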

Anyway, inserting an exit(0) in the flush script will prevent FileBench from exporting/importing the pools. Why don't you try that and tell me whether FileBench still fails. If it does fail, could you check to see if you can ls /testpool?
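That is, right after the argument handling near the top of the script, something like:

$fs = $ARGV[0];
$dir = $ARGV[1];

# skip the cache flush entirely, so no zpool export/import happens
exit(0);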
Drew

On Mar 12, 2009, at 4:24 PM, Asif Iqbal wrote:

On Thu, Mar 12, 2009 at 5:09 PM, Andrew Wilson <[email protected]> wrote:
First off, it is not a bug; it is a feature that is necessary to flush the
ZFS ARC. Otherwise you get unrealistically high performance numbers, because
everything is served from the cache. This is done through a script called
fs_flush, though, and it is easy to comment out that part of the script. The
script is found in filebench/scripts
(/usr/benchmarks/filebench/scripts/fs_flush on Solaris machines).

For example, here is the beginning of the fs_flush script:
#!/usr/bin/perl
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
#
# Copyright 2007 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# ident "%Z%%M% %I%     %E% SMI"

#
# Put commands in  here to flush the file system cache after
# file set creation but prior to steady state
#
# For most file systems, filebench already handles fs cache flushing
# For ZFS, it needs some help, so this script does
#    "zpool export <poolname>" then "zpool import <poolname>"
#

$fs = $ARGV[0];
$dir = $ARGV[1];

#
# if not zfs, inform user and exit.
#
if (($fs =~ m/^zfs$/) != 1) {
     print "filesystem type is: $fs, no action required, so exiting \n";
     exit(0);
}

Just put exit(0); in the script without a conditional, and it will do
nothing.
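However the script decides which pool to flush, the part that an unconditional exit(0) short-circuits boils down to something like this; a sketch only, not the exact script text, with the pool name hard-coded for illustration:

$pool = "testpool";    # stand-in for whatever pool the script picks
print "'zpool export $pool'\n";
system("zpool export $pool");
print "'zpool import $pool'\n";
system("zpool import $pool");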

But I do want to benchmark a ZFS filesystem too.

See, I have two pools: one root pool (rpool) and one test pool (testpool).
I still want to run the test on testpool.

 pool: rpool
state: ONLINE
scrub: none requested
config:

       NAME          STATE     READ WRITE CKSUM
       splunk        ONLINE       0     0     0
         mirror      ONLINE       0     0     0
           c0t0d0s0  ONLINE       0     0     0
           c0t1d0s0  ONLINE       0     0     0

errors: No known data errors

 pool: testpool
state: ONLINE
scrub: none requested
config:

       NAME        STATE     READ WRITE CKSUM
       testpool    ONLINE       0     0     0
         raidz2    ONLINE       0     0     0
           c0t2d0  ONLINE       0     0     0
           c0t3d0  ONLINE       0     0     0
           c0t4d0  ONLINE       0     0     0
           c0t5d0  ONLINE       0     0     0
           c0t6d0  ONLINE       0     0     0
       logs        ONLINE       0     0     0
         c0t7d0    ONLINE       0     0     0

errors: No known data errors

bash-3.00# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool                   6.91G  60.0G  35.5K  /rpool
rpool/ROOT              3.91G  60.0G    18K  legacy
rpool/ROOT/rootset      3.91G  60.0G  3.78G  /
rpool/ROOT/rootset/var   129M  60.0G   129M  /var
rpool/dump              1.00G  60.0G  1.00G  -
rpool/export             249K  60.0G    19K  /export
rpool/export/home        230K  60.0G   230K  /export/home
rpool/swap                 2G  62.0G    90K  -
testpool                 57.8G   142G  57.8G  /testpool

# cat fileio.prof
[..]
DEFAULTS {
       runtime = 120;
       dir = /testpool;
       stats = /tmp;
       filesystem = zfs;
       description = "fileio zfs";
       filesize = 10g;
}
[..rest as default..]

See how it is failing miserably:

bash-3.00# /opt/filebench/bin/filebench fileio
parsing profile for config: randomread2k
Running /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s/randomread2k/thisrun.f
FileBench Version 1.3.4
666: 0.007: Random Read Version 2.0 IO personality successfully loaded
 666: 0.007: Creating/pre-allocating files and filesets
 666: 0.007: File largefile1: mbytes=10240
 666: 0.007: Creating file largefile1...
 666: 0.008: Preallocated 1 of 1 of file largefile1 in 1 seconds
 666: 0.008: waiting for fileset pre-allocation to finish
 666: 104.412: Running '/opt/filebench/scripts/fs_flush zfs /testpool'
'zpool export rpool'
cannot unmount '/': Device busy
'zpool import rpool'
cannot import 'rpool': no such pool available
 666: 105.399: Change dir to /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s/randomread2k
 666: 105.399: Starting 1 rand-read instances
Generating html for /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s

parsing profile for config: randomread8k
Running /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s/randomread8k/thisrun.f
FileBench Version 1.3.4
Cannot open shm /var/tmp/fbench.La4ub: No such file or directory
Generating html for /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s

parsing profile for config: randomread1m
Running /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s/randomread1m/thisrun.f
FileBench Version 1.3.4
Cannot open shm /var/tmp/fbenchXMa4vb: No such file or directory
Generating html for /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s

parsing profile for config: randomwrite2k
Running /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s/randomwrite2k/thisrun.f
FileBench Version 1.3.4
Cannot open shm /var/tmp/fbenchKNa4wb: No such file or directory
Generating html for /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s

parsing profile for config: randomwrite8k
Running /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s/randomwrite8k/thisrun.f
FileBench Version 1.3.4
Cannot open shm /var/tmp/fbenchxOa4xb: No such file or directory
Generating html for /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s

parsing profile for config: randomwrite1m
Running /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s/randomwrite1m/thisrun.f
FileBench Version 1.3.4
Cannot open shm /var/tmp/fbenchkPa4yb: No such file or directory
Generating html for /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s

parsing profile for config: singlestreamread1m
Running /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s/singlestreamread1m/thisrun.f
FileBench Version 1.3.4
Cannot open shm /var/tmp/fbench9Pa4zb: No such file or directory
Generating html for /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s

parsing profile for config: singlestreamreaddirect1m
Running /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s/singlestreamreaddirect1m/thisrun.f
FileBench Version 1.3.4
Cannot open shm /var/tmp/fbenchWQa4Ab: No such file or directory
Generating html for /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s

parsing profile for config: singlestreamwrite1m
Running /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s/singlestreamwrite1m/thisrun.f
FileBench Version 1.3.4
Cannot open shm /var/tmp/fbenchJRa4Bb: No such file or directory
Generating html for /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s

parsing profile for config: singlestreamwritedirect1m
Running /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s/singlestreamwritedirect1m/thisrun.f
FileBench Version 1.3.4
Cannot open shm /var/tmp/fbenchwSa4Cb: No such file or directory
Generating html for /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s

parsing profile for config: multistreamread1m
Running /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s/multistreamread1m/thisrun.f
FileBench Version 1.3.4
Cannot open shm /var/tmp/fbenchjTa4Db: No such file or directory
Generating html for /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s

parsing profile for config: multistreamreaddirect1m
Running /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s/multistreamreaddirect1m/thisrun.f
FileBench Version 1.3.4
Cannot open shm /var/tmp/fbench8Ta4Eb: No such file or directory
Generating html for /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s

parsing profile for config: multistreamwrite1m
Running /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s/multistreamwrite1m/thisrun.f
FileBench Version 1.3.4
Cannot open shm /var/tmp/fbenchVUa4Fb: No such file or directory
Generating html for /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s

parsing profile for config: multistreamwritedirect1m
Running /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s/multistreamwritedirect1m/thisrun.f
FileBench Version 1.3.4
Cannot open shm /var/tmp/fbenchIVa4Gb: No such file or directory
Generating html for /tmp/splunk-test-zfs-fileio-Mar_13_2009-00h_18m_02s

bash-3.00#

What I don't understand is why it is trying to export rpool.



All versions will do this on zfs by default.

I am not sure how to get just the filebench binaries, but I can tell you that they are installed as part of OpenSolaris. Everything you need to run
filebench, including the appropriate go_filebench binary, lives in
/usr/benchmarks/filebench on the machine that you have installed OpenSolaris
on.

Drew

On 03/12/09 11:21 AM, Asif Iqbal wrote:

Where can I get the latest filebench source code from?

Filebench 1.3.4 has a bug: it unmounts the root pool while doing a
test. I was using fileio.prof, with zfs instead of tmpfs as my
filesystem.

I see the source can be browsed from here


http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/filebench/

But how do I get the code?







--
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

_______________________________________________
perf-discuss mailing list
[email protected]
