Try changing the "run 10" to "sleep 10"...
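In other words, only the loop body changes; a sketch, keeping the rest of your workload file as-is:

foreach $iosize in 8k, 16k, 64k
{
run 10
}

becomes

foreach $iosize in 8k, 16k, 64k
{
sleep 10
}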
Richard.
On 12/2/08 8:34 AM, "Demetri S. Mouratis" <[EMAIL PROTECTED]> wrote:
Gang,
I'm having trouble getting the 'foreach' syntax to loop through different I/O
sizes. Here is a sample of code I'm trying to get working:
#!/opt/filebench/bin/go_filebench -f
#
define file name=data, path=/filebench/local, size=128m, prealloc, reuse
define process name=randWriteProcess,instances=1
{
thread name=randWriteThread1, memsize=8m, instances=1
{
flowop write name=randWriter1, filename=data, iosize=$iosize, random
}
}
foreach $iosize in 8k, 16k, 64k
{
run 10
}
But it hangs on the second iteration:
$ ./foreach.f
FileBench Version 1.3.4
27404: 0.030: Iterating $iosize=8192
27404: 0.030: Creating/pre-allocating files and filesets
27404: 0.042: File data: mbytes=128
27404: 0.042: Re-using file data.
27404: 0.043: Creating file data...
27404: 0.063: Preallocated 1 of 1 of file data in 1 seconds
27404: 0.063: waiting for fileset pre-allocation to finish
27404: 0.064: Starting 1 randWriteProcess instances
27405: 1.069: Starting 1 randWriteThread1 threads
27404: 4.078: Running...
27404: 14.178: Run took 10 seconds...
27404: 14.180: Per-Operation Breakdown
randWriter1 854ops/s 6.6mb/s 1.0ms/op 112us/op-cpu
27404: 14.180:
IO Summary: 8627 ops 854.3 ops/s, (0/854 r/w) 6.6mb/s, 336us cpu/op, 1.0ms latency
27404: 14.180: Shutting down processes
27404: 17.208: Iterating $iosize=16384
27404: 17.209: Creating/pre-allocating files and filesets
27404: 17.209: File data: mbytes=128
27404: 17.209: Re-using file data.
27404: 17.209: Creating file data...
27404: 19.619: Preallocated 1 of 1 of file data in 3 seconds
27404: 19.619: waiting for fileset pre-allocation to finish
[hangs forever, calling nanosleep once a second]
There are no foreach examples in filebench/workloads, so I'm wondering whether other people have had success with this construct.
...Demetri
--
This message posted from opensolaris.org
_______________________________________________
perf-discuss mailing list
[email protected]