This could be a bug: the actual file sizes in a fileset are drawn from a
gamma distribution with the median at $filesize, so any individual file
can come out quite different from the nominal size. Perhaps the logic
that sets file sizes needs a look for the case of a single-file set?
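
For a rough feel of what that distribution does, here is a small sketch
(illustration only, not filebench's code) that pins a gamma distribution's
median at $filesize and samples from it; the shape parameter of 1.5 is an
assumption picked just to show the spread:

import numpy as np
from scipy.stats import gamma

def sample_file_sizes(nominal_size, n_files, shape=1.5, seed=0):
    # Pin the median at nominal_size: for unit scale the median is
    # gamma.ppf(0.5, shape), so rescale accordingly.
    rng = np.random.default_rng(seed)
    scale = nominal_size / gamma.ppf(0.5, shape)
    return rng.gamma(shape, scale, size=n_files).astype(int)

# Over many files the median tracks $filesize, but a one-file set is a
# single draw and can easily land far below (or above) the nominal size.
sizes = sample_file_sizes(3072, 10000)
print("median over 10000 files:", int(np.median(sizes)))  # close to 3072
print("a one-file set:", sample_file_sizes(3072, 1, seed=7)[0])
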
On Mon, Oct 29, 2007 at 01:14:28PM -0700, John Vijoe George wrote:
> I was running a sequential read test with the filemicro_seqread workload and
> the following parameters, using filebench-1.0.0 on Red Hat Linux (kernel 2.6.18).
>
> FileBench Version 1.0.0
> filebench> load filemicro_seqread
> 2406: 8.407: FileMicro-SeqRead Version 2.0 personality successfully loaded
> 2406: 8.407: Usage: set $dir=<dir> defaults to /mnt/testfs/
> 2406: 8.407: set $iosize=<size> defaults to 4096
> 2406: 8.407: set $filesize=<size> defaults to 3072
> 2406: 8.407: set $nthreads=<value> defaults to 1
> 2406: 8.407: set $cached=<bool> defaults to 0
> 2406: 8.407:
> 2406: 8.407: run runtime (e.g. run 60)
> filebench> run 60
>
> As can be seen from the parameters above, the file size I create is 3072 bytes (3k).
> When I look into /mnt/testfs, I see the following:
>
> # ls -lt bigfileset/00000001/00000001
> -rw-r--r-- 1 root root 393 Oct 29 2007 bigfileset/00000001/00000001
>
> I see only 393 bytes!! Why would this be?
>
> When I increased the filesize to 20k, I saw a file of only 2620 bytes - about 10X
> less. What am I doing wrong?
>
> The filemicro_seqread workload used is as follows:
> set $dir=/mnt/testfs/
> set $nthreads=1
> set $iosize=4k
> set $filesize=20k
> set $cached=0
>
> define fileset
> name=bigfileset,path=$dir,size=$filesize,entries=$nthreads,dirwidth=1024,prealloc=100,cached=$cached
>
> define process name=filereader,instances=1
> {
> thread name=filereaderthread,memsize=10m,instances=$nthreads
> {
> flowop read name=append-file,filesetname=bigfileset,iosize=$iosize,fd=1
> }
> }
>
_______________________________________________
perf-discuss mailing list
[email protected]