On Thu, 24 Dec 2009 at 13:05, Peter Grandi wrote:
> Unfortunately there seems to be an overproduction of rather
> meaningless file system "benchmarks"...

That's why I wrote "benchmark programs" and not "benchmarks": I'm well 
aware that opinions vary a lot about how useful (synthetic) benchmarks 
are, but whenever I'm looking for _equal_ comparisons (that's what my 
tests are about) I only find stale data.

> * In the "generic" test the 'tar' test bandwidth is exactly the
>   same ("276.68 MB/s") for nearly all filesystems.

True, because I'm tarring up ~2.7GB of content while the box is equipped 
with 8GB of RAM. So it *should* be the same for all filesystems, as Linux 
can easily hold all of this in its page cache. Still, jfs and zfs manage 
to be slower than the rest.
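If I ever want a cold-cache variant of that tar test, dropping the page
cache before each run should do. A rough Python sketch (untested; the
source path is made up, and it needs root for /proc/sys/vm/drop_caches):

    import subprocess, time

    def drop_caches():
        subprocess.run(["sync"], check=True)   # flush dirty pages first
        with open("/proc/sys/vm/drop_caches", "w") as f:
            f.write("3\n")                     # drop pagecache + dentries + inodes

    def timed_tar(src="/mnt/test/data"):       # hypothetical path
        drop_caches()
        start = time.time()
        # Write the archive to stdout and discard it; GNU tar special-cases
        # '-f /dev/null' and skips reading file data, so don't use that.
        subprocess.run(["tar", "-cf", "-", src],
                       stdout=subprocess.DEVNULL, check=True)
        return time.time() - start

    print("cold-cache tar took %.1f s" % timed_tar())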

> * There are read transfer rates higher than the one reported by
>   'hdparm' which is "66.23 MB/sec" (comically enough *all* the
>   read transfer rates your "benchmarks" report are higher).

The bonnie++ read results come in close to those 66MB/s; the "generic" 
tests run with the help of the filesystem caches (see above), so it's no 
wonder that they're higher than a plain read from disk - the tests are 
allowed to utilize their caches as well.
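To double-check the hdparm figure one could measure the raw sequential
read rate directly; a quick-and-dirty sketch (the device name is an
assumption, it needs root, and it is only meaningful after dropping the
caches as above):

    import os, time

    def read_rate(path="/dev/sdb", mb=256, bs=1024 * 1024):  # made-up device
        fd = os.open(path, os.O_RDONLY)
        try:
            start = time.time()
            for _ in range(mb):                # read 'mb' one-MB chunks
                os.read(fd, bs)
            return mb / (time.time() - start)  # MB/s
        finally:
            os.close(fd)

    print("sequential read: %.2f MB/s" % read_rate())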

> BTW the use of Bonnie++ is also usually a symptom of a poor
> misunderstanding of file system benchmarking.

I didn't write this application, yet I find it useful for the sake of 
comparison: do "something" to the filesystem, and then do the same for 
all the other filesystems as well. Maybe I should add such a 
disclaimer[0] to the results page as well :-)

> I think that it is rather better to run a few simple operations
> (like the "generic" test) properly (unlike the "generic" test),

If you're suggesting to run the generic tests with data sets big enough 
to NOT hit the filesystem caches anymore - that's exactly what I chose 
not to do. I think we're all lucky that filesystem caches exist, 
otherwise things would be a lot slower.
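For completeness: if one *did* want to defeat the caches, sizing the
data set off /proc/meminfo would be the obvious route. Something like
this (sketch; the factor of two is my arbitrary pick):

    import re

    def ram_bytes():
        with open("/proc/meminfo") as f:
            kb = int(re.search(r"MemTotal:\s+(\d+) kB", f.read()).group(1))
        return kb * 1024

    # a working set of at least twice physical RAM cannot be held
    # by the page cache in its entirety
    test_size = 2 * ram_bytes()
    print("test data size: %.1f GB" % (test_size / 2**30))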

> That is however a generally valid conclusion, but with a very,
> very important qualification: for freshly loaded filesystems.

I hope that's clear to everybody as soon as they hear the word 
"benchmark" - yes, I always mkfs before running the next test.
But again: this is done for every filesystem, and this is not a 
filesystem-aging test.
I've yet to find good benchmark programs (or results) that take 
filesystem (us)age into account. Maybe I should try this myself, though 
the usage patterns would be far from "real world usage" and limited to 
simple but numerous cp/rm operations on the filesystem. I'm happy to 
take any pointers on how to implement this :-)
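To make the idea concrete, the aging loop I have in mind would be little
more than this (untested sketch; mount point, counts and sizes are all
invented):

    import os, random, shutil

    def age_fs(root="/mnt/test/aging", rounds=1000, max_kb=4096):
        os.makedirs(root, exist_ok=True)
        files = []
        for i in range(rounds):
            path = os.path.join(root, "f%06d" % i)
            with open(path, "wb") as f:            # create a random-sized file
                f.write(os.urandom(random.randint(1, max_kb) * 1024))
            files.append(path)
            if random.random() < 0.5:              # remove a random victim
                os.remove(files.pop(random.randrange(len(files))))
            if files and random.random() < 0.3:    # copy a random survivor
                src = random.choice(files)
                dst = "%s.copy%d" % (src, i)
                shutil.copyfile(src, dst)
                files.append(dst)

    age_fs()

Running the regular benchmark before and after such a loop should at
least show whether aging matters for a given filesystem.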

Christian.

[0] http://www.sabi.co.uk/Notes/linuxFS.html#fsRefsBench
-- 
BOFH excuse #379:

We've picked COBOL as the language of choice.
