On Fri, 4 Jan 2008 22:43:55 +1100 Chris Samuel <[EMAIL PROTECTED]> wrote:
> On Fri, 4 Jan 2008, Yannick Gingras wrote:
>
> > Greetings,
>
> Hi Yannick,
>
> > Zach Brown pointed me to Btrfs after I published my own flawed
> > attempt to benchmark the too many files problem on GNU/Linux.
>
> This one?
>
> http://ygingras.net/b/2007/12/too-many-files:-reiser-fs-vs-hashed-paths
>
> > Bonnie++ is a popular tool to benchmark FS performance but it's
> > hard to extend.  As an example, I wanted to measure the impact of
> > path hashing and I would have had a hard time doing that with
> > bonnie++.
>
> You could always approach Russell Coker about making it easier to
> extend as he's still working on what will be v2.0 of Bonnie++.  He's
> a nice guy and quite approachable (and is a fellow Aussie, always a
> bonus :-)).
>
> http://etbe.coker.com.au/2007/12/03/new-bonnie-releases/
>
> > Is there a need for a unified benchmarking system with easy to
> > write plugins?  I think so.  Over time, more plugins would be added
> > and the overall measurement would become less (more?) flawed.
>
> I reckon it's a good plan, and Joe Landman over at Scalable
> Informatics has one, though his "Bioinformatics Benchmark System" is
> not aimed at directly testing disk I/O but rather at benchmarking
> applications.  Still, it is GPL'd, so if it is truly pluggable you
> might be able to take advantage of it.

The Sun people have filebench, which they say is modular and
workload-based.  I haven't tried it yet but keep meaning to.  It looks
like they are missing one of my favorite tests, which is timing how
long it takes to read all the files created by a workload (simulating
a backup).

Another somewhat modular tool is Jens Axboe's fio:

http://brick.kernel.dk/snaps

In general, I think it is more important to have a framework for
running tests and organizing results than it is to have all the tests
under a single tool.  Something like autotest with better results
gathering.  I plan on converting my current tools to send out results
that are easier to parse for this kind of setup.

> > But I digress.  On your benchmark page [1], most tests have
> > detailed methodology.  Aside from those, what benchmark tools do
> > you use to compare the performance of Btrfs with that of other
> > file systems?

My benchmarking focus so far is basically disk format QA.  The tests
are constructed to confirm the layout on disk is doing what I expect,
and I'm trying to find the corner cases where it will perform badly.

Aside from the tests I've mentioned on those pages, I also run fio to
make sure my sequential read/write speeds match ext2 and xfs.  This is
one thing that has really improved since the v0.5 results on Chris'
benchmarking page (rewrite would still be slower unless -o nodatacow
was used).

> When I was commissioned to write an article for LinuxWorld on
> "Emerging Linux Filesystems" [1] (including btrfs, NILFS, Reiser4,
> etc.) earlier in the year I decided I'd take a mix of both real-world
> tasks and synthetic benchmarks and try and mix them up.  Seemed to
> work reasonably well and produced a few surprises for me!

A good read and interesting results.  Hopefully they will fund a
follow-up ;)

-chris
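
For reference, the "simulated backup" test mentioned above amounts to
walking whatever tree a workload left behind and timing a full read of
every file in it.  Below is a minimal sketch in Python, not the exact
tool referred to in the message; the default target directory name and
the 1 MiB read size are illustrative assumptions.

    #!/usr/bin/env python
    # Sketch of a "simulated backup" test: walk the tree a workload
    # produced and time how long a full read of every file takes.
    # The default target directory and the 1 MiB read size are
    # illustrative assumptions, not values from the original post.

    import os
    import sys
    import time

    def read_all(root, bufsize=1024 * 1024):
        # Return (file count, bytes read) for every regular file under root.
        files = 0
        total = 0
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    f = open(path, 'rb')
                except (IOError, OSError):
                    continue  # skip files that vanished or are unreadable
                try:
                    while True:
                        buf = f.read(bufsize)
                        if not buf:
                            break
                        total += len(buf)
                finally:
                    f.close()
                files += 1
        return files, total

    if __name__ == '__main__':
        root = sys.argv[1] if len(sys.argv) > 1 else 'workload-output'
        start = time.time()
        files, total = read_all(root)
        elapsed = max(time.time() - start, 1e-6)
        print('%d files, %d bytes in %.2fs (%.1f MB/s)' % (
            files, total, elapsed, total / (1024.0 * 1024.0) / elapsed))

For a cold-cache number, the filesystem would be remounted or the page
cache dropped (echo 3 > /proc/sys/vm/drop_caches) between the write
workload and the read pass.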
