Re: compilebench numbers for ext4

2007-10-25 Thread Chris Mason
On Thu, 25 Oct 2007 17:40:25 -0500
"Jose R. Santos" <[EMAIL PROTECTED]> wrote:

> > > 
> > > I really want to use seekwatcher to test some of the stuff that
> > > I'm doing for the flex_bg feature, but it barfs on me on my test
> > > machine.
> > > 
> > > running :sleep 10:
> > > done running sleep 10
> > > Device: /dev/sdh
> > >   Total: 0 events (dropped 0), 1368 KiB data
> > > blktrace done
> > > Traceback (most recent call last):
> > >   File "/usr/bin/seekwatcher", line 534, in ?
> > > add_range(hist, step, start, size)
> > >   File "/usr/bin/seekwatcher", line 522, in add_range
> > > val = hist[slot]
> > > IndexError: list index out of range
> > 
> > I don't think you have any events in the trace.  Try this instead:
> > 
> > echo 3 > /proc/sys/vm/drop_caches
> > seekwatcher -t find-trace -d /dev/ -p 'find /usr/local -type f'
> 
> Nope, get the same error.  There does seem to be data recorded in the
> trace files and iostat does show activity on the disk.

Hmmm, could you please send me your trace files?  There will be one for
each CPU, starting with find-trace-blktrace.

> > I wanted to benchmark flexbg too, but couldn't quite figure out the
> > correct patch combination ;)
> 
> I'll attach the e2fsprogs and kernel patches, but do realize that these
> are experimental patches that I'm using to test what layout would work
> best.  Don't take them too seriously, as they are largely incomplete.

Thanks, I'll try this out.

> 
> Currently trying to come up with workloads to test this and other
> changes with.  I'm warming up to yours :)

At least for the write phases of compilebench, it should benefit from
data and metadata separation.  It made a very big difference in btrfs
(from 20MB/s up to 32MB/s on create).  However, it did make the read
phases slower.

-chris


Re: compilebench numbers for ext4

2007-10-25 Thread Chris Mason
On Thu, 25 Oct 2007 10:34:49 -0500
"Jose R. Santos" <[EMAIL PROTECTED]> wrote:

> On Mon, 22 Oct 2007 19:31:04 -0400
> Chris Mason <[EMAIL PROTECTED]> wrote:
> 
> > Hello everyone,
> > 
> > I recently posted some performance numbers for Btrfs with different
> > blocksizes, and to help establish a baseline I did comparisons with
> > Ext3.
> > 
> > The graphs, numbers and a basic description of compilebench are
> > here:
> > 
> > http://oss.oracle.com/~mason/blocksizes/
> 
> I've been playing a bit with the workload and I have a couple of
> comments.
> 
> 1) I find the averaging of results at the end of the run misleading
> unless you run a high number of directories.  A single very good
> result due to page caching effects seems to skew the final results
> output.  Have you considered providing output of the standard
> deviation of the data points as well, in order to show how widely the
> results are spread?

This is the main reason I keep the output from each run.  Stdev would
definitely help as well, I'll put it on the todo list.
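
For reference, here's the kind of summary I have in mind, as a minimal
sketch (this is not compilebench code; the per-run MB/s list is just a
stand-in for whatever a phase reports):

import math

def summarize(results_mb_s):
    # report mean and standard deviation for a list of per-run MB/s values
    n = len(results_mb_s)
    mean = sum(results_mb_s) / n
    # a couple of page-cache-assisted runs show up as a large spread
    # even when the mean looks healthy
    stdev = math.sqrt(sum((x - mean) ** 2 for x in results_mb_s) / n)
    return mean, stdev

mean, stdev = summarize([18.2, 19.1, 17.8, 31.5])
print("avg %.2f MB/s, stdev %.2f" % (mean, stdev))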

> 
> 2) You mentioned that one of the goals of the benchmark is to measure
> locality during directory aging, but the workload seems too well
> ordered to truly age the filesystem.  At least that's what I can gather
> from the output the benchmark spits out.  It may be that I'm not
> understanding the relationship between INITIAL_DIRS and RUNS, but the
> workload seems to be localized to do operations on a single dir at a
> time.  Just wondering if this is truly stressing the allocation
> algorithms in a significant or realistic way.

A good question.  compilebench has two modes, and the default is better
at aging than the run I graphed on ext4.  compilebench isn't trying to
fragment individual files, but it is instead trying to fragment
locality, and lower the overall performance of a directory tree.

In the default run, the patch, clean, and compile operations end up
changing around groups of files in a somewhat random fashion (at least
from the FS point of view).  But, it is still a workload where a good
FS should be able to maintain locality and provide consistent results
over time.

The ext4 numbers I sent here are from compilebench --makej, which is a
shorter and less complex run.  It has a few simple phases:

* create some number of kernel trees sequentially
* write new files into those trees in random order
* read three of the trees
* delete all the trees

It is a very basic test that can give you a picture of directory
layout, writeback performance and overall locality.
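
Roughly, in pseudo-Python (only a sketch of the phase ordering, not the
actual compilebench source; the helper functions are made up):

import random

def makej_run(num_trees):
    trees = [create_kernel_tree(i) for i in range(num_trees)]  # sequential creates
    for tree in trees:
        write_new_files_in_random_order(tree)   # .o-style files all over the tree
    for tree in random.sample(trees, min(3, len(trees))):
        read_tree(tree)                         # read a few trees back
    for tree in trees:
        delete_tree(tree)                       # remove everything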

> 
> If I understand how compilebench works, directories would be allocated
> within one or two block group boundaries, so the data and metadata
> would be in very close proximity.  I assume that doing random lookups
> through the entire file set would show some weakness in the ext3
> metadata layout.

Probably.

> 
> I really want to use seekwatcher to test some of the stuff that I'm
> doing for the flex_bg feature, but it barfs on me on my test machine.
> 
> running :sleep 10:
> done running sleep 10
> Device: /dev/sdh
>   Total: 0 events (dropped 0), 1368 KiB data
> blktrace done
> Traceback (most recent call last):
>   File "/usr/bin/seekwatcher", line 534, in ?
> add_range(hist, step, start, size)
>   File "/usr/bin/seekwatcher", line 522, in add_range
> val = hist[slot]
> IndexError: list index out of range

I don't think you have any events in the trace.  Try this instead:

echo 3 > /proc/sys/vm/drop_caches
seekwatcher -t find-trace -d /dev/ -p 'find /usr/local -type f'
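
For what it's worth, the traceback is consistent with a trace that has
no events: add_range() computes a histogram slot from the event offset
and indexes straight into the list.  Purely as a guess at the shape of
the code around line 522 (not the actual seekwatcher source), a
defensive version would clamp the slot first:

def add_range(hist, step, start, size):
    slot = int(start / step)
    # an empty or bogus trace can produce a slot past the end of the
    # histogram, so bail out instead of raising IndexError
    if slot >= len(hist):
        return
    hist[slot] += size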

> 
> This is running on a PPC64/gentoo combination.  Don't know if this
> means anything to you.  I have a very basic algorithm to take
> advantage of block group metadata grouping and want to be able to
> better visualize how different IO patterns take advantage of or are
> hurt by the feature.

I wanted to benchmark flexbg too, but couldn't quite figure out the
correct patch combination ;)

> 
> > To match the ext4 numbers with Btrfs, I'd probably have to turn off
> > data checksumming...
> > 
> > But oddly enough I saw very bad ext4 read throughput even when
> > reading a single kernel tree (outside of compilebench).  The time
> > to read the tree was almost 2x ext3.  Have others seen similar
> > problems?
> > 
> > I think the ext4 delete times are so much better than ext3 because
> > this is a single threaded test.  delayed allocation is able to get
> > everything into a few extents, and these all end up in the inode.
> > So, the delete phase only needs to seek around in small directories
> > and seek to well grouped inode

Re: compilebench numbers for ext4

2007-10-23 Thread Chris Mason
On Tue, 23 Oct 2007 18:13:53 +0530
"Aneesh Kumar K.V" <[EMAIL PROTECTED]> wrote:

> 
> I get this error while running compilebench  
> 
> http://oss.oracle.com/~mason/compilebench/compilebench-0.4.tar.bz2

I've uploaded compilebench-0.6.tar.bz2 and updated the docs on the
compilebench page.  This includes the --makej option that I used for
the numbers I have posted (sorry, I thought that was pushed out
already).

For consistency with seekwatcher, I changed the -d working_dir option
into -D working_dir.  The actual run I used was:

./compilebench -D /mnt --makej -i 20 -d /dev/ -t trace-ext4

-d and -t make compilebench start blktrace for you at the start of each
phase, which allows easy creation of the graphs, but this isn't
required.
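
Conceptually, all -d and -t do is bracket each phase with a blktrace
run, roughly like this (a sketch, not the actual compilebench code; only
the blktrace -d/-o options are real):

import os, signal, subprocess

def traced_phase(device, trace_name, phase_func):
    # start blktrace on the device, run the phase, then stop the trace
    trace = subprocess.Popen(["blktrace", "-d", device, "-o", trace_name])
    try:
        phase_func()
    finally:
        os.kill(trace.pid, signal.SIGINT)  # blktrace writes its per-CPU files on SIGINT
        trace.wait()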

> 
> 
> elm3b138:~/compilebench-0.4# ./compilebench  -d /ext4/
> Traceback (most recent call last):
>   File "./compilebench", line 541, in ?
> total_runs += func(dset, rnd)
>   File "./compilebench", line 431, in create_one_dir
> mbs = run_directory(dset.unpatched, dirname, "create dir")
>   File "./compilebench", line 217, in run_directory
> fp = file(fname, 'a+')
> IOError: [Errno 2] No such file or directory:
> '/ext4/kernel-75618/fs/smbfs/symlink.c' elm3b138:~/compilebench-0.4#

I'm not sure, did you run out of space?

-chris


Re: compilebench numbers for ext4

2007-10-22 Thread Chris Mason
On Mon, 22 Oct 2007 17:12:58 -0700
Mingming Cao <[EMAIL PROTECTED]> wrote:

> On Mon, 2007-10-22 at 19:31 -0400, Chris Mason wrote:
> > Hello everyone,
> > 
> > I recently posted some performance numbers for Btrfs with different
> > blocksizes, and to help establish a baseline I did comparisons with
> > Ext3.
> > 
> 
> Thanks for doing this, Chris!
> 
> > The graphs, numbers and a basic description of compilebench are
> > here:
> > 
> > http://oss.oracle.com/~mason/blocksizes/
> > 
> > Ext3 easily wins the read phase, but scores poorly while creating
> > files and deleting them.  Since ext3 is winning the read phase, we
> > can assume the file layout is fairly good.  I think most of the
> > problems during the write phase are caused by pdflush doing
> > metadata writeback.  The file data and metadata are written
> > separately, and so we end up seeking between things that are
> > actually close together.
> > 
> > Andreas asked me to give ext4 a try, so I grabbed the patch queue
> > from Friday along with the latest Linus kernel.  The FS was created
> > with:
> > 
> > mkfs.ext3 -I 256 /dev/
> > mount -o delalloc,mballoc,data=ordered -t ext4dev /dev/
> > 
> > I did expect delayed allocation to help the write phases of
> > compilebench, especially the parts where it writes out .o files in
> > random order (basically writing medium sized files all over the
> > directory tree).
> 
> Unfortunately delayed allocation support for ordered mode is not there
> yet. 

Sorry, I meant to write data=writeback, not sure how my fingers typed
ordered instead.

> 
> >   But, every phase except reads showed huge
> > improvements.
> > 
> > http://oss.oracle.com/~mason/compilebench/ext4/ext-create-compare.png
> > http://oss.oracle.com/~mason/compilebench/ext4/ext-compile-compare.png
> > http://oss.oracle.com/~mason/compilebench/ext4/ext-read-compare.png
> > http://oss.oracle.com/~mason/compilebench/ext4/ext-rm-compare.png
> > 
> > To match the ext4 numbers with Btrfs, I'd probably have to turn off
> > data checksumming...
> > 
> > But oddly enough I saw very bad ext4 read throughput even when
> > reading a single kernel tree (outside of compilebench).  The time
> > to read the tree was almost 2x ext3.  Have others seen similar
> > problems?
> > 
> thanks for pointing this out, will run compilebench.
> 
> Trying to understand the Disk IO graph
> http://oss.oracle.com/~mason/compilebench/ext4/ext-read-compare.png
> it looks like with ext3 the blocks are spread over the disk, while with
> ext4 they are more around the same place, is this right?

It does look like that, but the ext4 movie shows the middle line a
little differently than the graph.  The middle ext4 line is actually
comprised of a lot of seeks.

For comparison, here's the ext3 movie:

http://oss.oracle.com/~mason/compilebench/ext4/ext3-read.mpg

Even though the ext3 data looks more spread out, there are more
throughput peaks, and fewer seeks overall in ext3.

-chris




Re: compilebench numbers for ext4

2007-10-22 Thread Chris Mason
On Mon, 22 Oct 2007 19:31:04 -0400
Chris Mason <[EMAIL PROTECTED]> wrote:
 
> I did expect delayed allocation to help the write phases of
> compilebench, especially the parts where it writes out .o files in
> random order (basically writing medium sized files all over the
> directory tree).  But, every phase except reads showed huge
> improvements.
> 
> http://oss.oracle.com/~mason/compilebench/ext4/ext-create-compare.png
> http://oss.oracle.com/~mason/compilebench/ext4/ext-compile-compare.png
> http://oss.oracle.com/~mason/compilebench/ext4/ext-read-compare.png
> http://oss.oracle.com/~mason/compilebench/ext4/ext-rm-compare.png

This might make the IO during reads a little easier to see.  The dirs
will look like the kernel after a make -j.  So each directory will have
a bunch of small .c files that are close together and a bunch of .o
files that are randomly created across the tree.

http://oss.oracle.com/~mason/compilebench/ext4/ext4-read.mpg

-chris


compilebench numbers for ext4

2007-10-22 Thread Chris Mason
Hello everyone,

I recently posted some performance numbers for Btrfs with different
blocksizes, and to help establish a baseline I did comparisons with
Ext3.

The graphs, numbers and a basic description of compilebench are here:

http://oss.oracle.com/~mason/blocksizes/

Ext3 easily wins the read phase, but scores poorly while creating files
and deleting them.  Since ext3 is winning the read phase, we can assume
the file layout is fairly good.  I think most of the problems during the
write phase are caused by pdflush doing metadata writeback.  The file
data and metadata are written separately, and so we end up seeking
between things that are actually close together.

Andreas asked me to give ext4 a try, so I grabbed the patch queue from
Friday along with the latest Linus kernel.  The FS was created with:

mkfs.ext3 -I 256 /dev/
mount -o delalloc,mballoc,data=ordered -t ext4dev /dev/

I did expect delayed allocation to help the write phases of
compilebench, especially the parts where it writes out .o files in
random order (basically writing medium sized files all over the
directory tree).  But, every phase except reads showed huge
improvements.

http://oss.oracle.com/~mason/compilebench/ext4/ext-create-compare.png
http://oss.oracle.com/~mason/compilebench/ext4/ext-compile-compare.png
http://oss.oracle.com/~mason/compilebench/ext4/ext-read-compare.png
http://oss.oracle.com/~mason/compilebench/ext4/ext-rm-compare.png

To match the ext4 numbers with Btrfs, I'd probably have to turn off data
checksumming...

But oddly enough I saw very bad ext4 read throughput even when reading
a single kernel tree (outside of compilebench).  The time to read the
tree was almost 2x ext3.  Have others seen similar problems?

I think the ext4 delete times are so much better than ext3 because this
is a single-threaded test.  Delayed allocation is able to get
everything into a few extents, and these all end up in the inode.  So,
the delete phase only needs to seek around in small directories and
seek to well grouped inodes.  ext3 probably had to seek all over for
the direct/indirect blocks.

So, tomorrow I'll run a few tests with delalloc and mballoc
independently, but if there are other numbers people are interested in,
please let me know.

(The test box was a desktop machine with a single SATA drive; barriers
were not used.)

-chris


Re: pagefault in generic_file_buffered_write() causing deadlock

2006-11-15 Thread Chris Mason
On Wed, Nov 15, 2006 at 11:29:57AM -0800, Andrew Morton wrote:
> Oh well.  If it's a deadlock (this is not clear from your description) then
> please gather backtraces of all affected tasks.
> 
> There is an ab/ba deadlock with journal_start() and lock_page(), iirc. 
> Chris and I had a look at that a while back and collapsed in exhaustion -
> it isn't pretty.  

This should be the page fault/journal lock inversion stuff Nick was
working on.  His patchset had a pretty good description of the problems;
Badari can also dig through the Novell/LTC bugzillas for vmmstress.  It
should be LTC9358.
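
For anyone who hasn't stared at this before, the ab/ba inversion Andrew
describes is just two paths taking the same pair of locks in opposite
order.  A toy illustration, with Python locks standing in for the
journal handle and the page lock (obviously not the kernel code):

import threading

journal = threading.Lock()   # stands in for journal_start()
page = threading.Lock()      # stands in for lock_page()

def write_path():
    with journal:            # journal handle held across the user copy
        with page:           # page fault during the copy needs the page lock
            pass

def writeback_path():
    with page:               # writeback holds the page lock...
        with journal:        # ...then wants a journal handle
            pass

# If both paths run concurrently and each grabs its first lock before
# the other's second, neither can make progress.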

Hopefully Nick's patches will address all of this.  SLES9 had a partial
solution for the mmap deadlock; I think it was to dirty the inode at a
later time.  For some reason, I thought this workload was passing in
later kernels...

-chris