On Mon, 18 Aug 2008, Chris Mason wrote:
> On Sat, 2008-08-16 at 22:26 +0300, Szabolcs Szakacsits wrote:
> >
> > We tried compilebench (-i 30 -r 0) just for fun using kernel 2.6.26,
> > freshly formatted partition, with defaults. Results:
> >
> > MB/s    Runtime (s)
> >
On Sat, 2008-08-16 at 22:26 +0300, Szabolcs Szakacsits wrote:
> On Fri, 15 Aug 2008, Chris Mason wrote:
>
> > Ext3 and XFS score somewhere between 10-15MB/s on the same test...
>
> Interesting (and cool animations).
>
> We tried compilebench (-i 30 -r 0) just for fun using kernel 2.6.26,
> freshly formatted partition, with defaults. Results:
On Fri, 15 Aug 2008, Chris Mason wrote:
> Ext3 and XFS score somewhere between 10-15MB/s on the same test...
Interesting (and cool animations).
We tried compilebench (-i 30 -r 0) just for fun using kernel 2.6.26,
freshly formatted partition, with defaults. Results:
MB/s    Runtime (s)
On Sat, Aug 16, 2008 at 02:10:10PM -0400, Chris Mason wrote:
>
> I tried just the writeback_index patch and got only 4 fragmented files
> on ext4 after a compilebench run. Then I tried again and got 1200.
> Seems there is something timing dependent in here ;)
>
Yeah, the patch Aneesh sent to ch
On Fri, 2008-08-15 at 16:37 -0400, Chris Mason wrote:
> On Fri, 2008-08-15 at 15:59 -0400, Theodore Tso wrote:
> > On Fri, Aug 15, 2008 at 01:52:52PM -0400, Chris Mason wrote:
> > > Have you tried this one:
> > >
> > > http://article.gmane.org/gmane.linux.file-systems/25560
> > >
> > > This bug should cause fragmentation on small files getting forced out
> > > due to memory pressure in ext4.
On Fri, 2008-08-15 at 15:59 -0400, Theodore Tso wrote:
> On Fri, Aug 15, 2008 at 01:52:52PM -0400, Chris Mason wrote:
> > Have you tried this one:
> >
> > http://article.gmane.org/gmane.linux.file-systems/25560
> >
> > This bug should cause fragmentation on small files getting forced out
> > due to memory pressure in ext4.
On Fri, Aug 15, 2008 at 01:52:52PM -0400, Chris Mason wrote:
> Have you tried this one:
>
> http://article.gmane.org/gmane.linux.file-systems/25560
>
> This bug should cause fragmentation on small files getting forced out
> due to memory pressure in ext4. But, I wasn't able to really
> demonstrate
On Fri, 2008-08-15 at 09:45 -0400, Theodore Tso wrote:
> On Fri, Aug 15, 2008 at 08:46:01AM -0400, Chris Mason wrote:
> > Whoops the link above is wrong, try:
> >
> > http://oss.oracle.com/~mason/compilebench
>
> Thanks, I figured it out.
>
> > It is worth noting that the end throughput doesn't matter quite as much
> > as the writeback pattern.
On Fri, Aug 15, 2008 at 08:46:01AM -0400, Chris Mason wrote:
> Whoops the link above is wrong, try:
>
> http://oss.oracle.com/~mason/compilebench
Thanks, I figured it out.
> It is worth noting that the end throughput doesn't matter quite as much
> as the writeback pattern. Ext4 is pretty solid
On Fri, 2008-08-15 at 03:39 +0200, Andi Kleen wrote:
> > The async worker threads should be spreading the load across CPUs pretty
> > well, and even a single CPU could keep up with 100MB/s checksumming.
> > But, the async worker threads do randomize the IO somewhat because the
> > IO goes from pdflush -> one worker thread per CPU -> submit_bio.
On Thu, 2008-08-14 at 21:10 -0400, Chris Mason wrote:
> On Thu, 2008-08-14 at 19:44 -0400, Theodore Tso wrote:
> > > I spent a bunch of time hammering on different ways to fix this without
> > > increasing nr_requests, and it was a mixture of needing better tuning in
> > > btrfs and needing to init mapping->writeback_index on inode allocation.
> The async worker threads should be spreading the load across CPUs pretty
> well, and even a single CPU could keep up with 100MB/s checksumming.
> But, the async worker threads do randomize the IO somewhat because the
> IO goes from pdflush -> one worker thread per CPU -> submit_bio. So,
> maybe
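The pdflush -> per-CPU worker -> submit_bio pipeline above can be modeled with a toy sketch (Python, not kernel code; the function names are made up): pages handed over in order get fanned out round-robin to per-CPU checksum workers, and once each worker submits its own batch, the global submission order is no longer sequential.

```python
# Toy model of why per-CPU checksum workers randomize IO order.
# Nothing here is btrfs code; it just illustrates the reordering.

def fan_out(pages, nr_cpus):
    """Distribute page numbers round-robin across per-CPU worker queues."""
    queues = [[] for _ in range(nr_cpus)]
    for i, page in enumerate(pages):
        queues[i % nr_cpus].append(page)
    return queues

def submit_order(queues):
    """Each worker submits its whole queue in turn (worst case)."""
    order = []
    for q in queues:
        order.extend(q)
    return order

pages = list(range(8))        # pdflush hands over pages 0..7 in order
queues = fan_out(pages, 4)    # one worker thread per CPU
print(submit_order(queues))   # [0, 4, 1, 5, 2, 6, 3, 7] -- not sequential
```

Even with this worst case, each worker's own batch is still in order, which is why the damage is "somewhat" random rather than fully random.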
On Thu, 2008-08-14 at 23:17 +0200, Andi Kleen wrote:
> On Thu, Aug 14, 2008 at 05:00:56PM -0400, Chris Mason wrote:
> > Btrfs defaults 57.41 MB/s
Looks like I can get the btrfs defaults up to 64MB/s with some writeback
tweaks.
> > Btrfs dup no csum 74.59 MB/s
>
>
On Thu, 2008-08-14 at 19:44 -0400, Theodore Tso wrote:
> > I spent a bunch of time hammering on different ways to fix this without
> > increasing nr_requests, and it was a mixture of needing better tuning in
> > btrfs and needing to init mapping->writeback_index on inode allocation.
> >
> > So, today's numbers for creating 30 kernel trees in sequence:
> I spent a bunch of time hammering on different ways to fix this without
> increasing nr_requests, and it was a mixture of needing better tuning in
> btrfs and needing to init mapping->writeback_index on inode allocation.
>
> So, today's numbers for creating 30 kernel trees in sequence:
>
> Btrf
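The writeback_index fix above can be illustrated with a small model (Python sketch, not the kernel implementation): writeback resumes from the per-mapping cursor and wraps around, so a cursor that isn't initialized at inode allocation can start mid-file and split one sequential write into two on-disk pieces.

```python
# Toy model of mapping->writeback_index: one writeback pass starts
# at the saved cursor and wraps. Not kernel code.

def writeback_pass(nr_pages, writeback_index):
    """Return the page order of one full writeback pass over a file."""
    return (list(range(writeback_index, nr_pages)) +
            list(range(0, writeback_index)))

# Cursor initialized to 0 at inode allocation: fully sequential.
print(writeback_pass(6, 0))   # [0, 1, 2, 3, 4, 5]

# Stale/uninitialized cursor: the pass starts mid-file, so the file
# goes out as two runs -> fragmentation on small files.
print(writeback_pass(6, 4))   # [4, 5, 0, 1, 2, 3]
```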
On Thu, Aug 14, 2008 at 05:00:56PM -0400, Chris Mason wrote:
> Btrfs defaults 57.41 MB/s
> Btrfs dup no csum 74.59 MB/s
With duplication, checksums seem to be quite costly (CPU bound?)
> Btrfs no duplication  76.83 MB/s
> Btrfs no dup no csum no inline 7
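The CPU cost being discussed comes from checksumming every block written; a rough illustration (Python, with zlib.crc32 standing in for the crc32c btrfs actually uses) shows the per-block structure of that work:

```python
import zlib

# Data checksumming sketch: every 4 KiB block written gets its own
# checksum, so each MB of throughput costs a fixed number of CPU
# passes over the data on top of the IO itself. zlib.crc32 is a
# stand-in here, not the btrfs csum implementation.

BLOCK = 4096

def checksum_blocks(data):
    """Checksum data in 4 KiB blocks, one csum per block."""
    return [zlib.crc32(data[off:off + BLOCK]) & 0xffffffff
            for off in range(0, len(data), BLOCK)]

data = bytes(1024 * 1024)     # 1 MiB of zeroes
sums = checksum_blocks(data)
print(len(sums))              # 256 checksums per MiB written
```

At ~75 MB/s that is roughly 19,000 block checksums per second, which is why spreading them across CPUs (and whether that spreading reorders IO) matters.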
On Fri, 2008-08-08 at 14:48 -0400, Chris Mason wrote:
> On Thu, 2008-08-07 at 20:02 +0200, Andi Kleen wrote:
> > Chris Mason <[EMAIL PROTECTED]> writes:
> > >
> > > Metadata is duplicated by default even on single spindle drives,
> >
> > Can you please say a bit how much that impacts performance?
On Sat, Aug 09, 2008 at 03:23:22AM +0200, Andi Kleen wrote:
> > In theory, if the elevator was smart enough, it could actually help
> > read seekiness; there are two copies of the metadata, and it shouldn't
>
> That assumes the elevator actually knows what is nearby? I thought
> that wasn't that easy with modern disks with multiple spindles
> and invisible remapping.
> In theory, if the elevator was smart enough, it could actually help
> read seekiness; there are two copies of the metadata, and it shouldn't
That assumes the elevator actually knows what is nearby? I thought
that wasn't that easy with modern disks with multiple spindles
and invisible remapping,
On Fri, Aug 08, 2008 at 11:56:25PM +0200, Andi Kleen wrote:
> > So, the mirroring turns a single large write into two large writes.
> > Definitely not free, but always a fixed cost.
>
> Thanks for the explanation and the numbers. I see that's the advantage of
> copy-on-write that you can actually always cluster the metadata together and
> get batched IO.
> So, the mirroring turns a single large write into two large writes.
> Definitely not free, but always a fixed cost.
Thanks for the explanation and the numbers. I see that's the advantage of
copy-on-write: you can always cluster the metadata together and always get
batched IO this way.
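The "two large writes, fixed cost" point can be sketched as follows (Python toy model, not the real chunk allocator): single-drive metadata duplication writes every metadata block to two separate chunk locations, doubling bytes written but adding no extra seeks as long as each copy stays contiguous.

```python
# Toy model of single-spindle metadata duplication: one logical
# metadata write becomes two physical writes to different chunks.
# Offsets and sizes here are made up for illustration.

def dup_write(disk, block, primary_off, mirror_off):
    """Write one metadata block to both of its chunk locations."""
    disk[primary_off:primary_off + len(block)] = block
    disk[mirror_off:mirror_off + len(block)] = block

disk = bytearray(64)
dup_write(disk, b"btree-node", 0, 32)   # primary chunk and mirror chunk
print(disk[0:10] == disk[32:42])        # True: two identical copies
```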
On Thu, 2008-08-07 at 20:02 +0200, Andi Kleen wrote:
> Chris Mason <[EMAIL PROTECTED]> writes:
> >
> > Metadata is duplicated by default even on single spindle drives,
>
> Can you please say a bit how much that impacts performance? That sounds
> costly.
Most metadata is allocated in groups of 1
> If there is no
> alternate mirror, the caller gets EIO and in the case of a failed csum,
> the page is zero filled (actually filled with ones so I can find bogus
> pages in an oops).
You mention there will be a utility to scrub the disks to repair stuff
like this, but does it make sense to retry
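The fallback path being quoted can be sketched like this (Python, with made-up names and zlib.crc32 standing in for the real csum; not btrfs code): verify the first copy's checksum, retry the alternate mirror on a bad csum, and only if every copy fails, hand back a page filled with ones so bogus pages stand out in an oops.

```python
import zlib

PAGE = 16  # toy page size

def read_with_fallback(copies, expected_csum):
    """Return the first copy whose checksum matches, else a ones-filled page."""
    for data in copies:
        if zlib.crc32(data) & 0xffffffff == expected_csum:
            return data
    # No alternate mirror verified: caller would get EIO; the page
    # itself is filled with ones (0xff) rather than zeroes.
    return bytes([0xff] * PAGE)

good = b"A" * PAGE
bad = b"B" * PAGE
csum = zlib.crc32(good) & 0xffffffff

print(read_with_fallback([bad, good], csum) == good)   # mirror saves the read
print(read_with_fallback([bad, bad], csum))            # ones-filled page
```

A scrub utility would walk the disk doing essentially this check, rewriting the bad copy from the good one instead of just serving the read.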
Chris Mason <[EMAIL PROTECTED]> writes:
>
> Metadata is duplicated by default even on single spindle drives,
Can you please say a bit how much that impacts performance? That sounds
costly.
-Andi
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
Chris Mason wrote:
I haven't done any real single cpu testing, it may make sense in those
workloads to checksum and submit directly in the calling context. But
real single cpu boxes are harder to come by these days.
They're still pretty common in the embedded/low power space. I could
see so
Chris Mason wrote on 07/08/2008 11:34:02:
> > > * Helper threads for checksumming and other background tasks. Most CPU
> > > intensive operations have been pushed off to helper threads to take
> > > advantage of SMP machines. Streaming read and write throughput now
> > > scale to disk speed even
On Thu, 2008-08-07 at 11:08 +0200, Peter Zijlstra wrote:
> On Tue, 2008-08-05 at 15:01 -0400, Chris Mason wrote:
>
> > * Fine grained btree locking. The large fs_mutex is finally gone.
> > There is still some work to do on the locking during extent allocation,
> > but the code is much more scalable than it was.
On Thu, 2008-08-07 at 17:03 +0300, Ahmed Kamal wrote:
> With csum errors, do we get warnings in logs ?
Yes
> Does too many faults cause a device to be flagged as faulty ?
Not yet
> is there any user-space application to monitor/scrub/re-silver btrfs
> volumes ?
>
Not yet, but there definitely
On Tue, 2008-08-05 at 15:01 -0400, Chris Mason wrote:
> * Fine grained btree locking. The large fs_mutex is finally gone.
> There is still some work to do on the locking during extent allocation,
> but the code is much more scalable than it was.
Cool - will try to find a cycle to stare at the code.
On Tue, 2008-08-05 at 15:01 -0400, Chris Mason wrote:
> There are still more disk format changes planned, but we're making every
> effort to get them out of the way as quickly as we can. You can see the
> major features we have planned on the development timeline:
>
> http://btrfs.wiki.kernel.org/
On Thu, 2008-08-07 at 11:14 +0200, Peter Zijlstra wrote:
> On Tue, 2008-08-05 at 15:01 -0400, Chris Mason wrote:
>
> > There are still more disk format changes planned, but we're making every
> > effort to get them out of the way as quickly as we can. You can see the
> > major features we have pl
Hello everyone,
Btrfs v0.16 is available for download, please see
http://btrfs.wiki.kernel.org/ for download links and project
information.
v0.16 has a shiny new disk format, and is not compatible with
filesystems created by older Btrfs releases. But, it should be the
fastest Btrfs yet, with a w