On Thu, Jan 11, 2007 at 12:08:10PM +1100, Nick Piggin wrote:
> David Chinner wrote:
> >Sure, but that doesn't really show how erratic the per-filesystem
> >throughput is, because the test I'm running is PCI-X bus limited in
> >its throughput at about 750MB/s. Each dm device is capable of about
> >340MB/s write, so when one slows down, the others will typically
> >speed up.
>
> But you do also get aggregate throughput drops? (ie. 2.6.20-rc3-worse)
Yes - you can see that from the vmstat output I sent. At 500GB into the
write of each file (about 60% of the disks filled) the per-fs write rate
should be around 220MB/s, so the aggregate should be around 650MB/s.
That's what I'm seeing with 2.6.18, and with 2.6.20-rc3 with a tweaked
dirty_ratio. Without the dirty_ratio tweak, you see what is in
2.6.20-rc3-worse.

e.g. I just changed dirty_ratio from 10 to 40 and I've gone from a
consistent 210-215MB/s per filesystem (~630-650MB/s aggregate) to
ranging over 110-200MB/s per filesystem and aggregates of ~450-600MB/s.
I changed dirty_ratio back to 10, and within 15 seconds we were back to
a consistent 210MB/s per filesystem and 630-650MB/s aggregate write.

> >So, what I've attached is three files which have both
> >'vmstat 5' output and 'iostat 5 |grep dm-' output in them.
>
> Ahh, sorry to be unclear, I meant:
>
> cat /proc/vmstat > pre
> run_test
> cat /proc/vmstat > post

Ok, I'll get back to you on that one - even at 600+MB/s, writing 5TB of
data takes some time....

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group
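[Editor's note: a minimal sketch of the dirty_ratio tweak and the vmstat
snapshot procedure discussed above. The /proc paths are the standard Linux
interfaces; `run_test` is a hypothetical placeholder for the actual
multi-terabyte write workload, and writing to /proc/sys requires root.]

```shell
#!/bin/sh
# Raise the dirty page limit from 10% to 40% of memory
# (the change Dave describes as causing the erratic throughput):
echo 40 > /proc/sys/vm/dirty_ratio

# Snapshot the VM counters around the run, as Nick requested,
# so the per-counter deltas can be diffed afterwards:
cat /proc/vmstat > pre
run_test                  # hypothetical stand-in for the 5TB write workload
cat /proc/vmstat > post

# Restore the lower limit; per Dave, writeback settles within ~15 seconds:
echo 10 > /proc/sys/vm/dirty_ratio

# Compare the before/after counters to see what changed during the run:
diff pre post
```

The pre/post diff shows which writeback counters (e.g. nr_dirty, nr_writeback)
moved during the test, which is what the aggregate vmstat snapshots expose
that periodic `vmstat 5` sampling does not.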