On Wed, Feb 09, 2005 at 09:05:21PM -0500, Bryan Henderson wrote:
> >I see much larger IO chunks and better throughput. So, I guess its
> >worth doing it
>
> I hate to see something like this go ahead based on empirical results
> without theory. It might make things worse somewhere else.
>
> Do you have an explanation for why the IO chunks are larger? Is the I/O
> scheduler not building large I/Os out of small requests? Is the queue
> running dry while the device is actually busy?
Yes, the queue is running dry, and there is more evidence of that than just the throughput numbers. I am inferring this from iostat: without writepages, average device utilization fluctuates between 83 and 99 percent and the average request size going to the device is around 650 sectors. With writepages, device utilization never drops below 95 percent and is usually around 98 percent, and the average request size to the device is around 1000 sectors. Not to mention that the io-scheduler merge count drops by a few orders of magnitude (roughly 16k merges vs ~30), since the merging has already been done by the time requests reach the scheduler.

I'm not sure what theory you are looking for here. We do the work of coalescing io requests up front, rather than relying on the io-scheduler to save us (see the sketch below). What is the point of the 2.6 block-io subsystem (i.e. the bio layer) if you don't use it to its fullest potential?

I can give you pointers to the data if you're interested.

Sonny
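For concreteness, here is a minimal sketch of what "coalescing up front" looks like at the address_space level, assuming a 2.6-style filesystem that can use the generic mpage helpers. The myfs_* names are hypothetical placeholders, not code from this thread, and the get_block callback is stubbed out:

#include <linux/fs.h>
#include <linux/mpage.h>
#include <linux/writeback.h>
#include <linux/buffer_head.h>

/* Hypothetical block-mapping callback; a real filesystem supplies its
 * own, mapping iblock to an on-disk block in bh_result. */
static int myfs_get_block(struct inode *inode, sector_t iblock,
			  struct buffer_head *bh_result, int create)
{
	return 0;
}

/* ->writepages entry point: mpage_writepages() walks the dirty pages of
 * the mapping and chains physically contiguous blocks into one large bio
 * before submitting it, so the request reaches the block layer already
 * merged instead of as a stream of page-sized writes. */
static int myfs_writepages(struct address_space *mapping,
			   struct writeback_control *wbc)
{
	return mpage_writepages(mapping, wbc, myfs_get_block);
}

static struct address_space_operations myfs_aops = {
	.writepages	= myfs_writepages,
	/* ->writepage stays as the fallback for single-page writeback,
	 * e.g. memory reclaim; omitted here. */
};

The bio handed to submit_bio() then already covers many contiguous pages, so the elevator only has to merge across separate submissions rather than reassemble every page-sized write.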