On Thu, Feb 10, 2005 at 09:51:42AM -0800, Bryan Henderson wrote:
> >I am inferring this using iostat which shows that average device
> >utilization fluctuates between 83 and 99 percent and the average
> >request size is around 650 sectors (going to the device) without
> >writepages. 
> >
> >With writepages, device utilization never drops below 95 percent and
> >is usually about 98 percent utilized, and the average request size to
> >the device is around 1000 sectors.
> 
> Well that blows away the only two ways I know that this effect can happen. 
>  The first has to do with certain code being more efficient than other 
> code at assembling I/Os, but the fact that the CPU utilization is the same 
> in both cases pretty much eliminates that.  

No, I don't think you can draw that conclusion from total CPU
utilization alone.  In the writepages case we are spending a larger
share of the CPU time copying data in from userspace, i.e. moving more
data through the page cache, and that extra copying is what keeps the
total utilization up.  If the copy accounts for more of the total while
the total stays roughly the same, the rest of the path must be spending
less CPU per byte, so I think this actually shows that the writepages
code path is more efficient than the ioscheduler path.

Here's the oprofile output from the two runs; you'll see
__copy_from_user_ll at the top of both profiles:

No writepages:

CPU: P4 / Xeon, speed 1997.8 MHz (estimated)
Counted GLOBAL_POWER_EVENTS events (time during which processor is not stopped) 
with a unit mask of 0x01 (mandatory) count 100000
samples  %        image name               app name                 symbol name
2225649  38.7482  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 __copy_from_user_ll
1471012  25.6101  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 poll_idle
104736    1.8234  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 __block_commit_write
92702     1.6139  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 mark_offset_cyclone
90077     1.5682  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 _spin_lock
83649     1.4563  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 __block_write_full_page
81483     1.4186  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 generic_file_buffered_write
69232     1.2053  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 ext3_writeback_commit_write


With writepages:

CPU: P4 / Xeon, speed 1997.98 MHz (estimated)
Counted GLOBAL_POWER_EVENTS events (time during which processor is not stopped) 
with a unit mask of 0x01 (mandatory) count 100000
samples  %        image name               app name                 symbol name
2487751  43.4411  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 __copy_from_user_ll
1518775  26.5209  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 poll_idle
124956    2.1820  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 _spin_lock
93689     1.6360  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 generic_file_buffered_write
93139     1.6264  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 mark_offset_cyclone
89683     1.5660  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 ext3_writeback_commit_write

So we see 38.7% vs 43.4%, a relative difference of about 12%
((43.4 - 38.7) / 38.7), which I believe should correlate directly with
throughput.


> The other is where the 
> interactivity of the I/O generator doesn't match the buffering in the 
> device so that the device ends up 100% busy processing small I/Os that 
> were sent to it because it said all the while that it needed more work. 
> But in the small-I/O case, we don't see a 100% busy device.

That might be possible, but I'm not sure how one would account for it.

The application, the VM, and the I/O subsystem are so intertwined that
it would be difficult to isolate the application when what we're trying
to measure is maximum throughput, no?
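
To make that concrete, here's roughly the kind of generator I have in
mind (just a throwaway sketch, not the actual benchmark; the file name
and sizes are made up).  It does nothing but buffered write()s into the
page cache, so how fast it makes progress is entirely whatever the VM
and the writeback path allow:

/*
 * Minimal streaming buffered writer.  All it does is dirty page cache;
 * how quickly the write() calls return is a function of how quickly the
 * VM can flush dirty pages, so the "generator" and the writeback path
 * can't really be measured in isolation.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const size_t bufsz = 64 * 1024;		/* hypothetical write size */
	const long long total = 1LL << 30;	/* hypothetical 1 GB of data */
	char *buf = malloc(bufsz);
	long long written = 0;
	int fd;

	if (!buf)
		return 1;
	memset(buf, 'x', bufsz);

	fd = open("/tmp/stream.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	while (written < total) {
		ssize_t n = write(fd, buf, bufsz);	/* only dirties page cache */
		if (n < 0) {
			perror("write");
			return 1;
		}
		written += n;
	}

	close(fd);
	free(buf);
	return 0;
}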

 
> So why would the device be up to 17% idle, since the writepages case makes 
> it apparent that the I/O generator is capable of generating much more 
> work?  Is there some queue plugging (I/O scheduler delays sending I/O to 
> the device even though the device is idle) going on?

Again, I think the amount of work being generated is directly related
to how quickly the dirty pages are being flushed out, so
inefficiencies in the I/O subsystem bubble up to the generator.
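
If it would help to see that feedback directly, something like the
sketch below (again, just a quick hack I'm assuming would be run
alongside the benchmark) polls the Dirty: and Writeback: counters from
/proc/meminfo once a second.  In the slow case I'd expect Dirty to sit
pinned near the dirty limit even while the device goes partly idle:

/*
 * Sample the Dirty: and Writeback: lines from /proc/meminfo once a
 * second to watch how quickly dirty pages are being cleaned.
 * Stop it with Ctrl-C.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char line[128];

	for (;;) {
		FILE *f = fopen("/proc/meminfo", "r");

		if (!f) {
			perror("fopen");
			return 1;
		}
		while (fgets(line, sizeof(line), f)) {
			if (!strncmp(line, "Dirty:", 6) ||
			    !strncmp(line, "Writeback:", 10))
				fputs(line, stdout);
		}
		fclose(f);
		putchar('\n');
		sleep(1);
	}
	return 0;
}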

Sonny

