On Thu, 2005-02-10 at 10:00, Bryan Henderson wrote:
> >Don't you think, filesystems submitting biggest chunks of IO
> >possible is better than submitting 1k-4k chunks and hoping that
> >IO schedulers do the perfect job ? 
> 
> No, I don't see why it would better.  In fact intuitively, I think the I/O 
> scheduler, being closer to the device, should do a better job of deciding 
> in what packages I/O should go to the device.  After all, there exist 
> block devices that don't process big chunks faster than small ones.  But 
> 
> So this starts to look like something where you withhold data from the I/O 
> scheduler in order to prevent it from scheduling the I/O wrongly because 
> you (the pager/filesystem driver) know better.  That shouldn't be the 
> architecture.
> 
> So I'd like still like to see a theory that explains why submitting the 
> I/O a little at a time (i.e. including the bio_submit() in the loop that 
> assembles the I/O) causes the device to be idle more.
> 
> >We all learnt thro 2.4 RAW code about the overhead of doing 512bytes
> >IO and making the elevator merge all the peices together.
> 
> That was CPU time, right?  In the present case, the numbers say it takes 
> the same amount of CPU time to assemble the I/O above the I/O scheduler as 
> inside it.

One clear distinction between submitting smaller chunks and larger
ones is the number of completion callbacks we get and the per-callback
processing we have to do.
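
To make that concrete, here is a rough sketch (not the code under
discussion; my_end_io() and submit_large_chunk() are made-up names) of
what I mean, assuming the 2.6 bio interface. Submitting page-sized bios
gives us one bi_end_io call per page, while packing the pages into one
bio before submit_bio() gives a single completion for the whole chunk:

	#include <linux/bio.h>
	#include <linux/blkdev.h>
	#include <linux/mm.h>

	/* completion handler: called once per submitted bio,
	 * so N small bios => N of these calls */
	static int my_end_io(struct bio *bio, unsigned int bytes_done, int err)
	{
		if (bio->bi_size)	/* partial completion, more to come */
			return 1;
		/* per-bio bookkeeping happens here */
		bio_put(bio);
		return 0;
	}

	/* pack up to nr_pages pages into one bio and submit it once */
	static void submit_large_chunk(struct block_device *bdev, sector_t sector,
				       struct page **pages, int nr_pages)
	{
		struct bio *bio = bio_alloc(GFP_NOIO, nr_pages);
		int i;

		bio->bi_bdev = bdev;
		bio->bi_sector = sector;
		bio->bi_end_io = my_end_io;

		/* keep adding pages until the queue limits stop us */
		for (i = 0; i < nr_pages; i++)
			if (bio_add_page(bio, pages[i], PAGE_SIZE, 0) < PAGE_SIZE)
				break;

		submit_bio(WRITE, bio);	/* one submission, one completion */
	}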

I don't think we have enough numbers here to get to the bottom of this.
CPU utilization being the same in both cases doesn't mean the test took
exactly the same amount of time, and I don't think we are even doing a
fixed number of I/Os. Isn't it possible that by doing larger I/Os we
save CPU and use that CPU to push more data?



Thanks,
Badari
