>Don't you think filesystems submitting the biggest chunks of I/O
>possible is better than submitting 1k-4k chunks and hoping that
>the I/O schedulers do a perfect job?

No, I don't see why it would be better.  In fact, intuitively I think the 
I/O scheduler, being closer to the device, should do a better job of 
deciding in what packages I/O should go to the device.  After all, there 
exist block devices that don't process big chunks any faster than small 
ones.

So this starts to look like something where you withhold data from the I/O 
scheduler in order to prevent it from scheduling the I/O wrongly because 
you (the pager/filesystem driver) know better.  That shouldn't be the 
architecture.

So I'd still like to see a theory that explains why submitting the I/O a 
little at a time (i.e. including the bio_submit() in the loop that 
assembles the I/O) causes the device to be idle more.
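
To make the comparison concrete, here is a minimal sketch of the two 
strategies, written against what I take to be the 2.5/2.6-era bio 
interface (bio_alloc(), bio_add_page(), submit_bio()).  The function 
names and parameters are hypothetical, and error and completion handling 
are omitted:

    /* Strategy A: submit each page as its own bio from inside the
     * assembly loop, and let the elevator merge the pieces. */
    static void submit_one_page_at_a_time(struct block_device *bdev,
                                          struct page **pages, int nr,
                                          sector_t sector)
    {
            int i;

            for (i = 0; i < nr; i++) {
                    struct bio *bio = bio_alloc(GFP_NOIO, 1);

                    bio->bi_bdev = bdev;
                    bio->bi_sector = sector + i * (PAGE_SIZE >> 9);
                    bio_add_page(bio, pages[i], PAGE_SIZE, 0);
                    submit_bio(WRITE, bio);  /* submit inside the loop */
            }
    }

    /* Strategy B: assemble one large bio above the I/O scheduler
     * and submit it once. */
    static void submit_assembled(struct block_device *bdev,
                                 struct page **pages, int nr,
                                 sector_t sector)
    {
            struct bio *bio = bio_alloc(GFP_NOIO, nr);
            int i;

            bio->bi_bdev = bdev;
            bio->bi_sector = sector;
            for (i = 0; i < nr; i++)
                    if (!bio_add_page(bio, pages[i], PAGE_SIZE, 0))
                            break;  /* bio full; real code would submit
                                     * and start a new one */
            submit_bio(WRITE, bio);  /* submit outside the loop */
    }

The question at hand is why, per the numbers, Strategy A should leave the 
device idle more than Strategy B, given that the elevator sees the same 
pages either way.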

>We all learnt through the 2.4 RAW code about the overhead of doing
>512-byte IO and making the elevator merge all the pieces together.

That was CPU time, right?  In the present case, the numbers say it takes 
the same amount of CPU time to assemble the I/O above the I/O scheduler as 
inside it.

--
Bryan Henderson                          IBM Almaden Research Center
San Jose CA                              Filesystems