On Wed, 2005-02-09 at 18:05, Bryan Henderson wrote:
> >I see much larger IO chunks and better throughput. So, I guess its
> >worth doing it
> 
> I hate to see something like this go ahead based on empirical results 
> without theory.  It might make things worse somewhere else.
> 
> Do you have an explanation for why the IO chunks are larger?  Is the I/O 
> scheduler not building large I/Os out of small requests?  Is the queue 
> running dry while the device is actually busy?
> 

Bryan,

I would like to understand what theory you are looking for.

Don't you think filesystems submitting the biggest IO chunks
possible is better than submitting 1k-4k chunks and hoping that
the IO scheduler does a perfect job?

BTW, writepages() is already being used by other filesystems such as JFS.
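
As a concrete illustration, here is a minimal sketch of how a 2.6
filesystem can hook writepages() through the generic mpage helpers,
which build large multi-page bios instead of one bio per buffer.
The myfs_* names are made up for this example; only mpage_writepages()
and the address_space_operations hook are the real interfaces:

#include <linux/fs.h>
#include <linux/mpage.h>
#include <linux/writeback.h>

/* hypothetical block-mapping callback for this sketch */
extern int myfs_get_block(struct inode *inode, sector_t iblock,
			  struct buffer_head *bh_result, int create);

static int myfs_writepages(struct address_space *mapping,
			   struct writeback_control *wbc)
{
	/* walk the mapping's dirty pages and submit them in
	 * the biggest contiguous chunks the device allows */
	return mpage_writepages(mapping, wbc, myfs_get_block);
}

static struct address_space_operations myfs_aops = {
	.writepages	= myfs_writepages,
	/* ... readpage, writepage, etc. ... */
};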

We all learnt through the 2.4 RAW code about the overhead of doing
512-byte IOs and making the elevator merge all the pieces back
together. That's one reason why the 2.6 DIO/RAW code was rewritten
from scratch to submit the biggest possible IO chunks.
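
To make the contrast concrete, here is an illustrative sketch (not
the actual 2.6 DIO code; the device, start sector and page array are
hypothetical) of handing the block layer one large bio up front
instead of a stream of tiny ones for the elevator to merge:

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/bio.h>

static void submit_large_write(struct block_device *bdev, sector_t sector,
			       struct page **pages, int nr_pages)
{
	struct bio *bio = bio_alloc(GFP_KERNEL, nr_pages);
	int i;

	bio->bi_bdev = bdev;
	bio->bi_sector = sector;

	for (i = 0; i < nr_pages; i++) {
		/* bio_add_page() refuses pages that would push the
		 * bio past what the queue can take in one request */
		if (bio_add_page(bio, pages[i], PAGE_SIZE, 0) < PAGE_SIZE)
			break;
	}

	/* a real caller would also set bi_end_io/bi_private to get
	 * completion notification before submitting */
	submit_bio(WRITE, bio);
}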

Well, I agree that we should have a theory behind the results.
We are just playing with prototypes for now.

Thanks,
Badari
