On Fri, Sep 04, 2020 at 04:48:07PM +0200, Bean Huo wrote:
> From: Bean Huo <[email protected]>
> 
> The current generic_file_buffered_read() breaks up larger batches of pages
> and reads data one page at a time when ra->ra_pages == 0. This patch
> allows it to pass the batch of pages down to the device if the supported
> maximum I/O size is >= the requested size.

At least ubifs and mtd seem to force ra_pages = 0 to disable read-ahead
entirely, so this seems intentional.
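
For illustration only, here is a rough sketch (not the actual kernel code;
the function name and the readahead-window handling are simplified
assumptions) of the pre-patch vs. patched batching behavior being discussed:

```c
#include <stddef.h>

/*
 * pages_per_io: hypothetical helper illustrating the discussion above.
 * Pre-patch, ra_pages == 0 forces single-page reads; the proposed change
 * would pass the whole batch down when the device's maximum I/O size
 * (in pages) covers the request. With readahead enabled, the window is
 * simplified here to min(request, ra_pages).
 */
static size_t pages_per_io(size_t ra_pages, size_t req_pages,
                           size_t max_io_pages, int patched)
{
    if (ra_pages > 0)
        return req_pages < ra_pages ? req_pages : ra_pages;
    if (patched && req_pages <= max_io_pages)
        return req_pages;   /* proposed: batch the whole request */
    return 1;               /* current: one page per I/O */
}
```

The concern raised above is that filesystems such as ubifs rely on
ra_pages == 0 meaning "no batching at all", which the patched branch
would change.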
