On Fri, 2020-09-04 at 11:09 -0700, Andrew Morton wrote:
> On Fri,  4 Sep 2020 16:48:07 +0200 Bean Huo <[email protected]>
> wrote:
> 
> > From: Bean Huo <[email protected]>
> > 
> > Current generic_file_buffered_read() will break up larger batches of
> > pages and read the data one page at a time when ra->ra_pages == 0.
> > This patch allows it to pass the batch of pages down to the device if
> > the supported maximum IO size >= the requested size.
> > 
> > ...
> > 
> > --- a/mm/filemap.c
> > +++ b/mm/filemap.c
> > @@ -2062,6 +2062,7 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
> >     struct file *filp = iocb->ki_filp;
> >     struct address_space *mapping = filp->f_mapping;
> >     struct inode *inode = mapping->host;
> > +   struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
> >     struct file_ra_state *ra = &filp->f_ra;
> >     loff_t *ppos = &iocb->ki_pos;
> >     pgoff_t index;
> > @@ -2098,9 +2099,14 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
> >             if (!page) {
> >                     if (iocb->ki_flags & IOCB_NOIO)
> >                             goto would_block;
> > -                   page_cache_sync_readahead(mapping,
> > -                                   ra, filp,
> > -                                   index, last_index - index);
> > +
> > +                   if (!ra->ra_pages && bdi->io_pages >= last_index - index)
> > +                           __do_page_cache_readahead(mapping, filp, index,
> > +                                                     last_index - index, 0);
> > +                   else
> > +                           page_cache_sync_readahead(mapping, ra, filp,
> > +                                                     index,
> > +                                                     last_index - index);
> >                     page = find_get_page(mapping, index);
> >                     if (unlikely(page == NULL))
> >                             goto no_cached_page;
> 
> I assume this is a performance patch.  What are the observed changes
> in behaviour?
> 
> What is special about ->ra_pages==0?  Wouldn't this optimization still
> be valid if ->ra_pages==2?
> 
> Doesn't this defeat the purpose of having ->ra_pages==0?


Hi Andrew,
Sorry, I am not sure I fully understand your three questions above.

Based on my understanding, ra_pages is associated with the read_ahead_kb
sysfs attribute. It seems ra_pages controls the maximum read-ahead window
size, but it does not help when the requested size exceeds ra_pages.
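
For reference, read_ahead_kb and ra_pages are tied together roughly like
this; the snippet below is paraphrased from memory of the sysfs store
helper in mm/backing-dev.c, so please treat it as a sketch rather than
the exact upstream code:

static ssize_t read_ahead_kb_store(struct device *dev,
				   struct device_attribute *attr,
				   const char *buf, size_t count)
{
	struct backing_dev_info *bdi = dev_get_drvdata(dev);
	unsigned long read_ahead_kb;
	ssize_t ret;

	ret = kstrtoul(buf, 10, &read_ahead_kb);
	if (ret < 0)
		return ret;

	/* read_ahead_kb is in KiB, ra_pages is the window in pages */
	bdi->ra_pages = read_ahead_kb >> (PAGE_SHIFT - 10);

	return count;
}

So writing 0 to read_ahead_kb simply ends up with ra_pages == 0.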

If I set read_ahead_kb to 0 (and also, as Christoph mentioned, MTD
forcibly sets ra_pages to 0), I think the intention is only to disable
read-ahead; it is not meant to make generic_file_buffered_read() split
the request and read the data separately in 4KB chunks.
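
The reason the request gets split today is that with ra_pages == 0 the
sync readahead call becomes a no-op, so generic_file_buffered_read()
keeps falling into the no_cached_page path and reads one page per loop
iteration. Roughly, again paraphrased from memory of mm/readahead.c, so
the details may not match the current tree exactly:

void page_cache_sync_readahead(struct address_space *mapping,
			       struct file_ra_state *ra, struct file *filp,
			       pgoff_t index, unsigned long req_count)
{
	/* no read-ahead configured: bail out without bringing in any pages */
	if (!ra->ra_pages)
		return;

	if (blk_cgroup_congested())
		return;

	/* random access hint: read only what was asked for, no window */
	if (filp && (filp->f_mode & FMODE_RANDOM)) {
		force_page_cache_readahead(mapping, filp, index, req_count);
		return;
	}

	/* normal case: let the on-demand heuristics size the window */
	ondemand_readahead(mapping, ra, filp, false, index, req_count);
}

Because of that early return, find_get_page() keeps missing and the read
proceeds one 4KB page at a time, even for a large sequential request.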

Thanks,
Bean
