On Thu, Jul 09, 2020 at 06:05:19PM +0100, Matthew Wilcox wrote:
> On Thu, Jul 09, 2020 at 09:09:26AM -0700, Darrick J. Wong wrote:
> > On Thu, Jul 09, 2020 at 12:25:27PM +1000, Dave Chinner wrote:
> > > iomap: Only invalidate page cache pages on direct IO writes
> > > 
> > > From: Dave Chinner <dchin...@redhat.com>
> > > 
> > > The reason for XFS's historic requirement to invalidate cached
> > > pages on direct IO reads has been lost in the twisty pages of
> > > history - it was
> > > inherited from Irix, which implemented page cache invalidation on
> > > read as a method of working around problems synchronising page
> > > cache state with uncached IO.
> > 
> > Urk.
> > 
> > > XFS has carried this ever since. In the initial linux ports it was
> > > necessary to get mmap and DIO to play "ok" together and not
> > > immediately corrupt data. This was the state of play until the linux
> > > kernel had infrastructure to track unwritten extents and synchronise
> > > page faults with allocations and unwritten extent conversions
> > > (->page_mkwrite infrastructure). IOWs, the page cache invalidation
> > > on DIO read was necessary to prevent trivial data corruptions. This
> > > didn't solve all the problems, though.
> > > 
> > > There were performance problems if we didn't invalidate the entire
> > > page cache over the file on read - we couldn't easily determine if
> > > the cached pages were over the range of the IO, and invalidation
> > > required taking a serialising lock (i_mutex) on the inode. This
> > > serialising lock was an issue for XFS, as it was the only exclusive
> > > lock in the direct IO read path.
> > > 
> > > Hence if there were any cached pages, we'd just invalidate the
> > > entire file in one go so that subsequent IOs didn't need to take the
> > > serialising lock. This was a problem that prevented ranged
> > > invalidation from being particularly useful for avoiding the
> > > remaining coherency issues. This was solved with the conversion of
> > > i_mutex to i_rwsem and the conversion of the XFS inode IO lock to
> > > use i_rwsem. Hence we could now just do ranged invalidation and the
> > > performance problem went away.
> > > 
> > > However, page cache invalidation was still needed to serialise
> > > sub-page/sub-block zeroing via direct IO against buffered IO because
> > > bufferhead state attached to the cached page could get out of whack
> > > when direct IOs were issued.  We've removed bufferheads from the
> > > XFS code, and we don't carry any extent state on the cached pages
> > > anymore, and so this problem has gone away, too.
> > > 
> > > IOWs, it would appear that we don't have any good reason to be
> > > invalidating the page cache on DIO reads anymore. Hence remove the
> > > invalidation on read: it is unnecessary overhead, it is no longer
> > > needed to maintain coherency between mmap/buffered access and
> > > direct IO, and removing it stops direct IO reads from being used
> > > to intentionally invalidate the page cache of a file.
> > > 
> > > Signed-off-by: Dave Chinner <dchin...@redhat.com>
> > > ---
> > >  fs/iomap/direct-io.c | 33 +++++++++++++++++----------------
> > >  1 file changed, 17 insertions(+), 16 deletions(-)
> > > 
> > > diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
> > > index ec7b78e6feca..ef0059eb34b5 100644
> > > --- a/fs/iomap/direct-io.c
> > > +++ b/fs/iomap/direct-io.c
> > > @@ -475,23 +475,24 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
> > >   if (ret)
> > >           goto out_free_dio;
> > >  
> > > - /*
> > > -  * Try to invalidate cache pages for the range we're direct
> > > -  * writing.  If this invalidation fails, tough, the write will
> > > -  * still work, but racing two incompatible write paths is a
> > > -  * pretty crazy thing to do, so we don't support it 100%.
> > 
> > I always wondered about the repeated use of 'write' in this comment
> > despite the lack of any sort of WRITE check logic.  Seems fine to me,
> > let's throw it on the fstests pile and see what happens.
> > 
> > Reviewed-by: Darrick J. Wong <darrick.w...@oracle.com>
> 
> Reviewed-by: Matthew Wilcox (Oracle) <wi...@infradead.org>
> 
> > --D
> > 
> > > -  */
> > > - ret = invalidate_inode_pages2_range(mapping,
> > > -                 pos >> PAGE_SHIFT, end >> PAGE_SHIFT);
> > > - if (ret)
> > > -         dio_warn_stale_pagecache(iocb->ki_filp);
> > > - ret = 0;
> > > + if (iov_iter_rw(iter) == WRITE) {
> > > +         /*
> > > +          * Try to invalidate cache pages for the range we're direct
> > > +          * writing.  If this invalidation fails, tough, the write will
> > > +          * still work, but racing two incompatible write paths is a
> > > +          * pretty crazy thing to do, so we don't support it 100%.
> > > +          */
> > > +         ret = invalidate_inode_pages2_range(mapping,
> > > +                         pos >> PAGE_SHIFT, end >> PAGE_SHIFT);
> > > +         if (ret)
> > > +                 dio_warn_stale_pagecache(iocb->ki_filp);
> > > +         ret = 0;
> > >  
> > > - if (iov_iter_rw(iter) == WRITE && !wait_for_completion &&
> > > -     !inode->i_sb->s_dio_done_wq) {
> > > -         ret = sb_init_dio_done_wq(inode->i_sb);
> > > -         if (ret < 0)
> > > -                 goto out_free_dio;
> > > +         if (!wait_for_completion &&
> > > +             !inode->i_sb->s_dio_done_wq) {
> > > +                 ret = sb_init_dio_done_wq(inode->i_sb);
> > > +                 if (ret < 0)
> > > +                         goto out_free_dio;

...and yes I did add in the closing brace here. :P

--D

> > >   }
> > >  
> > >   inode_dio_begin(inode);
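
[Editor's note] For readers following the patch outside the kernel tree,
the control flow it leaves behind in iomap_dio_rw() - invalidate the page
cache only for direct writes, never for reads, and treat invalidation
failure as non-fatal - can be sketched as a small userspace mock. All
names below are illustrative stand-ins, not the real kernel API:

```c
#include <assert.h>
#include <stdio.h>

/* Direction of the direct IO, standing in for iov_iter_rw(). */
enum io_dir { IO_READ, IO_WRITE };

/* Counts calls to the mock invalidator, so a test can observe
 * whether a given IO path touched the page cache. */
static int invalidations;

/* Mock of invalidate_inode_pages2_range(): returns 0 on success. */
static int mock_invalidate_range(long start, long end)
{
	(void)start;
	(void)end;
	invalidations++;
	return 0;
}

/* Sketch of the post-patch iomap_dio_rw() flow: only writes try to
 * invalidate cached pages over the IO range; a failed invalidation
 * warns but does not fail the IO (ret is discarded, as in the patch). */
static int dio_rw_sketch(enum io_dir dir, long pos, long end)
{
	if (dir == IO_WRITE) {
		int ret = mock_invalidate_range(pos, end);
		if (ret)
			fprintf(stderr, "stale page cache warning\n");
		/* non-fatal: fall through and submit the IO anyway */
	}
	/* ... submit the direct IO ... */
	return 0;
}
```

The point of the sketch is the asymmetry the patch introduces: the read
path no longer touches the page cache at all, while the write path keeps
the (best-effort) invalidation and its stale-pagecache warning.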