Re: [Cluster-devel] RFC: iomap write invalidation

2020-07-21 Thread Christoph Hellwig
On Mon, Jul 20, 2020 at 04:51:25PM -0500, Goldwyn Rodrigues wrote:
> Hi Christoph,
> 
> On  9:46 13/07, Christoph Hellwig wrote:
> > Hi all,
> > 
> > this series has two parts:  the first one picks up Dave's patch to avoid
> > invalidation entirely for reads, picked up deep down from the btrfs iomap
> > thread.  The second one falls back to buffered writes if invalidation fails
> > instead of leaving a stale cache around.  Let me know what you think about
> > this approach.
> 
> Which kernel version are these changes expected in?
> The btrfs dio switch to iomap depends on this.

No idea.  Darrick, are you ok with picking this up soon so that
Goldwyn can pull it in?



Re: [Cluster-devel] RFC: iomap write invalidation

2020-07-21 Thread Matthew Wilcox
On Tue, Jul 21, 2020 at 04:53:13PM +0200, Christoph Hellwig wrote:
> On Mon, Jul 20, 2020 at 04:51:25PM -0500, Goldwyn Rodrigues wrote:
> > Hi Christoph,
> > 
> > On  9:46 13/07, Christoph Hellwig wrote:
> > > Hi all,
> > > 
> > > this series has two parts:  the first one picks up Dave's patch to avoid
> > > invalidation entirely for reads, picked up deep down from the btrfs iomap
> > > thread.  The second one falls back to buffered writes if invalidation 
> > > fails
> > > instead of leaving a stale cache around.  Let me know what you think about
> > > this approach.
> > 
> > Which kernel version are these changes expected in?
> > The btrfs dio switch to iomap depends on this.
> 
> No idea.  Darrick, are you ok with picking this up soon so that
> Goldwyn can pull it in?

I thought you were going to respin this with EREMCHG changed to ENOTBLK?



Re: [Cluster-devel] RFC: iomap write invalidation

2020-07-21 Thread Christoph Hellwig
On Tue, Jul 21, 2020 at 04:04:32PM +0100, Matthew Wilcox wrote:
> On Tue, Jul 21, 2020 at 04:53:13PM +0200, Christoph Hellwig wrote:
> > On Mon, Jul 20, 2020 at 04:51:25PM -0500, Goldwyn Rodrigues wrote:
> > > Hi Christoph,
> > > 
> > > On  9:46 13/07, Christoph Hellwig wrote:
> > > > Hi all,
> > > > 
> > > > this series has two parts:  the first one picks up Dave's patch to avoid
> > > > invalidation entirely for reads, picked up deep down from the btrfs 
> > > > iomap
> > > > thread.  The second one falls back to buffered writes if invalidation 
> > > > fails
> > > > instead of leaving a stale cache around.  Let me know what you think 
> > > > about
> > > > this approach.
> > > 
> > > Which kernel version are these changes expected in?
> > > The btrfs dio switch to iomap depends on this.
> > 
> > No idea.  Darrick, are you ok with picking this up soon so that
> > Goldwyn can pull it in?
> 
> I thought you were going to respin this with EREMCHG changed to ENOTBLK?

Oh, true.  I'll do that ASAP.



Re: [Cluster-devel] RFC: iomap write invalidation

2020-07-21 Thread Christoph Hellwig
On Tue, Jul 21, 2020 at 04:14:37PM +0100, Matthew Wilcox wrote:
> On Tue, Jul 21, 2020 at 05:06:15PM +0200, Christoph Hellwig wrote:
> > On Tue, Jul 21, 2020 at 04:04:32PM +0100, Matthew Wilcox wrote:
> > > I thought you were going to respin this with EREMCHG changed to ENOTBLK?
> > 
> > Oh, true.  I'll do that ASAP.
> 
> Michael, could we add this to manpages?

Umm, no.  -ENOTBLK is internal - the file systems will retry using
buffered I/O and the error shall never escape to userspace (or even the
VFS for that matter).
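
To make that flow concrete, here is a minimal sketch of the pattern (the
example_* helper names are hypothetical; the shape mirrors the
xfs_file_write_iter() hunk in patch 1/3 below): the filesystem's ->write_iter
tries the direct path first, and if it comes back with -ENOTBLK the whole
write is simply retried through the page cache, so userspace only ever sees
the result of the buffered attempt.

#include <linux/fs.h>
#include <linux/uio.h>

/* hypothetical stand-ins for a filesystem's real direct and buffered paths */
static ssize_t example_dio_write(struct kiocb *iocb, struct iov_iter *from);
static ssize_t example_buffered_write(struct kiocb *iocb, struct iov_iter *from);

static ssize_t example_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
	ssize_t ret;

	if (iocb->ki_flags & IOCB_DIRECT) {
		/* may return -ENOTBLK, e.g. when invalidation fails */
		ret = example_dio_write(iocb, from);
		if (ret != -ENOTBLK)
			return ret;
		/* fall through and retry the whole write via the page cache */
	}

	return example_buffered_write(iocb, from);
}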



Re: [Cluster-devel] RFC: iomap write invalidation

2020-07-21 Thread Matthew Wilcox
On Tue, Jul 21, 2020 at 05:06:15PM +0200, Christoph Hellwig wrote:
> On Tue, Jul 21, 2020 at 04:04:32PM +0100, Matthew Wilcox wrote:
> > I thought you were going to respin this with EREMCHG changed to ENOTBLK?
> 
> Oh, true.  I'll do that ASAP.

Michael, could we add this to manpages?

--- write.2.old 2020-07-21 11:11:17.740491825 -0400
+++ write.2 2020-07-21 11:13:05.900389249 -0400
@@ -192,6 +192,12 @@
 .IR count ,
 or the file offset is not suitably aligned.
 .TP
+.B ENOTBLK
+The file was opened with the
+.B O_DIRECT
+flag, and the I/O could not be completed due to another process using
+the file.
+.TP
 .B EIO
 A low-level I/O error occurred while modifying the inode.
 This error may relate to the write-back of data written by an earlier



Re: [Cluster-devel] RFC: iomap write invalidation

2020-07-21 Thread Darrick J. Wong
On Tue, Jul 21, 2020 at 04:53:13PM +0200, Christoph Hellwig wrote:
> On Mon, Jul 20, 2020 at 04:51:25PM -0500, Goldwyn Rodrigues wrote:
> > Hi Christoph,
> > 
> > On  9:46 13/07, Christoph Hellwig wrote:
> > > Hi all,
> > > 
> > > this series has two parts:  the first one picks up Dave's patch to avoid
> > > invalidation entirely for reads, picked up deep down from the btrfs iomap
> > > thread.  The second one falls back to buffered writes if invalidation 
> > > fails
> > > instead of leaving a stale cache around.  Let me know what you think about
> > > this approach.
> > 
> > Which kernel version are these changes expected in?
> > The btrfs dio switch to iomap depends on this.
> 
> No idea.  Darrick, are you ok with picking this up soon so that
> Goldwyn can pull it in?

Yes; I was just about to ping you to see if you were going to have
time to push out a corrected patch 2 in time for ... 5.9?  5.10?

--D



Re: [Cluster-devel] RFC: iomap write invalidation

2020-07-21 Thread Matthew Wilcox
On Tue, Jul 21, 2020 at 05:16:16PM +0200, Christoph Hellwig wrote:
> On Tue, Jul 21, 2020 at 04:14:37PM +0100, Matthew Wilcox wrote:
> > On Tue, Jul 21, 2020 at 05:06:15PM +0200, Christoph Hellwig wrote:
> > > On Tue, Jul 21, 2020 at 04:04:32PM +0100, Matthew Wilcox wrote:
> > > > I thought you were going to respin this with EREMCHG changed to ENOTBLK?
> > > 
> > > Oh, true.  I'll do that ASAP.
> > 
> > Michael, could we add this to manpages?
> 
> Umm, no.  -ENOTBLK is internal - the file systems will retry using
> buffered I/O and the error shall never escape to userspace (or even the
> VFS for that matter).

Ah, I made the mistake of believing the comments that I could see in
your patch instead of reading the code.

Can I suggest deleting this comment:

/*
 * No fallback to buffered IO on errors for XFS, direct IO will either
 * complete fully or fail.
 */

and rewording this one:

/*
 * Allow a directio write to fall back to a buffered
 * write *only* in the case that we're doing a reflink
 * CoW.  In all other directio scenarios we do not
 * allow an operation to fall back to buffered mode.
 */

as part of your revised patchset?



Re: [Cluster-devel] RFC: iomap write invalidation

2020-07-21 Thread Christoph Hellwig
On Tue, Jul 21, 2020 at 08:27:54AM -0700, Darrick J. Wong wrote:
> On Tue, Jul 21, 2020 at 05:16:16PM +0200, Christoph Hellwig wrote:
> > On Tue, Jul 21, 2020 at 04:14:37PM +0100, Matthew Wilcox wrote:
> > > On Tue, Jul 21, 2020 at 05:06:15PM +0200, Christoph Hellwig wrote:
> > > > On Tue, Jul 21, 2020 at 04:04:32PM +0100, Matthew Wilcox wrote:
> > > > > I thought you were going to respin this with EREMCHG changed to 
> > > > > ENOTBLK?
> > > > 
> > > > Oh, true.  I'll do that ASAP.
> > > 
> > > Michael, could we add this to manpages?
> > 
> > Umm, no.  -ENOTBLK is internal - the file systems will retry using
> > buffered I/O and the error shall never escape to userspace (or even the
> > VFS for that matter).
> 
> It's worth dropping a comment somewhere that ENOTBLK is the desired
> "fall back to buffered" errcode, seeing as Dave and I missed that in
> XFS...

Sounds like a good idea, but what would a good place be?



Re: [Cluster-devel] RFC: iomap write invalidation

2020-07-21 Thread Christoph Hellwig
On Tue, Jul 21, 2020 at 04:31:36PM +0100, Matthew Wilcox wrote:
> > Umm, no.  -ENOTBLK is internal - the file systems will retry using
> > buffered I/O and the error shall never escape to userspace (or even the
> > VFS for that matter).
> 
> Ah, I made the mistake of believing the comments that I could see in
> your patch instead of reading the code.
> 
> Can I suggest deleting this comment:
> 
> /*
>  * No fallback to buffered IO on errors for XFS, direct IO will either
>  * complete fully or fail.
>  */
> 
> and rewording this one:
> 
> /*
>  * Allow a directio write to fall back to a buffered
>  * write *only* in the case that we're doing a reflink
>  * CoW.  In all other directio scenarios we do not
>  * allow an operation to fall back to buffered mode.
>  */
> 
> as part of your revised patchset?

That isn't actually true.  In current mainline we only fallback on
reflink RMW cases, but with this series we also fall back for
invalidation failures.
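
For reference, a rough sketch of those two triggers after this series,
condensed from patches 1/3 and 3/3 later in the thread (the function and
example_* helper names here are hypothetical, and the real code carries more
locking and setup than shown):

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/uio.h>

/* hypothetical helpers; the real checks live in XFS and fs/iomap/direct-io.c */
static bool example_is_unaligned_cow_write(struct kiocb *iocb, struct iov_iter *from);
static ssize_t example_submit_dio(struct kiocb *iocb, struct iov_iter *from);

static ssize_t example_direct_write_attempt(struct kiocb *iocb, struct iov_iter *from)
{
	struct address_space *mapping = iocb->ki_filp->f_mapping;
	loff_t pos = iocb->ki_pos;
	loff_t end = pos + iov_iter_count(from) - 1;

	/* case 1 (patch 1/3): unaligned reflink CoW write bounces to buffered */
	if (example_is_unaligned_cow_write(iocb, from))
		return -ENOTBLK;

	/* case 2 (patch 3/3): page cache that cannot be invalidated bounces too */
	if (invalidate_inode_pages2_range(mapping, pos >> PAGE_SHIFT,
					  end >> PAGE_SHIFT))
		return -ENOTBLK;

	return example_submit_dio(iocb, from);
}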



Re: [Cluster-devel] RFC: iomap write invalidation

2020-07-21 Thread Matthew Wilcox
On Tue, Jul 21, 2020 at 05:42:40PM +0200, Christoph Hellwig wrote:
> On Tue, Jul 21, 2020 at 04:31:36PM +0100, Matthew Wilcox wrote:
> > > Umm, no.  -ENOTBLK is internal - the file systems will retry using
> > > buffered I/O and the error shall never escape to userspace (or even the
> > > VFS for that matter).
> > 
> > Ah, I made the mistake of believing the comments that I could see in
> > your patch instead of reading the code.
> > 
> > Can I suggest deleting this comment:
> > 
> > /*
> >  * No fallback to buffered IO on errors for XFS, direct IO will 
> > either
> >  * complete fully or fail.
> >  */
> > 
> > and rewording this one:
> > 
> > /*
> >  * Allow a directio write to fall back to a buffered
> >  * write *only* in the case that we're doing a reflink
> >  * CoW.  In all other directio scenarios we do not
> >  * allow an operation to fall back to buffered mode.
> >  */
> > 
> > as part of your revised patchset?
> 
> That isn't actually true.  In current mainline we only fallback on
> reflink RMW cases, but with this series we also fall back for
> invalidation failures.

... that's why I'm suggesting that you delete the first one and rewrite
the second one.  Because they aren't true.



Re: [Cluster-devel] RFC: iomap write invalidation

2020-07-21 Thread Darrick J. Wong
On Tue, Jul 21, 2020 at 05:41:32PM +0200, Christoph Hellwig wrote:
> On Tue, Jul 21, 2020 at 08:27:54AM -0700, Darrick J. Wong wrote:
> > On Tue, Jul 21, 2020 at 05:16:16PM +0200, Christoph Hellwig wrote:
> > > On Tue, Jul 21, 2020 at 04:14:37PM +0100, Matthew Wilcox wrote:
> > > > On Tue, Jul 21, 2020 at 05:06:15PM +0200, Christoph Hellwig wrote:
> > > > > On Tue, Jul 21, 2020 at 04:04:32PM +0100, Matthew Wilcox wrote:
> > > > > > I thought you were going to respin this with EREMCHG changed to 
> > > > > > ENOTBLK?
> > > > > 
> > > > > Oh, true.  I'll do that ASAP.
> > > > 
> > > > Michael, could we add this to manpages?
> > > 
> > > Umm, no.  -ENOTBLK is internal - the file systems will retry using
> > > buffered I/O and the error shall never escape to userspace (or even the
> > > VFS for that matter).
> > 
> > It's worth dropping a comment somewhere that ENOTBLK is the desired
> > "fall back to buffered" errcode, seeing as Dave and I missed that in
> > XFS...
> 
> Sounds like a good idea, but what would a good place be?

In the comment that precedes iomap_dio_rw() for the iomap version,
and...

...ye $deity, the old direct-io.c file is a mess of wrappers.  Uh...  a
new comment preceding __blockdev_direct_IO?  Or blockdev_direct_IO?  Or
both?

Or I guess the direct_IO documentation in vfs.rst...?

``direct_IO``
called by the generic read/write routines to perform direct_IO -
that is IO requests which bypass the page cache and transfer
data directly between the storage and the application's address
space.  This function can return -ENOTBLK to signal that it is
necessary to fallback to buffered IO.  Note that
blockdev_direct_IO and variants can also return -ENOTBLK.

--D



Re: [Cluster-devel] RFC: iomap write invalidation

2020-07-21 Thread Darrick J. Wong
On Tue, Jul 21, 2020 at 06:01:43PM +0200, Christoph Hellwig wrote:
> On Tue, Jul 21, 2020 at 08:59:25AM -0700, Darrick J. Wong wrote:
> > In the comment that precedes iomap_dio_rw() for the iomap version,
> 
> maybe let's just do that..
> 
> > ``direct_IO``
> > called by the generic read/write routines to perform direct_IO -
> > that is IO requests which bypass the page cache and transfer
> > data directly between the storage and the application's address
> > space.  This function can return -ENOTBLK to signal that it is
> > necessary to fallback to buffered IO.  Note that
> > blockdev_direct_IO and variants can also return -ENOTBLK.
> 
> ->direct_IO is not used for iomap and various other implementations.
> In fact it is a horrible hack that I've been trying to get rid of
> for a while.

Agreed, but for now there are a number of fses that are still on
the old directio code; let's try to keep the drainbamage/confusion
potential to a minimum so it doesn't spread to our iomap shinyness. :)

--D



Re: [Cluster-devel] RFC: iomap write invalidation

2020-07-21 Thread Darrick J. Wong
On Tue, Jul 21, 2020 at 04:52:01PM +0100, Matthew Wilcox wrote:
> On Tue, Jul 21, 2020 at 05:42:40PM +0200, Christoph Hellwig wrote:
> > On Tue, Jul 21, 2020 at 04:31:36PM +0100, Matthew Wilcox wrote:
> > > > Umm, no.  -ENOTBLK is internal - the file systems will retry using
> > > > buffered I/O and the error shall never escape to userspace (or even the
> > > > VFS for that matter).
> > > 
> > > Ah, I made the mistake of believing the comments that I could see in
> > > your patch instead of reading the code.
> > > 
> > > Can I suggest deleting this comment:
> > > 
> > > /*
> > >  * No fallback to buffered IO on errors for XFS, direct IO will 
> > > either
> > >  * complete fully or fail.
> > >  */
> > > 
> > > and rewording this one:
> > > 
> > > /*
> > >  * Allow a directio write to fall back to a buffered
> > >  * write *only* in the case that we're doing a reflink
> > >  * CoW.  In all other directio scenarios we do not
> > >  * allow an operation to fall back to buffered mode.
> > >  */
> > > 
> > > as part of your revised patchset?
> > 
> > That isn't actually true.  In current mainline we only fallback on
> > reflink RMW cases, but with this series we also fall back for
> > invalidation failures.
> 
> ... that's why I'm suggesting that you delete the first one and rewrite
> the second one.  Because they aren't true.

/*
 * We allow only three outcomes of a directio: (1) it succeeds
 * completely; (2) it fails with a negative error code; or (3) it
 * returns -ENOTBLK to signal that we should fall back to buffered IO.
 *
 * The third scenario should only happen if iomap refuses to perform the
 * direct IO, or the direct write request requires CoW but is not aligned
 * to the filesystem block size.
 */

?

--D



Re: [Cluster-devel] RFC: iomap write invalidation

2020-07-21 Thread Christoph Hellwig
On Tue, Jul 21, 2020 at 08:59:25AM -0700, Darrick J. Wong wrote:
> In the comment that precedes iomap_dio_rw() for the iomap version,

maybe let's just do that..

> ``direct_IO``
>   called by the generic read/write routines to perform direct_IO -
>   that is IO requests which bypass the page cache and transfer
>   data directly between the storage and the application's address
>   space.  This function can return -ENOTBLK to signal that it is
>   necessary to fallback to buffered IO.  Note that
>   blockdev_direct_IO and variants can also return -ENOTBLK.

->direct_IO is not used for iomap and various other implementations.
In fact it is a horrible hack that I've been trying to get rid of
for a while.



Re: [Cluster-devel] RFC: iomap write invalidation

2020-07-21 Thread Darrick J. Wong
On Tue, Jul 21, 2020 at 05:16:16PM +0200, Christoph Hellwig wrote:
> On Tue, Jul 21, 2020 at 04:14:37PM +0100, Matthew Wilcox wrote:
> > On Tue, Jul 21, 2020 at 05:06:15PM +0200, Christoph Hellwig wrote:
> > > On Tue, Jul 21, 2020 at 04:04:32PM +0100, Matthew Wilcox wrote:
> > > > I thought you were going to respin this with EREMCHG changed to ENOTBLK?
> > > 
> > > Oh, true.  I'll do that ASAP.
> > 
> > Michael, could we add this to manpages?
> 
> Umm, no.  -ENOTBLK is internal - the file systems will retry using
> buffered I/O and the error shall never escape to userspace (or even the
> VFS for that matter).

It's worth dropping a comment somewhere that ENOTBLK is the desired
"fall back to buffered" errcode, seeing as Dave and I missed that in
XFS...

--D



[Cluster-devel] [PATCH 2/3] iomap: Only invalidate page cache pages on direct IO writes

2020-07-21 Thread Christoph Hellwig
From: Dave Chinner 

The historic requirement for XFS to invalidate cached pages on
direct IO reads has been lost in the twisty pages of history - it was
inherited from Irix, which implemented page cache invalidation on
read as a method of working around problems synchronising page
cache state with uncached IO.

XFS has carried this ever since. In the initial linux ports it was
necessary to get mmap and DIO to play "ok" together and not
immediately corrupt data. This was the state of play until the linux
kernel had infrastructure to track unwritten extents and synchronise
page faults with allocations and unwritten extent conversions
(->page_mkwrite infrastructure). IOWs, the page cache invalidation
on DIO read was necessary to prevent trivial data corruptions. This
didn't solve all the problems, though.

There were performance problems if we didn't invalidate the entire
page cache over the file on read - we couldn't easily determine if
the cached pages were over the range of the IO, and invalidation
required taking a serialising lock (i_mutex) on the inode. This
serialising lock was an issue for XFS, as it was the only exclusive
lock in the direct IO read path.

Hence if there were any cached pages, we'd just invalidate the
entire file in one go so that subsequent IOs didn't need to take the
serialising lock. This was a problem that prevented ranged
invalidation from being particularly useful for avoiding the
remaining coherency issues. This was solved with the conversion of
i_mutex to i_rwsem and the conversion of the XFS inode IO lock to
use i_rwsem. Hence we could now just do ranged invalidation and the
performance problem went away.

However, page cache invalidation was still needed to serialise
sub-page/sub-block zeroing via direct IO against buffered IO because
bufferhead state attached to the cached page could get out of whack
when direct IOs were issued.  We've removed bufferheads from the
XFS code, and we don't carry any extent state on the cached pages
anymore, and so this problem has gone away, too.

IOWs, it would appear that we don't have any good reason to be
invalidating the page cache on DIO reads anymore. Hence remove the
invalidation on read because it is unnecessary overhead,
not needed to maintain coherency between mmap/buffered access and
direct IO anymore, and because removing it prevents anyone from using
direct IO reads to intentionally invalidate the page cache of a file.

Signed-off-by: Dave Chinner 
Reviewed-by: Darrick J. Wong 
Reviewed-by: Matthew Wilcox (Oracle) 
Signed-off-by: Christoph Hellwig 
---
 fs/iomap/direct-io.c | 31 +++
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index ec7b78e6fecaf9..190967e87b69e4 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -475,23 +475,22 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
if (ret)
goto out_free_dio;
 
-   /*
-* Try to invalidate cache pages for the range we're direct
-* writing.  If this invalidation fails, tough, the write will
-* still work, but racing two incompatible write paths is a
-* pretty crazy thing to do, so we don't support it 100%.
-*/
-   ret = invalidate_inode_pages2_range(mapping,
-   pos >> PAGE_SHIFT, end >> PAGE_SHIFT);
-   if (ret)
-   dio_warn_stale_pagecache(iocb->ki_filp);
-   ret = 0;
+   if (iov_iter_rw(iter) == WRITE) {
+   /*
+* Try to invalidate cache pages for the range we are writing.
+* If this invalidation fails, tough, the write will still work,
+* but racing two incompatible write paths is a pretty crazy
+* thing to do, so we don't support it 100%.
+*/
+   if (invalidate_inode_pages2_range(mapping, pos >> PAGE_SHIFT,
+   end >> PAGE_SHIFT))
+   dio_warn_stale_pagecache(iocb->ki_filp);
 
-   if (iov_iter_rw(iter) == WRITE && !wait_for_completion &&
-   !inode->i_sb->s_dio_done_wq) {
-   ret = sb_init_dio_done_wq(inode->i_sb);
-   if (ret < 0)
-   goto out_free_dio;
+   if (!wait_for_completion && !inode->i_sb->s_dio_done_wq) {
+   ret = sb_init_dio_done_wq(inode->i_sb);
+   if (ret < 0)
+   goto out_free_dio;
+   }
}
 
inode_dio_begin(inode);
-- 
2.27.0



[Cluster-devel] iomap write invalidation v2

2020-07-21 Thread Christoph Hellwig



Changes since v1:
 - use -ENOTBLK for the direct to buffered I/O fallback everywhere
 - document the choice of -ENOTBLK better
 - update a comment in XFS



[Cluster-devel] [PATCH 3/3] iomap: fall back to buffered writes for invalidation failures

2020-07-21 Thread Christoph Hellwig
Failing to invalidate the page cache means data is incoherent, which is
a very bad state for the system.  Always fall back to buffered I/O
through the page cache if we can't invalidate mappings.

Signed-off-by: Christoph Hellwig 
Acked-by: Dave Chinner 
Reviewed-by: Goldwyn Rodrigues 
---
 fs/ext4/file.c   |  2 ++
 fs/gfs2/file.c   |  3 ++-
 fs/iomap/direct-io.c | 16 +++-
 fs/iomap/trace.h |  1 +
 fs/xfs/xfs_file.c|  4 ++--
 fs/zonefs/super.c|  7 +--
 6 files changed, 23 insertions(+), 10 deletions(-)

diff --git a/fs/ext4/file.c b/fs/ext4/file.c
index 2a01e31a032c4c..129cc1dd6b7952 100644
--- a/fs/ext4/file.c
+++ b/fs/ext4/file.c
@@ -544,6 +544,8 @@ static ssize_t ext4_dio_write_iter(struct kiocb *iocb, 
struct iov_iter *from)
iomap_ops = &ext4_iomap_overwrite_ops;
ret = iomap_dio_rw(iocb, from, iomap_ops, &ext4_dio_write_ops,
   is_sync_kiocb(iocb) || unaligned_io || extend);
+   if (ret == -ENOTBLK)
+   ret = 0;
 
if (extend)
ret = ext4_handle_inode_extension(inode, offset, ret, count);
diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
index bebde537ac8cf2..b085a3bea4f0fd 100644
--- a/fs/gfs2/file.c
+++ b/fs/gfs2/file.c
@@ -835,7 +835,8 @@ static ssize_t gfs2_file_direct_write(struct kiocb *iocb, 
struct iov_iter *from)
 
ret = iomap_dio_rw(iocb, from, &gfs2_iomap_ops, NULL,
   is_sync_kiocb(iocb));
-
+   if (ret == -ENOTBLK)
+   ret = 0;
 out:
gfs2_glock_dq(&gh);
 out_uninit:
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 190967e87b69e4..c1aafb2ab99072 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -10,6 +10,7 @@
 #include 
 #include 
 #include 
+#include "trace.h"
 
 #include "../internal.h"
 
@@ -401,6 +402,9 @@ iomap_dio_actor(struct inode *inode, loff_t pos, loff_t 
length,
  * can be mapped into multiple disjoint IOs and only a subset of the IOs issued
  * may be pure data writes. In that case, we still need to do a full data sync
  * completion.
+ *
+ * Returns -ENOTBLK in case of a page invalidation failure for
+ * writes.  The caller needs to fall back to buffered I/O in this case.
  */
 ssize_t
 iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
@@ -478,13 +482,15 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
if (iov_iter_rw(iter) == WRITE) {
/*
 * Try to invalidate cache pages for the range we are writing.
-* If this invalidation fails, tough, the write will still work,
-* but racing two incompatible write paths is a pretty crazy
-* thing to do, so we don't support it 100%.
+* If this invalidation fails, let the caller fall back to
+* buffered I/O.
 */
if (invalidate_inode_pages2_range(mapping, pos >> PAGE_SHIFT,
-   end >> PAGE_SHIFT))
-   dio_warn_stale_pagecache(iocb->ki_filp);
+   end >> PAGE_SHIFT)) {
+   trace_iomap_dio_invalidate_fail(inode, pos, count);
+   ret = -ENOTBLK;
+   goto out_free_dio;
+   }
 
if (!wait_for_completion && !inode->i_sb->s_dio_done_wq) {
ret = sb_init_dio_done_wq(inode->i_sb);
diff --git a/fs/iomap/trace.h b/fs/iomap/trace.h
index 5693a39d52fb63..fdc7ae388476f5 100644
--- a/fs/iomap/trace.h
+++ b/fs/iomap/trace.h
@@ -74,6 +74,7 @@ DEFINE_EVENT(iomap_range_class, name, \
 DEFINE_RANGE_EVENT(iomap_writepage);
 DEFINE_RANGE_EVENT(iomap_releasepage);
 DEFINE_RANGE_EVENT(iomap_invalidatepage);
+DEFINE_RANGE_EVENT(iomap_dio_invalidate_fail);
 
 #define IOMAP_TYPE_STRINGS \
{ IOMAP_HOLE,   "HOLE" }, \
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index a6ef90457abf97..1b4517fc55f1b9 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -553,8 +553,8 @@ xfs_file_dio_aio_write(
xfs_iunlock(ip, iolock);
 
/*
-* No fallback to buffered IO on errors for XFS, direct IO will either
-* complete fully or fail.
+* No fallback to buffered IO after short writes for XFS, direct I/O
+* will either complete fully or return an error.
 */
ASSERT(ret < 0 || ret == count);
return ret;
diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
index 07bc42d62673ce..d0a04528a7e18e 100644
--- a/fs/zonefs/super.c
+++ b/fs/zonefs/super.c
@@ -786,8 +786,11 @@ static ssize_t zonefs_file_write_iter(struct kiocb *iocb, 
struct iov_iter *from)
if (iocb->ki_pos >= ZONEFS_I(inode)->i_max_size)
return -EFBIG;
 
-   if (iocb->ki_flags & IOCB_DIRECT)
-   return zonefs_file_dio_write(iocb, from);
+   if (iocb->ki_flags & IOCB_DIRECT) {
+   ssize_t ret = zonefs_file_dio_write(iocb, from);
+ 

[Cluster-devel] [PATCH 1/3] xfs: use ENOTBLK for direct I/O to buffered I/O fallback

2020-07-21 Thread Christoph Hellwig
This is what the classic fs/direct-io.c implementation and thus other
file systems use.

Signed-off-by: Christoph Hellwig 
---
 fs/xfs/xfs_file.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 00db81eac80d6c..a6ef90457abf97 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -505,7 +505,7 @@ xfs_file_dio_aio_write(
 */
if (xfs_is_cow_inode(ip)) {
trace_xfs_reflink_bounce_dio_write(ip, iocb->ki_pos, 
count);
-   return -EREMCHG;
+   return -ENOTBLK;
}
iolock = XFS_IOLOCK_EXCL;
} else {
@@ -714,7 +714,7 @@ xfs_file_write_iter(
 * allow an operation to fall back to buffered mode.
 */
ret = xfs_file_dio_aio_write(iocb, from);
-   if (ret != -EREMCHG)
+   if (ret != -ENOTBLK)
return ret;
}
 
-- 
2.27.0



Re: [Cluster-devel] [PATCH 3/3] iomap: fall back to buffered writes for invalidation failures

2020-07-21 Thread Darrick J. Wong
On Tue, Jul 21, 2020 at 08:31:57PM +0200, Christoph Hellwig wrote:
> Failing to invalidate the page cache means data is incoherent, which is
> a very bad state for the system.  Always fall back to buffered I/O
> through the page cache if we can't invalidate mappings.
> 
> Signed-off-by: Christoph Hellwig 
> Acked-by: Dave Chinner 
> Reviewed-by: Goldwyn Rodrigues 

For the iomap and xfs parts,
Reviewed-by: Darrick J. Wong 

But I'd still like acks from Ted, Andreas, and Damien for ext4, gfs2,
and zonefs, respectively.

(Particularly if anyone was harboring ideas about trying to get this in
before 5.10, though I've not yet heard anyone say that explicitly...)

--D

> ---
>  fs/ext4/file.c   |  2 ++
>  fs/gfs2/file.c   |  3 ++-
>  fs/iomap/direct-io.c | 16 +++-
>  fs/iomap/trace.h |  1 +
>  fs/xfs/xfs_file.c|  4 ++--
>  fs/zonefs/super.c|  7 +--
>  6 files changed, 23 insertions(+), 10 deletions(-)
> 
> diff --git a/fs/ext4/file.c b/fs/ext4/file.c
> index 2a01e31a032c4c..129cc1dd6b7952 100644
> --- a/fs/ext4/file.c
> +++ b/fs/ext4/file.c
> @@ -544,6 +544,8 @@ static ssize_t ext4_dio_write_iter(struct kiocb *iocb, 
> struct iov_iter *from)
>   iomap_ops = &ext4_iomap_overwrite_ops;
>   ret = iomap_dio_rw(iocb, from, iomap_ops, &ext4_dio_write_ops,
>  is_sync_kiocb(iocb) || unaligned_io || extend);
> + if (ret == -ENOTBLK)
> + ret = 0;
>  
>   if (extend)
>   ret = ext4_handle_inode_extension(inode, offset, ret, count);
> diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
> index bebde537ac8cf2..b085a3bea4f0fd 100644
> --- a/fs/gfs2/file.c
> +++ b/fs/gfs2/file.c
> @@ -835,7 +835,8 @@ static ssize_t gfs2_file_direct_write(struct kiocb *iocb, 
> struct iov_iter *from)
>  
>   ret = iomap_dio_rw(iocb, from, &gfs2_iomap_ops, NULL,
>  is_sync_kiocb(iocb));
> -
> + if (ret == -ENOTBLK)
> + ret = 0;
>  out:
>   gfs2_glock_dq(&gh);
>  out_uninit:
> diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
> index 190967e87b69e4..c1aafb2ab99072 100644
> --- a/fs/iomap/direct-io.c
> +++ b/fs/iomap/direct-io.c
> @@ -10,6 +10,7 @@
>  #include 
>  #include 
>  #include 
> +#include "trace.h"
>  
>  #include "../internal.h"
>  
> @@ -401,6 +402,9 @@ iomap_dio_actor(struct inode *inode, loff_t pos, loff_t 
> length,
>   * can be mapped into multiple disjoint IOs and only a subset of the IOs 
> issued
>   * may be pure data writes. In that case, we still need to do a full data 
> sync
>   * completion.
> + *
> + * Returns -ENOTBLK in case of a page invalidation failure for
> + * writes.  The caller needs to fall back to buffered I/O in this case.
>   */
>  ssize_t
>  iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
> @@ -478,13 +482,15 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
>   if (iov_iter_rw(iter) == WRITE) {
>   /*
>* Try to invalidate cache pages for the range we are writing.
> -  * If this invalidation fails, tough, the write will still work,
> -  * but racing two incompatible write paths is a pretty crazy
> -  * thing to do, so we don't support it 100%.
> +  * If this invalidation fails, let the caller fall back to
> +  * buffered I/O.
>*/
>   if (invalidate_inode_pages2_range(mapping, pos >> PAGE_SHIFT,
> - end >> PAGE_SHIFT))
> - dio_warn_stale_pagecache(iocb->ki_filp);
> + end >> PAGE_SHIFT)) {
> + trace_iomap_dio_invalidate_fail(inode, pos, count);
> + ret = -ENOTBLK;
> + goto out_free_dio;
> + }
>  
>   if (!wait_for_completion && !inode->i_sb->s_dio_done_wq) {
>   ret = sb_init_dio_done_wq(inode->i_sb);
> diff --git a/fs/iomap/trace.h b/fs/iomap/trace.h
> index 5693a39d52fb63..fdc7ae388476f5 100644
> --- a/fs/iomap/trace.h
> +++ b/fs/iomap/trace.h
> @@ -74,6 +74,7 @@ DEFINE_EVENT(iomap_range_class, name,   \
>  DEFINE_RANGE_EVENT(iomap_writepage);
>  DEFINE_RANGE_EVENT(iomap_releasepage);
>  DEFINE_RANGE_EVENT(iomap_invalidatepage);
> +DEFINE_RANGE_EVENT(iomap_dio_invalidate_fail);
>  
>  #define IOMAP_TYPE_STRINGS \
>   { IOMAP_HOLE,   "HOLE" }, \
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index a6ef90457abf97..1b4517fc55f1b9 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -553,8 +553,8 @@ xfs_file_dio_aio_write(
>   xfs_iunlock(ip, iolock);
>  
>   /*
> -  * No fallback to buffered IO on errors for XFS, direct IO will either
> -  * complete fully or fail.
> +  * No fallback to buffered IO after short writes for XFS, direct I/O
> +  * will either complete fully or return an error.
>*/
>   ASSERT(ret < 0 || ret == count);
>   return ret;
> diff --git a/f

Re: [Cluster-devel] [PATCH 1/3] xfs: use ENOTBLK for direct I/O to buffered I/O fallback

2020-07-21 Thread Darrick J. Wong
On Tue, Jul 21, 2020 at 08:31:55PM +0200, Christoph Hellwig wrote:
> This is what the classic fs/direct-io.c implementation and thus other
> file systems use.
> 
> Signed-off-by: Christoph Hellwig 

Looks ok to me,
Reviewed-by: Darrick J. Wong 

--D

> ---
>  fs/xfs/xfs_file.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index 00db81eac80d6c..a6ef90457abf97 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -505,7 +505,7 @@ xfs_file_dio_aio_write(
>*/
>   if (xfs_is_cow_inode(ip)) {
>   trace_xfs_reflink_bounce_dio_write(ip, iocb->ki_pos, 
> count);
> - return -EREMCHG;
> + return -ENOTBLK;
>   }
>   iolock = XFS_IOLOCK_EXCL;
>   } else {
> @@ -714,7 +714,7 @@ xfs_file_write_iter(
>* allow an operation to fall back to buffered mode.
>*/
>   ret = xfs_file_dio_aio_write(iocb, from);
> - if (ret != -EREMCHG)
> + if (ret != -ENOTBLK)
>   return ret;
>   }
>  
> -- 
> 2.27.0
> 



Re: [Cluster-devel] [PATCH 3/3] iomap: fall back to buffered writes for invalidation failures

2020-07-21 Thread Damien Le Moal
On 2020/07/22 3:32, Christoph Hellwig wrote:
> Failing to invalidate the page cache means data is incoherent, which is
> a very bad state for the system.  Always fall back to buffered I/O
> through the page cache if we can't invalidate mappings.
> 
> Signed-off-by: Christoph Hellwig 
> Acked-by: Dave Chinner 
> Reviewed-by: Goldwyn Rodrigues 
> ---
>  fs/ext4/file.c   |  2 ++
>  fs/gfs2/file.c   |  3 ++-
>  fs/iomap/direct-io.c | 16 +++-
>  fs/iomap/trace.h |  1 +
>  fs/xfs/xfs_file.c|  4 ++--
>  fs/zonefs/super.c|  7 +--
>  6 files changed, 23 insertions(+), 10 deletions(-)
> 
> diff --git a/fs/ext4/file.c b/fs/ext4/file.c
> index 2a01e31a032c4c..129cc1dd6b7952 100644
> --- a/fs/ext4/file.c
> +++ b/fs/ext4/file.c
> @@ -544,6 +544,8 @@ static ssize_t ext4_dio_write_iter(struct kiocb *iocb, 
> struct iov_iter *from)
>   iomap_ops = &ext4_iomap_overwrite_ops;
>   ret = iomap_dio_rw(iocb, from, iomap_ops, &ext4_dio_write_ops,
>  is_sync_kiocb(iocb) || unaligned_io || extend);
> + if (ret == -ENOTBLK)
> + ret = 0;
>  
>   if (extend)
>   ret = ext4_handle_inode_extension(inode, offset, ret, count);
> diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
> index bebde537ac8cf2..b085a3bea4f0fd 100644
> --- a/fs/gfs2/file.c
> +++ b/fs/gfs2/file.c
> @@ -835,7 +835,8 @@ static ssize_t gfs2_file_direct_write(struct kiocb *iocb, 
> struct iov_iter *from)
>  
>   ret = iomap_dio_rw(iocb, from, &gfs2_iomap_ops, NULL,
>  is_sync_kiocb(iocb));
> -
> + if (ret == -ENOTBLK)
> + ret = 0;
>  out:
>   gfs2_glock_dq(&gh);
>  out_uninit:
> diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
> index 190967e87b69e4..c1aafb2ab99072 100644
> --- a/fs/iomap/direct-io.c
> +++ b/fs/iomap/direct-io.c
> @@ -10,6 +10,7 @@
>  #include 
>  #include 
>  #include 
> +#include "trace.h"
>  
>  #include "../internal.h"
>  
> @@ -401,6 +402,9 @@ iomap_dio_actor(struct inode *inode, loff_t pos, loff_t 
> length,
>   * can be mapped into multiple disjoint IOs and only a subset of the IOs 
> issued
>   * may be pure data writes. In that case, we still need to do a full data 
> sync
>   * completion.
> + *
> + * Returns -ENOTBLK in case of a page invalidation failure for
> + * writes.  The caller needs to fall back to buffered I/O in this case.
>   */
>  ssize_t
>  iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
> @@ -478,13 +482,15 @@ iomap_dio_rw(struct kiocb *iocb, struct iov_iter *iter,
>   if (iov_iter_rw(iter) == WRITE) {
>   /*
>* Try to invalidate cache pages for the range we are writing.
> -  * If this invalidation fails, tough, the write will still work,
> -  * but racing two incompatible write paths is a pretty crazy
> -  * thing to do, so we don't support it 100%.
> +  * If this invalidation fails, let the caller fall back to
> +  * buffered I/O.
>*/
>   if (invalidate_inode_pages2_range(mapping, pos >> PAGE_SHIFT,
> - end >> PAGE_SHIFT))
> - dio_warn_stale_pagecache(iocb->ki_filp);
> + end >> PAGE_SHIFT)) {
> + trace_iomap_dio_invalidate_fail(inode, pos, count);
> + ret = -ENOTBLK;
> + goto out_free_dio;
> + }
>  
>   if (!wait_for_completion && !inode->i_sb->s_dio_done_wq) {
>   ret = sb_init_dio_done_wq(inode->i_sb);
> diff --git a/fs/iomap/trace.h b/fs/iomap/trace.h
> index 5693a39d52fb63..fdc7ae388476f5 100644
> --- a/fs/iomap/trace.h
> +++ b/fs/iomap/trace.h
> @@ -74,6 +74,7 @@ DEFINE_EVENT(iomap_range_class, name,   \
>  DEFINE_RANGE_EVENT(iomap_writepage);
>  DEFINE_RANGE_EVENT(iomap_releasepage);
>  DEFINE_RANGE_EVENT(iomap_invalidatepage);
> +DEFINE_RANGE_EVENT(iomap_dio_invalidate_fail);
>  
>  #define IOMAP_TYPE_STRINGS \
>   { IOMAP_HOLE,   "HOLE" }, \
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index a6ef90457abf97..1b4517fc55f1b9 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -553,8 +553,8 @@ xfs_file_dio_aio_write(
>   xfs_iunlock(ip, iolock);
>  
>   /*
> -  * No fallback to buffered IO on errors for XFS, direct IO will either
> -  * complete fully or fail.
> +  * No fallback to buffered IO after short writes for XFS, direct I/O
> +  * will either complete fully or return an error.
>*/
>   ASSERT(ret < 0 || ret == count);
>   return ret;
> diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
> index 07bc42d62673ce..d0a04528a7e18e 100644
> --- a/fs/zonefs/super.c
> +++ b/fs/zonefs/super.c
> @@ -786,8 +786,11 @@ static ssize_t zonefs_file_write_iter(struct kiocb 
> *iocb, struct iov_iter *from)
>   if (iocb->ki_pos >= ZONEFS_I(inode)->i_max_size)
>   return -EFBIG

Re: [Cluster-devel] [PATCH 3/3] iomap: fall back to buffered writes for invalidation failures

2020-07-21 Thread Christoph Hellwig
On Tue, Jul 21, 2020 at 01:37:49PM -0700, Darrick J. Wong wrote:
> On Tue, Jul 21, 2020 at 08:31:57PM +0200, Christoph Hellwig wrote:
> > Failing to invalidate the page cache means data is incoherent, which is
> > a very bad state for the system.  Always fall back to buffered I/O
> > through the page cache if we can't invalidate mappings.
> > 
> > Signed-off-by: Christoph Hellwig 
> > Acked-by: Dave Chinner 
> > Reviewed-by: Goldwyn Rodrigues 
> 
> For the iomap and xfs parts,
> Reviewed-by: Darrick J. Wong 
> 
> But I'd still like acks from Ted, Andreas, and Damien for ext4, gfs2,
> and zonefs, respectively.
> 
> (Particularly if anyone was harboring ideas about trying to get this in
> before 5.10, though I've not yet heard anyone say that explicitly...)

Why would we want to wait another whole merge window?