3.16.36-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Dave Chinner <dchin...@redhat.com>

commit ec56b1f1fdc69599963574ce94cc5693d535dd64 upstream.

Lock ordering for the new mmap lock needs to be:

mmap_sem
  sb_start_pagefault
    i_mmap_lock
      page lock
        <fault processing>

Right now xfs_vm_page_mkwrite gets this the wrong way around. While
technically it cannot deadlock due to the current freeze ordering,
it's still a landmine that might explode if we change anything in
future. Hence we need to nest the locks correctly.

Signed-off-by: Dave Chinner <dchin...@redhat.com>
Reviewed-by: Jan Kara <j...@suse.cz>
Reviewed-by: Brian Foster <bfos...@redhat.com>
Signed-off-by: Dave Chinner <da...@fromorbit.com>
Signed-off-by: Ben Hutchings <b...@decadent.org.uk>
Cc: Jan Kara <j...@suse.cz>
Cc: x...@oss.sgi.com
---
 fs/xfs/xfs_file.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -1441,15 +1441,20 @@ xfs_filemap_page_mkwrite(
        struct vm_fault         *vmf)
 {
        struct xfs_inode        *ip = XFS_I(vma->vm_file->f_mapping->host);
-       int                     error;
+       int                     ret;
 
        trace_xfs_filemap_page_mkwrite(ip);
 
+       sb_start_pagefault(VFS_I(ip)->i_sb);
+       file_update_time(vma->vm_file);
        xfs_ilock(ip, XFS_MMAPLOCK_SHARED);
-       error = block_page_mkwrite(vma, vmf, xfs_get_blocks);
+
+       ret = __block_page_mkwrite(vma, vmf, xfs_get_blocks);
+
        xfs_iunlock(ip, XFS_MMAPLOCK_SHARED);
+       sb_end_pagefault(VFS_I(ip)->i_sb);
 
-       return error;
+       return block_page_mkwrite_return(ret);
 }
 
 const struct file_operations xfs_file_operations = {
