4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: chenjie <chenj...@huawei.com>

commit 6ea8d958a2c95a1d514015d4e29ba21a8c0a1a91 upstream.

MADV_WILLNEED has always been a noop for DAX (formerly XIP) mappings.
Unfortunately madvise_willneed() doesn't communicate this information
properly to the generic madvise syscall implementation.  The calling
convention is quite subtle there.  madvise_vma() is supposed to either
return an error or update *prev; otherwise the main loop will never
advance to the next vma and it will keep looping forever without a way
to get out of the kernel.

It seems this has been broken since introduction.  Nobody has noticed
because nobody seems to be using MADV_WILLNEED on these DAX mappings.

[mho...@suse.com: rewrite changelog]
Link: http://lkml.kernel.org/r/20171127115318.911-1-guoxue...@huawei.com
Fixes: fe77ba6f4f97 ("[PATCH] xip: madvice/fadvice: execute in place")
Signed-off-by: chenjie <chenj...@huawei.com>
Signed-off-by: guoxuenan <guoxue...@huawei.com>
Acked-by: Michal Hocko <mho...@suse.com>
Cc: Minchan Kim <minc...@kernel.org>
Cc: zhangyi (F) <yi.zh...@huawei.com>
Cc: Miao Xie <miao...@huawei.com>
Cc: Mike Rapoport <r...@linux.vnet.ibm.com>
Cc: Shaohua Li <s...@fb.com>
Cc: Andrea Arcangeli <aarca...@redhat.com>
Cc: Mel Gorman <mgor...@techsingularity.net>
Cc: Kirill A. Shutemov <kirill.shute...@linux.intel.com>
Cc: David Rientjes <rient...@google.com>
Cc: Anshuman Khandual <khand...@linux.vnet.ibm.com>
Cc: Rik van Riel <r...@redhat.com>
Cc: Carsten Otte <co...@de.ibm.com>
Cc: Dan Williams <dan.j.willi...@intel.com>
Signed-off-by: Andrew Morton <a...@linux-foundation.org>
Signed-off-by: Linus Torvalds <torva...@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>

---
 mm/madvise.c |    4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -223,15 +223,14 @@ static long madvise_willneed(struct vm_a
 {
        struct file *file = vma->vm_file;
 
+       *prev = vma;
 #ifdef CONFIG_SWAP
        if (!file) {
-               *prev = vma;
                force_swapin_readahead(vma, start, end);
                return 0;
        }
 
        if (shmem_mapping(file->f_mapping)) {
-               *prev = vma;
                force_shm_swapin_readahead(vma, start, end,
                                        file->f_mapping);
                return 0;
@@ -246,7 +245,6 @@ static long madvise_willneed(struct vm_a
                return 0;
        }
 
-       *prev = vma;
        start = ((start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
        if (end > vma->vm_end)
                end = vma->vm_end;

