The callers of vma_adjust() can be divided into two categories based on
whether *insert* is NULL or not. When *insert* is NULL, there are two
users: mremap() and shift_arg_pages().

For the second vma_adjust() in shift_arg_pages(), the vma must not have
a next vma. Otherwise vma_adjust() would pull next->vm_start down to
expand next (mprotect case 4) instead of just shrinking the vma to the
new range.
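
To make "the second vma_adjust()" concrete, the relevant steps in
shift_arg_pages() look roughly like this (paraphrased from fs/exec.c,
with error paths and TLB/page-table teardown details trimmed):

        /* 1. Expand the vma downwards to cover [new_start, old_end). */
        if (vma_adjust(vma, new_start, old_end, vma->vm_pgoff, NULL))
                return -ENOMEM;

        /* 2. Move the page tables down by 'shift' bytes. */
        if (length != move_page_tables(vma, old_start,
                                       vma, new_start, length, false))
                return -ENOMEM;

        /*
         * 3. Shrink the vma to just [new_start, new_end).  With *insert*
         * NULL and end < vma->vm_end, this is the call that would take
         * the mprotect case 4 path if a next vma existed.
         */
        vma_adjust(vma, new_start, new_end, vma->vm_pgoff, NULL);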

Fortunately, shift_arg_pages() is only used by setup_arg_pages() to move
the stack, which sits at the top of the address range. This means the
vma is not expected to have a next vma.

Since mremap() calls vma_adjust() only to expand the vma, the second
vma_adjust() in shift_arg_pages() is the only call that could fall into
mprotect case 4 by accident. Add a BUG_ON() and a comment to document
this for future readers.
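
For reference, when *insert* is NULL the decision in __vma_adjust()
(mm/mmap.c) looks roughly like the sketch below. This is a paraphrase
for illustration; the exact code and the adjust_next bookkeeping differ
between kernel versions:

        if (next && !insert) {
                if (end >= next->vm_end) {
                        /* vma expands over all of next: mprotect case 6 */
                } else if (end > next->vm_start) {
                        /* vma expands into part of next: mprotect case 5 */
                } else if (end < vma->vm_end) {
                        /*
                         * vma shrinks and next->vm_start is pulled down
                         * to fill the hole: mprotect case 4.  This is
                         * the path shift_arg_pages() must never take.
                         */
                }
        }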

Signed-off-by: Wei Yang <richard.weiy...@linux.alibaba.com>
---
 fs/exec.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/exec.c b/fs/exec.c
index a91003e28eaa..3ff44ab0d112 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -682,6 +682,7 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
        struct mmu_gather tlb;
 
        BUG_ON(new_start > new_end);
+       BUG_ON(vma->vm_next);
 
        /*
         * ensure there are no vmas between where we want to go
@@ -726,6 +727,8 @@ static int shift_arg_pages(struct vm_area_struct *vma, unsigned long shift)
 
        /*
         * Shrink the vma to just the new range.  Always succeeds.
+        * Since !vma->vm_next, __vma_adjust() will not hit mprotect case 4
+        * and expand next.
         */
        vma_adjust(vma, new_start, new_end, vma->vm_pgoff, NULL);
 
-- 
2.20.1 (Apple Git-117)
