Commit-ID: 591a3d7c09fa08baff48ad86c2347dbd28a52753
Gitweb: http://git.kernel.org/tip/591a3d7c09fa08baff48ad86c2347dbd28a52753
Author: Kirill A. Shutemov <[email protected]>
AuthorDate: Fri, 24 Mar 2017 14:13:05 +0300
Committer: Ingo Molnar <[email protected]>
CommitDate: Tue, 28 Mar 2017 08:23:27 +0200
mm: Fix false-positive VM_BUG_ON() in page_cache_{get,add}_speculative()
0day testing by Fengguang Wu triggered this crash while running Trinity:
kernel BUG at include/linux/pagemap.h:151!
...
CPU: 0 PID: 458 Comm: trinity-c0 Not tainted 4.11.0-rc2-00251-g2947ba0 #1
...
Call Trace:
__get_user_pages_fast()
get_user_pages_fast()
get_futex_key()
futex_requeue()
do_futex()
SyS_futex()
do_syscall_64()
entry_SYSCALL64_slow_path()
It's a VM_BUG_ON() firing due to a false-negative in_atomic(): we call
page_cache_get_speculative() with local interrupts disabled, which
should be atomic enough.
So let's also check for disabled interrupts in the VM_BUG_ON()
condition, to resolve this.
( This got triggered by the conversion of the x86 GUP code to the
generic GUP code. )
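For reference, a minimal sketch of the calling pattern (a hypothetical
gup_fast_sketch() helper, not the actual GUP code): local_irq_save()
does not raise preempt_count, so in_atomic() still reports false even
though the region is effectively atomic:

  /*
   * Hypothetical sketch only - not the kernel's __get_user_pages_fast().
   * Interrupts are disabled around the page table walk, but preempt_count
   * is not touched, so the old VM_BUG_ON(!in_atomic()) in
   * page_cache_get_speculative() fired here.
   */
  #include <linux/irqflags.h>
  #include <linux/pagemap.h>

  static int gup_fast_sketch(struct page *page)
  {
          unsigned long flags;
          int got;

          local_irq_save(flags);          /* IRQs off; preempt_count unchanged */
          /* ... page table walk elided ... */
          got = page_cache_get_speculative(page);
          local_irq_restore(flags);

          return got;
  }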
Reported-by: Fengguang Wu <[email protected]>
Signed-off-by: Kirill A. Shutemov <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Aneesh Kumar K.V <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: LKP <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
include/linux/pagemap.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 84943e8..316a19f 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -148,7 +148,7 @@ static inline int page_cache_get_speculative(struct page *page)
 #ifdef CONFIG_TINY_RCU
 # ifdef CONFIG_PREEMPT_COUNT
-	VM_BUG_ON(!in_atomic());
+	VM_BUG_ON(!in_atomic() && !irqs_disabled());
 # endif
 	/*
 	 * Preempt must be disabled here - we rely on rcu_read_lock doing
@@ -186,7 +186,7 @@ static inline int page_cache_add_speculative(struct page *page, int count)
 #if !defined(CONFIG_SMP) && defined(CONFIG_TREE_RCU)
 # ifdef CONFIG_PREEMPT_COUNT
-	VM_BUG_ON(!in_atomic());
+	VM_BUG_ON(!in_atomic() && !irqs_disabled());
 # endif
 	VM_BUG_ON_PAGE(page_count(page) == 0, page);
 	page_ref_add(page, count);