From: Konstantin Khlebnikov <khlebni...@yandex-team.ru>

[ Upstream commit c46038017fbdcac627b670c9d4176f1d0c2f5fa3 ]

Do not remain stuck forever if something goes wrong.  Using a killable
lock permits cleanup of stuck tasks and simplifies investigation.

Replace the only unkillable mmap_sem lock in clear_refs_write().
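For readers unfamiliar with the API, below is a minimal, illustrative sketch
(not part of this patch; the helper name is made up) of the killable-lock
pattern the change adopts.  down_read_killable() returns 0 on success and
-EINTR if a fatal signal arrives while the task sleeps on the semaphore, so
the caller can unwind instead of blocking unkillably:

	#include <linux/mm_types.h>
	#include <linux/rwsem.h>

	/* Hypothetical helper, for illustration only. */
	static int walk_mm_killable(struct mm_struct *mm)
	{
		if (down_read_killable(&mm->mmap_sem))
			return -EINTR;	/* bail out; task stays killable */

		/* ... inspect mm->mmap under the read lock ... */

		up_read(&mm->mmap_sem);
		return 0;
	}

In clear_refs_write() the -EINTR is propagated back to userspace as the
return value of the write() on /proc/<pid>/clear_refs.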

Link: http://lkml.kernel.org/r/156007493826.3335.5424884725467456239.stgit@buzz
Signed-off-by: Konstantin Khlebnikov <khlebni...@yandex-team.ru>
Reviewed-by: Roman Gushchin <g...@fb.com>
Reviewed-by: Cyrill Gorcunov <gorcu...@gmail.com>
Reviewed-by: Kirill Tkhai <ktk...@virtuozzo.com>
Acked-by: Michal Hocko <mho...@suse.com>
Cc: Alexey Dobriyan <adobri...@gmail.com>
Cc: Al Viro <v...@zeniv.linux.org.uk>
Cc: Matthew Wilcox <wi...@infradead.org>
Cc: Michal Koutný <mkou...@suse.com>
Cc: Oleg Nesterov <o...@redhat.com>
Signed-off-by: Andrew Morton <a...@linux-foundation.org>
Signed-off-by: Linus Torvalds <torva...@linux-foundation.org>
Signed-off-by: Sasha Levin <sas...@kernel.org>
---
 fs/proc/task_mmu.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 8b1376c42647..d544ab2e4210 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1136,7 +1136,10 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
                        goto out_mm;
                }
 
-               down_read(&mm->mmap_sem);
+               if (down_read_killable(&mm->mmap_sem)) {
+                       count = -EINTR;
+                       goto out_mm;
+               }
                tlb_gather_mmu(&tlb, mm, 0, -1);
                if (type == CLEAR_REFS_SOFT_DIRTY) {
                        for (vma = mm->mmap; vma; vma = vma->vm_next) {
-- 
2.20.1
