On Fri 03-06-16 17:43:47, Sergey Senozhatsky wrote:
> On (06/03/16 09:25), Michal Hocko wrote:
> > > it's quite hard to trigger the bug (somehow), so I can't
> > > follow up with more information as of now.
> 
> either I did something very silly fixing up the patch, or the
> patch may be causing general protection faults on my system.
> 
> RIP collect_mm_slot() + 0x42/0x84
>       khugepaged

So is this really collect_mm_slot called directly from khugepaged or is
some inlining going on there?

>       prepare_to_wait_event
>       maybe_pmd_mkwrite
>       kthread
>       _raw_spin_unlock_irq
>       ret_from_fork
>       kthread_create_on_node
> 
> collect_mm_slot() + 0x42/0x84 is

I guess the problem is that I missed that __khugepaged_exit doesn't
clear the cached khugepaged_scan.mm_slot. Does the following on top fix
that?
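
For completeness, this is roughly what happens (a simplified sketch of
collect_mm_slot, with the locking and debug checks left out, so not the
exact code):

static void collect_mm_slot(struct mm_slot *mm_slot)
{
        struct mm_struct *mm = mm_slot->mm;

        if (khugepaged_test_exit(mm)) {
                /* the mm has exited, unhash and free the slot */
                hash_del(&mm_slot->hash);
                list_del(&mm_slot->mm_node);
                free_mm_slot(mm_slot);
                mmdrop(mm);
                /*
                 * khugepaged_scan.mm_slot may still point at the slot
                 * we have just freed, so the next scan pass in
                 * khugepaged_scan_mm_slot() dereferences freed memory.
                 */
        }
}

Hence the fix below clears the cached pointer before the slot is
collected.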
---
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6574c62ca4a3..e6f4e6fd587a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2021,6 +2021,8 @@ void __khugepaged_exit(struct mm_struct *mm)
        spin_lock(&khugepaged_mm_lock);
        mm_slot = get_mm_slot(mm);
        if (mm_slot) {
+               if (khugepaged_scan.mm_slot == mm_slot)
+                       khugepaged_scan.mm_slot = NULL;
                collect_mm_slot(mm_slot);
                clear_bit(MMF_VM_HUGEPAGE, &mm->flags);
        }
-- 
Michal Hocko
SUSE Labs
