On Wed, Sep 21, 2016 at 8:43 AM, Roman Pen
<roman.peny...@profitbricks.com> wrote:
> If panic_on_oops is not set and an oops happens inside a workqueue
> kthread, the kernel kills that kthread.  This patch fixes the recursive
> GPF which happens in that case, with the following stack:

Oleg, can you take a look at this?

--Andy

>
>   [<ffffffff81397f75>] dump_stack+0x68/0x93
>   [<ffffffff8106954b>] ? do_exit+0x7ab/0xc10
>   [<ffffffff8108fd73>] __schedule_bug+0x83/0xe0
>   [<ffffffff81716d5a>] __schedule+0x7ea/0xba0
>   [<ffffffff810c864f>] ? vprintk_default+0x1f/0x30
>   [<ffffffff8116a63c>] ? printk+0x48/0x50
>   [<ffffffff81717150>] schedule+0x40/0x90
>   [<ffffffff8106976a>] do_exit+0x9ca/0xc10
>   [<ffffffff810c8e3d>] ? kmsg_dump+0x11d/0x190
>   [<ffffffff810c8d37>] ? kmsg_dump+0x17/0x190
>   [<ffffffff81021ee9>] oops_end+0x99/0xd0
>   [<ffffffff81052da5>] no_context+0x185/0x3e0
>   [<ffffffff81053083>] __bad_area_nosemaphore+0x83/0x1c0
>   [<ffffffff810c820e>] ? vprintk_emit+0x25e/0x530
>   [<ffffffff810531d4>] bad_area_nosemaphore+0x14/0x20
>   [<ffffffff8105355c>] __do_page_fault+0xac/0x570
>   [<ffffffff810c66fe>] ? console_trylock+0x1e/0xe0
>   [<ffffffff81002036>] ? trace_hardirqs_off_thunk+0x1a/0x1c
>   [<ffffffff81053a2c>] do_page_fault+0xc/0x10
>   [<ffffffff8171f812>] page_fault+0x22/0x30
>   [<ffffffff81089bc3>] ? kthread_data+0x33/0x40
>   [<ffffffff8108427e>] ? wq_worker_sleeping+0xe/0x80
>   [<ffffffff817169eb>] __schedule+0x47b/0xba0
>   [<ffffffff81717150>] schedule+0x40/0x90
>   [<ffffffff8106957d>] do_exit+0x7dd/0xc10
>   [<ffffffff81021ee9>] oops_end+0x99/0xd0
>
> The root cause is that the zeroed task->vfork_done member is accessed
> from the wq_worker_sleeping() hook.  The zeroing happens on the
> following path:
>
>    oops_end()
>    do_exit()
>    exit_mm()
>    mm_release()
>    complete_vfork_done()
>
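> For illustration only, here is a minimal user-space sketch of why a
> NULL ->vfork_done blows up; the structures and helpers below are
> reduced stand-ins modelled on to_kthread()/kthread_data(), not the
> actual kernel code:
>
>   #include <stddef.h>
>   #include <stdio.h>
>
>   struct completion { unsigned int done; };   /* stand-in */
>
>   struct kthread {                            /* stand-in */
>           void *data;                         /* worker payload */
>           struct completion exited;           /* ->vfork_done points here */
>   };
>
>   struct task_struct {                        /* stand-in */
>           struct completion *vfork_done;      /* NULLed by complete_vfork_done() */
>   };
>
>   #define container_of(ptr, type, member) \
>           ((type *)((char *)(ptr) - offsetof(type, member)))
>
>   /* Mirrors to_kthread(): note there is no NULL check on vfork_done. */
>   static struct kthread *to_kthread(struct task_struct *k)
>   {
>           return container_of(k->vfork_done, struct kthread, exited);
>   }
>
>   int main(void)
>   {
>           struct task_struct dead = { .vfork_done = NULL };
>
>           /*
>            * container_of(NULL, ...) produces a bogus pointer; in the
>            * kernel, kthread_data() dereferences it (to_kthread(task)->data)
>            * on behalf of wq_worker_sleeping(), which is what faults a
>            * second time.
>            */
>           printf("bogus kthread pointer: %p\n", (void *)to_kthread(&dead));
>           return 0;
>   }
>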
> To fix the bug, dead tasks must be ignored.
>
> Signed-off-by: Roman Pen <roman.peny...@profitbricks.com>
> Cc: Andy Lutomirski <l...@kernel.org>
> Cc: Josh Poimboeuf <jpoim...@redhat.com>
> Cc: Borislav Petkov <b...@alien8.de>
> Cc: Brian Gerst <brge...@gmail.com>
> Cc: Denys Vlasenko <dvlas...@redhat.com>
> Cc: H. Peter Anvin <h...@zytor.com>
> Cc: Peter Zijlstra <pet...@infradead.org>
> Cc: Thomas Gleixner <t...@linutronix.de>
> Cc: Ingo Molnar <mi...@redhat.com>
> Cc: Tejun Heo <t...@kernel.org>
> Cc: linux-kernel@vger.kernel.org
> ---
>  kernel/sched/core.c | 16 +++++++++++++++-
>  1 file changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 2c303e7..50772e5 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3380,8 +3380,22 @@ static void __sched notrace __schedule(bool preempt)
>                          * If a worker went to sleep, notify and ask workqueue
>                          * whether it wants to wake up a task to maintain
>                          * concurrency.
> +                        *
> +                        * Also the following stack is possible:
> +                        *    oops_end()
> +                        *    do_exit()
> +                        *    schedule()
> +                        *
> +                        * If panic_on_oops is not set and an oops happens on
> +                        * a workqueue execution path, the thread will be
> +                        * killed.  That is definitely sad, but in order not to
> +                        * make the situation even worse we have to ignore dead
> +                        * tasks, so that we do not step on zeroed-out members
> +                        * (e.g. t->vfork_done is already NULL on that path,
> +                        * since we were called by do_exit())
>                          */
> -                       if (prev->flags & PF_WQ_WORKER) {
> +                       if (prev->flags & PF_WQ_WORKER &&
> +                           prev->state != TASK_DEAD) {
>                                 struct task_struct *to_wakeup;
>
>                                 to_wakeup = wq_worker_sleeping(prev);
> --
> 2.9.3
>
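
For context (my reading of the code of that era): do_exit() sets
tsk->state = TASK_DEAD right before its final schedule(), so the
prev->state != TASK_DEAD test above should catch exactly this exiting
path.  Below is a stand-alone user-space mock of the guarded
notification; the struct, the hook and the flag/state values are
illustrative stand-ins, not the real definitions from
include/linux/sched.h:

  #include <stdio.h>

  #define PF_WQ_WORKER          0x20  /* illustrative value */
  #define TASK_INTERRUPTIBLE    1     /* illustrative value */
  #define TASK_DEAD             64    /* illustrative value */

  struct task_struct {                /* stand-in */
          unsigned int flags;
          long state;
  };

  /* Stand-in for the real hook, which would dereference kthread_data(prev). */
  static void wq_worker_sleeping(struct task_struct *prev)
  {
          printf("notify workqueue: worker %p went to sleep\n", (void *)prev);
  }

  /* Same shape as the check added by the patch. */
  static void worker_sleep_hook(struct task_struct *prev)
  {
          if ((prev->flags & PF_WQ_WORKER) && prev->state != TASK_DEAD)
                  wq_worker_sleeping(prev);
          else
                  printf("skip %p: dead or not a worker\n", (void *)prev);
  }

  int main(void)
  {
          struct task_struct sleeping = {
                  .flags = PF_WQ_WORKER, .state = TASK_INTERRUPTIBLE };
          struct task_struct dying = {
                  .flags = PF_WQ_WORKER, .state = TASK_DEAD };

          worker_sleep_hook(&sleeping);  /* notified */
          worker_sleep_hook(&dying);     /* skipped, so no recursive fault */
          return 0;
  }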



-- 
Andy Lutomirski
AMA Capital Management, LLC
