Re: [PATCH v2] printk: fix deadlock when kernel panic

2021-02-09 Thread Sergey Senozhatsky
On (21/02/09 10:19), Petr Mladek wrote:
> On Sat 2021-02-06 13:41:24, Muchun Song wrote:

[..]

> What about the following commit message? It uses imperative language
> and explains that the patch just prevents the deadlock. It removes
> some details. The diagram is better than many words.
> 
> 
> printk_safe_flush_on_panic() caused the following deadlock on our server:
> 
> CPU0:                                        CPU1:
> panic                                        rcu_dump_cpu_stacks
>   kdump_nmi_shootdown_cpus                     nmi_trigger_cpumask_backtrace
>     register_nmi_handler(crash_nmi_callback)    printk_safe_flush
>                                                   __printk_safe_flush
>                                                     raw_spin_lock_irqsave(&read_lock)
>     // send NMI to other processors
>     apic_send_IPI_allbutself(NMI_VECTOR)
>                                                   // NMI interrupt, dead loop
>     crash_nmi_callback
>       printk_safe_flush_on_panic
>         printk_safe_flush
>           __printk_safe_flush
>           // deadlock
>           raw_spin_lock_irqsave(&read_lock)

[..]

I would also add to the commit message that it avoids the deadlock
_in this particular case_ at the expense of losing the contents of the
printk_safe buffers. This looks important enough to be mentioned.
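
For orientation, the lock in question is, before this patch, a function-local
static taken in __printk_safe_flush(); roughly like this (a simplified sketch,
abbreviated, not the exact source):

        /* Simplified sketch of the pre-patch flush path; details omitted. */
        static void __printk_safe_flush(struct irq_work *work)
        {
                static raw_spinlock_t read_lock =
                        __RAW_SPIN_LOCK_INITIALIZER(read_lock);
                unsigned long flags;

                /*
                 * If this CPU is stopped by an NMI right after taking the
                 * lock, read_lock is held forever and any later flush
                 * attempt (e.g. from the panic CPU) deadlocks.
                 */
                raw_spin_lock_irqsave(&read_lock, flags);
                /* ... copy the per-CPU printk_safe buffers into the main log ... */
                raw_spin_unlock_irqrestore(&read_lock, flags);
        }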

-ss


Re: [External] Re: [PATCH v2] printk: fix deadlock when kernel panic

2021-02-09 Thread Sergey Senozhatsky
On (21/02/09 09:39), Petr Mladek wrote:
> > > So then this never re-inits the safe_read_lock?
> 
> Yes, but it will also not cause the deadlock.

Right.

> I prefer this approach. It is straightforward because it handles
> read_lock the same way as logbuf_lock.

I'm fine with that approach, but this needs to be in the commit message.
Something like "lose printk_safe messages when we think we will deadlock
on printk_safe flush".

-ss


Re: [External] Re: [PATCH v2] printk: fix deadlock when kernel panic

2021-02-09 Thread Muchun Song
On Tue, Feb 9, 2021 at 5:19 PM Petr Mladek wrote:
>
> On Sat 2021-02-06 13:41:24, Muchun Song wrote:
> > We found a deadlock bug on our server when the kernel panics. It can be
> > described in the following diagram.
> >
> > CPU0:                                        CPU1:
> > panic                                        rcu_dump_cpu_stacks
> >   kdump_nmi_shootdown_cpus                     nmi_trigger_cpumask_backtrace
> >     register_nmi_handler(crash_nmi_callback)    printk_safe_flush
> >                                                   __printk_safe_flush
> >                                                     raw_spin_lock_irqsave(&read_lock)
> >     // send NMI to other processors
> >     apic_send_IPI_allbutself(NMI_VECTOR)
> >                                                   // NMI interrupt, dead loop
> >     crash_nmi_callback
> >       printk_safe_flush_on_panic
> >         printk_safe_flush
> >           __printk_safe_flush
> >           // deadlock
> >           raw_spin_lock_irqsave(&read_lock)
> >
> > The register_nmi_handler() can be called from __crash_kexec() or
> > crash_smp_send_stop() on x86-64. Because CPU1 is interrupted by the
> > NMI while holding the read_lock and crash_nmi_callback() never returns,
> > CPU0 can deadlock when printk_safe_flush_on_panic() is called.
> >
> > When we hold the read_lock and are then interrupted by the NMI, the NMI
> > handler calling nmi_panic() can also lead to a deadlock.
> >
> > In order to fix it, we make read_lock global and rename it to
> > safe_read_lock. And we handle safe_read_lock the same way in
> > printk_safe_flush_on_panic() as we handle logbuf_lock there.
>
> What about the following commit message? It uses imperative language
> and explains that the patch just prevents the deadlock. It removes
> some details. The diagram is better than many words.
>
> 
> printk_safe_flush_on_panic() caused the following deadlock on our server:
>
> CPU0:                                        CPU1:
> panic                                        rcu_dump_cpu_stacks
>   kdump_nmi_shootdown_cpus                     nmi_trigger_cpumask_backtrace
>     register_nmi_handler(crash_nmi_callback)    printk_safe_flush
>                                                   __printk_safe_flush
>                                                     raw_spin_lock_irqsave(&read_lock)
>     // send NMI to other processors
>     apic_send_IPI_allbutself(NMI_VECTOR)
>                                                   // NMI interrupt, dead loop
>     crash_nmi_callback
>       printk_safe_flush_on_panic
>         printk_safe_flush
>           __printk_safe_flush
>           // deadlock
>           raw_spin_lock_irqsave(&read_lock)
>
> DEADLOCK: read_lock is taken on CPU1 and will never get released.
>
> It happens when panic() stops a CPU by NMI while it has been in
> the middle of printk_safe_flush().
>
> Handle the lock the same way as logbuf_lock. The printk_safe buffers
> are flushed only when both locks can be safely taken.
>
> Note: It would actually be safe to re-init the locks when all CPUs were
>   stopped by NMI. But it would require passing this information
>   from arch-specific code. It is not worth the complexity.
>   Especially because logbuf_lock and printk_safe buffers have been
>   obsoleted by the lockless ring buffer.
> 

Many thanks. It is clear.

>
> > Fixes: cf9b1106c81c ("printk/nmi: flush NMI messages on the system panic")
> > Signed-off-by: Muchun Song 
>
> With an updated commit message:
>
> Reviewed-by: Petr Mladek 

Thanks. I will update the commit message.

>
> Best Regards,
> Petr


Re: [PATCH v2] printk: fix deadlock when kernel panic

2021-02-09 Thread Petr Mladek
On Sat 2021-02-06 13:41:24, Muchun Song wrote:
> We found a deadlock bug on our server when the kernel panics. It can be
> described in the following diagram.
> 
> CPU0:                                        CPU1:
> panic                                        rcu_dump_cpu_stacks
>   kdump_nmi_shootdown_cpus                     nmi_trigger_cpumask_backtrace
>     register_nmi_handler(crash_nmi_callback)    printk_safe_flush
>                                                   __printk_safe_flush
>                                                     raw_spin_lock_irqsave(&read_lock)
>     // send NMI to other processors
>     apic_send_IPI_allbutself(NMI_VECTOR)
>                                                   // NMI interrupt, dead loop
>     crash_nmi_callback
>       printk_safe_flush_on_panic
>         printk_safe_flush
>           __printk_safe_flush
>           // deadlock
>           raw_spin_lock_irqsave(&read_lock)
> 
> The register_nmi_handler() can be called from __crash_kexec() or
> crash_smp_send_stop() on x86-64. Because CPU1 is interrupted by the
> NMI while holding the read_lock and crash_nmi_callback() never returns,
> CPU0 can deadlock when printk_safe_flush_on_panic() is called.
> 
> When we hold the read_lock and are then interrupted by the NMI, the NMI
> handler calling nmi_panic() can also lead to a deadlock.
> 
> In order to fix it, we make read_lock global and rename it to
> safe_read_lock. And we handle safe_read_lock the same way in
> printk_safe_flush_on_panic() as we handle logbuf_lock there.

What about the following commit message? It uses imperative language
and explains that the patch just prevents the deadlock. It removes
some details. The diagram is better than many words.


printk_safe_flush_on_panic() caused the following deadlock on our server:

CPU0:                                        CPU1:
panic                                        rcu_dump_cpu_stacks
  kdump_nmi_shootdown_cpus                     nmi_trigger_cpumask_backtrace
    register_nmi_handler(crash_nmi_callback)    printk_safe_flush
                                                  __printk_safe_flush
                                                    raw_spin_lock_irqsave(&read_lock)
    // send NMI to other processors
    apic_send_IPI_allbutself(NMI_VECTOR)
                                                  // NMI interrupt, dead loop
    crash_nmi_callback
      printk_safe_flush_on_panic
        printk_safe_flush
          __printk_safe_flush
          // deadlock
          raw_spin_lock_irqsave(&read_lock)

DEADLOCK: read_lock is taken on CPU1 and will never get released.

It happens when panic() stops a CPU by NMI while it has been in
the middle of printk_safe_flush().

Handle the lock the same way as logbuf_lock. The printk_safe buffers
are flushed only when both locks can be safely taken.

Note: It would actually be safe to re-init the locks when all CPUs were
  stopped by NMI. But it would require passing this information
  from arch-specific code. It is not worth the complexity.
  Especially because logbuf_lock and printk_safe buffers have been
  obsoleted by the lockless ring buffer.
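
For reference, a minimal sketch of how printk_safe_flush_on_panic() could look
with safe_read_lock handled the same way as logbuf_lock (illustrative only,
not necessarily the exact diff; the logbuf_lock block is the pre-existing one):

        void printk_safe_flush_on_panic(void)
        {
                /*
                 * Make sure that we can access the main ring buffer.
                 * Do not risk a double release when more CPUs are up.
                 */
                if (raw_spin_is_locked(&logbuf_lock)) {
                        if (num_online_cpus() > 1)
                                return;

                        debug_locks_off();
                        raw_spin_lock_init(&logbuf_lock);
                }

                /* New part: treat safe_read_lock exactly the same way. */
                if (raw_spin_is_locked(&safe_read_lock)) {
                        if (num_online_cpus() > 1)
                                return;

                        debug_locks_off();
                        raw_spin_lock_init(&safe_read_lock);
                }

                printk_safe_flush();
        }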


> Fixes: cf9b1106c81c ("printk/nmi: flush NMI messages on the system panic")
> Signed-off-by: Muchun Song 

With an updated commit message:

Reviewed-by: Petr Mladek 

Best Regards,
Petr


Re: [External] Re: [PATCH v2] printk: fix deadlock when kernel panic

2021-02-09 Thread Petr Mladek
On Mon 2021-02-08 23:40:07, Muchun Song wrote:
> On Mon, Feb 8, 2021 at 9:12 PM Sergey Senozhatsky wrote:
> >
> > On (21/02/08 16:49), Muchun Song wrote:
> > > On Mon, Feb 8, 2021 at 2:38 PM Sergey Senozhatsky wrote:
> > > >
> > > > On (21/02/06 13:41), Muchun Song wrote:
> > > > > We found a deadlock bug on our server when the kernel panics. It can be
> > > > > described in the following diagram.
> > > > >
> > > > > CPU0:                                        CPU1:
> > > > > panic                                        rcu_dump_cpu_stacks
> > > > >   kdump_nmi_shootdown_cpus                     nmi_trigger_cpumask_backtrace
> > > > >     register_nmi_handler(crash_nmi_callback)    printk_safe_flush
> > > > >                                                   __printk_safe_flush
> > > > >                                                     raw_spin_lock_irqsave(&read_lock)
> > > > >     // send NMI to other processors
> > > > >     apic_send_IPI_allbutself(NMI_VECTOR)
> > > > >                                                   // NMI interrupt, dead loop
> > > > >     crash_nmi_callback
> > > >
> > > > At what point does this decrement num_online_cpus()? Any chance that
> > > > panic CPU can apic_send_IPI_allbutself() and printk_safe_flush_on_panic()
> > > > before num_online_cpus() becomes 1?
> > >
> > > I took a closer look at the code. IIUC, it seems that there is no point
> > > at which num_online_cpus() is decreased.
> >
> > So then this never re-inits the safe_read_lock?

Yes, but it will also not cause the deadlock.
printk_safe_flush_on_panic() will return without flushing
the buffers.
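
The relevant check, roughly (a sketch that assumes the v2 patch is applied and
safe_read_lock is the renamed global lock):

        if (raw_spin_is_locked(&safe_read_lock)) {
                /*
                 * Other CPUs may still be running: re-initializing the lock
                 * would be racy, so give up on flushing instead.
                 */
                if (num_online_cpus() > 1)
                        return;

                /*
                 * Only this CPU is left: the stale lock can never be
                 * released by its owner, so re-init it and keep flushing.
                 */
                debug_locks_off();
                raw_spin_lock_init(&safe_read_lock);
        }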

> Right. If we encounter this case, we do not flush the printk
> buffers. So, it seems my previous patch is the right fix.
> Right?
> 
> https://lore.kernel.org/patchwork/patch/1373563/

No, there is a risk of deadlock caused by logbuf_lock, see
https://lore.kernel.org/lkml/YB0nggSa7a95UCIK@alley/

> >        if (num_online_cpus() > 1)
> >                return;
> >
> >        debug_locks_off();
> >        raw_spin_lock_init(&safe_read_lock);
> >
> > -ss

I prefer this approach. It is straightforward because it handles
read_lock the same way as logbuf_lock.

IMHO, it is not worth inventing any more complexity. Both logbuf_lock
and read_lock are obsoleted by the lockless ringbuffer. And we need
something simple to get backported to the already released kernels.

Best Regards,
Petr


Re: [External] Re: [PATCH v2] printk: fix deadlock when kernel panic

2021-02-08 Thread Muchun Song
On Mon, Feb 8, 2021 at 9:12 PM Sergey Senozhatsky wrote:
>
> On (21/02/08 16:49), Muchun Song wrote:
> > On Mon, Feb 8, 2021 at 2:38 PM Sergey Senozhatsky wrote:
> > >
> > > On (21/02/06 13:41), Muchun Song wrote:
> > > > We found a deadlock bug on our server when the kernel panics. It can be
> > > > described in the following diagram.
> > > >
> > > > CPU0:                                        CPU1:
> > > > panic                                        rcu_dump_cpu_stacks
> > > >   kdump_nmi_shootdown_cpus                     nmi_trigger_cpumask_backtrace
> > > >     register_nmi_handler(crash_nmi_callback)    printk_safe_flush
> > > >                                                   __printk_safe_flush
> > > >                                                     raw_spin_lock_irqsave(&read_lock)
> > > >     // send NMI to other processors
> > > >     apic_send_IPI_allbutself(NMI_VECTOR)
> > > >                                                   // NMI interrupt, dead loop
> > > >     crash_nmi_callback
> > >
> > > At what point does this decrement num_online_cpus()? Any chance that
> > > panic CPU can apic_send_IPI_allbutself() and printk_safe_flush_on_panic()
> > > before num_online_cpus() becomes 1?
> >
> > I took a closer look at the code. IIUC, it seems that there is no point
> > at which num_online_cpus() is decreased.
>
> So then this never re-inits the safe_read_lock?

Right. If we encounter this case, we do not flush the printk
buffers. So, it seems my previous patch is the right fix.
Right?

https://lore.kernel.org/patchwork/patch/1373563/

>
>        if (num_online_cpus() > 1)
>                return;
>
>        debug_locks_off();
>        raw_spin_lock_init(&safe_read_lock);
>
> -ss


Re: [External] Re: [PATCH v2] printk: fix deadlock when kernel panic

2021-02-08 Thread Sergey Senozhatsky
On (21/02/08 16:49), Muchun Song wrote:
> On Mon, Feb 8, 2021 at 2:38 PM Sergey Senozhatsky wrote:
> >
> > On (21/02/06 13:41), Muchun Song wrote:
> > > We found a deadlock bug on our server when the kernel panics. It can be
> > > described in the following diagram.
> > >
> > > CPU0:                                        CPU1:
> > > panic                                        rcu_dump_cpu_stacks
> > >   kdump_nmi_shootdown_cpus                     nmi_trigger_cpumask_backtrace
> > >     register_nmi_handler(crash_nmi_callback)    printk_safe_flush
> > >                                                   __printk_safe_flush
> > >                                                     raw_spin_lock_irqsave(&read_lock)
> > >     // send NMI to other processors
> > >     apic_send_IPI_allbutself(NMI_VECTOR)
> > >                                                   // NMI interrupt, dead loop
> > >     crash_nmi_callback
> >
> > At what point does this decrement num_online_cpus()? Any chance that
> > panic CPU can apic_send_IPI_allbutself() and printk_safe_flush_on_panic()
> > before num_online_cpus() becomes 1?
>
> I took a closer look at the code. IIUC, it seems that there is no point
> at which num_online_cpus() is decreased.

So then this never re-inits the safe_read_lock?

        if (num_online_cpus() > 1)
                return;

        debug_locks_off();
        raw_spin_lock_init(&safe_read_lock);

-ss


Re: [External] Re: [PATCH v2] printk: fix deadlock when kernel panic

2021-02-08 Thread Muchun Song
On Mon, Feb 8, 2021 at 2:38 PM Sergey Senozhatsky wrote:
>
> On (21/02/06 13:41), Muchun Song wrote:
> > We found a deadlock bug on our server when the kernel panics. It can be
> > described in the following diagram.
> >
> > CPU0:                                        CPU1:
> > panic                                        rcu_dump_cpu_stacks
> >   kdump_nmi_shootdown_cpus                     nmi_trigger_cpumask_backtrace
> >     register_nmi_handler(crash_nmi_callback)    printk_safe_flush
> >                                                   __printk_safe_flush
> >                                                     raw_spin_lock_irqsave(&read_lock)
> >     // send NMI to other processors
> >     apic_send_IPI_allbutself(NMI_VECTOR)
> >                                                   // NMI interrupt, dead loop
> >     crash_nmi_callback
>
> At what point does this decrement num_online_cpus()? Any chance that
> panic CPU can apic_send_IPI_allbutself() and printk_safe_flush_on_panic()
> before num_online_cpus() becomes 1?

I took a closer look at the code. IIUC, it seems that there is no point
at which num_online_cpus() is decreased.

>
> >       printk_safe_flush_on_panic
> >         printk_safe_flush
> >           __printk_safe_flush
> >           // deadlock
> >           raw_spin_lock_irqsave(&read_lock)
>
> -ss


Re: [PATCH v2] printk: fix deadlock when kernel panic

2021-02-07 Thread Sergey Senozhatsky
On (21/02/06 13:41), Muchun Song wrote:
> We found a deadlock bug on our server when the kernel panics. It can be
> described in the following diagram.
> 
> CPU0:                                        CPU1:
> panic                                        rcu_dump_cpu_stacks
>   kdump_nmi_shootdown_cpus                     nmi_trigger_cpumask_backtrace
>     register_nmi_handler(crash_nmi_callback)    printk_safe_flush
>                                                   __printk_safe_flush
>                                                     raw_spin_lock_irqsave(&read_lock)
>     // send NMI to other processors
>     apic_send_IPI_allbutself(NMI_VECTOR)
>                                                   // NMI interrupt, dead loop
>     crash_nmi_callback

At what point does this decrement num_online_cpus()? Any chance that
panic CPU can apic_send_IPI_allbutself() and printk_safe_flush_on_panic()
before num_online_cpus() becomes 1?

>       printk_safe_flush_on_panic
>         printk_safe_flush
>           __printk_safe_flush
>           // deadlock
>           raw_spin_lock_irqsave(&read_lock)

-ss