Hello,

        This patch set extends the lock-less NMI per-CPU buffers idea to
handle recursive printk() calls. The basic mechanism stays the same -- at
the beginning of a deadlock-prone section we switch to the lock-less
printk callback, and switch back to the default printk implementation at
the end; the buffered messages are then flushed to the main logbuf from a
safe context.
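
To illustrate the idea, here is a minimal sketch (not taken verbatim from
the patches; do_something_that_may_warn() is a made-up helper, the real
call sites are in the patches themselves) of how the
printk_safe_enter_irqsave()/printk_safe_exit_irqrestore() pair is meant
to wrap a deadlock-prone section:

	unsigned long flags;

	/* switch this CPU to the lock-less printk callback */
	printk_safe_enter_irqsave(flags);
	raw_spin_lock(&logbuf_lock);

	/*
	 * Any printk() issued from here (say, from a WARN_ON() deep in
	 * the code below) goes to the per-CPU seq buffer instead of
	 * trying to take logbuf_lock recursively.
	 */
	do_something_that_may_warn();

	raw_spin_unlock(&logbuf_lock);
	/* switch back; the buffered messages get flushed later */
	printk_safe_exit_irqrestore(flags);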

See patch 0006 for examples of possible deadlock scenarios that are
handled by printk-safe.
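
For reference, one classic recursion chain of that kind (illustrative
only, not copied from the patch) looks roughly like this:

	printk()
	 vprintk_emit()
	  raw_spin_lock(&logbuf_lock)        <- logbuf_lock acquired
	   ...
	    WARN_ON() -> printk()            <- recursive printk
	     vprintk_emit()
	      raw_spin_lock(&logbuf_lock)    <- deadlock, lock already held

With printk-safe the inner printk() lands in the lock-less per-CPU buffer
instead of re-taking logbuf_lock.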


v7:
-- don't touch printk_nmi_enter/exit
-- new printk_safe macro naming: printk_safe_enter_irqsave/exit_irqrestore
-- added printk_safe_enter_irq/printk_safe_exit_irq
-- added patch that converts the rest of printk.c to printk-safe
-- fix build error on !NMI configs
-- improved lost_messages reporting

v6:
-- re-based
-- addressed Petr's review (added comments; moved lost accounting to seq buf)
...

[against next-20161224 + "printk: always report dropped messages" patch set]

Sergey Senozhatsky (8):
  printk: use vprintk_func in vprintk()
  printk: rename nmi.c and exported api
  printk: introduce per-cpu safe_print seq buffer
  printk: always use deferred printk when flush printk_safe lines
  printk: report lost messages in printk safe/nmi contexts
  printk: use printk_safe buffers in printk
  printk: remove zap_locks() function
  printk: convert the rest to printk-safe

 include/linux/printk.h                 |  21 ++-
 init/Kconfig                           |  16 ++-
 init/main.c                            |   2 +-
 kernel/kexec_core.c                    |   2 +-
 kernel/panic.c                         |   4 +-
 kernel/printk/Makefile                 |   2 +-
 kernel/printk/internal.h               |  79 ++++++-----
 kernel/printk/printk.c                 | 220 +++++++++++++------------------
 kernel/printk/{nmi.c => printk_safe.c} | 234 +++++++++++++++++++++++----------
 lib/nmi_backtrace.c                    |   2 +-
 10 files changed, 335 insertions(+), 247 deletions(-)
 rename kernel/printk/{nmi.c => printk_safe.c} (53%)

-- 
2.11.0
