On (01/29/16 21:43), Byungchul Park wrote:
[..]
> +extern int is_console_lock(raw_spinlock_t *lock);
> +
>  static void __spin_lock_debug(raw_spinlock_t *lock)
>  {
>       u64 i;
> @@ -113,11 +115,19 @@ static void __spin_lock_debug(raw_spinlock_t *lock)
>                       return;
>               __delay(1);
>       }
> -     /* lockup suspected: */
> -     spin_dump(lock, "lockup suspected");
> +
> +     /*
> +      * If this function is called from printk(), we must not
> +      * call printk() again, or we will recurse infinitely.
> +      */
> +     if (likely(!is_console_lock(lock))) {
> +             /* lockup suspected: */
> +             spin_dump(lock, "lockup suspected");
>  #ifdef CONFIG_SMP
> -     trigger_all_cpu_backtrace();
> +             trigger_all_cpu_backtrace();
>  #endif
> +     }

/* speaking in the context of printk locks only */

... maybe for a recoverable "lockup suspected" spin_dump()
we can switch to deferred printk of the messages, which is a
bit better than nothing. but for an unrecoverable "lockup
suspected" spin_dump() -- an actual bug (the spin lock owner
is not going to release the lock any more) -- we need
something else, I think. otherwise the bug will neither be
reported nor fixed.
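
a rough sketch of the deferred variant I have in mind (untested;
it keeps the is_console_lock() helper from this patch and relies
on printk_deferred(), which only stores the message in the log
buffer and flushes the consoles later from irq_work, i.e. without
taking console_sem):

        if (unlikely(is_console_lock(lock))) {
                /*
                 * We are spinning on console_sem.lock, so a
                 * direct printk() here would recurse right back
                 * into this path. printk_deferred() avoids that:
                 * it never tries to acquire console_sem.
                 */
                printk_deferred(KERN_EMERG
                        "BUG: spinlock lockup suspected on CPU#%d, %s/%d\n",
                        raw_smp_processor_id(), current->comm,
                        task_pid_nr(current));
        } else {
                /* lockup suspected: */
                spin_dump(lock, "lockup suspected");
        #ifdef CONFIG_SMP
                trigger_all_cpu_backtrace();
        #endif
        }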

        -ss

>       /*
>        * The trylock above was causing a livelock.  Give the lower level arch
> diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
> index 2ce8826..568ab11 100644
> --- a/kernel/printk/printk.c
> +++ b/kernel/printk/printk.c
> @@ -1981,6 +1981,11 @@ asmlinkage __visible void early_printk(const char *fmt, ...)
>  }
>  #endif
>  
> +int is_console_lock(raw_spinlock_t *lock)
> +{
> +     return lock == &console_sem.lock;
> +}
> +
>  static int __add_preferred_console(char *name, int idx, char *options,
>                                  char *brl_options)
>  {
> -- 
> 1.9.1
> 
