On (01/29/16 15:54), Byungchul Park wrote:
> On Fri, Jan 29, 2016 at 09:27:03AM +0900, Sergey Senozhatsky wrote:
> > 
> > well, the stack is surely limited, but on every
> > spin_dump()->spin_lock() recursive call it does another
> > round of
> > 
> >     u64 loops = loops_per_jiffy * HZ;
> > 
> >     for (i = 0; i < loops; i++) {
> >             if (arch_spin_trylock(&lock->raw_lock))
> >                     return;
> >             __delay(1);
> >     }
> > 
> > so if you have 1000 spin_dump()->spin_lock() then, well,
> > something has been holding the lock for '1000 * loops_per_jiffy * HZ'.
> 
> Or the printk() is heavily called and the lock is congested.

well, isn't it the case that ticket-based locking guarantees at least
some sort of fairness (waiters take the lock in FIFO order)? how many
cpus do you have there? you can have `num_online_cpus() - 1' tasks
spinning on the spin lock and 1 owning the spin lock... if your lock is
in a correct state (no before/after spinlock debug errors) even the most
unlucky task should get the lock eventually...
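
to illustrate the fairness argument, here is a minimal standalone sketch
of a ticket lock (hypothetical C11 code, not the kernel's
arch_spinlock_t implementation): each waiter grabs a ticket and spins
until "now serving" reaches it, so waiters are served strictly in
arrival order and no cpu can starve the others.

    #include <stdatomic.h>

    struct ticket_lock {
            atomic_uint next;       /* next ticket to hand out */
            atomic_uint owner;      /* ticket currently being served */
    };

    static void ticket_lock(struct ticket_lock *lock)
    {
            /* take a ticket; arrival order fixes service order */
            unsigned int me = atomic_fetch_add(&lock->next, 1);

            /* spin until our ticket comes up */
            while (atomic_load(&lock->owner) != me)
                    ;       /* cpu_relax() in real code */
    }

    static void ticket_unlock(struct ticket_lock *lock)
    {
            /* hand the lock to the next waiter in line */
            atomic_fetch_add(&lock->owner, 1);
    }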

        -ss
