On 01/22/2015 11:02 PM, Sasha Levin wrote:
> On 01/22/2015 10:51 PM, Paul E. McKenney wrote:
>> On Thu, Jan 22, 2015 at 10:29:01PM -0500, Sasha Levin wrote:
>>> On 01/21/2015 07:43 PM, Paul E. McKenney wrote:
>>>> On Wed, Jan 21, 2015 at 10:44:57AM -0500, Sasha Levin wrote:
>>>>> On 01/20/2015 09:57 PM, Paul E. McKenney wrote:
>>>>>>>> So RCU believes that an RCU read-side critical section that ended within
>>>>>>>> an interrupt handler (in this case, an hrtimer) somehow got preempted.
>>>>>>>> Which is not supposed to happen.
>>>>>>>>
>>>>>>>> Do you have CONFIG_PROVE_RCU enabled?  If not, could you please enable it
>>>>>>>> and retry?
>>>>>>>
>>>>>>> I did have CONFIG_PROVE_RCU, and didn't see anything else besides what I
>>>>>>> pasted here.
>>>>>>
>>>>>> OK, fair enough.  I do have a stack of RCU CPU stall-warning changes on
>>>>>> their way in, please see v3.19-rc1..630181c4a915 in -rcu, which is at:
>>>>>>
>>>>>> git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git
>>>>>>
>>>>>> These handle the problems that Dave Jones, yourself, and a few others
>>>>>> located this past December.  Could you please give them a spin?
>>>>>
>>>>> They seem to be a part of -next already, so this testing already includes them.
>>>>>
>>>>> I seem to be getting them about once a day, anything I can add to debug it?
>>>>
>>>> Could you please try reproducing with the following patch?
>>>
>>> Yes, and I've got mixed results. It reproduced, and all I got was:
>>>
>>> [  717.645572] ===============================
>>> [  717.645572] [ INFO: suspicious RCU usage. ]
>>> [  717.645572] 3.19.0-rc5-next-20150121-sasha-00064-g3c37e35-dirty #1809 Tainted: G        W
>>> [  717.645572] -------------------------------
>>> [  717.645572] kernel/rcu/tree_plugin.h:337 rcu_read_unlock() from irq or softirq with blocking in critical section!!!
>>> [  717.645572] !
>>> [  717.645572]
>>> [  717.645572] other info that might help us debug this:
>>> [  717.645572]
>>> [  717.645572]
>>> [  717.645572] rcu_scheduler_active = 1, debug_locks = 1
>>> [  717.645572] 3 locks held by trinity-c29/16497:
>>> [  717.645572]  #0:  (&sb->s_type->i_mutex_key){+.+.+.}, at: [<ffffffff81bec373>] lookup_slow+0xd3/0x420
>>> [  717.645572]  #1:
>>> [hang]
>>>
>>> So the rest of the locks/stack trace didn't get printed, nor the pr_alert()
>>> which should follow that.
>>>
>>> I've removed the lockdep call and will re-run it.
>> Thank you!  You are keeping the pr_alert(), correct?
> 
> Yup, just the lockdep call goes away.
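
To restate Paul's diagnosis above in code form: the check fires when a
read-side critical section that ends in hrtimer (irq) context nevertheless
looks to RCU as if it had been preempted. A minimal sketch of the suspect
pattern (made-up handler name, obviously not the actual trinity call path):

#include <linux/hrtimer.h>
#include <linux/rcupdate.h>

/* Made-up example handler; not the real call path. */
static enum hrtimer_restart my_timer_fn(struct hrtimer *timer)
{
	rcu_read_lock();
	/* ... lookup under RCU protection ... */
	rcu_read_unlock();	/* ends in irq context, so RCU must never
				 * see this section as having blocked */
	return HRTIMER_NORESTART;
}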

Okay, this reproduced faster than I anticipated:

[  786.160131] ->rcu_read_unlock_special: 0x100 (b: 0, nq: 1)
[  786.239513] ->rcu_read_unlock_special: 0x100 (b: 0, nq: 1)
[  786.240503] ->rcu_read_unlock_special: 0x100 (b: 0, nq: 1)
[  786.242575] ->rcu_read_unlock_special: 0x100 (b: 0, nq: 1)
[  786.243565] ->rcu_read_unlock_special: 0x100 (b: 0, nq: 1)
[  786.243565] ->rcu_read_unlock_special: 0x100 (b: 0, nq: 1)
[  786.243565] ->rcu_read_unlock_special: 0x100 (b: 0, nq: 1)
[  786.243565] ->rcu_read_unlock_special: 0x100 (b: 0, nq: 1)
[  786.243565] ->rcu_read_unlock_special: 0x100 (b: 0, nq: 1)
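
For reference, the flag value decodes the way the (b: ..., nq: ...)
annotation suggests: assuming the 3.19-era <linux/sched.h> definition I have
in mind, ->rcu_read_unlock_special is a union of two bools and a short, so
need_qs alone reads back as 0x100 on a little-endian machine:

union rcu_special {
	struct {
		bool blocked;	/* reader blocked/preempted in its section */
		bool need_qs;	/* RCU core wants a quiescent state */
	} b;
	short s;		/* both flags viewed as a single value */
};

/* blocked == false, need_qs == true  =>  s == 0x100 on little-endian,
 * matching every line of the trace above. */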

It seems like the WARN_ON_ONCE was hiding the fact that it actually got hit a
couple of times in a very short interval. Maybe that would also explain lockdep
crapping itself.
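
That would match WARN_ON_ONCE() semantics: it latches after the first hit and
swallows every later one, while the pr_alert() fires unconditionally. Roughly
(simplified from include/asm-generic/bug.h):

#define WARN_ON_ONCE(condition)	({				\
	static bool __warned;					\
	int __ret_warn_once = !!(condition);			\
								\
	if (unlikely(__ret_warn_once))				\
		if (WARN_ON(!__warned))				\
			__warned = true; /* warn on first hit only */ \
	unlikely(__ret_warn_once);				\
})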


Thanks,
Sasha
