Currently I always see this warning at boot:
===================================================
[ INFO: suspicious rcu_dereference_check() usage. ]
---------------------------------------------------
kernel/sched.c:616 invoked rcu_dereference_check() without protection!
other info that might help us debug this:
rcu_scheduler_active = 1, debug_locks = 0
3 locks held by swapper/1:
#0: (cpu_add_remove_lock){+.+.+.}, at: [<c01356a6>] cpu_maps_update_begin+0xf/0x11
#1: (cpu_hotplug.lock){+.+.+.}, at: [<c01355ee>] cpu_hotplug_begin+0x1d/0x40
#2: (&rq->lock){-.-...}, at: [<c05ecfd1>] init_idle+0x25/0x20a
stack backtrace:
Pid: 1, comm: swapper Not tainted 2.6.35.3-9.1-mid #1
Call Trace:
[<c05eeb8c>] ? printk+0xf/0x11
[<c015a056>] lockdep_rcu_dereference+0x7d/0x86
[<c05ed0dc>] init_idle+0x130/0x20a
[<c05ed46b>] fork_idle+0x70/0x79
[<c05ebf77>] do_fork_idle+0xe/0x1c
[<c05ebce1>] do_boot_cpu+0xcc/0x354
[<c05ebf69>] ? do_fork_idle+0x0/0x1c
[<c05ec548>] native_cpu_up+0xbe/0x124
[<c05ed4fd>] _cpu_up.clone.0+0x78/0xc7
[<c05ed584>] cpu_up+0x38/0x45
[<c08bc233>] smp_init+0x3b/0x85
[<c08bc812>] ? kernel_init+0x0/0x130
[<c08bc881>] kernel_init+0x6f/0x130
[<c0102bfa>] kernel_thread_helper+0x6/0x10
This is a backport of the following upstream commits from v2.6.36.
Signed-off-by: Chris Leech <[email protected]>
commit 1144182a8757f2a1f909f0c592898aaaf80884fc
Author: Paul E. McKenney <[email protected]>
Date: Wed Oct 6 17:15:35 2010 -0700
net: suppress RCU lockdep false positive in sock_update_classid
The sock_update_classid() function calls task_cls_classid(current),
but the calling task cannot go away, so there is no danger of
the associated structures disappearing. Insert an RCU read-side
critical section to suppress the false positive.
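As an illustrative sketch (not part of the patch; the function name is made up, though task_cls_classid() and current are the real symbols), the whole fix in isolation looks like this. The read-side critical section is what lets the rcu_dereference() inside task_cls_classid() pass PROVE_RCU, even though current cannot go away:

#include <linux/rcupdate.h>
#include <linux/sched.h>
#include <net/cls_cgroup.h>

/* example_current_classid() is a made-up name for illustration. */
static u32 example_current_classid(void)
{
	u32 classid;

	rcu_read_lock();	/* satisfies the rcu_dereference() check */
	classid = task_cls_classid(current);
	rcu_read_unlock();

	return classid;
}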
commit 4ee0a603926cad973e4d384f48c5e279a0fd4118
Author: Dongdong Deng <[email protected]>
Date: Tue Sep 28 16:32:43 2010 +0800
rcu: using ACCESS_ONCE() to observe the jiffies_stall/rnp->qsmask value
Use ACCESS_ONCE() when observing the jiffies_stall/rnp->qsmask values,
because the caller does not hold the root_rcu/rnp node's lock. Although
omitting ACCESS_ONCE() is safe here, since each loaded value is used
but once, ACCESS_ONCE() is a good documentation aid -- it makes explicit
that the variables are being loaded without the services of a lock.
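For reference, a minimal sketch of the idiom (example_stall_at and example_stall_due are made-up stand-ins for rsp->jiffies_stall and the stall check):

#include <linux/compiler.h>
#include <linux/jiffies.h>

static unsigned long example_stall_at;	/* written elsewhere, read locklessly */

/* Returns true once the recorded stall time has passed. ACCESS_ONCE()
 * forces exactly one load and documents that no lock protects it. */
static int example_stall_due(void)
{
	long delta = jiffies - ACCESS_ONCE(example_stall_at);

	return delta >= 0;
}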
commit 6506cf6ce68d78a5470a8360c965dafe8e4b78e3
Author: Peter Zijlstra <[email protected]>
Date: Thu Sep 16 17:50:31 2010 +0200
sched: fix RCU lockdep splat from task_group()
The cgroup_subsys_state structures referenced by idle tasks are never
freed, because the idle tasks should be part of the root cgroup,
which is not removable.
The problem is that while we do in fact hold rq->lock, the newly spawned
idle thread's cpu is not yet set to the correct cpu so the lockdep check
in task_group():
lockdep_is_held(&task_rq(p)->lock)
will fail.
But this is a chicken-and-egg problem: the check consults task_rq(p),
which only names the right runqueue once the task's CPU has been set,
and setting the CPU is exactly the operation being checked. ;-)
So insert an RCU read-side critical section to avoid the complaint.
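To see why that works, here is a simplified sketch of the shape of the check (example_ptr, example_lock, and example_read are made-up names; the real check sits in task_group() via task_subsys_state_check()). rcu_dereference_check() accepts a dereference when its condition holds, and the cgroup helpers include rcu_read_lock_held() in that condition, so taking rcu_read_lock() sidesteps the lockdep_is_held() branch that cannot yet be true:

#include <linux/lockdep.h>
#include <linux/rcupdate.h>
#include <linux/spinlock.h>

struct example_state {
	int val;
};

static struct example_state *example_ptr;
static DEFINE_SPINLOCK(example_lock);

static int example_read(void)
{
	struct example_state *s;
	int val = -1;

	rcu_read_lock();	/* makes rcu_read_lock_held() true */
	s = rcu_dereference_check(example_ptr,
				  rcu_read_lock_held() ||
				  lockdep_is_held(&example_lock));
	if (s)
		val = s->val;
	rcu_read_unlock();

	return val;
}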
commit b0a0f667a349247bd7f05f806b662a25653822bc
Author: Paul E. McKenney <[email protected]>
Date: Wed Oct 6 17:32:51 2010 -0700
sched: suppress RCU lockdep splat in task_fork_fair
Here a newly created task is having its runqueue assigned. The new task
is not yet on the tasklist, so cannot go away. This is therefore a false
positive, suppress with an RCU read-side critical section.
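All four fixes share one shape: when the object is pinned by construction (current, a child not yet on the tasklist, the never-freed root cgroup), a read-side critical section around the dereference costs essentially nothing and documents the pinning. A generic sketch (pinned_op is a made-up stand-in for calls such as __set_task_cpu()):

#include <linux/rcupdate.h>
#include <linux/sched.h>

static void example_suppress_false_positive(struct task_struct *p, int cpu,
					    void (*pinned_op)(struct task_struct *, int))
{
	rcu_read_lock();	/* no real protection needed; placates PROVE_RCU */
	pinned_op(p, cpu);
	rcu_read_unlock();
}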
---
kernel/rcutree.c | 4 ++--
kernel/sched.c | 12 ++++++++++++
kernel/sched_fair.c | 5 ++++-
net/core/sock.c | 6 +++++-
4 files changed, 23 insertions(+), 4 deletions(-)
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index d443734..ef01915 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -532,9 +532,9 @@ static void check_cpu_stall(struct rcu_state *rsp, struct rcu_data *rdp)
 
 	if (rcu_cpu_stall_panicking)
 		return;
-	delta = jiffies - rsp->jiffies_stall;
+	delta = jiffies - ACCESS_ONCE(rsp->jiffies_stall);
 	rnp = rdp->mynode;
-	if ((rnp->qsmask & rdp->grpmask) && delta >= 0) {
+	if ((ACCESS_ONCE(rnp->qsmask) & rdp->grpmask) && delta >= 0) {
 
 		/* We haven't checked in, so go dump stack. */
 		print_cpu_stall(rsp);
diff --git a/kernel/sched.c b/kernel/sched.c
index 6d0dbeb..8ae0f60 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -5164,7 +5164,19 @@ void __cpuinit init_idle(struct task_struct *idle, int cpu)
 	idle->se.exec_start = sched_clock();
 
 	cpumask_copy(&idle->cpus_allowed, cpumask_of(cpu));
+	/*
+	 * We're having a chicken and egg problem, even though we are
+	 * holding rq->lock, the cpu isn't yet set to this cpu so the
+	 * lockdep check in task_group() will fail.
+	 *
+	 * Similar case to sched_fork(). / Alternatively we could
+	 * use task_rq_lock() here and obtain the other rq->lock.
+	 *
+	 * Silence PROVE_RCU
+	 */
+	rcu_read_lock();
 	__set_task_cpu(idle, cpu);
+	rcu_read_unlock();
 
 	rq->curr = rq->idle = idle;
 #if defined(CONFIG_SMP) && defined(__ARCH_WANT_UNLOCKED_CTXSW)
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index a878b53..b605791 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -3542,8 +3542,11 @@ static void task_fork_fair(struct task_struct *p)
 
 	raw_spin_lock_irqsave(&rq->lock, flags);
 
-	if (unlikely(task_cpu(p) != this_cpu))
+	if (unlikely(task_cpu(p) != this_cpu)) {
+		rcu_read_lock();
 		__set_task_cpu(p, this_cpu);
+		rcu_read_unlock();
+	}
 
 	update_curr(cfs_rq);
 
diff --git a/net/core/sock.c b/net/core/sock.c
index 2cf7f9f..fd652f4 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1059,7 +1059,11 @@ static void sk_prot_free(struct proto *prot, struct sock *sk)
 #ifdef CONFIG_CGROUPS
 void sock_update_classid(struct sock *sk)
 {
-	u32 classid = task_cls_classid(current);
+	u32 classid;
+
+	rcu_read_lock(); /* doing current task, which cannot vanish */
+	classid = task_cls_classid(current);
+	rcu_read_unlock();
 
 	if (classid && classid != sk->sk_classid)
 		sk->sk_classid = classid;