* Peter Zijlstra [2012-08-13 09:51:13]:
> On Fri, 2012-08-10 at 21:54 +0530, Srikar Dronamraju wrote:
>
> > This change worked well on the 2 node machine
> > but on the 8 node machine it hangs with repeated messages
> >
Pid: 60935, comm: numa01 Tainted: G W 3.5.0-numasched_v2_020
* Peter Zijlstra [2012-08-13 10:11:28]:
> On Mon, 2012-08-13 at 09:51 +0200, Peter Zijlstra wrote:
> > On Fri, 2012-08-10 at 21:54 +0530, Srikar Dronamraju wrote:
> >
> > > This change worked well on the 2 node machine
> > > but on the 8 node machine it hangs with repeated messages
> > >
> > >
On Mon, 2012-08-13 at 09:51 +0200, Peter Zijlstra wrote:
> On Fri, 2012-08-10 at 21:54 +0530, Srikar Dronamraju wrote:
>
> > This change worked well on the 2 node machine
> > but on the 8 node machine it hangs with repeated messages
> >
> > Pid: 60935, comm: numa01 Tainted: G W 3.5.0-n
On Fri, 2012-08-10 at 21:54 +0530, Srikar Dronamraju wrote:
> This change worked well on the 2 node machine
> but on the 8 node machine it hangs with repeated messages
>
> Pid: 60935, comm: numa01 Tainted: G W 3.5.0-numasched_v2_020812+ #4
> Call Trace:
> [] ? rcu_check_callbacks+0x
> ---
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1539,6 +1539,7 @@ struct task_struct {
>  #ifdef CONFIG_SMP
>  	u64 node_stamp;		/* migration stamp */
>  	unsigned long numa_contrib;
> +	struct callback_head numa_work;
>  #endif /* CONFIG_SMP */
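The callback_head added above hooks the task into the reworked task_work API from commit 158e1645e, so per-task NUMA work can be queued from the scheduler tick and run when the task drops back to user space. A minimal sketch of how such a field is typically driven, assuming a task_numa_work() handler and a queue_numa_work() helper, neither of which is spelled out in this thread:

#include <linux/sched.h>
#include <linux/task_work.h>

/* Runs in the task's own context, just before it returns to user space. */
static void task_numa_work(struct callback_head *work)
{
	struct task_struct *p = container_of(work, struct task_struct, numa_work);

	/* ... scan/adjust this task's NUMA placement here ... */
	pr_debug("numa work for pid %d\n", task_pid_nr(p));
}

/*
 * Queue the work on @curr, e.g. from scheduler-tick context.  Real code has
 * to make sure the same callback_head is not queued a second time before the
 * first run has completed.
 */
static void queue_numa_work(struct task_struct *curr)
{
	init_task_work(&curr->numa_work, task_numa_work);
	task_work_add(curr, &curr->numa_work, true);	/* true: kick the task */
}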
On Tue, 2012-08-07 at 22:49 +0530, Srikar Dronamraju wrote:
> Are you referring to this commit: 158e1645e (trim task_work: get rid of
> hlist)?
No, to something like the below..
> I am also able to reproduce this on another 8 node machine too.
Ship me one ;-)
> Just to update, I had to rever
* Peter Zijlstra [2012-08-07 15:52:48]:
> On Tue, 2012-08-07 at 18:03 +0530, Srikar Dronamraju wrote:
> > Hi,
> >
> > INFO: rcu_sched self-detected stall on CPU { 7} (t=105182911 jiffies)
> > Pid: 5173, comm: qpidd Tainted: G W 3.5.0-numasched_v2_020812+ #1
> > Call Trace:
> >[
* John Stultz [2012-08-07 10:08:51]:
> On 08/07/2012 05:33 AM, Srikar Dronamraju wrote:
> >Hi,
> >
> >I saw this while I was running the 2nd August -tip kernel + Peter's
> >numasched patches.
> >
> >Top showed the load average to be 240; one cpu (cpu 7) showed 100%
> >while all other
On 08/07/2012 05:33 AM, Srikar Dronamraju wrote:
Hi,
I saw this while I was running the 2nd August -tip kernel + Peter's
numasched patches.
Top showed the load average to be 240; one cpu (cpu 7) showed 100%
while all other cpus were idle. The system showed some
sluggishness. Befor
On Tue, 2012-08-07 at 18:03 +0530, Srikar Dronamraju wrote:
> Hi,
>
> I saw this while I was running the 2nd August -tip kernel + Peter's
> numasched patches.
>
> Top showed the load average to be 240; one cpu (cpu 7) showed 100%
> while all other cpus were idle. The system showed
Hi,
I saw this while I was running the 2nd August -tip kernel + Peter's
numasched patches.
Top showed the load average to be 240; one cpu (cpu 7) showed 100%
while all other cpus were idle. The system showed some
sluggishness. Before I saw this I ran Andrea's autonuma benchmark cou