On Tue, Jan 05, 2016 at 05:52:11PM +0900, Byungchul Park wrote:
>
> Upstream commits to be applied
> ==
>
> e3fca9e: sched: Replace post_schedule with a balance callback list
> 4c9a4bc: sched: Allow balance callbacks for check_class_changed()
> 8046d68: sched,rt:
Commit-ID: 642c2d671ceff40e9453203ea0c66e991e11e249
Gitweb: http://git.kernel.org/tip/642c2d671ceff40e9453203ea0c66e991e11e249
Author: Peter Zijlstra <pet...@infradead.org>
AuthorDate: Mon, 30 Nov 2015 12:56:15 +0100
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Fri
On Mon, Nov 23, 2015 at 03:19:19PM +0100, Frederic Weisbecker wrote:
> On Thu, Nov 19, 2015 at 09:28:28PM +0100, Peter Zijlstra wrote:
> > On Thu, Nov 19, 2015 at 04:47:33PM +0100, Frederic Weisbecker wrote:
> > > +++ b/include/linux/vtime.h
> > > @@ -17,9 +
On Thu, Nov 19, 2015 at 04:47:33PM +0100, Frederic Weisbecker wrote:
> +++ b/include/linux/vtime.h
> @@ -17,9 +17,20 @@ static inline bool vtime_accounting_cpu_enabled(void) { return true; }
> #endif /* CONFIG_VIRT_CPU_ACCOUNTING_NATIVE */
>
> #ifdef CONFIG_VIRT_CPU_ACCOUNTING_GEN
> +/*
> +
mic_xxx_return barrier semantics")
>
> This patch depends on patch "powerpc: Make value-returning atomics fully
> ordered" for PPC_ATOMIC_ENTRY_BARRIER definition.
>
> Cc: <stable@vger.kernel.org> # 3.4+
> Signed-off-by: Boqun Feng <boqun.f...@gmail.com>
ed, which can avoid possible
> memory ordering problems if userspace code relies on futex system call
> for fully ordered semantics.
>
> Cc: <stable@vger.kernel.org> # 3.4+
> Signed-off-by: Boqun Feng <boqun.f...@gmail.com>
Acked-by: Peter Zijlstra (Intel) <
On Thu, Oct 22, 2015 at 08:07:16PM +0800, Boqun Feng wrote:
> On Wed, Oct 21, 2015 at 09:48:25PM +0200, Peter Zijlstra wrote:
> > On Wed, Oct 21, 2015 at 12:35:23PM -0700, Paul E. McKenney wrote:
> > > > > > > I ask this because I recall Pet
On Wed, Oct 21, 2015 at 12:35:23PM -0700, Paul E. McKenney wrote:
> > > > > I ask this because I recall Peter once bought up a discussion:
> > > > >
> > > > > https://lkml.org/lkml/2015/8/26/596
> > So a full barrier on one side of these operations is enough, I think.
> > IOW, there is no need
On Tue, Oct 20, 2015 at 11:46:33AM -0700, Andi Kleen wrote:
> From: Andi Kleen
>
> This fixes a bug added with the earlier 90405aa02. The bug
> could lead to lost LBR call stacks. When restoring the LBR
> state we need to use the TOS of the previous context, not
> the
On Tue, Oct 20, 2015 at 02:28:35PM -0700, Paul E. McKenney wrote:
> I am not seeing a sync there, but I really have to defer to the
> maintainers on this one. I could easily have missed one.
So x86 implies a full barrier for everything that changes the CPL; and
some form of implied ordering
On Tue, Oct 20, 2015 at 03:15:32PM +0800, Boqun Feng wrote:
> On Wed, Oct 14, 2015 at 01:19:17PM -0700, Paul E. McKenney wrote:
> >
> > Am I missing something here? If not, it seems to me that you need
> > the leading lwsync to instead be a sync.
> >
> > Of course, if I am not missing
On Wed, Oct 14, 2015 at 08:51:34AM +0800, Boqun Feng wrote:
> On Wed, Oct 14, 2015 at 11:10:00AM +1100, Michael Ellerman wrote:
> > Thanks for fixing this. In future you should send a patch like this as a
> > separate patch. I've not been paying attention to it because I assumed it
> > was
>
>
On Wed, Oct 14, 2015 at 05:26:53PM +0800, Boqun Feng wrote:
> Michael and Peter, rest of this patchset depends on commits which are
> currently in the locking/core branch of the tip, so I would like it as a
> whole queued there. Besides, I will keep this patch Cc'ed to stable in
> future versions,
On Wed, Oct 14, 2015 at 01:19:17PM -0700, Paul E. McKenney wrote:
> Suppose we have something like the following, where "a" and "x" are both
> initially zero:
>
>   CPU 0                        CPU 1
>   -----                        -----
>
> WRITE_ONCE(x, 1);
On Fri, Oct 09, 2015 at 11:25:09AM +0100, Thomas Gleixner wrote:
> Hans,
>
> On Fri, 9 Oct 2015, Hans Zuidam wrote:
> > On 9 okt. 2015, at 11:06, Thomas Gleixner wrote:
> > > You cannot use an explicit 32bit read. We need an access which
> > > handles the fault gracefully.
>
very little time
> * (e.g. a polling loop)
> */
>
> I'll include it in my pull request.
In which case:
Acked-by: Peter Zijlstra (Intel) <pet...@infradead.org>
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Commit-ID: ebfb4988f0378e2ac3b4a0aa1ea20d724293f392
Gitweb: http://git.kernel.org/tip/ebfb4988f0378e2ac3b4a0aa1ea20d724293f392
Author: Peter Zijlstra <pet...@infradead.org>
AuthorDate: Thu, 10 Sep 2015 11:58:27 +0200
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Sun,
@vger.kernel.org
Acked-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Commit-ID: fed66e2cdd4f127a43fd11b8d92a99bdd429528c
Gitweb: http://git.kernel.org/tip/fed66e2cdd4f127a43fd11b8d92a99bdd429528c
Author: Peter Zijlstra pet...@infradead.org
AuthorDate: Thu, 11 Jun 2015 10:32:01 +0200
Committer: Ingo Molnar mi...@kernel.org
CommitDate: Tue, 4 Aug 2015 09:57
On Mon, Jul 20, 2015 at 11:47:11AM -0700, Andy Lutomirski wrote:
@@ -300,6 +300,7 @@ unknown_nmi_error(unsigned char reason, struct pt_regs *regs)
 		panic("NMI: Not continuing");
 	pr_emerg("Dazed and confused, but trying to continue\n");
+	dump_stack();
 }
Commit-ID: a833581e372a4adae2319d8dc379493edbc444e9
Gitweb: http://git.kernel.org/tip/a833581e372a4adae2319d8dc379493edbc444e9
Author: Peter Zijlstra pet...@infradead.org
AuthorDate: Thu, 9 Jul 2015 19:23:38 +0200
Committer: Ingo Molnar mi...@kernel.org
CommitDate: Fri, 10 Jul 2015 10:24
And iirc you're relying on asm-generic/barrier.h to issue
smp_mb__{before,after}_atomic() as smp_mb(), right?
Acked-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Although I'd love to know why you need those extra barriers in your
spinlocks...
Commit-ID: d525211f9d1be8b523ec7633f080f2116f5ea536
Gitweb: http://git.kernel.org/tip/d525211f9d1be8b523ec7633f080f2116f5ea536
Author: Peter Zijlstra pet...@infradead.org
AuthorDate: Thu, 19 Feb 2015 18:03:11 +0100
Committer: Ingo Molnar mi...@kernel.org
CommitDate: Mon, 23 Mar 2015 10
On Sat, Feb 28, 2015 at 09:36:15PM +0100, Manfred Spraul wrote:
+/*
+ * Place this after a control barrier (such as e.g. a spin_unlock_wait())
+ * to ensure that reads cannot be moved ahead of the control_barrier.
+ * Writes do not need a barrier, they are not speculated and thus cannot
+ *
Commit-ID: 40767b0dc768060266d261b4a330164b4be53f7c
Gitweb: http://git.kernel.org/tip/40767b0dc768060266d261b4a330164b4be53f7c
Author: Peter Zijlstra pet...@infradead.org
AuthorDate: Wed, 28 Jan 2015 15:08:03 +0100
Committer: Ingo Molnar mi...@kernel.org
CommitDate: Wed, 4 Feb 2015 07:42
On Tue, Jan 06, 2015 at 02:34:35PM -0800, Andi Kleen wrote:
From: Andi Kleen a...@linux.intel.com
There was another report of a boot failure with a #GP fault in the
uncore SBOX initialization. The earlier work around was not enough
for this system.
The boot was failing while trying to
On Fri, Dec 05, 2014 at 01:58:01PM +0800, jun.zh...@intel.com wrote:
From: zhang jun jun.zh...@intel.com
find_idlest_cpu return -1 is not reasonable, set default value to this_cpu.
This fails to explain why.
Signed-off-by: zhang jun jun.zh...@intel.com
Signed-off-by: Chuansheng Liu
@vger.kernel.org.
thanks,
greg k-h
-- original commit in Linus's tree --
From 9c2b9d30e28559a78c9e431cdd7f2c6bf5a9ee67 Mon Sep 17 00:00:00 2001
From: Peter Zijlstra pet...@infradead.org
Date: Mon, 29 Sep 2014 12:12:01 +0200
Subject: [PATCH] perf: Fix perf bug
in sched_setaffinity() under RCU read lock
Probability of use-after-free isn't zero in this place.
Signed-off-by: Kirill Tkhai ktk...@parallels.com
Signed-off-by: Peter Zijlstra (Intel) pet...@infradead.org
Cc: stable@vger.kernel.org # v3.14+
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc
On Wed, Nov 05, 2014 at 04:47:14PM +0100, Maxime Coquelin wrote:
On 11/05/2014 12:10 PM, Rasmus Villemoes wrote:
On Tue, Nov 04 2014, Maxime COQUELIN maxime.coque...@st.com wrote:
-#define GENMASK(h, l)		(((U32_C(1) << ((h) - (l) + 1)) - 1) << (l))
-#define GENMASK_ULL(h, l)
On Mon, Nov 03, 2014 at 06:39:58PM +0100, Maxime COQUELIN wrote:
On some 32 bits architectures, including x86, GENMASK(31, 0) returns 0
instead of the expected ~0UL.
This is the same on some 64 bits architectures with GENMASK_ULL(63, 0).
This is due to an overflow in the shift operand, 1
On Tue, Nov 04, 2014 at 11:03:57AM +0100, Maxime COQUELIN wrote:
-#define GENMASK(h, l)		(((U32_C(1) << ((h) - (l) + 1)) - 1) << (l))
-#define GENMASK_ULL(h, l)	(((U64_C(1) << ((h) - (l) + 1)) - 1) << (l))
+#define GENMASK(h, l) ((~0UL >> (BITS_PER_LONG - (h - l + 1))) << l)
1 << 32 for GENMASK,
1 << 64 for GENMASK_ULL.
Fixes: 10ef6b0dffe404bcc54e94cb2ca1a5b18445a66b
Cc: stable@vger.kernel.org #v3.13+
Reported-by: Eric Paire eric.pa...@st.com
Suggested-by: Peter Zijlstra pet...@infradead.org
Signed-off-by: Maxime Coquelin maxime.coque...@st.com
There doesn't appear
Commit-ID: 9c2b9d30e28559a78c9e431cdd7f2c6bf5a9ee67
Gitweb: http://git.kernel.org/tip/9c2b9d30e28559a78c9e431cdd7f2c6bf5a9ee67
Author: Peter Zijlstra pet...@infradead.org
AuthorDate: Mon, 29 Sep 2014 12:12:01 +0200
Committer: Ingo Molnar mi...@kernel.org
CommitDate: Fri, 3 Oct 2014 05:41
an obsolete task pointer when
retrying, so no one actually would deactive that event in this situation.
Fix it directly by reloading the task pointer in perf_remove_from_context().
This should cure the above soft lockup.
Cc: stable@vger.kernel.org
Cc: Peter Zijlstra a.p.zijls...@chello.nl
Cc: Paul
On Thu, Aug 28, 2014 at 04:27:35PM -0700, Cong Wang wrote:
From: Cong Wang cw...@twopensource.com
We saw a kernel soft lockup in perf_remove_from_context(),
it looks like the `perf` process, when exiting, could not go
out of the retry loop. Meanwhile, the target process was forking
a child.
of 1)\n",
		param.sched_priority);
	else
		printf("priority setting fine\n");
}
Cc: Peter Zijlstra pet...@infradead.org
Cc: Ingo Molnar mi...@kernel.org
Cc: Thomas Gleixner t...@linutronix.de
Cc: stable@vger.kernel.org # 3.14+
Fixes: 7479f3c9cf67 sched: Move
On Sun, Jul 13, 2014 at 05:43:13PM -0400, Sasha Levin wrote:
On 07/11/2014 11:59 AM, Peter Zijlstra wrote:
I agree with you that The call trace is very clear on it that its not,
but
when you have 500 call traces you really want something better than
going
through it one call
On Thu, Jul 10, 2014 at 03:02:29PM -0400, Sasha Levin wrote:
What if we move lockdep's acquisition point to after it actually got the
lock?
NAK, you want to do deadlock detection _before_ you're stuck in a
deadlock.
We'd miss deadlocks, but we don't care about them right now. Anyways, doesn't
On Fri, Jul 11, 2014 at 10:33:15AM +0200, Vlastimil Babka wrote:
Quoting Hugh from previous mail in this thread:
[ 363.600969] INFO: task trinity-c327:9203 blocked for more than 120
seconds.
[ 363.605359] Not tainted
3.16.0-rc4-next-20140708-sasha-00022-g94c7290-dirty #772
On Fri, Jul 11, 2014 at 07:55:50AM -0700, Hugh Dickins wrote:
On Fri, 11 Jul 2014, Sasha Levin wrote:
There's no easy way to see whether a given task is actually holding a lock
or
is just blocking on it without going through all those tasks one by one and
looking at their trace.
On Sat, Jun 14, 2014 at 03:00:09PM +0200, Mateusz Guzik wrote:
proc_sched_show_task does:
if (nr_switches)
do_div(avg_atom, nr_switches);
nr_switches is unsigned long and do_div truncates it to 32 bits, which
means it can test non-zero on e.g. x86-64 and be truncated to zero for
Commit-ID: 3896c329df8092661dac80f55a8c3f60136fd61a
Gitweb: http://git.kernel.org/tip/3896c329df8092661dac80f55a8c3f60136fd61a
Author: Peter Zijlstra pet...@infradead.org
AuthorDate: Tue, 24 Jun 2014 14:48:19 +0200
Committer: Ingo Molnar mi...@kernel.org
CommitDate: Wed, 2 Jul 2014 08:33
freed anon_vma->root to check rwsem.
This patch puts freeing of child anon_vma before freeing of anon_vma->root.
Cc: stable@vger.kernel.org # v3.0+
Acked-by: Peter Zijlstra pet...@infradead.org
Signed-off-by: Andrey Ryabinin a.ryabi...@samsung.com
---
Changes since v1:
- just made it more
Commit-ID: ce5f7f8200ca2504f6f290044393d73ca314965a
Gitweb: http://git.kernel.org/tip/ce5f7f8200ca2504f6f290044393d73ca314965a
Author: Peter Zijlstra pet...@infradead.org
AuthorDate: Mon, 12 May 2014 22:50:34 +0200
Committer: Ingo Molnar mi...@kernel.org
CommitDate: Thu, 22 May 2014 10
Commit-ID: dbdb22754fde671dc93d2fae06f8be113d47f2fb
Gitweb: http://git.kernel.org/tip/dbdb22754fde671dc93d2fae06f8be113d47f2fb
Author: Peter Zijlstra pet...@infradead.org
AuthorDate: Fri, 9 May 2014 10:49:03 +0200
Committer: Ingo Molnar mi...@kernel.org
CommitDate: Thu, 22 May 2014 10:21
he got this race when dl_bandwidth_enabled()
was not set.
Other thing, pointed by Peter Zijlstra:
Now I suppose the problem can still actually happen when
you change the root domain and trigger a effective affinity
change that way.
To fix that we do the same as made
On Tue, May 20, 2014 at 09:08:53AM +0400, Kirill Tkhai wrote:
20.05.2014, 04:00, Peter Zijlstra pet...@infradead.org:
On Mon, May 19, 2014 at 11:31:19PM +0400, Kirill Tkhai wrote:
@@ -513,9 +513,17 @@ static enum hrtimer_restart dl_task_timer(struct
hrtimer *timer
On Mon, May 19, 2014 at 11:31:19PM +0400, Kirill Tkhai wrote:
@@ -513,9 +513,17 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer
*timer)
struct sched_dl_entity,
dl_timer);
/preempt.h include (from linux/preempt.h) it should be safe.
Obviously its not actually been tested (as per the tags) because the
patch is different.
---
Subject: x86,preempt: Fix preemption for i386
From: Peter Zijlstra pet...@infradead.org
Date: Wed, 9 Apr 2014 16:24:47 +0200
Many people reported
On Wed, Apr 23, 2014 at 04:40:18PM +0900, Hidetoshi Seto wrote:
(2014/04/23 4:45), Peter Zijlstra wrote:
On Thu, Apr 17, 2014 at 06:41:41PM +0900, Hidetoshi Seto wrote:
[TARGET OF THIS PATCH]:
Complete rework for iowait accounting implies that some user
interfaces might be replaced
() doesn't make any kind of sense.
iowait isn't per cpu since effectively tasks that aren't
running aren't assigned a cpu (as Oleg already pointed out).
-- Peter Zijlstra
Now some kernel folks realized that accounting iowait as per-cpu
does not make sense in SMP world. When we were
On Wed, Apr 16, 2014 at 03:33:06PM +0900, Hidetoshi Seto wrote:
[3] : new tricks
To use seqcount, observers must be readers and never be writers.
It means that:
- Observed cpu's time stats are fixed at idle entry, and
unchanged while sleeping (otherwise results of readers will
On Thu, Apr 17, 2014 at 12:05:19PM +0200, Peter Zijlstra wrote:
+static inline void iowait_start(struct rq *rq)
+{
+	raw_spin_lock(&rq->iowait_lock);
+	rq->nr_iowait++;
+	raw_spin_unlock(&rq->iowait_lock);
+	current->in_iowait = 1;
+}
+
+static inline void iowait_stop(struct rq
On Wed, Apr 16, 2014 at 03:33:06PM +0900, Hidetoshi Seto wrote:
I hope I can clarify my idea and thoughts in the following sentence...
[1] : should we make a change on a /proc/stat field semantic?
As Frederic stated in previous mail:
quote
So what we can do for example is to account
On Thu, Apr 10, 2014 at 06:11:03PM +0900, Hidetoshi Seto wrote:
-	if (ts->idle_active) {
-		delta = ktime_sub(now, ts->idle_entrytime);
-		if (nr_iowait_cpu(cpu) > 0)
-			ts->iowait_sleeptime = ktime_add(ts->iowait_sleeptime, delta);
-		else
On Tue, Apr 15, 2014 at 10:48:54AM +0200, Peter Zijlstra wrote:
On Thu, Apr 10, 2014 at 06:11:03PM +0900, Hidetoshi Seto wrote:
-	if (ts->idle_active) {
-		delta = ktime_sub(now, ts->idle_entrytime);
-		if (nr_iowait_cpu(cpu) > 0)
-			ts->iowait_sleeptime
() doesn't make any kind of sense.
iowait isn't per cpu since effectively tasks that aren't
running aren't assigned a cpu (as Oleg already pointed out).
-- Peter Zijlstra
Now some kernel folks realized that accounting iowait as per-cpu
does not make sense in SMP world. When we were
On Thu, Apr 10, 2014 at 06:13:54PM +0900, Hidetoshi Seto wrote:
[WHAT THIS PATCH PROPOSED]:
To fix problem 1, this patch adds seqcount for NO_HZ idle
accounting to avoid possible races between reader/writer.
And to cope with problem 2, I introduced delayed iowait
accounting to get
Commit-ID: 26e61e8939b1fe8729572dabe9a9e97d930dd4f6
Gitweb: http://git.kernel.org/tip/26e61e8939b1fe8729572dabe9a9e97d930dd4f6
Author: Peter Zijlstra pet...@infradead.org
AuthorDate: Fri, 21 Feb 2014 16:03:12 +0100
Committer: Ingo Molnar mi...@kernel.org
CommitDate: Thu, 27 Feb 2014 12
Commit-ID: e3703f8cdfcf39c25c4338c3ad8e68891cca3731
Gitweb: http://git.kernel.org/tip/e3703f8cdfcf39c25c4338c3ad8e68891cca3731
Author: Peter Zijlstra pet...@infradead.org
AuthorDate: Mon, 24 Feb 2014 12:06:12 +0100
Committer: Ingo Molnar mi...@kernel.org
CommitDate: Thu, 27 Feb 2014 12
On Tue, Feb 04, 2014 at 12:29:12PM +, Will Deacon wrote:
@@ -112,17 +114,20 @@ static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
 	unsigned long tmp;
 	int oldval;
 
+	smp_mb();
+
 	asm volatile("// atomic_cmpxchg\n"
-"1:	ldaxr	%w1, %2\n"
+"1:	ldxr
On Wed, Jan 15, 2014 at 07:03:13PM +0100, Robert Richter wrote:
On 15.01.14 15:57:29, Robert Richter wrote:
@@ -816,6 +817,18 @@ static int force_ibs_eilvt_setup(void)
return ret;
}
+void ibs_eilvt_setup(void)
Grrr, this is not static. Could this be changed when the patch is
On Thu, Dec 19, 2013 at 08:13:21AM -0800, H. Peter Anvin wrote:
How does this look? Completely untested, of course.
I do wonder if we need more memory barriers, though.
An alternative would be to move everything into mwait_idle_with_hints().
-hpa
diff --git
On Thu, Dec 19, 2013 at 06:07:41PM +0100, Ingo Molnar wrote:
* H. Peter Anvin h...@zytor.com wrote:
On 12/19/2013 08:21 AM, Peter Zijlstra wrote:
What's that mb for?
It already exists in mwait_idle_with_hints(); I just moved it into
this common function. It is a bit odd
On Thu, Dec 19, 2013 at 06:25:35PM +0100, Peter Zijlstra wrote:
That said, I would find it very strange indeed if a CLFLUSH doesn't also
flush the store buffer.
OK, it explicitly states it does not do that and you indeed need an
mfence before the clflush.
On Thu, Dec 19, 2013 at 06:25:35PM +0100, Peter Zijlstra wrote:
On Thu, Dec 19, 2013 at 06:07:41PM +0100, Ingo Molnar wrote:
Likewise, having a barrier before the MONITOR looks sensible as well.
I again have to disagree, one would expect monitor to flush all that is
required to start
On Thu, Dec 19, 2013 at 11:22:01AM -0800, H. Peter Anvin wrote:
On 12/19/2013 10:19 AM, Ingo Molnar wrote:
( It would also be nice to know whether MONITOR loads that cacheline
into the CPU's cache, and in what state it loads it. )
I would assume that is implementation-dependent.
On Tue, Dec 17, 2013 at 04:02:58PM +0400, Kirill Tkhai wrote:
13.12.2013, 19:42, Peter Zijlstra pet...@infradead.org:
On Wed, Nov 27, 2013 at 07:59:13PM +0400, Kirill Tkhai wrote:
This patch touches RT group scheduling case.
Functions inc_rt_prio_smp() and dec_rt_prio_smp() change
On Wed, Nov 27, 2013 at 07:59:13PM +0400, Kirill Tkhai wrote:
This patch touches RT group scheduling case.
Functions inc_rt_prio_smp() and dec_rt_prio_smp() change (global) rq's
priority,
while rt_rq passed to them may be not the top-level rt_rq. This is wrong,
because
changing of
On Tue, Nov 26, 2013 at 01:07:25AM +, Ma, Xindong wrote:
I've already aware that they've protected by spinlock, this is why I adding a
memory barrier to fix it.
That doesn't make sense.. the spinlocks should provide the required
serialization, there's nothing to fix.
I reproduced this
On Tue, Nov 26, 2013 at 01:07:25AM +, Ma, Xindong wrote:
[ 1038.694701] putmetho-11202 1...1 1035007289001: futex_wait: LEON, wait
==, addr:41300384, pid:11202
[ 1038.694716] putmetho-11202 1...1 1035007308860: futex_wait_queue_me:
LEON, q->task = 11202
[ 1038.694731] SharedPr-11272
On Mon, Nov 25, 2013 at 01:15:17PM +, Ma, Xindong wrote:
We encountered following panic several times:
[ 74.671982] BUG: unable to handle kernel NULL pointer dereference at
0008
[ 74.672101] IP: [<c129bb27>] wake_futex+0x47/0x80
[ 74.674144] [<c129bc29>] futex_wake+0xc9/0x110
[
On Tue, May 07, 2013 at 12:43:04AM +0200, Stephane Eranian wrote:
But that implies that you'd know that on Intel precise mode uses PEBS
and that PEBS
does not take cmask events. That seems to contradict the philosophy of
perf_events
where the kernel does the work for you.
This is basically
On Tue, May 07, 2013 at 08:48:05AM +0200, Ingo Molnar wrote:
Also, this code only runs when the event is set up, so a bit of sanity
checking can only help, right?
Nah, its all very circumspect. In fact; while what Andi states is 'true':
documentation in the Intel SDM 18.6.1.1 states:
On Mon, May 06, 2013 at 07:44:19PM +0200, Stephane Eranian wrote:
On Thu, May 2, 2013 at 9:37 AM, Peter Zijlstra pet...@infradead.org wrote:
On Wed, Apr 24, 2013 at 04:04:54PM -0700, Andi Kleen wrote:
From: Andi Kleen a...@linux.intel.com
The PEBS documentation in the Intel SDM 18.6.1.1
On Wed, Apr 24, 2013 at 04:04:54PM -0700, Andi Kleen wrote:
From: Andi Kleen a...@linux.intel.com
The PEBS documentation in the Intel SDM 18.6.1.1 states:
PEBS events are only valid when the following fields of IA32_PERFEVTSELx are
all
zero: AnyThread, Edge, Invert, CMask.
Since
On Thu, Apr 25, 2013 at 07:42:11PM +0200, Andi Kleen wrote:
There's the non cachable region tracking. But there's no guarantee a
MMIO has to be in there, driver may still rely just on MTRRs. Also
there may be MMIOs the kernel doesn't know about which just happen
to be somewhere in the direct
On Wed, Apr 24, 2013 at 04:04:53PM -0700, Andi Kleen wrote:
Possible options:
I) Disable FAR calls for ANY_CALL/RETURNS.
This just means syscalls are not logged
as calls. This also lowers the overhead of call logging.
This changes semantics slightly.
This is reasonable on Sandy Bridge and
On Thu, Apr 25, 2013 at 06:41:00PM +0200, Andi Kleen wrote:
So why not do the same as we do for userspace? Copy MAX_INSN_SIZE bytes
and trap -EFAULT.
Read the whole description, then you'll know why that is insecure.
You didn't actually explicitly mention it; you just said unconditional
On Thu, Apr 25, 2013 at 07:00:37PM +0200, Andi Kleen wrote:
Traping the read deals with the first. The second shouldn't be a problem
since
we generally only allow kernel info for CAP_ADMIN; if we don't already for
LBR
that needs to be fixed separately.
Where is that check? I don't
On Thu, 2012-10-04 at 15:27 -0700, Greg Kroah-Hartman wrote:
I'm puzzled as well. Any ideas if I should do anything here or not?
So I think the current v3.5.5 code is fine. I'm just not smart enough to
figure out how 3.6 got fuzzed, this git thing is confusing as hell.
On Fri, 2012-10-05 at 10:10 -0700, Jonathan Nieder wrote:
Peter Zijlstra wrote:
On Thu, 2012-10-04 at 15:27 -0700, Greg Kroah-Hartman wrote:
I'm puzzled as well. Any ideas if I should do anything here or not?
So I think the current v3.5.5 code is fine.
Now I'm puzzled. You wrote
On Thu, 2012-10-04 at 10:46 -0700, Greg Kroah-Hartman wrote:
On Thu, Oct 04, 2012 at 12:11:01PM +0800, Huacai Chen wrote:
Hi, Greg
I found that Linux-3.5.5 accepted this commit "sched: Add missing call
to calc_load_exit_idle()" but I think this isn't needed. Because
5167e8d5417b
On Fri, 2012-08-03 at 15:29 +0200, Richard Weinberger wrote:
get_robust_list has at least two valid use cases.
1. checkpoint/restore in userspace
2. post mortem analysis
Shouldn't this then also be added as a comment somewhere near the
implementation to avoid a repeat of this deprecate /
On Tue, 2012-07-24 at 15:06 +0100, Ben Hutchings wrote:
On Mon, 2012-07-23 at 02:07 +0100, Ben Hutchings wrote:
3.2-stable review patch. If anyone has any objections, please let me know.
--
From: Peter Zijlstra a.p.zijls...@chello.nl
commit
On Tue, 2012-07-17 at 19:16 -0500, Jonathan Nieder wrote:
I'm thrilled to see this regression fix for stable@, but are we really
really sure that it won't cause new regressions?
Doug Smythies ran a ~68 hour test on it, running various synthetic loads
of various frequencies against it and
On Fri, 2012-07-20 at 12:13 -0500, Jonathan Nieder wrote:
Peter Zijlstra wrote:
On Tue, 2012-07-17 at 19:16 -0500, Jonathan Nieder wrote:
I'm thrilled to see this regression fix for stable@, but are we really
really sure that it won't cause new regressions?
Doug Smythies ran a ~68
by mentioning why we're in hardirq
context to begin with.
Acked-by: Peter Zijlstra a.p.zijls...@chello.nl
On Wed, 2012-07-11 at 14:45 +0200, Thomas Gleixner wrote:
On Wed, 11 Jul 2012, Prarit Bhargava wrote:
On 07/10/2012 06:43 PM, John Stultz wrote:
clock_was_set() cannot be called from hard interrupt context because
it calls on_each_cpu(). For fixing the widely reported leap seconds
On Wed, 2012-07-11 at 09:05 -0400, Prarit Bhargava wrote:
Both of those options seem like a lot of work for something that happens once
every 3-4 years, and may not happen ever again[1]. Based on that statement,
if
we're going to modify code I would prefer that it be as lightweight as
On Wed, 2012-07-11 at 17:18 +0200, Thomas Gleixner wrote:
Right. I think with the atomic update of the offset in the timer
interrupt we are on the safe side. The main problem of timers expiring
early forever is covered by this.
Thinking more about it.
If time goes backwards, then the IPI
On Fri, 2012-05-18 at 12:40 +0200, Robert Richter wrote:
+	case 0x031:
+		if (hweight_long(hwc->config & ARCH_PERFMON_EVENTSEL_UMASK) <= 1)
+			return &amd_f15_PMC20;
+		return &emptyconstraint;
+
c308b56b5398779cd3da0f62ab26b0453494c3d4 Mon Sep 17 00:00:00 2001
From: Peter Zijlstra pet...@infradead.org
Date: Thu, 1 Mar 2012 15:04:46 +0100
Subject: [PATCH 1/1] sched: Fix nohz load accounting -- again!
MIME-Version: 1.0
Content-Type: text/plain; charset=utf8
Content-Transfer-Encoding
, before cpu0 has set
cpu1 active, we have a deadlock.
Typically it's this CPU frequency transition that happens at
this time, so let's just not wait for it to happen, it will
happen whenever the CPU eventually comes online instead.
Cc: Peter Zijlstra pet...@infradead.org
Signed-off
On Wed, 2012-03-07 at 08:43 -0800, gre...@linuxfoundation.org wrote:
This is a note to let you know that I've just added the patch titled
CPU hotplug, cpusets, suspend: Don't touch cpusets during suspend/resume
Please forget you ever saw this one its got issues,.. Linus is about to