On Tue, Aug 30, 2016 at 08:47:24AM +0200, Peter Zijlstra wrote:

> If oncpu is not valid, the sched_out that made it invalid will have
> updated the event count and we're good.
> 
> All I'll leave is an explicit comment that we've ignored the
> smp_call_function_single() return value on purpose.

Something like so..

---
 kernel/events/core.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 3f07e6cfc1b6..a35cbc382b2c 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3576,12 +3576,21 @@ static int perf_event_read(struct perf_event *event, bool group)
 
                local_cpu = get_cpu();
                cpu_to_read = find_cpu_to_read(event, local_cpu);
+
+               /*
+                * Purposely ignore the smp_call_function_single() return
+                * value.
+                *
+                * If event->oncpu isn't a valid CPU it means the event got
+                * scheduled out and that will have updated the event count.
+                *
+                * Therefore, either way, we'll have an up-to-date event count
+                * after this.
+                */
+               (void)smp_call_function_single(cpu_to_read, __perf_event_read, &data, 1);
                put_cpu();
 
-               ret = smp_call_function_single(cpu_to_read, __perf_event_read, &data, 1);
-               /* The event must have been read from an online CPU: */
-               WARN_ON_ONCE(ret);
-               ret = ret ? : data.ret;
+               ret = data.ret;
        } else if (event->state == PERF_EVENT_STATE_INACTIVE) {
                struct perf_event_context *ctx = event->ctx;
                unsigned long flags;
