> On Oct 8, 2019, at 2:35 AM, Peter Zijlstra <[email protected]> wrote:
> 
> On Wed, Oct 02, 2019 at 11:43:17PM -0700, Song Liu wrote:
>> This is a rare corner case, but it does happen:
>> 
>> In perf_rotate_context(), when the first cpu flexible event fails to
>> schedule, cpu_rotate is 1, while cpu_event is NULL.
> 
> You're failing to explain how we get here in the first place.

On an Intel CPU, this can be triggered by something like:

perf stat -e ref-cycles:D,ref-cycles,cycles -C 5 -I 1000

I will add this to v2. 
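
To spell it out (paraphrasing the current code from memory, so please
treat this as a sketch rather than a verbatim excerpt): ref-cycles only
counts on a fixed counter, and ref-cycles:D pins that counter, so the
flexible ref-cycles never gets scheduled. ctx->flexible_active stays
empty even though rotation is requested:

	cpu_rotate = cpuctx->ctx.rotate_necessary;	/* 1: flexible events exist */
	...
	if (cpu_rotate)
		cpu_event = ctx_first_active(&cpuctx->ctx);
	/*
	 * flexible_active is empty because the flexible ref-cycles never
	 * got scheduled, so cpu_event stays NULL here ...
	 */
	if (cpu_event)
		rotate_ctx(&cpuctx->ctx, cpu_event);	/* ... and we never rotate */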

> 
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index 4655adbbae10..50021735f367 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -3775,6 +3775,13 @@ static void rotate_ctx(struct perf_event_context *ctx, struct perf_event *event)
>>      if (ctx->rotate_disable)
>>              return;
>> 
>> +    /* if no event specified, try to rotate the first event */
>> +    if (!event)
>> +            event = rb_entry_safe(rb_first(&ctx->flexible_groups.tree),
>> +                                  typeof(*event), group_node);
>> +    if (!event)
>> +            return;
>> +
>>      perf_event_groups_delete(&ctx->flexible_groups, event);
>>      perf_event_groups_insert(&ctx->flexible_groups, event);
>> }
> 
> I don't think that is a very nice solution; would not something simpler
> like this be sufficient? (although possibly we should rename that
> function now).

Yes, this is much cleaner. Thanks!

Let me submit v2 based on this. 
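
On the extra comments: for v2 I am thinking of something along these
lines on top of the fallback (just a draft, happy to reword):

	/* pick the first active flexible event */
	event = list_first_entry_or_null(&ctx->flexible_active,
					 struct perf_event, active_list);

	/*
	 * If there is no active flexible event (e.g. the first flexible
	 * event failed to schedule), fall back to the first event in the
	 * tree so that rotation still makes progress.
	 */
	if (!event) {
		event = rb_entry_safe(rb_first(&ctx->flexible_groups.tree),
				      typeof(*event), group_node);
	}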

Thanks,
Song

> 
> And it needs more comments.
> 
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index 4655adbbae10..772a9854eb3a 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -3782,8 +3797,17 @@ static void rotate_ctx(struct perf_event_context *ctx, struct perf_event *event)
> static inline struct perf_event *
> ctx_first_active(struct perf_event_context *ctx)
> {
> -     return list_first_entry_or_null(&ctx->flexible_active,
> -                                     struct perf_event, active_list);
> +     struct perf_event *event;
> +
> +     event = list_first_entry_or_null(&ctx->flexible_active,
> +                                      struct perf_event, active_list);
> +
> +     if (!event) {
> +             event = rb_entry_safe(rb_first(&ctx->flexible_groups.tree),
> +                                   typeof(*event), group_node);
> +     }
> +
> +     return event;
> }
> 
> static bool perf_rotate_context(struct perf_cpu_context *cpuctx)

