On Thu, Aug 01, 2013 at 02:46:58PM +0200, Jiri Olsa wrote:
> On Tue, Jul 23, 2013 at 02:31:04AM +0200, Frederic Weisbecker wrote:
> > This is going to be used by the full dynticks subsystem
> > as a finer-grained information to know when to keep and
> > when to stop the tick.
> > 
> > Original-patch-by: Peter Zijlstra <pet...@infradead.org>
> > Signed-off-by: Frederic Weisbecker <fweis...@gmail.com>
> > Cc: Jiri Olsa <jo...@redhat.com>
> > Cc: Peter Zijlstra <pet...@infradead.org>
> > Cc: Namhyung Kim <namhy...@kernel.org>
> > Cc: Ingo Molnar <mi...@kernel.org>
> > Cc: Arnaldo Carvalho de Melo <a...@redhat.com>
> > Cc: Stephane Eranian <eran...@google.com>
> > ---
> >  kernel/events/core.c |    7 +++++++
> >  1 files changed, 7 insertions(+), 0 deletions(-)
> > 
> > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > index b40c3db..f9bd39b 100644
> > --- a/kernel/events/core.c
> > +++ b/kernel/events/core.c
> > @@ -141,6 +141,7 @@ enum event_type_t {
> >  struct static_key_deferred perf_sched_events __read_mostly;
> >  static DEFINE_PER_CPU(atomic_t, perf_cgroup_events);
> >  static DEFINE_PER_CPU(atomic_t, perf_branch_stack_events);
> > +static DEFINE_PER_CPU(atomic_t, perf_freq_events);
> >  
> >  static atomic_t nr_mmap_events __read_mostly;
> >  static atomic_t nr_comm_events __read_mostly;
> > @@ -3139,6 +3140,9 @@ static void unaccount_event_cpu(struct perf_event *event, int cpu)
> >     }
> >     if (is_cgroup_event(event))
> >             atomic_dec(&per_cpu(perf_cgroup_events, cpu));
> > +
> > +   if (event->attr.freq)
> > +           atomic_dec(&per_cpu(perf_freq_events, cpu));
> >  }
> >  
> >  static void unaccount_event(struct perf_event *event)
> > @@ -6473,6 +6477,9 @@ static void account_event_cpu(struct perf_event *event, int cpu)
> >     }
> >     if (is_cgroup_event(event))
> >             atomic_inc(&per_cpu(perf_cgroup_events, cpu));
> > +
> > +   if (event->attr.freq)
> > +           atomic_inc(&per_cpu(perf_freq_events, cpu));
> 
> cpu could be -1 in here.. getting:

Ho humm, right you are. 
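
To make the failure mode concrete (an illustrative userspace snippet,
not taken from Jiri's report): any task-bound counter with attr.freq
set reaches the accounting path with cpu == -1:

#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Open a frequency-based counter that follows a task, not a CPU. */
static int open_task_freq_counter(pid_t pid)
{
        struct perf_event_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_CPU_CYCLES;
        attr.freq = 1;                  /* attr.freq set ... */
        attr.sample_freq = 4000;        /* ... sampling at ~4kHz */

        /* cpu == -1: the event migrates with the task */
        return syscall(__NR_perf_event_open, &attr, pid, -1, -1, 0);
}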

So we have:

static void account_event_cpu(struct perf_event *event, int cpu)
{
        if (event->parent)
                return;

        if (has_branch_stack(event)) {
                if (!(event->attach_state & PERF_ATTACH_TASK))
                        atomic_inc(&per_cpu(perf_branch_stack_events, cpu));
        }
        if (is_cgroup_event(event))
                atomic_inc(&per_cpu(perf_cgroup_events, cpu));

        if (event->attr.freq)
                atomic_inc(&per_cpu(perf_freq_events, cpu));
}

The freq thing is new and shiny, but we already had the other two.
Of those, cgroup events must be per-cpu, so that one should be fine;
the branch_stack thing tests ATTACH_TASK, which should also be fine,
though it leaves me wondering what we do for branch-stack events that
are attached to tasks.

But yes, the frequency thing is broken: a task-bound event gets here
with cpu == -1, and per_cpu() ends up indexed with -1.
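
One possible way to unbreak it (an untested sketch, not the final
patch; nr_freq_events is a made-up global fallback here, not an
existing counter) would be to only touch the per-cpu counter when the
event really is CPU-bound:

static atomic_t nr_freq_events __read_mostly;

static void account_event_cpu(struct perf_event *event, int cpu)
{
        if (event->parent)
                return;

        if (has_branch_stack(event)) {
                if (!(event->attach_state & PERF_ATTACH_TASK))
                        atomic_inc(&per_cpu(perf_branch_stack_events, cpu));
        }
        if (is_cgroup_event(event))
                atomic_inc(&per_cpu(perf_cgroup_events, cpu));

        /* per_cpu() must never be indexed with -1 (task-bound events) */
        if (event->attr.freq) {
                if (cpu >= 0)
                        atomic_inc(&per_cpu(perf_freq_events, cpu));
                else
                        atomic_inc(&nr_freq_events);
        }
}

with the matching atomic_dec()s in unaccount_event_cpu().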