On Tue, Jan 17, 2017 at 09:38:39AM -0800, David Carrillo-Cisneros wrote:
> This is a low-hanging fruit optimization. It replaces the iteration over
> the "pmus" list in the cgroup switch path with an iteration over a new
> list that contains only cpuctxs with at least one cgroup event.
> 
> This is necessary because the number of pmus has increased over the years;
> e.g. modern x86 server systems have well above 50 pmus.
> Iterating over the full pmu list is unnecessary and can be costly in
> heavy cache contention scenarios.

While I haven't done any measurement of the overhead, this looks like a
nice rework/cleanup.

Since this only changes the management of cpu contexts, it shouldn't
adversely affect systems with heterogeneous CPUs. I've also given this a
spin on such a system, to no ill effect.
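
For anyone skimming the thread, the pattern the patch is going for can be
boiled down to something like the userspace sketch below: an intrusive
side list holding only the cpu contexts that currently have cgroup events,
which is the only thing walked on a cgroup switch. This is purely an
illustration of the idea; the names here (cpu_context, cgrp_cpuctx_list,
update_cgroup_list, cgroup_switch) are made up for the sketch and are not
the ones used by the patch:

	#include <stdbool.h>
	#include <stddef.h>
	#include <stdio.h>

	/* Minimal intrusive list, in the style of the kernel's list_head. */
	struct list_head { struct list_head *next, *prev; };

	static void list_init(struct list_head *h) { h->next = h->prev = h; }

	static void list_add(struct list_head *n, struct list_head *h)
	{
		n->next = h->next; n->prev = h;
		h->next->prev = n; h->next = n;
	}

	static void list_del(struct list_head *n)
	{
		n->prev->next = n->next; n->next->prev = n->prev;
		n->next = n->prev = n;
	}

	/* One cpu context per PMU; cgrp_entry links it into the side list. */
	struct cpu_context {
		const char *pmu_name;
		int nr_cgroup_events;
		struct list_head cgrp_entry;
	};

	/* Contexts that currently have at least one cgroup event. */
	static struct list_head cgrp_cpuctx_list;

	/* Link/unlink a context as its first/last cgroup event comes and goes. */
	static void update_cgroup_list(struct cpu_context *ctx, bool add)
	{
		if (add) {
			if (ctx->nr_cgroup_events++ == 0)
				list_add(&ctx->cgrp_entry, &cgrp_cpuctx_list);
		} else {
			if (--ctx->nr_cgroup_events == 0)
				list_del(&ctx->cgrp_entry);
		}
	}

	/* Cgroup switch: visit only contexts with cgroup events, not every PMU. */
	static void cgroup_switch(void)
	{
		struct list_head *pos;

		for (pos = cgrp_cpuctx_list.next; pos != &cgrp_cpuctx_list; pos = pos->next) {
			struct cpu_context *ctx = (struct cpu_context *)
				((char *)pos - offsetof(struct cpu_context, cgrp_entry));
			printf("switch cgroup events on %s\n", ctx->pmu_name);
		}
	}

	int main(void)
	{
		struct cpu_context ctxs[] = {
			{ "cpu" }, { "uncore_cbox_0" }, { "uncore_imc_0" }, { "breakpoint" },
		};

		list_init(&cgrp_cpuctx_list);
		update_cgroup_list(&ctxs[0], true);	/* only one PMU has a cgroup event */
		cgroup_switch();			/* touches one context, not all of them */
		update_cgroup_list(&ctxs[0], false);
		return 0;
	}

With well above 50 pmus on a modern server, each switch then only touches
the (usually small) handful of contexts that actually have cgroup events.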

I have one (very minor) comment below, but either way:

Acked-by: Mark Rutland <[email protected]>
Tested-by: Mark Rutland <[email protected]>

> @@ -889,6 +876,7 @@ list_update_cgroup_event(struct perf_event *event,
>                        struct perf_event_context *ctx, bool add)
>  {
>       struct perf_cpu_context *cpuctx;
> +     struct list_head *lentry;

It might be worth calling this cpuctx_entry, so that it's clear which
list element it refers to. I can imagine we'll add more list
manipulation in this path in future.
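
i.e. something along these lines (just the declaration; the rest of the
hunk would stay as-is):

	struct perf_cpu_context *cpuctx;
	struct list_head *cpuctx_entry;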

Thanks,
Mark.
