William Cohen <[email protected]> writes:
>
> Making user-space set up performance events for each cpu certainly
> simplifies the kernel code for system-wide monitoring. The cgroup
> support is essentially like system-wide monitoring with additional
> filtering on the cgroup and things get more complicated using the perf
> cgroup support when the cgroups are not pinned to a particular
> processor, O(cgroups*cpus) opens and reads.  If the cgroups is scaled
> up at the same rate as cpus, this would be O(cpus^2).  I am wondering

Using O() notation here is misleading because a perf event is not
an algorithmic step; it's just a data structure in memory, associated
with a file descriptor.  The number of events active at any one time
is always limited by the number of hardware counters in the CPU
(ignoring software events here), which is comparably small.

The memory usage is not a significant problem; it is dwarfed by other
per-CPU data structures.  Usually the main problem people run into is
running out of file descriptors, because most systems still run with a
ulimit -n default of 1024, which is easy to reach with even a small
number of event groups on a system with a moderate number of CPUs.
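To make that concrete, here is a rough back-of-the-envelope sketch
(the group/event counts are made-up illustration numbers, not from any
particular tool): perf opens one fd per event per CPU, so the total
multiplies quickly.

```shell
# Hypothetical workload: 4 event groups of 8 events each,
# opened per-CPU on a 48-CPU machine.
GROUPS=4 EVENTS=8 CPUS=48
echo "fds needed: $(( GROUPS * EVENTS * CPUS ))"   # 1536, past the 1024 default

# Compare against the current soft fd limit of this shell:
ulimit -Sn
```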

However ulimit -n is easily fixed: just increase it.  Arguably the
distribution defaults should be increased, too.
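For reference, a minimal way to do that for the current shell is to
raise the soft limit up to the hard limit (going beyond the hard limit
requires root, or a raised nofile entry in /etc/security/limits.conf):

```shell
# Raise the soft fd limit to the hard limit for this shell session.
# Child processes (e.g. perf) inherit the new limit.
ulimit -Sn "$(ulimit -Hn)"

# Verify: soft and hard limits now match.
ulimit -Sn
ulimit -Hn
```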

-Andi
-- 
[email protected] -- Speaking for myself only