On 10/24/2018 12:32 PM, Arnaldo Carvalho de Melo wrote:
Em Wed, Oct 24, 2018 at 09:23:34AM -0700, Andi Kleen escreveu:
+void perf_event_munmap(void)
+{
+       struct perf_cpu_context *cpuctx;
+       unsigned long flags;
+       struct pmu *pmu;
+
+       local_irq_save(flags);
+       list_for_each_entry(cpuctx, this_cpu_ptr(&sched_cb_list), sched_cb_entry) {

It would be good to have a fast path here that checks for the list being
empty without disabling interrupts. munmap can be somewhat hot. I think
it's ok to make it slower with perf running, but we shouldn't impact it
without perf.

Right, look at how its counterpart, perf_event_mmap(), works:

void perf_event_mmap(struct vm_area_struct *vma)
{
         struct perf_mmap_event mmap_event;

         if (!atomic_read(&nr_mmap_events))
                 return;
<SNIP>
}


Thanks. I'll add the nr_mmap_events check in V2.

Thanks,
Kan
