On Thu, Apr 21, 2016 at 12:48:58PM +0200, Peter Zijlstra wrote:
> On Wed, Apr 20, 2016 at 07:47:30PM -0300, Arnaldo Carvalho de Melo wrote:
> > The default remains 127, which is good for most cases, and not even hit
> > most of the time, but then for some cases, as reported by Brendan, 1024+
> > deep frames are appearing on the radar for things like groovy, ruby.

> yea gawds ;-)

> > +++ b/kernel/events/callchain.c
> > @@ -73,7 +81,7 @@ static int alloc_callchain_buffers(void)
> >  	if (!entries)
> >  		return -ENOMEM;
> >
> > -	size = sizeof(struct perf_callchain_entry) * PERF_NR_CONTEXTS;
> > +	size = perf_callchain_entry__sizeof() * PERF_NR_CONTEXTS;
> >
> >  	for_each_possible_cpu(cpu) {
> >  		entries->cpu_entries[cpu] = kmalloc_node(size, GFP_KERNEL,

> And this alloc _will_ fail if you put in a decent sized value..
>
> Should we put in a dmesg WARN if this alloc fails and
> perf_event_max_stack is 'large' ?

Unsure; it already returns -ENOMEM, see, some lines above, i.e. it had
better have error handling up this, ho-hum, call chain. I'm checking...

> > @@ -215,3 +223,25 @@ exit_put:
> >
> >  	return entry;
> >  }
> > +
> > +int perf_event_max_stack_handler(struct ctl_table *table, int write,
> > +				 void __user *buffer, size_t *lenp, loff_t *ppos)
> > +{
> > +	int new_value = sysctl_perf_event_max_stack, ret;
> > +	struct ctl_table new_table = *table;
> > +
> > +	new_table.data = &new_value;

> cute :-)

Hey, I found it in sysctl_schedstats() and sysctl_numa_balancing(), as a
way to read the value but only make it take effect if some condition holds
(nr_callchain_events == 0 in this case). Granted, it could be better, less
clever, but I'll leave that for later ;-)

> > +	ret = proc_dointvec_minmax(&new_table, write, buffer, lenp, ppos);
> > +	if (ret || !write)
> > +		return ret;
> > +
> > +	mutex_lock(&callchain_mutex);
> > +	if (atomic_read(&nr_callchain_events))
> > +		ret = -EBUSY;
> > +	else
> > +		sysctl_perf_event_max_stack = new_value;
> > +
> > +	mutex_unlock(&callchain_mutex);
> > +
> > +	return ret;
> > +}