* Jiri Olsa <jo...@redhat.com> wrote:

> On Tue, Oct 24, 2017 at 02:59:44PM +0200, Ingo Molnar wrote:
> > 
> > * Jiri Olsa <jo...@redhat.com> wrote:
> > 
> > > I recently made some changes to threaded record, based on
> > > Namhyung's time* API, which is needed to read/sort the data
> > > afterwards.
> > > 
> > > But I wasn't able to get any substantial and consistent reduction
> > > in LOST events, and then I got sidetracked and did not finish,
> > > but it's in here:
> > 
> > So, in the context of system-wide profiling, the way that I think
> > would work best is the following:
> > 
> >   thread #0 binds itself to CPU#0 (via sched_setaffinity) and creates a per-CPU event on CPU#0
> >   thread #1 binds itself to CPU#1 (via sched_setaffinity) and creates a per-CPU event on CPU#1
> >   thread #2 binds itself to CPU#2 (via sched_setaffinity) and creates a per-CPU event on CPU#2
> > 
> > etc.
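
To make this concrete: a worker in such a scheme might look something
like the sketch below. Error handling and the mmap ring buffer reading
are omitted, and the perf_event_attr values are only illustrative, not
what perf record actually programs:

#define _GNU_SOURCE
#include <sched.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static long sys_perf_event_open(struct perf_event_attr *attr, pid_t pid,
				int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

/* One instance per CPU, started via pthread_create(): */
static void *reader_thread(void *arg)
{
	int cpu = (int)(long)arg;
	struct perf_event_attr attr;
	cpu_set_t mask;
	int fd;

	/* Bind this thread to its CPU: */
	CPU_ZERO(&mask);
	CPU_SET(cpu, &mask);
	sched_setaffinity(0, sizeof(mask), &mask);

	/* Per-CPU, all-tasks event: pid == -1, cpu == our CPU: */
	memset(&attr, 0, sizeof(attr));
	attr.size          = sizeof(attr);
	attr.type          = PERF_TYPE_HARDWARE;
	attr.config        = PERF_COUNT_HW_CPU_CYCLES;
	attr.sample_period = 100000;
	attr.sample_type   = PERF_SAMPLE_IP | PERF_SAMPLE_TID;

	fd = sys_perf_event_open(&attr, -1, cpu, -1, 0);

	/* ... mmap fd and drain the ring buffer from here ... */
	return (void *)(long)fd;
}

That way every CPU's ring buffer is drained by a thread running on the
same CPU it is written from, which avoids the cross-CPU traffic
mentioned below.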
> > 
> > Is this how you implemented it?
> 
> in a way ;-) but I made it more generic and let record create just a
> few threads and let them share a subset of the CPUs.. so there was no
> binding
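
(IOW, something like thread 't' servicing all CPUs with
cpu % nr_threads == t? I'm only guessing at the partitioning here - a
minimal illustration, not the actual record code:

	static int cpu_to_thread(int cpu, int nr_threads)
	{
		return cpu % nr_threads;
	}

 - with each thread then polling the mmaps of the CPUs mapped to it.)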
> 
> > 
> > If the threads in the thread pool are just free-running then the
> > scheduler might not migrate them to the 'right' CPU that is
> > streaming the perf events, and there will be a lot of cross-talk
> > between CPUs.
> 
> ok, it's easy to add binding and a 1:1 thread:CPU mapping now.. I'll retry

Please Cc: me - this is a really interesting aspect of perf scalability!

Thanks,

        Ingo
