On Mon, Oct 26, 2020 at 11:52:21AM +0300, Alexei Budankov wrote:
> 
> On 24.10.2020 18:44, Jiri Olsa wrote:
> > On Wed, Oct 21, 2020 at 07:02:56PM +0300, Alexey Budankov wrote:
> > 
> > SNIP
> > 
> >>  
> >>    record__synthesize(rec, true);
> >> -  /* this will be recalculated during process_buildids() */
> >> -  rec->samples = 0;
> >>  
> >>    if (!err) {
> >>            if (!rec->timestamp_filename) {
> >> @@ -2680,9 +2709,12 @@ int cmd_record(int argc, const char **argv)
> >>  
> >>    }
> >>  
> >> -  if (rec->opts.kcore)
> >> +  if (rec->opts.kcore || record__threads_enabled(rec))
> >>            rec->data.is_dir = true;
> >>  
> >> +  if (record__threads_enabled(rec))
> >> +          rec->opts.affinity = PERF_AFFINITY_CPU;
> > 
> > so all the threads will pin to cpu and back before reading?
> 
> No, they will not move back. The thread mask is compared to the mmap
> mask before each read, and the thread migrates only if the masks don't
> match. This happens once, on the first mmap read, so explicit pinning
> can be avoided.

hum, is that right? the check in record__adjust_affinity
is checking the global 'rec->affinity_mask', at least I assume
it's still global ;-)

        if (rec->opts.affinity != PERF_AFFINITY_SYS &&
            !bitmap_equal(rec->affinity_mask.bits, map->affinity_mask.bits,
                          rec->affinity_mask.nbits)) {

I think this can never be equal if you have more than one map

when I check on sched_setaffinity syscalls:

  # perf trace -e syscalls:sys_enter_sched_setaffinity

while running record --threads, I see sched_setaffinity
calls all the time

jirka