On Sat, Oct 05, 2019 at 05:16:14PM +0800, Leo Yan wrote:
> The synthesized flow uses 'tidq->packet' for instruction samples; on the
> other hand, 'tidq->prev_packet' is used to generate the thread stack and
> the branch samples.  This results in the instruction samples running one
> packet ahead of the thread stack and branch samples ('tidq->packet' vs
> 'tidq->prev_packet').
>
> This leads to an erroneous callchain for instruction samples, as shown
> in the example below:
>
>   main 1579 100 instructions:
>     ffff000010214854 perf_event_update_userpage+0x4c ([kernel.kallsyms])
>     ffff000010214850 perf_event_update_userpage+0x48 ([kernel.kallsyms])
>     ffff000010219360 perf_swevent_add+0x88 ([kernel.kallsyms])
>     ffff0000102135f4 event_sched_in.isra.57+0xbc ([kernel.kallsyms])
>     ffff0000102137a0 group_sched_in+0x60 ([kernel.kallsyms])
>     ffff000010213b84 flexible_sched_in+0xfc ([kernel.kallsyms])
>     ffff00001020c0b4 visit_groups_merge+0x12c ([kernel.kallsyms])
>
> In the callchain log, for any two consecutive lines the upper line
> contains a child function and the line below it contains that child's
> caller, and so forth.  So the first two lines are:
>
>   perf_event_update_userpage+0x4c => the sampled instruction
>   perf_event_update_userpage+0x48 => the parent function's call site
>
> The child and the parent are the same function,
> perf_event_update_userpage(), but it is not a recursive function, so a
> sequence in which perf_event_update_userpage() calls itself should never
> happen.  The callchain error is caused by the instruction sample using a
> packet ahead of the thread stack: the thread stack is deferred to
> process the new packet and misses popping the stack when that packet is
> just a return packet.
>
> To fix this issue, simply switch to 'tidq->prev_packet' for generating
> the instruction samples; this lets the thread stack push and pop in sync
> with the instruction samples.  The callchain is then displayed correctly,
> as below:
>
>   main 1579 100 instructions:
>     ffff000010214854 perf_event_update_userpage+0x4c ([kernel.kallsyms])
>     ffff000010219360 perf_swevent_add+0x88 ([kernel.kallsyms])
>     ffff0000102135f4 event_sched_in.isra.57+0xbc ([kernel.kallsyms])
>     ffff0000102137a0 group_sched_in+0x60 ([kernel.kallsyms])
>     ffff000010213b84 flexible_sched_in+0xfc ([kernel.kallsyms])
>     ffff00001020c0b4 visit_groups_merge+0x12c ([kernel.kallsyms])
>
> Signed-off-by: Leo Yan <leo....@linaro.org>
> ---
>  tools/perf/util/cs-etm.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c
> index 56e501cd2f5f..fa969dcb45d2 100644
> --- a/tools/perf/util/cs-etm.c
> +++ b/tools/perf/util/cs-etm.c
> @@ -1419,7 +1419,7 @@ static int cs_etm__sample(struct cs_etm_queue *etmq,
>  	struct cs_etm_packet *tmp;
>  	int ret;
>  	u8 trace_chan_id = tidq->trace_chan_id;
> -	u64 instrs_executed = tidq->packet->instr_count;
> +	u64 instrs_executed = tidq->prev_packet->instr_count;
>
>  	tidq->period_instructions += instrs_executed;
>
> @@ -1450,7 +1450,7 @@ static int cs_etm__sample(struct cs_etm_queue *etmq,
>  		 */
>  		s64 offset = (instrs_executed - instrs_over - 1);
>  		u64 addr = cs_etm__instr_addr(etmq, trace_chan_id,
> -					      tidq->packet, offset);
> +					      tidq->prev_packet, offset);

I have tested this set in --per-thread mode and things are working as
advertised.  Did you see how things look in CPU-wide scenarios?

Thanks,
Mathieu

>
>  		ret = cs_etm__synth_instruction_sample(
>  			etmq, tidq, addr, etm->instructions_sample_period);
> --
> 2.17.1
>
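
To make the skew the patch removes concrete, below is a minimal,
self-contained toy model in C.  All names in it (struct packet,
thread_stack_update(), synth_sample(), the main/helper addresses) are
hypothetical and heavily simplified; it is not the real cs-etm decode
path, only an illustration of why the instruction sample must come from
the same packet the thread stack has already consumed.

/*
 * Toy model of the packet/thread-stack skew discussed above.  All names
 * are made up for illustration; this is not the real cs-etm code.
 */
#include <stdio.h>

struct packet {
	const char *where;	/* location of the sampled instruction */
	int ends_in_call;	/* packet ends with a function call     */
	int ends_in_return;	/* packet ends with a function return   */
	const char *ret_addr;	/* return address pushed on a call      */
};

static const char *stack[8];
static int depth;

/* The thread stack and branch samples are driven by the previous packet. */
static void thread_stack_update(const struct packet *p)
{
	if (p->ends_in_call)
		stack[depth++] = p->ret_addr;
	else if (p->ends_in_return && depth > 0)
		depth--;
}

static void synth_sample(const struct packet *p)
{
	printf("sample %-12s callchain:", p->where);
	for (int i = depth - 1; i >= 0; i--)
		printf(" %s", stack[i]);
	printf("\n");
}

int main(void)
{
	/* main calls helper at main+0x44, helper returns, main resumes */
	struct packet trace[] = {
		{ "main+0x40",   1, 0, "main+0x48" },	/* ends with call   */
		{ "helper+0x10", 0, 1, NULL },		/* ends with return */
		{ "main+0x4c",   0, 0, NULL },		/* plain range      */
	};
	int use_prev_packet = 1;	/* 0: pre-fix, 1: post-fix behaviour */

	for (int i = 1; i < 3; i++) {
		struct packet *cur = &trace[i], *prev = &trace[i - 1];

		/*
		 * Pre-fix: sampling 'cur' runs one packet ahead of the
		 * stack, so the sample at main+0x4c still sees the stale
		 * main+0x48 frame (the duplicated-function artefact from
		 * the commit message).  Post-fix: sampling 'prev' stays
		 * in sync with the stack update below.
		 */
		synth_sample(use_prev_packet ? prev : cur);

		thread_stack_update(prev);
	}
	return 0;
}

Running it with use_prev_packet set to 0 reproduces the stale-frame
pattern from the commit message (the sample at main+0x4c still shows
main+0x48 in its chain); with it set to 1, the sample and the
thread-stack state line up, matching the corrected callchain.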