On Fri, Jun 23, 2017 at 10:55:47AM -0300, Arnaldo Carvalho de Melo wrote:
> Em Fri, Jun 23, 2017 at 02:48:24PM +0900, Namhyung Kim escreveu:
> > Even every module has loaded onto same addresses, some modules can be
> > changed and reloaded.
>
> Can you rephrase the above statement?
>
> You mean even if a module is reloaded at the same address it can have a
> different symtab and thus misresolve symbols if we get a sample for one
> module version and try to resolve symbols using a symtab for another
> module version?
Right, as I said in another thread, the module can be unloaded, modified
and reloaded between "perf record" and "perf report".

> > In that case it needs to access to the old module
> > in the build-id cache.
>
> Right, we need to make sure that during a 'perf record' session a module
> isn't reloaded, because at 'perf record' exit we will record that module
> build-id in a perf.data header and grab a copy (or a hardlink, if
> possible) into the ~/.debug build-id cache, then, at 'perf report' time
> we will read that build-id perf.data table and lookup in the build-id
> cache to get the matching symtab (and full elf .ko file to do things
> like annotation).

That's the desired behavior.  But "perf report" doesn't use the modules
in the build-id cache when it uses kcore (and kallsyms).

Thanks,
Namhyung


> > Cc: Adrian Hunter <adrian.hun...@intel.com>
> > Cc: Wang Nan <wangn...@huawei.com>
> > Signed-off-by: Namhyung Kim <namhy...@kernel.org>
> > ---
> >  tools/perf/util/symbol.c | 45 +++++++++++++++++++++++----------------------
> >  1 file changed, 23 insertions(+), 22 deletions(-)
> >
> > diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> > index ce79a51f25bf..fe46eb782297 100644
> > --- a/tools/perf/util/symbol.c
> > +++ b/tools/perf/util/symbol.c
> > @@ -1155,6 +1155,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
> >  	int err, fd;
> >  	char kcore_filename[PATH_MAX];
> >  	struct symbol *sym;
> > +	struct map_groups old_mg;
> >
> >  	if (!kmaps)
> >  		return -EINVAL;
> > @@ -1196,15 +1197,8 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
> >  		goto out_err;
> >  	}
> >
> > -	/* Remove old maps */
> > -	old_map = map_groups__first(kmaps, map->type);
> > -	while (old_map) {
> > -		struct map *next = map_groups__next(old_map);
> > -
> > -		if (old_map != map)
> > -			map_groups__remove(kmaps, old_map);
> > -		old_map = next;
> > -	}
> > +	old_mg = *kmaps;
> > +	kmaps->maps[map->type].entries = RB_ROOT;
> >
> >  	/* Find the kernel map using the first symbol */
> >  	sym = dso__first_symbol(dso, map->type);
> > @@ -1223,24 +1217,31 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
> >  	while (!list_empty(&md.maps)) {
> >  		new_map = list_entry(md.maps.next, struct map, node);
> >  		list_del_init(&new_map->node);
> > -		if (new_map == replacement_map) {
> > -			map->start = new_map->start;
> > -			map->end = new_map->end;
> > -			map->pgoff = new_map->pgoff;
> > -			map->map_ip = new_map->map_ip;
> > -			map->unmap_ip = new_map->unmap_ip;
> > -			/* Ensure maps are correctly ordered */
> > -			map__get(map);
> > -			map_groups__remove(kmaps, map);
> > -			map_groups__insert(kmaps, map);
> > -			map__put(map);
> > -		} else {
> > -			map_groups__insert(kmaps, new_map);
> > +
> > +		map_groups__insert(kmaps, new_map);
> > +
> > +		if (new_map != replacement_map) {
> > +			old_map = map_groups__find(&old_mg, new_map->type, new_map->start);
> > +			if (old_map && dso__loaded(old_map->dso, old_map->type)) {
> > +				new_map->pgoff = old_map->pgoff;
> > +
> > +				dso__put(new_map->dso);
> > +				new_map->dso = dso__get(old_map->dso);
> > +			}
> >  		}
> >
> >  		map__put(new_map);
> >  	}
> >
> > +	/* Remove old maps */
> > +	old_map = map_groups__first(&old_mg, map->type);
> > +	while (old_map) {
> > +		struct map *next = map_groups__next(old_map);
> > +
> > +		map_groups__remove(&old_mg, old_map);
> > +		old_map = next;
> > +	}
> > +
> >  	/*
> >  	 * Set the data type and long name so that kcore can be read via
> >  	 * dso__data_read_addr().
> > --
> > 2.13.1