On 16/04/19 7:01 PM, Jiri Olsa wrote:
> Maps in kcore do not cover bpf maps, so we can't just
> remove everything. Keeping all kernel maps, which are
> not covered by kcore maps.

Memory for JITed bpf programs is allocated from the same area that is used for
modules.  In the case of /proc/kcore, that entire area is mapped, so there
won't be any bpf maps that are not covered.  For copies of kcore made by
'perf buildid-cache', the same would be true for any bpf that got allocated
in between modules.

But shouldn't the bpf map supersede the kcore map for the address range that
it maps?  I guess that would mean splitting the kcore map, truncating the
first piece and inserting the bpf map in between.

> 
> Link: http://lkml.kernel.org/n/[email protected]
> Signed-off-by: Jiri Olsa <[email protected]>
> ---
>  tools/perf/util/symbol.c | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> index 5cbad55cd99d..96738a7a8c14 100644
> --- a/tools/perf/util/symbol.c
> +++ b/tools/perf/util/symbol.c
> @@ -1166,6 +1166,18 @@ static int kcore_mapfn(u64 start, u64 len, u64 pgoff, void *data)
>       return 0;
>  }
>  
> +static bool in_kcore(struct kcore_mapfn_data *md, struct map *map)
> +{
> +     struct map *iter;
> +
> +     list_for_each_entry(iter, &md->maps, node) {
> +             if ((map->start >= iter->start) && (map->start < iter->end))
> +                     return true;
> +     }
> +
> +     return false;
> +}
> +
>  static int dso__load_kcore(struct dso *dso, struct map *map,
>                          const char *kallsyms_filename)
>  {
> @@ -1222,7 +1234,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
>       while (old_map) {
>               struct map *next = map_groups__next(old_map);
>  
> -             if (old_map != map)
> +             if (old_map != map && !in_kcore(&md, old_map))
>                       map_groups__remove(kmaps, old_map);
>               old_map = next;
>       }
> 
