On Tue, Nov 10, 2020 at 1:04 AM Xin Yin <yinxin_1...@aliyun.com> wrote:
>
> For an lru_percpu_map element update, prealloc_lru_pop() may return
> a reused element that still holds stale data. If the function is
> called from a bpf prog with "onallcpus" set to false, it may update
> an element with dirty per-cpu data.
>
> Clear the per-cpu values of the element before using it.
>
> Signed-off-by: Xin Yin <yinxin_1...@aliyun.com>

The alternative fix, commit d3bec0138bfb ("bpf: Zero-fill re-used
per-cpu map element"), was already merged.
Please double check that it fixes your test case.

> ---
>  kernel/bpf/hashtab.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
>
> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> index 728ffec52cf3..b1f781ec20b6 100644
> --- a/kernel/bpf/hashtab.c
> +++ b/kernel/bpf/hashtab.c
> @@ -709,6 +709,16 @@ static void pcpu_copy_value(struct bpf_htab *htab, void 
> __percpu *pptr,
>         }
>  }
>
> +static void pcpu_init_value(struct bpf_htab *htab, void __percpu *pptr)
> +{
> +       u32 size = round_up(htab->map.value_size, 8);
> +       int cpu;
> +
> +       for_each_possible_cpu(cpu) {
> +               memset(per_cpu_ptr(pptr, cpu), 0, size);
> +       }
> +}
> +
>  static bool fd_htab_map_needs_adjust(const struct bpf_htab *htab)
>  {
>         return htab->map.map_type == BPF_MAP_TYPE_HASH_OF_MAPS &&
> @@ -1075,6 +1085,9 @@ static int __htab_lru_percpu_map_update_elem(struct 
> bpf_map *map, void *key,
>                 pcpu_copy_value(htab, htab_elem_get_ptr(l_old, key_size),
>                                 value, onallcpus);
>         } else {
> +               if (!onallcpus)
> +                       pcpu_init_value(htab,
> +                                       htab_elem_get_ptr(l_new, key_size));
>                 pcpu_copy_value(htab, htab_elem_get_ptr(l_new, key_size),
>                                 value, onallcpus);
>                 hlist_nulls_add_head_rcu(&l_new->hash_node, head);
> --
> 2.19.5
>
