[ Cc'ing Masami as he maintains uprobes (we need to add uprobes to
  the MAINTAINERS file) ]

-- Steve

On Wed, 16 Jan 2019 13:20:27 +0200
Elena Reshetova <elena.reshet...@intel.com> wrote:

> atomic_t variables are currently used to implement reference
> counters with the following properties:
>  - the counter is initialized to 1 using atomic_set()
>  - a resource is freed when the counter reaches zero
>  - once the counter reaches zero, further
>    increments aren't allowed
>  - the counter scheme uses basic atomic operations
>    (set, inc, inc_not_zero, dec_and_test, etc.)
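
For readers less familiar with the pattern, here is a minimal sketch of the
lifecycle described in the list above, using the generic refcount_t API on a
made-up object (my_obj, my_obj_alloc(), my_obj_get() and my_obj_put() are
hypothetical names for illustration only, not part of this patch):

  #include <linux/refcount.h>
  #include <linux/slab.h>

  struct my_obj {
          refcount_t ref;
          /* ... payload ... */
  };

  static struct my_obj *my_obj_alloc(void)
  {
          struct my_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

          if (obj)
                  refcount_set(&obj->ref, 1);     /* counter starts at 1 */
          return obj;
  }

  static struct my_obj *my_obj_get(struct my_obj *obj)
  {
          refcount_inc(&obj->ref);        /* checked: saturates rather than wrapping */
          return obj;
  }

  static void my_obj_put(struct my_obj *obj)
  {
          if (refcount_dec_and_test(&obj->ref))   /* true only for the last reference */
                  kfree(obj);                     /* resource freed once counter hits zero */
  }
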
> 
> Such atomic variables should be converted to a newly provided
> refcount_t type and API that prevents accidental counter overflows
> and underflows. This is important since overflows and underflows
> can lead to use-after-free situations and be exploitable.
> 
> The variable uprobe.ref is used as a pure reference counter.
> Convert it to refcount_t and fix up the operations.
> 
> **Important note for maintainers:
> 
> Some functions from the refcount_t API defined in lib/refcount.c
> have different memory ordering guarantees than their atomic
> counterparts.
> The full comparison can be seen in
> https://lkml.org/lkml/2017/11/15/57 and will hopefully soon be
> merged into the documentation tree.
> Normally the differences should not matter since refcount_t provides
> enough guarantees to satisfy the refcounting use cases, but in
> some rare cases they might.
> Please double check that you don't rely on any undocumented
> memory ordering guarantees for this variable.
> 
> For uprobe.ref it might make a difference
> in the following places:
>  - put_uprobe(): the decrement in refcount_dec_and_test() only
>    provides RELEASE ordering and a control dependency on success,
>    vs. the fully ordered atomic counterpart
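
To make that difference concrete, below is a purely illustrative sketch of
what put_uprobe() would look like if its tear-down path did depend on the
full ordering that atomic_dec_and_test() used to provide. Nothing in this
patch suggests uprobes actually needs this, and smp_mb__after_atomic() is
just one way such ordering could be restored:

  static void put_uprobe(struct uprobe *uprobe)
  {
          if (refcount_dec_and_test(&uprobe->ref)) {
                  /*
                   * refcount_dec_and_test() guarantees RELEASE ordering plus
                   * a control dependency on the success path only.  If the
                   * code below relied on the stronger ordering of the old
                   * atomic_t variant, an explicit barrier would be needed:
                   */
                  smp_mb__after_atomic();

                  /* ... existing tear-down, freeing the uprobe ... */
          }
  }
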
> 
> Suggested-by: Kees Cook <keesc...@chromium.org>
> Reviewed-by: David Windsor <dwind...@gmail.com>
> Reviewed-by: Hans Liljestrand <ishkam...@gmail.com>
> Signed-off-by: Elena Reshetova <elena.reshet...@intel.com>
> ---
>  kernel/events/uprobes.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> index ad415f7..750aece 100644
> --- a/kernel/events/uprobes.c
> +++ b/kernel/events/uprobes.c
> @@ -66,7 +66,7 @@ static struct percpu_rw_semaphore dup_mmap_sem;
>  
>  struct uprobe {
>       struct rb_node          rb_node;        /* node in the rb tree */
> -     atomic_t                ref;
> +     refcount_t              ref;
>       struct rw_semaphore     register_rwsem;
>       struct rw_semaphore     consumer_rwsem;
>       struct list_head        pending_list;
> @@ -561,13 +561,13 @@ set_orig_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, unsigned long v
>  
>  static struct uprobe *get_uprobe(struct uprobe *uprobe)
>  {
> -     atomic_inc(&uprobe->ref);
> +     refcount_inc(&uprobe->ref);
>       return uprobe;
>  }
>  
>  static void put_uprobe(struct uprobe *uprobe)
>  {
> -     if (atomic_dec_and_test(&uprobe->ref)) {
> +     if (refcount_dec_and_test(&uprobe->ref)) {
>               /*
>                * If application munmap(exec_vma) before uprobe_unregister()
>                * gets called, we don't get a chance to remove uprobe from
> @@ -658,7 +658,7 @@ static struct uprobe *__insert_uprobe(struct uprobe *uprobe)
>       rb_link_node(&uprobe->rb_node, parent, p);
>       rb_insert_color(&uprobe->rb_node, &uprobes_tree);
>       /* get access + creation ref */
> -     atomic_set(&uprobe->ref, 2);
> +     refcount_set(&uprobe->ref, 2);
>  
>       return u;
>  }
