> Date: Fri, 1 Nov 2019 14:13:26 +0100
> From: Martin Pieuchot <m...@openbsd.org>
> 
> On 01/11/19(Fri) 13:12, Mark Kettenis wrote:
> > > From: "Ted Unangst" <t...@tedunangst.com>
> > > Date: Fri, 01 Nov 2019 00:18:35 -0400
> > > 
> > > Theo de Raadt wrote:
> > > > In uvm_map_inentry_fix(), two variables in the map are now being read
> > > > without a per-map read lock; this was previously protected by the
> > > > kernel lock:
> > > > 
> > > >         if (addr < map->min_offset || addr >= map->max_offset)
> > > >                 return (FALSE);
> > > > 
> > > > When I wrote this I was told to use either KERNEL_LOCK() or
> > > > vm_map_lock_read(), which is vm_map_lock_read_ln().  If
> > > > KERNEL_LOCK() is no longer good, should vm_map_lock_read() be
> > > > moved upwards, or are you confident those offsets are safe to
> > > > read independently without any locking?
> > > 
> > > Indeed, another thread can expand the map, so that read should be
> > > done under the read lock.
> > 
> > That can't happen.  The map limits are only touched by uvm_map_setup()
> > and uvmspace_exec().  The former is when the map gets created and
> > isn't known to other threads yet.  The latter is called from exec(2)
> > and the process is guaranteed to be single-threaded at that point.
> 
> The diff below starts documenting which locking primitives apply to
> the members of "struct uvm_map", ok?

That matches my understanding of the code.

ok kettenis@
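
To spell out why the unlocked read is fine, here is a small user-space
sketch (not the kernel code; all names are made up) of the rule the
annotations below encode: fields written only while the map is
effectively private, at creation or during the single-threaded window
of exec(2), can be read by any thread without the lock, while fields
that change during the map's lifetime cannot.

/*
 * Illustrative only.  toy_map models the split between [I] members,
 * which are set once before the structure becomes visible to other
 * threads, and [v] members, which are guarded by the rwlock.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct toy_map {
	unsigned long		min_offset;	/* [I] set once at creation */
	unsigned long		max_offset;	/* [I] set once at creation */
	pthread_rwlock_t	lock;		/* guards the [v] members */
	unsigned long		sserial;	/* [v] changes at runtime */
};

/* Safe without the lock: min_offset/max_offset never change. */
bool
toy_map_in_range(struct toy_map *map, unsigned long addr)
{
	return (addr >= map->min_offset && addr < map->max_offset);
}

/* A [v] member must be read under the lock. */
unsigned long
toy_map_sserial(struct toy_map *map)
{
	unsigned long s;

	pthread_rwlock_rdlock(&map->lock);
	s = map->sserial;
	pthread_rwlock_unlock(&map->lock);
	return (s);
}

int
main(void)
{
	struct toy_map m =
	    { 0x1000, 0x7fffffff, PTHREAD_RWLOCK_INITIALIZER, 0 };

	printf("in range: %d, sserial: %lu\n",
	    toy_map_in_range(&m, 0x2000), toy_map_sserial(&m));
	return (0);
}

Build with -lpthread; the only point is the split between the two
kinds of members.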

> Index: uvm/uvm_map.h
> ===================================================================
> RCS file: /cvs/src/sys/uvm/uvm_map.h,v
> retrieving revision 1.62
> diff -u -p -r1.62 uvm_map.h
> --- uvm/uvm_map.h     14 Jun 2019 05:52:43 -0000      1.62
> +++ uvm/uvm_map.h     1 Nov 2019 13:10:49 -0000
> @@ -287,15 +287,19 @@ RBT_PROTOTYPE(uvm_map_addr, vm_map_entry
>   *
>   *
>   * read_locks and write_locks are used in lock debugging code.
> + *
> + *  Locks used to protect struct members in this file:
> + *   I       immutable after creation or exec(2)
> + *   v       `vm_map_lock' (this map `lock' or `mtx') 
>   */
>  struct vm_map {
> -     struct pmap *           pmap;           /* Physical map */
> +     struct pmap             *pmap;          /* [I] Physical map */
>       struct rwlock           lock;           /* Lock for map data */
>       struct mutex            mtx;
> -     u_long                  sserial;        /* counts stack changes */
> -     u_long                  wserial;        /* counts PROT_WRITE increases */
> +     u_long                  sserial;        /* [v] # stack changes */
> +     u_long                  wserial;        /* [v] # PROT_WRITE increases */
>  
> -     struct uvm_map_addr     addr;           /* Entry tree, by addr */
> +     struct uvm_map_addr     addr;           /* [v] Entry tree, by addr */
>  
>       vsize_t                 size;           /* virtual size */
>       int                     ref_count;      /* Reference count */
> @@ -303,16 +307,16 @@ struct vm_map {
>       struct mutex            flags_lock;     /* flags lock */
>       unsigned int            timestamp;      /* Version number */
>  
> -     vaddr_t                 min_offset;     /* First address in map. */
> -     vaddr_t                 max_offset;     /* Last address in map. */
> +     vaddr_t                 min_offset;     /* [I] First address in map. */
> +     vaddr_t                 max_offset;     /* [I] Last address in map. */
>  
>       /*
>        * Allocation overflow regions.
>        */
> -     vaddr_t                 b_start;        /* Start for brk() alloc. */
> -     vaddr_t                 b_end;          /* End for brk() alloc. */
> -     vaddr_t                 s_start;        /* Start for stack alloc. */
> -     vaddr_t                 s_end;          /* End for stack alloc. */
> +     vaddr_t                 b_start;        /* [v] Start for brk() alloc. */
> +     vaddr_t                 b_end;          /* [v] End for brk() alloc. */
> +     vaddr_t                 s_start;        /* [v] Start for stack alloc. */
> +     vaddr_t                 s_end;          /* [v] End for stack alloc. */
>  
>       /*
>        * Special address selectors.
> 
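
The [v] tag maps directly onto the vm_map_lock_read()/
vm_map_unlock_read() macros mentioned up-thread.  A minimal sketch of
a reader honoring the annotation (the helper itself is hypothetical;
only the locking pattern matters):

/* Hypothetical helper, not in the tree. */
boolean_t
map_addr_in_stack(struct vm_map *map, vaddr_t addr)
{
	boolean_t inside;

	vm_map_lock_read(map);
	/* s_start/s_end are [v], so the read lock must be held. */
	inside = (addr >= map->s_start && addr < map->s_end);
	vm_map_unlock_read(map);

	return (inside);
}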
