On Mon, Nov 28, 2016 at 04:01:14PM +0100, Martin Pieuchot wrote:
> Last diff to trade the KERNEL_LOCK for a mutex in order to protect data
> accessed inside bpf_catchpacket().
> 
> A note about the multiple data structures:
> 
>   - selwakeup() is called in a thread context (task) so we rely on the
>     KERNEL_LOCK() to serialize access to kqueue(9) data.
> 
>   - the global list of descriptors ``bpf_d_list'', accessed via
>     bpfilter_lookup(), is also protected by the KERNEL_LOCK().
> 
>   - bpf_get() increments the refcount of a descriptor.  The reference
>     needs to be held around ifpromisc(), which can sleep.
> 
>   - A mutex now protects the rotating buffers and their associated
>     fields.
>  
>   - The dance around uiomove(9) is there to check that buffers aren't
>     rotated while data is copied to userland.  Setting ``bd_fbuf'' to
>     NULL should be enough to let bpf_catchpacket() drop the packet.
>     But I added ``__in_uiomove'' to be able to get a usable panic if
>     something weird happens.
> 
> Comments, oks?

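Just to make sure I'm reading the uiomove(9) dance right, below is roughly
what I understand the read path to do once the diff is in.  This is only a
sketch, not the actual code; the ``bd_mtx'' and ``bd_in_uiomove'' names are
my own shorthand for whatever the diff calls them.

	int
	bpfread_sketch(struct bpf_d *d, struct uio *uio)
	{
		caddr_t hbuf;
		int hlen, error;

		mtx_enter(&d->bd_mtx);
		while (d->bd_hbuf == NULL) {
			/* rotate buffers and/or sleep until a hold
			 * buffer is ready; elided here */
		}
		/* Rotation left bd_fbuf NULL, so bpf_catchpacket()
		 * drops packets instead of rotating under us. */
		d->bd_in_uiomove = 1;
		hbuf = d->bd_hbuf;
		hlen = d->bd_hlen;
		mtx_leave(&d->bd_mtx);

		/* uiomove(9) can sleep, so run it without the mutex. */
		error = uiomove(hbuf, hlen, uio);

		mtx_enter(&d->bd_mtx);
		KASSERT(d->bd_in_uiomove);	/* the promised usable panic */
		d->bd_fbuf = d->bd_hbuf;	/* hold buffer becomes free again */
		d->bd_hbuf = NULL;
		d->bd_hlen = 0;
		d->bd_in_uiomove = 0;
		mtx_leave(&d->bd_mtx);

		return (error);
	}

If that matches the diff, the KASSERT() is what produces the usable panic
mentioned above.
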
The thing I'm concerned about here is the call to bpf_wakeup() from
bpf_catchpacket() - this modifies the bpf_d refcount outside the kernel lock.
Elsewhere the refcount is modified while holding the kernel lock, with or
without the mutex.  It'll never free the bpf_d there, since the bpf_d list
still holds a reference, but the refcount operations aren't atomic, so the
counter could get messed up.  Is there a reason not to use atomics there?
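
Concretely I was thinking of something along the lines of the sketch below.
This is only an illustration, not a diff; the ``bd_ref'' field (assumed to
be an unsigned int) and the bpf_put() name are placeholders for whatever
the code actually uses.

	/* needs <sys/atomic.h> and <sys/malloc.h> */

	struct bpf_d *
	bpf_get(struct bpf_d *bd)
	{
		atomic_inc_int(&bd->bd_ref);	/* no lock needed for the counter */
		return (bd);
	}

	void
	bpf_put(struct bpf_d *bd)
	{
		/* atomic_dec_int_nv(9) returns the new value */
		if (atomic_dec_int_nv(&bd->bd_ref) == 0)
			free(bd, M_DEVBUF, sizeof(*bd));
	}

With that, bpf_wakeup() could safely touch the refcount from
bpf_catchpacket() without the kernel lock, and the counter would stay
consistent.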
