On 16/08/16(Tue) 20:01, Philip Guenther wrote:
> On Mon, 15 Aug 2016, Martin Pieuchot wrote:
> > I'd like to make sure that bpf_tap(9) does not grab the KERNEL_LOCK().
> > The reason is to reduce the potential lock ordering problems within PF.
> > 
> > I'm currently using a mutex to serialize buffer changes, but since
> > bpf_wakeup() will still need the KERNEL_LOCK(), I'm using a task for
> > that.
> 
> Yes, this is a good direction to unblock the network stack work.  
> However, I suspect there's a bug below.
> 
> 
> > @@ -515,12 +519,28 @@ bpfread(dev_t dev, struct uio *uio, int 
> >  void
> >  bpf_wakeup(struct bpf_d *d)
> >  {
> > -   wakeup((caddr_t)d);
> > +   /*
> > +    * As long as csignal() and selwakeup() need to be protected
> > +    * by the KERNEL_LOCK() we have to delay the wakeup to
> > +    * another context to keep the hot path KERNEL_LOCK()-free.
> > +    */
> > +   bpf_get(d);
> > +   task_add(systq, &d->bd_wake_task);
> 
> We increment the reference count here and decrement it in bpf_wakeup_cb().
> However, if the task is already queued here in bpf_wakeup() then won't it
> get bumped again and only released once?  I suspect this should be:
>       bpf_get(d);
>       if (!task_add(systq, &d->bd_wake_task))
>               bpf_put(d);

Good catch, I'll go with this then.
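For the archives, here's a minimal sketch of the balanced pattern with
your fix folded in.  The callback body is my assumption of what
bpf_wakeup_cb() ends up doing, namely the csignal()/selwakeup() work
the comment in the diff mentions, run from the systq thread where the
KERNEL_LOCK() is still held:

	void
	bpf_wakeup(struct bpf_d *d)
	{
		/*
		 * csignal() and selwakeup() still need the KERNEL_LOCK(),
		 * so defer the wakeup to a task.  Take a reference to keep
		 * the descriptor alive until the callback has run.
		 */
		bpf_get(d);
		/*
		 * task_add(9) returns 0 if the task is already queued.  In
		 * that case the callback will run, and bpf_put(), only
		 * once, so release the extra reference right away.
		 */
		if (!task_add(systq, &d->bd_wake_task))
			bpf_put(d);
	}

	void
	bpf_wakeup_cb(void *xd)
	{
		struct bpf_d *d = xd;

		/* Runs on the systq thread, under the KERNEL_LOCK(). */
		wakeup(d);
		if (d->bd_async && d->bd_sig)
			csignal(d->bd_pgid, d->bd_sig, d->bd_siguid,
			    d->bd_sigeuid);
		selwakeup(&d->bd_sel);
		bpf_put(d);
	}

The key point is that bpf_get()/bpf_put() stay balanced whether or not
the task was already pending: the callback runs once and drops exactly
one reference.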
