> Date: Fri, 19 May 2023 11:35:35 +0200
> From: Claudio Jeker <cje...@diehard.n-r-g.com>

Hi Claudio,

> On Fri, May 19, 2023 at 06:10:19PM +1000, David Gwynne wrote:
> > On Fri, May 19, 2023 at 08:11:13AM +0200, Claudio Jeker wrote:
> > > On Fri, May 19, 2023 at 01:56:38PM +1000, David Gwynne wrote:
> > > > this is a tiny slice off a big pfsync diff i've been working on. when
> > > > you bring pfsync down i need it to wait until all the work it's been
> > > > doing in the network stack has finished, which means i need a barrier
> > > > for all the network taskqs. that's what this implements.
> > > > 
> > > > a barrier per taskq would mean iterating over the taskqs and waiting for
> > > > a barrier on each one. by using a refcnt and shoving a task onto each of
> > > > them in one go, i only have to wait for the slowest one, not all
> > > > of them in series.
> > > > 
> > > > ok?
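
(For contrast, the serial alternative David describes above would look
roughly like this, assuming the existing taskq_barrier() and the
softnets array from if.c; each iteration waits for one taskq to drain
before moving to the next:

	unsigned int i;

	for (i = 0; i < NET_TASKQ; i++)
		taskq_barrier(softnets[i].sn_taskq);

With N taskqs this pays up to N barrier latencies back to back, which
is what the parallel refcnt approach in the diff avoids.)
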
> > > 
> > > Not sure if it matters here, but wouldn't it be even better to insert
> > > this barrier at the head of the task queue? At least I think you just
> > > want the current task run to finish.
> > 
> > that suggestion is a bit triggering for me because:
> > 
> > https://cvsweb.openbsd.org/src/sys/kern/kern_task.c#rev1.29
> 
> Oh my. What a nightmare. One could implement flush_workqueue() via
> task_add() if we ever want to make taskq_barrier faster.
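
(For illustration, here is how that could look for a single taskq: a
hypothetical taskq_flush() built from task_add() and the same refcnt
trick the diff below uses, without touching the barrier machinery in
kern_task.c. A sketch only, not a tested implementation; the function
name and the wmesg are made up:

	void
	taskq_flush(struct taskq *tq)
	{
		struct task t;
		struct refcnt r = REFCNT_INITIALIZER();

		/* hold a ref that the queued task will release */
		refcnt_take(&r);
		task_set(&t, (void (*)(void *))refcnt_rele_wake, &r);
		task_add(tq, &t);

		/* sleep until the task has run and dropped its ref */
		refcnt_finalize(&r, "tqflush");
	}
)
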
>  
> > > Now 'ifconfig pfsync0 down' is not a hot path, so it does not matter,
> > > but such barriers have a tendency to end up in unexpected places.
> > > 
> > > Running all tasks in parallel is a good compromise right now.
> > 
> > i'd still run the tasks in parallel, even if i queued the barrier work
> > at the head of the taskq.
> 
> Yes, I did not formulate that well. I think running the barrier in
> parallel is a good idea and better than what is done in some of the
> other barriers.
>  
> I got burned by smr_barrier() and the fact that this barrier can take
> quite a long time to finish. Because of that it is not trivial to use
> SMR in latency-critical code.

The way I think about barriers is that they enforce ordering.  So in
the context of a queue, it makes the most sense for a barrier to
ensure that all currently queued tasks complete before we continue.

Seems what you're really after is a way to make sure we stop running
queued tasks as quickly as possible.  If there is a need for such a
thing, it probably should have a different name.
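
For what it's worth, a hypothetical caller in pfsync's down path would
rely on exactly that ordering (only net_tq_barriers() from the diff
below is real; the surrounding steps and the wmesg are assumptions):

	/* pfsync has been unhooked, so no new work can be queued */
	net_tq_barriers("pfsyncbar");
	/*
	 * all net taskq work queued before this point has now run,
	 * so it is safe to tear down the pfsync state
	 */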


> > > OK claudio@
> > > 
> > > > Index: if.c
> > > > ===================================================================
> > > > RCS file: /cvs/src/sys/net/if.c,v
> > > > retrieving revision 1.696
> > > > diff -u -p -r1.696 if.c
> > > > --- if.c        14 May 2023 01:46:53 -0000      1.696
> > > > +++ if.c        19 May 2023 03:50:10 -0000
> > > > @@ -3481,3 +3481,19 @@ net_tq(unsigned int ifindex)
> > > >  
> > > >         return (sn->sn_taskq);
> > > >  }
> > > > +
> > > > +void
> > > > +net_tq_barriers(const char *wmesg)
> > > > +{
> > > > +       struct task barriers[NET_TASKQ];
> > > > +       struct refcnt r = REFCNT_INITIALIZER();
> > > > +       int i;
> > > > +
> > > > +       for (i = 0; i < nitems(barriers); i++) {
> > > > +               task_set(&barriers[i], (void (*)(void *))refcnt_rele_wake, &r);
> > > > +               refcnt_take(&r);
> > > > +               task_add(softnets[i].sn_taskq, &barriers[i]);
> > > > +       }
> > > > + 
> > > > +       refcnt_finalize(&r, wmesg);
> > > > +}
> > > > Index: if.h
> > > > ===================================================================
> > > > RCS file: /cvs/src/sys/net/if.h,v
> > > > retrieving revision 1.212
> > > > diff -u -p -r1.212 if.h
> > > > --- if.h        15 May 2023 16:34:56 -0000      1.212
> > > > +++ if.h        19 May 2023 03:50:10 -0000
> > > > @@ -560,6 +560,7 @@ int if_congested(void);
> > > >  __dead void    unhandled_af(int);
> > > >  int    if_setlladdr(struct ifnet *, const uint8_t *);
> > > >  struct taskq * net_tq(unsigned int);
> > > > +void   net_tq_barriers(const char *);
> > > >  
> > > >  #endif /* _KERNEL */
> > > >  
> > > > 
> > > 
> > > -- 
> > > :wq Claudio
> > 
> 
> -- 
> :wq Claudio
> 
> 
