<snip>

> 
> Unless I misunderstand something, the usual way we deal with that is the
> worker barrier as mentioned by Neale.
> 
> API calls and CLI commands are executed under this barrier unless marked as
> mp_safe (which is off by default).
> When the worker barrier is requested by the main thread, all worker threads
> are drained and stopped. Then the critical section is executed, the barrier is
> released and workers resume.
> So, as long as the bihash delete (or any non-atomic modification of shared
> data) happens under this barrier, you do not need to worry about workers
> being active: VPP is taking care of it for you 😊
Thank you, I was not aware of this. Wouldn't this result in packet drops on
the data plane if the critical section is large? For example: searching
through the bi-hash table for a particular entry to delete.
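For concreteness, this is how I understand the pattern being described -- a
minimal sketch only, where my_shared_table_cleanup() is just a placeholder for
the critical section and the barrier calls are the ones from vlib/threads.h:

    #include <vlib/vlib.h>
    #include <vlib/threads.h>

    /* Placeholder for the critical section: any non-atomic modification of
     * data shared with the workers, e.g. walking a table to delete entries. */
    static void
    my_shared_table_cleanup (void)
    {
    }

    /* Runs on the main thread (as API/CLI handlers normally do). */
    static void
    my_control_plane_cleanup (vlib_main_t *vm)
    {
      /* Drain the workers: returns once every worker has finished its
       * current frame and is parked at the barrier. */
      vlib_worker_thread_barrier_sync (vm);

      /* No worker is processing packets here, so shared data can be
       * modified without per-packet synchronization. */
      my_shared_table_cleanup ();

      /* Release the barrier; workers resume. */
      vlib_worker_thread_barrier_release (vm);
    }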

> 
> On the other hand, if you do modify shared data structures in the datapath,
> you are on your own - you need to take care of the data consistency.
> Again, the way we usually deal with that is to do an "rpc" to the main
> thread - then the main thread can request the worker barrier, etc.
> 
> Or do you refer to other situations?
I was looking at the bi-hash library on a standalone basis. The entries are
deleted and buckets are freed without any synchronization between the writer
and the data plane threads. If the synchronization is handled outside of this
library using the worker barrier, it should be alright.
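Something like the following is what I take the "handled outside the library"
case to mean -- again only a sketch, using the 8_8 bihash variant with a
hypothetical table and key, and vl_api_rpc_call_main_thread() as the usual way
to hand the work from a worker to the main thread, which then takes the
barrier:

    #include <vlib/vlib.h>
    #include <vlib/threads.h>
    #include <vlibmemory/api.h>         /* vl_api_rpc_call_main_thread() */
    #include <vppinfra/bihash_8_8.h>

    static clib_bihash_8_8_t my_table;  /* hypothetical shared table */

    /* Runs on the main thread: delete one entry with the workers held at
     * the barrier, so no concurrent lookup can see a bucket being freed. */
    static void
    my_table_entry_delete (void *arg)
    {
      vlib_main_t *vm = vlib_get_main ();
      clib_bihash_kv_8_8_t kv = { .key = *(u64 *) arg };

      vlib_worker_thread_barrier_sync (vm);
      clib_bihash_add_del_8_8 (&my_table, &kv, 0 /* is_add: 0 = delete */);
      vlib_worker_thread_barrier_release (vm);
    }

    /* Called from a worker thread: hand the deletion off to the main
     * thread instead of touching the table directly in the datapath. */
    static void
    my_table_entry_delete_from_worker (u64 key)
    {
      vl_api_rpc_call_main_thread (my_table_entry_delete,
                                   (u8 *) &key, sizeof (key));
    }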

> 
> Best
> Ben
> 
> > -----Original Message-----
> > From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Honnappa
> > Nagarahalli
> > Sent: Thursday, 27 February 2020 17:51
> > To: cho...@chopps.org; vpp-dev@lists.fd.io; Honnappa Nagarahalli
> > <honnappa.nagaraha...@arm.com>
> > Cc: nd <n...@arm.com>
> > Subject: Re: [vpp-dev] Q: how best to avoid locking for cleanup.
> >
> > I think there are similar issues in bi-hash (i.e. the entry could be
> > deleted from the control plane while the data plane threads are doing a
> > lookup).
> >
> >
> >
> > Thanks,
> >
> > Honnappa
> >
> >
> >
> > From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Christian
> > Hopps via Lists.Fd.Io
> > Sent: Thursday, February 27, 2020 5:09 AM
> > To: vpp-dev <vpp-dev@lists.fd.io>
> > Cc: vpp-dev@lists.fd.io
> > Subject: Re: [vpp-dev] Q: how best to avoid locking for cleanup.
> >
> >
> >
> > I received a private message indicating that one solution was to just
> > wait "long enough" for the packets to drain. This is the method I'm
> > going to go with as it's simple (albeit not as deterministic as some
> > marking/callback scheme :)
> >
> > For my case I think I can wait ridiculously long for "long enough" and
> > just have a process do garbage collection after a full second.
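As an illustration of that approach (a rough sketch only, with made-up names;
vlib_process_suspend() in a VLIB_NODE_TYPE_PROCESS node is the standard way to
run periodic work on the main thread):

    #include <vlib/vlib.h>

    /* Hypothetical "retire list": instead of freeing a per-SA queue in the
     * SA-delete callback, push it here with a timestamp. The assumption is
     * that both the delete callback and this process run on the main
     * thread, so the list itself needs no lock. */
    typedef struct
    {
      void *queue;
      f64 retired_at;
    } my_retired_queue_t;

    static my_retired_queue_t *my_retire_list;  /* vec */

    static void
    my_queue_free (void *q)
    {
      /* hypothetical: release the queue's buffers and memory */
    }

    static uword
    my_gc_process (vlib_main_t *vm, vlib_node_runtime_t *rt, vlib_frame_t *f)
    {
      while (1)
        {
          vlib_process_suspend (vm, 1.0 /* seconds */);
          f64 now = vlib_time_now (vm);
          int i;

          for (i = vec_len (my_retire_list) - 1; i >= 0; i--)
            if (now - my_retire_list[i].retired_at > 1.0)
              {
                /* Any packet that referenced this queue entered the graph
                 * more than a second ago and has long since left it. */
                my_queue_free (my_retire_list[i].queue);
                vec_del1 (my_retire_list, i);
              }
        }
      return 0;
    }

    VLIB_REGISTER_NODE (my_gc_node) = {
      .function = my_gc_process,
      .type = VLIB_NODE_TYPE_PROCESS,
      .name = "iptfs-queue-gc-process",
    };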
> >
> > I do wonder how many other cases of "state associated with in-flight
> > packets" there might be, and if a more sophisticated general solution
> > might be useful.
> >
> > Thanks,
> > Chris.
> >
> > > On Feb 25, 2020, at 6:27 PM, Christian Hopps <cho...@chopps.org> wrote:
> > >
> > > I've got a (hopefully) interesting problem with locking in VPP.
> > >
> > > I need to add some cleanup code to my IPTFS additions to ipsec.
> > > Basically I have some per-SA queues that I need to clean up when an SA
> > > is deleted.
> > >
> > > - ipsec only deletes its SAs when its "fib_node" locks reach zero.
> > > - I'm hoping this means that ipsec will only be deleting the SA after the
> > > FIB has stopped injecting packets "along" this SA path (i.e., it's
> > > removed prior to the final unlock/deref).
> > > - I'm being called back by ipsec during the SA deletion.
> > > - I have queues (one RX for reordering, one TX for aggregation and
> > > subsequent output) associated with the SA, both containing locks, that
> > > need to be emptied and freed.
> > > - These queues are being used in multiple worker threads in various
> > > graph nodes in parallel.
> > >
> > > What I think this means is that when my "SA deleted" callback is called,
> > > no *new* packets will be delivered on the SA path. Good so far.
> > >
> > > What I'm concerned with is the packets that may currently be "in-flight"
> > > in the graph, as these will have the SA associated with them, and thus
> > > my code may try to use the per-SA queues which I'm now trying to delete.
> > >
> > > There's a somewhat clunky solution involving global locks prior to and
> > > after using an SA in each node, tracking its validity (which has its own
> > > issues), freeing it when no longer in use, etc., but this would introduce
> > > global locking in the packet path, which I'm loath to do.
> > >
> > > What I'd really like is if there was something like this:
> > >
> > > - packet ingress to SA fib node, fib node lock count increment.
> > > - packet completes its journey through the VPP graph (or at least my
> > > part of it) and decrements that fib node lock count.
> > > - when the SA should be deleted, it removes its fib node from the FIB,
> > > thus preventing new packets from entering the graph, then unlocks.
> > > - the SA is either immediately deleted (no packets in flight), or
> > > deleted when the last packet completes its graph traversal.
> > >
> > > I could do something like this inside my own nodes (my first node is
> > > point B), but then there's still the race between when the fib node is
> > > used to inject the packet into the next node in the graph (point A) and
> > > that packet arriving at my first IPTFS node (point B), during which the
> > > SA deletion could occur. Maybe I could modify the fib code to do this at
> > > point A. I haven't looked closely at the fib code yet.
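One generic way to express that scheme -- purely a sketch with made-up names,
not existing VPP machinery -- is a per-SA in-flight counter using the vppinfra
atomic helpers (or the equivalent __atomic builtins): the SA starts with a
count of 1 held by the control plane, point A increments, point B decrements,
and the delete path removes the fib node (so nothing new enters) and then
drops that initial reference. Whoever drops the count to zero frees the SA:

    #include <vppinfra/clib.h>
    #include <vppinfra/atomics.h>

    typedef struct
    {
      u32 inflight;   /* 1 for the control plane + 1 per in-flight packet */
      /* ... per-SA queues, locks, etc. ... */
    } my_sa_t;

    static void
    my_sa_free (my_sa_t *sa)
    {
      /* hypothetical: drain and free the per-SA queues, then the SA */
    }

    /* Point A: a packet is injected along the SA path (worker thread). */
    static inline void
    my_sa_ref (my_sa_t *sa)
    {
      clib_atomic_add_fetch (&sa->inflight, 1);
    }

    /* Point B / egress: a packet is done with the SA (worker thread).
     * Also called once by the SA-delete path, after the fib node has been
     * removed, to drop the initial control-plane reference. */
    static inline void
    my_sa_unref (my_sa_t *sa)
    {
      if (clib_atomic_sub_fetch (&sa->inflight, 1) == 0)
        my_sa_free (sa);   /* last reference gone: nothing can still use it */
    }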
> > >
> > > Anyway, I suspect this has been thought about before, and maybe there's
> > > even a solution already present in VPP, so I wanted to ask. :)
> > >
> > > Thanks,
> > > Chris.
> > >
> > >
> > >
