Hi Chris,

None of the APIs that result in the removal of an SA are marked as MP-safe. 
This means that the worker threads are paused at the ‘barrier’ while the API is 
handled. Workers reach the barrier only once they have completed the frame they 
are working on, so there are no packets in flight when the SA is deleted.
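For illustration only, a rough sketch of that serialisation (the handler name is
made up; vlib_worker_thread_barrier_sync()/vlib_worker_thread_barrier_release()
are the barrier calls the API dispatcher effectively wraps around any handler
that is not marked MP-safe):

#include <vlib/vlib.h>
#include <vlib/threads.h>

/* Hypothetical handler body, shown only to illustrate the barrier.
 * For a handler that is not marked MP-safe the API dispatcher does
 * the equivalent of this bracketing itself. */
static void
example_sa_delete_handler (vlib_main_t * vm)
{
  /* Workers finish the frame they are processing and park here... */
  vlib_worker_thread_barrier_sync (vm);

  /* ...so the SA (and anything hanging off it) can be torn down with
   * no packets in flight on any worker. */

  vlib_worker_thread_barrier_release (vm);
}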

One more comment inline

From: <vpp-dev@lists.fd.io> on behalf of Christian Hopps <cho...@chopps.org>
Date: Wednesday 26 February 2020 at 00:28
To: vpp-dev <vpp-dev@lists.fd.io>
Cc: Christian Hopps <cho...@chopps.org>
Subject: [vpp-dev] Q: how best to avoid locking for cleanup.

I've got a (hopefully) interesting problem with locking in VPP.

I need to add some cleanup code to my IPTFS additions to ipsec. Basically I 
have some per-SA queues that I need to cleanup when an SA is deleted.

- ipsec only deletes its SAs when its "fib_node" locks reach zero.
- I'm hoping this means that ipsec will only be deleting the SA after the FIB has 
stopped injecting packets "along" this SA path (i.e., it's removed prior to the 
final unlock/deref).
It means that there are no other objects that refer to this SA.
The fib_node serves as an object's linkage into the FIB graph. In the FIB graph 
nomenclature, children ‘point to’ their one parent and parents have many 
children. In the data-plane, children typically pass packets to their parents, 
e.g. a route is a child of an adjacency.
An SA has no children. It gets packets from entities that are not linked into 
the FIB graph – for SAs I think it is always via an input or output feature. An 
SA, in tunnel mode or used in an SPD policy, is a child of the route that resolves 
the encap destination – this allows it to track where to send packets post 
encap and thus elide the lookup in the DP.
/neale
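To make the child/parent linkage concrete, a rough sketch (assuming the
fib_node_t embedding and fib_node_lock()/fib_node_unlock() from
vnet/fib/fib_node.h; the type and function names here are made up and the vft
details differ across VPP versions):

#include <vnet/fib/fib_node.h>

/* Hypothetical object that participates in the FIB graph: it embeds a
 * fib_node_t, and anything that refers to it holds a lock on that node. */
typedef struct example_sa_t_
{
  fib_node_t node;		/* linkage into the FIB graph */
  /* ... per-SA state ... */
} example_sa_t;

/* Registered in the node type's fib_node_vft_t: runs when the last lock
 * is dropped, i.e. when nothing refers to the SA any more. This is the
 * point at which the SA is actually deleted. */
static void
example_sa_last_lock_gone (fib_node_t * node)
{
  /* free the per-SA state here */
}

/* A referencing object takes and drops its reference like this: */
static void
example_sa_ref_and_unref (example_sa_t * sa)
{
  fib_node_lock (&sa->node);	/* the SA cannot go away while held */
  /* ... */
  fib_node_unlock (&sa->node);	/* may trigger example_sa_last_lock_gone */
}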

- I'm being called back by ipsec during the SA deletion.
- I have queues (one RX for reordering, one TX for aggregation and subsequent 
output) associated with the SA, both containing locks, that need to be emptied 
and freed.
- These queues are being used in multiple worker threads in various graph nodes 
in parallel (a rough sketch of the queue structure follows below).
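For concreteness, a minimal sketch of what one of those per-SA queues and its
teardown might look like (all names hypothetical; assuming clib_spinlock_t from
vppinfra and a vector of buffer indices):

#include <vlib/vlib.h>
#include <vppinfra/lock.h>

/* Hypothetical per-SA queue touched by several worker threads. */
typedef struct example_iptfs_queue_t_
{
  clib_spinlock_t lock;		/* taken by whichever worker touches the queue */
  u32 *buffer_indices;		/* vector of queued vlib buffer indices */
} example_iptfs_queue_t;

/* "Emptied and freed": drop any queued buffers and release the queue.
 * Only safe once no worker can still reach the queue. */
static void
example_iptfs_queue_free (vlib_main_t * vm, example_iptfs_queue_t * q)
{
  clib_spinlock_lock (&q->lock);
  vlib_buffer_free (vm, q->buffer_indices, vec_len (q->buffer_indices));
  vec_free (q->buffer_indices);
  clib_spinlock_unlock (&q->lock);
  clib_spinlock_free (&q->lock);
}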

What I think this means is that when my "SA deleted" callback is called, no 
*new* packets will be delivered on the SA path. Good so far.

What I'm concerned with is the packets that may currently be "in-flight" in the 
graph, as these will have the SA associated with them, and thus my code may try 
and use the per SA queues which I'm now trying to delete.

There's a somewhat clunky solution involving global locks prior to and after 
using an SA in each node, tracking its validity (which has its own issues), 
freeing when no longer in use, etc., but this would introduce global locking in 
the packet path, which I'm loath to do.

What I'd really like is if there were something like this:

- packet ingress to SA fib node, fib node lock count increment.
- packet completes its journey through the VPP graph (or at least my part of 
it) and decrements that fib node lock count.
- when the SA should be deleted, it removes its fib node from the FIB, thus 
preventing new packets entering the graph, then unlocks.
- the SA is either immediately deleted (no packets in flight), or deleted when 
the last packet completes its graph traversal (see the sketch below).
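As a minimal sketch of that counting scheme (entirely hypothetical names, using
the clib atomic helpers; the SA itself would hold one reference that the delete
path drops, so this is not how ipsec does it today):

#include <vppinfra/clib.h>
#include <vppinfra/atomics.h>

/* Hypothetical per-SA in-flight/reference counter. The SA's creation
 * takes one reference; every packet injected onto the SA path takes
 * another and drops it when it leaves my last IPTFS node. */
typedef struct
{
  u32 refcnt;
} example_sa_track_t;

static inline void
example_sa_track_ref (example_sa_track_t * t)
{
  clib_atomic_fetch_add (&t->refcnt, 1);
}

static inline void
example_sa_track_unref (example_sa_track_t * t)
{
  /* fetch_sub returns the old value: 1 means we just dropped the last
   * reference, so it is now safe to empty and free the per-SA queues. */
  if (clib_atomic_fetch_sub (&t->refcnt, 1) == 1)
    {
      /* free the per-SA queues here */
    }
}

/* Packet ingress at point A: example_sa_track_ref(); packet leaves my
 * last node: example_sa_track_unref(). SA deletion: remove the fib
 * node so no new packets enter, then example_sa_track_unref() to drop
 * the creation reference; the free happens immediately if nothing is
 * in flight, or when the last in-flight packet exits. */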

I could do something like this inside my own nodes (my first node is point B), 
but then there's still the race between when the fib node is used to inject the 
packet into the next node in the graph (point A) and when that packet arrives at 
my first IPTFS node (point B), during which the SA deletion could occur. Maybe I 
could modify the fib code to do this at point A. I haven't looked closely at the 
fib code yet.

Anyway I suspect this has been thought about before, and maybe there's even a 
solution already present in VPP, so I wanted to ask. :)

Thanks,
Chris.

