Grossly simplified explanation: because of how VPP works, all packets are
processed to completion on every vector (there may be exceptions, but they are
not typical; VPP really does not want to hang on to packets across cycles).
Once a vector has been completed, there is a barrier lock that allows main-thread
operations on worker-thread data structures (such as collecting counters);
once those are complete, the barrier is lifted and the next vector is ingested.
Not all main-thread operations happen on every barrier, and naturally, given
the time-sensitive nature, all that code is carefully vetted. It may seem like
an inefficient approach, but it avoids a lot of locking, and especially a lot
of checking for locks.

Thus “flushed” is the wrong perspective; there is no flushing when everything 
is processed to completion anyway.
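In code, the pattern is roughly the following sketch. The
vlib_worker_thread_barrier_sync()/vlib_worker_thread_barrier_release() pair
are the actual vlib primitives; the config handler, my_config_t, and
apply_config() are hypothetical stand-ins for illustration only:

    #include <vlib/vlib.h>

    /* Hypothetical feature config; sketch only. */
    typedef struct { u32 some_param; } my_config_t;

    extern void apply_config (my_config_t * cfg);   /* hypothetical */

    /* Main-thread config update wrapped in the barrier. */
    static void
    my_feature_config_update (vlib_main_t * vm, my_config_t * new_cfg)
    {
      /* Workers finish their current vectors and park at the barrier. */
      vlib_worker_thread_barrier_sync (vm);

      /* Safe to touch worker-thread data structures here: no packets
         are in flight, since every vector is processed to completion. */
      apply_config (new_cfg);

      /* Lift the barrier; workers ingest their next vectors. */
      vlib_worker_thread_barrier_release (vm);
    }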

Chris.

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> on behalf of satish amara 
<satishkam...@gmail.com>
Date: Monday, July 12, 2021 at 16:48
To: Damjan Marion <dmar...@me.com>, vpp-dev@lists.fd.io <vpp-dev@lists.fd.io>
Subject: [EXTERNAL] Re: [vpp-dev] Multi-threading locks and synchronization
"Interfaces are created/deleted under the barrier so there is not packets in 
flight."
Can you please add more details?  I just gave a scenario. In general, this is 
applicable to all meta fields  /Opaque fields. How are handling the outdated 
meta fields in packets due to config changes?  Does this mean all the packets 
need to be flushed out or processed before the new config is applied?

On Mon, Jul 12, 2021 at 11:04 AM Damjan Marion <dmar...@me.com> wrote:

On 11.07.2021., at 17:10, satish amara <satishkam...@gmail.com> wrote:


Hi,
   I have a few questions about how synchronization is done when multiple
workers/threads access the same data structure.
For example, IPsec headers have a sequence number that gets incremented.
If we have an IPsec flow and are encrypting packets on VPP, do we assume that
packets in the same flow go to the same core for encryption?

Yes, we hand off all packets to the owning thread. For fat flows we have a
crypto scheduler which offloads crypto operations to multiple cores, but
ordering is still maintained by the owning thread.
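Very roughly, the ownership idea looks like the sketch below; the helper
names here are hypothetical stand-ins for VPP's flow-hash and handoff
machinery, not the exact API:

    #include <vppinfra/types.h>

    extern void process_locally (u32 buffer_index);               /* hypothetical */
    extern void enqueue_to_thread (u16 thread, u32 buffer_index); /* hypothetical */

    /* Same 5-tuple -> same hash -> same owning worker, so per-flow
       state like the IPsec seq number is only ever touched by one
       thread and needs no lock. */
    static inline u16
    owning_thread_for_flow (u32 flow_hash, u16 n_workers)
    {
      return 1 + (flow_hash % n_workers);   /* thread 0 is the main thread */
    }

    static void
    handoff_packet (u32 buffer_index, u32 flow_hash, u16 my_thread,
                    u16 n_workers)
    {
      u16 owner = owning_thread_for_flow (flow_hash, n_workers);
      if (owner == my_thread)
        process_locally (buffer_index);          /* already on the owning thread */
      else
        enqueue_to_thread (owner, buffer_index); /* hand off; order preserved */
    }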


IP lookup will find the adjacency and store it in the opaque field, then send
the packet to another node for the rewrite. What happens if the interface is
removed from the forwarding table before the packet gets processed in the next
node? Lots of info is stored in opaque fields by various features. How does
the code make sure that changes in the config data are taken care of while
packets are still being processed by intermediate nodes in the graph?

Interfaces are created/deleted under the barrier, so there are no packets in
flight.

—
Damjan
