Re: [vpp-dev] VPP IPSec Responder Server (VPN Server) VPP Route injection

2020-02-27 Thread Neale Ranns via Lists.Fd.Io

Hi Ravin,

I would suggest two things:

  1.  In your application you should maintain an association between 
strongswan’s client and the tunnel you create in VPP for it. Then, since the 
routes are associated with the client they can easily be matched to the tunnel. 
You’ll need this sort of association when the client is deleted and so the 
tunnel is too.
  2.  Both the [soon to be deprecated] ipsec tunnel and ipip tunnel create 
APIs allow you to choose the instance number of the tunnel. So the X in ipipX 
is your choice and not a counter.
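For anyone wiring this up programmatically, the bookkeeping suggested in point 1 can be sketched as below. This is plain Python with illustrative names (TunnelRegistry, add_client, etc. are not a VPP API): the application picks the ipip instance number itself, so "ipipX" is predictable, and keeps the client's routes alongside it so they can be removed together when the client goes away.

```python
# Sketch only: an application-side registry tying a strongswan client to
# the VPP tunnel created for it, with a self-chosen ipip instance number.

class TunnelRegistry:
    def __init__(self):
        self._by_client = {}   # client_id -> {"instance": n, "routes": [...]}

    def add_client(self, client_id, remote_prefix):
        # Pick a deterministic instance number (here: next free slot),
        # so the tunnel is ipip<instance> rather than "whichever came first".
        instance = len(self._by_client)
        self._by_client[client_id] = {"instance": instance,
                                      "routes": [remote_prefix]}
        return instance

    def routes_for(self, client_id):
        return self._by_client[client_id]["routes"]

    def delete_client(self, client_id):
        # Routes and tunnel state are removed together with the client.
        return self._by_client.pop(client_id)

reg = TunnelRegistry()
inst = reg.add_client("client-a", "192.168.3.0/24")
print(f"ipip{inst}", reg.routes_for("client-a"))
```

With this association in place, the "which route goes on which interface" question from the original mail disappears: the route is looked up by client, never by interface creation order.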

/neale


From:  on behalf of "ravinder.ya...@hughes.com" 

Date: Wednesday 26 February 2020 at 20:28
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] VPP IPSec Responder Server (VPN Server) VPP Route injection

Route Injection VPP IPSec: "Routing traffic through ipsec0 interface on the VPP 
responder"

Setup Details: StrongSwan IPsec client initiator which establishes 250 IPSec 
tunnels with the VPP head-end responder.

Case # Only one IPSec tunnel:
ipip0 (ipsec00) interface, and it's straightforward to add a route for the 
remote IP range.

Case # When you have more than one IPSec tunnel, the ipsec interface name 
depends on which tunnel got established first.
ipip0 interface (Can't add route because don't know which remote ip range): 
Could be remote 1 or 2
ipip1 interface (Can't add route because don't know which remote ip range): 
Could be remote 1 or 2

This becomes a big issue when you have 250 clients coming in at the same time. 
It becomes impossible to decide which route gets injected on which interface!

-Ravin


Ref: 
https://wiki.fd.io/view/VPP/IPSec_and_IKEv2#Routing_traffic_through_ipsec0_interface_on_the_VPP_responder

Routing traffic through ipsec0 interface on the VPP responder

At this point of the configuration, you still do not have end to end secure 
connectivity. You need to route traffic through ipsec0 created interface on 
VPP. There are two ways of doing it.

First: using a dummy IP address.

set interface state ipsec0 up

set interface ip address ipsec0 11.11.11.11/32

ip route add 192.168.3.0/24 via 11.11.11.11 ipsec0

Second: binding logical and physical interfaces

You must use tunnel endpoint interface.

ip route add 192.168.3.0/24 via ipsec0

set interface state ipsec0 up

set interface unnumbered ipsec0 use TenGigabitEthernet4/0/0
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15580): https://lists.fd.io/g/vpp-dev/message/15580
Mute This Topic: https://lists.fd.io/mt/71571955/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Error: esp4-encrypt-tun-handoff : congestion drop #ipsec

2020-02-27 Thread Neale Ranns via Lists.Fd.Io

Ravin,

Due to the RX and TX actions that need to be performed on an SA in an ‘atomic’ 
fashion, each SA is bound to a single worker thread. The thread chosen is the 
one that first sees a packet for that SA. When subsequent packets arrive on a 
different thread they need to be transferred to the SA’s bound thread – this 
process is called handoff.
The drops you are seeing occur when the input rate on the receive thread 
exceeds the tx rate on the SA’s thread. You might look at which threads are 
involved and see if the handoff can be avoided or at least reduced. Otherwise 
this is an indication that VPP is hitting a processing limit.
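The binding-and-handoff behaviour Neale describes can be modelled in a few lines. This is a toy model, not VPP code: the SA binds to the first thread that sees a packet for it, packets arriving on other threads are queued to the bound thread, and when that queue is full the packet is dropped (the "congestion drop" counter). The queue depth here is illustrative; VPP's real handoff queue depth differs.

```python
# Toy model of per-SA thread binding and handoff congestion drops.
from collections import deque

HANDOFF_QUEUE_DEPTH = 4   # illustrative, not VPP's actual depth

class SA:
    def __init__(self):
        self.bound_thread = None
        self.handoff_queue = deque()

def rx_packet(sa, rx_thread, pkt):
    if sa.bound_thread is None:
        sa.bound_thread = rx_thread          # first packet binds the SA
    if rx_thread == sa.bound_thread:
        return "processed"
    if len(sa.handoff_queue) >= HANDOFF_QUEUE_DEPTH:
        return "congestion-drop"             # rx rate exceeds bound thread's tx rate
    sa.handoff_queue.append(pkt)
    return "handed-off"

sa = SA()
results = [rx_packet(sa, t, i) for i, t in enumerate([0, 1, 1, 1, 1, 1, 1])]
print(results)
```

The model makes the debugging advice concrete: if traffic for one SA keeps landing on a thread other than the bound one, the handoff queue fills and drops begin, so steering the flow onto the bound thread (or spreading SAs across threads) is what reduces the counter.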

/neale

From:  on behalf of "ravinder.ya...@hughes.com" 

Date: Wednesday 26 February 2020 at 23:59
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Error: esp4-encrypt-tun-handoff : congestion drop #ipsec


[Edited Message Follows]
Folks,

I am seeing packet congestion drop errors "esp4-encrypt-tun-handoff". What 
could be the reason for this, and how can I debug this further?
     Count   Node                       Reason
       124   ikev2                      IKEv2 packets processed
        31   ikev2                      IKE_SA_INIT retransmit
        31   ip4-udp-lookup             no error
  19061984   esp4-decrypt-tun           ESP pkts received
         1   esp4-decrypt-tun           Integrity check failed
 134757254   esp4-encrypt-tun           ESP pkts received
  19061984   ipsec4-tun-input           good packets received
     33913   esp4-encrypt-tun-handoff   congestion drop
        31   ip4-glean                  ARP requests sent
       263   ethernet-input             no error

Thank you,
-Ravin
View/Reply Online (#15581): https://lists.fd.io/g/vpp-dev/message/15581


Re: [vpp-dev] Q: how best to avoid locking for cleanup.

2020-02-27 Thread Christian Hopps
I received a private message indicating that one solution was to just wait 
"long enough" for the packets to drain. This is the method I'm going to go with, 
as it's simple (albeit not as deterministic as some marking/callback scheme). :)

For my case I think I can wait ridiculously long for "long enough" and just 
have a process do garbage collection after a full second.
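The "wait long enough, then collect" approach can be sketched in a few lines: deleted SA state is parked with a deadline and only freed once the grace period (one second, as in the mail) has passed. Names here are illustrative, not from the IPTFS code.

```python
# Sketch of deferred garbage collection with a fixed grace period.
GRACE_SECONDS = 1.0
_graveyard = []   # list of (deadline, state) pairs awaiting collection

def defer_free(state, now):
    """Park state instead of freeing it; packets may still reference it."""
    _graveyard.append((now + GRACE_SECONDS, state))

def collect(now):
    """Free everything whose grace period has expired; return what was freed."""
    freed = [s for d, s in _graveyard if d <= now]
    _graveyard[:] = [(d, s) for d, s in _graveyard if d > now]
    return freed

defer_free({"sa": 1}, now=0.0)
print(collect(now=0.5))   # nothing freed yet: packets may still be in flight
print(collect(now=1.5))   # grace period over: state is freed
```

The trade-off is exactly the one noted above: simplicity in exchange for a grace period that is a guess rather than a guarantee.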

I do wonder how many other cases of "state associated with in-flight packets" 
there might be, and if a more sophisticated, general solution might be useful.

Thanks,
Chris.

> On Feb 25, 2020, at 6:27 PM, Christian Hopps  wrote:
> 
> I've got a (hopefully) interesting problem with locking in VPP.
> 
> I need to add some cleanup code to my IPTFS additions to ipsec. Basically I 
> have some per-SA queues that I need to cleanup when an SA is deleted.
> 
> - ipsec only deletes its SAs when its "fib_node" locks reach zero.
> - I'm hoping this means that ipsec will only be deleting the SA after the FIB 
> has stopped injecting packets "along" this SA path (i.e., it's removed prior 
> to the final unlock/deref).
> - I'm being called back by ipsec during the SA deletion.
> - I have queues (one RX for reordering, one TX for aggregation and subsequent 
> output) associated with the SA, both containing locks, that need to be 
> emptied and freed.
> - These queues are being used in multiple worker threads in various graph 
> nodes in parallel.
> 
> What I think this means is that when my "SA deleted" callback is called, no 
> *new* packets will be delivered on the SA path. Good so far.
> 
> What I'm concerned with is the packets that may currently be "in-flight" in 
> the graph, as these will have the SA associated with them, and thus my code 
> may try and use the per SA queues which I'm now trying to delete.
> 
> There's a somewhat clunky solution involving global locks prior to and after 
> using an SA in each node, tracking its validity (which has its own issues), 
> freeing when no longer in use, etc., but this would introduce global locking 
> in the packet path which I'm loath to do.
> 
> What I'd really like is if there was something like this:
> 
> - packet ingress to SA fib node, fib node lock count increment.
> - packet completes its journey through the VPP graph (or at least my part of 
> it) and decrements that fib node lock count.
> - when the SA should be deleted it removes its fib node from the fib, thus 
> preventing new packets entering the graph, then unlocks.
> - the SA is either immediately deleted (no packets in flight), or deleted 
> when the last packet completes its graph traversal.
> 
> I could do something like this inside my own nodes (my first node is point 
> B), but then there's still the race between when the fib node is used to 
> inject the packet to the next node in the graph (point A) and that packet 
> arriving at my first IPTFS node (point B), when the SA deletion could occur. 
> Maybe I could modify the fib code to do this at point A. I haven't looked 
> closely at the fib code yet.
> 
> Anyway I suspect this has been thought about before, and maybe there's even a 
> solution already present in VPP, so I wanted to ask. :)
> 
> Thanks,
> Chris.
> 
> 
> 
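The lock/unlock scheme sketched in the quoted mail (per-packet increment on ingress, decrement on completion, delete when unlinked and the count reaches zero) can be modelled like this. The class and method names are illustrative, not VPP's fib_node API:

```python
# Sketch of per-packet reference counting on an SA, with deferred delete.
class SANode:
    def __init__(self):
        self.locks = 0
        self.unlinked = False   # removed from the FIB: no new packets enter
        self.deleted = False

    def packet_in(self):
        assert not self.unlinked, "no new packets after unlink"
        self.locks += 1

    def packet_out(self):
        self.locks -= 1
        if self.unlinked and self.locks == 0:
            self.deleted = True   # last in-flight packet frees the SA

    def unlink(self):
        self.unlinked = True
        if self.locks == 0:
            self.deleted = True   # nothing in flight: delete immediately

sa = SANode()
sa.packet_in()       # a packet enters the graph via the SA's fib node
sa.unlink()          # SA removed from the FIB while that packet is in flight
print(sa.deleted)    # False: deletion is deferred
sa.packet_out()      # the packet finishes its graph traversal
print(sa.deleted)    # True: last reference gone
```

The race identified in the mail (between point A, where the fib node injects the packet, and point B, the first IPTFS node) is exactly the window where `packet_in` would have to happen for this scheme to be airtight.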

View/Reply Online (#15582): https://lists.fd.io/g/vpp-dev/message/15582


Re: [vpp-dev] Can I submit some API changes for 19.08, et al. release branches?

2020-02-27 Thread Neale Ranns via Lists.Fd.Io

Hi Chris,

Adding an IS_INBOUND flag could be non-backward compatible if not setting the 
INBOUND flag on an SA, and then using it in an inbound context, resulted in an 
error being returned to the user. So existing clients would be obligated to set 
this new flag. If that’s not the case, and clients don’t have to set the flag, 
then I’d ask the question: can the flag be set later, by VPP, when the SA is 
used in an inbound context?
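The compatibility point can be illustrated with the flag values themselves. The values below are my reading of VPP's ipsec_sad_flags at the time (treat them as illustrative, not authoritative): an old client that never sets 0x40 encodes exactly the same bits as before, so the wire format is untouched; the only compatibility question is whether VPP starts *requiring* the new bit.

```python
# Sketch of the proposed flag addition; values are illustrative.
from enum import IntFlag

class IpsecSadFlags(IntFlag):
    NONE = 0
    USE_ESN = 0x01
    USE_ANTI_REPLAY = 0x02
    IS_TUNNEL = 0x04
    IS_TUNNEL_V6 = 0x08
    UDP_ENCAP = 0x10
    IS_INBOUND = 0x40          # the proposed new flag

old_client = IpsecSadFlags.IS_TUNNEL | IpsecSadFlags.USE_ESN
assert old_client == 0x05      # old encodings are bit-for-bit unchanged
new_client = old_client | IpsecSadFlags.IS_INBOUND
print(hex(new_client))
```

Neale's alternative (VPP setting the bit itself when the SA is first used inbound) would keep old clients working with no change at all.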

Regards,
neale


From:  on behalf of Christian Hopps 
Date: Tuesday 25 February 2020 at 16:25
To: vpp-dev 
Cc: Christian Hopps 
Subject: [vpp-dev] Can I submit some API changes for 19.08, et al. release 
branches?

I've got a couple of changes to the ipsec API that I'd like to upstream to 
match the vpp "kernel" code I'm going try and upstream to strongswan.

1) Add: Add an ip_route_lookup/reply pair (semver minor++)
2) Fix: Add IS_INBOUND flag (value 0x40) to ipsec_sad_flags (semver patch++)
optional) Fix: possibly add the other missing flags to ipsec_sad_flags so they 
can be properly returned on queries.

I think submitting these for release branches is OK after reading 
https://wiki.fd.io/view/VPP/API_Versioning

I'm coding to 19.08 right now, if I'd like to have those changes in that branch 
I would imagine I'd need to also submit changes for 20.01 and master?

I admit to being confused about the CRC stuff, and the warnings in the 19.08.1 
release notes and what those warnings imply. Is it safe to assume the CRC stuff 
can be ignored and external clients will still work (given no semver major 
change) even if a CRC changes?

Side Note: from the API link: "If a new message type is added, the old message 
type must be maintained for at least one release. The change must be included 
in the release notes, and it would be helpful if the message could be tagged as 
deprecated in the API language and a logging message given to the user."

Given there are 3 releases per year, only maintaining an old compatible 
function for 1 release seems rather aggressive. It does say "at least" though. 
:)

Thanks,
Chris.

View/Reply Online (#15583): https://lists.fd.io/g/vpp-dev/message/15583


Re: [vpp-dev] Can I submit some API changes for 19.08, et al. release branches?

2020-02-27 Thread Neale Ranns via Lists.Fd.Io


From:  on behalf of Christian Hopps 
Date: Tuesday 25 February 2020 at 22:09
To: Andrew 👽 Yourtchenko 
Cc: Christian Hopps , "Dave Barach (dbarach)" 
, vpp-dev 
Subject: Re: [vpp-dev] Can I submit some API changes for 19.08, et al. release 
branches?



> On Feb 25, 2020, at 3:44 PM, Andrew 👽 Yourtchenko  wrote:
>
> That’s the APIs in the master prior to the API freeze, in my book.
>
> The today’s situation is:
>
> - want bleeding edge features ? Use master.
> - want sort-of-boring-stability ? Use release branch
>
> Now, I am really interested why the folks who do want to experiment with the 
> APIs, do not pick the master ? (Say, specifically for your case).

FWIW, I'm absolutely fine following whatever guidelines, it's open source and 
I'm sure people can make things work with whatever is best for both project[s].

In my case we're not really talking about an API that is "bleeding edge", but 
rather "waited around for someone to need/implement it". Doing a route/fib 
entry lookup isn't very bleeding edge given what VPP does. :)

True 😊 but the general usage model for VPP is that there is one agent/client 
giving it all the state it needs to function. So why would that agent need 
lookup functions? It has all the data already. The dump APIs serve to 
repopulate the agent with all state should it crash.

/neale


My use case is interfacing an external IKE daemon (strongswan), I want to do a 
route lookup (rather than download the entire FIB content and search for a 
match myself hoping to use the same criteria that VPP would, which is the only 
current API solution available).

I'd like to contribute my strongswan changes back to strongswan project, but 
having them only be usable for some future yet-to-be-released version of VPP 
might not be useful to people shipping products based mostly on 
"sort-of-boring-but-stable" features.

Also worth noting, I implemented a basic route lookup (prefix based, from a 
given table, either exact or longest prefix match), I didn't add any other 
filtering or fanciness. I figured fanciness could be added later (if needed), 
and went for simple, and if more complexity was needed, well there's always 
what came before it. :)

  define ip_route_lookup
  {
u32 client_index;
u32 context;
u32 table_id;
u8 exact;
vl_api_prefix_t prefix;
  };
  define ip_route_lookup_reply
  {
u32 context;
i32 retval;
vl_api_ip_route_t route;
  };
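The semantics the `exact` field implies can be sketched in pure Python (this is an illustration of exact-match versus longest-prefix-match lookup, not the VPP FIB code; the table contents are made up):

```python
# Sketch of exact vs. longest-prefix-match route lookup semantics.
import ipaddress

table = {ipaddress.ip_network(p): f"route-{p}" for p in
         ["0.0.0.0/0", "192.168.0.0/16", "192.168.3.0/24"]}

def ip_route_lookup(prefix, exact=False):
    net = ipaddress.ip_network(prefix)
    if exact:
        return table.get(net)             # only an identical prefix matches
    # Longest-prefix match: most specific table entry covering the prefix.
    covering = [n for n in table if net.subnet_of(n)]
    return table[max(covering, key=lambda n: n.prefixlen)] if covering else None

print(ip_route_lookup("192.168.3.0/24", exact=True))
print(ip_route_lookup("192.168.3.64/26"))          # falls to the covering /24
print(ip_route_lookup("192.168.4.0/24", exact=True))  # no identical entry
```

This is precisely the "let VPP apply its own match criteria" behaviour the mail argues for, as opposed to dumping the whole FIB and re-implementing the match client-side.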

Thanks,
Chris.

View/Reply Online (#15584): https://lists.fd.io/g/vpp-dev/message/15584


Re: [vpp-dev] Q: how best to avoid locking for cleanup.

2020-02-27 Thread Neale Ranns via Lists.Fd.Io
Hi Chris,

None of the APIs that result in the removal of an SA are marked as MP safe. 
This means that the worker threads are paused at the ‘barrier’ as the API is 
handled. Worker threads reach the barrier once they complete the frame they are 
working on. So there are no packets in flight when the SA is deleted.
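The barrier behaviour can be modelled very simply (a toy, single-threaded model rather than VPP's actual barrier code): workers only check the barrier at frame boundaries, so by the time the API handler runs, every frame has been fully drained and no packets reference the SA.

```python
# Toy model of barrier-synchronised API handling.
workers = {"w0": 3, "w1": 5}   # packets left in each worker's current frame

def process_frame(worker):
    # A worker drains its current frame before stopping at the barrier.
    workers[worker] = 0

def barrier_sync_and_delete():
    for w in workers:
        process_frame(w)       # all workers reach the barrier
    assert all(v == 0 for v in workers.values())
    return "SA deleted with no packets in flight"

print(barrier_sync_and_delete())
```

This is why the non-MP-safe marking answers Chris's original concern: the "in-flight packets" window he worries about cannot overlap the SA delete.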

One more comment inline

From:  on behalf of Christian Hopps 
Date: Wednesday 26 February 2020 at 00:28
To: vpp-dev 
Cc: Christian Hopps 
Subject: [vpp-dev] Q: how best to avoid locking for cleanup.

I've got a (hopefully) interesting problem with locking in VPP.

I need to add some cleanup code to my IPTFS additions to ipsec. Basically I 
have some per-SA queues that I need to cleanup when an SA is deleted.

- ipsec only deletes its SAs when its "fib_node" locks reach zero.
- I'm hoping this means that ipsec will only be deleting the SA after the FIB has 
stopped injecting packets "along" this SA path (i.e., it's removed prior to the 
final unlock/deref).
It means that there are no other objects that refer to this SA.
The fib_node serves as an objects linkage into the FIB graph. In the FIB graph 
nomenclature children ‘point to’ their one parent and parents have many 
children. In the data-plane children typically pass packets to their parents, 
e.g. a route is a child of an adjacency.
An SA has no children. It gets packets from entities that are not linked into 
the FIB graph – for SAs I think it is always via an input or output feature. An 
SA, in tunnel mode used in an SPD policy, is a child of the route that resolves 
the encap destination – this allows it to track where to send packets post 
encap and thus elide the lookup in the DP.
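The parent/child linkage described above can be sketched with a minimal graph (illustrative classes, not VPP's fib_node types): children hold one parent, parents hold many children, and in the data plane packets flow child-to-parent.

```python
# Sketch of FIB-graph parent/child linkage: a tunnel-mode SA is a child of
# the route resolving its encap destination, which is a child of an adjacency.
class FibNode:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent:
            parent.children.append(self)   # parents track their many children

    def path_to_output(self):
        """Walk child -> parent, the direction packets travel post-encap."""
        node, path = self, []
        while node:
            path.append(node.name)
            node = node.parent
        return path

adj = FibNode("adjacency")
route = FibNode("route-to-encap-dst", parent=adj)
sa = FibNode("SA (tunnel mode)", parent=route)
print(sa.path_to_output())
```

Because the SA tracks its parent route, a post-encap packet already knows where to go, which is the lookup elision the mail mentions.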
/neale

- I'm being called back by ipsec during the SA deletion.
- I have queues (one RX for reordering, one TX for aggregation and subsequent 
output) associated with the SA, both containing locks, that need to be emptied 
and freed.
- These queues are being used in multiple worker threads in various graph nodes 
in parallel.

What I think this means is that when my "SA deleted" callback is called, no 
*new* packets will be delivered on the SA path. Good so far.

What I'm concerned with is the packets that may currently be "in-flight" in the 
graph, as these will have the SA associated with them, and thus my code may try 
and use the per SA queues which I'm now trying to delete.

There's a somewhat clunky solution involving global locks prior to and after 
using an SA in each node, tracking its validity (which has its own issues), 
freeing when no longer in use, etc., but this would introduce global locking in 
the packet path which I'm loath to do.

What I'd really like is if there was something like this:

- packet ingress to SA fib node, fib node lock count increment.
- packet completes its journey through the VPP graph (or at least my part of 
it) and decrements that fib node lock count.
- when the SA should be deleted it removes its fib node from the fib, thus 
preventing new packets entering the graph, then unlocks.
- the SA is either immediately deleted (no packets in flight), or deleted when 
the last packet completes its graph traversal.

I could do something like this inside my own nodes (my first node is point B), 
but then there's still the race between when the fib node is used to inject the 
packet to the next node in the graph (point A) and that packet arriving at my 
first IPTFS node (point B), when the SA deletion could occur. Maybe I could 
modify the fib code to do this at point A. I haven't looked closely at the fib 
code yet.

Anyway I suspect this has been thought about before, and maybe there's even a 
solution already present in VPP, so I wanted to ask. :)

Thanks,
Chris.


View/Reply Online (#15585): https://lists.fd.io/g/vpp-dev/message/15585


Re: [vpp-dev] Can I submit some API changes for 19.08, et al. release branches?

2020-02-27 Thread Vratko Polak -X (vrpolak - PANTHEON TECHNOLOGIES at Cisco) via Lists.Fd.Io
> Adding a new API into an existing file will still change the CRC on the 
> plugin/module

Looking into a .api.json file, I see "crc" only for messages,
but the whole file also ends with a "vl_api_version" value.

> if we are aiming for binary compatibility

CRC values guard against backward-incompatible edits.
vl_api_version changes even for backward-compatible edits
(within the same .api file), so perhaps we can tolerate such a change.
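The distinction between the two signals can be illustrated with a toy hash scheme (zlib.crc32 here is illustrative; it is not VPP's actual CRC algorithm, and the message definitions are made up): adding a new message leaves every existing per-message CRC untouched, yet still changes a whole-file version computed over all definitions.

```python
# Toy illustration: per-message CRCs vs. a whole-file version value.
import zlib

def msg_crc(definition: str) -> int:
    return zlib.crc32(definition.encode())

old_msgs = {"ipsec_sad_entry_add_del": "u32 context; u8 flags;"}
# Adding a brand-new message, leaving existing ones untouched:
new_msgs = dict(old_msgs, ip_route_lookup="u32 context; u8 exact;")

def file_version(msgs):
    return zlib.crc32("".join(sorted(msgs.values())).encode())

# Existing message CRCs are unchanged (binary compatible for old clients)...
print(msg_crc(old_msgs["ipsec_sad_entry_add_del"]) ==
      msg_crc(new_msgs["ipsec_sad_entry_add_del"]))
# ...but the file-level version still changes.
print(file_version(old_msgs) != file_version(new_msgs))
```

This is the crux of the thread: a file-level version bump alone does not imply that any existing client is broken.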

Vratko.

From: vpp-dev@lists.fd.io  On Behalf Of Andrew Yourtchenko
Sent: Wednesday, February 26, 2020 6:19 PM
To: Vratko Polak -X (vrpolak - PANTHEON TECH SRO at Cisco) 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Can I submit some API changes for 19.08, et al. release 
branches?

Hmm so that’s an interesting question. Adding a new API into an existing file 
will still change the CRC on the plugin/module - which means that if we are 
aiming for binary compatibility (which is I think what we are doing) - it means 
we can only allow the one-shot addition of new .api files, right ?

--a


On 26 Feb 2020, at 17:26, Vratko Polak -X (vrpolak - PANTHEON TECH SRO at 
Cisco) <vrpo...@cisco.com> wrote:

> as soon as the CRC of any existing messages does not change, a patch is okay 
> to include into 19.08

Does that mean we want an api-crc job also for 1908 stream?
We currently have api-crc job only for master,
because other branches were not expected to edit .api files.

Vratko.

From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Andrew 
Yourtchenko
Sent: Tuesday, February 25, 2020 6:49 PM
To: Dave Barach (dbarach) <dbar...@cisco.com>
Cc: Christian Hopps <cho...@chopps.org>; vpp-dev <vpp-dev@lists.fd.io>; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Can I submit some API changes for 19.08, et al. release 
branches?

With my 19.08 release manager hat on,
For the current state of the matters my view is “as soon as the CRC of any 
existing messages does not change, a patch is okay to include into 19.08”.

This applies recursively, meaning that if one adds a new message to 1908, it 
comes in as “frozen” with no further changes. Why? Because if it is not 
perfect API-wise, it’s not stable; it should first settle down on master.

That’s the logic I had been using so far.

I am currently working on an 19.08 api translation layer  experiment that in 
principle should allow more freedom... but till that is successful that is my 
point of view. Unless the community decides otherwise, of course.

--a



On 25 Feb 2020, at 16:56, Dave Barach via Lists.Fd.Io 
<dbarach=cisco@lists.fd.io> wrote:

Dear Chris,

Adding missing flags to ipsec_sad_flags shouldn’t break much of anything. Never 
say never, but for me, I wouldn’t hesitate to merge such a patch.

Adding an entirely new message will renumber subsequent core engine messages, 
and implies client recompilation. My $0.02: again, not such a big deal.

What do other people think?

Dave

P.S. in master/latest, enum ipsec_sad_flags has moved to 
.../src/vnet/ipsec/ipsec_types.api...


From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of Christian Hopps
Sent: Tuesday, February 25, 2020 10:25 AM
To: vpp-dev <vpp-dev@lists.fd.io>
Cc: Christian Hopps <cho...@chopps.org>
Subject: [vpp-dev] Can I submit some API changes for 19.08, et al. release 
branches?

I've got a couple of changes to the ipsec API that I'd like to upstream to 
match the vpp "kernel" code I'm going try and upstream to strongswan.

1) Add: Add an ip_route_lookup/reply pair (semver minor++)
2) Fix: Add IS_INBOUND flag (value 0x40) to ipsec_sad_flags (semver patch++)
optional) Fix: possibly add the other missing flags to ipsec_sad_flags so they 
can be properly returned on queries.

I think submitting these for release branches is OK after reading 
https://wiki.fd.io/view/VPP/API_Versioning

I'm coding to 19.08 right now, if I'd like to have those changes in that branch 
I would imagine I'd need to also submit changes for 20.01 and master?

I admit to being confused about the CRC stuff, and the warnings in the 19.08.1 
release notes and what those warnings imply. Is it safe to assume the CRC stuff 
can be ignored and external clients will still work (given no semver major 
change) even if a CRC changes?

Side Note: from the API link: "If a new message type is added, the old message 
type must be maintained for at least one release. The change must be included 
in the release notes, and it would be helpful if the message could be tagged as 
deprecated in the API language and a logging message given to the user."

Given there are 3 releases per year, only maintaining an old compatible 
function for 1 release seems rather aggressive. It does say "at least" though. 
:)

Thanks,
Chris.

View/Reply Online (#15586): https://lists.fd.io/g/vpp-de

[vpp-dev] ipv6 icmp between link local address not working #vnet

2020-02-27 Thread Gigo Thomas via Lists.Fd.Io
*Ping/ICMP communication between two peers is not working when using ipv6 link 
local address*
The VPP version is 19.01.2 with the vppsb router plugin; the platform is 
Linux (Debian).

1st Device:

Linux:
root@rad:/# ip addr show vpp2
10: vpp2:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:08:a2:09:5a:fe brd ff:ff:ff:ff:ff:ff
    inet 11.11.11.11/24 brd 11.11.11.255 scope global vpp2
       valid_lft forever preferred_lft forever
    inet6 fe80::208:a2ff:fe09:5afe/64 scope link
       valid_lft forever preferred_lft forever
root@rad:/#

VPP:
vpp# show ip6 interface GigabitEthernet0/14/2
GigabitEthernet0/14/2 is admin up
  Link-local address(es): fe80::208:a2ff:fe09:5afe
  Joined group address(es): ff02::1 ff02::2 ff02::16
  Advertised Prefixes:
  MTU is 1790
  ICMP error messages are unlimited
  ICMP redirects are disabled
  ICMP unreachables are not sent
  ND DAD is disabled
  ND advertised reachable time is 0
  ND advertised retransmit interval is 0 (msec)
  ND router advertisements are sent every 200 seconds (min interval is 150)
  ND router advertisements live for 600 seconds
  Hosts don't use stateless autoconfig for addresses
  ND router advertisements sent 8
  ND router solicitations received 0
  ND router solicitations dropped 0
vpp#

2nd Device:

Linux:
root@rad:/etc/vpp# ip addr show vpp0
10: vpp0:  mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:08:a2:0c:bb:42 brd ff:ff:ff:ff:ff:ff
    inet 11.11.11.1/24 brd 11.11.11.255 scope global vpp0
       valid_lft forever preferred_lft forever
    inet6 fe80::208:a2ff:fe0c:bb42/64 scope link
       valid_lft forever preferred_lft forever
root@rad:/etc/vpp#

VPP:
vpp# show ip6 interface GigabitEthernet0/14/0
GigabitEthernet0/14/0 is admin up
  Link-local address(es): fe80::208:a2ff:fe0c:bb42
  Joined group address(es): ff02::1 ff02::2 ff02::16
  Advertised Prefixes:
  MTU is 1790
  ICMP error messages are unlimited
  ICMP redirects are disabled
  ICMP unreachables are not sent
  ND DAD is disabled
  ND advertised reachable time is 0
  ND advertised retransmit interval is 0 (msec)
  ND router advertisements are sent every 200 seconds (min interval is 150)
  ND router advertisements live for 600 seconds
  Hosts don't use stateless autoconfig for addresses
  ND router advertisements sent 8
  ND router solicitations received 0
  ND router solicitations dropped 0
vpp#

/* If ping from second device to 1st device (and vice versa) from Linux it is 
not working. */
root@ewn:/etc/vpp# ping fe80::208:a2ff:fe09:5afe%vpp0
PING fe80::208:a2ff:fe09:5afe%vpp0(fe80::208:a2ff:fe09:5afe%vpp0) 56 data bytes

--- fe80::208:a2ff:fe09:5afe%vpp0 ping statistics ---
11 packets transmitted, 0 received, 100% packet loss, time 10225ms
^Croot@ewn:/etc/vpp#

Please advise.
View/Reply Online (#15587): https://lists.fd.io/g/vpp-dev/message/15587


[vpp-dev] Coverity run FAILED as of 2020-02-27 14:00:24 UTC

2020-02-27 Thread Noreply Jenkins
Coverity run failed today.

Current number of outstanding issues are 6
Newly detected: 0
Eliminated: 0
More details can be found at  
https://scan.coverity.com/projects/fd-io-vpp/view_defects
View/Reply Online (#15588): https://lists.fd.io/g/vpp-dev/message/15588


[vpp-dev] ipv6 icmp between link local address not working #vnet

2020-02-27 Thread Gigo Thomas via Lists.Fd.Io
[Edited Message Follows]
[Reason: Ping/ICMP communication between two peers is not working when using 
ipv6 link local address The VPP version is 19.01.2 and vppsb router plugin 
used: Platform is Linux-Debian.]

*Ping/ICMP communication between two peers is not working when using ipv6 link 
local address*
The VPP version is 19.01.2 with the vppsb router plugin; the platform is 
Linux (Debian).
*1st Device:*

> 
> Linux:
> root@ewn:/# ip addr show vpp2
> 10: vpp2:  mtu 1500 qdisc fq_codel state
> UP group default qlen 1000
> link/ether 00:08:a2:09:5a:fe brd ff:ff:ff:ff:ff:ff
> inet 11.11.11.11/24 brd 11.11.11.255 scope global vpp2
> valid_lft forever preferred_lft forever
> inet6 fe80::208:a2ff:fe09:5afe/64 scope link
> valid_lft forever preferred_lft forever
> root@ewn:/#
> 
> VPP:
> vpp# show ip6 interface GigabitEthernet0/14/2
> GigabitEthernet0/14/2 is admin up
> Link-local address(es):
> fe80::208:a2ff:fe09:5afe
> Joined group address(es):
> ff02::1
> ff02::2
> ff02::16
> Advertised Prefixes:
> MTU is 1790
> ICMP error messages are unlimited
> ICMP redirects are disabled
> ICMP unreachables are not sent
> ND DAD is disabled
> ND advertised reachable time is 0
> ND advertised retransmit interval is 0 (msec)
> ND router advertisements are sent every 200 seconds (min interval is 150)
> ND router advertisements live for 600 seconds
> Hosts  don't use stateless autoconfig for addresses
> ND router advertisements sent 8
> ND router solicitations received 0
> ND router solicitations dropped 0
> vpp#
> 

*2nd Device:*

> 
> Linux:
> root@rad:/etc/vpp# ip addr show vpp0
> 10: vpp0:  mtu 1500 qdisc fq_codel state
> UP group default qlen 1000
> link/ether 00:08:a2:0c:bb:42 brd ff:ff:ff:ff:ff:ff
> inet 11.11.11.1/24 brd 11.11.11.255 scope global vpp0
> valid_lft forever preferred_lft forever
> inet6 fe80::208:a2ff:fe0c:bb42/64 scope link
> valid_lft forever preferred_lft forever
> root@rad:/etc/vpp#
> vpp# show ip6 interface GigabitEthernet0/14/0
> GigabitEthernet0/14/0 is admin up
> Link-local address(es):
> fe80::208:a2ff:fe0c:bb42
> Joined group address(es):
> ff02::1
> ff02::2
> ff02::16
> Advertised Prefixes:
> MTU is 1790
> ICMP error messages are unlimited
> ICMP redirects are disabled
> ICMP unreachables are not sent
> ND DAD is disabled
> ND advertised reachable time is 0
> ND advertised retransmit interval is 0 (msec)
> ND router advertisements are sent every 200 seconds (min interval is 150)
> ND router advertisements live for 600 seconds
> Hosts  don't use stateless autoconfig for addresses
> ND router advertisements sent 8
> ND router solicitations received 0
> ND router solicitations dropped 0
> vpp#
> 
> /* If ping from second device to 1st device(and vice versa) from Linux it
> is not working. */
> root@ewn:/etc/vpp# ping fe80::208:a2ff:fe09:5afe%vpp0
> PING fe80::208:a2ff:fe09:5afe%vpp0(fe80::208:a2ff:fe09:5afe%vpp0) 56 data
> bytes
> 
> --- fe80::208:a2ff:fe09:5afe%vpp0 ping statistics ---
> 11 packets transmitted, 0 received, 100% packet loss, time 10225ms
> 
> ^Croot@ewn:/etc/vpp#
> 

Please advise.


[vpp-dev] ipv6 icmp between link local address not working #vnet

2020-02-27 Thread Gigo Thomas via Lists.Fd.Io
[Edited Message Follows]

*Ping/ICMP communication between two peers is not working when using ipv6 link 
local address*
The VPP version is 19.01.2 and vppsb router plugin used: Platform is 
Linux-Debian.
*1st Device:*

> 
> Linux:
> root@ewn:/# ip addr show vpp2
> 10: vpp2:  mtu 1500 qdisc fq_codel state
> UP group default qlen 1000
> link/ether 00:08:a2:09:5a:fe brd ff:ff:ff:ff:ff:ff
> inet 11.11.11.11/24 brd 11.11.11.255 scope global vpp2
> valid_lft forever preferred_lft forever
> inet6 fe80::208:a2ff:fe09:5afe/64 scope link
> valid_lft forever preferred_lft forever
> root@ewn:/#
> 
> VPP:
> vpp# show ip6 interface GigabitEthernet0/14/2
> GigabitEthernet0/14/2 is admin up
> Link-local address(es):
> fe80::208:a2ff:fe09:5afe
> Joined group address(es):
> ff02::1
> ff02::2
> ff02::16
> Advertised Prefixes:
> MTU is 1790
> ICMP error messages are unlimited
> ICMP redirects are disabled
> ICMP unreachables are not sent
> ND DAD is disabled
> ND advertised reachable time is 0
> ND advertised retransmit interval is 0 (msec)
> ND router advertisements are sent every 200 seconds (min interval is 150)
> ND router advertisements live for 600 seconds
> Hosts  don't use stateless autoconfig for addresses
> ND router advertisements sent 8
> ND router solicitations received 0
> ND router solicitations dropped 0
> vpp#
> 

*2nd Device:*

> 
> Linux:
> root@rad:/etc/vpp# ip addr show vpp0
> 10: vpp0:  mtu 1500 qdisc fq_codel state
> UP group default qlen 1000
> link/ether 00:08:a2:0c:bb:42 brd ff:ff:ff:ff:ff:ff
> inet 11.11.11.1/24 brd 11.11.11.255 scope global vpp0
> valid_lft forever preferred_lft forever
> inet6 fe80::208:a2ff:fe0c:bb42/64 scope link
> valid_lft forever preferred_lft forever
> root@rad:/etc/vpp#
> vpp# show ip6 interface GigabitEthernet0/14/0
> GigabitEthernet0/14/0 is admin up
> Link-local address(es):
> fe80::208:a2ff:fe0c:bb42
> Joined group address(es):
> ff02::1
> ff02::2
> ff02::16
> Advertised Prefixes:
> MTU is 1790
> ICMP error messages are unlimited
> ICMP redirects are disabled
> ICMP unreachables are not sent
> ND DAD is disabled
> ND advertised reachable time is 0
> ND advertised retransmit interval is 0 (msec)
> ND router advertisements are sent every 200 seconds (min interval is 150)
> ND router advertisements live for 600 seconds
> Hosts  don't use stateless autoconfig for addresses
> ND router advertisements sent 8
> ND router solicitations received 0
> ND router solicitations dropped 0
> vpp#
> 
> 

Pinging from the second device to the first (and vice versa) from Linux does 
not work:
root@ewn:/etc/vpp# ping fe80::208:a2ff:fe09:5afe%vpp0
PING fe80::208:a2ff:fe09:5afe%vpp0(fe80::208:a2ff:fe09:5afe%vpp0) 56 data bytes

--- fe80::208:a2ff:fe09:5afe%vpp0 ping statistics ---
11 packets transmitted, 0 received, 100% packet loss, time 10225ms

^Croot@ewn:/etc/vpp#
Please advise.
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15587): https://lists.fd.io/g/vpp-dev/message/15587
Mute This Topic: https://lists.fd.io/mt/71587151/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Can I submit some API changes for 19.08, et al. release branches?

2020-02-27 Thread Christian Hopps

> On Feb 27, 2020, at 6:26 AM, Neale Ranns (nranns)  wrote:
> 
>
> Hi Chris,
>
> Adding an IS_INBOUND flag could be non-backward compatible if not setting the 
> INBOUND flag on an SA and then using it in an inbound context resulted in an 
> error being returned to the user, so existing clients would be obligated to 
> set this new flag. If that’s not the case, and clients don’t have to set the 
> flag, then I’d ask the question: can the flag be set later, by VPP, when the 
> SA is used in an inbound context?

tl;dr - no error generated, behavior remains the same for current clients.

Older clients won't get any error from not setting the flag. Currently (well, 
in 19.08) the flag not being set causes state to be prepared for outbound 
packets (FIB entries and template headers get created). This setup is skipped 
if the INBOUND flag is set, but as the flag doesn't exist in the current API 
that can't happen.

Adding the missing flag allows clients to avoid setting up this unneeded state. 
Also, I didn't look too closely at the implications of this state (the FIB 
entries?) being set up for INBOUND SAs -- avoiding it may be more important than 
I'm making it sound.

FWIW, in my case, I actually start running multiple workers which immediately 
start allocating and outputting packets when an outbound SA is created, so the 
impact is a bit more severe for me.

Thanks,
Chris.

> Regards,
> neale
>
>
> From:  on behalf of Christian Hopps 
> Date: Tuesday 25 February 2020 at 16:25
> To: vpp-dev 
> Cc: Christian Hopps 
> Subject: [vpp-dev] Can I submit some API changes for 19.08, et al. release 
> branches?
>
> I've got a couple of changes to the ipsec API that I'd like to upstream to 
> match the vpp "kernel" code I'm going to try to upstream to strongswan.
> 
> 1) Add: Add an ip_route_lookup/reply pair (semver minor++)
> 2) Fix: Add IS_INBOUND flag (value 0x40) to ipsec_sad_flags (semver patch++)
> 3) Fix (optional): possibly add the other missing flags to ipsec_sad_flags so 
> they can be properly returned on queries.
> 
> I think submitting these for release branches is OK after reading 
> https://wiki.fd.io/view/VPP/API_Versioning
> 
> I'm coding to 19.08 right now; if I'd like to have those changes in that 
> branch, I imagine I'd need to also submit changes for 20.01 and master?
> 
> I admit to being confused about the CRC stuff, and the warnings in the 
> 19.08.1 release notes and what those warnings imply. Is it safe to assume the 
> CRC stuff can be ignored and external clients will still work (given no 
> semver major change) even if a CRC changes?
> 
> Side Note: from the API link: "If a new message type is added, the old 
> message type must be maintained for at least one release. The change must be 
> included in the release notes, and it would be helpful if the message could 
> be tagged as deprecated in the API language and a logging message given to 
> the user."
> 
> Given there are 3 releases per year, only maintaining an old compatible 
> function for 1 release seems rather aggressive. It does say "at least" 
> though. :)
> 
> Thanks,
> Chris.
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15589): https://lists.fd.io/g/vpp-dev/message/15589
Mute This Topic: https://lists.fd.io/mt/71535081/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Can I submit some API changes for 19.08, et al. release branches?

2020-02-27 Thread Christian Hopps


> On Feb 27, 2020, at 6:26 AM, Neale Ranns (nranns)  wrote:
> 
>  
>  
> From:  on behalf of Christian Hopps 
> Date: Tuesday 25 February 2020 at 22:09
> To: Andrew 👽 Yourtchenko 
> Cc: Christian Hopps , "Dave Barach (dbarach)" 
> , vpp-dev 
> Subject: Re: [vpp-dev] Can I submit some API changes for 19.08, et al. 
> release branches?
>  
> 
> 
> > On Feb 25, 2020, at 3:44 PM, Andrew 👽 Yourtchenko  
> > wrote:
> > 
> > That’s the APIs in the master prior to the API freeze, in my book.
> > 
> > The today’s situation is:
> > 
> > - want bleeding edge features ? Use master.
> > - want sort-of-boring-stability ? Use release branch
> > 
> > Now, I am really interested why the folks who do want to experiment with 
> > the APIs, do not pick the master ? (Say, specifically for your case).
> 
> FWIW, I'm absolutely fine following whatever guidelines, it's open source and 
> I'm sure people can make things work with whatever is best for both 
> project[s].
> 
> In my case we're not really talking about an API that is "bleeding edge", but 
> rather "waited around for someone to need/implement it". Doing a route/fib 
> entry lookup isn't very bleeding edge given what VPP does. :)
>  
> True 😊 but the general usage model for VPP is that there is one agent/client 
> giving it all the state it needs to function. So why would that agent need 
> lookup functions, it has all the data already. The dump APIs serve to 
> repopulate the agent with all state should it crash.

I suppose that's why I needed to add this API then. :)

We're using VPP more like a replacement for the kernel networking stack, with 
multiple networking clients interfacing to it, rather than just one monolithic 
application.

Thanks,
Chris.

>  
> /neale
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15590): https://lists.fd.io/g/vpp-dev/message/15590
Mute This Topic: https://lists.fd.io/mt/71535081/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Can I submit some API changes for 19.08, et al. release branches?

2020-02-27 Thread Neale Ranns via Lists.Fd.Io


From:  on behalf of Christian Hopps 
Date: Thursday 27 February 2020 at 15:16
To: "Neale Ranns (nranns)" 
Cc: Christian Hopps , Andrew 👽 Yourtchenko 
, "Dave Barach (dbarach)" , vpp-dev 

Subject: Re: [vpp-dev] Can I submit some API changes for 19.08, et al. release 
branches?

[snip]
>
> In my case we're not really talking about an API that is "bleeding edge", but 
> rather "waited around for someone to need/implement it". Doing a route/fib 
> entry lookup isn't very bleeding edge given what VPP does. :)
>
> True 😊 but the general usage model for VPP is that there is one agent/client 
> giving it all the state it needs to function. So why would that agent need 
> lookup functions, it has all the data already. The dump APIs serve to 
> repopulate the agent with all state should it crash.

I suppose that's why I needed to add this API then. :)

We're using VPP more like a replacement for the kernel networking stack, with 
multiple networking clients interfacing to it, rather than just one monolithic 
application.

Ok. Just don’t fall into the pit that is ‘I want VPP to tell client X when 
client Y does something’ – VPP is not a message bus 😊

/neale


Thanks,
Chris.

>
> /neale

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15591): https://lists.fd.io/g/vpp-dev/message/15591
Mute This Topic: https://lists.fd.io/mt/71535081/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Can I submit some API changes for 19.08, et al. release branches?

2020-02-27 Thread Christian Hopps


> On Feb 27, 2020, at 9:41 AM, Neale Ranns via Lists.Fd.Io 
>  wrote:
> 
>
>
> From:  on behalf of Christian Hopps 
> Date: Thursday 27 February 2020 at 15:16
> To: "Neale Ranns (nranns)" 
> Cc: Christian Hopps , Andrew 👽 Yourtchenko 
> , "Dave Barach (dbarach)" , vpp-dev 
> 
> Subject: Re: [vpp-dev] Can I submit some API changes for 19.08, et al. 
> release branches?
>
> [snip] 
> > 
> > In my case we're not really talking about an API that is "bleeding edge", 
> > but rather "waited around for someone to need/implement it". Doing a 
> > route/fib entry lookup isn't very bleeding edge given what VPP does. :)
> >
> > True 😊 but the general usage model for VPP is that there is one 
> > agent/client giving it all the state it needs to function. So why would 
> > that agent need lookup functions, it has all the data already. The dump 
> > APIs serve to repopulate the agent with all state should it crash.
> 
> I suppose that's why I needed to add this API then. :)
> 
> We're using VPP more like a replacement for the kernel networking stack, with 
> multiple networking clients interfacing to it, rather than just one 
> monolithic application.
> 
> Ok. Just don’t fall into the pit that is ‘I want VPP to tell client X when 
> client Y does something’ – VPP is not a message bus 😊

I'd like to be careful here. If VPP is serving as a replacement for the 
networking stack with multiple clients interfacing to it, then it does serve as 
the single source of truth for things like interface state, routes, etc. So I 
do want my IKE daemon to hear from VPP that a route, interface, etc., may have 
changed; I don't want to have to interface the IKE daemon to N possible 
pieces of software that might modify routes, interfaces, etc.

I'm only being careful here b/c if one looks at, e.g., netlink functionality and 
then at the VPP API there are some gaps (e.g., interface address add/del and 
route add/del events are missing, I believe). I'm assuming that these gaps 
exist b/c, as you say, people have generally not needed them, and not b/c there 
is a design philosophy against them.

Thanks,
Chris.

>
> /neale
>
> 
> Thanks,
> Chris.
> 
> >
> > /neale
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15592): https://lists.fd.io/g/vpp-dev/message/15592
Mute This Topic: https://lists.fd.io/mt/71535081/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] networking-vpp 20.01 for OpenStack

2020-02-27 Thread Jerome Tollet via Lists.Fd.Io
Hello,
This announcement may be of interest to you: 
http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012877.html
Jerome

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15593): https://lists.fd.io/g/vpp-dev/message/15593
Mute This Topic: https://lists.fd.io/mt/71590281/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Can I submit some API changes for 19.08, et al. release branches?

2020-02-27 Thread Neale Ranns via Lists.Fd.Io

Hi Chris,

There is a design philosophy against sending notifications to agents about 
information that comes from agents. This is in contrast to notifications to 
agents about events that occur in the data-plane, like DHCP lease, new ARP/ND, 
learned L2 addresses, etc.

/neale

From:  on behalf of Christian Hopps 
Date: Thursday 27 February 2020 at 16:32
To: "Neale Ranns (nranns)" 
Cc: Christian Hopps , "vpp-dev@lists.fd.io" 

Subject: Re: [vpp-dev] Can I submit some API changes for 19.08, et al. release 
branches?



> On Feb 27, 2020, at 9:41 AM, Neale Ranns via Lists.Fd.Io 
>  wrote:
>
>
>
> From:  on behalf of Christian Hopps 
> Date: Thursday 27 February 2020 at 15:16
> To: "Neale Ranns (nranns)" 
> Cc: Christian Hopps , Andrew 👽 Yourtchenko 
> , "Dave Barach (dbarach)" , vpp-dev 
> 
> Subject: Re: [vpp-dev] Can I submit some API changes for 19.08, et al. 
> release branches?
>
> [snip]
> >
> > In my case we're not really talking about an API that is "bleeding edge", 
> > but rather "waited around for someone to need/implement it". Doing a 
> > route/fib entry lookup isn't very bleeding edge given what VPP does. :)
> >
> > True 😊 but the general usage model for VPP is that there is one 
> > agent/client giving it all the state it needs to function. So why would 
> > that agent need lookup functions, it has all the data already. The dump 
> > APIs serve to repopulate the agent with all state should it crash.
>
> I suppose that's why I needed to add this API then. :)
>
> We're using VPP more like a replacement for the kernel networking stack, with 
> multiple networking clients interfacing to it, rather than just one 
> monolithic application.
>
> Ok. Just don’t fall into the pit that is ‘I want VPP to tell client X when 
> client Y does something’ – VPP is not a message bus 😊

I'd like to be careful here. If VPP is serving as a replacement for the 
networking stack with multiple clients interfacing to it, then it does serve as 
the single source of truth for things like interface state, routes, etc. So I 
do want my IKE daemon to hear from VPP that a route, interface, etc., may 
have changed; I don't want to have to interface the IKE daemon to N possible 
pieces of software that might modify routes, interfaces, etc.

I'm only being careful here b/c if one looks at, e.g., netlink functionality and 
then at the VPP API there are some gaps (e.g., interface address add/del and 
route add/del events are missing, I believe). I'm assuming that these gaps 
exist b/c, as you say, people have generally not needed them, and not b/c there 
is a design philosophy against them.

Thanks,
Chris.

>
> /neale
>
>
> Thanks,
> Chris.
>
> >
> > /neale
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15594): https://lists.fd.io/g/vpp-dev/message/15594
Mute This Topic: https://lists.fd.io/mt/71535081/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Q: how best to avoid locking for cleanup.

2020-02-27 Thread Honnappa Nagarahalli
I think there are similar issues in bi-hash (i.e. the entry could be deleted 
by the control plane while the data-plane threads are doing the lookup).

Thanks,
Honnappa

From: vpp-dev@lists.fd.io  On Behalf Of Christian Hopps 
via Lists.Fd.Io
Sent: Thursday, February 27, 2020 5:09 AM
To: vpp-dev 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Q: how best to avoid locking for cleanup.

I received a private message indicating that one solution was to just wait 
"long enough" for the packets to drain. This is the method I'm going to go with, 
as it's simple (albeit not as deterministic as a marking/callback scheme :)

For my case I think I can wait ridiculously long for "long enough" and just 
have a process do garbage collection after a full second.

I do wonder how many other cases of "state associated with in-flight packets" 
there might be, and whether a more sophisticated general solution might be useful.

Thanks,
Chris.

> On Feb 25, 2020, at 6:27 PM, Christian Hopps <cho...@chopps.org> wrote:
>
> I've got a (hopefully) interesting problem with locking in VPP.
>
> I need to add some cleanup code to my IPTFS additions to ipsec. Basically I 
> have some per-SA queues that I need to cleanup when an SA is deleted.
>
> - ipsec only deletes its SAs when its "fib_node" locks reach zero.
> - I'm hoping this means that ipsec will only be deleting the SA after the FIB 
> has stopped injecting packets "along" this SA path (i.e., it's removed prior 
> to the final unlock/deref).
> - I'm being called back by ipsec during the SA deletion.
> - I have queues (one RX for reordering, one TX for aggregation and subsequent 
> output) associated with the SA, both containing locks, that need to be 
> emptied and freed.
> - These queues are being used in multiple worker threads in various graph 
> nodes in parallel.
>
> What I think this means is that when my "SA deleted" callback is called, no 
> *new* packets will be delivered on the SA path. Good so far.
>
> What I'm concerned with is the packets that may currently be "in-flight" in 
> the graph, as these will have the SA associated with them, and thus my code 
> may try and use the per SA queues which I'm now trying to delete.
>
> There's a somewhat clunky solution involving global locks prior to and after 
> using an SA in each node, tracking its validity (which has its own issues), 
> freeing when no longer in use, etc., but this would introduce global locking 
> in the packet path which I'm loath to do.
>
> What I'd really like is if there was something like this:
>
> - packet ingress to SA fib node, fib node lock count increment.
> - packet completes its journey through the VPP graph (or at least my part of 
> it) and decrements that fib node lock count.
> - when the SA should be deleted, it removes its fib node from the fib, thus 
> preventing new packets entering the graph, then unlocks.
> - the SA is either immediately deleted (no packets in flight), or deleted 
> when the last packet completes its graph traversal.
>
> I could do something like this inside my own nodes (my first node is point 
> B), but then there's still the race between when the fib node is used to 
> inject the packet to the next node in the graph (point A) and that packet 
> arriving at my first IPTFS node (point B), during which the SA deletion could 
> occur. Maybe I could modify the fib code to do this at point A. I haven't 
> looked closely at the fib code yet.
>
> Anyway I suspect this has been thought about before, and maybe there's even a 
> solution already present in VPP, so I wanted to ask. :)
>
> Thanks,
> Chris.
>
>
>
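The lock-count scheme sketched in the quoted message amounts to reference counting with atomics: the FIB entry holds one reference, each in-flight packet holds another, and whoever drops the last reference frees the state. A minimal, hypothetical C sketch (not VPP's fib_node API; all names here are invented):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

/* Illustrative reference-counting sketch: every packet entering the
 * SA's path takes a reference; the deleter removes the entry point
 * and drops the control plane's reference; the last release frees. */

typedef struct {
  atomic_int refs;   /* 1 for the FIB entry + 1 per in-flight packet */
  bool in_fib;       /* whether new packets can still reach this SA */
} sa_t;

sa_t *sa_create(void) {
  sa_t *sa = malloc(sizeof(*sa));
  atomic_init(&sa->refs, 1);   /* the FIB's own reference */
  sa->in_fib = true;
  return sa;
}

/* Point A: a packet enters the SA path. */
void sa_pkt_enter(sa_t *sa) { atomic_fetch_add(&sa->refs, 1); }

/* Point B (or later): a packet leaves the graph. Returns true if
 * this release freed the SA. */
bool sa_pkt_leave(sa_t *sa) {
  if (atomic_fetch_sub(&sa->refs, 1) == 1) {
    free(sa);
    return true;
  }
  return false;
}

/* Control plane: unlink from the FIB so no new packets enter, then
 * drop the FIB's reference. Frees immediately iff nothing is in flight. */
bool sa_delete(sa_t *sa) {
  sa->in_fib = false;
  return sa_pkt_leave(sa);
}
```

The key property is that sa_delete never frees state out from under a packet: it only removes the entry point and drops the control plane's reference, so the actual free happens on whichever thread releases the final reference.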

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15595): https://lists.fd.io/g/vpp-dev/message/15595
Mute This Topic: https://lists.fd.io/mt/71544411/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Can I submit some API changes for 19.08, et al. release branches?

2020-02-27 Thread Christian Hopps


> On Feb 27, 2020, at 11:45 AM, Neale Ranns (nranns)  wrote:
> 
>
> Hi Chris,
>
> There is a design philosophy against sending notifications to agents about 
> information that comes from agents. This is in contrast to notifications to 
> agents about events that occur in the data-plane, like DHCP lease, new 
> ARP/ND, learned L2 addresses, etc.

That doesn't really help me understand and worries me more.

Would providing the same functionality as exists in the netlink socket (for 
routes and interfaces) be against VPP design philosophy?

Thanks,
Chris.



>
> /neale
>
> From:  on behalf of Christian Hopps 
> Date: Thursday 27 February 2020 at 16:32
> To: "Neale Ranns (nranns)" 
> Cc: Christian Hopps , "vpp-dev@lists.fd.io" 
> 
> Subject: Re: [vpp-dev] Can I submit some API changes for 19.08, et al. 
> release branches?
>
> 
> 
> > On Feb 27, 2020, at 9:41 AM, Neale Ranns via Lists.Fd.Io 
> >  wrote:
> > 
> >
> >
> > From:  on behalf of Christian Hopps 
> > Date: Thursday 27 February 2020 at 15:16
> > To: "Neale Ranns (nranns)" 
> > Cc: Christian Hopps , Andrew 👽 Yourtchenko 
> > , "Dave Barach (dbarach)" , vpp-dev 
> > 
> > Subject: Re: [vpp-dev] Can I submit some API changes for 19.08, et al. 
> > release branches?
> >
> > [snip] 
> > > 
> > > In my case we're not really talking about an API that is "bleeding edge", 
> > > but rather "waited around for someone to need/implement it". Doing a 
> > > route/fib entry lookup isn't very bleeding edge given what VPP does. :)
> > >
> > > True 😊 but the general usage model for VPP is that there is one 
> > > agent/client giving it all the state it needs to function. So why would 
> > > that agent need lookup functions, it has all the data already. The dump 
> > > APIs serve to repopulate the agent with all state should it crash.
> > 
> > I suppose that's why I needed to add this API then. :)
> > 
> > We're using VPP more like a replacement for the kernel networking stack, 
> > with multiple networking clients interfacing to it, rather than just one 
> > monolithic application.
> > 
> > Ok. Just don’t fall into the pit that is ‘I want VPP to tell client X when 
> > client Y does something’ – VPP is not a message bus 😊
> 
> I'd like to be careful here. If VPP is serving as a replacement for the 
> networking stack with multiple clients interfacing to it, then it does serve 
> as the single source of truth for things like interface state, routes, etc. 
> So I do want my IKE daemon to hear from VPP that a route, interface, etc., 
> may have changed; I don't want to have to interface the IKE daemon to N 
> possible pieces of software that might modify routes, interfaces, etc.
> 
> I'm only being careful here b/c if one looks at, e.g., netlink functionality 
> and then at the VPP API there are some gaps (e.g., interface address add/del 
> and route add/del events are missing, I believe). I'm assuming that these 
> gaps exist b/c, as you say, people have generally not needed them, and not 
> b/c there is a design philosophy against them.
> 
> Thanks,
> Chris.
> 
> >
> > /neale
> >
> > 
> > Thanks,
> > Chris.
> > 
> > >
> > > /neale
> > 
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15596): https://lists.fd.io/g/vpp-dev/message/15596
Mute This Topic: https://lists.fd.io/mt/71535081/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] Q: how best to avoid locking for cleanup.

2020-02-27 Thread Benoit Ganne (bganne) via Lists.Fd.Io
Unless I misunderstand something, the usual way we deal with that is the worker 
barrier as mentioned by Neale.

API calls and CLI commands are executed under this barrier unless marked as 
mp_safe (which is off by default).
When the worker barrier is requested by the main thread, all worker threads are 
drained and stopped. Then the critical section is executed, the barrier is 
released and workers resume.
So, as long as the bihash delete (or any shared data non-atomic modification) 
happens under this barrier, you do not need to take care of workers being 
active: VPP is taking care of it for you 😊

On the other hand, if you do modify shared data structures in the datapath, you 
are on your own - you need to take care of data consistency.
Again, the way we usually deal with that is to do an "rpc" to the main thread - 
then the main thread can request the worker barrier, etc.

Or do you refer to other situations?
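The drain-and-stop behaviour described above can be illustrated with a self-contained pthread sketch. This is a toy model of the idea, not VPP's vlib_worker_thread_barrier_sync implementation, and every name in it is invented: workers check in between frames, the main thread waits until all of them are parked, runs the critical section, then releases them.

```c
#include <assert.h>
#include <pthread.h>

#define N_WORKERS 2

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
static int barrier_requested, workers_parked, done;
static int shared_counter; /* stands in for shared state such as a bihash */

/* Workers call this at every frame boundary; they park here while a
 * barrier is requested. */
static void worker_checkpoint(void) {
  pthread_mutex_lock(&lock);
  while (barrier_requested) {
    workers_parked++;
    pthread_cond_broadcast(&cv); /* tell main one more worker is parked */
    while (barrier_requested)
      pthread_cond_wait(&cv, &lock);
    workers_parked--;
  }
  pthread_mutex_unlock(&lock);
}

static void *worker(void *arg) {
  (void)arg;
  for (;;) {
    worker_checkpoint();
    pthread_mutex_lock(&lock);
    int stop = done;
    pthread_mutex_unlock(&lock);
    if (stop)
      return 0;
    /* ... process one frame of packets ... */
  }
}

/* Main thread: request the barrier, wait for every worker to finish
 * its current frame and park, run the critical section, release. */
static void with_barrier(void (*critical)(void)) {
  pthread_mutex_lock(&lock);
  barrier_requested = 1;
  while (workers_parked != N_WORKERS)
    pthread_cond_wait(&cv, &lock);
  critical(); /* safe: every worker is parked at its checkpoint */
  barrier_requested = 0;
  pthread_cond_broadcast(&cv);
  pthread_mutex_unlock(&lock);
}

static void bump(void) { shared_counter++; }
```

Note this only covers state touched between frames; as discussed elsewhere in the thread, packets parked in external queues (e.g. a hardware crypto engine) are not drained by such a barrier.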

Best
Ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Honnappa
> Nagarahalli
> Sent: jeudi 27 février 2020 17:51
> To: cho...@chopps.org; vpp-dev@lists.fd.io; Honnappa Nagarahalli
> 
> Cc: nd 
> Subject: Re: [vpp-dev] Q: how best to avoid locking for cleanup.
> 
> I think there are similar issues in bi-hash (i.e. the entry could be
> deleted from control plane while the data plane threads are doing the
> lookup).
> 
> 
> 
> Thanks,
> 
> Honnappa
> 
> 
> 
> From: vpp-dev@lists.fd.io  On Behalf Of Christian
> Hopps via Lists.Fd.Io
> Sent: Thursday, February 27, 2020 5:09 AM
> To: vpp-dev 
> Cc: vpp-dev@lists.fd.io
> Subject: Re: [vpp-dev] Q: how best to avoid locking for cleanup.
> 
> 
> 
> I received a private message indicating that one solution was to just wait
> "long enough" for the packets to drain. This is the method I'm going to go
> with as it's simple (albeit not as deterministic as some marking/callback
> scheme :)
> 
> For my case I think I can wait ridiculously long for "long enough" and
> just have a process do garbage collection after a full second.
> 
> I do wonder how many other cases of "state associated with in-flight
> packets" there might be, and if more sophisticated general solution might
> be useful.
> 
> Thanks,
> Chris.
> 
> On Feb 25, 2020, at 6:27 PM, Christian Hopps <cho...@chopps.org> wrote:
> >
> > I've got a (hopefully) interesting problem with locking in VPP.
> >
> > I need to add some cleanup code to my IPTFS additions to ipsec.
> Basically I have some per-SA queues that I need to cleanup when an SA is
> deleted.
> >
> > - ipsec only deletes its SAs when its "fib_node" locks reach zero.
> > - I'm hoping this means that ipsec will only be deleting the SA after the
> FIB has stopped injecting packets "along" this SA path (i.e., it's removed
> prior to the final unlock/deref).
> > - I'm being called back by ipsec during the SA deletion.
> > - I have queues (one RX for reordering, one TX for aggregation and
> subsequent output) associated with the SA, both containing locks, that
> need to be emptied and freed.
> > - These queues are being used in multiple worker threads in various
> graph nodes in parallel.
> >
> > What I think this means is that when my "SA deleted" callback is called,
> no *new* packets will be delivered on the SA path. Good so far.
> >
> > What I'm concerned with is the packets that may currently be "in-flight"
> in the graph, as these will have the SA associated with them, and thus my
> code may try and use the per SA queues which I'm now trying to delete.
> >
> > There's a somewhat clunky solution involving global locks prior to and
> after using an SA in each node, tracking its validity (which has its own
> issues), freeing when no longer in use, etc., but this would introduce
> global locking in the packet path which I'm loath to do.
> >
> > What I'd really like is if there was something like this:
> >
> > - packet ingress to SA fib node, fib node lock count increment.
> > - packet completes its journey through the VPP graph (or at least my
> part of it) and decrements that fib node lock count.
> > - when the SA should be deleted, it removes its fib node from the fib,
> thus preventing new packets entering the graph, then unlocks.
> > - the SA is either immediately deleted (no packets in flight), or
> deleted when the last packet completes its graph traversal.
> >
> > I could do something like this inside my own nodes (my first node is
> point B), but then there's still the race between when the fib node is
> used to inject the packet to the next node in the graph (point A) and that
> packet arriving at my first IPTFS node (point B), during which the SA
> deletion could occur. Maybe I could modify the fib code to do this at point
> A. I haven't looked closely at the fib code yet.
> >
> > Anyway I suspect this has been thought about before, and maybe there's
> even a solution already present in VPP, so I wanted to ask. :)
> >
> > Thanks,
> > Chris.
> >
> >
> >

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

[vpp-dev] Published: FD.io CSIT-2001 Release Report - CSIT-2001.09 update

2020-02-27 Thread Maciek Konstantynowicz (mkonstan) via Lists.Fd.Io
Update: CSIT-2001.09 maintenance report has been published:

https://docs.fd.io/csit/rls2001/report/

Changes from last week version include:

Added VPP performance test data into Detailed Results per testbed types:
  - 2n-clx (Xeon Cascadelake): IPv4, IPv6, L2.
  - 2n-dnv (Atom Denverton): IPsec, IPv4 tunnels.
  - 3n-hsw (Xeon Haswell): IPv4, L2, vhost.

Added VPP performance tests selected for analysis and graphs per testbed 
types:
  - 2n-clx (Xeon Cascadelake): all tests.
  - 3n-tsh (Arm Cortex-A72 Taishan): all tests.

Added DPDK performance tests selected for analysis and graphs per testbed 
types:
  - 2n-clx (Xeon Cascadelake): all tests.


2n-skx and 3n-skx testbeds (Xeon Skylake) are still out of service due to the 
processor microcode issue.
For status see:


https://docs.fd.io/csit/rls2001/report/vpp_performance_tests/csit_release_notes.html

https://docs.fd.io/csit/rls2001/report/dpdk_performance_tests/csit_release_notes.html

Welcome all comments, best by email to 
csit-...@lists.fd.io.

Cheers,
-Maciek


On 14 Feb 2020, at 14:02, Maciek Konstantynowicz (mkonstan) 
<mkons...@cisco.com> wrote:

Hi All,

FD.io CSIT-2001 report has been published on FD.io 
docs site:

   https://docs.fd.io/csit/rls2001/report/

Many thanks to All in CSIT, VPP and wider FD.io community who
contributed and worked hard to make CSIT-2001 happen!

Below are three summaries:
- Intel Xeon 2n-skx, 3n-skx and 2n-clx Testbeds microcode issue.
- CSIT-2001 Release Summary, a high-level summary.
- Points of Note in CSIT-2001 Report, with specific links to report.

Welcome all comments, best by email to 
csit-...@lists.fd.io.

Cheers,
-Maciek


NOTE: Intel Xeon 2n-skx, 3n-skx and 2n-clx Testbeds microcode issue.

   VPP and DPDK performance test data is not included in this report
   version. This is due to the lower performance and behaviour
   inconsistency of these systems following the upgrade of processor
   microcode packages (skx ucode 0x264, clx ucode 0x52c), done
   as part of updating Ubuntu 18.04 LTS kernel version. Tested VPP and
   DPDK applications (L3fwd) are affected. Skx and Clx test data will
   be added in subsequent maintenance report version(s) once the issue
   is resolved. See https://jira.fd.io/browse/CSIT-1675.


CSIT-2001 Release Summary

1. CSIT-2001 Report

  - html link: https://docs.fd.io/csit/rls2001/report/
  - pdf link: 
https://docs.fd.io/csit/rls2001/report/_static/archive/csit_rls2001.pdf

2. New Tests

  - NFV density tests with IPsec encryption between DUTs.

  - Full test coverage for VPP AVF driver for Fortville NICs.

  - VPP Hoststack TCP/IP tests with wrk, iperf3 with LDPreload tests
without and with packet loss via VPP NSIM plugin), and QUIC/UDP/IP
transport tests.

  - Mellanox ConnectX5-2p100GE NICs in 2n-clx testbeds using VPP native
rdma driver.

  - Load Balancer tests.

3. Benchmarking

  - Fully onboarded new Intel Xeon Cascadelake Testbeds with x710,
xxv710 and mcx556a-edat NIC cards.

  - Added new High Dynamic Range Histogram latency measurements.

4. Infrastructure

  - Full migration of CSIT from Python2.7 to Python3.6.


Points of Note in the CSIT-2001 Report

Indexed specific links listed at the bottom.

1. VPP release notes
  a. Changes in CSIT-2001: [1]
  b. Known issues: [2]

2. VPP performance - 64B/IMIX throughput graphs (selected NIC models):
  a. Graphs explained: [3]
  b. L2 Ethernet Switching:[4]
  c. IPv4 Routing: [5]
  d. IPv6 Routing: [6]
  e. SRv6 Routing: [7]
  f. IPv4 Tunnels: [8]
  g. KVM VMs vhost-user:   [9]
  h. LXC/DRC Container Memif: [10]
  i. IPsec IPv4 Routing:  [11]
  j. Virtual Topology System: [12]

3. VPP performance - multi-core and latency graphs:
  a. Speedup Multi-Core:  [13]
  b. Latency: [14]

4. VPP performance comparisons
  a. VPP-20.01 vs. VPP-19.08:  [15]

5. VPP performance test details - all NICs:
  a. Detailed results 64B IMIX 1518B 9kB:  [16]
  b. Configuration:[17]

DPDK Testpmd and L3fwd performance sections follow a similar structure.

6. DPDK applications:
 a. Release notes:   [18]
 b. DPDK performance - 64B throughput graphs:[19]
 c. DPDK performance - latency graphs:   [20]
 d. DPDK performance - DPDK-19.08 vs. DPDK-19.05: [21]

Functional tests, includ

Re: [vpp-dev] Q: how best to avoid locking for cleanup.

2020-02-27 Thread Christian Hopps


> On Feb 27, 2020, at 6:35 AM, Neale Ranns via Lists.Fd.Io 
>  wrote:
> 
> Hi Chris,
>
> None of the APIs that result in the removal of an SA are marked as MP 
> safe. This means that the worker threads are paused at the ‘barrier’ as the 
> API is handled. Worker threads reach the barrier once they complete the frame 
> they are working on. So there are no packets in flight when the SA is deleted.

I don't think this covers the in-flight packets sitting in the DPDK crypto 
engine queue; they are enqueued by "dpdk-esp*-encrypt", and then dequeued after 
the crypto engine (HW most of the time) has encrypted them by the polling node 
"dpdk-crypto-input".

The barrier sync is not going to wait for the packets in those crypto-engine 
queues to drain. The barrier does block dpdk-crypto-input from running, and thus 
from draining those queues. So if other code in the ipsec path is making 
assumptions based on that barrier sync we might have some bugs to fix :)

It's worse for me b/c I add another queue in the packet path for timed output, 
but same problem as above.

Thanks,
Chris.

>
> One more comment inline
>
> From:  on behalf of Christian Hopps 
> Date: Wednesday 26 February 2020 at 00:28
> To: vpp-dev 
> Cc: Christian Hopps 
> Subject: [vpp-dev] Q: how best to avoid locking for cleanup.
>
> I've got a (hopefully) interesting problem with locking in VPP.
> 
> I need to add some cleanup code to my IPTFS additions to ipsec. Basically I 
> have some per-SA queues that I need to cleanup when an SA is deleted.
> 
> - ipsec only deletes its SAs when its "fib_node" locks reach zero.
> - I'm hoping this means that ipsec will only be deleting the SA after the FIB 
> has stopped injecting packets "along" this SA path (i.e., it's removed prior 
> to the final unlock/deref).
> 
> It means that there are no other objects that refer to this SA.
> 
> The fib_node serves as an objects linkage into the FIB graph. In the FIB 
> graph nomenclature children ‘point to’ their one parent and parents have many 
> children. In the data-plane children typically pass packets to their parents, 
> e.g. a route is a child of an adjacency.
> 
> An SA has no children. It gets packets from entities that are not linked into 
> the FIB graph – for SAs I think it is always via an input or output feature. 
> An SA, in tunnel mode used in an SPD policy, is a child of the route that 
> resolves the encap destination – this allows it to track where to send 
> packets post encap and thus elide the lookup in the DP.
> 
> /neale
> 
> 
> - I'm being called back by ipsec during the SA deletion.
> - I have queues (one RX for reordering, one TX for aggregation and subsequent 
> output) associated with the SA, both containing locks, that need to be 
> emptied and freed.
> - These queues are being used in multiple worker threads in various graph 
> nodes in parallel.
> 
> What I think this means is that when my "SA deleted" callback is called, no 
> *new* packets will be delivered on the SA path. Good so far.
> 
> What I'm concerned with is the packets that may currently be "in-flight" in 
> the graph, as these will have the SA associated with them, and thus my code 
> may try and use the per SA queues which I'm now trying to delete.
> 
> There's a somewhat clunky solution involving global locks prior to and after 
> using an SA in each node, tracking its validity (which has its own issues), 
> freeing when no longer in use, etc., but this would introduce global locking 
> in the packet path, which I'm loath to do.
> 
> What I'd really like is if there was something like this:
> 
> - packet ingress to SA fib node, fib node lock count increment.
> - packet completes its journey through the VPP graph (or at least my part of 
> it) and decrements that fib node lock count.
> - when the SA should be deleted it removes its fib node from the fib, thus 
> preventing new packets from entering the graph, then unlocks.
> - the SA is either immediately deleted (no packets in flight), or deleted 
> when the last packet completes its graph traversal.
> 
> I could do something like this inside my own nodes (my first node is point 
> B), but then there's still the race between when the fib node is used to 
> inject the packet to the next node in the graph (point A) and that packet 
> arriving at my first IPTFS node (point B), when the SA deletion could occur. 
> Maybe I could modify the fib code to do this at point A. I haven't looked 
> closely at the fib code yet.
> 
> Anyway I suspect this has been thought about before, and maybe there's even a 
> solution already present in VPP, so I wanted to ask. :)
> 
> Thanks,
> Chris.
> 
> 
> 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#15599): https://lists.fd.io/g/vpp-dev/message/15599
Mute This Topic: https://lists.fd.io/mt/71544411/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[vpp-dev] Why memfd not use hugepage memory?

2020-02-27 Thread 汪翰林
Hi all,
I found that memfd uses normal memory rather than hugepage memory, but 
clib_mem_vm_ext_alloc can now support hugepage memory. Would hugepage memory 
improve memfd performance?


Regards,
Hanlin


Sent from NetEase Mail Master


Re: [vpp-dev] Q: how best to avoid locking for cleanup.

2020-02-27 Thread Honnappa Nagarahalli


> 
> Unless I misunderstand something, the usual way we deal with that is the
> worker barrier as mentioned by Neale.
> 
> API calls and CLI commands are executed under this barrier unless marked as
> mp_safe (which is off by default).
> When the worker barrier is requested by the main thread, all worker threads
> are drained and stopped. Then the critical section is executed, the barrier is
> released and workers resume.
> So, as long as the bihash delete (or any shared data non-atomic modification)
> happens under this barrier, you do not need to take care of workers being
> active: VPP is taking care of it for you 😊
Thank you, I was not aware of this. Wouldn't this result in packet drops on 
the data plane if the critical section is large? For example: searching through 
the bi-hash table for a particular entry to delete.

> 
> On the other hand, if you do modify shared data structures in the datapath,
> you are on your own - you need to take care of the data consistency.
> Again, the way we usually deal with that is to do a "rpc" to the main thread -
> then the main thread can request the worker barrier, etc.
> 
> Or do you refer to other situations?
I was looking at the bi-hash library on a standalone basis. The entries are 
deleted and buckets are freed without any synchronization between the writer 
and the data plane threads. If the synchronization is handled outside of this 
library using 'worker barrier' it should be alright.

> 
> Best
> Ben
> 
> > -Original Message-
> > From: vpp-dev@lists.fd.io  On Behalf Of Honnappa
> > Nagarahalli
> > Sent: jeudi 27 février 2020 17:51
> > To: cho...@chopps.org; vpp-dev@lists.fd.io; Honnappa Nagarahalli
> > 
> > Cc: nd 
> > Subject: Re: [vpp-dev] Q: how best to avoid locking for cleanup.
> >
> > I think there are similar issues in bi-hash (i.e. the entry could be
> > deleted from control plane while the data plane threads are doing the
> > lookup).
> >
> >
> >
> > Thanks,
> >
> > Honnappa
> >
> >
> >
> > From: vpp-dev@lists.fd.io  On Behalf Of Christian
> > Hopps via Lists.Fd.Io
> > Sent: Thursday, February 27, 2020 5:09 AM
> > To: vpp-dev 
> > Cc: vpp-dev@lists.fd.io
> > Subject: Re: [vpp-dev] Q: how best to avoid locking for cleanup.
> >
> >
> >
> > I received a private message indicating that one solution was to just
> > wait "long enough" for the packets to drain. This is the method I'm
> > going to go with as it's simple (albeit not as deterministic as some
> > marking/callback scheme :)
> >
> > For my case I think I can wait ridiculously long for "long enough" and
> > just have a process do garbage collection after a full second.
> >
> > I do wonder how many other cases of "state associated with in-flight
> > packets" there might be, and if more sophisticated general solution
> > might be useful.
> >
> > Thanks,
> > Chris.
> >
> > > On Feb 25, 2020, at 6:27 PM, Christian Hopps  >  > wrote:
> > >
> > > I've got a (hopefully) interesting problem with locking in VPP.
> > >
> > > I need to add some cleanup code to my IPTFS additions to ipsec.
> > Basically I have some per-SA queues that I need to cleanup when an SA
> > is deleted.
> > >
> > > - ipsec only deletes its SAs when its "fib_node" locks reach zero.
> > > - I'm hoping this means that ipsec will only be deleting the SA after
> > > the
> > FIB has stopped injecting packets "along" this SA path (i.e., it's
> > removed prior to the final unlock/deref).
> > > - I'm being called back by ipsec during the SA deletion.
> > > - I have queues (one RX for reordering, one TX for aggregation and
> > subsequent output) associated with the SA, both containing locks, that
> > need to be emptied and freed.
> > > - These queues are being used in multiple worker threads in various
> > graph nodes in parallel.
> > >
> > > What I think this means is that when my "SA deleted" callback is
> > > called,
> > no *new* packets will be delivered on the SA path. Good so far.
> > >
> > > What I'm concerned with is the packets that may currently be "in-flight"
> > in the graph, as these will have the SA associated with them, and thus
> > my code may try and use the per SA queues which I'm now trying to delete.
> > >
> > > There's a somewhat clunky solution involving global locks prior to
> > > and
> > after using an SA in each node, tracking its validity (which has its
> > own issues), freeing when no longer in use, etc., but this would
> > introduce global locking in the packet path, which I'm loath to do.
> > >
> > > What I'd really like is if there was something like this:
> > >
> > > - packet ingress to SA fib node, fib node lock count increment.
> > > - packet completes its journey through the VPP graph (or at least
> > > my
> > part of it) and decrements that fib node lock count.
> > > - when the SA should be deleted it removes its fib node from the
> > > fib,
> > thus preventing new packets entering the graph then unlocks.
> > > - the SA is either immediately deleted (no packets in flight),