Thanks Benoit and Neale.
I used loopback interfaces, but since they are in different VRFs, I couldn't
establish an L3 connection between them. I will try pipe interfaces.
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#22419):
Hi,
I am looking for a veth-pair alternative in VPP. I want to pass traffic between
two VRFs and I want it to be processed as input packets to an interface (to
use NAT or other IP features), so route leaking is not going to help. Also, since
I'm using one instance of VPP with multiple VRFs, using
Thank you. Enabling DPD is not necessary to reproduce the issue. You can use
the `swanctl --list-sas` command to query SAs and see the problem.
Hi Kai,
Thanks for your response. Yes, your understanding is correct, and
`stat_segment_connect` is called only when querying SAs. This query occurs
only 1) when you ask for SA status and 2) before DPD messages are sent. I'm
afraid using version 5.9.6 did not change anything. May I ask if
Hi,
I used the plugin residing in `extras/vpp_sswan` on both strongSwan 5.9.8 and
5.8.2. All functionalities work great, but after Child SAs are established,
there is constant memory growth in the charon process (going from 20 MB RSS
to 46 MB RSS in 4 days, but this growth is
Hi VPP folks,
Less than 2 years ago, a new feature named `input policing` was added to VPP,
and 7 months ago the output counterpart was added, which enables a policer on
all the incoming/outgoing traffic on an interface.
Here is a similar patch to add class-based policing as an output feature:
Thanks Benoit.
This patch solved the problem.
Hi Benoit,
Because of VPP's crash, I'm afraid I can't capture the pcap, but I'll share
with you a packet trace of a normal interface with an IGMP packet.
Normally, yes, the packet goes through `ip4-lookup`, but here the path is:
`ip4_input_inline` -> `ip4_input_check_x2` -> `check_ver_opt_csum`.
Hi VPP folks,
Recently I ran into a problem: receiving IGMP packets over a GRE tunnel
protected by IPsec in transport mode sometimes causes a VPP crash.
The crash happens in the `ip4-local` node. Using a debug image, I realized the
problem was caused by an invalid FIB index passed to `fib_get()`
Hello folks,
Looking at the `wg_peer_assign_thread` function, one can see the logic behind
peer thread assignment, which is: "Use the same thread that the packet has come
from, unless it is the main thread. In the main-thread case, randomly choose a
worker thread for the handoff."
I understand assigning all peers
Hi Andrew,
Thanks for your response. That makes sense. I will monitor my box's memory
usage. Unfortunately I'm using VPP 20.05, so I will try to forward-port (do we
have that term? :D) this patch to it.
Hi VPP folks,
Setting an ACL from VAPI, we get a panic, `ACL plugin failed to allocate lookup
heap of %U bytes`, in the `hash_acl_set_heap` function.
It doesn't always happen; the problem occurs randomly from time to time. My
system has 8 GB of RAM. VPP is running with the default `startup.conf`. I've
Hi Filip,
Thanks for your answer. I'm glad to hear that.
I understand the difficulties with the `ikev2_initiate_sa_init` return value, and
I don't think there is a feasible solution for it because of dependencies on a
source outside VPP. Maybe events are the best choice.
Regards,
Mahdi
Hello VPP folks!
I'm using the VPP 20.01 stable release. Regarding the IKEv2 plugin, I've got some
questions about its design and applications (present and future).
The first thing I've noticed is the API: there are no dumps nor status for
profiles/SAs. I issue an `ikev2_initiate_sa_init`, and it
Hello,
I noticed once that the `vapi_recv` function, called from `vapi_dispatch_one`, is
called like this:
vapi_recv (ctx, &msg, &size, SVM_Q_WAIT, 0);
and one time (it happened just once and I couldn't reproduce it) the code
froze in `pthread_cond_wait`, called from `svm_queue_wait_inline`,
and when I