Re: [vpp-dev] Assertion failure triggered by "ip mroute add" command (master branch)

2020-06-03 Thread Elias Rudberg
Hi Ben!

> It is probably a bug but I could not reproduce it.
> Note that commit 30cca512c (build: remove valgrind
> leftovers, 2019-11-25) is present in stable/2001
> so probably not the culprit...

Agreed.

> Can you share how you built VPP and your complete startup.conf?
> You seem to be running those commands from startup.conf directly.

Yes, I had those three commands in a file and then pointed to that file
as "exec /path/to/file" in the unix { } part of startup.conf.
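For reference, a startup.conf along those lines would look roughly like this (a minimal sketch; the exec line is the relevant part, while the other unix options and all paths are illustrative):

```
unix {
  nodaemon
  log /var/log/vpp/vpp.log
  cli-listen /run/vpp/cli.sock
  # run the CLI commands (create int rdma ..., set int ip address ...,
  # ip mroute add ...) from this file at startup
  exec /path/to/file
}
```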

Anyway, I got inspired and debugged the issue further myself: the
problem seems to be that the variable payload_proto in
vnet_ip_mroute_cmd() is never set, so it ends up with whatever garbage
value happened to be on the stack.

My test works correctly after initializing it to zero, like this:

--- a/src/vnet/ip/lookup.c
+++ b/src/vnet/ip/lookup.c
@@ -661,7 +661,7 @@ vnet_ip_mroute_cmd (vlib_main_t * vm,
   unformat_input_t _line_input, *line_input = &_line_input;
   fib_route_path_t rpath, *rpaths = NULL;
   clib_error_t *error = NULL;
-  u32 table_id, is_del, payload_proto;
+  u32 table_id, is_del, payload_proto = 0;

If you want to reproduce the problem, you can simply set
payload_proto=77 (or whatever) instead of payload_proto=0 there, to
mimic garbage on the stack.

Just setting payload_proto = 0 is probably not a good fix, though; I guess
that just means hard-coding the FIB_PROTOCOL_IP4 value, which happens to
work in my case.

To fix it properly, I think payload_proto should be set to the
appropriate protocol in the different "else if" clauses: whenever
pfx.fp_proto is set, payload_proto should also be set, the same
way it is done in the vnet_ip_route_cmd() function.
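To illustrate the fix pattern (purely a sketch, not VPP code: the enum values and parse function here are invented for illustration), assigning the protocol in every branch that establishes the prefix removes any dependency on stack garbage:

```c
#include <assert.h>
#include <string.h>

/* Illustrative sketch of the bug class and fix, not actual VPP code.
 * The fix mirrors vnet_ip_route_cmd(): assign payload_proto in every
 * "else if" clause that sets the prefix protocol, so its value never
 * depends on whatever happened to be on the stack. */
enum { PROTO_IP4 = 0, PROTO_IP6 = 1 };

static int
parse_prefix (const char *token, unsigned *payload_proto)
{
  if (strcmp (token, "ip4-prefix") == 0)
    {
      *payload_proto = PROTO_IP4;	/* set together with fp_proto */
      return 0;
    }
  else if (strcmp (token, "ip6-prefix") == 0)
    {
      *payload_proto = PROTO_IP6;	/* set together with fp_proto */
      return 0;
    }
  return -1;	/* unknown token: caller must not use payload_proto */
}
```

With this shape, initializing payload_proto to zero becomes unnecessary: every successful parse path assigns it explicitly.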

I pushed a fix like that to gerrit, please have a look: 
https://gerrit.fd.io/r/c/vpp/+/27416

Best regards,
Elias

P.S.
By the way, do you think an address sanitizer could be used to find this
kind of bug?
(Or perhaps a compiler option to poison the stack at each
function call, or something like that. I think it's a common problem
that code relies on uninitialized memory being zero; that can
go undetected for a long time because the memory often does happen to
be zero, so forcing it to something nonzero could help detect such bugs.)
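Such options do exist: Clang (and GCC 12+) support -ftrivial-auto-var-init=pattern, which fills locals with a nonzero byte pattern, and Clang's MemorySanitizer (-fsanitize=memory) reports reads of uninitialized memory at runtime. The sketch below simulates the pattern fill by hand to show why it flushes out code that silently relies on zeroed stack memory (the function and values are invented for illustration):

```c
#include <assert.h>

enum { PROTO_IP4 = 0, PROTO_IP6 = 1 };

/* A buggy routine that forgets to assign *proto on one path,
 * mimicking the payload_proto bug above. */
static void
buggy_parse (int known, unsigned *proto)
{
  if (known)
    *proto = PROTO_IP6;
  /* bug: nothing assigned when !known */
}
```

With a zero-filled slot the stale value 0 masquerades as PROTO_IP4 and everything appears to work; with a 0xAA pattern fill (what -ftrivial-auto-var-init=pattern produces) the stale value matches no valid protocol, so validation trips immediately and the missing assignment becomes visible.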

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16649): https://lists.fd.io/g/vpp-dev/message/16649
Mute This Topic: https://lists.fd.io/mt/74649468/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP forwarding packets not destined to it #vpp

2020-06-03 Thread John Lo (loj) via lists.fd.io
We can use “show node counters”, which should display a counter for packets
dropped due to MAC mismatch.  -John

From: Nagaraju Vemuri 
Sent: Wednesday, June 03, 2020 3:10 PM
To: John Lo (loj) 
Cc: Andrew  Yourtchenko ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP forwarding packets not destined to it #vpp

Also, do we have any counters to validate this patch?

On Wed, Jun 3, 2020 at 11:41 AM John Lo (loj) wrote:
Hi Nagaraju,

No extra config is required beyond the standard L3 setup you already have, with
an IP address/subnet on your interface.  Such an L3 interface should drop packets
whose unicast DMAC does not match the interface MAC.  If you can pull/clone the
latest VPP, either the master or stable/2005 branch, and build it, the image
should have my patch included.  Please let us know whether it solves your problem.

Regards,
John
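The behavior John describes boils down to a destination-MAC check on L3 interfaces: accept broadcast/multicast and frames addressed to the interface's own MAC, and drop other unicast frames. A minimal sketch of that filter (illustrative only; VPP's real check lives in its ethernet-input node and this is not its API):

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Sketch of an L3 DMAC filter: accept group-addressed frames (the low
 * bit of the first DMAC byte marks multicast/broadcast) and unicast
 * frames addressed to this interface; drop everything else. */
static bool
accept_frame (const uint8_t dmac[6], const uint8_t if_mac[6])
{
  if (dmac[0] & 1)			/* multicast or broadcast */
    return true;
  return memcmp (dmac, if_mac, 6) == 0;	/* unicast: must be ours */
}
```

A frame failing this check is counted and dropped rather than handed to the L3 lookup, which is the fix's observable effect in “show node counters”.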

From: Nagaraju Vemuri
Sent: Wednesday, June 03, 2020 1:52 PM
To: Andrew Yourtchenko
Cc: John Lo (loj); vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP forwarding packets not destined to it #vpp

Sure Andrew.
I will help with that.

Do I need to configure something in VPP with this patch to drop such packets?

Thanks,
Nagaraju


On Wed, Jun 3, 2020 at 10:48 AM Andrew Yourtchenko wrote:
20.05.1. The fix was ready just a little too late to be safe to merge right at
the moment of the release, so given the size of the patch and the fact that the
issue had been there for a couple of releases already, I made the call to
postpone it until the first dot release.

As for the timing of 20.05.1 - still TBD.

Would you be able to build VPP in your own environment and give feedback on
whether John’s fix addresses the issue you are seeing?

--a

On 3 Jun 2020, at 19:23, Nagaraju Vemuri wrote:

Thanks John.

Which release will have your fixes?


On Wed, Jun 3, 2020 at 10:21 AM John Lo (loj) wrote:
I recently submitted two patches, one for master and the other for stable/2005,
to fix an issue with L3 virtual interfaces not filtering input packets with a
wrong unicast MAC address:
https://gerrit.fd.io/r/c/vpp/+/27027
https://gerrit.fd.io/r/c/vpp/+/27311

Perhaps it is the issue you are hitting.

Regards,
John

From: Nagaraju Vemuri
Sent: Wednesday, June 03, 2020 1:06 PM
To: John Lo (loj)
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP forwarding packets not destined to it #vpp

Hi John,

Sorry, I should have been clearer.

We are using virtual machines (KVM-based) on which VPP runs.
KVM/qemu creates a bridge (using brctl) on the physical machine and creates TAP
interfaces from this bridge for the virtual machines' (VMs') networking.

We run VPP on the VMs and configure the interfaces with L3 IP addresses.
When we send traffic, this Linux bridge forwards it from an interface on one
VM to an interface on a different VM.
If the bridge has no MAC-to-port binding info, it floods packets to all
interfaces, so all VPP instances receive these packets.
And a VPP instance whose MAC does not match the packet just forwards the
packet again.
We want VPP to drop a packet if its destination MAC doesn't match the VPP
interfaces' MAC addresses.

Hope I am clear now.

Thanks,
Nagaraju
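The flooding Nagaraju describes is standard learning-bridge behavior: a frame whose destination MAC has no forwarding-table entry is sent out every port except the one it arrived on. A toy model of that logic (invented names, unrelated to the kernel bridge or VPP code):

```c
#include <stdint.h>
#include <string.h>

#define N_PORTS  4
#define FDB_SIZE 16

/* Toy MAC forwarding table for a learning bridge. */
struct fdb_entry { uint8_t mac[6]; int port; int valid; };
static struct fdb_entry fdb[FDB_SIZE];

/* Learn: remember which port a source MAC was seen on. */
static void
fdb_learn (const uint8_t mac[6], int port)
{
  for (int i = 0; i < FDB_SIZE; i++)
    if (!fdb[i].valid || memcmp (fdb[i].mac, mac, 6) == 0)
      {
	memcpy (fdb[i].mac, mac, 6);
	fdb[i].port = port;
	fdb[i].valid = 1;
	return;
      }
}

/* Forward: a known DMAC goes to exactly one port; an unknown DMAC is
 * flooded to every port except the ingress one -- which is why every
 * VPP instance on the bridge sees these frames.  Returns a bitmask of
 * egress ports. */
static unsigned
bridge_forward (const uint8_t dmac[6], int in_port)
{
  for (int i = 0; i < FDB_SIZE; i++)
    if (fdb[i].valid && memcmp (fdb[i].mac, dmac, 6) == 0)
      return 1u << fdb[i].port;
  return ((1u << N_PORTS) - 1) & ~(1u << in_port);
}
```

Once the bridge learns a MAC-to-port binding (from return traffic), flooding stops for that destination; until then, every attached VPP must discard frames not addressed to it, which is exactly what John's patch makes L3 interfaces do.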



On Wed, Jun 3, 2020 at 8:53 AM John Lo (loj) wrote:
Please clarify the following:

> When the bridge has no binding info about MAC-to-port, bridge is flooding 
> packets to all interfaces.

  1.  Is this a Linux bridge in the kernel, i.e. not a bridge domain inside
VPP?
  2.  So packets are flooded to all interfaces in the bridge. Are you saying
each of the interfaces is on a separate VPP instance?

> Hence VPP receives some packets whose MAC address is owned by some other VPP 
> instance.
> We want to drop such packets. By default VPP is forwarding these packets.

  1.  How is VPP receiving packets from its interface and forwarding them?
  2.  Is the interface in L3 mode with an IP address/subnet configured?
  3.  It can be helpful to provide “show interface addr” output or, even
better, a packet trace from VPP showing how one or more of the packets are
received and forwarded.

Regards,
John

From: vpp-dev@lists.fd.io On Behalf Of Nagaraju Vemuri
Sent: Tuesday, June 02, 2020 8:13 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP forwarding packets not destined to it #vpp


Hi,

We are using a Linux bridge to connect different interfaces owned by different
VPP instances.
When the bridge has no MAC-to-port binding info, it floods packets to all
interfaces.
Hence VPP receives some packets whose MAC address is owned by some other VPP
instance.
We want to drop such packets. By default VPP is forwarding these packets.

Re: [vpp-dev] VPP forwarding packets not destined to it #vpp

2020-06-03 Thread John Lo (loj) via lists.fd.io
Hi Nagaraju,

No extra config is required beyond the standard L3 setup you already have, with
an IP address/subnet on your interface.  Such an L3 interface should drop packets
whose unicast DMAC does not match the interface MAC.  If you can pull/clone the
latest VPP, either the master or stable/2005 branch, and build it, the image
should have my patch included.  Please let us know whether it solves your problem.

Regards,
John

From: Nagaraju Vemuri 
Sent: Wednesday, June 03, 2020 1:52 PM
To: Andrew  Yourtchenko 
Cc: John Lo (loj) ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP forwarding packets not destined to it #vpp

Sure Andrew.
I will help with that.

Do I need to configure something in VPP with this patch to drop such packets?

Thanks,
Nagaraju


On Wed, Jun 3, 2020 at 10:48 AM Andrew Yourtchenko wrote:
20.05.1. The fix was ready just a little too late to be safe to merge right at
the moment of the release, so given the size of the patch and the fact that the
issue had been there for a couple of releases already, I made the call to
postpone it until the first dot release.

As for the timing of 20.05.1 - still TBD.

Would you be able to build VPP in your own environment and give feedback on
whether John’s fix addresses the issue you are seeing?

--a


On 3 Jun 2020, at 19:23, Nagaraju Vemuri wrote:

Thanks John.

Which release will have your fixes?


On Wed, Jun 3, 2020 at 10:21 AM John Lo (loj) wrote:
I recently submitted two patches, one for master and the other for stable/2005,
to fix an issue with L3 virtual interfaces not filtering input packets with a
wrong unicast MAC address:
https://gerrit.fd.io/r/c/vpp/+/27027
https://gerrit.fd.io/r/c/vpp/+/27311

Perhaps it is the issue you are hitting.

Regards,
John

From: Nagaraju Vemuri
Sent: Wednesday, June 03, 2020 1:06 PM
To: John Lo (loj)
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP forwarding packets not destined to it #vpp

Hi John,

Sorry, I should have been clearer.

We are using virtual machines (KVM-based) on which VPP runs.
KVM/qemu creates a bridge (using brctl) on the physical machine and creates TAP
interfaces from this bridge for the virtual machines' (VMs') networking.

We run VPP on the VMs and configure the interfaces with L3 IP addresses.
When we send traffic, this Linux bridge forwards it from an interface on one
VM to an interface on a different VM.
If the bridge has no MAC-to-port binding info, it floods packets to all
interfaces, so all VPP instances receive these packets.
And a VPP instance whose MAC does not match the packet just forwards the
packet again.
We want VPP to drop a packet if its destination MAC doesn't match the VPP
interfaces' MAC addresses.

Hope I am clear now.

Thanks,
Nagaraju



On Wed, Jun 3, 2020 at 8:53 AM John Lo (loj) wrote:
Please clarify the following:

> When the bridge has no binding info about MAC-to-port, bridge is flooding 
> packets to all interfaces.

  1.  Is this a Linux bridge in the kernel, i.e. not a bridge domain inside
VPP?
  2.  So packets are flooded to all interfaces in the bridge. Are you saying
each of the interfaces is on a separate VPP instance?

> Hence VPP receives some packets whose MAC address is owned by some other VPP 
> instance.
> We want to drop such packets. By default VPP is forwarding these packets.

  1.  How is VPP receiving packets from its interface and forwarding them?
  2.  Is the interface in L3 mode with an IP address/subnet configured?
  3.  It can be helpful to provide “show interface addr” output or, even
better, a packet trace from VPP showing how one or more of the packets are
received and forwarded.

Regards,
John

From: vpp-dev@lists.fd.io On Behalf Of Nagaraju Vemuri
Sent: Tuesday, June 02, 2020 8:13 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP forwarding packets not destined to it #vpp


Hi,

We are using a Linux bridge to connect different interfaces owned by different
VPP instances.
When the bridge has no MAC-to-port binding info, it floods packets to all
interfaces.
Hence VPP receives some packets whose MAC address is owned by some other VPP
instance.
We want to drop such packets. By default VPP is forwarding these packets.

We tried using "set interface l2 forward  disable", but this did not 
help.

Please suggest what we can do.

Thanks,
Nagaraju


--
Thanks,
Nagaraju Vemuri


--
Thanks,
Nagaraju Vemuri



--
Thanks,
Nagaraju Vemuri
View/Reply Online (#16646): https://lists.fd.io/g/vpp-dev/message/16646

Re: [vpp-dev] VPP forwarding packets not destined to it #vpp

2020-06-03 Thread Nagaraju Vemuri
Sure Andrew.
I will help with that.

Do I need to configure something in VPP with this patch to drop such
packets?

Thanks,
Nagaraju


On Wed, Jun 3, 2020 at 10:48 AM Andrew  Yourtchenko 
wrote:

> 20.05.1. The fix was ready just a little bit too late to be a safe to
> merge right at the moment of the release, so given the size of the patch
> and that the issue was there for a couple of releases already I made a call
> to postpone it till the first dot release.
>
> As for the timing for the 20.05.1 - still TBD.
>
> Would you be able to build the VPP in your own environment and give the
> feedback whether John’s fix addresses the issue you are seeing ?
>
> --a
>
> On 3 Jun 2020, at 19:23, Nagaraju Vemuri  wrote:
>
> 
> Thanks John.
>
> Which release will have your fixes?
>
>
> On Wed, Jun 3, 2020 at 10:21 AM John Lo (loj)  wrote:
>
>> I recently submitted two patches, one for master and the other for
>> stable/2005, to fix an issue with L3 virtual interfaces not filter input
>> packets with wrong unicast MAC address:
>>
>> https://gerrit.fd.io/r/c/vpp/+/27027
>>
>> https://gerrit.fd.io/r/c/vpp/+/27311
>>
>>
>>
>> Perhaps it is the issue you are hitting.
>>
>>
>>
>> Regards,
>>
>> John
>>
>>
>>
>> *From:* Nagaraju Vemuri 
>> *Sent:* Wednesday, June 03, 2020 1:06 PM
>> *To:* John Lo (loj) 
>> *Cc:* vpp-dev@lists.fd.io
>> *Subject:* Re: [vpp-dev] VPP forwarding packets not destined to it #vpp
>>
>>
>>
>> Hi John,
>>
>>
>>
>> Sorry, I should have been more clear.
>>
>>
>>
>> We are using Virtual machines(KVM based) on which VPP runs.
>>
>> KVM qemu creates bridge (using brctl) on physical machine and creates TAP
>> interfaces from this bridge for Virtual Machines(VMs) networking.
>>
>>
>>
>> We run VPP on VMs and configure interfaces with L3 IP address.
>>
>> When we send traffic, this linux bridge forwards traffic from one
>> interface of VM to another interface on a different VM.
>>
>> If the bridge has no mac-to-port binding info, it is forwarding packets
>> to all interfaces, so all VPPs receive these packets.
>>
>> And the VPP whose MAC is not matching with this packet, just forwards
>> this packet again.
>>
>> We want VPP to drop a packet if the destination MAC doesnt match with VPP
>> interfaces MAC addresses.
>>
>>
>>
>> Hope I am clear now.
>>
>>
>>
>> Thanks,
>>
>> Nagaraju
>>
>>
>>
>>
>>
>>
>>
>> On Wed, Jun 3, 2020 at 8:53 AM John Lo (loj)  wrote:
>>
>> Please clarify the following:
>>
>>
>>
>> > When the bridge has no binding info about MAC-to-port, bridge is
>> flooding packets to all interfaces.
>>
>>1. Is this linux bridge that’s in the kernel so not a bridge domain
>>inside VPP?
>>2. So packets are flooded to all interfaces in the bridge. Are you
>>saying each of the interface is on a separate VPP instance?
>>
>>
>>
>> > Hence VPP receives some packets whose MAC address is owned by some
>> other VPP instance.
>> > We want to drop such packets. By default VPP is forwarding these
>> packets.
>>
>>1. How is VPP receiving packets from its interface and forwarding
>>them?
>>2. Is the interface in L3 mode with an IP address/subnet configured?
>>3. It can be helpful to provide “show interface addr” output or, even
>>better, provide a packet trace from VPP on how one or more of the packet 
>> is
>>received and forwarded.
>>
>>
>>
>> Regards,
>>
>> John
>>
>>
>>
>> *From:* vpp-dev@lists.fd.io  *On Behalf Of *Nagaraju
>> Vemuri
>> *Sent:* Tuesday, June 02, 2020 8:13 PM
>> *To:* vpp-dev@lists.fd.io
>> *Subject:* [vpp-dev] VPP forwarding packets not destined to it #vpp
>>
>>
>>
>> Hi,
>>
>> We are using linux bridge to connect different interfaces owned by
>> different VPP instances.
>> When the bridge has no binding info about MAC-to-port, bridge is flooding
>> packets to all interfaces.
>> Hence VPP receives some packets whose MAC address is owned by some other
>> VPP instance.
>> We want to drop such packets. By default VPP is forwarding these packets.
>>
>> We tried using "set interface l2 forward  disable", but this
>> did not help.
>>
>> Please suggest what we can do.
>>
>>
>> Thanks,
>> Nagaraju
>>
>>
>>
>>
>> --
>>
>> Thanks,
>> Nagaraju Vemuri
>>
>
>
> --
> Thanks,
> Nagaraju Vemuri
> 
>
>

-- 
Thanks,
Nagaraju Vemuri
View/Reply Online (#16644): https://lists.fd.io/g/vpp-dev/message/16644


Re: [vpp-dev] VPP forwarding packets not destined to it #vpp

2020-06-03 Thread Andrew Yourtchenko
20.05.1. The fix was ready just a little too late to be safe to merge right at
the moment of the release, so given the size of the patch and the fact that the
issue had been there for a couple of releases already, I made the call to
postpone it until the first dot release.

As for the timing of 20.05.1 - still TBD.

Would you be able to build VPP in your own environment and give feedback on
whether John’s fix addresses the issue you are seeing?

--a

>> On 3 Jun 2020, at 19:23, Nagaraju Vemuri  wrote:
> 
> Thanks John.
> 
> Which release will have your fixes?
> 
> 
>> On Wed, Jun 3, 2020 at 10:21 AM John Lo (loj)  wrote:
>> I recently submitted two patches, one for master and the other for 
>> stable/2005, to fix an issue with L3 virtual interfaces not filter input 
>> packets with wrong unicast MAC address:
>> 
>> https://gerrit.fd.io/r/c/vpp/+/27027
>> 
>> https://gerrit.fd.io/r/c/vpp/+/27311
>> 
>>
>> 
>> Perhaps it is the issue you are hitting.
>> 
>>
>> 
>> Regards,
>> 
>> John
>> 
>>
>> 
>> From: Nagaraju Vemuri  
>> Sent: Wednesday, June 03, 2020 1:06 PM
>> To: John Lo (loj) 
>> Cc: vpp-dev@lists.fd.io
>> Subject: Re: [vpp-dev] VPP forwarding packets not destined to it #vpp
>> 
>>
>> 
>> Hi John,
>> 
>>
>> 
>> Sorry, I should have been more clear.
>> 
>>
>> 
>> We are using Virtual machines(KVM based) on which VPP runs.
>> 
>> KVM qemu creates bridge (using brctl) on physical machine and creates TAP 
>> interfaces from this bridge for Virtual Machines(VMs) networking.
>> 
>>
>> 
>> We run VPP on VMs and configure interfaces with L3 IP address.
>> 
>> When we send traffic, this linux bridge forwards traffic from one interface 
>> of VM to another interface on a different VM.
>> 
>> If the bridge has no mac-to-port binding info, it is forwarding packets to 
>> all interfaces, so all VPPs receive these packets.
>> 
>> And the VPP whose MAC is not matching with this packet, just forwards this 
>> packet again.
>> 
>> We want VPP to drop a packet if the destination MAC doesnt match with VPP 
>> interfaces MAC addresses.
>> 
>>
>> 
>> Hope I am clear now.
>> 
>>
>> 
>> Thanks,
>> 
>> Nagaraju
>> 
>>
>> 
>>
>> 
>>
>> 
>> On Wed, Jun 3, 2020 at 8:53 AM John Lo (loj)  wrote:
>> 
>> Please clarify the following:
>> 
>>
>> 
>> > When the bridge has no binding info about MAC-to-port, bridge is flooding 
>> > packets to all interfaces.
>> 
>> Is this linux bridge that’s in the kernel so not a bridge domain inside VPP?
>> So packets are flooded to all interfaces in the bridge. Are you saying each 
>> of the interface is on a separate VPP instance?
>>
>> 
>> > Hence VPP receives some packets whose MAC address is owned by some other 
>> > VPP instance.
>> > We want to drop such packets. By default VPP is forwarding these packets.
>> 
>> How is VPP receiving packets from its interface and forwarding them? 
>> Is the interface in L3 mode with an IP address/subnet configured? 
>> It can be helpful to provide “show interface addr” output or, even better, 
>> provide a packet trace from VPP on how one or more of the packet is received 
>> and forwarded.
>>
>> 
>> Regards,
>> 
>> John
>> 
>>
>> 
>> From: vpp-dev@lists.fd.io  On Behalf Of Nagaraju Vemuri
>> Sent: Tuesday, June 02, 2020 8:13 PM
>> To: vpp-dev@lists.fd.io
>> Subject: [vpp-dev] VPP forwarding packets not destined to it #vpp
>> 
>>
>> 
>> Hi,
>> 
>> We are using linux bridge to connect different interfaces owned by different 
>> VPP instances.
>> When the bridge has no binding info about MAC-to-port, bridge is flooding 
>> packets to all interfaces.
>> Hence VPP receives some packets whose MAC address is owned by some other VPP 
>> instance.
>> We want to drop such packets. By default VPP is forwarding these packets.
>> 
>> We tried using "set interface l2 forward  disable", but this did 
>> not help.
>> 
>> Please suggest what we can do.
>> 
>> 
>> Thanks,
>> Nagaraju
>> 
>> 
>> 
>>
>> 
>> --
>> 
>> Thanks,
>> Nagaraju Vemuri
>> 
> 
> 
> -- 
> Thanks,
> Nagaraju Vemuri
> 
View/Reply Online (#16643): https://lists.fd.io/g/vpp-dev/message/16643


Re: [vpp-dev] VPP forwarding packets not destined to it #vpp

2020-06-03 Thread Nagaraju Vemuri
Thanks John.

Which release will have your fixes?


On Wed, Jun 3, 2020 at 10:21 AM John Lo (loj)  wrote:

> I recently submitted two patches, one for master and the other for
> stable/2005, to fix an issue with L3 virtual interfaces not filter input
> packets with wrong unicast MAC address:
>
> https://gerrit.fd.io/r/c/vpp/+/27027
>
> https://gerrit.fd.io/r/c/vpp/+/27311
>
>
>
> Perhaps it is the issue you are hitting.
>
>
>
> Regards,
>
> John
>
>
>
> *From:* Nagaraju Vemuri 
> *Sent:* Wednesday, June 03, 2020 1:06 PM
> *To:* John Lo (loj) 
> *Cc:* vpp-dev@lists.fd.io
> *Subject:* Re: [vpp-dev] VPP forwarding packets not destined to it #vpp
>
>
>
> Hi John,
>
>
>
> Sorry, I should have been more clear.
>
>
>
> We are using Virtual machines(KVM based) on which VPP runs.
>
> KVM qemu creates bridge (using brctl) on physical machine and creates TAP
> interfaces from this bridge for Virtual Machines(VMs) networking.
>
>
>
> We run VPP on VMs and configure interfaces with L3 IP address.
>
> When we send traffic, this linux bridge forwards traffic from one
> interface of VM to another interface on a different VM.
>
> If the bridge has no mac-to-port binding info, it is forwarding packets to
> all interfaces, so all VPPs receive these packets.
>
> And the VPP whose MAC is not matching with this packet, just forwards this
> packet again.
>
> We want VPP to drop a packet if the destination MAC doesnt match with VPP
> interfaces MAC addresses.
>
>
>
> Hope I am clear now.
>
>
>
> Thanks,
>
> Nagaraju
>
>
>
>
>
>
>
> On Wed, Jun 3, 2020 at 8:53 AM John Lo (loj)  wrote:
>
> Please clarify the following:
>
>
>
> > When the bridge has no binding info about MAC-to-port, bridge is
> flooding packets to all interfaces.
>
>1. Is this linux bridge that’s in the kernel so not a bridge domain
>inside VPP?
>2. So packets are flooded to all interfaces in the bridge. Are you
>saying each of the interface is on a separate VPP instance?
>
>
>
> > Hence VPP receives some packets whose MAC address is owned by some other
> VPP instance.
> > We want to drop such packets. By default VPP is forwarding these packets.
>
>1. How is VPP receiving packets from its interface and forwarding
>them?
>2. Is the interface in L3 mode with an IP address/subnet configured?
>3. It can be helpful to provide “show interface addr” output or, even
>better, provide a packet trace from VPP on how one or more of the packet is
>received and forwarded.
>
>
>
> Regards,
>
> John
>
>
>
> *From:* vpp-dev@lists.fd.io  *On Behalf Of *Nagaraju
> Vemuri
> *Sent:* Tuesday, June 02, 2020 8:13 PM
> *To:* vpp-dev@lists.fd.io
> *Subject:* [vpp-dev] VPP forwarding packets not destined to it #vpp
>
>
>
> Hi,
>
> We are using linux bridge to connect different interfaces owned by
> different VPP instances.
> When the bridge has no binding info about MAC-to-port, bridge is flooding
> packets to all interfaces.
> Hence VPP receives some packets whose MAC address is owned by some other
> VPP instance.
> We want to drop such packets. By default VPP is forwarding these packets.
>
> We tried using "set interface l2 forward  disable", but this
> did not help.
>
> Please suggest what we can do.
>
>
> Thanks,
> Nagaraju
>
>
>
>
> --
>
> Thanks,
> Nagaraju Vemuri
>


-- 
Thanks,
Nagaraju Vemuri
View/Reply Online (#16642): https://lists.fd.io/g/vpp-dev/message/16642


Re: [vpp-dev] Assertion failure triggered by "ip mroute add" command (master branch)

2020-06-03 Thread Benoit Ganne (bganne) via lists.fd.io
Hi Elias,

It is probably a bug but I could not reproduce it. Note that commit 30cca512c
(build: remove valgrind leftovers, 2019-11-25) is present in stable/2001, so it
is probably not the culprit...
Can you share how you built VPP and your complete startup.conf? You seem to be
running those commands from startup.conf directly.

Best
ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Elias Rudberg
> Sent: mercredi 3 juin 2020 15:48
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Assertion failure triggered by "ip mroute add" command
> (master branch)
> 
> Hello VPP experts,
> 
> There seems to be a problem with "ip mroute add" causing assertion
> failure. This happens for the current master branch and the stable/2005
> branch, but not for stable/1908 and stable/2001.
> 
> Doing the following is enough to see the problem:
> 
> create int rdma host-if enp101s0f1 name Interface101
> set int ip address Interface101 10.0.0.1/24
> ip mroute add 224.0.0.1 via Interface101 Accept
> 
> The "ip mroute add" command there then causes an assertion failure.
> Backtrace:
> 
> Thread 1 "vpp_main" received signal SIGABRT, Aborted.
> __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
> 51  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
> (gdb) bt
> #0  __GI_raise (sig=sig@entry=6) at
> ../sysdeps/unix/sysv/linux/raise.c:51
> #1  0x74629801 in __GI_abort () at abort.c:79
> #2  0x004071a3 in os_panic () at vpp/src/vpp/vnet/main.c:371
> #3  0x755085b9 in debugger () at vpp/src/vppinfra/error.c:84
> #4  0x75508337 in _clib_error (how_to_die=2, function_name=0x0,
> line_number=0, fmt=0x776b04b0 "%s:%d (%s) assertion `%s' fails")
> at vpp/src/vppinfra/error.c:143
> #5  0x774d1ed8 in dpo_proto_to_fib (dpo_proto=255) at
> vpp/src/vnet/fib/fib_types.c:353
> #6  0x77504111 in fib_path_attached_get_adj
> (path=0x7fffb602cda0, link=255, dpo=0x7fffa6f3c2e8) at
> vpp/src/vnet/fib/fib_path.c:721
> #7  0x775038fa in fib_path_resolve (path_index=15) at
> vpp/src/vnet/fib/fib_path.c:1949
> #8  0x774f6a18 in fib_path_list_paths_add (path_list_index=13,
> rpaths=0x7fffb6523b40) at vpp/src/vnet/fib/fib_path_list.c:902
> #9  0x775c795a in mfib_entry_src_paths_add
> (msrc=0x7fffb6527c10, rpaths=0x7fffb6523b40) at
> vpp/src/vnet/mfib/mfib_entry.c:754
> #10 0x775c764e in mfib_entry_path_update (mfib_entry_index=1,
> source=MFIB_SOURCE_CLI, rpaths=0x7fffb6523b40) at
> vpp/src/vnet/mfib/mfib_entry.c:1009
> #11 0x775ce98a in mfib_table_entry_paths_update_i (fib_index=0,
> prefix=0x7fffa6f3c720, source=MFIB_SOURCE_CLI, rpaths=0x7fffb6523b40)
> at vpp/src/vnet/mfib/mfib_table.c:318
> #12 0x775ce643 in mfib_table_entry_path_update (fib_index=0,
> prefix=0x7fffa6f3c720, source=MFIB_SOURCE_CLI, rpath=0x7fffb5ffa330)
> at vpp/src/vnet/mfib/mfib_table.c:335
> #13 0x76f18ce2 in vnet_ip_mroute_cmd (vm=0x763969c0
> , main_input=0x7fffa6f3cf18, cmd=0x7fffb5efced0) at
> vpp/src/vnet/ip/lookup.c:819
> #14 0x76093139 in vlib_cli_dispatch_sub_commands
> (vm=0x763969c0 , cm=0x76396bf0
> , input=0x7fffa6f3cf18, parent_command_index=463)
> at vpp/src/vlib/cli.c:568
> #15 0x76092fdd in vlib_cli_dispatch_sub_commands
> (vm=0x763969c0 , cm=0x76396bf0
> , input=0x7fffa6f3cf18, parent_command_index=0)
> at vpp/src/vlib/cli.c:528
> #16 0x7609218f in vlib_cli_input (vm=0x763969c0
> , input=0x7fffa6f3cf18, function=0x0, function_arg=0)
> at vpp/src/vlib/cli.c:667
> #17 0x7616180b in startup_config_process (vm=0x763969c0
> , rt=0x7fffb4a9c480, f=0x0) at
> vpp/src/vlib/unix/main.c:366
> #18 0x760dd704 in vlib_process_bootstrap (_a=140736226945080)
> at vpp/src/vlib/main.c:1502
> #19 0x7552c744 in clib_calljmp () at
> vpp/src/vppinfra/longjmp.S:123
> #20 0x7fffb4d06830 in ?? ()
> #21 0x760dd2a2 in vlib_process_startup (vm=0x288,
> p=0xcd5b1d5112dc20, f=0xb4d069a0) at vpp/src/vlib/main.c:1524
> #22 0x0030b6523520 in ?? ()
> #23 0x002f in ?? ()
> #24 0x0035b4d429c0 in ?? ()
> #25 0x0034 in ?? ()
> #26 0x77b775b4 in vlibapi_get_main () at
> vpp/src/vlibapi/api_common.h:385
> Backtrace stopped: previous frame inner to this frame (corrupt stack?)
> (gdb)
> 
> The code at the assertion at fib_types.c:353 looks like this:
> 
> fib_protocol_t
> dpo_proto_to_fib (dpo_proto_t dpo_proto)
> {
> switch (dpo_proto)
> {
> case DPO_PROTO_IP6:
> return (FIB_PROTOCOL_IP6);
> case DPO_PROTO_IP4:
> return (FIB_PROTOCOL_IP4);
> case DPO_PROTO_MPLS:
> return (FIB_PROTOCOL_MPLS);
> default:
> break;
> }
> ASSERT(0);   <--- this assertion is triggered
> return (0);
> }
> 
> so apparently dpo_proto does not have any of the allowed values.
> 
> Testing earlier commits in the git history pointed to the following
> 

Re: [vpp-dev] VPP forwarding packets not destined to it #vpp

2020-06-03 Thread John Lo (loj) via lists.fd.io
I recently submitted two patches, one for master and the other for stable/2005,
to fix an issue with L3 virtual interfaces not filtering input packets with a
wrong unicast MAC address:
https://gerrit.fd.io/r/c/vpp/+/27027
https://gerrit.fd.io/r/c/vpp/+/27311

Perhaps it is the issue you are hitting.

Regards,
John

From: Nagaraju Vemuri 
Sent: Wednesday, June 03, 2020 1:06 PM
To: John Lo (loj) 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP forwarding packets not destined to it #vpp

Hi John,

Sorry, I should have been clearer.

We are using virtual machines (KVM-based) on which VPP runs.
KVM/qemu creates a bridge (using brctl) on the physical machine and creates TAP
interfaces from this bridge for the virtual machines' (VMs') networking.

We run VPP on the VMs and configure the interfaces with L3 IP addresses.
When we send traffic, this Linux bridge forwards it from an interface on one
VM to an interface on a different VM.
If the bridge has no MAC-to-port binding info, it floods packets to all
interfaces, so all VPP instances receive these packets.
And a VPP instance whose MAC does not match the packet just forwards the
packet again.
We want VPP to drop a packet if its destination MAC doesn't match the VPP
interfaces' MAC addresses.

Hope I am clear now.

Thanks,
Nagaraju



On Wed, Jun 3, 2020 at 8:53 AM John Lo (loj) wrote:
Please clarify the following:

> When the bridge has no binding info about MAC-to-port, bridge is flooding 
> packets to all interfaces.

  1.  Is this linux bridge that’s in the kernel so not a bridge domain inside 
VPP?
  2.  So packets are flooded to all interfaces in the bridge. Are you saying 
each of the interface is on a separate VPP instance?

> Hence VPP receives some packets whose MAC address is owned by some other VPP 
> instance.
> We want to drop such packets. By default VPP is forwarding these packets.

  1.  How is VPP receiving packets from its interface and forwarding them?
  2.  Is the interface in L3 mode with an IP address/subnet configured?
  3.  It can be helpful to provide “show interface addr” output or, even 
better, provide a packet trace from VPP on how one or more of the packet is 
received and forwarded.

Regards,
John

From: vpp-dev@lists.fd.io 
mailto:vpp-dev@lists.fd.io>> On Behalf Of Nagaraju Vemuri
Sent: Tuesday, June 02, 2020 8:13 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP forwarding packets not destined to it #vpp


Hi,

We are using linux bridge to connect different interfaces owned by different 
VPP instances.
When the bridge has no binding info about MAC-to-port, bridge is flooding 
packets to all interfaces.
Hence VPP receives some packets whose MAC address is owned by some other VPP 
instance.
We want to drop such packets. By default VPP is forwarding these packets.

We tried using "set interface l2 forward  disable", but this did not 
help.

Please suggest what we can do.

Thanks,
Nagaraju


--
Thanks,
Nagaraju Vemuri
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16641): https://lists.fd.io/g/vpp-dev/message/16641
Mute This Topic: https://lists.fd.io/mt/74640593/21656
Mute #vpp: https://lists.fd.io/mk?hashtag=vpp=1480452
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [vpp-dev] VPP forwarding packets not destined to it #vpp

2020-06-03 Thread Balaji Venkatraman via lists.fd.io
Hi Nagaraju,

Perhaps you need to disable it on the interface in question?
Seems like it is enabled by default.

Thanks!
--
Balaji

set bridge-domain flood
Summary/usage
set bridge-domain flood  [disable].
Description
Layer 2 flooding can be enabled and disabled on each interface and on each 
bridge-domain. Use this command to manage bridge-domains. It is enabled by 
default.
Example usage
Example of how to enable flooding (where 200 is the bridge-domain-id):
vpp# set bridge-domain flood 200
Example of how to disable flooding (where 200 is the bridge-domain-id):
vpp# set bridge-domain flood 200 disable


--
Regards,
Balaji.


From:  on behalf of Nagaraju Vemuri 

Date: Wednesday, June 3, 2020 at 10:06 AM
To: "John Lo (loj)" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] VPP forwarding packets not destined to it #vpp

View/Reply Online (#16639): https://lists.fd.io/g/vpp-dev/message/16639


Re: [vpp-dev] VPP forwarding packets not destined to it #vpp

2020-06-03 Thread Nagaraju Vemuri
Hi John,

Sorry, I should have been clearer.

We are running VPP on KVM-based virtual machines.
KVM/QEMU creates a bridge (using brctl) on the physical machine and attaches TAP
interfaces from this bridge for virtual machine (VM) networking.

We run VPP on the VMs and configure the interfaces with L3 IP addresses.
When we send traffic, this Linux bridge forwards it from an interface on one
VM to an interface on a different VM.
If the bridge has no MAC-to-port binding info, it floods packets to
all interfaces, so all VPP instances receive these packets.
And a VPP instance whose MAC does not match the packet just forwards the
packet again.
We want VPP to drop a packet if its destination MAC doesn't match any of the
VPP interfaces' MAC addresses.

Hope I am clear now.

Thanks,
Nagaraju



On Wed, Jun 3, 2020 at 8:53 AM John Lo (loj)  wrote:



-- 
Thanks,
Nagaraju Vemuri
View/Reply Online (#16638): https://lists.fd.io/g/vpp-dev/message/16638


Re: [**EXTERNAL**] [vpp-dev] VPP fast-path vs slow-path

2020-06-03 Thread Damjan Marion via lists.fd.io


It was just my decision to optimize the code for IPv4, IPv6 and MPLS traffic, 
knowing that 99.99% of traffic falls into those categories.



> On 3 Jun 2020, at 18:22, Gudimetla, Leela Sankar via lists.fd.io 
>  wrote:
> 
> Gentle reminder on this. Any info would be much appreciated.

View/Reply Online (#16637): https://lists.fd.io/g/vpp-dev/message/16637


Re: [**EXTERNAL**] [vpp-dev] VPP fast-path vs slow-path

2020-06-03 Thread Gudimetla, Leela Sankar via lists.fd.io
Gentle reminder on this. Any info would be much appreciated.

Thanks,
Leela sankar Gudimetla
Embedded Software Engineer 3 |  Ciena
San Jose, CA, USA
M | +1.408.904.2160
[Ciena Logo]


From:  on behalf of "Gudimetla, Leela Sankar via 
lists.fd.io" 
Reply-To: Leela Gudimetla 
Date: Monday, June 1, 2020 at 3:28 PM
To: "vpp-dev@lists.fd.io" 
Subject: [**EXTERNAL**] [vpp-dev] VPP fast-path vs slow-path

Hi,

I came across a comment in the file src/vnet/ethernet/node.c regarding 
fast-path vs slow-path.

   /* fastpath - in l3 mode hadles ip4, ip6 and mpls packets, other packets
   are considered as slowpath, in l2 mode all untagged packets are
   considered as fastpath */

This makes me wonder about the VPP view of fast-path vs slow-path.
Based on the comment, does it mean that the processing of all untagged and 
L3 packets counts as fast-path, and all remaining packets are considered 
slow-path?

On some networking ASICs, any packet (whether L2 or L3) that is processed 
entirely inside the packet pipeline from rx-interface to tx-interface is called 
fast-path, and a packet punted to the CPU for further processing is called 
slow-path.

This may not be a generic or standard definition, but with respect to networking 
switches and routers, I see this description used fairly widely.
So I am wondering whether VPP has a different view of it.

Can someone please share VPP's view on fast-path vs slow-path? Does 
it differ based on the received packet type, i.e. L2 vs L3?

Thanks,
Leela sankar

View/Reply Online (#16636): https://lists.fd.io/g/vpp-dev/message/16636


Re: [vpp-dev] VPP forwarding packets not destined to it #vpp

2020-06-03 Thread John Lo (loj) via lists.fd.io
Please clarify the following:

> When the bridge has no binding info about MAC-to-port, bridge is flooding 
> packets to all interfaces.

  1.  Is this a Linux bridge in the kernel, i.e. not a bridge domain inside 
VPP?
  2.  So packets are flooded to all interfaces in the bridge. Are you saying 
each of the interfaces is on a separate VPP instance?

> Hence VPP receives some packets whose MAC address is owned by some other VPP 
> instance.
> We want to drop such packets. By default VPP is forwarding these packets.

  1.  How is VPP receiving packets from its interface and forwarding them?
  2.  Is the interface in L3 mode with an IP address/subnet configured?
  3.  It can be helpful to provide “show interface addr” output or, even 
better, a packet trace from VPP showing how one or more of the packets is 
received and forwarded.

Regards,
John

From: vpp-dev@lists.fd.io  On Behalf Of Nagaraju Vemuri
Sent: Tuesday, June 02, 2020 8:13 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP forwarding packets not destined to it #vpp


Hi,

We are using linux bridge to connect different interfaces owned by different 
VPP instances.
When the bridge has no binding info about MAC-to-port, bridge is flooding 
packets to all interfaces.
Hence VPP receives some packets whose MAC address is owned by some other VPP 
instance.
We want to drop such packets. By default VPP is forwarding these packets.

We tried using "set interface l2 forward  disable", but this did not 
help.

Please suggest what we can do.

Thanks,
Nagaraju
View/Reply Online (#16635): https://lists.fd.io/g/vpp-dev/message/16635


[vpp-dev] Assertion failure triggered by "ip mroute add" command (master branch)

2020-06-03 Thread Elias Rudberg
Hello VPP experts,

There seems to be a problem with "ip mroute add" causing assertion
failure. This happens for the current master branch and the stable/2005
branch, but not for stable/1908 and stable/2001.

Doing the following is enough to see the problem:

create int rdma host-if enp101s0f1 name Interface101
set int ip address Interface101 10.0.0.1/24
ip mroute add 224.0.0.1 via Interface101 Accept

The "ip mroute add" command there then causes an assertion failure.
Backtrace:

Thread 1 "vpp_main" received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
51  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at
../sysdeps/unix/sysv/linux/raise.c:51
#1  0x74629801 in __GI_abort () at abort.c:79
#2  0x004071a3 in os_panic () at vpp/src/vpp/vnet/main.c:371
#3  0x755085b9 in debugger () at vpp/src/vppinfra/error.c:84
#4  0x75508337 in _clib_error (how_to_die=2, function_name=0x0,
line_number=0, fmt=0x776b04b0 "%s:%d (%s) assertion `%s' fails")
at vpp/src/vppinfra/error.c:143
#5  0x774d1ed8 in dpo_proto_to_fib (dpo_proto=255) at
vpp/src/vnet/fib/fib_types.c:353
#6  0x77504111 in fib_path_attached_get_adj
(path=0x7fffb602cda0, link=255, dpo=0x7fffa6f3c2e8) at
vpp/src/vnet/fib/fib_path.c:721
#7  0x775038fa in fib_path_resolve (path_index=15) at
vpp/src/vnet/fib/fib_path.c:1949
#8  0x774f6a18 in fib_path_list_paths_add (path_list_index=13,
rpaths=0x7fffb6523b40) at vpp/src/vnet/fib/fib_path_list.c:902
#9  0x775c795a in mfib_entry_src_paths_add
(msrc=0x7fffb6527c10, rpaths=0x7fffb6523b40) at
vpp/src/vnet/mfib/mfib_entry.c:754
#10 0x775c764e in mfib_entry_path_update (mfib_entry_index=1,
source=MFIB_SOURCE_CLI, rpaths=0x7fffb6523b40) at
vpp/src/vnet/mfib/mfib_entry.c:1009
#11 0x775ce98a in mfib_table_entry_paths_update_i (fib_index=0,
prefix=0x7fffa6f3c720, source=MFIB_SOURCE_CLI, rpaths=0x7fffb6523b40)
at vpp/src/vnet/mfib/mfib_table.c:318
#12 0x775ce643 in mfib_table_entry_path_update (fib_index=0,
prefix=0x7fffa6f3c720, source=MFIB_SOURCE_CLI, rpath=0x7fffb5ffa330)
at vpp/src/vnet/mfib/mfib_table.c:335
#13 0x76f18ce2 in vnet_ip_mroute_cmd (vm=0x763969c0
, main_input=0x7fffa6f3cf18, cmd=0x7fffb5efced0) at
vpp/src/vnet/ip/lookup.c:819
#14 0x76093139 in vlib_cli_dispatch_sub_commands
(vm=0x763969c0 , cm=0x76396bf0
, input=0x7fffa6f3cf18, parent_command_index=463)
at vpp/src/vlib/cli.c:568
#15 0x76092fdd in vlib_cli_dispatch_sub_commands
(vm=0x763969c0 , cm=0x76396bf0
, input=0x7fffa6f3cf18, parent_command_index=0)
at vpp/src/vlib/cli.c:528
#16 0x7609218f in vlib_cli_input (vm=0x763969c0
, input=0x7fffa6f3cf18, function=0x0, function_arg=0)
at vpp/src/vlib/cli.c:667
#17 0x7616180b in startup_config_process (vm=0x763969c0
, rt=0x7fffb4a9c480, f=0x0) at
vpp/src/vlib/unix/main.c:366
#18 0x760dd704 in vlib_process_bootstrap (_a=140736226945080)
at vpp/src/vlib/main.c:1502
#19 0x7552c744 in clib_calljmp () at
vpp/src/vppinfra/longjmp.S:123
#20 0x7fffb4d06830 in ?? ()
#21 0x760dd2a2 in vlib_process_startup (vm=0x288,
p=0xcd5b1d5112dc20, f=0xb4d069a0) at vpp/src/vlib/main.c:1524
#22 0x0030b6523520 in ?? ()
#23 0x002f in ?? ()
#24 0x0035b4d429c0 in ?? ()
#25 0x0034 in ?? ()
#26 0x77b775b4 in vlibapi_get_main () at
vpp/src/vlibapi/api_common.h:385
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
(gdb) 

The code at the assertion at fib_types.c:353 looks like this:

fib_protocol_t
dpo_proto_to_fib (dpo_proto_t dpo_proto)
{
switch (dpo_proto)
{
case DPO_PROTO_IP6:
return (FIB_PROTOCOL_IP6);
case DPO_PROTO_IP4:
return (FIB_PROTOCOL_IP4);
case DPO_PROTO_MPLS:
return (FIB_PROTOCOL_MPLS);
default:
break;
}
ASSERT(0);   <--- this assertion is triggered
return (0);
}

so apparently dpo_proto does not have any of the allowed values.
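The fall-through is easy to reproduce standalone; a simplified re-creation (toy enum values, not the real VPP definitions) shows how a garbage value such as the 255 seen in the backtrace matches no case:

```c
/* Simplified re-creation of dpo_proto_to_fib() with toy enum values (the
 * real VPP definitions differ); returns -1 where VPP hits ASSERT(0), so
 * the fall-through on a garbage input such as 255 is observable. */
typedef enum
{
  DPO_PROTO_IP4 = 0,
  DPO_PROTO_IP6 = 1,
  DPO_PROTO_MPLS = 2,
} dpo_proto_t;

typedef enum
{
  FIB_PROTOCOL_IP4 = 10,
  FIB_PROTOCOL_IP6 = 11,
  FIB_PROTOCOL_MPLS = 12,
} fib_protocol_t;

static int
dpo_proto_to_fib_checked (dpo_proto_t dpo_proto)
{
  switch (dpo_proto)
    {
    case DPO_PROTO_IP6:
      return FIB_PROTOCOL_IP6;
    case DPO_PROTO_IP4:
      return FIB_PROTOCOL_IP4;
    case DPO_PROTO_MPLS:
      return FIB_PROTOCOL_MPLS;
    default:
      break;
    }
  return -1;			/* in VPP this is ASSERT(0) */
}
```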

Testing earlier commits in the git history pointed to the following
seemingly unrelated and harmless refactoring commit as the point when
this problem started:
30cca512c (build: remove valgrind leftovers, 2019-11-25)

What we are trying to do, and what worked with VPP 19.08, is to enable
reception of multicast packets on a given interface using two commands
like this:

ip mroute add 224.0.0.1 via Interface101 Accept
ip mroute add 224.0.0.1 via local Forward

but now for the master branch the first of those "ip mroute add" lines
gives the assertion failure.

Has something changed regarding how the "ip mroute add" command is to
be used?
If not, could the assertion failure indicate a bug somewhere?

The problem seems easy to reproduce, at least for me the assertion
happens in the same way every time.

Best regards,
Elias

Re: [vpp-dev] SEGMENTATION FAULT in load_balance_get()

2020-06-03 Thread Dave Barach via lists.fd.io
Please test https://gerrit.fd.io/r/c/vpp/+/27407 and report results. 


Re: [vpp-dev] ixge and rdma drivers

2020-06-03 Thread Christian Hopps
Looking ahead at new purchases, I was trying to find the list of natively 
supported PCI devices. The obvious ones I see are avf and rdma, i.e. Intel 710 
(now 810?) virtual functions and Mellanox ConnectX-(4/5/6?).

That's not a lot to choose from, but the Mellanox choice is a pretty good one I 
suppose, if it works well (e.g. good support for flow-director-type stuff etc.).

I'm starting to think that I probably need to stick with DPDK though; I'm also 
looking into the Intel QAT card, which gets me back into mbuf overhead anyway. 
This is a bit sad, since the vlib+mbuf overhead is eating bandwidth and/or pps.

Thanks,
Chris.



> On Jun 2, 2020, at 7:59 AM, Damjan Marion via lists.fd.io 
>  wrote:
> 
> 
> Hi Christian,
> 
> ixgbe driver is deprecated, that code is very old, from days when VPP was not 
> open source.
> That driver was used for the Intel Niantic family (82599, x5x0) of NICs, which 
> has these days been replaced by the Intel Fortville family (x710, xl710, xxv710, x722).
> For Fortville and soon for Columbiaville NICs (e810), you can use the native AVF 
> driver…
> 
> You need to create a virtual function first, as the AVF driver works only with VFs. 
> There is a script which does the job in extras/scripts/vfctl
> 
> 
> i.e.
> 
> $ extras/scripts/vfctl create :61:00.1 1
> 
> Virtual Functions:
> ID PCI Addr PCI IDDriver   MAC Addr  Config
> 0  :61:06.0 8086:37cd vfio-pci  spoof checking off, link-state auto, 
> trust on
> 
> And the in in VPP:
> 
> vpp# create interface avf :61:06.0 name eth0 
> 
> 
> 
>> On 2 Jun 2020, at 09:40, Christian Hopps  wrote:
>> 
>> Hi vpp-dev,
>> 
>> I've been contemplating trying to use native drivers in place of DPDK with 
>> the understanding that I may be paying a ~20% penalty by using DPDK. So I 
>> went to try things out, but had some trouble. The systems in particular I'm 
>> interested in have 10GE intel NICs in them which I believe would be 
>> supported by the ixge driver. I noticed that this driver has been marked 
>> deprecated in VPP though. Is there a replacement or is DPDK required for 
>> this NIC?
>> 
>> I also have systems that have mlx5 (and eventually will have connectx-6 
>> cards). These cards appear to be supported by the rdma native driver. I was 
>> able to create the interfaces and saw TX packets but no RX.  Is this driver 
>> considered stable and usable in 19.08 (and if not which release would it be 
>> consider so)?
>> 
>> Thanks,
>> Chris.
> 
> 

View/Reply Online (#16632): https://lists.fd.io/g/vpp-dev/message/16632


Re: [vpp-dev] VPP forwarding packets not destined to it #vpp

2020-06-03 Thread Dave Barach via lists.fd.io
Use the force and read the source:

/*?
* Layer 2 flooding can be enabled and disabled on each
* interface and on each bridge-domain. Use this command to
* manage bridge-domains. It is enabled by default.
*
* @cliexpar
* Example of how to enable flooding (where 200 is the bridge-domain-id):
* @cliexcmd{set bridge-domain flood 200}
* Example of how to disable flooding (where 200 is the bridge-domain-id):
* @cliexcmd{set bridge-domain flood 200 disable}
?*/
/* *INDENT-OFF* */
VLIB_CLI_COMMAND (bd_flood_cli, static) = {
  .path = "set bridge-domain flood",
  .short_help = "set bridge-domain flood  [disable]",
  .function = bd_flood,
};
/* *INDENT-ON* */

From: vpp-dev@lists.fd.io  On Behalf Of Nagaraju Vemuri
Sent: Tuesday, June 2, 2020 8:13 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP forwarding packets not destined to it #vpp


Hi,

We are using linux bridge to connect different interfaces owned by different 
VPP instances.
When the bridge has no binding info about MAC-to-port, bridge is flooding 
packets to all interfaces.
Hence VPP receives some packets whose MAC address is owned by some other VPP 
instance.
We want to drop such packets. By default VPP is forwarding these packets.

We tried using "set interface l2 forward  disable", but this did not 
help.

Please suggest what we can do.

Thanks,
Nagaraju
View/Reply Online (#16631): https://lists.fd.io/g/vpp-dev/message/16631


Re: [vpp-dev] SEGMENTATION FAULT in load_balance_get()

2020-06-03 Thread Dave Barach via lists.fd.io
+1, can't tell which poison pattern is involved without a scorecard.

load_balance_alloc_i (...) is clearly not thread-safe due to calls to 
pool_get_aligned (...) and vlib_validate_combined_counter(...). 

Judicious use of pool_get_aligned_will_expand(...), 
_vec_resize_will_expand(...) and a manual barrier sync will fix this problem 
without resorting to draconian measures.

It'd sure be nice to hear from Neale before we code something like that. 

D. 
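A toy sketch of that will-expand-then-barrier pattern (hypothetical names, not the vppinfra API): detect up front whether an allocation will grow, and therefore possibly move, the backing store, and take the barrier only in that case.

```c
#include <stdlib.h>

/* Toy fixed-stride pool (hypothetical, not the vppinfra API) illustrating
 * the pattern: check whether the next allocation will expand the backing
 * vector; only then would a worker-barrier sync be needed, because realloc
 * may move the storage and invalidate pointers held by data-plane threads.
 * Error handling elided. */
typedef struct
{
  int *elts;
  size_t len, cap;
} toy_pool_t;

static int
toy_pool_will_expand (const toy_pool_t * p)
{
  return p->len == p->cap;
}

static int *
toy_pool_get (toy_pool_t * p)
{
  if (toy_pool_will_expand (p))
    {
      /* In VPP this is where vlib_worker_thread_barrier_sync() would go. */
      size_t ncap = p->cap ? 2 * p->cap : 4;
      p->elts = realloc (p->elts, ncap * sizeof (int));
      p->cap = ncap;
    }
  return &p->elts[p->len++];
}
```

In the common case the pool has spare capacity, so no barrier is taken and the data plane is undisturbed.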

-Original Message-
From: Benoit Ganne (bganne)  
Sent: Wednesday, June 3, 2020 3:17 AM
To: raj...@rtbrick.com; Dave Barach (dbarach) 
Cc: vpp-dev ; Neale Ranns (nranns) 
Subject: RE: [vpp-dev] SEGMENTATION FAULT in load_balance_get()

Neale is away and might be slow to react.
I suspect the issue arises when creating a new load-balance entry through 
load_balance_create(), which gets a new element from the load-balance pool. 
This in turn updates the pool's free bitmap, which can grow. As it is backed 
by a vector, it can be reallocated somewhere else to fit the new size.
If that happens concurrently with data-plane processing, bad things happen. The 
pattern 0x131313 is what dlmalloc free() fills freed memory with, which would 
explain what we see. I think the same could happen to the pool itself, not only 
the bitmap.
If I am correct, I am not sure how we should fix it: the FIB update API is marked 
as mp_safe, so we could create a fixed-size load-balance pool to prevent 
runtime reallocation, but that would waste memory and impose a maximum size.

ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Rajith PR 
> via lists.fd.io
> Sent: mercredi 3 juin 2020 05:46
> To: Dave Barach (dbarach) 
> Cc: vpp-dev ; Neale Ranns (nranns) 
> 
> Subject: Re: [vpp-dev] SEGMENTATION FAULT in load_balance_get()
> 
> Hi Dave/Neal,
> 
> The adj_poison seems to be a filling pattern - - 0xfefe. Am I looking 
> into the right code or I have interpreted it incorrectly?
> 
> Thanks,
> Rajith
> 
> On Tue, Jun 2, 2020 at 7:44 PM Dave Barach (dbarach) 
> mailto:dbar...@cisco.com> > wrote:
> 
> 
>   The code manages to access a poisoned adjacency – 0x131313 fill 
> pattern – copying Neale for an opinion.
> 
> 
> 
>   D.
> 
> 
> 
>   From: vpp-dev@lists.fd.io    d...@lists.fd.io  > On Behalf Of Rajith PR 
> via lists.fd.io 
>   Sent: Tuesday, June 2, 2020 10:00 AM
>   To: vpp-dev mailto:vpp-dev@lists.fd.io> >
>   Subject: [vpp-dev] SEGMENTATION FAULT in load_balance_get()
> 
> 
> 
>   Hello All,
> 
> 
> 
>   In 19.08 VPP version we are seeing a crash while accessing the 
> load_balance_pool in the load_balance_get() function. This is happening 
> after enabling worker threads.
> 
>   As such the FIB programming is happening in the main thread and in 
> one of the worker threads we see this crash.
> 
>   Also, this is seen when we scale to 300K+ ipv4 routes.
> 
> 
> 
>   Here is the complete stack,
> 
> 
> 
>   Thread 10 "vpp_wk_0" received signal SIGSEGV, Segmentation fault.
> 
>   [Switching to Thread 0x7fbe4aa8e700 (LWP 333)]
>   0x7fbef10636f8 in clib_bitmap_get (ai=0x1313131313131313, i=61) 
> at /home/ubuntu/Scale/libvpp/src/vppinfra/bitmap.h:201
>   201  return i0 < vec_len (ai) && 0 != ((ai[i0] >> i1) & 1);
> 
> 
> 
>   Thread 10 (Thread 0x7fbe4aa8e700 (LWP 333)):
>   #0  0x7fbef10636f8 in clib_bitmap_get (ai=0x1313131313131313,
> i=61) at /home/ubuntu/Scale/libvpp/src/vppinfra/bitmap.h:201
>   #1  0x7fbef10676a8 in load_balance_get (lbi=61) at
> /home/ubuntu/Scale/libvpp/src/vnet/dpo/load_balance.h:222
>   #2  0x7fbef106890c in ip4_lookup_inline (vm=0x7fbe8a5aa080, 
> node=0x7fbe8b3fd380, frame=0x7fbe8a5edb40) at
> /home/ubuntu/Scale/libvpp/src/vnet/ip/ip4_forward.h:369
>   #3  0x7fbef1068ead in ip4_lookup_node_fn_avx2 (vm=0x7fbe8a5aa080, 
> node=0x7fbe8b3fd380, frame=0x7fbe8a5edb40)
>   at /home/ubuntu/Scale/libvpp/src/vnet/ip/ip4_forward.c:95
>   #4  0x7fbef0c6afec in dispatch_node (vm=0x7fbe8a5aa080, 
> node=0x7fbe8b3fd380, type=VLIB_NODE_TYPE_INTERNAL, 
> dispatch_state=VLIB_NODE_STATE_POLLING,
>   frame=0x7fbe8a5edb40, last_time_stamp=381215594286358) at
> /home/ubuntu/Scale/libvpp/src/vlib/main.c:1207
>   #5  0x7fbef0c6b7ad in dispatch_pending_node (vm=0x7fbe8a5aa080, 
> pending_frame_index=2, last_time_stamp=381215594286358)
>   at /home/ubuntu/Scale/libvpp/src/vlib/main.c:1375
>   #6  0x7fbef0c6d3f0 in vlib_main_or_worker_loop 
> (vm=0x7fbe8a5aa080, is_main=0) at
> /home/ubuntu/Scale/libvpp/src/vlib/main.c:1826
>   #7  0x7fbef0c6dc73 in vlib_worker_loop (vm=0x7fbe8a5aa080) at
> /home/ubuntu/Scale/libvpp/src/vlib/main.c:1934
>   #8  0x7fbef0cac791 in vlib_worker_thread_fn (arg=0x7fbe8de2a340) 
> at /home/ubuntu/Scale/libvpp/src/vlib/threads.c:1754
>   #9  0x7fbef092da48 in clib_calljmp () from
> 

Re: [vpp-dev] NAT44 UDP sessions are not clearing

2020-06-03 Thread Klement Sekera via lists.fd.io
Transitory means that the session has not been fully established.
Transitory (wait-closed) means the session has been established and then closed, 
and it is now in its transitory timeout, after which it will move to transitory 
(closed). Sessions in this state are not eligible for freeing.
Transitory (closed) means the session has been fully closed and has timed out; 
it is now ready to be freed when needed.

> On 3 Jun 2020, at 07:42, Carlito Nueno  wrote:
> 
> Testing with 30 ip addresses (users) opening around 300 sessions each. 
> 
> When using vpp-20.01 + fixes by you and Filip (before the port overloading 
> patches), total sessions and total transitory sessions were much smaller 
> (around 15062).
> 
> on vpp-20.05 with port overloading
> 
> NAT44 pool addresses:
> 130.44.9.8
>   tenant VRF independent
>   0 busy other ports
>   32 busy udp ports
>   63071 busy tcp ports
>   1 busy icmp ports
> NAT44 twice-nat pool addresses:
> max translations: 400
> max translations per user: 500
> established tcp LRU min session timeout 7792 (now 352)
> transitory tcp LRU min session timeout 294 (now 352)
> udp LRU min session timeout 312 (now 352)
> total timed out sessions: 119312
> total sessions: 128639
> total tcp sessions: 128607
> total tcp established sessions: 9300
> total tcp transitory sessions: 119307
> total tcp transitory (WAIT-CLOSED) sessions: 0
> total tcp transitory (CLOSED) sessions: 0
> total udp sessions: 32
> total icmp sessions: 0
> 
> On Tue, Jun 2, 2020 at 8:42 PM carlito nueno via 
> lists.fd.io wrote:
> Hi Klement,
> 
> Got it.
> 
> Sorry one more question :)
> 
> I did another test and noticed that tcp transitory sessions increase 
> rapidly when I create new sessions from a new internal ip address very fast 
> (without delay). For example: the tcp sessions are never stopped, so the tcp 
> transitory sessions should be 0 at all times.
> 
> from ip address 192.168.1.2
> 
> NAT44 pool addresses:
> 130.44.9.8
>   tenant VRF independent
>   0 busy other ports
>   36 busy udp ports
>   7694 busy tcp ports
>   0 busy icmp ports
> NAT44 twice-nat pool addresses:
> max translations: 400
> max translations per user: 500
> established tcp LRU min session timeout 7842 (now 402)
> udp LRU min session timeout 670 (now 402)
> total timed out sessions: 0
> total sessions: 1203
> total tcp sessions: 1200
> total tcp established sessions: 1200
> total tcp transitory sessions: 0
> total tcp transitory (WAIT-CLOSED) sessions: 0
> total tcp transitory (CLOSED) sessions: 0
> total udp sessions: 3
> total icmp sessions: 0
> 
> added 600 sessions from ip address 192.168.1.3
> 
> NAT44 pool addresses:
> 130.44.9.8
>   tenant VRF independent
>   0 busy other ports
>   36 busy udp ports
>   9395 busy tcp ports
>   0 busy icmp ports
> NAT44 twice-nat pool addresses:
> max translations: 400
> max translations per user: 500
> established tcp LRU min session timeout 7845 (now 405)
> transitory tcp LRU min session timeout 644 (now 405)
> udp LRU min session timeout 670 (now 405)
> total timed out sessions: 0
> total sessions: 2904
> total tcp sessions: 2901
> total tcp established sessions: 1800
> total tcp transitory sessions: 1101
> total tcp transitory (WAIT-CLOSED) sessions: 0
> total tcp transitory (CLOSED) sessions: 0
> total udp sessions: 3
> total icmp sessions: 0
> 
> Thanks!
> 
> On Tue, Jun 2, 2020 at 11:47 AM Klement Sekera -X (ksekera - PANTHEON TECH 
> SRO at Cisco)  wrote:
> Hi Carlito,
> 
> For ED NAT it doesn’t, as ED NAT no longer has any “user” concept. The code 
> for the different flavours of NAT needs to be split and polished anyway. The 
> idea is to keep data/code/APIs separate where appropriate.
> 
> Thanks,
> Klement
> 
> > On 2 Jun 2020, at 20:31, Carlito Nueno  wrote:
> > 
> > Hi Klement,
> > 
> > Really appreciate the detailed explanation! That makes sense and I could 
> > see that behavior from my tests.
> > 
> > Last question: does "max translations per user" matter any more because the 
> > concept of user doesn't exist with new NAT?
> > max translations: 400
> > max translations per user: 500
> > 
> > From my tests, each ip address can form as many sessions as needed as long 
> > as the overall/total sessions stay under "max translations".
> > 
> > Thanks!
> > 
> > On Mon, Jun 1, 2020 at 12:47 AM Klement Sekera -X (ksekera - PANTHEON TECH 
> > SRO at Cisco)  wrote:
> > Hi,
> > 
> > as you can see, almost all of the NAT sessions are timed out. NAT will 
> > automatically free and reuse them when needed again.
> > 
> > this line:
> > > udp LRU min session timeout 5175 (now 161589)
> > hints at whether immediate reuse is possible. The minimum session timeout 
> > in the LRU list for UDP sessions is 5175, while the current vpp internal 
> > time is 161589. This means the first element in the UDP LRU list is ready 
> > to be reaped.
> > 
> > To avoid fluctuations in performance due to running periodic cleanup 
> > processes, NAT instead attempts to free one session anytime there is a 
> > request to 

Re: [vpp-dev] SEGMENTATION FAULT in load_balance_get()

2020-06-03 Thread Benoit Ganne (bganne) via lists.fd.io
Neale is away and might be slow to react.
I suspect the issue occurs when creating a new load balance entry through 
load_balance_create(), which gets a new element from the load balance pool. 
This in turn updates the pool free bitmap, which can grow. As the bitmap is 
backed by a vector, it can be reallocated somewhere else to fit the new size.
If that happens concurrently with dataplane processing, bad things happen. The 
0x131313 pattern is written by dlmalloc free() and will appear in exactly that 
case. I think the same could happen to the pool itself, not only to the bitmap.
If I am correct, I am not sure how we should fix it: the FIB update API is 
marked as mp_safe, so we could create a fixed-size load balance pool to prevent 
runtime reallocation, but that would waste memory and impose a maximum size.

ben

> -Original Message-
> From: vpp-dev@lists.fd.io  On Behalf Of Rajith PR via
> lists.fd.io
> Sent: mercredi 3 juin 2020 05:46
> To: Dave Barach (dbarach) 
> Cc: vpp-dev ; Neale Ranns (nranns) 
> Subject: Re: [vpp-dev] SEGMENTATION FAULT in load_balance_get()
> 
> Hi Dave/Neale,
> 
> The adj_poison seems to be a fill pattern - 0xfefe. Am I looking at the
> right code, or have I interpreted it incorrectly?
> 
> Thanks,
> Rajith
> 
> On Tue, Jun 2, 2020 at 7:44 PM Dave Barach (dbarach) wrote:
> 
> 
>   The code manages to access a poisoned adjacency – 0x131313 fill
> pattern – copying Neale for an opinion.
> 
> 
> 
>   D.
> 
> 
> 
>   From: vpp-dev@lists.fd.io On Behalf Of Rajith PR via
> lists.fd.io
>   Sent: Tuesday, June 2, 2020 10:00 AM
>   To: vpp-dev 
>   Subject: [vpp-dev] SEGMENTATION FAULT in load_balance_get()
> 
> 
> 
>   Hello All,
> 
> 
> 
>   In VPP 19.08 we are seeing a crash while accessing the
> load_balance_pool in the load_balance_get() function. This happens after
> enabling worker threads.
> 
>   As such the FIB programming is happening in the main thread and in
> one of the worker threads we see this crash.
> 
>   Also, this is seen when we scale to 300K+ ipv4 routes.
> 
> 
> 
>   Here is the complete stack,
> 
> 
> 
>   Thread 10 "vpp_wk_0" received signal SIGSEGV, Segmentation fault.
> 
>   [Switching to Thread 0x7fbe4aa8e700 (LWP 333)]
>   0x7fbef10636f8 in clib_bitmap_get (ai=0x1313131313131313, i=61)
> at /home/ubuntu/Scale/libvpp/src/vppinfra/bitmap.h:201
>   201  return i0 < vec_len (ai) && 0 != ((ai[i0] >> i1) & 1);
> 
> 
> 
>   Thread 10 (Thread 0x7fbe4aa8e700 (LWP 333)):
>   #0  0x7fbef10636f8 in clib_bitmap_get (ai=0x1313131313131313,
> i=61) at /home/ubuntu/Scale/libvpp/src/vppinfra/bitmap.h:201
>   #1  0x7fbef10676a8 in load_balance_get (lbi=61) at
> /home/ubuntu/Scale/libvpp/src/vnet/dpo/load_balance.h:222
>   #2  0x7fbef106890c in ip4_lookup_inline (vm=0x7fbe8a5aa080,
> node=0x7fbe8b3fd380, frame=0x7fbe8a5edb40) at
> /home/ubuntu/Scale/libvpp/src/vnet/ip/ip4_forward.h:369
>   #3  0x7fbef1068ead in ip4_lookup_node_fn_avx2
> (vm=0x7fbe8a5aa080, node=0x7fbe8b3fd380, frame=0x7fbe8a5edb40)
>   at /home/ubuntu/Scale/libvpp/src/vnet/ip/ip4_forward.c:95
>   #4  0x7fbef0c6afec in dispatch_node (vm=0x7fbe8a5aa080,
> node=0x7fbe8b3fd380, type=VLIB_NODE_TYPE_INTERNAL,
> dispatch_state=VLIB_NODE_STATE_POLLING,
>   frame=0x7fbe8a5edb40, last_time_stamp=381215594286358) at
> /home/ubuntu/Scale/libvpp/src/vlib/main.c:1207
>   #5  0x7fbef0c6b7ad in dispatch_pending_node (vm=0x7fbe8a5aa080,
> pending_frame_index=2, last_time_stamp=381215594286358)
>   at /home/ubuntu/Scale/libvpp/src/vlib/main.c:1375
>   #6  0x7fbef0c6d3f0 in vlib_main_or_worker_loop
> (vm=0x7fbe8a5aa080, is_main=0) at
> /home/ubuntu/Scale/libvpp/src/vlib/main.c:1826
>   #7  0x7fbef0c6dc73 in vlib_worker_loop (vm=0x7fbe8a5aa080) at
> /home/ubuntu/Scale/libvpp/src/vlib/main.c:1934
>   #8  0x7fbef0cac791 in vlib_worker_thread_fn (arg=0x7fbe8de2a340)
> at /home/ubuntu/Scale/libvpp/src/vlib/threads.c:1754
>   #9  0x7fbef092da48 in clib_calljmp () from
> /home/ubuntu/Scale/libvpp/build-root/install-vpp_debug-
> native/vpp/lib/libvppinfra.so.1.0.1
>   #10 0x7fbe4aa8dec0 in ?? ()
>   #11 0x7fbef0ca700c in vlib_worker_thread_bootstrap_fn
> (arg=0x7fbe8de2a340) at /home/ubuntu/Scale/libvpp/src/vlib/threads.c:573
> 
>   Thanks in Advance,
> 
>   Rajith

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#16628): https://lists.fd.io/g/vpp-dev/message/16628
Mute This Topic: https://lists.fd.io/mt/74627827/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-