Re: [vpp-dev] SNAT API Question

2017-02-17 Thread Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco)
If the external_sw_if_index value is ~0 (-1), external_ip_address from the API 
message is used (snat.c line 363).
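
Roughly, the handler does something like this (a sketch only; field and helper
names are approximate, not copied from snat.c):

  /* sketch only - see vl_api_snat_add_static_mapping_t_handler() in snat.c */
  if (mp->external_sw_if_index != ~0)
    {
      /* interface supplied: the outside address is taken from (or later
         resolved on) that interface; external_ip_address is ignored */
      sw_if_index = ntohl (mp->external_sw_if_index);
    }
  else
    {
      /* no interface (~0): use external_ip_address from the message */
      memcpy (&e_addr, mp->external_ip_address, 4);
    }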

snat_add_address_range – adds an address range to the SNAT address pool
snat_add_del_interface_addr – adds the address of an interface to the SNAT 
address pool (the address is added/removed automatically when the interface 
address is changed by configuration or DHCP)
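
For illustration, the two messages side by side (a sketch only; field names are
assumptions modelled on the API names above, not copied from snat.api, and
build_msg() is a hypothetical helper):

  /* add an explicit range of addresses to the SNAT pool */
  vl_api_snat_add_address_range_t *ar = build_msg ();
  ar->is_ip4 = 1;
  clib_memcpy (ar->first_ip_address, &start_addr, 4);
  clib_memcpy (ar->last_ip_address, &end_addr, 4);

  /* or: tie the pool to an interface, so its address follows config/DHCP */
  vl_api_snat_add_del_interface_addr_t *ia = build_msg ();
  ia->sw_if_index = htonl (sw_if_index);
  ia->is_add = 1;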

I think the 1024 is not significant; it's just a warning that you are adding a 
lot of addresses to the SNAT address pool. It was there before I started work 
on the SNAT plugin.

Matus

From: Jon Loeliger [mailto:j...@netgate.com]
Sent: Friday, February 17, 2017 8:09 PM
To: Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco) 

Cc: vpp-dev 
Subject: Re: [vpp-dev] SNAT API Question



On Tue, Feb 14, 2017 at 11:52 PM, Matus Fabian -X (matfabia - PANTHEON 
TECHNOLOGIES at Cisco) <matfa...@cisco.com> wrote:
Hi Jon,

snat_static_mapping_dump lists only static mappings with a resolved outside 
address, so snat_static_mapping_details doesn’t contain external_sw_if_index. 
But we are missing a read API for static mappings with an unresolved outside 
address; I will add those to snat_static_mapping_dump too.

Thanks,
Matus


How does the API handler know which of the external_ip_address
or the external_sw_if_index to honor?  Is there an invalid value that
should be supplied in the other field when making the API call
to snat_add_static_mapping?  Or is the API user expected to
resolve the external_ip_address and supply its sw_if_index too?

Also, another question.  When placing a range of addresses on
an external interface for dynamic mappings, is the expected use
case to first use snat_add_address_range on the external IF,
then when they are all "added", issue a snat_add_del_interface_addr
to make it effective (add) or remove them (del)?

Finally, one last API implementation question.

In snat_test.c, we find this snippet of code:

static int api_snat_add_address_range (vat_main_t * vam)
{
...
start_host_order = clib_host_to_net_u32 (start_addr.as_u32);
end_host_order = clib_host_to_net_u32 (end_addr.as_u32);
...
count = (end_host_order - start_host_order) + 1;

if (count > 1024) {
errmsg ("%U - %U, %d addresses...\n",
  format_ip4_address, &start_addr,
  format_ip4_address, &end_addr,
  count);
}

Is that 1024 significant or arbitrary?  Is there some limit in the VPP
implementation that is being guarded here?  Or is that an arbitrary
UI-imposed limit?

Specifically, must other API users adhere to this limit?  If not is there
some other limit at work here?

I see under the covers that each address in this range is individually
added to some address vector.  And furthermore this vector DOES
appear to be hard limited to 1024 entries as well. (See snat.c,
vl_api_snat_add_address_range_t_handler() around line 820 or so.)
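
For reference, the loop in question looks roughly like this (a paraphrased
sketch, not a verbatim copy of the handler; names approximate):

  /* each address in the range is appended to the SNAT address pool one by one */
  for (i = 0; i < count; i++)
    {
      snat_add_address (sm, &this_addr);   /* append to the address vector */
      increment_v4_address (&this_addr);   /* step to the next IPv4 address */
    }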

Again, arbitrary?  Or is the limit actually imposed by the FIB?

Thanks,
jdl



[vpp-dev] How do I create an igb_uio module on CentOS

2017-02-17 Thread Burt Silverman
Hi,

I am trying to follow the procedures on
https://wiki.fd.io/view/VPP/Build,_install,_and_test_images and
https://wiki.fd.io/view/VPP/How_To_Connect_A_PCI_Interface_To_VPP but I end
up with no igb_uio module; so I cannot
# modprobe igb_uio

and that seems to correspond to

   - *vpp-dpdk-dkms* - DKMS based DPDK kernel module package (only on
   Debian/Ubuntu)


from https://wiki.fd.io/view/VPP/Build,_install,_and_test_images

So what is the workflow when using CentOS? The wiki indicates that no
special workflow is required for CentOS other than using systemd commands
rather than upstart commands.

Thanks,
Burt

Re: [vpp-dev] Interesting perf test results from Red Hat's test team

2017-02-17 Thread Alec Hothan (ahothan)
Hi Karl

Can you also tell us which version of DPDK you were using for OVS and for VPP 
(for VPP, is it the one bundled with 17.01?).

“The pps is the bi-directional sum of the packets received back at the traffic 
generator.”
Just to make sure…

If your traffic gen sends 1 Mpps to each of the 2 interfaces and you get no 
drop (meaning you receive 1 Mpps from each interface), what do you report? 2 
Mpps or 4 Mpps?
You seem to say 2 Mpps (sum of all RX).

The CSIT perf numbers report sum(TX), so in the above example CSIT reports 2 
Mpps.
The CSIT numbers for 1 vhost/1 VM (practically the same setup as yours) are at 
about half of what you report.

https://docs.fd.io/csit/rls1701/report/vpp_performance_results_hw/performance_results_hw.html#ge2p1x520-dot1q-l2xcbase-eth-2vhost-1vm-ndrpdrdisc


Scroll down the table to tc13/tc14: 4t4c (4 threads), L2XC, 64B, NDR 5.95 Mpps 
(aggregated TX of the 2 interfaces), PDR 7.47 Mpps, while the results in your 
slides put it at around 11 Mpps.

So either your testbed really switches 2 times more packets than the CSIT one, 
or you’re actually reporting double the amount compared to how CSIT reports it…

Thanks

 Alec



From: Karl Rister 
Organization: Red Hat
Reply-To: "kris...@redhat.com" 
Date: Thursday, February 16, 2017 at 11:09 AM
To: "Alec Hothan (ahothan)" , "Maciek Konstantynowicz 
(mkonstan)" , Thomas F Herbert 
Cc: Andrew Theurer , Douglas Shakshober 
, "csit-...@lists.fd.io" , vpp-dev 

Subject: Re: [vpp-dev] Interesting perf test results from Red Hat's test team

On 02/15/2017 08:58 PM, Alec Hothan (ahothan) wrote:

Great summary slides Karl, I have a few more questions on the slides.

· Did you use OSP10/OSPD/ML2 to deploy your testpmd VM/configure
the vswitch or is it direct launch using libvirt and direct config of
the vswitches? (this is a bit related to Maciek’s question on the exact
interface configs in the vswitch)

There was no use of OSP in these tests, the guest is launched via
libvirt and the vswitches are manually launched and configured with
shell scripts.

· Unclear if all the charts results were measured using 4 phys
cores (no HT) or 2 phys cores (4 threads with HT)

Only the slide 3 has any 4 core (no HT) data, all other data is captured
using HT on the appropriate number of cores: 2 for single queue, 4 for
two queue, and 6 for three queue.

· How do you report your pps? ;-) Are those
o   vswitch centric (how many packets the vswitch forwards per second
coming from traffic gen and from VMs)
o   or traffic gen centric aggregated TX (how many pps are sent by the
traffic gen on both interfaces)
o   or traffic gen centric aggregated TX+RX (how many pps are sent and
received by the traffic gen on both interfaces)

The pps is the bi-directional sum of the packets received back at the
traffic generator.

· From the numbers shown, it looks like it is the first or the last
· Unidirectional or symmetric bi-directional traffic?

symmetric bi-directional

· BIOS Turbo boost enabled or disabled?

disabled

· How many vcpus running the testpmd VM?

3, 5, or 7.  1 VCPU for housekeeping and then 2 VCPUs for each queue
configuration.  Only the required VCPUs are active for any given
configuration, so the VCPU count varies depending on the configuration
being tested.

· How do you range the combinations in your 1M flows src/dest
MAC? I’m not aware of any real NFV cloud deployment/VNF that handles
that type of flow pattern, are you?

We increment all the fields being modified by one for each packet until
we hit a million and then we restart at the base value and repeat.  So
all IPs and/or MACs get modified in unison.
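
In pseudo-C, the pattern is something like this (illustrative only, not the
traffic generator's actual code; all names here are made up):

  /* all varied fields step together, wrapping after 1M flows */
  for (uint64_t n = 0; n < total_packets; n++)
    {
      uint32_t i = n % 1000000;         /* flow index, restarts at the base value */
      pkt.src_mac = base_src_mac + i;   /* all modified fields move in unison */
      pkt.dst_mac = base_dst_mac + i;
      pkt.src_ip  = base_src_ip  + i;   /* when IP fields are varied too */
      pkt.dst_ip  = base_dst_ip  + i;
      send_packet (&pkt);
    }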

We actually arrived at the srcMac,dstMac configuration in a backwards
manner.  On one of the systems where we develop the traffic generator we
were getting an error when doing srcMac,dstMac,srcIp,dstIp that we
couldn't figure out in the time available for this work, so we were going to
just go with srcMac,dstMac due to time constraints.  However, on the
system where we actually did the testing both worked, so I just collected
both out of curiosity.


Thanks

   Alec


 From: <vpp-dev-boun...@lists.fd.io> on behalf of "Maciek Konstantynowicz 
 (mkonstan)" <mkons...@cisco.com>
 Date: Wednesday, February 15, 2017 at 1:28 PM
 To: Thomas F Herbert <therb...@redhat.com>
 Cc: Andrew Theurer <atheu...@redhat.com>, Douglas Shakshober 
 <dsh...@redhat.com>, "csit-...@lists.fd.io" <csit-...@lists.fd.io>, 
 vpp-dev <vpp-dev@lists.fd.io>, Karl Rister <kris...@redhat.com>
 Subject: Re: [vpp-dev] Interesting perf test results from Red Hat's test team

 Thomas, many thanks for sending this.

 Few comments and questions after reading the slides:

 1. s3 clarification - host and data plane thread setup - vswitch pmd
 (data plane) thread placement
 a. "1PMD/core (4 core)

Re: [vpp-dev] VPP-540 : pbb tag rewrite details

2017-02-17 Thread Dave Barach (dbarach)
Please fix merge conflict(s) in https://gerrit.fd.io/r/#/c/4715/5

Thanks… Dave

From: Andrej Macak -X (amacak - PANTHEON TECHNOLOGIES at Cisco)
Sent: Friday, February 17, 2017 9:34 AM
To: vpp-dev@lists.fd.io
Cc: michal.janc...@pantheon.tech; Dave Barach (dbarach) ; 
Pavel Kotucek -X (pkotucek - PANTHEON TECHNOLOGIES at Cisco) 

Subject: RE: VPP-540 : pbb tag rewrite details

Dear VPP community,

I would like to kindly ask you to review and merge this commit: 
https://gerrit.fd.io/r/#/c/4715/

Thank you,

Andrej

From: Pavel Kotucek -X (pkotucek - PANTHEON TECHNOLOGIES at Cisco)
Sent: Thursday, February 9, 2017 9:51
To: Dave Barach (dbarach) mailto:dbar...@cisco.com>>
Cc: michal.janc...@pantheon.tech; Andrej 
Macak -X (amacak - PANTHEON TECHNOLOGIES at Cisco) 
mailto:ama...@cisco.com>>
Subject: Re: VPP-540 : pbb tag rewrite details


Dear Dave,



Finally I ran vpp-csit-verify-hw-perf-master-long (as the short one fails in 
VAT - no JSON data) and, with the help of Peter Mikus, we compared the results 
with the reference values



https://jenkins.fd.io/view/vpp/job/vpp-csit-verify-hw-perf-master-long/1039/consoleFull



https://docs.fd.io/csit/rls1701/report/detailed_test_results/vpp_performance_results/vpp_performance_results.html



Results seem to be within tolerance (as Peter told me, the tolerance is +/- 200 
Kpps). So from this point of view the changes in 4715 don't impact the 
performance of VPP.



Regards



Pavel



From: Dave Barach (dbarach)
Sent: Tuesday, February 7, 2017 3:22 PM
To: Pavel Kotucek -X (pkotucek - PANTHEON TECHNOLOGIES at Cisco); Damjan Marion 
(damarion); Ole Troan (otroan); John Lo (loj); Keith Burns (krb); Florin Coras 
(fcoras)
Cc: michal.janc...@pantheon.tech; Andrej 
Macak -X (amacak - PANTHEON TECHNOLOGIES at Cisco)
Subject: RE: VPP-540 : pbb tag rewrite details

At a glance, the changes look OK. Please pass along csit or equivalent L2 
performance test results. I’ve -2’ed the patch until performance test results 
demonstrate that the patch does no harm.

Thanks… Dave

From: Pavel Kotucek -X (pkotucek - PANTHEON TECHNOLOGIES at Cisco)
Sent: Tuesday, February 7, 2017 9:15 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>; Damjan Marion (damarion) 
<damar...@cisco.com>; Ole Troan (otroan) <otr...@cisco.com>; John Lo (loj) 
<l...@cisco.com>; Keith Burns (krb) <k...@cisco.com>; Florin Coras (fcoras) 
<fco...@cisco.com>
Cc: michal.janc...@pantheon.tech; Andrej Macak -X (amacak - PANTHEON 
TECHNOLOGIES at Cisco) <ama...@cisco.com>
Subject: VPP-540 : pbb tag rewrite details


Dear Committers,



According to the info on the page https://wiki.fd.io/view/VPP/Committers/SMEs I 
would like to ask one of you, as committers, to review (and merge if possible) 
the commit



https://gerrit.fd.io/r/#/c/4715/



This task is related to JIRA https://jira.fd.io/browse/VPP-540 and was 
requested by the INMARSAT team. According to info from INMARSAT project leader 
Andrej Macak, it was agreed by Emran and Jan.



Commit #4715 adds small improvements required by INMARSAT in interface dump and 
packet tracing.



If I should ask somebody else to review and merge this code, please let me know.



Thank you



Regards


Pavel

Re: [vpp-dev] SNAT API Question

2017-02-17 Thread Jon Loeliger
On Tue, Feb 14, 2017 at 11:52 PM, Matus Fabian -X (matfabia - PANTHEON
TECHNOLOGIES at Cisco)  wrote:

> Hi Jon,
>
>
>
> snat_static_mapping_dump lists only static mappings with a resolved outside
> address, so snat_static_mapping_details doesn’t contain
> external_sw_if_index. But we are missing a read API for static mappings with
> an unresolved outside address; I will add those to snat_static_mapping_dump
> too.
>
>
>
> Thanks,
>
> Matus
>
>
>
How does the API handler know which of the external_ip_address
or the external_sw_if_index to honor?  Is there an invalid value that
should be supplied in the other field when making the API call
to snat_add_static_mapping?  Or is the API user expected to
resolve the external_ip_address and supply its sw_if_index too?

Also, another question.  When placing a range of addresses on
an external interface for dynamic mappings, is the expected use
case to first use snat_add_address_range on the external IF,
then when they are all "added", issue a snat_add_del_interface_addr
to make it effective (add) or remove them (del)?

Finally, one last API implementation question.

In snat_test.c, we find this snippet of code:

static int api_snat_add_address_range (vat_main_t * vam)
{
...
start_host_order = clib_host_to_net_u32 (start_addr.as_u32);
end_host_order = clib_host_to_net_u32 (end_addr.as_u32);
...
count = (end_host_order - start_host_order) + 1;

if (count > 1024) {
errmsg ("%U - %U, %d addresses...\n",
  format_ip4_address, &start_addr,
  format_ip4_address, &end_addr,
  count);
}


Is that 1024 significant or arbitrary?  Is there some limit in the VPP
implementation that is being guarded here?  Or is that an arbitrary
UI-imposed limit?

Specifically, must other API users adhere to this limit?  If not is there
some other limit at work here?

I see under the covers that each address in this range is individually
added to some address vector.  And furthermore this vector DOES
appear to be hard limited to 1024 entries as well. (See snat.c,
vl_api_snat_add_address_range_t_handler() around line 820 or so.)

Again, arbitrary?  Or is the limit actually imposed by the FIB?

Thanks,
jdl

Re: [vpp-dev] [csit-dev] reset_fib API issue in case of IPv6 FIB

2017-02-17 Thread Neale Ranns (nranns)
Hi Jan,

What version of VPP are you testing?

Thanks,
neale

From:  on behalf of "Jan Gelety -X (jgelety - 
PANTHEON TECHNOLOGIES at Cisco)" 
Date: Friday, 17 February 2017 at 14:48
To: "vpp-dev@lists.fd.io" 
Cc: "csit-...@lists.fd.io" 
Subject: [csit-dev] [vpp-dev] reset_fib API issue in case of IPv6 FIB

Hello VPP dev team,

Usage of the reset_fib API command to reset an IPv6 FIB leads to an incorrect 
entry in the FIB and to a crash of VPP.

Could somebody have a look at Jira ticket https://jira.fd.io/browse/VPP-643, 
please?

Thanks,
Jan

From make test log:

12:14:51,710 API: reset_fib ({'vrf_id': 1, 'is_ipv6': 1})
12:14:51,712 IPv6 VRF ID 1 reset
12:14:51,712 CLI: show ip6 fib
12:14:51,714 show ip6 fib
ipv6-VRF:0, fib_index 0, flow hash: src dst sport dport proto
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:5 buckets:1 uRPF:5 to:[30:15175]]
[0] [@0]: dpo-drop ip6
fd01:4::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:44 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
fd01:7::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:71 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
fd01:a::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:98 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:6 buckets:1 uRPF:6 to:[0:0]]
[0] [@2]: dpo-receive
ipv6-VRF:1, fib_index 1, flow hash: src dst sport dport proto
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:15 buckets:1 uRPF:13 to:[0:0]]
[0] [@0]: dpo-drop ip6
fd01:1::/64
  UNRESOLVED
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:16 buckets:1 uRPF:14 to:[0:0]]
[0] [@2]: dpo-receive

And later:

12:14:52,170 CLI: packet-generator enable
12:14:57,171 --- addError() TestIP6VrfMultiInst.test_ip6_vrf_02( IP6 VRF  
Multi-instance test 2 - delete 2 VRFs
) called, err is (, IOError(3, 'Waiting for 
reply timed out'), )
12:14:57,172 formatted exception is:
Traceback (most recent call last):
  File "/usr/lib/python2.7/unittest/case.py", line 331, in run
testMethod()
  File "/home/vpp/Documents/vpp/test/test_ip6_vrf_multi_instance.py", line 365, 
in test_ip6_vrf_02
self.run_verify_test()
  File "/home/vpp/Documents/vpp/test/test_ip6_vrf_multi_instance.py", line 322, 
in run_verify_test
self.pg_start()
  File "/home/vpp/Documents/vpp/test/framework.py", line 398, in pg_start
cls.vapi.cli('packet-generator enable')
  File "/home/vpp/Documents/vpp/test/vpp_papi_provider.py", line 169, in cli
r = self.papi.cli_inband(length=len(cli), cmd=cli)
  File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_papi.py", line 305, in 

f = lambda **kwargs: (self._call_vpp(i, msgdef, multipart, **kwargs))
  File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_papi.py", line 547, in 
_call_vpp
r = self.results_wait(context)
  File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_papi.py", line 395, in 
results_wait
raise IOError(3, 'Waiting for reply timed out')
IOError: [Errno 3] Waiting for reply timed out

12:14:57,172 --- tearDown() for TestIP6VrfMultiInst.test_ip6_vrf_02( IP6 VRF  
Multi-instance test 2 - delete 2 VRFs
) called ---
12:14:57,172 CLI: show trace
12:14:57,172 VPP subprocess died unexpectedly with returncode -6 [unknown]
12:14:57,172 --- addError() TestIP6VrfMultiInst.test_ip6_vrf_02( IP6 VRF  
Multi-instance test 2 - delete 2 VRFs
) called, err is (, VppDiedError('VPP 
subprocess died unexpectedly with returncode -6 [unknown]',), )
12:14:57,173 formatted exception is:
Traceback (most recent call last):
  File "/usr/lib/python2.7/unittest/case.py", line 360, in run
self.tearDown()
  File "/home/vpp/Documents/vpp/test/test_ip6_vrf_multi_instance.py", line 148, 
in tearDown
super(TestIP6VrfMultiInst, self).tearDown()
  File "/home/vpp/Documents/vpp/test/framework.py", line 333, in tearDown
self.logger.debug(self.vapi.cli("show trace"))
  File "/home/vpp/Documents/vpp/test/vpp_papi_provider.py", line 167, in cli
self.hook.before_cli(cli)
  File "/home/vpp/Documents/vpp/test/hook.py", line 138, in before_cli
self.poll_vpp()
  File "/home/vpp/Documents/vpp/test/hook.py", line 115, in poll_vpp
raise VppDiedError(msg)
VppDiedError: VPP subprocess died unexpectedly with returncode -6 [unknown]

Re: [vpp-dev] [csit-dev] [dpdk-announce] DPDK 17.02 released

2017-02-17 Thread Dave Wallace

Excellent -- thanks for the quick update Damjan!

-daw-

On 02/17/2017 04:36 AM, Damjan Marion wrote:


VPP is already on dpdk 17.02 since yesterday evening.

Folks can do “make dpdk-install-dev” to upgrade development package in 
existing workspaces.


On 15 Feb 2017, at 17:35, Dave Wallace wrote:


Congrats to the DPDK folks for the 17.02 release!

Time to start integrating it into VPP/CSIT 17.04 ...

Thanks,
-daw-

 Forwarded Message 
Subject:[dpdk-announce] DPDK 17.02 released
Date:   Tue, 14 Feb 2017 23:41:35 +0100
From:   Thomas Monjalon 
Organization:   6WIND
To: annou...@dpdk.org



A new major release is available:
http://fast.dpdk.org/rel/dpdk-17.02.tar.xz

It has been a busy cycle considering the various holidays:
849 patches from 101 authors
655 files changed, 141527 insertions(+), 10539 deletions(-)

There are 41 new contributors
(including authors, reviewers and testers):
Thanks to Alan Dewar, Aleksander Gajewski, Anand B Jyoti, Anders Roxell,
Andrew Lee, Andrew Rybchenko, Andy Moreton, Artem Andreev, Aws Ismail,
Baruch Siach, Bert van Leeuwen, Chenghu Yao, Christian Maciocco,
Emmanuel Roullit, Hanoch Haim, Ilya V. Matveychikov, Ivan Malov,
Jacek Piasecki, Jan Wickbom, Karla Saur, Kevin Traynor, Kuba Kozak,
Lei Yao, Mark Spender, Matthieu Ternisien d'Ouville, Michał Mirosław,
Patrick MacArthur, Robert Stonehouse, Satha Rao, Shahaf Shuler,
Steve Shin, Timmons C. Player, Tom Crugnale, Xieming Katty, Yaron Illouz,
Yi Zhang, Yong Wang, Yoni Gilad, Yuan Peng, Zbigniew Bodek, Zhaoyan Chen.

These new contributors are associated with these domain names:
6wind.com, atendesoftware.pl, brocade.com, caviumnetworks.com, ciena.com,
cisco.com, ericsson.com, gmail.com, huawei.com, intel.com, linaro.org,
mellanox.com, netronome.com, oktetlabs.ru, patrickmacarthur.net,
radcom.com, redhat.com, rere.qmqm.pl, sandvine.com, solarflare.com,
spirent.com, tkos.co.il, zte.com.cn.

Some highlights:
- new ethdev API for hardware filtering
- Solarflare networking driver
- ARM crypto driver
- crypto scheduler driver
- crypto performance test application
- Elastic Flow Distributor library
- virtio-user/kernel-vhost as exception path

More details in the release notes:
http://dpdk.org/doc/guides/rel_notes/release_17_02.html

The new features for the 17.05 cycle must be submitted before the end
of February, in order to be reviewed and integrated during March.
The next release is expected to happen at the beginning of May.

Thanks everyone

PS: there is no special name or alias for the DPDK versions though this
one could be named Valentine, or Aurelie for love dedication ;)





[vpp-dev] reset_fib API issue in case of IPv6 FIB

2017-02-17 Thread Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIES at Cisco)
Hello VPP dev team,

Usage of the reset_fib API command to reset an IPv6 FIB leads to an incorrect 
entry in the FIB and to a crash of VPP.

Could somebody have a look at Jira ticket https://jira.fd.io/browse/VPP-643, 
please?

Thanks,
Jan

>From make test log:

12:14:51,710 API: reset_fib ({'vrf_id': 1, 'is_ipv6': 1})
12:14:51,712 IPv6 VRF ID 1 reset
12:14:51,712 CLI: show ip6 fib
12:14:51,714 show ip6 fib
ipv6-VRF:0, fib_index 0, flow hash: src dst sport dport proto
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:5 buckets:1 uRPF:5 to:[30:15175]]
[0] [@0]: dpo-drop ip6
fd01:4::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:44 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
fd01:7::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:71 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
fd01:a::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:98 buckets:1 uRPF:5 to:[0:0]]
[0] [@0]: dpo-drop ip6
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:6 buckets:1 uRPF:6 to:[0:0]]
[0] [@2]: dpo-receive
ipv6-VRF:1, fib_index 1, flow hash: src dst sport dport proto
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:15 buckets:1 uRPF:13 to:[0:0]]
[0] [@0]: dpo-drop ip6
fd01:1::/64
  UNRESOLVED
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [index:16 buckets:1 uRPF:14 to:[0:0]]
[0] [@2]: dpo-receive

And later:

12:14:52,170 CLI: packet-generator enable
12:14:57,171 --- addError() TestIP6VrfMultiInst.test_ip6_vrf_02( IP6 VRF  
Multi-instance test 2 - delete 2 VRFs
) called, err is (, IOError(3, 'Waiting for 
reply timed out'), )
12:14:57,172 formatted exception is:
Traceback (most recent call last):
  File "/usr/lib/python2.7/unittest/case.py", line 331, in run
testMethod()
  File "/home/vpp/Documents/vpp/test/test_ip6_vrf_multi_instance.py", line 365, 
in test_ip6_vrf_02
self.run_verify_test()
  File "/home/vpp/Documents/vpp/test/test_ip6_vrf_multi_instance.py", line 322, 
in run_verify_test
self.pg_start()
  File "/home/vpp/Documents/vpp/test/framework.py", line 398, in pg_start
cls.vapi.cli('packet-generator enable')
  File "/home/vpp/Documents/vpp/test/vpp_papi_provider.py", line 169, in cli
r = self.papi.cli_inband(length=len(cli), cmd=cli)
  File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_papi.py", line 305, in 

f = lambda **kwargs: (self._call_vpp(i, msgdef, multipart, **kwargs))
  File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_papi.py", line 547, in 
_call_vpp
r = self.results_wait(context)
  File "build/bdist.linux-x86_64/egg/vpp_papi/vpp_papi.py", line 395, in 
results_wait
raise IOError(3, 'Waiting for reply timed out')
IOError: [Errno 3] Waiting for reply timed out

12:14:57,172 --- tearDown() for TestIP6VrfMultiInst.test_ip6_vrf_02( IP6 VRF  
Multi-instance test 2 - delete 2 VRFs
) called ---
12:14:57,172 CLI: show trace
12:14:57,172 VPP subprocess died unexpectedly with returncode -6 [unknown]
12:14:57,172 --- addError() TestIP6VrfMultiInst.test_ip6_vrf_02( IP6 VRF  
Multi-instance test 2 - delete 2 VRFs
) called, err is (, VppDiedError('VPP 
subprocess died unexpectedly with returncode -6 [unknown]',), )
12:14:57,173 formatted exception is:
Traceback (most recent call last):
  File "/usr/lib/python2.7/unittest/case.py", line 360, in run
self.tearDown()
  File "/home/vpp/Documents/vpp/test/test_ip6_vrf_multi_instance.py", line 148, 
in tearDown
super(TestIP6VrfMultiInst, self).tearDown()
  File "/home/vpp/Documents/vpp/test/framework.py", line 333, in tearDown
self.logger.debug(self.vapi.cli("show trace"))
  File "/home/vpp/Documents/vpp/test/vpp_papi_provider.py", line 167, in cli
self.hook.before_cli(cli)
  File "/home/vpp/Documents/vpp/test/hook.py", line 138, in before_cli
self.poll_vpp()
  File "/home/vpp/Documents/vpp/test/hook.py", line 115, in poll_vpp
raise VppDiedError(msg)
VppDiedError: VPP subprocess died unexpectedly with returncode -6 [unknown]

Re: [vpp-dev] avoiding spoof protection for BFD ECHO packets

2017-02-17 Thread Damjan Marion (damarion)

> On 17 Feb 2017, at 14:38, Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES 
> at Cisco)  wrote:
> 
> Quoting Damjan Marion (damarion) (2017-02-17 14:30:05)
>> 
>>> On 17 Feb 2017, at 14:07, Klement Sekera -X (ksekera - PANTHEON 
>>> TECHNOLOGIES at Cisco)  wrote:
>>> 
>>> Hi guys,
>>> 
>>> BFD echo function allows testing datapaths only and thus using more
>>> aggresive rates and faster detection by using packets, which are
>>> processed only by the sender and simply looped back by the receiver.
>>> Each peer declares the willingness/rate at which it will loop back
>>> echo packets and each side decides to use the feature or not locally.
>>> 
>>> For the BFD over UDP, the echo packets are recognized by having
>>> destination port 3785.
>>> 
>>> To implement this in VPP, we need to
>>> 
>>> 1.) loop back echo packets from remote side - this is easy, already done
>>> 2.) be able to send the packets out and receive them - this hits the
>>> current spoofing protection, when a packet with destination set to our
>>> own IP address gets dropped like this:
>>> 
>>> ...
>>> 00:00:00:708351: ip4-local
>>>   UDP: 172.16.2.1 -> 172.16.1.1
>>> tos 0x00, ttl 255, length 52, checksum 0x6096
>>> fragment id 0x
>>>   UDP: 49152 -> 3785
>>> length 32, checksum 0x
>>> 00:00:00:708351: error-drop
>>> ip4-input: ip4 spoofed local-address packet drops
>>> 
>>> in this example 172.16.1.1 is the address on the interface receiving the
>>> packet.
>>> 
>>> Discussion with Neale yielded a few possible solutions, none of which is
>>> great:
>>> 
>>> 1.) add input feature to siphon BFD packets instead of going to
>>> ip4-local node
>>> 2a.) skip checks in ip4-local node based on BFD ports
>>> 2b.) skip checks in ip4-local node based on UDP port registration (via
>>> udp_register_dst_port())
>>> 3.) add information for prefix/address to FIB to skip checks for this
>>> entry
>>> 
>>> based on discussion, here are the downsides of each:
>>> 
>>> 1.) taxes all input packets
>>> 2.) layering violation, caches misses in b.) case
>>> 3.) exposes VPP to spoofed packets for non-BFD traffic
>>> 
>>> based on these, 2.) seems to hurt the least.. with 2a.) being the
>>> easiest to implement to move forward..
>>> 
>>> I would appreciate thoughts/ideas from more experienced people..
>> 
>> What about moving the spoof check to the udp node and keeping a 
>> per-registration spoof-check on/off flag?
> 
> Per registration of what? The UDP port? That would be the 2b.) solution,
> no?

Not exactly, but close.

- ip4-local skips the check for all UDP packets
- the udp lookup node does the check unless explicitly asked not to, i.e. with 
udp_register_dst_port_no_spoof().
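
Something along these lines (a sketch of the proposal only; apart from the
existing udp_register_dst_port() idea being extended, all flag, helper and
symbol names below are illustrative, not current VPP API):

  /* sketch of the proposed check in the UDP local/lookup node */
  info = udp_get_dst_port_info (um, dst_port, is_ip4);
  if (info == 0)
    next0 = UDP_LOCAL_NEXT_PUNT;             /* unregistered port */
  else if (!info->no_spoof_check && src_addr_is_local (fib_index0, ip0))
    next0 = UDP_LOCAL_NEXT_DROP;             /* keep the spoofed-local drop */
  else
    next0 = info->next_index;                /* e.g. BFD echo on port 3785 */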






Re: [vpp-dev] avoiding spoof protection for BFD ECHO packets

2017-02-17 Thread otroan
For IPv6 I would probably just have picked a separate IPv6 address for BFD and 
installed an entry for it in the FIB.
For IPv4... well you mistake me for someone who cares. ;-)

Cheers,
Ole


> On 17 Feb 2017, at 14:07, Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES 
> at Cisco)  wrote:
> 
> Hi guys,
> 
> BFD echo function allows testing datapaths only and thus using more
> aggresive rates and faster detection by using packets, which are
> processed only by the sender and simply looped back by the receiver.
> Each peer declares the willingness/rate at which it will loop back
> echo packets and each side decides to use the feature or not locally.
> 
> For the BFD over UDP, the echo packets are recognized by having
> destination port 3785.
> 
> To implement this in VPP, we need to
> 
> 1.) loop back echo packets from remote side - this is easy, already done
> 2.) be able to send the packets out and receive them - this hits the
> current spoofing protection, when a packet with destination set to our
> own IP address gets dropped like this:
> 
> ...
> 00:00:00:708351: ip4-local
>UDP: 172.16.2.1 -> 172.16.1.1
>  tos 0x00, ttl 255, length 52, checksum 0x6096
>  fragment id 0x
>UDP: 49152 -> 3785
>  length 32, checksum 0x
> 00:00:00:708351: error-drop
>  ip4-input: ip4 spoofed local-address packet drops
> 
> in this example 172.16.1.1 is the address on the interface receiving the
> packet.
> 
> Discussion with Neale yielded a few possible solutions, none of which is
> great:
> 
> 1.) add input feature to siphon BFD packets instead of going to
> ip4-local node
> 2a.) skip checks in ip4-local node based on BFD ports
> 2b.) skip checks in ip4-local node based on UDP port registration (via
> udp_register_dst_port())
> 3.) add information for prefix/address to FIB to skip checks for this
> entry
> 
> based on discussion, here are the downsides of each:
> 
> 1.) taxes all input packets
> 2.) layering violation, caches misses in b.) case
> 3.) exposes VPP to spoofed packets for non-BFD traffic
> 
> based on these, 2.) seems to hurt the least.. with 2a.) being the
> easiest to implement to move forward..
> 
> I would appreciate thoughts/ideas from more experienced people..
> 
> Thanks,
> Klement
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev




Re: [vpp-dev] avoiding spoof protection for BFD ECHO packets

2017-02-17 Thread Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco)
Quoting Damjan Marion (damarion) (2017-02-17 14:30:05)
> 
> > On 17 Feb 2017, at 14:07, Klement Sekera -X (ksekera - PANTHEON 
> > TECHNOLOGIES at Cisco)  wrote:
> > 
> > Hi guys,
> > 
> > BFD echo function allows testing datapaths only and thus using more
> > aggresive rates and faster detection by using packets, which are
> > processed only by the sender and simply looped back by the receiver.
> > Each peer declares the willingness/rate at which it will loop back
> > echo packets and each side decides to use the feature or not locally.
> > 
> > For the BFD over UDP, the echo packets are recognized by having
> > destination port 3785.
> > 
> > To implement this in VPP, we need to
> > 
> > 1.) loop back echo packets from remote side - this is easy, already done
> > 2.) be able to send the packets out and receive them - this hits the
> > current spoofing protection, when a packet with destination set to our
> > own IP address gets dropped like this:
> > 
> > ...
> > 00:00:00:708351: ip4-local
> >UDP: 172.16.2.1 -> 172.16.1.1
> >  tos 0x00, ttl 255, length 52, checksum 0x6096
> >  fragment id 0x
> >UDP: 49152 -> 3785
> >  length 32, checksum 0x
> > 00:00:00:708351: error-drop
> >  ip4-input: ip4 spoofed local-address packet drops
> > 
> > in this example 172.16.1.1 is the address on the interface receiving the
> > packet.
> > 
> > Discussion with Neale yielded a few possible solutions, none of which is
> > great:
> > 
> > 1.) add input feature to siphon BFD packets instead of going to
> > ip4-local node
> > 2a.) skip checks in ip4-local node based on BFD ports
> > 2b.) skip checks in ip4-local node based on UDP port registration (via
> > udp_register_dst_port())
> > 3.) add information for prefix/address to FIB to skip checks for this
> > entry
> > 
> > based on discussion, here are the downsides of each:
> > 
> > 1.) taxes all input packets
> > 2.) layering violation, caches misses in b.) case
> > 3.) exposes VPP to spoofed packets for non-BFD traffic
> > 
> > based on these, 2.) seems to hurt the least.. with 2a.) being the
> > easiest to implement to move forward..
> > 
> > I would appreciate thoughts/ideas from more experienced people..
> 
> What about moving spoof check to udp node and keep per-registration snoop 
> on/off flag?

Per registration of what? The UDP port? That would be the 2b.) solution,
no?

Side note: BFD itself doesn't need any spoof checks in this case (echo),
because there is internal authentication of the packet payload. The echo
function is inherently insecure because an attacker controlling the link
could selectively loop back echo packets while blocking all other
traffic, thus falsely (until detected by control frames) keeping up the
illusion that the link is working. The maximum an attacker could
achieve by manipulating the IP/UDP headers is just that (link-up
illusion), because the IP/UDP headers are ignored when parsing our echo packets
and the payload guarantees that the echo packet is matched to the
correct BFD session. Control frames theoretically don't need the spoof
checks either - if SHA1 authentication is turned on for the session.


Re: [vpp-dev] avoiding spoof protection for BFD ECHO packets

2017-02-17 Thread Damjan Marion (damarion)

> On 17 Feb 2017, at 14:07, Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES 
> at Cisco)  wrote:
> 
> Hi guys,
> 
> BFD echo function allows testing datapaths only and thus using more
> aggresive rates and faster detection by using packets, which are
> processed only by the sender and simply looped back by the receiver.
> Each peer declares the willingness/rate at which it will loop back
> echo packets and each side decides to use the feature or not locally.
> 
> For the BFD over UDP, the echo packets are recognized by having
> destination port 3785.
> 
> To implement this in VPP, we need to
> 
> 1.) loop back echo packets from remote side - this is easy, already done
> 2.) be able to send the packets out and receive them - this hits the
> current spoofing protection, when a packet with destination set to our
> own IP address gets dropped like this:
> 
> ...
> 00:00:00:708351: ip4-local
>UDP: 172.16.2.1 -> 172.16.1.1
>  tos 0x00, ttl 255, length 52, checksum 0x6096
>  fragment id 0x
>UDP: 49152 -> 3785
>  length 32, checksum 0x
> 00:00:00:708351: error-drop
>  ip4-input: ip4 spoofed local-address packet drops
> 
> in this example 172.16.1.1 is the address on the interface receiving the
> packet.
> 
> Discussion with Neale yielded a few possible solutions, none of which is
> great:
> 
> 1.) add input feature to siphon BFD packets instead of going to
> ip4-local node
> 2a.) skip checks in ip4-local node based on BFD ports
> 2b.) skip checks in ip4-local node based on UDP port registration (via
> udp_register_dst_port())
> 3.) add information for prefix/address to FIB to skip checks for this
> entry
> 
> based on discussion, here are the downsides of each:
> 
> 1.) taxes all input packets
> 2.) layering violation, caches misses in b.) case
> 3.) exposes VPP to spoofed packets for non-BFD traffic
> 
> based on these, 2.) seems to hurt the least.. with 2a.) being the
> easiest to implement to move forward..
> 
> I would appreciate thoughts/ideas from more experienced people..

What about moving the spoof check to the udp node and keeping a per-registration 
spoof-check on/off flag?


[vpp-dev] avoiding spoof protection for BFD ECHO packets

2017-02-17 Thread Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco)
Hi guys,

The BFD echo function allows testing only the datapath, and thus using more
aggressive rates and faster detection, by using packets which are
processed only by the sender and simply looped back by the receiver.
Each peer declares the willingness/rate at which it will loop back
echo packets, and each side decides locally whether to use the feature.

For the BFD over UDP, the echo packets are recognized by having
destination port 3785.

To implement this in VPP, we need to

1.) loop back echo packets from remote side - this is easy, already done
2.) be able to send the packets out and receive them - this hits the
current spoofing protection, when a packet with destination set to our
own IP address gets dropped like this:

...
00:00:00:708351: ip4-local
UDP: 172.16.2.1 -> 172.16.1.1
  tos 0x00, ttl 255, length 52, checksum 0x6096
  fragment id 0x
UDP: 49152 -> 3785
  length 32, checksum 0x
00:00:00:708351: error-drop
  ip4-input: ip4 spoofed local-address packet drops

in this example 172.16.1.1 is the address on the interface receiving the
packet.

Discussion with Neale yielded a few possible solutions, none of which is
great:

1.) add input feature to siphon BFD packets instead of going to
ip4-local node
2a.) skip checks in ip4-local node based on BFD ports
2b.) skip checks in ip4-local node based on UDP port registration (via
udp_register_dst_port())
3.) add information for prefix/address to FIB to skip checks for this
entry

Based on the discussion, here are the downsides of each:

1.) taxes all input packets
2.) layering violation; cache misses in the b.) case
3.) exposes VPP to spoofed packets for non-BFD traffic

Based on these, 2.) seems to hurt the least, with 2a.) being the
easiest to implement to move forward.

I would appreciate thoughts/ideas from more experienced people..

Thanks,
Klement


Re: [vpp-dev] [csit-dev] [dpdk-announce] DPDK 17.02 released

2017-02-17 Thread Damjan Marion

VPP is already on dpdk 17.02 since yesterday evening.

Folks can do “make dpdk-install-dev” to upgrade development package in existing 
workspaces.

> On 15 Feb 2017, at 17:35, Dave Wallace  wrote:
> 
> Congrats to the DPDK folks for the 17.02 release!
> 
> Time to start integrating it into VPP/CSIT 17.04 ...
> 
> Thanks,
> -daw-
> 
>  Forwarded Message 
> Subject:  [dpdk-announce] DPDK 17.02 released
> Date: Tue, 14 Feb 2017 23:41:35 +0100
> From: Thomas Monjalon  
> 
> Organization: 6WIND
> To:   annou...@dpdk.org 
> 
> A new major release is available:
>   http://fast.dpdk.org/rel/dpdk-17.02.tar.xz 
> 
> 
> It has been a busy cycle considering the various holidays:
>   849 patches from 101 authors
>   655 files changed, 141527 insertions(+), 10539 deletions(-)
> 
> There are 41 new contributors
> (including authors, reviewers and testers):
> Thanks to Alan Dewar, Aleksander Gajewski, Anand B Jyoti, Anders Roxell,
> Andrew Lee, Andrew Rybchenko, Andy Moreton, Artem Andreev, Aws Ismail,
> Baruch Siach, Bert van Leeuwen, Chenghu Yao, Christian Maciocco,
> Emmanuel Roullit, Hanoch Haim, Ilya V. Matveychikov, Ivan Malov,
> Jacek Piasecki, Jan Wickbom, Karla Saur, Kevin Traynor, Kuba Kozak,
> Lei Yao, Mark Spender, Matthieu Ternisien d'Ouville, Michał Mirosław,
> Patrick MacArthur, Robert Stonehouse, Satha Rao, Shahaf Shuler,
> Steve Shin, Timmons C. Player, Tom Crugnale, Xieming Katty, Yaron Illouz,
> Yi Zhang, Yong Wang, Yoni Gilad, Yuan Peng, Zbigniew Bodek, Zhaoyan Chen.
> 
> These new contributors are associated with these domain names:
> 6wind.com, atendesoftware.pl, brocade.com, caviumnetworks.com, ciena.com,
> cisco.com, ericsson.com, gmail.com, huawei.com, intel.com, linaro.org,
> mellanox.com, netronome.com, oktetlabs.ru, patrickmacarthur.net,
> radcom.com, redhat.com, rere.qmqm.pl, sandvine.com, solarflare.com,
> spirent.com, tkos.co.il, zte.com.cn.
> 
> Some highlights:
>   - new ethdev API for hardware filtering
>   - Solarflare networking driver
>   - ARM crypto driver
>   - crypto scheduler driver
>   - crypto performance test application
>   - Elastic Flow Distributor library
>   - virtio-user/kernel-vhost as exception path
> 
> More details in the release notes:
>   http://dpdk.org/doc/guides/rel_notes/release_17_02.html 
> 
> 
> The new features for the 17.05 cycle must be submitted before the end
> of February, in order to be reviewed and integrated during March.
> The next release is expected to happen at the beginning of May.
> 
> Thanks everyone
> 
> PS: there is no special name or alias for the DPDK versions though this
> one could be named Valentine, or Aurelie for love dedication ;)
> ___
> csit-dev mailing list
> csit-...@lists.fd.io
> https://lists.fd.io/mailman/listinfo/csit-dev


Re: [vpp-dev] memif - packet memory interface

2017-02-17 Thread Damjan Marion (damarion)

> On 17 Feb 2017, at 06:30, Zhou, Danny  wrote:
> 
> Very Interesting...
> 
> Damjan,
> 
> Do you think it makes sense to use virtio_user/vhost_user pairs to connect 
> two VPP instances running inside two containers? 
> 
> Essentially, the memif and virtio_user/vhost_user pairs both leverage shared 
> memory for fast inter-process communication, with similar performance and the 
> same isolation/security concerns, but the latter is obviously a realistic 
> standard.


I think using virtio/vhost-user in this specific use case is a bad idea.
It is simply built to address a different problem.

- pointer conversions (guest mem mapping) are unnecessary and expensive
- the ring layout is not optimal
- too many different options don’t help with speed (anylayout, mergeable rx 
buffers, different sizes of virtio header, indirect descriptors)
- too many different options also make the whole code hard to maintain
- it is hard to protect from a misbehaving client in an efficient way, as it 
deals with pointers
- the standard is still very qemu/Linux-kernel focused

The question is: do we really need a standard for something which is very 
simple (as memif should be) and can be explained in one page of text?
If the answer is yes, we can build one instead of trying to adopt virtio. My 
personal preference is to build a neutral library and document things properly.
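
To make the "one page of text" point concrete, a purely illustrative 
shared-memory ring of the kind being argued for - offsets instead of pointers, 
one fixed layout (this is not the actual memif format, just a sketch):

#include <stdint.h>

/* a buffer is described by an offset into the shared region, so neither side
   ever exchanges raw pointers and no guest-memory mapping is needed */
typedef struct
{
  uint32_t offset;              /* buffer offset within the shared region */
  uint32_t length;              /* payload length */
  uint16_t flags;               /* e.g. a chained-buffer bit */
} ring_desc_t;

typedef struct
{
  volatile uint16_t head;       /* producer index */
  volatile uint16_t tail;       /* consumer index */
  ring_desc_t desc[256];        /* fixed-size descriptor ring */
} shm_ring_t;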

