[vpp-dev] vagrant centos image missing %py2_build macro

2017-09-13 Thread Dave Wallace

Hi Tom,

I've been trying to get the vagrant VM build for Centos 7 
(VPP_VAGRANT_DISTRO=centos7) to work in .../vpp/extras/vagrant. It 
currently fails with the following error:


 %< 
make[2]: Leaving directory `/vpp/extras/rpm/vpp-17.10/build-root'
+ cd /vpp/extras/rpm/vpp-17.10/build-root/../src/vpp-api/python
+ %py2_build
/var/tmp/rpm-tmp.Ndv7bp: line 31: fg: no job control
error: Bad exit status from /var/tmp/rpm-tmp.Ndv7bp (%build)
 %< 

After some research, I discovered that this issue is caused by the 
%py2_build macro being missing from python-devel.  This bug report says 
that it is fixed in python-2.7.5-55.el7: 
https://bugzilla.redhat.com/show_bug.cgi?id=1297522


Neither %py_build nor %py2_build appears to be defined in the centos 
vagrant box (puppetlabs/centos-7.2-64-nocm):


 %< 
[vagrant@localhost vpp]$ rpm --eval %{py2_build}
%{py2_build}
[vagrant@localhost vpp]$ rpm --eval %{py_build}
%{py_build}
 %< 
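
A possible local workaround, until a python-devel that ships the fixed macros
(>= python-2.7.5-55.el7, per the bug report above) is available in the box,
might be to define them in ~/.rpmmacros inside the VM.  This is untested on my
side and only a rough approximation of the real macro definitions:

 %< 
%py2_build   CFLAGS="%{optflags}" %{__python2} setup.py build
%py2_install CFLAGS="%{optflags}" %{__python2} setup.py install -O1 --skip-build --root %{buildroot}
 %< 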

I also noticed that the Linux Foundation minions are running Centos 7.3, 
so I tried using the following vagrant box, which appears to be the 
corresponding centos 7.3 box: alltiersolutions/centos-7.3-64-nocm



How do I install the %py2_build macro on either Centos 7.2 or 7.3?

Thanks,
-daw-


[vpp-dev] FD.io Jenkins Maintenance: 2017-09-14 @ 0500 UTC (10am PDT)

2017-09-13 Thread Vanessa Valderrama
What:

LF is enabling openSUSE minions for VPP jobs in Jenkins

When:

2017-09-14 @ 0500 UTC (10am PDT)

Where:

Please contact valderrv via IRC (fdio-meeting) if you experience any issues
related to this change.

Impact:

No restart is required for this change. Once the change is made, VPP
jobs will build on openSUSE minions.



[vpp-dev] CANCELED - Re: FD.io Enabling Jenkins openSUSE : 2017-09-13 @ 0830 UTC (1:30pm PDT)

2017-09-13 Thread Vanessa Valderrama
Canceling change due to API freeze.

Thank you,
Vanessa

On 09/13/2017 03:19 PM, Florin Coras wrote:
> Vanessa, 
>
> Today is API freeze, could you postpone until tomorrow?
>
> Florin
>
>> On Sep 13, 2017, at 12:55 PM, Vanessa Valderrama wrote:
>>
>>
>>
>>
>>
>> What:
>>
>> LF is enabling openSUSE minions for VPP jobs in Jenkins
>>
>> When:
>>
>> 2017-09-13 @ 0830 UTC (1:30pm PDT)
>>
>> Where:
>>
>> Please contact valderrv via IRC (fdio-meeting) if you experience any
>> issue related to this change
>>
>>
>> Impact:
>>
>> No restart is required for this change.  Once the change is made VPP
>> jobs will build on openSUSE minions.
>




Re: [vpp-dev] FD.io Enabling Jenkins openSUSE : 2017-09-13 @ 0830 UTC (1:30pm PDT)

2017-09-13 Thread Florin Coras
Vanessa, 

Today is API freeze, could you postpone until tomorrow?

Florin

> On Sep 13, 2017, at 12:55 PM, Vanessa Valderrama wrote:
> 
> 
> 
> 
> 
> What:
> 
> LF is enabling openSUSE minions for VPP jobs in Jenkins
> When:
> 
> 2017-09-13 @ 0830 UTC (1:30pm PDT)
> 
> Where: 
> 
> Please contact valderrv via IRC (fdio-meeting) if you experience any issue 
> related to this change
> 
> 
> Impact:
> 
> No restart is required for this change.  Once the change is made VPP jobs 
> will build on openSUSE minions.


[vpp-dev] [FD.io Helpdesk #44785] Another 6 hour build timeout

2017-09-13 Thread Vanessa Valderrama via RT
This issue appears to be resolved with the switch to the new instances.

On Fri Aug 25 12:16:52 2017, valderrv wrote:
> It appears build times have recovered.  I opened a ticket with the
> vendor to determine the root cause of the timeouts and will update the
> ticket when I receive a response.
> 
> Thank you,
> Vanessa
> 
> On Thu Aug 24 13:57:31 2017, dwallacelf wrote:
> > Dear helpdesk,
> >
> > Please investigate this build failure
> > https://jenkins.fd.io/job/vpp-verify-master-centos7/6757/
> >
> > The build output has indications of something seriously wrong
> > with the minion's OS:
> >
> > [From
> > https://logs.fd.io/production/vex-yul-rot-jenkins-1/vpp-verify-master-centos7/6757/console-timestamp.log.gz]
> >
> > 08:37:27  /usr/bin/tar: vpp-17.10/.gitignore: time stamp 2017-08-24
> > 16:44:25 is 29217.570024474 s in the future
> > 08:37:27  /usr/bin/tar: vpp-17.10/.gitreview: time stamp 2017-08-24
> > 16:44:25 is 29217.569892077 s in the future
> > 08:37:27  /usr/bin/tar: vpp-17.10/LICENSE: time stamp 2017-08-24
> > 16:44:25 is 29217.569781399 s in the future
> > 08:37:27  /usr/bin/tar: vpp-17.10/MAINTAINERS: time stamp 2017-08-24
> > 16:44:25 is 29217.569661502 s in the future
> > 08:37:27  /usr/bin/tar: vpp-17.10/Makefile: time stamp 2017-08-24
> > 16:44:25 is 29217.569547964 s in the future
> >
> >
> > I don't see any reason for the build timeout to be 6 hours.  Can this be
> > changed to something closer to 2 hours?  It is a waste of resources to
> > continue to allow the minions to spend hours producing nothing of value.
> >
> > Thanks,
> > -daw-





[vpp-dev] [FD.io Helpdesk #45343] More build timeouts for vpp-verify-master-ubuntu1604

2017-09-13 Thread Vanessa Valderrama via RT
This issue appears to be resolved with the switch to the new instances.


On Wed Sep 06 15:48:23 2017, valderrv wrote:
> We are in the process of switching to dedicated instances that should
> resolve this issue.  We hope to have this complete tomorrow around
> 9:00am PDT
> 
> 
> On 09/06/2017 02:40 PM, Florin Coras wrote:
> > Hi, 
> >
> > Any news regarding this? We are 1 week away from API freeze and the
> > infra makes it almost impossible to merge patches! 
> >
> > Thanks, 
> > Florin
> >
> >> On Sep 4, 2017, at 9:44 PM, Dave Wallace wrote:
> >>
> >> Dear helpd...@fd.io,
> >>
> >> There has been another string of build timeouts for
> >> vpp-verify-master-ubuntu1604:
> >>
> >> https://jenkins.fd.io/job/vpp-verify-master-ubuntu1604/buildTimeTrend
> >>
> >> Please change the timeout for build failures from 360 minutes to 120
> >> minutes in addition to addressing the slow minion issue.
> >>
> >> Thanks,
> >> -daw-
> >
> 





Re: [vpp-dev] VPP Performance drop from 17.04 to 17.07

2017-09-13 Thread Maciek Konstantynowicz (mkonstan)
// resending without attachment, as lists.fd.io don’t like 
.jpeg cargo..

John,

This is indeed a correct observation. At first look it does look a
bit weird. Maybe it is because CSIT NDR and PDR discovery tests run with 10sec
trials, and the VM tests were the lucky ones, so we need more samples (more
test executions) to get it right. But I’m guessing here, we need more
data..

In the soak tests I’ve been running, I noticed that some 60sec runs at
17.04 NDR (2x 4.6Mpps) are completing without a single frame loss. Some
are completing at ~5% frame loss. But running the NIC-to-NIC soak test
now for >43hrs, we do see pkt loss averaging at 0.001%. See attached
screenshots from ixia.

We would need to understand more about the nature of your code fixes
(and added functionality) in order to explain this counterintuitive
trend and adjust the test case design to catch results representative of
the expected and required behaviour.


-Maciek

On 13 Sep 2017, at 16:26, John Lo (loj) wrote:

Looking at the new results, it appears that PNIC to PNIC performance mostly 
degrades slightly while PNIC to VM to PNIC performance mostly improved slightly 
or stayed the same from 17.04 to master. Does that seem right to you, Maciek?

The L2FIB scale fix has been merged to both master and 17.07 already. These are 
the patches:
17.07 – https://gerrit.fd.io/r/#/c/8243/
Master – https://gerrit.fd.io/r/#/c/8289/

I will go ahead and close Jira ticket VPP-963.

Regards,
John


From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Maciek Konstantynowicz 
(mkonstan)
Sent: Wednesday, September 13, 2017 10:28 AM
To: Billy McFall; csit-...@lists.fd.io; vpp-dev
Subject: Re: [vpp-dev] VPP Performance drop from 17.04 to 17.07

Hello,

RECOMMENDATION
After reviewing the results, the CSIT team recommends applying the l2fib MAC
scale fix to vpp17.07 ASAP, as the fix greatly improves the NDR and PDR
performance for all tested L2BD MAC scale scenarios. However, the CSIT team
wants to note that the VPP performance after the fix still shows a small
regression compared to vpp17.04. Detail below..

FURTHER DETAIL
Here is the final update on CSIT verifying the code fix to correct VPP
frame throughput for L2 bridging with higher scale MAC tables (bigger
L2FIBs).

The following numbers of CSIT Jenkins jobs have been executed, each execution
yielding one complete set of data, referred to below as a sample:

vpp master (with fix) tests - 10 samples
vpp 17.04 tests - 6 samples

The tests have been executed across all three physical testbeds present
in the FD.io CSIT labs operated by LF IT and the CSIT project team.
Testbed selection was pseudo-random, based on testbed availability
during the JJB testbed allocation request.

A breakdown of test results is included in the updated .xlsx attachments to
CSIT jira ticket CSIT-794 [5]. All other references for breakdown data
stay unchanged [1]..[8].

In summary, we report the following relative FPS/PPS throughput change
between vpp17.04 and vpp-master after the fix:

1,000,000 MAC entries in L2FIB: up to 5% relative throughput drop
100,000 MAC entries in L2FIB: up to 3% relative throughput drop
10,000 MAC entries in L2FIB: up to 5% relative throughput drop

In addition, we have performed IXIA-based soak tests over a period of
more than 36hrs (still running), with IXIA running at NDR rate
(testcase: l2bdscale1mmaclrn-ndrdisc) and reporting 0.001% frame
loss over the current duration of the test.

Regards,
-Maciek

[1] CSIT-786 L2FIB scale testing [https://gerrit.fd.io/r/#/c/8145/ ge8145] 
[https://jira.fd.io/browse/CSIT-786 CSIT-786];
L2FIB scale testing for 10k, 100k, 1M FIB entries
 ./l2:
 10ge2p1x520-eth-l2bdscale10kmaclrn-ndrpdrdisc.robot
 10ge2p1x520-eth-l2bdscale100kmaclrn-ndrpdrdisc.robot
 10ge2p1x520-eth-l2bdscale1mmaclrn-ndrpdrdisc.robot
 10ge2p1x520-eth-l2bdscale10kmaclrn-eth-2vhostvr1024-1vm-cfsrr1-ndrpdrdisc
 10ge2p1x520-eth-l2bdscale100kmaclrn-eth-2vhostvr1024-1vm-cfsrr1-ndrpdrdisc
 10ge2p1x520-eth-l2bdscale1mmaclrn-eth-2vhostvr1024-1vm-cfsrr1-ndrpdrdisc
[2] VPP master branch [https://gerrit.fd.io/r/#/c/8173/ ge8173];
[3] VPP stable/1707 [https://gerrit.fd.io/r/#/c/8167/ ge8167];
[4] VPP stable/1704 [https://gerrit.fd.io/r/#/c/8172/ ge8172];
[5] CSIT-794 VPP v17.07 L2BD yields lower NDR and PDR performance vs. v17.04, 
20170825_l2fib_regression_10k_100k_1M.xlsx, [https://jira.fd.io/browse/CSIT-794 
CSIT-794];
[6] TRex v2.28 Ethernet FCS mis-calculation issue 
[https://jira.fd.io/browse/CSIT-793 CSIT-793];
[7] commit 

[vpp-dev] Fwd: VPP Performance drop from 17.04 to 17.07

2017-09-13 Thread Maciek Konstantynowicz (mkonstan)
FYI, the latest CSIT status re the l2fib performance regression in 17.07.
I expect the fix to be included in the vpp 17.07 maintenance release.

In parallel, we owe the community an explanation that the observed
performance degradation is due to adding missing mandatory L2 bridging
functionality and this is the current cost of doing so, as no work
comes for free. It is also a one-off degradation, not a bad
trend that will impact the "best-on-the-planet" network data plane
performance properties of VPP. I will work with John Lo, who owns this
feature, and the VPP data plane gurus on cc: to arrive at a satisfactory
explanation for the community.

Hope this makes sense..

-Maciek

Begin forwarded message:

From: Maciek Konstantynowicz
Subject: Re: [vpp-dev] VPP Performance drop from 17.04 to 17.07
Date: 13 September 2017 at 15:27:52 BST
To: Billy McFall, "csit-...@lists.fd.io", vpp-dev
Cc: "Maciek Konstantynowicz (mkonstan)"

Hello,

RECOMMENDATION
After reviewing the results, the CSIT team recommends applying the l2fib MAC
scale fix to vpp17.07 ASAP, as the fix greatly improves the NDR and PDR
performance for all tested L2BD MAC scale scenarios. However, the CSIT team
wants to note that the VPP performance after the fix still shows a small
regression compared to vpp17.04. Detail below..

FURTHER DETAIL
Here is the final update on CSIT verifying the code fix to correct VPP
frame throughput for L2 bridging with higher scale MAC tables (bigger
L2FIBs).

The following numbers of CSIT Jenkins jobs have been executed, each execution
yielding one complete set of data, referred to below as a sample:

vpp master (with fix) tests - 10 samples
vpp 17.04 tests - 6 samples

The tests have been executed across all three physical testbeds present
in the FD.io CSIT labs operated by LF IT and the CSIT project team.
Testbed selection was pseudo-random, based on testbed availability
during the JJB testbed allocation request.

A breakdown of test results is included in the updated .xlsx attachments to
CSIT jira ticket CSIT-794 [5]. All other references for breakdown data
stay unchanged [1]..[8].

In summary, we report the following relative FPS/PPS throughput change
between vpp17.04 and vpp-master after the fix:

1,000,000 MAC entries in L2FIB: up to 5% relative throughput drop
100,000 MAC entries in L2FIB: up to 3% relative throughput drop
10,000 MAC entries in L2FIB: up to 5% relative throughput drop

In addition, we have performed IXIA-based soak tests over a period of
more than 36hrs (still running), with IXIA running at NDR rate
(testcase: l2bdscale1mmaclrn-ndrdisc) and reporting 0.001% frame
loss over the current duration of the test.

Regards,
-Maciek

[1] CSIT-786 L2FIB scale testing [https://gerrit.fd.io/r/#/c/8145/ ge8145] 
[https://jira.fd.io/browse/CSIT-786 CSIT-786];
L2FIB scale testing for 10k, 100k, 1M FIB entries
 ./l2:
 10ge2p1x520-eth-l2bdscale10kmaclrn-ndrpdrdisc.robot
 10ge2p1x520-eth-l2bdscale100kmaclrn-ndrpdrdisc.robot
 10ge2p1x520-eth-l2bdscale1mmaclrn-ndrpdrdisc.robot
 10ge2p1x520-eth-l2bdscale10kmaclrn-eth-2vhostvr1024-1vm-cfsrr1-ndrpdrdisc
 10ge2p1x520-eth-l2bdscale100kmaclrn-eth-2vhostvr1024-1vm-cfsrr1-ndrpdrdisc
 10ge2p1x520-eth-l2bdscale1mmaclrn-eth-2vhostvr1024-1vm-cfsrr1-ndrpdrdisc
[2] VPP master branch [https://gerrit.fd.io/r/#/c/8173/ ge8173];
[3] VPP stable/1707 [https://gerrit.fd.io/r/#/c/8167/ ge8167];
[4] VPP stable/1704 [https://gerrit.fd.io/r/#/c/8172/ ge8172];
[5] CSIT-794 VPP v17.07 L2BD yields lower NDR and PDR performance vs. v17.04, 
20170825_l2fib_regression_10k_100k_1M.xlsx, [https://jira.fd.io/browse/CSIT-794 
CSIT-794];
[6] TRex v2.28 Ethernet FCS mis-calculation issue 
[https://jira.fd.io/browse/CSIT-793 CSIT-793];
[7] commit 25ff2ea3a31e422094f6d91eab46222a29a77c4b;
[8] VPP v17.07 L2BD NDR and PDR multi-thread performance broken 
[https://jira.fd.io/browse/VPP-963 VPP-963];

On 28 Aug 2017, at 18:11, Maciek Konstantynowicz (mkonstan) wrote:


On 28 Aug 2017, at 17:47, Billy McFall wrote:



On Mon, Aug 28, 2017 at 8:53 AM, Maciek Konstantynowicz (mkonstan) wrote:
+ csit-dev

Billy,

Per last week's CSIT project call, from the CSIT perspective, we
classified your reported issue as a Test coverage escape.

Summary
===
CSIT test coverage got fixed, see more detail below. The CSIT tests
uncovered 

Re: [vpp-dev] net/mlx5: install libmlx5 & libibverbs if no OFED

2017-09-13 Thread Dave Barach (dbarach)
I typically use "git commit --amend" followed by "git review [--draft]".

HTH... Dave
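
In other words, the rough flow for pushing a new patch set to the same Gerrit
change is the following (a sketch; exact flags depend on how git-review is set
up locally):

  # edit the files flagged in the review comments, then stage them
  git add <changed files>
  # amend the existing commit so its Change-Id trailer is preserved
  git commit --amend
  # push the amended commit as a new patch set of the same change
  git review            # or: git review --draft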

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Shachar Beiser
Sent: Wednesday, September 13, 2017 11:29 AM
To: vpp-dev@lists.fd.io
Cc: Shahaf Shuler ; Damjan Marion (damarion) 

Subject: [vpp-dev] net/mlx5: install libmlx5 & libibverbs if no OFED

Hi,

  I would like to send a second patch fixing the comments I received.
  I understand that it may not be done by "git push", and "git review"/"git review -s" also seems to have no effect.

  What is the procedure to send a second patch?
 -Shachar Beiser.
  https://gerrit.fd.io/r/#/c/8390/1



Re: [vpp-dev] VPP Performance drop from 17.04 to 17.07

2017-09-13 Thread John Lo (loj)
Looking at the new results, it appears that PNIC to PNIC performance mostly 
degrades slightly while PNIC to VM to PNIC performance mostly improved slightly 
or stayed the same from 17.04 to master. Does that seem right to you, Maciek?

The L2FIB scale fix has been merged to both master and 17.07 already. These are 
the patches:
17.07 – https://gerrit.fd.io/r/#/c/8243/
Master – https://gerrit.fd.io/r/#/c/8289/

I will go ahead and close Jira ticket VPP-963.

Regards,
John


From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Maciek Konstantynowicz (mkonstan)
Sent: Wednesday, September 13, 2017 10:28 AM
To: Billy McFall ; csit-...@lists.fd.io; vpp-dev 

Subject: Re: [vpp-dev] VPP Performance drop from 17.04 to 17.07

Hello,

RECOMMENDATION
After reviewing the results, the CSIT team recommends applying the l2fib MAC
scale fix to vpp17.07 ASAP, as the fix greatly improves the NDR and PDR
performance for all tested L2BD MAC scale scenarios. However, the CSIT team
wants to note that the VPP performance after the fix still shows a small
regression compared to vpp17.04. Detail below..

FURTHER DETAIL
Here is the final update on CSIT verifying the code fix to correct VPP
frame throughput for L2 bridging with higher scale MAC tables (bigger
L2FIBs).

The following numbers of CSIT Jenkins jobs have been executed, each execution
yielding one complete set of data, referred to below as a sample:

vpp master (with fix) tests - 10 samples
vpp 17.04 tests - 6 samples

The tests have been executed across all three physical testbeds present
in the FD.io CSIT labs operated by LF IT and the CSIT project team.
Testbed selection was pseudo-random, based on testbed availability
during the JJB testbed allocation request.

A breakdown of test results is included in the updated .xlsx attachments to
CSIT jira ticket CSIT-794 [5]. All other references for breakdown data
stay unchanged [1]..[8].

In summary, we report the following relative FPS/PPS throughput change
between vpp17.04 and vpp-master after the fix:

1,000,000 MAC entries in L2FIB: up to 5% relative throughput drop
100,000 MAC entries in L2FIB: up to 3% relative throughput drop
10,000 MAC entries in L2FIB: up to 5% relative throughput drop

In addition, we have performed IXIA-based soak tests over a period of
more than 36hrs (still running), with IXIA running at NDR rate
(testcase: l2bdscale1mmaclrn-ndrdisc) and reporting 0.001% frame
loss over the current duration of the test.

Regards,
-Maciek

[1] CSIT-786 L2FIB scale testing [https://gerrit.fd.io/r/#/c/8145/ ge8145] 
[https://jira.fd.io/browse/CSIT-786 CSIT-786];
L2FIB scale testing for 10k, 100k, 1M FIB entries
 ./l2:
 10ge2p1x520-eth-l2bdscale10kmaclrn-ndrpdrdisc.robot
 10ge2p1x520-eth-l2bdscale100kmaclrn-ndrpdrdisc.robot
 10ge2p1x520-eth-l2bdscale1mmaclrn-ndrpdrdisc.robot
 10ge2p1x520-eth-l2bdscale10kmaclrn-eth-2vhostvr1024-1vm-cfsrr1-ndrpdrdisc
 10ge2p1x520-eth-l2bdscale100kmaclrn-eth-2vhostvr1024-1vm-cfsrr1-ndrpdrdisc
 10ge2p1x520-eth-l2bdscale1mmaclrn-eth-2vhostvr1024-1vm-cfsrr1-ndrpdrdisc
[2] VPP master branch [https://gerrit.fd.io/r/#/c/8173/ ge8173];
[3] VPP stable/1707 [https://gerrit.fd.io/r/#/c/8167/ ge8167];
[4] VPP stable/1704 [https://gerrit.fd.io/r/#/c/8172/ ge8172];
[5] CSIT-794 VPP v17.07 L2BD yields lower NDR and PDR performance vs. v17.04, 
20170825_l2fib_regression_10k_100k_1M.xlsx, [https://jira.fd.io/browse/CSIT-794 
CSIT-794];
[6] TRex v2.28 Ethernet FCS mis-calculation issue 
[https://jira.fd.io/browse/CSIT-793 CSIT-793];
[7] commit 25ff2ea3a31e422094f6d91eab46222a29a77c4b;
[8] VPP v17.07 L2BD NDR and PDR multi-thread performance broken 
[https://jira.fd.io/browse/VPP-963 VPP-963];

On 28 Aug 2017, at 18:11, Maciek Konstantynowicz (mkonstan) wrote:


On 28 Aug 2017, at 17:47, Billy McFall wrote:



On Mon, Aug 28, 2017 at 8:53 AM, Maciek Konstantynowicz (mkonstan) wrote:
+ csit-dev

Billy,

Per last week's CSIT project call, from the CSIT perspective, we
classified your reported issue as a Test coverage escape.

Summary
===
CSIT test coverage got fixed, see more detail below. The CSIT tests
uncovered a regression for L2BD with MAC learning with a higher total number
of MACs in L2FIB (>>10k MACs) for multi-threaded configurations. Single-
threaded configurations seem not to be impacted.

Billy, Karl, Can you confirm this aligns with your findings?

When you say "multi-threaded configuration", I assume you mean multiple worker 
threads?

Yes, I should have said multiple data plane threads, in VPP land that’s worker 
threads indeed.


Karl's tests had 4 workers, one for each NIC (physical and vhost-user). He only 
tested multi-threaded, so we cannot confirm that single-threaded 
configurations are not impacted.

Okay. Still your result 

Re: [vpp-dev] VPP Performance drop from 17.04 to 17.07

2017-09-13 Thread Maciek Konstantynowicz (mkonstan)
Hello,

RECOMMENDATION
After reviewing the results, the CSIT team recommends applying the l2fib MAC
scale fix to vpp17.07 ASAP, as the fix greatly improves the NDR and PDR
performance for all tested L2BD MAC scale scenarios. However, the CSIT team
wants to note that the VPP performance after the fix still shows a small
regression compared to vpp17.04. Detail below..

FURTHER DETAIL
Here is the final update on CSIT verifying the code fix to correct VPP
frame throughput for L2 bridging with higher scale MAC tables (bigger
L2FIBs).

The following numbers of CSIT Jenkins jobs have been executed, each execution
yielding one complete set of data, referred to below as a sample:

vpp master (with fix) tests - 10 samples
vpp 17.04 tests - 6 samples

The tests have been executed across all three physical testbeds present
in the FD.io CSIT labs operated by LF IT and the CSIT project team.
Testbed selection was pseudo-random, based on testbed availability
during the JJB testbed allocation request.

A breakdown of test results is included in the updated .xlsx attachments to
CSIT jira ticket CSIT-794 [5]. All other references for breakdown data
stay unchanged [1]..[8].

In summary, we report the following relative FPS/PPS throughput change
between vpp17.04 and vpp-master after the fix:

1,000,000 MAC entries in L2FIB: up to 5% relative throughput drop
100,000 MAC entries in L2FIB: up to 3% relative throughput drop
10,000 MAC entries in L2FIB: up to 5% relative throughput drop

In addition, we have performed IXIA-based soak tests over a period of
more than 36hrs (still running), with IXIA running at NDR rate
(testcase: l2bdscale1mmaclrn-ndrdisc) and reporting 0.001% frame
loss over the current duration of the test.

Regards,
-Maciek

[1] CSIT-786 L2FIB scale testing [https://gerrit.fd.io/r/#/c/8145/ ge8145] 
[https://jira.fd.io/browse/CSIT-786 CSIT-786];
L2FIB scale testing for 10k, 100k, 1M FIB entries
 ./l2:
 10ge2p1x520-eth-l2bdscale10kmaclrn-ndrpdrdisc.robot
 10ge2p1x520-eth-l2bdscale100kmaclrn-ndrpdrdisc.robot
 10ge2p1x520-eth-l2bdscale1mmaclrn-ndrpdrdisc.robot
 10ge2p1x520-eth-l2bdscale10kmaclrn-eth-2vhostvr1024-1vm-cfsrr1-ndrpdrdisc
 10ge2p1x520-eth-l2bdscale100kmaclrn-eth-2vhostvr1024-1vm-cfsrr1-ndrpdrdisc
 10ge2p1x520-eth-l2bdscale1mmaclrn-eth-2vhostvr1024-1vm-cfsrr1-ndrpdrdisc
[2] VPP master branch [https://gerrit.fd.io/r/#/c/8173/ ge8173];
[3] VPP stable/1707 [https://gerrit.fd.io/r/#/c/8167/ ge8167];
[4] VPP stable/1704 [https://gerrit.fd.io/r/#/c/8172/ ge8172];
[5] CSIT-794 VPP v17.07 L2BD yields lower NDR and PDR performance vs. v17.04, 
20170825_l2fib_regression_10k_100k_1M.xlsx, [https://jira.fd.io/browse/CSIT-794 
CSIT-794];
[6] TRex v2.28 Ethernet FCS mis-calculation issue 
[https://jira.fd.io/browse/CSIT-793 CSIT-793];
[7] commit 25ff2ea3a31e422094f6d91eab46222a29a77c4b;
[8] VPP v17.07 L2BD NDR and PDR multi-thread performance broken 
[https://jira.fd.io/browse/VPP-963 VPP-963];

On 28 Aug 2017, at 18:11, Maciek Konstantynowicz (mkonstan) wrote:


On 28 Aug 2017, at 17:47, Billy McFall wrote:



On Mon, Aug 28, 2017 at 8:53 AM, Maciek Konstantynowicz (mkonstan) wrote:
+ csit-dev

Billy,

Per last week's CSIT project call, from the CSIT perspective, we
classified your reported issue as a Test coverage escape.

Summary
===
CSIT test coverage got fixed, see more detail below. The CSIT tests
uncovered a regression for L2BD with MAC learning with a higher total number
of MACs in L2FIB (>>10k MACs) for multi-threaded configurations. Single-
threaded configurations seem not to be impacted.

Billy, Karl, Can you confirm this aligns with your findings?

When you say "multi-threaded configuration", I assume you mean multiple worker 
threads?

Yes, I should have said multiple data plane threads, in VPP land that’s worker 
threads indeed.

Karl's tests had 4 workers, one for each NIC (physical and vhost-user). He only 
tested multi-threaded, so we cannot confirm that single-threaded 
configurations are not impacted.

Okay. Still, your results align with our tests, both CSIT and offline with IXIA.


Our numbers are a little different from yours, but we are both seeing drops 
between releases.

Your numbers are different most likely due to different MAC scale. You
quote MAC scale per direction, we quote total MAC scale, i.e. total
number of VPP l2fib entries.

We had a bigger drop-off with 10k flows, but it seems to be similar with the 
million flow tests.

Our 10k flows are the equivalent of 2x 5k flows, defined as:

flow-ab1 => (smac-a1,dmac-b1)
flow-ab2 => (smac-a2,dmac-b2)
..
flow-ab5000 => (smac-a5000,dmac-b5000)

flow-ba1 => (smac-b1,dmac-a1)
flow-ba2 => (smac-b2,dmac-a2)
..
flow-ba5000 => (smac-b5000,dmac-a5000)

In your case, based on description provided by Karl on the last CSIT
call I 

[vpp-dev] Spurious patch verification failure (gerrit 8400)

2017-09-13 Thread Dave Barach (dbarach)
See gerrit https://gerrit.fd.io/r/#/c/8400, 
https://jenkins.fd.io/job/vpp-verify-master-centos7/7070/console


12:29:12 make[1]: Leaving directory 
`/w/workspace/vpp-verify-master-centos7/test'
12:29:12 [vpp-verify-master-centos7] $ /bin/bash 
/tmp/hudson3100921859131279854.sh
12:29:12 Loaded plugins: fastestmirror, langpacks
12:29:12 Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache 
fast
12:29:17 Determining fastest mirrors
12:29:18  * base: centos.mirror.ca.planethoster.net
12:29:18  * epel: ftp.cse.buffalo.edu
12:29:18  * extras: centos.mirror.iweb.ca
12:29:18  * updates: centos.mirror.netelligent.ca
12:29:21 Package redhat-lsb-4.1-27.el7.centos.1.x86_64 already installed and 
latest version
12:29:21 Nothing to do
12:29:21 DISTRIB_ID: CentOS
12:29:21 DISTRIB_RELEASE: 7.3.1611
12:29:21 DISTRIB_CODENAME: Core
12:29:21 DISTRIB_DESCRIPTION: "CentOS Linux release 7.3.1611 (Core) "
12:29:21 INSTALLING VPP-DPKG-DEV from apt/yum repo
12:29:21 REPO_URL: https://nexus.fd.io/content/repositories/fd.io.master.centos7
12:29:21 Loaded plugins: fastestmirror, langpacks
12:29:52 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:29:52 Trying other mirror.
12:30:22 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:30:22 Trying other mirror.
12:30:52 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:30:52 Trying other mirror.
12:31:22 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:31:22 Trying other mirror.
12:31:52 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:31:52 Trying other mirror.
12:32:22 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:32:22 Trying other mirror.
12:32:52 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:32:52 Trying other mirror.
12:33:22 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:33:22 Trying other mirror.
12:33:52 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:33:52 Trying other mirror.
12:34:22 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:34:22 Trying other mirror.
12:34:22
12:34:22
12:34:22  One of the configured repositories failed (fd.io master branch latest 
merge),
12:34:22  and yum doesn't have enough cached data to continue. At this point 
the only
12:34:22  safe thing yum can do is fail. There are a few ways to work "fix" 
this:
12:34:22
12:34:22  1. Contact the upstream for the repository and get them to fix 
the problem.
12:34:22
12:34:22  2. Reconfigure the baseurl/etc. for the repository, to point to a 
working
12:34:22 upstream. This is most often useful if you are using a newer
12:34:22 distribution release than is supported by the repository (and 
the
12:34:22 packages for the previous distribution release still work).
12:34:22
12:34:22  
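
For what it's worth, the failing repo can be checked or skipped by hand on an
affected minion (a sketch; the repo id below is a guess inferred from the URL
above, the real id is whatever /etc/yum.repos.d/ uses for the fd.io nexus repo):

  # check whether the fd.io nexus repo alone is the slow one
  yum --disablerepo='*' --enablerepo='fd.io.master.centos7' makecache
  # or skip it for a single transaction while nexus recovers
  yum --disablerepo='fd.io.master.centos7' install <package>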

Re: [vpp-dev] cicn option missing for vppctl command

2017-09-13 Thread Vikrant Talponkar
Hi Alberto,

Thanks a lot for your response. In the meanwhile I managed to get the cicn-plugin 
installed on VPP.
I’m not sure I’m even moving in the right direction. Can you please point me 
to where I can reach out to get some basic ccnx examples running on my 
own systems?

Regards,
Vikrant Talponkar
Sr. Software Engineer

P  : +91 20 6604 6000 (Extn: 6080)

www.xoriant.com


From: Alberto Compagno (acompagn) [mailto:acomp...@cisco.com]
Sent: Wednesday, September 13, 2017 2:53 PM
To: Vikrant Talponkar ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] cicn option missing for vppctl command

Hi Vikrant,

That’s the wrong mailing list. I’ll move the thread to the cicn-dev list. 
Please subscribe there if you haven’t done it already.

Alberto

From:  on behalf of Vikrant Talponkar
Date: Tuesday, 12 September 2017 at 14:43
To: "vpp-dev@lists.fd.io" 
>
Subject: [vpp-dev] cicn option missing for vppctl command

Hi,

I am fairly new to Cicn. I am trying to create a scenario as mentioned in the 
https://wiki.fd.io/view/Simple-vms setup.

I have VPP installed on my relay machine, version v17.04.2-release. I also have 
cicn-plugin installed using apt-get.

I have dpdk set up and I see all the network interfaces with:
vppctl show interface

When I try the following command though:
vppctl cicn enable

I see `cicn` is not an option available for the vppctl command. Can anyone 
please help me figure out what I’m missing here?

Regards,
Vikrant Talponkar
Sr. Software Engineer

P  : +91 20 6604 6000 (Extn: 6080)

www.xoriant.com



Re: [vpp-dev] cicn option missing for vppctl command

2017-09-13 Thread Alberto Compagno (acompagn)
Hi Vikrant,

That’s the wrong mailing list. I’ll move the thread to the cicn-dev list. 
Please subscribe there if you haven’t done it already.

Alberto

From:  on behalf of Vikrant Talponkar 

Date: Tuesday, 12 September 2017 at 14:43
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] cicn option missing for vppctl command

Hi,

I am fairly new to Cicn. I am trying to create a scenario as mentioned in the 
https://wiki.fd.io/view/Simple-vms setup.

I have VPP installed on my relay machine, version v17.04.2-release. I also have 
cicn-plugin installed using apt-get.

I have dpdk set up and I see all the network interfaces with:
vppctl show interface

When I try the following command though:
vppctl cicn enable

I see `cicn` is not an option available for the vppctl command. Can anyone 
please help me figure out what I’m missing here?

Regards,
Vikrant Talponkar
Sr. Software Engineer

P  : +91 20 6604 6000 (Extn: 6080)

www.xoriant.com

