[vpp-dev] Congrats to Marco Varlese on his election as a vpp project committer

2018-02-08 Thread Dave Barach (dbarach)
It gives me great pleasure to announce Marco's election as a vpp project 
committer, confirmed a few minutes ago by the fd.io vpp TSC.

Vanessa V. will take care of [+2 button] mechanics shortly.

Thanks much to Marco for his interest in the vpp project!

Dave
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] SCTP coverity-scan warnings addressed

2018-02-08 Thread Dave Barach (dbarach)
+1, thanks Marco...!

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Luke, Chris
Sent: Thursday, February 8, 2018 7:06 AM
To: Marco Varlese 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] SCTP coverity-scan warnings addressed

Great, thanks!

Chris

> -Original Message-
> From: Marco Varlese [mailto:mvarl...@suse.de]
> Sent: Thursday, February 8, 2018 5:16
> To: Luke, Chris 
> Cc: Florin Coras ; vpp-dev@lists.fd.io
> Subject: SCTP coverity-scan warnings addressed
> 
> Hi Chris,
> 
> Just to update you that I took care of the action item which came up 
> during the VPP project-meeting on Tuesday.
> 
> The patch https://gerrit.fd.io/r/#/c/10433/ addressing the warnings 
> (8) re SCTP was merged.
> 
> 
> Cheers,
> --
> Marco V
> 
> SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton 
> HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] FW: New Committer Nomination: Marco Varlese

2018-02-06 Thread Dave Barach (dbarach)
Copying the list...

From: Luke, Chris [mailto:chris_l...@comcast.com]
Sent: Tuesday, February 6, 2018 11:40 AM
To: Dave Barach (dbarach) ; Keith Burns (krb) 
; Florin Coras (fcoras) ; John Lo (loj) 
; Damjan Marion (damarion) ; Neale Ranns 
(nranns) ; Ole Troan ; Dave Wallace 
; Ed Warnicke (eaw) 
Subject: RE: New Committer Nomination: Marco Varlese

+1

From: Dave Barach (dbarach) [mailto:dbar...@cisco.com]
Sent: Tuesday, February 6, 2018 8:56
To: Keith Burns (krb) <k...@cisco.com>; Florin Coras (fcoras) <fco...@cisco.com>; 
John Lo (loj) <l...@cisco.com>; Luke, Chris <chris_l...@cable.comcast.com>; 
Damjan Marion (damarion) <damar...@cisco.com>; Neale Ranns (nranns) <nra...@cisco.com>; 
Ole Troan <o...@cisco.com>; Dave Wallace <dwallac...@gmail.com>; Ed Warnicke (eaw) <e...@cisco.com>
Subject: New Committer Nomination: Marco Varlese

Folks,

In view of significant code contributions to the vpp project - see below - I'm 
pleased to nominate Marco Varlese as a vpp project committer. I have high 
confidence that he'll be a major asset to the project in a committer role.

Marco has contributed 46 merged patches, including significant new feature 
work.  Example: host stack implementation of SCTP, 8 KLOC 
https://gerrit.fd.io/r/#/c/9150.


Please vote (+1, 0, -1) on vpp-dev@lists.fd.io. 
We'll need a recorded vote so that the TSC can approve Marco's nomination.

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] New fd.io vpp project committer vote: Marco Varlese

2018-02-06 Thread Dave Barach (dbarach)
Oh, oops, forgot to vote myself: +1

(😊).. D.

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Dave Barach (dbarach)
Sent: Tuesday, February 6, 2018 9:24 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] New fd.io vpp project committer vote: Marco Varlese

Copying vpp-dev@lists.fd.io to formally open the 
vote described below. Voting is limited to current committers, and will remain 
open for 1 week, or until folks have voted.

Thanks… Dave

From: Dave Barach (dbarach)
Sent: Tuesday, February 6, 2018 8:56 AM
To: Keith Burns (krb) <k...@cisco.com>; Florin Coras (fcoras) <fco...@cisco.com>; 
John Lo (loj) <l...@cisco.com>; Luke, Chris <chris_l...@comcast.com>; 
Damjan Marion (damarion) <damar...@cisco.com>; Neale Ranns (nranns) <nra...@cisco.com>; 
Ole Troan <o...@cisco.com>; Dave Wallace <dwallac...@gmail.com>; Ed Warnicke (eaw) <e...@cisco.com>
Subject: New Committer Nomination: Marco Varlese

Folks,

In view of significant code contributions to the vpp project – see below – I’m 
pleased to nominate Marco Varlese as a vpp project committer. I have high 
confidence that he’ll be a major asset to the project in a committer role.

Marco has contributed 46 merged patches, including significant new feature 
work.  Example: host stack implementation of SCTP, 8 KLOC 
https://gerrit.fd.io/r/#/c/9150. All merged patches: 
https://gerrit.fd.io/r/#/q/status:merged+owner:%22Marco+Varlese+%253Cmarco.varlese%2540suse.de%253E%22


Please vote (+1, 0, -1) on vpp-dev@lists.fd.io. 
We’ll need a recorded vote so that the TSC can approve Marco’s nomination.

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] FW: New Committer Nomination: Marco Varlese

2018-02-06 Thread Dave Barach (dbarach)
To record Damjan’s vote…

From: Damjan Marion (damarion)
Sent: Tuesday, February 6, 2018 9:07 AM
To: Dave Barach (dbarach) 
Cc: Keith Burns (krb) ; Florin Coras (fcoras) 
; John Lo (loj) ; Luke, Chris 
; Neale Ranns (nranns) ; Ole Troan 
; Dave Wallace ; Ed Warnicke (eaw) 

Subject: Re: New Committer Nomination: Marco Varlese

+1

On 6 Feb 2018, at 14:55, Dave Barach (dbarach) <dbar...@cisco.com> wrote:
Folks,

In view of significant code contributions to the vpp project – see below – I’m 
pleased to nominate Marco Varlese as a vpp project committer. I have high 
confidence that he’ll be a major asset to the project in a committer role.

Marco has contributed 46 merged patches, including significant new feature 
work.  Example: host stack implementation of SCTP, 8 KLOC 
https://gerrit.fd.io/r/#/c/9150.


Please vote (+1, 0, -1) on vpp-dev@lists.fd.io. 
We’ll need a recorded vote so that the TSC can approve Marco’s nomination.

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] New fd.io vpp project committer vote: Marco Varlese

2018-02-06 Thread Dave Barach (dbarach)
Copying vpp-dev@lists.fd.io to formally open
the vote described below. Voting is limited to current committers, and will
remain open for 1 week, or until folks have voted.

 

Thanks. Dave

 

From: Dave Barach (dbarach) 
Sent: Tuesday, February 6, 2018 8:56 AM
To: Keith Burns (krb) ; Florin Coras (fcoras)
; John Lo (loj) ; Luke, Chris
; Damjan Marion (damarion) ;
Neale Ranns (nranns) ; Ole Troan ; Dave
Wallace ; Ed Warnicke (eaw) 
Subject: New Committer Nomination: Marco Varlese

 

Folks,

 

In view of significant code contributions to the vpp project - see below -
I'm pleased to nominate Marco Varlese as a vpp project committer. I have
high confidence that he'll be a major asset to the project in a committer
role.  

 

Marco has contributed 46 merged patches, including significant new feature
work.  Example: host stack implementation of SCTP, 8 KLOC
https://gerrit.fd.io/r/#/c/9150. All merged patches:
https://gerrit.fd.io/r/#/q/status:merged+owner:%22Marco+Varlese+%253Cmarco.v
arlese%2540suse.de%253E%22 

 

 

Please vote (+1, 0, -1) on vpp-dev@lists.fd.io. We'll need a recorded vote so
that the TSC can approve Marco's nomination.

 

Thanks... Dave

 



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Calling a C function in one plugin from another plugin?

2018-02-05 Thread Dave Barach (dbarach)
You can ask vlib_get_plugin_symbol ("plugin_name", "function_name") for the 
address of a function... 

Returns NULL if e.g. the plugin in question isn't loaded or the symbol is 
missing.
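
A rough sketch of typical usage (the plugin and symbol names here are invented for illustration, so treat them as assumptions):

  /* Resolve a function exported by another plugin at runtime. */
  typedef int (*offload_enable_fn_t) (u32 sw_if_index, int enable);
  offload_enable_fn_t fn;

  fn = vlib_get_plugin_symbol ("dpdk_plugin.so", "my_dpdk_flow_offload_enable");
  if (fn == 0)
    return clib_error_return (0, "dpdk plugin or symbol not found");
  int rv = fn (sw_if_index, 1 /* enable */);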

HTH... D.

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Michael Lilja
Sent: Monday, February 5, 2018 9:54 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Calling a C function in one plugin from another plugin?

Hi,

I'm looking at using DPDK rte_flow (generic flow API) for ACL offloading. From 
what I can see, the only option I have is to implement a v1_msg_* receiver in 
the DPDK plugin to accept commands from ACL via the SHMEM rings. The concern I 
have is that this might conflict with the design of VPP; I'm not sure whether 
VPP is designed to have inter-plugin communication.

Does anyone have another approach for calling DPDK functions from within another 
plugin, instead of the v1_msg_* layer?

Thanks,
Michael
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] unformat %s eats newlines

2018-02-02 Thread Dave Barach (dbarach)
Folks who need even slightly bulletproof configuration methods should use 
binary APIs: directly from C or through one of several language bindings.

Debug CLI is a developer’s tool, subject to change without notice, and 
supported at the implementer’s discretion.

Extra and/or unparsed input should not go unnoticed: the next function up the 
parse stack will complain.

IIWY I’d leave unformat(…) alone.

HTH… D.

From: Andreas Schultz [mailto:andreas.schu...@travelping.com]
Sent: Friday, February 2, 2018 3:26 PM
To: Dave Barach (dbarach) 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] unformat %s eats newlines

Dave Barach (dbarach) <dbar...@cisco.com> wrote on 
Fri, 2 Feb 2018 at 19:22:
Why not simply:

while (…)
  {
if (unformat(input, “name %s”, &name))
  ;
else if (…)
  ;
else
 break;
  }

if ()
return clib_error_return (0, "parse error: '%U'",
  format_unformat_error, input);

That would mean that malformed optional arguments and random additional input would go 
unnoticed. CLI verification is already not that strong (the usual while-loop 
parsing permits random argument order even when the help strings suggest 
strongly ordered arguments).

Is there a reason that unformat eats the newline, or is it just too hard to change?

Andreas

D.

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Andreas Schultz
Sent: Friday, February 2, 2018 12:47 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] unformat %s eats newlines

A typical construct to parse arguments is to use unformat in a while loop that 
checks for UNFORMAT_END_OF_INPUT.
For multiline input that relies on the detection of "\n" in the input stream.

The problem is that a construct like:

unformat (input, "name %_%v%_", &name)

eats the newline when it is the only character following the string to be 
parsed.

This even breaks reading a multi-line config with exec.

Regards
Andreas
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] unformat %s eats newlines

2018-02-02 Thread Dave Barach (dbarach)
Why not simply:

while (…)
  {
if (unformat(input, “name %s”, &name))
  ;
else if (…)
  ;
else
 break;
  }

if ()
return clib_error_return (0, "parse error: '%U'",
  format_unformat_error, input);
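
Filled in, the usual CLI parse loop looks roughly like this (a sketch; the command-specific branches are elided):

  static clib_error_t *
  my_command_fn (vlib_main_t * vm, unformat_input_t * input,
                 vlib_cli_command_t * cmd)
  {
    u8 *name = 0;

    while (unformat_check_input (input) != UNFORMAT_END_OF_INPUT)
      {
        if (unformat (input, "name %s", &name))
          ;
        else
          break;
      }

    if (unformat_check_input (input) != UNFORMAT_END_OF_INPUT)
      return clib_error_return (0, "parse error: '%U'",
                                format_unformat_error, input);

    /* ... act on the parsed arguments ... */
    vec_free (name);
    return 0;
  }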

D.

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Andreas Schultz
Sent: Friday, February 2, 2018 12:47 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] unformat %s eats newlines

A typical construct to parse arguments is to use unformat in a while loop that 
checks for UNFORMAT_END_OF_INPUT.
For multiline input that relies on the detection of "\n" in the input stream.

The problem is that a construct like:

unformat (input, "name %_%v%_", &name)

eats the newline when it is the only character following the string to be 
parsed.

This even breaks reading a multi-line config with exec.

Regards
Andreas
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] How to get dns server of dhcp client interface

2018-01-31 Thread Dave Barach (dbarach)
Option 6 (domain name server) parsing is not implemented. See 
…/src/vnet/dhcp/client.c, switch statement near line 112…

Should be a simple coding task. Feel free to submit a patch.
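
Very roughly, and only as a sketch (the option struct/field names and the new dns_server_addresses vector are assumptions to be checked against client.c), the new case might look like:

  case 6:    /* domain name server(s) */
    {
      int i;
      /* each DNS server address is 4 bytes */
      for (i = 0; i + 4 <= o->length; i += 4)
        {
          ip4_address_t dns;
          clib_memcpy (&dns, o->data + i, 4);
          vec_add1 (c->dns_server_addresses, dns); /* hypothetical new field */
        }
    }
    break;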

Worst-case, file a Jira ticket so we don’t forget about it.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of ???
Sent: Tuesday, January 30, 2018 7:58 PM
To: vpp-dev 
Subject: [vpp-dev] How to get dns server of dhcp client interface

Hi, vpp-dev team,
When I use the dhcp client to get a WAN IP address for a vpp interface, how do I 
get the DNS server address?
sudo vppctl set dhcp client intfc GigabitEthernet0/a/0
vagrant@localhost:~$ sudo vppctl show dhcp client
[0] GigabitEthernet0/a/0 state DHCP_BOUND addr 10.180.30.193/24 gw 10.180.30.1

Thanks.


Regards,
Jzhchen


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP Graph Optimization

2018-01-26 Thread Dave Barach (dbarach)
Dear David,

 

A bit of history. We worked on vpp for a decade before making any serious 
effort to multi-thread it. The first scheme that I tried was to break up the 
graph into reconfigurable pipeline stages. Effective partitioning of the graph 
is highly workload-dependent, and it can change in a heartbeat. The resulting 
system runs at the speed of the slowest pipeline stage.

 

In terms of easily measured inter-thread handoff cost, it’s not awful. 2-3 
clocks/pkt. Handing vectors of packets between threads can cause a festival of 
cache coherence traffic, and it can easily undo the positive effects of ddio 
(packet data DMA into the cache hierarchy).

 

We actually use the scheme you describe in a very fine-grained way: dual and 
quad loop graph dispatch functions process 2 or 4 packets at the same time. 
Until we run out of registers, a superscalar CPU can “do the same thing to 2 or 
4 packets at the same time” pretty effectively. Including memory hierarchy 
stalls, vpp averages more than two instructions retired per clock cycle.
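
For reference, the dual-loop skeleton in a typical node dispatch function looks roughly like this; a heavily abridged sketch, with the surrounding declarations, next-index computation and the single-packet cleanup loop omitted, and MY_NEXT_NODE standing in for whatever the node actually computes:

  while (n_left_from >= 4 && n_left_to_next >= 2)
    {
      u32 bi0 = from[0], bi1 = from[1];
      u32 next0 = MY_NEXT_NODE, next1 = MY_NEXT_NODE;
      vlib_buffer_t *b0 = vlib_get_buffer (vm, bi0);
      vlib_buffer_t *b1 = vlib_get_buffer (vm, bi1);

      /* prefetch buffers 2 and 3 while working on 0 and 1 */
      vlib_prefetch_buffer_header (vlib_get_buffer (vm, from[2]), LOAD);
      vlib_prefetch_buffer_header (vlib_get_buffer (vm, from[3]), LOAD);

      /* ... do the same work to b0 and b1 ... */

      to_next[0] = bi0;
      to_next[1] = bi1;
      from += 2;
      to_next += 2;
      n_left_from -= 2;
      n_left_to_next -= 2;

      vlib_validate_buffer_enqueue_x2 (vm, node, next_index,
                                       to_next, n_left_to_next,
                                       bi0, bi1, next0, next1);
    }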

 

At the graph node level, I can’t see how to leverage this technique. Presenting 
[identical] vectors to 2 (or more) nodes running on multiple threads would mean 
(1) the parallelized subgraph would run at the speed of the slowest node. (2) 
you’d pay the handoff costs already discussed above, and (3) you’d need an 
expensive algorithm to make sure that all vector replicas were finished before 
reentering sequential processing. (4) None of the graph nodes we’ve ever 
constructed are free of ordering constraints. Every node alters packet state in 
a meaningful way, or they wouldn’t be worth having. (😉)… 

 

We’ve had considerable success with flow-hashing across a set of identical 
graph replicas [worker threads], even when available hardware RSS hashing is 
not useful [think about NATted UDP traffic]. 

 

Hope this is of some interest.

 

Thanks… Dave

 

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of David Bainbridge
Sent: Friday, January 26, 2018 12:39 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP Graph Optimization

 

I have just started to read up on VPP/FD.io, and I have a question about graph 
optimization and was wondering if (as I suspect) this has already been thought 
about and either planned or decided against.

 

The documentation I found on VPP essentially says that VPP uses batch 
processing and processes all packets in a vector in one step before proceeding 
to the next step. The claim is that this provides better overall throughput because 
of instruction caching.

 

I was wondering if optimization of the graph to understand where concurrency 
can be leveraged has been considered, as well as where you could process the 
vector by two steps with an offset. If this is possible, then steps could be 
pinned to cores and perhaps both concurrency and instruction caching could be 
leveraged.

 

For example assume the following graph:

 

[inline image of the example graph omitted in the archive: A feeds B and C, which feed D, then E]
 

In this graph, steps B,C can be done concurrently as they don't "modify" the 
vector. Steps D, E can't be done concurrently, but as they don't require look 
back/forward they can be done in offset.

 

What I am suggesting is, if there are enough cores, then steps could be pinned 
to cores to achieve the benefits of instruction caching, and after step A is 
complete, steps B,C could be done concurrently. After B,C are complete, then D 
can be started, and as D completes processing on a packet it can then be 
processed by E (i.e., the entire vector does not need to be processed by D 
before processing by E is started).

 

I make no argument that this doesn't increase complexity and also introduces 
coordination costs that don't exist today. To be fair, offset processing could 
be viewed as splitting the original large vector into smaller vectors and 
processing the smaller vectors from start to finish (almost dynamic 
optimization based on dynamic vector resizing).

Just curious to hear others' thoughts and whether some of this has been thought 
through or experimented with. As I said, just thinking off the cuff and 
wondering; not fully thought through.

 

avèk respè,

/david

 



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] openSUSE build fails

2018-01-26 Thread Dave Barach (dbarach)
As Marco wrote: we’ve experienced sporadic, inexplicable LF infra-related build 
failures since the project started more than two years ago. It’s unusual for an 
otherwise correct patch to require more than one “recheck” for validation, but 
it’s absolutely not unknown.

To mitigate these problems, Ed Kern has built a containerized Jenkins minion 
system which runs on physical hardware, instead of the current setup which 
relies on cloud-hosted Openstack VMs. As soon as practicable – post 18.01 CSIT 
report – we’ll switch to it.

Given a failure which isn’t obviously related to a specific patch, please press 
the “recheck” button. No need to ask, just do it. In case of persistent 
failure, please email vpp-dev.

Thanks… Dave

From: Ni, Hongjun [mailto:hongjun...@intel.com]
Sent: Friday, January 26, 2018 3:25 AM
To: Marco Varlese ; Ole Troan 
Cc: Dave Barach (dbarach) ; Gabriel Ganne 
; Billy McFall ; Damjan Marion 
(damarion) ; vpp-dev 
Subject: RE: [vpp-dev] openSUSE build fails

Hi Marco,

Thank you for your explanation. I will contact you if I run into a similar issue 
again.

Thanks,
Hongjun

From: Marco Varlese [mailto:mvarl...@suse.de]
Sent: Friday, January 26, 2018 4:21 PM
To: Ni, Hongjun <hongjun...@intel.com>; Ole Troan <otr...@employees.org>
Cc: Dave Barach (dbarach) <dbar...@cisco.com>; Gabriel Ganne <gabriel.ga...@enea.com>; 
Billy McFall <bmcf...@redhat.com>; Damjan Marion (damarion) <damar...@cisco.com>; 
vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] openSUSE build fails

On Fri, 2018-01-26 at 06:58 +0000, Ni, Hongjun wrote:
I rechecked this patch twice, and it built successfully now.

But why were two rechecks needed?
If a "recheck" fixed that then it must be an infrastructure glitch; that's the 
only thing I can think of...

That would not be a surprise either since it does happen from time-to-time to 
see random build failures which get fixed by a "recheck".

Having said that, if you happen to have again this sort of problems (and which 
do not go away with a recheck) feel free to drop me an email and I will look 
into it. Just take into account I'm based at UTC+1.



-Hongjun
- Marco


From: Ole Troan [mailto:otr...@employees.org]
Sent: Friday, January 26, 2018 2:53 PM
To: Ni, Hongjun <hongjun...@intel.com>
Cc: Dave Barach (dbarach) <dbar...@cisco.com>; Marco Varlese <mvarl...@suse.de>; 
Gabriel Ganne <gabriel.ga...@enea.com>; Billy McFall <bmcf...@redhat.com>; 
Damjan Marion (damarion) <damar...@cisco.com>; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] openSUSE build fails

Hi Hongjun,

I have no OpenSUSE at hand, and could not give it a try.

Neither do I.

Ole



From: Ole Troan [mailto:otr...@employees.org]
Sent: Friday, January 26, 2018 2:08 PM
To: Ni, Hongjun <hongjun...@intel.com>
Cc: Dave Barach (dbarach) <dbar...@cisco.com>; Marco Varlese <mvarl...@suse.de>; 
Gabriel Ganne <gabriel.ga...@enea.com>; Billy McFall <bmcf...@redhat.com>; 
Damjan Marion (damarion) <damar...@cisco.com>; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] openSUSE build fails

Hongjun,

This looks suspect:

03:32:31 APIGEN vlibmemory/memclnt.api.h
03:32:31 JSON API vlibmemory/memclnt.api.json
03:32:31 SyntaxError: invalid syntax (vppapigentab.py, line 11)
03:32:31 WARNING:vppapigen:/w/workspace/vpp-verify-master-opensuse/build-root/rpmbuild/BUILD/vpp-18.04/build-data/../src/vlibmemory/memclnt.api:0:1: Old Style VLA: u8 data[0];
03:32:31 Makefile:8794: recipe for target 'vlibmemory/memclnt.api.h' failed
03:32:31 make[5]: *** [vlibmemory/memclnt.api.h] Error 1
03:32:31 make[5]: *** Waiting for unfinished jobs
03:32:31




Can you try running vppapigen manually on that platform?
vppapigen --debug --input memclnt.api ...

Cheers
Ole


On 26 Jan 2018, at 06:38, Ni, Hongjun <hongjun...@intel.com> wrote:
Hi all,

It seems that OpenSUSE build failed for this patch:
https://jenkins.fd.io/job/vpp-verify-master-opensuse/1285/console

Please help to take a look.

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Dave Barach (dbarach)
Sent: Friday, December 15, 2017 11:19 PM
To: Marco Varlese <mvarl...@suse.de>; Gabriel Ganne <gabriel.ga...@enea.com>; 
Billy McFall <bmcf...@redhat.com>
Cc: Damjan Marion (damarion) <damar...@cisco.com>; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] openSUSE build fails

Dear Marco,

Thanks very much...

Dave

From: Marco Varlese [mailto:mvarl...@suse.de]
Sent: Friday, December 15, 2017 9:06 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>; Gabriel Ganne <gabriel.ga...

Re: [vpp-dev] VPP 18.01 Release artifacts are now available on nexus.fd.io

2018-01-25 Thread Dave Barach (dbarach)
Congrats to DaveW and the rest of the fd.io vpp team on the 18.01 release!

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Dave Wallace
Sent: Thursday, January 25, 2018 12:23 AM
To: vpp-dev@lists.fd.io; csit-...@lists.fd.io
Subject: [vpp-dev] VPP 18.01 Release artifacts are now available on nexus.fd.io

Folks,

The VPP 18.01 Release artifacts are now available on nexus.fd.io

The ubuntu.xenial and centos packages can be installed following the recipe on 
the wiki: https://wiki.fd.io/view/VPP/Installing_VPP_binaries_from_packages

Thank you to all of the VPP community who have contributed to the 18.01 VPP 
Release.


Elvis has left the building!
-daw-
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Missing PLY ?

2018-01-24 Thread Dave Barach (dbarach)
“$ make install-dep” fixed it for me… D.
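
If install-dep isn't convenient, installing PLY directly should also work, e.g.:

  $ sudo pip install ply      # or the distro package, e.g. python-ply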

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Jon Loeliger
Sent: Wednesday, January 24, 2018 1:58 PM
To: vpp-dev 
Subject: [vpp-dev] Missing PLY ?

Hey Kids,

The new API Gen seems to want ply.lex, but I don't think
it is listed as a dependency or something somewhere.  Or
maybe I have a really crappy Python.  Dunno.

Net effect, shown below, isn't good.

Did I miss a step?

Thanks,
jdl


make[4]: Entering directory 
`/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/build-vpp-native/vpp'
  APIGEN   vlibmemory/memclnt.api.h
  JSON API vlibmemory/memclnt.api.json
Traceback (most recent call last):
  File 
"/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/tools/bin/vppapigen",
 line 4, in <module>
import ply.lex as lex
ImportError: No module named ply.lex
Traceback (most recent call last):
  File 
"/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/tools/bin/vppapigen",
 line 4, in <module>
import ply.lex as lex
ImportError: No module named ply.lex
make[4]: *** [vlibmemory/memclnt.api.h] Error 1
make[4]: *** Waiting for unfinished jobs
make[4]: *** [vlibmemory/memclnt.api.json] Error 1
make[4]: Leaving directory 
`/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root/build-vpp-native/vpp'
make[3]: *** [vpp-build] Error 2
make[3]: Leaving directory 
`/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root'
make[2]: *** [install-packages] Error 1
make[2]: Leaving directory 
`/home/jdl/workspace/vpp/build-root/rpmbuild/vpp-18.04.0/build-root'
error: Bad exit status from /var/tmp/rpm-tmp.8lAVBj (%build)

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Question and bug found on GTP performance testing

2018-01-24 Thread Dave Barach (dbarach)
We're not going to turn vnet_register_interface(...) into an epic catalog of 
special-purpose strcmp's. Any patch which looks the least bit like the diffs 
shown below is guaranteed to be scored -2, and never merged.

Please let John propose a mechanism to address this issue.

Thanks... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Lollita Liu
Sent: Wednesday, January 24, 2018 5:09 AM
To: John Lo (loj) ; vpp-dev@lists.fd.io
Cc: Kingwel Xie ; David Yu Z 
; Terry Zhang Z ; Jordy 
You 
Subject: Re: [vpp-dev] Question and bug found on GTP performance testing

Hi, John.

We tried bypassing the node creation during interface creation and ran the case 
again. The GTPU throughput is no longer affected by interface creation. The 
basic source code is as follows:

diff --git a/src/vnet/interface.c b/src/vnet/interface.c
index 82eccc1..451019e 100644
--- a/src/vnet/interface.c
+++ b/src/vnet/interface.c
@@ -745,6 +745,10 @@ vnet_register_interface (vnet_main_t * vnm,
   hw->max_l3_packet_bytes[VLIB_RX] = ~0;
   hw->max_l3_packet_bytes[VLIB_TX] = ~0;

+  if (0 == strcmp(dev_class->name, "GTPU")) {
+goto skip_add_node;
+  }
+
   tx_node_name = (char *) format (0, "%v-tx", hw->name);
   output_node_name = (char *) format (0, "%v-output", hw->name);

@@ -881,6 +885,8 @@ vnet_register_interface (vnet_main_t * vnm,
   setup_output_node (vm, hw->output_node_index, hw_class);
   setup_tx_node (vm, hw->tx_node_index, dev_class);

+skip_add_node:
+
   /* Call all up/down callbacks with zero flags when interface is created. */
   vnet_sw_interface_set_flags_helper (vnm, hw->sw_if_index, /* flags */ 0,
  
VNET_INTERFACE_SET_FLAGS_HELPER_IS_CREATE);

BR/Lollita Liu

From: Lollita Liu
Sent: Tuesday, January 23, 2018 11:28 AM
To: 'John Lo (loj)' <l...@cisco.com>; vpp-dev@lists.fd.io
Cc: David Yu Z <david.z...@ericsson.com>; Kingwel Xie <kingwel@ericsson.com>; 
Terry Zhang Z <terry.z.zh...@ericsson.com>; Jordy You <jordy@ericsson.com>
Subject: RE: Question and bug found on GTP performance testing

Hi, John,
The internal mechanism is very clear to me now.

And do you have any thoughts about the deadlock on the main thread?

BR/Lollita Liu

From: John Lo (loj) [mailto:l...@cisco.com]
Sent: Tuesday, January 23, 2018 11:18 AM
To: Lollita Liu <lollita@ericsson.com>; vpp-dev@lists.fd.io
Cc: David Yu Z <david.z...@ericsson.com>; Kingwel Xie <kingwel@ericsson.com>; 
Terry Zhang Z <terry.z.zh...@ericsson.com>; Jordy You <jordy@ericsson.com>
Subject: RE: Question and bug found on GTP performance testing

Hi Lolita,

Thank you for providing information from your performance test with observed 
behavior and problems.

On interface creation, including tunnels, VPP always creates dedicated output 
and tx nodes for each interface. As you correctly observed, these dedicated tx 
and output nodes are not used for most tunnel interfaces such as GTPU and 
VXLAN. All these tunnel interfaces of the same tunnel type would use an 
existing tunnel type specific encap node as their output nodes.

I can see that for large-scale tunnel deployments, creation of a large number 
of these unused output and tx nodes can be an issue, especially when multiple 
worker threads are used. The worker threads will be blocked from forwarding 
packets while the main thread is busy creating these nodes and doing setup for 
multiple worker threads.

I believe we should improve VPP interface creation to allow a way for creating 
interfaces, such as tunnels, where existing (encap-)nodes can be specified as 
interface output nodes without creating dedicated tx and output nodes.

Your observation that the forwarding PPS impact only occurs during initial 
tunnel creation and not during subsequent delete and create is as expected. It is 
because, on tunnel deletion, the associated interfaces are not deleted but kept 
in a reuse pool for subsequent creation of the same tunnel type. It may not be 
the best approach for interface usage flexibility, but it certainly helps with 
the efficiency of tunnel delete and create cases.

I will work on the interface creation improvement described above when I get a 
chance.  I can let you know when a patch is available on vpp master for you to 
try.  As for 18.01 release, it is probably too late to include this improvement.

Regards,
John

From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Lollita Liu
Sent: Monday, January 22, 2018 5:04 AM
To: vpp-dev@lists.fd.io
Cc: David Yu Z <david.z...@ericsson.com>; Kingwel Xie <kingwel@ericsson.com>; 
Terry Zhang Z <terry.z.zh...@ericsson.com>; Jordy You <jordy@ericsson.com>
Subject: [vpp-dev] Question 

Re: [vpp-dev] heap per thread

2018-01-24 Thread Dave Barach (dbarach)
Yes, it’s possible. This is not the obvious way to do it.

Before I answer any questions: what are you trying to accomplish? Idiomatic vpp 
coding techniques typically don’t result in enough memory allocator traffic to 
make it worth using per-thread heaps.
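
FWIW, if the goal is simply to segregate certain allocations, the more idiomatic approach is to create a private heap and temporarily switch to it around the allocations in question, per thread. A rough sketch (the heap size here is arbitrary):

  void *my_heap = mheap_alloc (0 /* use VM */, 64 << 20);
  void *old_heap = clib_mem_set_heap (my_heap);
  /* clib_mem_alloc / vec_* / pool_* calls now allocate from my_heap ... */
  clib_mem_set_heap (old_heap);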

D.

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Saeed P
Sent: Wednesday, January 24, 2018 12:53 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] heap per thread

Hi
I tried to change the memory allocation in VPP to use a separate mheap per 
worker rather than a shared mheap for all workers.
In /vlib/threads.c, in the start_workers function, I changed the code as follows:

  if (!strcmp (tr->name, "workers"))
    {
      tr->mheap_size = new_mheap_size;
    }
  vec_add2 (vlib_worker_threads, w, 1);

  if (tr->mheap_size)
    w->thread_mheap = mheap_alloc (0, tr->mheap_size);
  else
    w->thread_mheap = main_heap;

By default tr->mheap_size is zero, so the code takes the else branch and uses 
main_heap; now I allocate an mheap for the workers instead, but it core-dumps, 
as GDB shows:

 Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
mheap_get_search_free_bin (align_offset=4, align=, 
n_user_data_bytes_arg=, bin=11, v=0x7fffb5bdd000) at 
/root/CGNAT/build-data/../src/vppinfra/mheap.c:401
401   uword this_object_n_user_data_bytes = mheap_elt_data_bytes 
(e);

 Is it possible to set different mheap per worker ?


Thanks,
-Saeed
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] RFC: Error Codes

2018-01-23 Thread Dave Barach (dbarach)
Right. The error number base needs to be managed just like the message ID base… 
D.

From: Jon Loeliger [mailto:j...@netgate.com]
Sent: Tuesday, January 23, 2018 9:39 AM
To: Ole Troan 
Cc: Dave Barach (dbarach) ; vpp-dev 
Subject: Re: [vpp-dev] RFC: Error Codes

On Tue, Jan 23, 2018 at 8:12 AM, Ole Troan <otr...@employees.org> wrote:
Dear Dave,

> I would be tempted to have the compiler emit 
> "foreach_<name>_api_error" macros [or similar]:
>
> #define foreach_foo_api_error \
> _(SUCCESS, "Success") \
> _(ERROR, "This didn't go well")
>
> To minimize pain in upgrading existing C-code...

Ah, yes of course.
Done.
https://gerrit.fd.io/r/#/c/10204/

Cheers,
Ole

Glad to see you guys like the notion!

With this, will plugins have to manage an error number base for
each plugin now?  Or will that be some form of magic behind the scene?

Thanks,
jdl

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] How to link zmq library to new plugin in vpp

2018-01-23 Thread Dave Barach (dbarach)
This report leaves a bit to be desired. As in: "configure file" means what? 
Where is the build output?

Laying that aside, if your plugin is called xxx, try adding:

 xxx_plugin_la_LIBADD += -lzmq

to src/plugins/xxx.am...
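
For reference, a minimal src/plugins/xxx.am might look roughly like this; a sketch only, with hypothetical names, so check it against an existing plugin's .am file:

  vppplugins_LTLIBRARIES += xxx_plugin.la

  xxx_plugin_la_SOURCES = xxx/xxx.c xxx/node.c
  xxx_plugin_la_LIBADD = -lzmq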

HTH... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Vadnere, Neha R
Sent: Tuesday, January 23, 2018 4:56 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] How to link zmq library to new plugin in vpp

Hi,

I want to use zmq APIs in my plugin in VPP. I tried to add following line in 
configure file:
LDFLAGS+=-L/usr/local/lib -lzmq
autoreconf -fis
./configure
make
make install

and  I am building new plugin from main vpp directory
cd vpp/
make build
make run

But this is not working for me. Can anybody please let me know the correct way 
to link library to plugin?

Regards,
Neha

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] RFC: Error Codes

2018-01-23 Thread Dave Barach (dbarach)
I would be tempted to have the compiler emit "foreach_<name>_api_error" 
macros [or similar]:

#define foreach_foo_api_error \
_(SUCCESS, "Success") \
_(ERROR, "This didn't go well") 

To minimize pain in upgrading existing C-code...
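
For example, the generated macro would then be consumed in the usual VPP style (sketch):

  typedef enum
  {
  #define _(sym, str) FOO_API_ERROR_##sym,
    foreach_foo_api_error
  #undef _
    FOO_API_N_ERROR,
  } foo_api_error_t;

  static char *foo_api_error_strings[] = {
  #define _(sym, str) str,
    foreach_foo_api_error
  #undef _
  };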

D.


-Original Message-
From: Ole Troan [mailto:otr...@employees.org] 
Sent: Tuesday, January 23, 2018 5:41 AM
To: Dave Barach (dbarach) ; Jon Loeliger 
Cc: vpp-dev 
Subject: Re: [vpp-dev] RFC: Error Codes

Dave, Jon,

> On 22 Jan 2018, at 19:34, Dave Barach (dbarach)  wrote:
> 
> Dear Jon,
> 
> That makes sense to me. Hopefully Ole will comment with respect to 
> adding statements of the form
> 
> error { FOO_NOT_AVAILABLE, “Resource ‘foo’ is not available” };
> 
> to the new Python PLY-based API generator.
> 
> The simple technique used to allocate plugin message-ID’s seems to work OK to 
> solve the analogous problem here.

That makes sense to me too (wonder why we haven't done that before. ;-))

Here is the patch to the compiler:

https://gerrit.fd.io/r/10204 VPPAPIGEN: Error definitions

VPPAPIGEN: Error definitions
This commit adds support for defining errors.

errors {
  SUCCESS, "No error";
  ERROR, "This didn't go well";
};

Which results in the following C:

vl_error(VL_API_ERROR_SUCCESS, "No error")
vl_error(VL_API_ERROR_ERROR, "This didn't go well")

And JSON:
 "errors": [ [ "SUCCESS", "No error" ], [ "ERROR", "This is wrong" ] ]


Does that seem sane?

Cheers,
Ole

> 
> Thanks… Dave
> 
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] 
> On Behalf Of Jon Loeliger
> Sent: Monday, January 22, 2018 12:13 PM
> To: vpp-dev 
> Subject: [vpp-dev] RFC: Error Codes
> 
> Hey VPP Aficionados,
> 
> I would like to make a proposal for a new way to introduce error codes 
> into the VPP code base.  The two main motivations for the proposal are
> 
> 1) to improve the over-all error messages coupled to their API 
> calls, and
> 2) to clearly delineate the errors for VNET from those of various plugins.
> 
> Recently, it was pointed out to me that the errors for the various 
> plugins should not introduce new, plugin-specific errors into the main 
> VNET list of errors (src/vnet/api_errno.h) on the basis that plugins 
> shouldn't clutter VNET, should be more self-sustaining, and should stand 
> alone.
> 
> Without a set of generic error codes that can be used by the various 
> plugins, there would then be no error codes as viable return values 
> from the API calls defined by plugins.
> 
> So here is my proposal:
> 
> - Extend the API definition files to allow the definition of error 
> messages
>   and codes specific to VNET, or to a plugin.
> 
> - Each plugin registers its error codes with a main registry upon being 
> loaded.
> 
> - The global error table is maintained, perhaps much like API enums today.
> 
> - Each API call then has a guaranteed set of return values defined 
> directly
>   within its own API definition, thus coupling API calls and their 
> possible
>   returned error codes as well.
> 
> Other thoughts?
> 
> Thanks,
> jdl
> 
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] RFC: Error Codes

2018-01-22 Thread Dave Barach (dbarach)
Dear Jon,

That makes sense to me. Hopefully Ole will comment with respect to adding 
statements of the form

error { FOO_NOT_AVAILABLE, “Resource ‘foo’ is not available” };

to the new Python PLY-based API generator.

The simple technique used to allocate plugin message-ID’s seems to work OK to 
solve the analogous problem here.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Jon Loeliger
Sent: Monday, January 22, 2018 12:13 PM
To: vpp-dev 
Subject: [vpp-dev] RFC: Error Codes

Hey VPP Aficionados,

I would like to make a proposal for a new way to introduce error codes
into the VPP code base.  The two main motivations for the proposal are

1) to improve the over-all error messages coupled to their API calls,
and
2) to clearly delineate the errors for VNET from those of various plugins.

Recently, it was pointed out to me that the errors for the various plugins
should not introduce new, plugin-specific errors into the main VNET list
of errors (src/vnet/api_errno.h) on the basis that plugins shouldn't clutter
VNET, should be more self-sustaining, and should stand alone.

Without a set of generic error codes that can be used by the various plugins,
there would then be no error codes as viable return values from the API calls
defined by plugins.

So here is my proposal:

- Extend the API definition files to allow the definition of error messages
  and codes specific to VNET, or to a plugin.

- Each plugin registers its error codes with a main registry upon being 
loaded.

- The global error table is maintained, perhaps much like API enums today.

- Each API call then has a guaranteed set of return values defined directly
  within its own API definition, thus coupling API calls and their possible
  returned error codes as well.

Other thoughts?

Thanks,
jdl

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Create an arc

2018-01-18 Thread Dave Barach (dbarach)
Here's one way to solve the problem, which should result in a patch we can 
merge:



  *   Add head-of-feature-arc processing to ip4/6_lookup_inline() under control 
of an integer argument [which will be passed as a constant 0 or 1].
  *   Create a couple of new nodes “ip4-lookup-with-post-lookup-arc” [or some 
better name] in ip4/6_forward.c, which instantiate the head of feature arc code
  *   Add the “…with-post-lookup-arc” nodes to the current pre-lookup rx 
feature arc, before the vanilla lookup nodes.
  *   Make the …with-post-lookup-arc” nodes siblings of the normal lookup 
nodes, so they inherit successor arcs/indices automatically.
  *   Add your node(s) to the post-lookup arc



To make traffic flow: enable the …with-post-lookup-arc nodes in the current rx 
feature arc AND enable your node(s) on the post-lookup arc
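
As a sketch only (arc, node, and variable names invented for illustration), the post-lookup arc and a feature on it would be declared and enabled roughly like this:

  VNET_FEATURE_ARC_INIT (ip4_post_lookup, static) =
  {
    .arc_name = "ip4-post-lookup",
    .start_nodes = VNET_FEATURES ("ip4-lookup-with-post-lookup-arc"),
    .arc_index_ptr = &my_main.post_lookup_arc_index,
  };

  VNET_FEATURE_INIT (my_post_lookup_feature, static) =
  {
    .arc_name = "ip4-post-lookup",
    .node_name = "my-node",
  };

  /* then, per interface: */
  vnet_feature_enable_disable ("ip4-post-lookup", "my-node",
                               sw_if_index, 1 /* enable */, 0, 0);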



If done correctly, this should cost zero clock cycles in the speed path: a hard 
requirement.



HTH… D.



-Original Message-
From: korian edeline [mailto:korian.edel...@ulg.ac.be]
Sent: Thursday, January 18, 2018 6:37 AM
To: Neale Ranns (nranns) ; Dave Barach (dbarach) 
; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Create an arc



Hello Neale, Dave,



Thanks for your answers.



I would like to catch all to-be-forwarded traffic (not on a prefix basis).

- I would need the TX sw_if_index, so I think the nodes should be placed after 
ip4-lookup.

- I have to be before ip4-rewrite, so as not to compute checksums twice.



Right now, my nodes are placed before lookup, via ip4-unicast feature arc and 
they can be enabled/disabled via vnet_feature_enable_disable.

Something similar, but after lookup, would be really convenient.



Regards,

Korian



On 01/18/2018 11:01 AM, Neale Ranns (nranns) wrote:

> Hi Korian,

>

> Constructing the VLIB graph between ipX-lookup and ipX-rewrite (and really to 
> interface-output) is best achieved by following the DPO architecture. You can 
> read a little about it here:

>https://wiki.fd.io/view/VPP/DPOs_and_Feature_Arcs

>

> Step one is to implement a new DPOs to represent your two new nodes. You’ll 
> find many examples of DPOs in vnet/dpo/*. Step 2 is then to ‘resolve’ the IP 
> prefix via your DPO. The means for that is, e.g, from vnet/bier/bier_table.c:

>

>  bt->bt_lfei = fib_table_entry_special_dpo_add (mpls_fib_index,
>                                                 &pfx,
>                                                 FIB_SOURCE_BIER,
>                                                 FIB_ENTRY_FLAG_EXCLUSIVE,
>                                                 &dpo);

>

> the rather badly named EXCLUSIVE flag means the caller is providing the DPO 
> and so FIB has no need to perform its usual resolution. The FIB_SOURCE_BIER 
> identifies ‘who’ is providing the forwarding information (the DPO) and thus 
> the relative priority of that information. There is a simple linear priority 
> scheme among the sources enumerated by fib_source_t.

> Step 3 is to ‘stack’ your DPOs, i.e. to form the chain/graph that will be 
> followed in the data-plane. The FIB API above automatically stacks the 
> load_balance_t DPO (which is the result of the lookup) on your DPO passed.

>

> note that the above provides you with ‘override’ semantics, i.e. for a given 
> prefix you can override (assuming your source has higher priority) the 
> existing forwarding information for that prefix. If instead your requirements 
> are to apply further rules/checks/replications on the packets before they are 
> forwarded using the existing information, then this is what I call 
> ‘interposition’. I have an outstanding patch for this:

>https://gerrit.fd.io/r/#/c/9336/

> I’ll try and get it finished soon.

>

> The last issue to consider is whether your override or interposition needs to 
> affect only the prefix you specify in the call to 
> fib_table_entry_special_dpo_add() or to all longer mask prefixes that it 
> covers. For example, if you specify 10.0.0.0/24, and some other source 
> specifies 10.0.0.0/25 and 10.0.0.128/25, then your prefix is never matched. In 
> order to ‘push’ your forwarding down to all longer mask prefixes in the 
> sub-tree one needs to explicitly specify this. Again, this is an outstanding 
> patch:

>   https://gerrit.fd.io/r/#/c/9477/

>

>

> Having said all that, if what you are after Is not running your

> feature on a per-prefix basis, but instead on a per-output interface

> basis, then you want the ip4-output feature arc ☺

>

> hth,

> neale

>

>

> -Original Message-

> From: "Dave Barach (dbarach)" mailto:dbar...@cisco.com>>

> Date: Wednesday, 17 January 2018 at 16:21

> To: korian edeline <korian.edel...@ulg.ac.be>, "vpp-

Re: [vpp-dev] Create an arc

2018-01-17 Thread Dave Barach (dbarach)
Dear Korian,

Steering traffic from ip4_lookup to your node is easily accomplished by 
setting the fib result [dpo->dpoi_next_node] to send matching traffic where you 
want it to go. 

Add an arc from ip4/6_lookup to your node by calling vlib_node_add_next(...) 
to create the arc, then create fib entries with dpoi_next_node set to the 
returned next_index.
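
A rough sketch of the idea (node names hypothetical):

  /* once, at init time: remember the arc from ip4-lookup to my-node-1 */
  u32 my_next_index = vlib_node_add_next (vm, ip4_lookup_node.index,
                                          my_node_1.index);

  /* when building the fib entry's result, point it at that arc */
  dpo->dpoi_next_node = my_next_index;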

This is not a feature arc problem. Attempting to solve it as such will cause no 
end of trouble. 

Neale, please jump in as needed...

HTH... Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of korian edeline
Sent: Wednesday, January 17, 2018 9:30 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Create an arc

Hi all,

Here is the deal:

I have 2 nodes (my-node-1, my-node-2),  I would like my-node-1 to receive 
packets from ip4-lookup, forwarding to either ip4-rewrite, error-drop or 
my-node-2. my-node-2 should only receive from my-node-1 and forward to 
ip4-rewrite or error-drop.

If I put them BEFORE ip4-lookup, I can use the pre-built arc ip4-unicast and 
everything works perfectly. But I figured that if I want them after ip4-lookup, I 
have to create my own arc. So here is what I have, plus replacing occurrences of 
"ip4-unicast" with "my-arc":

VNET_FEATURE_ARC_INIT (my_arc, static) = {
  .arc_name = "my-arc",
  .start_nodes = VNET_FEATURES ("ip4-lookup"),
  .arc_index_ptr = &my_main.feature_arc_index };

What am I missing?

Thanks

Korian

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Proposal to remove ssvm_eth

2018-01-13 Thread Dave Barach (dbarach)
Dear Florin,

Quite to the contrary: removing the ssvm_ethernet driver would be a Good Thing. 
I built it as a prototype a long time ago. It has not been widely adopted. 
Memif solves the same general problem [much better], so please go ahead...

Thanks... Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Florin Coras
Sent: Friday, January 12, 2018 7:56 PM
To: vpp-dev 
Subject: [vpp-dev] Proposal to remove ssvm_eth

Hi everyone, 

I’m in the process of cleaning up the ssvm code and realized some of the data 
structures have fields that are only used within the ssvm_eth code. Since we 
now have memif, and nobody is really maintaining ssvm_eth, I’d like to remove 
the code. 

Therefore, does anybody have something against me doing that?

Thanks, 
Florin


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Memory Leakage Test By Valgrind

2018-01-08 Thread Dave Barach (dbarach)
Mheap.c has its own highly accurate memory leak checker. I haven’t tried the 
valgrind integration in many years. Valgrind makes vpp run slowly enough to 
make it unusable.

To use the built-in leakfinder: build TAG=vpp_debug, set #define 
MHEAP_HAVE_SMALL_OBJECT_CACHE to 0 in .../src/vppinfra/mheap_bootstrap.h.

Then:


  *   Start vpp, configure it, and so forth.
  *   “memory-trace on”
  *   <run the scenario you want to check>
  *   “show memory”

“show memory” prints a nice report of all memory allocated during the traced interval.

HTH… Dave

P.S. You probably won’t want to see all of the initial memory allocations, but 
if you do supply the command line stanza “ ... vlib { ... memory-trace ... }”


From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Saeed P
Sent: Monday, January 8, 2018 3:23 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Memory Leakage Test By Valgrind

Hi,
I want to check for memory leaks in VPP, so I am using the Valgrind memcheck tool,
but it reports a lot of errors, like "Invalid read of size ...", and
at the end of the memcheck report it says:
"This is usually caused by using VALGRIND_MALLOCLIKE_BLOCK in an inappropriate 
way."
I compile VPP with CLIB_DEBUG=1 (make build command), so the code includes the
annotations that tell Valgrind that VPP has its own memory allocator.
I use command with options below:
valgrind --leak-check=full
 --show-leak-kinds=all
 --read-var-info=yes
   --trace-children=yes
 --fair-sched=yes
 --log-file=memcheck-output.log
   /root/vpp/build-root/install-vpp_debug-native/vpp/bin/vpp \
 -c /etc/vpp/startup.conf

Has anyone run Valgrind memcheck on VPP successfully?
Which command and configuration did you use?
Is there any other useful tool for this, apart from the internal commands
"show memory" and "memory-trace"?

Thanks,
-Saeed
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP 18.01 RC1 milestone is complete!

2018-01-05 Thread Dave Barach (dbarach)
Hey Dave, thanks for all your work to make this happen!

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Dave Wallace
Sent: Thursday, January 4, 2018 12:25 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP 18.01 RC1 milestone is complete!

Folks,

The VPP 18.01 RC1 milestone is complete. The VPP 18.01 release branch 
(stable/1801) has been created, along with the associated nexus and 
packagecloud repo's.

  *   vpp master branch is now open for all patches slated for VPP 18.04 (and 
beyond).
  *   vpp stable/1801 is open for bug fix patches only.
Per the standard process, all bug fixes to the stable branch should follow the 
best practices:

  *   All bug fixes must be double-committed to the release throttle as well as 
to the master branch
 *   Commit first to the release throttle, then "git cherry-pick" into 
master
 *   Manual merges may be required, depending on the degree of divergence 
between throttle and master
  *   All bug fixes need to have a Jira ticket
 *   Please put Jira IDs into the commit messages.
 *   Please use the same Jira ID for both the stable branch and master.
Note: I downloaded and installed the Ubuntu packages for stable/1801 & master 
following the directions on this wiki page:  
https://wiki.fd.io/view/VPP/Installing_VPP_binaries_from_packages

Please let me know if there are any issues downloading/installing the centos7 
artifacts.
-daw-

ps. Thanks to Florin Coras and Ed Warnicke for their assistance.
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] 'vlib_buffer_alloc' alloc an uninitialized memory

2018-01-04 Thread Dave Barach (dbarach)
Nothing wrong here AFAICT.

vnet_buffer(b)->sw_if_index[VLIB_RX] = 2, which is plausible. “show int” to 
confirm.

vnet_buffer(b)->sw_if_index[VLIB_TX] = 0xffffffff == ~0 == 4294967295 => use 
the interface’s FIB index in ip[46]_lookup.

 sw_if_index = {2, 4294967295},
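
For reference, this is the usual initialization when allocating a buffer that will be handed to ip4-lookup (sketch):

  vnet_buffer (b0)->sw_if_index[VLIB_RX] = rx_sw_if_index;
  vnet_buffer (b0)->sw_if_index[VLIB_TX] = ~0; /* look up in the RX interface's FIB */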


From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of ???
Sent: Thursday, January 4, 2018 1:51 AM
To: vpp-dev 
Subject: [vpp-dev] 'vlib_buffer_alloc' alloc an uninitialized memory


Hi guys,

I'm testing ikev2. In the function 'ikev2_rekey_child_sa_internal',:
bi0 = ikev2_get_new_ike_header_buff (vm, &ike0);
//The following is the debug code
 vlib_buffer_t *b0;
 b0 = vlib_get_buffer (vm, bi0);

   (gdb) p *(vnet_buffer_opaque_t *)(b0->opaque)
$1 = {
  sw_if_index = {2, 4294967295},//The rest of the value is correct
  l2_hdr_offset = 0,
  l3_hdr_offset = 0,
  l4_hdr_offset = 0,


  The wrong value may cause an error in 'ip4_lookup_inline'.
  BTW, the problem doesn't happen every time.

Thanks,
Xyxue

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] 'pool_elt_at_index' Relative Addressing Cause a Mistake

2018-01-04 Thread Dave Barach (dbarach)
All of the pool_get(mypool, new_elt) variants are capable of expanding - and 
hence moving - mypool, leading to dangling references to freed memory if you’re 
not careful. Here’s the usual coding pattern:

old_elt = pool_elt_at_index (mypool, index);

/* use old-elt */

pool_get (mypool, new_elt);

/* old-elt now INVALID, but index (or p[0]) is still fine */

old_elt = pool_elt_at_index (mypool, index);
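
In the ikev2 case, that means saving the profile's pool index rather than the pointer across anything that can add to km->profiles, e.g. (sketch):

  u32 profile_index = old_profile - km->profiles;  /* index survives pool reallocation */
  pool_get (km->profiles, new_profile);
  old_profile = pool_elt_at_index (km->profiles, profile_index); /* re-fetch the pointer */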



Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of ???
Sent: Thursday, January 4, 2018 1:22 AM
To: vpp-dev 
Subject: [vpp-dev] 'pool_elt_at_index' Relative Addressing Cause a Mistake


Hi guys,

I'm testing ikev2. When I initiate an SA successfully (pr1) and then add another 
one (pr2), sa->pr1->name is overwritten.

After investigating, I find that 'pool_elt_at_index' uses relative addressing, 
and the pool base may change, so a pointer we saved earlier becomes invalid.

eg: 'pool_elt_at_index (km->profiles, p[0]);'

How can we solve the problem?

Thanks,
Xyxue

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP does not detect NIC automatically

2018-01-03 Thread Dave Barach (dbarach)
Dear Charlie,

Vpp won't touch an interface if it has an associated Linux kernel interface 
which is up, and/or has an address configured on it. Manually unbinding the 
interface - as you did - makes the Linux kernel interface disappear.

Accidentally whitelisting a host's management ethernet would be a Bad Thing. 
"Hmmm... Why can't I ping the box anymore, let alone ssh to it..."

HTH... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Li, Charlie
Sent: Wednesday, January 3, 2018 2:29 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP does not detect NIC automatically

Hi VPP team,

I am new to VPP and am following the "FDIO Quick Start Guide" 
(https://docs.google.com/document/d/1zqYN7qMavgbdkPWIJIrsPXlxNOZ_GhEveHQxpYr3qrg/edit)
 to get started.

I am running Ubuntu 16.04 and using the pre-built packages.

According to the document, vpp should detect and take over the Ethernet ports 
that are not in use (link down). But on my system, vpp does not detect any 
interfaces except the "local0".

# ps -eaf | grep vpp
root       991      1 99 Nov28  ?        2-06:13:50 /usr/bin/vpp -c /etc/vpp/startup.conf
root     83025  83014  0 15:34  pts/8    00:00:00 grep --color=auto vpp

# lspci
...
01:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ 
Network Connection (rev 01)
01:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ 
Network Connection (rev 01)
...

# vppctl show int
              Name               Idx       State          Counter          Count
local0                            0        down

Then I add the two interfaces to the whitelist in startup.conf

## Whitelist specific interface by specifying PCI address
dev 0000:01:00.0
dev 0000:01:00.1

And restart vpp, but it still does not detect the interfaces.

As a workaround, I manually bind the interfaces

~/dpdk/usertools/dpdk-devbind.py -b uio_pci_generic 0000:01:00.0
~/dpdk/usertools/dpdk-devbind.py -b uio_pci_generic 0000:01:00.1

And restart vpp; now everything starts to work.

Is this as expected or did I miss something?


Regards,
Charlie Li

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] No vpp project meeting 12/26/2017

2017-12-15 Thread Dave Barach (dbarach)
Have a great holiday season!

Thanks... Dave

P.S. Are folks interested in meeting on 1/2/2018, or should we reconvene on 
1/9/2018?

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] openSUSE build fails

2017-12-15 Thread Dave Barach (dbarach)
Dear Marco,

Thanks very much...

Dave

From: Marco Varlese [mailto:mvarl...@suse.de]
Sent: Friday, December 15, 2017 9:06 AM
To: Dave Barach (dbarach) ; Gabriel Ganne 
; Billy McFall 
Cc: Damjan Marion (damarion) ; vpp-dev 
Subject: Re: [vpp-dev] openSUSE build fails

We (at SUSE) are currently pushing an update to 2.2.11 for openSUSE in our 
repositories.
Once that's confirmed to be upstream, I will push a new patch to the 
ci-management repo to have the indent package upgraded to the latest version 
and re-enable the "checkstyle".


Cheers,
Marco

On Fri, 2017-12-15 at 13:51 +0000, Dave Barach (dbarach) wrote:
With a bit of fiddling, I was able to fix gerrit 9440 so that indent 2.2.10 and 
2.2.11 appear to produce identical results...

HTH... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Gabriel Ganne
Sent: Friday, December 15, 2017 8:42 AM
To: Billy McFall <bmcf...@redhat.com>; Marco Varlese <mvarl...@suse.de>
Cc: Damjan Marion (damarion) <damar...@cisco.com>; vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] openSUSE build fails


Hi,



If you browse the source http://hg.savannah.gnu.org/hgweb/indent/

The tag 2.2.11  is there, the source seems updated regularly.



Best regards,



--

Gabriel Ganne


From: vpp-dev-boun...@lists.fd.io <vpp-dev-boun...@lists.fd.io> on behalf of 
Billy McFall <bmcf...@redhat.com>
Sent: Friday, December 15, 2017 2:26:42 PM
To: Marco Varlese
Cc: Damjan Marion (damarion); vpp-dev
Subject: Re: [vpp-dev] openSUSE build fails



On Fri, Dec 15, 2017 at 5:15 AM, Marco Varlese <mvarl...@suse.de> wrote:
Hi Damjan,

On Fri, 2017-12-15 at 09:06 +, Damjan Marion (damarion) wrote:


On 15 Dec 2017, at 08:52, Marco Varlese 
mailto:mvarl...@suse.de>> wrote:

Damjan,

On Thu, 2017-12-14 at 16:04 +, Damjan Marion (damarion) wrote:
Folks,

I'm hearing from multiple people that OpenSUSE verify job is failing (again).
I haven't heard (or read) anything over the mailing list otherwise I would have
looked into it.
Also, if you hear anything like that you can always ping me directly and I will
look into it...

yes, people pinging me...
See
https://gerrit.fd.io/r/#/c/9440/

also:

https://gerrit.fd.io/r/#/c/9813/ - abandoned but it shows that something was wrong

Ok, so just summarizing our conversation on IRC for others too.

That issue is connected to the different versions of INDENT (C checkstyle) 
installed on the different distros.

openSUSE runs 2.2.10 whilst CentOS and Ubuntu run 2.2.11

What strikes me is that the upstream repo 
https://ftp.gnu.org/gnu/indent/ has 2.2.10 as last revision.
Our indent package maintainer is looking at possible other sources where Indent 
could "live" these days and will let me know as soon as she finds out.

@Thomas Herbert, would you know the source where the Indent package on CentOS 
come from? Maybe that could help...

Marco, I can't find the source. I'll look around a little more. From CentOS 7.4:

$ sudo yum provides indent
:
indent-2.2.11-13.el7.x86_64 : A GNU program for formatting C code
Repo: base
:

$ sudo repoquery -i indent
Name: indent
Version : 2.2.11
Release : 13.el7
Architecture: x86_64
Size: 359131
Packager: CentOS BuildSystem 
<http://bugs.centos.org>
Group   : Applications/Text
URL : 
http://indent.isidore-it.eu/beautify.html

Re: [vpp-dev] openSUSE build fails

2017-12-15 Thread Dave Barach (dbarach)
With a bit of fiddling, I was able to fix gerrit 9440 so that indent 2.2.10 and 
2.2.11 appear to produce identical results...

HTH... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Gabriel Ganne
Sent: Friday, December 15, 2017 8:42 AM
To: Billy McFall ; Marco Varlese 
Cc: Damjan Marion (damarion) ; vpp-dev 
Subject: Re: [vpp-dev] openSUSE build fails


Hi,



If you browse the source http://hg.savannah.gnu.org/hgweb/indent/

The tag 2.2.11  is there, the source seems updated regularly.



Best regards,



--

Gabriel Ganne


From: vpp-dev-boun...@lists.fd.io 
mailto:vpp-dev-boun...@lists.fd.io>> on behalf of 
Billy McFall mailto:bmcf...@redhat.com>>
Sent: Friday, December 15, 2017 2:26:42 PM
To: Marco Varlese
Cc: Damjan Marion (damarion); vpp-dev
Subject: Re: [vpp-dev] openSUSE build fails



On Fri, Dec 15, 2017 at 5:15 AM, Marco Varlese 
mailto:mvarl...@suse.de>> wrote:
Hi Damjan,

On Fri, 2017-12-15 at 09:06 +, Damjan Marion (damarion) wrote:



On 15 Dec 2017, at 08:52, Marco Varlese 
mailto:mvarl...@suse.de>> wrote:

Damjan,

On Thu, 2017-12-14 at 16:04 +, Damjan Marion (damarion) wrote:

Folks,

I'm hearing from multiple people that OpenSUSE verify job is failing (again).
I haven't heard (or read) anything over the mailing list otherwise I would have
looked into it.
Also, if you hear anything like that you can always ping me directly and I will
look into it...

yes, people pinging me...
See
https://gerrit.fd.io/r/#/c/9440/

also:

https://gerrit.fd.io/r/#/c/9813/
 - abandoned but it shows that something was wrong

Ok, so just summarizing our conversation on IRC for others too.

That issue is connected to the different versions of INDENT (C checkstyle) 
installed on the different distros.

openSUSE runs 2.2.10 whilst CentOS and Ubuntu run 2.2.11

What strikes me is that the upstream repo 
https://ftp.gnu.org/gnu/indent/
 has 2.2.10 as last revision.
Our indent package maintainer is looking at possible other sources where Indent 
could "live" these days and will let me know as soon as she finds out.

@Thomas Herbert, would you know the source where the Indent package on CentOS 
come from? Maybe that could help...

Marco, I can't find the source. I'll look around a little more. From CentOS 7.4:

$ sudo yum provides indent
:
indent-2.2.11-13.el7.x86_64 : A GNU program for formatting C code
Repo: base
:

$ sudo repoquery -i indent
Name: indent
Version : 2.2.11
Release : 13.el7
Architecture: x86_64
Size: 359131
Packager: CentOS BuildSystem 
>
Group   : Applications/Text
URL : 
http://indent.isidore-it.eu/beautify.html
   <-- BUSTED LINK
Repository  : base
Summary : A GNU program for formatting C code
Source  : indent-2.2.11-13.el7.src.rpm
Description :
Indent is a GNU program for beautifying C code, so that it is easier to
read.  Indent can also convert from one C writing style to a different
one.  Indent understands correct C syntax and tries to handle incorrect
C syntax.

Install the indent package if you are developing applications in C and
you want a program to format your code.






So generally speaking i would like to question having verify jobs for multiple
distros.
Is there really a value in compiling same code on different distros. Yes I
know gcc version can be different,
but that can be addresse

Re: [vpp-dev] openSUSE build fails

2017-12-15 Thread Dave Barach (dbarach)
Guys,

I’ll take a look at e.g. gerrit 9440, ip_frag.c and see if I can fix it.

Under the circumstances, it seems perfectly OK to s/ON/OFF/ as needed in the 
per-file patch verification on/off switch:

/*
* fd.io coding-style-patch-verification: ON
*
* Local Variables:
* eval: (c-set-style "gnu")
* End:
*/

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Marco Varlese
Sent: Friday, December 15, 2017 5:16 AM
To: Damjan Marion (damarion) 
Cc: vpp-dev 
Subject: Re: [vpp-dev] openSUSE build fails

Hi Damjan,

On Fri, 2017-12-15 at 09:06 +, Damjan Marion (damarion) wrote:



On 15 Dec 2017, at 08:52, Marco Varlese 
mailto:mvarl...@suse.de>> wrote:

Damjan,

On Thu, 2017-12-14 at 16:04 +, Damjan Marion (damarion) wrote:

Folks,

I'm hearing from multiple people that OpenSUSE verify job is failing (again).
I haven't heard (or read) anything over the mailing list otherwise I would have
looked into it.
Also, if you hear anything like that you can always ping me directly and I will
look into it...

yes, people pinging me...
See
https://gerrit.fd.io/r/#/c/9440/

also:

https://gerrit.fd.io/r/#/c/9813/ - abandoned but it shows that something was 
wrong

Ok, so just summarizing our conversation on IRC for others too.

That issue is connected to the different versions of INDENT (C checkstyle) 
installed on the different distros.

openSUSE runs 2.2.10 whilst CentOS and Ubuntu run 2.2.11

What strikes me is that the upstream repo https://ftp.gnu.org/gnu/indent/ has 
2.2.10 as last revision.
Our indent package maintainer is looking at possible other sources where Indent 
could "live" these days and will let me know as soon as she finds out.

@Thomas Herbert, would you know the source where the Indent package on CentOS 
come from? Maybe that could help...





So generally speaking i would like to question having verify jobs for multiple
distros.
Is there really a value in compiling same code on different distros. Yes I
know gcc version can be different,
but that can be addressed in simpler way, if it needs to be addressed at all.

More distros means more moving parts and bigger chance that something will
fail.
Well, I am not sure how to interpret this but (in theory) a build should be
reproducible in the first place and I should not worry about problems with build
outcomes. It doesn't only affect openSUSE and I raised it many times over the
mailing-list; when you need to run "recheck" multiple times to have a build
succeed. IMHO the issue should be addressed and not solved by putting it under
the carpet...

We all know that we have an extremely fragile system (obviously we have not been 
able to fix that in almost 2 years), so as long as the system stays the way it is, 
increasing complexity doesn't help and just causes frustration.

Also, it costs resources
That is a different matter and if that's the case then it should be discussed
seriously; raising this argument now, after having had people investing their
times in getting stuff up and running isn't really a cool thing...

Marco, decision to have verify jobs on 2 distros was made much before you 
joined the project,
and I don't remember serious decision on that topic, it might be that at that 
time
we were simply unexperienced, or maybe we didn't expect infra to be so fragile.

The fact is that we now have a ridiculous situation: 2 verify jobs say the patch is OK, 
and the 3rd one says it is not. Which one should we trust?

So please don't take this personally; I know you invested time to get the SUSE build 
working, but I still think it is a valid question to ask: do we really need 3 verify 
jobs? Should we have 4 tomorrow if somebody invests their time to do a verify job on 
Arch Linux, for example?

Thanks,

Damjan



--
Marco V

SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [discuss] New Option for fd.io mailing lists: groups.io

2017-12-14 Thread Dave Barach (dbarach)
Minor quibble with the assertion that we don’t “moderate” our discussions. I 
spend a bit of time every day dealing with messages sent e.g. to vpp-dev from 
(a) folks who aren’t members of the list, and (b) spam / phish emails.

You’d be surprised how much category (b) email needs to be disposed of...

Thanks… Dave

From: discuss-boun...@lists.fd.io [mailto:discuss-boun...@lists.fd.io] On 
Behalf Of Joel Halpern
Sent: Thursday, December 14, 2017 10:51 AM
To: Ed Warnicke ; t...@lists.fd.io; disc...@lists.fd.io; 
vpp-dev ; csit-...@lists.fd.io; cicn-...@lists.fd.io; 
honeycomb-dev ; deb_dpdk-...@lists.fd.io; 
rpm_d...@lists.fd.io; nsh_sfc-...@lists.fd.io; odp4vpp-...@lists.fd.io; 
pma_tools-...@lists.fd.io; puppet-f...@lists.fd.io; tldk-...@lists.fd.io; 
trex-...@lists.fd.io
Subject: Re: [discuss] New Option for fd.io mailing lists: groups.io

I like having good searchable archives.

I have to say that I am completely turned off by the end of the FAQ.  We don’t 
“moderate” any of our discussions.  And unless something is very strange, the 
use of groups.io vs mailman better not have any visible effect on participation 
in the email discussions.

Listing features like wikis seems also counter-productive.  I do not want us to 
have two separate wiki spaces.

Polls would be nice once in a while (although doodle seems to work just fine 
for most folks.)

If we want calendaring, I would want it integrated in the wiki, not part of the 
mailing list.

Yours,
Joel

From: discuss-boun...@lists.fd.io 
[mailto:discuss-boun...@lists.fd.io] On Behalf Of Ed Warnicke
Sent: Thursday, December 14, 2017 10:45 AM
To: t...@lists.fd.io; 
disc...@lists.fd.io; vpp-dev 
mailto:vpp-dev@lists.fd.io>>; 
csit-...@lists.fd.io; 
cicn-...@lists.fd.io; honeycomb-dev 
mailto:honeycomb-...@lists.fd.io>>; 
deb_dpdk-...@lists.fd.io; 
rpm_d...@lists.fd.io; 
nsh_sfc-...@lists.fd.io; 
odp4vpp-...@lists.fd.io; 
pma_tools-...@lists.fd.io; 
puppet-f...@lists.fd.io; 
tldk-...@lists.fd.io; 
trex-...@lists.fd.io
Subject: [discuss] New Option for fd.io mailing lists: groups.io

A new option has become available for handling mailing lists: 
groups.io

As a community, we need to look at this option, provide feedback, and come to a 
decision as to whether or not to migrate.  A critical part of that is having 
folks take a look, ask questions, and express opinions :)

We have a sandbox example at  https://groups.io/g/lfn  you can look at

And an example with active list and imported archive: 
https://lists.odpi.org/g/odpi-sig-bi

Major benefits include searchability, better web interface, etc.

The LF was kind enough to write a FAQ for us as we consider as a community 
whether to migrate or not:

FAQs
Q: What are the key differences between Mailman and Groups.io?
●Groups.io has a modern interface, robust user security model, and interactive, 
searchable archives
●Groups.io provides advanced features including muting threads and integrations 
with modern tools like GitHub, Slack, and Trello
● Groups.io also has optional extras like a shared calendar, 
polling, chat, a wiki, and more
● Groups.io uses a concept of subgroups, where members first join 
the project “group” (a master list), then they choose the specific “subgroup” 
lists they want to subscribe to

Q: How is the experience different for me as a list moderator or participant?
In many ways, it is very much the same. You will still find the main group at 
your existing URL and sub-groups equate to the more focused mailing lists based 
on the community’s needs. Here is an example of main group and sub-group URL 
patterns, and their respective emails:

https://lists.fd.io/g/tsc
https://lists.fd.io/g/discuss
https://lists.fd.io/g/vpp-dev
t...@lists.fd.io
disc...@lists.fd.io
vpp-...@llists.fd.io

What is different is Groups.io’s simple but highly functional UI that will make 
the experience of moderating or participating in the community discussions more 
enjoyable.


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] undefined refrence to

2017-12-11 Thread Dave Barach (dbarach)
Gld doesn’t know that e.g. vpp_api.o needs e.g. format until after it’s already 
processed -lvppinfra. Reorder the command line.
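
In other words, put the object files ahead of the -l libraries; a sketch of the 
reordered link line (same library list as in your command below):

gcc -Wall -I/usr/include -I/usr/include/vpp_plugins vpp_api.o main.o -o test \
    -lvlibmemoryclient -lsvm -lvppinfra -lvlib -lvatplugin -lpthread -lm -lrt -ldl -lcrypto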

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Samuel S
Sent: Monday, December 11, 2017 2:05 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] undefined refrence to

Hi,
Finally i can write something that can be built (i think). but i have some 
issue.
i include api_common.h, api_helper_macro.h and vat_helper_marcors.h but i can't 
make it and compiler says:
/***/
gcc -Wall -I/usr/include -I/usr/include/vpp_plugins -lvlibmemoryclient -lsvm 
-lvppinfra -lvlib -lvatplugin -lpthread -lm -lrt -ldl -lcrypto vpp_api.o main.o 
-o test
vpp_api.o: In function `vpp_nat_init':
vpp_api.c:(.text+0x3be): undefined reference to `format'
vpp_api.c:(.text+0x3ce): undefined reference to 
`vl_client_get_first_plugin_msg_id'
vpp_api.c:(.text+0x405): undefined reference to `vl_noop_handler'
vpp_api.c:(.text+0x420): undefined reference to `vl_msg_api_set_handlers'
vpp_api.o: In function `vpp_connect':
vpp_api.c:(.text+0x44f): undefined reference to `vl_client_connect_to_vlib'
vpp_api.c:(.text+0x458): undefined reference to `svm_region_exit'
vpp_api.c:(.text+0x466): undefined reference to `api_main'
main.o: In function `api_snat_interface_dump':
main.c:(.text+0x29f): undefined reference to `vl_msg_api_alloc_as_if_client'
main.c:(.text+0x2fb): undefined reference to `vl_msg_api_send_shmem'
main.c:(.text+0x307): undefined reference to `vat_time_now'
main.c:(.text+0x360): undefined reference to `vat_suspend'
main.c:(.text+0x36c): undefined reference to `vat_time_now'
collect2: error: ld returned 1 exit status
makefile:20: recipe for target 'main' failed
make: *** [main] Error 1
/**/
vpp_api.h
vpp_api.c
main.c
makefile
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Build error when trying to cross-compile vpp

2017-12-11 Thread Dave Barach (dbarach)
Look in config.log and work out the name of the compiler. Fix in 
.../build-data/platforms/x86_64.mk or override from the command line.

From: nikhil ap [mailto:niks3...@gmail.com]
Sent: Sunday, December 10, 2017 8:43 AM
To: Dave Barach (dbarach) 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Build error when trying to cross-compile vpp

Hi Dave, it doesn't work. After make bootstrap, I did:

make PLATFORM=x86_64

  No cross-compiler found for platform x86_64 target x86_64-mu-linux; try 
make PLATFORM=x86_64 install-tools 
Makefile:635: recipe for target 'dpdk-configure' failed
make[1]: *** [dpdk-configure] Error 1
make[1]: Leaving directory '/home/nikhil/projects/vpp/build-root'
Makefile:322: recipe for target 'build' failed
make: *** [build] Error 2

I also tried

 make PLATFORM=x86_64 x86_64_os=rumprun-netbsd build
builds without cross-compilation (maybe because the make bootstrap configured 
the native compiler)

checking for gcc... gcc
checking whether we are cross compiling... no

I guess make PLATFORM= bootstrap, where it configures , is generally 
the way to cross-compile


On Fri, Dec 8, 2017 at 6:20 PM, Dave Barach (dbarach) 
mailto:dbar...@cisco.com>> wrote:
Please try this sequence from the top of your workspace:

$ make bootstrap
$ make PLATFORM= build

That’s the “supported, plan-A” scheme. If it doesn’t work, please let us know.

If you specify PLATFORM when building host tools (i.e. vppapigen), it won’t 
work.

Thanks… Dave

From: nikhil ap [mailto:niks3...@gmail.com<mailto:niks3...@gmail.com>]
Sent: Thursday, December 7, 2017 10:58 PM

To: Dave Barach (dbarach) mailto:dbar...@cisco.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Build error when trying to cross-compile vpp

Hi Dave,

It works if I run, "make is_build_tool=yes tools-install" in .../build-root but 
if I specify the platform, I still see the same issue if I try to cross-compile 
tools with  make PLATFORM=x86_64 TAG=x86_64_debug is_build_tool=yes 
tools-install

It is hitting this configuration in ../src/configure.ac<http://configure.ac>

AM_COND_IF([CROSSCOMPILE],
[
  AC_PATH_PROG([VPPAPIGEN], [vppapigen], [no])
  if test "$VPPAPIGEN" = "no"; then
AC_MSG_ERROR([Externaly built vppapigen is needed when cross-compiling...])
  fi
],[


On Tue, Dec 5, 2017 at 8:53 PM, Dave Barach (dbarach) 
mailto:dbar...@cisco.com>> wrote:
See also “bootstrap.sh...”

$ make V=0 is_build_tool=yes tools-install

Thanks… Dave

From: nikhil ap [mailto:niks3...@gmail.com<mailto:niks3...@gmail.com>]
Sent: Tuesday, December 5, 2017 9:11 AM
To: Dave Barach (dbarach) mailto:dbar...@cisco.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>

Subject: Re: [vpp-dev] Build error when trying to cross-compile vpp

Hi Dave,

I added a file x86_64.mk<http://x86_64.mk> in .../build-data/plaforms/ with the 
following content:

x86_64_arch = x86_64
x86_64_os = rumprun-netbsd
x86_64_target = x86_64-rumprun-netbsd
x86_64_native_tools = vppapigen
x86_64_uses_dpdk = yes

and in the TLD I did a "make PLATFORM=x86_64 TAG=x86_64_debug bootstrap" but I 
am still seeing that vppapigen is not getting built. Any clues?

Thanks,
Nikhil


On Tue, Dec 5, 2017 at 7:05 PM, Dave Barach (dbarach) 
mailto:dbar...@cisco.com>> wrote:
Dear Nikhil,

The first step in adding a new platform: construct 
.../build-data/plaforms/xxx.mk<http://xxx.mk>. There are several examples.

Note the rule:

xxx_native_tools = vppapigen

This rule builds the missing build-host tool.

Then:

“make PLATFORM=xxx TAG=xxx_debug vpp-install” or similar.

Caveat: the main Makefile “.../build-root/Makefile” is non-trivial.

In the past, we’ve used it to self-compile full toolchains, and to use the 
resulting toolchains to cross-compile embedded Linux images with squashfs / 
unionfs disk images.

All of the mechanisms are there to do interesting things, but since we seldom 
do those things anymore you can expect a certain amount of trouble.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
[mailto:vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io>] On 
Behalf Of nikhil ap
Sent: Tuesday, December 5, 2017 6:05 AM
To: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Build error when trying to cross-compile vpp

After a bit more digging around the make file, I did this:

 make PLATFORM=x86_64 x86_64_os=rumprun-netbsd bootstrap

checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-rumprun-netbsd
checking whether we are cross compiling... yes

However, I am still seeing this error:

checking for vppapigen... no
configure: error: Externaly built vppapigen is needed when cross-compiling...
Makefile:635: recipe for target 'tools-configure' failed
make[1]: *** [tools-configure] Error 1

What is

Re: [vpp-dev] MACIP ACL replace causes ip4_table_index change?

2017-12-11 Thread Dave Barach (dbarach)
Folks will miss a clib_warning, unless they check syslog. 

I'd consider returning VNET_API_ERROR_ENTRY_ALREADY_EXISTS and calling it a day.
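
A minimal sketch of that change at the check quoted below (the only difference is 
the return value; treat it as an illustration, not a tested patch):

  /* Complain on an ADD operation if the feature is already enabled,
     instead of silently reporting success */
  if (is_add &&
      am->classify_table_index_by_sw_if_index[ti][sw_if_index] != ~0)
    return VNET_API_ERROR_ENTRY_ALREADY_EXISTS;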

Thanks… Dave

-Original Message-
From: Andrew 👽 Yourtchenko [mailto:ayour...@gmail.com] 
Sent: Sunday, December 10, 2017 9:04 AM
To: Dave Barach (dbarach) 
Cc: Jon Loeliger ; vpp-dev 
Subject: Re: [vpp-dev] MACIP ACL replace causes ip4_table_index change?

Dear Dave,

On 12/9/17, Dave Barach (dbarach)  wrote:
> This looks wrong... vnet_set_input_acl_intfc(...) at line 93:
>
>   /* Return ok on ADD operation if feature is already enabled */
>   if (is_add &&
>am->classify_table_index_by_sw_if_index[ti][sw_if_index] != ~0)
>  return 0;
>
> It’s been that way for a very long time.

Yeah so I am wondering what's the right approach on fixing it, I see
three alternatives:

1) "set" the new inacl even if there is an existing one applied..
upside: consistent with what "set" means in layman's terms; downside:
bigger change vs. the existing semantics which maybe is masking some
other issues.

2) return an error rather than zero, and let the callers deal with
this. upside: no big change of the semantics. downside: returning an
error might upset some callers that were "accidentally" relying on
this behaviour.

3) stick in a "clib_warning()" saying "This will soon return an error.
The calling code needs to ensure this is handled correctly", and wait
for one or two releases, and have a JIRA for the next release to
*then* do (2) in the next release.

If this behavior has been here sufficiently long, (3) seems like the
safest action..
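
For (3), the warning itself would be a one-liner at the same spot, roughly (exact 
wording aside):

  clib_warning ("input ACL already applied on sw_if_index %d; "
                "this will become an error in a future release", sw_if_index);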

What do you think ?

--a


>
> Thanks… Dave
>
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On
> Behalf Of Jon Loeliger
> Sent: Saturday, December 9, 2017 11:23 AM
> To: Andrew 👽 Yourtchenko 
> Cc: vpp-dev 
> Subject: Re: [vpp-dev] MACIP ACL replace causes ip4_table_index change?
>
> On Sat, Dec 9, 2017 at 8:16 AM, Andrew 👽 Yourtchenko
> mailto:ayour...@gmail.com>> wrote:
> Jon,
>
> Hi Andrew,
>
> Thanks for taking a look at this issue!
>
> on api trace: does the below work ? (even though the current scenario
> is trivially reproducible, the api traces are very useful for tougher
> cases, and save a lot of typing while storytelling).
>
> DBGvpp# api trace on
>
> . do the things 
>
> DBGvpp# api trace save macip-trace
> API trace saved to /tmp/macip-trace
> DBGvpp# api trace custom-dump /tmp/macip-trace
> SCRIPT: macip_acl_add_replace -1 count 1 count 1 \
>   ipv4 permit \
> src mac 00:00:00:00:00:00 mask 00:00:00:00:00:00 \
> src ip 0.0.0.0/0<http://0.0.0.0/0>, \
>
> SCRIPT: macip_acl_interface_add_del sw_if_index 0 acl_index 0 add
> SCRIPT: macip_acl_add_replace 0 count 1 count 1 \
>   ipv4 permit \
> src mac 00:00:00:00:00:00 mask 00:00:00:00:00:00 \
> src ip 0.0.0.0/0<http://0.0.0.0/0>, \
>
> I think that is the right sequence.
>
>
> Now, to the issue itself: it's exactly as I described, but with a twist:
> vnet_set_input_acl_intfc(), which is used under the hood to assign the
> inacl on the interfaces, is quite picky - if there is an existing
> inacl applied, it just quietly does nothing. (@DaveB - this kinda
> feels strange, I am not sure what the logic is behind doing that.)
>
> Anyway, rather than debating on why it behaves this way, and,
> especially since we actually are deleting the tables in question, it's
> better to unapply the inacls first, and then reapply them after the
> tables have been recreated.
>
> This solves half the problem for me.  It looks like I can properly
> turn around and remove this ACL from the interface now!
>
> But I still have doubts; or at least I don't understand why the
> three table indices are 3 after initial creation, and 0 after they
> are replaced.
>
> The result is in https://gerrit.fd.io/r/#/c/9772/ - you can verify
> that it addresses the issue.
>
> I've left a comment on the code there.
>
> Despite what Gerrit thinks, this code does compile and run for me!
> So maybe just a "rebuild" request there will allow it to verify?
>
>
> Now, going on to "how exactly did this slip through" - seems the macip
> tests are quite a bit too lenient than they should be. I'll need to
> address that as well, though probably I will split the dot1q/dot1ad
> test cases out, and in the process refactor things a bit... so in the
> interests of your time, maybe 9772 can go with just an actual code
> fix.
>
> I've not read through the test as they stand today.
> I'd like to understand the "3 vs. 0" issue before I am happy to
> Code Re

Re: [vpp-dev] MACIP ACL replace causes ip4_table_index change?

2017-12-09 Thread Dave Barach (dbarach)
This looks wrong... vnet_set_input_acl_intfc(...) at line 93:

  /* Return ok on ADD operation if feature is already enabled */
  if (is_add &&
   am->classify_table_index_by_sw_if_index[ti][sw_if_index] != ~0)
 return 0;

It’s been that way for a very long time.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Jon Loeliger
Sent: Saturday, December 9, 2017 11:23 AM
To: Andrew 👽 Yourtchenko 
Cc: vpp-dev 
Subject: Re: [vpp-dev] MACIP ACL replace causes ip4_table_index change?

On Sat, Dec 9, 2017 at 8:16 AM, Andrew 👽 Yourtchenko 
mailto:ayour...@gmail.com>> wrote:
Jon,

Hi Andrew,

Thanks for taking a look at this issue!

on api trace: does the below work ? (even though the current scenario
is trivially reproducible, the api traces are very useful for tougher
cases, and save a lot of typing while storytelling).

DBGvpp# api trace on

. do the things 

DBGvpp# api trace save macip-trace
API trace saved to /tmp/macip-trace
DBGvpp# api trace custom-dump /tmp/macip-trace
SCRIPT: macip_acl_add_replace -1 count 1 count 1 \
  ipv4 permit \
src mac 00:00:00:00:00:00 mask 00:00:00:00:00:00 \
src ip 0.0.0.0/0, \

SCRIPT: macip_acl_interface_add_del sw_if_index 0 acl_index 0 add
SCRIPT: macip_acl_add_replace 0 count 1 count 1 \
  ipv4 permit \
src mac 00:00:00:00:00:00 mask 00:00:00:00:00:00 \
src ip 0.0.0.0/0, \

I think that is the right sequence.


Now, to the issue itself: it's exactly as I described, but with a twist:
vnet_set_input_acl_intfc(), which is used under the hood to assign the
inacl on the interfaces, is quite picky - if there is an existing
inacl applied, it just quietly does nothing. (@DaveB - this kinda
feels strange, I am not sure what the logic is behind doing that.)

Anyway, rather than debating on why it behaves this way, and,
especially since we actually are deleting the tables in question, it's
better to unapply the inacls first, and then reapply them after the
tables have been recreated.

This solves half the problem for me.  It looks like I can properly
turn around and remove this ACL from the interface now!

But I still have doubts; or at least I don't understand why the
three table indices are 3 after initial creation, and 0 after they
are replaced.

The result is in https://gerrit.fd.io/r/#/c/9772/ - you can verify
that it addresses the issue.

I've left a comment on the code there.

Despite what Gerrit thinks, this code does compile and run for me!
So maybe just a "rebuild" request there will allow it to verify?


Now, going on to "how exactly did this slip through" - seems the macip
tests are quite a bit too lenient than they should be. I'll need to
address that as well, though probably I will split the dot1q/dot1ad
test cases out, and in the process refactor things a bit... so in the
interests of your time, maybe 9772 can go with just an actual code
fix.

I've not read through the test as they stand today.
I'd like to understand the "3 vs. 0" issue before I am happy to
Code Review +1 this patch.

I've dumped the entire debugging process into a log, which turned out
be fairly long, so to avoid sending the walls of text to the list,
I've dumped it elsewhere:
http://stdio.be/blog/2017-12-09-Debugging-VPP-MACIP-ACLs/

And, excellent!  I read through that in quite some detail.  And I understand
the "3 vs 0" issue I was seeing now!

The two pieces I missed were:  The "show inacl type l2" to see where
the chain was starting, and then noticing that the chain had indeed been
reversed once the starting point was known.

So thanks for that excellent debug layout and explanation.

And thanks for including the intermediate step of showing why simply
updating the interface at the end wasn't sufficient.  I had done that,
but hadn't yet gotten into the next function to see it was being ignored.

Thanks again for catching this.

Thanks for fixing this!


--a

So, with that, I've left  a comment and a Review -1 on the patch.
And the patch didn't Verify.  So where do we go from here?

I'm good with the patch, and we need to rebuild it.  So do we just
re-build the same patch or re-submit it as a new patch?  I will
either update or Review a-new to +1 it!

Thanks,
jdl

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Build error when trying to cross-compile vpp

2017-12-08 Thread Dave Barach (dbarach)
Please try this sequence from the top of your workspace:

$ make bootstrap
$ make PLATFORM= build

That’s the “supported, plan-A” scheme. If it doesn’t work, please let us know.

If you specify PLATFORM when building host tools (i.e. vppapigen), it won’t 
work.

Thanks… Dave

From: nikhil ap [mailto:niks3...@gmail.com]
Sent: Thursday, December 7, 2017 10:58 PM
To: Dave Barach (dbarach) 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Build error when trying to cross-compile vpp

Hi Dave,

It works if I run, "make is_build_tool=yes tools-install" in .../build-root but 
if I specify the platform, I still see the same issue if I try to cross-compile 
tools with  make PLATFORM=x86_64 TAG=x86_64_debug is_build_tool=yes 
tools-install

It is hitting this configuration in ../src/configure.ac<http://configure.ac>

AM_COND_IF([CROSSCOMPILE],
[
  AC_PATH_PROG([VPPAPIGEN], [vppapigen], [no])
  if test "$VPPAPIGEN" = "no"; then
AC_MSG_ERROR([Externaly built vppapigen is needed when cross-compiling...])
  fi
],[


On Tue, Dec 5, 2017 at 8:53 PM, Dave Barach (dbarach) 
mailto:dbar...@cisco.com>> wrote:
See also “bootstrap.sh...”

$ make V=0 is_build_tool=yes tools-install

Thanks… Dave

From: nikhil ap [mailto:niks3...@gmail.com<mailto:niks3...@gmail.com>]
Sent: Tuesday, December 5, 2017 9:11 AM
To: Dave Barach (dbarach) mailto:dbar...@cisco.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>

Subject: Re: [vpp-dev] Build error when trying to cross-compile vpp

Hi Dave,

I added a file x86_64.mk<http://x86_64.mk> in .../build-data/plaforms/ with the 
following content:

x86_64_arch = x86_64
x86_64_os = rumprun-netbsd
x86_64_target = x86_64-rumprun-netbsd
x86_64_native_tools = vppapigen
x86_64_uses_dpdk = yes

and in the TLD I did a "make PLATFORM=x86_64 TAG=x86_64_debug bootstrap" but I 
am still seeing that vppapigen is not getting built. Any clues?

Thanks,
Nikhil


On Tue, Dec 5, 2017 at 7:05 PM, Dave Barach (dbarach) 
mailto:dbar...@cisco.com>> wrote:
Dear Nikhil,

The first step in adding a new platform: construct 
.../build-data/plaforms/xxx.mk<http://xxx.mk>. There are several examples.

Note the rule:

xxx_native_tools = vppapigen

This rule builds the missing build-host tool.

Then:

“make PLATFORM=xxx TAG=xxx_debug vpp-install” or similar.

Caveat: the main Makefile “.../build-root/Makefile” is non-trivial.

In the past, we’ve used it to self-compile full toolchains, and to use the 
resulting toolchains to cross-compile embedded Linux images with squashfs / 
unionfs disk images.

All of the mechanisms are there to do interesting things, but since we seldom 
do those things anymore you can expect a certain amount of trouble.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
[mailto:vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io>] On 
Behalf Of nikhil ap
Sent: Tuesday, December 5, 2017 6:05 AM
To: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Build error when trying to cross-compile vpp

After a bit more digging around the make file, I did this:

 make PLATFORM=x86_64 x86_64_os=rumprun-netbsd bootstrap

checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-rumprun-netbsd
checking whether we are cross compiling... yes

However, I am still seeing this error:

checking for vppapigen... no
configure: error: Externaly built vppapigen is needed when cross-compiling...
Makefile:635: recipe for target 'tools-configure' failed
make[1]: *** [tools-configure] Error 1

What is the issue?

On Tue, Dec 5, 2017 at 3:55 PM, nikhil ap 
mailto:niks3...@gmail.com>> wrote:
Hi All,

I am trying to cross-compile vpp. The make doesn't expose a way to pass the 
--host parameter required to configure and build using cross compilation.

Initially, I did the following:

CC=x86_64-rumprun-netbsd-gcc make bootstrap, but I saw the following error

If you meant to cross compile, use `--host'.
See `config.log' for more details

As a work-around based on the config.log, I did this following

/src/configure (Stripped other output ) --build=x86_64-linux-gnu 
--host=x86_64-rumprun-netbsd --target=x86_64-linux-gnu

However,  I saw the following error:
checking for vppapigen... no
configure: error: Externaly built vppapigen is needed when cross-compiling...

Is there a way to cleanly cross-compile?


--
Regards,
Nikhil



--
Regards,
Nikhil



--
Regards,
Nikhil



--
Regards,
Nikhil
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] how to find vlib_buffer_t.data size(capacity) @ runtime?

2017-12-07 Thread Dave Barach (dbarach)
Interpret b->current_data, b->current_length, the buffer freelist index, and 
the related vlib_buffer_free_list_t structure. 

In most cases, b->packet_data is actually VLIB_BUFFER_DATA_SIZE (2048) bytes 
long. Look at the related vlib_buffer_free_list_t to know for sure. 

Current_data is a SIGNED offset into b->packet_data[0]. It can be negative by 
as much as VLIB_BUFFER_PRE_DATA_SIZE. Typically, device drivers write the first 
octet of packet data into b->packet_data[0], but devices / device driver 
writers may place data at arbitrary [positive] offsets into b->packet_data.   
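
A minimal sketch of "does it fit", assuming the buffer really comes from a free 
list with the default data size (if the packet generator built a smaller free 
list, substitute that free list's data size for the constant):

  static inline word
  my_buffer_tailroom (vlib_buffer_t * b)
  {
    /* current_data is a signed offset into the data area, current_length is
       the number of valid bytes starting there; the rest is tailroom */
    return (word) VLIB_BUFFER_DATA_SIZE
      - ((word) b->current_data + (word) b->current_length);
  }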

HTH... Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco)
Sent: Thursday, December 7, 2017 8:06 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] how to find vlib_buffer_t.data size(capacity) @ runtime?

Hi,

I discovered that the packet generator does not always respect the default 
vlib_buffer_t.data size as defined in buffer.h:

#define VLIB_BUFFER_DATA_SIZE   (2048)

It derives the required buffer size from the individual packet sizes from the 
pcap file - at least that's what happens in 'make test'. In my case it's 256 
bytes.

My question is - what is the easiest way to determine the actual allocated 
vlib_buffer_t.data space at runtime? I want to be able to append some data to a 
buffer but first I would like to make sure that it fits...

Thanks,
Klement


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] memory issues

2017-12-06 Thread Dave Barach (dbarach)
Before we crank up the vppinfra memory leakfinder, etc. etc.: cat /proc/`pidof 
vpp`/maps and have a hard stare at the output.

Configure one step at a time, looking for significant changes in the address 
space layout.
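
If the maps output is too raw to stare at, /proc/`pidof vpp`/smaps (the per-mapping 
sibling of maps) carries an Rss: line per region, so summing those gives the actual 
resident footprint:

awk '/^Rss:/ { sum += $2 } END { print sum " kB resident" }' /proc/`pidof vpp`/smaps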

HTH… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Luke, Chris
Sent: Tuesday, December 5, 2017 9:58 PM
To: 薛欣颖 ; vpp-dev 
Subject: Re: [vpp-dev] memory issues

I agree 5g is large, but I do not think this is the FIB. The default heap maxes 
out much sooner than that. Something else is going on.

For DPDK, “show dpdk buffer” and otherwise “show physmem”.

Chris.

From: 薛欣颖 mailto:xy...@fiberhome.com>>
Date: Tuesday, December 5, 2017 at 20:06
To: Chris Luke 
mailto:chris_l...@cable.comcast.com>>, vpp-dev 
mailto:vpp-dev@lists.fd.io>>
Subject: Re: Re: [vpp-dev] memory issues


Hi Chris,

I see what you mean. I have two other questions:
1. 5 GB of memory for 200k static routes also seems large; how can I configure it 
to use less physical memory?
2. How can I check the packet buffer memory?

BTW, do you have a test similar to this one, measuring the memory used by 200k 
static routes?

Thanks,
Xyxue


From: Luke, Chris
Date: 2017-12-05 21:43
To: 薛欣颖; vpp-dev
Subject: Re: [vpp-dev] memory issues
You’re misreading top. “Virt” only means the virtual memory footprint of the 
process. This includes unused heap, shared libraries, anonymous mmap() regions 
etc. “RSS” is the resident-in-memory size. It’s actually using 5G.

“show memory” also only shows the heap usage, it does not include packet buffer 
memory.

Chris.

From: mailto:vpp-dev-boun...@lists.fd.io>> on 
behalf of 薛欣颖 mailto:xy...@fiberhome.com>>
Date: Tuesday, December 5, 2017 at 00:51
To: vpp-dev mailto:vpp-dev@lists.fd.io>>
Subject: [vpp-dev] memory issues


Hi guys,

I am using vpp v18.01-rc0~241-g4c9f2a8.
I configured 200K static routes. When I run 'show memory' in VPP it reports '150+k used', 
but on my machine almost 15 GB is used. After deleting the static routes, almost 16 GB 
of memory is still in use.
More info is shown below:

VPP# show memory
Thread 0 vpp_main
heap 0x7fffb58e9000, 1076983 objects, 110755k of 151671k used, 15386k free, 
13352k reclaimed, 16829k overhead, 1048572k capacity
User heap index=0:
heap 0x7fffb58e9000, 1076984 objects, 110755k of 151671k used, 15386k free, 
13352k reclaimed, 16829k overhead, 1048572k capacity
User heap index=1:
heap 0x77ed4000, 2 objects, 128k of 130k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=2:
heap 0x7fffb1e28000, 2 objects, 512k of 514k used, 92 free, 0 reclaimed, 1k 
overhead, 8188k capacity
User heap index=3:
heap 0x7fffb1628000, 2 objects, 512k of 514k used, 92 free, 0 reclaimed, 1k 
overhead, 8188k capacity
User heap index=4:
heap 0x7fffaf628000, 2 objects, 512k of 514k used, 92 free, 0 reclaimed, 1k 
overhead, 32764k capacity
User heap index=5:
heap 0x7fffaf528000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=6:
heap 0x7fffaf428000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=7:
heap 0x7fffaf328000, 2 objects, 120k of 122k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=8:
heap 0x7fffaf228000, 2 objects, 120k of 122k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=9:
heap 0x7fffa7228000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 131068k capacity
User heap index=10:
heap 0x7fff9f228000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 131068k capacity
User heap index=11:
heap 0x7fff9b228000, 2 objects, 16k of 18k used, 92 free, 0 reclaimed, 1k 
overhead, 65532k capacity
User heap index=12:
heap 0x7fff9b028000, 2 objects, 256k of 258k used, 92 free, 0 reclaimed, 1k 
overhead, 2044k capacity
User heap index=13:
heap 0x7fff9ae28000, 2 objects, 240k of 242k used, 92 free, 0 reclaimed, 1k 
overhead, 2044k capacity
User heap index=14:
heap 0x7fff9ad28000, 5 objects, 8k of 10k used, 168 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=15:
heap 0x7fff9ac28000, 5 objects, 8k of 10k used, 168 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=16:
heap 0x7fff9ab28000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=17:
heap 0x7fff9a128000, 2 objects, 1k of 3k used, 88 free, 0 reclaimed, 1k 
overhead, 10236k capacity
User heap index=18:
heap 0x7fff9a028000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=19:
heap 0x7fff99f28000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=20:
heap 0x7fff99e28000, 2 objects, 2k of 4k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap index=21:
heap 0x7fff99d28000, 2 objects, 8k of 10k used, 92 free, 0 reclaimed, 1k 
overhead, 1020k capacity
User heap inde

Re: [vpp-dev] some files are never compiled

2017-12-05 Thread Dave Barach (dbarach)
Merged... I’ll clean out some more junk and push another patch... Thanks… Dave

From: Gabriel Ganne [mailto:gabriel.ga...@enea.com]
Sent: Tuesday, December 5, 2017 10:14 AM
To: Dave Barach (dbarach) ; vpp-dev@lists.fd.io
Subject: Re: some files are never compiled


Thanks Dave,



I had submitted a pull-request for the smp files here : 
https://gerrit.fd.io/r/#/c/9730/

Please tell me if I should abandon it and let you do a more complete patch (I 
don't think I can judge for all the mentioned files by myself).



Best regards,



--

Gabriel Ganne


From: Dave Barach (dbarach) mailto:dbar...@cisco.com>>
Sent: Tuesday, December 5, 2017 4:06:09 PM
To: Gabriel Ganne; vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: RE: some files are never compiled


Dear Gabriel,



The files mentioned below fall into several buckets:



  *   Code samples which might reasonably move to .../extras
  *   Things we’re not using at the moment, but which would take someone a good 
long time to build from scratch.

 *   The simulated annealing driver in vppinfra/anneal.c is a good example.

  *   Debris which should be removed



I’ll push a change-set to remove debris. Most of it is mine anyhow... (😉)...



Thanks… Dave



From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Gabriel Ganne
Sent: Tuesday, December 5, 2017 9:52 AM
To: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: [vpp-dev] some files are never compiled



Hi,



Following a question by Kevin Wang in 
VPP-1066, I saw that some files are actually never compiled.

Could some external plugin be using them ?

Can (Should) they be removed ?



As an example, I followed the smp.c, and smp_fido.[ch] files.

They have been disabled by commit 01d86c7f6f05938c7d3fe181bd0aa2f75ccdd1df 
(reviewed here: 
https://gerrit.fd.io/r/#/c/2273/) almost 1.5 year ago.



Here is how I listed them :

for file in $(git find "\.c$"); do

f=`basename $file .c` ;

git grep -q "$f\.c";

if [ $? -eq 1 ] ;  then echo $file ; fi ;

done

src/examples/vlib/plex_test.c
src/tools/g2/mkversion.c
src/vlib/elog_samples.c
src/vlib/parse.c
src/vlib/parse_builtin.c
src/vnet/ethernet/mac_swap.c
src/vnet/fib/fib_entry_src_default.c
src/vnet/ip/ip4_test.c
src/vnet/map/examples/health_check.c
src/vpp/app/sticky_hash.c
src/vppinfra/anneal.c
src/vppinfra/mod_test_hash.c
src/vppinfra/pfhash.c
src/vppinfra/phash.c
src/vppinfra/qhash.c
src/vppinfra/smp.c
src/vppinfra/smp_fifo.c
src/vppinfra/test_pfhash.c
src/vppinfra/test_phash.c
src/vppinfra/test_pool.c
src/vppinfra/test_qhash.c
src/vppinfra/tw_timer_4t_3w_4sl_ov.c
src/vppinfra/unix-kelog.c







--

Gabriel Ganne
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Build error when trying to cross-compile vpp

2017-12-05 Thread Dave Barach (dbarach)
See also “bootstrap.sh...”

$ make V=0 is_build_tool=yes tools-install

Thanks… Dave

From: nikhil ap [mailto:niks3...@gmail.com]
Sent: Tuesday, December 5, 2017 9:11 AM
To: Dave Barach (dbarach) 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Build error when trying to cross-compile vpp

Hi Dave,

I added a file x86_64.mk<http://x86_64.mk> in .../build-data/plaforms/ with the 
following content:

x86_64_arch = x86_64
x86_64_os = rumprun-netbsd
x86_64_target = x86_64-rumprun-netbsd
x86_64_native_tools = vppapigen
x86_64_uses_dpdk = yes

and in the TLD I did a "make PLATFORM=x86_64 TAG=x86_64_debug bootstrap" but I 
am still seeing that vppapigen is not getting built. Any clues?

Thanks,
Nikhil


On Tue, Dec 5, 2017 at 7:05 PM, Dave Barach (dbarach) 
mailto:dbar...@cisco.com>> wrote:
Dear Nikhil,

The first step in adding a new platform: construct 
.../build-data/plaforms/xxx.mk<http://xxx.mk>. There are several examples.

Note the rule:

xxx_native_tools = vppapigen

This rule builds the missing build-host tool.

Then:

“make PLATFORM=xxx TAG=xxx_debug vpp-install” or similar.

Caveat: the main Makefile “.../build-root/Makefile” is non-trivial.

In the past, we’ve used it to self-compile full toolchains, and to use the 
resulting toolchains to cross-compile embedded Linux images with squashfs / 
unionfs disk images.

All of the mechanisms are there to do interesting things, but since we seldom 
do those things anymore you can expect a certain amount of trouble.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
[mailto:vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io>] On 
Behalf Of nikhil ap
Sent: Tuesday, December 5, 2017 6:05 AM
To: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] Build error when trying to cross-compile vpp

After a bit more digging around the make file, I did this:

 make PLATFORM=x86_64 x86_64_os=rumprun-netbsd bootstrap

checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-rumprun-netbsd
checking whether we are cross compiling... yes

However, I am still seeing this error:

checking for vppapigen... no
configure: error: Externaly built vppapigen is needed when cross-compiling...
Makefile:635: recipe for target 'tools-configure' failed
make[1]: *** [tools-configure] Error 1

What is the issue?

On Tue, Dec 5, 2017 at 3:55 PM, nikhil ap 
mailto:niks3...@gmail.com>> wrote:
Hi All,

I am trying to cross-compile vpp. The make doesn't expose a way to pass the 
--host parameter required to configure and build using cross compilation.

Initially, I did the following:

CC=x86_64-rumprun-netbsd-gcc make bootstrap, but I saw the following error

If you meant to cross compile, use `--host'.
See `config.log' for more details

As a work-around based on the config.log, I did this following

/src/configure (Stripped other output ) --build=x86_64-linux-gnu 
--host=x86_64-rumprun-netbsd --target=x86_64-linux-gnu

However,  I saw the following error:
checking for vppapigen... no
configure: error: Externaly built vppapigen is needed when cross-compiling...

Is there a way to cleanly cross-compile?


--
Regards,
Nikhil



--
Regards,
Nikhil



--
Regards,
Nikhil
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] some files are never compiled

2017-12-05 Thread Dave Barach (dbarach)
Dear Gabriel,

We can definitely remove the three files you picked. Once verified, I’ll be 
glad to merge the patch.

Thanks… Dave

From: Gabriel Ganne [mailto:gabriel.ga...@enea.com]
Sent: Tuesday, December 5, 2017 10:14 AM
To: Dave Barach (dbarach) ; vpp-dev@lists.fd.io
Subject: Re: some files are never compiled


Thanks Dave,



I had submitted a pull-request for the smp files here : 
https://gerrit.fd.io/r/#/c/9730/

Please tell me if I should abandon it and let you do a more complete patch (I 
don't think I can judge for all the mentioned files by myself).



Best regards,



--

Gabriel Ganne


From: Dave Barach (dbarach) mailto:dbar...@cisco.com>>
Sent: Tuesday, December 5, 2017 4:06:09 PM
To: Gabriel Ganne; vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: RE: some files are never compiled


Dear Gabriel,



The files mentioned below fall into several buckets:



  *   Code samples which might reasonably move to .../extras
  *   Things we’re not using at the moment, but which would take someone a good 
long time to build from scratch.

 *   The simulated annealing driver in vppinfra/anneal.c is a good example.

  *   Debris which should be removed



I’ll push a change-set to remove debris. Most of it is mine anyhow... (😉)...



Thanks… Dave



From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Gabriel Ganne
Sent: Tuesday, December 5, 2017 9:52 AM
To: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: [vpp-dev] some files are never compiled



Hi,



Following a question by Kevin Wang in 
VPP-1066, I saw that some files are actually never compiled.

Could some external plugin be using them ?

Can (Should) they be removed ?



As an example, I followed the smp.c, and smp_fido.[ch] files.

They have been disabled by commit 01d86c7f6f05938c7d3fe181bd0aa2f75ccdd1df 
(reviewed here: 
https://gerrit.fd.io/r/#/c/2273/) almost 1.5 year ago.



Here is how I listed them :

for file in $(git find "\.c$"); do

f=`basename $file .c` ;

git grep -q "$f\.c";

if [ $? -eq 1 ] ;  then echo $file ; fi ;

done

src/examples/vlib/plex_test.c
src/tools/g2/mkversion.c
src/vlib/elog_samples.c
src/vlib/parse.c
src/vlib/parse_builtin.c
src/vnet/ethernet/mac_swap.c
src/vnet/fib/fib_entry_src_default.c
src/vnet/ip/ip4_test.c
src/vnet/map/examples/health_check.c
src/vpp/app/sticky_hash.c
src/vppinfra/anneal.c
src/vppinfra/mod_test_hash.c
src/vppinfra/pfhash.c
src/vppinfra/phash.c
src/vppinfra/qhash.c
src/vppinfra/smp.c
src/vppinfra/smp_fifo.c
src/vppinfra/test_pfhash.c
src/vppinfra/test_phash.c
src/vppinfra/test_pool.c
src/vppinfra/test_qhash.c
src/vppinfra/tw_timer_4t_3w_4sl_ov.c
src/vppinfra/unix-kelog.c







--

Gabriel Ganne
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] some files are never compiled

2017-12-05 Thread Dave Barach (dbarach)
Dear Gabriel,

The files mentioned below fall into several buckets:


  *   Code samples which might reasonably move to .../extras
  *   Things we’re not using at the moment, but which would take someone a good 
long time to build from scratch.
 *   The simulated annealing driver in vppinfra/anneal.c is a good example.
  *   Debris which should be removed

I’ll push a change-set to remove debris. Most of it is mine anyhow... (😉)...

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Gabriel Ganne
Sent: Tuesday, December 5, 2017 9:52 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] some files are never compiled


Hi,



Following a question by Kevin Wang in 
VPP-1066, I saw that some files are 
actually never compiled.

Could some external plugin be using them ?

Can (Should) they be removed ?



As an example, I followed the smp.c, and smp_fido.[ch] files.
They have been disabled by commit 01d86c7f6f05938c7d3fe181bd0aa2f75ccdd1df 
(reviewed here: https://gerrit.fd.io/r/#/c/2273/) almost 1.5 year ago.



Here is how I listed them :

for file in $(git find "\.c$"); do

f=`basename $file .c` ;

git grep -q "$f\.c";

if [ $? -eq 1 ] ;  then echo $file ; fi ;

done
src/examples/vlib/plex_test.c
src/tools/g2/mkversion.c
src/vlib/elog_samples.c
src/vlib/parse.c
src/vlib/parse_builtin.c
src/vnet/ethernet/mac_swap.c
src/vnet/fib/fib_entry_src_default.c
src/vnet/ip/ip4_test.c
src/vnet/map/examples/health_check.c
src/vpp/app/sticky_hash.c
src/vppinfra/anneal.c
src/vppinfra/mod_test_hash.c
src/vppinfra/pfhash.c
src/vppinfra/phash.c
src/vppinfra/qhash.c
src/vppinfra/smp.c
src/vppinfra/smp_fifo.c
src/vppinfra/test_pfhash.c
src/vppinfra/test_phash.c
src/vppinfra/test_pool.c
src/vppinfra/test_qhash.c
src/vppinfra/tw_timer_4t_3w_4sl_ov.c
src/vppinfra/unix-kelog.c






--

Gabriel Ganne
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Build error when trying to cross-compile vpp

2017-12-05 Thread Dave Barach (dbarach)
Dear Nikhil,

The first step in adding a new platform: construct 
.../build-data/plaforms/xxx.mk. There are several examples.

Note the rule:

xxx_native_tools = vppapigen

This rule builds the missing build-host tool.

Then:

“make PLATFORM=xxx TAG=xxx_debug vpp-install” or similar.

Caveat: the main Makefile “.../build-root/Makefile” is non-trivial.

In the past, we’ve used it to self-compile full toolchains, and to use the 
resulting toolchains to cross-compile embedded Linux images with squashfs / 
unionfs disk images.

All of the mechanisms are there to do interesting things, but since we seldom 
do those things anymore you can expect a certain amount of trouble.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of nikhil ap
Sent: Tuesday, December 5, 2017 6:05 AM
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Build error when trying to cross-compile vpp

After a bit more digging around the make file, I did this:

 make PLATFORM=x86_64 x86_64_os=rumprun-netbsd bootstrap

checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-rumprun-netbsd
checking whether we are cross compiling... yes

However, I am still seeing this error:

checking for vppapigen... no
configure: error: Externaly built vppapigen is needed when cross-compiling...
Makefile:635: recipe for target 'tools-configure' failed
make[1]: *** [tools-configure] Error 1

What is the issue?

On Tue, Dec 5, 2017 at 3:55 PM, nikhil ap 
mailto:niks3...@gmail.com>> wrote:
Hi All,

I am trying to cross-compile vpp. The make doesn't expose a way to pass the 
--host parameter required to configure and build using cross compilation.

Initially, I did the following:

CC=x86_64-rumprun-netbsd-gcc make bootstrap, but I saw the following error

If you meant to cross compile, use `--host'.
See `config.log' for more details

As a work-around based on the config.log, I did this following

/src/configure (Stripped other output ) --build=x86_64-linux-gnu 
--host=x86_64-rumprun-netbsd --target=x86_64-linux-gnu

However,  I saw the following error:
checking for vppapigen... no
configure: error: Externaly built vppapigen is needed when cross-compiling...

Is there a way to cleanly cross-compile?


--
Regards,
Nikhil



--
Regards,
Nikhil
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Question on node type "VLIB_NODE_TYPE_PROCESS"

2017-11-30 Thread Dave Barach (dbarach)
At least for now, process nodes run on the main thread. See line 1587 of 
.../src/vlib/main.c.

The lldp-process is not super-complicated. Set a gdb breakpoint on line 157 
[switch(event_type)], cause it to do something, and you can walk through it, 
etc.
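
For orientation, the skeleton of a process node looks roughly like this (a
sketch only; the node name and event handling are made up, not the actual
lldp code):

  /* Minimal process node sketch -- runs on the main thread */
  static uword
  my_process_fn (vlib_main_t * vm, vlib_node_runtime_t * rt, vlib_frame_t * f)
  {
    uword event_type, *event_data = 0;

    while (1)
      {
        /* Sleep until an event arrives or the 1-second timer expires */
        vlib_process_wait_for_event_or_clock (vm, 1.0 /* seconds */);
        event_type = vlib_process_get_events (vm, &event_data);

        switch (event_type)
          {
          case ~0:            /* timeout, no event */
            break;
          default:            /* application-defined event(s) */
            break;
          }
        vec_reset_length (event_data);
      }
    return 0;
  }

  VLIB_REGISTER_NODE (my_process_node) = {
    .function = my_process_fn,
    .type = VLIB_NODE_TYPE_PROCESS,
    .name = "my-process",
  };

The graph scheduler resumes such a node on the main thread whenever its timer
expires or some other node / API handler signals it an event.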

HTH... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Yeddula, Avinash
Sent: Thursday, November 30, 2017 5:49 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Question on node type "VLIB_NODE_TYPE_PROCESS"

Hello,

I have a setup with, 1 worker thread (Core 8) and 1 main thread (Core 0).

As I read about the node type VLIB_NODE_TYPE_PROCESS, it says
"The graph node scheduler invokes these processes in much the same way as 
traditional vector-processing run-to-completion graph  nodes".

For eg..
A node like "lldp_process_node", as I see whenever a timeout occurs or an event 
has been generated, a frame has been sent out of an interface. The questions I 
have are..


  1.  The part I'm not able to figure out yet is: where (on which 
thread/core) is this "lldp_process_node" running in the background? I'm 
assuming it cannot be a worker thread.


  2.  Would you please point me to the piece of code in vpp infra that schedules 
all nodes of type "VLIB_NODE_TYPE_PROCESS"?


  3.  I tried to turn on a few debugs like "VLIB_BUFFER_TRACE_TRAJECTORY" 
and a few other ones. None of them seems to generate any traces/logs (show trace 
doesn't give me any info). Any pointers on how to enable relevant logs for 
this activity?

Thanks
-Avinash

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] problem in elog format

2017-11-30 Thread Dave Barach (dbarach)
Hmmm. I’ve never seen that issue, although I haven’t run c2cpel in a while. 
I’ll take a look later today.

It looks like .../src/perftool.am builds it, so look under 
build-root/install-xxx and (possibly) install it manually...

Thanks… Dave

From: Juan Salmon [mailto:salmonju...@gmail.com]
Sent: Thursday, November 30, 2017 12:50 AM
To: Dave Barach (dbarach) 
Cc: Florin Coras ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] problem in elog format

Thanks a lot,
Now I want to convert elog file to text file.
I compiled perftools in the test directory, but when running the c2cpel tool, the 
following error occurred:

c2cpel: error while loading shared libraries: libcperf.so.0: cannot open shared 
object file: No such file or directory

Best Regards,
Juan Salmon.

On Wed, Nov 29, 2017 at 3:53 PM, Dave Barach (dbarach) 
mailto:dbar...@cisco.com>> wrote:

PMFJI, but we have organized schemes for capturing, serializing, and eventually 
displaying string data.



Please note: a single "format" call will probably cost more than the entire 
clock-cycle budget available to process a packet. Really. Seriously. Printfs 
(aka format calls) in the packet-processing path are to be avoided at all 
costs. The basic event-logger modus operandi is to capture binary data and 
pretty-print it offline.



At times, one will need or want to log string data. Here's how to proceed:



The printf-like function elog_string(...) adds a string to the event log string 
heap, and returns a cookie which offline tools use to print that string. The 
"T" format specifier in an event definition means "go print the string at the 
indicated u32 string heap offset”. Here’s an example:



  /* *INDENT-OFF* */

  ELOG_TYPE_DECLARE (e) =

{

  .format = "serialize-msg: %s index %d",

  .format_args = "T4i4",

};

  struct

{

u32 c[2];

  } *ed;

  ed = ELOG_DATA (mc->elog_main, e);

  ed->c[0] = elog_id_for_msg_name (mc, msg->name);

  ed->c[1] = si;



So far so good, but let’s do a bit of work to keep from blowing up the string 
heap:



static u32

elog_id_for_msg_name (mc_main_t * m, char *msg_name)

{

  uword *p, r;

  uword *h = m->elog_id_by_msg_name;

  u8 *name_copy;



  if (!h)

h = m->elog_id_by_msg_name = hash_create_string (0, sizeof (uword));



  p = hash_get_mem (h, msg_name);

  if (p)

return p[0];

  r = elog_string (m->elog_main, "%s", msg_name);



  name_copy = format (0, "%s%c", msg_name, 0);



  hash_set_mem (h, name_copy, r);

  m->elog_id_by_msg_name = h;



  return r;

}



As in: each unique string appears exactly once in the event-log string heap. 
Hash_get_mem (x) is way cheaper than printf(x). Please remember that this hash 
flavor is not inherently thread-safe.



In the case of enumerated strings, use the “t” format specifier. It only costs 
1 octet to represent up to 256 constant strings:



  ELOG_TYPE_DECLARE (e) =

  {

.format = "my enum: %s",

.format_args = "t1",

.n_enum_strings =

  2,

.enum_strings =

  {

"string 1",

"string 2",

  },

   };

  struct

  {

u8 which;

  } *ed;

  ed = ELOG_DATA (&vlib_global_main.elog_main, e);

  ed->which = which;





HTH… Dave



-Original Message-
From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
[mailto:vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io>] On 
Behalf Of Florin Coras
Sent: Wednesday, November 29, 2017 4:43 AM
To: Juan Salmon mailto:salmonju...@gmail.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] problem in elog format



Hi Juan,



We don’t typically use elogs to store strings, still, you may be able to get it 
to run with:



struct

{

u8 err[20];

} * ed;



And then copy your data to err: clib_memcpy (ed->err, your_vec, vec_len 
(your_vec)). Make sure your vec is 0 terminated.



HTH,

Florin



> On Nov 28, 2017, at 9:12 PM, Juan Salmon 
> mailto:salmonju...@gmail.com>> wrote:

>

>

> I want to use event-log and send string to one of elements of ed struct.

> but the result is not correct.

>

> the sample code:

>

> ELOG_TYPE_DECLARE (e) = {

> .format = "Test LOG: %s",

> .format_args = "s20",

> };

> struct

> {

> u8 * err;

> } * ed;

>

>

> vlib_worker_thread_t * w = vlib_worker_threads + cpu_index;

> ed = ELOG_TRACK_DATA (&vlib_global_main.elog_main, e, w->elog_track);

>

> ed->err = format (0,"%s", "This is a Test");

>

>

> Could you please help me?

>

>

> Best Regards,


[vpp-dev] Frequently-asked questions wiki page

2017-11-29 Thread Dave Barach (dbarach)
Folks,

Please see https://wiki.fd.io/view/VPP/FAQ. Additions welcome. I decided to 
start with a personal favorite...

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] can't expand fixed-size pool

2017-11-29 Thread Dave Barach (dbarach)
/usr/bin/vpp[1754]: scan_device:510: can't expand fixed-size pool. The pool in 
question is not preallocated. It sure looks like something has scribbled on the 
pool's "max_elts" member. You might want to confirm w/ gdb.



I couldn't help noticing that you're running the router plugin. Please disable 
it and try again.



Thanks... Dave



-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Glaza
Sent: Wednesday, November 29, 2017 7:52 AM
To: vpp-dev 
Subject: [vpp-dev] can't expand fixed-size pool



Hi everybody,



I just build an rpm package of vpp-17.10 and on build host it runs OK, but on 
second clean host it fails with:

# /usr/bin/vpp -c /etc/vpp/startup.conf.rpmnew

vlib_plugin_early_init:356: plugin path /usr/lib/vpp_plugins

load_one_plugin:184: Loaded plugin: acl_plugin.so (Access Control Lists)

load_one_plugin:184: Loaded plugin: dpdk_plugin.so (Data Plane Development Kit 
(DPDK))

load_one_plugin:184: Loaded plugin: flowprobe_plugin.so (Flow per Packet)

load_one_plugin:184: Loaded plugin: gtpu_plugin.so (GTPv1-U)

load_one_plugin:184: Loaded plugin: ila_plugin.so (Identifier-locator 
addressing for IPv6)

load_one_plugin:184: Loaded plugin: ioam_plugin.so (Inbound OAM)

load_one_plugin:114: Plugin disabled (default): ixge_plugin.so

load_one_plugin:184: Loaded plugin: lb_plugin.so (Load Balancer)

load_one_plugin:184: Loaded plugin: libsixrd_plugin.so (IPv6 Rapid Deployment 
on IPv4 Infrastructure (RFC5969))

load_one_plugin:184: Loaded plugin: memif_plugin.so (Packet Memory Interface 
(experimetal))

load_one_plugin:184: Loaded plugin: nat_plugin.so (Network Address Translation)

load_one_plugin:184: Loaded plugin: pppoe_plugin.so (PPPoE)

load_one_plugin:184: Loaded plugin: router.so (router)

/usr/bin/vpp[1754]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/acl_test_plugin.so

/usr/bin/vpp[1754]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/dpdk_test_plugin.so

/usr/bin/vpp[1754]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_export_test_plugin.so

/usr/bin/vpp[1754]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_pot_test_plugin.so

/usr/bin/vpp[1754]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_trace_test_plugin.so

/usr/bin/vpp[1754]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/ioam_vxlan_gpe_test_plugin.so

/usr/bin/vpp[1754]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/lb_test_plugin.so

/usr/bin/vpp[1754]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/udp_ping_test_plugin.so

/usr/bin/vpp[1754]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/vxlan_gpe_ioam_export_test_plugin.so

/usr/bin/vpp[1754]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/flowprobe_test_plugin.so

/usr/bin/vpp[1754]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/gtpu_test_plugin.so

/usr/bin/vpp[1754]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/memif_test_plugin.so

/usr/bin/vpp[1754]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/nat_test_plugin.so

/usr/bin/vpp[1754]: load_one_plugin:63: Loaded plugin: 
/usr/lib/vpp_api_test_plugins/pppoe_test_plugin.so

/usr/bin/vpp[1754]: scan_device:510: can't expand fixed-size pool



Host is Centos 7.3.



Please help.





___

vpp-dev mailing list

vpp-dev@lists.fd.io

https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] problem in elog format

2017-11-29 Thread Dave Barach (dbarach)
PMFJI, but we have organized schemes for capturing, serializing, and eventually 
displaying string data.



Please note: a single "format" call will probably cost more than the entire 
clock-cycle budget available to process a packet. Really. Seriously. Printfs 
(aka format calls) in the packet-processing path are to be avoided at all 
costs. The basic event-logger modus operandi is to capture binary data and 
pretty-print it offline.



At times, one will need or want to log string data. Here's how to proceed:



The printf-like function elog_string(...) adds a string to the event log string 
heap, and returns a cookie which offline tools use to print that string. The 
"T" format specifier in an event definition means "go print the string at the 
indicated u32 string heap offset”. Here’s an example:



  /* *INDENT-OFF* */

  ELOG_TYPE_DECLARE (e) =

{

  .format = "serialize-msg: %s index %d",

  .format_args = "T4i4",

};

  struct

{

u32 c[2];

  } *ed;

  ed = ELOG_DATA (mc->elog_main, e);

  ed->c[0] = elog_id_for_msg_name (mc, msg->name);

  ed->c[1] = si;



So far so good, but let’s do a bit of work to keep from blowing up the string 
heap:



static u32

elog_id_for_msg_name (mc_main_t * m, char *msg_name)

{

  uword *p, r;

  uword *h = m->elog_id_by_msg_name;

  u8 *name_copy;



  if (!h)

h = m->elog_id_by_msg_name = hash_create_string (0, sizeof (uword));



  p = hash_get_mem (h, msg_name);

  if (p)

return p[0];

  r = elog_string (m->elog_main, "%s", msg_name);



  name_copy = format (0, "%s%c", msg_name, 0);



  hash_set_mem (h, name_copy, r);

  m->elog_id_by_msg_name = h;



  return r;

}



As in: each unique string appears exactly once in the event-log string heap. 
Hash_get_mem (x) is way cheaper than printf(x). Please remember that this hash 
flavor is not inherently thread-safe.



In the case of enumerated strings, use the “t” format specifier. It only costs 
1 octet to represent up to 256 constant strings:



  ELOG_TYPE_DECLARE (e) =

  {

.format = "my enum: %s",

.format_args = "t1",

.n_enum_strings =

  2,

.enum_strings =

  {

"string 1",

"string 2",

  },

   };

  struct

  {

u8 which;

  } *ed;

  ed = ELOG_DATA (&vlib_global_main.elog_main, e);

  ed->which = which;





HTH… Dave



-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Florin Coras
Sent: Wednesday, November 29, 2017 4:43 AM
To: Juan Salmon 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] problem in elog format



Hi Juan,



We don’t typically use elogs to store strings, still, you may be able to get it 
to run with:



struct

{

u8 err[20];

} * ed;



And then copy your data to err: clib_memcpy (ed->err, your_vec, vec_len 
(your_vec)). Make sure your vec is 0 terminated.



HTH,

Florin



> On Nov 28, 2017, at 9:12 PM, Juan Salmon 
> mailto:salmonju...@gmail.com>> wrote:

>

>

> I want to use event-log and send string to one of elements of ed struct.

> but the result is not correct.

>

> the sample code:

>

> ELOG_TYPE_DECLARE (e) = {

> .format = "Test LOG: %s",

> .format_args = "s20",

> };

> struct

> {

> u8 * err;

> } * ed;

>

>

> vlib_worker_thread_t * w = vlib_worker_threads + cpu_index;

> ed = ELOG_TRACK_DATA (&vlib_global_main.elog_main, e, w->elog_track);

>

> ed->err = format (0,"%s", "This is a Test");

>

>

> Could you please help me?

>

>

> Best Regards,

> Juan Salmon.

> ___

> vpp-dev mailing list

> vpp-dev@lists.fd.io

> https://lists.fd.io/mailman/listinfo/vpp-dev



___

vpp-dev mailing list

vpp-dev@lists.fd.io

https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] How to enable RSS in VPP

2017-11-28 Thread Dave Barach (dbarach)
You are sending traffic with more than one flow, correct?

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Saxena, Nitin
Sent: Tuesday, November 28, 2017 11:45 AM
To: vpp-dev@lists.fd.io
Cc: Athreya, Narayana Prasad 
Subject: [vpp-dev] How to enable RSS in VPP

HI,

I am using a ConnectX-4 NIC which has Rx RSS support; however, I can see VPP is not 
using the RSS feature with this NIC.
The NIC is getting traffic on 1 queue only. Can this be fixed in VPP? If yes, 
how?

Output from show hardware detail

==
UnknownEthernet32/0/0  1 up   UnknownEthernet32/0/0
  Ethernet address 24:8a:07:a4:6b:78
  Mellanox ConnectX-4 Family
carrier up full duplex speed 4 mtu 9216  promisc
pci id:device 15b3:1013 subsystem 15b3:0008
pci address:   :32:00.00
max rx packet len: 65536
max num of queues: rx 65535 tx 65535
promiscuous:   unicast on all-multicast on
vlan offload:  strip off filter off qinq off
rx offload caps:   vlan-strip ipv4-cksum udp-cksum tcp-cksum
tx offload caps:   vlan-insert ipv4-cksum udp-cksum tcp-cksum 
outer-ipv4-cksum
rss active:ipv4-udp
rss supported: none
rx queues 4, rx desc 1024, tx queues 5, tx desc 1024
cpu socket 0

tx frames ok 31003987272
tx bytes ok1860239236320
rx frames ok 63884415232
rx bytes ok3833064913920
extended stats:
  rx good packets63884415232
  tx good packets31003987272
  rx good bytes3833064913920
  tx good bytes1860239236320
  rx errors0
  tx errors0
  rx mbuf allocation errors0
  rx q0packets 0
  rx q0bytes   0
  rx q0errors  0
  rx q1packets 0
  rx q1bytes   0
  rx q1errors  0
  rx q2packets 0
  rx q2bytes   0
  rx q2errors  0
  rx q3packets   63884415232
  rx q3bytes   3833064913920
  rx q3errors  0
  tx q0packets 0
  tx q0bytes   0
  tx q1packets   31003987272
  tx q1bytes   1860239236320
  tx q2packets 0
  tx q2bytes   0
  tx q3packets 0
  tx q3bytes   0
  tx q4packets 0
  tx q4bytes   0


Regards,
Nitin
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP crash observed with 4k sub-interfaces and 4k FIBs

2017-11-27 Thread Dave Barach (dbarach)
Laying aside the out-of-memory issue for a minute: can you explain the vpp 
deployment you have in mind?

Given where vpp would fit in a normal network design, I’m not seeing why you’d 
want to go with a full vlan / VRF’s mesh.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Balaji Kn
Sent: Monday, November 27, 2017 4:32 AM
To: vpp-dev 
Subject: [vpp-dev] VPP crash observed with 4k sub-interfaces and 4k FIBs

Hello,

I am using VPP 17.07 and initialized heap memory as 3G in the startup configuration.
My use case is to have 4k sub-interfaces differentiated by VLAN and to 
associate each sub-interface with a unique VRF, eventually using 4k FIBs.

However, I am observing that VPP crashes with a memory crunch while adding an ip 
route.

backtrace
#0  0x7fae4c981cc9 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x7fae4c9850d8 in __GI_abort () at abort.c:89
#2  0x004070b3 in os_panic ()
at 
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vpp/vnet/main.c:263
#3  0x7fae4d19007a in clib_mem_alloc_aligned_at_offset 
(os_out_of_memory_on_failure=1,
align_offset=, align=64, size=1454172096)
at 
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vppinfra/mem.h:102
#4  vec_resize_allocate_memory (v=v@entry=0x7fade2c44880, 
length_increment=length_increment@entry=1,
data_bytes=, header_bytes=, 
header_bytes@entry=24,
data_align=data_align@entry=64)
at 
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vppinfra/vec.c:84
#5  0x7fae4db9210c in _vec_resize (data_align=, 
header_bytes=,
data_bytes=, length_increment=, v=)
at 
/jenkins_home/workspace/vFE/vFE_Release_Master_Build/datapath/vpp/build-data/../src/vppinfra/vec.h:142

I initially suspected the FIB was consuming too much heap space. But I do not see much 
memory consumed by the FIB tables either, and felt 3GB of heap should be sufficient:

vpp# show fib memory
FIB memory
 Name   Size  in-use /allocated   totals
 Entry   7260010 /  60010 4320720/4320720
 Entry Source3268011 /  68011 2176352/2176352
 Entry Path-Extensions   60  0   /0   0/0
multicast-Entry 1924006  /   4006 769152/769152
   Path-list 4860016 /  60016 2880768/2880768
   uRPF-list 1676014 /  76015 1216224/1216240
 Path8060016 /  60016 4801280/4801280
  Node-list elements 2076017 /  76019 1520340/1520380
Node-list heads  8 68020 /  68020 544160/544160

Is there any way to identify usage of heap memory in other modules?
Any pointers would be helpful.

Regards,
Balaji
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Import/includes in .api files

2017-11-20 Thread Dave Barach (dbarach)
Would some variant of the usual C / C++ guitar lick work?

#ifndef __defined_my_types
#define __defined_my_types 
#include <my_types.api>
#endif /* __defined_my_types */



-Original Message-
From: Ole Troan [mailto:otr...@employees.org] 
Sent: Monday, November 20, 2017 10:32 AM
To: Dave Barach (dbarach) 
Cc: Neale Ranns (nranns) ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Import/includes in .api files

Dave,

> Since the beginning of time, we've been running .api files through the C 
> preprocessor. Put all of your "typeonly..." definitions in a file, and 
> #include it. Should work immediately.
> 
> Thanks to Damjan, there's only one copy of the suffix rule, in 
> .../src/suffix-rules.mk. Here's the relevant rule:
> 
> %.api.h: %.api @VPPAPIGEN@
>   @echo "  APIGEN  " $@ ; \
>   mkdir -p `dirname $@` ; \
>   $(CC) $(CPPFLAGS) -E -P -C -x c $<  \
>   | @VPPAPIGEN@ --input - --output $@ --show-name $@ > /dev/null

Sorry for the misunderstanding; this seems to work perfectly fine with the 
language bindings.

Verified by moving fib_path to types.api and comparing the resulting 
ip.api.json.

Need to figure out how to deal with duplicates, which will end up in the .JSON 
definitions when multiple .api include the same file.
That shouldn't be a big deal though.

Best regards,
Ole
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] vpp payload modification

2017-11-20 Thread Dave Barach (dbarach)
“It depends...”

If you’re using DPDK devices and a 1500-byte MTU, the answer is almost 
certainly “no.” Simply set b->current_length += delta, fix checksums, done.

The worst-case: added data spans the end of the current packet buffer chain and 
the beginning of a new buffer. If you need to go there, ping me and I’ll point 
you to some examples.
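
A rough sketch of the simple (single-buffer, enough tailroom) case, with
made-up names for the extra data -- not production code:

  ip4_header_t *ip4 = vlib_buffer_get_current (b0);
  u8 *tail = (u8 *) vlib_buffer_get_current (b0) + b0->current_length;
  u32 delta = 10;                              /* illustrative */

  clib_memcpy (tail, my_extra_bytes, delta);   /* my_extra_bytes: your data */
  b0->current_length += delta;

  /* Fix the IP total length and header checksum */
  ip4->length = clib_host_to_net_u16 (clib_net_to_host_u16 (ip4->length) + delta);
  ip4->checksum = ip4_header_checksum (ip4);

  /* ...and recompute the L4 (TCP/UDP) checksum if the bytes you added are
     covered by it. */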

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Abhilash Lakshmeshwar
Sent: Monday, November 20, 2017 9:03 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] vpp payload modification

Hello,

I am working on vpp version 17.07. I have a case wherein I need to change the 
payload of a packet. If the size of the payload is increased (say by 
10 bytes) from the original one, do I have to reallocate the buffer to 
accommodate the extra bytes?

Right now the modified packet has truncated data at the end. I have 
modified the ip length and the checksum according to the new payload.

Am I missing anything? Any pointers would be helpful.

Thanks,
Abhilash


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Import/includes in .api files

2017-11-20 Thread Dave Barach (dbarach)
Dear Neale,

Since the beginning of time, we've been running .api files through the C 
preprocessor. Put all of your "typeonly..." definitions in a file, and #include 
it. Should work immediately.

Thanks to Damjan, there's only one copy of the suffix rule, in 
.../src/suffix-rules.mk. Here's the relevant rule:

%.api.h: %.api @VPPAPIGEN@
@echo "  APIGEN  " $@ ; \
mkdir -p `dirname $@` ; \
$(CC) $(CPPFLAGS) -E -P -C -x c $<  \
| @VPPAPIGEN@ --input - --output $@ --show-name $@ > /dev/null

HTH… Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Neale Ranns (nranns)
Sent: Monday, November 20, 2017 3:28 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] RFC: Import/includes in .api files


Hi All,

I’d like to be able to re-use types defined in one .api file in many other .api 
files. My specific objective is to re-use a fib_path_t across the many APIs 
that describe a destination to which to send packets.

My first attempt at this is:
  https://gerrit.fd.io/r/#/c/9489/

I updated vppapigen to accept the keyword ‘import’, munch the subsequent 
string, and then generate the #include in the resulting .api.h. then the fun 
started… multiple type definitions, include guards, here be dragons, turn back 
now and seek assistance.
I later realised that an import statement is not required. If I create 
vnet/fib/fib.api and add it to vnet_all_api_h.h at the top, then that has some 
success. However, having no import statement is not so friendly to other tools that 
parse the .api files.

So an RFC that is really an RFH; how is it best to approach this?

Regards,
Neale


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Please Call DigSafe...

2017-11-17 Thread Dave Barach (dbarach)
Dear Chris,

As you probably worked out, the forcing function for my email was a patch that 
both Florin and I -2'ed yesterday; a real stinker.

I want to facilitate discussions of the form: "I'd like to implement X or fix 
Y. What's the right way to do it? Who should I talk to about that?"

Guidelines seem like a good idea. I'll try to write something on the wiki.

Thanks... Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Luke, Chris
Sent: Friday, November 17, 2017 8:51 AM
To: Dave Barach ; vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Please Call DigSafe...

Hi Dave,

After spending a few minutes to work out that you were talking about a proposed 
patch and not something any of us had merged (and, especially not that I 
merged!), I see that what we need is a balance between not discouraging people 
from experimenting or submitting their ideas, and steering them towards relevant 
leads before they get in too deep.

Problem is, if people make huge patches before ever talking to someone, our 
first contact is when they submit it. The teaching moment is when the reviewer 
notices it. That is obviously too late for the first patch, but should help 
with subsequent work.

This is why open source generally prefers people to keep their patches small 
and thematic; most reviewers tire of seeing many large patches developed in 
isolation that are directionally unsound. It gets to the point that they only 
glance at the color bar in the review list: if it's yellow or worse, and 
not from someone they specifically associate with quality work, those 
submissions typically end up ignored.

I don't think we have contribution guidelines for VPP or fd.io in general 
(apart from the style and doc guides); at least a very quick scan of the wiki 
was not fruitful. We should have somewhere to send new people (can we nudge 
people who login to Gerrit for the first time?), and also people whose first 
submission is unacceptable (too big, too complex, directionally unsound). And 
we as reviewers should remain vigilant and, importantly, consistent.

Chris.


> -Original Message-
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On
> Behalf Of Dave Barach
> Sent: Friday, November 17, 2017 7:45
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] Please Call DigSafe...
> 
> Folks,
> 
> At our next project meeting, I'd like to spend a few minutes talking about a
> good-news / bad-news situation affecting the vpp project.
> 
> As the community has expanded, committers have begun noticing
> unacceptable and unfixable patches in mission-critical code. Yesterday's
> soap-opera episode involved the ip4/6 speed-paths.
> 
> I think we should allocate a bit of meeting time for folks to talk about what
> they're trying to develop, with an eye towards engaging with relevant area
> experts from the start.
> 
> In most places in the US, folks planning to dig holes on their property are
> required to call 811 (DigSafe): to avoid hitting buried gas lines and blowing 
> up
> the neighborhood. It seems like we need to create something
> similar for the vpp project.
> 
> Thoughts?
> 
> Thanks... Dave
> 
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Discussion Topic: creating demo branches in git.fd.io/vpp

2017-11-15 Thread Dave Barach (dbarach)
+1... 

-Original Message-
From: Ole Troan [mailto:otr...@employees.org] 
Sent: Wednesday, November 15, 2017 5:49 PM
To: Dave Wallace 
Cc: Ed Warnicke ; Dave Barach (dbarach) ; 
Keith Burns (krb) ; Florin Coras (fcoras) ; 
John Lo (loj) ; Luke, Chris ; Damjan 
Marion ; Neale Ranns (nranns) ; 
vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Discussion Topic: creating demo branches in git.fd.io/vpp

Just as a data-point.
What we have done for IETF hackathons is to create a branch on github. E.g:

https://github.com/vpp-dev/vpp/tree/ietf100-nat

This allows us to do "high speed collaboration". Then cherry pick what has 
value after the event.
Perhaps something similar could be done for "demos"?

Cheers,
Ole

> On 16 Nov 2017, at 06:17, Dave Wallace  wrote:
> 
> Folks,
> 
> Per the action item from this yesterday's VPP weekly meeting, I'm asking for 
> opinions from the VPP community on allowing the creation of demo branches in 
> the VPP git repo.
> 
> A demo branch is defined as a branch pulled from master 
> that:
> 
> 1) Its purpose is to demonstrate a VPP use case at a public conference/symposium 
> (kubecon 2017 as the 1st instance).
> 2) The branch will never be merged back into master.
> 3) Commits to the branch will be cherry-picked/double-committed to master.
> 
> Some comments I recall from memory (please forgive me if I have left any 
> comments out):
> 
> Pro: Will allow utilization of LF infra to utilize CI process
> Pro: Will allow publishing of demo artifacts for ease of reproduction of the 
> demo.
> Con: Will pollute repo with ephemeral code that will rapidly become out of 
> date / dead.
> Con: Sets precedent which may cause large numbers of non-production branches 
> over time.
> 
> Please feel free to add additional Pro/Con comments here.  Comments are welcome from 
> all members of the VPP community.
> 
> I will begin with my thoughts since yesterday's meeting:
> 
> Con: In order for the CI infra to be utilized, the addition of demo branch 
> specific jenkins jobs needs to be added to ci-management (polluting that repo 
> as well).
> Con: May add overhead to CSIT project in triaging any CSIT failures on the 
> demo branch.
> Con: Adds overhead to already over-subscribed committer task workload 
> (reviewing commits to demo branch & double commits to main)
> 
> 
> IMHO, this proposal has the potential to cause the VPP committer workload to 
> spiral out of control thus disrupting the regular release cadence.
> 
> 
> @Ed,
> 
> It might be a good idea to include the ci-management and CSIT projects in 
> this discussion since those projects may be affected by this proposal.  I'll 
> let you decide whether or not to add additional projects to the discussion?
> 
> As the TSC Chairperson, I will let you decide when to close the discussion 
> and call for a vote of the committers (whom I addressed directly on this 
> email).
> 
> 
> Thanks,
> -daw-
> 
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Discussion Topic: creating demo branches in git.fd.io/vpp

2017-11-15 Thread Dave Barach (dbarach)
I have a mild preference that we avoid creating demo branches in the master vpp 
repo. When faced with similar requirements, I’ve added branch(es) to ephemeral 
downstream mirrors.

I suppose one could destroy demo branches in the master vpp repo, but 
“something could go wrong...”

Thoughts?

Thanks... Dave

From: Dave Wallace [mailto:dwallac...@gmail.com]
Sent: Wednesday, November 15, 2017 5:17 PM
To: Ed Warnicke ; Dave Barach (dbarach) ; 
Keith Burns (krb) ; Florin Coras (fcoras) ; 
John Lo (loj) ; Luke, Chris ; Damjan 
Marion ; Neale Ranns (nranns) ; Ole 
Troan (otroan) ; vpp-dev@lists.fd.io
Subject: Discussion Topic: creating demo branches in git.fd.io/vpp

Folks,

Per the action item from this yesterday's VPP weekly meeting, I'm asking for 
opinions from the VPP community on allowing the creation of demo branches in 
the VPP git repo.

A demo branch is defined as a branch pulled from master that:

1) Its purpose is to demonstrate a VPP use case at a public conference/symposium 
(kubecon 2017 as the 1st instance).
2) The branch will never be merged back into master.
3) Commits to the branch will be cherry-picked/double-committed to master.

Some comments I recall from memory (please forgive me if I have left any 
comments out):

Pro: Will allow utilization of LF infra to utilize CI process
Pro: Will allow publishing of demo artifacts for ease of reproduction of the 
demo.
Con: Will pollute repo with ephemeral code that will rapidly become out of date 
/ dead.
Con: Sets precedent which may cause large numbers of non-production branches 
over time.

Please feel free to add additional Pro/Con comments here.  Comments are welcome from 
all members of the VPP community.

I will begin with my thoughts since yesterday's meeting:

Con: In order for the CI infra to be utilized, the addition of demo branch 
specific jenkins jobs needs to be added to ci-management (polluting that repo 
as well).
Con: May add overhead to CSIT project in triaging any CSIT failures on the demo 
branch.
Con: Adds overhead to already over-subscribed committer task workload 
(reviewing commits to demo branch & double commits to main)


IMHO, this proposal has the potential to cause the VPP committer workload to 
spiral out of control thus disrupting the regular release cadence.


@Ed,

It might be a good idea to include the ci-management and CSIT projects in this 
discussion since those projects may be affected by this proposal.  I'll let you 
decide whether or not to add additional projects to the discussion?

As the TSC Chairperson, I will let you decide when to close the discussion and 
call for a vote of the committers (whom I addressed directly on this email).


Thanks,
-daw-
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] TCP Options: tcp_header_t and tcp_options_t

2017-11-14 Thread Dave Barach (dbarach)
Dear Justin,

Brief commercial: hopefully you added your node to the ip4 unicast feature arc, 
configured to grab pkts, pre-ip4/6-lookup. 

In feature-arc land, the following one-liner sets next0 so pkts will visit the 
next enabled feature. The last node in the ip4-unicast feature arc is 
ip4-lookup...

  /* Next node in unicast feature arc */
  vnet_get_config_data (em->config_main[table_index],
&b0->current_config_index, &next0,
/* # bytes of config data */ 0);

Check the ip protocol and ignore any non-TCP pkts:

  ip40 = vlib_buffer_get_current (b0);
  if (ip40->protocol != IP_PROTOCOL_TCP)
goto trace0;

Then use ip4_next_header() to find the tcp layer, etc. etc.
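
I.e. something along these lines (a sketch only; error handling and the
multi-buffer case omitted):

  tcp_header_t *tcp0 = ip4_next_header (ip40);
  u8 *opts = (u8 *) (tcp0 + 1);                        /* options start here */
  int opts_len = tcp_header_bytes (tcp0) - sizeof (tcp_header_t);

  /* opts[0..opts_len-1] is the raw option TLV area; you can rewrite options
     in place. If you grow or shrink it, remember to fix the tcp data offset,
     ip40->length, and both the IP and TCP checksums. */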

HTH... Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Justin Iurman
Sent: Monday, November 13, 2017 5:30 PM
To: vpp-dev 
Subject: [vpp-dev] TCP Options: tcp_header_t and tcp_options_t

Guys,

My node is located right before ip4_lookup. What's the fastest/cleanest way to 
get the options of a TCP packet, having access to a tcp_header_t structure 
(which is not directly linked to its options)? Actually, I'd like to modify or 
remove some options on the fly. 

Do I have to call the tcp_options_parse function from src/vnet/tcp/tcp_input.c? 
But I guess that would duplicate the work, since it is already called at some 
point. 

Or should I get the TCP connection, which connects both tcp_header_t and 
tcp_options_t? Or should I directly modify options "in" the packet, by moving 
the data pointer (a sort-of copy of what tcp_options_parse already does)?

Thanks for your help !

Justin
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] vlib_validate_buffer_enqueue

2017-11-13 Thread Dave Barach (dbarach)
Dear Justin,

Quad-loops are generally not effective for table-lookup-intensive tasks. At a 
certain point, gcc runs out of registers and starts putting hot variables onto 
the stack. I've converted a number of dual loops into quad loops, only to 
discover that they're no faster than the dual loop version.

Rather than having the sample plugin propagate a bunch of "fetch me a rock" 
coding work, I went with a dual-single loop. When doing new development, I shut 
off the dual loop, make the single loop work, then build the dual (or quad) 
loop. 

With experience, building a dual (or quad) loop becomes a mechanical exercise 
easily done during a boring meeting. (😉)... 

In viable quad-loop use-cases, it's not worth any performance to also provide a 
dual loop. The dual-loop code will run at most one time; there's no chance of 
fixed overhead amortization. 
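
For reference, the single-loop skeleton that the dual/quad variants are built
from looks roughly like this (a sketch; the usual frame-handling locals are
assumed, and MY_NEXT_NODE plus the per-packet work are placeholders):

  while (n_left_from > 0)
    {
      vlib_get_next_frame (vm, node, next_index, to_next, n_left_to_next);

      while (n_left_from > 0 && n_left_to_next > 0)
        {
          u32 bi0 = from[0];
          vlib_buffer_t *b0 = vlib_get_buffer (vm, bi0);
          u32 next0 = MY_NEXT_NODE;      /* speculated next node */

          to_next[0] = bi0;
          to_next += 1;
          n_left_to_next -= 1;
          from += 1;
          n_left_from -= 1;

          /* ... per-packet work on b0 goes here ... */

          /* Enqueue bi0, verifying the speculation about next0 */
          vlib_validate_buffer_enqueue_x1 (vm, node, next_index,
                                           to_next, n_left_to_next,
                                           bi0, next0);
        }
      vlib_put_next_frame (vm, node, next_index, n_left_to_next);
    }

The x2 / x4 macros do the same thing for pairs / quads of speculatively
enqueued buffers, which is what amortizes the per-packet fixed costs.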

Thanks… Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Justin Iurman
Sent: Monday, November 13, 2017 5:51 AM
To: vpp-dev 
Subject: [vpp-dev] vlib_validate_buffer_enqueue

Hey guys,

In buffer_node.h, there are the following macros:
- vlib_validate_buffer_enqueue_x1
- vlib_validate_buffer_enqueue_x2
- vlib_validate_buffer_enqueue_x4

In a node, I was just wondering what the idea behind that is. Is it for 
speed? I mean, you're obviously faster if you process 4 packets 
horizontally than one after the other. Why then, in the sample plugin, is the 
"x4" version not used? A "perfect" plugin would use each of them to cover each 
case, right? Also, why not have an "x8" (or more) version? I guess it's 
either a performance issue or a deliberate ceiling.

Thanks !

Justin
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] make test-all

2017-11-13 Thread Dave Barach (dbarach)
Try increasing the size of the shared-memory API segment. An allocation of 25mb 
is failing. You might ask yourself how sane it is to generate that much output. 

Thanks… Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Klement Sekera -X (ksekera - PANTHEON TECHNOLOGIES at Cisco)
Sent: Monday, November 13, 2017 5:27 AM
To: John Lo (loj) ; Pavel Kotucek -X (pkotucek - PANTHEON 
TECHNOLOGIES at Cisco) ; vpp-dev@lists.fd.io; Brian Brooks 

Subject: Re: [vpp-dev] make test-all

So it seems that vpp coredumps while dumping the API trace after
creating all the interfaces...

(gdb) bt
#0  0x7f14f4b1e428 in __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:54
#1  0x7f14f4b2002a in __GI_abort () at abort.c:89
#2  0x00405d83 in os_panic () at 
/home/ksekera/vpp/build-data/../src/vpp/vnet/main.c:268
#3  0x7f14f5fe5f86 in clib_mem_alloc_aligned_at_offset 
(os_out_of_memory_on_failure=1, align_offset=0, align=1, size=25282098)
at /home/ksekera/vpp/build-data/../src/vppinfra/mem.h:105
#4  clib_mem_alloc (size=25282098) at 
/home/ksekera/vpp/build-data/../src/vppinfra/mem.h:114
#5  vl_msg_api_alloc_internal (may_return_null=0, pool=, 
nbytes=25282098)
at /home/ksekera/vpp/build-data/../src/vlibmemory/memory_shared.c:176
#6  vl_msg_api_alloc (nbytes=nbytes@entry=25282082) at 
/home/ksekera/vpp/build-data/../src/vlibmemory/memory_shared.c:207
#7  0x00411392 in vl_api_cli_inband_t_handler (mp=0x300e2a0c) at 
/home/ksekera/vpp/build-data/../src/vpp/api/api.c:223
#8  0x7f14f5fdfa23 in vl_msg_api_handler_with_vm_node 
(am=am@entry=0x7f14f620d460 , the_msg=the_msg@entry=0x300e2a0c,
vm=vm@entry=0x7f14f5fd6260 , 
node=node@entry=0x7f14b410e000) at 
/home/ksekera/vpp/build-data/../src/vlibapi/api_shared.c:508
#9  0x7f14f5fef35f in memclnt_process (vm=0x7f14f5fd6260 
, node=0x7f14b410e000, f=)
at /home/ksekera/vpp/build-data/../src/vlibmemory/memory_vlib.c:970

(gdb) p input
$5 = {buffer = 0x7f14b56f6558 "dump 
/tmp/vpp-unittest-P2PEthernetAPI-qRwMY6/vpp_api_trace.test_p2p_subif_creation_10k.log\n",
  index = 18446744073709551615, buffer_marks = 0x7f14b592a240, fill_buffer = 
0x0, fill_buffer_arg = 0x0}

I'm pretty sure that the history of this mess was:

1.) the test was added first as enhanced
2.) automatic dump of api trace was added, but only tested against 'make test', 
not 'make test-all'

Thanks,
Klement

Quoting Klement Sekera (2017-11-11 22:12:52)
> Hi Brian,
> 
> it should. Though I just tried running it on latest master and got a
> timeout in test_p2p_ethernet, which shouldn't happen. I see the test was
> trying to create tens of thousands of interfaces... maybe something is
> slower than usual?
> 
> Thanks,
> Klement
> 
> Quoting Brian Brooks (2017-11-11 01:11:47)
> >Should “make test-all” pass?
> > 
> > 
> > 
> >Thanks,
> > 
> >Brian
> > 
> > 
> > 
> >IMPORTANT NOTICE: The contents of this email and any attachments are
> >confidential and may also be privileged. If you are not the intended
> >recipient, please notify the sender immediately and do not disclose the
> >contents to any other person, use it for any purpose, or store or copy 
> > the
> >information in any medium. Thank you.
> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] multi-core multi-threading performance

2017-11-08 Thread Dave Barach (dbarach)
Please write up what you’ve done, and provide a pointer to your code.

Thanks… Dave

From: Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
Sent: Wednesday, November 8, 2017 1:19 AM
To: Dave Barach (dbarach) 
Cc: vpp-dev@lists.fd.io; John Marshall (jwm) ; Neale Ranns 
(nranns) ; Minseok Kwon 
Subject: Re: multi-core multi-threading performance

Hi all,

Any help/ideas on how we can have a better performance using multi-cores is 
appreciated.

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu<mailto:pxv3...@rit.edu>
ph : 585 764 4662


On Mon, Nov 6, 2017 at 8:10 AM, Pragash Vijayaragavan 
mailto:pxv3...@g.rit.edu>> wrote:
Ok, now I provisioned 4 rx queues for 4 worker threads and yes, all workers
are processing traffic, but the lookup rate has dropped; I am getting fewer 
packets than when it was 2 workers.

I tried configuring 4 tx queues as well, still the same problem (fewer packets 
received compared to 2 workers).



Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu<mailto:pxv3...@rit.edu>
ph : 585 764 4662


On Mon, Nov 6, 2017 at 8:00 AM, Pragash Vijayaragavan 
mailto:pxv3...@g.rit.edu>> wrote:
Just 1, let me change it to 2 may be 3 and get back to you.

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu<mailto:pxv3...@rit.edu>
ph : 585 764 4662


On Mon, Nov 6, 2017 at 7:48 AM, Dave Barach (dbarach) 
mailto:dbar...@cisco.com>> wrote:
How many RX queues did you provision? One per worker, or no supper...

Thanks… Dave

From: Pragash Vijayaragavan [mailto:pxv3...@rit.edu<mailto:pxv3...@rit.edu>]
Sent: Monday, November 6, 2017 7:36 AM

To: Dave Barach (dbarach) mailto:dbar...@cisco.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>; John Marshall (jwm) 
mailto:j...@cisco.com>>; Neale Ranns (nranns) 
mailto:nra...@cisco.com>>; Minseok Kwon 
mailto:mxk...@rit.edu>>
Subject: Re: multi-core multi-threading performance

Hi Dave,

As per your suggestion i tried sending different traffic and i could notice 
that, 1 worker acts per port (hardware NIC)

Is it true that multiple workers cannot work on same port at the same time?





Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu<mailto:pxv3...@rit.edu>
ph : 585 764 4662


On Mon, Nov 6, 2017 at 7:13 AM, Pragash Vijayaragavan 
mailto:pxv3...@g.rit.edu>> wrote:
Thanks Dave,

let me try it out real quick and get back to you.

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu<mailto:pxv3...@rit.edu>
ph : 585 764 4662


On Mon, Nov 6, 2017 at 7:11 AM, Dave Barach (dbarach) 
mailto:dbar...@cisco.com>> wrote:
Incrementing / random src/dst addr/port

Thanks… Dave

From: Pragash Vijayaragavan [mailto:pxv3...@rit.edu<mailto:pxv3...@rit.edu>]
Sent: Monday, November 6, 2017 7:06 AM
To: Dave Barach (dbarach) mailto:dbar...@cisco.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>; John Marshall (jwm) 
mailto:j...@cisco.com>>; Neale Ranns (nranns) 
mailto:nra...@cisco.com>>; Minseok Kwon 
mailto:mxk...@rit.edu>>
Subject: Re: multi-core multi-threading performance

Hi Dave,

Thanks for the mail

a "show run" command shows dpdk-input process on 2 of the workers but the 
ip6-lookup process is running only on 1 worker.

What config should be done to make all threads process traffic.

This is for 4 workers and 1 main core.

Pasted output :


vpp# sh run
Thread 0 vpp_main (lcore 1)
Time 7.5, average vectors/node 0.00, last 128 main loops 0.00 per node 0.00
  vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
 Name State Calls  Vectors
Suspends Clocks   Vectors/Call
acl-plugin-fa-cleaner-process   any wait 0   0  
15  4.97e30.00
api-rx-from-ring active  0   0  
79  1.07e50.00
cdp-process any wait 0   0  
 3  2.65e30.00
dpdk-processany wait 0   0  
 2  6.77e70.00
fib-walkany wait 0   0  
  7474  6.74e20.00
gmon-processtime wait0   0  
 1  4.24e30.00
ikev2-manager-process   any wait 0   0  
 7  7.04e30.00
ip6-icmp-neighbor-discovery-ev  any wait 0   0  
 7  4.67e30.00
lisp-retry-service  any wait

Re: [vpp-dev] Simple setup, that does not work.

2017-11-07 Thread Dave Barach (dbarach)
Check host interface IP address, basic connectivity [cable on floor?], and so 
on.

Check “show hardware.” If the MIB stats indicate that packets are reaching the 
NIC MAC layer - but not VPP - see if /proc/cmdline contains “intel_iommu=on”. 
If it does, try removing that stanza and reboot. You can, in fact, run with the 
iommu enabled, but for a 101(a) simple test it’s not worth going there...

HTH… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of John Wei
Sent: Monday, November 6, 2017 11:00 PM
To: vpp-dev 
Subject: [vpp-dev] Simple setup, that does not work.



I followed one of the fd.io YouTube demos; it is very simple, but 
it does not work for me.



  *   Restart vpp
  *   vppctl set int state GigabitEthernet13/0/0 up
  *   vppctl set int ip address GigabitEthernet13/0/0 
192.168.50.166/24
  *   # vppctl show int addr

 *   GigabitEthernet13/0/0 (up):
 * 192.168.50.166/24
 *   GigabitEthernetb/0/0 (dn):
 *   local0 (dn):

  *   on host: ping 192.168.50.166 does not work (it just hangs)
What is missing?
I am running v17.10-release bits.

John

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] multi-core multi-threading performance

2017-11-06 Thread Dave Barach (dbarach)
How many RX queues did you provision? One per worker, or no supper...

Thanks… Dave

From: Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
Sent: Monday, November 6, 2017 7:36 AM
To: Dave Barach (dbarach) 
Cc: vpp-dev@lists.fd.io; John Marshall (jwm) ; Neale Ranns 
(nranns) ; Minseok Kwon 
Subject: Re: multi-core multi-threading performance

Hi Dave,

As per your suggestion i tried sending different traffic and i could notice 
that, 1 worker acts per port (hardware NIC)

Is it true that multiple workers cannot work on same port at the same time?





Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu<mailto:pxv3...@rit.edu>
ph : 585 764 4662


On Mon, Nov 6, 2017 at 7:13 AM, Pragash Vijayaragavan 
mailto:pxv3...@g.rit.edu>> wrote:
Thanks Dave,

let me try it out real quick and get back to you.

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu<mailto:pxv3...@rit.edu>
ph : 585 764 4662


On Mon, Nov 6, 2017 at 7:11 AM, Dave Barach (dbarach) 
mailto:dbar...@cisco.com>> wrote:
Incrementing / random src/dst addr/port

Thanks… Dave

From: Pragash Vijayaragavan [mailto:pxv3...@rit.edu<mailto:pxv3...@rit.edu>]
Sent: Monday, November 6, 2017 7:06 AM
To: Dave Barach (dbarach) mailto:dbar...@cisco.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>; John Marshall (jwm) 
mailto:j...@cisco.com>>; Neale Ranns (nranns) 
mailto:nra...@cisco.com>>; Minseok Kwon 
mailto:mxk...@rit.edu>>
Subject: Re: multi-core multi-threading performance

Hi Dave,

Thanks for the mail

a "show run" command shows dpdk-input process on 2 of the workers but the 
ip6-lookup process is running only on 1 worker.

What config should be done to make all threads process traffic.

This is for 4 workers and 1 main core.

Pasted output :


vpp# sh run
Thread 0 vpp_main (lcore 1)
Time 7.5, average vectors/node 0.00, last 128 main loops 0.00 per node 0.00
  vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
 Name State Calls  Vectors
Suspends Clocks   Vectors/Call
acl-plugin-fa-cleaner-process   any wait 0   0  
15  4.97e30.00
api-rx-from-ring active  0   0  
79  1.07e50.00
cdp-process any wait 0   0  
 3  2.65e30.00
dpdk-processany wait 0   0  
 2  6.77e70.00
fib-walkany wait 0   0  
  7474  6.74e20.00
gmon-processtime wait0   0  
 1  4.24e30.00
ikev2-manager-process   any wait 0   0  
 7  7.04e30.00
ip6-icmp-neighbor-discovery-ev  any wait 0   0  
 7  4.67e30.00
lisp-retry-service  any wait 0   0  
 3  7.21e30.00
unix-epoll-input polling  21655148   0  
 0  5.43e20.00
vpe-oam-process any wait 0   0  
 4  5.28e30.00
---
Thread 1 vpp_wk_0 (lcore 2)
Time 7.5, average vectors/node 255.99, last 128 main loops 14.00 per node 256.00
  vector rates in 4.1903e6, out 4.1903e6, drop 0.e0, punt 0.e0
 Name State Calls  Vectors
Suspends Clocks   Vectors/Call
FortyGigabitEthernet4/0/0-outp   active 12333431572992  
 0  6.58e0  255.99
FortyGigabitEthernet4/0/0-tx active 12333431572992  
 0  7.20e1  255.99
dpdk-input   polling12434731572992  
 0  5.49e1  253.91
ip6-inputactive 12333431572992  
 0  2.28e1  255.99
ip6-load-balance active 12333431572992  
 0  1.61e1  255.99
ip6-lookup   active 12333431572992  
 0  3.77e2  255.99
ip6-rewrite  active 12333431572992  
 0  2.02e1  255.99
---
Thread 2 vpp_wk_1 (lcore 3)
Time 7.5, average vectors/node 0.00, last 128 main loops 0.00 per node 0.00
  vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
 Name   

Re: [vpp-dev] multi-core multi-threading performance

2017-11-06 Thread Dave Barach (dbarach)
Incrementing / random src/dst addr/port

Thanks… Dave

From: Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
Sent: Monday, November 6, 2017 7:06 AM
To: Dave Barach (dbarach) 
Cc: vpp-dev@lists.fd.io; John Marshall (jwm) ; Neale Ranns 
(nranns) ; Minseok Kwon 
Subject: Re: multi-core multi-threading performance

Hi Dave,

Thanks for the mail

a "show run" command shows dpdk-input process on 2 of the workers but the 
ip6-lookup process is running only on 1 worker.

What config should be done to make all threads process traffic.

This is for 4 workers and 1 main core.

Pasted output :


vpp# sh run
Thread 0 vpp_main (lcore 1)
Time 7.5, average vectors/node 0.00, last 128 main loops 0.00 per node 0.00
  vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
 Name State Calls  Vectors
Suspends Clocks   Vectors/Call
acl-plugin-fa-cleaner-process   any wait 0   0  
15  4.97e30.00
api-rx-from-ring active  0   0  
79  1.07e50.00
cdp-process any wait 0   0  
 3  2.65e30.00
dpdk-processany wait 0   0  
 2  6.77e70.00
fib-walkany wait 0   0  
  7474  6.74e20.00
gmon-processtime wait0   0  
 1  4.24e30.00
ikev2-manager-process   any wait 0   0  
 7  7.04e30.00
ip6-icmp-neighbor-discovery-ev  any wait 0   0  
 7  4.67e30.00
lisp-retry-service  any wait 0   0  
 3  7.21e30.00
unix-epoll-input polling  21655148   0  
 0  5.43e20.00
vpe-oam-process any wait 0   0  
 4  5.28e30.00
---
Thread 1 vpp_wk_0 (lcore 2)
Time 7.5, average vectors/node 255.99, last 128 main loops 14.00 per node 256.00
  vector rates in 4.1903e6, out 4.1903e6, drop 0.e0, punt 0.e0
 Name State Calls  Vectors
Suspends Clocks   Vectors/Call
FortyGigabitEthernet4/0/0-outp   active 12333431572992  
 0  6.58e0  255.99
FortyGigabitEthernet4/0/0-tx active 12333431572992  
 0  7.20e1  255.99
dpdk-input   polling12434731572992  
 0  5.49e1  253.91
ip6-inputactive 12333431572992  
 0  2.28e1  255.99
ip6-load-balance active 12333431572992  
 0  1.61e1  255.99
ip6-lookup   active 12333431572992  
 0  3.77e2  255.99
ip6-rewrite  active 12333431572992  
 0  2.02e1  255.99
---
Thread 2 vpp_wk_1 (lcore 3)
Time 7.5, average vectors/node 0.00, last 128 main loops 0.00 per node 0.00
  vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
 Name State Calls  Vectors
Suspends Clocks   Vectors/Call
dpdk-input   polling  83188682   0  
 0  1.11e20.00
---
Thread 3 vpp_wk_2 (lcore 18)
Time 7.5, average vectors/node 0.00, last 128 main loops 0.00 per node 0.00
  vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
 Name State Calls  Vectors
Suspends Clocks   Vectors/Call
---
Thread 4 vpp_wk_3 (lcore 19)
Time 7.5, average vectors/node 0.00, last 128 main loops 0.00 per node 0.00
  vector rates in 0.e0, out 0.e0, drop 0.e0, punt 0.e0
 Name State Calls  Vectors
Suspends Clocks   Vectors/Call


Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu<mailto:pxv3...@rit.edu>
ph : 585 764 4662


On Mon, Nov 6, 2017 at 6:47 AM, Dave Barach (dbarach) 
mailto:dbar...@cisco.com>> wrote:
Have you verified that all of the worker threads are processing traffic? 
Sufficiently poor RSS statistics could mean - in the limit - that only one 
worker thread is processing traffic.

Thanks… Dave

From:

Re: [vpp-dev] multi-core multi-threading performance

2017-11-06 Thread Dave Barach (dbarach)
Have you verified that all of the worker threads are processing traffic? 
Sufficiently poor RSS statistics could mean - in the limit - that only one 
worker thread is processing traffic.

Thanks… Dave

From: Pragash Vijayaragavan [mailto:pxv3...@rit.edu]
Sent: Sunday, November 5, 2017 10:03 PM
To: vpp-dev@lists.fd.io
Cc: John Marshall (jwm) ; Neale Ranns (nranns) 
; Dave Barach (dbarach) ; Minseok Kwon 

Subject: multi-core multi-threading performance

Hi ,

We are measuring performance of ip6 lookup in multi-core multi-worker 
environments and
we don't see good scaling of performance when we keep increasing the number of 
cores/workers.

We are just changing the startup.conf file to create more workers, rx-queues, 
sock-mem, etc. Should we do anything else to see an increase in performance?

Is there a limit on the performance even if we increase the number of 
workers?

Is it dependent on the number of hardware NICs we have? We only have 1 NIC to 
receive the traffic.


TIA,

Thanks,

Pragash Vijayaragavan
Grad Student at Rochester Institute of Technology
email : pxv3...@rit.edu<mailto:pxv3...@rit.edu>
ph : 585 764 4662

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP default graph

2017-10-31 Thread Dave Barach (dbarach)
Dear Mostafa,

First, “show vlib graph” describes the entire graph in detail.

Vpp uses ingress flow-hashing (e.g. hardware RSS hashing) across a set of 
threads running identical graph replicas to achieve multi-core scaling.

Historical experiments with pipelining in vpp dissuaded me from pursuing that 
processing model: the entire pipeline runs at the speed of the slowest stage. 
More to the point: if the offered workload changes, one needs to reconfigure 
the pipeline to achieve decent performance.

In vpp, you can spin up arbitrary threads and process packets however you like, 
of course.

It would help if you’d describe your application in detail, otherwise we won’t 
be able to make detailed suggestions.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Mostafa Salari
Sent: Tuesday, October 31, 2017 8:06 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP default graph

Hi, I have 3 issues:
1. I want to know what the default structure of the graph nodes is when VPP is 
running.
2. In the dpdk ip_pipeline application, I was able to determine how many instances 
were created and the lcore that each instance must run on. In this way, 
I was able to make custom optimizations and build a fast packet processing 
pipeline for my particular goal. What is the way to do this in VPP?
3. In order to change the default arrangement, what should I do?

Best regards,
Mostafa
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] gerrit 8872 centos validation failure (stable/1710)

2017-10-18 Thread Dave Barach (dbarach)
There's a zero percent chance this failure has anything to do with the patch in 
question.

I'll press the "recheck" button once again.

14:09:27   CC   vnet/ip/ip6_neighbor.lo
14:09:28   CC   vnet/ip/ip6_pg.lo
14:09:28   CC   vnet/ip/ip_api.lo
14:09:28   CC   vnet/ip/ip_checksum.lo
14:09:30 ./libtool: fork: Cannot allocate memory
14:09:30 Makefile:6169: recipe for target 'vnet/ip/ip4_source_check.lo' failed
14:09:30 make[5]: *** [vnet/ip/ip4_source_check.lo] Error 254
14:09:30 make[5]: *** Waiting for unfinished jobs
14:09:30 ./libtool: fork: Cannot allocate memory
14:09:30 ./libtool: line 1: wait_for: No record of process 19127
14:09:30 bash: ../sysdeps/nptl/fork.c:156: __libc_fork: Assertion 
`THREAD_GETMEM (self, tid) != ppid' failed.
14:09:30 gcc: internal compiler error: Segmentation fault (program cc1)
14:09:30 Please submit a full bug report,
14:09:30 with preprocessed source if appropriate.
14:09:30 See  for instructions.
14:09:30 ./libtool: fork: Cannot allocate memory
14:09:30 bash: ../sysdeps/nptl/fork.c:156: __libc_fork: Assertion 
`THREAD_GETMEM (self, tid) != ppid' failed.
14:09:30 Makefile:6169: recipe for target 'vnet/policer/policer.lo' failed
14:09:30 make[5]: *** [vnet/policer/policer.lo] Error 1
14:09:30 Build step 'Execute shell' marked build as failure
14:09:30 $ ssh-agent -k
14:09:30 FATAL: Cannot run program "ssh-agent": error=12, Cannot allocate memory

Thanks... Dave

From: Dave Barach (dbarach)
Sent: Wednesday, October 18, 2017 9:57 AM
To: csit-...@lists.fd.io; Florin Coras (fcoras) 
Cc: vpp-dev@lists.fd.io
Subject: gerrit 8872 centos validation failure (stable/1710)

Please see https://gerrit.fd.io/r/#/c/8872 and 
https://jenkins.fd.io/job/vpp-verify-1710-centos7/53. I've already pressed the 
"recheck" button. The validation failure appears unrelated to the patch.

Thanks... Dave

12:50:49 Wrote: 
/w/workspace/vpp-verify-1710-centos7/dpdk/rpm/RPMS/x86_64/vpp-dpdk-devel-17.08-vpp1.x86_64.rpm
12:50:49 Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.cojge5
12:50:49 + umask 022
12:50:49 + cd /w/workspace/vpp-verify-1710-centos7/dpdk/rpm/BUILD
12:50:49 + /usr/bin/rm -rf 
/w/workspace/vpp-verify-1710-centos7/dpdk/rpm/BUILDROOT/vpp-dpdk-17.08-vpp1.x86_64
12:50:49 + exit 0
12:50:49 mv rpm/RPMS/x86_64/*.rpm .
12:50:49 git clean -fdx rpm
12:50:49 Removing rpm/BUILD/
12:50:49 Removing rpm/BUILDROOT/
12:50:49 Removing rpm/RPMS/
12:50:49 Removing rpm/SOURCES/
12:50:49 Removing rpm/SPECS/
12:50:49 Removing rpm/SRPMS/
12:50:49 Removing rpm/tmp/
12:50:49 make[2]: Leaving directory `/w/workspace/vpp-verify-1710-centos7/dpdk'
12:50:49 sudo rpm -Uih vpp-dpdk-devel-17.08-vpp1.x86_64.rpm
12:50:49 
12:50:49   package vpp-dpdk-devel-17.08-vpp2.x86_64 (which is newer than 
vpp-dpdk-devel-17.08-vpp1.x86_64) is already installed
12:50:49 make[1]: *** [install-rpm] Error 2
12:50:49 make[1]: Leaving directory `/w/workspace/vpp-verify-1710-centos7/dpdk'
12:50:49 make: *** [dpdk-install-dev] Error 2
12:50:49 Build step 'Execute shell' marked build as failure
12:50:49 $ ssh-agent -k
12:50:49 unset SSH_AUTH_SOCK;
12:50:49 unset SSH_AGENT_PID;
12:50:49 echo Agent pid 9677 killed;
12:50:50 [ssh-agent] Stopped.
12:50:50 Skipped archiving because build is not successful
12:50:50 [PostBuildScript] - Execution post build scripts.

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP routing API dump

2017-10-18 Thread Dave Barach (dbarach)
Confirmed. Patch on the way... Thanks... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Dave Barach (dbarach)
Sent: Wednesday, October 18, 2017 10:36 AM
To: Samuel Elias -X (samelias - PANTHEON TECHNOLOGIES at Cisco) 
; Neale Ranns (nranns) 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] VPP routing API dump

Please don't assume that people scrub through jira tickets.

If you'd sent email to vpp-dev, this would have been fixed ages ago. Without 
even looking at the code, I'll bet it's a single missing ntohl(...):

(gdb) p/x 33554432
$1 = 0x2000000

Thanks... Dave

From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Samuel Elias -X (samelias - 
PANTHEON TECHNOLOGIES at Cisco)
Sent: Wednesday, October 18, 2017 10:24 AM
To: Neale Ranns (nranns) mailto:nra...@cisco.com>>
Cc: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: [vpp-dev] VPP routing API dump


Hello,



Could you please take a look at this bug with routing API:

https://jira.fd.io/browse/VPP-930



It is a minor issue, but it's been breaking Honeycomb's CRUD tests for a while 
now.



tia,

- Sam


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP routing API dump

2017-10-18 Thread Dave Barach (dbarach)
Please don't assume that people scrub through jira tickets.

If you'd sent email to vpp-dev, this would have been fixed ages ago. Without 
even looking at the code, I'll bet it's a single missing ntohl(...):

(gdb) p/x 33554432
$1 = 0x2000000
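
(For the record, the kind of one-line fix implied here, with an illustrative 
message-field name; API fields arrive in network byte order, so a table id of 2 
reads back as 0x2000000 until it is byte-swapped:)

  u32 table_id = ntohl (mp->table_id);  /* mp->table_id is illustrative */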

Thanks... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Samuel Elias -X (samelias - PANTHEON TECHNOLOGIES at Cisco)
Sent: Wednesday, October 18, 2017 10:24 AM
To: Neale Ranns (nranns) 
Cc: vpp-dev@lists.fd.io
Subject: [vpp-dev] VPP routing API dump


Hello,



Could you please take a look at this bug with routing API:

https://jira.fd.io/browse/VPP-930



It is a minor issue, but it's been breaking Honeycomb's CRUD tests for a while 
now.



tia,

- Sam


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] vnet_buffer(b)-> sw_if_index[VLIB_TX] => fib index in ip[46]_lookup

2017-10-18 Thread Dave Barach (dbarach)
Dear John,

I must've done a seriously defective [read: typo-ridden] search.

Scratch that idea... Thanks for your help...

Dave

From: John Lo (loj)
Sent: Wednesday, October 18, 2017 10:15 AM
To: Dave Barach (dbarach) ; vpp-dev@lists.fd.io
Subject: RE: vnet_buffer(b)-> sw_if_index[VLIB_TX] => fib index in ip[46]_lookup

Hi Dave,

I found quite a few places using this mechanism:


  *   In the input ACL support for PBR, there is an action to switch FIB table 
and the code uses this mechanism to specify which ones to use:
*** src/vnet/ip/ip_input_acl.c:
ip_inacl_inline[290]   vnet_buffer (b0)->sw_if_index[VLIB_TX] = 
e0->metadata;
ip_inacl_inline[348]   vnet_buffer (b0)->sw_if_index[VLIB_TX] =


  *   Ping support:
*** src/vnet/ip/ping.c:
send_ip6_ping[283] vnet_buffer (p0)->sw_if_index[VLIB_TX] = 
fib_index;
send_ip6_ping[291] vnet_buffer (p0)->sw_if_index[VLIB_TX] =
send_ip4_ping[410] vnet_buffer (p0)->sw_if_index[VLIB_TX] = 
fib_index;
send_ip4_ping[418] vnet_buffer (p0)->sw_if_index[VLIB_TX] =


  *   DHCP proxy:
*** src/vnet/dhcp/dhcp4_proxy_node.c:
dhcp_proxy_to_server_input[209] vnet_buffer(b0)->sw_if_index[VLIB_TX] =
dhcp_proxy_to_client_input[644] vnet_buffer (b0)->sw_if_index[VLIB_TX] = 
sw_if_index;

*** src/vnet/dhcp/dhcp6_proxy_node.c:
dhcpv6_proxy_to_server_input[241] vnet_buffer(b0)->sw_if_index[VLIB_TX] = 
server_fib_idx;
dhcpv6_proxy_to_client_input[683] vnet_buffer (b0)->sw_if_index[VLIB_TX] = 
original_sw_if_index


  *   ICMP6
*** src/vnet/ip/icmp6.c:
ip6_icmp_echo_request[356] vnet_buffer (p0)->sw_if_index[VLIB_TX] =
ip6_icmp_echo_request[365] vnet_buffer (p0)->sw_if_index[VLIB_TX] = 
fib_index0;
ip6_icmp_echo_request[380] vnet_buffer (p1)->sw_if_index[VLIB_TX] =
ip6_icmp_echo_request[389] vnet_buffer (p1)->sw_if_index[VLIB_TX] = 
fib_index1;
ip6_icmp_echo_request[456] vnet_buffer (p0)->sw_if_index[VLIB_TX] =
ip6_icmp_echo_request[464] vnet_buffer (p0)->sw_if_index[VLIB_TX] = 
fib_index0;


  *   VXLAN-GPE
*** src/vnet/vxlan-gpe/decap.c:
vxlan_gpe_input[332]   vnet_buffer(b0)->sw_if_index[VLIB_TX] = 
t0->decap_fib_index;
vxlan_gpe_input[415]   vnet_buffer(b1)->sw_if_index[VLIB_TX] = 
t1->decap_fib_index;
vxlan_gpe_input[435]   vnet_buffer(b1)->sw_if_index[VLIB_TX] = 
t1->decap_fib_index;
vxlan_gpe_input[576]   vnet_buffer(b0)->sw_if_index[VLIB_TX] = 
t0->decap_fib_index;

Is there another way to specify FIB table index to use if this mechanism is 
removed?
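
For reference, all of these call sites rely on roughly the pattern sketched below 
(a minimal illustration only; the source of the fib index and the next-node index 
are placeholders):

  /* Stash the fib index in the TX metadata so that the subsequent
   * ip4-lookup / ip6-lookup uses that table instead of the one bound
   * to the RX interface. */
  vnet_buffer (b0)->sw_if_index[VLIB_TX] = fib_index0;
  next0 = MY_NEXT_IP4_LOOKUP;   /* hypothetical next index */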

Regards,
John

From: vpp-dev-boun...@lists.fd.io<mailto:vpp-dev-boun...@lists.fd.io> 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Dave Barach (dbarach)
Sent: Wednesday, October 18, 2017 8:51 AM
To: vpp-dev@lists.fd.io<mailto:vpp-dev@lists.fd.io>
Subject: [vpp-dev] vnet_buffer(b)-> sw_if_index[VLIB_TX] => fib index in 
ip[46]_lookup

Folks,

Is anyone is actually using the "vnet_buffer(b)->sw_if_index[VLIB_TX] => 
[fib_index | ~0]" method to select the lookup fib index in ip[46]_lookup?

If not, I would like to remove the corresponding code...

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] gerrit 8872 centos validation failure (stable/1710)

2017-10-18 Thread Dave Barach (dbarach)
Please see https://gerrit.fd.io/r/#/c/8872 and 
https://jenkins.fd.io/job/vpp-verify-1710-centos7/53. I've already pressed the 
"recheck" button. The validation failure appears unrelated to the patch.

Thanks... Dave

12:50:49 Wrote: 
/w/workspace/vpp-verify-1710-centos7/dpdk/rpm/RPMS/x86_64/vpp-dpdk-devel-17.08-vpp1.x86_64.rpm
12:50:49 Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.cojge5
12:50:49 + umask 022
12:50:49 + cd /w/workspace/vpp-verify-1710-centos7/dpdk/rpm/BUILD
12:50:49 + /usr/bin/rm -rf 
/w/workspace/vpp-verify-1710-centos7/dpdk/rpm/BUILDROOT/vpp-dpdk-17.08-vpp1.x86_64
12:50:49 + exit 0
12:50:49 mv rpm/RPMS/x86_64/*.rpm .
12:50:49 git clean -fdx rpm
12:50:49 Removing rpm/BUILD/
12:50:49 Removing rpm/BUILDROOT/
12:50:49 Removing rpm/RPMS/
12:50:49 Removing rpm/SOURCES/
12:50:49 Removing rpm/SPECS/
12:50:49 Removing rpm/SRPMS/
12:50:49 Removing rpm/tmp/
12:50:49 make[2]: Leaving directory `/w/workspace/vpp-verify-1710-centos7/dpdk'
12:50:49 sudo rpm -Uih vpp-dpdk-devel-17.08-vpp1.x86_64.rpm
12:50:49 
12:50:49   package vpp-dpdk-devel-17.08-vpp2.x86_64 (which is newer than 
vpp-dpdk-devel-17.08-vpp1.x86_64) is already installed
12:50:49 make[1]: *** [install-rpm] Error 2
12:50:49 make[1]: Leaving directory `/w/workspace/vpp-verify-1710-centos7/dpdk'
12:50:49 make: *** [dpdk-install-dev] Error 2
12:50:49 Build step 'Execute shell' marked build as failure
12:50:49 $ ssh-agent -k
12:50:49 unset SSH_AUTH_SOCK;
12:50:49 unset SSH_AGENT_PID;
12:50:49 echo Agent pid 9677 killed;
12:50:50 [ssh-agent] Stopped.
12:50:50 Skipped archiving because build is not successful
12:50:50 [PostBuildScript] - Execution post build scripts.

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] vnet_buffer(b)-> sw_if_index[VLIB_TX] => fib index in ip[46]_lookup

2017-10-18 Thread Dave Barach (dbarach)
Folks,

Is anyone is actually using the "vnet_buffer(b)->sw_if_index[VLIB_TX] => 
[fib_index | ~0]" method to select the lookup fib index in ip[46]_lookup?

If not, I would like to remove the corresponding code...

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] not_last parameter of ip_add_del_route from ip.api

2017-10-18 Thread Dave Barach (dbarach)
Adding Neale for further comment, but I believe it's a FIB 1.0 historical 
artifact which has no obvious reason to exist at this point.

Thanks... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Marek Gradzki -X (mgradzki - PANTHEON TECHNOLOGIES at Cisco)
Sent: Wednesday, October 18, 2017 5:59 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] not_last parameter of ip_add_del_route from ip.api

Hi,

While working on adding MPLS support to HC,
I noticed that the 'not_last' param of ip_add_del_route
is ignored by the message handler in ip_api.c:
https://gerrit.fd.io/r/#/c/8826/

Could it be removed, or have I missed something?

Regards,
Marek

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [FD.io Helpdesk #47101] No joy: ping6 gerrit.fd.io

2017-10-17 Thread Dave Barach (dbarach)
Ack... Thanks… Dave

-Original Message-
From: Anton Baranov via RT [mailto:fdio-helpd...@rt.linuxfoundation.org] 
Sent: Tuesday, October 17, 2017 11:57 AM
To: Dave Barach (dbarach) 
Cc: vpp-dev@lists.fd.io
Subject: [FD.io Helpdesk #47101] No joy: ping6 gerrit.fd.io

We're working with our cloud provider to fix the issue.


On Tue Oct 17 10:39:05 2017, abaranov wrote:
> Thishan:
> 
> I'm checking this right now
> 
> Regards,


-- 
Anton Baranov
Systems and Network Administrator
The Linux Foundation
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] vpp deadlock - syslog in signal handler

2017-10-17 Thread Dave Barach (dbarach)
In almost all cases, the glibc malloc heap will not be pickled since it's not 
used on a regular basis.

For some effort, one could replace the syslog library code, I guess.
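
A minimal sketch of what such a replacement could look like in the fatal-signal 
path, assuming we are content to lose the syslog transport and only call 
async-signal-safe functions (function and variable names are illustrative; needs 
<unistd.h> for write()):

static void
fatal_signal_log (const char *msg)
{
  /* write(2) is async-signal-safe; avoid library calls that may allocate. */
  const char *p = msg;
  while (*p)
    p++;                        /* compute the length by hand */
  if (write (2, msg, (size_t) (p - msg)) < 0)
    ;                           /* nothing sensible to do on failure here */
}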

Thanks... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Gabriel Ganne
Sent: Tuesday, October 17, 2017 4:18 AM
To: vpp-dev 
Subject: [vpp-dev] vpp deadlock - syslog in signal handler


Hi,



I have encountered a deadlock in vpp when a memory allocation exception is raised.

The signal is caught by unix_signal_handler(), which determines this is a fatal 
error and then syslogs the error message.
The problem is that syslog() then tries to allocate scratchpad memory, and 
deadlocks, since allocation is the reason why I'm here in the first place.



clib_warning() functions should be safe because all the memory needed is 
alloc'ed at init, but I don't see how this syslog() call can succeed.

Should I just remove it?

Or is there a way I don't know about to still make this work?



Below is a backtrace of the problem:
#0  0xa42e2c0c in __lll_lock_wait_private 
(futex=futex@entry=0xa43869a0 ) at ./lowlevellock.c:33
#1  0xa426b6e8 in __GI___libc_malloc (bytes=bytes@entry=584) at 
malloc.c:2888
#2  0xa425ace8 in __GI___open_memstream (bufloc=0x655b4670, 
bufloc@entry=0x655b46d0, sizeloc=0x655b4678, 
sizeloc@entry=0x655b46d8) at memstream.c:76
#3  0xa42cef18 in __GI___vsyslog_chk (ap=..., fmt=0xa4be2990 "%s", 
flag=-1, pri=27) at ../misc/syslog.c:167
#4  __syslog (pri=pri@entry=27, fmt=fmt@entry=0xa4be2990 "%s") at 
../misc/syslog.c:117
#5  0xa4bd7ab4 in unix_signal_handler (signum=, 
si=, uc=) at 
/home/gannega/vpp/build-data/../src/vlib/unix/main.c:119
#6  
#7  0xa42654e0 in malloc_consolidate (av=av@entry=0xa43869a0 
) at malloc.c:4182
#8  0xa4269354 in malloc_consolidate (av=0xa43869a0 ) 
at malloc.c:4151
#9  _int_malloc (av=av@entry=0xa43869a0 , 
bytes=bytes@entry=32816) at malloc.c:3450
#10 0xa426b5b4 in __GI___libc_malloc (bytes=bytes@entry=32816) at 
malloc.c:2890
#11 0xa4299000 in __alloc_dir (statp=0x655b5d48, flags=0, 
close_fd=true, fd=5) at ../sysdeps/posix/opendir.c:247
#12 opendir_tail (fd=) at ../sysdeps/posix/opendir.c:145
#13 __opendir (name=name@entry=0xa4bdf258 "/sys/bus/pci/devices") at 
../sysdeps/posix/opendir.c:200
#14 0xa4bde088 in foreach_directory_file 
(dir_name=dir_name@entry=0xa4bdf258 "/sys/bus/pci/devices", 
f=f@entry=0xa4baf4a8 , arg=arg@entry=0xa4c0af30 
,
scan_dirs=scan_dirs@entry=0) at 
/home/gannega/vpp/build-data/../src/vlib/unix/util.c:59
#15 0xa4baed64 in linux_pci_init (vm=0xa4c0af30 ) 
at /home/gannega/vpp/build-data/../src/vlib/linux/pci.c:648
#16 0xa4bae504 in vlib_call_init_exit_functions (vm=0xa4c0af30 
, head=, call_once=call_once@entry=1) at 
/home/gannega/vpp/build-data/../src/vlib/init.c:57
#17 0xa4bae548 in vlib_call_all_init_functions (vm=) at 
/home/gannega/vpp/build-data/../src/vlib/init.c:75
#18 0xa4bb3838 in vlib_main (vm=, 
vm@entry=0xa4c0af30 , input=input@entry=0x655b5fc8) 
at /home/gannega/vpp/build-data/../src/vlib/main.c:1748
#19 0xa4bd7c0c in thread0 (arg=281473445834544) at 
/home/gannega/vpp/build-data/../src/vlib/unix/main.c:567
#20 0xa44f3e38 in clib_calljmp () at 
/home/gannega/vpp/build-data/../src/vppinfra/longjmp.S:676


Best regards,

--

Gabriel Ganne
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] No joy: ping6 gerrit.fd.io

2017-10-16 Thread Dave Barach (dbarach)
It looks like gerrit.fd.io has dropped off the ipv6 radar screen. Appears not 
to be a DNS problem or other problem on my end:

$ ping6 gerrit.fd.io
PING gerrit.fd.io(2604:e100:1:0:f816:3eff:fe7e:8731) 56 data bytes
^C
--- gerrit.fd.io ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3022ms

$ ping6 www.google.com
PING www.google.com(iad30s07-in-x04.1e100.net) 56 data bytes
64 bytes from iad30s07-in-x04.1e100.net: icmp_seq=1 ttl=49 time=33.4 ms
64 bytes from iad30s07-in-x04.1e100.net: icmp_seq=2 ttl=49 time=30.4 ms
^C
--- www.google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 30.413/31.943/33.473/1.530 ms

Please investigate AYEC.

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] jvpp core future test failure (gerrit 8743)

2017-10-13 Thread Dave Barach (dbarach)
I think this should about cover the situation... (😉)... HTH... Dave

void
vl_msg_api_config (vl_msg_api_msg_config_t * c)
{
  api_main_t *am = &api_main;

  /*
   * This happens during the java core tests if the message
   * dictionary is missing newly added xxx_reply_t messages.
   * Should never happen, but since I shot myself in the foot once
   * this way, I thought I'd make it easy to debug if I ever do
   * it again... (;-)...
   */
  if (c->id == 0)
{
  if (c->name)
 clib_warning ("Trying to register %s with a NULL msg id!", c->name);
  else
 clib_warning ("Trying to register a NULL msg with a NULL msg id!");
  clib_warning ("Did you forget to call setup_message_id_table?");
  return;
}


Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Dave Barach (dbarach)
Sent: Friday, October 13, 2017 1:32 PM
To: Ole Troan (otroan) ; Ole Troan 
Cc: vpp-dev@lists.fd.io
Subject: [vpp-dev] jvpp core future test failure (gerrit 8743)

Dear Ole,

See https://gerrit.fd.io/r/#/c/8743. It turns out that the java core future 
“make test” test fails as shown below.

The patch adds three xxx_reply_t binary api messages. See 
.../src/vnet/dns/dns.api.

It sure looks like the Java code knows about them, but isn’t doing a very good 
job of registering them. Note that I had to modify the binary API client 
library to keep Java from ASSERTing due to the NULL msg id’s squawked below.

What’s going on here? These messages work like a champ in vpp_api_test...

INFO: Testing Java future API for core plugin
[New Thread 0x7fffd5f9c700 (LWP 4611)]
vl_msg_api_config:671: Trying to register dns_enable_disable_reply with a NULL 
msg id!
vl_msg_api_config:671: Trying to register dns_name_server_add_del_reply with a 
NULL msg id!
vl_msg_api_config:671: Trying to register dns_resolve_name_reply with a NULL 
msg id!
[Thread 0x7fffd5f9c700 (LWP 4611) exited]
Exception in thread "main" java.lang.IllegalStateException: API mismatch 
detected: dns_resolve_name_reply_451ab6c0 is missing
 at io.fd.vpp.jvpp.core.JVppCoreImpl.init0(Native Method)
 at io.fd.vpp.jvpp.core.JVppCoreImpl.init(JVppCoreImpl.java:75)
 at io.fd.vpp.jvpp.JVppRegistryImpl.register(JVppRegistryImpl.java:72)
 at 
io.fd.vpp.jvpp.core.future.FutureJVppCoreFacade.(FutureJVppCoreFacade.java:25)
 at 
io.fd.vpp.jvpp.core.test.FutureApiTest.testFutureApi(FutureApiTest.java:50)
 at io.fd.vpp.jvpp.core.test.FutureApiTest.main(FutureApiTest.java:44)
[New Thread 0x7fffd54af700 (LWP 4612)]

Thanks… Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] jvpp core future test failure (gerrit 8743)

2017-10-13 Thread Dave Barach (dbarach)
Yes, I did. I just worked it out myself... Thanks… Dave

From: Ole Troan (otroan)
Sent: Friday, October 13, 2017 1:48 PM
To: Ole Troan (otroan) 
Cc: Dave Barach (dbarach) ; Ole Troan 
; vpp-dev@lists.fd.io
Subject: Re: jvpp core future test failure (gerrit 8743)

s/map/dns


if you were to cut and paste.




#define vl_msg_name_crc_list
#include 
#undef vl_msg_name_crc_list

static void
setup_message_id_table (api_main_t * am)
{
#define _(id,n,crc) vl_msg_api_add_msg_name_crc (am, #n "_" #crc, id);
  foreach_vl_msg_name_crc_dns;
#undef _
}


Ole


On 13 Oct 2017, at 19:46, Ole Troan mailto:otr...@cisco.com>> 
wrote:

Dear Dave,

I wonder if you forgot to hookup the messages in the CRC dictionary?

#define vl_msg_name_crc_list
#include 
#undef vl_msg_name_crc_list

static void
setup_message_id_table (api_main_t * am)
{
#define _(id,n,crc) vl_msg_api_add_msg_name_crc (am, #n "_" #crc, id);
  foreach_vl_msg_name_crc_map;
#undef _
}


If my guess is correct, I’ll have a chat with the Java guys to see if we can come up 
with a slightly more user-friendly error message. ;-)


Best regards,
Ole



On 13 Oct 2017, at 19:31, Dave Barach (dbarach) 
mailto:dbar...@cisco.com>> wrote:

Dear Ole,

See https://gerrit.fd.io/r/#/c/8743. It turns out that the java core future 
“make test” test fails as shown below.

The patch adds three xxx_reply_t binary api messages. See 
.../src/vnet/dns/dns.api.

It sure looks like the Java code knows about them, but isn’t doing a very good 
job of registering them. Note that I had to modify the binary API client 
library to keep Java from ASSERTing due to the NULL msg id’s squawked below.

What’s going on here? These messages work like a champ in vpp_api_test...

INFO: Testing Java future API for core plugin
[New Thread 0x7fffd5f9c700 (LWP 4611)]
vl_msg_api_config:671: Trying to register dns_enable_disable_reply with a NULL 
msg id!
vl_msg_api_config:671: Trying to register dns_name_server_add_del_reply with a 
NULL msg id!
vl_msg_api_config:671: Trying to register dns_resolve_name_reply with a NULL 
msg id!
[Thread 0x7fffd5f9c700 (LWP 4611) exited]
Exception in thread "main" java.lang.IllegalStateException: API mismatch 
detected: dns_resolve_name_reply_451ab6c0 is missing
 at io.fd.vpp.jvpp.core.JVppCoreImpl.init0(Native Method)
 at io.fd.vpp.jvpp.core.JVppCoreImpl.init(JVppCoreImpl.java:75)
 at io.fd.vpp.jvpp.JVppRegistryImpl.register(JVppRegistryImpl.java:72)
 at 
io.fd.vpp.jvpp.core.future.FutureJVppCoreFacade.(FutureJVppCoreFacade.java:25)
 at 
io.fd.vpp.jvpp.core.test.FutureApiTest.testFutureApi(FutureApiTest.java:50)
 at io.fd.vpp.jvpp.core.test.FutureApiTest.main(FutureApiTest.java:44)
[New Thread 0x7fffd54af700 (LWP 4612)]

Thanks… Dave


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] jvpp core future test failure (gerrit 8743)

2017-10-13 Thread Dave Barach (dbarach)
Dear Ole,

See https://gerrit.fd.io/r/#/c/8743. It turns out that the java core future 
"make test" test fails as shown below.

The patch adds three xxx_reply_t binary api messages. See 
.../src/vnet/dns/dns.api.

It sure looks like the Java code knows about them, but isn't doing a very good 
job of registering them. Note that I had to modify the binary API client 
library to keep Java from ASSERTing due to the NULL msg id's squawked below.

What's going on here? These messages work like a champ in vpp_api_test...

INFO: Testing Java future API for core plugin
[New Thread 0x7fffd5f9c700 (LWP 4611)]
vl_msg_api_config:671: Trying to register dns_enable_disable_reply with a NULL 
msg id!
vl_msg_api_config:671: Trying to register dns_name_server_add_del_reply with a 
NULL msg id!
vl_msg_api_config:671: Trying to register dns_resolve_name_reply with a NULL 
msg id!
[Thread 0x7fffd5f9c700 (LWP 4611) exited]
Exception in thread "main" java.lang.IllegalStateException: API mismatch 
detected: dns_resolve_name_reply_451ab6c0 is missing
 at io.fd.vpp.jvpp.core.JVppCoreImpl.init0(Native Method)
 at io.fd.vpp.jvpp.core.JVppCoreImpl.init(JVppCoreImpl.java:75)
 at io.fd.vpp.jvpp.JVppRegistryImpl.register(JVppRegistryImpl.java:72)
 at 
io.fd.vpp.jvpp.core.future.FutureJVppCoreFacade.(FutureJVppCoreFacade.java:25)
 at 
io.fd.vpp.jvpp.core.test.FutureApiTest.testFutureApi(FutureApiTest.java:50)
 at io.fd.vpp.jvpp.core.test.FutureApiTest.main(FutureApiTest.java:44)
[New Thread 0x7fffd54af700 (LWP 4612)]

Thanks... Dave

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [v17.07.01]: vec_add2() causing crash on ARMv8

2017-09-29 Thread Dave Barach (dbarach)
As a quick hack: try moving "u32 interrupt_pending;" to the start of the 
structure...

Thanks… Dave

-Original Message-
From: Brian Brooks [mailto:brian.bro...@arm.com] 
Sent: Friday, September 22, 2017 12:33 PM
To: Dave Barach (dbarach) 
Cc: Saxena, Nitin ; vpp-dev@lists.fd.io; Damjan Marion 
(damarion) ; Narayana, Prasad Athreya 

Subject: Re: [vpp-dev] [v17.07.01]: vec_add2() causing crash on ARMv8

On 09/28 11:57:36, Dave Barach (dbarach) wrote:
> Dear Nitin,
> 
> First off: exactly which LDXR / STXR instruction variant pairs is generated? 
> I begin to wonder if __sync_lock_test_and_set(...) might not be doing you any 
> favors. Given that dq->interrupt_pending is a u32, I would have expected a 
> 4-byte instruction with (at worst) a 4-byte alignment requirement.

It's true that a LDXR of 4 bytes only requires 4 byte alignment (not 8).

For the TAS, objdump vhost-user.o shows

  ldxr   w0, [x1]
  stxr   w3, w2, [x1]
  cbnz   w3, ..

These instructions are operating on 4 byte data because of the use of a
'w' register instead of a 'x' register to hold the actual value.

Nitin, can you confirm you see the same generated code? If so, is
&dq->interrupt_pending 4 byte aligned?

> Are there any alignment restrictions on the 1-byte variants LDXRB / STXRB?
> 
> If not: since we use dq->interrupt_pending as a one-bit flag, declaring it as 
> a u8 - or casting &dq->interrupt_pending to (u8 *) in an arch-dependent 
> fashion - might make the pain go away.
> 
> Aligning every vector in the system will waste memory, and will not legislate 
> this class of problem out of existence. So, I wouldn't want to force 8-byte 
> alignment in the way you mention.
> 
> Anyhow, aligning the first vector element to an 8-byte boundary says little 
> about the layout of elements within each vector element, especially if the 
> structure is packed.
> 
> If dq->interrupt_pending needs to be aligned to a specific boundary without 
> fail, the only completely reliable method would be to pack and pad the 
> structure e.g. to a multiple of 8 octets and ensure that interrupt_pending 
> lands on the required boundary. Then use vec_add2_ha (...) to manipulate the 
> vector.
> 
> HTH... Dave
> 
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
> Behalf Of Saxena, Nitin
> Sent: Thursday, September 28, 2017 4:53 AM
> To: vpp-dev@lists.fd.io
> Cc: Narayana, Prasad Athreya 
> Subject: [vpp-dev] [v17.07.01]: vec_add2() causing crash on ARMv8
> 
> 
> Hi All,
> 
> 
> 
> I got a crash with vpp v17.07.01 on ARMv8 Soc 
> @src/vnet/devices/virtio/vhost-user.c: Line no: 1852
> 
> 
> if (clib_smp_swap (&dq->interrupt_pending, 0) ||
> (node->state == VLIB_NODE_STATE_POLLING)){
> }
> 
> While debugging it turns out that value of (&dq->interrupt_pending) was not 8 
> byte aligned hence causing SIGBUS error on ARMv8 SoC. Further debugging tells 
> that dq was added in vector using vec_add2 (src/vnet/devices/devices.c Line 
> no: 152)
> 
> vec_add2 (rt->devices_and_queues, dq, 1)
> 
> which uses 0 byte alignment. Changing vec_add2 to vec_add2_aligned() fixed 
> the problem. My question is can we completely define vec_add2() as
> 
> #define vec_add2(V,P,N)   vec_add2_ha(V,P,N,0,8) instead of #define 
> vec_add2(V,P,N)   vec_add2_ha(V,P,N,0,0)
> 
> This can be helpful for all architecture.
> 
> Thanks,
> Nitin
> 
> 
> 

> ___
> vpp-dev mailing list
> vpp-dev@lists.fd.io
> https://lists.fd.io/mailman/listinfo/vpp-dev

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Compile error with linux/memfd.h

2017-09-28 Thread Dave Barach (dbarach)
Dear Eddie,

As discussed in private email: I think that the version of CentOS on your build 
system is too old. If memory serves, CentOS 7.3 is required. Google tells me 
that the earliest Linux kernel with memfd support is 3.17; it looks like your 
system is running a 3.10 derivative: 
"/usr/src/kernels/3.10.0-693.2.2.el7.x86_64/include/uapi/linux"

Other folks, please jump in on that topic.

After you resolve the CentOS version issue, you'll certainly need to run "make 
install-dep" from the workspace root: "WARNING: Please install ccache AYEC and 
re-run this script"

Thanks... Dave

P.S. We verify every patch on CentOS before merge...

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Eddie Ruan (eruan)
Sent: Thursday, September 28, 2017 5:27 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Compile error with linux/memfd.h

Hi,

I am trying to get my hands dirty by pulling and compiling the VPP code. I am 
following the wiki.

https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code#Building_the_first_time

I have tried different options, but the compile always gets stuck at the following 
error: it is not able to find linux/memfd.h.

I found the following copy on my CentOS box. I am not sure if that's the one it 
looks for, or whether there is some other package I need to install.

[root@spitfire-2 linux]# pwd
/usr/src/kernels/3.10.0-693.2.2.el7.x86_64/include/uapi/linux
[root@spitfire-2 linux]# ls memfd.h
memfd.h

Does anyone have some hints on how to solve it?


Thanks

Eddie



[root@spitfire-2 vpp]# make bootstrap
WARNING: Please install ccache AYEC and re-run this script
make[1]: Entering directory `/nobackup/vpp/build-root'
 Arch for platform 'native' is native 
 Finding source for tools 
 Makefile fragment found in /nobackup/vpp/build-root/packages/tools.mk 
 Source found in /nobackup/vpp/src 
 Configuring tools: nothing to do 
 Building tools in /nobackup/vpp/build-root/build-tool-native/tools 
make[2]: Entering directory `/nobackup/vpp/build-root/build-tool-native/tools'
make  all-recursive
make[3]: Entering directory `/nobackup/vpp/build-root/build-tool-native/tools'
Making all in .
make[4]: Entering directory `/nobackup/vpp/build-root/build-tool-native/tools'
  CC   vppinfra/linux/mem.lo
/nobackup/vpp/build-root/../src/vppinfra/linux/mem.c:25:25: fatal error: 
linux/memfd.h: No such file or directory
#include 






Eddie Ruan
PRINCIPAL ENGINEER.ENGINEERING
er...@cisco.com
Tel: +1 408 853 0776

Cisco Systems, Inc.
821 Alder Drive
MILPITAS
95035
United States
cisco.com




___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] vlan sub interfaces

2017-09-28 Thread Dave Barach (dbarach)
See https://gerrit.fd.io/r/#/c/8590. The patch cherry-picked easily to 
stable/1707.

Assuming that the cherry-pick patch validates - and that it solves your problem 
- it will be up to Neale [as the 17.07 release manager] whether to merge it or 
not.

Please let us know whether the cherry-pick patch works for you.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Prabhjot Singh Sethi
Sent: Thursday, September 28, 2017 3:27 PM
To: Akshaya Nadahalli ; Prabhjot Singh Sethi 
; vpp-dev@lists.fd.io; John Lo (loj) 
Subject: Re: [vpp-dev] vlan sub interfaces

Yes, it works perfectly fine with this patch.
I hope this will be pushed to the 17.07 branch as well.

Thanks for the help :)

Regards,
Prabhjot

- Original Message -
From:
"Akshaya Nadahalli" mailto:aksh...@rtbrick.com>>

To:
"Prabhjot Singh Sethi" 
mailto:prabh...@techtrueup.com>>, 
mailto:vpp-dev@lists.fd.io>>, "John Lo" 
mailto:l...@cisco.com>>
Cc:

Sent:
Thu, 28 Sep 2017 19:18:50 +0530
Subject:
Re: [vpp-dev] vlan sub interfaces


Hi Prabhjot,



Can you pls try with below patch and see if it helps:

https://gerrit.fd.io/r/#/c/8435/



Regards,

Akshaya N

On Thursday 28 September 2017 03:45 PM, Prabhjot Singh Sethi wrote:
Trying again with a more appropriate subject.

Can someone please help if I am missing anything over here?

As mentioned earlier, I have interface host-eth10 and sub-interface 
host-eth10.10 (create sub host-eth10 10).
host-eth10 is associated with bridge domain 2 and the sub-interface is associated 
with bridge domain 3.
When VPP receives a tagged packet with VLAN 10, it still associates it with bd 2 
and not bd 3.

Note: if I don't associate any bd with the base interface, it just drops the packet 
with an error.

Regards,
Prabhjot


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

--
Regards,
Akshaya N
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] [v17.07.01]: vec_add2() causing crash on ARMv8

2017-09-28 Thread Dave Barach (dbarach)
Dear Nitin,

First off: exactly which LDXR / STXR instruction variant pairs is generated? I 
begin to wonder if __sync_lock_test_and_set(...) might not be doing you any 
favors. Given that dq->interrupt_pending is a u32, I would have expected a 
4-byte instruction with (at worst) a 4-byte alignment requirement.

Are there any alignment restrictions on the 1-byte variants LDXRB / STXRB?

If not: since we use dq->interrupt_pending as a one-bit flag, declaring it as a 
u8 - or casting &dq->interrupt_pending to (u8 *) in an arch-dependent fashion - 
might make the pain go away.

Aligning every vector in the system will waste memory, and will not legislate 
this class of problem out of existence. So, I wouldn't want to force 8-byte 
alignment in the way you mention.

Anyhow, aligning the first vector element to an 8-byte boundary says little 
about the layout of elements within each vector element, especially if the 
structure is packed.

If dq->interrupt_pending needs to be aligned to a specific boundary without 
fail, the only completely reliable method would be to pack and pad the 
structure e.g. to a multiple of 8 octets and ensure that interrupt_pending 
lands on the required boundary. Then use vec_add2_ha (...) to manipulate the 
vector.
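
For illustration only, the pack-and-pad version might look something like this 
(the struct is a stand-in for the real devices-and-queues element, and the vector 
variable name is made up):

typedef struct
{
  u32 interrupt_pending;        /* offset 0, naturally aligned */
  u32 hw_if_index;
  u32 queue_id;
  u32 pad;                      /* pad the element to a multiple of 8 octets */
} my_dq_t;

  my_dq_t *dq;
  /* 0-byte header, 8-byte alignment of the vector data */
  vec_add2_ha (my_devices_and_queues, dq, 1, 0, 8);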

HTH... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Saxena, Nitin
Sent: Thursday, September 28, 2017 4:53 AM
To: vpp-dev@lists.fd.io
Cc: Narayana, Prasad Athreya 
Subject: [vpp-dev] [v17.07.01]: vec_add2() causing crash on ARMv8


Hi All,



I got a crash with vpp v17.07.01 on ARMv8 Soc 
@src/vnet/devices/virtio/vhost-user.c: Line no: 1852


if (clib_smp_swap (&dq->interrupt_pending, 0) ||
(node->state == VLIB_NODE_STATE_POLLING)){
}

While debugging it turns out that value of (&dq->interrupt_pending) was not 8 
byte aligned hence causing SIGBUS error on ARMv8 SoC. Further debugging tells 
that dq was added in vector using vec_add2 (src/vnet/devices/devices.c Line no: 
152)

vec_add2 (rt->devices_and_queues, dq, 1)

which uses 0 byte alignment. Changing vec_add2 to vec_add2_aligned() fixed the 
problem. My question is can we completely define vec_add2() as

#define vec_add2(V,P,N)   vec_add2_ha(V,P,N,0,8) instead of #define 
vec_add2(V,P,N)   vec_add2_ha(V,P,N,0,0)

This can be helpful for all architecture.

Thanks,
Nitin



___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Stable branch for 17.10 pulled

2017-09-28 Thread Dave Barach (dbarach)
+1... Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Luke, Chris
Sent: Wednesday, September 27, 2017 11:15 PM
To: Florin Coras ; vpp-dev 
Subject: Re: [vpp-dev] Stable branch for 17.10 pulled

Great work, Florin!

Cheers,
Chris.

From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of Florin Coras
Sent: Wednesday, September 27, 2017 21:46
To: vpp-dev mailto:vpp-dev@lists.fd.io>>
Subject: [vpp-dev] Stable branch for 17.10 pulled

Folks,

The release branch, stable/1710, for VPP 17.10 has now been pulled and tags 
have been laid. As a result, master is yet again open for all changes.

From this point onward, up until the release date on October 25th [1], we need 
to be disciplined with respect to bugfixes. Here is the traditional list of 
common-sense suggestions:

  • All bug fixes must be double-committed to the release throttle 
as well as to the master branch
  • Commit first to the release throttle, then "git 
cherry-pick" into master
  • Manual merges may be required, depending on the 
degree of divergence between throttle and master
  • All bug fixes need to have a Jira ticket
  • Please put Jira IDs into the commit messages.
  • Please use the same Jira ID

Regards,
Florin

[1] https://wiki.fd.io/view/Projects/vpp/Release_Plans/Release_Plan_17.10
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] VPP Python version policy

2017-09-27 Thread Dave Barach (dbarach)
+1, please make sure to put a few words on the wiki about it... (😉)... 

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Luke, Chris
Sent: Wednesday, September 27, 2017 7:04 AM
To: Ole Troan ; vpp-dev 
Subject: Re: [vpp-dev] VPP Python version policy

+1  Wholeheartedly.


> -Original Message-
> From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On
> Behalf Of Ole Troan
> Sent: Wednesday, September 27, 2017 6:19
> To: vpp-dev 
> Subject: [vpp-dev] VPP Python version policy
> 
> In light of the recent debate on the C/C++ API patch and consequences of
> adding Python 3 tools for Linux distros.
> 
> Here is a VPP Python Version Policy Proposal. Or VPPPVPP for short. ;-)
> 
> - All Python tools used as part of the VPP build MUST use Python 2.
>   (I include the automated unit testing here).
> 
> - All VPP Python packages made available to external Python applications
> MUST support both Python 2 and 3.
> 
> Comments?
> 
> Cheers,
> Ole
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] deadlock issue in VPP during DHCP packet processing

2017-09-26 Thread Dave Barach (dbarach)
Does this happen w/ master/latest? My guess: yes...

Florin and I are working on a patch to fix an obvious issue in this path right 
now, look for results shortly...

HTH... Dave


From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Balaji Kn
Sent: Tuesday, September 26, 2017 8:37 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] deadlock issue in VPP during DHCP packet processing

Hello All,

I am working on VPP 17.07 and using the DHCP proxy functionality. The CPU 
configuration is one main thread and one worker thread.

cpu {
  main-core 0
  corelist-workers 1
}

A deadlock is observed while processing a DHCP offer packet in VPP. However, the 
issue is not observed if I comment out the CPU configuration in the startup.conf 
file (i.e., when running in a single thread); then everything works smoothly.

Following message is displayed on console.
vlib_worker_thread_barrier_sync: worker thread deadlock

Backtrace from core file generated.
[New LWP 12792]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/usr/bin/vpp -c /etc/vpp/startup.conf'.
Program terminated with signal SIGABRT, Aborted.
#0  0x7f721ab0fc37 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
56  ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  0x7f721ab0fc37 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x7f721ab13028 in __GI_abort () at abort.c:89
#2  0x00407073 in os_panic () at 
/root/vfe/fe-vfe/datapath/vpp/build-data/../src/vpp/vnet/main.c:263
#3  0x7f721c0b5d5d in vlib_worker_thread_barrier_sync (vm=0x7f721c2e12e0 
)
at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/threads.c:1192
#4  0x7f721c2e973a in vl_msg_api_handler_with_vm_node 
(am=am@entry=0x7f721c5063a0 , the_msg=the_msg@entry=0x304bc6d4,
vm=vm@entry=0x7f721c2e12e0 , 
node=node@entry=0x7f71da6a8000)
at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlibapi/api_shared.c:501
#5  0x7f721c2f34be in memclnt_process (vm=, 
node=0x7f71da6a8000, f=)
at 
/root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlibmemory/memory_vlib.c:544
#6  0x7f721c08ec96 in vlib_process_bootstrap (_a=)
at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/main.c:1259
#7  0x7f721b2ec858 in clib_calljmp () at 
/root/vfe/fe-vfe/datapath/vpp/build-data/../src/vppinfra/longjmp.S:110
#8  0x7f71da9efe20 in ?? ()
#9  0x7f721c090041 in vlib_process_startup (f=0x0, p=0x7f71da6a8000, 
vm=0x7f721c2e12e0 )
at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/main.c:1281
#10 dispatch_process (vm=0x7f721c2e12e0 , p=0x7f71da6a8000, 
last_time_stamp=58535483853222, f=0x0)
at /root/vfe/fe-vfe/datapath/vpp/build-data/../src/vlib/main.c:1324
#11 0x00d800d9 in ?? ()

Any pointers would be appreciated.

Regards,
Balaji

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Poor L3/L4 Performance

2017-09-25 Thread Dave Barach (dbarach)
As discussed off-list: please stick to best-practice coding patterns. 
Single-packet frames simply cannot perform, etc.
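
For reference, a minimal sketch of the usual full-frame dispatch pattern (node, 
next and variable names are illustrative; the point is that the entire frame is 
consumed and enqueued in bulk, not one packet per frame):

static uword
my_node_fn (vlib_main_t * vm, vlib_node_runtime_t * node, vlib_frame_t * frame)
{
  u32 n_left_from, *from, *to_next, next_index;

  from = vlib_frame_vector_args (frame);
  n_left_from = frame->n_vectors;
  next_index = node->cached_next_index;

  while (n_left_from > 0)
    {
      u32 n_left_to_next;
      vlib_get_next_frame (vm, node, next_index, to_next, n_left_to_next);

      while (n_left_from > 0 && n_left_to_next > 0)
	{
	  u32 bi0 = from[0];
	  u32 next0 = MY_NEXT_L2_OUTPUT;	/* hypothetical next index */
	  vlib_buffer_t *b0 = vlib_get_buffer (vm, bi0);

	  /* ... per-packet work on b0 goes here ... */
	  (void) b0;

	  to_next[0] = bi0;
	  to_next += 1;
	  from += 1;
	  n_left_to_next -= 1;
	  n_left_from -= 1;

	  vlib_validate_buffer_enqueue_x1 (vm, node, next_index, to_next,
					   n_left_to_next, bi0, next0);
	}
      vlib_put_next_frame (vm, node, next_index, n_left_to_next);
    }
  return frame->n_vectors;
}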

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Alessio Silvestro
Sent: Monday, September 25, 2017 10:13 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Poor L3/L4 Performance

Dear all,

I am performing some experiments on VPP in order to get some performance 
metrics for specific applications.

I am working on vpp v17.04.2-2.

In order to have a baseline of my system, I run L2 XConnect (XC) as in 
[https://perso.telecom-paristech.fr/~drossi/paper/vpp-bench-techrep.pdf].

In this case, I can achieve, similarly to the paper, ~13 Mpps, which somewhat 
confirms that the
current setup is correct.

I implemented 2 further experiments:

1) L3-Xconnect

I implemented a new node that listens for traffic with specific ether_type with 
the following api:

ethernet_register_input_type(vm, ETHERNET_TYPE_X, my_node.index)

Once the traffic is received, the node sends the traffic directly to l2_output 
without any further processing.

The achieved packet rate is less than 5 Mpps.

2) L4-Xconnect

I implemented another node that listens for UDP traffic on  a specific port 
with the following api:


udp_register_dst_port (vm, UDP_DST_PORT_vxlan, vxlan_input_node.index, 1 /* 
is_ip4 */);

Once the traffic is received, the node sends the traffic directly to l2_output 
without any further processing.

The achieved packet rate is less than 4 Mpps.


The testbed is composed of 2 servers. The first server is running VPP whereas 
the second server runs the traffic generator (packetgen). The servers are 
equipped with Intel NICs capable of dual-port 10 Gbps full-duplex link. 
Generated packets are 64 bytes in size.

VPP is configured to run with one main thread and one worker thread. Therefore, 
the previous values are meant for a single CPU-core.

In my opinion those values are a bit too low compared to other state-of-the-art 
approaches.

Do you have any idea why this is happening and, if this is my fault, how I 
can fix it?

Thanks,
Alessio

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Practical use of the include-what-you-use tool for individual developers

2017-09-23 Thread Dave Barach (dbarach)
Dear Burt,

This is of interest, but I have concerns about boiling the ocean at this point 
in the release cycle. Please hold any patches on this topic until well after 
the 17.10 RC1 throttle branch pull.

Although we haven’t caused ourselves massive pain with similar work - coding 
standards cleanup, build-related directory refactoring - I’m not convinced that 
restructuring existing header files is worth the pain it may cause.

Direct inclusion creates ordering requirements which are at least as annoying 
as unnecessary build dependencies.

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Burt Silverman
Sent: Friday, September 22, 2017 9:39 PM
To: vpp-dev 
Subject: [vpp-dev] Practical use of the include-what-you-use tool for 
individual developers

This is a follow up on my recent post about the include-what-you-use tool. I 
discovered a way that you can use this tool to include a more appropriate set 
of header files in the files that you develop than would otherwise be the case.

The stated philosophy behind the tool is that you should directly include all 
header files that are used by a file. If struct a is declared in a.h and struct 
b is declared in b.h, your .c file that references both struct a and struct b 
should directly include a.h and b.h. But this will lead to including many more 
files than we have typically done in vpp. My personal preference, although not 
sanctioned by the pros, is to allow indirect header file inclusion. It turns 
out that there is a simple way to do this using include-what-you-use, and it 
does not require rewriting the tool's code.

include-what-you-use will suggest which header files should be added and which 
header files should be removed from the file that you are analyzing.

Understand that the corresponding .h file to a .c file will be treated 
specially. It will be analyzed along with the .c file.

For an example, I will use vnet/tcp/builtin_client.c. First I show files that 
are suggested to be removed from builtin_client.h.

vnet/tcp/builtin_client.h should remove these lines:
- #include   // lines 28-28
- #include   // lines 22-22
- #include   // lines 30-30
- #include   // lines 29-29
- #include   // lines 26-26
- #include   // lines 25-25

Running include-what-you-use involves running the clang C compiler, so if a 
necessary header file is missing and a type cannot be resolved, you will see 
regular compiler error messages.

After removing the header file includes above from builtin_client.h, and 
re-running include-what-you-use, we find the error:

In file included from vnet/tcp/builtin_client.c:20:
./vnet/tcp/builtin_client.h:40:3: error: unknown type name 'svm_fifo_t'
  svm_fifo_t *server_rx_fifo;
  ^

We manually search for the svm_fifo_t declaration and we see that rather than 
including svm_fifo_segment.h in builtin_client.h, we should have included 
svm_fifo.h.

Fixing that and re-running IWYU, we find

vnet/tcp/builtin_client.c:55:3: error: use of undeclared identifier 'session_fifo_event_t'
  session_fifo_event_t evt;
  ^

so therefore, session.h should have been included in builtin_client.c rather 
than builtin_client.h.

We also find

vnet/tcp/builtin_client.c:258:8: error: use of undeclared identifier 
'vnet_disconnect_args_t'
  vnet_disconnect_args_t _a, *a = &_a;
  ^
so application_interface.h should have been included in builtin_client.c rather 
than builtin_client.h.

Re-running IWYU tells us that no lines need to be removed from 
builtin_client.h, however,

vnet/tcp/builtin_client.c should remove these lines:
- #include   // lines 24-24
- #include   // lines 25-25
- #include   // lines 26-26
- #include   // lines 19-19
- #include   // lines 18-18
- #include   // lines 27-27

Removing these includes, re-running IWYU indicates that no more includes need 
to be removed from either builtin_client.h or builtin_client.c, and the 
compilation is successful. We are done, and
we have

builtin_client.h includes:
#include 
#include 
#include 
#include 

builtin_client.c includes:
#include 
#include 
#include 

Now, if on the other hand, a developer prefers to include all the headers 
directly, like many experts like to see, the result would be:

The full include-list for vnet/tcp/builtin_client.c:
#include 
#include// for memset, NULL
#include// for vnet_app_attach_args_t
#include  // for stream_session_handle
#include "svm/svm_fifo.h" // for svm_fifo_t, svm_fif...
#include "vlib/cli.h" // for vlib_cli_output
#include "vlib/global_funcs.h"// for vlib_get_thread_main
#include "vlib/init.h"// for VLIB_INIT_FUNCTION
#include "vlib/node_funcs.h"  // for vlib_process_get_ev...
#include "vlib/threads.h" // for vlib_get_thread_index
#include "vlibapi/api_common.h"   // for ap

Re: [vpp-dev] some issue about using unformat %u 

2017-09-20 Thread Dave Barach (dbarach)
Varargs functions effectively bypass strong type-checking. It can’t be helped.
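
A safe pattern, for what it's worth, is to parse into a u32 scratch variable and 
narrow afterwards (a sketch only; the variable names mirror the example quoted 
below):

  u32 out_port_tmp = 0;
  u16 out_port = 0;
  ip4_address_t out_addr;

  /* unformat_integer stores through a (u32 *), so hand it a u32... */
  if (unformat (line_input, "%U %u", unformat_ip4_address, &out_addr,
                &out_port_tmp))
    out_port = (u16) out_port_tmp;      /* ...and narrow after parsing */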

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of wang.hu...@zte.com.cn
Sent: Tuesday, September 19, 2017 11:34 PM
To: vpp-dev@lists.fd.io
Cc: gu.ji...@zte.com.cn; wang.ju...@zte.com.cn
Subject: [vpp-dev] some issue about using unformat %u


Hi all:

We found some common usage issues with CLI unformat, as follows:



u16 out_port = 0;

u32 vrf_id = 0, protocol;

else if (unformat (line_input, "%U %u", unformat_ip4_address,

 &out_addr, &out_port))



When passing a u16 or u8 type parameter (not u32), the local variable behind 
"out_port" on the stack will be overwritten. Is that right? Are there any notes 
about this?

I think the code below may be the cause of that issue.

unformat->va_unformat->do_percent->unformat_integer-> *(u32 *) v = value;











Wang Hui (wanghui)



IT Development Engineer
NIV Nanjing Dept. IV / Wireless Product R&D Institute / Wireless Product Operation 
Division


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Multiple vppctl Considered Harmful

2017-09-19 Thread Dave Barach (dbarach)
See https://gerrit.fd.io/r/#/c/8461...

HTH... Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Jon Loeliger
Sent: Tuesday, September 19, 2017 2:01 PM
To: vpp-dev 
Subject: [vpp-dev] Multiple vppctl Considered Harmful

Folks,

While I appear to be able to run a single vppctl up against VPP,
if I then start a second one, to the same VPP process, VPP immediately
aborts.  It's pretty unfriendly.

EAL:   Invalid NUMA socket, default to 0
EAL:   Invalid NUMA socket, default to 0
DPDK physical memory layout:
Segment 0: phys:0x3520, len:2097152, virt:0x7ff40d00,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: phys:0x35c0, len:8388608, virt:0x7ff40c60,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: phys:0x36c0, len:2097152, virt:0x7ff40c20,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: phys:0x6d80, len:224395264, virt:0x7ff3fea0,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: phys:0x3f940, len:2097152, virt:0x7ff3fe60,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 5: phys:0x3f980, len:29360128, virt:0x7ff38c80,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Aborted



vpp# show version
vpp v17.10-rc0~307-g6b3a8ef built by jdl on bcc-1.netgate.com at Mon
Sep 11 18:38:26 CDT 2017



Does anyone else see that?  Or am I special?

Thanks,
jdl
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Multiple vppctl Considered Harmful

2017-09-19 Thread Dave Barach (dbarach)
D'oh!

register_node:349: more than one node named `unix-cli-sockaddr family 1'


-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Dave Barach (dbarach)
Sent: Tuesday, September 19, 2017 2:05 PM
To: Jon Loeliger ; vpp-dev 
Subject: Re: [vpp-dev] Multiple vppctl Considered Harmful

Give me a minute, I'll try it right away...

Thanks... Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Jon Loeliger
Sent: Tuesday, September 19, 2017 2:01 PM
To: vpp-dev 
Subject: [vpp-dev] Multiple vppctl Considered Harmful

Folks,

While I appear to be able to run a single vppctl up against VPP,
if I then start a second one, to the same VPP process, VPP immediately
aborts.  It's pretty unfriendly.

EAL:   Invalid NUMA socket, default to 0
EAL:   Invalid NUMA socket, default to 0
DPDK physical memory layout:
Segment 0: phys:0x3520, len:2097152, virt:0x7ff40d00,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: phys:0x35c0, len:8388608, virt:0x7ff40c60,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: phys:0x36c0, len:2097152, virt:0x7ff40c20,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: phys:0x6d80, len:224395264, virt:0x7ff3fea0,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: phys:0x3f940, len:2097152, virt:0x7ff3fe60,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 5: phys:0x3f980, len:29360128, virt:0x7ff38c80,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Aborted



vpp# show version
vpp v17.10-rc0~307-g6b3a8ef built by jdl on bcc-1.netgate.com at Mon
Sep 11 18:38:26 CDT 2017



Does anyone else see that?  Or am I special?

Thanks,
jdl
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Multiple vppctl Considered Harmful

2017-09-19 Thread Dave Barach (dbarach)
Give me a minute, I'll try it right away...

Thanks... Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Jon Loeliger
Sent: Tuesday, September 19, 2017 2:01 PM
To: vpp-dev 
Subject: [vpp-dev] Multiple vppctl Considered Harmful

Folks,

While I appear to be able to run a single vppctl up against VPP,
if I then start a second one, to the same VPP process, VPP immediately
aborts.  It's pretty unfriendly.

EAL:   Invalid NUMA socket, default to 0
EAL:   Invalid NUMA socket, default to 0
DPDK physical memory layout:
Segment 0: phys:0x3520, len:2097152, virt:0x7ff40d00,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 1: phys:0x35c0, len:8388608, virt:0x7ff40c60,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 2: phys:0x36c0, len:2097152, virt:0x7ff40c20,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 3: phys:0x6d80, len:224395264, virt:0x7ff3fea0,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 4: phys:0x3f940, len:2097152, virt:0x7ff3fe60,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Segment 5: phys:0x3f980, len:29360128, virt:0x7ff38c80,
socket_id:0, hugepage_sz:2097152, nchannel:0, nrank:0
Aborted



vpp# show version
vpp v17.10-rc0~307-g6b3a8ef built by jdl on bcc-1.netgate.com at Mon
Sep 11 18:38:26 CDT 2017



Does anyone else see that?  Or am I special?

Thanks,
jdl
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] Coverity runs

2017-09-19 Thread Dave Barach (dbarach)
Very cool! Thanks for working on it... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Luke, Chris
Sent: Tuesday, September 19, 2017 11:50 AM
To: vpp-dev 
Subject: [vpp-dev] Coverity runs

All,

Coverity have increased the limits for our project size again; effective 
yesterday I run the build twice daily. 0600 and 1500 Eastern is what I have in 
cron currently, which I hope will be useful times for the majority of the 
current contributors to get feedback on their patches once merged. Thoughts on 
the timing welcome.

Chris.
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] Segmentation fault with -DCLIB_VEC64

2017-09-16 Thread Dave Barach (dbarach)
Please “rm /dev/shm/{global_vm,vpe-api,db}”, try again, and report results...

Thanks… Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of khers
Sent: Saturday, September 16, 2017 11:25 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Segmentation fault with -DCLIB_VEC64

Dear guys,
I am confused by a segfault.
I have changed vpp.mk to use -DCLIB_VEC64 as a gcc argument. I 
tried gdb, but svm.c is complicated for me.

The gdb output is pasted here. Is anyone familiar with 
this situation?
Any help is appreciated.

Cheers,
Khers

___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

Re: [vpp-dev] usability: API message table comparison

2017-09-15 Thread Dave Barach (dbarach)
Here is a comparison report between a master/latest image, and a 17.07.1 image. 
"only in image" means that the message is defined in master/latest, but not in 
17.07.1. 

Message Name                               Result
bridge_domain_add_del                      definition changed
bridge_domain_details                      definition changed
connect_session                            definition changed
connect_sock                               definition changed
connect_sock_reply                         definition changed
connect_uri_reply                          definition changed
create_vhost_user_if                       definition changed
dhcp_client_config                         definition changed
ip4_arp_event                              definition changed
ip6_fib_details                            definition changed
ip6_nd_event                               definition changed
ip_add_del_route                           definition changed
ip_fib_details                             definition changed
ip_table_add_del                           definition changed
l2_macs_event                              only in image
macip_acl_add_replace                      definition changed
macip_acl_interface_list_details           only in image
macip_acl_interface_list_dump              only in image
modify_vhost_user_if                       definition changed
mpls_fib_details                           definition changed
mpls_route_add_del                         definition changed
mpls_table_add_del                         definition changed
mpls_tunnel_add_del                        definition changed
nat44_add_del_address_range                definition changed
nat44_add_del_interface_addr               definition changed
nat44_add_del_lb_static_mapping            definition changed
nat44_add_del_static_mapping               definition changed
nat44_address_details                      only in image
nat44_address_dump                         only in image
nat44_interface_add_del_feature            definition changed
nat44_interface_add_del_output_feature     definition changed
nat44_interface_addr_details               only in image
nat44_interface_addr_dump                  only in image
nat44_interface_details                    only in image
nat44_interface_dump                       only in image
nat44_interface_output_feature_details     only in image
nat44_interface_output_feature_dump        only in image
nat44_lb_static_mapping_details            only in image
nat44_lb_static_mapping_dump               only in image
nat44_static_mapping_details               only in image
nat44_static_mapping_dump                  only in image
nat44_user_details                         only in image
nat44_user_dump                            only in image
nat44_user_session_details                 only in image
nat44_user_session_dump                    only in image
nat_control_ping                           definition changed
nat_det_add_del_map                        definition changed
nat_det_close_session_in                   definition changed
nat_det_close_session_out                  definition changed
nat_det_forward                            definition changed
nat_det_get_timeouts                       definition changed
nat_det_map_details                        only in image
nat_det_map_dump                           only in image
nat_det_reverse                            definition changed
nat_det_session_details                    only in image
nat_det_session_dump                       only in image
nat_det_set_timeouts                       definition changed
nat_ipfix_enable_disable                   definition changed
nat_set_workers                            definition changed
nat_show_config                            definition changed
nat_worker_details                         only in image

Re: [vpp-dev] VPP API Message Multi-Registration Question

2017-09-15 Thread Dave Barach (dbarach)
How about: only complain if the new registration is actually different from the 
old one?
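
A rough sketch of that idea against the registration guard quoted below,
reusing the same am->msg_* fields; illustrative only, not a merged change:

  if (am->msg_names[c->id]
      && (am->msg_handlers[c->id] != c->handler
          || am->msg_cleanup_handlers[c->id] != c->cleanup
          || am->msg_endian_handlers[c->id] != c->endian
          || am->msg_print_handlers[c->id] != c->print))
    /* complain only when the re-registration actually differs */
    clib_warning ("BUG: conflicting registrations of 'vl_api_%s_t_handler'",
                  c->name);

That keeps the warning for genuinely conflicting registrations while staying
quiet for byte-identical re-registrations after a re-fork/exec.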

Thanks... Dave

-Original Message-
From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Jon Loeliger
Sent: Friday, September 15, 2017 3:35 PM
To: vpp-dev 
Subject: [vpp-dev] VPP API Message Multi-Registration Question

Folks,

We have a need to re-register API message handlers.
(Think re-fork/exec scenarios for daemons.)

Today, we register handlers via a call to this function:

vl_msg_api_set_handlers (int id, char *name, void *handler, void *cleanup,
 void *endian, void *print, int size, int traced)

When we do that today, we see this warning on the second call.

vl_msg_api_config (vl_msg_api_msg_config_t * c)
{

  [...]

  if (am->msg_names[c->id])
clib_warning ("BUG: multiple registrations of 'vl_api_%s_t_handler'",
  c->name);

  am->msg_names[c->id] = c->name;
  am->msg_handlers[c->id] = c->handler;
  am->msg_cleanup_handlers[c->id] = c->cleanup;
  am->msg_endian_handlers[c->id] = c->endian;
  am->msg_print_handlers[c->id] = c->print;
  am->message_bounce[c->id] = c->message_bounce;
  am->is_mp_safe[c->id] = c->is_mp_safe;

Sure, the handler is re-registered, but it is really annoying,
and it is misleading in our case.  So we are looking for a
way to squelch it.

Is there a way to *un*-bind a handler during a "graceful shutdown"
procedure so that we can remove any binding here, and thus
later when we re-bind it is all happy again?

Or, can we call a (new?) API function that says "Yeah, we know,
but squash that message for me." just prior to registering the
handlers.

Or, is there a graceful shutdown of the API handling that
I have just missed or overlooked somewhere?

Thanks,
jdl
___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev


Re: [vpp-dev] net/mlx5: install libmlx5 & libibverbs if no OFED

2017-09-13 Thread Dave Barach (dbarach)
I typically use "git commit --amend" followed by "git review [--draft]".
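
Roughly, assuming your original change is still the tip of your local branch
and its Change-Id line is intact in the commit message:

  git add <files you changed>   # stage the fixes for the review comments
  git commit --amend            # amend the existing commit, keeping its Change-Id
  git review                    # pushes a new patch set to the same Gerrit change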

HTH... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Shachar Beiser
Sent: Wednesday, September 13, 2017 11:29 AM
To: vpp-dev@lists.fd.io
Cc: Shahaf Shuler ; Damjan Marion (damarion) 

Subject: [vpp-dev] net/mlx5: install libmlx5 & libibverbs if no OFED

Hi,

  I would like to send a second patch fixing the comments I received.
  I understand that it may not be done with "git push", and "git
review"/"git review -s" seems to have no effect.

  What is the procedure for sending a second patch?
 -Shachar Beiser.
  https://gerrit.fd.io/r/#/c/8390/1


___
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev

[vpp-dev] Spurious patch verification failure (gerrit 8400)

2017-09-13 Thread Dave Barach (dbarach)
See gerrit https://gerrit.fd.io/r/#/c/8400, 
https://jenkins.fd.io/job/vpp-verify-master-centos7/7070/console


12:29:12 make[1]: Leaving directory 
`/w/workspace/vpp-verify-master-centos7/test'
12:29:12 [vpp-verify-master-centos7] $ /bin/bash 
/tmp/hudson3100921859131279854.sh
12:29:12 Loaded plugins: fastestmirror, langpacks
12:29:12 Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache 
fast
12:29:17 Determining fastest mirrors
12:29:18  * base: centos.mirror.ca.planethoster.net
12:29:18  * epel: ftp.cse.buffalo.edu
12:29:18  * extras: centos.mirror.iweb.ca
12:29:18  * updates: centos.mirror.netelligent.ca
12:29:21 Package redhat-lsb-4.1-27.el7.centos.1.x86_64 already installed and 
latest version
12:29:21 Nothing to do
12:29:21 DISTRIB_ID: CentOS
12:29:21 DISTRIB_RELEASE: 7.3.1611
12:29:21 DISTRIB_CODENAME: Core
12:29:21 DISTRIB_DESCRIPTION: "CentOS Linux release 7.3.1611 (Core) "
12:29:21 INSTALLING VPP-DPKG-DEV from apt/yum repo
12:29:21 REPO_URL: https://nexus.fd.io/content/repositories/fd.io.master.centos7
12:29:21 Loaded plugins: fastestmirror, langpacks
12:29:52 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:29:52 Trying other mirror.
12:30:22 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:30:22 Trying other mirror.
12:30:52 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:30:52 Trying other mirror.
12:31:22 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:31:22 Trying other mirror.
12:31:52 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:31:52 Trying other mirror.
12:32:22 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:32:22 Trying other mirror.
12:32:52 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:32:52 Trying other mirror.
12:33:22 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:33:22 Trying other mirror.
12:33:52 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:33:52 Trying other mirror.
12:34:22 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 [Errno 12] Timeout on 
https://nexus.fd.io/content/repositories/fd.io.master.centos7/repodata/repomd.xml:
 (28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 30 
seconds')
12:34:22 Trying other mirror.
12:34:22
12:34:22
12:34:22  One of the configured repositories failed (fd.io master branch latest 
merge),
12:34:22  and yum doesn't have enough cached data to continue. At this point 
the only
12:34:22  safe thing yum can do is fail. There are a few ways to work "fix" 
this:
12:34:22
12:34:22  1. Contact the upstream for the repository and get them to fix 
the problem.
12:34:22
12:34:22  2. Reconfigure the baseurl/etc. for the repository, to point to a 
working
12:34:22 upstream. This is most often useful if you are using a newer
12:34:22 distribution release than is supported by the repository (and 
the
12:34:22 packages for the previous distribution release still work).
12:34:22
12:34:22  

Re: [vpp-dev] vpp performance numbers with 10Gbps interface.

2017-09-12 Thread Dave Barach (dbarach)
+1. If you want to rx-and-drop packets, install a drop adjacency... Sending to 
an unrouteable address results in 100% icmp error replies...
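
For example, something along these lines on the device under test (the prefix
here is illustrative; check the "ip route add" CLI help for the exact syntax on
your build):

vpp# ip route add 192.168.50.10/32 via drop

With the test traffic's destination covered by a drop route, packets should
resolve to a drop in ip4-lookup instead of generating ICMP error replies or
being punted to the local/UDP paths.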

Thanks... Dave

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Florin Coras
Sent: Tuesday, September 12, 2017 1:05 PM
To: Rahul Negi 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] vpp performance numbers with 10Gbps interface.

Hi Rahul,

It looks like all your packets are going to ip4-icmp-error, ip4-local and
ip4-udp-lookup. What is your test setup?

Florin

On Sep 12, 2017, at 5:10 AM, Rahul Negi 
mailto:rahulnegi...@gmail.com>> wrote:

Hi All,
I was trying to measure the maximum PPS handled by VPP. I have installed Ubuntu
16.04 on my server and followed the VPP-recommended BIOS settings.

Hardware specs:
root@kujo:~# lscpu
Architecture:  x86_64
CPU op-mode(s):32-bit, 64-bit
Byte Order:Little Endian
CPU(s):8
On-line CPU(s) list:   0-7
Thread(s) per core:1
Core(s) per socket:8
Socket(s): 1
NUMA node(s):  1
Vendor ID: GenuineIntel
CPU family:6
Model: 45
Model name:Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz
Stepping:  7
CPU MHz:   1200.000
CPU max MHz:   2900.
CPU min MHz:   1200.
BogoMIPS:  5786.39
Virtualization:VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache:  256K
L3 cache:  20480K
NUMA node0 CPU(s): 0-7
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca 
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx 
pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology 
nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est 
tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt 
tsc_deadline_timer aes xsave avx lahf_lm epb tpr_shadow vnmi flexpriority ept 
vpid xsaveopt dtherm


Vpp version:

vpp# show version
vpp v17.10-rc0~301-gb2d2fc7 built by root on kujo at Mon Sep 11 16:39:34 IST 
2017

My VPP configuration has 1 main thread and 1 worker thread. I was not able to
get more than 6 Mpps. After 6 Mpps I can see the rx-miss counters in the VPP stats.

vpp# show interface
  Name   Idx   State  Counter  Count
TenGigabitEtherneta/0/0   1down  rx-error   
2
TenGigabitEtherneta/0/1   2 up   rx packets  
52647168
 rx bytes  
3369416188
 tx packets  
52638150
 tx bytes  
4842700014
 drops  
 9024
 ip4 
52645519
 tx-error   
1
local00down
vpp# show interface
  Name   Idx   State  Counter  Count
TenGigabitEtherneta/0/0   1down  rx-error   
2
TenGigabitEtherneta/0/1   2 up   rx packets  
54696192
 rx bytes  
3500553704
 tx packets  
54687170
 tx bytes  
5031209822
 drops  
 9028
 ip4 
54694538
 tx-error   
1
local00down
vpp# show interface
  Name   Idx   State  Counter  Count
TenGigabitEtherneta/0/0   1down  rx-error   
2
TenGigabitEtherneta/0/1   2 up   rx packets  
56743168
 rx bytes  
3631560168
 tx packets  
56734146
 tx bytes  
5219531614
 drops  
 9028
 ip4 
56741514
 rx-miss 
23152160
 tx-error   
1
local00down
vpp# show interface
  Name   Idx   State  Counter 
