Re: [vpp-dev] Question on node type "VLIB_NODE_TYPE_PROCESS"

2017-11-30 Thread Dave Barach (dbarach)
At least for now, process nodes run on the main thread. See line 1587 of .../src/vlib/main.c. The lldp-process is not super-complicated. Set a gdb breakpoint on line 157 [switch(event_type)], cause it to do something, and you can walk through it, etc. HTH... Dave
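
For readers unfamiliar with the pattern Dave refers to, a minimal process node looks roughly like the sketch below. The node name, event code, and timeout are illustrative, not taken from the lldp process: a process node suspends in vlib_process_wait_for_event_or_clock() and then dispatches on the value returned by vlib_process_get_events(), which is the switch (event_type) Dave points at.

#include <vlib/vlib.h>

/* Hypothetical event code; real processes define their own. */
#define EXAMPLE_EVENT_WAKEUP 1

static uword
example_process (vlib_main_t * vm, vlib_node_runtime_t * rt,
                 vlib_frame_t * f)
{
  uword *event_data = 0;
  uword event_type;

  while (1)
    {
      /* Suspend until another node signals us, or 5 seconds pass. */
      vlib_process_wait_for_event_or_clock (vm, 5.0);
      event_type = vlib_process_get_events (vm, &event_data);

      switch (event_type)
        {
        case ~0:                   /* timeout, no event pending */
          /* periodic housekeeping goes here */
          break;

        case EXAMPLE_EVENT_WAKEUP: /* signalled by another node */
          /* event_data holds the opaque per-event data */
          break;

        default:
          break;
        }
      vec_reset_length (event_data);
    }
  return 0; /* never reached */
}

VLIB_REGISTER_NODE (example_process_node) = {
  .function = example_process,
  .type = VLIB_NODE_TYPE_PROCESS,
  .name = "example-process",
};

Another node can wake it with vlib_process_signal_event (vm, example_process_node.index, EXAMPLE_EVENT_WAKEUP, 0); the process itself always runs on the main thread, as noted above.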

[vpp-dev] Question on node type "VLIB_NODE_TYPE_PROCESS"

2017-11-30 Thread Yeddula, Avinash
Hello, I have a setup with 1 worker thread (core 8) and 1 main thread (core 0). As I read about the node type VLIB_NODE_TYPE_PROCESS, it says "The graph node scheduler invokes these processes in much the same way as traditional vector-processing run-to-completion graph nodes". For example, a node

[vpp-dev] zero copy when delivering packet from vpp to a VM/container?

2017-11-30 Thread Yuliang Li
Hi all, Is there a way to attach a VM/container to VPP so that packets between VPP and the VM/container require zero copies? Thanks, -- Yuliang Li PhD student Department of Computer Science Yale University

Re: [vpp-dev] Jenkins jobs not starting from a "clean" state?

2017-11-30 Thread Marco Varlese
Thomas, On Thu, 2017-11-30 at 10:24 -0500, Thomas F Herbert wrote: > [SNIP] > Maybe "unhappy" is a little too strong :) :) :) > I feel that, with DPDK being such an important piece of the VPP infrastructu

Re: [vpp-dev] Jenkins jobs not starting from a "clean" state?

2017-11-30 Thread Thomas F Herbert
On 11/30/2017 02:52 AM, Marco Varlese wrote: Dear Ed, On Wed, 2017-11-29 at 18:57, Ed Kern (ejk) wrote: On Nov 29, 2017, at 3:09 AM, Marco Varlese wrote: Hi Ed, On Wed, 2017-11-29 at 03:24, Ed Kern (ejk) wrote: All the jobs that I've looked at vpp v

Re: [vpp-dev] api functions using shared memory

2017-11-30 Thread Gabriel Ganne
I'm afraid I haven't followed CSIT work for long enough to be sure what to add. The current VppCounters class summarizes the stats through show_vpp_statistics(), which contains the results from "show run", "show hard", and "show error". I'm adding the csit-dev ML in CC. -- Gabriel Ganne

Re: [vpp-dev] api functions using shared memory

2017-11-30 Thread Luke, Chris
Which "show run" info? The stats in the header are calculated and some of the base values needed for it are missing in the current API; I intend to fix precisely that with this work since they are ideal summary lines for 'vpptop'. Chris. From: Gabriel Ganne [mailto:gabriel.ga...@enea.com] Sent:

Re: [vpp-dev] api functions using shared memory

2017-11-30 Thread Gabriel Ganne
Chris, It seems your work in https://gerrit.fd.io/r/#/c/9483/ does everything Maciek and Dave discussed in VPP-55. Thanks again! -- Gabriel Ganne

Re: [vpp-dev] api functions using shared memory

2017-11-30 Thread Gabriel Ganne
Actually, during the CSIT weekly call yesterday, a missing VPP API for "show run" was mentioned. I think I even found a Jira ticket for it: https://jira.fd.io/browse/VPP-55 It seemed like no one was working on it, so I had a look. In the ticket, Dave Barac

Re: [vpp-dev] problem in elog format

2017-11-30 Thread Dave Barach (dbarach)
Hmmm. I’ve never seen that issue, although I haven’t run c2cpel in a while. I’ll take a look later today. It looks like .../src/perftool.am builds it, so look under build-root/install-xxx and (possibly) install it manually... Thanks… Dave

Re: [vpp-dev] api functions using shared memory

2017-11-30 Thread Ole Troan
Gabriel, > I am looking at the get_node_graph() API function, for use in Python. > It returns a u64 reply_in_shmem value which points to the shared memory and > must then be processed by vlib_node_unserialize() (as is done in VAT), but I > only saw such a function in C. > Is there any way to do

Re: [vpp-dev] SR MPLS not effective

2017-11-30 Thread 薛欣颖
Hi Neale, I can't configure the command like that: 'VPP# sr mpls policy add bsid 999 next 210 209 208 207 206 205 204 203 202 201' reports "unknown input `209 208 207 206 205 204 ...'". So I configured the command as before, and all the info is shown below: packet info: 00:05:58:166326: af

Re: [vpp-dev] api functions using shared memory

2017-11-30 Thread Luke, Chris
I’m already working on making this easier to consume. Stay tuned. 😊 Chris.

Re: [vpp-dev] SR MPLS not effective

2017-11-30 Thread Neale Ranns (nranns)
Hi Xyxue, To get a 10-label stack, you need to do: sr mpls policy add bsid 999 next 210 209 208 207 206 205 204 203 202 201 i.e. only use the ‘next’ keyword once. And then, if you don’t get the desired result, could you show me the following outputs: sh sr mpls polic sh mpls fib 999

[vpp-dev] api functions using shared memory

2017-11-30 Thread Gabriel Ganne
Hi, I am looking at the get_node_graph() API function, for use in Python. It returns a u64 reply_in_shmem value which points to the shared memory and must then be processed by vlib_node_unserialize() (as is done in VAT), but I only saw such a function in C. Is there any way to do this in pyth
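
For reference, the VAT pattern the question alludes to is sketched below: copy the serialized blob out of the shared-memory API segment into process-private memory, free the shared-memory original under the API heap, and only then unserialize. This is a simplified approximation of the get_node_graph reply handling in src/vat/api_format.c, not a verbatim copy; the function and variable names are illustrative, and the exact return shape of vlib_node_unserialize() depends on the VPP version.

#include <pthread.h>
#include <vppinfra/vec.h>
#include <svm/svm.h>
#include <vlibapi/api.h>
#include <vlibmemory/api.h>

/* Sketch: consume the reply_in_shmem pointer carried in the reply message. */
static u8 *
copy_graph_out_of_shmem (u64 reply_in_shmem)
{
  api_main_t *am = &api_main; /* global API main in this era of VPP;
                                 newer trees use vlibapi_get_main() */
  u8 *reply, *pvt_copy;
  void *oldheap;

  /* The reply is a vector living in the shared-memory API segment. */
  reply = uword_to_pointer (reply_in_shmem, u8 *);

  /* Duplicate it into process-private memory... */
  pvt_copy = vec_dup (reply);

  /* ...then free the shared-memory original under the API heap. */
  pthread_mutex_lock (&am->vlib_rp->mutex);
  oldheap = svm_push_data_heap (am->vlib_rp);
  vec_free (reply);
  svm_pop_heap (oldheap);
  pthread_mutex_unlock (&am->vlib_rp->mutex);

  /* pvt_copy can now be handed to vlib_node_unserialize(); see the
   * reply handler in src/vat/api_format.c for the full treatment,
   * including building the node-name hash. */
  return pvt_copy;
}

Doing the same from the Python client would need an equivalent way to reach the shared-memory segment and unserialize the blob, which is essentially what the thread is asking about.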

Re: [vpp-dev] SR MPLS not effective

2017-11-30 Thread 薛欣颖
Hi Neale, After referring to your example, I modified my configuration. The two-label SR MPLS works well, but when I configure ten labels, the bottom label is different from the configuration. vpp1 configuration: create host-interface name eth4 mac 00:0c:29:4d:af:b5 create host-i