Re: [vpp-dev] VPP Repo status?

2017-02-13 Thread Ed Warnicke
As a data point, I just did a fresh clone using that URL:
https://gerrit.fd.io/r/vpp/
with no hiccups.
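
If a fresh clone works, the error on an existing clone usually points at the
local CA bundle rather than at the server. A quick check, purely as a sketch
(the CA path below is a typical Debian/Ubuntu location, not anything specific
to fd.io):

git config --get http.sslCAInfo
git config --get http.sslCAPath
# point git at the distro CA bundle if it is unset or stale
git config http.sslCAInfo /etc/ssl/certs/ca-certificates.crt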

Ed

On Mon, Feb 13, 2017 at 7:57 PM, Bill Fischofer wrote:

> Sorry if this has already been answered. I have a clone of the VPP
> repo from a few months ago that I went to update this evening and got
> this error message:
>
> git pull
> fatal: unable to access 'https://gerrit.fd.io/r/vpp/': Problem with
> the SSL CA cert (path? access rights?)
>
> Has the repo moved?  The fd.io web page still points to this URL as
> what should be cloned to get a local dev copy of VPP.
>
> Thanks.
>
> Bill Fischofer, Linaro

[vpp-dev] VPP Repo status?

2017-02-13 Thread Bill Fischofer
Sorry if this has already been answered. I have a clone of the VPP
repo from a few months ago that I went to update this evening and got
this error message:

git pull
fatal: unable to access 'https://gerrit.fd.io/r/vpp/': Problem with
the SSL CA cert (path? access rights?)

Has the repo moved?  The fd.io web page still points to this URL as
what should be cloned to get a local dev copy of VPP.

Thanks.

Bill Fischofer, Linaro


Re: [vpp-dev] Use RSS in VPP 17.01

2017-02-13 Thread Yichen Wang (yicwang)
Hi, John/all,

Thanks for your pointer; I am now able to bring up VPP with multiple queues!

I am doing a PVP test, where the expected path is Traffic Generator -> VPP
on Host -> Loopback VM (testpmd) -> VPP on Host -> Traffic Generator. I can see
that the packets are delivered to the loopback VM with no problem, but:

(1) VPP shows all packets as dropped:

VirtualEthernet0/0/2   8   up   tx packets   3237064
                                tx bytes     194223840
                                drops        3237051

But I checked testpmd and it receives all the packets, and it does its job
correctly by forwarding them to the other interfaces;

(2) VPP "show err" reports:

   Count        Node                Reason
   692521171    vhost-user-input    no available buffer

Why does it say “no available buffer”? It works fine without RSS. Did I
miss anything?

Thanks very much for your help!

Regards,
Yichen

From: "John Lo (loj)" 
Date: Thursday, February 9, 2017 at 20:09
To: "Yichen Wang (yicwang)", "vpp-dev@lists.fd.io"
Cc: "Ian Wells (iawells)" 
Subject: RE: Use RSS in VPP 17.01

For VPP, the number of queues on a device can be specified in the DPDK portion
of the startup config, and it defaults to 1. This is usually documented as
comments in the startup.conf template file when installing the VPP rpm/deb on the
target Linux OS. Following is the dpdk portion of the startup.conf in the
/etc/vpp/ directory after installing the vpp deb packages on my Ubuntu server:

dpdk {
## Change default settings for all interfaces
# dev default {
   ## Number of receive queues, enables RSS
   ## Default is 1
   # num-rx-queues 3

   ## Number of transmit queues, Default is equal
   ## to number of worker threads or 1 if no worker threads
   # num-tx-queues 3

   ## Number of descriptors in transmit and receive rings
   ## increasing or reducing number can impact performance
   ## Default is 1024 for both rx and tx
   # num-rx-desc 512
   # num-tx-desc 512

   ## VLAN strip offload mode for interface
   ## Default is off
   # vlan-strip-offload on
# }

## Whitelist specific interface by specifying PCI address
# dev 0000:02:00.0

## Whitelist specific interface by specifying PCI address and in
## addition specify custom parameters for this interface
# dev 0000:02:00.1 {
#   num-rx-queues 2
# }

## Change UIO driver used by VPP, Options are: uio_pci_generic, vfio-pci
## and igb_uio (default)
# uio-driver uio_pci_generic

## Disable multi-segment buffers, improves performance but
## disables Jumbo MTU support
# no-multi-seg

## Increase number of buffers allocated, needed only in scenarios with
## large number of interfaces and worker threads. Value is per CPU socket.
## Default is 32768
# num-mbufs 128000

## Change hugepages allocation per-socket, needed only if there is need for
## larger number of mbufs. Default is 256M on each detected CPU socket
# socket-mem 2048,2048
}
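
For example, a minimal uncommented stanza to enable RSS on one port might look
like the following sketch (the PCI address is just a placeholder; pick a queue
count that matches the worker threads configured in the cpu section):

dpdk {
  dev 0000:02:00.0 {
    num-rx-queues 4
  }
}

After a restart, "show dpdk interface placement" should list the additional
queues and the worker thread polling each of them.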

Regards,
John

From: vpp-dev-boun...@lists.fd.io [mailto:vpp-dev-boun...@lists.fd.io] On 
Behalf Of Yichen Wang (yicwang)
Sent: Thursday, February 09, 2017 10:38 PM
To: vpp-dev@lists.fd.io
Cc: Ian Wells (iawells) 
Subject: [vpp-dev] Use RSS in VPP 17.01

Hi, VPP folks,

From what I saw in the VPP docs, some places do mention that VPP
supports RSS. In the example given at the bottom of the following link, two
queues per interface are shown:
https://wiki.fd.io/view/VPP/Using_VPP_In_A_Multi-thread_Model

I want to try exactly the same thing on both Cisco VIC (enic driver) and Intel
X710 (i40e), but could not get it working straight away, and need some help from
you guys! ☺

On the Cisco VIC, I went to CIMC, configured 1 TX queue and 4 RX queues, and
enabled all RSS-related features. From the RHEL OS, I can clearly see all 5
queues in /proc/interrupts. However, when I bind the interface to
VPP, I can only see “TenGigabitEthernet9/0/0 queue 0” in “show dpdk interface
placement” and not the other queues. Am I doing something wrong?

On the Intel X710 side, I did the same type of check as on the Cisco VIC, and 16
TxRx queues were shown, which makes sense because my server has 16 CPU cores.
When I bind the interfaces to VPP, again only 1 queue is shown in
VPP. How can I configure VPP/DPDK to use RSS? I looked online, but the
documentation from RHEL/Intel/VPP is really limited and doesn't seem to be
very helpful. The only reference I found is on OVS-DPDK and how they do
the RSS

[vpp-dev] flow distribute

2017-02-13 Thread yug...@telincn.com
Hi all,
I would like to pick out one kind of flow that has no five-tuple session and
steer it to a specified CPU worker.
The benefit is that we would need no lock in the session-creation procedure.
What is your opinion?

Regards,
Ewan.

yug...@telincn.com

Re: [vpp-dev] VPP cannot find interface QLogic 57810

2017-02-13 Thread Dave Wallace

Martin,

There have been several DPDK build changes since I was last working on 
the VPP dpdk driver infra, but the following patch will enable the BNX2X 
PMD in .../custom-config.


+Damjan for his input as this may not be the best way to add the BNX2X 
PMD to VPP.


 %< 
diff --git a/dpdk/Makefile b/dpdk/Makefile
index c9ed873..c8c9f5c 100644
--- a/dpdk/Makefile
+++ b/dpdk/Makefile
@@ -111,6 +111,7 @@ $(B)/custom-config: $(B)/.patch.ok Makefile
$(call set,RTE_PCI_CONFIG,y)
$(call set,RTE_PCI_EXTENDED_TAG,"on")
$(call set,RTE_PCI_MAX_READ_REQUEST_SIZE,4096)
+   $(call set,RTE_LIBRTE_BNX2X_PMD,y)
@# enable debug init for device drivers
$(call set,RTE_LIBRTE_I40E_DEBUG_INIT,$(DPDK_DEBUG))
$(call set,RTE_LIBRTE_IXGBE_DEBUG_INIT,$(DPDK_DEBUG))
 %< 

NOTE: the BNX2X driver requires the zlib library, so you'll need to
ensure that it is installed or the build will fail.
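
On Debian/Ubuntu that typically means installing the zlib development package
(the package name is an assumption for other distros; RPM-based systems call it
zlib-devel):

sudo apt-get install zlib1g-dev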


Thanks,
-daw-


On 02/13/2017 01:01 PM, Martin Šuňal wrote:


Dave,

Thanks much. I’ve added the else-if case for QLogic as you mentioned. Now
“vppctl show pci” is showing driver “uio_pci_generic”, but the interface is
still missing from “vppctl show int”.


I found bnx2x mentioned in the following files in build-root after a VPP build
(./vpp/build-root/vagrant/build.sh):


./build-root/build-vpp-native/dpdk/custom-config

./build-root/build-vpp-native/dpdk/dpdk-16.11/config/common_base

./build-root/build-vpp-native/dpdk/dpdk-16.11/build/.config.orig

./build-root/build-vpp-native/dpdk/dpdk-16.11/x86_64-native-linuxapp-gcc/.config

./build-root/build-vpp-native/dpdk/dpdk-16.11/x86_64-native-linuxapp-gcc/.config.orig

./build-root/install-vpp-native/dpdk/share/dpdk/x86_64-nhm-linuxapp-gcc/.config

I noticed that all the .config files contain
"CONFIG_RTE_LIBRTE_BNX2X_PMD=n", so I changed it to “=y”.


I restarted VPP but saw no change. I guess I am missing some step, such as
where and when the DPDK NIC drivers are installed.


Thank you,

Martin

*From:*Dave Wallace [mailto:dwallac...@gmail.com]
*Sent:* Sunday, February 12, 2017 3:24 AM
*To:* Martin Šuňal ; vpp-dev@lists.fd.io
*Subject:* Re: [vpp-dev] VPP cannot find interface QLogic 57810

Martin,

AFAIK, QLogic NICs have not been tested with VPP.

You need to start by adding a case for the QLogic NICs in 
.../vpp/vnet/vnet/devices/dpdk/init.c::dpdk_bind_devices_to_uio(). 
Search for "Unsupported Ethernet PCI device" in this file for details.


A quick internet search for a DPDK QLogic PMD shows the following 
documentation for 17.02-rc3:

http://dpdk.org/doc/guides/nics/bnx2x.html

I'm not sure if this PMD exists in DPDK 16.11, which is what VPP is
currently being tested against, but hopefully it will just work.
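
One quick way to check is to grep the DPDK config in the VPP build tree; the
path below is taken from the file list quoted elsewhere in this thread, so treat
it as an assumption about your checkout:

grep BNX2X build-root/build-vpp-native/dpdk/dpdk-16.11/config/common_base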


Thanks,
-daw-

On 2/10/17 11:03 AM, Martin Šuňal wrote:

I’ve just found that VPP has a problem with the QLogic interface.

Any idea whether it is a problem in VPP or in DPDK?

Is it something that can be easily fixed?

I am thinking of trying a different version of the NIC firmware.

root@frinxblade16:~# *service vpp status*

* vpp.service - vector packet processing engine
   Loaded: loaded (/lib/systemd/system/vpp.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2017-02-10 16:41:32 CET; 1min 22s ago
  Process: 3503 ExecStartPre=/sbin/modprobe igb_uio (code=exited, status=0/SUCCESS)
  Process: 3484 ExecStartPre=/bin/rm -f /dev/shm/db /dev/shm/global_vm /dev/shm/vpe-api (code=exited, status=0/SUCCESS)
 Main PID: 3521 (vpp_main)
    Tasks: 3
   Memory: 36.0M
      CPU: 1min 21.730s
   CGroup: /system.slice/vpp.service
           `-3521 /usr/bin/vpp -c /etc/vpp/startup.conf

Feb 10 16:41:32 frinxblade16 systemd[1]: Starting vector packet processing engine...
Feb 10 16:41:32 frinxblade16 systemd[1]: Started vector packet processing engine.
Feb 10 16:41:32 frinxblade16 vpp[3521]: vlib_plugin_early_init:213: plugin path /usr/lib/vpp_plugins
Feb 10 16:41:32 frinxblade16 vpp[3521]: /usr/bin/vpp[3521]: dpdk_bind_devices_to_uio:871: *Unsupported Ethernet PCI device 0x14e4:0x168e found at PCI address 0000:01:00.1*
Feb 10 16:41:32 frinxblade16 /usr/bin/vpp[3521]: dpdk_bind_devices_to_uio:871: Unsupported Ethernet PCI device 0x14e4:0x168e found at PCI address 0000:01:00.1
Feb 10 16:41:32 frinxblade16 vpp[3521]: EAL: Detected 56 lcore(s)
Feb 10 16:41:32 frinxblade16 vpp[3521]: EAL: No free hugepages reported in hugepages-1048576kB
Feb 10 16:41:32 frinxblade16 vpp[3521]: EAL: Probing VFIO support...
Feb 10 16:41:32 frinxblade16 vnet[3521]: EAL: Probing VFIO support...
Feb 10 16:41:32 frinxblade16 vnet[3521]: dpdk_lib_init:304: *DPDK drivers found no ports...*

Thank you,

Martin Šuňal

/Technical Leader/

Frinx s.r.o.

Mlynské Nivy 48 / 821 09 Bratislava / Slovakia

+421 2 20 91 01 41 / msu...@frinx.io  /
www.frinx.io 

Re: [vpp-dev] Failing Out-of-tree Builds

2017-02-13 Thread Jon Loeliger
On Fri, Feb 10, 2017 at 9:12 AM, Jon Loeliger  wrote:

> On Thu, Feb 9, 2017 at 11:16 PM, Akshaya Nadahalli (anadahal) <
> anada...@cisco.com> wrote:
>
>> Hi Jon,
>>
>> fib_urpf_list.h needs to be included inside the source file and need not be
>> installed in /usr/include. Thanks for raising this. I will send out a patch
>> for this.
>>
>> Regards,
>> Akshaya N
>>
>
>  Awesome!  Thank you!
>
> jdl
>

Just to close the loop here, it looks like the commit

commit 0f438df9918911a976751f2391421cc8b4b6fdb7
Author: AkshayaNadahalli 
Date:   Fri Feb 10 10:54:16 2017 +0530

Out-of-tree Build Error fix

did indeed fix this issue for me.

Thank you!

jdl

Re: [vpp-dev] libpneum compilation flags

2017-02-13 Thread Burt Silverman
Thanks, Gabriel and Damjan. It appears to me that the glibc getconf
program uses its own mechanisms for determining the cache line size, rather
than using the information that the kernel has. That is my reading of the
glibc code, so I do not believe, Damjan, that you will see any different behavior
with a newer kernel. Possibly you would see differences in
/sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size (or some such
path). Maybe the correct information is there, with an old and/or new kernel?
Anyway, I think we know not to trust getconf for the cache line size on
ARM at the present time. I suppose configure.ac could be changed again to
use the kernel info; on the other hand, "if it ain't broke, don't fix it."
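
For reference, a quick way to compare the two sources on a given box is the
sketch below; the sysfs path is the one mentioned above and may vary by
platform:

getconf LEVEL1_DCACHE_LINESIZE
cat /sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size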

Burt

On Mon, Feb 13, 2017 at 11:17 AM, Damjan Marion (damarion) <
damar...@cisco.com> wrote:

>
> On 13 Feb 2017, at 17:11, Gabriel Ganne  wrote:
>
> Hi Burt,
>
> Thank you for your input.
> I pushed a new version of my commit (https://gerrit.fd.io/r/#/c/4576/)
> where I tried to do things more clearly.
>
> I had a look here https://github.com/torvalds/linux/blob/master/arch/arm64/kernel/cacheinfo.c
> and it seems that on arm64, recent kernels should be able to return a correct value.
> Which means that some day, they will. Old ones will fall back to 64 bytes.
>
> Maybe someone who has a Thunder platform can try it in order to see what
> getconf returns.
>
>
> I tried on my ThunderX system, and it returns 0, but the kernel which I’m
> running is old (the one from the SDK).
> I am not able to run a standard Ubuntu kernel for arm64 on that system; it
> just freezes very early in the boot process.
> As ThunderX is listed as certified [1], I guess I’m doing something wrong….
>
> [1] https://certification.ubuntu.com/hardware/201609-25111/
>

Re: [vpp-dev] VPP performance degradation with multiple nic polling

2017-02-13 Thread Damjan Marion

> On 13 Feb 2017, at 17:34, yusuf khan  wrote:
> 
> Hi,
> 
> Comments inline.
> 
> Br,
> Yusuf
> 
> On Mon, Feb 13, 2017 at 9:20 PM, Damjan Marion wrote:
> 
> > On 10 Feb 2017, at 18:03, yusuf khan wrote:
> >
> > Hi,
> >
> > I am testing VPP performance for L3 routing. I am pumping traffic from 
> > moongen, which is sending packets at 10Gbps line rate with 84-byte packet 
> > size.
> > If I start VPP with a single worker thread (in addition to the main thread), VPP 
> > is able to route almost at line rate. Almost, because I see some drops at 
> > the receive side of the NIC.
> > avg vectors per node is 97 in this case.
> >
> > Success case stats from moongen below...
> >
> > Thread 1 vpp_wk_0 (lcore 11)
> > Time 122.6, average vectors/node 96.78, last 128 main loops 12.00 per node 
> > 256.00
> >   vector rates in 3.2663e6, out 3.2660e6, drop 1.6316e-2, punt 0.0000e0
> > Moongen output --
> > [Device: id=5] TX: 11.57 Mpps, 8148 Mbit/s (10000 Mbit/s with framing)
> > [Device: id=6] RX: 11.41 Mpps, 8034 Mbit/s (9860 Mbit/s with framing)
> 
> It seems that moongen is not able to send faster….
> [Yusuf] Here moongen is sending 10000 Mbit/s but the receive rate is somewhat 
> less, maybe due to NIC drops… 

Yeah, I wanted to say that VPP is not the limiting factor here.


> 
> >
> >
> > But when I start VPP with 2 worker threads, each polling a separate NIC, I 
> > see the throughput reduced by almost 40%! The other thread is not 
> > receiving any packets, it is just polling an idle NIC, but it impacts the other thread?
> 
> Looks like one worker is polling both interfaces and the other one is idle. 
> That’s why you see the drop in performance.
> 
> Can you provide output of “show dpdk interface placement” command?
> 
> [Yusuf] Each thread is polling an individual interface. Please find the 
> output below:
> Thread 1 (vpp_wk_0 at lcore 11):
>   TenGigabitEthernet5/0/1 queue 0
> Thread 2 (vpp_wk_1 at lcore 24):
>   TenGigabitEthernet5/0/0 queue 0

You have both ports on the same card. Have you tried with two different cards?
The 82599 has some hardware limitations; if I remember correctly it is around 23 
Mpps per card with 64B packets.

Can you also capture the following outputs while traffic is running:

clear hardware
clear run
[wait 1-2 sec]
show run
show hardware
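
From the host shell that capture might look like the following (assuming
vppctl is installed alongside vpp):

vppctl clear hardware
vppctl clear run
sleep 2
vppctl show run
vppctl show hardware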



Re: [vpp-dev] VPP performance degradation with multiple nic polling

2017-02-13 Thread yusuf khan
Hi,

Comments inline.

Br,
Yusuf

On Mon, Feb 13, 2017 at 9:20 PM, Damjan Marion wrote:

>
> > On 10 Feb 2017, at 18:03, yusuf khan  wrote:
> >
> > Hi,
> >
> > I am testing VPP performance for L3 routing. I am pumping traffic from
> moongen, which is sending packets at 10Gbps line rate with 84-byte packet
> size.
> > If I start VPP with a single worker thread (in addition to the main thread),
> VPP is able to route almost at line rate. Almost, because I see some
> drops at the receive side of the NIC.
> > avg vectors per node is 97 in this case.
> >
> > Success case stats from moongen below...
> >
> > Thread 1 vpp_wk_0 (lcore 11)
> > Time 122.6, average vectors/node 96.78, last 128 main loops 12.00 per
> node 256.00
> >   vector rates in 3.2663e6, out 3.2660e6, drop 1.6316e-2, punt 0.0000e0
> > Moongen output
> --
> > [Device: id=5] TX: 11.57 Mpps, 8148 Mbit/s (10000 Mbit/s with framing)
> > [Device: id=6] RX: 11.41 Mpps, 8034 Mbit/s (9860 Mbit/s with framing)
>
> It seems that moongen is not able to send faster….
>
[Yusuf] Here moongen is sending 10000 Mbit/s but the receive rate is somewhat
less, maybe due to NIC drops...

>
> >
> >
> > But when I start VPP with 2 worker threads, each polling a separate NIC,
> I see the throughput reduced by almost 40%! The other thread is not
> receiving any packets, it is just polling an idle NIC, but it impacts the other thread?
>
> Looks like one worker is polling both interfaces and the other one is idle.
> That’s why you see the drop in performance.
>
> Can you provide output of “show dpdk interface placement” command?
>

[Yusuf] Each thread is polling an individual interface. Please find the
output below:
Thread 1 (vpp_wk_0 at lcore 11):
  TenGigabitEthernet5/0/1 queue 0
Thread 2 (vpp_wk_1 at lcore 24):
  TenGigabitEthernet5/0/0 queue 0

In fact, in the case of a single worker thread, it polls both interfaces and I
don't see any performance issue. But as soon as an additional worker thread is
created, it causes a performance issue.


>
> > Is polling pci bus causing contention?
>
> We are not polling PCI bus….
>
   [Yusuf] OK. What I really meant was: do we have any PCI command overhead
due to polling? But I guess not.

>
> > What could be the reason? In this case avg vectors per node is 256! Some
> excerpt below…
> > Thread 2 vpp_wk_1 (lcore 24)
> > Time 70.9, average vectors/node 256.00, last 128 main loops 12.00 per
> node 256.00
> >   vector rates in 7.2937e6, out 7.2937e6, drop 0.0000e0, punt 0.0000e0
> > Moongen output
> --
> > [Device: id=5] TX: 11.49 Mpps, 8088 Mbit/s (9927 Mbit/s with framing)
> > [Device: id=6] RX: 7.34 Mpps, 5167 Mbit/s (6342 Mbit/s with framing)
> >
> > One more piece of information: it is a dual-port NIC, an 82599ES, on a PCIe2 x8 bus.
> >
>
>

Re: [vpp-dev] libpneum compilation flags

2017-02-13 Thread Damjan Marion (damarion)

On 13 Feb 2017, at 17:11, Gabriel Ganne wrote:

Hi Burt,

Thank you for your input.
I pushed a new version of my commit (https://gerrit.fd.io/r/#/c/4576/) where I 
tried to do things more clearly.

I had a look here
https://github.com/torvalds/linux/blob/master/arch/arm64/kernel/cacheinfo.c
and it seems that on arm64, recent kernels should be able to return a correct value.
Which means that some day, they will.
Old ones will fall back to 64 bytes.

Maybe someone who has a Thunder platform can try it in order to see what
getconf returns.

I tried on my ThunderX system, and it returns 0, but the kernel which I’m running
is old (the one from the SDK).
I am not able to run a standard Ubuntu kernel for arm64 on that system; it just
freezes very early in the boot process.
As ThunderX is listed as certified [1], I guess I’m doing something wrong….

[1] https://certification.ubuntu.com/hardware/201609-25111/

Re: [vpp-dev] Query regarding running worker thread in VPP Debug mode

2017-02-13 Thread Sreejith Surendran Nair
Hi Damjan,

Thank you for the kind reply. Sorry, I had a doubt: in the code I observed that
we have support for worker threads in "af_packet" and "netmap"; is that used
with the dpdk platform only?
I thought I could add similar support for ODP.

Thanks & Regards,
Sreejith

On 13 February 2017 at 17:51, Damjan Marion  wrote:

>
> Hi Sreejith,
>
> You cannot use vpp_lite with multiple threads, vpp_lite buffer manager is
> not thread safe.
>
> Thanks,
>
> Damjan
>
> On 13 Feb 2017, at 11:28, Sreejith Surendran Nair wrote:
>
> Hi All,
>
> I am working on the VPP/ODP integration project. I am trying to run VPP in
> debug mode with multi-thread support. I have configured the startup conf
> file with "workers".
>
> But as I try to configure the interface and bring it up, there is a crash
> occurring due to an assertion failure (CPU). I have seen the same issue while
> creating both "af_packet" and "odp" interfaces.
>
> Logs:
> --
> DBGvpp# create pktio-interface name enp0s3 hw-addr 08:00:27:11:7c:1b
> odp-enp0s3
> DBGvpp# sh int
>   Name          Idx   State   Counter   Count
> local0          0     down
> odp-enp0s3      1     down
>
> DBGvpp# sh threads
> ID  Name      Type     LWP   Sched Policy (Priority)  lcore  Core  Socket  State
> 0   vpp_main           7054  other (0)                0      0     0
> 1   vpp_wk_0  workers  7067  other (0)                1      1     0
> 2   vpp_wk_1  workers  7068  other (0)                2      2     0
> 3   stats              7069  other (0)                0      0     0
>
>
> DBGvpp# set int state odp-enp0s3 up
> DBGvpp# 1:
> /home/vppodp/odp_vpp/copy_vpp/vpp/build-data/../src/vlib/buffer_funcs.h:224
> (vlib_buffer_set_known_state) assertion `os_get_cpu_number () == 0' fails
> Failed to save post-mortem API trace to /tmp/api_post_mortem.7054
> Aborted (core dumped)
> Makefile:284: recipe for target 'run' failed
> make: *** [run] Error 134
> root@vppodp-VirtualBox:/home/vppodp/odp_vpp/copy_vpp/vpp#
>
>
> Startup.conf
> -
> unix {
>   interactive
>   nodaemon
>   log /tmp/vpp.log
>   full-coredump
>   cli-listen localhost:5002
> }
>
> api-trace {
>   on
> }
>
> cpu {
>   workers 2
>
> }
>
>
> lscpu:
> 
> CPU op-mode(s):32-bit, 64-bit
> Byte Order:Little Endian
> CPU(s):3
> On-line CPU(s) list:   0-2
> Thread(s) per core:1
> Core(s) per socket:3
> Socket(s): 1
> NUMA node(s):  1
> Vendor ID: GenuineIntel
> CPU family:6
> Model: 61
> Model name:Intel(R) Core(TM) i5-5300U CPU @ 2.30GHz
> Stepping:  4
> CPU MHz:   2294.686
> BogoMIPS:  4589.37
> Hypervisor vendor: KVM
> Virtualization type:   full
> L1d cache: 32K
> L1i cache: 32K
> L2 cache:  256K
> L3 cache:  3072K
> NUMA node0 CPU(s): 0-2
> Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr
> pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm
> constant_tsc rep_good nopl xtopology nonstop_tsc eagerfpu pni pclmulqdq
> ssse3 cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx rdrand
> hypervisor lahf_lm abm 3dnowprefetch rdseed
>
> If possible, could you please kindly suggest if anything is wrong in the
> startup file configuration. I am using an Ubuntu 16.04 VM in a VirtualBox
> environment.
>
> Thanks & Regards,
> Sreejith
>
>
>

Re: [vpp-dev] Query regarding running worker thread in VPP Debug mode

2017-02-13 Thread Damjan Marion

Hi Sreejith,

You cannot use vpp_lite with multiple threads; the vpp_lite buffer manager is not
thread-safe.
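
A minimal startup.conf adjustment implied by this, shown only as a sketch
against the cpu section quoted below: drop (or comment out) the workers line
when running the vpp_lite image.

cpu {
  ## no workers: the vpp_lite buffer manager is single-threaded
  # workers 2
}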

Thanks,

Damjan

> On 13 Feb 2017, at 11:28, Sreejith Surendran Nair wrote:
> 
> Hi All,
> 
> I am working on the VPP/ODP integration project. I am trying to run VPP in debug
> mode with multi-thread support. I have configured the startup conf file with
> "workers".
> 
> But as I try to configure the interface and bring it up, there is a crash
> occurring due to an assertion failure (CPU). I have seen the same issue while
> creating both "af_packet" and "odp" interfaces.
> 
> Logs:
> --
> DBGvpp# create pktio-interface name enp0s3 hw-addr 08:00:27:11:7c:1b
> odp-enp0s3
> DBGvpp# sh int
>   Name          Idx   State   Counter   Count
> local0          0     down
> odp-enp0s3      1     down
> 
> DBGvpp# sh threads
> ID  Name      Type     LWP   Sched Policy (Priority)  lcore  Core  Socket  State
> 0   vpp_main           7054  other (0)                0      0     0
> 1   vpp_wk_0  workers  7067  other (0)                1      1     0
> 2   vpp_wk_1  workers  7068  other (0)                2      2     0
> 3   stats              7069  other (0)                0      0     0
> DBGvpp# set int state odp-enp0s3 up
> 
> DBGvpp# 1: 
> /home/vppodp/odp_vpp/copy_vpp/vpp/build-data/../src/vlib/buffer_funcs.h:224 
> (vlib_buffer_set_known_state) assertion `os_get_cpu_number () == 0' fails 
> Failed to save post-mortem API trace to /tmp/api_post_mortem.7054
> Aborted (core dumped)
> Makefile:284: recipe for target 'run' failed
> make: *** [run] Error 134
> root@vppodp-VirtualBox:/home/vppodp/odp_vpp/copy_vpp/vpp# 
> 
> 
> Startup.conf
> -
> unix {
>   interactive
>   nodaemon
>   log /tmp/vpp.log
>   full-coredump
>   cli-listen localhost:5002
> }
> 
> api-trace {
>   on
> }
> 
> cpu {
>   workers 2
> 
> }
> 
> 
> lscpu:
> 
> CPU op-mode(s):32-bit, 64-bit
> Byte Order:Little Endian
> CPU(s):3
> On-line CPU(s) list:   0-2
> Thread(s) per core:1
> Core(s) per socket:3
> Socket(s): 1
> NUMA node(s):  1
> Vendor ID: GenuineIntel
> CPU family:6
> Model: 61
> Model name:Intel(R) Core(TM) i5-5300U CPU @ 2.30GHz
> Stepping:  4
> CPU MHz:   2294.686
> BogoMIPS:  4589.37
> Hypervisor vendor: KVM
> Virtualization type:   full
> L1d cache: 32K
> L1i cache: 32K
> L2 cache:  256K
> L3 cache:  3072K
> NUMA node0 CPU(s): 0-2
> Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
> mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm 
> constant_tsc rep_good nopl xtopology nonstop_tsc eagerfpu pni pclmulqdq ssse3 
> cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx rdrand hypervisor 
> lahf_lm abm 3dnowprefetch rdseed
> 
> If possible, could you please kindly suggest if anything is wrong in the startup
> file configuration. I am using an Ubuntu 16.04 VM in a VirtualBox environment.
> 
> Thanks & Regards,
> Sreejith


[vpp-dev] Query regarding running worker thread in VPP Debug mode

2017-02-13 Thread Sreejith Surendran Nair
Hi All,

I am working on the VPP/ODP integration project. I am trying to run VPP in
debug mode with multi-thread support. I have configured the startup conf
file with "workers".

But as I try to configure the interface and bring it up, there is a crash
occurring due to an assertion failure (CPU). I have seen the same issue while
creating both "af_packet" and "odp" interfaces.

Logs:
--
DBGvpp# create pktio-interface name enp0s3 hw-addr 08:00:27:11:7c:1b
odp-enp0s3
DBGvpp# sh int
  Name          Idx   State   Counter   Count
local0          0     down
odp-enp0s3      1     down

DBGvpp# sh threads
ID  Name      Type     LWP   Sched Policy (Priority)  lcore  Core  Socket  State
0   vpp_main           7054  other (0)                0      0     0
1   vpp_wk_0  workers  7067  other (0)                1      1     0
2   vpp_wk_1  workers  7068  other (0)                2      2     0
3   stats              7069  other (0)                0      0     0


DBGvpp# set int state odp-enp0s3 up
DBGvpp# 1:
/home/vppodp/odp_vpp/copy_vpp/vpp/build-data/../src/vlib/buffer_funcs.h:224
(vlib_buffer_set_known_state) assertion `os_get_cpu_number () == 0' fails
Failed to save post-mortem API trace to /tmp/api_post_mortem.7054
Aborted (core dumped)
Makefile:284: recipe for target 'run' failed
make: *** [run] Error 134
root@vppodp-VirtualBox:/home/vppodp/odp_vpp/copy_vpp/vpp#


Startup.conf
-
unix {
  interactive
  nodaemon
  log /tmp/vpp.log
  full-coredump
  cli-listen localhost:5002
}

api-trace {
  on
}

cpu {
  workers 2

}


lscpu:

CPU op-mode(s):32-bit, 64-bit
Byte Order:Little Endian
CPU(s):3
On-line CPU(s) list:   0-2
Thread(s) per core:1
Core(s) per socket:3
Socket(s): 1
NUMA node(s):  1
Vendor ID: GenuineIntel
CPU family:6
Model: 61
Model name:Intel(R) Core(TM) i5-5300U CPU @ 2.30GHz
Stepping:  4
CPU MHz:   2294.686
BogoMIPS:  4589.37
Hypervisor vendor: KVM
Virtualization type:   full
L1d cache: 32K
L1i cache: 32K
L2 cache:  256K
L3 cache:  3072K
NUMA node0 CPU(s): 0-2
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm
constant_tsc rep_good nopl xtopology nonstop_tsc eagerfpu pni pclmulqdq
ssse3 cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx rdrand
hypervisor lahf_lm abm 3dnowprefetch rdseed

If possible, could you please kindly suggest if anything is wrong in the
startup file configuration. I am using an Ubuntu 16.04 VM in a VirtualBox
environment.

Thanks & Regards,
Sreejith