Do you have a buffer manager in ODP? If yes, we might be able to integrate it into 
vpp...

> On 13 Feb 2017, at 14:58, Sreejith Surendran Nair 
> <sreejith.surendrann...@linaro.org> wrote:
> 
> Hi Damjan,
> 
> Thank you for the kind reply. Sorry, I had a doubt: in the code I observed that 
> we have support for worker threads in "af_packet" and "netmap"; is that used 
> with the dpdk platform only?
> I thought I could add similar support for ODP.
> 
> Thanks & Regards,
> Sreejith
> 
> On 13 February 2017 at 17:51, Damjan Marion <dmarion.li...@gmail.com> wrote:
> 
> Hi Sreejith,
> 
> You cannot use vpp_lite with multiple threads; the vpp_lite buffer manager is 
> not thread-safe.
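> 
> (For anyone hitting the same assert: the failing check in the log below is the 
> `os_get_cpu_number () == 0' assertion in vlib_buffer_set_known_state(). Here is 
> a rough, self-contained sketch of the pattern, using hypothetical names rather 
> than the real VPP code, to show why touching the buffer bookkeeping from a 
> worker thread aborts:)
> 
>   /* illustration only: main-thread-only bookkeeping, as in vpp_lite */
>   #include <assert.h>
> 
>   static int buffer_known_state[1024]; /* unprotected, main-thread-only data */
>   static int current_thread_index;     /* 0 = main, >0 = worker (illustration) */
> 
>   static void
>   set_known_state (int buffer_index, int state)
>   {
>     /* same idea as the ASSERT at buffer_funcs.h:224 in the log below */
>     assert (current_thread_index == 0);
>     buffer_known_state[buffer_index] = state;
>   }
> 
>   int
>   main (void)
>   {
>     current_thread_index = 0;
>     set_known_state (1, 2);    /* fine on the main thread */
>     current_thread_index = 1;  /* pretend we are vpp_wk_0 now */
>     set_known_state (1, 3);    /* assertion fires, like the crash below */
>     return 0;
>   }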
> 
> Thanks,
> 
> Damjan
> 
>> On 13 Feb 2017, at 11:28, Sreejith Surendran Nair <sreejith.surendrann...@linaro.org> wrote:
>> 
>> Hi All,
>> 
>> I am working on the VPP/ODP integration project. I am trying to run VPP in 
>> debug mode with multi-thread support, and I have configured the startup conf 
>> file with "workers".
>> 
>> But when I try to configure the interface and bring it up, a crash occurs due 
>> to an assertion failure (the cpu number check). I have seen the same issue 
>> while creating both "af_packet" and "odp" interfaces.
>> 
>> Logs:
>> ------
>> DBGvpp# create pktio-interface name enp0s3 hw-addr 08:00:27:11:7c:1b
>> odp-enp0s3
>> DBGvpp# sh int
>>               Name               Idx       State          Counter          Count     
>> local0                            0        down      
>> odp-enp0s3                        1        down      
>> 
>> DBGvpp# sh threads
>> ID     Name                Type        LWP     Sched Policy (Priority)  lcore  Core   Socket State     
>> 0      vpp_main                        7054    other (0)                0      0      0      
>> 1      vpp_wk_0            workers     7067    other (0)                1      1      0      
>> 2      vpp_wk_1            workers     7068    other (0)                2      2      0      
>> 3                          stats       7069    other (0)                0      0      0      
>> DBGvpp# set int state odp-enp0s3 up
>> 
>> DBGvpp# 1: 
>> /home/vppodp/odp_vpp/copy_vpp/vpp/build-data/../src/vlib/buffer_funcs.h:224 
>> (vlib_buffer_set_known_state) assertion `os_get_cpu_number () == 0' fails 
>> Failed to save post-mortem API trace to /tmp/api_post_mortem.7054
>> Aborted (core dumped)
>> Makefile:284: recipe for target 'run' failed
>> make: *** [run] Error 134
>> root@vppodp-VirtualBox:/home/vppodp/odp_vpp/copy_vpp/vpp# 
>> 
>> 
>> Startup.conf
>> -----------------
>> unix {
>>   interactive
>>   nodaemon
>>   log /tmp/vpp.log
>>   full-coredump
>>   cli-listen localhost:5002
>> }
>> 
>> api-trace {
>>   on
>> }
>> 
>> cpu {
>>   workers 2
>> }
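>> 
>> (Side note: an equivalent cpu stanza that pins the threads explicitly, assuming 
>> the 3 cores shown in the lscpu output below, would look something like this as 
>> far as I know; I have not tried it here:)
>> 
>> cpu {
>>   main-core 0
>>   corelist-workers 1-2
>> }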
>> 
>> 
>> lscpu:
>> --------
>> CPU op-mode(s):        32-bit, 64-bit
>> Byte Order:            Little Endian
>> CPU(s):                3
>> On-line CPU(s) list:   0-2
>> Thread(s) per core:    1
>> Core(s) per socket:    3
>> Socket(s):             1
>> NUMA node(s):          1
>> Vendor ID:             GenuineIntel
>> CPU family:            6
>> Model:                 61
>> Model name:            Intel(R) Core(TM) i5-5300U CPU @ 2.30GHz
>> Stepping:              4
>> CPU MHz:               2294.686
>> BogoMIPS:              4589.37
>> Hypervisor vendor:     KVM
>> Virtualization type:   full
>> L1d cache:             32K
>> L1i cache:             32K
>> L2 cache:              256K
>> L3 cache:              3072K
>> NUMA node0 CPU(s):     0-2
>> Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
>> mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm 
>> constant_tsc rep_good nopl xtopology nonstop_tsc eagerfpu pni pclmulqdq 
>> ssse3 cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx rdrand hypervisor 
>> lahf_lm abm 3dnowprefetch rdseed
>> 
>> If possible, could you please suggest whether anything is wrong in the startup 
>> file configuration? I am using an Ubuntu 16.04 VM in a VirtualBox environment.
>> 
>> Thanks & Regards,
>> Sreejith
> 

_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev
