Re: [dpdk-users] Mbuf Free Segfault in Secondary Application

2019-08-21 Thread Kyle Ames
Andrew,

Yep, that was exactly it... I was building the client straight out of
the main source tree and the other application from a shared library.
As soon as I made sure to build both the same way, everything worked out
perfectly.
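
As a quick way to confirm this kind of mismatch, a hedged diagnostic
sketch along these lines could be run in both the primary and the
secondary process to print which mempool ops each one resolved (the pool
name passed in is an illustrative assumption, not the name used in the
original setup):

/*
 * Hedged diagnostic sketch: print which mempool ops index/name this
 * process resolved for the shared pool. The pool name is whatever the
 * primary process used when creating the pool.
 */
#include <stdio.h>
#include <rte_mempool.h>

static void
print_pool_ops(const char *pool_name)
{
	struct rte_mempool *mp = rte_mempool_lookup(pool_name);

	if (mp == NULL) {
		printf("pool %s not found\n", pool_name);
		return;
	}
	printf("pool %s uses ops index %d (%s)\n", pool_name,
	       mp->ops_index, rte_mempool_get_ops(mp->ops_index)->name);
}

If the two processes report different ops names for the same pool, the
drivers were registered in a different order in each process.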

Thanks!!

-Kyle

On 8/20/19, 4:50 PM, "Andrew Rybchenko"  wrote:

Hello,

On 8/20/19 7:23 PM, Kyle Ames wrote:
> I'm running into an issue with primary/secondary DPDK processes. I am
> using DPDK 19.02.
>
> I'm trying to explore a setup where one process pulls packets off the
> NIC and then sends them over an rte_ring for additional processing.
> Unlike the client_server_mp example, I don't need to send the packets
> out a given port in the client. Once the client is done with them, they
> can just go back into the mbuf mempool. In order to test this, I took
> the mp_client example and modified it to immediately call
> rte_pktmbuf_free on the packet and not do anything else with it after
> receiving the packet over the shared ring.
>
> This works fine for the first 1.5*N packets, where N is the value set
> for the per-lcore cache. Calling rte_pktmbuf_free on the next packet
> will segfault in bucket_enqueue. (backtrace from GDB below)
>
> Program received signal SIGSEGV, Segmentation fault.
> 0x00593822 in bucket_enqueue ()
> Missing separate debuginfos, use: debuginfo-install
> glibc-2.17-196.el7_4.2.x86_64 libgcc-4.8.5-16.el7.x86_64
> numactl-libs-2.0.9-6.el7_2.x86_64
> (gdb) backtrace
> #0  0x00593822 in bucket_enqueue ()

I doubt that the bucket mempool is used intentionally. If so, I guess
shared libraries are used, the mempool libraries are picked up in a
different order, and the drivers got different mempool ops indexes. As
far as I remember, the documentation says that shared libraries should
be specified in the same order in the primary and secondary processes.
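
To make that concrete, a minimal sketch of giving the secondary process
the same driver load order as the primary might look like the following.
The library names, core assignment, and argument list are illustrative
assumptions, not taken from the original setup:

/*
 * Hypothetical sketch: the secondary process passes the mempool driver
 * shared objects to the EAL in the same order as the primary, so both
 * resolve the same mempool ops indexes.
 */
#include <rte_eal.h>

static int
init_eal_secondary(void)
{
	char *argv[] = {
		"mp_client",
		"--proc-type=secondary",
		"-l", "2",
		/* same -d order as used when launching the primary */
		"-d", "librte_mempool_ring.so",
		"-d", "librte_mempool_bucket.so",
	};
	int argc = sizeof(argv) / sizeof(argv[0]);

	return rte_eal_init(argc, argv);
}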

Andrew.

> #1  0x004769f1 in rte_mempool_ops_enqueue_bulk (n=1,
> obj_table=0x7fffe398, mp=<optimized out>)
>  at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:704
> #2  __mempool_generic_put (cache=<optimized out>, n=1,
> obj_table=0x7fffe398, mp=<optimized out>)
>  at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1263
> #3  rte_mempool_generic_put (cache=<optimized out>, n=1,
> obj_table=0x7fffe398, mp=<optimized out>)
>  at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1285
> #4  rte_mempool_put_bulk (n=1, obj_table=0x7fffe398, mp=<optimized out>)
>  at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1308
> #5  rte_mempool_put (obj=0x100800040, mp=<optimized out>)
>  at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1326
> #6  rte_mbuf_raw_free (m=0x100800040)
>  at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mbuf.h:1185
> #7  rte_pktmbuf_free_seg (m=<optimized out>)
>  at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mbuf.h:1807
> #8  rte_pktmbuf_free (m=0x100800040)
>  at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mbuf.h:1828
> #9  main (argc=<optimized out>, argv=<optimized out>)
>  at /home/kames/code/3rdparty/dpdk-hack/dpdk/examples/multi_process/client_server_mp/mp_client/client.c:90
>
> I changed the size a few times, and the packet in the client that
> segfaults on free is always the 1.5*N'th packet. This happens even if I
> set the cache_size to zero on mbuf pool creation. (The first mbuf free
> immediately segfaults.)
>
> I'm a bit stuck at the moment. There's clearly a pattern/interaction of
> some sort, but I don't know what it is or what to do about it. Is this
> even the right approach for such a scenario?
>
> -Kyle Ames
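
For reference, a minimal sketch of the receive-and-free client loop
described above could look like this; the ring pointer is assumed to
have been obtained with rte_ring_lookup() in the secondary process, and
the burst size is an arbitrary illustrative choice:

/*
 * Sketch of the modified mp_client loop described above: pull mbufs off
 * the shared ring and free them straight back to their mempool.
 */
#include <rte_ring.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void
client_free_loop(struct rte_ring *rx_ring)
{
	struct rte_mbuf *bufs[BURST_SIZE];
	unsigned int nb, i;

	for (;;) {
		nb = rte_ring_dequeue_burst(rx_ring, (void **)bufs,
					    BURST_SIZE, NULL);
		for (i = 0; i < nb; i++)
			rte_pktmbuf_free(bufs[i]); /* back to the mbuf pool */
	}
}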



Re: [dpdk-users] rte_latency_stats (metrics library)

2019-08-21 Thread Pattan, Reshma


> -Original Message-
> From: Arvind Narayanan [mailto:webguru2...@gmail.com]
> Sent: Saturday, August 3, 2019 11:27 PM
> To: users 
> Cc: Pattan, Reshma 
> Subject: rte_latency_stats (metrics library)
> 
> Hi,
> 
> I am trying to play with the latency statistics library (a wrapper
> around the rte_metrics lib). Whenever I try to retrieve values using
> rte_latencystats_get(), all the stats are 0.
> Even after calling rte_latencystats_update(), there is still no effect
> on the stats.
> 
> Does this library need NIC driver support?
> Any help would be appreciated.
> 

Hi,

Do you have traffic coming into the l2fwd application? If not, you
should first get traffic running from a traffic generator.


1) If you already have traffic, can you also cross-check by fetching
these latency stats through the rte_metrics APIs instead of the latency
stats APIs (see the sketch after this list)? Equally, you can run the
dpdk-procinfo secondary application to show you the stats, instead of
calling the latency stats get and metrics get APIs yourself.

2) Also, in DPDK the testpmd application has been updated to initiate
latency stats calculation, and to see the stats you can run the
dpdk-procinfo secondary application alongside it. This cross-checks that
traffic is reaching the DPDK application and that the stats are being
calculated. You can try this second method if you don't see success with
l2fwd.

You can refer to the testpmd and dpdk-procinfo application code to see
how this is done.
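
As a rough illustration of option 1), a sketch of dumping whatever
metrics are registered might look like the code below. Error handling is
trimmed, and the assumption that the latency metrics appear under
RTE_METRICS_GLOBAL should be verified against your DPDK version:

/*
 * Sketch: dump all registered metrics via the rte_metrics API.
 * Assumes rte_latencystats_init() has already run and that the latency
 * metrics are visible under RTE_METRICS_GLOBAL (verify against your
 * DPDK version).
 */
#include <stdio.h>
#include <inttypes.h>
#include <rte_metrics.h>

static void
dump_metrics(void)
{
	int count = rte_metrics_get_names(NULL, 0); /* number of metrics */
	if (count <= 0) {
		printf("no metrics registered\n");
		return;
	}

	struct rte_metric_name names[count];
	struct rte_metric_value values[count];

	rte_metrics_get_names(names, count);
	int n = rte_metrics_get_values(RTE_METRICS_GLOBAL, values, count);

	for (int i = 0; i < n; i++)
		printf("%s: %" PRIu64 "\n",
		       names[values[i].key].name, values[i].value);
}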

Thanks,
Reshma


[dpdk-users] running parallel 2 different flows with different rate

2019-08-21 Thread Sara Gittlin
Hi,
Can I run 2 different range groups in parallel with different rates?
For example:
- range 1: 1.1.1.1 (start) - 1.255.255.255 (stop) is transmitting to
2.1.1.1 at a rate of 10 percent

- range 2: 3.3.3.3 (start) - 3.255.255.255 (stop) is transmitting to
4.1.1.1 at a rate of 1 percent

Thank you
-Sara