[lng-odp] can't configure ODP with DPDK pktio using DPDK package in Ubuntu

2018-10-29 Thread gyanesh patra
Hi,
I see that newer Ubuntu releases ship an updated version of DPDK. How can I
build/run ODP against this packaged DPDK in Ubuntu? For me it fails at the
configure step.
*Steps:*
 1) Install dpdk-dev, libdpdk-dev, dpdk
 2) ./configure --prefix=/root/odp/build
--with-dpdk-path=/usr/share/dpdk/arm64-armv8a-linuxapp-gcc/

*I am getting an error at the configure step:*

configure: Using shared DPDK library found at
/usr/share/dpdk/arm64-armv8a-linuxapp-gcc//lib
checking rte_config.h usability... no
checking rte_config.h presence... no
checking for rte_config.h... no
checking for rte_eal_init in -ldpdk ... yes
configure: error: in `/root/odp':
configure: error: can't find DPDK
See `config.log' for more details


*Output from config.log file:*

configure:20358: Using shared DPDK library found at
/usr/share/dpdk/arm64-armv8a-linuxapp-gcc//lib
configure:20374: checking rte_config.h usability
configure:20374: gcc -c -g -O2 -pthread -isystem
/usr/share/dpdk/arm64-armv8a-linuxapp-gcc//include/dpdk  conftest.c >&5
conftest.c:107:10: fatal error: rte_config.h: No such file or directory
 #include <rte_config.h>
          ^~~~~~~~~~~~~~
compilation terminated.
configure:20374: $? = 1


*The rte_config.h file is found at:*

root@cavium:~/odp# locate rte_config.h
/usr/include/aarch64-linux-gnu/dpdk/rte_config.h
/usr/share/dpdk/arm64-armv8a-linuxapp-gcc/include/rte_config.h


This is an Ubuntu 18.04, DPDK 17.11, AArch64 system.

Thanks,
P Gyanesh Kumar Patra


[lng-odp] packet pool create failed on odp-thunder

2018-10-25 Thread gyanesh patra
Hi,
I am trying to run the pktgen example on odp-thunderx. It worked a couple
of times, but after that it keeps failing with the error below, and this
persists across reboots. I am running this on Ubuntu 18.04.1 LTS.

root@cavium:~/odp-thunderx/build/bin# ./pktgen -I vfio:30 -m r

ODP system info
---
ODP API version: 1.11.0
CPU max freq (hz): 20
Cache line size: 128
CPU count:   96

Running ODP appl: "pktgen"
-
IF-count:1
Using IFs:   vfio:30
Mode:0(0)

num worker threads: 1
first CPU:  11
cpu mask:   0x800
thunder/odp_shared_memory.c:542:hp_contig_alloc():open: Too many open files
thunder/odp_pool.c:557:odp_pool_create_internal():Error while allocating
shared memory for pool
pktgen.c:788:main():Error: packet pool create failed.
root@cavium:~/odp-thunderx/build/bin#

Is there any way I can avoid this and proceed with testing?
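One thing I plan to try is raising the file-descriptor limit (ulimit -n) before
starting the app, on the assumption that the open() failure (EMFILE, "Too many
open files") really is the per-process descriptor limit rather than a leak in
the huge page allocator. A rough sketch of doing the same from inside the
application before odp_init_global() (not taken from odp-thunderx):

#include <stdio.h>
#include <sys/resource.h>

/* Rough sketch, not from odp-thunderx: raise RLIMIT_NOFILE (same effect as
 * `ulimit -n`) before ODP init, assuming the EMFILE comes from the
 * per-process descriptor limit rather than a descriptor leak. */
static int raise_fd_limit(rlim_t wanted)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
		return -1;

	if (rl.rlim_cur < wanted) {
		rl.rlim_cur = (wanted < rl.rlim_max) ? wanted : rl.rlim_max;
		if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
			perror("setrlimit(RLIMIT_NOFILE)");
			return -1;
		}
	}
	return 0;
}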

P Gyanesh Kumar Patra


Re: [lng-odp] ODP crash at buffer_alloc_multi() while inserting into iplookuptable

2018-10-18 Thread gyanesh patra
Hi,
We are curious whether there is any solution in the pipeline for this issue with
the iplookuptable insert, i.e. the crash at buffer_alloc_multi(). Any pointer
would be really helpful.
If it helps, we can create an issue on GitHub too.
Thanks for the help.

Regards,
P Gyanesh Kumar Patra

On Tue, Sep 11, 2018 at 12:17 PM Maxim Uvarov 
wrote:

> ODP prints happen at the places where ODP_DBG("print something") is placed.
> It's just additional debug information.
> Also, you can increase the stack with 'ulimit -s unlimited' and run valgrind on
> the app. But when I generated a core by adding random values to the table, I saw
> a segfault and gdb pointed to that function.
>
> Maxim.
>
>
>
>
> On 11 September 2018 at 18:03, Fabricio Rodríguez 
> wrote:
>
>> Hi Maxim,
>>
>> I have tried the patch that you sent, but the issue continues.
>>
>> I was also trying to enable the debug flags in ODP, but for some reason I
>> cannot see any prints. Do you have any suggestion on how to see the ODP debug
>> prints when using the ODP library from an external application?
>>
>> Regards,
>>
>> Fabricio
>>
>>
>> El mar., 11 sept. 2018 a las 10:44, Maxim Uvarov (<
>> maxim.uva...@linaro.org>) escribió:
>>
>>> Gyanesh,
>>>
>>> I tried to add some random data to the ip lookup table and found that it
>>> breaks its own stack due to recursion.
>>>
>>> I have no idea if it's your case or not. Probably not because you have
>>> buffer_alloc_multi in stack trace and I do not see that.
>>>
>>> You can test this patch:
>>> https://github.com/Linaro/odp/pull/700
>>>
>>> Maxim.
>>>
>>> On 10.09.2018 20:01, gyanesh patra wrote:
>>> > We tried it as:
>>> > #define ENTRY_NUM_SUBTREE (1 << 12)
>>> >
>>> > But it didn't work. We couldn't increase it any further without adding
>>> > more RAM to the system.
>>> > One point to consider: this same thing was working with the ODP
>>> > 1.16 code, but with ODP 1.19 it is not working.
>>> >
>>> > P Gyanesh Kumar Patra
>>> >
>>> >
>>> > On Mon, Sep 10, 2018 at 12:31 PM Maxim Uvarov >> > <mailto:maxim.uva...@linaro.org>> wrote:
>>> >
>>> > did you try to increase?
>>> >
>>> > /* The size of one L2\L3 subtree */
>>> > #define ENTRY_NUM_SUBTREE (1 << 8)
>>> > ./helper/iplookuptable.c
>>> >
>>> > On 10 September 2018 at 18:27, gyanesh patra
>>> > mailto:pgyanesh.pa...@gmail.com>>
>>> wrote:
>>> >
>>> > We are using the ODP library from an external application, hence I
>>> > don't have simple test code to reproduce it.
>>> > But to give a perspective:
>>> > - the value size is 12
>>> > - the ip prefix is 32
>>> > The crash is happening around the 159th entry. If the prefix is
>>> > changed to 16, the crash happens at around the 496th entry.
>>> > Regards,
>>> > P Gyanesh Kumar Patra
>>> >
>>> >
>>> > On Mon, Sep 10, 2018 at 12:16 PM Maxim Uvarov
>>> > mailto:maxim.uva...@linaro.org>>
>>> wrote:
>>> >
>>> > do you have some test code to reproduce it?
>>> >
>>> > On 10 September 2018 at 18:06, gyanesh patra
>>> > >> > <mailto:pgyanesh.pa...@gmail.com>> wrote:
>>> >
>>> > Hi,
>>> > ODP is crashing at buffer_alloc_multi() while
>>> > inserting into iplookuptable.
>>> >
>>> > The backtrace is as below: (gdb) bt #0
>>> buffer_alloc_multi
>>> > (pool=0x7fffd5420c00,
>>> > buf_hdr=buf_hdr@entry=0x7fff55fa8bb0,
>>> > max_num=max_num@entry=1) at odp_pool.c:700 #1
>>> > 0x00433083 in
>>> > odp_buffer_alloc (pool_hdl=pool_hdl@entry=0x9) at
>>> > odp_pool.c:861 #2
>>> > 0x00703732 in cache_alloc_new_pool
>>> > (type=CACHE_TYPE_TRIE,
>>> > tbl=) at iplookuptable.c:223 #3
>>> > cache_get_buffer
>>> > (type=CACHE_TYPE_TRIE, tbl=) at
>>> > iplookuptable.c:248 #4
>>> > trie_insert_node (nexthop=,
>>> > cidr=,
>>> > ip=, root=,
>>> > tbl=) at
>>> > iplookuptable.c:317 #5 odph_iplookup_table_put_value
>>> > (tbl=,
>>> > key=, value=) at
>>> > iplookuptable.c:686
>>> >
>>> > Am I looking at a limitation of the iplookuptable
>>> > implementation here? If any
>>> > other details are needed, please let us know.
>>> >
>>> > Regards,
>>> > P Gyanesh Kumar Patra
>>> >
>>> >
>>> >
>>>
>>>
>


Re: [lng-odp] odp with dpdk pktio gives error with larger packets - 'Segmented buffers not supported'

2018-10-18 Thread gyanesh patra
Hi Matias,
This PR 731 fixed the issue. Thanks a lot.

Regards,
P Gyanesh Kumar Patra


On Thu, Oct 18, 2018 at 4:40 AM Elo, Matias (Nokia - FI/Espoo) <
matias@nokia.com> wrote:

> Thanks! I think I figured out the problem. Some DPDK NICs require that the
> buffer length is at least 2kB + headroom to not segment standard ethernet
> frames. This PR should fix the issue:
> https://github.com/Linaro/odp/pull/731 . Please let me know if this fixed
> your problem.
>
> -Matias
>
>


Re: [lng-odp] odp with dpdk pktio gives error with larger packets - 'Segmented buffers not supported'

2018-10-16 Thread gyanesh patra
Hi Maxim,
Increasing POOL_SEG_LEN worked, but I am not sure how to calculate the
necessary value to use.
I was using the values from the odp_l2fwd example before, but now I had to
increase it up to 2200 for it to work.
Is there any guideline on how to calculate this value? And does it have
any impact on performance?
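For reference, what I do is roughly the following (a sketch using the generic
odp_pool_param_t fields; the "2048 + headroom" sizing is only the rule of thumb
from this thread, not a value taken from the odp_l2fwd sources):

#include <odp_api.h>

/* Sketch only: create a packet pool whose first segment is large enough for a
 * full 1518-byte frame plus headroom, so dpdk pktio does not need to segment
 * standard Ethernet frames. The 2048 + 128 figure is an assumption based on
 * this thread. */
static odp_pool_t create_pkt_pool(void)
{
	odp_pool_param_t params;

	odp_pool_param_init(&params);
	params.type        = ODP_POOL_PACKET;
	params.pkt.seg_len = 2048 + 128;
	params.pkt.len     = 2048 + 128;
	params.pkt.num     = 8192;	/* arbitrary pool size for the example */

	return odp_pool_create("pkt_pool", &params);
}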

Regarding the examples, I tried odp_l2fwd_simple and odp_switch and faced
the same problem. In my case the "odp_l2fwd" example never receives any
packets, so I have not been able to test it. If you can give any input
regarding this, it would be helpful too.
Thanks for your help.

Regards,
P Gyanesh Kumar Patra


On Tue, Oct 16, 2018 at 3:36 PM Maxim Uvarov 
wrote:

> DPDK, like ODP, can have packets which are not in physically contiguous memory,
> i.e. a packet can be split across several memory segments. That is not supported
> by the current code, hence this warning. I think we have a dpdk pktio
> validation test and it works with large packets, but to do that you need to
> be sure that you created the pool with the right parameters. In your case
> POOL_SEG_LEN has to be increased.
>
> Also you can try the more featured example: ./test/performance/odp_l2fwd
>
> Best Regards,
> Maxim.
>
>
> On Tue, 16 Oct 2018 at 20:49, gyanesh patra 
> wrote:
>
>> Hi,
>> I am facing a problem while using the ODP master branch with DPDK pktio &
>> zero-pkt-copy as below:
>>
>> ODP/bin/# ./odp_l2fwd_simple ./odp_l2fwd_simple
>>
>> pktio/dpdk.c:851:mbuf_to_pkt_zero():Segmented buffers not supported
>> pktio/dpdk.c:851:mbuf_to_pkt_zero():Segmented buffers not supported
>> pktio/dpdk.c:851:mbuf_to_pkt_zero():Segmented buffers not supported
>> pktio/dpdk.c:851:mbuf_to_pkt_zero():Segmented buffers not supported
>>
>> This error is present for dpdk pktio only. It appears with larger packet
>> sizes like 1518 bytes and 1280 bytes, but everything works fine with 1024
>> bytes and smaller packets.
>>
>> I have verified that the packets have the IP don't-fragment flag set, and
>> Wireshark doesn't show any abnormality in the pcap.
>> Is it broken, or do we need to specify some extra flags?
>>
>> I am on:
>> commit 570758a22fd0d6e2b2a73eb8ed0a8360a5b0ef32
>> Author: Matias Elo 
>> Date:   Tue Oct 2 14:13:35 2018 +0300
>>linux-gen: ring: allocate global data from shm
>>
>>
>> Thanks,
>> P Gyanesh Kumar Patra
>>
>


[lng-odp] odp with dpdk pktio gives error with larger packets - 'Segmented buffers not supported'

2018-10-16 Thread gyanesh patra
Hi,
I am facing a problem while using the ODP master branch with DPDK pktio &
zero-pkt-copy as below:

ODP/bin/# ./odp_l2fwd_simple ./odp_l2fwd_simple

pktio/dpdk.c:851:mbuf_to_pkt_zero():Segmented buffers not supported
pktio/dpdk.c:851:mbuf_to_pkt_zero():Segmented buffers not supported
pktio/dpdk.c:851:mbuf_to_pkt_zero():Segmented buffers not supported
pktio/dpdk.c:851:mbuf_to_pkt_zero():Segmented buffers not supported

This error is present for dpdk pktio only. It appears with larger packet
sizes like 1518 bytes and 1280 bytes, but everything works fine with 1024
bytes and smaller packets.

I have verified that the packets have the IP don't-fragment flag set, and
Wireshark doesn't show any abnormality in the pcap.
Is it broken, or do we need to specify some extra flags?
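For reference, one way to check what segment sizes the platform actually
supports is odp_pool_capability(); a minimal sketch, assuming the standard
capability API and that ODP has already been initialized:

#include <inttypes.h>
#include <stdio.h>
#include <odp_api.h>

/* Sketch: query the pool capability to see the min/max packet segment length
 * the platform supports, which tells whether a 1518-byte frame can fit in a
 * single segment. */
static void print_seg_limits(void)
{
	odp_pool_capability_t capa;

	if (odp_pool_capability(&capa) != 0) {
		printf("odp_pool_capability() failed\n");
		return;
	}

	printf("pkt.min_seg_len: %" PRIu32 "\n", capa.pkt.min_seg_len);
	printf("pkt.max_seg_len: %" PRIu32 "\n", capa.pkt.max_seg_len);
	printf("pkt.max_len:     %" PRIu32 "\n", capa.pkt.max_len);
}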

I am on:
commit 570758a22fd0d6e2b2a73eb8ed0a8360a5b0ef32
Author: Matias Elo 
Date:   Tue Oct 2 14:13:35 2018 +0300
   linux-gen: ring: allocate global data from shm


Thanks,
P Gyanesh Kumar Patra


[lng-odp] ODP crash at buffer_alloc_multi() while inserting into iplookuptable

2018-09-10 Thread gyanesh patra
Hi,
ODP is crashing at buffer_alloc_multi() while inserting into iplookuptable.

The backtrace is as below:
(gdb) bt
#0  buffer_alloc_multi (pool=0x7fffd5420c00, buf_hdr=buf_hdr@entry=0x7fff55fa8bb0, max_num=max_num@entry=1) at odp_pool.c:700
#1  0x00433083 in odp_buffer_alloc (pool_hdl=pool_hdl@entry=0x9) at odp_pool.c:861
#2  0x00703732 in cache_alloc_new_pool (type=CACHE_TYPE_TRIE, tbl=) at iplookuptable.c:223
#3  cache_get_buffer (type=CACHE_TYPE_TRIE, tbl=) at iplookuptable.c:248
#4  trie_insert_node (nexthop=, cidr=, ip=, root=, tbl=) at iplookuptable.c:317
#5  odph_iplookup_table_put_value (tbl=, key=, value=) at iplookuptable.c:686

Am I looking at a limitation of the iplookuptable implementation here? If any
other details are needed, please let us know.
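In the meantime, the kind of insert loop we run, reconstructed as a standalone
sketch: the helper names and the odph_iplookup_prefix_t fields are written from
memory of the ODP helper header, so treat the create() arguments and field
names as assumptions, not a verified test case.

#include <odp_api.h>
#include <odp/helper/odph_api.h>

/* Hedged reproduction sketch: insert ~200 /32 prefixes with a 12-byte value,
 * which is roughly where the crash is seen (~159th entry). */
int main(void)
{
	odp_instance_t inst;
	odph_table_t tbl;
	uint8_t value[12] = {0};
	uint32_t i;

	if (odp_init_global(&inst, NULL, NULL) ||
	    odp_init_local(inst, ODP_THREAD_CONTROL))
		return -1;

	tbl = odph_iplookup_table_create("repro", 0, 0, sizeof(value));
	if (tbl == NULL)
		return -1;

	for (i = 0; i < 200; i++) {
		odph_iplookup_prefix_t key;

		key.ip   = 0x0a000000 + (i << 8);	/* 10.0.x.0 */
		key.cidr = 32;				/* /32, as in the report */
		if (odph_iplookup_table_put_value(tbl, &key, value) < 0)
			break;
	}

	odph_iplookup_table_destroy(tbl);
	odp_term_local();
	odp_term_global(inst);
	return 0;
}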

Regards,
P Gyanesh Kumar Patra


Re: [lng-odp] Suspected SPAM - Re: latency calculation with netmap pkt i/o fails with oversized packet debug msg

2018-07-27 Thread gyanesh patra
Thanks, I'll check it out.
P Gyanesh Kumar Patra


On Fri, Jul 27, 2018 at 4:45 AM Elo, Matias (Nokia - FI/Espoo) <
matias@nokia.com> wrote:

>
> >> On 26 Jul 2018, at 21:24, gyanesh patra 
> wrote:
> >>
> >> I verified the throughput over the link with/without this debug message.
> >> With DEBUG message: 10-15 Mbps
> >> without DEBUG message: 1500 Mbps
> >>
>
> This number seems still quite low. I ran a quick test on my development
> server (Xeon E5-2697v3@ 2.60GHz, XL710 NICs) and measured 3.8Gbps.
>
> For optimal performance you should build ODP without ABI compatibility
> (--disable-abi-compat) to enable inlining. In case of netmap pktio, both
> netmap module and modified driver should be loaded, and NIC flow control
> should be disabled.
>
> Regards,
> Matias
>
>


Re: [lng-odp] latency calculation with netmap pkt i/o fails with oversized packet debug msg

2018-07-26 Thread gyanesh patra
I verified the throughput over the link with/without this debug message:
With DEBUG message: 10-15 Mbps
Without DEBUG message: 1500 Mbps

Because this debug message goes to stdout, the throughput drops to the
minimum and the latency can't be calculated properly either.
Should I just remove the debug message from the netmap.c file? Does it
serve any purpose?
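If the message is only meant for real drops, moving it to the else path would
keep the receive fast path quiet. A sketch based on the netmap_recv_desc()
snippet quoted further down this thread (untested):

if (odp_likely(ring->slot[slot_id].len <= mtu)) {
	slot_tbl[num_rx].buf = buf;
	slot_tbl[num_rx].len = ring->slot[slot_id].len;
	num_rx++;
} else {
	/* only log when the frame is actually discarded */
	ODP_DBG("dropped oversized packet %u > %" PRIu32 "\n",
		(unsigned)ring->slot[slot_id].len, mtu);
}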

Regards,
Gyanesh

On Thu, Jul 26, 2018 at 11:25 AM Maxim Uvarov 
wrote:

>
>
> On 26 July 2018 at 16:01, gyanesh patra  wrote:
>
>> Hi,
>> Here is the output for the debug messages as advised:
>> For this code:
>> --
>>  541 ODP_DBG("MTU: %" PRIu32 "\n", mtu);
>>
>>  542 ODP_DBG("NM buf_size: %" PRIu32 "\n", buf_size);
>>
>>  543 pkt_nm->mtu = (mtu < buf_size) ? mtu : buf_size;
>>
>> Output:
>> -
>> netmap interface: eth5
>>   num_rx_desc: 0
>>   num_tx_desc: 0
>> pktio/netmap.c:541:netmap_open():MTU: 1514
>> pktio/netmap.c:542:netmap_open():NM buf_size: 2048
>> pktio/netmap.c:567:netmap_open():netmap pktio eth5 does not support
>> statistics counters
>> odp_packet_io.c:295:odp_pktio_open():interface: eth5, driver: netmap
>>
>> =
>> For this code:
>> --
>>  839 if (odp_likely(ring->slot[slot_id].len <= mtu)) {
>>
>>  840 slot_tbl[num_rx].buf = buf;
>>
>>  841 slot_tbl[num_rx].len = ring->slot[slot_id].len;
>>
>>  842 ODP_DBG("dropped oversized packet %d
>> %d\n",ring->slot[slot_id].len, mtu);
>>  843 num_rx++;
>>
>>  844 }
>>
>> Output:
>> 
>> pktio/netmap.c:842:netmap_recv_desc():dropped oversized packet 60 1514
>> pktio/netmap.c:842:netmap_recv_desc():dropped oversized packet 60 1514
>>
>>
> Are packets dropped, or do you just see this message?
>
> if (odp_likely(ring->slot[slot_id].len <= mtu)) {
> slot_tbl[num_rx].buf = buf;
> slot_tbl[num_rx].len = ring->slot[slot_id].len;
> ODP_DBG("dropped oversized packet\n");
> num_rx++;
> }
>
> num_rx is increasing, then the packet is wrapped into odp:
> if (num_rx) {
> return netmap_pkt_to_odp(pktio_entry, pkt_table, slot_tbl,
> num_rx, ts);
>
> it looks like the message is just confusing. The packet is less than the mtu.
>
>
>
>
>> If anything else is required, i can get those details too.
>>
>> Thanks,
>> P Gyanesh Kumar Patra
>>
>>
>> On Thu, Jul 26, 2018 at 3:36 AM Elo, Matias (Nokia - FI/Espoo) <
>> matias@nokia.com> wrote:
>>
>>>
>>>
>>> > On 25 Jul 2018, at 17:11, Maxim Uvarov 
>>> wrote:
>>> >
>>> > For quick look it looks like mtu is not set correctly on open(). Can
>>> you try this patch:
>>> >
>>> > diff --git a/platform/linux-generic/pktio/netmap.c
>>> b/platform/linux-generic/pktio/netmap.c
>>> > index 0da2b7a..d4db0af 100644
>>> > --- a/platform/linux-generic/pktio/netmap.c
>>> > +++ b/platform/linux-generic/pktio/netmap.c
>>> > @@ -539,6 +539,7 @@ static int netmap_open(odp_pktio_t id ODP_UNUSED,
>>> pktio_entry_t *pktio_entry,
>>> > goto error;
>>> > }
>>> > pkt_nm->mtu = (mtu < buf_size) ? mtu : buf_size;
>>> > +   pkt_priv(pktio_entry)->mtu = pkt_nm->mtu;
>>>
>>>
>>> pkt_netmap_t *pkt_nm = pkt_priv(pktio_entry), so this is unnecessary.
>>>
>>>
>>> >>
>>> >>
> >>> >> Is this a known issue or am I missing something?
>>> >>
>>>
>>>
>>> As far as I can see the problem is caused by reading interface MTU
>>> incorrectly or netmap using unusually small buffers (assuming moongen sends
>>> packets smaller than MTU). The following patch should help debug the issue.
>>>
>>> -Matias
>>>
>>> diff --git a/platform/linux-generic/pktio/netmap.c
>>> b/platform/linux-generic/pktio/netmap.c
>>> index 0da2b7afd..3e0a17542 100644
>>> --- a/platform/linux-generic/pktio/netmap.c
>>> +++ b/platform/linux-generic/pktio/netmap.c
>>> @@ -538,6 +538,10 @@ static int netmap_open(odp_pktio_t id ODP_UNUSED,
>>> pktio_entry_t *pktio_entry,
>>> ODP_ERR("Unable to read interface MTU\n");
>>> goto error;
>>> }
>>> +
>>> +   ODP_DBG("MTU: %" PRIu32 "\n", mtu);
>>> +   ODP_DBG("NM buf_size: %" PRIu32 "\n", buf_size);
>>> +
>>> pkt_nm->mtu = (mtu < buf_size) ? mtu : buf_size;
>>>
>>> /* Check if RSS is supported. If not, set 'max_input_queues' to
>>> 1. */
>>>
>>>
>>>
>


Re: [lng-odp] latency calculation with netmap pkt i/o fails with oversized packet debug msg

2018-07-26 Thread gyanesh patra
Hi,
Here is the output for the debug messages as advised:
For this code:
--
 541 ODP_DBG("MTU: %" PRIu32 "\n", mtu);

 542 ODP_DBG("NM buf_size: %" PRIu32 "\n", buf_size);

 543 pkt_nm->mtu = (mtu < buf_size) ? mtu : buf_size;

Output:
-
netmap interface: eth5
  num_rx_desc: 0
  num_tx_desc: 0
pktio/netmap.c:541:netmap_open():MTU: 1514
pktio/netmap.c:542:netmap_open():NM buf_size: 2048
pktio/netmap.c:567:netmap_open():netmap pktio eth5 does not support
statistics counters
odp_packet_io.c:295:odp_pktio_open():interface: eth5, driver: netmap

=
For this code:
--
 839 if (odp_likely(ring->slot[slot_id].len <= mtu)) {

 840 slot_tbl[num_rx].buf = buf;

 841 slot_tbl[num_rx].len = ring->slot[slot_id].len;

 842 ODP_DBG("dropped oversized packet %d
%d\n",ring->slot[slot_id].len, mtu);
 843 num_rx++;

 844 }

Output:

pktio/netmap.c:842:netmap_recv_desc():dropped oversized packet 60 1514
pktio/netmap.c:842:netmap_recv_desc():dropped oversized packet 60 1514

If anything else is required, I can get those details too.

Thanks,
P Gyanesh Kumar Patra


On Thu, Jul 26, 2018 at 3:36 AM Elo, Matias (Nokia - FI/Espoo) <
matias@nokia.com> wrote:

>
>
> > On 25 Jul 2018, at 17:11, Maxim Uvarov  wrote:
> >
> > For quick look it looks like mtu is not set correctly on open(). Can you
> try this patch:
> >
> > diff --git a/platform/linux-generic/pktio/netmap.c
> b/platform/linux-generic/pktio/netmap.c
> > index 0da2b7a..d4db0af 100644
> > --- a/platform/linux-generic/pktio/netmap.c
> > +++ b/platform/linux-generic/pktio/netmap.c
> > @@ -539,6 +539,7 @@ static int netmap_open(odp_pktio_t id ODP_UNUSED,
> pktio_entry_t *pktio_entry,
> > goto error;
> > }
> > pkt_nm->mtu = (mtu < buf_size) ? mtu : buf_size;
> > +   pkt_priv(pktio_entry)->mtu = pkt_nm->mtu;
>
>
> pkt_netmap_t *pkt_nm = pkt_priv(pktio_entry), so this is unnecessary.
>
>
> >>
> >>
> >> Is this a known issue or am I missing something?
> >>
>
>
> As far as I can see the problem is caused by reading interface MTU
> incorrectly or netmap using unusually small buffers (assuming moongen sends
> packets smaller than MTU). The following patch should help debug the issue.
>
> -Matias
>
> diff --git a/platform/linux-generic/pktio/netmap.c
> b/platform/linux-generic/pktio/netmap.c
> index 0da2b7afd..3e0a17542 100644
> --- a/platform/linux-generic/pktio/netmap.c
> +++ b/platform/linux-generic/pktio/netmap.c
> @@ -538,6 +538,10 @@ static int netmap_open(odp_pktio_t id ODP_UNUSED,
> pktio_entry_t *pktio_entry,
> ODP_ERR("Unable to read interface MTU\n");
> goto error;
> }
> +
> +   ODP_DBG("MTU: %" PRIu32 "\n", mtu);
> +   ODP_DBG("NM buf_size: %" PRIu32 "\n", buf_size);
> +
> pkt_nm->mtu = (mtu < buf_size) ? mtu : buf_size;
>
> /* Check if RSS is supported. If not, set 'max_input_queues' to 1.
> */
>
>
>


[lng-odp] latency calculation with netmap pkt i/o fails with oversized packet debug msg

2018-07-25 Thread gyanesh patra
I am trying to run MoonGen to calculate the latency on the link. I compiled
ODP with netmap.
I ran example/l2-load-latency.lua from MoonGen and odp_l2fwd_simple from
ODP, and found that most of the packets are dropped at the RX side of ODP.

ODP--
root@test:~/gyn/odp/buildN/bin# ./odp_l2fwd_simple eth5 eth6
01:02:03:04:05:06 07:08:09:0a:0b:0c

pktio/netmap.c:839:netmap_recv_desc():dropped oversized packet
pktio/netmap.c:839:netmap_recv_desc():dropped oversized packet
---

---MOONGEN---
root@ubuntu:/home/ubuntu# ./MoonGen/build/MoonGen
./MoonGen/examples/l2-load-latency.lua 0 1
---

---ubuntu--
root@test:# ifconfig eth5
eth5  Link encap:Ethernet  HWaddr a0:36:9f:3e:95:34
  inet6 addr: fe80::a236:9fff:fe3e:9534/64 Scope:Link
  UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
  RX packets:8288 errors:21532426 dropped:*1229684107* overruns:0
frame:21532426
  TX packets:28 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:742932 (742.9 KB)  TX bytes:2376 (2.3 KB)

root@test:# ifconfig eth6
eth6  Link encap:Ethernet  HWaddr a0:36:9f:3e:95:36
  inet6 addr: fe80::a236:9fff:fe3e:9536/64 Scope:Link
  UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
  RX packets:2780 errors:0 dropped:554038705 overruns:0 frame:0
  TX packets:38 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:166800 (166.8 KB)  TX bytes:3276 (3.2 KB)



When I try the same thing with ODP compiled against DPDK, it works fine. I
only get the error in the case of netmap packet I/O.

Is this a known issue or am I missing something?

Thanks
Gyanesh


Re: [lng-odp] ODP logo to use in academic publications

2018-07-25 Thread gyanesh patra
Thanks a lot for the pointer.

Regards,
P Gyanesh Kumar Patra


On Wed, Jul 25, 2018 at 9:51 AM Maxim Uvarov 
wrote:

> main odp project has:
> ./doc/images/ODP-Logo-HQ.svg
>
> I think it should be possible to convert it to eps.
>
> Maxim.
>
> On 25 July 2018 at 15:38, gyanesh patra  wrote:
>
>> ​
>> Hi,
>> I am looking for the ODP Logo in eps format to use in academic
>> publications. I have only encountered png files. Is there any goto
>> location
>> where i can find the logo?
>>
>> Thanks,
>> P Gyanesh Kumar Patra
>>
>
>


[lng-odp] ODP logo to use in academic publications

2018-07-25 Thread gyanesh patra
Hi,
I am looking for the ODP logo in EPS format to use in academic
publications. I have only encountered PNG files. Is there a go-to location
where I can find the logo?

Thanks,
P Gyanesh Kumar Patra


Re: [lng-odp] Bug 3657

2018-04-12 Thread gyanesh patra
Thanks for helping with this issue. It would be a good idea if we could
mention this somewhere in the README or DEPENDENCY file.
Also, for ODP_PKTIO_DPDK_PARAMS, should "-m" or "--socket-mem" be used going
forward?

P Gyanesh Kumar Patra

On Thu, Apr 12, 2018 at 10:08 AM, Elo, Matias (Nokia - FI/Espoo) <
matias@nokia.com> wrote:

> Thanks for testing this! You can use ODP_PKTIO_DPDK_PARAMS to override the
> default options. The patch still needs some fixes for arm platforms, but it
> should be merged to the master repo soon. There should be no performance
> impact.
>
> -Matias
>
> > On 12 Apr 2018, at 15:55, gyanesh patra <pgyanesh.pa...@gmail.com>
> wrote:
> >
> > I tried this trick and it worked on the odp-dpdk repository.
> >
> > What will be the preferred method?
> >  - ODP_PKTIO_DPDK_PARAMS="-m 512,512"
> >  - the patch you mentioned.
> >
> > Thanks & Regards,
> >
> > P Gyanesh Kumar Patra
> >
> > On Thu, Apr 12, 2018 at 4:42 AM, Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
> > Hi,
> >
> > I may have figured out the issue here. Currently, the ODP DPDK pktio
> implementation configures DPDK to allocate memory only for socket 0.
> >
> > Could you please try running ODP again with environment variable
> ODP_PKTIO_DPDK_PARAMS="-m 512,512" set.
> >
> > E.g.
> > sudo ODP_PKTIO_DPDK_PARAMS="-m 512,512"  ./odp_l2fwd -c 1 -i 0,1
> >
> >
> > If this doesn't help you could test this code change:
> >
> > diff --git a/platform/linux-generic/pktio/dpdk.c
> b/platform/linux-generic/pktio/dpdk.c
> > index 7bccab8..2b8b8e4 100644
> > --- a/platform/linux-generic/pktio/dpdk.c
> > +++ b/platform/linux-generic/pktio/dpdk.c
> > @@ -1120,7 +1120,8 @@ static int dpdk_pktio_init(void)
> > return -1;
> > }
> >
> > -   mem_str_len = snprintf(NULL, 0, "%d", DPDK_MEMORY_MB);
> > +   mem_str_len = snprintf(NULL, 0, "%d,%d", DPDK_MEMORY_MB,
> > +  DPDK_MEMORY_MB);
> >
> > cmdline = getenv("ODP_PKTIO_DPDK_PARAMS");
> > if (cmdline == NULL)
> > @@ -1133,8 +1134,8 @@ static int dpdk_pktio_init(void)
> > char full_cmd[cmd_len];
> >
> > /* first argument is facility log, simply bind it to odpdpdk for
> now.*/
> > -   cmd_len = snprintf(full_cmd, cmd_len, "odpdpdk -c %s -m %d %s",
> > -  mask_str, DPDK_MEMORY_MB, cmdline);
> > +   cmd_len = snprintf(full_cmd, cmd_len, "odpdpdk -c %s -m %d,%d
> %s",
> > +  mask_str, DPDK_MEMORY_MB, DPDK_MEMORY_MB,
> cmdline);
> >
> > for (i = 0, dpdk_argc = 1; i < cmd_len; ++i) {
> > if (isspace(full_cmd[i]))
> >
> >
> > -Matias
> >
> >
> > > On 10 Apr 2018, at 21:37, gyanesh patra <pgyanesh.pa...@gmail.com>
> wrote:
> > >
> > > Hi Matias,
> > >
> > > The Mellanox interfaces are mapped to Numa Node 1. (device id: 81:00.x)
> > > We have free hugepages on both Node0 and Node1 as identified below.
> > >
> > >   ​root# cat /sys/devices/system/node/node0
> /hugepages/hugepages-1048576kB/free_hugepages
> > >77
> > >   root# cat /sys/devices/system/node/node1
> /hugepages/hugepages-1048576kB/free_hugepages
> > >83
> > >
> > > The ODP application is using CPU/lcore associated with numa Node1 too.
> > > I have tried with the dpdk-17.11.1 version too without success.
> > > The issue may be somewhere else.
> > >
> > > Regarding the usage of 2M pages ​ (1024 x 2M pages):
> > >  - I unmounted the 1G hugepages and then set 1024x2M pages using
> dpdk-setup.sh scripts.
> > >  - But with this setup failed with the same error as before.
> > >
> > > Let me know if there is any other option we can try.
> > >
> > > ​Thanks,​
> > > P Gyanesh Kumar Patra
> > >
> > > On Thu, Mar 29, 2018 at 4:46 AM, Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
> > > A second thing to try. Since you seem to have a NUMA  system, the ODP
> application should be run on the same NUMA socket as the NIC (e.g. using
> taskset if necessary). In case of different sockets, both sockets should
> have huge pages mapped.
> > >
> > > -Matias
> > >
> > > > On 29 Mar 2018, at 10:00, Elo, Matias (Nokia - FI/Espoo) <

Re: [lng-odp] Suspected SPAM - Re: Bug 3657

2018-04-12 Thread gyanesh patra
I tried this trick and it worked on the odp-dpdk repository.

What will be the preferred method?
 - ODP_PKTIO_DPDK_PARAMS="-m 512,512"
 - the patch you mentioned.

Thanks & Regards,

P Gyanesh Kumar Patra

On Thu, Apr 12, 2018 at 4:42 AM, Elo, Matias (Nokia - FI/Espoo) <
matias@nokia.com> wrote:

> Hi,
>
> I may have figured out the issue here. Currently, the ODP DPDK pktio
> implementation configures DPDK to allocate memory only for socket 0.
>
> Could you please try running ODP again with environment variable
> ODP_PKTIO_DPDK_PARAMS="-m 512,512" set.
>
> E.g.
> sudo ODP_PKTIO_DPDK_PARAMS="-m 512,512"  ./odp_l2fwd -c 1 -i 0,1
>
>
> If this doesn't help you could test this code change:
>
> diff --git a/platform/linux-generic/pktio/dpdk.c b/platform/linux-generic/
> pktio/dpdk.c
> index 7bccab8..2b8b8e4 100644
> --- a/platform/linux-generic/pktio/dpdk.c
> +++ b/platform/linux-generic/pktio/dpdk.c
> @@ -1120,7 +1120,8 @@ static int dpdk_pktio_init(void)
> return -1;
> }
>
> -   mem_str_len = snprintf(NULL, 0, "%d", DPDK_MEMORY_MB);
> +   mem_str_len = snprintf(NULL, 0, "%d,%d", DPDK_MEMORY_MB,
> +  DPDK_MEMORY_MB);
>
> cmdline = getenv("ODP_PKTIO_DPDK_PARAMS");
> if (cmdline == NULL)
> @@ -1133,8 +1134,8 @@ static int dpdk_pktio_init(void)
> char full_cmd[cmd_len];
>
> /* first argument is facility log, simply bind it to odpdpdk for
> now.*/
> -   cmd_len = snprintf(full_cmd, cmd_len, "odpdpdk -c %s -m %d %s",
> -  mask_str, DPDK_MEMORY_MB, cmdline);
> +   cmd_len = snprintf(full_cmd, cmd_len, "odpdpdk -c %s -m %d,%d %s",
> +  mask_str, DPDK_MEMORY_MB, DPDK_MEMORY_MB,
> cmdline);
>
> for (i = 0, dpdk_argc = 1; i < cmd_len; ++i) {
> if (isspace(full_cmd[i]))
>
>
> -Matias
>
>
> > On 10 Apr 2018, at 21:37, gyanesh patra <pgyanesh.pa...@gmail.com>
> wrote:
> >
> > Hi Matias,
> >
> > The Mellanox interfaces are mapped to Numa Node 1. (device id: 81:00.x)
> > We have free hugepages on both Node0 and Node1 as identified below.
> >
> >   ​root# cat /sys/devices/system/node/node0/hugepages/hugepages-
> 1048576kB/free_hugepages
> >77
> >   root# cat /sys/devices/system/node/node1/hugepages/hugepages-
> 1048576kB/free_hugepages
> >83
> >
> > The ODP application is using CPU/lcore associated with numa Node1 too.
> > I have tried with the dpdk-17.11.1 version too without success.
> > The issue may be somewhere else.
> >
> > Regarding the usage of 2M pages ​ (1024 x 2M pages):
> >  - I unmounted the 1G hugepages and then set 1024x2M pages using
> dpdk-setup.sh scripts.
> >  - But with this setup failed with the same error as before.
> >
> > Let me know if there is any other option we can try.
> >
> > ​Thanks,​
> > P Gyanesh Kumar Patra
> >
> > On Thu, Mar 29, 2018 at 4:46 AM, Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
> > A second thing to try. Since you seem to have a NUMA  system, the ODP
> application should be run on the same NUMA socket as the NIC (e.g. using
> taskset if necessary). In case of different sockets, both sockets should
> have huge pages mapped.
> >
> > -Matias
> >
> > > On 29 Mar 2018, at 10:00, Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
> > >
> > > Hi Gyanesh,
> > >
> > > It seems you are using 1G huge pages. Have you tried using 2M pages​​
> (1024 x 2M pages should be enough)? As Bill noted, this seems like a memory
> related issue.
> > >
> > > -Matias
> > >
> > >
> > >> On 28 Mar 2018, at 18:15, gyanesh patra <pgyanesh.pa...@gmail.com>
> wrote:
> > >>
> > >> Yes, it is.
> > >> The error is the same. I did replied that the only difference I see
> is with Ubuntu version and different minor version of mellanox driver.
> > >>
> > >> On Wed, Mar 28, 2018, 07:29 Bill Fischofer <bill.fischo...@linaro.org>
> wrote:
> > >> Thanks for the update. Sounds like you're already using DPDK 17.11?
> > >> What about Mellanox driver level? Is the failure the same as you
> > >> originally reported?
> > >>
> > >> From the reported error:
> > >>
> > >> pktio/dpdk.c:1538:dpdk_start():Queue setup failed: err=-12, port=0
> > >> odp_l2fwd.c:1671:main(

Re: [lng-odp] Bug 3657

2018-04-12 Thread gyanesh patra
This actually worked.
Will this patch come to the master branch? Does it have any impact on
performance?

Thanks & Regards,

P Gyanesh Kumar Patra

On Thu, Apr 12, 2018 at 7:31 AM, Elo, Matias (Nokia - FI/Espoo) <
matias@nokia.com> wrote:

>
> This patch should hopefully fix the bug: https://github.com/matiaselo/
> odp/commit/c32baeb1796636adfd12fd3f785e10929984ccc3
>
> It would be great if you could verify that the patch works since I cannot
> repeat the original issue on my test system.
>
> -Matias
>
>
> > On 12 Apr 2018, at 10:53, Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
> >
> > Still one more thing, the argument '-m' should be replaced with
> '--socket-mem'.
> >
> >
> >> On 12 Apr 2018, at 10:42, Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
> >>
> >> Hi,
> >>
> >> I may have figured out the issue here. Currently, the ODP DPDK pktio
> implementation configures DPDK to allocate memory only for socket 0.
> >>
> >> Could you please try running ODP again with environment variable
> ODP_PKTIO_DPDK_PARAMS="-m 512,512" set.
> >>
> >> E.g.
> >> sudo ODP_PKTIO_DPDK_PARAMS="-m 512,512"  ./odp_l2fwd -c 1 -i 0,1
> >>
> >>
> >> If this doesn't help you could test this code change:
> >>
> >> diff --git a/platform/linux-generic/pktio/dpdk.c
> b/platform/linux-generic/pktio/dpdk.c
> >> index 7bccab8..2b8b8e4 100644
> >> --- a/platform/linux-generic/pktio/dpdk.c
> >> +++ b/platform/linux-generic/pktio/dpdk.c
> >> @@ -1120,7 +1120,8 @@ static int dpdk_pktio_init(void)
> >>   return -1;
> >>   }
> >>
> >> -   mem_str_len = snprintf(NULL, 0, "%d", DPDK_MEMORY_MB);
> >> +   mem_str_len = snprintf(NULL, 0, "%d,%d", DPDK_MEMORY_MB,
> >> +  DPDK_MEMORY_MB);
> >>
> >>   cmdline = getenv("ODP_PKTIO_DPDK_PARAMS");
> >>   if (cmdline == NULL)
> >> @@ -1133,8 +1134,8 @@ static int dpdk_pktio_init(void)
> >>   char full_cmd[cmd_len];
> >>
> >>   /* first argument is facility log, simply bind it to odpdpdk for
> now.*/
> >> -   cmd_len = snprintf(full_cmd, cmd_len, "odpdpdk -c %s -m %d %s",
> >> -  mask_str, DPDK_MEMORY_MB, cmdline);
> >> +   cmd_len = snprintf(full_cmd, cmd_len, "odpdpdk -c %s -m %d,%d
> %s",
> >> +  mask_str, DPDK_MEMORY_MB, DPDK_MEMORY_MB,
> cmdline);
> >>
> >>   for (i = 0, dpdk_argc = 1; i < cmd_len; ++i) {
> >>   if (isspace(full_cmd[i]))
> >>
> >>
> >> -Matias
> >>
> >>
> >>> On 10 Apr 2018, at 21:37, gyanesh patra <pgyanesh.pa...@gmail.com>
> wrote:
> >>>
> >>> Hi Matias,
> >>>
> >>> The Mellanox interfaces are mapped to Numa Node 1. (device id: 81:00.x)
> >>> We have free hugepages on both Node0 and Node1 as identified below.
> >>>
> >>> ​root# cat /sys/devices/system/node/node0/hugepages/hugepages-
> 1048576kB/free_hugepages
> >>>  77
> >>> root# cat /sys/devices/system/node/node1/hugepages/hugepages-
> 1048576kB/free_hugepages
> >>>  83
> >>>
> >>> The ODP application is using CPU/lcore associated with numa Node1 too.
> >>> I have tried with the dpdk-17.11.1 version too without success.
> >>> The issue may be somewhere else.
> >>>
> >>> Regarding the usage of 2M pages ​ (1024 x 2M pages):
> >>> - I unmounted the 1G hugepages and then set 1024x2M pages using
> dpdk-setup.sh scripts.
> >>> - But with this setup failed with the same error as before.
> >>>
> >>> Let me know if there is any other option we can try.
> >>>
> >>> ​Thanks,​
> >>> P Gyanesh Kumar Patra
> >>>
> >>> On Thu, Mar 29, 2018 at 4:46 AM, Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
> >>> A second thing to try. Since you seem to have a NUMA  system, the ODP
> application should be run on the same NUMA socket as the NIC (e.g. using
> taskset if necessary). In case of different sockets, both sockets should
> have huge pages mapped.
> >>>
> >>> -Matias
> >>>
> >>>> On 29 Mar 2018, at 10:00, Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
> &g

Re: [lng-odp] Bug 3657

2018-04-12 Thread gyanesh patra
The current odp-dpdk code is not working; it gives the same error.
odp-dpdk was working before (August-September 2017), but the recent
updates changed the behaviour.

P Gyanesh Kumar Patra

On Thu, Apr 12, 2018 at 3:28 AM, Elo, Matias (Nokia - FI/Espoo) <
matias@nokia.com> wrote:

> Hi,
>
> Have you tested the latest odp-dpdk code? It uses different shm
> implementation, so at least we could rule that one out.
>
> -Matias
>
>
> > On 10 Apr 2018, at 21:37, gyanesh patra <pgyanesh.pa...@gmail.com>
> wrote:
> >
> > Hi Matias,
> >
> > The Mellanox interfaces are mapped to Numa Node 1. (device id: 81:00.x)
> > We have free hugepages on both Node0 and Node1 as identified below.
> >
> >   ​root# cat /sys/devices/system/node/node0/hugepages/hugepages-
> 1048576kB/free_hugepages
> >77
> >   root# cat /sys/devices/system/node/node1/hugepages/hugepages-
> 1048576kB/free_hugepages
> >83
> >
> > The ODP application is using CPU/lcore associated with numa Node1 too.
> > I have tried with the dpdk-17.11.1 version too without success.
> > The issue may be somewhere else.
> >
> > Regarding the usage of 2M pages ​ (1024 x 2M pages):
> >  - I unmounted the 1G hugepages and then set 1024x2M pages using
> dpdk-setup.sh scripts.
> >  - But with this setup failed with the same error as before.
> >
> > Let me know if there is any other option we can try.
> >
> > ​Thanks,​
> > P Gyanesh Kumar Patra
> >
> > On Thu, Mar 29, 2018 at 4:46 AM, Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
> > A second thing to try. Since you seem to have a NUMA  system, the ODP
> application should be run on the same NUMA socket as the NIC (e.g. using
> taskset if necessary). In case of different sockets, both sockets should
> have huge pages mapped.
> >
> > -Matias
> >
> > > On 29 Mar 2018, at 10:00, Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
> > >
> > > Hi Gyanesh,
> > >
> > > It seems you are using 1G huge pages. Have you tried using 2M pages​​
> (1024 x 2M pages should be enough)? As Bill noted, this seems like a memory
> related issue.
> > >
> > > -Matias
> > >
> > >
> > >> On 28 Mar 2018, at 18:15, gyanesh patra <pgyanesh.pa...@gmail.com>
> wrote:
> > >>
> > >> Yes, it is.
> > >> The error is the same. I did replied that the only difference I see
> is with Ubuntu version and different minor version of mellanox driver.
> > >>
> > >> On Wed, Mar 28, 2018, 07:29 Bill Fischofer <bill.fischo...@linaro.org>
> wrote:
> > >> Thanks for the update. Sounds like you're already using DPDK 17.11?
> > >> What about Mellanox driver level? Is the failure the same as you
> > >> originally reported?
> > >>
> > >> From the reported error:
> > >>
> > >> pktio/dpdk.c:1538:dpdk_start():Queue setup failed: err=-12, port=0
> > >> odp_l2fwd.c:1671:main():Error: unable to start 0
> > >>
> > >> This is a DPDK PMD driver error reported by rte_eth_rx_queue_setup().
> > >> In the Mellanox PMD (drivers/net/mlx5/mlx5_rxq.c) this is the
> > >> mlx5_rx_queue_setup() routine. The relevant code seems to be this:
> > >>
> > >> if (rxq != NULL) {
> > >>DEBUG("%p: reusing already allocated queue index %u (%p)",
> > >>  (void *)dev, idx, (void *)rxq);
> > >>if (priv->started) {
> > >>priv_unlock(priv);
> > >>return -EEXIST;
> > >>}
> > >>(*priv->rxqs)[idx] = NULL;
> > >>rxq_cleanup(rxq_ctrl);
> > >>/* Resize if rxq size is changed. */
> > >>if (rxq_ctrl->rxq.elts_n != log2above(desc)) {
> > >>rxq_ctrl = rte_realloc(rxq_ctrl,
> > >>  sizeof(*rxq_ctrl) +
> > >>  (desc + desc_pad) *
> > >>  sizeof(struct
> rte_mbuf *),
> > >>  RTE_CACHE_LINE_SIZE);
> > >>if (!rxq_ctrl) {
> > >>ERROR("%p: unable to reallocate queue index
> %u",
> > >>  (void *)dev, idx);
> > >> 

Re: [lng-odp] Suspected SPAM - Re: Bug 3657

2018-04-10 Thread gyanesh patra
Hi Matias,

The Mellanox interfaces are mapped to Numa Node 1. (device id: 81:00.x)
We have free hugepages on both Node0 and Node1 as identified below.

  ​root# cat
/sys/devices/system/node/node0/hugepages/hugepages-1048576kB/free_hugepages
   77
  root# cat
/sys/devices/system/node/node1/hugepages/hugepages-1048576kB/free_hugepages
   83

The ODP application is using a CPU/lcore associated with NUMA node 1 too.
I have tried with the dpdk-17.11.1 version as well, without success.
The issue may be somewhere else.

Regarding the usage of 2M pages (1024 x 2M pages):
 - I unmounted the 1G hugepages and then set up 1024 x 2M pages using the
dpdk-setup.sh script.
 - But this setup failed with the same error as before.

Let me know if there is any other option we can try.
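One more programmatic check we can add on our side is whether the polling lcore
and the port really resolve to the same socket; a small sketch using standard
DPDK calls (rte_eth_dev_socket_id() returns -1 when the socket is unknown):

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>

/* Sketch: compare the NUMA socket of the port with the socket of the calling
 * lcore, to double-check the taskset/NUMA placement discussed in this thread. */
static void check_numa(uint16_t port_id)
{
	int port_socket = rte_eth_dev_socket_id(port_id);
	unsigned int lcore_socket = rte_socket_id();

	if (port_socket >= 0 && (unsigned int)port_socket != lcore_socket)
		printf("warning: port %u is on socket %d, lcore runs on socket %u\n",
		       port_id, port_socket, lcore_socket);
}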

Thanks,
P Gyanesh Kumar Patra

On Thu, Mar 29, 2018 at 4:46 AM, Elo, Matias (Nokia - FI/Espoo) <
matias@nokia.com> wrote:

> A second thing to try. Since you seem to have a NUMA  system, the ODP
> application should be run on the same NUMA socket as the NIC (e.g. using
> taskset if necessary). In case of different sockets, both sockets should
> have huge pages mapped.
>
> -Matias
>
> > On 29 Mar 2018, at 10:00, Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
> >
> > Hi Gyanesh,
> >
> > It seems you are using 1G huge pages. Have you tried using 2M pages
> ​​
> (1024 x 2M pages should be enough)? As Bill noted, this seems like a
> memory related issue.
> >
> > -Matias
> >
> >
> >> On 28 Mar 2018, at 18:15, gyanesh patra <pgyanesh.pa...@gmail.com>
> wrote:
> >>
> >> Yes, it is.
> >> The error is the same. I did replied that the only difference I see is
> with Ubuntu version and different minor version of mellanox driver.
> >>
> >> On Wed, Mar 28, 2018, 07:29 Bill Fischofer <bill.fischo...@linaro.org>
> wrote:
> >> Thanks for the update. Sounds like you're already using DPDK 17.11?
> >> What about Mellanox driver level? Is the failure the same as you
> >> originally reported?
> >>
> >> From the reported error:
> >>
> >> pktio/dpdk.c:1538:dpdk_start():Queue setup failed: err=-12, port=0
> >> odp_l2fwd.c:1671:main():Error: unable to start 0
> >>
> >> This is a DPDK PMD driver error reported by rte_eth_rx_queue_setup().
> >> In the Mellanox PMD (drivers/net/mlx5/mlx5_rxq.c) this is the
> >> mlx5_rx_queue_setup() routine. The relevant code seems to be this:
> >>
> >> if (rxq != NULL) {
> >>DEBUG("%p: reusing already allocated queue index %u (%p)",
> >>  (void *)dev, idx, (void *)rxq);
> >>if (priv->started) {
> >>priv_unlock(priv);
> >>return -EEXIST;
> >>}
> >>(*priv->rxqs)[idx] = NULL;
> >>rxq_cleanup(rxq_ctrl);
> >>/* Resize if rxq size is changed. */
> >>if (rxq_ctrl->rxq.elts_n != log2above(desc)) {
> >>rxq_ctrl = rte_realloc(rxq_ctrl,
> >>  sizeof(*rxq_ctrl) +
> >>  (desc + desc_pad) *
> >>  sizeof(struct rte_mbuf
> *),
> >>  RTE_CACHE_LINE_SIZE);
> >>if (!rxq_ctrl) {
> >>ERROR("%p: unable to reallocate queue index %u",
> >>  (void *)dev, idx);
> >>  priv_unlock(priv);
> >>  return -ENOMEM;
> >>   }
> >>}
> >> } else {
> >>rxq_ctrl = rte_calloc_socket("RXQ", 1, sizeof(*rxq_ctrl) +
> >>(desc + desc_pad) *
> >>         sizeof(struct
> rte_mbuf *),
> >> 0, socket);
> >>if (rxq_ctrl == NULL) {
> >> ERROR("%p: unable to allocate queue index %u",
> >>   (void *)dev, idx);
> >>   priv_unlock(priv);
> >>return -ENOMEM;
> >>}
> >> }
> >>
> >> The reported -12 error code is -ENOMEM so I'd say the issue is some
> >> sort of memory allocation failure.
> >>
> >>
> >> On Wed, Mar 28, 2018 at 8:43 AM, gyanesh patra <
> pgyanesh.pa...@gmail.com> wrote:
> >>> Hi Bill,
> >>> I tried with Matias' suggestions but without success.
> >>>
> >>> P Gyanesh Kumar Patra
> >>>
> >>> On Mon, Mar 26, 2018 at 4:16 PM, Bill Fischofer <
> bill.fischo...@linaro.org>
> >>> wrote:
> >>>>
> >>>> Hi Gyanesh,
> >>>>
> >>>> Have you had a chance to look at
> >>>> https://bugs.linaro.org/show_bug.cgi?id=3657 and see if Matias'
> suggestions
> >>>> are helpful to you?
> >>>>
> >>>> Thanks,
> >>>>
> >>>> Regards,
> >>>> Bill
> >>>
> >>>
> >
>
>


Re: [lng-odp] Bug 3657

2018-03-28 Thread gyanesh patra
Yes, it is.
The error is the same. I did reply that the only difference I see is the
Ubuntu version and a different minor version of the Mellanox driver.

On Wed, Mar 28, 2018, 07:29 Bill Fischofer <bill.fischo...@linaro.org>
wrote:

> Thanks for the update. Sounds like you're already using DPDK 17.11?
> What about Mellanox driver level? Is the failure the same as you
> originally reported?
>
> From the reported error:
>
> pktio/dpdk.c:1538:dpdk_start():Queue setup failed: err=-12, port=0
> odp_l2fwd.c:1671:main():Error: unable to start 0
>
> This is a DPDK PMD driver error reported by rte_eth_rx_queue_setup().
> In the Mellanox PMD (drivers/net/mlx5/mlx5_rxq.c) this is the
> mlx5_rx_queue_setup() routine. The relevant code seems to be this:
>
> if (rxq != NULL) {
> DEBUG("%p: reusing already allocated queue index %u (%p)",
>   (void *)dev, idx, (void *)rxq);
> if (priv->started) {
> priv_unlock(priv);
> return -EEXIST;
> }
> (*priv->rxqs)[idx] = NULL;
> rxq_cleanup(rxq_ctrl);
> /* Resize if rxq size is changed. */
> if (rxq_ctrl->rxq.elts_n != log2above(desc)) {
> rxq_ctrl = rte_realloc(rxq_ctrl,
>   sizeof(*rxq_ctrl) +
>   (desc + desc_pad) *
>   sizeof(struct rte_mbuf
> *),
>   RTE_CACHE_LINE_SIZE);
> if (!rxq_ctrl) {
> ERROR("%p: unable to reallocate queue index %u",
>   (void *)dev, idx);
>   priv_unlock(priv);
>   return -ENOMEM;
>}
> }
> } else {
> rxq_ctrl = rte_calloc_socket("RXQ", 1, sizeof(*rxq_ctrl) +
> (desc + desc_pad) *
>  sizeof(struct
> rte_mbuf *),
>  0, socket);
> if (rxq_ctrl == NULL) {
>  ERROR("%p: unable to allocate queue index %u",
>(void *)dev, idx);
>priv_unlock(priv);
>     return -ENOMEM;
> }
> }
>
> The reported -12 error code is -ENOMEM so I'd say the issue is some
> sort of memory allocation failure.
>
>
> On Wed, Mar 28, 2018 at 8:43 AM, gyanesh patra <pgyanesh.pa...@gmail.com>
> wrote:
> > Hi Bill,
> > I tried with Matias' suggestions but without success.
> >
> > P Gyanesh Kumar Patra
> >
> > On Mon, Mar 26, 2018 at 4:16 PM, Bill Fischofer <
> bill.fischo...@linaro.org>
> > wrote:
> >>
> >> Hi Gyanesh,
> >>
> >> Have you had a chance to look at
> >> https://bugs.linaro.org/show_bug.cgi?id=3657 and see if Matias'
> suggestions
> >> are helpful to you?
> >>
> >> Thanks,
> >>
> >> Regards,
> >> Bill
> >
> >
>


Re: [lng-odp] Bug 3657

2018-03-28 Thread gyanesh patra
Hi Bill,
I tried with Matias' suggestions but without success.

P Gyanesh Kumar Patra

On Mon, Mar 26, 2018 at 4:16 PM, Bill Fischofer 
wrote:

> Hi Gyanesh,
>
> Have you had a chance to look at https://bugs.linaro.org/
> show_bug.cgi?id=3657 and see if Matias' suggestions are helpful to you?
>
> Thanks,
>
> Regards,
> Bill
>


Re: [lng-odp] lng-odp Digest, Vol 48, Issue 37

2018-03-17 Thread gyanesh patra
Hi Matias,
Thanks for the patch to compile ODP with the MLX drivers.
I finally got to try out the patch, but it is not working for me. I am
still getting the same error while running 'test/performance/odp_l2fwd'.

My configuration details are :
Driver:
MLNX_OFED_LINUX-4.2-1.0.0.0 (OFED-4.2-1.0.0)
Interface:
:81:00.0 'MT27700 Family [ConnectX-4]' if=enp129s0f0 drv=mlx5_core
unused=
System:
Ubuntu 16.04 x86_64 4.4.0-116-generic
DPDK: 17.11
odp-linux :  1.18.0.1

I can see two differences in our configurations:
Ubuntu 16 (ours) vs Ubuntu 17 (yours)
MLNX_OFED: OFED-4.2-1.0.0 (ours) vs OFED-4.2-1.2.0.0 (yours)

Do you think this might be causing the issue? If any other details are needed
to debug this, I can provide them.

Thanks,
P Gyanesh Kumar Patra

Message: 2
> Date: Tue, 13 Mar 2018 07:05:10 +
> From: bugzilla-dae...@bugs.linaro.org
> To: lng-odp@lists.linaro.org
> Subject: [lng-odp] [Bug 3657] PktIO does not work with Mellanox
> Interfaces
> Message-ID:
> <010001621e2d570a-9dbbd3f1-0755-42a1-90cf-a70d852eb079-0
> 0...@email.amazonses.com>
>
> Content-Type: text/plain; charset="UTF-8"
>
> https://bugs.linaro.org/show_bug.cgi?id=3657
>
> --- Comment #4 from Matias Elo  ---
> Hi,
>
> The Mellanox PMD drivers (mlx5) have received quite a few fixes since DPDK
> v17.08. I would suggest trying DPDK v17.11 as we are moving to that version
> soon anyway.
>
> I tested some Mellanox NICs in our lab (ConnectX-4 Lx) and they work
> properly
> with odp-linux using DPDK v17.11 and Mellanox OFED 4.2
> (MLNX_OFED_LINUX-4.2-1.2.0.0-ubuntu17.10-x86_64).
>
> The following patch was required to add the necessary libraries.
>
> diff --git a/m4/odp_dpdk.m4 b/m4/odp_dpdk.m4
> index 0050fc4b..b144b23d 100644
> --- a/m4/odp_dpdk.m4
> +++ b/m4/odp_dpdk.m4
> @@ -9,6 +9,7 @@ cur_driver=`basename "$filename" .a | sed -e 's/^lib//'`
>  AS_VAR_APPEND([DPDK_PMDS], [-l$cur_driver,])
>  AS_CASE([$cur_driver],
>  [rte_pmd_nfp], [AS_VAR_APPEND([DPDK_LIBS], [" -lm"])],
> +[rte_pmd_mlx5], [AS_VAR_APPEND([DPDK_LIBS], [" -libverbs -lmlx5"])],
>  [rte_pmd_pcap], [AS_VAR_APPEND([DPDK_LIBS], [" -lpcap"])],
>  [rte_pmd_openssl], [AS_VAR_APPEND([DPDK_LIBS], [" -lcrypto"])])
>  done
>
>
> Regards,
> Matias
>
>


Re: [lng-odp] issues with usage of mellanox 100G NICs with ODP & ODP-DPDK

2018-03-06 Thread gyanesh patra
):Queue setup failed: err=-12, port=0
odp_l2fwd.c:1671:main():Error: unable to start 0
ubuntu@ubuntu:/home/gyanesh/odp/test/performance#


If any other logs or details are required, I will surely provide them here
to resolve this issue.

Thanks

P Gyanesh Kumar Patra

On Fri, Nov 10, 2017 at 6:13 AM, gyanesh patra <pgyanesh.pa...@gmail.com>
wrote:

> I was trying without DPDK and it was not working properly. I
> guess I have to compile ODP with DPDK support to work with Mellanox.
> Thank you for the details.
>
> P Gyanesh Kumar Patra
>
> On Thu, Nov 9, 2017 at 12:47 PM, Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
>
>> Hi Gyanesh,
>>
>> Pretty much the same steps should also work with odp linux-generic. The
>> main difference is configure script. With linux-generic you use
>> '--with-dpdk-path=' option and optionally
>> --enable-dpdk-zero-copy flag. The supported dpdk  version is v17.08.
>>
>> -Matias
>>
>> > On 9 Nov 2017, at 10:34, gyanesh patra <pgyanesh.pa...@gmail.com>
>> wrote:
>> >
>> > Hi Maxim,
>> > Thanks for the help. I managed to figure out the configuration error
>> and it
>> > works fine for "ODP-DPDK". The MLX5 pmd was not included properly.
>> >
>> > But regarding "ODP" repo (not odp-dpdk), do i need to follow any steps
>> to
>> > be able to use MLX ???
>> >
>> >
>> > P Gyanesh Kumar Patra
>> >
>> > On Wed, Nov 8, 2017 at 7:56 PM, Maxim Uvarov <maxim.uva...@linaro.org>
>> > wrote:
>> >
>> >> On 11/08/17 19:32, gyanesh patra wrote:
>> >>> I am not sure what you mean. Can you please elaborate?
>> >>>
>> >>> As i mentioned before I am able to run dpdk examples. Hence the
>> drivers
>> >>> are available and working fine.
>> >>> I configured ODP & ODP-DPDK with "LDFLAGS=-libverbs" and compiled to
>> >>> work with mellanox. I followed the same while compiling dpdk too.
>> >>>
>> >>> Is there anything i am missing?
>> >>>
>> >>> P Gyanesh Kumar Patra
>> >>
>> >>
>> >> in general if CONFIG_RTE_LIBRTE_MLX5_PMD=y was specified then it has to
>> >> work. I think we did test only with ixgbe. But in general it's common
>> code.
>> >>
>> >> "Unable to init any I/O type." means it it called all open for all
>> pktio
>> >> in list here:
>> >> ./platform/linux-generic/pktio/io_ops.c
>> >>
>> >> and setup_pkt_dpdk() failed for some reason.
>> >>
>> >> I do not like allocations errors in your log.
>> >>
>> >> Try to compile ODP with --enable-debug-print --enable-debug it will
>> make
>> >> ODP_DBG() macro work and it will be visible why it does not opens
>> pktio.
>> >>
>> >> Maxim
>> >>
>> >>
>> >>>
>> >>> On Wed, Nov 8, 2017 at 5:22 PM, Maxim Uvarov <maxim.uva...@linaro.org
>> >>> <mailto:maxim.uva...@linaro.org>> wrote:
>> >>>
>> >>>is Mellanox pmd compiled in?
>> >>>
>> >>>Maxim.
>> >>>
>> >>>On 11/08/17 17:58, gyanesh patra wrote:
>> >>>> Hi,
>> >>>> I am trying to run ODP & ODP-DPDK examples on our server with
>> >>>mellanox 100G
>> >>>> NICs. I am using the odp_l2fwd example. While running the example,
>> >>>I am
>> >>>> facing some issues.
>> >>>> -> When I run "ODP" example using the if names given by kernel as
>> >>>> arguments, I am not getting enough throughput.(the value is very
>> >> low)
>> >>>> -> And when I try "ODP-DPDK" example using port ID as "0,1", it
>> >> can't
>> >>>> create pktio. Whereas I am able to run the examples from "DPDK"
>> >>>> repo with portID "0,1" for the same mellanox NICs. I tried running
>> >>>with
>> >>>> "81:00.0,81:00.1" and also with if-names too without any success.
>> >>>Adding
>> >>>> the whitelist using ODP_PLATFORM_PARAMS doesn't help either.
>> >>>>
>> >>>> Am I missing any steps to use mellanox NICs? OR is there a
>> >>>different method
>> >>>&

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread gyanesh patra
Is it possible to fix this for netmap too, in a similar fashion?

P Gyanesh Kumar Patra

On Wed, Feb 7, 2018 at 1:19 PM, Elo, Matias (Nokia - FI/Espoo) <
matias@nokia.com> wrote:

> The PR is now available: https://github.com/Linaro/odp/pull/458
>
> -Matias
>
> > On 7 Feb 2018, at 15:31, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> >
> > This patch works on Intel X540-AT2 NICs too.
> >
> > P Gyanesh Kumar Patra
> >
> > On Wed, Feb 7, 2018 at 11:28 AM, Bill Fischofer <
> bill.fischo...@linaro.org> wrote:
> > Thanks, Matias. Please open a bug for this and reference it in the fix.
> >
> > On Wed, Feb 7, 2018 at 6:36 AM, Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
> > Hi,
> >
> > I actually just figured out the problem. For e.g. Niantic NICs the
> rte_eth_rxconf.rx_drop_en has to be enabled for the NIC to continue working
> properly when all RX queues are not emptied. The following patch fixes the
> problem for me:
> >
> > diff --git a/platform/linux-generic/pktio/dpdk.c
> b/platform/linux-generic/pktio/dpdk.c
> > index bd6920e..fc535e3 100644
> > --- a/platform/linux-generic/pktio/dpdk.c
> > +++ b/platform/linux-generic/pktio/dpdk.c
> > @@ -1402,6 +1402,7 @@ static int dpdk_open(odp_pktio_t id ODP_UNUSED,
> >
> >  static int dpdk_start(pktio_entry_t *pktio_entry)
> >  {
> > +   struct rte_eth_dev_info dev_info;
> > pkt_dpdk_t *pkt_dpdk = &pktio_entry->s.pkt_dpdk;
> > uint8_t port_id = pkt_dpdk->port_id;
> > int ret;
> > @@ -1420,7 +1421,6 @@ static int dpdk_start(pktio_entry_t *pktio_entry)
> > }
> > /* Init TX queues */
> > for (i = 0; i < pktio_entry->s.num_out_queue; i++) {
> > -   struct rte_eth_dev_info dev_info;
> > const struct rte_eth_txconf *txconf = NULL;
> > int ip_ena  = pktio_entry->s.config.pktout.bit.ipv4_chksum_ena;
> > int udp_ena = pktio_entry->s.config.pktout.bit.udp_chksum_ena;
> > @@ -1470,9 +1470,14 @@ static int dpdk_start(pktio_entry_t *pktio_entry)
> > }
> > /* Init RX queues */
> > for (i = 0; i < pktio_entry->s.num_in_queue; i++) {
> > +   struct rte_eth_rxconf *rxconf = NULL;
> > +
> > +   rte_eth_dev_info_get(port_id, &dev_info);
> > +   rxconf = &dev_info.default_rxconf;
> > +   rxconf->rx_drop_en = 1;
> > ret = rte_eth_rx_queue_setup(port_id, i,
> DPDK_NM_RX_DESC,
> >
> rte_eth_dev_socket_id(port_id),
> > -NULL, pkt_dpdk->pkt_pool);
> > +        rxconf, pkt_dpdk->pkt_pool);
> > if (ret < 0) {
> > ODP_ERR("Queue setup failed: err=%d, port=%"
> PRIu8 "\n",
> > ret, port_id);
> >
> > I'll test it a bit more for performance effects and then send a fix PR.
> >
> > -Matias
> >
> >
> >
> > > On 7 Feb 2018, at 14:18, gyanesh patra <pgyanesh.pa...@gmail.com>
> wrote:
> > >
> > > Thank you.
> > > I am curious what might be the reason.
> > >
> > > P Gyanesh Kumar Patra
> > >
> > > On Wed, Feb 7, 2018 at 9:51 AM, Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
> > > I'm currently trying to figure out what's happening. I'll report back
> when I find out something.
> > >
> > > -Matias
> > >
> > >
> > > > On 7 Feb 2018, at 13:44, gyanesh patra <pgyanesh.pa...@gmail.com>
> wrote:
> > > >
> > > > Do you have any theory for the issue in 82599 (Niantic) NIC and why
> it might be working in Intel XL710 (Fortville)? Can i identify a new
> hardware without this issue by looking at their datasheet/specs?
> > > > Thanks for the insight.
> > > >
> > > > P Gyanesh Kumar Patra
> > > >
> > > > On Wed, Feb 7, 2018 at 9:12 AM, Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
> > > > I was unable to reproduce this with Intel XL710 (Fortville) but with
> 82599 (Niantic) l2fwd operates as you have described. This may be a NIC HW
> limitation since the same issue is also observed with netmap pktio.
> > > >
> > > > -Matias
> > > >
> > > >
> > > > > On 7 Feb 2018, at 11:14, gyanesh patra <pgyanesh.pa...@gmail.com>
> wro

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread gyanesh patra
This patch works on Intel X540-AT2 NICs too.

P Gyanesh Kumar Patra

On Wed, Feb 7, 2018 at 11:28 AM, Bill Fischofer <bill.fischo...@linaro.org>
wrote:

> Thanks, Matias. Please open a bug for this and reference it in the fix.
>
> On Wed, Feb 7, 2018 at 6:36 AM, Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
>
>> Hi,
>>
>> I actually just figured out the problem. For e.g. Niantic NICs the
>> rte_eth_rxconf.rx_drop_en has to be enabled for the NIC to continue working
>> properly when all RX queues are not emptied. The following patch fixes the
>> problem for me:
>>
>> diff --git a/platform/linux-generic/pktio/dpdk.c
>> b/platform/linux-generic/pktio/dpdk.c
>> index bd6920e..fc535e3 100644
>> --- a/platform/linux-generic/pktio/dpdk.c
>> +++ b/platform/linux-generic/pktio/dpdk.c
>> @@ -1402,6 +1402,7 @@ static int dpdk_open(odp_pktio_t id ODP_UNUSED,
>>
>>  static int dpdk_start(pktio_entry_t *pktio_entry)
>>  {
>> +   struct rte_eth_dev_info dev_info;
>> pkt_dpdk_t *pkt_dpdk = &pktio_entry->s.pkt_dpdk;
>> uint8_t port_id = pkt_dpdk->port_id;
>> int ret;
>> @@ -1420,7 +1421,6 @@ static int dpdk_start(pktio_entry_t *pktio_entry)
>> }
>> /* Init TX queues */
>> for (i = 0; i < pktio_entry->s.num_out_queue; i++) {
>> -   struct rte_eth_dev_info dev_info;
>> const struct rte_eth_txconf *txconf = NULL;
>> int ip_ena  = pktio_entry->s.config.pktout.bit.ipv4_chksum_ena;
>> int udp_ena = pktio_entry->s.config.pktout.bit.udp_chksum_ena;
>> @@ -1470,9 +1470,14 @@ static int dpdk_start(pktio_entry_t *pktio_entry)
>> }
>> /* Init RX queues */
>> for (i = 0; i < pktio_entry->s.num_in_queue; i++) {
>> +   struct rte_eth_rxconf *rxconf = NULL;
>> +
>> +   rte_eth_dev_info_get(port_id, &dev_info);
>> +   rxconf = &dev_info.default_rxconf;
>> +   rxconf->rx_drop_en = 1;
>> ret = rte_eth_rx_queue_setup(port_id, i, DPDK_NM_RX_DESC,
>>  rte_eth_dev_socket_id(port_id),
>> -NULL, pkt_dpdk->pkt_pool);
>> +rxconf, pkt_dpdk->pkt_pool);
>> if (ret < 0) {
>> ODP_ERR("Queue setup failed: err=%d, port=%"
>> PRIu8 "\n",
>> ret, port_id);
>>
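For anyone who wants to try the same knob outside ODP, here is a minimal, hedged sketch of enabling rx_drop_en in a plain DPDK application (port, queue and descriptor values are illustrative; it assumes the classic rte_ethdev API used in the patch above):

#include <rte_ethdev.h>

/* Sketch only: set up one RX queue with rx_drop_en enabled so the NIC
 * drops packets when this queue's descriptors run out instead of
 * stalling the whole port. Values are illustrative. */
static int setup_rx_queue_with_drop(uint16_t port_id, uint16_t queue_id,
                                    struct rte_mempool *pool)
{
        struct rte_eth_dev_info dev_info;
        struct rte_eth_rxconf rxconf;

        rte_eth_dev_info_get(port_id, &dev_info);
        rxconf = dev_info.default_rxconf;  /* start from the PMD defaults */
        rxconf.rx_drop_en = 1;             /* drop on descriptor exhaustion */

        return rte_eth_rx_queue_setup(port_id, queue_id, 512,
                                      rte_eth_dev_socket_id(port_id),
                                      &rxconf, pool);
}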
>> I'll test it a bit more for performance effects and then send a fix PR.
>>
>> -Matias
>>
>>
>>
>> > On 7 Feb 2018, at 14:18, gyanesh patra <pgyanesh.pa...@gmail.com>
>> wrote:
>> >
>> > Thank you.
>> > I am curious what might be the reason.
>> >
>> > P Gyanesh Kumar Patra
>> >
>> > On Wed, Feb 7, 2018 at 9:51 AM, Elo, Matias (Nokia - FI/Espoo) <
>> matias@nokia.com> wrote:
>> > I'm currently trying to figure out what's happening. I'll report back
>> when I find out something.
>> >
>> > -Matias
>> >
>> >
>> > > On 7 Feb 2018, at 13:44, gyanesh patra <pgyanesh.pa...@gmail.com>
>> wrote:
>> > >
>> > > Do you have any theory for the issue in 82599 (Niantic) NIC and why
>> it might be working in Intel XL710 (Fortville)? Can i identify a new
>> hardware without this issue by looking at their datasheet/specs?
>> > > Thanks for the insight.
>> > >
>> > > P Gyanesh Kumar Patra
>> > >
>> > > On Wed, Feb 7, 2018 at 9:12 AM, Elo, Matias (Nokia - FI/Espoo) <
>> matias@nokia.com> wrote:
>> > > I was unable to reproduce this with Intel XL710 (Fortville) but with
>> 82599 (Niantic) l2fwd operates as you have described. This may be a NIC HW
>> limitation since the same issue is also observed with netmap pktio.
>> > >
>> > > -Matias
>> > >
>> > >
>> > > > On 7 Feb 2018, at 11:14, gyanesh patra <pgyanesh.pa...@gmail.com>
>> wrote:
>> > > >
>> > > > Thanks for the info. I verified this with both odp 1.16 and odp
>> 1.17 with same behavior.
>> > > > The traffic consists of diff Mac and ip addresses.
>> > > > Without the busy loop, I could see that all the threads were
>> receiving packets. So i think packet distribution is not an issue. In our
>> c

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread gyanesh patra
Do you have any theory for the issue in the 82599 (Niantic) NIC and why it
might be working on the Intel XL710 (Fortville)? Can I identify new hardware
without this issue by looking at the datasheet/specs?
Thanks for the insight.

P Gyanesh Kumar Patra

On Wed, Feb 7, 2018 at 9:12 AM, Elo, Matias (Nokia - FI/Espoo) <
matias@nokia.com> wrote:

> I was unable to reproduce this with Intel XL710 (Fortville) but with 82599
> (Niantic) l2fwd operates as you have described. This may be a NIC HW
> limitation since the same issue is also observed with netmap pktio.
>
> -Matias
>
>
> > On 7 Feb 2018, at 11:14, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> >
> > Thanks for the info. I verified this with both odp 1.16 and odp 1.17
> with same behavior.
> > The traffic consists of diff Mac and ip addresses.
> > Without the busy loop, I could see that all the threads were receiving
> packets. So i think packet distribution is not an issue. In our case, we
> are sending packet at line rate of 10G interface. That might be causing
> this behaviour.
> > If I can provide any other info, let me know.
> >
> > Thanks
> >
> > Gyanesh
> >
> > On Wed, Feb 7, 2018, 05:15 Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
> > Hi Gyanesh,
> >
> > I tested the patch on my system and everything seems to work as
> expected. Based on the log you're not running the latest code (v1.17.0) but
> I doubt that is the issue here.
> >
> > What kind of test traffic are you using? The l2fwd example uses IPv4
> addresses and UDP ports to do the input hashing. If test packets are
> identical they will all end up in the same input queue, which would explain
> what you are seeing.
> >
> > -Matias
> >
> >
> > > On 6 Feb 2018, at 19:00, gyanesh patra <pgyanesh.pa...@gmail.com>
> wrote:
> > >
> > > Hi,
> > > I tried with netmap, dpdk and dpdk with zero-copy enabled. All of them
> have the same behaviour. I also tried with (200*2048) as packet pool size
> without any success.
> > > I am attaching the patch for test/performance/odp_l2fwd example here
> to demonstrate the behaviour. Also find the output of the example below:
> > >
> > > root@india:~/pktio/odp_ipv6/test/performance# ./odp_l2fwd -i 0,1
> > > HW time counter freq: 2094954892 hz
> > >
> > > PKTIO: initialized loop interface.
> > > PKTIO: initialized dpdk pktio, use export ODP_PKTIO_DISABLE_DPDK=1 to
> disable.
> > > PKTIO: initialized pcap interface.
> > > PKTIO: initialized ipc interface.
> > > PKTIO: initialized socket mmap, use export
> ODP_PKTIO_DISABLE_SOCKET_MMAP=1 to disable.
> > > PKTIO: initialized socket mmsg,use export
> ODP_PKTIO_DISABLE_SOCKET_MMSG=1 to disable.
> > >
> > > ODP system info
> > > ---
> > > ODP API version: 1.16.0
> > > ODP impl name:   "odp-linux"
> > > CPU model:   Intel(R) Xeon(R) CPU E5-2620 v2
> > > CPU freq (hz):   26
> > > Cache line size: 64
> > > CPU count:   12
> > >
> > >
> > > CPU features supported:
> > > SSE3 PCLMULQDQ DTES64 MONITOR DS_CPL VMX SMX EIST TM2 SSSE3 CMPXCHG16B
> XTPR PDCM PCID DCA SSE4_1 SSE4_2 X2APIC POPCNT TSC_DEADLINE AES XSAVE
> OSXSAVE AVX F16C RDRAND FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR
> PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE
> DIGTEMP ARAT PLN ECMD PTM MPERF_APERF_MSR ENERGY_EFF FSGSBASE BMI2
> LAHF_SAHF SYSCALL XD 1GB_PG RDTSCP EM64T INVTSC
> > >
> > > CPU features NOT supported:
> > > CNXT_ID FMA MOVBE PSN TRBOBST ACNT2 BMI1 HLE AVX2 SMEP ERMS INVPCID
> RTM AVX512F LZCNT
> > >
> > > Running ODP appl: "odp_l2fwd"
> > > -
> > > IF-count:2
> > > Using IFs:   0 1
> > > Mode:PKTIN_DIRECT, PKTOUT_DIRECT
> > >
> > > num worker threads: 10
> > > first CPU:  2
> > > cpu mask:   0xFFC
> > >
> > >
> > > Pool info
> > > -
> > >   pool0
> > >   namepacket pool
> > >   pool type   packet
> > >   pool shm11
> > >   user area shm   0
> > >   num 8192
> > >   align   64
> > >   headroom128
> > >   seg len 8064
> > >   max data len65536
> > >   tailroom0
> > >   block size  8896
> > >   uarea size  0
> > >   shm size7319628

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-07 Thread gyanesh patra
Thanks for the info. I verified this with both ODP 1.16 and ODP 1.17 with
the same behavior.
The traffic consists of different MAC and IP addresses.
Without the busy loop, I could see that all the threads were receiving
packets, so I think packet distribution is not an issue. In our case, we
are sending packets at the line rate of a 10G interface. That might be
causing this behaviour.

If I can provide any other info, let me know.

Thanks

Gyanesh
On Wed, Feb 7, 2018, 05:15 Elo, Matias (Nokia - FI/Espoo) <
matias@nokia.com> wrote:

> Hi Gyanesh,
>
> I tested the patch on my system and everything seems to work as expected.
> Based on the log you're not running the latest code (v1.17.0) but I doubt
> that is the issue here.
>
> What kind of test traffic are you using? The l2fwd example uses IPv4
> addresses and UDP ports to do the input hashing. If test packets are
> identical they will all end up in the same input queue, which would explain
> what you are seeing.
>
> -Matias
>
>
> > On 6 Feb 2018, at 19:00, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> >
> > Hi,
> > I tried with netmap, dpdk and dpdk with zero-copy enabled. All of them
> have the same behaviour. I also tried with (200*2048) as packet pool size
> without any success.
> > I am attaching the patch for test/performance/odp_l2fwd example here to
> demonstrate the behaviour. Also find the output of the example below:
> >
> > root@india:~/pktio/odp_ipv6/test/performance# ./odp_l2fwd -i 0,1
> > HW time counter freq: 2094954892 hz
> >
> > PKTIO: initialized loop interface.
> > PKTIO: initialized dpdk pktio, use export ODP_PKTIO_DISABLE_DPDK=1 to
> disable.
> > PKTIO: initialized pcap interface.
> > PKTIO: initialized ipc interface.
> > PKTIO: initialized socket mmap, use export
> ODP_PKTIO_DISABLE_SOCKET_MMAP=1 to disable.
> > PKTIO: initialized socket mmsg,use export
> ODP_PKTIO_DISABLE_SOCKET_MMSG=1 to disable.
> >
> > ODP system info
> > ---
> > ODP API version: 1.16.0
> > ODP impl name:   "odp-linux"
> > CPU model:   Intel(R) Xeon(R) CPU E5-2620 v2
> > CPU freq (hz):   26
> > Cache line size: 64
> > CPU count:   12
> >
> >
> > CPU features supported:
> > SSE3 PCLMULQDQ DTES64 MONITOR DS_CPL VMX SMX EIST TM2 SSSE3 CMPXCHG16B
> XTPR PDCM PCID DCA SSE4_1 SSE4_2 X2APIC POPCNT TSC_DEADLINE AES XSAVE
> OSXSAVE AVX F16C RDRAND FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR
> PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE
> DIGTEMP ARAT PLN ECMD PTM MPERF_APERF_MSR ENERGY_EFF FSGSBASE BMI2
> LAHF_SAHF SYSCALL XD 1GB_PG RDTSCP EM64T INVTSC
> >
> > CPU features NOT supported:
> > CNXT_ID FMA MOVBE PSN TRBOBST ACNT2 BMI1 HLE AVX2 SMEP ERMS INVPCID RTM
> AVX512F LZCNT
> >
> > Running ODP appl: "odp_l2fwd"
> > -
> > IF-count:2
> > Using IFs:   0 1
> > Mode:PKTIN_DIRECT, PKTOUT_DIRECT
> >
> > num worker threads: 10
> > first CPU:  2
> > cpu mask:   0xFFC
> >
> >
> > Pool info
> > -
> >   pool0
> >   namepacket pool
> >   pool type   packet
> >   pool shm11
> >   user area shm   0
> >   num 8192
> >   align   64
> >   headroom128
> >   seg len 8064
> >   max data len65536
> >   tailroom0
> >   block size  8896
> >   uarea size  0
> >   shm size73196288
> >   base addr   0x7f566940
> >   uarea shm size  0
> >   uarea base addr (nil)
> >
> > EAL: Detected 12 lcore(s)
> > EAL: No free hugepages reported in hugepages-1048576kB
> > EAL: Probing VFIO support...
> > EAL: PCI device :03:00.0 on NUMA socket 0
> > EAL:   probe driver: 8086:10fb net_ixgbe
> > EAL: PCI device :03:00.1 on NUMA socket 0
> > EAL:   probe driver: 8086:10fb net_ixgbe
> > EAL: PCI device :05:00.0 on NUMA socket 0
> > EAL:   probe driver: 8086:1528 net_ixgbe
> > EAL: PCI device :05:00.1 on NUMA socket 0
> > EAL:   probe driver: 8086:1528 net_ixgbe
> > EAL: PCI device :0a:00.0 on NUMA socket 0
> > EAL:   probe driver: 8086:1521 net_e1000_igb
> > EAL: PCI device :0a:00.1 on NUMA socket 0
> > EAL:   probe driver: 8086:1521 net_e1000_igb
> > EAL: PCI device :0c:00.0 on NUMA socket 0
> > EAL:   probe driver: 8086:10d3 net_e1000_em
> > created pktio 1, dev: 0, drv: dpdk
> > created 5 input and 5 output queues on (0)
> > created pktio 2, dev: 1, drv: dpdk
> &g

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-06 Thread gyanesh patra
We are using Intel NICs : X540-AT2 (10G)



P Gyanesh Kumar Patra

On Tue, Feb 6, 2018 at 3:08 PM, Ilias Apalodimas <
ilias.apalodi...@linaro.org> wrote:

> Hello,
>
> Haven't seen any reference to the hardware you are using, sorry if i
> missed it. What kind of NIC are you using for the tests ?
>
> Regards
> Ilias
>
> On 6 February 2018 at 19:00, gyanesh patra <pgyanesh.pa...@gmail.com>
> wrote:
> > Hi,
> > I tried with netmap, dpdk and dpdk with zero-copy enabled. All of them
> have
> > the same behaviour. I also tried with (200*2048) as packet pool size
> > without any success.
> > I am attaching the patch for test/performance/odp_l2fwd example here to
> > demonstrate the behaviour. Also find the output of the example below:
> >
> > root@india:~/pktio/odp_ipv6/test/performance# ./odp_l2fwd -i 0,1
> > HW time counter freq: 2094954892 hz
> >
> > PKTIO: initialized loop interface.
> > PKTIO: initialized dpdk pktio, use export ODP_PKTIO_DISABLE_DPDK=1 to
> > disable.
> > PKTIO: initialized pcap interface.
> > PKTIO: initialized ipc interface.
> > PKTIO: initialized socket mmap, use export ODP_PKTIO_DISABLE_SOCKET_MMAP=
> 1
> > to disable.
> > PKTIO: initialized socket mmsg,use export ODP_PKTIO_DISABLE_SOCKET_MMSG=
> 1
> > to disable.
> >
> > ODP system info
> > ---
> > ODP API version: 1.16.0
> > ODP impl name:   "odp-linux"
> > CPU model:   Intel(R) Xeon(R) CPU E5-2620 v2
> > CPU freq (hz):   26
> > Cache line size: 64
> > CPU count:   12
> >
> >
> > CPU features supported:
> > SSE3 PCLMULQDQ DTES64 MONITOR DS_CPL VMX SMX EIST TM2 SSSE3 CMPXCHG16B
> XTPR
> > PDCM PCID DCA SSE4_1 SSE4_2 X2APIC POPCNT TSC_DEADLINE AES XSAVE OSXSAVE
> > AVX F16C RDRAND FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA
> > CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE DIGTEMP ARAT
> > PLN ECMD PTM MPERF_APERF_MSR ENERGY_EFF FSGSBASE BMI2 LAHF_SAHF SYSCALL
> XD
> > 1GB_PG RDTSCP EM64T INVTSC
> >
> > CPU features NOT supported:
> > CNXT_ID FMA MOVBE PSN TRBOBST ACNT2 BMI1 HLE AVX2 SMEP ERMS INVPCID RTM
> > AVX512F LZCNT
> >
> > Running ODP appl: "odp_l2fwd"
> > -
> > IF-count:2
> > Using IFs:   0 1
> > Mode:PKTIN_DIRECT, PKTOUT_DIRECT
> >
> > num worker threads: 10
> > first CPU:  2
> > cpu mask:   0xFFC
> >
> >
> > Pool info
> > -
> >   pool0
> >   namepacket pool
> >   pool type   packet
> >   pool shm11
> >   user area shm   0
> >   num 8192
> >   align   64
> >   headroom128
> >   seg len 8064
> >   max data len65536
> >   tailroom0
> >   block size  8896
> >   uarea size  0
> >   shm size73196288
> >   base addr   0x7f566940
> >   uarea shm size  0
> >   uarea base addr (nil)
> >
> > EAL: Detected 12 lcore(s)
> > EAL: No free hugepages reported in hugepages-1048576kB
> > EAL: Probing VFIO support...
> > EAL: PCI device :03:00.0 on NUMA socket 0
> > EAL:   probe driver: 8086:10fb net_ixgbe
> > EAL: PCI device :03:00.1 on NUMA socket 0
> > EAL:   probe driver: 8086:10fb net_ixgbe
> > EAL: PCI device :05:00.0 on NUMA socket 0
> > EAL:   probe driver: 8086:1528 net_ixgbe
> > EAL: PCI device :05:00.1 on NUMA socket 0
> > EAL:   probe driver: 8086:1528 net_ixgbe
> > EAL: PCI device :0a:00.0 on NUMA socket 0
> > EAL:   probe driver: 8086:1521 net_e1000_igb
> > EAL: PCI device :0a:00.1 on NUMA socket 0
> > EAL:   probe driver: 8086:1521 net_e1000_igb
> > EAL: PCI device :0c:00.0 on NUMA socket 0
> > EAL:   probe driver: 8086:10d3 net_e1000_em
> > created pktio 1, dev: 0, drv: dpdk
> > created 5 input and 5 output queues on (0)
> > created pktio 2, dev: 1, drv: dpdk
> > created 5 input and 5 output queues on (1)
> >
> > Queue binding (indexes)
> > ---
> > worker 0
> >   rx: pktio 0, queue 0
> >   tx: pktio 1, queue 0
> > worker 1
> >   rx: pktio 1, queue 0
> >   tx: pktio 0, queue 0
> > worker 2
> >   rx: pktio 0, queue 1
> >   tx: pktio 1, queue 1
> > worker 3
> >   rx: pktio 1, queue 1
> >   tx: pktio 0, queue 1
> > worker 4
> >   rx: pktio 0, queue 2
> >   tx: pktio 1, queue 2
> > worker 5
> >   rx: pktio 1

Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-06 Thread gyanesh patra
ps
TEST RESULT: 1396 maximum packets per second.



P Gyanesh Kumar Patra

On Tue, Feb 6, 2018 at 9:55 AM, Elo, Matias (Nokia - FI/Espoo) <
matias@nokia.com> wrote:

>
>
> > On 5 Feb 2018, at 19:42, Bill Fischofer <bill.fischo...@linaro.org>
> wrote:
> >
> > Thanks, Gyanesh, that does sound like a bug. +cc Matias: Can you comment
> on this?
> >
> > On Mon, Feb 5, 2018 at 5:09 AM, gyanesh patra <pgyanesh.pa...@gmail.com>
> wrote:
> > I am testing an l2fwd use-case. I am executing the use-case with two
> > CPUs & two interfaces.
> > One interface with 2 Rx queues receives pkts using 2 threads with 2
> > associated CPUs. Both the
> > threads can forward the packet over the 2nd interface which also has 2 Tx
> > queues mapped to
> > 2 CPUs. I am sending packets from an external packet generator and
> > confirmed that both
> > queues are receiving packets.
> > *When I run odp_pktin_recv() on both the queues, the packet*
> > * forwarding works fine. But if I put a sleep() or add a busy loop
> instead
> > of odp_pktin_recv() *
> > *on one thread, then the other thread stops receiving packets. If I
> > replace the sleep with odp_pktin_recv(), both the queues start receiving
> > packets again. *I encountered this problem on the DPDK pktio support on
> > ODP 1.16 and ODP 1.17.
> > On socket-mmap it works fine. Is it expected behavior or a potential bug?
> >
>
>
> Hi Gyanesh,
>
> Could you please share an example code which produces this issue? Does
> this happen also if you enable zero-copy dpdk pktio
> (--enable-dpdk-zero-copy)?
>
> Socket-mmap pktio doesn't support MQ, so comparison to that doesn't make
> much sense. Netmap pktio supports MQ.
>
> Regards,
> Matias
>
>


odp_l2fwd_patch
Description: Binary data


Re: [lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-06 Thread gyanesh patra
Hi Bogdan,
Yes, I agree, it looks like that. But I thought that if we don't receive packets
(odp_pktin_recv), then the packets would be dropped at the NIC queues once the RX
buffer is full. In that scenario, the other queue should continue to
work. Maybe I am not aware of the ODP side of the implementation.

In any case, is it expected behaviour?

Can we disable Rx or Tx on a specific queue instead of the whole PKTIO?
More importantly, how much can we do at run time instead of bringing down
the pktio entirely?

Thanks,

P Gyanesh Kumar Patra
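As background for the RSS point in the reply quoted below: input-side hashing is requested through the pktin queue parameters when the queues are configured. A rough, hypothetical sketch (not taken from the l2fwd example; error handling omitted):

#include <odp_api.h>

/* Sketch: ask a pktio for several input queues and let the implementation
 * hash IPv4/UDP flows across them (this is what becomes RSS on the DPDK
 * pktio). The queue count and names are illustrative. */
static int configure_pktin_hash(odp_pktio_t pktio, int num_workers)
{
        odp_pktin_queue_param_t pktin_param;

        odp_pktin_queue_param_init(&pktin_param);
        pktin_param.num_queues = num_workers;
        pktin_param.hash_enable = 1;
        pktin_param.hash_proto.proto.ipv4_udp = 1;
        pktin_param.op_mode = ODP_PKTIO_OP_MT_UNSAFE; /* one polling thread per queue */

        return odp_pktin_queue_config(pktio, &pktin_param);
}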

On Tue, Feb 6, 2018 at 11:21 AM, Bogdan Pricope <bogdan.pric...@linaro.org>
wrote:

> Explanation may be related to RSS.
>
> Dpdk pktio is using RSS - traffic is hashed and sent to a specific
> queue. You have two RX queues (pktin) that are polled with
> odp_pktin_recv(). If you stop polling on one queue (put one of the
> threads in busy loop or sleep()), it will not mean that the other will
> take entire traffic: I do not know dpdk so well but I suspect that a
> number of packets are hold on that pktin and pool is exhausted.
>
> /B
>
> On 6 February 2018 at 14:10, Elo, Matias (Nokia - FI/Espoo)
> <matias@nokia.com> wrote:
> >
> >
> >> On 6 Feb 2018, at 13:55, Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
> >>
> >>
> >>
> >>> On 5 Feb 2018, at 19:42, Bill Fischofer <bill.fischo...@linaro.org>
> wrote:
> >>>
> >>> Thanks, Gyanesh, that does sound like a bug. +cc Matias: Can you
> comment on this?
> >>>
> >>> On Mon, Feb 5, 2018 at 5:09 AM, gyanesh patra <
> pgyanesh.pa...@gmail.com> wrote:
> >>> I am testing an l2fwd use-case. I am executing the use-case with two
> >>> CPUs & two interfaces.
> >>> One interface with 2 Rx queues receives pkts using 2 threads with 2
> >>> associated CPUs. Both the
> >>> threads can forward the packet over the 2nd interface which also has 2
> Tx
> >>> queues mapped to
> >>> 2 CPUs. I am sending packets from an external packet generator and
> >>> confirmed that both
> >>> queues are receiving packets.
> >>> *When I run odp_pktin_recv() on both the queues, the packet*
> >>> * forwarding works fine. But if I put a sleep() or add a busy loop
> instead
> >>> of odp_pktin_recv() *
> >>> *on one thread, then the other thread stops receiving packets. If I
> >>> replace the sleep with odp_pktin_recv(), both the queues start
> receiving
> >>> packets again. *I encountered this problem on the DPDK pktio support on
> >>> ODP 1.16 and ODP 1.17.
> >>> On socket-mmap it works fine. Is it expected behavior or a potential
> bug?
> >>>
> >>
> >>
> >> Hi Gyanesh,
> >>
> >> Could you please share an example code which produces this issue? Does
> this happen also if you enable zero-copy dpdk pktio
> (--enable-dpdk-zero-copy)?
> >>
> >> Socket-mmap pktio doesn't support MQ, so comparison to that doesn't
> make much sense. Netmap pktio supports MQ.
> >>
> >> Regards,
> >> Matias
> >>
> >
> > Using too small packet pool can also cause symptoms like this, so you
> could try increasing packet pool size.
> >
> > -Matias
> >
>
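On the pool-size point quoted above, a rough sketch of creating a larger packet pool through the ODP API (the name and numbers are illustrative, not values recommended by the thread):

#include <odp_api.h>

/* Sketch: create a packet pool with more buffers than the example's
 * default; 16384 packets of 2048 bytes is illustrative only. */
static odp_pool_t create_big_packet_pool(void)
{
        odp_pool_param_t params;

        odp_pool_param_init(&params);
        params.type    = ODP_POOL_PACKET;
        params.pkt.num = 16384;   /* number of packets in the pool */
        params.pkt.len = 2048;    /* minimum length per packet */

        return odp_pool_create("big packet pool", &params);
}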


[lng-odp] unexpected packet handling behavior with dpdk pktio support

2018-02-05 Thread gyanesh patra
I am testing an l2fwd use-case. I am executing the use-case with two
CPUs & two interfaces.
One interface with 2 Rx queues receives pkts using 2 threads with 2
associated CPUs. Both the
threads can forward the packet over the 2nd interface which also has 2 Tx
queues mapped to
2 CPUs. I am sending packets from an external packet generator and
confirmed that both
queues are receiving packets.
*When I run odp_pktin_recv() on both the queues, the packet*
* forwarding works fine. But if I put a sleep() or add a busy loop instead
of odp_pktin_recv() *
*on one thread, then the other thread stops receiving packets. If I
replace the sleep with odp_pktin_recv(), both the queues start receiving
packets again. *I encountered this problem on the DPDK pktio support on
ODP 1.16 and ODP 1.17.
On socket-mmap it works fine. Is it expected behavior or a potential bug?
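For context, the per-thread receive path being described is roughly the following (a simplified sketch with illustrative names, not the actual test code):

#include <odp_api.h>

#define MAX_BURST 32

/* Sketch: each worker owns one direct-mode input queue and one output
 * queue; packets the TX queue does not accept are freed. */
static void worker_loop(odp_pktin_queue_t pktin, odp_pktout_queue_t pktout)
{
        odp_packet_t pkts[MAX_BURST];
        int received, sent;

        for (;;) {
                received = odp_pktin_recv(pktin, pkts, MAX_BURST);
                if (received <= 0)
                        continue;

                sent = odp_pktout_send(pktout, pkts, received);
                if (sent < 0)
                        sent = 0;
                if (sent < received)
                        odp_packet_free_multi(&pkts[sent], received - sent);
        }
}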

Thanks & Regards,
Gyanesh Patra
PhD Candidate
Unicamp University


[lng-odp] Compilation flags for release build and performance evaluation

2018-01-31 Thread gyanesh patra
Hi,
I am curious whether there are any specific flags available in ODP for release
builds or performance evaluation.

Also, where can I find the list of features I can disable by passing options to
the configure script? I found a couple of options in the odp-thunderx project,
such as "-DODP_DISABLE_CLASSIFICATION" and "-DNIC_DISABLE_PACKET_PARSING".
Are there any other such flags available?

Thank you,
P Gyanesh Kumar Patra


[lng-odp] ODP always falls back to normal pages instead of using hugepages

2017-11-13 Thread gyanesh patra
Hi,
I have noticed that the ODP application always checks for hugepages and fails
even though hugepages are configured in the system. It then runs normally using
normal pages. I am not sure whether this affects performance or not. But if it
does, how can I make sure that the ODP application uses the hugepages?
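One quick runtime check (a sketch using only the generic ODP system-info calls; not a complete diagnosis) is to print the page sizes the implementation reports right after init. A non-zero huge page size only means the system advertises huge pages; reservations can still fail, for example when HugePages_Free is 0, so /proc/meminfo is still worth checking:

#include <stdio.h>
#include <inttypes.h>
#include <odp_api.h>

int main(void)
{
        odp_instance_t instance;

        if (odp_init_global(&instance, NULL, NULL) ||
            odp_init_local(instance, ODP_THREAD_CONTROL))
                return 1;

        printf("normal page size: %" PRIu64 " bytes\n", odp_sys_page_size());
        printf("huge page size:   %" PRIu64 " bytes\n", odp_sys_huge_page_size());

        odp_term_local();
        odp_term_global(instance);
        return 0;
}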

I am adding the output of hugepage details and odp application below for
reference:


root@ubuntu:/home/ubuntu/P4/mac# cat /proc/sys/vm/nr_hugepages
183
root@ubuntu:/home/ubuntu/P4/mac#
root@ubuntu:/home/ubuntu/P4/mac# grep -i uge /proc/meminfo
AnonHugePages: 28672 kB
HugePages_Total: 183
HugePages_Free:0
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:1048576 kB


ODP application output:

EAL: Detected 56 lcore(s)
EAL: Probing VFIO support...
EAL: PCI device :05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1528 net_ixgbe
EAL: PCI device :05:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1528 net_ixgbe
EAL: PCI device :81:00.0 on NUMA socket 1
EAL:   probe driver: 15b3:1013 net_mlx5
PMD: net_mlx5: PCI information matches, using device "mlx5_0" (SR-IOV:
false, MPS: false)
PMD: net_mlx5: 1 port(s) detected
PMD: net_mlx5: port 1 MAC address is 7c:fe:90:31:0d:3a
EAL: PCI device :81:00.1 on NUMA socket 1
EAL:   probe driver: 15b3:1013 net_mlx5
PMD: net_mlx5: PCI information matches, using device "mlx5_1" (SR-IOV:
false, MPS: false)
PMD: net_mlx5: 1 port(s) detected
PMD: net_mlx5: port 1 MAC address is 7c:fe:90:31:0d:3b
../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
allocate memory
../linux-generic/_ishm.c:866:_odp_ishm_reserve():No huge pages, fall back
to normal pages. check: /proc/sys/vm/nr_hugepages.
../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
allocate memory
../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
allocate memory
../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
allocate memory
 PKTIO: initialized loop interface.
../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
allocate memory
No crypto devices available
../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
allocate memory
../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
allocate memory
../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
allocate memory

ODP system info
---
ODP API version: 1.15.0
ODP impl name:   odp-dpdk
CPU model:   Intel(R) Xeon(R) CPU E5-2680 v4
CPU freq (hz):   24
Cache line size: 64
CPU count:   56
***
***
../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
allocate memory
../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
allocate memory
../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
allocate memory
../linux-generic/_ishmphy.c:150:_odp_ishmphy_map():mmap failed:Cannot
allocate memory


Thanks & Regards,
P Gyanesh Kumar Patra


Re: [lng-odp] issues with usage of mellanox 100G NICs with ODP & ODP-DPDK

2017-11-10 Thread gyanesh patra
With my initial testing, ODP-DPDK is working perfectly with the Mellanox
drivers.
Thank you

P Gyanesh Kumar Patra

On Thu, Nov 9, 2017 at 2:18 PM, Maxim Uvarov <maxim.uva...@linaro.org>
wrote:

> Nice to see it working. I think we did not yet tested it with Mellanox
> drivers.
>
> For linux-generic refer to .travis.yaml or ./scripts/build-pktio-dpdk
> scripts. Also all required steps are in README.
>
> Maxim.
>
> On 11/09/17 14:47, Elo, Matias (Nokia - FI/Espoo) wrote:
> > Hi Gyanesh,
> >
> > Pretty much the same steps should also work with odp linux-generic. The
> main difference is configure script. With linux-generic you use
> '--with-dpdk-path=' option and optionally
> --enable-dpdk-zero-copy flag. The supported dpdk  version is v17.08.
> >
> > -Matias
> >
> >> On 9 Nov 2017, at 10:34, gyanesh patra <pgyanesh.pa...@gmail.com>
> wrote:
> >>
> >> Hi Maxim,
> >> Thanks for the help. I managed to figure out the configuration error
> and it
> >> works fine for "ODP-DPDK". The MLX5 pmd was not included properly.
> >>
> >> But regarding "ODP" repo (not odp-dpdk), do i need to follow any steps
> to
> >> be able to use MLX ???
> >>
> >>
> >> P Gyanesh Kumar Patra
> >>
> >> On Wed, Nov 8, 2017 at 7:56 PM, Maxim Uvarov <maxim.uva...@linaro.org>
> >> wrote:
> >>
> >>> On 11/08/17 19:32, gyanesh patra wrote:
> >>>> I am not sure what you mean. Can you please elaborate?
> >>>>
> >>>> As i mentioned before I am able to run dpdk examples. Hence the
> drivers
> >>>> are available and working fine.
> >>>> I configured ODP & ODP-DPDK with "LDFLAGS=-libverbs" and compiled to
> >>>> work with mellanox. I followed the same while compiling dpdk too.
> >>>>
> >>>> Is there anything i am missing?
> >>>>
> >>>> P Gyanesh Kumar Patra
> >>>
> >>>
> >>> in general if CONFIG_RTE_LIBRTE_MLX5_PMD=y was specified then it has to
> >>> work. I think we did test only with ixgbe. But in general it's common
> code.
> >>>
> >>> "Unable to init any I/O type." means it it called all open for all
> pktio
> >>> in list here:
> >>> ./platform/linux-generic/pktio/io_ops.c
> >>>
> >>> and setup_pkt_dpdk() failed for some reason.
> >>>
> >>> I do not like allocations errors in your log.
> >>>
> >>> Try to compile ODP with --enable-debug-print --enable-debug it will
> make
> >>> ODP_DBG() macro work and it will be visible why it does not opens
> pktio.
> >>>
> >>> Maxim
> >>>
> >>>
> >>>>
> >>>> On Wed, Nov 8, 2017 at 5:22 PM, Maxim Uvarov <maxim.uva...@linaro.org
> >>>> <mailto:maxim.uva...@linaro.org>> wrote:
> >>>>
> >>>>is Mellanox pmd compiled in?
> >>>>
> >>>>Maxim.
> >>>>
> >>>>On 11/08/17 17:58, gyanesh patra wrote:
> >>>>> Hi,
> >>>>> I am trying to run ODP & ODP-DPDK examples on our server with
> >>>>mellanox 100G
> >>>>> NICs. I am using the odp_l2fwd example. While running the example,
> >>>>I am
> >>>>> facing some issues.
> >>>>> -> When I run "ODP" example using the if names given by kernel as
> >>>>> arguments, I am not getting enough throughput.(the value is very
> >>> low)
> >>>>> -> And when I try "ODP-DPDK" example using port ID as "0,1", it
> >>> can't
> >>>>> create pktio. Whereas I am able to run the examples from "DPDK"
> >>>>> repo with portID "0,1" for the same mellanox NICs. I tried running
> >>>>with
> >>>>> "81:00.0,81:00.1" and also with if-names too without any success.
> >>>>Adding
> >>>>> the whitelist using ODP_PLATFORM_PARAMS doesn't help either.
> >>>>>
> >>>>> Am I missing any steps to use mellanox NICs? OR is there a
> >>>>different method
> >>>>> to specify the device details to create pktio?
> >>>>> I am providing the output of "odp_l2fwd" examples for ODP and
> >>> ODP-DPDK
> 

Re: [lng-odp] issues with usage of mellanox 100G NICs with ODP & ODP-DPDK

2017-11-10 Thread gyanesh patra
This is good news. I was wondering if it was in the pipeline. Thank you

P Gyanesh Kumar Patra

On Thu, Nov 9, 2017 at 7:25 PM, Francois Ozog <francois.o...@linaro.org>
wrote:

> ODP2.0 should allow ODP to leverage directly libiverbs from a native ODP
> pktio without DPDK layer.
> Mellanox has created a userland framework based on libiverbs while we try
> to promote an extension of Mediated Device (vfio-mdev).
>
> FF
>
> On 9 November 2017 at 14:18, Maxim Uvarov <maxim.uva...@linaro.org> wrote:
>
>> Nice to see it working. I think we did not yet tested it with Mellanox
>> drivers.
>>
>> For linux-generic refer to .travis.yaml or ./scripts/build-pktio-dpdk
>> scripts. Also all required steps are in README.
>>
>> Maxim.
>>
>> On 11/09/17 14:47, Elo, Matias (Nokia - FI/Espoo) wrote:
>> > Hi Gyanesh,
>> >
>> > Pretty much the same steps should also work with odp linux-generic. The
>> main difference is configure script. With linux-generic you use
>> '--with-dpdk-path=' option and optionally
>> --enable-dpdk-zero-copy flag. The supported dpdk  version is v17.08.
>> >
>> > -Matias
>> >
>> >> On 9 Nov 2017, at 10:34, gyanesh patra <pgyanesh.pa...@gmail.com>
>> wrote:
>> >>
>> >> Hi Maxim,
>> >> Thanks for the help. I managed to figure out the configuration error
>> and it
>> >> works fine for "ODP-DPDK". The MLX5 pmd was not included properly.
>> >>
>> >> But regarding "ODP" repo (not odp-dpdk), do i need to follow any steps
>> to
>> >> be able to use MLX ???
>> >>
>> >>
>> >> P Gyanesh Kumar Patra
>> >>
>> >> On Wed, Nov 8, 2017 at 7:56 PM, Maxim Uvarov <maxim.uva...@linaro.org>
>> >> wrote:
>> >>
>> >>> On 11/08/17 19:32, gyanesh patra wrote:
>> >>>> I am not sure what you mean. Can you please elaborate?
>> >>>>
>> >>>> As i mentioned before I am able to run dpdk examples. Hence the
>> drivers
>> >>>> are available and working fine.
>> >>>> I configured ODP & ODP-DPDK with "LDFLAGS=-libverbs" and compiled to
>> >>>> work with mellanox. I followed the same while compiling dpdk too.
>> >>>>
>> >>>> Is there anything i am missing?
>> >>>>
>> >>>> P Gyanesh Kumar Patra
>> >>>
>> >>>
>> >>> in general if CONFIG_RTE_LIBRTE_MLX5_PMD=y was specified then it has
>> to
>> >>> work. I think we did test only with ixgbe. But in general it's common
>> code.
>> >>>
>> >>> "Unable to init any I/O type." means it it called all open for all
>> pktio
>> >>> in list here:
>> >>> ./platform/linux-generic/pktio/io_ops.c
>> >>>
>> >>> and setup_pkt_dpdk() failed for some reason.
>> >>>
>> >>> I do not like allocations errors in your log.
>> >>>
>> >>> Try to compile ODP with --enable-debug-print --enable-debug it will
>> make
>> >>> ODP_DBG() macro work and it will be visible why it does not opens
>> pktio.
>> >>>
>> >>> Maxim
>> >>>
>> >>>
>> >>>>
>> >>>> On Wed, Nov 8, 2017 at 5:22 PM, Maxim Uvarov <
>> maxim.uva...@linaro.org
>> >>>> <mailto:maxim.uva...@linaro.org>> wrote:
>> >>>>
>> >>>>is Mellanox pmd compiled in?
>> >>>>
>> >>>>Maxim.
>> >>>>
>> >>>>On 11/08/17 17:58, gyanesh patra wrote:
>> >>>>> Hi,
>> >>>>> I am trying to run ODP & ODP-DPDK examples on our server with
>> >>>>mellanox 100G
>> >>>>> NICs. I am using the odp_l2fwd example. While running the example,
>> >>>>I am
>> >>>>> facing some issues.
>> >>>>> -> When I run "ODP" example using the if names given by kernel as
>> >>>>> arguments, I am not getting enough throughput.(the value is very
>> >>> low)
>> >>>>> -> And when I try "ODP-DPDK" example using port ID as "0,1", it
>> >>> can't
>> >>>>> create pktio. Whereas I am able to run the examples from "DPDK"
>>

Re: [lng-odp] issues with usage of mellanox 100G NICs with ODP & ODP-DPDK

2017-11-09 Thread gyanesh patra
Hi Maxim,
Thanks for the help. I managed to figure out the configuration error, and it
works fine for "ODP-DPDK". The MLX5 PMD was not included properly.

But regarding the "ODP" repo (not odp-dpdk), do I need to follow any steps to
be able to use MLX?


P Gyanesh Kumar Patra

On Wed, Nov 8, 2017 at 7:56 PM, Maxim Uvarov <maxim.uva...@linaro.org>
wrote:

> On 11/08/17 19:32, gyanesh patra wrote:
> > I am not sure what you mean. Can you please elaborate?
> >
> > As i mentioned before I am able to run dpdk examples. Hence the drivers
> > are available and working fine.
> > I configured ODP & ODP-DPDK with "LDFLAGS=-libverbs" and compiled to
> > work with mellanox. I followed the same while compiling dpdk too.
> >
> > Is there anything i am missing?
> >
> > P Gyanesh Kumar Patra
>
>
> in general if CONFIG_RTE_LIBRTE_MLX5_PMD=y was specified then it has to
> work. I think we did test only with ixgbe. But in general it's common code.
>
> "Unable to init any I/O type." means it it called all open for all pktio
> in list here:
> ./platform/linux-generic/pktio/io_ops.c
>
> and setup_pkt_dpdk() failed for some reason.
>
> I do not like allocations errors in your log.
>
> Try to compile ODP with --enable-debug-print --enable-debug it will make
> ODP_DBG() macro work and it will be visible why it does not opens pktio.
>
> Maxim
>
>
> >
> > On Wed, Nov 8, 2017 at 5:22 PM, Maxim Uvarov <maxim.uva...@linaro.org
> > <mailto:maxim.uva...@linaro.org>> wrote:
> >
> > is Mellanox pmd compiled in?
> >
> > Maxim.
> >
> > On 11/08/17 17:58, gyanesh patra wrote:
> > > Hi,
> > > I am trying to run ODP & ODP-DPDK examples on our server with
> > mellanox 100G
> > > NICs. I am using the odp_l2fwd example. While running the example,
> > I am
> > > facing some issues.
> > > -> When I run "ODP" example using the if names given by kernel as
> > > arguments, I am not getting enough throughput.(the value is very
> low)
> > > -> And when I try "ODP-DPDK" example using port ID as "0,1", it
> can't
> > > create pktio. Whereas I am able to run the examples from "DPDK"
> > > repo with portID "0,1" for the same mellanox NICs. I tried running
> > with
> > > "81:00.0,81:00.1" and also with if-names too without any success.
> > Adding
> > > the whitelist using ODP_PLATFORM_PARAMS doesn't help either.
> > >
> > > Am I missing any steps to use mellanox NICs? OR is there a
> > different method
> > > to specify the device details to create pktio?
> > > I am providing the output of "odp_l2fwd" examples for ODP and
> ODP-DPDK
> > > repository here.
> > >
> > > The NICs being used:
> > >
> > > :81:00.0 'MT27700 Family [ConnectX-4]' if=enp129s0f0
> drv=mlx5_core
> > > unused=
> > > :81:00.1 'MT27700 Family [ConnectX-4]' if=enp129s0f1
> drv=mlx5_core
> > > unused=
> > >
> > > ODP l2fwd example run details:
> > > --
> > > root@ubuntu:/home/ubuntu/odp/test/performance# ./odp_l2fwd -i
> > > enp129s0f0,enp129s0f1
> > > HW time counter freq: 2399999886 hz
> > >
> > > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
> memory
> > > _ishm.c:880:_odp_ishm_reserve():No huge pages, fall back to normal
> > pages.
> > > check: /proc/sys/vm/nr_hugepages.
> > > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
> memory
> > > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
> memory
> > > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
> memory
> > >  PKTIO: initialized loop interface.
> > >  PKTIO: initialized pcap interface.
> > >  PKTIO: initialized ipc interface.
> > >  PKTIO: initialized socket mmap, use export
> > ODP_PKTIO_DISABLE_SOCKET_MMAP=1
> > > to disable.
> > >  PKTIO: initialized socket mmsg,use export
> > ODP_PKTIO_DISABLE_SOCKET_MMSG=1
> > > to disable.
> > > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
> memory
> > > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
> memory
> > >

Re: [lng-odp] issues with usage of mellanox 100G NICs with ODP & ODP-DPDK

2017-11-08 Thread gyanesh patra
- Where do we set CONFIG_RTE_LIBRTE_MLX5_PMD? I believe in DPDK we set it
while compiling. How do we set it for ODP?

- With Mellanox NICs, it is not required to explicitly bind the interface
to DPDK. I can use port IDs "0,1" with the DPDK examples. Is it the same way to
specify them with the ODP-DPDK l2fwd example too?


P Gyanesh Kumar Patra

On Wed, Nov 8, 2017 at 7:56 PM, Maxim Uvarov <maxim.uva...@linaro.org>
wrote:

> On 11/08/17 19:32, gyanesh patra wrote:
> > I am not sure what you mean. Can you please elaborate?
> >
> > As i mentioned before I am able to run dpdk examples. Hence the drivers
> > are available and working fine.
> > I configured ODP & ODP-DPDK with "LDFLAGS=-libverbs" and compiled to
> > work with mellanox. I followed the same while compiling dpdk too.
> >
> > Is there anything i am missing?
> >
> > P Gyanesh Kumar Patra
>
>
> in general if CONFIG_RTE_LIBRTE_MLX5_PMD=y was specified then it has to
> work. I think we did test only with ixgbe. But in general it's common code.
>
> "Unable to init any I/O type." means it it called all open for all pktio
> in list here:
> ./platform/linux-generic/pktio/io_ops.c
>
> and setup_pkt_dpdk() failed for some reason.
>
> I do not like allocations errors in your log.
>
> Try to compile ODP with --enable-debug-print --enable-debug it will make
> ODP_DBG() macro work and it will be visible why it does not opens pktio.
>
> Maxim
>
>
> >
> > On Wed, Nov 8, 2017 at 5:22 PM, Maxim Uvarov <maxim.uva...@linaro.org
> > <mailto:maxim.uva...@linaro.org>> wrote:
> >
> > is Mellanox pmd compiled in?
> >
> > Maxim.
> >
> > On 11/08/17 17:58, gyanesh patra wrote:
> > > Hi,
> > > I am trying to run ODP & ODP-DPDK examples on our server with
> > mellanox 100G
> > > NICs. I am using the odp_l2fwd example. While running the example,
> > I am
> > > facing some issues.
> > > -> When I run "ODP" example using the if names given by kernel as
> > > arguments, I am not getting enough throughput.(the value is very
> low)
> > > -> And when I try "ODP-DPDK" example using port ID as "0,1", it
> can't
> > > create pktio. Whereas I am able to run the examples from "DPDK"
> > > repo with portID "0,1" for the same mellanox NICs. I tried running
> > with
> > > "81:00.0,81:00.1" and also with if-names too without any success.
> > Adding
> > > the whitelist using ODP_PLATFORM_PARAMS doesn't help either.
> > >
> > > Am I missing any steps to use mellanox NICs? OR is there a
> > different method
> > > to specify the device details to create pktio?
> > > I am providing the output of "odp_l2fwd" examples for ODP and
> ODP-DPDK
> > > repository here.
> > >
> > > The NICs being used:
> > >
> > > :81:00.0 'MT27700 Family [ConnectX-4]' if=enp129s0f0
> drv=mlx5_core
> > > unused=
> > > :81:00.1 'MT27700 Family [ConnectX-4]' if=enp129s0f1
> drv=mlx5_core
> > > unused=
> > >
> > > ODP l2fwd example run details:
> > > --
> > > root@ubuntu:/home/ubuntu/odp/test/performance# ./odp_l2fwd -i
> > > enp129s0f0,enp129s0f1
> > > HW time counter freq: 2399999886 hz
> > >
> > > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
> memory
> > > _ishm.c:880:_odp_ishm_reserve():No huge pages, fall back to normal
> > pages.
> > > check: /proc/sys/vm/nr_hugepages.
> > > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
> memory
> > > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
> memory
> > > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
> memory
> > >  PKTIO: initialized loop interface.
> > >  PKTIO: initialized pcap interface.
> > >  PKTIO: initialized ipc interface.
> > >  PKTIO: initialized socket mmap, use export
> > ODP_PKTIO_DISABLE_SOCKET_MMAP=1
> > > to disable.
> > >  PKTIO: initialized socket mmsg,use export
> > ODP_PKTIO_DISABLE_SOCKET_MMSG=1
> > > to disable.
> > > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate
> memory
> > > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cann

Re: [lng-odp] issues with usage of mellanox 100G NICs with ODP & ODP-DPDK

2017-11-08 Thread gyanesh patra
I am not sure what you mean. Can you please elaborate?

As I mentioned before, I am able to run the DPDK examples. Hence the drivers are
available and working fine.
I configured ODP & ODP-DPDK with "LDFLAGS=-libverbs" and compiled them to work
with Mellanox. I followed the same steps while compiling DPDK too.

Is there anything I am missing?

P Gyanesh Kumar Patra

On Wed, Nov 8, 2017 at 5:22 PM, Maxim Uvarov <maxim.uva...@linaro.org>
wrote:

> is Mellanox pmd compiled in?
>
> Maxim.
>
> On 11/08/17 17:58, gyanesh patra wrote:
> > Hi,
> > I am trying to run ODP & ODP-DPDK examples on our server with mellanox
> 100G
> > NICs. I am using the odp_l2fwd example. While running the example, I am
> > facing some issues.
> > -> When I run "ODP" example using the if names given by kernel as
> > arguments, I am not getting enough throughput.(the value is very low)
> > -> And when I try "ODP-DPDK" example using port ID as "0,1", it can't
> > create pktio. Whereas I am able to run the examples from "DPDK"
> > repo with portID "0,1" for the same mellanox NICs. I tried running with
> > "81:00.0,81:00.1" and also with if-names too without any success. Adding
> > the whitelist using ODP_PLATFORM_PARAMS doesn't help either.
> >
> > Am I missing any steps to use mellanox NICs? OR is there a different
> method
> > to specify the device details to create pktio?
> > I am providing the output of "odp_l2fwd" examples for ODP and ODP-DPDK
> > repository here.
> >
> > The NICs being used:
> >
> > :81:00.0 'MT27700 Family [ConnectX-4]' if=enp129s0f0 drv=mlx5_core
> > unused=
> > :81:00.1 'MT27700 Family [ConnectX-4]' if=enp129s0f1 drv=mlx5_core
> > unused=
> >
> > ODP l2fwd example run details:
> > --
> > root@ubuntu:/home/ubuntu/odp/test/performance# ./odp_l2fwd -i
> > enp129s0f0,enp129s0f1
> > HW time counter freq: 2399999886 hz
> >
> > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
> > _ishm.c:880:_odp_ishm_reserve():No huge pages, fall back to normal
> pages.
> > check: /proc/sys/vm/nr_hugepages.
> > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
> > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
> > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
> >  PKTIO: initialized loop interface.
> >  PKTIO: initialized pcap interface.
> >  PKTIO: initialized ipc interface.
> >  PKTIO: initialized socket mmap, use export
> ODP_PKTIO_DISABLE_SOCKET_MMAP=1
> > to disable.
> >  PKTIO: initialized socket mmsg,use export ODP_PKTIO_DISABLE_SOCKET_MMSG=
> 1
> > to disable.
> > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
> > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
> > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
> > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
> >
> > ODP system info
> > ---
> > ODP API version: 1.15.0
> > ODP impl name:   "odp-linux"
> > CPU model:   Intel(R) Xeon(R) CPU E5-2680 v4
> > CPU freq (hz):   33
> > Cache line size: 64
> > CPU count:   56
> >
> >
> > CPU features supported:
> > SSE3 PCLMULQDQ DTES64 MONITOR DS_CPL VMX SMX EIST TM2 SSSE3 FMA
> CMPXCHG16B
> > XTPR PDCM PCID DCA SSE4_1 SSE4_2 X2APIC MOVBE POPCNT TSC_DEADLINE AES
> XSAVE
> > OSXSAVE AVX F16C RDRAND FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR
> > PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE
> > DIGTEMP TRBOBST ARAT PLN ECMD PTM MPERF_APERF_MSR ENERGY_EFF FSGSBASE HLE
> > AVX2 BMI2 ERMS INVPCID RTM LAHF_SAHF SYSCALL XD 1GB_PG RDTSCP EM64T
> INVTSC
> >
> > CPU features NOT supported:
> > CNXT_ID PSN ACNT2 BMI1 SMEP AVX512F LZCNT
> >
> > Running ODP appl: "odp_l2fwd"
> > -
> > IF-count:2
> > Using IFs:   enp129s0f0 enp129s0f1
> > Mode:PKTIN_DIRECT, PKTOUT_DIRECT
> >
> > num worker threads: 32
> > first CPU:  24
> > cpu mask:   0x00
> >
> > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
> > _ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
> >
> > Pool info
> > -
> >   pool0
> >   namepacket pool
> >   pool type   packet
> >   pool shm11
> > 

[lng-odp] issues with usage of mellanox 100G NICs with ODP & ODP-DPDK

2017-11-08 Thread gyanesh patra
Hi,
I am trying to run the ODP & ODP-DPDK examples on our server with Mellanox 100G
NICs. I am using the odp_l2fwd example. While running the example, I am
facing some issues.
-> When I run the "ODP" example using the interface names given by the kernel as
arguments, I am not getting enough throughput (the value is very low).
-> And when I try the "ODP-DPDK" example using port IDs "0,1", it can't
create the pktio. Whereas I am able to run the examples from the "DPDK"
repo with port IDs "0,1" for the same Mellanox NICs. I tried running with
"81:00.0,81:00.1" and also with the interface names, without any success. Adding
the whitelist using ODP_PLATFORM_PARAMS doesn't help either.

Am I missing any steps to use Mellanox NICs? Or is there a different method
to specify the device details to create the pktio?
I am providing the output of "odp_l2fwd" examples for ODP and ODP-DPDK
repository here.

The NICs being used:

:81:00.0 'MT27700 Family [ConnectX-4]' if=enp129s0f0 drv=mlx5_core
unused=
:81:00.1 'MT27700 Family [ConnectX-4]' if=enp129s0f1 drv=mlx5_core
unused=

ODP l2fwd example run details:
--
root@ubuntu:/home/ubuntu/odp/test/performance# ./odp_l2fwd -i
enp129s0f0,enp129s0f1
HW time counter freq: 2399999886 hz

_ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
_ishm.c:880:_odp_ishm_reserve():No huge pages, fall back to normal pages.
check: /proc/sys/vm/nr_hugepages.
_ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
_ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
_ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
 PKTIO: initialized loop interface.
 PKTIO: initialized pcap interface.
 PKTIO: initialized ipc interface.
 PKTIO: initialized socket mmap, use export ODP_PKTIO_DISABLE_SOCKET_MMAP=1
to disable.
 PKTIO: initialized socket mmsg,use export ODP_PKTIO_DISABLE_SOCKET_MMSG=1
to disable.
_ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
_ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
_ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
_ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory

ODP system info
---
ODP API version: 1.15.0
ODP impl name:   "odp-linux"
CPU model:   Intel(R) Xeon(R) CPU E5-2680 v4
CPU freq (hz):   33
Cache line size: 64
CPU count:   56


CPU features supported:
SSE3 PCLMULQDQ DTES64 MONITOR DS_CPL VMX SMX EIST TM2 SSSE3 FMA CMPXCHG16B
XTPR PDCM PCID DCA SSE4_1 SSE4_2 X2APIC MOVBE POPCNT TSC_DEADLINE AES XSAVE
OSXSAVE AVX F16C RDRAND FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR
PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE
DIGTEMP TRBOBST ARAT PLN ECMD PTM MPERF_APERF_MSR ENERGY_EFF FSGSBASE HLE
AVX2 BMI2 ERMS INVPCID RTM LAHF_SAHF SYSCALL XD 1GB_PG RDTSCP EM64T INVTSC

CPU features NOT supported:
CNXT_ID PSN ACNT2 BMI1 SMEP AVX512F LZCNT

Running ODP appl: "odp_l2fwd"
-
IF-count:2
Using IFs:   enp129s0f0 enp129s0f1
Mode:PKTIN_DIRECT, PKTOUT_DIRECT

num worker threads: 32
first CPU:  24
cpu mask:   0x00

_ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory
_ishmphy.c:152:_odp_ishmphy_map():mmap failed:Cannot allocate memory

Pool info
-
  pool0
  namepacket pool
  pool type   packet
  pool shm11
  user area shm   0
  num 8192
  align   64
  headroom128
  seg len 8064
  max data len65536
  tailroom0
  block size  8768
  uarea size  0
  shm size72143104
  base addr   0x7f5fc1234000
  uarea shm size  0
  uarea base addr (nil)

pktio/socket_mmap.c:401:mmap_setup_ring():setsockopt(pkt mmap): Invalid
argument
pktio/socket_mmap.c:496:sock_mmap_close():mmap_unmap_sock() Invalid argument
created pktio 1, dev: enp129s0f0, drv: socket
Sharing 1 input queues between 16 workers
Sharing 1 output queues between 16 workers
created 1 input and 1 output queues on (enp129s0f0)
pktio/socket_mmap.c:401:mmap_setup_ring():setsockopt(pkt mmap): Invalid
argument
pktio/socket_mmap.c:496:sock_mmap_close():mmap_unmap_sock() Invalid argument
created pktio 2, dev: enp129s0f1, drv: socket
Sharing 1 input queues between 16 workers
Sharing 1 output queues between 16 workers
created 1 input and 1 output queues on (enp129s0f1)

Queue binding (indexes)
---
worker 0
  rx: pktio 0, queue 0
  tx: pktio 1, queue 0
worker 1
  rx: pktio 1, queue 0
  tx: pktio 0, queue 0
worker 2
  rx: pktio 0, queue 0
  tx: pktio 1, queue 0
worker 3
  rx: pktio 1, queue 0
  tx: pktio 0, queue 0
worker 4
  rx: pktio 0, queue 0
  tx: pktio 1, queue 0
worker 5
  rx: pktio 1, queue 0
  tx: pktio 0, queue 0
worker 6
  rx: pktio 0, queue 0
  tx: pktio 1, queue 0
worker 7
  rx: pktio 1, queue 0
  tx: pktio 0, queue 0
worker 8
  rx: pktio 0, queue 0
  tx: pktio 1, queue 0
worker 9
  rx: pktio 1, queue 0
  tx: pktio 0, 

Re: [lng-odp] ODP install error with dpdk 16.07

2017-01-31 Thread Gyanesh Patra
Thanks, with this configuration it is working.

My doubt here is:

Why are we not using the shared lib and abi-compat?

P Gyanesh Patra

On Mon, 30 Jan 2017 at 05:37 Elo Matias (Nokia - FI/Espoo) <matias@nokia-bell-labs.com> wrote:


> make: *** [all-recursive] Error 1

>

> I have tried with ODP master branch with DPDK 17 and DPDK 16.07 . I am facing 
> the same problem.

>

> This is the ODP repo, not the odp-dpdk repo.

>

> P Gyanesh Patra

Hi Gyanesh,

Odp-linux currently supports DPDK v16.07. With these steps everything is 
working for me:

# DPDK install
cd
git checkout v16.07
make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
cd x86_64-native-linuxapp-gcc/
sed -ri 's,(CONFIG_RTE_LIBRTE_PMD_PCAP=).*,\1y,' .config
cd ..
make install T=x86_64-native-linuxapp-gcc DESTDIR=./install EXTRA_CFLAGS="-fPIC"

# odp-linux install
cd
./bootstrap
./configure --with-dpdk-path=/x86_64-native-linuxapp-gcc --disable-shared --disable-abi-compat
make

Were you doing something differently?

Regards,

Matias


Re: [lng-odp] odp-dpdk gives error with "configure" command

2017-01-31 Thread Gyanesh Patra
Thank you.

It works fine now.

P Gyanesh Patra

On Mon, 30 Jan 2017 at 05:54 Elo Matias (Nokia - FI/Espoo) <matias@nokia-bell-labs.com> wrote:


> On 28 Jan 2017, at 23:38, Gyanesh Patra <pgyanesh.pa...@gmail.com> wrote:

>

> odp-dpdk repo gives error for “configure” command when tried with dpdk 16.07 
> and dpdk 17. I am running on ubuntu16 LTS.

>

> ./configure --with-dpdk-path=./dpdk/x86_64-native-linuxapp-gcc

>

> checking rte_config.h usability... no

>

> checking rte_config.h presence... no

>

> checking for rte_config.h... no

>

> configure: error: in `/home/macsad/pktio/odp-dpdk':

>

> configure: error: "can't find DPDK headers"

>

> See `config.log' for more details

>

> macsad@india:~/pktio/odp-dpdk$

>

> P Gyanesh Patra

Hi Gyanesh,

I’m installing odp-dpdk as follows:

# DPDK install
cd
git checkout v16.07
make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
cd x86_64-native-linuxapp-gcc/
sed -ri 's,(CONFIG_RTE_LIBRTE_PMD_PCAP=).*,\1y,' .config
cd ..
make install T=x86_64-native-linuxapp-gcc DESTDIR=./install EXTRA_CFLAGS="-fPIC"

# Odp-dpdk install
cd
./bootstrap
./configure --with-platform=linux-dpdk --with-sdk-install-path=/x86_64-native-linuxapp-gcc
make

Did this help?

-Matias


[lng-odp] odp-dpdk gives error with "configure" command

2017-01-28 Thread Gyanesh Patra
The odp-dpdk repo gives an error for the “configure” command when tried with DPDK 16.07
and DPDK 17. I am running on Ubuntu 16 LTS.

./configure --with-dpdk-path=./dpdk/x86_64-native-linuxapp-gcc

checking rte_config.h usability... no

checking rte_config.h presence... no

checking for rte_config.h... no

configure: error: in `/home/macsad/pktio/odp-dpdk':

configure: error: "can't find DPDK headers"

See `config.log' for more details

macsad@india:~/pktio/odp-dpdk$ 

P Gyanesh Patra


[lng-odp] ODP install error with dpdk 16.07

2017-01-28 Thread Gyanesh Patra
The ODP install “make” command gives the error below:

make[1]: Entering directory 'odp/test'

Making all in common_plat

make[2]: Entering directory 'odp/test/common_plat'

Making all in performance

make[3]: Entering directory 'odp/test/common_plat/performance'

  CC       odp_bench_packet-odp_bench_packet.o

  CCLD     odp_bench_packet

../../../lib/.libs/libodp-linux.a(io_ops.o):(.rodata+0x8): undefined reference 
to `dpdk_pktio_ops'

collect2: error: ld returned 1 exit status

Makefile:797: recipe for target 'odp_bench_packet' failed

make[3]: *** [odp_bench_packet] Error 1

make[3]: Leaving directory 'odp/test/common_plat/performance'

Makefile:438: recipe for target 'all-recursive' failed

make[2]: *** [all-recursive] Error 1

make[2]: Leaving directory 'odp/test/common_plat'

Makefile:437: recipe for target 'all-recursive' failed

make[1]: *** [all-recursive] Error 1

make[1]: Leaving directory 'odp/test'

Makefile:497: recipe for target 'all-recursive' failed

make: *** [all-recursive] Error 1

I have tried the ODP master branch with DPDK 17 and DPDK 16.07. I am facing
the same problem.

This is the ODP repo, not the odp-dpdk repo.

P Gyanesh Patra


Re: [lng-odp] ODP installation issues on Cavium ThunderX

2017-01-19 Thread Gyanesh Patra
There is no link available for the "Cavium implementation" by Support Account
on the ODP download page. How can I get access to the code, and also the
corresponding toolchain if needed?

P Gyanesh Patra

On Thu, 19 Jan 2017 at 14:33 Mike Holmes <mike.hol...@linaro.org> wrote:


On 19 January 2017 at 11:17, Gyanesh Patra <pgyanesh.pa...@gmail.com> wrote:

> Hi,

>

> I tried to follow the instruction from OpenFastPath readme file as below:

>

>>> ./bootstrap

>

>

>

>>> ./configure --host=aarch64-thunderx-linux-gnu \

>

> --with-platform=linux-thunder \

>

> --with-openssl-path=${OPENSSL_DIR} \

>

> --enable-debug-print \

>

> --enable-debug \

>

> --prefix=${ODP_DIR} \

>

> CFLAGS="-O0 -static -g" LIBS="-ldl”

>

>>> make

>

> But the `configure` results in a error as below:

>

>

>

> checking for C compiler version... 5.4.0

>

> UNSUPPORTED PLATFORM: linux-thunder

>

> Is there any wiki or instructions for how to build on ThunderX platform with 
> the prerequisites details? I am using the latest master branch of ODP.

That is the issue :) see
https://www.opendataplane.org/downloads/
The Cavium implementation by Support Account

>

>>>> lsb_release -a

>

> No LSB modules are available.

>

> Distributor ID:

>

> Ubuntu

>

> Description:

>

> Ubuntu 16.04.1 LTS

>

> Release:

>

> 16.04

>

> Codename:

>

> xenial

>

>>>> gcc --version

>

> gcc (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609

>

> Thanks,

>

> P Gyanesh Patra

--

Mike Holmes

Program Manager - Linaro Networking Group
Linaro.org
│ Open source software for ARM SoCs

"Work should be fun and collaborative, the rest follows"


[lng-odp] ODP installation issues on Cavium ThunderX

2017-01-19 Thread Gyanesh Patra
Hi,

I tried to follow the instructions from the OpenFastPath readme file as below:

>>  ./bootstrap

 

>> ./configure --host=aarch64-thunderx-linux-gnu \

  --with-platform=linux-thunder \

  --with-openssl-path=${OPENSSL_DIR} \

  --enable-debug-print \

  --enable-debug \

  --prefix=${ODP_DIR} \

CFLAGS="-O0 -static -g" LIBS="-ldl”

>> make

But the `configure` results in an error as below:

  

checking for C compiler version... 5.4.0

UNSUPPORTED PLATFORM: linux-thunder

Are there any wiki pages or instructions for how to build on the ThunderX platform, with
the prerequisite details? I am using the latest master branch of ODP.

>>> lsb_release -a

No LSB modules are available.

Distributor ID:

Ubuntu

Description:

Ubuntu 16.04.1 LTS

Release:

16.04

Codename:

xenial

>>> gcc --version

gcc (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609

Thanks,

P Gyanesh Patra


[lng-odp] query regarding cuckoo hash table support

2016-09-20 Thread gyanesh patra
Hi,
I am unable to find the cuckoo hash files in the recent code base. Has the
feature been removed from the ODP code base or renamed to something else?

Thank you
Gyanesh


Re: [lng-odp] LPM Algorithm APIs in ODP

2016-09-20 Thread gyanesh patra
Hi,
The l3fwd example is a great addition to ODP. I am curious whether IPv6 support
is also under investigation for the l3fwd example.
If not, I would be interested in taking it up and contributing it to the ODP code
base.

Thank you

P Gyanesh Kumar Patra

On Mon, Apr 18, 2016 at 10:26 PM, HePeng <xnhp0...@icloud.com> wrote:

> Hi,
>Our current LPM code is based on Tree Bitmap as a backend for IP
> prefixes
> management. Before we submit the code, we need first to remove this part
> of
> code as Tree Bitmap is a patent algorithm for Cisco.
>
>If you just want an LPM algorithm for evaluation, we can provide the
> Tree Bitmap code alone,
> but it is not fit into ODP architecture. Please check https://github.com/
> xnhp0320/prefix_lookup_mc.git
> and pull the develop branch.
>
>We are working on the code, I think the code should be ready in two
> weeks.
>
>
>
>
> On 18 Apr 2016, at 12:39 PM, P Gyanesh Kumar Patra <pgyanesh.pa...@gmail.com> wrote:
>
> Hi,
> Thank you for the details.
> Do we have any time frame for the LPM code submission?
> Is it possible to do some trial on the LPM code now?
>
> Is there a list of algorithms in the pipeline to be developed for ODP?
>
> Thank You
> *P Gyanesh K. Patra*
> *University of Campinas (Unicamp)*
>
>
>
>
> On Apr 17, 2016, at 22:55, HePeng <xnhp0...@icloud.com> wrote:
>
> Hi,
>    We are in the process of releasing the LPM code, but currently we are
> busy submitting the cuckoo hash code into the ODP helper.
>
>    As for the LPM code, we already have a 16-8-8 implementation. We are now
> working on the code to fit it into the ODP architecture, but we have not
> submitted any LPM code yet.
>
>    If there is a requirement, we can switch our focus to the LPM code.
>
>
> On 18 April 2016, at 9:03 AM, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
>
> I encountered an old email chain about the different LPM algorithms for
> ODP. I am curious if anyone has released, or is working on, something for
> L3 forwarding/routing.
> Here is the link to the mail chain:
> https://lists.linaro.org/pipermail/lng-odp/2015-August/014717.html
>
> If any work is going on, please point me in the right direction. Also, do
> we have any example code for L3 forwarding in ODP available now?
>
> Thank you
> *P Gyanesh K. Patra*
> *University of Campinas (Unicamp)*
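
For reference, the 16-8-8 scheme mentioned above is a staged multibit trie.
Below is a minimal C sketch of such a lookup (hypothetical structures and
names, not the actual prefix_lookup_mc or ODP code; insertion and prefix
expansion are omitted):

    /* Minimal sketch of a 16-8-8 multibit trie lookup (hypothetical layout;
     * insertion, prefix expansion and memory management are omitted). */
    #include <stdint.h>

    #define NO_ROUTE   0x7FFFu   /* "no next hop" marker (child bit clear)      */
    #define HAS_CHILD  0x8000u   /* entry is an index into a sub-table          */

    struct trie16_8_8 {
        uint16_t l1[1 << 16];    /* level 1: indexed by the top 16 address bits */
        uint16_t (*l2)[256];     /* level 2 sub-tables: indexed by bits 15..8   */
        uint16_t (*l3)[256];     /* level 3 sub-tables: indexed by bits  7..0   */
    };

    /* Returns a next-hop id, or NO_ROUTE if no prefix covers the address.
     * Prefixes shorter than a stride are handled by expansion at insert time. */
    static uint16_t lpm_lookup(const struct trie16_8_8 *t, uint32_t ipv4)
    {
        uint16_t e = t->l1[ipv4 >> 16];

        if (!(e & HAS_CHILD))
            return e;            /* longest match is /16 or shorter             */

        e = t->l2[e & ~HAS_CHILD][(ipv4 >> 8) & 0xFF];
        if (!(e & HAS_CHILD))
            return e;            /* longest match is /24 or shorter             */

        return t->l3[e & ~HAS_CHILD][ipv4 & 0xFF];
    }

Most addresses resolve in one or two dependent memory reads; only prefixes
longer than /24 pay for the third level.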


Re: [lng-odp] "Message too long" error message with Iperf testing

2016-05-16 Thread gyanesh patra
I am using linux-generic with ODP v1.9.0.0.
I am just running Iperf with default settings.
But I will try similar experiments after turning off segment offloading on the
target NIC. Thank you for the pointer.

Regards,
Gyanesh Patra

> On May 16, 2016, at 21:21, Bill Fischofer <bill.fischo...@linaro.org> wrote:
> 
> 
> 
> On Mon, May 16, 2016 at 6:45 PM, gyanesh patra <pgyanesh.pa...@gmail.com> wrote:
> I have a simple *standalone application* using ODP. It receives packets on
> one interface and broadcasts them to all other interfaces. But many times I
> see this error message when I try to measure the throughput with the Iperf
> tool.
>
>   pktio/socket_mmap.c:263:pkt_mmap_v2_tx():sendto(pkt mmap):
> Message too long
> 
> What implementation of ODP are you running and at what level? linux-generic, 
> odp-dpdk, a vendor implementation?
> 
> ODP does not support Large Segment Offload (LSO) processing, so if the packet 
> you're trying to send exceeds the target pktio's MTU as returned by the 
> odp_pktio_mtu() API, then you'll see TX failures.
>  
> 
> I don't have jumbo frames enabled. Also, I don't think Iperf is sending any
> jumbo frames. When I get this error, all packets are dropped and the
> throughput comes down to a very low number.
>
> What could be the problem? Is there any specific configuration I need to
> do with respect to ODP?
>
> I am using one thread for each interface. Each interface has one TX and one
> RX queue configured. I am using burst mode to receive packets.
>
> Thank you
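
Following the MTU point above, here is a minimal sketch of a pre-transmit
guard. It assumes the v1.9-era odp_pktio_mtu() and odp_pktout_send() calls and
the <odp_api.h> header; whether the reported MTU counts the Ethernet header can
vary between pktio implementations, so treat the comparison as illustrative:

    /* Sketch only: guard transmits against the egress MTU, since ODP does not
     * segment (LSO) on send.  API names assume the v1.9-era pktio interface. */
    #include <odp_api.h>

    static int send_within_mtu(odp_pktio_t pktio, odp_pktout_queue_t txq,
                               odp_packet_t pkts[], int num)
    {
        uint32_t mtu = odp_pktio_mtu(pktio);  /* MTU reported for this pktio */
        int kept = 0;

        for (int i = 0; i < num; i++) {
            if (odp_packet_len(pkts[i]) > mtu) {
                /* Would otherwise fail in pkt_mmap_v2_tx() with "Message too long" */
                odp_packet_free(pkts[i]);
                continue;
            }
            pkts[kept++] = pkts[i];
        }

        return odp_pktout_send(txq, pkts, kept);
    }

Dropping (or fragmenting) the oversized frames in the application avoids the
sendto() failure; disabling segment offloading on the NICs, as discussed above,
removes the oversized frames at their origin.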



[lng-odp] "Message too long" error message with Iperf testing

2016-05-16 Thread gyanesh patra
I have a simple *standalone application* using ODP. It receives packets on
one interface and broadcasts them to all other interfaces. But many times I
see this error message when I try to measure the throughput with the Iperf
tool.

  pktio/socket_mmap.c:263:pkt_mmap_v2_tx():sendto(pkt mmap):
Message too long

I don't have jumbo frames enabled. Also, I don't think Iperf is sending any
jumbo frames. When I get this error, all packets are dropped and the
throughput comes down to a very low number.

What could be the problem? Is there any specific configuration I need to
do with respect to ODP?

I am using one thread for each interface. Each interface has one TX and one
RX queue configured. I am using burst mode to receive packets.

Thank you


[lng-odp] VLAN rx-offload set/unset to handle vlan HDR removal

2016-04-28 Thread gyanesh patra
Hi,
How does ODP handle VLAN header removal with respect to the vlan-rx-offload
feature in Linux/hardware?
Are there any APIs for the following (a rough sketch of what I have so far is
below)?

   1. Knowing whether the VLAN header has been removed, i.e. whether the
   offload feature is set or not?
   2. Getting the VLAN tag value if the header has been removed?
   3. Setting/unsetting the VLAN offload feature?
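
The closest I have found so far only covers the case where the tag is still
present in the packet data: the parse flag odp_packet_has_vlan() is set and the
TCI can be read from the L2 header directly. This is a rough sketch, not
verified against every pktio:

    /* Rough sketch: works only when the NIC/driver has NOT stripped the tag;
     * in that case odp_packet_has_vlan() is set by the ODP parser and the TCI
     * is still in the frame.  I have not found a portable API for a stripped tag. */
    #include <odp_api.h>

    static int get_vlan_tci(odp_packet_t pkt, uint16_t *tci)
    {
        uint32_t len;
        const uint8_t *eth = odp_packet_l2_ptr(pkt, &len);

        if (!odp_packet_has_vlan(pkt) || eth == NULL || len < 18)
            return -1;  /* no VLAN header visible to ODP */

        /* Bytes 12-13 hold the TPID (0x8100), bytes 14-15 the PCP/DEI/VID */
        *tci = (uint16_t)((eth[14] << 8) | eth[15]);
        return 0;
    }

As far as I can tell, if the driver strips the tag before ODP sees the frame,
has_vlan() comes back 0 and the value is lost, which is what prompted the
questions above.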

Thank you
*P Gyanesh K. Patra*
*University of Campinas (Unicamp)*


[lng-odp] LPM Algorithm APIs in ODP

2016-04-17 Thread gyanesh patra
I encountered an old email chain about the different LPM algorithms for ODP.
I am curious if anyone has released, or is working on, something for L3
forwarding/routing.
Here is the link to the mail chain:
https://lists.linaro.org/pipermail/lng-odp/2015-August/014717.html

If any work is going on, please point me in the right direction. Also, do we
have any example code for L3 forwarding in ODP available now?

Thank you
*P Gyanesh K. Patra*
*University of Campinas (Unicamp)*


[lng-odp] Fundamental difference between ODP and SAI (Switch Abstraction Interface) from OCP group

2015-08-05 Thread gyanesh patra
Hi All,
I have seen that ODP and SAI both have good traction in the networking
community, with very rapid contribution and development. Can anyone please
explain how they differ from each other? What is the fundamental difference
in their goals?

Thanks
P Gyanesh Kumar Patra