Re: [lng-odp] latency calculation with netmap pkt i/o fails with oversized packet debug msg

2018-07-26 Thread gyanesh patra
I verified the throughput over the link with/without this debug message.
With the DEBUG message: 10-15 Mbps
Without the DEBUG message: 1500 Mbps

Because this debug message is printed to stdout for every received packet, the
throughput drops drastically and the latency can't be calculated properly
either. Should I just remove the debug message from the netmap.c file? Does it
serve any purpose?
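
Instead of deleting it outright, one option could be to move the message to the
branch where a packet would actually be dropped. A minimal sketch, assuming the
code around netmap_recv_desc() line 839 keeps the shape quoted further down in
this thread (the else branch below is hypothetical, it is not in the current
netmap.c):

if (odp_likely(ring->slot[slot_id].len <= mtu)) {
        slot_tbl[num_rx].buf = buf;
        slot_tbl[num_rx].len = ring->slot[slot_id].len;
        num_rx++;
} else {
        /* log only frames that really exceed the MTU,
         * keeping the fast path silent */
        ODP_DBG("dropped oversized packet %" PRIu32 " > %" PRIu32 "\n",
                (uint32_t)ring->slot[slot_id].len, mtu);
}

That way the per-packet logging cost disappears from the normal receive path.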

Regards,
Gyanesh

On Thu, Jul 26, 2018 at 11:25 AM Maxim Uvarov wrote:

>
>
> On 26 July 2018 at 16:01, gyanesh patra  wrote:
>
>> Hi,
>> Here is the output for the debug messages as advised:
>> For this code:
>> --
>>  541 ODP_DBG("MTU: %" PRIu32 "\n", mtu);
>>  542 ODP_DBG("NM buf_size: %" PRIu32 "\n", buf_size);
>>  543 pkt_nm->mtu = (mtu < buf_size) ? mtu : buf_size;
>>
>> Output:
>> -
>> netmap interface: eth5
>>   num_rx_desc: 0
>>   num_tx_desc: 0
>> pktio/netmap.c:541:netmap_open():MTU: 1514
>> pktio/netmap.c:542:netmap_open():NM buf_size: 2048
>> pktio/netmap.c:567:netmap_open():netmap pktio eth5 does not support
>> statistics counters
>> odp_packet_io.c:295:odp_pktio_open():interface: eth5, driver: netmap
>>
>> =
>> For this code:
>> --
>>  839 if (odp_likely(ring->slot[slot_id].len <= mtu)) {
>>  840         slot_tbl[num_rx].buf = buf;
>>  841         slot_tbl[num_rx].len = ring->slot[slot_id].len;
>>  842         ODP_DBG("dropped oversized packet %d %d\n", ring->slot[slot_id].len, mtu);
>>  843         num_rx++;
>>  844 }
>>
>> Output:
>> 
>> pktio/netmap.c:842:netmap_recv_desc():dropped oversized packet 60 1514
>> pktio/netmap.c:842:netmap_recv_desc():dropped oversized packet 60 1514
>>
>>
> Are packets dropped, or do you just see this message?
>
> if (odp_likely(ring->slot[slot_id].len <= mtu)) {
>         slot_tbl[num_rx].buf = buf;
>         slot_tbl[num_rx].len = ring->slot[slot_id].len;
>         ODP_DBG("dropped oversized packet\n");
>         num_rx++;
> }
>
> num_rx is incremented, and the packet is then wrapped into an ODP packet:
>
> if (num_rx) {
>         return netmap_pkt_to_odp(pktio_entry, pkt_table, slot_tbl,
>                                  num_rx, ts);
>
> It looks like the message is just confusing: the packet is smaller than the MTU.
>
>
>
>
>> If anything else is required, I can get those details too.
>>
>> Thanks,
>> P Gyanesh Kumar Patra
>>
>>
>> On Thu, Jul 26, 2018 at 3:36 AM Elo, Matias (Nokia - FI/Espoo) <
>> matias@nokia.com> wrote:
>>
>>>
>>>
>>> > On 25 Jul 2018, at 17:11, Maxim Uvarov wrote:
>>> >
>>> > At a quick look, it seems the mtu is not set correctly on open(). Can
>>> you try this patch:
>>> >
>>> > diff --git a/platform/linux-generic/pktio/netmap.c b/platform/linux-generic/pktio/netmap.c
>>> > index 0da2b7a..d4db0af 100644
>>> > --- a/platform/linux-generic/pktio/netmap.c
>>> > +++ b/platform/linux-generic/pktio/netmap.c
>>> > @@ -539,6 +539,7 @@ static int netmap_open(odp_pktio_t id ODP_UNUSED, pktio_entry_t *pktio_entry,
>>> >                 goto error;
>>> >         }
>>> >         pkt_nm->mtu = (mtu < buf_size) ? mtu : buf_size;
>>> > +       pkt_priv(pktio_entry)->mtu = pkt_nm->mtu;
>>>
>>>
>>> pkt_netmap_t *pkt_nm = pkt_priv(pktio_entry), so this is unnecessary.
>>>
>>>
>>> >>
>>> >>
>>> >> Is this a known issue or am I missing something?
>>> >>
>>>
>>>
>>> As far as I can see, the problem is caused by the interface MTU being read
>>> incorrectly, or by netmap using unusually small buffers (assuming moongen
>>> sends packets smaller than the MTU). The following patch should help debug
>>> the issue.
>>>
>>> -Matias
>>>
>>> diff --git a/platform/linux-generic/pktio/netmap.c b/platform/linux-generic/pktio/netmap.c
>>> index 0da2b7afd..3e0a17542 100644
>>> --- a/platform/linux-generic/pktio/netmap.c
>>> +++ b/platform/linux-generic/pktio/netmap.c
>>> @@ -538,6 +538,10 @@ static int netmap_open(odp_pktio_t id ODP_UNUSED, pktio_entry_t *pktio_entry,
>>>                 ODP_ERR("Unable to read interface MTU\n");
>>>                 goto error;
>>>         }
>>> +
>>> +       ODP_DBG("MTU: %" PRIu32 "\n", mtu);
>>> +       ODP_DBG("NM buf_size: %" PRIu32 "\n", buf_size);
>>> +
>>>         pkt_nm->mtu = (mtu < buf_size) ? mtu : buf_size;
>>>
>>>         /* Check if RSS is supported. If not, set 'max_input_queues' to 1. */
>>>
>>>
>>>
>


[lng-odp] [Bug 3954] New: shm allocator considered wasteful

2018-07-26 Thread bugzilla-daemon
https://bugs.linaro.org/show_bug.cgi?id=3954

Bug ID: 3954
   Summary: shm allocator considered wasteful
   Product: OpenDataPlane - linux-generic reference
   Version: v1.15.0.0
  Hardware: Other
OS: Linux
Status: UNCONFIRMED
  Severity: enhancement
  Priority: ---
 Component: Shared Memory
  Assignee: christophe.mil...@linaro.org
  Reporter: josep.puigdem...@linaro.org
CC: lng-odp@lists.linaro.org
  Target Milestone: ---

Shared memory objects in ODP can be reserved from "normal" memory or from huge
pages. If the requested size fits in a kernel page frame, that will be used;
otherwise huge pages will be preferred. See _odp_ishm_reserve() in odp_ishm.c:
https://github.com/Linaro/odp/blob/6d91fe717d2e62e048fb8837a67cc1118a3113d1/platform/linux-generic/odp_ishm.c#L922

When huge pages are used, the actual amount of memory reserved will be a
multiple of the huge page size. See:
https://github.com/Linaro/odp/blob/6d91fe717d2e62e048fb8837a67cc1118a3113d1/platform/linux-generic/odp_ishm.c#L929

ODP does not seem to keep track of the extra memory allocated, which means that
on systems with 2MB huge pages, when a user requests a shared memory object of
3MB, two huge pages will be used and 4MB of RAM will be reserved in total. In
this case 25% of the reserved memory will never be used. On systems configured
with 1GB huge pages, however, almost all of the reserved memory would be wasted
in this example.
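
A quick sketch of the arithmetic (illustration only: round_up() below is a
hypothetical helper mirroring the rounding described above, not the actual
_odp_ishm_reserve() code):

#include <stdio.h>
#include <stdint.h>

/* Hypothetical helper: round a request up to a whole number of pages. */
static uint64_t round_up(uint64_t len, uint64_t page_sz)
{
        return ((len + page_sz - 1) / page_sz) * page_sz;
}

int main(void)
{
        uint64_t req = 3ULL << 20;                 /* 3MB request    */
        uint64_t r2m = round_up(req, 2ULL << 20);  /* 2MB huge pages */
        uint64_t r1g = round_up(req, 1ULL << 30);  /* 1GB huge pages */

        /* prints: 4MB reserved, 25.0% wasted */
        printf("2MB pages: %llu bytes reserved, %.1f%% wasted\n",
               (unsigned long long)r2m, 100.0 * (r2m - req) / r2m);
        /* prints: 1GB reserved, 99.7% wasted */
        printf("1GB pages: %llu bytes reserved, %.1f%% wasted\n",
               (unsigned long long)r1g, 100.0 * (r1g - req) / r1g);
        return 0;
}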

The following table is an extract of the output of odp_shm_print_all() where
memory usage can be seen on a system that has 1GB huge pages:

Memory allocation status:
ishm blocks allocated at:
 name                        flag len        user_len seq ref start        fd file
  0 odp_thread_globals        ..N 0x1000     3472     1   1   7f5626258000 3
  1 _odp_pool_table           ..H 0x40000000 17850432 1   1   7f5580000000 4
  2 _odp_queue_gbl            ..H 0x40000000 262272   1   1   7f5540000000 5
  3 _odp_queue_rings          ..H 0x40000000 33554432 1   1   7f5500000000 6
  4 odp_scheduler             ..H 0x40000000 8730624  1   1   7f54c0000000 7
  5 odp_pktio_entries         ..H 0x40000000 360512   1   1   7f5480000000 8
  6 crypto_pool               ..H 0x40000000 19800    1   1   7f5440000000 9
  7 shm_odp_cos_tbl           ..H 0x40000000 20480    1   1   7f5400000000 10
  8 shm_odp_pmr_tbl           ..H 0x40000000 114688   1   1   7f53c0000000 11
  9 shm_odp_cls_queue_grp_tbl ..H 0x40000000 16384    1   1   7f5380000000 12
 10 pool_ring_0               ..H 0x40000000 4194432  1   1   7f5340000000 13
 11 ipsec_status_pool         ..H 0x40000000 786432   1   1   7f5300000000 14
 12 ipsec_sa_table            ..N 0x1000     2112     1   1   7f5626257000 15
 13 test_shmem                ..H 0x40000000 4120     7   1   7f52c0000000 16

Apart from "len" and "user_len" being one in hex and the other in decimal form,
just to confuse the user a bit, it won't escape to the trained eye that ODP
reserved 1GB of memory for "crypto_pool" when only 19K will actually be used.
In fact, all shared memory areas in this example would fit in just 1 GB (not
considering proper alignment), but apparently 12GB have been reserved (90%
wasted).

-- 
You are receiving this mail because:
You are on the CC list for the bug.

Re: [lng-odp] latency calculation with netmap pkt i/o fails with oversized packet debug msg

2018-07-26 Thread Maxim Uvarov
On 26 July 2018 at 16:01, gyanesh patra  wrote:

> Hi,
> Here is the output for the debug messages as advised:
> For this code:
> --
>  541 ODP_DBG("MTU: %" PRIu32 "\n", mtu);
>  542 ODP_DBG("NM buf_size: %" PRIu32 "\n", buf_size);
>  543 pkt_nm->mtu = (mtu < buf_size) ? mtu : buf_size;
>
> Output:
> -
> netmap interface: eth5
>   num_rx_desc: 0
>   num_tx_desc: 0
> pktio/netmap.c:541:netmap_open():MTU: 1514
> pktio/netmap.c:542:netmap_open():NM buf_size: 2048
> pktio/netmap.c:567:netmap_open():netmap pktio eth5 does not support
> statistics counters
> odp_packet_io.c:295:odp_pktio_open():interface: eth5, driver: netmap
>
> =
> For this code:
> --
>  839 if (odp_likely(ring->slot[slot_id].len <= mtu)) {
>  840         slot_tbl[num_rx].buf = buf;
>  841         slot_tbl[num_rx].len = ring->slot[slot_id].len;
>  842         ODP_DBG("dropped oversized packet %d %d\n", ring->slot[slot_id].len, mtu);
>  843         num_rx++;
>  844 }
>
> Output:
> 
> pktio/netmap.c:842:netmap_recv_desc():dropped oversized packet 60 1514
> pktio/netmap.c:842:netmap_recv_desc():dropped oversized packet 60 1514
>
>
Are packets dropped, or do you just see this message?

if (odp_likely(ring->slot[slot_id].len <= mtu)) {
        slot_tbl[num_rx].buf = buf;
        slot_tbl[num_rx].len = ring->slot[slot_id].len;
        ODP_DBG("dropped oversized packet\n");
        num_rx++;
}

num_rx is incremented, and the packet is then wrapped into an ODP packet:

if (num_rx) {
        return netmap_pkt_to_odp(pktio_entry, pkt_table, slot_tbl,
                                 num_rx, ts);

It looks like the message is just confusing: the packet is smaller than the MTU.
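
If a diagnostic for real drops is still wanted, one way (a sketch only; the
loop shape and the 'drops' counter are assumptions, not the current netmap.c
code) is to count oversized frames and print once per burst instead of once
per packet:

uint32_t drops = 0; /* hypothetical per-burst counter of oversized frames */

for (slot_id = 0; slot_id < num; slot_id++) {
        if (odp_likely(ring->slot[slot_id].len <= mtu)) {
                slot_tbl[num_rx].buf = buf;
                slot_tbl[num_rx].len = ring->slot[slot_id].len;
                num_rx++;
        } else {
                drops++; /* frame exceeds mtu: skip it, count it */
        }
}

if (odp_unlikely(drops))
        ODP_DBG("dropped %" PRIu32 " oversized packets\n", drops);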




> If anything else is required, I can get those details too.
>
> Thanks,
> P Gyanesh Kumar Patra
>
>
> On Thu, Jul 26, 2018 at 3:36 AM Elo, Matias (Nokia - FI/Espoo) <
> matias@nokia.com> wrote:
>
>>
>>
>> > On 25 Jul 2018, at 17:11, Maxim Uvarov  wrote:
>> >
>> > At a quick look, it seems the mtu is not set correctly on open(). Can
>> you try this patch:
>> >
>> > diff --git a/platform/linux-generic/pktio/netmap.c b/platform/linux-generic/pktio/netmap.c
>> > index 0da2b7a..d4db0af 100644
>> > --- a/platform/linux-generic/pktio/netmap.c
>> > +++ b/platform/linux-generic/pktio/netmap.c
>> > @@ -539,6 +539,7 @@ static int netmap_open(odp_pktio_t id ODP_UNUSED, pktio_entry_t *pktio_entry,
>> >                 goto error;
>> >         }
>> >         pkt_nm->mtu = (mtu < buf_size) ? mtu : buf_size;
>> > +       pkt_priv(pktio_entry)->mtu = pkt_nm->mtu;
>>
>>
>> pkt_netmap_t *pkt_nm = pkt_priv(pktio_entry), so this is unnecessary.
>>
>>
>> >>
>> >>
>> >> Is this a known issue or am I missing something?
>> >>
>>
>>
>> As far as I can see, the problem is caused by the interface MTU being read
>> incorrectly, or by netmap using unusually small buffers (assuming moongen
>> sends packets smaller than the MTU). The following patch should help debug
>> the issue.
>>
>> -Matias
>>
>> diff --git a/platform/linux-generic/pktio/netmap.c b/platform/linux-generic/pktio/netmap.c
>> index 0da2b7afd..3e0a17542 100644
>> --- a/platform/linux-generic/pktio/netmap.c
>> +++ b/platform/linux-generic/pktio/netmap.c
>> @@ -538,6 +538,10 @@ static int netmap_open(odp_pktio_t id ODP_UNUSED, pktio_entry_t *pktio_entry,
>>                 ODP_ERR("Unable to read interface MTU\n");
>>                 goto error;
>>         }
>> +
>> +       ODP_DBG("MTU: %" PRIu32 "\n", mtu);
>> +       ODP_DBG("NM buf_size: %" PRIu32 "\n", buf_size);
>> +
>>         pkt_nm->mtu = (mtu < buf_size) ? mtu : buf_size;
>>
>>         /* Check if RSS is supported. If not, set 'max_input_queues' to 1. */
>>
>>
>>


Re: [lng-odp] latency calculation with netmap pkt i/o fails with oversized packet debug msg

2018-07-26 Thread gyanesh patra
Hi,
Here is the output for the debug messages as advised:
For this code:
--
 541 ODP_DBG("MTU: %" PRIu32 "\n", mtu);
 542 ODP_DBG("NM buf_size: %" PRIu32 "\n", buf_size);
 543 pkt_nm->mtu = (mtu < buf_size) ? mtu : buf_size;

Output:
-
netmap interface: eth5
  num_rx_desc: 0
  num_tx_desc: 0
pktio/netmap.c:541:netmap_open():MTU: 1514
pktio/netmap.c:542:netmap_open():NM buf_size: 2048
pktio/netmap.c:567:netmap_open():netmap pktio eth5 does not support
statistics counters
odp_packet_io.c:295:odp_pktio_open():interface: eth5, driver: netmap

=
For this code:
--
 839 if (odp_likely(ring->slot[slot_id].len <= mtu)) {
 840         slot_tbl[num_rx].buf = buf;
 841         slot_tbl[num_rx].len = ring->slot[slot_id].len;
 842         ODP_DBG("dropped oversized packet %d %d\n", ring->slot[slot_id].len, mtu);
 843         num_rx++;
 844 }

Output:

pktio/netmap.c:842:netmap_recv_desc():dropped oversized packet 60 1514
pktio/netmap.c:842:netmap_recv_desc():dropped oversized packet 60 1514

If anything else is required, I can get those details too.

Thanks,
P Gyanesh Kumar Patra


On Thu, Jul 26, 2018 at 3:36 AM Elo, Matias (Nokia - FI/Espoo) <
matias@nokia.com> wrote:

>
>
> > On 25 Jul 2018, at 17:11, Maxim Uvarov  wrote:
> >
> > At a quick look, it seems the mtu is not set correctly on open(). Can you
> try this patch:
> >
> > diff --git a/platform/linux-generic/pktio/netmap.c b/platform/linux-generic/pktio/netmap.c
> > index 0da2b7a..d4db0af 100644
> > --- a/platform/linux-generic/pktio/netmap.c
> > +++ b/platform/linux-generic/pktio/netmap.c
> > @@ -539,6 +539,7 @@ static int netmap_open(odp_pktio_t id ODP_UNUSED, pktio_entry_t *pktio_entry,
> >                 goto error;
> >         }
> >         pkt_nm->mtu = (mtu < buf_size) ? mtu : buf_size;
> > +       pkt_priv(pktio_entry)->mtu = pkt_nm->mtu;
>
>
> pkt_netmap_t *pkt_nm = pkt_priv(pktio_entry), so this is unnecessary.
>
>
> >>
> >>
> >> Is this a known issue or am I missing something?
> >>
>
>
> As far as I can see, the problem is caused by the interface MTU being read
> incorrectly, or by netmap using unusually small buffers (assuming moongen
> sends packets smaller than the MTU). The following patch should help debug
> the issue.
>
> -Matias
>
> diff --git a/platform/linux-generic/pktio/netmap.c b/platform/linux-generic/pktio/netmap.c
> index 0da2b7afd..3e0a17542 100644
> --- a/platform/linux-generic/pktio/netmap.c
> +++ b/platform/linux-generic/pktio/netmap.c
> @@ -538,6 +538,10 @@ static int netmap_open(odp_pktio_t id ODP_UNUSED, pktio_entry_t *pktio_entry,
>                 ODP_ERR("Unable to read interface MTU\n");
>                 goto error;
>         }
> +
> +       ODP_DBG("MTU: %" PRIu32 "\n", mtu);
> +       ODP_DBG("NM buf_size: %" PRIu32 "\n", buf_size);
> +
>         pkt_nm->mtu = (mtu < buf_size) ? mtu : buf_size;
>
>         /* Check if RSS is supported. If not, set 'max_input_queues' to 1. */
>
>
>