Did you try with the latest netmap master branch code? That seemed to work for 
me.

-Matias

On 7 Feb 2018, at 17.32, gyanesh patra 
<pgyanesh.pa...@gmail.com> wrote:

Is it possible to fix this for netmap too in a similar fashion?

P Gyanesh Kumar Patra

On Wed, Feb 7, 2018 at 1:19 PM, Elo, Matias (Nokia - FI/Espoo) 
<matias....@nokia.com> wrote:
The PR is now available: https://github.com/Linaro/odp/pull/458

-Matias

> On 7 Feb 2018, at 15:31, gyanesh patra 
> <pgyanesh.pa...@gmail.com> wrote:
>
> This patch works on Intel X540-AT2 NICs too.
>
> P Gyanesh Kumar Patra
>
> On Wed, Feb 7, 2018 at 11:28 AM, Bill Fischofer 
> <bill.fischo...@linaro.org> wrote:
> Thanks, Matias. Please open a bug for this and reference it in the fix.
>
> On Wed, Feb 7, 2018 at 6:36 AM, Elo, Matias (Nokia - FI/Espoo) 
> <matias....@nokia.com> wrote:
> Hi,
>
> I actually just figured out the problem. For some NICs, e.g. Niantic, 
> rte_eth_rxconf.rx_drop_en has to be enabled for the NIC to continue working 
> properly when not all RX queues are emptied. The following patch fixes the 
> problem for me:
>
> diff --git a/platform/linux-generic/pktio/dpdk.c b/platform/linux-generic/pktio/dpdk.c
> index bd6920e..fc535e3 100644
> --- a/platform/linux-generic/pktio/dpdk.c
> +++ b/platform/linux-generic/pktio/dpdk.c
> @@ -1402,6 +1402,7 @@ static int dpdk_open(odp_pktio_t id ODP_UNUSED,
>
>  static int dpdk_start(pktio_entry_t *pktio_entry)
>  {
> +       struct rte_eth_dev_info dev_info;
>         pkt_dpdk_t *pkt_dpdk = &pktio_entry->s.pkt_dpdk;
>         uint8_t port_id = pkt_dpdk->port_id;
>         int ret;
> @@ -1420,7 +1421,6 @@ static int dpdk_start(pktio_entry_t *pktio_entry)
>         }
>         /* Init TX queues */
>         for (i = 0; i < pktio_entry->s.num_out_queue; i++) {
> -               struct rte_eth_dev_info dev_info;
>                 const struct rte_eth_txconf *txconf = NULL;
>                 int ip_ena  = pktio_entry->s.config.pktout.bit.ipv4_chksum_ena;
>                 int udp_ena = pktio_entry->s.config.pktout.bit.udp_chksum_ena;
> @@ -1470,9 +1470,14 @@ static int dpdk_start(pktio_entry_t *pktio_entry)
>         }
>         /* Init RX queues */
>         for (i = 0; i < pktio_entry->s.num_in_queue; i++) {
> +               struct rte_eth_rxconf *rxconf = NULL;
> +
> +               rte_eth_dev_info_get(port_id, &dev_info);
> +               rxconf = &dev_info.default_rxconf;
> +               rxconf->rx_drop_en = 1;
>                 ret = rte_eth_rx_queue_setup(port_id, i, DPDK_NM_RX_DESC,
>                                              rte_eth_dev_socket_id(port_id),
> -                                            NULL, pkt_dpdk->pkt_pool);
> +                                            rxconf, pkt_dpdk->pkt_pool);
>                 if (ret < 0) {
>                         ODP_ERR("Queue setup failed: err=%d, port=%" PRIu8 "\n",
>                                 ret, port_id);
>
> I'll test it a bit more for performance effects and then send a fix PR.
>
> -Matias
>
>
>
> > On 7 Feb 2018, at 14:18, gyanesh patra 
> > <pgyanesh.pa...@gmail.com> wrote:
> >
> > Thank you.
> > I am curious what might be the reason.
> >
> > P Gyanesh Kumar Patra
> >
> > On Wed, Feb 7, 2018 at 9:51 AM, Elo, Matias (Nokia - FI/Espoo) 
> > <matias....@nokia.com> wrote:
> > I'm currently trying to figure out what's happening. I'll report back when 
> > I find out something.
> >
> > -Matias
> >
> >
> > > On 7 Feb 2018, at 13:44, gyanesh patra 
> > > <pgyanesh.pa...@gmail.com> wrote:
> > >
> > > Do you have any theory for the issue on the 82599 (Niantic) NIC and why it 
> > > might be working on the Intel XL710 (Fortville)? Can I identify new 
> > > hardware without this issue by looking at the datasheets/specs?
> > > Thanks for the insight.
> > >
> > > P Gyanesh Kumar Patra
> > >
> > > On Wed, Feb 7, 2018 at 9:12 AM, Elo, Matias (Nokia - FI/Espoo) 
> > > <matias....@nokia.com> wrote:
> > > I was unable to reproduce this with Intel XL710 (Fortville) but with 
> > > 82599 (Niantic) l2fwd operates as you have described. This may be a NIC 
> > > HW limitation since the same issue is also observed with netmap pktio.
> > >
> > > -Matias
> > >
> > >
> > > > On 7 Feb 2018, at 11:14, gyanesh patra 
> > > > <pgyanesh.pa...@gmail.com> wrote:
> > > >
> > > > Thanks for the info. I verified this with both ODP 1.16 and ODP 1.17 
> > > > with the same behavior.
> > > > The traffic consists of different MAC and IP addresses.
> > > > Without the busy loop, I could see that all the threads were receiving 
> > > > packets, so I think packet distribution is not an issue. In our case, 
> > > > we are sending packets at the line rate of a 10G interface. That might 
> > > > be causing this behaviour.
> > > > If I can provide any other info, let me know.
> > > >
> > > > Thanks
> > > >
> > > > Gyanesh
> > > >
> > > > On Wed, Feb 7, 2018, 05:15 Elo, Matias (Nokia - FI/Espoo) 
> > > > <matias....@nokia.com> wrote:
> > > > Hi Gyanesh,
> > > >
> > > > I tested the patch on my system and everything seems to work as 
> > > > expected. Based on the log you're not running the latest code (v1.17.0) 
> > > > but I doubt that is the issue here.
> > > >
> > > > What kind of test traffic are you using? The l2fwd example uses IPv4 
> > > > addresses and UDP ports to do the input hashing. If test packets are 
> > > > identical they will all end up in the same input queue, which would 
> > > > explain what you are seeing.
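> > > >
> > > > For reference, the hashing is enabled through the pktin queue parameters. 
> > > > A minimal sketch (assuming the standard odp_pktin_queue_param_t API, 
> > > > roughly what odp_l2fwd does; the usable number of queues depends on the 
> > > > interface capabilities):
> > > >
> > > > #include <odp_api.h>
> > > >
> > > > /* Enable IPv4/UDP input hashing over 'num_rx' RX queues (sketch only) */
> > > > static int config_pktin_hash(odp_pktio_t pktio, unsigned num_rx)
> > > > {
> > > >         odp_pktin_queue_param_t pktin_param;
> > > >
> > > >         odp_pktin_queue_param_init(&pktin_param);
> > > >         /* Each queue is polled by a dedicated worker thread */
> > > >         pktin_param.op_mode     = ODP_PKTIO_OP_MT_UNSAFE;
> > > >         pktin_param.num_queues  = num_rx;
> > > >         pktin_param.hash_enable = 1;
> > > >         /* Hash on IPv4 addresses and UDP ports */
> > > >         pktin_param.hash_proto.proto.ipv4_udp = 1;
> > > >
> > > >         return odp_pktin_queue_config(pktio, &pktin_param);
> > > > }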
> > > >
> > > > -Matias
> > > >
> > > >
> > > > > On 6 Feb 2018, at 19:00, gyanesh patra 
> > > > > <pgyanesh.pa...@gmail.com> wrote:
> > > > >
> > > > > Hi,
> > > > > I tried with netmap, dpdk, and dpdk with zero-copy enabled. All of 
> > > > > them show the same behaviour. I also tried with (200*2048) as the 
> > > > > packet pool size without any success.
> > > > > I am attaching the patch for the test/performance/odp_l2fwd example 
> > > > > here to demonstrate the behaviour. Also find the output of the example 
> > > > > below:
> > > > >
> > > > > root@india:~/pktio/odp_ipv6/test/performance# ./odp_l2fwd -i 0,1
> > > > > HW time counter freq: 2094954892 hz
> > > > >
> > > > > PKTIO: initialized loop interface.
> > > > > PKTIO: initialized dpdk pktio, use export ODP_PKTIO_DISABLE_DPDK=1 to 
> > > > > disable.
> > > > > PKTIO: initialized pcap interface.
> > > > > PKTIO: initialized ipc interface.
> > > > > PKTIO: initialized socket mmap, use export 
> > > > > ODP_PKTIO_DISABLE_SOCKET_MMAP=1 to disable.
> > > > > PKTIO: initialized socket mmsg,use export 
> > > > > ODP_PKTIO_DISABLE_SOCKET_MMSG=1 to disable.
> > > > >
> > > > > ODP system info
> > > > > ---------------
> > > > > ODP API version: 1.16.0
> > > > > ODP impl name:   "odp-linux"
> > > > > CPU model:       Intel(R) Xeon(R) CPU E5-2620 v2
> > > > > CPU freq (hz):   2600000000
> > > > > Cache line size: 64
> > > > > CPU count:       12
> > > > >
> > > > >
> > > > > CPU features supported:
> > > > > SSE3 PCLMULQDQ DTES64 MONITOR DS_CPL VMX SMX EIST TM2 SSSE3 
> > > > > CMPXCHG16B XTPR PDCM PCID DCA SSE4_1 SSE4_2 X2APIC POPCNT 
> > > > > TSC_DEADLINE AES XSAVE OSXSAVE AVX F16C RDRAND FPU VME DE PSE TSC MSR 
> > > > > PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX 
> > > > > FXSR SSE SSE2 SS HTT TM PBE DIGTEMP ARAT PLN ECMD PTM MPERF_APERF_MSR 
> > > > > ENERGY_EFF FSGSBASE BMI2 LAHF_SAHF SYSCALL XD 1GB_PG RDTSCP EM64T 
> > > > > INVTSC
> > > > >
> > > > > CPU features NOT supported:
> > > > > CNXT_ID FMA MOVBE PSN TRBOBST ACNT2 BMI1 HLE AVX2 SMEP ERMS INVPCID 
> > > > > RTM AVX512F LZCNT
> > > > >
> > > > > Running ODP appl: "odp_l2fwd"
> > > > > -----------------
> > > > > IF-count:        2
> > > > > Using IFs:       0 1
> > > > > Mode:            PKTIN_DIRECT, PKTOUT_DIRECT
> > > > >
> > > > > num worker threads: 10
> > > > > first CPU:          2
> > > > > cpu mask:           0xFFC
> > > > >
> > > > >
> > > > > Pool info
> > > > > ---------
> > > > >   pool            0
> > > > >   name            packet pool
> > > > >   pool type       packet
> > > > >   pool shm        11
> > > > >   user area shm   0
> > > > >   num             8192
> > > > >   align           64
> > > > >   headroom        128
> > > > >   seg len         8064
> > > > >   max data len    65536
> > > > >   tailroom        0
> > > > >   block size      8896
> > > > >   uarea size      0
> > > > >   shm size        73196288
> > > > >   base addr       0x7f5669400000
> > > > >   uarea shm size  0
> > > > >   uarea base addr (nil)
> > > > >
> > > > > EAL: Detected 12 lcore(s)
> > > > > EAL: No free hugepages reported in hugepages-1048576kB
> > > > > EAL: Probing VFIO support...
> > > > > EAL: PCI device 0000:03:00.0 on NUMA socket 0
> > > > > EAL:   probe driver: 8086:10fb net_ixgbe
> > > > > EAL: PCI device 0000:03:00.1 on NUMA socket 0
> > > > > EAL:   probe driver: 8086:10fb net_ixgbe
> > > > > EAL: PCI device 0000:05:00.0 on NUMA socket 0
> > > > > EAL:   probe driver: 8086:1528 net_ixgbe
> > > > > EAL: PCI device 0000:05:00.1 on NUMA socket 0
> > > > > EAL:   probe driver: 8086:1528 net_ixgbe
> > > > > EAL: PCI device 0000:0a:00.0 on NUMA socket 0
> > > > > EAL:   probe driver: 8086:1521 net_e1000_igb
> > > > > EAL: PCI device 0000:0a:00.1 on NUMA socket 0
> > > > > EAL:   probe driver: 8086:1521 net_e1000_igb
> > > > > EAL: PCI device 0000:0c:00.0 on NUMA socket 0
> > > > > EAL:   probe driver: 8086:10d3 net_e1000_em
> > > > > created pktio 1, dev: 0, drv: dpdk
> > > > > created 5 input and 5 output queues on (0)
> > > > > created pktio 2, dev: 1, drv: dpdk
> > > > > created 5 input and 5 output queues on (1)
> > > > >
> > > > > Queue binding (indexes)
> > > > > -----------------------
> > > > > worker 0
> > > > >   rx: pktio 0, queue 0
> > > > >   tx: pktio 1, queue 0
> > > > > worker 1
> > > > >   rx: pktio 1, queue 0
> > > > >   tx: pktio 0, queue 0
> > > > > worker 2
> > > > >   rx: pktio 0, queue 1
> > > > >   tx: pktio 1, queue 1
> > > > > worker 3
> > > > >   rx: pktio 1, queue 1
> > > > >   tx: pktio 0, queue 1
> > > > > worker 4
> > > > >   rx: pktio 0, queue 2
> > > > >   tx: pktio 1, queue 2
> > > > > worker 5
> > > > >   rx: pktio 1, queue 2
> > > > >   tx: pktio 0, queue 2
> > > > > worker 6
> > > > >   rx: pktio 0, queue 3
> > > > >   tx: pktio 1, queue 3
> > > > > worker 7
> > > > >   rx: pktio 1, queue 3
> > > > >   tx: pktio 0, queue 3
> > > > > worker 8
> > > > >   rx: pktio 0, queue 4
> > > > >   tx: pktio 1, queue 4
> > > > > worker 9
> > > > >   rx: pktio 1, queue 4
> > > > >   tx: pktio 0, queue 4
> > > > >
> > > > >
> > > > > Port config
> > > > > --------------------
> > > > > Port 0 (0)
> > > > >   rx workers 5
> > > > >   tx workers 5
> > > > >   rx queues 5
> > > > >   tx queues 5
> > > > > Port 1 (1)
> > > > >   rx workers 5
> > > > >   tx workers 5
> > > > >   rx queues 5
> > > > >   tx queues 5
> > > > >
> > > > > [01] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > > > > [02] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > > > > [03] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > > > > [04] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > > > > [05] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > > > > [06] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > > > > [07] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > > > > [08] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > > > > [09] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > > > > [10] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > > > > 0 pps, 0 max pps,  0 rx drops, 0 tx drops
> > > > > 0 pps, 0 max pps,  0 rx drops, 0 tx drops
> > > > > 0 pps, 0 max pps,  0 rx drops, 0 tx drops
> > > > > 0 pps, 0 max pps,  0 rx drops, 0 tx drops
> > > > > 0 pps, 0 max pps,  0 rx drops, 0 tx drops
> > > > > 1396 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > > > > 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > > > > 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > > > > 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > > > > 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > > > > 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > > > > 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > > > > 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > > > > 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > > > > 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > > > > ^C0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > > > > TEST RESULT: 1396 maximum packets per second.
> > > > >
> > > > >
> > > > >
> > > > > P Gyanesh Kumar Patra
> > > > >
> > > > > On Tue, Feb 6, 2018 at 9:55 AM, Elo, Matias (Nokia - FI/Espoo) 
> > > > > <matias....@nokia.com> wrote:
> > > > >
> > > > >
> > > > > > On 5 Feb 2018, at 19:42, Bill Fischofer 
> > > > > > <bill.fischo...@linaro.org> wrote:
> > > > > >
> > > > > > Thanks, Gyanesh, that does sound like a bug. +cc Matias: Can you 
> > > > > > comment on this?
> > > > > >
> > > > > > On Mon, Feb 5, 2018 at 5:09 AM, gyanesh patra 
> > > > > > <pgyanesh.pa...@gmail.com> wrote:
> > > > > > I am testing an l2fwd use-case. I am executing the use-case with two 
> > > > > > CPUs and two interfaces. One interface with 2 Rx queues receives 
> > > > > > packets using 2 threads with 2 associated CPUs. Both threads can 
> > > > > > forward the packets over the 2nd interface, which also has 2 Tx 
> > > > > > queues mapped to 2 CPUs. I am sending packets from an external 
> > > > > > packet generator and confirmed that both queues are receiving 
> > > > > > packets.
> > > > > > When I run odp_pktin_recv() on both queues, packet forwarding works 
> > > > > > fine. But if I put a sleep() or a busy loop in place of 
> > > > > > odp_pktin_recv() on one thread, then the other thread stops 
> > > > > > receiving packets. If I replace the sleep with odp_pktin_recv(), 
> > > > > > both queues start receiving packets again. I encountered this 
> > > > > > problem with the DPDK pktio support on ODP 1.16 and ODP 1.17.
> > > > > > On socket-mmap it works fine. Is this expected behavior or a 
> > > > > > potential bug?
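> > > > > >
> > > > > > Roughly, the modification is the following (a minimal sketch of the 
> > > > > > direct-mode loop; MAX_PKT_BURST and the queue handles come from the 
> > > > > > test setup):
> > > > > >
> > > > > > /* Thread A: normal forwarding loop */
> > > > > > for (;;) {
> > > > > >         odp_packet_t pkt_tbl[MAX_PKT_BURST];
> > > > > >         int pkts = odp_pktin_recv(pktin, pkt_tbl, MAX_PKT_BURST);
> > > > > >
> > > > > >         if (pkts > 0) {
> > > > > >                 int sent = odp_pktout_send(pktout, pkt_tbl, pkts);
> > > > > >
> > > > > >                 if (sent < 0)
> > > > > >                         sent = 0;
> > > > > >                 /* Free packets that could not be transmitted */
> > > > > >                 if (sent < pkts)
> > > > > >                         odp_packet_free_multi(&pkt_tbl[sent],
> > > > > >                                               pkts - sent);
> > > > > >         }
> > > > > > }
> > > > > >
> > > > > > /* Thread B: odp_pktin_recv() replaced with a busy wait, so its RX
> > > > > >  * queue is never drained */
> > > > > > for (;;)
> > > > > >         odp_time_wait_ns(ODP_TIME_SEC_IN_NS);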
> > > > > >
> > > > >
> > > > >
> > > > > Hi Gyanesh,
> > > > >
> > > > > Could you please share example code which reproduces this issue? 
> > > > > Does this also happen if you enable zero-copy dpdk pktio 
> > > > > (--enable-dpdk-zero-copy)?
> > > > >
> > > > > Socket-mmap pktio doesn't support MQ, so comparison to that doesn't 
> > > > > make much sense. Netmap pktio supports MQ.
> > > > >
> > > > > Regards,
> > > > > Matias
> > > > >
> > > > >
> > > > > <odp_l2fwd_patch>
> > > >
> > >
> > >
> >
> >
>
>
>

