We are using Intel NICs: X540-AT2 (10G).


P Gyanesh Kumar Patra

On Tue, Feb 6, 2018 at 3:08 PM, Ilias Apalodimas <ilias.apalodi...@linaro.org>
wrote:

> Hello,
>
> Haven't seen any reference to the hardware you are using, sorry if I
> missed it. What kind of NIC are you using for the tests?
>
> Regards
> Ilias
>
> On 6 February 2018 at 19:00, gyanesh patra <pgyanesh.pa...@gmail.com>
> wrote:
> > Hi,
> > I tried with netmap, dpdk, and dpdk with zero-copy enabled; all of them
> > show the same behaviour. I also tried (200*2048) as the packet pool size
> > without any success.
> > I am attaching the patch for the test/performance/odp_l2fwd example to
> > demonstrate the behaviour. The output of the example is below:
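For reference, a packet pool of roughly that size could be requested along
these lines. This is only a minimal sketch; the split into num = 200 buffers
of len = 2048 bytes is an assumption based on "(200*2048)", not the exact
parameters used in the test.

    #include <odp_api.h>

    /* Sketch: create a packet pool with an assumed num/len split of
     * 200 buffers of 2048 bytes each. */
    static odp_pool_t create_test_pool(void)
    {
            odp_pool_param_t params;

            odp_pool_param_init(&params);
            params.type    = ODP_POOL_PACKET;
            params.pkt.num = 200;   /* number of packet buffers in the pool */
            params.pkt.len = 2048;  /* minimum packet length to support */

            return odp_pool_create("packet pool", &params);
    }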
> >
> > root@india:~/pktio/odp_ipv6/test/performance# ./odp_l2fwd -i 0,1
> > HW time counter freq: 2094954892 hz
> >
> > PKTIO: initialized loop interface.
> > PKTIO: initialized dpdk pktio, use export ODP_PKTIO_DISABLE_DPDK=1 to
> > disable.
> > PKTIO: initialized pcap interface.
> > PKTIO: initialized ipc interface.
> > PKTIO: initialized socket mmap, use export ODP_PKTIO_DISABLE_SOCKET_MMAP=1
> > to disable.
> > PKTIO: initialized socket mmsg,use export ODP_PKTIO_DISABLE_SOCKET_MMSG=1
> > to disable.
> >
> > ODP system info
> > ---------------
> > ODP API version: 1.16.0
> > ODP impl name:   "odp-linux"
> > CPU model:       Intel(R) Xeon(R) CPU E5-2620 v2
> > CPU freq (hz):   2600000000
> > Cache line size: 64
> > CPU count:       12
> >
> >
> > CPU features supported:
> > SSE3 PCLMULQDQ DTES64 MONITOR DS_CPL VMX SMX EIST TM2 SSSE3 CMPXCHG16B XTPR
> > PDCM PCID DCA SSE4_1 SSE4_2 X2APIC POPCNT TSC_DEADLINE AES XSAVE OSXSAVE
> > AVX F16C RDRAND FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA
> > CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE DIGTEMP ARAT
> > PLN ECMD PTM MPERF_APERF_MSR ENERGY_EFF FSGSBASE BMI2 LAHF_SAHF SYSCALL XD
> > 1GB_PG RDTSCP EM64T INVTSC
> >
> > CPU features NOT supported:
> > CNXT_ID FMA MOVBE PSN TRBOBST ACNT2 BMI1 HLE AVX2 SMEP ERMS INVPCID RTM
> > AVX512F LZCNT
> >
> > Running ODP appl: "odp_l2fwd"
> > -----------------
> > IF-count:        2
> > Using IFs:       0 1
> > Mode:            PKTIN_DIRECT, PKTOUT_DIRECT
> >
> > num worker threads: 10
> > first CPU:          2
> > cpu mask:           0xFFC
> >
> >
> > Pool info
> > ---------
> >   pool            0
> >   name            packet pool
> >   pool type       packet
> >   pool shm        11
> >   user area shm   0
> >   num             8192
> >   align           64
> >   headroom        128
> >   seg len         8064
> >   max data len    65536
> >   tailroom        0
> >   block size      8896
> >   uarea size      0
> >   shm size        73196288
> >   base addr       0x7f5669400000
> >   uarea shm size  0
> >   uarea base addr (nil)
> >
> > EAL: Detected 12 lcore(s)
> > EAL: No free hugepages reported in hugepages-1048576kB
> > EAL: Probing VFIO support...
> > EAL: PCI device 0000:03:00.0 on NUMA socket 0
> > EAL:   probe driver: 8086:10fb net_ixgbe
> > EAL: PCI device 0000:03:00.1 on NUMA socket 0
> > EAL:   probe driver: 8086:10fb net_ixgbe
> > EAL: PCI device 0000:05:00.0 on NUMA socket 0
> > EAL:   probe driver: 8086:1528 net_ixgbe
> > EAL: PCI device 0000:05:00.1 on NUMA socket 0
> > EAL:   probe driver: 8086:1528 net_ixgbe
> > EAL: PCI device 0000:0a:00.0 on NUMA socket 0
> > EAL:   probe driver: 8086:1521 net_e1000_igb
> > EAL: PCI device 0000:0a:00.1 on NUMA socket 0
> > EAL:   probe driver: 8086:1521 net_e1000_igb
> > EAL: PCI device 0000:0c:00.0 on NUMA socket 0
> > EAL:   probe driver: 8086:10d3 net_e1000_em
> > created pktio 1, dev: 0, drv: dpdk
> > created 5 input and 5 output queues on (0)
> > created pktio 2, dev: 1, drv: dpdk
> > created 5 input and 5 output queues on (1)
> >
> > Queue binding (indexes)
> > -----------------------
> > worker 0
> >   rx: pktio 0, queue 0
> >   tx: pktio 1, queue 0
> > worker 1
> >   rx: pktio 1, queue 0
> >   tx: pktio 0, queue 0
> > worker 2
> >   rx: pktio 0, queue 1
> >   tx: pktio 1, queue 1
> > worker 3
> >   rx: pktio 1, queue 1
> >   tx: pktio 0, queue 1
> > worker 4
> >   rx: pktio 0, queue 2
> >   tx: pktio 1, queue 2
> > worker 5
> >   rx: pktio 1, queue 2
> >   tx: pktio 0, queue 2
> > worker 6
> >   rx: pktio 0, queue 3
> >   tx: pktio 1, queue 3
> > worker 7
> >   rx: pktio 1, queue 3
> >   tx: pktio 0, queue 3
> > worker 8
> >   rx: pktio 0, queue 4
> >   tx: pktio 1, queue 4
> > worker 9
> >   rx: pktio 1, queue 4
> >   tx: pktio 0, queue 4
> >
> >
> > Port config
> > --------------------
> > Port 0 (0)
> >   rx workers 5
> >   tx workers 5
> >   rx queues 5
> >   tx queues 5
> > Port 1 (1)
> >   rx workers 5
> >   tx workers 5
> >   rx queues 5
> >   tx queues 5
> >
> > [01] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > [02] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > [03] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > [04] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > [05] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > [06] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > [07] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > [08] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > [09] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > [10] num pktios 1, PKTIN_DIRECT, PKTOUT_DIRECT
> > 0 pps, 0 max pps,  0 rx drops, 0 tx drops
> > 0 pps, 0 max pps,  0 rx drops, 0 tx drops
> > 0 pps, 0 max pps,  0 rx drops, 0 tx drops
> > 0 pps, 0 max pps,  0 rx drops, 0 tx drops
> > 0 pps, 0 max pps,  0 rx drops, 0 tx drops
> > 1396 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > 0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > ^C0 pps, 1396 max pps,  0 rx drops, 0 tx drops
> > TEST RESULT: 1396 maximum packets per second.
> >
> >
> >
> > P Gyanesh Kumar Patra
> >
> > On Tue, Feb 6, 2018 at 9:55 AM, Elo, Matias (Nokia - FI/Espoo) <
> > matias....@nokia.com> wrote:
> >
> >>
> >>
> >> > On 5 Feb 2018, at 19:42, Bill Fischofer <bill.fischo...@linaro.org>
> >> > wrote:
> >> >
> >> > Thanks, Gyanesh, that does sound like a bug. +cc Matias: can you
> >> > comment on this?
> >> >
> >> > On Mon, Feb 5, 2018 at 5:09 AM, gyanesh patra
> >> > <pgyanesh.pa...@gmail.com> wrote:
> >> > I am testing an l2fwd use case, executing it with two CPUs and two
> >> > interfaces. One interface with 2 RX queues receives packets using 2
> >> > threads with 2 associated CPUs. Both threads can forward the packets
> >> > over the second interface, which also has 2 TX queues mapped to the 2
> >> > CPUs. I am sending packets from an external packet generator and have
> >> > confirmed that both queues are receiving packets.
> >> > When I run odp_pktin_recv() on both queues, packet forwarding works
> >> > fine. But if I put a sleep() or a busy loop in place of
> >> > odp_pktin_recv() on one thread, then the other thread stops receiving
> >> > packets. If I replace the sleep with odp_pktin_recv(), both queues
> >> > start receiving packets again. I encountered this problem with the
> >> > DPDK pktio support in ODP 1.16 and ODP 1.17. With socket-mmap it works
> >> > fine. Is this expected behavior or a potential bug?
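For illustration, here is a minimal sketch of the two-worker pattern described
above. It is not the actual test code; the pktio open, queue configuration and
thread launching are assumed to happen elsewhere, and the function names are
placeholders.

    #include <unistd.h>
    #include <odp_api.h>

    #define BURST 32

    /* Worker that polls its RX queue and forwards to its TX queue. */
    static void rx_worker(odp_pktin_queue_t in, odp_pktout_queue_t out)
    {
            odp_packet_t pkts[BURST];

            for (;;) {
                    int num = odp_pktin_recv(in, pkts, BURST);

                    if (num <= 0)
                            continue;

                    int sent = odp_pktout_send(out, pkts, num);

                    if (sent < 0)
                            sent = 0;
                    if (sent < num)
                            odp_packet_free_multi(&pkts[sent], num - sent);
            }
    }

    /* Worker that owns the second RX queue but never polls it. With this in
     * place the other worker stops receiving; swapping the sleep() back to
     * odp_pktin_recv() restores traffic on both queues. */
    static void idle_worker(void)
    {
            for (;;)
                    sleep(1);
    }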
> >> >
> >>
> >>
> >> Hi Gyanesh,
> >>
> >> Could you please share example code which reproduces this issue? Does
> >> this also happen if you enable zero-copy dpdk pktio
> >> (--enable-dpdk-zero-copy)?
> >>
> >> Socket-mmap pktio doesn't support MQ, so comparison to that doesn't make
> >> much sense. Netmap pktio supports MQ.
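One quick way to see this difference is to query a pktio's multi-queue
capability before configuring its queues. A minimal sketch, assuming "pktio"
is an already opened interface:

    #include <stdio.h>
    #include <odp_api.h>

    /* Print how many RX/TX queues this pktio can support. A single-queue
     * pktio such as socket mmap is expected to report 1/1 here. */
    static void print_mq_capability(odp_pktio_t pktio)
    {
            odp_pktio_capability_t capa;

            if (odp_pktio_capability(pktio, &capa) == 0)
                    printf("max input queues: %u, max output queues: %u\n",
                           capa.max_input_queues, capa.max_output_queues);
    }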
> >>
> >> Regards,
> >> Matias
> >>
> >>
>
