On 29/01/16 08:19, Elo, Matias (Nokia - FI/Espoo) wrote:
From: EXT Ola Liljedahl [mailto:ola.liljed...@linaro.org]
Sent: Thursday, January 28, 2016 4:39 PM
To: Elo, Matias (Nokia - FI/Espoo) <matias....@nokia.com>
Cc: EXT Zoltan Kiss <zoltan.k...@linaro.org>; lng-odp@lists.linaro.org
Subject: Re: [lng-odp] [API-NEXT PATCH 00/11] DPDK pktio implementation

On 28 January 2016 at 15:14, Elo, Matias (Nokia - FI/Espoo)
<matias....@nokia.com> wrote:

     > -----Original Message-----
     > From: EXT Zoltan Kiss [mailto:zoltan.k...@linaro.org]
     > Sent: Thursday, January 28, 2016 3:21 PM
     > To: Elo, Matias (Nokia - FI/Espoo) <matias....@nokia.com>;
     > lng-odp@lists.linaro.org
     > Subject: Re: [lng-odp] [API-NEXT PATCH 00/11] DPDK pktio
     > implementation
     >
     > Hi,
     >
     > On 28/01/16 07:03, Matias Elo wrote:
     > > The current unoptimized DPDK pktio implementation achieves
     > > forwarding rates (odp_l2fwd) that are comparable to netmap pktio
     > > and scale better with larger thread counts. Some initial
     > > benchmark results below (odp_l2fwd, 4 x 10 Gbps, 64B packets,
     > > Intel Xeon E5-2697v3).
     > >
     > >                      Threads
     > >            1     2     4     6     8     10    12
     > > DPDK       6.7   12    25.3  37.2  47.6  47.3  46.8  MPPS
     > > Netmap     6.1   12.6  25.8  32.4  38.9  38.6  38.4  MPPS
     >
     > My performance results for ODP-DPDK are unidirectional between two
     > ports, where one thread does the actual work (the other is idling);
     > in that case it can achieve 14 Mpps. Is your number of 6.7 Mpps
     > comparable with this?

    These numbers are combined throughputs from all 4 ports. No
    "maintenance" thread is needed. With two ports and unidirectional
    traffic, a single thread is able to handle about 7 MPPS.

     > Your main source of optimization seems to be doing zero-copy on
     > the RX side, but it needs changes in linux-generic buffer
     > management:
     > - allow allocating zero-length buffers, so you can attach the
     > buffers from the mbufs there
     > - release the mbufs during odp_packet_free(); that needs some
     > DPDK-specific code, a destructor which calls rte_pktmbuf_free()
     > on the stored pointers.
     >
     > But even with that there will be a cost of wrapping the mbufs
     > into linux-generic buffers, and you can't avoid a copy on the TX
     > side.

    Yep, this is in my to-do list.

Perhaps ODP linux-generic should use mbufs? I think that would allow
for the greatest amount of frictionless coexistence.

At first I'm going to try to use mbufs alongside ODP packets as Zoltan
described; a rough sketch follows below. Moving to using only mbufs is
a whole different conversation and may introduce a new set of
problems, e.g. the mbuf not supporting some features we require.
Still, it's an option we could think about.
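To make that concrete, here is a minimal sketch of the RX-side
wrapping and the free-path destructor. All names (struct zc_pkt,
zc_pkt_alloc, zc_pkt_destroy, BURST) are illustrative placeholders,
not actual linux-generic internals; only the DPDK calls
(rte_eth_rx_burst, rte_pktmbuf_free) are real API, and error handling
is omitted for brevity.

    /* Sketch only: wrap received mbufs into packet descriptors and
     * release them back to the DPDK mempool on free. */
    #include <stdlib.h>
    #include <rte_mbuf.h>
    #include <rte_ethdev.h>

    #define BURST 32

    struct zc_pkt {
        struct rte_mbuf *mbuf; /* mbuf whose data this packet borrows */
        /* ... the rest of the packet header would live here ... */
    };

    /* Stand-in for allocating a zero-length ODP buffer and pointing
     * it at the mbuf data. */
    static struct zc_pkt *zc_pkt_alloc(struct rte_mbuf *m)
    {
        struct zc_pkt *pkt = malloc(sizeof(*pkt));

        if (pkt != NULL)
            pkt->mbuf = m;
        return pkt;
    }

    /* Free-path hook: instead of returning segments to an ODP pool,
     * hand the mbuf chain back to its DPDK mempool. */
    static void zc_pkt_destroy(struct zc_pkt *pkt)
    {
        rte_pktmbuf_free(pkt->mbuf); /* frees the whole segment chain */
        free(pkt);
    }

    /* RX path: receive a burst and record each mbuf pointer. */
    static uint16_t zc_recv(uint16_t port, uint16_t queue,
                            struct zc_pkt *out[], uint16_t num)
    {
        struct rte_mbuf *mbufs[BURST];
        uint16_t nb, i;

        if (num > BURST)
            num = BURST;
        nb = rte_eth_rx_burst(port, queue, mbufs, num);
        for (i = 0; i < nb; i++)
            out[i] = zc_pkt_alloc(mbufs[i]);
        return nb;
    }

As Zoltan noted, the TX side would still need a copy to get the data
into an mbuf.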

Using mbufs is the key difference between this approach and ODP-DPDK.
The latter reuses most of the linux-generic code (including pktio now
as well), but it has its own buffer, packet(_flags) and pool
implementations, which build on DPDK mbufs and mempools. The
linux-generic implementation files are only kept there to make
seamless git repo syncing possible. If we plan to ditch the
linux-generic implementations and use DPDK, I'm OK with that, but it
would be good to know where I should put my efforts.
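For comparison, in the mbuf-native approach the packet handle is
essentially the mbuf itself, so the accessors reduce to thin wrappers
around mbuf fields. A simplified sketch; the pkt_handle_t typedef and
wrapper names are assumptions for illustration, not the actual
ODP-DPDK code:

    /* Sketch of the mbuf-native approach: the handle is the mbuf. */
    #include <stdint.h>
    #include <rte_mbuf.h>

    typedef struct rte_mbuf *pkt_handle_t;

    static inline void *pkt_data(pkt_handle_t pkt)
    {
        /* Data pointer inside the mbuf's data room */
        return rte_pktmbuf_mtod(pkt, void *);
    }

    static inline uint32_t pkt_len(pkt_handle_t pkt)
    {
        /* Total packet length across all segments */
        return rte_pktmbuf_pkt_len(pkt);
    }

    static inline void pkt_free(pkt_handle_t pkt)
    {
        /* Straight back to the DPDK mempool; no separate ODP pool
         * bookkeeping involved. */
        rte_pktmbuf_free(pkt);
    }

This avoids the wrapping cost entirely, at the price of tying the
buffer layout to DPDK.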



P.S. Please reply to messages using plain text, as HTML makes replying
inline a pain.

-Matias


    -Matias


     >
     > Regards,
     >
     > Zoltan
