[dpdk-dev] DPDK Latency Issue

2014-05-28 Thread Jun Han
Hi all,

I realized I made a mistake on my previous post. Please note the changes
below.

"While I vary the MAX_BURST_SIZE (1, 8, 16, 32, 64, and 128) and fix
BURST_TX_DRAIN_US=100 usec, I see a low average latency when sending a
burst of packets greater than or equal to the MAX_BURST_SIZE.
For example, when MAX_BURST_SIZE is 32, if I send a burst of 32 packets or
larger, then I get around 10 usec of latency. When the burst size is less
than 32, I see a higher average latency, which makes total sense."


On Mon, May 26, 2014 at 9:39 PM, Jun Han  wrote:

> Thanks a lot, Jeff, for your detailed explanation. I still have one open
> question left. I would be grateful if someone would share their insight on
> it.
>
> I have performed experiments to vary both the MAX_BURST_SIZE (originally
> set as 32) and BURST_TX_DRAIN_US (originally set as 100 usec) in l3fwd
> main.c.
>
> While I vary the MAX_BURST_SIZE (1, 8, 16, 32, 64, and 128) and fix
> BURST_TX_DRAIN_US=100 usec, I see a low average latency when sending a
> burst of packets less than or equal to the MAX_BURST_SIZE.
> For example, when MAX_BURST_SIZE is 32, if I send a burst of 32 packets or
> less, then I get around 10 usec of latency. When it goes over that, the
> average latency starts to increase, which makes total sense.
>
> My main question is the following. When I start sending continuous packets
> at a rate of 14.88 Mpps for 64B packets, I consistently measure an average
> latency of 150 usec, no matter what MAX_BURST_SIZE is. My guess is that the
> latency should be bounded by BURST_TX_DRAIN_US, which is fixed at 100 usec.
> Would you please share your thoughts on this issue?
>
> Thanks,
> Jun
>
>
> On Thu, May 22, 2014 at 7:06 PM, Shaw, Jeffrey B wrote:
>
>> Hello,
>>
>> > I measured a roundtrip latency (using a Spirent traffic generator) of
>> sending 64B packets over a 10GbE link to DPDK, and DPDK does nothing but
>> simply forward back to the incoming port (l3fwd without any lookup code,
>> i.e., dstport = port_id).
>> > However, to my surprise, the average latency was around 150 usec. (The
>> packet drop rate was only 0.001%, i.e., 283 packets/sec dropped.) Another
>> test I did was to measure the latency due to sending only a single 64B
>> packet, and the latency I measured ranges anywhere from 40 usec to 100
>> usec.
>>
>> 40-100usec seems very high.
>> The l3fwd application does some internal buffering before transmitting
>> the packets.  It buffers either 32 packets, or waits up to 100us
>> (hash-defined as BURST_TX_DRAIN_US), whichever comes first.
>> Try either removing this timeout, or sending a burst of 32 packets at a
>> time.  Or you could try with testpmd, which should have reasonably low
>> latency out of the box.
>>
>> There is also a section in the Release Notes (8.6 How can I tune my
>> network application to achieve lower latency?) which provides some pointers
>> for getting lower latency if you are willing to give up top-rate throughput.
>>
>> Thanks,
>> Jeff
>>
>
>
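
For reference, a condensed sketch of the buffering Jeff describes, paraphrased
from the l3fwd main loop of this era (function and variable names below are
placeholders; constant names and the port-id type may differ between DPDK
releases, and the TX-buffer flush itself is elided):

#include <rte_cycles.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define MAX_PKT_BURST     32    /* flush the TX buffer once it holds this many packets */
#define BURST_TX_DRAIN_US 100   /* ...or after this many microseconds, whichever comes first */

/* Condensed forwarding loop: packets are staged per TX port and only handed
 * to the NIC when the buffer fills or the drain timer fires, which is why a
 * lone packet can wait up to roughly BURST_TX_DRAIN_US before it is sent. */
static void l3fwd_style_loop(uint8_t port, uint16_t queue)
{
    struct rte_mbuf *pkts[MAX_PKT_BURST];
    const uint64_t drain_tsc = rte_get_tsc_hz() / 1000000ULL * BURST_TX_DRAIN_US;
    uint64_t prev_tsc = rte_rdtsc();

    for (;;) {
        uint64_t cur_tsc = rte_rdtsc();
        if (cur_tsc - prev_tsc > drain_tsc) {
            /* flush any partially filled TX buffers here */
            prev_tsc = cur_tsc;
        }

        uint16_t nb_rx = rte_eth_rx_burst(port, queue, pkts, MAX_PKT_BURST);
        for (uint16_t i = 0; i < nb_rx; i++) {
            /* route lookup, then append to the per-port TX buffer; the
             * buffer is transmitted once it reaches MAX_PKT_BURST entries */
        }
    }
}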


[dpdk-dev] roundtrip delay

2014-05-27 Thread Jun Han
Hi all,

I've also asked a similar question in the previous thread, but I'll copy it
here for better visibility. I would really appreciate it if you could provide
some hints on my question below. Thanks a lot!

Thanks a lot, Jeff, for your detailed explanation. I still have one open
question left. I would be grateful if someone would share their insight on it.

I have performed experiments to vary both the MAX_BURST_SIZE (originally
set as 32) and BURST_TX_DRAIN_US (originally set as 100 usec) in l3fwd
main.c.

While I vary the MAX_BURST_SIZE (1, 8, 16, 32, 64, and 128) and fix
BURST_TX_DRAIN_US=100 usec, I see a low average latency when sending a
burst of packets less than or equal to the MAX_BURST_SIZE.
For example, when MAX_BURST_SIZE is 32, if I send a burst of 32 packets or
less, then I get around 10 usec of latency. When it goes over that, the
average latency starts to increase, which makes total sense.

My main question is the following. When I start sending continuous packets
at a rate of 14.88 Mpps for 64B packets, I consistently measure an average
latency of 150 usec, no matter what MAX_BURST_SIZE is. My guess is that the
latency should be bounded by BURST_TX_DRAIN_US, which is fixed at 100 usec.
Would you please share your thoughts on this issue?



On Sun, May 25, 2014 at 8:12 PM, Jayakumar, Muthurajan <
muthurajan.jayakumar at intel.com> wrote:

> Please kindly refer to the recent thread titled "DPDK Latency Issue" on a
> similar topic. Jeff Shaw's reply on that thread is copied and pasted below.
>
> Hello,
>
> > I measured a roundtrip latency (using a Spirent traffic generator) of
> sending 64B packets over a 10GbE link to DPDK, and DPDK does nothing but
> simply forward back to the incoming port (l3fwd without any lookup code,
> i.e., dstport = port_id).
> > However, to my surprise, the average latency was around 150 usec. (The
> packet drop rate was only 0.001%, i.e., 283 packets/sec dropped.) Another
> test I did was to measure the latency due to sending only a single 64B
> packet, and the latency I measured ranges anywhere from 40 usec to 100
> usec.
>
> 40-100usec seems very high.
> The l3fwd application does some internal buffering before transmitting the
> packets.  It buffers either 32 packets, or waits up to 100us (hash-defined
> as BURST_TX_DRAIN_US), whichever comes first.
> Try either removing this timeout, or sending a burst of 32 packets at a
> time.  Or you could try with testpmd, which should have reasonably low
> latency out of the box.
>
> There is also a section in the Release Notes (8.6 How can I tune my
> network application to achieve lower latency?) which provides some pointers
> for getting lower latency if you are willing to give up top-rate throughput.
>
> Thanks,
> Jeff
>
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Helmut Sim
> Sent: Sunday, May 25, 2014 7:55 AM
> To: dev at dpdk.org
> Subject: [dpdk-dev] roundtrip delay
>
> Hi,
>
> What is the way to optimize the round-trip delay of a packet, i.e.,
> receiving a packet and then sending it back to the network in minimal
> time, assuming the rx and tx threads run in a continuous rx/tx loop?
>
> Thanks,
>
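
One sketch of the pattern Helmut asks about, assuming latency matters more
than peak throughput: drop the staged TX buffering and drain timer entirely
and hand every received burst straight back to the NIC (function, port, and
queue names below are placeholders):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define RX_BURST 32

/* Minimal cut-through loop: no drain timer, no per-port staging buffer.
 * Whatever rte_eth_rx_burst() returns is transmitted immediately, so a
 * single packet is never held back waiting for a full burst. */
static void cut_through_loop(uint8_t in_port, uint8_t out_port, uint16_t queue)
{
    struct rte_mbuf *pkts[RX_BURST];

    for (;;) {
        uint16_t nb_rx = rte_eth_rx_burst(in_port, queue, pkts, RX_BURST);
        if (nb_rx == 0)
            continue;

        uint16_t nb_tx = rte_eth_tx_burst(out_port, queue, pkts, nb_rx);

        /* free anything the TX ring could not accept */
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(pkts[i]);
    }
}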


[dpdk-dev] DPDK Latency Issue

2014-05-26 Thread Jun Han
Thanks a lot, Jeff, for your detailed explanation. I still have one open
question left. I would be grateful if someone would share their insight on it.

I have performed experiments to vary both the MAX_BURST_SIZE (originally
set as 32) and BURST_TX_DRAIN_US (originally set as 100 usec) in l3fwd
main.c.

While I vary the MAX_BURST_SIZE (1, 8, 16, 32, 64, and 128) and fix
BURST_TX_DRAIN_US=100 usec, I see a low average latency when sending a
burst of packets less than or equal to the MAX_BURST_SIZE.
For example, when MAX_BURST_SIZE is 32, if I send a burst of 32 packets or
less, then I get around 10 usec of latency. When it goes over that, the
average latency starts to increase, which makes total sense.

My main question is the following. When I start sending continuous packets
at a rate of 14.88 Mpps for 64B packets, I consistently measure an average
latency of 150 usec, no matter what MAX_BURST_SIZE is. My guess is that the
latency should be bounded by BURST_TX_DRAIN_US, which is fixed at 100 usec.
Would you please share your thoughts on this issue?

Thanks,
Jun


On Thu, May 22, 2014 at 7:06 PM, Shaw, Jeffrey B
wrote:

> Hello,
>
> > I measured a roundtrip latency (using a Spirent traffic generator) of
> sending 64B packets over a 10GbE link to DPDK, and DPDK does nothing but
> simply forward back to the incoming port (l3fwd without any lookup code,
> i.e., dstport = port_id).
> > However, to my surprise, the average latency was around 150 usec. (The
> packet drop rate was only 0.001%, i.e., 283 packets/sec dropped.) Another
> test I did was to measure the latency due to sending only a single 64B
> packet, and the latency I measured ranges anywhere from 40 usec to 100
> usec.
>
> 40-100usec seems very high.
> The l3fwd application does some internal buffering before transmitting the
> packets.  It buffers either 32 packets, or waits up to 100us (hash-defined
> as BURST_TX_DRAIN_US), whichever comes first.
> Try either removing this timeout, or sending a burst of 32 packets at a
> time.  Or you could try with testpmd, which should have reasonably low
> latency out of the box.
>
> There is also a section in the Release Notes (8.6 How can I tune my
> network application to achieve lower latency?) which provides some pointers
> for getting lower latency if you are willing to give up top-rate throughput.
>
> Thanks,
> Jeff
>


[dpdk-dev] DPDK Latency Issue

2014-05-22 Thread Jun Han
Hi all,

I measured a roundtrip latency (using a Spirent traffic generator) of sending
64B packets over a 10GbE link to DPDK, and DPDK does nothing but simply
forward back to the incoming port (l3fwd without any lookup code, i.e.,
dstport = port_id).

However, to my surprise, the average latency was around 150 usec. (The
packet drop rate was only 0.001%, i.e., 283 packets/sec dropped.) Another
test I did was to measure the latency due to sending only a single 64B
packet, and the latency I measured ranges anywhere from 40 usec to 100
usec.

I do not understand why this is so slow. Can someone explain the reasoning
behind this phenomenon?

Thank you so much!


[dpdk-dev] l3fwd LPM lookup - issue when measuring latency

2014-03-23 Thread Jun Han
Hi all,

I've been trying to measure the possible performance penalty of performing
an LPM table lookup in the l3fwd code (as opposed to simple forwarding
without a lookup, i.e., forwarding back to the ingress port).

I perform two sets of experiments: (1) generate a fixed dst IP address
from DPDK pktgen; (2) generate random dst IP addresses from DPDK pktgen. My
hypothesis is that for case (1), upon receiving many packets with the same
dst IP, DPDK l3fwd should only need to fetch the LPM table entry from the
cache. However, case (2) would generate more cache misses, hence requiring
fetches from memory, which should increase the latency. (My current machine
has 20MB of L3 cache.)

However, when I measure the average number of cycles it takes to perform a
lookup indexed by the received dst IP address, the two cases yield nearly
identical results of around 34 cycles. I am using rdtsc to measure the cycles
in the rte_lpm_lookup() function in rte_lpm.h (under lib/librte_lpm). I am
not sure if this is due to an rte_rdtsc problem, or if I am misunderstanding
something.

tsc1 = rte_rdtsc();
tbl_entry = *(const uint16_t *)&lpm->tbl24[tbl24_index];
tscdif = rte_rdtsc() - tsc1;
aggreg_dif += tscdif;
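
One thing worth ruling out is the measurement itself: rte_rdtsc() costs
roughly a few tens of cycles and is not serializing, so the CPU can reorder
it around the single table read, which can swamp the difference between a
cache hit and a miss. Below is a sketch that amortizes this by timing a
whole batch of lookups instead (the function name is a placeholder, and it
assumes the rte_lpm_lookup() signature of this DPDK era, with a uint8_t
next hop):

#include <rte_cycles.h>
#include <rte_lpm.h>

/* Time n lookups back to back and report average cycles per lookup, so the
 * rdtsc overhead is paid once per batch instead of once per access.
 * 'ips' is the array of destination addresses to look up. */
static double cycles_per_lookup(struct rte_lpm *lpm,
                                const uint32_t *ips, unsigned int n)
{
    uint8_t next_hop;
    uint64_t start = rte_rdtsc();

    for (unsigned int i = 0; i < n; i++)
        (void)rte_lpm_lookup(lpm, ips[i], &next_hop);

    return (double)(rte_rdtsc() - start) / n;
}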

I would appreciate it if someone could provide their opinion on this
phenomenon.

Thanks in advance!
Jun


[dpdk-dev] DPDK pktgen

2014-03-22 Thread Jun Han
Hi

I have a quick question about using your DPDK pktgen. I want to
sequentially increase the destination IP address, so I modified the .lua
file in the following way. However, when I print the contents of the IP
header on the other machine, I find that the IP address does not keep
incrementing all the way to 255.255.255.0 but quickly rolls back to some
other IP address (e.g., from 0.5.54.0 to 0.1.180.0, or from 0.1.145.0 to
0.6.24.0, as shown below in bold). Do you know what could be the
problem? Am I writing this .lua file incorrectly? Thank you.

---- Part of .lua file ----

pktgen.set("0", "size", 64);
pktgen.dst_ip("0", "start", "0.0.0.0");
pktgen.dst_ip("0", "inc", "0.0.1.0");
pktgen.dst_ip("0", "min", "0.0.0.0");
pktgen.dst_ip("0", "max", "10.255.255.255");


---- Output of received packet header dst IP addresses on the receiving
machine ----

IP=0x390500, IP=0.5.57.0

IP=0x380500, IP=0.5.56.0

IP=0x370500, IP=0.5.55.0

*IP=0x360500, IP=0.5.54.0*

*IP=0xb40100, IP=0.1.180.0*

IP=0xb30100, IP=0.1.179.0

IP=0xb20100, IP=0.1.178.0

IP=0xb00100, IP=0.1.176.0

IP=0xac0100, IP=0.1.172.0

IP=0xab0100, IP=0.1.171.0

IP=0xaa0100, IP=0.1.170.0

IP=0xa90100, IP=0.1.169.0

IP=0xa80100, IP=0.1.168.0

IP=0xa70100, IP=0.1.167.0

IP=0xa60100, IP=0.1.166.0

IP=0xa50100, IP=0.1.165.0

IP=0xa40100, IP=0.1.164.0

IP=0xa30100, IP=0.1.163.0

IP=0xa20100, IP=0.1.162.0

IP=0xa10100, IP=0.1.161.0

IP=0xa00100, IP=0.1.160.0

IP=0x9f0100, IP=0.1.159.0

IP=0x9e0100, IP=0.1.158.0

IP=0x9d0100, IP=0.1.157.0

IP=0x9c0100, IP=0.1.156.0

IP=0x9b0100, IP=0.1.155.0

IP=0x9a0100, IP=0.1.154.0

IP=0x990100, IP=0.1.153.0

IP=0x980100, IP=0.1.152.0

IP=0x970100, IP=0.1.151.0

IP=0x960100, IP=0.1.150.0

IP=0x950100, IP=0.1.149.0

IP=0x940100, IP=0.1.148.0

IP=0x930100, IP=0.1.147.0

IP=0x920100, IP=0.1.146.0

*IP=0x910100, IP=0.1.145.0*

*IP=0x180600, IP=0.6.24.0*

IP=0x170600, IP=0.6.23.0

IP=0x160600, IP=0.6.22.0

IP=0x140600, IP=0.6.20.0

IP=0x100600, IP=0.6.16.0

IP=0xf0600, IP=0.6.15.0

IP=0xe0600, IP=0.6.14.0

IP=0xd0600, IP=0.6.13.0

IP=0xc0600, IP=0.6.12.0

IP=0xb0600, IP=0.6.11.0

IP=0xa0600, IP=0.6.10.0

IP=0x90600, IP=0.6.9.0

IP=0x80600, IP=0.6.8.0

IP=0x70600, IP=0.6.7.0

IP=0x60600, IP=0.6.6.0

IP=0x50600, IP=0.6.5.0

IP=0x40600, IP=0.6.4.0

IP=0x30600, IP=0.6.3.0

IP=0x20600, IP=0.6.2.0

IP=0x10600, IP=0.6.1.0

IP=0x600, IP=0.6.0.0

IP=0xff0500, IP=0.5.255.0

IP=0xfe0500, IP=0.5.254.0

IP=0xfd0500, IP=0.5.253.0

IP=0xfc0500, IP=0.5.252.0

IP=0xfb0500, IP=0.5.251.0

IP=0xfa0500, IP=0.5.250.0

IP=0xf90500, IP=0.5.249.0

IP=0xf80500, IP=0.5.248.0

IP=0xf70500, IP=0.5.247.0

IP=0xf60500, IP=0.5.246.0

IP=0xf50500, IP=0.5.245.0

IP=0x6a0200, IP=0.2.106.0

IP=0x690200, IP=0.2.105.0

IP=0x680200, IP=0.2.104.0

IP=0x660200, IP=0.2.102.0

IP=0x620200, IP=0.2.98.0

IP=0x610200, IP=0.2.97.0

IP=0x600200, IP=0.2.96.0

IP=0x5f0200, IP=0.2.95.0

IP=0x5e0200, IP=0.2.94.0

IP=0x5d0200, IP=0.2.93.0

IP=0x5c0200, IP=0.2.92.0

IP=0x5b0200, IP=0.2.91.0

IP=0x5a0200, IP=0.2.90.0

IP=0x590200, IP=0.2.89.0

IP=0x580200, IP=0.2.88.0

IP=0x570200, IP=0.2.87.0

IP=0x560200, IP=0.2.86.0

IP=0x550200, IP=0.2.85.0

IP=0x540200, IP=0.2.84.0

IP=0x530200, IP=0.2.83.0

IP=0x520200, IP=0.2.82.0

IP=0x510200, IP=0.2.81.0

IP=0x500200, IP=0.2.80.0

IP=0x4f0200, IP=0.2.79.0

IP=0x4e0200, IP=0.2.78.0

IP=0x4d0200, IP=0.2.77.0

IP=0x4c0200, IP=0.2.76.0

IP=0x4b0200, IP=0.2.75.0

IP=0x4a0200, IP=0.2.74.0

IP=0x490200, IP=0.2.73.0

IP=0x480200, IP=0.2.72.0

IP=0x470200, IP=0.2.71.0

IP=0xb70600, IP=0.6.183.0

IP=0xb60600, IP=0.6.182.0

IP=0xb50600, IP=0.6.181.0

IP=0xb30600, IP=0.6.179.0

IP=0xaf0600, IP=0.6.175.0

IP=0xae0600, IP=0.6.174.0

IP=0xad0600, IP=0.6.173.0

IP=0xac0600, IP=0.6.172.0

IP=0xab0600, IP=0.6.171.0

IP=0xaa0600, IP=0.6.170.0

IP=0xa90600, IP=0.6.169.0

IP=0xa80600, IP=0.6.168.0

IP=0xa70600, IP=0.6.167.0

IP=0xa60600, IP=0.6.166.0

IP=0xa50600, IP=0.6.165.0

IP=0xa40600, IP=0.6.164.0

IP=0xa30600, IP=0.6.163.0

IP=0xa20600, IP=0.6.162.0

IP=0xa10600, IP=0.6.161.0

IP=0xa00600, IP=0.6.160.0

IP=0x9f0600, IP=0.6.159.0

IP=0x9e0600, IP=0.6.158.0

IP=0x9d0600, IP=0.6.157.0

IP=0x9c0600, IP=0.6.156.0

IP=0x9b0600, IP=0.6.155.0

IP=0x9a0600, IP=0.6.154.0

IP=0x990600, IP=0.6.153.0

IP=0x980600, IP=0.6.152.0

IP=0x970600, IP=0.6.151.0

IP=0x960600, IP=0.6.150.0

IP=0x950600, IP=0.6.149.0

IP=0x940600, IP=0.6.148.0

IP=0x130300, IP=0.3.19.0

IP=0x120300, IP=0.3.18.0

IP=0x110300, IP=0.3.17.0

IP=0xf0300, IP=0.3.15.0

IP=0xb0300, IP=0.3.11.0

IP=0xa0300, IP=0.3.10.0

IP=0x90300, IP=0.3.9.0

IP=0x80300, IP=0.3.8.0

IP=0x70300, IP=0.3.7.0

IP=0x60300, IP=0.3.6.0

IP=0x50300, IP=0.3.5.0

IP=0x40300, IP=0.3.4.0

IP=0x30300, IP=0.3.3.0

IP=0x20300, IP=0.3.2.0

IP=0x10300, IP=0.3.1.0

IP=0x300, IP=0.3.0.0

IP=0xff0200, IP=0.2.255.0

IP=0xfe0200, IP=0.2.254.0

IP=0xfd0200, IP=0.2.253.0

IP=0xfc0200, IP=0.2.252.0

IP=0xfb0200, IP=0.2.251.0

IP=0xfa0200, IP=0.2.250.0

IP=0xf90200, IP=0.2.249.0

IP=0xf80200, IP=0.2.248.0


[dpdk-dev] l2fwd/l3fwd performance drop of about 25% ?

2014-02-25 Thread Jun Han
Hi all,

I have a quick question regarding the performance of DPDK l2fwd (same
problem with l3fwd). I am seeing that when we start multiple ports (e.g.,
12 ports), for 64-byte packets, the RX rate is only around 11 Mpps per
port, instead of 14.88 Mpps, which is the line rate (accounting for the
preamble + start-of-frame delimiter + inter-frame gap). Do you know what
could be the problem? I am describing my experiment setup below.
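
For reference, the 14.88 Mpps figure follows from the 84 bytes each 64-byte
frame occupies on the wire; a quick sanity check:

#include <stdio.h>

int main(void)
{
    /* 64B frame + 8B preamble/SFD + 12B inter-frame gap = 84B per packet on the wire */
    double bits_per_pkt = (64 + 8 + 12) * 8.0;
    double pps = 10e9 / bits_per_pkt;              /* 10 Gbit/s line rate */

    printf("theoretical max: %.2f Mpps\n", pps / 1e6);   /* ~14.88 Mpps */
    return 0;
}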

Setup:
1. We have Intel Xeon E5-2680 (8 cores, 2.7GHz) dual socket, with 6x10GbE
Intel 82599EB dualport NICs (total of 12 ports). Machine A runs your
pktgen, and Machine B runs DPDK l2fwd, unmodified.

2. We are running pktgen on Machine A with the following command:
./app/build/pktgen -c  -n 4 --proc-type auto --socket-mem 1024,1024
--file-prefix pg -- -p 0xfff0 -P -m "1.0, 2.1, 3.2, 4.3, 8.4, 9.5, 10.6,
11.7, 12.8, 13.9, 14.10, 15.11"

3. We are running l2fwd on Machine B with the following command:
sudo ./build/l2fwd -c 0xff0f -n 4 -- -p 0xfff


Thank you very much in advance.
Jun


[dpdk-dev] Rx-errors with testpmd (only 75% line rate)

2014-02-10 Thread Jun Han
Hi Michael,

We are also trying to purchase an IXIA traffic generator. Could you let us
know which chassis + load modules you are using so we can use that as a
reference to look for the model we need? There seems to be quite a number
of different models.

Thank you.


On Tue, Jan 28, 2014 at 9:31 AM, Dmitry Vyal  wrote:

> On 01/28/2014 12:00 AM, Michael Quicquaro wrote:
>
>> Dmitry,
>> I cannot thank you enough for this information.  This too was my main
>> problem.  I put a "small" unmeasured delay before the call to
>> rte_eth_rx_burst() and suddenly it starts returning bursts of 512 packets
>> vs. 4!!
>> Best Regards,
>> Mike
>>
>>
> Thanks for confirming my guesses! By the way, make sure the number of
> packets you receive in a single burst is less than the configured queue
> size, or you will lose packets too. Maybe your "small" delay is not so
> small :) For my own purposes I use a delay of about 150 usecs.
>
> P.S. I wonder why this issue is not mentioned in the documentation. Is it
> evident to everyone doing network programming?
>
>
>
>
>> On Wed, Jan 22, 2014 at 9:52 AM, Dmitry Vyal <dmitryvyal at gmail.com> wrote:
>>
>> Hello Michael,
>>
>> I suggest you check the average burst sizes on the receive queues.
>> It looks like I have stumbled upon a similar issue several times. If
>> you call rte_eth_rx_burst too frequently, the NIC begins losing
>> packets no matter how much CPU horsepower you have (the more you
>> have, the more it loses, actually). In my case this situation
>> occurred when the average burst size was less than 20 packets or so.
>> I'm not sure what the reason for this behavior is, but I observed it
>> in several applications on Intel 82599 10Gb cards.
>>
>> Regards, Dmitry
>>
>>
>>
>> On 01/09/2014 11:28 PM, Michael Quicquaro wrote:
>>
>> Hello,
>> My hardware is a Dell PowerEdge R820:
>> 4x Intel Xeon E5-4620 2.20GHz 8 core
>> 16GB RDIMM 1333 MHz Dual Rank, x4 - Quantity 16
>> Intel X520 DP 10Gb DA/SFP+
>>
>> So in summary 32 cores @ 2.20GHz and 256GB RAM
>>
>> ... plenty of horsepower.
>>
>> I've reserved 16 1GB Hugepages
>>
>> I am configuring only one interface and using testpmd in rx_only
>> mode to first see if I can receive at line rate.

>> I am generating traffic on a different system which is running the
>> netmap pkt-gen program - generating 64 byte packets at close to
>> line rate.

>> I am only able to receive approx. 75% of line rate, and I see the
>> Rx-errors in the port stats going up proportionally.
>> I have verified that all receive queues are being used, but
>> strangely enough, it doesn't matter how many queues beyond 2 I
>> use; the throughput is the same.  I have verified with 'mpstat -P
>> ALL' that all specified cores are used.  The utilization of each
>> core is only roughly 25%.

>> Here is my command line:
>> testpmd -c 0x -n 4 -- --nb-ports=1 --coremask=0xfffe
>> --nb-cores=8 --rxd=2048 --txd=2048 --mbcache=512 --burst=512
>> --rxq=8 --txq=8 --interactive

>> What can I do to trace down this problem?  It seems very similar
>> to a thread on this list back in May titled "Best example for
>> showing throughput?", where no resolution was ever mentioned.

>> Thanks for any help.
>> - Michael
>>
>>
>>
>>
>
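
A sketch of the pattern Dmitry describes, with placeholder names; the 150 us
pause is the figure he quotes, and the right value depends on the configured
RX ring size (the burst size must stay below it):

#include <stdio.h>
#include <rte_cycles.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 512   /* must stay below the configured RX ring size */

/* Poll less aggressively so the NIC can accumulate larger bursts, and track
 * the average burst size to check whether the pause is actually helping. */
static void rx_only_loop(uint8_t port, uint16_t queue)
{
    struct rte_mbuf *bufs[BURST_SIZE];
    uint64_t pkts = 0, polls = 0;

    for (;;) {
        rte_delay_us(150);   /* back off before the next poll */

        uint16_t nb_rx = rte_eth_rx_burst(port, queue, bufs, BURST_SIZE);
        if (nb_rx == 0)
            continue;

        pkts += nb_rx;
        if (++polls % 1000000 == 0)
            printf("average burst: %.1f packets\n", (double)pkts / polls);

        for (uint16_t i = 0; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);   /* rx_only-style: just drop the packets */
    }
}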


[dpdk-dev] measuring latency with pktgen <=> l2fwd

2014-01-28 Thread Jun Han
Hi all,

Does anyone know if there is a way to measure the round trip latency
between pktgen and l2fwd? If not, is there a way to add timestamps to
measure latency? Our current setup is: Machine A runs DPDK pktgen and is
connected back-to-back with Machine B running l2fwd.

Thank you.
Jun
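
One way to get a round-trip number without generator support is to stamp the
TSC into the payload on transmit and read it back when the looped-back frame
returns to the same machine. A minimal sketch follows; the payload offset and
function names are hypothetical, and the offset must sit past the headers of
the frames being generated:

#include <string.h>
#include <rte_cycles.h>
#include <rte_mbuf.h>

#define TS_OFFSET 42   /* hypothetical offset into the payload for the timestamp */

/* Sender side: stash the current TSC into the outgoing frame. */
static inline void stamp_tx(struct rte_mbuf *m)
{
    uint64_t now = rte_rdtsc();
    memcpy(rte_pktmbuf_mtod(m, char *) + TS_OFFSET, &now, sizeof(now));
}

/* Same machine, when the frame comes back from l2fwd: round trip in usec. */
static inline double rtt_us(struct rte_mbuf *m)
{
    uint64_t then;
    memcpy(&then, rte_pktmbuf_mtod(m, char *) + TS_OFFSET, sizeof(then));
    return (double)(rte_rdtsc() - then) * 1e6 / (double)rte_get_tsc_hz();
}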


[dpdk-dev] Measuring latency

2013-11-25 Thread Jun Han
Hi,

Does anyone know if there is a DPDK-provided tool to measure the latency of
a packet traversal (i.e., from when a packet is copied from the NIC to
userspace)? Also, we are currently using rdtsc to measure processing delay
in userspace, but does anyone have suggestions or experience with other
tools? Additionally, we want to locate the latency bottleneck. Has anyone
looked into this? Would you please share your experiences?

Thank you very much.

Jun


[dpdk-dev] L3FWD LPM IP lookup performance question

2013-09-24 Thread Jun Han
Hello,

We are trying to benchmark the L3FWD application and have a question
regarding the IP lookup algorithm, as we expect the bottleneck to be the
lookup. Could someone let us know how efficient the lookup algorithm that
L3FWD uses (e.g., LPM) is? We are asking because we want to obtain the
highest L3 forwarding performance numbers, and we might need to change the
lookup method if the current LPM method is not efficient enough.

Thank you very much,

Jun