[dpdk-dev] Surprisingly high TCP ACK packets drop counter

2013-11-01 Thread Alexander Belyakov
Hello,

we have a simple test application built on top of DPDK whose sole purpose is
to forward as many packets as possible. Generally we easily achieve 14.5 Mpps
with two 82599EB NICs (one as input and one as output). The only surprising
exception is forwarding a pure TCP ACK flood, where performance always drops
to approximately 7 Mpps.

For simplicity, consider two different types of traffic:
1) a TCP SYN flood is forwarded at a 14.5 Mpps rate,
2) a pure TCP ACK flood is forwarded at only a 7 Mpps rate.

Both SYN and ACK packets have exactly the same length.

It is worth mentioning that this forwarding application looks at the Ethernet
and IP headers, but never touches L4 headers.
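
In essence the forwarding core does nothing more than the loop below (a
simplified sketch of what our application does; the burst size and the single
rx/tx queue pair shown here are just illustrative parameters):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Simplified forwarding loop: everything received on rx_port is
 * transmitted on tx_port; mbufs that could not be sent are dropped. */
static void
forward_loop(uint8_t rx_port, uint8_t tx_port, uint16_t queue)
{
        struct rte_mbuf *pkts[BURST_SIZE];
        uint16_t nb_rx, nb_tx, i;

        for (;;) {
                nb_rx = rte_eth_rx_burst(rx_port, queue, pkts, BURST_SIZE);
                if (nb_rx == 0)
                        continue;
                nb_tx = rte_eth_tx_burst(tx_port, queue, pkts, nb_rx);
                for (i = nb_tx; i < nb_rx; i++)
                        rte_pktmbuf_free(pkts[i]);
        }
}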

We tracked the issue down to the RX path. To be specific, there are 4 RX
queues initialized on the input port, and rte_eth_stats_get() shows uniform
packet distribution (q_ipackets) among them, while q_errors remains zero for
all queues. The only drop counter that increases quickly in the case of a
pure ACK flood is ierrors, while rx_nombuf remains zero.
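
For reference, this is roughly how we sample those counters (a minimal sketch
against the rte_ethdev API; the queue count is specific to our setup and error
handling is omitted):

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

#define NB_RX_QUEUES 4

/* Print port-level and per-queue RX counters for the input port. */
static void
dump_rx_stats(uint8_t port_id)
{
        struct rte_eth_stats stats;
        unsigned int q;

        rte_eth_stats_get(port_id, &stats);

        printf("port %u: ipackets=%" PRIu64 " ierrors=%" PRIu64
               " rx_nombuf=%" PRIu64 "\n",
               port_id, stats.ipackets, stats.ierrors, stats.rx_nombuf);

        /* Per-queue counters show whether RSS spreads the load evenly. */
        for (q = 0; q < NB_RX_QUEUES; q++)
                printf("  rxq %u: q_ipackets=%" PRIu64 " q_errors=%" PRIu64 "\n",
                       q, stats.q_ipackets[q], stats.q_errors[q]);
}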

We tried different kinds of traffic generators, but always got the same
result: 7 Mpps (instead of the expected 14 Mpps) for TCP packets with the ACK
flag bit set and all other flag bits cleared. Source IPs and ports are
selected randomly.

Please let us know if anyone is aware of such strange behavior and where we
should look to narrow down the problem.

Thanks in advance,
Alexander Belyakov


[dpdk-dev] DPDK testpmd forwarding performace degradation

2015-02-05 Thread Alexander Belyakov
On Thu, Jan 29, 2015 at 3:43 PM, Alexander Belyakov 
wrote:

>
>
> On Wed, Jan 28, 2015 at 3:24 PM, Alexander Belyakov 
> wrote:
>
>>
>>
>> On Tue, Jan 27, 2015 at 7:21 PM, De Lara Guarch, Pablo <
>> pablo.de.lara.guarch at intel.com> wrote:
>>
>>>
>>>
>>> > On Tue, Jan 27, 2015 at 10:51 AM, Alexander Belyakov
>>>
>>> >  wrote:
>>>
>>> >
>>>
>>> > Hi Pablo,
>>>
>>> >
>>>
>>> > On Mon, Jan 26, 2015 at 5:22 PM, De Lara Guarch, Pablo
>>>
>>> >  wrote:
>>>
>>> > Hi Alexander,
>>>
>>> >
>>>
>>> > > -Original Message-
>>>
>>> > > From: dev [mailto:dev-bounces at dpdk.org ] On
>>> Behalf Of Alexander
>>>
>>> > Belyakov
>>>
>>> > > Sent: Monday, January 26, 2015 10:18 AM
>>>
>>> > > To: dev at dpdk.org
>>>
>>> > > Subject: [dpdk-dev] DPDK testpmd forwarding performace degradation
>>>
>>> > >
>>>
>>> > > Hello,
>>>
>>> > >
>>>
>>> > > recently I have found a case of significant performance degradation
>>> for our
>>>
>>> > > application (built on top of DPDK, of course). Surprisingly, similar
>>> issue
>>>
>>> > > is easily reproduced with default testpmd.
>>>
>>> > >
>>>
>>> > > To show the case we need simple IPv4 UDP flood with variable UDP
>>>
>>> > payload
>>>
>>> > > size. Saying "packet length" below I mean: Eth header length (14
>>> bytes) +
>>>
>>> > > IPv4 header length (20 bytes) + UPD header length (8 bytes) + UDP
>>> payload
>>>
>>> > > length (variable) + CRC (4 bytes). Source IP addresses and ports are
>>>
>>> > selected
>>>
>>> > > randomly for each packet.
>>>
>>> > >
>>>
>>> > > I have used DPDK with revisions 1.6.0r2 and 1.7.1. Both show the same
>>>
>>> > issue.
>>>
>>> > >
>>>
>>> > > Follow "Quick start" guide (http://dpdk.org/doc/quick-start) to
>>> build and
>>>
>>> > > run testpmd. Enable testpmd forwarding ("start" command).
>>>
>>> > >
>>>
>>> > > Table below shows measured forwarding performance depending on
>>>
>>> > packet
>>>
>>> > > length:
>>>
>>> > >
>>>
>>> > > No. -- UDP payload length (bytes) -- Packet length (bytes) --
>>> Forwarding
>>>
>>> > > performance (Mpps) -- Expected theoretical performance (Mpps)
>>>
>>> > >
>>>
>>> > > 1. 0 -- 64 -- 14.8 -- 14.88
>>>
>>> > > 2. 34 -- 80 -- 12.4 -- 12.5
>>>
>>> > > 3. 35 -- 81 -- 6.2 -- 12.38 (!)
>>>
>>> > > 4. 40 -- 86 -- 6.6 -- 11.79
>>>
>>> > > 5. 49 -- 95 -- 7.6 -- 10.87
>>>
>>> > > 6. 50 -- 96 -- 10.7 -- 10.78 (!)
>>>
>>> > > 7. 60 -- 106 -- 9.4 -- 9.92
>>>
>>> > >
>>>
>>> > > At line number 3 we have added 1 byte of UDP payload (comparing to
>>>
>>> > > previous
>>>
>>> > > line) and got forwarding performance halved! 6.2 Mpps against 12.38
>>> Mpps
>>>
>>> > > of
>>>
>>> > > expected theoretical maximum for this packet size.
>>>
>>> > >
>>>
>>> > > That is the issue.
>>>
>>> > >
>>>
>>> > > Significant performance degradation exists up to 50 bytes of UDP
>>> payload
>>>
>>> > > (96 bytes packet length), where it jumps back to theoretical maximum.
>>>
>>> > >
>>>
>>> > > What is happening between 80 and 96 bytes packet length?
>>>
>>> > >
>>>
>>> > > This issue is stable and 100% reproducible. At this point I am not
>>> sure if
>>>
>>> > > it is DPDK or NIC issue. These tests have been performed on Intel(R)
>>> Eth
>>>
>>> > > Svr Bypass Adapter X520-LR2 (X520LR2BP).
>>>
>>> >

[dpdk-dev] bnx2x pmd performance expectations

2015-12-22 Thread Alexander Belyakov
Hi,

just tried to forward a lot of tiny packets with testpmd (dpdk-2.2.0) running
on a Broadcom Corporation NetXtreme II BCM57810S 10 Gigabit Ethernet (rev 10)
adapter. I see a forwarding performance of 2.6 Mpps instead of the expected
14.8 Mpps. What should be done to achieve better results?

Thank you,
Alexander Belyakov


[dpdk-dev] bnx2x pmd performance expectations

2015-12-29 Thread Alexander Belyakov
Thank you for pointing this out. While it seems to me the problem here is RX,
I will also look into TX burst limitations.

-a

On 28 December 2015 at 00:17, Chas Williams <3chas3 at gmail.com> wrote:

> I wouldn't consider myself an expert on this driver but while looking
> at some other things, I have noted that RTE_PMD_BNX2X_TX_MAX_BURST is
> defined to be 1.  This bursts single packets to bnx2x_tx_encap() but it
> looks like bnx2x_tx_encap() is far more capable than that.
>
> On Tue, 2015-12-22 at 14:52 +0300, Alexander Belyakov wrote:
> > Hi,
> >
> > just tried to forward a lot of tiny packets with testpmd (dpdk-2.2.0)
> > running on Broadcom Corporation NetXtreme II BCM57810S 10 Gigabit
> > Ethernet
> > (rev 10) adapter. I see forwarding performance of 2.6Mpps instead of
> > expected 14.8Mpps. What should be done to achieve better results?
> >
> > Thank you,
> > Alexander Belyakov
>


[dpdk-dev] Unable to build dpdk : #error "SSSE3 instruction set not enabled"

2013-12-02 Thread Alexander Belyakov
Hi,

On Fri, Nov 29, 2013 at 2:39 PM, Surya Nimmagadda wrote:

> I am seeing the following error when doing make with 1.5.0r2 or 1.5.1r1
>
> == Build lib/librte_meter
> == Build lib/librte_sched
>   CC rte_sched.o
> In file included from
> /home/surya/dpdk/dpdk-1.5.1r1/lib/librte_sched/rte_bitmap.h:77:0,
>  from
> /home/surya/dpdk/dpdk-1.5.1r1/lib/librte_sched/rte_sched.c:47:
> /usr/lib/gcc/x86_64-linux-gnu/4.6/include/tmmintrin.h:31:3: error: #error
> "SSSE3 instruction set not enabled"
> make[3]: *** [rte_sched.o] Error 1
> make[2]: *** [librte_sched] Error 2
> make[1]: *** [lib] Error 2
> make: *** [all] Error 2
>
>
Adding the following line to your defconfig should also help:
"CONFIG_RTE_BITMAP_OPTIMIZATIONS=0"

Regards,
Alexander


[dpdk-dev] Surprisingly high TCP ACK packets drop counter

2013-11-04 Thread Alexander Belyakov
Hi,

thanks for the patch and explanation. We have tried DPDK 1.3 and 1.5 - both
have the same issue.

Regards,
Alexander


On Fri, Nov 1, 2013 at 6:54 PM, Wang, Shawn  wrote:

> Hi:
>
> We had the same problem before. It turned out that RSC (receive side
> coalescing) is enabled by default in DPDK. So we write this naïve patch to
> disable it. This patch is based on DPDK 1.3. Not sure 1.5 has changed it
> or not.
> After this patch, ACK rate should go back to 14.5Mpps. For details, you
> can refer to Intel(R) 82599 10 GbE Controller Datasheet. (7.11 Receive Side
> Coalescing).
>
> From: xingbow 
> Date: Wed, 21 Aug 2013 11:35:23 -0700
> Subject: [PATCH] Disable RSC in ixgbe_dev_rx_init function in file
>
>  ixgbe_rxtx.c
>
> ---
>
>  DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h | 2 +-
>  DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c   | 7 +++
>  2 files changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h
> b/DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h
> index 7fffd60..f03046f 100644
>
> --- a/DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h
>
> +++ b/DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h
>
> @@ -1930,7 +1930,7 @@ enum {
>
>  #define IXGBE_RFCTL_ISCSI_DIS  0x0001
>  #define IXGBE_RFCTL_ISCSI_DWC_MASK 0x003E
>  #define IXGBE_RFCTL_ISCSI_DWC_SHIFT1
> -#define IXGBE_RFCTL_RSC_DIS0x0010
>
> +#define IXGBE_RFCTL_RSC_DIS0x0020
>
>  #define IXGBE_RFCTL_NFSW_DIS   0x0040
>  #define IXGBE_RFCTL_NFSR_DIS   0x0080
>  #define IXGBE_RFCTL_NFS_VER_MASK   0x0300
> diff --git a/DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> b/DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
> index 07830b7..ba6e05d 100755
>
> --- a/DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
>
> +++ b/DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
>
> @@ -3007,6 +3007,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
>
> uint64_t bus_addr;
> uint32_t rxctrl;
> uint32_t fctrl;
> +   uint32_t rfctl;
>
> uint32_t hlreg0;
> uint32_t maxfrs;
> uint32_t srrctl;
> @@ -3033,6 +3034,12 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
>
> fctrl |= IXGBE_FCTRL_PMCF;
> IXGBE_WRITE_REG(hw, IXGBE_FCTRL, fctrl);
>
> +   /* Disable RSC */
> +   RTE_LOG(INFO, PMD, "Disable RSC\n");
> +   rfctl = IXGBE_READ_REG(hw, IXGBE_RFCTL);
> +   rfctl |= IXGBE_RFCTL_RSC_DIS;
> +   IXGBE_WRITE_REG(hw, IXGBE_RFCTL, rfctl);
> +
>
> /*
>  * Configure CRC stripping, if any.
>  */
> --
>
>
> Thanks.
> Wang, Xingbo
>
>
>
>
> On 11/1/13 6:43 AM, "Alexander Belyakov"  wrote:
>
> >Hello,
> >
> >we have simple test application on top of DPDK which sole purpose is to
> >forward as much packets as possible. Generally we easily achieve 14.5Mpps
> >with two 82599EB (one as input and one as output). The only suprising
> >exception is forwarding pure TCP ACK flood when performace always drops to
> >approximately 7Mpps.
> >
> >For simplicity consider two different types of traffic:
> >1) TCP SYN flood is forwarded at 14.5Mpps rate,
> >2) pure TCP ACK flood is forwarded only at 7Mpps rate.
> >
> >Both SYN and ACK packets have exactly the same length.
> >
> >It is worth to mention, this forwarding application looks at Ethernet and
> >IP headers, but never deals with L4 headers.
> >
> >We tracked down issue to RX circuit. To be specific, there are 4 RX queues
> >initialized on input port and rte_eth_stats_get() shows uniform packet
> >distribution (q_ipackets) among them, while q_errors remain zero for all
> >queues. The only drop counter quickly increasing in the case of pure ACK
> >flood is ierrors, while rx_nombuf remains zero.
> >
> >We tried different kinds of traffic generators, but always got the same
> >result: 7Mpps (instead of expected 14Mpps) for TCP packets with ACK flag
> >bit set while all other flag bits dropped. Source IPs and ports are
> >selected randomly.
> >
> >Please let us know if anyone is aware of such strange behavior and where
> >should we look at to narrow down the problem.
> >
> >Thanks in advance,
> >Alexander Belyakov
>
>


[dpdk-dev] Surprisingly high TCP ACK packets drop counter

2013-11-04 Thread Alexander Belyakov
Hello,


On Sat, Nov 2, 2013 at 9:29 AM, Prashant Upadhyaya <
prashant.upadhyaya at aricent.com> wrote:

> Hi Alexander,
>
> Regarding your following statement --
> "
> The only drop counter quickly increasing in the case of pure ACK flood is
> ierrors, while rx_nombuf remains zero.
> "
>
> Can you please explain the significance of "ierrors" counter since I am
> not familiar with that.
>
>
I was speaking about struct rte_eth_stats fields.
http://dpdk.org/doc/api/structrte__eth__stats.html


> Further,  you said you have 4 queues, how many cores  are you using for
> polling the queues ? Hopefully 4 cores for one queue each without locks.
> [It is absolutely critical that all 4 queues be polled]
>

There was one independent core per RX queue, of course.

> Further, is it possible so that your application itself reports the traffic
> receive in packets per second on each queue ? [Don't try to forward the
> traffic here, simply receive and drop in your app and sample the counters
> every second]
>

There are DPDK per-queue RX packet counters in the same struct
rte_eth_stats. TX was not an issue in this case.
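
For per-second rates one can simply sample q_ipackets periodically and take
the difference - a minimal sketch (the one-second sleep and the queue count
are assumptions matching our setup):

#include <inttypes.h>
#include <stdio.h>
#include <unistd.h>
#include <rte_ethdev.h>

#define NB_RX_QUEUES 4

/* Report packets received per second on each RX queue of port_id. */
static void
report_rx_rates(uint8_t port_id)
{
        struct rte_eth_stats prev, cur;
        unsigned int q;

        rte_eth_stats_get(port_id, &prev);
        for (;;) {
                sleep(1);
                rte_eth_stats_get(port_id, &cur);
                for (q = 0; q < NB_RX_QUEUES; q++)
                        printf("rxq %u: %" PRIu64 " pps\n",
                               q, cur.q_ipackets[q] - prev.q_ipackets[q]);
                prev = cur;
        }
}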


>
> Regards
> -Prashant
>
>
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Alexander Belyakov
> Sent: Friday, November 01, 2013 7:13 PM
> To: dev at dpdk.org
> Subject: [dpdk-dev] Surprisingly high TCP ACK packets drop counter
>
> Hello,
>
> we have simple test application on top of DPDK which sole purpose is to
> forward as much packets as possible. Generally we easily achieve 14.5Mpps
> with two 82599EB (one as input and one as output). The only suprising
> exception is forwarding pure TCP ACK flood when performace always drops to
> approximately 7Mpps.
>
> For simplicity consider two different types of traffic:
> 1) TCP SYN flood is forwarded at 14.5Mpps rate,
> 2) pure TCP ACK flood is forwarded only at 7Mpps rate.
>
> Both SYN and ACK packets have exactly the same length.
>
> It is worth to mention, this forwarding application looks at Ethernet and
> IP headers, but never deals with L4 headers.
>
> We tracked down issue to RX circuit. To be specific, there are 4 RX queues
> initialized on input port and rte_eth_stats_get() shows uniform packet
> distribution (q_ipackets) among them, while q_errors remain zero for all
> queues. The only drop counter quickly increasing in the case of pure ACK
> flood is ierrors, while rx_nombuf remains zero.
>
> We tried different kinds of traffic generators, but always got the same
> result: 7Mpps (instead of expected 14Mpps) for TCP packets with ACK flag
> bit set while all other flag bits dropped. Source IPs and ports are
> selected randomly.
>
> Please let us know if anyone is aware of such strange behavior and where
> should we look at to narrow down the problem.
>
> Thanks in advance,
> Alexander Belyakov
>
>


[dpdk-dev] Surprisingly high TCP ACK packets drop counter

2013-11-05 Thread Alexander Belyakov
Hello,

On Mon, Nov 4, 2013 at 7:06 AM, Prashant Upadhyaya <
prashant.upadhyaya at aricent.com> wrote:

> Hi Alexander,
>
> Please confirm if the patch works for you.
>

Disabling RSC (DPDK 1.3) indeed brings ACK flood forwarding performance back
to 14.5+ Mpps. No negative side effects have been discovered so far, but we
are still testing.


>
> @Wang, are you saying that without the patch the NIC does not fan out the
> messages properly on all the receive queues ?
> So what exactly happens ?
>
>
The patch deals with RSC (receive side coalescing), not RSS (receive side
scaling).


> Regards
> -Prashant
>
>
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Alexander Belyakov
> Sent: Monday, November 04, 2013 1:51 AM
> To: Wang, Shawn
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] Surprisingly high TCP ACK packets drop counter
>
> Hi,
>
> thanks for the patch and explanation. We have tried DPDK 1.3 and 1.5 -
> both have the same issue.
>
> Regards,
> Alexander
>
>


[dpdk-dev] Surprisingly high TCP ACK packets drop counter

2013-11-05 Thread Alexander Belyakov
Hello,

> The role of RSC is to reassemble input TCP segments, so it is possible
> that the number of TCP packets sent to the DPDK is lower but some
> packets may contain more data. Can you confirm that?
>
>
I don't think our test case can answer your question, because all generated
TCP ACK packets were as small as possible (no TCP payload at all). Source IPs
and ports were picked at random for each packet, so most (adjacent) packets
belong to different TCP sessions.


> In my opinion, this mechanism should be disabled by default because it
> could break PMTU discovery on a router. However it could be useful for
> somebody doing TCP termination only.
>
>
I was thinking about a new rte_eth_rxmode structure option:

@@ -280,6 +280,7 @@ struct rte_eth_rxmode {
        hw_vlan_strip    : 1, /**< VLAN strip enable. */
        hw_vlan_extend   : 1, /**< Extended VLAN enable. */
        jumbo_frame      : 1, /**< Jumbo Frame Receipt enable. */
+       disable_rsc      : 1, /**< Disable RSC (receive side coalescing). */
        hw_strip_crc     : 1; /**< Enable CRC stripping by hardware. */
 };
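
An application could then request it along these lines (purely illustrative -
the disable_rsc field is only a proposal at this point and does not exist in
any DPDK release):

#include <rte_ethdev.h>

/* Hypothetical usage of the proposed flag when configuring a port. */
static const struct rte_eth_conf port_conf = {
        .rxmode = {
                .disable_rsc = 1, /* proposed: keep RSC off for forwarding */
        },
};

/* ... rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf); */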


Regards,
Alexander


[dpdk-dev] Surprisingly high TCP ACK packets drop counter

2013-11-06 Thread Alexander Belyakov
Hello,

On Tue, Nov 5, 2013 at 6:40 PM, Prashant Upadhyaya <
prashant.upadhyaya at aricent.com> wrote:

>  Hi Alexander,
>
>
>
> I am also wondering like Olivier - yours is a nice testcase and setup,
> hence requesting the information below instead of spending a lot of time
> reinventing the test case at my end.
>
> If you have the time on your side, it would be interesting to know what is
> the number of packets per second received inside your application on each
> of your 4 queues individually in both the usecases - with and without RSC.
>
>
Packet distribution is even among all RX queues in both cases, with and
without RSC.

Regards,
Alexander


[dpdk-dev] DPDK testpmd forwarding performace degradation

2015-01-26 Thread Alexander Belyakov
Hello,

recently I have found a case of significant performance degradation for our
application (built on top of DPDK, of course). Surprisingly, a similar issue
is easily reproduced with the default testpmd.

To show the case we need a simple IPv4 UDP flood with variable UDP payload
size. By "packet length" below I mean: Eth header length (14 bytes) +
IPv4 header length (20 bytes) + UDP header length (8 bytes) + UDP payload
length (variable) + CRC (4 bytes). Source IP addresses and ports are selected
randomly for each packet.

I have used DPDK with revisions 1.6.0r2 and 1.7.1. Both show the same issue.

Follow "Quick start" guide (http://dpdk.org/doc/quick-start) to build and
run testpmd. Enable testpmd forwarding ("start" command).
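
For example, something along these lines (the build target and the EAL
core/memory options below are just what we happened to use, not a
requirement):

    make install T=x86_64-native-linuxapp-gcc
    ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -- -i
    testpmd> start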

Table below shows measured forwarding performance depending on packet
length:

No. -- UDP payload length (bytes) -- Packet length (bytes) -- Forwarding
performance (Mpps) -- Expected theoretical performance (Mpps)

1. 0 -- 64 -- 14.8 -- 14.88
2. 34 -- 80 -- 12.4 -- 12.5
3. 35 -- 81 -- 6.2 -- 12.38 (!)
4. 40 -- 86 -- 6.6 -- 11.79
5. 49 -- 95 -- 7.6 -- 10.87
6. 50 -- 96 -- 10.7 -- 10.78 (!)
7. 60 -- 106 -- 9.4 -- 9.92
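
For reference, the expected theoretical numbers above are just the 10 GbE
line rate divided by the wire footprint of one frame (packet length as
defined above plus 8 bytes of preamble/SFD and 12 bytes of inter-frame gap).
A quick check of that column (nothing DPDK-specific here, only standard
Ethernet overheads):

#include <stdio.h>

/* Theoretical maximum packet rate on 10 GbE for a given packet length
 * (Eth + IP + UDP + payload + CRC). Each frame also occupies 8 bytes of
 * preamble/SFD and 12 bytes of inter-frame gap on the wire. */
static double
line_rate_mpps(unsigned int pkt_len)
{
        const double link_bps = 10e9; /* 10 Gbit/s */
        const unsigned int wire_overhead = 8 + 12;

        return link_bps / ((pkt_len + wire_overhead) * 8) / 1e6;
}

int main(void)
{
        printf("%.2f\n", line_rate_mpps(64));  /* 14.88 */
        printf("%.2f\n", line_rate_mpps(81));  /* 12.38 */
        printf("%.2f\n", line_rate_mpps(96));  /* 10.78 */
        return 0;
}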

At line number 3 we have added 1 byte of UDP payload (compared to the
previous line) and forwarding performance is halved! 6.2 Mpps against the
12.38 Mpps expected theoretical maximum for this packet size.

That is the issue.

Significant performance degradation exists up to 50 bytes of UDP payload
(96-byte packet length), where performance jumps back to the theoretical
maximum.

What is happening between 80 and 96 bytes of packet length?

This issue is stable and 100% reproducible. At this point I am not sure
whether it is a DPDK or a NIC issue. These tests have been performed on an
Intel(R) Eth Svr Bypass Adapter X520-LR2 (X520LR2BP).

Is anyone aware of such strange behavior?

Regards,
Alexander Belyakov


[dpdk-dev] Fwd: DPDK testpmd forwarding performace degradation

2015-01-27 Thread Alexander Belyakov
Hi Pablo,

On Mon, Jan 26, 2015 at 5:22 PM, De Lara Guarch, Pablo <
pablo.de.lara.guarch at intel.com> wrote:

> Hi Alexander,
>
> > -Original Message-
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Alexander Belyakov
> > Sent: Monday, January 26, 2015 10:18 AM
> > To: dev at dpdk.org
> > Subject: [dpdk-dev] DPDK testpmd forwarding performace degradation
> >
> > Hello,
> >
> > recently I have found a case of significant performance degradation for
> our
> > application (built on top of DPDK, of course). Surprisingly, similar
> issue
> > is easily reproduced with default testpmd.
> >
> > To show the case we need simple IPv4 UDP flood with variable UDP payload
> > size. Saying "packet length" below I mean: Eth header length (14 bytes) +
> > IPv4 header length (20 bytes) + UPD header length (8 bytes) + UDP payload
> > length (variable) + CRC (4 bytes). Source IP addresses and ports are
> selected
> > randomly for each packet.
> >
> > I have used DPDK with revisions 1.6.0r2 and 1.7.1. Both show the same
> issue.
> >
> > Follow "Quick start" guide (http://dpdk.org/doc/quick-start) to build
> and
> > run testpmd. Enable testpmd forwarding ("start" command).
> >
> > Table below shows measured forwarding performance depending on packet
> > length:
> >
> > No. -- UDP payload length (bytes) -- Packet length (bytes) -- Forwarding
> > performance (Mpps) -- Expected theoretical performance (Mpps)
> >
> > 1. 0 -- 64 -- 14.8 -- 14.88
> > 2. 34 -- 80 -- 12.4 -- 12.5
> > 3. 35 -- 81 -- 6.2 -- 12.38 (!)
> > 4. 40 -- 86 -- 6.6 -- 11.79
> > 5. 49 -- 95 -- 7.6 -- 10.87
> > 6. 50 -- 96 -- 10.7 -- 10.78 (!)
> > 7. 60 -- 106 -- 9.4 -- 9.92
> >
> > At line number 3 we have added 1 byte of UDP payload (comparing to
> > previous
> > line) and got forwarding performance halved! 6.2 Mpps against 12.38 Mpps
> > of
> > expected theoretical maximum for this packet size.
> >
> > That is the issue.
> >
> > Significant performance degradation exists up to 50 bytes of UDP payload
> > (96 bytes packet length), where it jumps back to theoretical maximum.
> >
> > What is happening between 80 and 96 bytes packet length?
> >
> > This issue is stable and 100% reproducible. At this point I am not sure
> if
> > it is DPDK or NIC issue. These tests have been performed on Intel(R) Eth
> > Svr Bypass Adapter X520-LR2 (X520LR2BP).
> >
> > Is anyone aware of such strange behavior?
>
> I cannot reproduce the issue using two ports on two different 82599EB
> NICs, using 1.7.1 and 1.8.0.
> I always get either same or better linerate as I increase the packet size.
>

Thank you for trying to reproduce the issue.


> Actually, have you tried using 1.8.0?
>

I feel 1.8.0 is a little bit immature and might require some post-release
patching. Even testpmd from this release is not forwarding packets properly
on my setup. It is up and running without visible errors/warnings, and the
TX/RX counters are ticking, but I cannot see any packets at the output.
Please note, both the 1.6.0r2 and 1.7.1 releases work out-of-the-box just
fine on the same setup, with the only exception of this mysterious
performance drop.

So it will take some time to figure out what is wrong with dpdk-1.8.0.
Meanwhile we could focus on stable dpdk-1.7.1.

As for the X520-LR2 NIC - it is a dual-port bypass adapter with device id
155d. I believe it should be treated as an 82599EB, except for the bypass
feature. I set the bypass mode to "normal" in those tests.

Alexander


>
> Pablo
> >
> > Regards,
> > Alexander Belyakov
>


[dpdk-dev] Fwd: DPDK testpmd forwarding performace degradation

2015-01-27 Thread Alexander Belyakov
Hello,

On Mon, Jan 26, 2015 at 8:08 PM, Stephen Hemminger <
stephen at networkplumber.org> wrote:

> On Mon, 26 Jan 2015 13:17:48 +0300
> Alexander Belyakov  wrote:
>
> > Hello,
> >
> > recently I have found a case of significant performance degradation for
> our
> > application (built on top of DPDK, of course). Surprisingly, similar
> issue
> > is easily reproduced with default testpmd.
> >
> > To show the case we need simple IPv4 UDP flood with variable UDP payload
> > size. Saying "packet length" below I mean: Eth header length (14 bytes) +
> > IPv4 header length (20 bytes) + UPD header length (8 bytes) + UDP payload
> > length (variable) + CRC (4 bytes). Source IP addresses and ports are
> selected
> > randomly for each packet.
> >
> > I have used DPDK with revisions 1.6.0r2 and 1.7.1. Both show the same
> issue.
> >
> > Follow "Quick start" guide (http://dpdk.org/doc/quick-start) to build
> and
> > run testpmd. Enable testpmd forwarding ("start" command).
> >
> > Table below shows measured forwarding performance depending on packet
> > length:
> >
> > No. -- UDP payload length (bytes) -- Packet length (bytes) -- Forwarding
> > performance (Mpps) -- Expected theoretical performance (Mpps)
>
> Did you try using git bisect to identify the problem.
>

I believe dpdk-1.6.0r2 is the first release with bypass adapter (device id
155d) support, and it already has the issue. So it seems I have no "good"
point to bisect from.

Alexander


[dpdk-dev] DPDK testpmd forwarding performace degradation

2015-01-27 Thread Alexander Belyakov
Hello,

On Tue, Jan 27, 2015 at 5:49 AM, ???  wrote:

> 65 bytes frame may degrade performace a lot.Thats related to DMA and cache.
> When NIC dma packets to memory, NIC has to do read modify write if DMA
> size is partial cache line.So for 65 bytes, the first 64 bytes are ok. The
> next 1 byte NIC has to read the whole cache line, change one byte and
> update the cache line.
> So in DPDK, CRC is not stripped and ethernet header aligned to cache line
> which causes ip header not aligned on 4 bytes.
>
>
The extra cache line update indeed makes sense, because performance is halved
by a single extra byte.

It is a little bit confusing, though: the issue is not with switching from
64-byte frames to 65-byte frames, but with switching from 80-byte frames to
81-byte frames. Note that the issue disappears at a 96-byte frame size.

Alexander


[dpdk-dev] DPDK testpmd forwarding performace degradation

2015-01-27 Thread Alexander Belyakov
On Tue, Jan 27, 2015 at 10:51 AM, Alexander Belyakov 
wrote:

>
> Hi Pablo,
>
> On Mon, Jan 26, 2015 at 5:22 PM, De Lara Guarch, Pablo <
> pablo.de.lara.guarch at intel.com> wrote:
>
>> Hi Alexander,
>>
>> > -Original Message-
>> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Alexander Belyakov
>> > Sent: Monday, January 26, 2015 10:18 AM
>> > To: dev at dpdk.org
>> > Subject: [dpdk-dev] DPDK testpmd forwarding performace degradation
>> >
>> > Hello,
>> >
>> > recently I have found a case of significant performance degradation for
>> our
>> > application (built on top of DPDK, of course). Surprisingly, similar
>> issue
>> > is easily reproduced with default testpmd.
>> >
>> > To show the case we need simple IPv4 UDP flood with variable UDP payload
>> > size. Saying "packet length" below I mean: Eth header length (14 bytes)
>> +
>> > IPv4 header length (20 bytes) + UPD header length (8 bytes) + UDP
>> payload
>> > length (variable) + CRC (4 bytes). Source IP addresses and ports are
>> selected
>> > randomly for each packet.
>> >
>> > I have used DPDK with revisions 1.6.0r2 and 1.7.1. Both show the same
>> issue.
>> >
>> > Follow "Quick start" guide (http://dpdk.org/doc/quick-start) to build
>> and
>> > run testpmd. Enable testpmd forwarding ("start" command).
>> >
>> > Table below shows measured forwarding performance depending on packet
>> > length:
>> >
>> > No. -- UDP payload length (bytes) -- Packet length (bytes) -- Forwarding
>> > performance (Mpps) -- Expected theoretical performance (Mpps)
>> >
>> > 1. 0 -- 64 -- 14.8 -- 14.88
>> > 2. 34 -- 80 -- 12.4 -- 12.5
>> > 3. 35 -- 81 -- 6.2 -- 12.38 (!)
>> > 4. 40 -- 86 -- 6.6 -- 11.79
>> > 5. 49 -- 95 -- 7.6 -- 10.87
>> > 6. 50 -- 96 -- 10.7 -- 10.78 (!)
>> > 7. 60 -- 106 -- 9.4 -- 9.92
>> >
>> > At line number 3 we have added 1 byte of UDP payload (comparing to
>> > previous
>> > line) and got forwarding performance halved! 6.2 Mpps against 12.38 Mpps
>> > of
>> > expected theoretical maximum for this packet size.
>> >
>> > That is the issue.
>> >
>> > Significant performance degradation exists up to 50 bytes of UDP payload
>> > (96 bytes packet length), where it jumps back to theoretical maximum.
>> >
>> > What is happening between 80 and 96 bytes packet length?
>> >
>> > This issue is stable and 100% reproducible. At this point I am not sure
>> if
>> > it is DPDK or NIC issue. These tests have been performed on Intel(R) Eth
>> > Svr Bypass Adapter X520-LR2 (X520LR2BP).
>> >
>> > Is anyone aware of such strange behavior?
>>
>> I cannot reproduce the issue using two ports on two different 82599EB
>> NICs, using 1.7.1 and 1.8.0.
>> I always get either same or better linerate as I increase the packet size.
>>
>
> Thank you for trying to reproduce the issue.
>
>
>> Actually, have you tried using 1.8.0?
>>
>
> I feel 1.8.0 is little bit immature and might require some post-release
> patching. Even tespmd from this release is not forwarding packets properly
> on my setup. It is up and running without visible errors/warnings, TX/RX
> counters are ticking but I can not see any packets at the output. Please
> note, both 1.6.0r2 and 1.7.1 releases work (on the same setup)
> out-of-the-box just fine with only exception of this mysterious performance
> drop.
>
> So it will take some time to figure out what is wrong with dpdk-1.8.0.
> Meanwhile we could focus on stable dpdk-1.7.1.
>
>
Managed to get testpmd from dpdk-1.8.0 to work on my setup. Unfortunately I
had to disable RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC; it is new compared to
1.7.1 and somehow breaks testpmd forwarding. By the way, simply disabling
RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC in the common_linuxapp config file breaks
the build - I had to make a quick'n'dirty fix in struct igb_rx_queue as well.
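
For anyone trying to reproduce this, the config change itself was just the
following line in config/common_linuxapp of the 1.8.0 tree (the additional
quick'n'dirty struct igb_rx_queue tweak is not shown, since it was only a
local hack to make the build pass):

    CONFIG_RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC=n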

Anyway, the issue is still here.

Forwarding 80-byte packets at 12.4 Mpps.
Forwarding 81-byte packets at 7.2 Mpps.

Any ideas?

> As for X520-LR2 NIC - it is dual port bypass adapter with device id 155d. I
> believe it should be treated as 82599EB except bypass feature. I put bypass
> mode to "normal" in those tests.
>
> Alexander
>
>
>>
>> Pablo
>> >
>> > Regards,
>> > Alexander Belyakov
>>
>
>
>


[dpdk-dev] DPDK testpmd forwarding performace degradation

2015-01-28 Thread Alexander Belyakov
On Tue, Jan 27, 2015 at 7:21 PM, De Lara Guarch, Pablo <
pablo.de.lara.guarch at intel.com> wrote:

>
>
> > On Tue, Jan 27, 2015 at 10:51 AM, Alexander Belyakov
>
> >  wrote:
>
> >
>
> > Hi Pablo,
>
> >
>
> > On Mon, Jan 26, 2015 at 5:22 PM, De Lara Guarch, Pablo
>
> >  wrote:
>
> > Hi Alexander,
>
> >
>
> > > -Original Message-
>
> > > From: dev [mailto:dev-bounces at dpdk.org ] On
> Behalf Of Alexander
>
> > Belyakov
>
> > > Sent: Monday, January 26, 2015 10:18 AM
>
> > > To: dev at dpdk.org
>
> > > Subject: [dpdk-dev] DPDK testpmd forwarding performace degradation
>
> > >
>
> > > Hello,
>
> > >
>
> > > recently I have found a case of significant performance degradation
> for our
>
> > > application (built on top of DPDK, of course). Surprisingly, similar
> issue
>
> > > is easily reproduced with default testpmd.
>
> > >
>
> > > To show the case we need simple IPv4 UDP flood with variable UDP
>
> > payload
>
> > > size. Saying "packet length" below I mean: Eth header length (14
> bytes) +
>
> > > IPv4 header length (20 bytes) + UPD header length (8 bytes) + UDP
> payload
>
> > > length (variable) + CRC (4 bytes). Source IP addresses and ports are
>
> > selected
>
> > > randomly for each packet.
>
> > >
>
> > > I have used DPDK with revisions 1.6.0r2 and 1.7.1. Both show the same
>
> > issue.
>
> > >
>
> > > Follow "Quick start" guide (http://dpdk.org/doc/quick-start) to build
> and
>
> > > run testpmd. Enable testpmd forwarding ("start" command).
>
> > >
>
> > > Table below shows measured forwarding performance depending on
>
> > packet
>
> > > length:
>
> > >
>
> > > No. -- UDP payload length (bytes) -- Packet length (bytes) --
> Forwarding
>
> > > performance (Mpps) -- Expected theoretical performance (Mpps)
>
> > >
>
> > > 1. 0 -- 64 -- 14.8 -- 14.88
>
> > > 2. 34 -- 80 -- 12.4 -- 12.5
>
> > > 3. 35 -- 81 -- 6.2 -- 12.38 (!)
>
> > > 4. 40 -- 86 -- 6.6 -- 11.79
>
> > > 5. 49 -- 95 -- 7.6 -- 10.87
>
> > > 6. 50 -- 96 -- 10.7 -- 10.78 (!)
>
> > > 7. 60 -- 106 -- 9.4 -- 9.92
>
> > >
>
> > > At line number 3 we have added 1 byte of UDP payload (comparing to
>
> > > previous
>
> > > line) and got forwarding performance halved! 6.2 Mpps against 12.38
> Mpps
>
> > > of
>
> > > expected theoretical maximum for this packet size.
>
> > >
>
> > > That is the issue.
>
> > >
>
> > > Significant performance degradation exists up to 50 bytes of UDP
> payload
>
> > > (96 bytes packet length), where it jumps back to theoretical maximum.
>
> > >
>
> > > What is happening between 80 and 96 bytes packet length?
>
> > >
>
> > > This issue is stable and 100% reproducible. At this point I am not
> sure if
>
> > > it is DPDK or NIC issue. These tests have been performed on Intel(R)
> Eth
>
> > > Svr Bypass Adapter X520-LR2 (X520LR2BP).
>
> > >
>
> > > Is anyone aware of such strange behavior?
>
> > I cannot reproduce the issue using two ports on two different 82599EB
> NICs,
>
> > using 1.7.1 and 1.8.0.
>
> > I always get either same or better linerate as I increase the packet
> size.
>
> >
>
> > Thank you for trying to reproduce the issue.
>
> >
>
> > Actually, have you tried using 1.8.0?
>
> >
>
> > I feel 1.8.0 is little bit immature and might require some post-release
>
> > patching. Even tespmd from this release is not forwarding packets
> properly
>
> > on my setup. It is up and running without visible errors/warnings, TX/RX
>
> > counters are ticking but I can not see any packets at the output.
>
>
>
> This is strange. Without  changing anything, forwarding works perfectly
> for me
>
> (so, RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC is enabled).
>
>
>
> >Please note, both 1.6.0r2 and 1.7.1 releases work (on the same setup)
> out-of-the-box just
>
> > fine with only exception of this mysterious performance drop.
>
> > So it will take some time to figure out what is wrong with dpdk-1.8.0.
>
> > Meanwhile we could focus on stable dpdk-1.7.1.
>
> >
>
> > Managed to get testpmd 

[dpdk-dev] DPDK testpmd forwarding performace degradation

2015-01-29 Thread Alexander Belyakov
On Wed, Jan 28, 2015 at 3:24 PM, Alexander Belyakov 
wrote:

>
>
> On Tue, Jan 27, 2015 at 7:21 PM, De Lara Guarch, Pablo <
> pablo.de.lara.guarch at intel.com> wrote:
>
>>
>>
>> > On Tue, Jan 27, 2015 at 10:51 AM, Alexander Belyakov
>>
>> >  wrote:
>>
>> >
>>
>> > Hi Pablo,
>>
>> >
>>
>> > On Mon, Jan 26, 2015 at 5:22 PM, De Lara Guarch, Pablo
>>
>> >  wrote:
>>
>> > Hi Alexander,
>>
>> >
>>
>> > > -Original Message-
>>
>> > > From: dev [mailto:dev-bounces at dpdk.org ] On
>> Behalf Of Alexander
>>
>> > Belyakov
>>
>> > > Sent: Monday, January 26, 2015 10:18 AM
>>
>> > > To: dev at dpdk.org
>>
>> > > Subject: [dpdk-dev] DPDK testpmd forwarding performace degradation
>>
>> > >
>>
>> > > Hello,
>>
>> > >
>>
>> > > recently I have found a case of significant performance degradation
>> for our
>>
>> > > application (built on top of DPDK, of course). Surprisingly, similar
>> issue
>>
>> > > is easily reproduced with default testpmd.
>>
>> > >
>>
>> > > To show the case we need simple IPv4 UDP flood with variable UDP
>>
>> > payload
>>
>> > > size. Saying "packet length" below I mean: Eth header length (14
>> bytes) +
>>
>> > > IPv4 header length (20 bytes) + UPD header length (8 bytes) + UDP
>> payload
>>
>> > > length (variable) + CRC (4 bytes). Source IP addresses and ports are
>>
>> > selected
>>
>> > > randomly for each packet.
>>
>> > >
>>
>> > > I have used DPDK with revisions 1.6.0r2 and 1.7.1. Both show the same
>>
>> > issue.
>>
>> > >
>>
>> > > Follow "Quick start" guide (http://dpdk.org/doc/quick-start) to
>> build and
>>
>> > > run testpmd. Enable testpmd forwarding ("start" command).
>>
>> > >
>>
>> > > Table below shows measured forwarding performance depending on
>>
>> > packet
>>
>> > > length:
>>
>> > >
>>
>> > > No. -- UDP payload length (bytes) -- Packet length (bytes) --
>> Forwarding
>>
>> > > performance (Mpps) -- Expected theoretical performance (Mpps)
>>
>> > >
>>
>> > > 1. 0 -- 64 -- 14.8 -- 14.88
>>
>> > > 2. 34 -- 80 -- 12.4 -- 12.5
>>
>> > > 3. 35 -- 81 -- 6.2 -- 12.38 (!)
>>
>> > > 4. 40 -- 86 -- 6.6 -- 11.79
>>
>> > > 5. 49 -- 95 -- 7.6 -- 10.87
>>
>> > > 6. 50 -- 96 -- 10.7 -- 10.78 (!)
>>
>> > > 7. 60 -- 106 -- 9.4 -- 9.92
>>
>> > >
>>
>> > > At line number 3 we have added 1 byte of UDP payload (comparing to
>>
>> > > previous
>>
>> > > line) and got forwarding performance halved! 6.2 Mpps against 12.38
>> Mpps
>>
>> > > of
>>
>> > > expected theoretical maximum for this packet size.
>>
>> > >
>>
>> > > That is the issue.
>>
>> > >
>>
>> > > Significant performance degradation exists up to 50 bytes of UDP
>> payload
>>
>> > > (96 bytes packet length), where it jumps back to theoretical maximum.
>>
>> > >
>>
>> > > What is happening between 80 and 96 bytes packet length?
>>
>> > >
>>
>> > > This issue is stable and 100% reproducible. At this point I am not
>> sure if
>>
>> > > it is DPDK or NIC issue. These tests have been performed on Intel(R)
>> Eth
>>
>> > > Svr Bypass Adapter X520-LR2 (X520LR2BP).
>>
>> > >
>>
>> > > Is anyone aware of such strange behavior?
>>
>> > I cannot reproduce the issue using two ports on two different 82599EB
>> NICs,
>>
>> > using 1.7.1 and 1.8.0.
>>
>> > I always get either same or better linerate as I increase the packet
>> size.
>>
>> >
>>
>> > Thank you for trying to reproduce the issue.
>>
>> >
>>
>> > Actually, have you tried using 1.8.0?
>>
>> >
>>
>> > I feel 1.8.0 is little bit immature and might require some post-release
>>
>> > patching. Even tespmd from this release is

[dpdk-dev] Intel bypass device 0x155C support?

2015-07-02 Thread Alexander Belyakov
Hello,

I have got my hands on an Intel(R) Ethernet Server Bypass Adapter X540-T2
(device_id 0x155C, copper). Unfortunately the only bypass device currently
supported by DPDK is 0x155D (fiber).

Does anyone know about the plans of adding 0x155C support?
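
If it is only a matter of registering the PCI id, I would expect the change
to look roughly like the sketch below (from memory only - the exact file, the
macro name IXGBE_DEV_ID_X540_BYPASS, and whether the bypass code paths work
unchanged for the X540 are all assumptions, not verified):

    /* lib/librte_eal/common/include/rte_pci_dev_ids.h */
    #define IXGBE_DEV_ID_X540_BYPASS        0x155C
    RTE_PCI_DEV_ID_DECL_IXGBE(PCI_VENDOR_ID_INTEL, IXGBE_DEV_ID_X540_BYPASS)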

Thank you,
Alexander Belyakov