[dpdk-dev] Intel I350 fails to work with DPDK

2015-07-26 Thread he peng
Hi, Sabu and Bruce:
 I saw your post in the mailing list about the I350 failing to send packets; 
however, it was posted about a year ago. 

 Now we have encountered the same issue. 
 We are building a forwarding device that forwards packets between two 
I350 ports, and we observe that the program transmits a few hundred packets 
and then the I350 seems to freeze: it fails to send any further packets. 
Sometimes one port fails, and sometimes both ports fail to send anything.

 After some code investigation, we found that the program fails to send 
packets because one packet descriptor's DD bit is never set by the hardware 
DMA, so the driver concludes that the TX ring is full and drops all subsequent 
packets. Below is the code (eth_igb_xmit_pkts in igb_rxtx.c) where 
rte_eth_tx_burst returns:


/* Descriptor write-back: DD not set means the ring is still full. */
if (!(txr[tx_end].wb.status & E1000_TXD_STAT_DD)) {
        if (nb_tx == 0)
                return 0;
        goto end_of_tx;
}


We have checked the corresponding sw_ring[tx_end]->mbuf, and the packet 
content looks fine: it is a normal 64-byte packet. Our code is quite simple, 
just adding/removing tunnel tags in the packets; the total length of the 
packet tags is 28 bytes. Maybe there is some alignment requirement on the 
memory address where the packet content begins? I do not know. Below is the 
output of l2fwd:

/home/dpdk-1.8.0/examples/l2fwd/build/l2fwd -c 0x6 -n 2 -- -p 0x6

Port statistics ====================================
Statistics for port 1 ------------------------------
Packets sent:               13585277499
Packets received:            6792638878
Packets dropped:                      0
Statistics for port 2 ------------------------------
Packets sent:                       649
Packets received:           13585277549
Packets dropped:             6792638229
Aggregate statistics ===============================
Total packets sent:         13585278180
Total packets received:     20377916457
Total packets dropped:       6792638229


 After the card freezes, we run l2fwd and find that it hits the same 
issue: the program forwards only around 600 packets, then begins to drop all 
the other packets. We now suspect a problem with the network card itself, but 
we are not sure whether the hardware has simply been put into a bad state 
after so many rounds of restarting the programs and testing.

 Any help is appreciated! Thanks. 




[dpdk-dev] Intel I350 fails to work with DPDK

2014-05-28 Thread sabu kurian
Hi Bruce,

Thanks for the reply.

I even tried that before. A burst size of 64 or 128 simply fails: the
card sends out a few packets (some 400 packets of 74-byte size) and then
freezes. For my application, I'm trying to generate the peak traffic
possible with the link speed and the NIC.



On Wed, May 28, 2014 at 4:16 PM, Richardson, Bruce <bruce.richardson at intel.com> wrote:

> > -Original Message-
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of sabu kurian
> > Sent: Wednesday, May 28, 2014 10:42 AM
> > To: dev at dpdk.org
> > Subject: [dpdk-dev] Intel I350 fails to work with DPDK
> >
> > I have asked a similar question before, though no one replied.
> >
> > I'm crafting my own packets in mbufs (74-byte packets each) and sending
> > them using
> >
> > ret = rte_eth_tx_burst(port_ids[lcore_id], 0, m_pool, burst_size);
> >
> > When burst_size is 1, it does work, in the sense that the NIC keeps
> > sending packets at a little over 50 percent of the link rate: for a
> > 1000 Mbps link, the observed transmit rate is 580 Mbps (using Intel
> > DPDK). But it should be possible to achieve at least 900 Mbps with
> > Intel DPDK and an I350 on a 1 Gbps link.
> >
> > Could someone help me out with this?
> >
> > Thanks and regards
>
> Sending out a single packet at a time is going to have a very high
> overhead, as each call to tx_burst involves making PCI transactions (MMIO
> writes to the hardware ring pointer). To reduce this penalty you should
> look to send out the packets in bursts, thereby saving PCI bandwidth and
> splitting the cost of each MMIO write over multiple packets.
>
> Regards,
> /Bruce
>


[dpdk-dev] Intel I350 fails to work with DPDK

2014-05-28 Thread sabu kurian
I have asked a similar question before, though no one replied.

I'm crafting my own packets in mbufs (74-byte packets each) and sending
them using

ret = rte_eth_tx_burst(port_ids[lcore_id], 0, m_pool, burst_size);

When burst_size is 1, it does work, in the sense that the NIC keeps
sending packets at a little over 50 percent of the link rate: for a
1000 Mbps link, the observed transmit rate is 580 Mbps (using Intel
DPDK). But it should be possible to achieve at least 900 Mbps with
Intel DPDK and an I350 on a 1 Gbps link.

Could someone help me out with this?

Thanks and regards


[dpdk-dev] Intel I350 fails to work with DPDK

2014-05-28 Thread Richardson, Bruce

> From: sabu kurian [mailto:sabu2kurian at gmail.com] 
> Sent: Wednesday, May 28, 2014 11:54 AM
> To: Richardson, Bruce
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] Intel I350 fails to work with DPDK
>
> Hi Bruce,
> Thanks for the reply.
> I even tried that before. A burst size of 64 or 128 simply fails: the
> card sends out a few packets (some 400 packets of 74-byte size) and
> then freezes. For my application, I'm trying to generate the peak
> traffic possible with the link speed and the NIC.

Bursts of 64 and 128 are rather large; can you perhaps try using bursts of 16 
and 32 and see what the result is? The drivers are generally tuned for a 
maximum burst size of about 32 packets.



[dpdk-dev] Intel I350 fails to work with DPDK

2014-05-28 Thread Richardson, Bruce
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of sabu kurian
> Sent: Wednesday, May 28, 2014 10:42 AM
> To: dev at dpdk.org
> Subject: [dpdk-dev] Intel I350 fails to work with DPDK
> 
> I have asked a similar question before, though no one replied.
> 
> I'm crafting my own packets in mbufs (74-byte packets each) and sending
> them using
> 
> ret = rte_eth_tx_burst(port_ids[lcore_id], 0, m_pool, burst_size);
> 
> When burst_size is 1, it does work, in the sense that the NIC keeps
> sending packets at a little over 50 percent of the link rate: for a
> 1000 Mbps link, the observed transmit rate is 580 Mbps (using Intel
> DPDK). But it should be possible to achieve at least 900 Mbps with
> Intel DPDK and an I350 on a 1 Gbps link.
> 
> Could someone help me out with this?
> 
> Thanks and regards

Sending out a single packet at a time is going to have a very high overhead, as 
each call to tx_burst involves making PCI transactions (MMIO writes to the 
hardware ring pointer). To reduce this penalty you should look to send out the 
packets in bursts, thereby saving PCI bandwidth and splitting the cost of each 
MMIO write over multiple packets.

Regards,
/Bruce