Re: [dpdk-users] [dpdk-dev] TX unable to enqueue packets to NIC due to no free TX descriptor

2019-04-14 Thread Xiao, Xiaohong (NSB - CN/Shanghai)
Hello
We hit a similar issue with DPDK 17.11 + i40e: the TX queue appears full and 
hangs, and no packets can be sent out at all.
Has this issue been resolved, and if so, how? Thank you very much.

Regards
Nokia, Xiao Xiaohong

-Original Message-
From: users [mailto:users-boun...@dpdk.org] On Behalf Of Soni, Shivam
Sent: January 17, 2019 5:46
To: Stephen Hemminger 
Cc: d...@dpdk.org; users@dpdk.org; Uppal, Hardeep 
Subject: Re: [dpdk-users] [dpdk-dev] TX unable to enqueue packets to NIC due to 
no free TX descriptor

Digging further, I found some more data.

On the host where everything works fine, I can see 'txq->nb_tx_free' shrink 
from 1024 down to 31. Once it reaches 31, i40e_tx_free_bufs() gets called, 
which frees the buffers and brings nb_tx_free back up to 63.

Also, in i40e_tx_free_bufs(), the if condition below never evaluates to true: 
whatever the value of the index txq->tx_next_dd, the value of 
'cmd_type_offset_bsz' is always 15. Hence the if condition is always false and 
the code works fine.

    if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
         rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
        rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE)) {
        return 0;
    }
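
For context, here is a self-contained sketch of what that threshold-based free 
path does (the field names mirror the 17.11 i40e driver, but the structs are 
simplified stand-ins, the endianness conversion is omitted, and the actual 
mbuf-freeing loop is elided):

    #include <stdint.h>

    #define I40E_TXD_QW1_DTYPE_MASK      0xFULL
    #define I40E_TX_DESC_DTYPE_DESC_DONE 0xFULL

    struct tx_desc  { uint64_t cmd_type_offset_bsz; };
    struct tx_queue {
        struct tx_desc tx_ring[1024];
        uint16_t nb_tx_free;   /* 31 when the free path is attempted */
        uint16_t tx_next_dd;   /* descriptor that carries the DD bit */
        uint16_t tx_rs_thresh; /* 32 by default */
    };

    /* Mirrors i40e_tx_free_bufs(): mbufs are reclaimed a whole
     * threshold's worth at a time, and only after the NIC has written
     * DESC_DONE back into the descriptor at tx_next_dd. */
    static int tx_free_bufs_sketch(struct tx_queue *txq)
    {
        if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
             I40E_TXD_QW1_DTYPE_MASK) != I40E_TX_DESC_DTYPE_DESC_DONE)
            return 0; /* batch still in flight: nothing is freed */

        /* ... the real driver returns the 32 mbufs to their pool here ... */
        txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
        txq->tx_next_dd = (uint16_t)((txq->tx_next_dd + txq->tx_rs_thresh) &
                                     1023); /* wrap on the 1024-entry ring */
        return txq->tx_rs_thresh;
    }

This also explains the 31 -> 63 jump above: the check passes once and 32 
buffers are freed in one go.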

However, on the hosts where we see the issue, after a few calls to 
i40e_tx_free_bufs() the value of 
'txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz' becomes really weird, e.g. 
1099511627888 or 1030792151152. With these values the if condition becomes 
true ((1099511627888 & 15) != 15), so the function returns right there, 
nb_tx_free never increases, and it eventually reaches '0'.

Are these values expected, or is there some memory corruption happening 
somewhere in our code?

As far as I can tell, the purpose of this if condition is to check whether the 
buffers to be freed are still being transmitted.
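
For reference, the low four bits of cmd_type_offset_bsz are the DTYPE field 
that the check masks out, so the quoted values can be decoded directly (a 
standalone snippet; the two macro values are as in the 17.11 i40e headers):

    #include <stdint.h>
    #include <stdio.h>

    #define I40E_TXD_QW1_DTYPE_MASK      0xFULL /* DTYPE = bits [3:0] */
    #define I40E_TX_DESC_DTYPE_DESC_DONE 0xFULL /* write-back: TX done */

    int main(void)
    {
        uint64_t qw1[] = { 15ULL, 1099511627888ULL, 1030792151152ULL };

        for (int i = 0; i < 3; i++) {
            uint64_t dtype = qw1[i] & I40E_TXD_QW1_DTYPE_MASK;
            printf("0x%016llx -> DTYPE %llu (%s)\n",
                   (unsigned long long)qw1[i],
                   (unsigned long long)dtype,
                   dtype == I40E_TX_DESC_DTYPE_DESC_DONE ?
                       "done" : "not written back");
        }
        return 0;
    }

Both of the weird values decode to DTYPE 0, i.e. a plain data descriptor that 
the NIC never wrote back as done. If the dumps are accurate, that looks less 
like random corruption of those bits and more like a completion write-back 
that never arrived, though corruption elsewhere is of course still possible.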

Can someone help us out here?

On 1/14/19, 9:54 AM, "Soni, Shivam"  wrote:

I doubled the mempool size to 65535 but the issue is not resolved.

On 1/11/19, 4:27 PM, "dev on behalf of Soni, Shivam"  wrote:

Hi Stephen,

Thanks for the reply.

Our mbuf pool is big enough. We have 2 RX cores, 2 TX cores and 8 worker 
cores.
NRxd and NTxd are 1024 each, and we have 16 Rx rings (shared between the Rx 
cores and workers) and 8 Tx rings (between the Tx cores and workers).
The mempool cache size is 256 and the burst size is 32.

So the overall calculation comes out to:

    ((NIC_RX_QUEUE_SIZE * RX_LCORES) + (NIC_TX_QUEUE_SIZE * TX_LCORES) +
     (WORKER_RX_RING_SIZE * RX_LCORES * NAT_WORKER_LCORES) +
     (WORKER_TX_RING_SIZE * NAT_WORKER_LCORES) +
     ((MBUF_ARRAY_SIZE + CACHE_SIZE) *
      (RX_LCORES + TX_LCORES + NAT_WORKER_LCORES)))

With this, the mbuf pool size should be 32128. Rounding up to 2^15 - 1 (a 
power of two minus one, the optimum for an rte_mempool), we kept the mbuf pool 
size at 32767.
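
For what it's worth, plugging in the numbers quoted above reproduces the 32128 
figure, provided the worker rings are 1024 entries deep and MBUF_ARRAY_SIZE is 
one 32-packet burst (both of those are my assumptions, not values from the 
original mail):

    /* Assumed: worker ring sizes and MBUF_ARRAY_SIZE are guesses that
     * make the stated total come out. */
    #define NIC_RX_QUEUE_SIZE   1024
    #define NIC_TX_QUEUE_SIZE   1024
    #define WORKER_RX_RING_SIZE 1024 /* assumed */
    #define WORKER_TX_RING_SIZE 1024 /* assumed */
    #define RX_LCORES              2
    #define TX_LCORES              2
    #define NAT_WORKER_LCORES      8
    #define MBUF_ARRAY_SIZE       32 /* assumed: one burst */
    #define CACHE_SIZE           256

    /* (1024*2) + (1024*2) + (1024*2*8) + (1024*8) + ((32+256)*(2+2+8))
     *  = 2048 + 2048 + 16384 + 8192 + 3456
     *  = 32128 */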

Also, the incoming packet rate is pretty low.

For testing I have doubled the pool size for now. Not sure whether this 
will solve the issue.

Thanks.

On 1/11/19, 3:38 PM, "Stephen Hemminger"  
wrote:

On Fri, 11 Jan 2019 22:10:39 +
"Soni, Shivam"  wrote:

> Hi All,
> 
> We are trying to debug and fix an issue. After deployment, on a few of
> the hosts we see an issue where TX is unable to enqueue packets to the NIC.
> Bouncing or restarting our packet processor daemon resolves the issue.
> 
> We are using Intel DPDK version 17.11.4 and the i40e driver.
> 
> On looking into the driver's code, we found that whenever the issue
> happens, the value of nb_tx_free is '0'. The driver then tries to free
> the buffers by calling 'i40e_tx_free_bufs'.
> 
> This method returns early because the buffer it is trying to free says
> it hasn't finished transmitting yet. The method returns at this if
> condition:
> 
> /* check DD bits on threshold descriptor */
> if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
> rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
> rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE)) {
> return 0;
> }
> 
> Hence nb_tx_free remains 0.
> 
> Our tx descriptor count is 1024.
> 
> How can we fix this issue? Can someone help us out here, please?

Use a bigger mbuf pool. To be safe, the mbuf pool has to be big enough for

    Nports * (NRxd + NTxd) + NCores * (mbuf_pool_cache_size + burst_size)

Each NIC might get a full receive ring and a full transmit ring, and each 
active core can hold up to a pool cache's worth of mbufs plus one burst in 
flight.
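
Spelled out for this setup (the macro names here are mine, not from any DPDK 
header, and one port is assumed), the floor Stephen describes would be:

    /* Illustrative only. Mbufs parked in the application's own software
     * rings are not covered and must be added on top. */
    #define NPORTS        1 /* assumed */
    #define NRXD       1024
    #define NTXD       1024
    #define NCORES       12 /* 2 RX + 2 TX + 8 workers */
    #define POOL_CACHE  256
    #define BURST        32

    #define MIN_POOL_SIZE \
        (NPORTS * (NRXD + NTXD) + NCORES * (POOL_CACHE + BURST))
        /* = 2048 + 3456 = 5504 mbufs, before counting app rings */

Note this is a floor for the NIC rings and per-core caches alone; the 24 
worker rings in this application hold mbufs too, which is what the 32128 
calculation earlier in the thread accounts for.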

Re: [dpdk-users] Emulex XE-104 / be2net: is it DPDK capable?

2018-12-27 Thread Xiao, Xiaohong (NSB - CN/Shanghai)
Thanks
I knew about this list; the Emulex XE-104 and be2net are not on it.
I wonder whether the list manages to cover every DPDK-capable NIC.

From: Ramzah Rehman [mailto:ramzahreh...@gmail.com]
Sent: December 27, 2018 17:28
To: Xiao, Xiaohong (NSB - CN/Shanghai) 
Cc: users@dpdk.org
Subject: Re: [dpdk-users] Emulex XE-104 / be2net: is it DPDK capable?

here is the list of supported drivers: https://core.dpdk.org/supported/

Best Regards,
Ramzah Rehman


On Mon, Dec 24, 2018 at 7:17 AM Xiao, Xiaohong (NSB - CN/Shanghai) 
<xiaohong.x...@nokia-sbell.com> wrote:
Hello community,
Has anyone had experience using the "HPE FlexFabric 20Gb 2P 650FLB Adapter", 
where the Ethernet controller is the Emulex XE-104 and the driver name is 
be2net?
Do you know whether this card is DPDK capable? Thanks.

Regards
Xiao, Xiaohong


[dpdk-users] kni interface about X710

2018-01-18 Thread Xiao, Xiaohong (NSB - CN/Shanghai)
Hello
We are trying to create KNI interfaces on an X710 Ethernet interface, so the 
function kni_ioctl_create() is called. But we fall into the "else" branch 
below and get a random MAC address on the created KNI interface.
We also noticed that igb/ixgbe devices do not have this issue; they take the 
"if" branch below.
Is there a plan to develop a function like i40e_kni_probe() that provides the 
same functionality as the existing igb_kni_probe() / ixgbe_kni_probe()? Or can 
you suggest a way to set the real MAC address here?

Thanks a lot.
Xiao Xiaohong

    if (kni->lad_dev) {
        memcpy(net_dev->dev_addr, kni->lad_dev->dev_addr, ETH_ALEN);
    } else {
        /*
         * Generate a random MAC address. eth_random_addr() is the
         * newer way of generating a MAC address in the Linux kernel.
         */
        random_ether_addr(net_dev->dev_addr);
    }
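
Until the KNI kernel module learns the i40e layout, one userspace workaround 
(a sketch, untested on the X710; it uses standard DPDK 17.11 and Linux APIs, 
but treat it as an assumption rather than a verified fix) is to read the port 
MAC with rte_eth_macaddr_get() and push it onto the KNI netdev with a 
SIOCSIFHWADDR ioctl after rte_kni_alloc() returns:

    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <net/if_arp.h>
    #include <rte_ethdev.h>

    /* Copy the physical port's MAC onto the created KNI interface
     * (e.g. "vEth0"). Most drivers require the interface to still be
     * down when SIOCSIFHWADDR is issued. */
    static int kni_set_real_mac(uint16_t port_id, const char *kni_name)
    {
        struct ether_addr mac;
        struct ifreq ifr;
        int fd, ret;

        rte_eth_macaddr_get(port_id, &mac); /* MAC of the X710 port */

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, kni_name, IFNAMSIZ - 1);
        ifr.ifr_hwaddr.sa_family = ARPHRD_ETHER;
        memcpy(ifr.ifr_hwaddr.sa_data, mac.addr_bytes, ETHER_ADDR_LEN);

        fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0)
            return -1;
        ret = ioctl(fd, SIOCSIFHWADDR, &ifr); /* like `ip link set address` */
        close(fd);
        return ret;
    }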