I dug deeper into the problem of vmxnet3-pmd not capturing packets. Whenever 
the PMD stops capturing, it does not respect the number of rxd. For example, I 
set rxd to 512. The pkt mbuf mempool I allocate holds 4482 mbufs, which is 
much bigger than rxd, so I would expect roughly 512 mbufs to be taken from the 
mempool at any moment in time. Just before it stops capturing new packets, the 
PMD uses far more than 512, to the point of exhausting all the mbufs in the 
mempool. After this point, the PMD stops capturing packets and never recovers; 
it never enters the while loop below. 

vmxnet3_rxtx.c:457
    while (rcd->gen == rxq->comp_ring.gen) {
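
For context, this is the generation-bit handshake that the test above 
implements. Here is a generic sketch of the pattern (hypothetical names and 
structure, not the actual vmxnet3 source): the device stamps each completed 
descriptor with the ring's current generation, and the driver flips its 
expected generation on every wrap. If the driver's expected gen ever gets out 
of step with what the device writes, this test fails forever and the loop is 
never entered.

    #include <stdint.h>

    /* Illustrative completion-ring consumer (hypothetical names,
     * not the actual vmxnet3 source). */
    struct comp_desc {
        uint32_t gen;            /* generation bit written by the device */
        /* ... completion payload ... */
    };

    struct comp_ring {
        struct comp_desc *base;  /* descriptor array */
        uint32_t size;           /* number of descriptors */
        uint32_t next2proc;      /* next descriptor to process */
        uint32_t gen;            /* generation the driver expects */
    };

    static void drain_completions(struct comp_ring *ring)
    {
        struct comp_desc *rcd = &ring->base[ring->next2proc];

        while (rcd->gen == ring->gen) { /* same test as line 457 above */
            /* ... hand the completed mbuf to the application ... */
            if (++ring->next2proc == ring->size) {
                ring->next2proc = 0;
                ring->gen ^= 1;         /* wrap: expect the other gen */
            }
            rcd = &ring->base[ring->next2proc];
        }
    }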

This issue occurs quite randomly, which leads me to believe there is some race 
condition. It occurs when packets arrive faster than the application can 
process them. 

I understand that the vmxnet3 PMD preallocates rxq descriptors during 
initialization. Yet even when it's receiving packets correctly, it holds 
slightly more mbufs than there are rxq descriptors. The free mbuf counts in 
the mempool at each stage are telling: 

Before vmxnet3 init:                     4482
After vmxnet3 init:                      3970
While receiving packets properly:        ranges between 3920 and 3970
Just before it stops receiving packets:  ranges between 0 and 3970
After the last packet is received:       4481
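
For what it's worth, the counts above come from sampling the mempool's free 
count at each stage; a minimal sketch of the sampling (assuming DPDK 1.x's 
rte_mempool_count(); the helper name is illustrative):

    #include <stdio.h>
    #include <rte_mempool.h>

    /* Print how many mbufs are currently free in the pool. The
     * difference from the total pool size (4482 here) is whatever the
     * PMD and the application are holding at that moment. */
    static void log_free_mbufs(const struct rte_mempool *mp,
                               const char *stage)
    {
        printf("%s: %u free mbufs\n", stage, rte_mempool_count(mp));
    }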

I'd appreciate any comments or help on this. Thanks. 

Dan


On Mar 11, 2014, at 12:55 PM, Daniel Kan <dan at nyansa.com> wrote:

> I'm unable to get RSS to work properly with vmxnet3-pmd. The first issue is 
> that the number of rxqs must be a power of 2. Otherwise, rte_eth_dev_start() 
> fails due to an inability to activate the vmxnet3 NIC. This is not too big 
> of a deal, but physical NICs don't have this requirement. 
> 
> The second issue is that RSS is just not working at all for me. The rxmode 
> is set to ETH_MQ_RX_RSS and rss_hf = ETH_RSS_IPV4_TCP | ETH_RSS_IPV4_UDP | 
> ETH_RSS_IPV4 | ETH_RSS_IPV6. The same configuration works on a real NIC. 
> When I check mb->pkt.hash, the value is always zero. 
> 
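> For reference, this is roughly the port configuration I'm using (a minimal 
> sketch with the rte_eth_conf fields described above; the default RSS key is 
> assumed):
> 
>     struct rte_eth_conf port_conf = {
>         .rxmode = {
>             .mq_mode = ETH_MQ_RX_RSS,   /* enable RSS */
>         },
>         .rx_adv_conf = {
>             .rss_conf = {
>                 .rss_key = NULL,        /* use the default RSS key */
>                 .rss_hf = ETH_RSS_IPV4_TCP | ETH_RSS_IPV4_UDP |
>                           ETH_RSS_IPV4 | ETH_RSS_IPV6,
>             },
>         },
>     };
> 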
> Even with RSS disabled, I found the performance of vmxnet3-pmd to be quite 
> poor, peaking at 600k pps with 64-byte packets, while libpcap can do 650k 
> pps. 
> 
> Lastly, there is a stability issue. On a number of occasions, vmxnet3-pmd 
> stops receiving packets after some random time and several million packets. 
> 
> I'm not sure if anyone else is having as many issues as I am; I will give 
> vmxnet3-usermap a try. 
> 
> Finally, does either vmxnet3-usermap or vmxnet3-pmd work well when the 
> underlying physical NIC is not Intel-based? 
> 
> Thanks. 
> 
> Dan
> 
> 
