Hi Danny, 

Please see below

Best Regards,
Olga

-----Original Message-----
From: Zhou, Danny [mailto:danny.z...@intel.com] 
Sent: Monday, April 13, 2015 2:30 AM
To: Olga Shern; Raghav Sethi; dev at dpdk.org
Subject: RE: [dpdk-dev] Mellanox Flow Steering

Thanks for the clarification, Olga. I assume that once the PMD is upgraded to
support flow director, the rules should be set only by the PMD while the DPDK
application is running, right?
[Olga ] Right
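For illustration only (this is the generic DPDK filter API that the flow
director work would plug into, not something our PMD supports today; the port
id and match values below are hypothetical, and MAC-based matching would
depend on what the PMD ends up exposing), a rule would look roughly like:

#include <string.h>
#include <rte_ethdev.h>
#include <rte_eth_ctrl.h>
#include <rte_byteorder.h>

/* Sketch: steer IPv4/UDP packets with dst port 9000 to RX queue 1
 * through the generic filter API. Values are illustrative. */
static int add_fdir_rule(uint8_t port_id)
{
    struct rte_eth_fdir_filter filter;

    memset(&filter, 0, sizeof(filter));
    filter.soft_id = 1;
    filter.input.flow_type = RTE_ETH_FLOW_NONFRAG_IPV4_UDP;
    filter.input.flow.udp4_flow.dst_port = rte_cpu_to_be_16(9000);
    filter.action.rx_queue = 1;
    filter.action.behavior = RTE_ETH_FDIR_ACCEPT;
    filter.action.report_status = RTE_ETH_FDIR_NO_REPORT_STATUS;

    return rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_FDIR,
                                   RTE_ETH_FILTER_ADD, &filter);
}
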
Also, when the DPDK application exits, the rules previously written by the PMD
become invalid, so the user needs to reset the rules with ethtool via the
mlx4_en driver.
[Olga ] Right

I think it does not make sense to allow two drivers, one in the kernel and
another in user space, to control the same NIC device simultaneously.
Otherwise, a control-plane synchronization mechanism is needed between the
two drivers.
[Olga ] Agree :) We are looking for a solution 

A master driver solely responsible for NIC control is expected.
[Olga ] Or there should be a synchronization mechanism, as you mentioned before 

> -----Original Message-----
> From: Olga Shern [mailto:olgas at mellanox.com]
> Sent: Monday, April 13, 2015 4:39 AM
> To: Raghav Sethi; Zhou, Danny; dev at dpdk.org
> Subject: RE: [dpdk-dev] Mellanox Flow Steering
> 
> Hi Raghav,
> 
> You are right with your observations: the Mellanox PMD and the mlx4_en
> (kernel driver) co-exist.
> When a DPDK application runs, all traffic is redirected to the DPDK
> application. When the DPDK application exits, the traffic is received by the
> mlx4_en driver.
> 
> Regarding the ethtool configuration you did: it influences only the mlx4_en
> driver; it doesn't affect the Mellanox PMD queues.
> 
> The Mellanox PMD doesn't support Flow Director, as you mention, and we are
> working to add it.
> Currently the only way to spread traffic across different PMD queues is
> RSS.
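> 
> As a rough sketch (the port id, queue counts and hash fields below are
> examples only), RSS is enabled through the device configuration:
> 
> #include <string.h>
> #include <rte_ethdev.h>
> 
> /* Sketch: spread RX traffic across 4 queues with RSS. */
> static int configure_rss(uint8_t port_id)
> {
>     struct rte_eth_conf conf;
> 
>     memset(&conf, 0, sizeof(conf));
>     conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
>     conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP | ETH_RSS_UDP;
> 
>     /* 4 RX queues, 1 TX queue */
>     return rte_eth_dev_configure(port_id, 4, 1, &conf);
> }
> 
> Note that RSS hashes on IP/UDP fields, so packets that differ only in the
> destination MAC will still land on a single queue.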
> 
> Best Regards,
> Olga
> 
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Raghav Sethi
> Sent: Sunday, April 12, 2015 7:18 PM
> To: Zhou, Danny; dev at dpdk.org
> Subject: Re: [dpdk-dev] Mellanox Flow Steering
> 
> Hi Danny,
> 
> Thanks, that's helpful. However, Mellanox cards don't support Intel
> Flow Director, so how would one go about installing these rules in the
> NIC? The only technique the Mellanox User Manual
> (http://www.mellanox.com/related-docs/prod_software/Mellanox_EN_for_Linux_User_Manual_v2_0-3_0_0.pdf)
> lists for using Flow Steering is the ethtool-based method.
> 
> Additionally, the mlx4_core driver is used both by the DPDK PMD and
> otherwise (unlike the igb_uio driver, which needs to be loaded to use a
> PMD), and it seems weird that only the packets affected by the rules fail
> to reach the DPDK application. That indicates to me that the NIC is acting
> on the rules somehow even though a DPDK application is running.
> 
> Best,
> Raghav
> 
> On Sun, Apr 12, 2015 at 7:47 AM Zhou, Danny <danny.zhou at intel.com> wrote:
> 
> > Currently, the DPDK PMD and the NIC kernel driver cannot drive the
> > same NIC device simultaneously. When you use ethtool to set up flow
> > director filters, the rules are written to the NIC via the ethtool
> > support in the kernel driver. But when the DPDK PMD is loaded to drive
> > the same device, the rules previously written via ethtool/kernel_driver
> > become invalid, so you may have to use the DPDK APIs to rewrite your
> > rules to the NIC.
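> >
> > A quick sanity check, sketched below, is to probe whether the bound PMD
> > implements a given filter type before trying to program rules through it
> > (illustrative only):
> >
> > #include <rte_ethdev.h>
> > #include <rte_eth_ctrl.h>
> >
> > /* Sketch: returns 0 if the PMD behind port_id supports flow director. */
> > static int fdir_supported(uint8_t port_id)
> > {
> >     return rte_eth_dev_filter_supported(port_id, RTE_ETH_FILTER_FDIR);
> > }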
> >
> > The bifurcated driver is designed to provide a solution for scenarios
> > where the kernel driver and DPDK coexist, but it has security concerns,
> > so the netdev maintainers rejected it.
> >
> > It should not be a Mellanox hardware problem; if you try it on an
> > Intel NIC, the result is the same.
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Raghav Sethi
> > > Sent: Sunday, April 12, 2015 1:10 PM
> > > To: dev at dpdk.org
> > > Subject: [dpdk-dev] Mellanox Flow Steering
> > >
> > > Hi folks,
> > >
> > > I'm trying to use the flow steering features of the Mellanox card 
> > > to effectively use a multicore server for a benchmark.
> > >
> > > The system has a single-port Mellanox ConnectX-3 EN, and I want to use
> > > 4 of the 32 cores present and 4 of the 16 RX queues supported by the
> > > hardware (i.e. one RX queue per core).
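> > >
> > > (For reference, the queue setup does roughly the following; the
> > > descriptor count and mbuf pool are placeholders, and TX setup and
> > > device start are omitted.)
> > >
> > > #include <string.h>
> > > #include <rte_ethdev.h>
> > > #include <rte_mempool.h>
> > >
> > > #define NB_RX_QUEUES 4
> > >
> > > /* Sketch: one RX queue per core; 'pool' is an existing mbuf mempool. */
> > > static int setup_rx_queues(uint8_t port_id, struct rte_mempool *pool)
> > > {
> > >     struct rte_eth_conf conf;
> > >     uint16_t q;
> > >
> > >     memset(&conf, 0, sizeof(conf));
> > >     if (rte_eth_dev_configure(port_id, NB_RX_QUEUES, 1, &conf) < 0)
> > >         return -1;
> > >
> > >     for (q = 0; q < NB_RX_QUEUES; q++)
> > >         if (rte_eth_rx_queue_setup(port_id, q, 128,
> > >                                    rte_eth_dev_socket_id(port_id),
> > >                                    NULL, pool) < 0)
> > >             return -1;
> > >     return 0;
> > > }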
> > >
> > > I assign RX queues to each of the cores, but obviously without flow
> > > steering (all the packets have the same IP and UDP headers, but
> > > different dest MACs in the ethernet headers) all of the packets hit one
> > > core. I've set up the client such that it sends packets with a
> > > different destination MAC for each RX queue (e.g. RX queue 1 should get
> > > 10:00:00:00:00:00, RX queue 2 should get 10:00:00:00:00:01, and so on).
> > >
> > > I try to accomplish this by using ethtool to set flow steering rules
> > > (e.g. ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:00 action 1
> > > loc 1, ethtool -U p7p1 flow-type ether dst 10:00:00:00:00:01 action 2
> > > loc 2, ...).
> > >
> > > As soon as I set up these rules though, packets matching them just
> > > stop hitting my application. All other packets go through, and removing
> > > the rules also causes the packets to go through. I'm pretty sure my
> > > application is looking at all the queues, but I tried changing the
> > > rules to try a rule for every single destination RX queue (0-16), and
> > > that doesn't work either.
> > >
> > > If it helps, my code is based on the l2fwd sample application, and is
> > > here: https://gist.github.com/raghavsethi/416fb77d74ccf81bd93e
> > >
> > > Also, I added the following to my /etc/init.d: options mlx4_core 
> > > log_num_mgm_entry_size=-1, and restarted the driver before any of 
> > > these tests.
> > >
> > > Any ideas what might be causing my packets to drop? In case this 
> > > is a Mellanox issue, should I be talking to their customer support?
> > >
> > > Best,
> > > Raghav Sethi
> >
