On Tue, 11 Jun 2019 06:21:21 +0000
Matan Azrad <ma...@mellanox.com> wrote:

> Hi Stephen
> 
> From: Stephen Hemminger
> > When using DPDK on Azure it is common to have one non-DPDK interface.
> > If that non-DPDK interface is present, vdev_netvsc correctly skips it.
> > But if the non-DPDK interface has accelerated networking, the Mellanox
> > driver will still get associated with DPDK (and break connectivity).
> > 
> > The current process is to tell users to whitelist or blacklist the PCI
> > device(s) not used for DPDK. But vdev_netvsc is already doing a lot of
> > looking at devices and VF devices.
> > 
> > Could vdev_netvsc just do this automatically by setting devargs for the
> > VF to blacklist?
> 
> 
> There is a way to blacklist a device: set a route/IP/IPv6 address on it.
> From the VDEV_NETVSC doc:
> "Not specifying either iface or mac makes this driver attach itself to all
> unrouted NetVSC interfaces found on the system. Specifying the device makes
> this driver attach itself to the device regardless the device routes."
> 
> So, we expect that VFs in use by the kernel will have a route and the DPDK
> VFs will not.
> 
> Isn't that enough?
> 
> 
> Matan
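
For context, the two invocation forms the quoted documentation describes look
roughly like this at the EAL level. This is only an illustration; the interface
name below is a placeholder, not something from this thread:

/* Illustration only: the two ways of handing NetVSC interfaces to
 * vdev_netvsc that the quoted documentation describes.
 */
#include <rte_eal.h>

int
main(void)
{
        char *eal_args[] = {
                "app",
                /* No iface/mac given: attach to every NetVSC interface that
                 * has no route, so a routed interface such as eth0 is skipped.
                 */
                "--vdev", "net_vdev_netvsc0",
                /* Explicit form: attach to eth1 regardless of its routes:
                 * "--vdev", "net_vdev_netvsc0,iface=eth1",
                 */
        };
        int eal_argc = (int)(sizeof(eal_args) / sizeof(eal_args[0]));

        if (rte_eal_init(eal_argc, eal_args) < 0)
                return -1;
        /* ... application setup continues ... */
        return 0;
}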

I am talking about the case where eth0 has a route: eth0 itself gets skipped,
but the associated MLX SR-IOV device does not. When that MLX device is then
configured for DPDK, it can no longer be used by the kernel, and connectivity
with the VM is lost.
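
For reference, the manual workaround today is to pass the VF's PCI address to
EAL with the "-b" (blacklist) option. The suggestion above is that vdev_netvsc,
which already discovers the NetVSC/VF pairing, could register that blacklist
entry itself. Below is a rough application-level sketch of the mechanism only,
assuming the experimental rte_devargs_add() API and a placeholder PCI address:

/* Rough sketch only: register a PCI blacklist entry programmatically, the
 * equivalent of passing "-b <pci>" on the EAL command line. The address is a
 * placeholder for the VF backing the routed (non-DPDK) NetVSC interface, and
 * rte_devargs_add() is still marked experimental in current releases.
 */
#include <rte_devargs.h>
#include <rte_eal.h>

int
main(int argc, char **argv)
{
        if (rte_devargs_add(RTE_DEVTYPE_BLACKLISTED_PCI, "0002:00:02.0") != 0)
                return -1;

        if (rte_eal_init(argc, argv) < 0)
                return -1;

        /* The blacklisted VF is never probed by DPDK and stays bound to the
         * kernel, so connectivity through the routed interface is preserved.
         */
        return 0;
}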
