On 17-01-11 06:51 AM, JOSHI, KAUSTUBH  (KAUSTUBH) wrote:
> Also, the kernel drivers have no concept of passing VF messages to upstream 
> "decision making" (or policy enforcement) software like VFd.
> 
> On Jan 11, 2017, at 9:49 AM, Kaustubh Joshi 
> <kaust...@research.att.com> wrote:
> 
> When Alex from our team started working on Niantic last year, the following 
> were the list of gaps in the kernel drivers we had a need to fill:
> 

Thanks for the list, it's nice to see concrete examples. I can
provide the latest upstream status for what it's worth.

> Direct traffic to VF based on more than one outer VLAN tags

The kernel supports this, but the ixgbe/i40e drivers need to pull in
code to enable it.

> Optionally strip on ingress (to PF) and insert on egress VLAN tag

This is just a PVID per VF, right? This should be working now on
ixgbe, though I would have to check.
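
If so, the kernel-side interface is the per-VF vlan setting via
iproute2; a rough sketch, where the PF name and VLAN ID are
placeholders:

```shell
# Assign a port VLAN (PVID) to VF 0 on PF "enp3s0f0" (placeholder name):
# frames from the VF get tagged with VLAN 100 on egress, and frames with
# that tag are stripped before delivery to the VF.
ip link set dev enp3s0f0 vf 0 vlan 100

# Clear it again by setting VLAN 0.
ip link set dev enp3s0f0 vf 0 vlan 0
```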

> Disable/enable MAC and VLAN anti spoofing separately

Needs kernel patch.
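
Right, for context: the existing iproute2 interface is a single
combined knob, so MAC and VLAN anti-spoofing can only be toggled
together today, which is exactly the gap (PF name is a placeholder):

```shell
# Today's kernel interface is one combined spoof-check toggle per VF;
# there is no way to enable MAC checking while disabling VLAN checking,
# hence the kernel patch needed above.
ip link set dev enp3s0f0 vf 0 spoofchk on
ip link set dev enp3s0f0 vf 0 spoofchk off
```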

> Mirror traffic from one VF to the other

The kernel supports this, but the driver is missing the feature.

> Enable/Disable local switching per VF

Local switching? I'm guessing this means you want all traffic to
go to the network regardless of MAC/etc., a sort of per-port VEPA mode?

> Collect statistics per VF pkts/octets in/out

Under development.
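
For reference, the netlink plumbing for this (the VF stats attributes)
is in place; the remaining work is in the drivers. Once a driver
exports the counters, they should show up through iproute2 roughly
like this (PF name is a placeholder):

```shell
# With -s, iproute2 prints per-VF RX/TX packet and byte counters for
# drivers that export them via the VF stats netlink attributes.
ip -s link show dev enp3s0f0
```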

> Enable/disable Mcast/unknown unicast per VF

Under development.

> Manage up to 8 TC per VF with one strict priority queue
> Manage per VF per TC bandwidth allocations

Needs kernel patch.
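
On the bandwidth piece, what exists today is only a per-VF aggregate
rate limit through iproute2; there is no per-TC allocation or strict
priority knob, which is why a kernel patch is needed. A sketch of the
current interface (PF name and rates are placeholders, rates in Mbit/s):

```shell
# Per-VF aggregate TX rate limiting only: guarantee 100 Mbit/s and cap
# at 1000 Mbit/s for VF 0. Nothing here is per-TC.
ip link set dev enp3s0f0 vf 0 min_tx_rate 100 max_tx_rate 1000
```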

> Manage LACP status visibility to the VFs (for NIC teaming using SRIOV)

I've not even thought to do this yet, so it's a good catch.

> 
> Most of these are VF management functions, and there is no standardized way 
> to do VF management in the kernel drivers. Besides, most of the use-cases 
> around SRIOV need DPDK in the VF anyway (so the target communities are 
> aligned) and the PF DPDK driver for ixgbe already existed, so it made sense 
> to add them there - no forking of the PF driver was involved and there is no 
> additional duplicate code.
> 

So I won't argue against it: we already have DPDK, the VFs are on DPDK,
and updating is a problem per the other email. But I think we can at
least get kernel support for the above.

Thanks,
John

> Cheers
> 
> KJ
> 
> 
> On Jan 11, 2017, at 6:03 AM, Vincent Jardin 
> <vincent.jar...@6wind.com> wrote:
> 
> Please can you list the gaps of the Kernel API?
> 
> Thank you,
> Vincent
> 
> 
> On January 11, 2017 at 3:59:45 AM, "JOSHI, KAUSTUBH  (KAUSTUBH)" 
> <kaust...@research.att.com> wrote:
> 
> Hi Vincent,
> 
> Greetings! Jumping into this debate a bit late, but let me share our point of 
> view based on how we are using this code within AT&T for our NFV cloud.
> 
> Actually, we first started with trying to do the configuration within the 
> kernel drivers as you suggest, but quickly realized that besides the 
> practical problem of kernel upstreaming being a much more arduous road (which 
> can be overcome), the bigger problem was that there is no standardization in 
> the configuration interfaces for the NICs in the kernel community. So 
> different drivers do things differently and expose different settings, and no 
> forum exists to drive towards such standardization. This was leading to 
> vendors having to maintain patched versions of drivers for doing PF 
> configuration, which is not a desirable situation.
> 
> So, to build a portable (to multiple NICs) SRIOV VF manager like VFd, DPDK 
> seemed like a good forum with some hope of driving towards a standard set 
> of interfaces and without having to worry about a lot of legacy baggage and 
> old hardware. Especially since DPDK already takes on the role of configuring 
> NICs for the data plane functions anyway - both PF and VF drivers will have 
> to be included for data plane usage anyway - we viewed that adding VF config 
> options will not cause any forking, but simply flesh out the DPDK drivers and 
> their interfaces to be more complete. These APIs could be optional, so new 
> vendors aren’t obligated to add them.
> 
> Furthermore, allowing VF config using the DPDK PF driver also has the side 
> benefit of allowing a complete SRIOV system (both VF and PF) to be built 
> entirely with DPDK, also making version alignment easier.
> 
> We started with Niantic, which already had PF and VF drivers, and things have 
> worked out very well with it. However, we would like VFd to be a multi-NIC 
> vendor agnostic VF management tool, which is why we’ve been asking for making 
> the PF config APIs richer.
> 
> Regards
> 
> KJ
> 
> 
> On Jan 10, 2017, at 3:23 PM, Vincent Jardin 
> <vincent.jar...@6wind.com> wrote:
> 
> Nope. First one needs to assess whether DPDK should be intensively used to 
> become a PF, knowing Linux can do the job. The Linux kernel community does 
> not like the forking of kernel drivers, and I tend to agree that we should 
> not keep duplicating options that can be solved with the Linux kernel.
> 
> Best regards,
> Vincent
> 
> 