On Tue, Oct 27, 2020 at 14:27, Vladimir Oltean <olte...@gmail.com> wrote:
>> The LAG driver ops all receive the LAG netdev as an argument when this
>> information is already available through the port's lag pointer. This
>> was done to match the way that the bridge netdev is passed to all VLAN
>> ops even though it is in the port's bridge_dev. Is there a reason for
>> this or should I just remove it from the LAG ops?
>
> Maybe because on "leave", the bridge/LAG net device pointer inside
> struct dsa_port is first set to NULL, then the DSA notifier is called?
Right, that makes sense; there is a rough sketch of that ordering at the
end of this mail. For LAGs I keep ds->lag set until the leave ops have
run, but perhaps I should change it to match the VLAN ops?

> Since ocelot/felix does not have this restriction, and supports
> individual port addressing even under a LAG, you can imagine I am not
> very happy to see the RX data path punishing everyone else that is not
> mv88e6xxx.

I understand that, for sure. Though to be clear, the only penalty in
terms of performance is an extra call to dsa_slave_check, which is just
a load and compare on skb->dev->netdev_ops (also sketched at the end of
this mail).

>> (mv88e6xxx) What is the policy regarding the use of DSA vs. EDSA? It
>> seems like all chips capable of doing EDSA are using that, except for
>> the Peridot.
>
> I have no documentation whatsoever for mv88e6xxx, but just wondering,
> what is the benefit brought by EDSA here vs DSA? Does DSA have the
> same restriction when the ports are in a LAG?

The same restrictions apply, I'm afraid. The only difference is that
you prepend a proper ethertype before the tag. The idea (as far as I
know) is that you can trap control traffic (TO_CPU in DSA parlance) to
the CPU and receive it (E)DSA-tagged, to implement things like STP and
LLDP, but receive the data traffic (FORWARD) untagged or with an 802.1Q
tag. This means you can use the standard VLAN accelerators on NICs to
remove/insert the 1Q tags. In a routing scenario this can bring a
significant speed-up, as you skip two memcpys per packet to remove and
insert the tag.
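
To make that concrete, here is a rough sketch of the two frame layouts
and of what untagging costs on RX. This is written from memory rather
than copied from the kernel taggers, so treat the details (in particular
the assumption about where skb->data points) as illustrative only;
ETH_P_EDSA (0xDADA) is just the Linux default ethertype, the switch can
be told to match other values:

#include <linux/etherdevice.h>
#include <linux/skbuff.h>

/* Regular DSA: the 4-byte tag sits where the ethertype (or 802.1Q tag)
 * would normally be:
 *
 *   DA (6) | SA (6) | DSA tag (4) | payload ...
 *
 * EDSA: the same 4-byte tag, but preceded by a real ethertype
 * (ETH_P_EDSA, 0xDADA by default in Linux) and two reserved bytes:
 *
 *   DA (6) | SA (6) | ethertype (2) | reserved (2) | DSA tag (4) | payload
 */
#define DSA_HLEN	4
#define EDSA_HLEN	8

/* Untagging on RX: shift DA + SA towards the payload to close the gap
 * left by the tag, then drop the tag bytes from the head of the skb.
 * The 12-byte memmove is one of the two per-packet copies mentioned
 * above (the other being the mirror-image operation when the tag is
 * inserted on TX). Assumes skb->data points at the first byte of the
 * tag, i.e. right after the source MAC address.
 */
static void dsa_untag(struct sk_buff *skb, unsigned int tag_len)
{
	memmove(skb->data - 2 * ETH_ALEN + tag_len,
		skb->data - 2 * ETH_ALEN,
		2 * ETH_ALEN);
	skb_pull_rcsum(skb, tag_len);
}

With FORWARD frames arriving untagged or 802.1Q-tagged instead, none of
this is needed: the NIC's VLAN offload strips the 1Q tag into skb
metadata and what is left is already a plain Ethernet frame.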
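
As for the check in the RX data path mentioned above, it amounts to
roughly the following (name and body quoted from memory, I believe the
helper in net/dsa is spelled dsa_slave_dev_check()):

#include <linux/netdevice.h>

/* dsa_slave_netdev_ops is the net_device_ops that DSA installs on every
 * user (slave) netdev, so a single pointer load and compare is enough to
 * tell whether a netdev belongs to DSA -- no lookup, no locking.
 */
static inline bool dsa_slave_dev_check(const struct net_device *dev)
{
	return dev->netdev_ops == &dsa_slave_netdev_ops;
}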
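
Finally, on the ordering question from the top of the mail: my reading
is the same as yours, i.e. the reason the bridge netdev must be passed
explicitly is that the core clears the port's pointer before the
notifier runs. Heavily simplified, and from memory rather than a copy of
net/dsa/port.c:

/* Sketch of the DSA core's bridge leave path. dp->bridge_dev is cleared
 * *before* the cross-chip notifier fires, so by the time a driver's
 * .port_bridge_leave op runs, the only way for it to know which bridge
 * was left is the netdev carried in the notifier info.
 */
static void dsa_port_bridge_leave(struct dsa_port *dp, struct net_device *br)
{
	struct dsa_notifier_bridge_info info = {
		.sw_index = dp->ds->index,
		.port = dp->index,
		.br = br,
	};

	dp->bridge_dev = NULL;

	dsa_port_notify(dp, DSA_NOTIFIER_BRIDGE_LEAVE, &info);
}

As long as the LAG ops run before the port's lag pointer is cleared, the
netdev argument really is redundant there; the question is just whether
keeping the LAG and bridge flows symmetric is worth the extra parameter.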