Re: [patch net-next 00/26] bonding/team offload + mlxsw implementation
Tue, Dec 01, 2015 at 02:48:38PM CET, j...@resnulli.us wrote:
>From: Jiri Pirko
>
>This patchset introduces the needed infrastructure for link aggregation
>offload - for both team and bonding. It also implements the offload
>in the mlxsw driver.
>
>In particular, this patchset introduces the possibility for an upper driver
>(bond/team/bridge/...) to pass type-specific info down to notifier
>listeners. The info is passed along with the
>NETDEV_CHANGEUPPER/NETDEV_PRECHANGEUPPER notifiers. Listeners (drivers of
>netdevs being enslaved) can react accordingly.
>
>The other extension is for run-time use. This patchset introduces a
>new netdev notifier type - NETDEV_CHANGELOWERSTATE. Along with this
>notification, the upper driver (bond/team/bridge/...) can pass some
>information about the lower device change, in particular the link-up and
>TX-enabled states. Listeners (drivers of netdevs being enslaved) can
>react accordingly.
>
>The last part of the patchset is the implementation of LAG offload in
>mlxsw, using both of the previously introduced infrastructure extensions.
>
>Note that the bond-specific (and ugly) NETDEV_BONDING_INFO used by mlx4
>can be removed and mlx4 can use the extensions this patchset adds.
>I plan to convert it and get rid of NETDEV_BONDING_INFO in
>a follow-up patchset.

ccing a couple of people I forgot to cc.

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [patch net-next 00/26] bonding/team offload + mlxsw implementation
On Tue, Dec 1, 2015 at 4:43 PM, Jiri Pirko wrote:
> Tue, Dec 01, 2015 at 02:48:38PM CET, j...@resnulli.us wrote:
>>From: Jiri Pirko
>>
>>This patchset introduces the needed infrastructure for link aggregation
>>offload - for both team and bonding. It also implements the offload
>>in the mlxsw driver.

Hi Jiri,

I didn't see any changes to switchdev.h, can you elaborate on that please?

Or.
Re: [patch net-next 00/26] bonding/team offload + mlxsw implementation
Tue, Dec 01, 2015 at 04:06:23PM CET, gerlitz...@gmail.com wrote:
>On Tue, Dec 1, 2015 at 4:43 PM, Jiri Pirko wrote:
>> Tue, Dec 01, 2015 at 02:48:38PM CET, j...@resnulli.us wrote:
>>>From: Jiri Pirko
>>>
>>>This patchset introduces the needed infrastructure for link aggregation
>>>offload - for both team and bonding. It also implements the offload
>>>in the mlxsw driver.
>
>Hi Jiri,
>
>I didn't see any changes to switchdev.h, can you elaborate on that please?

Correct. This patchset does not extend the switchdev API. The extension is
done for netdev notifiers, which seems natural and correct.
As we already discussed with John on a different thread, it makes sense
for non-switchdev drivers to benefit from these extensions as well.
Re: [patch net-next 00/26] bonding/team offload + mlxsw implementation
On Tue, Dec 1, 2015 at 5:12 PM, Jiri Pirko wrote:
> Tue, Dec 01, 2015 at 04:06:23PM CET, gerlitz...@gmail.com wrote:
>>On Tue, Dec 1, 2015 at 4:43 PM, Jiri Pirko wrote:
>>> Tue, Dec 01, 2015 at 02:48:38PM CET, j...@resnulli.us wrote:
>>>>This patchset introduces the needed infrastructure for link aggregation
>>>>offload - for both team and bonding. It also implements the offload
>>>>in the mlxsw driver.
>>
>>I didn't see any changes to switchdev.h, can you elaborate on that please?
>
> Correct. This patchset does not extend the switchdev API. The extension is
> done for netdev notifiers, which seems natural and correct.
> As we already discussed with John on a different thread, it makes sense
> for non-switchdev drivers to benefit from these extensions as well.

This is understood.

However, the point which is still not clear to me relates to the LAG /
switchdev object model.

All of the FDB/VLAN/FIB switchdev objects have corresponding software
counterparts in the kernel --- what is the case for LAG? The software
construct is a bond or team instance; shouldn't there be a modeling of
the HW LAG object in switchdev?

Or.
Re: [patch net-next 00/26] bonding/team offload + mlxsw implementation
Tue, Dec 01, 2015 at 05:35:43PM CET, gerlitz...@gmail.com wrote:
>On Tue, Dec 1, 2015 at 5:12 PM, Jiri Pirko wrote:
>> Tue, Dec 01, 2015 at 04:06:23PM CET, gerlitz...@gmail.com wrote:
>>>On Tue, Dec 1, 2015 at 4:43 PM, Jiri Pirko wrote:
>>>> Tue, Dec 01, 2015 at 02:48:38PM CET, j...@resnulli.us wrote:
>>>>>This patchset introduces the needed infrastructure for link aggregation
>>>>>offload - for both team and bonding. It also implements the offload
>>>>>in the mlxsw driver.
>>>
>>>I didn't see any changes to switchdev.h, can you elaborate on that please?
>>
>> Correct. This patchset does not extend the switchdev API. The extension is
>> done for netdev notifiers, which seems natural and correct.
>> As we already discussed with John on a different thread, it makes sense
>> for non-switchdev drivers to benefit from these extensions as well.
>
>This is understood.
>
>However, the point which is still not clear to me relates to the LAG /
>switchdev object model.
>
>All of the FDB/VLAN/FIB switchdev objects have corresponding software
>counterparts in the kernel --- what is the case for LAG? The software
>construct is a bond or team instance; shouldn't there be a modeling of
>the HW LAG object in switchdev?

No need for that. What would that be good for?
The switchdev iface (most of it) works with struct net_device. It does not
matter whether that is the port netdev directly or a team/bonding netdev.
It falls into the picture very nicely.
Re: [patch net-next 00/26] bonding/team offload + mlxsw implementation
On Tue, Dec 1, 2015 at 6:47 PM, Jiri Pirko wrote:
> Tue, Dec 01, 2015 at 05:35:43PM CET, gerlitz...@gmail.com wrote:
>>On Tue, Dec 1, 2015 at 5:12 PM, Jiri Pirko wrote:
>>> Tue, Dec 01, 2015 at 04:06:23PM CET, gerlitz...@gmail.com wrote:
>>>>On Tue, Dec 1, 2015 at 4:43 PM, Jiri Pirko wrote:
>>>>> Tue, Dec 01, 2015 at 02:48:38PM CET, j...@resnulli.us wrote:
>>>>>>This patchset introduces the needed infrastructure for link aggregation
>>>>>>offload - for both team and bonding. It also implements the offload
>>>>>>in the mlxsw driver.
>>>>
>>>>I didn't see any changes to switchdev.h, can you elaborate on that please?
>>> Correct. This patchset does not extend the switchdev API. The extension is
>>> done for netdev notifiers, which seems natural and correct.
>>> As we already discussed with John on a different thread, it makes sense
>>> for non-switchdev drivers to benefit from these extensions as well.
>>This is understood.
>>However, the point which is still not clear to me relates to the LAG /
>>switchdev object model.
>>All of the FDB/VLAN/FIB switchdev objects have corresponding software
>>counterparts in the kernel --- what is the case for LAG? The software
>>construct is a bond or team instance; shouldn't there be a modeling of
>>the HW LAG object in switchdev?
> No need for that. What would that be good for?

I'll give it a 2nd thought; also, let's see what other reviewers think on
this matter.

Another question relates to users bonding/teaming netdevice ports from
different HW switches, or of two vlans over ports from the same HW switch.

This is something that AFAIK is not supported by HW -- do we want to
disallow that? What layer in the kernel do we want to enforce that
limitation in? team/bond, switchdev core, or the switchdev HW driver?

> The switchdev iface (most of it) works with struct net_device. It does not
> matter whether that is the port netdev directly or a team/bonding netdev.
> It falls into the picture very nicely.

Or.
Re: [patch net-next 00/26] bonding/team offload + mlxsw implementation
Wed, Dec 02, 2015 at 06:53:35AM CET, gerlitz...@gmail.com wrote:
>On Tue, Dec 1, 2015 at 6:47 PM, Jiri Pirko wrote:
>> Tue, Dec 01, 2015 at 05:35:43PM CET, gerlitz...@gmail.com wrote:
>>>However, the point which is still not clear to me relates to the LAG /
>>>switchdev object model.
>>>
>>>All of the FDB/VLAN/FIB switchdev objects have corresponding software
>>>counterparts in the kernel --- what is the case for LAG? The software
>>>construct is a bond or team instance; shouldn't there be a modeling of
>>>the HW LAG object in switchdev?
>>
>> No need for that. What would that be good for?
>
>I'll give it a 2nd thought; also, let's see what other reviewers think on
>this matter.
>
>Another question relates to users bonding/teaming netdevice ports from
>different HW switches, or of two vlans over ports from the same HW switch.
>
>This is something that AFAIK is not supported by HW -- do we want to
>disallow that? What layer in the kernel do we want to enforce that
>limitation in? team/bond, switchdev core, or the switchdev HW driver?

It is not handled at the moment. It can be easily disallowed by the driver.
Re: [patch net-next 00/26] bonding/team offload + mlxsw implementation
On Wed, Dec 2, 2015 at 9:58 AM, Jiri Pirko wrote:
> Wed, Dec 02, 2015 at 06:53:35AM CET, gerlitz...@gmail.com wrote:
>> Another question relates to users bonding/teaming netdevice ports from
>> different HW switches, or of two vlans over ports from the same HW switch.
>> This is something that AFAIK is not supported by HW -- do we want to
>> disallow that? What layer in the kernel do we want to enforce that
>> limitation in? team/bond, switchdev core, or the switchdev HW driver?
> It is not handled at the moment. It can be easily disallowed by the driver.

What about the case of LAG + VLANs? What would currently be supported,
bonding vlans or a vlan over a bond?

bond b0 -->
        vlan A.X --> switchdev port A
        vlan B.X --> switchdev port B

vlan b0.X --> bond b0 -->
        vlan --> switchdev port

Or.
Re: [patch net-next 00/26] bonding/team offload + mlxsw implementation
Wed, Dec 02, 2015 at 09:21:37AM CET, gerlitz...@gmail.com wrote:
>On Wed, Dec 2, 2015 at 9:58 AM, Jiri Pirko wrote:
>> Wed, Dec 02, 2015 at 06:53:35AM CET, gerlitz...@gmail.com wrote:
>>> Another question relates to users bonding/teaming netdevice ports from
>>> different HW switches, or of two vlans over ports from the same HW switch.
>>> This is something that AFAIK is not supported by HW -- do we want to
>>> disallow that? What layer in the kernel do we want to enforce that
>>> limitation in? team/bond, switchdev core, or the switchdev HW driver?
>
>> It is not handled at the moment. It can be easily disallowed by the driver.
>
>What about the case of LAG + VLANs? What would currently be supported,
>bonding vlans or a vlan over a bond?
>
>bond b0 -->
>        vlan A.X --> switchdev port A
>        vlan B.X --> switchdev port B
>
>vlan b0.X --> bond b0 -->
>        vlan --> switchdev port

- vlan on top of bond/team (bridge vlan) is currently supported.
- Ido is working on support for a vlan device on top of bond/team. That
  will most likely be a matter of the next patchset, quite soon.
- bond/team on top of vlan is not supported by HW.
Re: [patch net-next 00/26] bonding/team offload + mlxsw implementation
On Wed, Dec 2, 2015 at 9:58 AM, Jiri Pirko wrote:
> Wed, Dec 02, 2015 at 06:53:35AM CET, gerlitz...@gmail.com wrote:
>>Another question relates to users bonding/teaming netdevice ports from
>>different HW switches, or of two vlans over ports from the same HW switch.
>>This is something that AFAIK is not supported by HW -- do we want to
>>disallow that? What layer in the kernel do we want to enforce that
>>limitation in? team/bond, switchdev core, or the switchdev HW driver?
> It is not handled at the moment. It can be easily disallowed by the driver.

What about the case of LAG + VLANs? What do you think fits HW switches
better? What would currently be supported, bonding vlans or a vlan over
a bond? For me the 1st one (below) makes more sense.

bond b0 -->
        vlan A.X --> switchdev port A
        vlan B.X --> switchdev port B

vlan b0.X --> bond b0 -->
        switchdev port A
        switchdev port B

Or.
Re: [patch net-next 00/26] bonding/team offload + mlxsw implementation
> Another question relates to users bonding/teaming netdevice ports from
> different HW switches

We need to be precise here. DSA allows for a cluster of switches which are
interconnected via switch ports. In this setup, the Marvell switches allow
ports of different switches to be members of a trunk, which is Marvell's
name for a bond/team.

The second possible setup would be multiple switch devices which are not
interconnected. Packets would then have to be forwarded from one switch to
another via the CPU when a bond/team is spread across switches.

Andrew