4.15-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Mikulas Patocka <mpato...@redhat.com>

commit 7ac8ff95f48cbfa609a060fd6a1e361dd62feeb3 upstream.

IPv6 doesn't work on the MacchiatoBIN board. This is caused by a broken
multicast address filter in the mvpp2 driver.

The driver doesn't load any multicast entries if "allmulti" is not
set. This condition should be reversed.

The condition !netdev_mc_empty(dev) is useless (because
netdev_for_each_mc_addr is a no-op if the list is empty).
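
For reference, both helpers are thin wrappers over the same dev->mc
list (paraphrased here from include/linux/netdevice.h; exact
definitions may differ between kernel versions), which is why the
emptiness test adds nothing:

	/* Both operate on dev->mc, so when the list is empty the
	 * iteration below simply runs zero times - no separate
	 * emptiness check is needed.
	 */
	#define netdev_mc_empty(dev)	netdev_hw_addr_list_empty(&(dev)->mc)

	#define netdev_for_each_mc_addr(ha, dev) \
		netdev_hw_addr_list_for_each(ha, &(dev)->mc)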

This patch also fixes a possible overflow of the multicast list: if
mvpp2_prs_mac_da_accept fails, we set the allmulti flag and retry.
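
The resulting control flow is easiest to see in isolation. Below is a
minimal, compilable userspace sketch of the same fallback pattern; the
names (filter_reset, filter_add, MAX_FILTER) are hypothetical stand-ins
for the driver's mvpp2_prs_* helpers, and the capacity of 4 is made up:

	#include <stdbool.h>
	#include <stdio.h>

	#define MAX_FILTER 4	/* pretend the hardware has 4 mcast slots */

	static int nfilter;

	static void filter_reset(bool allmulti)
	{
		nfilter = 0;	/* like mvpp2_prs_mcast_del_all() */
		printf("reset filter, allmulti=%d\n", allmulti);
	}

	static int filter_add(const char *addr)
	{
		if (nfilter >= MAX_FILTER)
			return -1;	/* table full, like mvpp2_prs_mac_da_accept() failing */
		nfilter++;
		printf("accept %s\n", addr);
		return 0;
	}

	static void set_rx_mode(const char **mc, int n)
	{
		bool allmulti = false;
		int i;

	retry:
		filter_reset(allmulti);
		if (!allmulti) {
			for (i = 0; i < n; i++) {
				if (filter_add(mc[i])) {
					/* out of filter slots: fall back to
					 * allmulti and redo the setup
					 */
					allmulti = true;
					goto retry;
				}
			}
		}
	}

	int main(void)
	{
		const char *mc[] = {
			"33:33:00:00:00:01", "33:33:ff:00:00:01",
			"01:00:5e:00:00:01", "01:00:5e:00:00:fb",
			"33:33:00:00:00:fb",
		};

		set_rx_mode(mc, 5);
		return 0;
	}

With five addresses and four slots, the fifth filter_add() fails, the
function flips allmulti and retries, and the second pass installs no
individual entries at all - the same bounded behaviour the diff below
gives mvpp2_set_rx_mode().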

Signed-off-by: Mikulas Patocka <mpato...@redhat.com>
Cc: sta...@vger.kernel.org
Signed-off-by: David S. Miller <da...@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>

---
 drivers/net/ethernet/marvell/mvpp2.c |   11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

--- a/drivers/net/ethernet/marvell/mvpp2.c
+++ b/drivers/net/ethernet/marvell/mvpp2.c
@@ -7127,6 +7127,7 @@ static void mvpp2_set_rx_mode(struct net
        int id = port->id;
        bool allmulti = dev->flags & IFF_ALLMULTI;
 
+retry:
        mvpp2_prs_mac_promisc_set(priv, id, dev->flags & IFF_PROMISC);
        mvpp2_prs_mac_multi_set(priv, id, MVPP2_PE_MAC_MC_ALL, allmulti);
        mvpp2_prs_mac_multi_set(priv, id, MVPP2_PE_MAC_MC_IP6, allmulti);
@@ -7134,9 +7135,13 @@ static void mvpp2_set_rx_mode(struct net
        /* Remove all port->id's mcast enries */
        mvpp2_prs_mcast_del_all(priv, id);
 
-       if (allmulti && !netdev_mc_empty(dev)) {
-               netdev_for_each_mc_addr(ha, dev)
-                       mvpp2_prs_mac_da_accept(priv, id, ha->addr, true);
+       if (!allmulti) {
+               netdev_for_each_mc_addr(ha, dev) {
+                       if (mvpp2_prs_mac_da_accept(priv, id, ha->addr, true)) {
+                               allmulti = true;
+                               goto retry;
+                       }
+               }
        }
 }
 

