On Thu, 2015-01-22 at 15:31 +0200, Or Gerlitz wrote:
> From: Erez Shitrit <ere...@mellanox.com>
> 
> Following commit 016d9fb25cd9 ("IPoIB: fix MCAST_FLAG_BUSY usage"), IPv6
> traffic and, in most cases, IPv4 multicast traffic stopped working.
> 
> After that change there is no mechanism to re-run the work that performs
> the join process for the remaining mcg's. For example, if the list of
> mcg's contains a send-only request, then once it has been processed,
> ipoib_mcast_sendonly_join_complete() does not requeue the mcast task but
> leaves the bit that signals the task is running set, so the task will
> never run again.
> 
> Also, whenever the kernel sends a multicast packet (without joining the
> group), we don't call ipoib_mcast_sendonly_join(); the code tries to start
> the mcast task, but that fails because the IPOIB_MCAST_RUN bit is always
> set. As a result, the multicast packet is never sent.
> 
> The fix handles all join requests via the same logic, and explicitly calls
> sendonly join whenever there is a packet of the sendonly type.
> 
> Since ipoib_mcast_sendonly_join() is now called from the driver's TX flow,
> we can't take a mutex there. We avoid locking by testing that the
> multicast object is valid (neither an error pointer nor NULL).
> 
> Fixes: 016d9fb25cd9 ('IPoIB: fix MCAST_FLAG_BUSY usage')
> Reported-by: Eyal Perry <eya...@mellanox.com>
> Signed-off-by: Erez Shitrit <ere...@mellanox.com>
> Signed-off-by: Or Gerlitz <ogerl...@mellanox.com>
> ---
> 
> Changes from V1:
> 
> 1. always do clear_bit(IPOIB_MCAST_FLAG_BUSY) in
> ipoib_mcast_sendonly_join_complete()

This part is good.

> 2. Sync between ipoib_mcast_sendonly_join() and
> ipoib_mcast_sendonly_join_complete()
> using an IS_ERR_OR_NULL() test

This part is no good.  You just added a kernel data corrupter or a kernel
oopser, depending on the situation.

>  drivers/infiniband/ulp/ipoib/ipoib_multicast.c |   16 ++++++----------
>  1 files changed, 6 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
> index bc50dd0..212cfb4 100644
> --- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
> +++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
> @@ -342,7 +342,6 @@ static int ipoib_mcast_sendonly_join(struct ipoib_mcast *mcast)
>       rec.port_gid = priv->local_gid;
>       rec.pkey     = cpu_to_be16(priv->pkey);
>  
> -     mutex_lock(&mcast_mutex);
>       init_completion(&mcast->done);
>       set_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);
>       mcast->mc = ib_sa_join_multicast(&ipoib_sa_client, priv->ca,
> @@ -354,8 +353,8 @@ static int ipoib_mcast_sendonly_join(struct ipoib_mcast *mcast)
>                                        GFP_ATOMIC,
>                                        ipoib_mcast_sendonly_join_complete,
>                                        mcast);
> -     if (IS_ERR(mcast->mc)) {
> -             ret = PTR_ERR(mcast->mc);
> +     if (IS_ERR_OR_NULL(mcast->mc)) {
> +             ret = mcast->mc ? PTR_ERR(mcast->mc) : -EAGAIN;
>               clear_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);
>               complete(&mcast->done);
>               ipoib_warn(priv, "ib_sa_join_multicast for sendonly join "
> @@ -364,7 +363,6 @@ static int ipoib_mcast_sendonly_join(struct ipoib_mcast *mcast)
>               ipoib_dbg_mcast(priv, "no multicast record for %pI6, starting "
>                               "sendonly join\n", mcast->mcmember.mgid.raw);
>       }
> -     mutex_unlock(&mcast_mutex);
>  
>       return ret;
>  }

These three hunks allow the join completion routine to run in parallel with
ib_sa_join_multicast() returning; it's a race to see which finishes first.
What's not much of a race, however, is this: if the join completion finishes
first and set mcast->mc = NULL, then it also ran
clear_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags) and complete(&mcast->done).
We will then enter the if statement and run those same things again.  If, at
the time the first complete() ran, we were already waiting in
mcast_dev_flush or mcast_restart_task for the completion to happen, it is
entirely possible, maybe even probable, that one of those routines has
already freed our mcast struct out from underneath us.  By the time we get
around to running the clear_bit and complete in this routine, our memory may
well have been freed and reused, and we cause random data corruption.  Or it
has been freed and cleared during reuse, and dereferencing those pointers
oopses the kernel.
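
To spell that out as a timeline (simplified, with this patch applied):

    CPU0: ipoib_mcast_sendonly_join()        CPU1: SA completion / flush
    ------------------------------------     ------------------------------------
    mcast->mc = ib_sa_join_multicast(...);
                                             ipoib_mcast_sendonly_join_complete()
                                                 sets mcast->mc = NULL
                                                 clear_bit(IPOIB_MCAST_FLAG_BUSY,
                                                           &mcast->flags);
                                                 complete(&mcast->done);
                                             mcast_dev_flush()/mcast_restart_task()
                                                 sees the completion, frees mcast
    if (IS_ERR_OR_NULL(mcast->mc)) {         <-- reads freed memory
            clear_bit(..., &mcast->flags);   <-- writes freed memory
            complete(&mcast->done);          <-- completes a freed completion
    }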

The alternative, if you really want to remove that lock, is that we can't
abuse mcast->mc to store the return value.  Instead, we would need to store
the return of ib_sa_join_multicast() in a temporary pointer and check that
temporary for ERR_PTR.  If we get that, then we know we will never run our
completion routine and we should complete ourselves.  If we get anything
else, then it might be valid, or it might be about to be nullified by our
completion routine, so we simply throw the value away and let the completion
routine set mcast->mc as it sees fit.  However, as I mentioned off list to
Erez, I was reluctant to make that change in the 3.19 fix series because
it's very risky IMO.  I don't mind adding locks, but removing that much
locking is more 3.20 material IMO.
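
For what it's worth, the shape I have in mind is roughly the following
(untested sketch, not a patch; the rec/comp_mask setup is unchanged from
the current code and mostly elided):

	static int ipoib_mcast_sendonly_join(struct ipoib_mcast *mcast)
	{
		struct net_device *dev = mcast->dev;
		struct ipoib_dev_priv *priv = netdev_priv(dev);
		struct ib_sa_mcmember_rec rec = { };
		ib_sa_comp_mask comp_mask;	/* same REC flags as today */
		struct ib_sa_multicast *sa_mcast;
		int ret = 0;

		/* ... fill in rec and comp_mask exactly as the current code does ... */

		init_completion(&mcast->done);
		set_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);

		sa_mcast = ib_sa_join_multicast(&ipoib_sa_client, priv->ca,
						priv->port, &rec, comp_mask,
						GFP_ATOMIC,
						ipoib_mcast_sendonly_join_complete,
						mcast);
		if (IS_ERR(sa_mcast)) {
			/*
			 * The join was never queued, so our completion
			 * handler will never run; we are the only ones
			 * cleaning up, and we do it exactly once.
			 */
			ret = PTR_ERR(sa_mcast);
			clear_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags);
			complete(&mcast->done);
			ipoib_warn(priv, "sendonly join failed (ret = %d)\n",
				   ret);
		}
		/*
		 * On success, deliberately do NOT store sa_mcast in
		 * mcast->mc here: the completion handler may already have
		 * run and nullified mcast->mc, so it owns that field from
		 * this point on.
		 */

		return ret;
	}

But even that is more churn than I want in the 3.19 fix series.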


> @@ -622,10 +620,8 @@ void ipoib_mcast_join_task(struct work_struct *work)
>                       break;
>               }
>  
> -             if (test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags))
> -                     ipoib_mcast_sendonly_join(mcast);
> -             else
> -                     ipoib_mcast_join(dev, mcast, 1);
> +             ipoib_mcast_join(dev, mcast, 1);
> +
>               return;
>       }
>  
> @@ -725,8 +721,6 @@ void ipoib_mcast_send(struct net_device *dev, u8 *daddr, struct sk_buff *skb)
>               memcpy(mcast->mcmember.mgid.raw, mgid, sizeof (union ib_gid));
>               __ipoib_mcast_add(dev, mcast);
>               list_add_tail(&mcast->list, &priv->multicast_list);
> -             if (!test_and_set_bit(IPOIB_MCAST_RUN, &priv->flags))
> -                     queue_delayed_work(priv->wq, &priv->mcast_task, 0);
>       }
>  
>       if (!mcast->ah) {
> @@ -740,6 +734,8 @@ void ipoib_mcast_send(struct net_device *dev, u8 *daddr, struct sk_buff *skb)
>               if (test_bit(IPOIB_MCAST_FLAG_BUSY, &mcast->flags))
>                       ipoib_dbg_mcast(priv, "no address vector, "
>                                       "but multicast join already started\n");
> +             else if (test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags))
> +                     ipoib_mcast_sendonly_join(mcast);
>  
>               /*
>                * If lookup completes between here and out:, don't

These items make a passable effort at fixing up the same items as
patches 1 and 2 in my patchset.  Patch 3 in my patchset was just
something I noticed, but it isn't drastically important.

This alternative patch does nothing to address what is fixed in these
patches though:

patch 4: we leaked joins as a result of handling ENETRESET wrongly.  You
wouldn't see this unless your testing included taking the network up and
down by killing opensm or the like, but it's a real issue

patch 5: overzealous rescheduling of the join task that got in the way of
the flush thread

patch 6: took out an unneeded spinlock

patch 7: during my testing, I got an oops in ipoib_mcast_join that was the
result of a lack of locking between the flush task and the mcast_join_task
thread

patch 8: there is a legitimate leak of mcast entries any time we process
this list and get to the end only to find that our device is no longer up.
Given that the debug messages show we call mcast_restart_task every time
right before mcast_dev_flush when we are downing the interface, the chance
of a leak here is very high

patch 9: similar to patch 4, you don't see this unless you are abusing
opensm to trigger net events and combining that with removing the module at
the same time.  But if you have a queued net event and you don't flush
after you unregister, it can oops your kernel later when the net event
finally runs

patch 10: if you have mcast debugging on, that printk can actually cause
problems that don't otherwise exist because it isn't rate limited in any
way
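
(For reference, the usual kernel pattern for that last one is simply a
ratelimit guard around the offending debug print; sketched generically
here rather than quoted from the patch:

	if (net_ratelimit())
		ipoib_dbg_mcast(priv, "...\n");

)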

This interim patch you are suggesting addresses the first couple of items
my patches address, but it also introduces a data corrupter/oopser.  And,
as I said in a previous email, your testing is myopic, just as my original
testing was: you are ignoring other items and thinking your patch is OK
when it really isn't.

-- 
Doug Ledford <dledf...@redhat.com>
              GPG KeyID: 0E572FDD

