rds_send_drop_to() is used during socket teardown to find all the
messages on the socket and flush them.  It can race with the
acking code unless it takes the m_rs_lock on each and every message.

This plugs a hole where we didn't take m_rs_lock on any message that
didn't have the RDS_MSG_ON_CONN flag set.  Taking m_rs_lock avoids
double frees and other memory corruption, because the ack code trusts
the message's m_rs pointer, which may point to a socket that has
already been freed.

We must take m_rs_lock to access m_rs.  Because of the lock nesting
and the access to rs, we also need to acquire rs_lock (nested inside
m_rs_lock).
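
For context, the ack path uses the same nesting.  The fragment below is
a paraphrased sketch of the rds_send_remove_from_sock() pattern in
net/rds/send.c, not a literal excerpt; rm, rs and flags are assumed to
be declared as in that function:

	spin_lock_irqsave(&rm->m_rs_lock, flags);
	rs = rm->m_rs;		/* only valid while m_rs_lock is held */
	if (rs) {
		spin_lock(&rs->rs_lock);	/* rs_lock nests inside m_rs_lock */
		/* ... complete the message against the socket ... */
		spin_unlock(&rs->rs_lock);
	}
	spin_unlock_irqrestore(&rm->m_rs_lock, flags);

Because teardown now sets rm->m_rs to NULL under that same m_rs_lock,
the ack path can no longer dereference a socket that has been freed.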

Reviewed-by: Ajaykumar Hotchandani <ajaykumar.hotchand...@oracle.com>
Signed-off-by: Santosh Shilimkar <ssant...@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilim...@oracle.com>
---
 net/rds/send.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/net/rds/send.c b/net/rds/send.c
index 96ae38d..b0fe412 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -778,8 +778,22 @@ void rds_send_drop_to(struct rds_sock *rs, struct sockaddr_in *dest)
        while (!list_empty(&list)) {
                rm = list_entry(list.next, struct rds_message, m_sock_item);
                list_del_init(&rm->m_sock_item);
-
                rds_message_wait(rm);
+
+               /* just in case the code above skipped this message
+                * because RDS_MSG_ON_CONN wasn't set, run it again here.
+                * Taking m_rs_lock is the only thing that keeps us
+                * from racing with ack processing.
+                */
+               spin_lock_irqsave(&rm->m_rs_lock, flags);
+
+               spin_lock(&rs->rs_lock);
+               __rds_send_complete(rs, rm, RDS_RDMA_CANCELED);
+               spin_unlock(&rs->rs_lock);
+
+               rm->m_rs = NULL;
+               spin_unlock_irqrestore(&rm->m_rs_lock, flags);
+
                rds_message_put(rm);
        }
 }
-- 
1.9.1
