From: Cong Wang <[email protected]>
Date: Fri, 24 Aug 2018 12:28:06 -0700

> syzbot reported a use-after-free in tipc_group_fill_sock_diag():
> tipc_group_fill_sock_diag() still reads tsk->group while
> tipc_group_delete() deletes it in tipc_release().
> 
> tipc_nl_sk_walk() tries to lock each sock while walking the hash
> table, in order to close races with sock changes like this one, by
> acquiring the tsk->sk.sk_lock.slock spinlock. Unfortunately this
> doesn't work at all: every non-BH call path should take lock_sock()
> instead.
> 
> tipc_nl_sk_walk() iterates with the raw rht_for_each_entry_rcu(),
> which requires the RCU read lock to be held; this is why lock_sock()
> cannot be taken on this path. It can be resolved by switching to the
> rhashtable iterator APIs, which allow taking a sleepable lock. The
> iterator APIs are also friendly to restartable calls like the diag
> dump: the last position is remembered behind the scenes, so all we
> need to do here is save the iterator into cb->args[].
> 
> I tested this with a parallel tipc diag dump and thousands of tipc
> socket creations and releases; no crash or memory leak was observed.
> 
> Reported-by: [email protected]
> Cc: Jon Maloy <[email protected]>
> Cc: Ying Xue <[email protected]>
> Signed-off-by: Cong Wang <[email protected]>

Applied and queued up for -stable, thanks Cong.
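
For reference, a minimal sketch of the iterator-based walk the changelog
above describes. This is not the patch as applied: the function name and
the fill callback are placeholders, the cb->args[0] bookkeeping is assumed
to be set up in the dump's ->start() callback, and the code would live in
net/tipc/socket.c where struct tipc_sock is defined.

	#include <linux/err.h>
	#include <linux/rhashtable.h>
	#include <net/sock.h>

	/* Placeholder walk helper: resume an rhashtable walk saved in
	 * cb->args[0], take lock_sock() on each socket, and call a fill
	 * callback for the diag dump.
	 */
	static int tipc_diag_walk_sketch(struct sk_buff *skb,
					 struct netlink_callback *cb,
					 int (*fill)(struct sk_buff *,
						     struct netlink_callback *,
						     struct tipc_sock *))
	{
		struct rhashtable_iter *iter;
		struct tipc_sock *tsk;
		int err = 0;

		iter = (struct rhashtable_iter *)cb->args[0];
		rhashtable_walk_start(iter);

		while ((tsk = rhashtable_walk_next(iter)) != NULL) {
			if (IS_ERR(tsk)) {
				if (PTR_ERR(tsk) == -EAGAIN)
					continue;	/* table resize, keep going */
				err = PTR_ERR(tsk);
				break;
			}

			sock_hold(&tsk->sk);		/* keep the sock alive ... */
			rhashtable_walk_stop(iter);	/* ... and leave RCU */

			lock_sock(&tsk->sk);		/* sleepable lock is now legal */
			err = fill(skb, cb, tsk);	/* e.g. the diag fill handler */
			release_sock(&tsk->sk);

			sock_put(&tsk->sk);
			rhashtable_walk_start(iter);	/* re-enter RCU and resume */
			if (err)
				break;
		}

		rhashtable_walk_stop(iter);
		return err;
	}

The point of the pattern is that rhashtable_walk_stop() drops the RCU read
side while the iterator remembers its position, so lock_sock() may sleep
safely; the sock_hold()/sock_put() pair keeps the socket from being freed
across that window.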
