On Sun, Oct 06, 2019 at 12:00:32PM +0200, Dmitry Vyukov wrote:
> On Sat, Oct 5, 2019 at 1:28 PM Christian Brauner
> <christian.brau...@ubuntu.com> wrote:
> >
> > When assigning and testing taskstats in taskstats_exit() there's a
> > race around writing and reading sig->stats.
> >
> > cpu0:
> > task calls exit()
> > do_exit()
> >         -> taskstats_exit()
> >                 -> taskstats_tgid_alloc()
> > The task takes sighand lock and assigns new stats to sig->stats.
> >
> > cpu1:
> > task catches signal
> > do_exit()
> >         -> taskstats_exit()
> >                 -> taskstats_tgid_alloc()
> > The task reads sig->stats __without__ holding sighand lock, seeing
> > garbage.
> >
> > Fix this by taking sighand lock when reading sig->stats.
> >
> > Reported-by: syzbot+c5d03165a1bd1dead...@syzkaller.appspotmail.com
> > Signed-off-by: Christian Brauner <christian.brau...@ubuntu.com>
> > ---
> >  kernel/taskstats.c | 28 +++++++++++++++++-----------
> >  1 file changed, 17 insertions(+), 11 deletions(-)
> >
> > diff --git a/kernel/taskstats.c b/kernel/taskstats.c
> > index 13a0f2e6ebc2..58b145234c4a 100644
> > --- a/kernel/taskstats.c
> > +++ b/kernel/taskstats.c
> > @@ -553,26 +553,32 @@ static int taskstats_user_cmd(struct sk_buff *skb, struct genl_info *info)
> >
> >  static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk)
> >  {
> > +       int empty;
> > +       struct taskstats *stats_new, *stats = NULL;
> >         struct signal_struct *sig = tsk->signal;
> > -       struct taskstats *stats;
> > -
> > -       if (sig->stats || thread_group_empty(tsk))
> > -               goto ret;
> >
> >         /* No problem if kmem_cache_zalloc() fails */
> > -       stats = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
> > +       stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
> 
> This seems to be overly pessimistic wrt performance because:
> 1. We always allocate the object and free it on every call, even if
> the stats are already allocated, whereas currently we don't.
> 2. We always allocate the object and free it if thread_group_empty,
> whereas currently we don't.
> 3. We do lock/unlock on every call.
> 
> I would suggest fixing the double-checked locking properly.

As I said in my reply to Marco: I'm integrating this. I just haven't
had time to update the patch.
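
For reference, an untested sketch of the double-checked variant I have in
mind, keeping the stats_new naming from the patch above. The
smp_load_acquire()/smp_store_release() pairing is an assumption at this
point, not the final patch:

static struct taskstats *taskstats_tgid_alloc(struct task_struct *tsk)
{
        struct signal_struct *sig = tsk->signal;
        struct taskstats *stats_new, *stats;

        /* Pairs with the smp_store_release() below. */
        stats = smp_load_acquire(&sig->stats);
        if (stats || thread_group_empty(tsk))
                return stats;

        /* No problem if kmem_cache_zalloc() fails */
        stats_new = kmem_cache_zalloc(taskstats_cache, GFP_KERNEL);
        if (!stats_new)
                return NULL;

        spin_lock_irq(&tsk->sighand->siglock);
        stats = sig->stats;
        if (!stats) {
                stats = stats_new;
                /*
                 * Publish the zeroed allocation; pairs with the
                 * smp_load_acquire() above so lockless readers never
                 * see a half-initialized object.
                 */
                smp_store_release(&sig->stats, stats_new);
                stats_new = NULL;
        }
        spin_unlock_irq(&tsk->sighand->siglock);

        /* Lost the race against another thread; drop our copy. */
        if (stats_new)
                kmem_cache_free(taskstats_cache, stats_new);

        return stats;
}

That would avoid the unconditional allocation, the allocation in the
thread_group_empty() case, and the lock/unlock on the common path.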

Christian
