On Tue, 16 Feb 2021 10:34:32 -0800
Ben Widawsky <ben.widaw...@intel.com> wrote:

...

> > > diff --git a/drivers/cxl/mem.c b/drivers/cxl/mem.c
> > > index 237b956f0be0..4ca4f5afd9d2 100644
> > > --- a/drivers/cxl/mem.c
> > > +++ b/drivers/cxl/mem.c
> > > @@ -686,7 +686,11 @@ static int cxl_validate_cmd_from_user(struct cxl_mem *cxlm,
> > > 
> > >         memcpy(out_cmd, c, sizeof(*c));
> > >         out_cmd->info.size_in = send_cmd->in.size;
> > > -       out_cmd->info.size_out = send_cmd->out.size;
> > > +       /*
> > > +        * XXX: out_cmd->info.size_out will be controlled by the driver, and the
> > > +        * specified number of bytes @send_cmd->out.size will be copied back out
> > > +        * to userspace.
> > > +        */
> > > 
> > >         return 0;
> > >  }  
> > 
> > This deals with the buffer overflow being triggered from userspace.
> > 
> > I'm still nervous.  I really don't like assuming hardware will do the right
> > thing and never send us more data than we expect.
> > 
> > Given that the check that it fits in the target buffer is simple,
> > I'd prefer to harden it so we know we can't have a problem.
> > 
> > Jonathan  
> 
> I'm working on hardening __cxl_mem_mbox_send_cmd now per your request. With
> that, I think this solves the issue, right?

Should do.  Thanks,
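
Just so we're on the same page, the sort of thing I had in mind is roughly
the following (completely untested sketch; the names -- out_len as the
payload length the device reported in the command register, payload as the
__iomem pointer to the payload registers, dev, and mbox_cmd->size_out /
mbox_cmd->payload_out as the driver-allocated output buffer -- are
assumptions based on how the series reads, so adjust to whatever you have):

	/*
	 * Sketch only: never trust the device-reported length.  Copy back
	 * at most the number of bytes the driver allocated for this
	 * command's output payload.
	 */
	size_t n = min_t(size_t, mbox_cmd->size_out, out_len);

	if (n < out_len)
		dev_warn(dev, "device returned %zu bytes, truncated to %zu\n",
			 out_len, n);

	memcpy_fromio(mbox_cmd->payload_out, payload, n);
	mbox_cmd->size_out = n;

Clamping with a warning, rather than failing the command outright, means a
misbehaving device can't turn into a memory-safety problem while we still
get a breadcrumb in the log that it happened.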

Jonathan