Re: [Linaro-mm-sig] [PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory

2019-02-28 Thread Liam Mark
+ Sumit

Hi Sumit,

Do you have any thoughts on this patch? 

It fixes a potential crash on older kernels, and I think limiting 
begin/end_cpu_access to only apply cache maintenance when the buffer is 
dma mapped makes sense from both a logical and a performance 
perspective.



On Wed, 6 Feb 2019, Ørjan Eide wrote:

> 
> I've run some testing, and this patch does indeed fix the crash in
> dma_sync_sg_for_cpu when it tried to use the 0 dma_address from the sg
> list.
> 
> Tested-by: Ørjan Eide 
> 
> I tested this on an older kernel, v4.14, since the dma-mapping code
> moved, in v4.19, to ignore the dma_address and instead use sg_phys() to
> get a valid address from the page, which is always valid in the ion sg
> lists. While this wouldn't crash on newer kernels, it's still good to
> avoid the unnecessary work when no CMO is needed.
> 

Isn't a fix like this also required from a stability perspective for 
future kernels? I understand from your analysis below that the crash has 
been fixed after 4.19 by using sg_phys to get the address, but aren't we 
breaking the DMA API contract by calling dma_sync_* without first dma 
mapping the memory? If so, then we have no guarantee that future 
implementations of functions like dma_direct_sync_sg_for_cpu will properly 
handle calls to dma_sync_* if the memory is not dma mapped.
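
To make the concern concrete, here is a simplified sketch (not verbatim 
kernel code; names follow the current arch_sync_dma_* style) of a sync 
implementation that trusts the sg list's dma_address, as the pre-v4.19 
swiotlb path did. With an sg list that was never dma mapped, dma_address 
is still 0 and the maintenance lands on the wrong memory:

```c
/*
 * Simplified sketch, not verbatim kernel code: a dma_sync_sg_for_cpu()
 * style implementation that derives the cache maintenance target from
 * sg_dma_address(), as the pre-v4.19 swiotlb path did.
 */
static void example_sync_sg_for_cpu(struct device *dev,
				    struct scatterlist *sgl, int nents,
				    enum dma_data_direction dir)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(sgl, sg, nents, i) {
		/*
		 * If the sg list was never dma mapped, sg_dma_address() is
		 * still 0, so the maintenance below hits whatever lives at
		 * DMA address 0 rather than the buffer's pages.
		 */
		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));

		arch_sync_dma_for_cpu(paddr, sg->length, dir);
	}
}
```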

> Is this patch a candidate for the relevant stable kernels, those that 
> have this bug exposed to user space via Ion and DMA_BUF_IOCTL_SYNC?
> 

My belief is that it is relevant for older kernels; otherwise an unprivileged 
malicious userspace application may be able to crash the system if it 
can call DMA_BUF_IOCTL_SYNC at the right time.
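
For reference, the userspace side of that timing is just the stock DMA-BUF 
sync ioctl; a rough sketch, assuming dmabuf_fd was obtained from an ion 
allocation and that a device has attached but not yet mapped the buffer:

```c
#include <linux/dma-buf.h>
#include <sys/ioctl.h>

/*
 * Rough userspace sketch: dmabuf_fd is assumed to come from an ion
 * allocation (not shown). If a device driver has called dma_buf_attach()
 * but not yet dma_buf_map_attachment(), this ioctl reaches
 * begin_cpu_access and, on affected kernels, dma_sync_sg_for_cpu() on an
 * sg list whose dma_address was never populated.
 */
static int touch_buffer_from_cpu(int dmabuf_fd)
{
	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW,
	};

	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync))
		return -1;

	/* ... CPU reads/writes through an mmap() of the buffer ... */

	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
	return ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
}
```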

BTW, thanks Ørjan for the testing and analysis you have carried out on this 
change.

Liam

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


Re: [Linaro-mm-sig] [PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory

2019-02-07 Thread Ørjan Eide
On Wed, Feb 06, 2019 at 11:31:04PM -0800, Christoph Hellwig wrote:
> The CPU may only access DMA mapped memory if ownership has been
> transferred back to the CPU using dma_sync_{single,sg}_to_cpu, and then
> before the device can access it again ownership needs to be transferred
> back to the device using dma_sync_{single,sg}_to_device.
> 
> > I've run some testing, and this patch does indeed fix the crash in
> > dma_sync_sg_for_cpu when it tried to use the 0 dma_address from the sg
> > list.
> > 
> > Tested-by: Ørjan Eide 
> > 
> > I tested this on an older kernel, v4.14, since the dma-mapping code
> > moved, in v4.19, to ignore the dma_address and instead use sg_phys() to
> > get a valid address from the page, which is always valid in the ion sg
> > lists. While this wouldn't crash on newer kernels, it's still good to
> > avoid the unnecessary work when no CMO is needed.
> 
> Can you also test it with CONFIG_DMA_API_DEBUG enabled, as that should
> catch all the usual mistakes in DMA API usage, including the one found?

I checked again with CONFIG_DMA_API_DEBUG=y, both with and without this
patch, and I didn't get any dma-mapping errors. 

The issue I hit, without this patch, is when a CPU access starts after a
device has attached (which causes ion to create a copy of the buffer's
sg list with dma_address zeroed) but before the device has mapped the
buffer.

-- 
Ørjan


Re: [Linaro-mm-sig] [PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory

2019-02-06 Thread Christoph Hellwig
The CPU may only access DMA mapped memory if ownership has been
transferred back to the CPU using dma_sync_{single,sg}_to_cpu, and then
before the device can access it again ownership needs to be transferred
back to the device using dma_sync_{single,sg}_to_device.
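
In code form, that ownership sequence looks roughly like the sketch below 
(a single streaming mapping using the standard DMA API; the surrounding 
driver context, dev, buf and size are assumed):

```c
/*
 * Sketch of the ownership hand-offs above for one streaming mapping;
 * dev, buf and size come from the surrounding (assumed) driver code.
 */
dma_addr_t addr = dma_map_single(dev, buf, size, DMA_FROM_DEVICE);

if (dma_mapping_error(dev, addr))
	return -ENOMEM;

/* device owns the buffer: device DMA may run against addr here */

dma_sync_single_for_cpu(dev, addr, size, DMA_FROM_DEVICE);
/* CPU owns the buffer: the CPU may read/write buf here */
dma_sync_single_for_device(dev, addr, size, DMA_FROM_DEVICE);

/* device owns the buffer again until the next sync_for_cpu or unmap */

dma_unmap_single(dev, addr, size, DMA_FROM_DEVICE);
```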

> I've run some testing, and this patch does indeed fix the crash in
> dma_sync_sg_for_cpu when it tried to use the 0 dma_address from the sg
> list.
> 
> Tested-by: Ørjan Eide 
> 
> I tested this on an older kernel, v4.14, since the dma-mapping code
> moved, in v4.19, to ignore the dma_address and instead use sg_phys() to
> get a valid address from the page, which is always valid in the ion sg
> lists. While this wouldn't crash on newer kernels, it's still good to
> avoid the unnecessary work when no CMO is needed.

Can you also test it with CONFIG_DMA_API_DEBUG enabled, as that should
catch all the usual mistakes in DMA API usage, including the one found?


Re: [Linaro-mm-sig] [PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory

2019-02-06 Thread Ørjan Eide
On Wed, Jan 30, 2019 at 11:31:23AM +, Brian Starkey wrote:
> 
> On Tue, Jan 29, 2019 at 03:44:53PM -0800, Liam Mark wrote:
> > On Fri, 18 Jan 2019, Liam Mark wrote:
> > 
> > > On Fri, 18 Jan 2019, Andrew F. Davis wrote:
> > > 
> > > > On 1/18/19 12:37 PM, Liam Mark wrote:
> > > > > The ION begin_cpu_access and end_cpu_access functions use the
> > > > > dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
> > > > > maintenance.
> > > > > 
> > > > > Currently it is possible to apply cache maintenance, via the
> > > > > begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
> > > > > dma mapped.
> > > > > 
> > > > > The dma sync sg APIs should not be called on sg lists which have not 
> > > > > been
> > > > > dma mapped as this can result in cache maintenance being applied to 
> > > > > the
> > > > > wrong address. If an sg list has not been dma mapped then its 
> > > > > dma_address
> > > > > field has not been populated, some dma ops such as the 
> > > > > swiotlb_dma_ops ops
> > > > > use the dma_address field to calculate the address onto which to apply
> > > > > cache maintenance.
> > > > > 
> > > > > Also I don’t think we want CMOs to be applied to a buffer which is not
> > > > > dma mapped as the memory should already be coherent for access from 
> > > > > the
> > > > > CPU. Any CMOs required for device access taken care of in the
> > > > > dma_buf_map_attachment and dma_buf_unmap_attachment calls.
> > > > > So really it only makes sense for begin_cpu_access and end_cpu_access 
> > > > > to
> > > > > apply CMOs if the buffer is dma mapped.
> > > > > 
> > > > > Fix the ION begin_cpu_access and end_cpu_access functions to only 
> > > > > apply
> > > > > cache maintenance to buffers which are dma mapped.
> > > > > 
> > > > > Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for 
> > > > > syncing and mapping")
> > > > > Signed-off-by: Liam Mark 
> > > > > ---
> > > > >  drivers/staging/android/ion/ion.c | 26 +-
> > > > >  1 file changed, 21 insertions(+), 5 deletions(-)
> > > > > 
> > > > > diff --git a/drivers/staging/android/ion/ion.c 
> > > > > b/drivers/staging/android/ion/ion.c
> > > > > index 6f5afab7c1a1..1fe633a7fdba 100644
> > > > > --- a/drivers/staging/android/ion/ion.c
> > > > > +++ b/drivers/staging/android/ion/ion.c
> > > > > @@ -210,6 +210,7 @@ struct ion_dma_buf_attachment {
> > > > >   struct device *dev;
> > > > >   struct sg_table *table;
> > > > >   struct list_head list;
> > > > > + bool dma_mapped;
> > > > >  };
> > > > >  
> > > > >  static int ion_dma_buf_attach(struct dma_buf *dmabuf,
> > > > > @@ -231,6 +232,7 @@ static int ion_dma_buf_attach(struct dma_buf 
> > > > > *dmabuf,
> > > > >  
> > > > >   a->table = table;
> > > > >   a->dev = attachment->dev;
> > > > > + a->dma_mapped = false;
> > > > >   INIT_LIST_HEAD(&a->list);
> > > > >  
> > > > >   attachment->priv = a;
> > > > > @@ -261,12 +263,18 @@ static struct sg_table *ion_map_dma_buf(struct 
> > > > > dma_buf_attachment *attachment,
> > > > >  {
> > > > >   struct ion_dma_buf_attachment *a = attachment->priv;
> > > > >   struct sg_table *table;
> > > > > + struct ion_buffer *buffer = attachment->dmabuf->priv;
> > > > >  
> > > > >   table = a->table;
> > > > >  
> > > > > + mutex_lock(&buffer->lock);
> > > > >   if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> > > > > - direction))
> > > > > + direction)) {
> > > > > + mutex_unlock(&buffer->lock);
> > > > >   return ERR_PTR(-ENOMEM);
> > > > > + }
> > > > > + a->dma_mapped = true;
> > > > > + mutex_unlock(&buffer->lock);
> > > > >  
> > > > >   return table;
> > > > >  }
> > > > > @@ -275,7 +283,13 @@ static void ion_unmap_dma_buf(struct 
> > > > > dma_buf_attachment *attachment,
> > > > > struct sg_table *table,
> > > > > enum dma_data_direction direction)
> > > > >  {
> > > > > + struct ion_dma_buf_attachment *a = attachment->priv;
> > > > > + struct ion_buffer *buffer = attachment->dmabuf->priv;
> > > > > +
> > > > > + mutex_lock(&buffer->lock);
> > > > >   dma_unmap_sg(attachment->dev, table->sgl, table->nents, 
> > > > > direction);
> > > > > + a->dma_mapped = false;
> > > > > + mutex_unlock(&buffer->lock);
> > > > >  }
> > > > >  
> > > > >  static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct 
> > > > > *vma)
> > > > > @@ -346,8 +360,9 @@ static int ion_dma_buf_begin_cpu_access(struct 
> > > > > dma_buf *dmabuf,
> > > > >  
> > > > >   mutex_lock(&buffer->lock);
> > > > >   list_for_each_entry(a, &buffer->attachments, list) {
> > > > 
> > > > When no devices are attached then buffer->attachments is empty and the
> > > > below does not run, so if I understand this patch correctly then what
> > > > you are protecting against is CPU access in the window after
> > > > dma_buf_attach but before dma_buf_map.

Re: [PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory

2019-01-30 Thread Andrew F. Davis
On 1/29/19 5:44 PM, Liam Mark wrote:
> On Fri, 18 Jan 2019, Liam Mark wrote:
> 
>> On Fri, 18 Jan 2019, Andrew F. Davis wrote:
>>
>>> On 1/18/19 12:37 PM, Liam Mark wrote:
 The ION begin_cpu_access and end_cpu_access functions use the
 dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
 maintenance.

 Currently it is possible to apply cache maintenance, via the
 begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
 dma mapped.

 The dma sync sg APIs should not be called on sg lists which have not been
 dma mapped as this can result in cache maintenance being applied to the
 wrong address. If an sg list has not been dma mapped then its dma_address
 field has not been populated, some dma ops such as the swiotlb_dma_ops ops
 use the dma_address field to calculate the address onto which to apply
 cache maintenance.

 Also I don’t think we want CMOs to be applied to a buffer which is not
 dma mapped as the memory should already be coherent for access from the
 CPU. Any CMOs required for device access taken care of in the
 dma_buf_map_attachment and dma_buf_unmap_attachment calls.
 So really it only makes sense for begin_cpu_access and end_cpu_access to
 apply CMOs if the buffer is dma mapped.

 Fix the ION begin_cpu_access and end_cpu_access functions to only apply
 cache maintenance to buffers which are dma mapped.

 Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing 
 and mapping")
 Signed-off-by: Liam Mark 
 ---
  drivers/staging/android/ion/ion.c | 26 +-
  1 file changed, 21 insertions(+), 5 deletions(-)

 diff --git a/drivers/staging/android/ion/ion.c 
 b/drivers/staging/android/ion/ion.c
 index 6f5afab7c1a1..1fe633a7fdba 100644
 --- a/drivers/staging/android/ion/ion.c
 +++ b/drivers/staging/android/ion/ion.c
 @@ -210,6 +210,7 @@ struct ion_dma_buf_attachment {
struct device *dev;
struct sg_table *table;
struct list_head list;
 +  bool dma_mapped;
  };
  
  static int ion_dma_buf_attach(struct dma_buf *dmabuf,
 @@ -231,6 +232,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf,
  
a->table = table;
a->dev = attachment->dev;
 +  a->dma_mapped = false;
INIT_LIST_HEAD(&a->list);
  
attachment->priv = a;
 @@ -261,12 +263,18 @@ static struct sg_table *ion_map_dma_buf(struct 
 dma_buf_attachment *attachment,
  {
struct ion_dma_buf_attachment *a = attachment->priv;
struct sg_table *table;
 +  struct ion_buffer *buffer = attachment->dmabuf->priv;
  
table = a->table;
  
 +  mutex_lock(&buffer->lock);
if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
 -  direction))
 +  direction)) {
 +  mutex_unlock(&buffer->lock);
return ERR_PTR(-ENOMEM);
 +  }
 +  a->dma_mapped = true;
 +  mutex_unlock(&buffer->lock);
  
return table;
  }
 @@ -275,7 +283,13 @@ static void ion_unmap_dma_buf(struct 
 dma_buf_attachment *attachment,
  struct sg_table *table,
  enum dma_data_direction direction)
  {
 +  struct ion_dma_buf_attachment *a = attachment->priv;
 +  struct ion_buffer *buffer = attachment->dmabuf->priv;
 +
 +  mutex_lock(&buffer->lock);
dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
 +  a->dma_mapped = false;
 +  mutex_unlock(&buffer->lock);
  }
  
  static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 @@ -346,8 +360,9 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf 
 *dmabuf,
  
mutex_lock(&buffer->lock);
list_for_each_entry(a, &buffer->attachments, list) {
>>>
>>> When no devices are attached then buffer->attachments is empty and the
>>> below does not run, so if I understand this patch correctly then what
>>> you are protecting against is CPU access in the window after
>>> dma_buf_attach but before dma_buf_map.
>>>
>>
>> Yes
>>
>>> This is the kind of thing that again makes me think a couple more
>>> ordering requirements on DMA-BUF ops are needed. DMA-BUFs do not require
>>> the backing memory to be allocated until map time, this is why the
>>> dma_address field would still be null as you note in the commit message.
>>> So why should the CPU be performing accesses on a buffer that is not
>>> actually backed yet?
>>>
>>> I can think of two solutions:
>>>
>>> 1) Only allow CPU access (mmap, kmap, {begin,end}_cpu_access) while at
>>> least one device is mapped.
>>>
>>
>> Would be quite limiting to clients.
>>

I can agree with that, option two seems more reasonable.

>>> 2) Treat the CPU access request like a device map request and
>>> trigger the allocation of backing memory just like if a device map had

Re: [PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory

2019-01-30 Thread Brian Starkey
Hi Liam,

On Tue, Jan 29, 2019 at 03:44:53PM -0800, Liam Mark wrote:
> On Fri, 18 Jan 2019, Liam Mark wrote:
> 
> > On Fri, 18 Jan 2019, Andrew F. Davis wrote:
> > 
> > > On 1/18/19 12:37 PM, Liam Mark wrote:
> > > > The ION begin_cpu_access and end_cpu_access functions use the
> > > > dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
> > > > maintenance.
> > > > 
> > > > Currently it is possible to apply cache maintenance, via the
> > > > begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
> > > > dma mapped.
> > > > 
> > > > The dma sync sg APIs should not be called on sg lists which have not 
> > > > been
> > > > dma mapped as this can result in cache maintenance being applied to the
> > > > wrong address. If an sg list has not been dma mapped then its 
> > > > dma_address
> > > > field has not been populated, some dma ops such as the swiotlb_dma_ops 
> > > > ops
> > > > use the dma_address field to calculate the address onto which to apply
> > > > cache maintenance.
> > > > 
> > > > Also I don’t think we want CMOs to be applied to a buffer which is not
> > > > dma mapped as the memory should already be coherent for access from the
> > > > CPU. Any CMOs required for device access taken care of in the
> > > > dma_buf_map_attachment and dma_buf_unmap_attachment calls.
> > > > So really it only makes sense for begin_cpu_access and end_cpu_access to
> > > > apply CMOs if the buffer is dma mapped.
> > > > 
> > > > Fix the ION begin_cpu_access and end_cpu_access functions to only apply
> > > > cache maintenance to buffers which are dma mapped.
> > > > 
> > > > Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for 
> > > > syncing and mapping")
> > > > Signed-off-by: Liam Mark 
> > > > ---
> > > >  drivers/staging/android/ion/ion.c | 26 +-
> > > >  1 file changed, 21 insertions(+), 5 deletions(-)
> > > > 
> > > > diff --git a/drivers/staging/android/ion/ion.c 
> > > > b/drivers/staging/android/ion/ion.c
> > > > index 6f5afab7c1a1..1fe633a7fdba 100644
> > > > --- a/drivers/staging/android/ion/ion.c
> > > > +++ b/drivers/staging/android/ion/ion.c
> > > > @@ -210,6 +210,7 @@ struct ion_dma_buf_attachment {
> > > > struct device *dev;
> > > > struct sg_table *table;
> > > > struct list_head list;
> > > > +   bool dma_mapped;
> > > >  };
> > > >  
> > > >  static int ion_dma_buf_attach(struct dma_buf *dmabuf,
> > > > @@ -231,6 +232,7 @@ static int ion_dma_buf_attach(struct dma_buf 
> > > > *dmabuf,
> > > >  
> > > > a->table = table;
> > > > a->dev = attachment->dev;
> > > > +   a->dma_mapped = false;
> > > > INIT_LIST_HEAD(&a->list);
> > > >  
> > > > attachment->priv = a;
> > > > @@ -261,12 +263,18 @@ static struct sg_table *ion_map_dma_buf(struct 
> > > > dma_buf_attachment *attachment,
> > > >  {
> > > > struct ion_dma_buf_attachment *a = attachment->priv;
> > > > struct sg_table *table;
> > > > +   struct ion_buffer *buffer = attachment->dmabuf->priv;
> > > >  
> > > > table = a->table;
> > > >  
> > > > +   mutex_lock(&buffer->lock);
> > > > if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> > > > -   direction))
> > > > +   direction)) {
> > > > +   mutex_unlock(&buffer->lock);
> > > > return ERR_PTR(-ENOMEM);
> > > > +   }
> > > > +   a->dma_mapped = true;
> > > > +   mutex_unlock(&buffer->lock);
> > > >  
> > > > return table;
> > > >  }
> > > > @@ -275,7 +283,13 @@ static void ion_unmap_dma_buf(struct 
> > > > dma_buf_attachment *attachment,
> > > >   struct sg_table *table,
> > > >   enum dma_data_direction direction)
> > > >  {
> > > > +   struct ion_dma_buf_attachment *a = attachment->priv;
> > > > +   struct ion_buffer *buffer = attachment->dmabuf->priv;
> > > > +
> > > > +   mutex_lock(&buffer->lock);
> > > > dma_unmap_sg(attachment->dev, table->sgl, table->nents, 
> > > > direction);
> > > > +   a->dma_mapped = false;
> > > > +   mutex_unlock(&buffer->lock);
> > > >  }
> > > >  
> > > >  static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> > > > @@ -346,8 +360,9 @@ static int ion_dma_buf_begin_cpu_access(struct 
> > > > dma_buf *dmabuf,
> > > >  
> > > > mutex_lock(&buffer->lock);
> > > > list_for_each_entry(a, &buffer->attachments, list) {
> > > 
> > > When no devices are attached then buffer->attachments is empty and the
> > > below does not run, so if I understand this patch correctly then what
> > > you are protecting against is CPU access in the window after
> > > dma_buf_attach but before dma_buf_map.
> > > 
> > 
> > Yes
> > 
> > > This is the kind of thing that again makes me think a couple more
> > > ordering requirements on DMA-BUF ops are needed. DMA-BUFs do not require
> > > the backing memory to be allocated until map time, this is why the
> > > 

Re: [PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory

2019-01-29 Thread Liam Mark
On Fri, 18 Jan 2019, Liam Mark wrote:

> On Fri, 18 Jan 2019, Andrew F. Davis wrote:
> 
> > On 1/18/19 12:37 PM, Liam Mark wrote:
> > > The ION begin_cpu_access and end_cpu_access functions use the
> > > dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
> > > maintenance.
> > > 
> > > Currently it is possible to apply cache maintenance, via the
> > > begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
> > > dma mapped.
> > > 
> > > The dma sync sg APIs should not be called on sg lists which have not been
> > > dma mapped as this can result in cache maintenance being applied to the
> > > wrong address. If an sg list has not been dma mapped then its dma_address
> > > field has not been populated, some dma ops such as the swiotlb_dma_ops ops
> > > use the dma_address field to calculate the address onto which to apply
> > > cache maintenance.
> > > 
> > > Also I don’t think we want CMOs to be applied to a buffer which is not
> > > dma mapped as the memory should already be coherent for access from the
> > > CPU. Any CMOs required for device access taken care of in the
> > > dma_buf_map_attachment and dma_buf_unmap_attachment calls.
> > > So really it only makes sense for begin_cpu_access and end_cpu_access to
> > > apply CMOs if the buffer is dma mapped.
> > > 
> > > Fix the ION begin_cpu_access and end_cpu_access functions to only apply
> > > cache maintenance to buffers which are dma mapped.
> > > 
> > > Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing 
> > > and mapping")
> > > Signed-off-by: Liam Mark 
> > > ---
> > >  drivers/staging/android/ion/ion.c | 26 +-
> > >  1 file changed, 21 insertions(+), 5 deletions(-)
> > > 
> > > diff --git a/drivers/staging/android/ion/ion.c 
> > > b/drivers/staging/android/ion/ion.c
> > > index 6f5afab7c1a1..1fe633a7fdba 100644
> > > --- a/drivers/staging/android/ion/ion.c
> > > +++ b/drivers/staging/android/ion/ion.c
> > > @@ -210,6 +210,7 @@ struct ion_dma_buf_attachment {
> > >   struct device *dev;
> > >   struct sg_table *table;
> > >   struct list_head list;
> > > + bool dma_mapped;
> > >  };
> > >  
> > >  static int ion_dma_buf_attach(struct dma_buf *dmabuf,
> > > @@ -231,6 +232,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf,
> > >  
> > >   a->table = table;
> > >   a->dev = attachment->dev;
> > > + a->dma_mapped = false;
> > >   INIT_LIST_HEAD(&a->list);
> > >  
> > >   attachment->priv = a;
> > > @@ -261,12 +263,18 @@ static struct sg_table *ion_map_dma_buf(struct 
> > > dma_buf_attachment *attachment,
> > >  {
> > >   struct ion_dma_buf_attachment *a = attachment->priv;
> > >   struct sg_table *table;
> > > + struct ion_buffer *buffer = attachment->dmabuf->priv;
> > >  
> > >   table = a->table;
> > >  
> > > + mutex_lock(&buffer->lock);
> > >   if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> > > - direction))
> > > + direction)) {
> > > + mutex_unlock(&buffer->lock);
> > >   return ERR_PTR(-ENOMEM);
> > > + }
> > > + a->dma_mapped = true;
> > > + mutex_unlock(&buffer->lock);
> > >  
> > >   return table;
> > >  }
> > > @@ -275,7 +283,13 @@ static void ion_unmap_dma_buf(struct 
> > > dma_buf_attachment *attachment,
> > > struct sg_table *table,
> > > enum dma_data_direction direction)
> > >  {
> > > + struct ion_dma_buf_attachment *a = attachment->priv;
> > > + struct ion_buffer *buffer = attachment->dmabuf->priv;
> > > +
> > > + mutex_lock(&buffer->lock);
> > >   dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
> > > + a->dma_mapped = false;
> > > + mutex_unlock(&buffer->lock);
> > >  }
> > >  
> > >  static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> > > @@ -346,8 +360,9 @@ static int ion_dma_buf_begin_cpu_access(struct 
> > > dma_buf *dmabuf,
> > >  
> > >   mutex_lock(&buffer->lock);
> > >   list_for_each_entry(a, &buffer->attachments, list) {
> > 
> > When no devices are attached then buffer->attachments is empty and the
> > below does not run, so if I understand this patch correctly then what
> > you are protecting against is CPU access in the window after
> > dma_buf_attach but before dma_buf_map.
> > 
> 
> Yes
> 
> > This is the kind of thing that again makes me think a couple more
> > ordering requirements on DMA-BUF ops are needed. DMA-BUFs do not require
> > the backing memory to be allocated until map time, this is why the
> > dma_address field would still be null as you note in the commit message.
> > So why should the CPU be performing accesses on a buffer that is not
> > actually backed yet?
> > 
> > I can think of two solutions:
> > 
> > 1) Only allow CPU access (mmap, kmap, {begin,end}_cpu_access) while at
> > least one device is mapped.
> > 
> 
> Would be quite limiting to clients.
> 
> > 2) Treat the CPU access request like a device map request and
> > trigger the allocation of backing memory just like if a device map had
> > come in.
> > 
> 
> Which 

Re: [PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory

2019-01-18 Thread Liam Mark
On Fri, 18 Jan 2019, Andrew F. Davis wrote:

> On 1/18/19 12:37 PM, Liam Mark wrote:
> > The ION begin_cpu_access and end_cpu_access functions use the
> > dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
> > maintenance.
> > 
> > Currently it is possible to apply cache maintenance, via the
> > begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
> > dma mapped.
> > 
> > The dma sync sg APIs should not be called on sg lists which have not been
> > dma mapped as this can result in cache maintenance being applied to the
> > wrong address. If an sg list has not been dma mapped then its dma_address
> > field has not been populated, some dma ops such as the swiotlb_dma_ops ops
> > use the dma_address field to calculate the address onto which to apply
> > cache maintenance.
> > 
> > Also I don’t think we want CMOs to be applied to a buffer which is not
> > dma mapped as the memory should already be coherent for access from the
> > CPU. Any CMOs required for device access taken care of in the
> > dma_buf_map_attachment and dma_buf_unmap_attachment calls.
> > So really it only makes sense for begin_cpu_access and end_cpu_access to
> > apply CMOs if the buffer is dma mapped.
> > 
> > Fix the ION begin_cpu_access and end_cpu_access functions to only apply
> > cache maintenance to buffers which are dma mapped.
> > 
> > Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing 
> > and mapping")
> > Signed-off-by: Liam Mark 
> > ---
> >  drivers/staging/android/ion/ion.c | 26 +-
> >  1 file changed, 21 insertions(+), 5 deletions(-)
> > 
> > diff --git a/drivers/staging/android/ion/ion.c 
> > b/drivers/staging/android/ion/ion.c
> > index 6f5afab7c1a1..1fe633a7fdba 100644
> > --- a/drivers/staging/android/ion/ion.c
> > +++ b/drivers/staging/android/ion/ion.c
> > @@ -210,6 +210,7 @@ struct ion_dma_buf_attachment {
> > struct device *dev;
> > struct sg_table *table;
> > struct list_head list;
> > +   bool dma_mapped;
> >  };
> >  
> >  static int ion_dma_buf_attach(struct dma_buf *dmabuf,
> > @@ -231,6 +232,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf,
> >  
> > a->table = table;
> > a->dev = attachment->dev;
> > +   a->dma_mapped = false;
> > INIT_LIST_HEAD(&a->list);
> >  
> > attachment->priv = a;
> > @@ -261,12 +263,18 @@ static struct sg_table *ion_map_dma_buf(struct 
> > dma_buf_attachment *attachment,
> >  {
> > struct ion_dma_buf_attachment *a = attachment->priv;
> > struct sg_table *table;
> > +   struct ion_buffer *buffer = attachment->dmabuf->priv;
> >  
> > table = a->table;
> >  
> > +   mutex_lock(&buffer->lock);
> > if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> > -   direction))
> > +   direction)) {
> > +   mutex_unlock(&buffer->lock);
> > return ERR_PTR(-ENOMEM);
> > +   }
> > +   a->dma_mapped = true;
> > +   mutex_unlock(&buffer->lock);
> >  
> > return table;
> >  }
> > @@ -275,7 +283,13 @@ static void ion_unmap_dma_buf(struct 
> > dma_buf_attachment *attachment,
> >   struct sg_table *table,
> >   enum dma_data_direction direction)
> >  {
> > +   struct ion_dma_buf_attachment *a = attachment->priv;
> > +   struct ion_buffer *buffer = attachment->dmabuf->priv;
> > +
> > +   mutex_lock(&buffer->lock);
> > dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
> > +   a->dma_mapped = false;
> > +   mutex_unlock(&buffer->lock);
> >  }
> >  
> >  static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> > @@ -346,8 +360,9 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf 
> > *dmabuf,
> >  
> > mutex_lock(&buffer->lock);
> > list_for_each_entry(a, &buffer->attachments, list) {
> 
> When no devices are attached then buffer->attachments is empty and the
> below does not run, so if I understand this patch correctly then what
> you are protecting against is CPU access in the window after
> dma_buf_attach but before dma_buf_map.
> 

Yes

> This is the kind of thing that again makes me think a couple more
> ordering requirements on DMA-BUF ops are needed. DMA-BUFs do not require
> the backing memory to be allocated until map time, this is why the
> dma_address field would still be null as you note in the commit message.
> So why should the CPU be performing accesses on a buffer that is not
> actually backed yet?
> 
> I can think of two solutions:
> 
> 1) Only allow CPU access (mmap, kmap, {begin,end}_cpu_access) while at
> least one device is mapped.
> 

Would be quite limiting to clients.

> 2) Treat the CPU access request like a device map request and
> trigger the allocation of backing memory just like if a device map had
> come in.
> 

Which is, as you mention pretty much what we have now (though the buffer 
is allocated even earlier).

> I know the current Ion heaps (and most other DMA-BUF exporters) all do
> the allocation up front so the memory is already there, 

Re: [PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory

2019-01-18 Thread Andrew F. Davis
On 1/18/19 12:37 PM, Liam Mark wrote:
> The ION begin_cpu_access and end_cpu_access functions use the
> dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
> maintenance.
> 
> Currently it is possible to apply cache maintenance, via the
> begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
> dma mapped.
> 
> The dma sync sg APIs should not be called on sg lists which have not been
> dma mapped as this can result in cache maintenance being applied to the
> wrong address. If an sg list has not been dma mapped then its dma_address
> field has not been populated, some dma ops such as the swiotlb_dma_ops ops
> use the dma_address field to calculate the address onto which to apply
> cache maintenance.
> 
> Also I don’t think we want CMOs to be applied to a buffer which is not
> dma mapped as the memory should already be coherent for access from the
> CPU. Any CMOs required for device access taken care of in the
> dma_buf_map_attachment and dma_buf_unmap_attachment calls.
> So really it only makes sense for begin_cpu_access and end_cpu_access to
> apply CMOs if the buffer is dma mapped.
> 
> Fix the ION begin_cpu_access and end_cpu_access functions to only apply
> cache maintenance to buffers which are dma mapped.
> 
> Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing and 
> mapping")
> Signed-off-by: Liam Mark 
> ---
>  drivers/staging/android/ion/ion.c | 26 +-
>  1 file changed, 21 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/staging/android/ion/ion.c 
> b/drivers/staging/android/ion/ion.c
> index 6f5afab7c1a1..1fe633a7fdba 100644
> --- a/drivers/staging/android/ion/ion.c
> +++ b/drivers/staging/android/ion/ion.c
> @@ -210,6 +210,7 @@ struct ion_dma_buf_attachment {
>   struct device *dev;
>   struct sg_table *table;
>   struct list_head list;
> + bool dma_mapped;
>  };
>  
>  static int ion_dma_buf_attach(struct dma_buf *dmabuf,
> @@ -231,6 +232,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf,
>  
>   a->table = table;
>   a->dev = attachment->dev;
> + a->dma_mapped = false;
>   INIT_LIST_HEAD(&a->list);
>  
>   attachment->priv = a;
> @@ -261,12 +263,18 @@ static struct sg_table *ion_map_dma_buf(struct 
> dma_buf_attachment *attachment,
>  {
>   struct ion_dma_buf_attachment *a = attachment->priv;
>   struct sg_table *table;
> + struct ion_buffer *buffer = attachment->dmabuf->priv;
>  
>   table = a->table;
>  
> + mutex_lock(&buffer->lock);
>   if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> - direction))
> + direction)) {
> + mutex_unlock(&buffer->lock);
>   return ERR_PTR(-ENOMEM);
> + }
> + a->dma_mapped = true;
> + mutex_unlock(&buffer->lock);
>  
>   return table;
>  }
> @@ -275,7 +283,13 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment 
> *attachment,
> struct sg_table *table,
> enum dma_data_direction direction)
>  {
> + struct ion_dma_buf_attachment *a = attachment->priv;
> + struct ion_buffer *buffer = attachment->dmabuf->priv;
> +
> + mutex_lock(&buffer->lock);
>   dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
> + a->dma_mapped = false;
> + mutex_unlock(&buffer->lock);
>  }
>  
>  static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> @@ -346,8 +360,9 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf 
> *dmabuf,
>  
>   mutex_lock(&buffer->lock);
>   list_for_each_entry(a, &buffer->attachments, list) {

When no devices are attached then buffer->attachments is empty and the
below does not run, so if I understand this patch correctly then what
you are protecting against is CPU access in the window after
dma_buf_attach but before dma_buf_map.

This is the kind of thing that again makes me think a couple more
ordering requirements on DMA-BUF ops are needed. DMA-BUFs do not require
the backing memory to be allocated until map time, this is why the
dma_address field would still be null as you note in the commit message.
So why should the CPU be performing accesses on a buffer that is not
actually backed yet?

I can think of two solutions:

1) Only allow CPU access (mmap, kmap, {begin,end}_cpu_access) while at
least one device is mapped.

2) Treat the CPU access request like a device map request and
trigger the allocation of backing memory just like if a device map had
come in.
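
As a rough illustration of option 1, a guard like the sketch below could be
checked at the start of the CPU access paths (the helper name is made up;
the dma_mapped flag and ion_dma_buf_attachment fields are those introduced
by this patch):

```c
/*
 * Hypothetical helper for option 1, not part of this patch: report whether
 * any attached device currently has the buffer dma mapped. Callers such as
 * begin_cpu_access could bail out (e.g. with -EINVAL) when this is false.
 * Must be called with buffer->lock held.
 */
static bool ion_buffer_has_dma_mapping(struct ion_buffer *buffer)
{
	struct ion_dma_buf_attachment *a;

	list_for_each_entry(a, &buffer->attachments, list)
		if (a->dma_mapped)
			return true;

	return false;
}
```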

I know the current Ion heaps (and most other DMA-BUF exporters) all do
the allocation up front so the memory is already there, but DMA-BUF was
designed with late allocation in mind. I have a use-case I'm working on
that finally exercises this DMA-BUF functionality and I would like to
have it export through ION. This patch doesn't prevent that, but seems
like it is endorsing the idea that buffers always need to be backed,
even before device attach/map has occurred.

Either of the 

[PATCH 2/4] staging: android: ion: Restrict cache maintenance to dma mapped memory

2019-01-18 Thread Liam Mark
The ION begin_cpu_access and end_cpu_access functions use the
dma_sync_sg_for_cpu and dma_sync_sg_for_device APIs to perform cache
maintenance.

Currently it is possible to apply cache maintenance, via the
begin_cpu_access and end_cpu_access APIs, to ION buffers which are not
dma mapped.

The dma sync sg APIs should not be called on sg lists which have not been
dma mapped, as this can result in cache maintenance being applied to the
wrong address. If an sg list has not been dma mapped then its dma_address
field has not been populated, and some dma ops, such as swiotlb_dma_ops,
use the dma_address field to calculate the address on which to apply
cache maintenance.

Also, I don't think we want CMOs to be applied to a buffer which is not
dma mapped, as the memory should already be coherent for access from the
CPU. Any CMOs required for device access are taken care of in the
dma_buf_map_attachment and dma_buf_unmap_attachment calls.
So really it only makes sense for begin_cpu_access and end_cpu_access to
apply CMOs if the buffer is dma mapped.

Fix the ION begin_cpu_access and end_cpu_access functions to only apply
cache maintenance to buffers which are dma mapped.

Fixes: 2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing and 
mapping")
Signed-off-by: Liam Mark 
---
 drivers/staging/android/ion/ion.c | 26 +-
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
index 6f5afab7c1a1..1fe633a7fdba 100644
--- a/drivers/staging/android/ion/ion.c
+++ b/drivers/staging/android/ion/ion.c
@@ -210,6 +210,7 @@ struct ion_dma_buf_attachment {
 	struct device *dev;
 	struct sg_table *table;
 	struct list_head list;
+	bool dma_mapped;
 };
 
 static int ion_dma_buf_attach(struct dma_buf *dmabuf,
@@ -231,6 +232,7 @@ static int ion_dma_buf_attach(struct dma_buf *dmabuf,
 
 	a->table = table;
 	a->dev = attachment->dev;
+	a->dma_mapped = false;
 	INIT_LIST_HEAD(&a->list);
 
 	attachment->priv = a;
@@ -261,12 +263,18 @@ static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
 {
 	struct ion_dma_buf_attachment *a = attachment->priv;
 	struct sg_table *table;
+	struct ion_buffer *buffer = attachment->dmabuf->priv;
 
 	table = a->table;
 
+	mutex_lock(&buffer->lock);
 	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
-			direction))
+			direction)) {
+		mutex_unlock(&buffer->lock);
 		return ERR_PTR(-ENOMEM);
+	}
+	a->dma_mapped = true;
+	mutex_unlock(&buffer->lock);
 
 	return table;
 }
@@ -275,7 +283,13 @@ static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
 			      struct sg_table *table,
 			      enum dma_data_direction direction)
 {
+	struct ion_dma_buf_attachment *a = attachment->priv;
+	struct ion_buffer *buffer = attachment->dmabuf->priv;
+
+	mutex_lock(&buffer->lock);
 	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
+	a->dma_mapped = false;
+	mutex_unlock(&buffer->lock);
 }
 
 static int ion_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
@@ -346,8 +360,9 @@ static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 
 	mutex_lock(&buffer->lock);
 	list_for_each_entry(a, &buffer->attachments, list) {
-		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
-				    direction);
+		if (a->dma_mapped)
+			dma_sync_sg_for_cpu(a->dev, a->table->sgl,
+					    a->table->nents, direction);
 	}
 
 unlock:
@@ -369,8 +384,9 @@ static int ion_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 
 	mutex_lock(&buffer->lock);
 	list_for_each_entry(a, &buffer->attachments, list) {
-		dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
-				       direction);
+		if (a->dma_mapped)
+			dma_sync_sg_for_device(a->dev, a->table->sgl,
+					       a->table->nents, direction);
 	}
 	mutex_unlock(&buffer->lock);
 
-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, 
a Linux Foundation Collaborative Project
