Re: [RFC PATCH V3 4/5] platform: mtk-isp: Add Mediatek DIP driver

2019-09-09 Thread Tomasz Figa
Hi Frederic,

On Tue, Sep 10, 2019 at 4:23 AM  wrote:
>
> From: Frederic Chen 
>
> This patch adds the driver of Digital Image Processing (DIP)
> unit in Mediatek ISP system, providing image format
> conversion, resizing, and rotation features.
>
> The mtk-isp directory will contain drivers for multiple IP
> blocks found in Mediatek ISP system. It will include ISP
> Pass 1 driver (CAM), sensor interface driver, DIP driver and
> face detection driver.
>
> Signed-off-by: Frederic Chen 
> ---
>  drivers/media/platform/mtk-isp/Makefile   |7 +
>  .../media/platform/mtk-isp/isp_50/Makefile|7 +
>  .../platform/mtk-isp/isp_50/dip/Makefile  |   18 +
>  .../platform/mtk-isp/isp_50/dip/mtk_dip-dev.c |  650 +
>  .../platform/mtk-isp/isp_50/dip/mtk_dip-dev.h |  566 +
>  .../platform/mtk-isp/isp_50/dip/mtk_dip-hw.h  |  156 ++
>  .../platform/mtk-isp/isp_50/dip/mtk_dip-sys.c |  521 
>  .../mtk-isp/isp_50/dip/mtk_dip-v4l2.c | 2255 +
>  8 files changed, 4180 insertions(+)
>  create mode 100644 drivers/media/platform/mtk-isp/Makefile
>  create mode 100644 drivers/media/platform/mtk-isp/isp_50/Makefile
>  create mode 100644 drivers/media/platform/mtk-isp/isp_50/dip/Makefile
>  create mode 100644 drivers/media/platform/mtk-isp/isp_50/dip/mtk_dip-dev.c
>  create mode 100644 drivers/media/platform/mtk-isp/isp_50/dip/mtk_dip-dev.h
>  create mode 100644 drivers/media/platform/mtk-isp/isp_50/dip/mtk_dip-hw.h
>  create mode 100644 drivers/media/platform/mtk-isp/isp_50/dip/mtk_dip-sys.c
>  create mode 100644 drivers/media/platform/mtk-isp/isp_50/dip/mtk_dip-v4l2.c
>

Thanks for sending v3!

I'm going to do a full review a bit later, but please check one
comment about power handling below.

Other than that one comment, from a quick look, I think we only have a
number of style issues left. Thanks for the hard work!

[snip]
> +static void dip_runner_func(struct work_struct *work)
> +{
> +   struct mtk_dip_request *req = mtk_dip_hw_mdp_work_to_req(work);
> +   struct mtk_dip_dev *dip_dev = req->dip_pipe->dip_dev;
> +   struct img_config *config_data =
> +   (struct img_config *)req->working_buf->config_data.vaddr;
> +
> +   /*
> +* Call MDP/GCE API to do HW execution
> +* Pass the framejob to MDP driver
> +*/
> +   pm_runtime_get_sync(dip_dev->dev);
> +   mdp_cmdq_sendtask(dip_dev->mdp_pdev, config_data,
> + &req->img_fparam.frameparam, NULL, false,
> + dip_mdp_cb_func, req);
> +}
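As an aside, a hedged sketch of one way the runtime-PM reference taken
above could be kept balanced once the MDP job completes. The *_sketch
helpers below are made up for illustration and are not the driver's
actual code:

static void dip_runner_func_sketch(struct work_struct *work)
{
	struct mtk_dip_request *req = mtk_dip_hw_mdp_work_to_req(work);
	struct mtk_dip_dev *dip_dev = req->dip_pipe->dip_dev;
	int ret;

	ret = pm_runtime_get_sync(dip_dev->dev);
	if (ret < 0) {
		/* Drop the usage count taken by get_sync() even on failure. */
		pm_runtime_put_noidle(dip_dev->dev);
		/* ...fail the request instead of touching the hardware... */
		return;
	}

	/* ...hand the frame job to MDP/GCE as dip_runner_func() does... */
}

static void dip_job_done_sketch(struct mtk_dip_request *req)
{
	/* Balance the reference taken before the job was queued. */
	pm_runtime_put(req->dip_pipe->dip_dev->dev);
}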
[snip]
> +static void dip_composer_workfunc(struct work_struct *work)
> +{
> +   struct mtk_dip_request *req = mtk_dip_hw_fw_work_to_req(work);
> +   struct mtk_dip_dev *dip_dev = req->dip_pipe->dip_dev;
> +   struct img_ipi_param ipi_param;
> +   struct mtk_dip_hw_subframe *buf;
> +   int ret;
> +
> +   down(&dip_dev->sem);
> +
> +   buf = mtk_dip_hw_working_buf_alloc(req->dip_pipe->dip_dev);
> +   if (!buf) {
> +   dev_err(req->dip_pipe->dip_dev->dev,
> +   "%s:%s:req(%p): no free working buffer available\n",
> +   __func__, req->dip_pipe->desc->name, req);
> +   }
> +
> +   req->working_buf = buf;
> +   mtk_dip_wbuf_to_ipi_img_addr(&req->img_fparam.frameparam.subfrm_data,
> +&buf->buffer);
> +   memset(buf->buffer.vaddr, 0, DIP_SUB_FRM_SZ);
> +   
> mtk_dip_wbuf_to_ipi_img_sw_addr(&req->img_fparam.frameparam.config_data,
> +   &buf->config_data);
> +   memset(buf->config_data.vaddr, 0, DIP_COMP_SZ);
> +
> +   if (!req->img_fparam.frameparam.tuning_data.present) {
> +   /*
> +* When user enqueued without tuning buffer,
> +* it would use driver internal buffer.
> +*/
> +   dev_dbg(dip_dev->dev,
> +   "%s: frame_no(%d) has no tuning_data\n",
> +   __func__, req->img_fparam.frameparam.frame_no);
> +
> +   mtk_dip_wbuf_to_ipi_tuning_addr
> +   (&req->img_fparam.frameparam.tuning_data,
> +&buf->tuning_buf);
> +   memset(buf->tuning_buf.vaddr, 0, DIP_TUNING_SZ);
> +   }
> +
> +   mtk_dip_wbuf_to_ipi_img_sw_addr(&req->img_fparam.frameparam.self_data,
> +   &buf->frameparam);
> +   memcpy(buf->frameparam.vaddr, &req->img_fparam.frameparam,
> +  sizeof(req->img_fparam.frameparam));
> +   ipi_param.usage = IMG_IPI_FRAME;
> +   ipi_param.frm_param.handle = req->id;
> +   ipi_param.frm_param.scp_addr = (u32)buf->frameparam.scp_daddr;
> +
> +   mutex_lock(&dip_dev->hw_op_lock);
> +   atomic_inc(&dip_dev->num_composing);
> +   ret = scp_ipi_send(dip_dev->scp_pdev, SCP_IPI_DIP, &ipi_param,
> +  sizeof(ipi_param), 0);

We're not holdi

Re: [PATCH 01/11] xen/arm: use dma-noncoherent.h calls for xen-swiotlb cache maintainance

2019-09-09 Thread Stefano Stabellini
On Thu, 5 Sep 2019, Christoph Hellwig wrote:
> Copy the arm64 code that uses the dma-direct/swiotlb helpers for DMA
> non-coherent devices.
> 
> Signed-off-by: Christoph Hellwig 

This is much better and much more readable.

Reviewed-by: Stefano Stabellini 

> ---
>  arch/arm/include/asm/device.h|  3 -
>  arch/arm/include/asm/xen/page-coherent.h | 72 +---
>  arch/arm/mm/dma-mapping.c|  8 +--
>  drivers/xen/swiotlb-xen.c| 20 ---
>  4 files changed, 28 insertions(+), 75 deletions(-)
> 
> diff --git a/arch/arm/include/asm/device.h b/arch/arm/include/asm/device.h
> index f6955b55c544..c675bc0d5aa8 100644
> --- a/arch/arm/include/asm/device.h
> +++ b/arch/arm/include/asm/device.h
> @@ -14,9 +14,6 @@ struct dev_archdata {
>  #endif
>  #ifdef CONFIG_ARM_DMA_USE_IOMMU
>   struct dma_iommu_mapping*mapping;
> -#endif
> -#ifdef CONFIG_XEN
> - const struct dma_map_ops *dev_dma_ops;
>  #endif
>   unsigned int dma_coherent:1;
>   unsigned int dma_ops_setup:1;
> diff --git a/arch/arm/include/asm/xen/page-coherent.h 
> b/arch/arm/include/asm/xen/page-coherent.h
> index 2c403e7c782d..602ac02f154c 100644
> --- a/arch/arm/include/asm/xen/page-coherent.h
> +++ b/arch/arm/include/asm/xen/page-coherent.h
> @@ -6,23 +6,37 @@
>  #include 
>  #include 
>  
> -static inline const struct dma_map_ops *xen_get_dma_ops(struct device *dev)
> -{
> - if (dev && dev->archdata.dev_dma_ops)
> - return dev->archdata.dev_dma_ops;
> - return get_arch_dma_ops(NULL);
> -}
> -
>  static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t 
> size,
>   dma_addr_t *dma_handle, gfp_t flags, unsigned long attrs)
>  {
> - return xen_get_dma_ops(hwdev)->alloc(hwdev, size, dma_handle, flags, 
> attrs);
> + return dma_direct_alloc(hwdev, size, dma_handle, flags, attrs);
>  }
>  
>  static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
>   void *cpu_addr, dma_addr_t dma_handle, unsigned long attrs)
>  {
> - xen_get_dma_ops(hwdev)->free(hwdev, size, cpu_addr, dma_handle, attrs);
> + dma_direct_free(hwdev, size, cpu_addr, dma_handle, attrs);
> +}
> +
> +static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
> + dma_addr_t handle, size_t size, enum dma_data_direction dir)
> +{
> + unsigned long pfn = PFN_DOWN(handle);
> +
> + if (pfn_valid(pfn))
> + dma_direct_sync_single_for_cpu(hwdev, handle, size, dir);
> + else
> + __xen_dma_sync_single_for_cpu(hwdev, handle, size, dir);
> +}
> +
> +static inline void xen_dma_sync_single_for_device(struct device *hwdev,
> + dma_addr_t handle, size_t size, enum dma_data_direction dir)
> +{
> + unsigned long pfn = PFN_DOWN(handle);
> + if (pfn_valid(pfn))
> + dma_direct_sync_single_for_device(hwdev, handle, size, dir);
> + else
> + __xen_dma_sync_single_for_device(hwdev, handle, size, dir);
>  }
>  
>  static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
> @@ -36,17 +50,8 @@ static inline void xen_dma_map_page(struct device *hwdev, 
> struct page *page,
>   bool local = (page_pfn <= dev_pfn) &&
>   (dev_pfn - page_pfn < compound_pages);
>  
> - /*
> -  * Dom0 is mapped 1:1, while the Linux page can span across
> -  * multiple Xen pages, it's not possible for it to contain a
> -  * mix of local and foreign Xen pages. So if the first xen_pfn
> -  * == mfn the page is local otherwise it's a foreign page
> -  * grant-mapped in dom0. If the page is local we can safely
> -  * call the native dma_ops function, otherwise we call the xen
> -  * specific function.
> -  */
>   if (local)
> - xen_get_dma_ops(hwdev)->map_page(hwdev, page, offset, size, 
> dir, attrs);
> + dma_direct_map_page(hwdev, page, offset, size, dir, attrs);
>   else
>   __xen_dma_map_page(hwdev, page, dev_addr, offset, size, dir, 
> attrs);
>  }
> @@ -63,33 +68,10 @@ static inline void xen_dma_unmap_page(struct device 
> *hwdev, dma_addr_t handle,
>* safely call the native dma_ops function, otherwise we call the xen
>* specific function.
>*/
> - if (pfn_valid(pfn)) {
> - if (xen_get_dma_ops(hwdev)->unmap_page)
> - xen_get_dma_ops(hwdev)->unmap_page(hwdev, handle, size, 
> dir, attrs);
> - } else
> + if (pfn_valid(pfn))
> + dma_direct_unmap_page(hwdev, handle, size, dir, attrs);
> + else
>   __xen_dma_unmap_page(hwdev, handle, size, dir, attrs);
>  }
>  
> -static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
> - dma_addr_t handle, size_t size, enum dma_data_direction dir)
> -{
> - unsigned long pfn = PFN_DOWN(handle);
> - if (pfn_valid(pfn)) {
> - if (xen_get_dma_ops(hwdev)->sync_single_for_cpu)
> -

Re: [PATCH 02/11] xen/arm: consolidate page-coherent.h

2019-09-09 Thread Stefano Stabellini
On Thu, 5 Sep 2019, Christoph Hellwig wrote:
> Shared the duplicate arm/arm64 code in include/xen/arm/page-coherent.h.
> 
> Signed-off-by: Christoph Hellwig 

Reviewed-by: Stefano Stabellini 


> ---
>  arch/arm/include/asm/xen/page-coherent.h   | 75 
>  arch/arm64/include/asm/xen/page-coherent.h | 75 
>  include/xen/arm/page-coherent.h| 80 ++
>  3 files changed, 80 insertions(+), 150 deletions(-)
> 
> diff --git a/arch/arm/include/asm/xen/page-coherent.h 
> b/arch/arm/include/asm/xen/page-coherent.h
> index 602ac02f154c..27e984977402 100644
> --- a/arch/arm/include/asm/xen/page-coherent.h
> +++ b/arch/arm/include/asm/xen/page-coherent.h
> @@ -1,77 +1,2 @@
>  /* SPDX-License-Identifier: GPL-2.0 */
> -#ifndef _ASM_ARM_XEN_PAGE_COHERENT_H
> -#define _ASM_ARM_XEN_PAGE_COHERENT_H
> -
> -#include 
> -#include 
>  #include 
> -
> -static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t 
> size,
> - dma_addr_t *dma_handle, gfp_t flags, unsigned long attrs)
> -{
> - return dma_direct_alloc(hwdev, size, dma_handle, flags, attrs);
> -}
> -
> -static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
> - void *cpu_addr, dma_addr_t dma_handle, unsigned long attrs)
> -{
> - dma_direct_free(hwdev, size, cpu_addr, dma_handle, attrs);
> -}
> -
> -static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
> - dma_addr_t handle, size_t size, enum dma_data_direction dir)
> -{
> - unsigned long pfn = PFN_DOWN(handle);
> -
> - if (pfn_valid(pfn))
> - dma_direct_sync_single_for_cpu(hwdev, handle, size, dir);
> - else
> - __xen_dma_sync_single_for_cpu(hwdev, handle, size, dir);
> -}
> -
> -static inline void xen_dma_sync_single_for_device(struct device *hwdev,
> - dma_addr_t handle, size_t size, enum dma_data_direction dir)
> -{
> - unsigned long pfn = PFN_DOWN(handle);
> - if (pfn_valid(pfn))
> - dma_direct_sync_single_for_device(hwdev, handle, size, dir);
> - else
> - __xen_dma_sync_single_for_device(hwdev, handle, size, dir);
> -}
> -
> -static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
> -  dma_addr_t dev_addr, unsigned long offset, size_t size,
> -  enum dma_data_direction dir, unsigned long attrs)
> -{
> - unsigned long page_pfn = page_to_xen_pfn(page);
> - unsigned long dev_pfn = XEN_PFN_DOWN(dev_addr);
> - unsigned long compound_pages =
> - (1<<compound_order(page)) * XEN_PFN_PER_PAGE;
> - bool local = (page_pfn <= dev_pfn) &&
> - (dev_pfn - page_pfn < compound_pages);
> -
> - if (local)
> - dma_direct_map_page(hwdev, page, offset, size, dir, attrs);
> - else
> - __xen_dma_map_page(hwdev, page, dev_addr, offset, size, dir, 
> attrs);
> -}
> -
> -static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t 
> handle,
> - size_t size, enum dma_data_direction dir, unsigned long attrs)
> -{
> - unsigned long pfn = PFN_DOWN(handle);
> - /*
> -  * Dom0 is mapped 1:1, while the Linux page can be spanned accross
> -  * multiple Xen page, it's not possible to have a mix of local and
> -  * foreign Xen page. Dom0 is mapped 1:1, so calling pfn_valid on a
> -  * foreign mfn will always return false. If the page is local we can
> -  * safely call the native dma_ops function, otherwise we call the xen
> -  * specific function.
> -  */
> - if (pfn_valid(pfn))
> - dma_direct_unmap_page(hwdev, handle, size, dir, attrs);
> - else
> - __xen_dma_unmap_page(hwdev, handle, size, dir, attrs);
> -}
> -
> -#endif /* _ASM_ARM_XEN_PAGE_COHERENT_H */
> diff --git a/arch/arm64/include/asm/xen/page-coherent.h 
> b/arch/arm64/include/asm/xen/page-coherent.h
> index d88e56b90b93..27e984977402 100644
> --- a/arch/arm64/include/asm/xen/page-coherent.h
> +++ b/arch/arm64/include/asm/xen/page-coherent.h
> @@ -1,77 +1,2 @@
>  /* SPDX-License-Identifier: GPL-2.0 */
> -#ifndef _ASM_ARM64_XEN_PAGE_COHERENT_H
> -#define _ASM_ARM64_XEN_PAGE_COHERENT_H
> -
> -#include 
> -#include 
>  #include 
> -
> -static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t 
> size,
> - dma_addr_t *dma_handle, gfp_t flags, unsigned long attrs)
> -{
> - return dma_direct_alloc(hwdev, size, dma_handle, flags, attrs);
> -}
> -
> -static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
> - void *cpu_addr, dma_addr_t dma_handle, unsigned long attrs)
> -{
> - dma_direct_free(hwdev, size, cpu_addr, dma_handle, attrs);
> -}
> -
> -static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
> - dma_addr_t handle, size_t size, enum dma_data_direction dir)
> -{
> - unsigned long pfn = PFN_DOWN(handle);
> -
> - if (pfn_valid(pfn))
> - dma_direct_sync_single_for_cpu(hwdev, handle, 

[PATCH AUTOSEL 4.19 8/8] iommu/amd: Fix race in increase_address_space()

2019-09-09 Thread Sasha Levin
From: Joerg Roedel 

[ Upstream commit 754265bcab78a9014f0f99cd35e0d610fcd7dfa7 ]

After the conversion to the lock-less dma-api path, the
increase_address_space() function can be called without any
locking. Multiple CPUs could potentially race for increasing
the address space, leading to invalid domain->mode settings
and invalid page-tables. This has been happening in the wild
under high IO load and memory pressure.

Fix the race by locking this operation. The function is
called infrequently so that this does not introduce
a performance regression in the dma-api path again.

Reported-by: Qian Cai 
Fixes: 256e4621c21a ('iommu/amd: Make use of the generic IOVA allocator')
Signed-off-by: Joerg Roedel 
Signed-off-by: Sasha Levin 
---
 drivers/iommu/amd_iommu.c | 16 +++-
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 8b79e2b32d378..69c269dc4f1bf 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -1340,18 +1340,21 @@ static void domain_flush_devices(struct 
protection_domain *domain)
  * another level increases the size of the address space by 9 bits to a size up
  * to 64 bits.
  */
-static bool increase_address_space(struct protection_domain *domain,
+static void increase_address_space(struct protection_domain *domain,
   gfp_t gfp)
 {
+   unsigned long flags;
u64 *pte;
 
-   if (domain->mode == PAGE_MODE_6_LEVEL)
+   spin_lock_irqsave(&domain->lock, flags);
+
+   if (WARN_ON_ONCE(domain->mode == PAGE_MODE_6_LEVEL))
/* address space already 64 bit large */
-   return false;
+   goto out;
 
pte = (void *)get_zeroed_page(gfp);
if (!pte)
-   return false;
+   goto out;
 
*pte = PM_LEVEL_PDE(domain->mode,
iommu_virt_to_phys(domain->pt_root));
@@ -1359,7 +1362,10 @@ static bool increase_address_space(struct 
protection_domain *domain,
domain->mode+= 1;
domain->updated  = true;
 
-   return true;
+out:
+   spin_unlock_irqrestore(&domain->lock, flags);
+
+   return;
 }
 
 static u64 *alloc_pte(struct protection_domain *domain,
-- 
2.20.1
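To make the race described in the commit message concrete, here is a
simplified sketch of the pre-fix flow, reconstructed from the hunks
above (illustration only, not the exact upstream code). Nothing stops
two CPUs from running it concurrently for the same domain:

static bool increase_address_space_unlocked(struct protection_domain *domain,
					    gfp_t gfp)
{
	u64 *pte;

	/* CPU A and CPU B can both pass this check with the same mode. */
	if (domain->mode == PAGE_MODE_6_LEVEL)
		return false;

	pte = (void *)get_zeroed_page(gfp);
	if (!pte)
		return false;

	*pte = PM_LEVEL_PDE(domain->mode,
			    iommu_virt_to_phys(domain->pt_root));

	domain->pt_root  = pte;		/* second writer drops the first CPU's table */
	domain->mode    += 1;		/* incremented once per racing CPU */
	domain->updated  = true;

	return true;
}

Taking domain->lock around the whole check-allocate-install sequence,
as the patch does, closes that window.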



[PATCH AUTOSEL 4.14 7/8] iommu/amd: Flush old domains in kdump kernel

2019-09-09 Thread Sasha Levin
From: Stuart Hayes 

[ Upstream commit 36b7200f67dfe75b416b5281ed4ace9927b513bc ]

When devices are attached to the amd_iommu in a kdump kernel, the old device
table entries (DTEs), which were copied from the crashed kernel, will be
overwritten with a new domain number.  When the new DTE is written, the IOMMU
is told to flush the DTE from its internal cache--but it is not told to flush
the translation cache entries for the old domain number.

Without this patch, AMD systems using the tg3 network driver fail when kdump
tries to save the vmcore to a network system, showing network timeouts and
(sometimes) IOMMU errors in the kernel log.

This patch will flush IOMMU translation cache entries for the old domain when
a DTE gets overwritten with a new domain number.

Signed-off-by: Stuart Hayes 
Fixes: 3ac3e5ee5ed5 ('iommu/amd: Copy old trans table from old kernel')
Signed-off-by: Joerg Roedel 
Signed-off-by: Sasha Levin 
---
 drivers/iommu/amd_iommu.c | 24 
 1 file changed, 24 insertions(+)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 684f7cdd814b6..822c85226a29f 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -1150,6 +1150,17 @@ static void amd_iommu_flush_tlb_all(struct amd_iommu 
*iommu)
iommu_completion_wait(iommu);
 }
 
+static void amd_iommu_flush_tlb_domid(struct amd_iommu *iommu, u32 dom_id)
+{
+   struct iommu_cmd cmd;
+
+   build_inv_iommu_pages(&cmd, 0, CMD_INV_IOMMU_ALL_PAGES_ADDRESS,
+ dom_id, 1);
+   iommu_queue_command(iommu, &cmd);
+
+   iommu_completion_wait(iommu);
+}
+
 static void amd_iommu_flush_all(struct amd_iommu *iommu)
 {
struct iommu_cmd cmd;
@@ -1835,6 +1846,7 @@ static void set_dte_entry(u16 devid, struct 
protection_domain *domain, bool ats)
 {
u64 pte_root = 0;
u64 flags = 0;
+   u32 old_domid;
 
if (domain->mode != PAGE_MODE_NONE)
pte_root = iommu_virt_to_phys(domain->pt_root);
@@ -1877,8 +1889,20 @@ static void set_dte_entry(u16 devid, struct 
protection_domain *domain, bool ats)
flags &= ~DEV_DOMID_MASK;
flags |= domain->id;
 
+   old_domid = amd_iommu_dev_table[devid].data[1] & DEV_DOMID_MASK;
amd_iommu_dev_table[devid].data[1]  = flags;
amd_iommu_dev_table[devid].data[0]  = pte_root;
+
+   /*
+* A kdump kernel might be replacing a domain ID that was copied from
+* the previous kernel--if so, it needs to flush the translation cache
+* entries for the old domain ID that is being overwritten
+*/
+   if (old_domid) {
+   struct amd_iommu *iommu = amd_iommu_rlookup_table[devid];
+
+   amd_iommu_flush_tlb_domid(iommu, old_domid);
+   }
 }
 
 static void clear_dte_entry(u16 devid)
-- 
2.20.1



[PATCH AUTOSEL 4.14 8/8] iommu/amd: Fix race in increase_address_space()

2019-09-09 Thread Sasha Levin
From: Joerg Roedel 

[ Upstream commit 754265bcab78a9014f0f99cd35e0d610fcd7dfa7 ]

After the conversion to the lock-less dma-api path, the
increase_address_space() function can be called without any
locking. Multiple CPUs could potentially race for increasing
the address space, leading to invalid domain->mode settings
and invalid page-tables. This has been happening in the wild
under high IO load and memory pressure.

Fix the race by locking this operation. The function is
called infrequently so that this does not introduce
a performance regression in the dma-api path again.

Reported-by: Qian Cai 
Fixes: 256e4621c21a ('iommu/amd: Make use of the generic IOVA allocator')
Signed-off-by: Joerg Roedel 
Signed-off-by: Sasha Levin 
---
 drivers/iommu/amd_iommu.c | 16 +++-
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 822c85226a29f..a1174e61daf4e 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -1337,18 +1337,21 @@ static void domain_flush_devices(struct 
protection_domain *domain)
  * another level increases the size of the address space by 9 bits to a size up
  * to 64 bits.
  */
-static bool increase_address_space(struct protection_domain *domain,
+static void increase_address_space(struct protection_domain *domain,
   gfp_t gfp)
 {
+   unsigned long flags;
u64 *pte;
 
-   if (domain->mode == PAGE_MODE_6_LEVEL)
+   spin_lock_irqsave(&domain->lock, flags);
+
+   if (WARN_ON_ONCE(domain->mode == PAGE_MODE_6_LEVEL))
/* address space already 64 bit large */
-   return false;
+   goto out;
 
pte = (void *)get_zeroed_page(gfp);
if (!pte)
-   return false;
+   goto out;
 
*pte = PM_LEVEL_PDE(domain->mode,
iommu_virt_to_phys(domain->pt_root));
@@ -1356,7 +1359,10 @@ static bool increase_address_space(struct 
protection_domain *domain,
domain->mode+= 1;
domain->updated  = true;
 
-   return true;
+out:
+   spin_unlock_irqrestore(&domain->lock, flags);
+
+   return;
 }
 
 static u64 *alloc_pte(struct protection_domain *domain,
-- 
2.20.1



[PATCH AUTOSEL 4.9 6/6] iommu/amd: Fix race in increase_address_space()

2019-09-09 Thread Sasha Levin
From: Joerg Roedel 

[ Upstream commit 754265bcab78a9014f0f99cd35e0d610fcd7dfa7 ]

After the conversion to the lock-less dma-api path, the
increase_address_space() function can be called without any
locking. Multiple CPUs could potentially race for increasing
the address space, leading to invalid domain->mode settings
and invalid page-tables. This has been happening in the wild
under high IO load and memory pressure.

Fix the race by locking this operation. The function is
called infrequently so that this does not introduce
a performance regression in the dma-api path again.

Reported-by: Qian Cai 
Fixes: 256e4621c21a ('iommu/amd: Make use of the generic IOVA allocator')
Signed-off-by: Joerg Roedel 
Signed-off-by: Sasha Levin 
---
 drivers/iommu/amd_iommu.c | 16 +++-
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index c1233d0288a03..dd7880de7e4e9 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -1321,18 +1321,21 @@ static void domain_flush_devices(struct 
protection_domain *domain)
  * another level increases the size of the address space by 9 bits to a size up
  * to 64 bits.
  */
-static bool increase_address_space(struct protection_domain *domain,
+static void increase_address_space(struct protection_domain *domain,
   gfp_t gfp)
 {
+   unsigned long flags;
u64 *pte;
 
-   if (domain->mode == PAGE_MODE_6_LEVEL)
+   spin_lock_irqsave(&domain->lock, flags);
+
+   if (WARN_ON_ONCE(domain->mode == PAGE_MODE_6_LEVEL))
/* address space already 64 bit large */
-   return false;
+   goto out;
 
pte = (void *)get_zeroed_page(gfp);
if (!pte)
-   return false;
+   goto out;
 
*pte = PM_LEVEL_PDE(domain->mode,
virt_to_phys(domain->pt_root));
@@ -1340,7 +1343,10 @@ static bool increase_address_space(struct 
protection_domain *domain,
domain->mode+= 1;
domain->updated  = true;
 
-   return true;
+out:
+   spin_unlock_irqrestore(&domain->lock, flags);
+
+   return;
 }
 
 static u64 *alloc_pte(struct protection_domain *domain,
-- 
2.20.1



[PATCH AUTOSEL 4.19 7/8] iommu/amd: Flush old domains in kdump kernel

2019-09-09 Thread Sasha Levin
From: Stuart Hayes 

[ Upstream commit 36b7200f67dfe75b416b5281ed4ace9927b513bc ]

When devices are attached to the amd_iommu in a kdump kernel, the old device
table entries (DTEs), which were copied from the crashed kernel, will be
overwritten with a new domain number.  When the new DTE is written, the IOMMU
is told to flush the DTE from its internal cache--but it is not told to flush
the translation cache entries for the old domain number.

Without this patch, AMD systems using the tg3 network driver fail when kdump
tries to save the vmcore to a network system, showing network timeouts and
(sometimes) IOMMU errors in the kernel log.

This patch will flush IOMMU translation cache entries for the old domain when
a DTE gets overwritten with a new domain number.

Signed-off-by: Stuart Hayes 
Fixes: 3ac3e5ee5ed5 ('iommu/amd: Copy old trans table from old kernel')
Signed-off-by: Joerg Roedel 
Signed-off-by: Sasha Levin 
---
 drivers/iommu/amd_iommu.c | 24 
 1 file changed, 24 insertions(+)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 8d9920ff41344..8b79e2b32d378 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -1153,6 +1153,17 @@ static void amd_iommu_flush_tlb_all(struct amd_iommu 
*iommu)
iommu_completion_wait(iommu);
 }
 
+static void amd_iommu_flush_tlb_domid(struct amd_iommu *iommu, u32 dom_id)
+{
+   struct iommu_cmd cmd;
+
+   build_inv_iommu_pages(&cmd, 0, CMD_INV_IOMMU_ALL_PAGES_ADDRESS,
+ dom_id, 1);
+   iommu_queue_command(iommu, &cmd);
+
+   iommu_completion_wait(iommu);
+}
+
 static void amd_iommu_flush_all(struct amd_iommu *iommu)
 {
struct iommu_cmd cmd;
@@ -1838,6 +1849,7 @@ static void set_dte_entry(u16 devid, struct 
protection_domain *domain,
 {
u64 pte_root = 0;
u64 flags = 0;
+   u32 old_domid;
 
if (domain->mode != PAGE_MODE_NONE)
pte_root = iommu_virt_to_phys(domain->pt_root);
@@ -1887,8 +1899,20 @@ static void set_dte_entry(u16 devid, struct 
protection_domain *domain,
flags &= ~DEV_DOMID_MASK;
flags |= domain->id;
 
+   old_domid = amd_iommu_dev_table[devid].data[1] & DEV_DOMID_MASK;
amd_iommu_dev_table[devid].data[1]  = flags;
amd_iommu_dev_table[devid].data[0]  = pte_root;
+
+   /*
+* A kdump kernel might be replacing a domain ID that was copied from
+* the previous kernel--if so, it needs to flush the translation cache
+* entries for the old domain ID that is being overwritten
+*/
+   if (old_domid) {
+   struct amd_iommu *iommu = amd_iommu_rlookup_table[devid];
+
+   amd_iommu_flush_tlb_domid(iommu, old_domid);
+   }
 }
 
 static void clear_dte_entry(u16 devid)
-- 
2.20.1



[PATCH AUTOSEL 5.2 12/12] iommu/amd: Fix race in increase_address_space()

2019-09-09 Thread Sasha Levin
From: Joerg Roedel 

[ Upstream commit 754265bcab78a9014f0f99cd35e0d610fcd7dfa7 ]

After the conversion to the lock-less dma-api path, the
increase_address_space() function can be called without any
locking. Multiple CPUs could potentially race for increasing
the address space, leading to invalid domain->mode settings
and invalid page-tables. This has been happening in the wild
under high IO load and memory pressure.

Fix the race by locking this operation. The function is
called infrequently so that this does not introduce
a performance regression in the dma-api path again.

Reported-by: Qian Cai 
Fixes: 256e4621c21a ('iommu/amd: Make use of the generic IOVA allocator')
Signed-off-by: Joerg Roedel 
Signed-off-by: Sasha Levin 
---
 drivers/iommu/amd_iommu.c | 16 +++-
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index b265062edf6c8..3e687f18b203a 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -1425,18 +1425,21 @@ static void free_pagetable(struct protection_domain 
*domain)
  * another level increases the size of the address space by 9 bits to a size up
  * to 64 bits.
  */
-static bool increase_address_space(struct protection_domain *domain,
+static void increase_address_space(struct protection_domain *domain,
   gfp_t gfp)
 {
+   unsigned long flags;
u64 *pte;
 
-   if (domain->mode == PAGE_MODE_6_LEVEL)
+   spin_lock_irqsave(&domain->lock, flags);
+
+   if (WARN_ON_ONCE(domain->mode == PAGE_MODE_6_LEVEL))
/* address space already 64 bit large */
-   return false;
+   goto out;
 
pte = (void *)get_zeroed_page(gfp);
if (!pte)
-   return false;
+   goto out;
 
*pte = PM_LEVEL_PDE(domain->mode,
iommu_virt_to_phys(domain->pt_root));
@@ -1444,7 +1447,10 @@ static bool increase_address_space(struct 
protection_domain *domain,
domain->mode+= 1;
domain->updated  = true;
 
-   return true;
+out:
+   spin_unlock_irqrestore(&domain->lock, flags);
+
+   return;
 }
 
 static u64 *alloc_pte(struct protection_domain *domain,
-- 
2.20.1



[PATCH AUTOSEL 5.2 07/12] iommu/vt-d: Remove global page flush support

2019-09-09 Thread Sasha Levin
From: Jacob Pan 

[ Upstream commit 8744daf4b0699b724ee0a56b313a6c0c4ea289e3 ]

Global pages support is removed from VT-d spec 3.0. Since global pages G
flag only affects first-level paging structures and because DMA requests
with PASID are only supported by VT-d spec 3.0 and onward, we can
safely remove global pages support.

For kernel shared virtual address IOTLB invalidation, PASID
granularity and page selective within PASID will be used. There is
no global granularity supported. Without this fix, IOTLB invalidation
will cause invalid descriptor error in the queued invalidation (QI)
interface.

Fixes: 1c4f88b7f1f9 ("iommu/vt-d: Shared virtual address in scalable mode")
Reported-by: Sanjay K Kumar 
Signed-off-by: Jacob Pan 
Signed-off-by: Joerg Roedel 
Signed-off-by: Sasha Levin 
---
 drivers/iommu/intel-svm.c   | 36 +++-
 include/linux/intel-iommu.h |  3 ---
 2 files changed, 15 insertions(+), 24 deletions(-)

diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu/intel-svm.c
index eceaa7e968ae8..641dc223c97b8 100644
--- a/drivers/iommu/intel-svm.c
+++ b/drivers/iommu/intel-svm.c
@@ -100,24 +100,19 @@ int intel_svm_finish_prq(struct intel_iommu *iommu)
 }
 
 static void intel_flush_svm_range_dev (struct intel_svm *svm, struct 
intel_svm_dev *sdev,
-  unsigned long address, unsigned long 
pages, int ih, int gl)
+   unsigned long address, unsigned long pages, int 
ih)
 {
struct qi_desc desc;
 
-   if (pages == -1) {
-   /* For global kernel pages we have to flush them in *all* PASIDs
-* because that's the only option the hardware gives us. Despite
-* the fact that they are actually only accessible through one. 
*/
-   if (gl)
-   desc.qw0 = QI_EIOTLB_PASID(svm->pasid) |
-   QI_EIOTLB_DID(sdev->did) |
-   QI_EIOTLB_GRAN(QI_GRAN_ALL_ALL) |
-   QI_EIOTLB_TYPE;
-   else
-   desc.qw0 = QI_EIOTLB_PASID(svm->pasid) |
-   QI_EIOTLB_DID(sdev->did) |
-   QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) |
-   QI_EIOTLB_TYPE;
+   /*
+* Do PASID granu IOTLB invalidation if page selective capability is
+* not available.
+*/
+   if (pages == -1 || !cap_pgsel_inv(svm->iommu->cap)) {
+   desc.qw0 = QI_EIOTLB_PASID(svm->pasid) |
+   QI_EIOTLB_DID(sdev->did) |
+   QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) |
+   QI_EIOTLB_TYPE;
desc.qw1 = 0;
} else {
int mask = ilog2(__roundup_pow_of_two(pages));
@@ -127,7 +122,6 @@ static void intel_flush_svm_range_dev (struct intel_svm 
*svm, struct intel_svm_d
QI_EIOTLB_GRAN(QI_GRAN_PSI_PASID) |
QI_EIOTLB_TYPE;
desc.qw1 = QI_EIOTLB_ADDR(address) |
-   QI_EIOTLB_GL(gl) |
QI_EIOTLB_IH(ih) |
QI_EIOTLB_AM(mask);
}
@@ -162,13 +156,13 @@ static void intel_flush_svm_range_dev (struct intel_svm 
*svm, struct intel_svm_d
 }
 
 static void intel_flush_svm_range(struct intel_svm *svm, unsigned long address,
- unsigned long pages, int ih, int gl)
+   unsigned long pages, int ih)
 {
struct intel_svm_dev *sdev;
 
rcu_read_lock();
list_for_each_entry_rcu(sdev, &svm->devs, list)
-   intel_flush_svm_range_dev(svm, sdev, address, pages, ih, gl);
+   intel_flush_svm_range_dev(svm, sdev, address, pages, ih);
rcu_read_unlock();
 }
 
@@ -180,7 +174,7 @@ static void intel_invalidate_range(struct mmu_notifier *mn,
struct intel_svm *svm = container_of(mn, struct intel_svm, notifier);
 
intel_flush_svm_range(svm, start,
- (end - start + PAGE_SIZE - 1) >> VTD_PAGE_SHIFT, 
0, 0);
+ (end - start + PAGE_SIZE - 1) >> VTD_PAGE_SHIFT, 
0);
 }
 
 static void intel_mm_release(struct mmu_notifier *mn, struct mm_struct *mm)
@@ -203,7 +197,7 @@ static void intel_mm_release(struct mmu_notifier *mn, 
struct mm_struct *mm)
rcu_read_lock();
list_for_each_entry_rcu(sdev, &svm->devs, list) {
intel_pasid_tear_down_entry(svm->iommu, sdev->dev, svm->pasid);
-   intel_flush_svm_range_dev(svm, sdev, 0, -1, 0, !svm->mm);
+   intel_flush_svm_range_dev(svm, sdev, 0, -1, 0);
}
rcu_read_unlock();
 
@@ -410,7 +404,7 @@ int intel_svm_unbind_mm(struct device *dev, int pasid)
 * large and has to be physically contiguous. 
So it's
   

[PATCH AUTOSEL 5.2 11/12] iommu/amd: Flush old domains in kdump kernel

2019-09-09 Thread Sasha Levin
From: Stuart Hayes 

[ Upstream commit 36b7200f67dfe75b416b5281ed4ace9927b513bc ]

When devices are attached to the amd_iommu in a kdump kernel, the old device
table entries (DTEs), which were copied from the crashed kernel, will be
overwritten with a new domain number.  When the new DTE is written, the IOMMU
is told to flush the DTE from its internal cache--but it is not told to flush
the translation cache entries for the old domain number.

Without this patch, AMD systems using the tg3 network driver fail when kdump
tries to save the vmcore to a network system, showing network timeouts and
(sometimes) IOMMU errors in the kernel log.

This patch will flush IOMMU translation cache entries for the old domain when
a DTE gets overwritten with a new domain number.

Signed-off-by: Stuart Hayes 
Fixes: 3ac3e5ee5ed5 ('iommu/amd: Copy old trans table from old kernel')
Signed-off-by: Joerg Roedel 
Signed-off-by: Sasha Levin 
---
 drivers/iommu/amd_iommu.c | 24 
 1 file changed, 24 insertions(+)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index dce1d8d2e8a44..b265062edf6c8 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -1143,6 +1143,17 @@ static void amd_iommu_flush_tlb_all(struct amd_iommu 
*iommu)
iommu_completion_wait(iommu);
 }
 
+static void amd_iommu_flush_tlb_domid(struct amd_iommu *iommu, u32 dom_id)
+{
+   struct iommu_cmd cmd;
+
+   build_inv_iommu_pages(&cmd, 0, CMD_INV_IOMMU_ALL_PAGES_ADDRESS,
+ dom_id, 1);
+   iommu_queue_command(iommu, &cmd);
+
+   iommu_completion_wait(iommu);
+}
+
 static void amd_iommu_flush_all(struct amd_iommu *iommu)
 {
struct iommu_cmd cmd;
@@ -1863,6 +1874,7 @@ static void set_dte_entry(u16 devid, struct 
protection_domain *domain,
 {
u64 pte_root = 0;
u64 flags = 0;
+   u32 old_domid;
 
if (domain->mode != PAGE_MODE_NONE)
pte_root = iommu_virt_to_phys(domain->pt_root);
@@ -1912,8 +1924,20 @@ static void set_dte_entry(u16 devid, struct 
protection_domain *domain,
flags &= ~DEV_DOMID_MASK;
flags |= domain->id;
 
+   old_domid = amd_iommu_dev_table[devid].data[1] & DEV_DOMID_MASK;
amd_iommu_dev_table[devid].data[1]  = flags;
amd_iommu_dev_table[devid].data[0]  = pte_root;
+
+   /*
+* A kdump kernel might be replacing a domain ID that was copied from
+* the previous kernel--if so, it needs to flush the translation cache
+* entries for the old domain ID that is being overwritten
+*/
+   if (old_domid) {
+   struct amd_iommu *iommu = amd_iommu_rlookup_table[devid];
+
+   amd_iommu_flush_tlb_domid(iommu, old_domid);
+   }
 }
 
 static void clear_dte_entry(u16 devid)
-- 
2.20.1



[PATCH] iommu/io-pgtable: Move some initialization data to .init.rodata

2019-09-09 Thread Christophe JAILLET
The memory used by '__init' functions can be freed once the initialization
phase has been performed.

Mark some 'static const' arrays defined and used within some '__init'
functions as '__initconst', so that the corresponding data can also be
discarded.

Without '__initconst', the data are put in the .rodata section.
With the qualifier, they are put in the .init.rodata section.

With gcc 8.3.0, the following changes have been measured:

Without '__initconst':
   section  size
  .rodata   0720
  .init.rodata  0018

With '__initconst':
   section  size
  .rodata   0660
  .init.rodata  0058

Signed-off-by: Christophe JAILLET 
---
Adding __initconst "within" a function is not in line with kernel/include/init.h
which states that:
 * Don't forget to initialize data not at file scope, i.e. within a function,
 * as gcc otherwise puts the data into the bss section and not into the init
 * section.
However, having the array inside the function or outside the function
seems to have no impact on the generated code or on the section used.
According to my test, both put the data in .init.rodata.

Maybe the comment is outdated or related to some older version of gcc.
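To illustrate the two placements being compared, a minimal sketch with
made-up names (not part of this patch); with gcc 8.3.0 both forms were
observed to end up in .init.rodata:

#include <linux/init.h>
#include <linux/printk.h>

/* File-scope placement, as suggested by the comment in init.h. */
static const unsigned int example_ias_filescope[] __initconst = {
	32, 36, 40, 42, 44, 48,
};

static int __init example_filescope_init(void)
{
	pr_info("first ias: %u\n", example_ias_filescope[0]);
	return 0;
}

/* In-function placement, as done in this patch. */
static int __init example_infunc_init(void)
{
	static const unsigned int ias[] __initconst = {
		32, 36, 40, 42, 44, 48,
	};

	pr_info("first ias: %u\n", ias[0]);
	return 0;
}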
---
 drivers/iommu/io-pgtable-arm.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 161a7d56264d..24076f0560c6 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -1109,7 +1109,7 @@ static void __init arm_lpae_dump_ops(struct 
io_pgtable_ops *ops)
 
 static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 {
-   static const enum io_pgtable_fmt fmts[] = {
+   static const enum io_pgtable_fmt fmts[] __initconst = {
ARM_64_LPAE_S1,
ARM_64_LPAE_S2,
};
@@ -1208,13 +1208,13 @@ static int __init arm_lpae_run_tests(struct 
io_pgtable_cfg *cfg)
 
 static int __init arm_lpae_do_selftests(void)
 {
-   static const unsigned long pgsize[] = {
+   static const unsigned long pgsize[] __initconst = {
SZ_4K | SZ_2M | SZ_1G,
SZ_16K | SZ_32M,
SZ_64K | SZ_512M,
};
 
-   static const unsigned int ias[] = {
+   static const unsigned int ias[] __initconst = {
32, 36, 40, 42, 44, 48,
};
 
-- 
2.20.1



Re: [PATCH 1/2] iommu: Implement of_iommu_get_resv_regions()

2019-09-09 Thread Will Deacon
On Mon, Sep 02, 2019 at 05:00:56PM +0200, Thierry Reding wrote:
> On Mon, Sep 02, 2019 at 02:54:23PM +0100, Robin Murphy wrote:
> > On 29/08/2019 12:14, Thierry Reding wrote:
> > > From: Thierry Reding 
> > > 
> > > This is an implementation that IOMMU drivers can use to obtain reserved
> > > memory regions from a device tree node. It uses the reserved-memory DT
> > > bindings to find the regions associated with a given device. These
> > > regions will be used to create 1:1 mappings in the IOMMU domain that
> > > the devices will be attached to.
> > > 
> > > Cc: Rob Herring 
> > > Cc: Frank Rowand 
> > > Cc: devicet...@vger.kernel.org
> > > Signed-off-by: Thierry Reding 
> > > ---
> > >   drivers/iommu/of_iommu.c | 39 +++
> > >   include/linux/of_iommu.h |  8 
> > >   2 files changed, 47 insertions(+)
> > > 
> > > diff --git a/drivers/iommu/of_iommu.c b/drivers/iommu/of_iommu.c
> > > index 614a93aa5305..0d47f626b854 100644
> > > --- a/drivers/iommu/of_iommu.c
> > > +++ b/drivers/iommu/of_iommu.c
> > > @@ -9,6 +9,7 @@
> > >   #include 
> > >   #include 
> > >   #include 
> > > +#include 
> > >   #include 
> > >   #include 
> > >   #include 
> > > @@ -225,3 +226,41 @@ const struct iommu_ops *of_iommu_configure(struct 
> > > device *dev,
> > >   return ops;
> > >   }
> > > +
> > > +/**
> > > + * of_iommu_get_resv_regions - reserved region driver helper for device 
> > > tree
> > > + * @dev: device for which to get reserved regions
> > > + * @list: reserved region list
> > > + *
> > > + * IOMMU drivers can use this to implement their .get_resv_regions() 
> > > callback
> > > + * for memory regions attached to a device tree node. See the 
> > > reserved-memory
> > > + * device tree bindings on how to use these:
> > > + *
> > > + *   
> > > Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
> > > + */
> > > +void of_iommu_get_resv_regions(struct device *dev, struct list_head 
> > > *list)
> > > +{
> > > + struct of_phandle_iterator it;
> > > + int err;
> > > +
> > > + of_for_each_phandle(&it, err, dev->of_node, "memory-region", NULL, 0) {
> > > + struct iommu_resv_region *region;
> > > + struct resource res;
> > > +
> > > + err = of_address_to_resource(it.node, 0, &res);
> > > + if (err < 0) {
> > > + dev_err(dev, "failed to parse memory region %pOF: %d\n",
> > > + it.node, err);
> > > + continue;
> > > + }
> > 
> > What if the device node has memory regions for other purposes, like private
> > CMA carveouts? We wouldn't want to force mappings of those (and in the very
> > worst case doing so could even render them unusable).
> 
> I suppose we could come up with additional properties to mark such
> memory regions and skip them here.

I think we need /something/ like that, both so that we can identify these
memory regions as requiring an identity mapping in the SMMU but also so
that we can place additional requirements on them, such as being 64k-aligned
and mandating properties of the mapping, such as cacheability based on the
device coherency.

I defer to the devicetree folks as to whether this should be an additional
property, or a phandle or whatever.

Will
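For context, a hedged sketch of how an IOMMU driver might wire the
proposed helper into its .get_resv_regions callback; everything
prefixed example_ is made up and is not code from this series:

#include <linux/iommu.h>
#include <linux/of_iommu.h>
#include <linux/slab.h>

static void example_iommu_get_resv_regions(struct device *dev,
					   struct list_head *list)
{
	/* 1:1 regions described by the device's memory-region phandles. */
	of_iommu_get_resv_regions(dev, list);
}

static void example_iommu_put_resv_regions(struct device *dev,
					   struct list_head *list)
{
	struct iommu_resv_region *entry, *next;

	list_for_each_entry_safe(entry, next, list, list)
		kfree(entry);
}

static const struct iommu_ops example_iommu_ops = {
	/* ...other callbacks omitted... */
	.get_resv_regions	= example_iommu_get_resv_regions,
	.put_resv_regions	= example_iommu_put_resv_regions,
};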


Re: [PATCH 1/3] crypto: marvell: Use kzfree rather than its implementation

2019-09-09 Thread Herbert Xu
On Wed, Sep 04, 2019 at 11:01:17AM +0800, zhong jiang wrote:
> Use kzfree instead of memset() + kfree().
> 
> Signed-off-by: zhong jiang 
> ---
>  drivers/crypto/marvell/hash.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)

Patch applied.  Thanks.
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
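For reference, a minimal sketch of the transformation the patch
describes; the function and parameter names are made up rather than
taken from marvell/hash.c:

#include <linux/slab.h>
#include <linux/string.h>

/* Before: zero the sensitive buffer by hand, then free it. */
static void example_free_cache_open_coded(void *cache, size_t cache_size)
{
	memset(cache, 0, cache_size);
	kfree(cache);
}

/* After: kzfree() zeroes the whole allocation before freeing it. */
static void example_free_cache_kzfree(void *cache)
{
	kzfree(cache);
}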