Re: [PATCH v5] swiotlb: Adjust SWIOTLB bounce buffer size for SEV guests.

2020-11-18 Thread Christoph Hellwig
On Wed, Nov 18, 2020 at 08:12:43PM +, Ashish Kalra wrote:
> From: Ashish Kalra 
> 
> For SEV, all DMA to and from the guest has to use shared
> (unencrypted) pages. SEV uses SWIOTLB to make this
> happen without requiring changes to device drivers.
> However, depending on the workload being run, the default
> 64MB of SWIOTLB might not be enough, and SWIOTLB
> may run out of buffers to use for DMA, resulting
> in I/O errors and/or performance degradation for
> high I/O workloads.

FYI, you can use up to 73 characters per line for your commit log.  This looks
rather compressed.


[PATCH v2 6/6] iommu/mediatek: Convert tlb_flush_walk to gather_add_page

2020-11-18 Thread Yong Wu
The MediaTek TLB flush doesn't care about granule. When unmapping, it can
gather the whole iova range and then do the tlb flush once.

In the current v7s code, unmapping a lvl2 pagetable takes these steps:
step 1: set the current pgd to 0.
step 2: tlb flush for this lvl2 block iova (1M).
step 3: free the lvl2 pagetable.

This patch delays step 2 until the whole iova range has been unmapped.
The iommu consumer HW should have stopped before it calls dma_free_xx,
thus this delay looks ok.

Since tlb_flush_walk doesn't have a "gather" parameter, we have to keep
the "gather" in our own private data.

Meanwhile, after this patch gather_add_pages will always be called, so
"gather->start == ULONG_MAX" is impossible; remove that check.

Signed-off-by: Yong Wu 
---
tlb_flush_walk is designed for range tlb flushes; I'm not sure whether it's ok
to add "gather" as a parameter of tlb_flush_walk. In this version, I put
it into our private data.
---
 drivers/iommu/mtk_iommu.c | 21 -
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index 94786860bd84..4c8200f4403a 100644
--- a/drivers/iommu/mtk_iommu.c
+++ b/drivers/iommu/mtk_iommu.c
@@ -128,6 +128,8 @@ struct mtk_iommu_domain {
struct io_pgtable_ops   *iop;
 
struct iommu_domain domain;
+
+   struct iommu_iotlb_gather   *gather;
 };
 
 static const struct iommu_ops mtk_iommu_ops;
@@ -227,6 +229,17 @@ static void mtk_iommu_tlb_flush_range_sync(unsigned long iova, size_t size,
}
 }
 
+static void mtk_iommu_tlb_flush_walk(unsigned long iova, size_t size,
+size_t granule, void *cookie)
+{
+   struct mtk_iommu_data *data = cookie;
+   struct mtk_iommu_domain *m4u_dom = data->m4u_dom;
+   struct iommu_domain *domain = &m4u_dom->domain;
+
+   /* Gather all the iova and tlb flush once after unmap. */
+   iommu_iotlb_gather_add_page(domain, m4u_dom->gather, iova, size);
+}
+
 static void mtk_iommu_tlb_flush_page_nosync(struct iommu_iotlb_gather *gather,
unsigned long iova, size_t granule,
void *cookie)
@@ -239,8 +252,8 @@ static void mtk_iommu_tlb_flush_page_nosync(struct iommu_iotlb_gather *gather,
 
 static const struct iommu_flush_ops mtk_iommu_flush_ops = {
.tlb_flush_all = mtk_iommu_tlb_flush_all,
-   .tlb_flush_walk = mtk_iommu_tlb_flush_range_sync,
-   .tlb_flush_leaf = mtk_iommu_tlb_flush_range_sync,
+   .tlb_flush_walk = mtk_iommu_tlb_flush_walk,
+   .tlb_flush_leaf = mtk_iommu_tlb_flush_walk,
.tlb_add_page = mtk_iommu_tlb_flush_page_nosync,
 };
 
@@ -432,6 +445,7 @@ static size_t mtk_iommu_unmap(struct iommu_domain *domain,
 {
struct mtk_iommu_domain *dom = to_mtk_domain(domain);
 
+   dom->gather = gather;
gather->granule_ignore = true;
return dom->iop->unmap(dom->iop, iova, size, gather);
 }
@@ -447,9 +461,6 @@ static void mtk_iommu_iotlb_sync(struct iommu_domain *domain,
struct mtk_iommu_data *data = mtk_iommu_get_m4u_data();
size_t length = gather->end - gather->start;
 
-   if (gather->start == ULONG_MAX)
-   return;
-
mtk_iommu_tlb_flush_range_sync(gather->start, length, gather->pgsize,
   data);
 }
-- 
2.18.0



[PATCH v2 5/6] iommu/mediatek: Enable granule_ignore for unmap

2020-11-18 Thread Yong Wu
The MediaTek IOMMU HW doesn't care about granule when it flushes the tlb.
In order to flush the tlb only once per unmap, enable this flag to gather
all the iova chunks of the unmap.

Signed-off-by: Yong Wu 
---
 drivers/iommu/mtk_iommu.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index 8c2d4a225666..94786860bd84 100644
--- a/drivers/iommu/mtk_iommu.c
+++ b/drivers/iommu/mtk_iommu.c
@@ -432,6 +432,7 @@ static size_t mtk_iommu_unmap(struct iommu_domain *domain,
 {
struct mtk_iommu_domain *dom = to_mtk_domain(domain);
 
+   gather->granule_ignore = true;
return dom->iop->unmap(dom->iop, iova, size, gather);
 }
 
-- 
2.18.0



[PATCH v2 4/6] iommu: Add granule_ignore when tlb gather

2020-11-18 Thread Yong Wu
Add a granule_ignore option to the tlb gather path for HW which doesn't
care about granule when it flushes the tlb.

Signed-off-by: Yong Wu 
---
 include/linux/iommu.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 794d4085edd3..1aad32238510 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -171,6 +171,7 @@ enum iommu_dev_features {
  * @start: IOVA representing the start of the range to be flushed
  * @end: IOVA representing the end of the range to be flushed (exclusive)
  * @pgsize: The interval at which to perform the flush
+ * @granule_ignore: For tlb flushing that could be regardless of granule.
  *
  * This structure is intended to be updated by multiple calls to the
  * ->unmap() function in struct iommu_ops before eventually being passed
@@ -180,6 +181,7 @@ struct iommu_iotlb_gather {
unsigned long   start;
unsigned long   end;
size_t  pgsize;
+   boolgranule_ignore;
 };
 
 /**
@@ -544,7 +546,7 @@ static inline void iommu_iotlb_gather_add_page(struct iommu_domain *domain,
 * a different granularity, then sync the TLB so that the gather
 * structure can be rewritten.
 */
-   if (gather->pgsize != size ||
+   if ((!gather->granule_ignore && gather->pgsize != size) ||
end < gather->start || start > gather->end) {
if (gather->pgsize)
iommu_iotlb_sync(domain, gather);
-- 
2.18.0
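
With granule_ignore set, the merge decision above reduces to a pure
range-adjacency test. A hypothetical condensation of the resulting check
(not part of the patch, just restating the modified condition with a
made-up helper name):

static bool gather_must_sync_first(struct iommu_iotlb_gather *gather,
				   unsigned long start, unsigned long end,
				   size_t size)
{
	/* A granule mismatch only matters when granule_ignore is not set. */
	bool mismatch = !gather->granule_ignore && gather->pgsize != size;

	/* Sync first on a mismatch or when the new range is not adjacent. */
	return mismatch || end < gather->start || start > gather->end;
}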



[PATCH v2 3/6] iommu/mediatek: Add iotlb_sync_map to sync whole the iova range

2020-11-18 Thread Yong Wu
Remove IO_PGTABLE_QUIRK_TLBI_ON_MAP to avoid a tlb sync for each small
chunk of memory. Use the new iotlb_sync_map to do the tlb sync once for
the whole iova range of iommu_map.

Signed-off-by: Yong Wu 
---
After reading msm_iommu.c, it looks like IO_PGTABLE_QUIRK_TLBI_ON_MAP can be
removed.
---
 drivers/iommu/mtk_iommu.c | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index c072cee532c2..8c2d4a225666 100644
--- a/drivers/iommu/mtk_iommu.c
+++ b/drivers/iommu/mtk_iommu.c
@@ -323,7 +323,6 @@ static int mtk_iommu_domain_finalise(struct mtk_iommu_domain *dom)
dom->cfg = (struct io_pgtable_cfg) {
.quirks = IO_PGTABLE_QUIRK_ARM_NS |
IO_PGTABLE_QUIRK_NO_PERMS |
-   IO_PGTABLE_QUIRK_TLBI_ON_MAP |
IO_PGTABLE_QUIRK_ARM_MTK_EXT,
.pgsize_bitmap = mtk_iommu_ops.pgsize_bitmap,
.ias = 32,
@@ -454,6 +453,14 @@ static void mtk_iommu_iotlb_sync(struct iommu_domain *domain,
   data);
 }
 
+static void mtk_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
+  size_t size)
+{
+   struct mtk_iommu_domain *dom = to_mtk_domain(domain);
+
+   mtk_iommu_tlb_flush_range_sync(iova, size, size, dom->data);
+}
+
 static phys_addr_t mtk_iommu_iova_to_phys(struct iommu_domain *domain,
  dma_addr_t iova)
 {
@@ -540,6 +547,7 @@ static const struct iommu_ops mtk_iommu_ops = {
.unmap  = mtk_iommu_unmap,
.flush_iotlb_all = mtk_iommu_flush_iotlb_all,
.iotlb_sync = mtk_iommu_iotlb_sync,
+   .iotlb_sync_map = mtk_iommu_sync_map,
.iova_to_phys   = mtk_iommu_iova_to_phys,
.probe_device   = mtk_iommu_probe_device,
.release_device = mtk_iommu_release_device,
-- 
2.18.0



[PATCH v2 2/6] iommu: Add iova and size as parameters in iommu_iotlb_map

2020-11-18 Thread Yong Wu
iotlb_sync_map allows IOMMU drivers to sync the tlb after completing the
whole mapping. This patch adds iova and size as parameters to it. The
IOMMU driver can then flush the tlb once for the whole range after the
iova mapping, to improve performance.

Signed-off-by: Yong Wu 
---
 drivers/iommu/iommu.c  | 6 +++---
 drivers/iommu/tegra-gart.c | 3 ++-
 include/linux/iommu.h  | 3 ++-
 3 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index decef851fa3a..df87c8e825f7 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2425,7 +2425,7 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova,
might_sleep();
ret = __iommu_map(domain, iova, paddr, size, prot, GFP_KERNEL);
if (ret == 0 && ops->iotlb_sync_map)
-   ops->iotlb_sync_map(domain);
+   ops->iotlb_sync_map(domain, iova, size);
 
return ret;
 }
@@ -2439,7 +2439,7 @@ int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova,
 
ret = __iommu_map(domain, iova, paddr, size, prot, GFP_ATOMIC);
if (ret == 0 && ops->iotlb_sync_map)
-   ops->iotlb_sync_map(domain);
+   ops->iotlb_sync_map(domain, iova, size);
 
return ret;
 }
@@ -2557,7 +2557,7 @@ static size_t __iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
}
 
if (ops->iotlb_sync_map)
-   ops->iotlb_sync_map(domain);
+   ops->iotlb_sync_map(domain, iova, mapped);
return mapped;
 
 out_err:
diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
index fac720273889..d15d13a98ed1 100644
--- a/drivers/iommu/tegra-gart.c
+++ b/drivers/iommu/tegra-gart.c
@@ -261,7 +261,8 @@ static int gart_iommu_of_xlate(struct device *dev,
return 0;
 }
 
-static void gart_iommu_sync_map(struct iommu_domain *domain)
+static void gart_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
+   size_t size)
 {
FLUSH_GART_REGS(gart_handle);
 }
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index b95a6f8db6ff..794d4085edd3 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -244,7 +244,8 @@ struct iommu_ops {
size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
 size_t size, struct iommu_iotlb_gather *iotlb_gather);
void (*flush_iotlb_all)(struct iommu_domain *domain);
-   void (*iotlb_sync_map)(struct iommu_domain *domain);
+   void (*iotlb_sync_map)(struct iommu_domain *domain, unsigned long iova,
+  size_t size);
void (*iotlb_sync)(struct iommu_domain *domain,
   struct iommu_iotlb_gather *iotlb_gather);
	phys_addr_t (*iova_to_phys)(struct iommu_domain *domain, dma_addr_t iova);
-- 
2.18.0



[PATCH v2 1/6] iommu: Move iotlb_sync_map out from __iommu_map

2020-11-18 Thread Yong Wu
At the end of __iommu_map, it always calls iotlb_sync_map.
This patch moves iotlb_sync_map out of __iommu_map, since it is
unnecessary to call it for each sg segment, especially as iotlb_sync_map
currently flushes the whole tlb.

Signed-off-by: Yong Wu 
---
 drivers/iommu/iommu.c | 24 +++-
 1 file changed, 19 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 8c470f451a32..decef851fa3a 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2407,9 +2407,6 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
size -= pgsize;
}
 
-   if (ops->iotlb_sync_map)
-   ops->iotlb_sync_map(domain);
-
/* unroll mapping in case something went wrong */
if (ret)
iommu_unmap(domain, orig_iova, orig_size - size);
@@ -2422,15 +2419,29 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
 int iommu_map(struct iommu_domain *domain, unsigned long iova,
  phys_addr_t paddr, size_t size, int prot)
 {
+   const struct iommu_ops *ops = domain->ops;
+   int ret;
+
might_sleep();
-   return __iommu_map(domain, iova, paddr, size, prot, GFP_KERNEL);
+   ret = __iommu_map(domain, iova, paddr, size, prot, GFP_KERNEL);
+   if (ret == 0 && ops->iotlb_sync_map)
+   ops->iotlb_sync_map(domain);
+
+   return ret;
 }
 EXPORT_SYMBOL_GPL(iommu_map);
 
 int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova,
  phys_addr_t paddr, size_t size, int prot)
 {
-   return __iommu_map(domain, iova, paddr, size, prot, GFP_ATOMIC);
+   const struct iommu_ops *ops = domain->ops;
+   int ret;
+
+   ret = __iommu_map(domain, iova, paddr, size, prot, GFP_ATOMIC);
+   if (ret == 0 && ops->iotlb_sync_map)
+   ops->iotlb_sync_map(domain);
+
+   return ret;
 }
 EXPORT_SYMBOL_GPL(iommu_map_atomic);
 
@@ -2514,6 +2525,7 @@ static size_t __iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 struct scatterlist *sg, unsigned int nents, int prot,
 gfp_t gfp)
 {
+   const struct iommu_ops *ops = domain->ops;
size_t len = 0, mapped = 0;
phys_addr_t start;
unsigned int i = 0;
@@ -2544,6 +2556,8 @@ static size_t __iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
sg = sg_next(sg);
}
 
+   if (ops->iotlb_sync_map)
+   ops->iotlb_sync_map(domain);
return mapped;
 
 out_err:
-- 
2.18.0



[PATCH v2 0/6] MediaTek IOMMU improve tlb flush performance in map/unmap

2020-11-18 Thread Yong Wu
This patchset improves tlb flushing performance in iommu_map/unmap
for the MediaTek IOMMU.

For iommu_map, MediaTek IOMMU currently uses IO_PGTABLE_QUIRK_TLBI_ON_MAP
to do a tlb_flush for each memory chunk, which is unnecessary. We can
improve this by flushing the tlb once at the end of iommu_map.

For iommu_unmap, we have already improved this performance with the
gather mechanism. But the current gather has to take care of its granule
size: if the granule size differs, it does a tlb flush and starts
gathering again. Our HW doesn't care about granule size, thus I add a
flag (granule_ignore) for this case.

After this patchset, we achieve a single tlb flush in iommu_map and a
single tlb flush in iommu_unmap.
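
To make the end state concrete, here is a condensed sketch (not one of
the patches themselves) of the two flush points this series aims for,
reusing helpers that already exist in mtk_iommu.c:

/* Map path: the core calls this once after the whole range is mapped. */
static void sketch_iotlb_sync_map(struct iommu_domain *domain,
				  unsigned long iova, size_t size)
{
	struct mtk_iommu_data *data = mtk_iommu_get_m4u_data();

	mtk_iommu_tlb_flush_range_sync(iova, size, size, data);
}

/* Unmap path: chunks are only gathered; the one flush happens here. */
static void sketch_iotlb_sync(struct iommu_domain *domain,
			      struct iommu_iotlb_gather *gather)
{
	struct mtk_iommu_data *data = mtk_iommu_get_m4u_data();

	mtk_iommu_tlb_flush_range_sync(gather->start,
				       gather->end - gather->start,
				       gather->pgsize, data);
}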

Leaving sg aside, for a single segment, I did a simple test:
  
  size = 20 * SZ_1M;
  /* the worst case, all are 4k mapping. */
  ret = iommu_map(domain, 0x5bb02000, 0x123f1000, size, IOMMU_READ);
  iommu_unmap(domain, 0x5bb02000, size);

This is the comparison of the time (unit is us):
                original-time   after-improve
   map-20M      59943           2347
   unmap-20M    264             36

This patchset also flushes the tlb only once in the iommu_map_sg case.

Patches [1/6][2/6][3/6] are for map while the others are for unmap.

change note:
v2: Refactor all the code.
    Based on v5.10-rc1.

v1: 
https://lore.kernel.org/linux-iommu/20201019113100.23661-1-chao@mediatek.com/

Yong Wu (6):
  iommu: Move iotlb_sync_map out from __iommu_map
  iommu: Add iova and size as parameters in iommu_iotlb_map
  iommu/mediatek: Add iotlb_sync_map to sync whole the iova range
  iommu: Add granule_ignore when tlb gather
  iommu/mediatek: Enable granule_ignore for unmap
  iommu/mediatek: Convert tlb_flush_walk to gather_add_page

 drivers/iommu/iommu.c  | 24 +++-
 drivers/iommu/mtk_iommu.c  | 32 ++--
 drivers/iommu/tegra-gart.c |  3 ++-
 include/linux/iommu.h  |  7 +--
 4 files changed, 52 insertions(+), 14 deletions(-)

-- 
2.18.0




[PATCH 1/1] iommu/vt-d: Fix compile error with CONFIG_PCI_ATS not set

2020-11-18 Thread Lu Baolu
Fix the compile error below (CONFIG_PCI_ATS not set):

drivers/iommu/intel/dmar.c: In function ‘vf_inherit_msi_domain’:
drivers/iommu/intel/dmar.c:338:59: error: ‘struct pci_dev’ has no member named ‘physfn’; did you mean ‘is_physfn’?
  338 |  dev_set_msi_domain(&pdev->dev, dev_get_msi_domain(&pdev->physfn->dev));
  |   ^~
  |   is_physfn

Link: https://lore.kernel.org/linux-iommu/camuhmdxa7wfjovmfsh2nbahn0cpycifhodtvg4a8hm9rx5d...@mail.gmail.com/
Fixes: ff828729be446 ("iommu/vt-d: Cure VF irqdomain hickup")
Cc: Thomas Gleixner 
Reported-by: Geert Uytterhoeven 
Signed-off-by: Lu Baolu 
---
 drivers/iommu/intel/dmar.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
index b2e804473209..11319e4dce4a 100644
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -335,7 +335,9 @@ static void  dmar_pci_bus_del_dev(struct dmar_pci_notify_info *info)
 
 static inline void vf_inherit_msi_domain(struct pci_dev *pdev)
 {
-   dev_set_msi_domain(&pdev->dev, dev_get_msi_domain(&pdev->physfn->dev));
+   struct pci_dev *physfn = pci_physfn(pdev);
+
+   dev_set_msi_domain(&pdev->dev, dev_get_msi_domain(&physfn->dev));
 }
 
 static int dmar_pci_bus_notifier(struct notifier_block *nb,
-- 
2.25.1
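
The reason this now compiles with CONFIG_PCI_ATS disabled is that
pci_physfn() degrades to returning the device itself in that
configuration, so the physfn member is never referenced. Roughly (from
include/linux/pci.h of this era, quoted from memory; check the tree for
the exact definition):

#ifdef CONFIG_PCI_ATS
static inline struct pci_dev *pci_physfn(struct pci_dev *dev)
{
	if (dev->is_virtfn)
		dev = dev->physfn;

	return dev;
}
#else
static inline struct pci_dev *pci_physfn(struct pci_dev *dev)
{
	return dev;
}
#endif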


Re: [PATCH v13 05/15] iommu/smmuv3: Get prepared for nested stage support

2020-11-18 Thread kernel test robot
Hi Eric,

I love your patch! Perhaps something to improve:

[auto build test WARNING on iommu/next]
[also build test WARNING on linus/master v5.10-rc4 next-20201118]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:
https://github.com/0day-ci/linux/commits/Eric-Auger/SMMUv3-Nested-Stage-Setup-IOMMU-part/20201118-192520
base:   https://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git next
config: arm64-randconfig-s031-20201118 (attached as .config)
compiler: aarch64-linux-gcc (GCC) 9.3.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# apt-get install sparse
# sparse version: v0.6.3-123-g626c4742-dirty
# https://github.com/0day-ci/linux/commit/7308cdb07384d807c5ef43e6bfe0cd61c35a121e
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Eric-Auger/SMMUv3-Nested-Stage-Setup-IOMMU-part/20201118-192520
git checkout 7308cdb07384d807c5ef43e6bfe0cd61c35a121e
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=arm64

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot 


"sparse warnings: (new ones prefixed by >>)"
>> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:1326:37: sparse: sparse: restricted __le64 degrades to integer
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:1326:37: sparse: sparse: cast to restricted __le64
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c: note: in included file (through arch/arm64/include/asm/atomic.h, include/linux/atomic.h, include/asm-generic/bitops/atomic.h, ...):
   arch/arm64/include/asm/cmpxchg.h:172:1: sparse: sparse: cast truncates bits from constant value (8000 becomes 0)
   arch/arm64/include/asm/cmpxchg.h:172:1: sparse: sparse: cast truncates bits from constant value (8000 becomes 0)
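
This class of sparse warning usually means a __le64 value is used
directly in arithmetic or a comparison without an endianness conversion.
A hypothetical illustration of the usual shape of the fix (names made up
here, not the actual arm-smmu-v3 change):

static bool example_dword0_is_valid(const __le64 *ste)
{
	u64 val = le64_to_cpu(ste[0]);	/* convert once to CPU endianness */

	return val & 1;			/* then test bits on the u64 copy */
}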

vim +1326 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c

  1175  
  1176  static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
  1177__le64 *dst)
  1178  {
  1179  /*
  1180   * This is hideously complicated, but we only really care about
  1181   * three cases at the moment:
  1182   *
  1183   * 1. Invalid (all zero) -> bypass/fault (init)
  1184   * 2. Bypass/fault -> single stage translation/bypass (attach)
  1185   * 3. Single or nested stage Translation/bypass -> bypass/fault (detach)
  1186   * 4. S2 -> S1 + S2 (attach_pasid_table)
  1187   * 5. S1 + S2 -> S2 (detach_pasid_table)
  1188   *
  1189   * Given that we can't update the STE atomically and the SMMU
  1190   * doesn't read the thing in a defined order, that leaves us
  1191   * with the following maintenance requirements:
  1192   *
  1193   * 1. Update Config, return (init time STEs aren't live)
  1194   * 2. Write everything apart from dword 0, sync, write dword 0, sync
  1195   * 3. Update Config, sync
  1196   */
  1197  u64 val = le64_to_cpu(dst[0]);
  1198  bool s1_live = false, s2_live = false, ste_live;
  1199  bool abort, nested = false, translate = false;
  1200  struct arm_smmu_device *smmu = NULL;
  1201  struct arm_smmu_s1_cfg *s1_cfg;
  1202  struct arm_smmu_s2_cfg *s2_cfg;
  1203  struct arm_smmu_domain *smmu_domain = NULL;
  1204  struct arm_smmu_cmdq_ent prefetch_cmd = {
  1205  .opcode = CMDQ_OP_PREFETCH_CFG,
  1206  .prefetch   = {
  1207  .sid= sid,
  1208  },
  1209  };
  1210  
  1211  if (master) {
  1212  smmu_domain = master->domain;
  1213  smmu = master->smmu;
  1214  }
  1215  
  1216  if (smmu_domain) {
  1217  s1_cfg = &smmu_domain->s1_cfg;
  1218  s2_cfg = &smmu_domain->s2_cfg;
  1219  
  1220  switch (smmu_domain->stage) {
  1221  case ARM_SMMU_DOMAIN_S1:
  1222  s1_cfg->set = true;
  1223  s2_cfg->set = false;
  1224  break;
  1225  case ARM_SMMU_DOMAIN_S2:
  1226  s1_cfg->set = false;
  1227  s2_cfg->set = true;
  1228  break;
  1229  case ARM_SMMU_DOMAIN_NESTED:
  1230  

Re: [Patch V8 0/3] iommu: Add support to change default domain of an iommu group

2020-11-18 Thread Lu Baolu

On 11/18/20 9:52 PM, Will Deacon wrote:

On Fri, Sep 25, 2020 at 12:06:17PM -0700, Ashok Raj wrote:

Presently, the default domain of an iommu group is allocated during boot time
and it cannot be changed later. So, the device would typically be either in
identity (pass_through) mode or the device would be in DMA mode as long as the
system is up and running. There is no way to change the default domain type
dynamically i.e. after booting, a device cannot switch between identity mode and
DMA mode.

Assume a use case wherein the privileged user would want to use the device in
pass-through mode when the device is used for host so that it would be high
performing. Presently, this is not supported. Hence add support to change the
default domain of an iommu group dynamically.

Support this by writing to a sysfs file, namely
"/sys/kernel/iommu_groups//type".

Testing:

Tested by dynamically changing storage device (nvme) from
1. identity mode to DMA and making sure file transfer works
2. DMA mode to identity mode and making sure file transfer works
Tested only for intel_iommu/vt-d. Would appreciate if someone could test on AMD
and ARM based machines.

Based on iommu maintainer's 'next' branch.


Modulo my minor comments, I think this looks good for 5.11 if you can
please send a version 9.

Robin -- please can you give it the once-over too? I think root can break
things quite badly with this interface, but root can do that in other ways
anyway...


Sure. I will send a v9 after Robin's review.



Will


Best regards,
baolu


Re: [Patch V8 3/3] iommu: Document usage of "/sys/kernel/iommu_groups//type" file

2020-11-18 Thread Lu Baolu

Hi Will,

On 11/18/20 9:51 PM, Will Deacon wrote:

On Fri, Sep 25, 2020 at 12:06:20PM -0700, Ashok Raj wrote:

From: Sai Praneeth Prakhya 

The default domain type of an iommu group can be changed by writing to
"/sys/kernel/iommu_groups//type" file. Hence, document it's usage
and more importantly spell out its limitations.

Cc: Christoph Hellwig 
Cc: Joerg Roedel 
Cc: Ashok Raj 
Cc: Will Deacon 
Cc: Lu Baolu 
Cc: Sohil Mehta 
Cc: Robin Murphy 
Cc: Jacob Pan 
Reviewed-by: Lu Baolu 
Signed-off-by: Sai Praneeth Prakhya 
---
  .../ABI/testing/sysfs-kernel-iommu_groups  | 30 ++
  1 file changed, 30 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-kernel-iommu_groups 
b/Documentation/ABI/testing/sysfs-kernel-iommu_groups
index 017f5bc3920c..effde9d23f4f 100644
--- a/Documentation/ABI/testing/sysfs-kernel-iommu_groups
+++ b/Documentation/ABI/testing/sysfs-kernel-iommu_groups
@@ -33,3 +33,33 @@ Description:In case an RMRR is used only by graphics or 
USB devices
it is now exposed as "direct-relaxable" instead of "direct".
In device assignment use case, for instance, those RMRR
are considered to be relaxable and safe.
+
+What:  /sys/kernel/iommu_groups//type
+Date:  September 2020
+KernelVersion: v5.10


^^ Please can you update these two lines?


Sure.




+Contact:   Sai Praneeth Prakhya 
+Description:   Let the user know the type of default domain in use by iommu
+   for this group. A privileged user could request kernel to change
+   the group type by writing to this file. Presently, only three
+   types are supported
+   1. DMA: All the DMA transactions from the device in this group
+   are translated by the iommu.
+   2. identity: All the DMA transactions from the device in this
+group are *not* translated by the iommu.
+   3. auto: Change to the type the device was booted with. When the
+user reads the file he would never see "auto". This is


Can we avoid assuming gender here and just use "they" instead of "he", please?
Same thing for the "Caution" note below.


Yes, absolutely.




+just a write only value.


I can't figure out from this description what string is returned to
userspace in the case that the group is configured as  blocked or unmanaged.


This series only enables switching a default domain in use between DMA
and IDENTITY. Other cases will result in write failures.




+   Note:
+   -
+   A group type could be modified only when


s/could be/may be/


+   1. The group has *only* one device
+   2. The device in the group is not bound to any device driver.
+  So, the user must first unbind the appropriate driver and
+  then change the default domain type.
+   Caution:
+   
+   Unbinding a device driver will take away the driver's control
+   over the device and if done on devices that host root file
+   system could lead to catastrophic effects (the user might
+   need to reboot the machine to get it to normal state). So, it's
+   expected that the user understands what he is doing.


Thanks,

Will


Best regards,
baolu


Re: [Patch V8 1/3] iommu: Add support to change default domain of an iommu group

2020-11-18 Thread Lu Baolu

Hi Will,

The original author of this patch series has left Intel. I am now the
backup.

On 11/18/20 9:51 PM, Will Deacon wrote:

On Fri, Sep 25, 2020 at 12:06:18PM -0700, Ashok Raj wrote:

From: Sai Praneeth Prakhya 

Presently, the default domain of an iommu group is allocated during boot
time and it cannot be changed later. So, the device would typically be
either in identity (also known as pass_through) mode or the device would be
in DMA mode as long as the machine is up and running. There is no way to
change the default domain type dynamically i.e. after booting, a device
cannot switch between identity mode and DMA mode.

But, assume a use case wherein the user trusts the device and believes that
the OS is secure enough and hence wants *only* this device to bypass IOMMU
(so that it could be high performing) whereas all the other devices to go
through IOMMU (so that the system is protected). Presently, this use case
is not supported. It will be helpful if there is some way to change the
default domain of an iommu group dynamically. Hence, add such support.

A privileged user could request the kernel to change the default domain
type of a iommu group by writing to
"/sys/kernel/iommu_groups//type" file. Presently, only three values
are supported
1. identity: all the DMA transactions from the device in this group are
  *not* translated by the iommu
2. DMA: all the DMA transactions from the device in this group are
 translated by the iommu
3. auto: change to the type the device was booted with

Note:
1. Default domain of an iommu group with two or more devices cannot be
changed.
2. The device in the iommu group shouldn't be bound to any driver.
3. The device shouldn't be assigned to user for direct access.
4. The vendor iommu driver is required to add def_domain_type() callback.
The change request will fail if the request type conflicts with that
returned from the callback.

Please see "Documentation/ABI/testing/sysfs-kernel-iommu_groups" for more
information.

Cc: Christoph Hellwig 
Cc: Joerg Roedel 
Cc: Ashok Raj 
Cc: Will Deacon 
Cc: Lu Baolu 
Cc: Sohil Mehta 
Cc: Robin Murphy 
Cc: Jacob Pan 
Reviewed-by: Lu Baolu 
Signed-off-by: Sai Praneeth Prakhya 
---
  drivers/iommu/iommu.c | 225 +-
  1 file changed, 224 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 6c14c88cd525..2e93c48ce248 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -93,6 +93,8 @@ static void __iommu_detach_group(struct iommu_domain *domain,
  static int iommu_create_device_direct_mappings(struct iommu_group *group,
   struct device *dev);
  static struct iommu_group *iommu_group_get_for_dev(struct device *dev);
+static ssize_t iommu_group_store_type(struct iommu_group *group,
+ const char *buf, size_t count);
  
  #define IOMMU_GROUP_ATTR(_name, _mode, _show, _store)		\

  struct iommu_group_attribute iommu_group_attr_##_name =   \
@@ -525,7 +527,8 @@ static IOMMU_GROUP_ATTR(name, S_IRUGO, 
iommu_group_show_name, NULL);
  static IOMMU_GROUP_ATTR(reserved_regions, 0444,
iommu_group_show_resv_regions, NULL);
  
-static IOMMU_GROUP_ATTR(type, 0444, iommu_group_show_type, NULL);

+static IOMMU_GROUP_ATTR(type, 0644, iommu_group_show_type,
+   iommu_group_store_type);
  
  static void iommu_group_release(struct kobject *kobj)

  {
@@ -2849,3 +2852,223 @@ int iommu_sva_get_pasid(struct iommu_sva *handle)
return ops->sva_get_pasid(handle);
  }
  EXPORT_SYMBOL_GPL(iommu_sva_get_pasid);
+
+/*
+ * Changes the default domain of an iommu group that has *only* one device
+ *
+ * @group: The group for which the default domain should be changed
+ * @prev_dev: The device in the group (this is used to make sure that the 
device
+ *  hasn't changed after the caller has called this function)
+ * @type: The type of the new default domain that gets associated with the 
group
+ *
+ * Returns 0 on success and error code on failure
+ *
+ * Note:
+ * 1. Presently, this function is called only when user requests to change the
+ *group's default domain type through 
/sys/kernel/iommu_groups//type
+ *Please take a closer look if intended to use for other purposes.
+ */
+static int iommu_change_dev_def_domain(struct iommu_group *group,
+  struct device *prev_dev, int type)
+{
+   struct iommu_domain *prev_dom;
+   struct group_device *grp_dev;
+   const struct iommu_ops *ops;
+   int ret, dev_def_dom;
+   struct device *dev;
+
+   if (!group)
+   return -EINVAL;
+
+   mutex_lock(&group->mutex);
+
+   if (group->default_domain != group->domain) {
+   pr_err_ratelimited("Group not assigned to default domain\n");


This error is lacking any context. Can we use dev_err_ratelimited to 

Re: [PATCH v5] swiotlb: Adjust SWIOTLB bounce buffer size for SEV guests.

2020-11-18 Thread Borislav Petkov
On Wed, Nov 18, 2020 at 08:12:43PM +, Ashish Kalra wrote:
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index 3511736fbc74..0f42911cea57 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -1166,6 +1166,10 @@ void __init setup_arch(char **cmdline_p)
>   if (boot_cpu_has(X86_FEATURE_GBPAGES))
>   hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
>  
> +#ifdef CONFIG_X86_64
> + swiotlb_adjust();
> +#endif

Add an empty stub in include/linux/swiotlb.h for the !CONFIG_SWIOTLB
case and get rid of the ifdeffery please.

> +unsigned long __init arch_swiotlb_adjust(unsigned long iotlb_default_size)
> +{
> + unsigned long size = 0;
> +
> + /*
> +  * For SEV, all DMA has to occur via shared/unencrypted pages.
> +  * SEV uses SWIOTLB to make this happen without changing device
> +  * drivers. However, depending on the workload being run, the
> +  * default 64MB of SWIOTLB may not be enough & SWIOTLB may
> +  * run out of buffers for DMA, resulting in I/O errors and/or
> +  * performance degradation especially with high I/O workloads.
> +  * Increase the default size of SWIOTLB for SEV guests using
> +  * a minimum value of 128MB and a maximum value of 512MB,
> +  * depending on amount of provisioned guest memory.
> +  */
> + if (sev_active()) {
> + phys_addr_t total_mem = memblock_phys_mem_size();
> +
> + if (total_mem <= SZ_1G)
> + size = max(iotlb_default_size, (unsigned long) SZ_128M);
> + else if (total_mem <= SZ_4G)
> + size = max(iotlb_default_size, (unsigned long) SZ_256M);
> + else
> + size = max(iotlb_default_size, (unsigned long) SZ_512M);
> +
> + pr_info("SEV adjusted max SWIOTLB size = %luMB",

Please make that message more user-friendly.

...

> +void __init swiotlb_adjust(void)
> +{
> + unsigned long size;
> +
> + /*
> +  * If swiotlb parameter has not been specified, give a chance to
> +  * architectures such as those supporting memory encryption to
> +  * adjust/expand SWIOTLB size for their use.
> +  */
> + if (!io_tlb_nslabs) {
> + size = arch_swiotlb_adjust(IO_TLB_DEFAULT_SIZE);
> + if (size) {
> + size = ALIGN(size, 1 << IO_TLB_SHIFT);
> + io_tlb_nslabs = size >> IO_TLB_SHIFT;
> + io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
> +
> + pr_info("architecture adjusted SWIOTLB slabs = %lu\n",

That one too: what does "architecture adjusted SWIOTLB slabs" even
mean?!

Put yourself in your code user's shoes and see if that message makes
sense to her/him.

Thx.

-- 
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette


[PATCH v5] swiotlb: Adjust SWIOTLB bounce buffer size for SEV guests.

2020-11-18 Thread Ashish Kalra
From: Ashish Kalra 

For SEV, all DMA to and from the guest has to use shared
(unencrypted) pages. SEV uses SWIOTLB to make this
happen without requiring changes to device drivers.
However, depending on the workload being run, the default
64MB of SWIOTLB might not be enough, and SWIOTLB
may run out of buffers to use for DMA, resulting
in I/O errors and/or performance degradation for
high I/O workloads.

Increase the default size of SWIOTLB for SEV guests,
using a minimum value of 128MB and a maximum value
of 512MB, depending on the amount of provisioned guest
memory.

Using the late_initcall() interface to invoke
swiotlb_adjust() does not work, as the size
adjustment needs to be done before mem_encrypt_init()
and reserve_crashkernel(), which use the allocated
SWIOTLB buffer size; hence it is called explicitly
from setup_arch().

The SWIOTLB default size adjustment is added as an
architecture-specific interface/callback to allow
architectures such as those supporting memory
encryption to adjust/expand the SWIOTLB size for
their use.

v5 fixes build errors and warnings as
Reported-by: kbuild test robot 

Signed-off-by: Ashish Kalra 
---
 arch/x86/kernel/setup.c   |  4 
 arch/x86/mm/mem_encrypt.c | 32 
 include/linux/swiotlb.h   |  2 ++
 kernel/dma/swiotlb.c  | 27 +++
 4 files changed, 65 insertions(+)

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 3511736fbc74..0f42911cea57 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1166,6 +1166,10 @@ void __init setup_arch(char **cmdline_p)
if (boot_cpu_has(X86_FEATURE_GBPAGES))
hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
 
+#ifdef CONFIG_X86_64
+   swiotlb_adjust();
+#endif
+
/*
 * Reserve memory for crash kernel after SRAT is parsed so that it
 * won't consume hotpluggable memory.
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 3f248f0d0e07..f6c04a3ac830 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -490,6 +490,38 @@ static void print_mem_encrypt_feature_info(void)
 }
 
 /* Architecture __weak replacement functions */
+unsigned long __init arch_swiotlb_adjust(unsigned long iotlb_default_size)
+{
+   unsigned long size = 0;
+
+   /*
+* For SEV, all DMA has to occur via shared/unencrypted pages.
+* SEV uses SWIOTLB to make this happen without changing device
+* drivers. However, depending on the workload being run, the
+* default 64MB of SWIOTLB may not be enough & SWIOTLB may
+* run out of buffers for DMA, resulting in I/O errors and/or
+* performance degradation especially with high I/O workloads.
+* Increase the default size of SWIOTLB for SEV guests using
+* a minimum value of 128MB and a maximum value of 512MB,
+* depending on amount of provisioned guest memory.
+*/
+   if (sev_active()) {
+   phys_addr_t total_mem = memblock_phys_mem_size();
+
+   if (total_mem <= SZ_1G)
+   size = max(iotlb_default_size, (unsigned long) SZ_128M);
+   else if (total_mem <= SZ_4G)
+   size = max(iotlb_default_size, (unsigned long) SZ_256M);
+   else
+   size = max(iotlb_default_size, (unsigned long) SZ_512M);
+
+   pr_info("SEV adjusted max SWIOTLB size = %luMB",
+   size >> 20);
+   }
+
+   return size;
+}
+
 void __init mem_encrypt_init(void)
 {
if (!sme_me_mask)
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 046bb94bd4d6..9d34728ad5d7 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -33,6 +33,8 @@ extern void swiotlb_init(int verbose);
 int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose);
 extern unsigned long swiotlb_nr_tbl(void);
 unsigned long swiotlb_size_or_default(void);
+void __init swiotlb_adjust(void);
+unsigned long __init arch_swiotlb_adjust(unsigned long size);
 extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs);
 extern void __init swiotlb_update_mem_attributes(void);
 
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index c19379fabd20..66a9e627bb51 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -163,6 +163,33 @@ unsigned long swiotlb_size_or_default(void)
return size ? size : (IO_TLB_DEFAULT_SIZE);
 }
 
+unsigned long __init __weak arch_swiotlb_adjust(unsigned long size)
+{
+   return 0;
+}
+
+void __init swiotlb_adjust(void)
+{
+   unsigned long size;
+
+   /*
+* If swiotlb parameter has not been specified, give a chance to
+* architectures such as those supporting memory encryption to
+* adjust/expand SWIOTLB size for their use.
+*/
+   if (!io_tlb_nslabs) {
+   size = arch_swiotlb_adjust(IO_TLB_DEFAULT_SIZE);
+   if (size) {
+  

Re: [PATCH v13 01/15] iommu: Introduce attach/detach_pasid_table API

2020-11-18 Thread Jacob Pan
Hi Eric,

On Wed, 18 Nov 2020 12:21:37 +0100, Eric Auger wrote:

> In virtualization use case, when a guest is assigned
> a PCI host device, protected by a virtual IOMMU on the guest,
> the physical IOMMU must be programmed to be consistent with
> the guest mappings. If the physical IOMMU supports two
> translation stages it makes sense to program guest mappings
> onto the first stage/level (ARM/Intel terminology) while the host
> owns the stage/level 2.
> 
> In that case, it is mandated to trap on guest configuration
> settings and pass those to the physical iommu driver.
> 
> This patch adds a new API to the iommu subsystem that allows
> to set/unset the pasid table information.
> 
> A generic iommu_pasid_table_config struct is introduced in
> a new iommu.h uapi header. This is going to be used by the VFIO
> user API.
> 
> Signed-off-by: Jean-Philippe Brucker 
> Signed-off-by: Liu, Yi L 
> Signed-off-by: Ashok Raj 
> Signed-off-by: Jacob Pan 
> Signed-off-by: Eric Auger 
> 
> ---
> 
> v12 -> v13:
> - Fix config check
> 
> v11 -> v12:
> - add argsz, name the union
> ---
>  drivers/iommu/iommu.c  | 68 ++
>  include/linux/iommu.h  | 21 
>  include/uapi/linux/iommu.h | 54 ++
>  3 files changed, 143 insertions(+)
> 
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index b53446bb8c6b..978fe34378fb 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -2171,6 +2171,74 @@ int iommu_uapi_sva_unbind_gpasid(struct
> iommu_domain *domain, struct device *dev }
>  EXPORT_SYMBOL_GPL(iommu_uapi_sva_unbind_gpasid);
>  
> +int iommu_attach_pasid_table(struct iommu_domain *domain,
> +  struct iommu_pasid_table_config *cfg)
> +{
> + if (unlikely(!domain->ops->attach_pasid_table))
> + return -ENODEV;
> +
> + return domain->ops->attach_pasid_table(domain, cfg);
> +}
> +
> +int iommu_uapi_attach_pasid_table(struct iommu_domain *domain,
> +   void __user *uinfo)
> +{
> + struct iommu_pasid_table_config pasid_table_data = { 0 };
> + u32 minsz;
> +
> + if (unlikely(!domain->ops->attach_pasid_table))
> + return -ENODEV;
> +
> + /*
> +  * No new spaces can be added before the variable sized union,
> the
> +  * minimum size is the offset to the union.
> +  */
> + minsz = offsetof(struct iommu_pasid_table_config, vendor_data);
> +
> + /* Copy minsz from user to get flags and argsz */
> + if (copy_from_user(&pasid_table_data, uinfo, minsz))
> + return -EFAULT;
> +
> + /* Fields before the variable size union are mandatory */
> + if (pasid_table_data.argsz < minsz)
> + return -EINVAL;
> +
> + /* PASID and address granu require additional info beyond minsz
> */
> + if (pasid_table_data.version != PASID_TABLE_CFG_VERSION_1)
> + return -EINVAL;
> + if (pasid_table_data.format == IOMMU_PASID_FORMAT_SMMUV3 &&
> + pasid_table_data.argsz <
> + offsetofend(struct iommu_pasid_table_config,
> vendor_data.smmuv3))
> + return -EINVAL;
> +
> + /*
> +  * User might be using a newer UAPI header which has a larger
> data
> +  * size, we shall support the existing flags within the current
> +  * size. Copy the remaining user data _after_ minsz but not more
> +  * than the current kernel supported size.
> +  */
> + if (copy_from_user((void *)&pasid_table_data + minsz, uinfo +
> minsz,
> +min_t(u32, pasid_table_data.argsz,
> sizeof(pasid_table_data)) - minsz))
> + return -EFAULT;
> +
> + /* Now the argsz is validated, check the content */
> + if (pasid_table_data.config < IOMMU_PASID_CONFIG_TRANSLATE ||
> + pasid_table_data.config > IOMMU_PASID_CONFIG_ABORT)
> + return -EINVAL;
> +
> + return domain->ops->attach_pasid_table(domain,
> &pasid_table_data); +}
> +EXPORT_SYMBOL_GPL(iommu_uapi_attach_pasid_table);
> +
> +void iommu_detach_pasid_table(struct iommu_domain *domain)
> +{
> + if (unlikely(!domain->ops->detach_pasid_table))
> + return;
> +
> + domain->ops->detach_pasid_table(domain);
> +}
> +EXPORT_SYMBOL_GPL(iommu_detach_pasid_table);
> +
>  static void __iommu_detach_device(struct iommu_domain *domain,
> struct device *dev)
>  {
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index b95a6f8db6ff..464fcbecf841 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -223,6 +223,8 @@ struct iommu_iotlb_gather {
>   * @cache_invalidate: invalidate translation caches
>   * @sva_bind_gpasid: bind guest pasid and mm
>   * @sva_unbind_gpasid: unbind guest pasid and mm
> + * @attach_pasid_table: attach a pasid table
> + * @detach_pasid_table: detach the pasid table
>   * @def_domain_type: device default domain type, return value:
>   *   - IOMMU_DOMAI

[PATCH] WIP! media: uvcvideo: Use dma_alloc_noncontiguous API

2020-11-18 Thread Ricardo Ribalda
On architectures with no coherent caching, such as ARM, use the
dma_alloc_noncontiguous API and handle the cache flushing manually using
dma_sync_single().

With this patch, on the affected architectures we can measure up to a 20x
performance improvement in uvc_video_copy_data_work().

Signed-off-by: Ricardo Ribalda 
---

This patch depends on the dma_alloc_noncontiguous API:

https://lore.kernel.org/patchwork/patch/1315351/#1535182

 drivers/media/usb/uvc/uvc_video.c | 69 +--
 drivers/media/usb/uvc/uvcvideo.h  |  1 +
 2 files changed, 58 insertions(+), 12 deletions(-)

diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
index ff624bb857d3..ef1b029b8576 100644
--- a/drivers/media/usb/uvc/uvc_video.c
+++ b/drivers/media/usb/uvc/uvc_video.c
@@ -1641,6 +1641,11 @@ static void uvc_video_encode_bulk(struct uvc_urb *uvc_urb,
urb->transfer_buffer_length = stream->urb_size - len;
 }
 
+static inline struct device *stream_to_dmadev(struct uvc_streaming *stream)
+{
+   return stream->dev->udev->bus->controller->parent;
+}
+
 static void uvc_video_complete(struct urb *urb)
 {
struct uvc_urb *uvc_urb = urb->context;
@@ -1693,6 +1698,11 @@ static void uvc_video_complete(struct urb *urb)
 * Process the URB headers, and optionally queue expensive memcpy tasks
 * to be deferred to a work queue.
 */
+   if (uvc_urb->pages)
+   dma_sync_single_for_cpu(stream_to_dmadev(stream),
+   urb->transfer_dma,
+   urb->transfer_buffer_length,
+   DMA_FROM_DEVICE);
stream->decode(uvc_urb, buf, buf_meta);
 
/* If no async work is needed, resubmit the URB immediately. */
@@ -1723,8 +1733,15 @@ static void uvc_free_urb_buffers(struct uvc_streaming *stream)
continue;
 
 #ifndef CONFIG_DMA_NONCOHERENT
-   usb_free_coherent(stream->dev->udev, stream->urb_size,
- uvc_urb->buffer, uvc_urb->dma);
+   if (uvc_urb->pages) {
+   vunmap(uvc_urb->buffer);
+   dma_free_noncontiguous(stream_to_dmadev(stream),
+  stream->urb_size,
+  uvc_urb->pages, uvc_urb->dma);
+   } else {
+   usb_free_coherent(stream->dev->udev, stream->urb_size,
+ uvc_urb->buffer, uvc_urb->dma);
+   }
 #else
kfree(uvc_urb->buffer);
 #endif
@@ -1734,6 +1751,42 @@ static void uvc_free_urb_buffers(struct uvc_streaming *stream)
stream->urb_size = 0;
 }
 
+#ifndef CONFIG_DMA_NONCOHERENT
+static bool uvc_alloc_urb_buffer(struct uvc_streaming *stream, struct uvc_urb *uvc_urb,
+gfp_t gfp_flags)
+{
+   struct device *dma_dev = stream_to_dmadev(stream);
+
+   if (!dma_can_alloc_noncontiguous(dma_dev)) {
+   uvc_urb->buffer = usb_alloc_coherent(stream->dev->udev, stream->urb_size,
+gfp_flags | __GFP_NOWARN, &uvc_urb->dma);
+   return uvc_urb->buffer != NULL;
+   }
+
+   uvc_urb->pages = dma_alloc_noncontiguous(dma_dev, stream->urb_size,
+&uvc_urb->dma, gfp_flags | __GFP_NOWARN, 0);
+   if (!uvc_urb->pages)
+   return false;
+
+   uvc_urb->buffer = vmap(uvc_urb->pages, PAGE_ALIGN(stream->urb_size) >> PAGE_SHIFT,
+  VM_DMA_COHERENT, PAGE_KERNEL);
+   if (!uvc_urb->buffer) {
+   dma_free_noncontiguous(dma_dev, stream->urb_size, uvc_urb->pages, uvc_urb->dma);
+   return false;
+   }
+
+   return true;
+}
+#else
+static bool uvc_alloc_urb_buffer(struct uvc_streaming *stream, struct uvc_urb *uvc_urb,
+gfp_t gfp_flags)
+{
+   uvc_urb->buffer = kmalloc(stream->urb_size, gfp_flags | __GFP_NOWARN);
+
+   return uvc_urb->buffer != NULL;
+}
+#endif
+
 /*
  * Allocate transfer buffers. This function can be called with buffers
  * already allocated when resuming from suspend, in which case it will
@@ -1764,19 +1817,11 @@ static int uvc_alloc_urb_buffers(struct uvc_streaming *stream,
 
/* Retry allocations until one succeed. */
for (; npackets > 1; npackets /= 2) {
+   stream->urb_size = psize * npackets;
for (i = 0; i < UVC_URBS; ++i) {
struct uvc_urb *uvc_urb = &stream->uvc_urb[i];
 
-   stream->urb_size = psize * npackets;
-#ifndef CONFIG_DMA_NONCOHERENT
-   uvc_urb->buffer = usb_alloc_coherent(
-   stream->dev->udev, stream->urb_size,
-   gfp_flags | __GFP_NOWARN, 

Re: [PATCH v2] iommu/vt-d: avoid unnecessary panic if iommu init fails in tboot system

2020-11-18 Thread Will Deacon
On Tue, 10 Nov 2020 15:19:08 +0800, Zhenzhong Duan wrote:
> The "intel_iommu=off" command line option is used to disable the iommu, but
> the iommu is force enabled in a tboot system for security reasons.
> 
> However, for better performance on high speed network devices, a new option
> "intel_iommu=tboot_noforce" is introduced to disable this force-on.
> 
> By default the kernel should panic if iommu init fails in tboot for security
> reasons, but it's unnecessary if we use "intel_iommu=tboot_noforce,off".
> 
> [...]

Applied to arm64 (for-next/iommu/fixes), thanks!

[1/1] iommu/vt-d: Avoid panic if iommu init fails in tboot system
  https://git.kernel.org/arm64/c/4d213e76a359

Cheers,
-- 
Will

https://fixes.arm64.dev
https://next.arm64.dev
https://will.arm64.dev


Re: [Patch V8 0/3] iommu: Add support to change default domain of an iommu group

2020-11-18 Thread Will Deacon
On Fri, Sep 25, 2020 at 12:06:17PM -0700, Ashok Raj wrote:
> Presently, the default domain of an iommu group is allocated during boot time
> and it cannot be changed later. So, the device would typically be either in
> identity (pass_through) mode or the device would be in DMA mode as long as the
> system is up and running. There is no way to change the default domain type
> dynamically i.e. after booting, a device cannot switch between identity mode 
> and
> DMA mode.
> 
> Assume a use case wherein the privileged user would want to use the device in
> pass-through mode when the device is used for host so that it would be high
> performing. Presently, this is not supported. Hence add support to change the
> default domain of an iommu group dynamically.
> 
> Support this by writing to a sysfs file, namely
> "/sys/kernel/iommu_groups//type".
> 
> Testing:
> 
> Tested by dynamically changing storage device (nvme) from
> 1. identity mode to DMA and making sure file transfer works
> 2. DMA mode to identity mode and making sure file transfer works
> Tested only for intel_iommu/vt-d. Would appreciate if someone could test on 
> AMD
> and ARM based machines.
> 
> Based on iommu maintainer's 'next' branch.

Modulo my minor comments, I think this looks good for 5.11 if you can
please send a version 9.

Robin -- please can you give it the once-over too? I think root can break
things quite badly with this interface, but root can do that in other ways
anyway...

Will


Re: [Patch V8 1/3] iommu: Add support to change default domain of an iommu group

2020-11-18 Thread Will Deacon
On Fri, Sep 25, 2020 at 12:06:18PM -0700, Ashok Raj wrote:
> From: Sai Praneeth Prakhya 
> 
> Presently, the default domain of an iommu group is allocated during boot
> time and it cannot be changed later. So, the device would typically be
> either in identity (also known as pass_through) mode or the device would be
> in DMA mode as long as the machine is up and running. There is no way to
> change the default domain type dynamically i.e. after booting, a device
> cannot switch between identity mode and DMA mode.
> 
> But, assume a use case wherein the user trusts the device and believes that
> the OS is secure enough and hence wants *only* this device to bypass IOMMU
> (so that it could be high performing) whereas all the other devices to go
> through IOMMU (so that the system is protected). Presently, this use case
> is not supported. It will be helpful if there is some way to change the
> default domain of an iommu group dynamically. Hence, add such support.
> 
> A privileged user could request the kernel to change the default domain
> type of a iommu group by writing to
> "/sys/kernel/iommu_groups//type" file. Presently, only three values
> are supported
> 1. identity: all the DMA transactions from the device in this group are
>  *not* translated by the iommu
> 2. DMA: all the DMA transactions from the device in this group are
> translated by the iommu
> 3. auto: change to the type the device was booted with
> 
> Note:
> 1. Default domain of an iommu group with two or more devices cannot be
>changed.
> 2. The device in the iommu group shouldn't be bound to any driver.
> 3. The device shouldn't be assigned to user for direct access.
> 4. The vendor iommu driver is required to add def_domain_type() callback.
>The change request will fail if the request type conflicts with that
>returned from the callback.
> 
> Please see "Documentation/ABI/testing/sysfs-kernel-iommu_groups" for more
> information.
> 
> Cc: Christoph Hellwig 
> Cc: Joerg Roedel 
> Cc: Ashok Raj 
> Cc: Will Deacon 
> Cc: Lu Baolu 
> Cc: Sohil Mehta 
> Cc: Robin Murphy 
> Cc: Jacob Pan 
> Reviewed-by: Lu Baolu 
> Signed-off-by: Sai Praneeth Prakhya 
> ---
>  drivers/iommu/iommu.c | 225 
> +-
>  1 file changed, 224 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 6c14c88cd525..2e93c48ce248 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -93,6 +93,8 @@ static void __iommu_detach_group(struct iommu_domain 
> *domain,
>  static int iommu_create_device_direct_mappings(struct iommu_group *group,
>  struct device *dev);
>  static struct iommu_group *iommu_group_get_for_dev(struct device *dev);
> +static ssize_t iommu_group_store_type(struct iommu_group *group,
> +   const char *buf, size_t count);
>  
>  #define IOMMU_GROUP_ATTR(_name, _mode, _show, _store)\
>  struct iommu_group_attribute iommu_group_attr_##_name =  \
> @@ -525,7 +527,8 @@ static IOMMU_GROUP_ATTR(name, S_IRUGO, 
> iommu_group_show_name, NULL);
>  static IOMMU_GROUP_ATTR(reserved_regions, 0444,
>   iommu_group_show_resv_regions, NULL);
>  
> -static IOMMU_GROUP_ATTR(type, 0444, iommu_group_show_type, NULL);
> +static IOMMU_GROUP_ATTR(type, 0644, iommu_group_show_type,
> + iommu_group_store_type);
>  
>  static void iommu_group_release(struct kobject *kobj)
>  {
> @@ -2849,3 +2852,223 @@ int iommu_sva_get_pasid(struct iommu_sva *handle)
>   return ops->sva_get_pasid(handle);
>  }
>  EXPORT_SYMBOL_GPL(iommu_sva_get_pasid);
> +
> +/*
> + * Changes the default domain of an iommu group that has *only* one device
> + *
> + * @group: The group for which the default domain should be changed
> + * @prev_dev: The device in the group (this is used to make sure that the 
> device
> + *hasn't changed after the caller has called this function)
> + * @type: The type of the new default domain that gets associated with the 
> group
> + *
> + * Returns 0 on success and error code on failure
> + *
> + * Note:
> + * 1. Presently, this function is called only when user requests to change 
> the
> + *group's default domain type through 
> /sys/kernel/iommu_groups//type
> + *Please take a closer look if intended to use for other purposes.
> + */
> +static int iommu_change_dev_def_domain(struct iommu_group *group,
> +struct device *prev_dev, int type)
> +{
> + struct iommu_domain *prev_dom;
> + struct group_device *grp_dev;
> + const struct iommu_ops *ops;
> + int ret, dev_def_dom;
> + struct device *dev;
> +
> + if (!group)
> + return -EINVAL;
> +
> + mutex_lock(&group->mutex);
> +
> + if (group->default_domain != group->domain) {
> + pr_err_ratelimited("Group not assigned to default domain\n");

This e

Re: [Patch V8 3/3] iommu: Document usage of "/sys/kernel/iommu_groups//type" file

2020-11-18 Thread Will Deacon
On Fri, Sep 25, 2020 at 12:06:20PM -0700, Ashok Raj wrote:
> From: Sai Praneeth Prakhya 
> 
> The default domain type of an iommu group can be changed by writing to
> "/sys/kernel/iommu_groups//type" file. Hence, document it's usage
> and more importantly spell out its limitations.
> 
> Cc: Christoph Hellwig 
> Cc: Joerg Roedel 
> Cc: Ashok Raj 
> Cc: Will Deacon 
> Cc: Lu Baolu 
> Cc: Sohil Mehta 
> Cc: Robin Murphy 
> Cc: Jacob Pan 
> Reviewed-by: Lu Baolu 
> Signed-off-by: Sai Praneeth Prakhya 
> ---
>  .../ABI/testing/sysfs-kernel-iommu_groups  | 30 
> ++
>  1 file changed, 30 insertions(+)
> 
> diff --git a/Documentation/ABI/testing/sysfs-kernel-iommu_groups 
> b/Documentation/ABI/testing/sysfs-kernel-iommu_groups
> index 017f5bc3920c..effde9d23f4f 100644
> --- a/Documentation/ABI/testing/sysfs-kernel-iommu_groups
> +++ b/Documentation/ABI/testing/sysfs-kernel-iommu_groups
> @@ -33,3 +33,33 @@ Description:In case an RMRR is used only by graphics 
> or USB devices
>   it is now exposed as "direct-relaxable" instead of "direct".
>   In device assignment use case, for instance, those RMRR
>   are considered to be relaxable and safe.
> +
> +What:/sys/kernel/iommu_groups//type
> +Date:September 2020
> +KernelVersion:   v5.10

^^ Please can you update these two lines?

> +Contact: Sai Praneeth Prakhya 
> +Description: Let the user know the type of default domain in use by iommu
> + for this group. A privileged user could request kernel to change
> + the group type by writing to this file. Presently, only three
> + types are supported
> + 1. DMA: All the DMA transactions from the device in this group
> + are translated by the iommu.
> + 2. identity: All the DMA transactions from the device in this
> +  group are *not* translated by the iommu.
> + 3. auto: Change to the type the device was booted with. When the
> +  user reads the file he would never see "auto". This is

Can we avoid assuming gender here and just use "they" instead of "he", please?
Same thing for the "Caution" note below.

> +  just a write only value.

I can't figure out from this description what string is returned to
userspace in the case that the group is configured as blocked or unmanaged.

> + Note:
> + -
> + A group type could be modified only when

s/could be/may be/

> + 1. The group has *only* one device
> + 2. The device in the group is not bound to any device driver.
> +So, the user must first unbind the appropriate driver and
> +then change the default domain type.
> + Caution:
> + 
> + Unbinding a device driver will take away the driver's control
> + over the device and if done on devices that host root file
> + system could lead to catastrophic effects (the user might
> + need to reboot the machine to get it to normal state). So, it's
> + expected that the user understands what he is doing.

Thanks,

Will


Re: [PATCH v2] iommu/vt-d: avoid unnecessory panic if iommu init fail in tboot system

2020-11-18 Thread Will Deacon
On Wed, Nov 18, 2020 at 07:32:25AM +0800, Lu Baolu wrote:
> Please consider this patch for v5.10.

Cheers, I'll stick this onto a fixes branch momentarily.

Will


[PATCH v13 15/15] iommu/smmuv3: Add PASID cache invalidation per PASID

2020-11-18 Thread Eric Auger
In order to cascade guest CFGI_CD, let's add PASID cache invalidation
per PASID.
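
For illustration, a hypothetical caller-side sketch of the invalidation info
this path handles (only the struct, field and flag names below come from this
series; the wrapper function and the direct call into the driver op are
assumptions, and the UAPI argsz/copy handling is omitted):

static int example_inv_guest_cd(struct iommu_domain *domain,
                                struct device *dev, u32 pasid)
{
        struct iommu_cache_invalidate_info inv_info = {
                .version     = IOMMU_CACHE_INVALIDATE_INFO_VERSION_1,
                .cache       = IOMMU_CACHE_INV_TYPE_PASID,
                .granularity = IOMMU_INV_GRANU_PASID,
        };

        /* CD cache invalidation for a single guest PASID (SSID) */
        inv_info.granu.pasid_info.flags = IOMMU_INV_PASID_FLAGS_PASID;
        inv_info.granu.pasid_info.pasid = pasid;

        return domain->ops->cache_invalidate(domain, dev, &inv_info);
}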

Signed-off-by: Eric Auger 

---

v12 -> v13:
- Fix !(info->flags & IOMMU_INV_PASID_FLAGS_PASID) check
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 16 +---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index ed64699a4a0d..45adfe4da11b 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2999,9 +2999,19 @@ arm_smmu_cache_invalidate(struct iommu_domain *domain, 
struct device *dev,
} else {
return -EINVAL;
}
-   }
-   if (inv_info->cache & IOMMU_CACHE_INV_TYPE_PASID ||
-   inv_info->cache & IOMMU_CACHE_INV_TYPE_DEV_IOTLB) {
+   } else if (inv_info->cache & IOMMU_CACHE_INV_TYPE_PASID) {
+   if (inv_info->granularity == IOMMU_INV_GRANU_PASID) {
+   struct iommu_inv_pasid_info *info =
+   &inv_info->granu.pasid_info;
+
+   if (!(info->flags & IOMMU_INV_PASID_FLAGS_PASID))
+   return -EINVAL;
+
+   arm_smmu_sync_cd(smmu_domain, info->pasid, true);
+   } else {
+   return -ENOENT;
+   }
+   } else { /* IOMMU_CACHE_INV_TYPE_DEV_IOTLB */
return -ENOENT;
}
return 0;
-- 
2.21.3



[PATCH v13 14/15] iommu/smmuv3: Accept configs with more than one context descriptor

2020-11-18 Thread Eric Auger
In preparation for vSVA, let's accept userspace-provided configs
with more than one CD. We check the max number of CDs against the host
IOMMU capability and also the format (linear versus 2-level).

Signed-off-by: Eric Auger 
Signed-off-by: Shameer Kolothum 
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 13 -
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 977c22d08612..ed64699a4a0d 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2905,14 +2905,17 @@ static int arm_smmu_attach_pasid_table(struct 
iommu_domain *domain,
if (smmu_domain->s1_cfg.set)
goto out;
 
-   /*
-* we currently support a single CD so s1fmt and s1dss
-* fields are also ignored
-*/
-   if (cfg->pasid_bits)
+   list_for_each_entry(master, &smmu_domain->devices, domain_head) 
{
+   if (cfg->pasid_bits > master->ssid_bits)
+   goto out;
+   }
+   if (cfg->vendor_data.smmuv3.s1fmt == STRTAB_STE_0_S1FMT_64K_L2 
&&
+   !(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB))
goto out;
 
smmu_domain->s1_cfg.cdcfg.cdtab_dma = cfg->base_ptr;
+   smmu_domain->s1_cfg.s1cdmax = cfg->pasid_bits;
+   smmu_domain->s1_cfg.s1fmt = cfg->vendor_data.smmuv3.s1fmt;
smmu_domain->s1_cfg.set = true;
smmu_domain->abort = false;
break;
-- 
2.21.3



[PATCH v13 13/15] iommu/smmuv3: Report non recoverable faults

2020-11-18 Thread Eric Auger
When a stage 1 related fault event is read from the event queue,
let's propagate it to potential external fault listeners, i.e. users
who registered a fault handler.

Signed-off-by: Eric Auger 

---
v8 -> v9:
- adapt to the removal of IOMMU_FAULT_UNRECOV_PERM_VALID:
  only look at IOMMU_FAULT_UNRECOV_ADDR_VALID which comes with
  perm
- do not advertise IOMMU_FAULT_UNRECOV_PASID_VALID faults for
  translation faults
- trace errors if !master
- test nested before calling iommu_report_device_fault
- call the fault handler unconditionally in non-nested mode

v4 -> v5:
- s/IOMMU_FAULT_PERM_INST/IOMMU_FAULT_PERM_EXEC
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 102 +---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  80 +++
 2 files changed, 171 insertions(+), 11 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 1b8dad340899..977c22d08612 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1397,7 +1397,6 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device 
*smmu, u32 sid)
return 0;
 }
 
-__maybe_unused
 static struct arm_smmu_master *
 arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid)
 {
@@ -1423,25 +1422,106 @@ arm_smmu_find_master(struct arm_smmu_device *smmu, u32 
sid)
return master;
 }
 
+/* Populates the record fields according to the input SMMU event */
+static bool arm_smmu_transcode_fault(u64 *evt, u8 type,
+struct iommu_fault_unrecoverable *record)
+{
+   const struct arm_smmu_fault_propagation_data *data;
+   u32 fields;
+
+   if (type >= ARRAY_SIZE(fault_propagation))
+   return false;
+
+   data = &fault_propagation[type];
+   if (!data->reason)
+   return false;
+
+   fields = data->fields;
+
+   if (data->s1_check & FIELD_GET(EVTQ_1_S2, evt[1]))
+   return false; /* S2 related fault, don't propagate */
+
+   if (fields & IOMMU_FAULT_UNRECOV_PASID_VALID)
+   record->pasid = FIELD_GET(EVTQ_0_SUBSTREAMID, evt[0]);
+   else {
+   /* all other transcoded errors have SSV */
+   if (FIELD_GET(EVTQ_0_SSV, evt[0])) {
+   record->pasid = FIELD_GET(EVTQ_0_SUBSTREAMID, evt[0]);
+   fields |= IOMMU_FAULT_UNRECOV_PASID_VALID;
+   }
+   }
+
+   if (fields & IOMMU_FAULT_UNRECOV_ADDR_VALID) {
+   if (FIELD_GET(EVTQ_1_RNW, evt[1]))
+   record->perm = IOMMU_FAULT_PERM_READ;
+   else
+   record->perm = IOMMU_FAULT_PERM_WRITE;
+   if (FIELD_GET(EVTQ_1_PNU, evt[1]))
+   record->perm |= IOMMU_FAULT_PERM_PRIV;
+   if (FIELD_GET(EVTQ_1_IND, evt[1]))
+   record->perm |= IOMMU_FAULT_PERM_EXEC;
+   record->addr = evt[2];
+   }
+
+   if (fields & IOMMU_FAULT_UNRECOV_FETCH_ADDR_VALID)
+   record->fetch_addr = FIELD_GET(EVTQ_3_FETCH_ADDR, evt[3]);
+
+   record->flags = fields;
+   record->reason = data->reason;
+   return true;
+}
+
+static void arm_smmu_report_event(struct arm_smmu_device *smmu, u64 *evt)
+{
+   u32 sid = FIELD_GET(EVTQ_0_STREAMID, evt[0]);
+   u8 type = FIELD_GET(EVTQ_0_ID, evt[0]);
+   struct arm_smmu_master *master;
+   struct iommu_fault_event event = {};
+   bool nested;
+   int i;
+
+   master = arm_smmu_find_master(smmu, sid);
+   if (!master || !master->domain)
+   goto out;
+
+   event.fault.type = IOMMU_FAULT_DMA_UNRECOV;
+
+   nested = (master->domain->stage == ARM_SMMU_DOMAIN_NESTED);
+
+   if (nested) {
+   if (arm_smmu_transcode_fault(evt, type, &event.fault.event)) {
+   /*
+* Only S1 related faults should be reported to the
+* guest and must not flood the host log.
+* Also a fault handler should have been registered
+* to guarantee the full nested functionality
+*/
+   WARN_ON_ONCE(iommu_report_device_fault(master->dev,
+  &event));
+   return;
+   }
+   } else {
+   iommu_report_device_fault(master->dev, &event);
+   }
+out:
+   dev_info(smmu->dev, "event 0x%02x received:\n", type);
+   for (i = 0; i < EVTQ_ENT_DWORDS; ++i) {
+   dev_info(smmu->dev, "\t0x%016llx\n",
+(unsigned long long)evt[i]);
+   }
+}
+
 /* IRQ and event handlers */
 static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
 {
-   int i;
struct arm_smmu_device *smmu = dev;
struct arm_smmu_queue *q = &smmu->evtq.q;
struct arm_smmu_ll_queue *llq = &

[PATCH v13 12/15] iommu/smmuv3: Implement bind/unbind_guest_msi

2020-11-18 Thread Eric Auger
The bind/unbind_guest_msi() callbacks check that the domain
is NESTED and redirect to the dma-iommu implementation.

Signed-off-by: Eric Auger 

---

v6 -> v7:
- remove device handle argument
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 43 +
 1 file changed, 43 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index ccf2fef10b69..1b8dad340899 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2744,6 +2744,47 @@ static void arm_smmu_get_resv_regions(struct device *dev,
iommu_dma_get_resv_regions(dev, head);
 }
 
+static int
+arm_smmu_bind_guest_msi(struct iommu_domain *domain,
+   dma_addr_t giova, phys_addr_t gpa, size_t size)
+{
+   struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+   struct arm_smmu_device *smmu;
+   int ret = -EINVAL;
+
+   mutex_lock(&smmu_domain->init_mutex);
+   smmu = smmu_domain->smmu;
+   if (!smmu)
+   goto out;
+
+   if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
+   goto out;
+
+   ret = iommu_dma_bind_guest_msi(domain, giova, gpa, size);
+out:
+   mutex_unlock(&smmu_domain->init_mutex);
+   return ret;
+}
+
+static void
+arm_smmu_unbind_guest_msi(struct iommu_domain *domain, dma_addr_t giova)
+{
+   struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+   struct arm_smmu_device *smmu;
+
+   mutex_lock(&smmu_domain->init_mutex);
+   smmu = smmu_domain->smmu;
+   if (!smmu)
+   goto unlock;
+
+   if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
+   goto unlock;
+
+   iommu_dma_unbind_guest_msi(domain, giova);
+unlock:
+   mutex_unlock(&smmu_domain->init_mutex);
+}
+
 static int arm_smmu_attach_pasid_table(struct iommu_domain *domain,
   struct iommu_pasid_table_config *cfg)
 {
@@ -2967,6 +3008,8 @@ static struct iommu_ops arm_smmu_ops = {
.attach_pasid_table = arm_smmu_attach_pasid_table,
.detach_pasid_table = arm_smmu_detach_pasid_table,
.cache_invalidate   = arm_smmu_cache_invalidate,
+   .bind_guest_msi = arm_smmu_bind_guest_msi,
+   .unbind_guest_msi   = arm_smmu_unbind_guest_msi,
.dev_has_feat   = arm_smmu_dev_has_feature,
.dev_feat_enabled   = arm_smmu_dev_feature_enabled,
.dev_enable_feat= arm_smmu_dev_enable_feature,
-- 
2.21.3



[PATCH v13 11/15] iommu/smmuv3: Enforce incompatibility between nested mode and HW MSI regions

2020-11-18 Thread Eric Auger
Nested mode currently is not compatible with HW MSI reserved regions.
Indeed, MSI transactions targeting these MSI doorbells bypass the SMMU.

Let's check that nested mode is not attempted in such a configuration.

Signed-off-by: Eric Auger 
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 23 +++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 5a05c2074c8a..ccf2fef10b69 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2296,6 +2296,23 @@ static bool arm_smmu_share_msi_domain(struct 
iommu_domain *domain,
return share;
 }
 
+static bool arm_smmu_has_hw_msi_resv_region(struct device *dev)
+{
+   struct iommu_resv_region *region;
+   bool has_msi_resv_region = false;
+   LIST_HEAD(resv_regions);
+
+   iommu_get_resv_regions(dev, &resv_regions);
+   list_for_each_entry(region, &resv_regions, list) {
+   if (region->type == IOMMU_RESV_MSI) {
+   has_msi_resv_region = true;
+   break;
+   }
+   }
+   iommu_put_resv_regions(dev, &resv_regions);
+   return has_msi_resv_region;
+}
+
 static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 {
int ret = 0;
@@ -2350,10 +2367,12 @@ static int arm_smmu_attach_dev(struct iommu_domain 
*domain, struct device *dev)
/*
 * In nested mode we must check all devices belonging to the
 * domain share the same physical MSI doorbell. Otherwise nested
-* stage MSI binding is not supported.
+* stage MSI binding is not supported. Also nested mode is not
+* compatible with MSI HW reserved regions.
 */
if (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED &&
-   !arm_smmu_share_msi_domain(domain, dev)) {
+   (!arm_smmu_share_msi_domain(domain, dev) ||
+arm_smmu_has_hw_msi_resv_region(dev))) {
ret = -EINVAL;
goto out_unlock;
}
-- 
2.21.3



[PATCH v13 10/15] iommu/smmuv3: Nested mode single MSI doorbell per domain enforcement

2020-11-18 Thread Eric Auger
In nested mode we enforce the rule that all devices belonging
to the same iommu_domain share the same msi_domain.

Indeed, if several physical MSI doorbells were used within a single
iommu_domain, it would become really difficult to resolve the nested
stage mapping translating into the correct physical doorbell. So let's
forbid this situation.

Signed-off-by: Eric Auger 
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 41 +
 1 file changed, 41 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 24124361dd3b..5a05c2074c8a 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2265,6 +2265,37 @@ static void arm_smmu_detach_dev(struct arm_smmu_master 
*master)
arm_smmu_install_ste_for_dev(master);
 }
 
+static bool arm_smmu_share_msi_domain(struct iommu_domain *domain,
+ struct device *dev)
+{
+   struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+   struct irq_domain *irqd = dev_get_msi_domain(dev);
+   struct arm_smmu_master *master;
+   unsigned long flags;
+   bool share = false;
+
+   if (!irqd)
+   return true;
+
+   spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+   list_for_each_entry(master, &smmu_domain->devices, domain_head) {
+   struct irq_domain *d = dev_get_msi_domain(master->dev);
+
+   if (!d)
+   continue;
+   if (irqd != d) {
+   dev_info(dev, "Nested mode forbids to attach devices "
+"using different physical MSI doorbells "
+"to the same iommu_domain");
+   goto unlock;
+   }
+   }
+   share = true;
+unlock:
+   spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+   return share;
+}
+
 static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
 {
int ret = 0;
@@ -2316,6 +2347,16 @@ static int arm_smmu_attach_dev(struct iommu_domain 
*domain, struct device *dev)
ret = -EINVAL;
goto out_unlock;
}
+   /*
+* In nested mode we must check all devices belonging to the
+* domain share the same physical MSI doorbell. Otherwise nested
+* stage MSI binding is not supported.
+*/
+   if (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED &&
+   !arm_smmu_share_msi_domain(domain, dev)) {
+   ret = -EINVAL;
+   goto out_unlock;
+   }
 
master->domain = smmu_domain;
 
-- 
2.21.3



[PATCH v13 09/15] dma-iommu: Implement NESTED_MSI cookie

2020-11-18 Thread Eric Auger
Up to now, when the type was UNMANAGED, we used to
allocate IOVA pages within a reserved IOVA MSI range.

If both the host and the guest are exposed with SMMUs, each
would allocate an IOVA. The guest allocates an IOVA (gIOVA)
to map onto the guest MSI doorbell (gDB). The Host allocates
another IOVA (hIOVA) to map onto the physical doorbell (hDB).

So we end up with 2 unrelated mappings, at S1 and S2:
        S1             S2
gIOVA -> gDB
               hIOVA -> hDB

The PCI device would be programmed with hIOVA.
No stage 1 mapping would exist, causing the MSIs to fault.

iommu_dma_bind_guest_msi() allows passing gIOVA/gDB
to the host so that gIOVA can be used by the host instead of
re-allocating a new hIOVA.

        S1            S2
gIOVA -> gDB      ->      hDB

This time, the PCI device can be programmed with the gIOVA MSI
doorbell, which is correctly mapped through both stages.

Nested mode is not compatible with HW MSI regions as in that
case gDB and hDB should have a 1-1 mapping. This check will
be done when attaching each device to the IOMMU domain.
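
A rough conceptual sketch of the resulting flow, assuming a nested domain
(iommu_dma_bind_guest_msi() and iommu_map() are the real APIs; the caller
shown here, the prot flags and the open-coded S2 mapping are illustrative
only -- in the actual code dma-iommu installs the S2 mapping itself when
the MSI is composed):

	/* S1 (guest-owned): gIOVA -> gDB; record it in the MSI cookie */
	ret = iommu_dma_bind_guest_msi(domain, giova, gdb_gpa, granule);
	if (ret)
		return ret;

	/* S2 (host-owned): reuse gIOVA to reach the physical doorbell */
	ret = iommu_map(domain, giova, hdb_phys, granule,
			IOMMU_WRITE | IOMMU_MMIO);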

Signed-off-by: Eric Auger 

---

v10 -> v11:
- fix compilation if !CONFIG_IOMMU_DMA

v7 -> v8:
- correct iommu_dma_(un)bind_guest_msi when
  !CONFIG_IOMMU_DMA
- Mentioned nested mode is not compatible with HW MSI regions
  in commit message
- protect with msi_lock on unbind

v6 -> v7:
- removed device handle

v3 -> v4:
- change function names; add unregister
- protect with msi_lock

v2 -> v3:
- also store the device handle on S1 mapping registration.
  This guarantees that the associated S2 mapping binds
  to the correct physical MSI controller.

v1 -> v2:
- unmap stage2 on put()
---
 drivers/iommu/dma-iommu.c | 142 +-
 include/linux/dma-iommu.h |  16 +
 2 files changed, 155 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 0cbcd3fc3e7e..a14ecad6b79b 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -19,6 +19,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -27,12 +28,15 @@
 struct iommu_dma_msi_page {
struct list_headlist;
dma_addr_t  iova;
+   dma_addr_t  gpa;
phys_addr_t phys;
+   size_t  s1_granule;
 };
 
 enum iommu_dma_cookie_type {
IOMMU_DMA_IOVA_COOKIE,
IOMMU_DMA_MSI_COOKIE,
+   IOMMU_DMA_NESTED_MSI_COOKIE,
 };
 
 struct iommu_dma_cookie {
@@ -44,6 +48,7 @@ struct iommu_dma_cookie {
dma_addr_t  msi_iova;
};
struct list_headmsi_page_list;
+   spinlock_t  msi_lock;
 
/* Domain for flush queue callback; NULL if flush queue not in use */
struct iommu_domain *fq_domain;
@@ -62,6 +67,7 @@ static struct iommu_dma_cookie *cookie_alloc(enum 
iommu_dma_cookie_type type)
 
cookie = kzalloc(sizeof(*cookie), GFP_KERNEL);
if (cookie) {
+   spin_lock_init(&cookie->msi_lock);
INIT_LIST_HEAD(&cookie->msi_page_list);
cookie->type = type;
}
@@ -95,14 +101,17 @@ EXPORT_SYMBOL(iommu_get_dma_cookie);
  *
  * Users who manage their own IOVA allocation and do not want DMA API support,
  * but would still like to take advantage of automatic MSI remapping, can use
- * this to initialise their own domain appropriately. Users should reserve a
+ * this to initialise their own domain appropriately. Users may reserve a
  * contiguous IOVA region, starting at @base, large enough to accommodate the
  * number of PAGE_SIZE mappings necessary to cover every MSI doorbell address
- * used by the devices attached to @domain.
+ * used by the devices attached to @domain. The other way round is to provide
+ * usable iova pages through the iommu_dma_bind_doorbell API (nested stages
+ * use case)
  */
 int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base)
 {
struct iommu_dma_cookie *cookie;
+   int nesting, ret;
 
if (domain->type != IOMMU_DOMAIN_UNMANAGED)
return -EINVAL;
@@ -110,7 +119,12 @@ int iommu_get_msi_cookie(struct iommu_domain *domain, 
dma_addr_t base)
if (domain->iova_cookie)
return -EEXIST;
 
-   cookie = cookie_alloc(IOMMU_DMA_MSI_COOKIE);
+   ret =  iommu_domain_get_attr(domain, DOMAIN_ATTR_NESTING, &nesting);
+   if (!ret && nesting)
+   cookie = cookie_alloc(IOMMU_DMA_NESTED_MSI_COOKIE);
+   else
+   cookie = cookie_alloc(IOMMU_DMA_MSI_COOKIE);
+
if (!cookie)
return -ENOMEM;
 
@@ -131,6 +145,7 @@ void iommu_put_dma_cookie(struct iommu_domain *domain)
 {
struct iommu_dma_cookie *cookie = domain->iova_cookie;
struct iommu_dma_msi_page *msi, *tmp;
+   bool s2_unmap = false;
 
if (!cookie)
return;
@@ -138,7 +153,15 @@ void iommu_

[PATCH v13 08/15] iommu/smmuv3: Implement cache_invalidate

2020-11-18 Thread Eric Auger
Implement domain-selective and page-selective IOTLB invalidations.
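
For illustration, a hypothetical example of the invalidation info handled
below for a page-selective (ADDR granularity) invalidation; the struct,
field and flag names come from this patch, the values and local variables
are made up:

	struct iommu_cache_invalidate_info inv_info = {
		.version     = IOMMU_CACHE_INVALIDATE_INFO_VERSION_1,
		.cache       = IOMMU_CACHE_INV_TYPE_IOTLB,
		.granularity = IOMMU_INV_GRANU_ADDR,
	};

	inv_info.granu.addr_info.flags = IOMMU_INV_ADDR_FLAGS_ARCHID |
					 IOMMU_INV_ADDR_FLAGS_LEAF;
	inv_info.granu.addr_info.archid = guest_asid;	/* guest-managed ASID */
	inv_info.granu.addr_info.addr = iova;
	inv_info.granu.addr_info.granule_size = SZ_4K;
	inv_info.granu.addr_info.nb_granules = 16;	/* 64kB of IOVA space */

	/* ends up in __arm_smmu_tlb_inv_range(addr, size, granule, leaf, ...) */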

Signed-off-by: Eric Auger 

---
v7 -> v8:
- ASID based invalidation using iommu_inv_pasid_info
- check ARCHID/PASID flags in addr based invalidation
- use __arm_smmu_tlb_inv_context and __arm_smmu_tlb_inv_range_nosync

v6 -> v7
- check the uapi version

v3 -> v4:
- adapt to changes in the uapi
- add support for leaf parameter
- do not use arm_smmu_tlb_inv_range_nosync or arm_smmu_tlb_inv_context
  anymore

v2 -> v3:
- replace __arm_smmu_tlb_sync by arm_smmu_cmdq_issue_sync

v1 -> v2:
- properly pass the asid
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 53 +
 1 file changed, 53 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index fdecc9f17b36..24124361dd3b 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2771,6 +2771,58 @@ static void arm_smmu_detach_pasid_table(struct 
iommu_domain *domain)
mutex_unlock(&smmu_domain->init_mutex);
 }
 
+static int
+arm_smmu_cache_invalidate(struct iommu_domain *domain, struct device *dev,
+ struct iommu_cache_invalidate_info *inv_info)
+{
+   struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+   struct arm_smmu_device *smmu = smmu_domain->smmu;
+
+   if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
+   return -EINVAL;
+
+   if (!smmu)
+   return -EINVAL;
+
+   if (inv_info->version != IOMMU_CACHE_INVALIDATE_INFO_VERSION_1)
+   return -EINVAL;
+
+   if (inv_info->cache & IOMMU_CACHE_INV_TYPE_IOTLB) {
+   if (inv_info->granularity == IOMMU_INV_GRANU_PASID) {
+   struct iommu_inv_pasid_info *info =
+   &inv_info->granu.pasid_info;
+
+   if (!(info->flags & IOMMU_INV_PASID_FLAGS_ARCHID) ||
+(info->flags & IOMMU_INV_PASID_FLAGS_PASID))
+   return -EINVAL;
+
+   __arm_smmu_tlb_inv_context(smmu_domain, info->archid);
+
+   } else if (inv_info->granularity == IOMMU_INV_GRANU_ADDR) {
+   struct iommu_inv_addr_info *info = 
&inv_info->granu.addr_info;
+   size_t size = info->nb_granules * info->granule_size;
+   bool leaf = info->flags & IOMMU_INV_ADDR_FLAGS_LEAF;
+
+   if (!(info->flags & IOMMU_INV_ADDR_FLAGS_ARCHID) ||
+(info->flags & IOMMU_INV_ADDR_FLAGS_PASID))
+   return -EINVAL;
+
+   __arm_smmu_tlb_inv_range(info->addr, size,
+info->granule_size, leaf,
+ smmu_domain, info->archid);
+
+   arm_smmu_cmdq_issue_sync(smmu);
+   } else {
+   return -EINVAL;
+   }
+   }
+   if (inv_info->cache & IOMMU_CACHE_INV_TYPE_PASID ||
+   inv_info->cache & IOMMU_CACHE_INV_TYPE_DEV_IOTLB) {
+   return -ENOENT;
+   }
+   return 0;
+}
+
 static bool arm_smmu_dev_has_feature(struct device *dev,
 enum iommu_dev_features feat)
 {
@@ -2854,6 +2906,7 @@ static struct iommu_ops arm_smmu_ops = {
.put_resv_regions   = generic_iommu_put_resv_regions,
.attach_pasid_table = arm_smmu_attach_pasid_table,
.detach_pasid_table = arm_smmu_detach_pasid_table,
+   .cache_invalidate   = arm_smmu_cache_invalidate,
.dev_has_feat   = arm_smmu_dev_has_feature,
.dev_feat_enabled   = arm_smmu_dev_feature_enabled,
.dev_enable_feat= arm_smmu_dev_enable_feature,
-- 
2.21.3



[PATCH v13 07/15] iommu/smmuv3: Allow stage 1 invalidation with unmanaged ASIDs

2020-11-18 Thread Eric Auger
With nested stage support, we will soon need to invalidate
S1 contexts and ranges tagged with an unmanaged ASID, the
latter being managed by the guest. So let's introduce two helpers
that allow invalidation with externally managed ASIDs.
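
For clarity, the calling convention the new helpers use (sketch;
guest_asid is a placeholder for an ASID owned by the guest):

	__arm_smmu_tlb_inv_context(smmu_domain, -1);		/* host-managed, as today */
	__arm_smmu_tlb_inv_context(smmu_domain, guest_asid);	/* guest stage 1, external ASID */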

Signed-off-by: Eric Auger 
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 35 +
 1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 805acdc18a3a..fdecc9f17b36 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1697,9 +1697,9 @@ static int arm_smmu_atc_inv_domain(struct arm_smmu_domain 
*smmu_domain,
 }
 
 /* IO_PGTABLE API */
-static void arm_smmu_tlb_inv_context(void *cookie)
+static void __arm_smmu_tlb_inv_context(struct arm_smmu_domain *smmu_domain,
+  int ext_asid)
 {
-   struct arm_smmu_domain *smmu_domain = cookie;
struct arm_smmu_device *smmu = smmu_domain->smmu;
struct arm_smmu_cmdq_ent cmd;
 
@@ -1710,7 +1710,11 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 * insertion to guarantee those are observed before the TLBI. Do be
 * careful, 007.
 */
-   if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+   if (ext_asid >= 0) { /* guest stage 1 invalidation */
+   cmd.opcode  = CMDQ_OP_TLBI_NH_ASID;
+   cmd.tlbi.asid   = ext_asid;
+   cmd.tlbi.vmid   = smmu_domain->s2_cfg.vmid;
+   } else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
arm_smmu_tlb_inv_asid(smmu, smmu_domain->s1_cfg.cd.asid);
} else {
cmd.opcode  = CMDQ_OP_TLBI_S12_VMALL;
@@ -1721,9 +1725,17 @@ static void arm_smmu_tlb_inv_context(void *cookie)
arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0);
 }
 
-static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
+static void arm_smmu_tlb_inv_context(void *cookie)
+{
+   struct arm_smmu_domain *smmu_domain = cookie;
+
+   __arm_smmu_tlb_inv_context(smmu_domain, -1);
+}
+
+static void __arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
   size_t granule, bool leaf,
-  struct arm_smmu_domain *smmu_domain)
+  struct arm_smmu_domain *smmu_domain,
+  int ext_asid)
 {
struct arm_smmu_device *smmu = smmu_domain->smmu;
unsigned long start = iova, end = iova + size, num_pages = 0, tg = 0;
@@ -1738,7 +1750,11 @@ static void arm_smmu_tlb_inv_range(unsigned long iova, 
size_t size,
if (!size)
return;
 
-   if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
+   if (ext_asid >= 0) {  /* guest stage 1 invalidation */
+   cmd.opcode  = CMDQ_OP_TLBI_NH_VA;
+   cmd.tlbi.asid   = ext_asid;
+   cmd.tlbi.vmid   = smmu_domain->s2_cfg.vmid;
+   } else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
cmd.opcode  = CMDQ_OP_TLBI_NH_VA;
cmd.tlbi.asid   = smmu_domain->s1_cfg.cd.asid;
} else {
@@ -1798,6 +1814,13 @@ static void arm_smmu_tlb_inv_range(unsigned long iova, 
size_t size,
arm_smmu_atc_inv_domain(smmu_domain, 0, start, size);
 }
 
+static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
+  size_t granule, bool leaf,
+  struct arm_smmu_domain *smmu_domain)
+{
+   __arm_smmu_tlb_inv_range(iova, size, granule, leaf, smmu_domain, -1);
+}
+
 static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather,
 unsigned long iova, size_t granule,
 void *cookie)
-- 
2.21.3



[PATCH v13 06/15] iommu/smmuv3: Implement attach/detach_pasid_table

2020-11-18 Thread Eric Auger
On attach_pasid_table() we program the STE S1-related info set
by the guest into the actual physical STEs. At a minimum
we need to program the context descriptor GPA and compute
whether stage 1 is translated/bypassed or aborted.

Signed-off-by: Eric Auger 

---
v7 -> v8:
- remove smmu->features check, now done on domain finalize

v6 -> v7:
- check versions and comment the fact we don't need to take
  into account s1dss and s1fmt
v3 -> v4:
- adapt to changes in iommu_pasid_table_config
- different programming convention at s1_cfg/s2_cfg/ste.abort

v2 -> v3:
- callback now is named set_pasid_table and struct fields
  are laid out differently.

v1 -> v2:
- invalidate the STE before changing them
- hold init_mutex
- handle new fields
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 89 +
 1 file changed, 89 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 412ea1bafa50..805acdc18a3a 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2661,6 +2661,93 @@ static void arm_smmu_get_resv_regions(struct device *dev,
iommu_dma_get_resv_regions(dev, head);
 }
 
+static int arm_smmu_attach_pasid_table(struct iommu_domain *domain,
+  struct iommu_pasid_table_config *cfg)
+{
+   struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+   struct arm_smmu_master *master;
+   struct arm_smmu_device *smmu;
+   unsigned long flags;
+   int ret = -EINVAL;
+
+   if (cfg->format != IOMMU_PASID_FORMAT_SMMUV3)
+   return -EINVAL;
+
+   if (cfg->version != PASID_TABLE_CFG_VERSION_1 ||
+   cfg->vendor_data.smmuv3.version != PASID_TABLE_SMMUV3_CFG_VERSION_1)
+   return -EINVAL;
+
+   mutex_lock(&smmu_domain->init_mutex);
+
+   smmu = smmu_domain->smmu;
+
+   if (!smmu)
+   goto out;
+
+   if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
+   goto out;
+
+   switch (cfg->config) {
+   case IOMMU_PASID_CONFIG_ABORT:
+   smmu_domain->s1_cfg.set = false;
+   smmu_domain->abort = true;
+   break;
+   case IOMMU_PASID_CONFIG_BYPASS:
+   smmu_domain->s1_cfg.set = false;
+   smmu_domain->abort = false;
+   break;
+   case IOMMU_PASID_CONFIG_TRANSLATE:
+   /* we do not support S1 <-> S1 transitions */
+   if (smmu_domain->s1_cfg.set)
+   goto out;
+
+   /*
+* we currently support a single CD so s1fmt and s1dss
+* fields are also ignored
+*/
+   if (cfg->pasid_bits)
+   goto out;
+
+   smmu_domain->s1_cfg.cdcfg.cdtab_dma = cfg->base_ptr;
+   smmu_domain->s1_cfg.set = true;
+   smmu_domain->abort = false;
+   break;
+   default:
+   goto out;
+   }
+   spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+   list_for_each_entry(master, &smmu_domain->devices, domain_head)
+   arm_smmu_install_ste_for_dev(master);
+   spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+   ret = 0;
+out:
+   mutex_unlock(&smmu_domain->init_mutex);
+   return ret;
+}
+
+static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
+{
+   struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+   struct arm_smmu_master *master;
+   unsigned long flags;
+
+   mutex_lock(&smmu_domain->init_mutex);
+
+   if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
+   goto unlock;
+
+   smmu_domain->s1_cfg.set = false;
+   smmu_domain->abort = true;
+
+   spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+   list_for_each_entry(master, &smmu_domain->devices, domain_head)
+   arm_smmu_install_ste_for_dev(master);
+   spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+
+unlock:
+   mutex_unlock(&smmu_domain->init_mutex);
+}
+
 static bool arm_smmu_dev_has_feature(struct device *dev,
 enum iommu_dev_features feat)
 {
@@ -2742,6 +2829,8 @@ static struct iommu_ops arm_smmu_ops = {
.of_xlate   = arm_smmu_of_xlate,
.get_resv_regions   = arm_smmu_get_resv_regions,
.put_resv_regions   = generic_iommu_put_resv_regions,
+   .attach_pasid_table = arm_smmu_attach_pasid_table,
+   .detach_pasid_table = arm_smmu_detach_pasid_table,
.dev_has_feat   = arm_smmu_dev_has_feature,
.dev_feat_enabled   = arm_smmu_dev_feature_enabled,
.dev_enable_feat= arm_smmu_dev_enable_feature,
-- 
2.21.3


[PATCH v13 05/15] iommu/smmuv3: Get prepared for nested stage support

2020-11-18 Thread Eric Auger
When nested stage translation is set up, both s1_cfg and
s2_cfg are set.

We introduce a new smmu domain abort field that will be set
when the guest stage 1 configuration is passed.

arm_smmu_write_strtab_ent() is modified to write both stage
fields in the STE and deal with the abort field.

In nested mode, only stage 2 is "finalized" as the host does
not own/configure the stage 1 context descriptor; the guest does.

Signed-off-by: Eric Auger 

---
v10 -> v11:
- Fix an issue reported by Shameer when switching from with vSMMU
  to without vSMMU. Although the spec does not seem to mention it, it
  seems to be needed to reset the 2 high 64b when switching from
  S1+S2 cfg to S1 only. Especially dst[3] needs to be reset (S2TTB).
  On some implementations, if the S2TTB is not reset, this causes
  a C_BAD_STE error
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 64 +
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  2 +
 2 files changed, 56 insertions(+), 10 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 18ac5af1b284..412ea1bafa50 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1181,8 +1181,10 @@ static void arm_smmu_write_strtab_ent(struct 
arm_smmu_master *master, u32 sid,
 * three cases at the moment:
 *
 * 1. Invalid (all zero) -> bypass/fault (init)
-* 2. Bypass/fault -> translation/bypass (attach)
-* 3. Translation/bypass -> bypass/fault (detach)
+* 2. Bypass/fault -> single stage translation/bypass (attach)
+* 3. Single or nested stage Translation/bypass -> bypass/fault (detach)
+* 4. S2 -> S1 + S2 (attach_pasid_table)
+* 5. S1 + S2 -> S2 (detach_pasid_table)
 *
 * Given that we can't update the STE atomically and the SMMU
 * doesn't read the thing in a defined order, that leaves us
@@ -1193,7 +1195,8 @@ static void arm_smmu_write_strtab_ent(struct 
arm_smmu_master *master, u32 sid,
 * 3. Update Config, sync
 */
u64 val = le64_to_cpu(dst[0]);
-   bool ste_live = false;
+   bool s1_live = false, s2_live = false, ste_live;
+   bool abort, nested = false, translate = false;
struct arm_smmu_device *smmu = NULL;
struct arm_smmu_s1_cfg *s1_cfg;
struct arm_smmu_s2_cfg *s2_cfg;
@@ -1233,6 +1236,8 @@ static void arm_smmu_write_strtab_ent(struct 
arm_smmu_master *master, u32 sid,
default:
break;
}
+   nested = s1_cfg->set && s2_cfg->set;
+   translate = s1_cfg->set || s2_cfg->set;
}
 
if (val & STRTAB_STE_0_V) {
@@ -1240,23 +1245,36 @@ static void arm_smmu_write_strtab_ent(struct 
arm_smmu_master *master, u32 sid,
case STRTAB_STE_0_CFG_BYPASS:
break;
case STRTAB_STE_0_CFG_S1_TRANS:
+   s1_live = true;
+   break;
case STRTAB_STE_0_CFG_S2_TRANS:
-   ste_live = true;
+   s2_live = true;
+   break;
+   case STRTAB_STE_0_CFG_NESTED:
+   s1_live = true;
+   s2_live = true;
break;
case STRTAB_STE_0_CFG_ABORT:
-   BUG_ON(!disable_bypass);
break;
default:
BUG(); /* STE corruption */
}
}
 
+   ste_live = s1_live || s2_live;
+
/* Nuke the existing STE_0 value, as we're going to rewrite it */
val = STRTAB_STE_0_V;
 
/* Bypass/fault */
-   if (!smmu_domain || !(s1_cfg->set || s2_cfg->set)) {
-   if (!smmu_domain && disable_bypass)
+
+   if (!smmu_domain)
+   abort = disable_bypass;
+   else
+   abort = smmu_domain->abort;
+
+   if (abort || !translate) {
+   if (abort)
val |= FIELD_PREP(STRTAB_STE_0_CFG, 
STRTAB_STE_0_CFG_ABORT);
else
val |= FIELD_PREP(STRTAB_STE_0_CFG, 
STRTAB_STE_0_CFG_BYPASS);
@@ -1274,8 +1292,16 @@ static void arm_smmu_write_strtab_ent(struct 
arm_smmu_master *master, u32 sid,
return;
}
 
+   BUG_ON(ste_live && !nested);
+
+   if (ste_live) {
+   /* First invalidate the live STE */
+   dst[0] = cpu_to_le64(STRTAB_STE_0_CFG_ABORT);
+   arm_smmu_sync_ste_for_sid(smmu, sid);
+   }
+
if (s1_cfg->set) {
-   BUG_ON(ste_live);
+   BUG_ON(s1_live);
dst[1] = cpu_to_le64(
 FIELD_PREP(STRTAB_STE_1_S1DSS, 
STRTAB_STE_1_S1DSS_SSID0) |
 FIELD_PREP(STRTAB_STE_1_S1CIR, 
STRTAB_STE_1_S1C_CACHE_WBRA) |
@@ -1294,7 +1320,14 @@ static void arm_smmu_write_st

[PATCH v13 04/15] iommu/smmuv3: Allow s1 and s2 configs to coexist

2020-11-18 Thread Eric Auger
In true nested mode, both s1_cfg and s2_cfg will coexist.
Let's remove the union and add a "set" field in each
config structure telling whether the config is set and needs
to be applied when writing the STE. In legacy nested mode,
only the 2nd stage is used. In true nested mode, the "set" field
will be set when the guest passes the pasid table.

Signed-off-by: Eric Auger 

---
v12 -> v13:
- does not dynamically allocate s1_cfg and s2_cfg anymore. Add
  the set field
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 43 +
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  8 ++--
 2 files changed, 31 insertions(+), 20 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 1e4acc7f3d3c..18ac5af1b284 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1195,8 +1195,8 @@ static void arm_smmu_write_strtab_ent(struct 
arm_smmu_master *master, u32 sid,
u64 val = le64_to_cpu(dst[0]);
bool ste_live = false;
struct arm_smmu_device *smmu = NULL;
-   struct arm_smmu_s1_cfg *s1_cfg = NULL;
-   struct arm_smmu_s2_cfg *s2_cfg = NULL;
+   struct arm_smmu_s1_cfg *s1_cfg;
+   struct arm_smmu_s2_cfg *s2_cfg;
struct arm_smmu_domain *smmu_domain = NULL;
struct arm_smmu_cmdq_ent prefetch_cmd = {
.opcode = CMDQ_OP_PREFETCH_CFG,
@@ -1211,13 +1211,24 @@ static void arm_smmu_write_strtab_ent(struct 
arm_smmu_master *master, u32 sid,
}
 
if (smmu_domain) {
+   s1_cfg = &smmu_domain->s1_cfg;
+   s2_cfg = &smmu_domain->s2_cfg;
+
switch (smmu_domain->stage) {
case ARM_SMMU_DOMAIN_S1:
-   s1_cfg = &smmu_domain->s1_cfg;
+   s1_cfg->set = true;
+   s2_cfg->set = false;
break;
case ARM_SMMU_DOMAIN_S2:
+   s1_cfg->set = false;
+   s2_cfg->set = true;
+   break;
case ARM_SMMU_DOMAIN_NESTED:
-   s2_cfg = &smmu_domain->s2_cfg;
+   /*
+* Actual usage of stage 1 depends on nested mode:
+* legacy (2d stage only) or true nested mode
+*/
+   s2_cfg->set = true;
break;
default:
break;
@@ -1244,7 +1255,7 @@ static void arm_smmu_write_strtab_ent(struct 
arm_smmu_master *master, u32 sid,
val = STRTAB_STE_0_V;
 
/* Bypass/fault */
-   if (!smmu_domain || !(s1_cfg || s2_cfg)) {
+   if (!smmu_domain || !(s1_cfg->set || s2_cfg->set)) {
if (!smmu_domain && disable_bypass)
val |= FIELD_PREP(STRTAB_STE_0_CFG, 
STRTAB_STE_0_CFG_ABORT);
else
@@ -1263,7 +1274,7 @@ static void arm_smmu_write_strtab_ent(struct 
arm_smmu_master *master, u32 sid,
return;
}
 
-   if (s1_cfg) {
+   if (s1_cfg->set) {
BUG_ON(ste_live);
dst[1] = cpu_to_le64(
 FIELD_PREP(STRTAB_STE_1_S1DSS, 
STRTAB_STE_1_S1DSS_SSID0) |
@@ -1282,7 +1293,7 @@ static void arm_smmu_write_strtab_ent(struct 
arm_smmu_master *master, u32 sid,
FIELD_PREP(STRTAB_STE_0_S1FMT, s1_cfg->s1fmt);
}
 
-   if (s2_cfg) {
+   if (s2_cfg->set) {
BUG_ON(ste_live);
dst[2] = cpu_to_le64(
 FIELD_PREP(STRTAB_STE_2_S2VMID, s2_cfg->vmid) |
@@ -1846,24 +1857,24 @@ static void arm_smmu_domain_free(struct iommu_domain 
*domain)
 {
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct arm_smmu_device *smmu = smmu_domain->smmu;
+   struct arm_smmu_s1_cfg *s1_cfg = &smmu_domain->s1_cfg;
+   struct arm_smmu_s2_cfg *s2_cfg = &smmu_domain->s2_cfg;
 
iommu_put_dma_cookie(domain);
free_io_pgtable_ops(smmu_domain->pgtbl_ops);
 
/* Free the CD and ASID, if we allocated them */
-   if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
-   struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
-
+   if (s1_cfg->set) {
/* Prevent SVA from touching the CD while we're freeing it */
mutex_lock(&arm_smmu_asid_lock);
-   if (cfg->cdcfg.cdtab)
+   if (s1_cfg->cdcfg.cdtab)
arm_smmu_free_cd_tables(smmu_domain);
-   arm_smmu_free_asid(&cfg->cd);
+   arm_smmu_free_asid(&s1_cfg->cd);
mutex_unlock(&arm_smmu_asid_lock);
-   } else {
-   struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
-   if (cfg->vmid)
-   arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
+   }
+   if (s2_cfg->set) {
+  

[PATCH v13 03/15] iommu/arm-smmu-v3: Maintain a SID->device structure

2020-11-18 Thread Eric Auger
From: Jean-Philippe Brucker 

When handling faults from the event or PRI queue, we need to find the
struct device associated with a SID. Add an rb_tree to keep track of SIDs.

Signed-off-by: Jean-Philippe Brucker 
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 161 
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  13 +-
 2 files changed, 144 insertions(+), 30 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index e634bbe60573..1e4acc7f3d3c 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -911,8 +911,8 @@ static void arm_smmu_sync_cd(struct arm_smmu_domain 
*smmu_domain,
 
spin_lock_irqsave(&smmu_domain->devices_lock, flags);
list_for_each_entry(master, &smmu_domain->devices, domain_head) {
-   for (i = 0; i < master->num_sids; i++) {
-   cmd.cfgi.sid = master->sids[i];
+   for (i = 0; i < master->num_streams; i++) {
+   cmd.cfgi.sid = master->streams[i].id;
arm_smmu_cmdq_batch_add(smmu, &cmds, &cmd);
}
}
@@ -1350,6 +1350,32 @@ static int arm_smmu_init_l2_strtab(struct 
arm_smmu_device *smmu, u32 sid)
return 0;
 }
 
+__maybe_unused
+static struct arm_smmu_master *
+arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid)
+{
+   struct rb_node *node;
+   struct arm_smmu_stream *stream;
+   struct arm_smmu_master *master = NULL;
+
+   mutex_lock(&smmu->streams_mutex);
+   node = smmu->streams.rb_node;
+   while (node) {
+   stream = rb_entry(node, struct arm_smmu_stream, node);
+   if (stream->id < sid) {
+   node = node->rb_right;
+   } else if (stream->id > sid) {
+   node = node->rb_left;
+   } else {
+   master = stream->master;
+   break;
+   }
+   }
+   mutex_unlock(&smmu->streams_mutex);
+
+   return master;
+}
+
 /* IRQ and event handlers */
 static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
 {
@@ -1569,8 +1595,8 @@ static int arm_smmu_atc_inv_master(struct arm_smmu_master 
*master)
 
arm_smmu_atc_inv_to_cmd(0, 0, 0, &cmd);
 
-   for (i = 0; i < master->num_sids; i++) {
-   cmd.atc.sid = master->sids[i];
+   for (i = 0; i < master->num_streams; i++) {
+   cmd.atc.sid = master->streams[i].id;
arm_smmu_cmdq_issue_cmd(master->smmu, &cmd);
}
 
@@ -1613,8 +1639,8 @@ static int arm_smmu_atc_inv_domain(struct arm_smmu_domain 
*smmu_domain,
if (!master->ats_enabled)
continue;
 
-   for (i = 0; i < master->num_sids; i++) {
-   cmd.atc.sid = master->sids[i];
+   for (i = 0; i < master->num_streams; i++) {
+   cmd.atc.sid = master->streams[i].id;
arm_smmu_cmdq_batch_add(smmu_domain->smmu, &cmds, &cmd);
}
}
@@ -2027,13 +2053,13 @@ static void arm_smmu_install_ste_for_dev(struct 
arm_smmu_master *master)
int i, j;
struct arm_smmu_device *smmu = master->smmu;
 
-   for (i = 0; i < master->num_sids; ++i) {
-   u32 sid = master->sids[i];
+   for (i = 0; i < master->num_streams; ++i) {
+   u32 sid = master->streams[i].id;
__le64 *step = arm_smmu_get_step_for_sid(smmu, sid);
 
/* Bridged PCI devices may end up with duplicated IDs */
for (j = 0; j < i; j++)
-   if (master->sids[j] == sid)
+   if (master->streams[j].id == sid)
break;
if (j < i)
continue;
@@ -2306,11 +2332,101 @@ static bool arm_smmu_sid_in_range(struct 
arm_smmu_device *smmu, u32 sid)
return sid < limit;
 }
 
+static int arm_smmu_insert_master(struct arm_smmu_device *smmu,
+ struct arm_smmu_master *master)
+{
+   int i;
+   int ret = 0;
+   struct arm_smmu_stream *new_stream, *cur_stream;
+   struct rb_node **new_node, *parent_node = NULL;
+   struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(master->dev);
+
+   master->streams = kcalloc(fwspec->num_ids,
+ sizeof(struct arm_smmu_stream), GFP_KERNEL);
+   if (!master->streams)
+   return -ENOMEM;
+   master->num_streams = fwspec->num_ids;
+
+   mutex_lock(&smmu->streams_mutex);
+   for (i = 0; i < fwspec->num_ids && !ret; i++) {
+   u32 sid = fwspec->ids[i];
+
+   new_stream = &master->streams[i];
+   new_stream->id = sid;
+   new_stream->master = master;
+
+   /*
+* Check the SIDs are in range of the SMMU and our stream table

[PATCH v13 02/15] iommu: Introduce bind/unbind_guest_msi

2020-11-18 Thread Eric Auger
On ARM, MSIs are translated by the SMMU. An IOVA is allocated
for each MSI doorbell. If both the host and the guest are exposed
with SMMUs, we end up with 2 different IOVAs allocated by each. The
guest allocates an IOVA (gIOVA) to map onto the guest MSI
doorbell (gDB). The host allocates another IOVA (hIOVA) to map
onto the physical doorbell (hDB).

So we end up with 2 untied mappings:
        S1             S2
gIOVA -> gDB
               hIOVA -> hDB

Currently the PCI device is programmed by the host with hIOVA
as MSI doorbell. So this does not work.

This patch introduces an API to pass gIOVA/gDB to the host so
that gIOVA can be reused by the host instead of re-allocating
a new IOVA. So the goal is to create the following nested mapping:

        S1            S2
gIOVA -> gDB      ->      hDB

and program the PCI device with gIOVA MSI doorbell.

In case we have several devices attached to this nested domain
(devices belonging to the same group), they cannot be isolated
on the guest side either. So they should also end up in the same domain
on the guest side. We will enforce that all the devices attached to
the host iommu domain use the same physical doorbell and similarly
a single virtual doorbell mapping gets registered (1 single
virtual doorbell is used on guest as well).
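
For illustration, a hypothetical host-side caller (e.g. the layer trapping
the guest MSI binding); only the two functions and their signatures come
from this patch, giova/gdb_gpa/granule are placeholders:

	ret = iommu_bind_guest_msi(domain, giova, gdb_gpa, granule);
	if (ret)
		return ret;

	/* the assigned device can now be programmed with giova as MSI address */

	/* and later, on tear down: */
	iommu_unbind_guest_msi(domain, giova);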

Signed-off-by: Eric Auger 

---
v7 -> v8:
- dummy iommu_unbind_guest_msi turned into a void function

v6 -> v7:
- remove the device handle parameter.
- Add comments saying there can only be a single MSI binding
  registered per iommu_domain
v5 -> v6:
-fix compile issue when IOMMU_API is not set

v3 -> v4:
- add unbind

v2 -> v3:
- add a struct device handle
---
 drivers/iommu/iommu.c | 37 +
 include/linux/iommu.h | 20 
 2 files changed, 57 insertions(+)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 978fe34378fb..0b1f458b444f 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2252,6 +2252,43 @@ static void __iommu_detach_device(struct iommu_domain 
*domain,
trace_detach_device_from_domain(dev);
 }
 
+/**
+ * iommu_bind_guest_msi - Passes the stage1 GIOVA/GPA mapping of a
+ * virtual doorbell
+ *
+ * @domain: iommu domain the stage 1 mapping will be attached to
+ * @iova: iova allocated by the guest
+ * @gpa: guest physical address of the virtual doorbell
+ * @size: granule size used for the mapping
+ *
+ * The associated IOVA can be reused by the host to create a nested
+ * stage2 binding mapping translating into the physical doorbell used
+ * by the devices attached to the domain.
+ *
+ * All devices within the domain must share the same physical doorbell.
+ * A single MSI GIOVA/GPA mapping can be attached to an iommu_domain.
+ */
+
+int iommu_bind_guest_msi(struct iommu_domain *domain,
+dma_addr_t giova, phys_addr_t gpa, size_t size)
+{
+   if (unlikely(!domain->ops->bind_guest_msi))
+   return -ENODEV;
+
+   return domain->ops->bind_guest_msi(domain, giova, gpa, size);
+}
+EXPORT_SYMBOL_GPL(iommu_bind_guest_msi);
+
+void iommu_unbind_guest_msi(struct iommu_domain *domain,
+   dma_addr_t iova)
+{
+   if (unlikely(!domain->ops->unbind_guest_msi))
+   return;
+
+   domain->ops->unbind_guest_msi(domain, iova);
+}
+EXPORT_SYMBOL_GPL(iommu_unbind_guest_msi);
+
 void iommu_detach_device(struct iommu_domain *domain, struct device *dev)
 {
struct iommu_group *group;
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 464fcbecf841..35819bff03bc 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -225,6 +225,8 @@ struct iommu_iotlb_gather {
  * @sva_unbind_gpasid: unbind guest pasid and mm
  * @attach_pasid_table: attach a pasid table
  * @detach_pasid_table: detach the pasid table
+ * @bind_guest_msi: provides a stage1 giova/gpa MSI doorbell mapping
+ * @unbind_guest_msi: withdraw a stage1 giova/gpa MSI doorbell mapping
  * @def_domain_type: device default domain type, return value:
  * - IOMMU_DOMAIN_IDENTITY: must use an identity domain
  * - IOMMU_DOMAIN_DMA: must use a dma domain
@@ -305,6 +307,10 @@ struct iommu_ops {
 
int (*def_domain_type)(struct device *dev);
 
+   int (*bind_guest_msi)(struct iommu_domain *domain,
+ dma_addr_t giova, phys_addr_t gpa, size_t size);
+   void (*unbind_guest_msi)(struct iommu_domain *domain, dma_addr_t giova);
+
unsigned long pgsize_bitmap;
struct module *owner;
 };
@@ -444,6 +450,10 @@ extern int iommu_attach_pasid_table(struct iommu_domain 
*domain,
 extern int iommu_uapi_attach_pasid_table(struct iommu_domain *domain,
 void __user *udata);
 extern void iommu_detach_pasid_table(struct iommu_domain *domain);
+extern int iommu_bind_guest_msi(struct iommu_domain *domain,
+   dma_addr_t giova, phys_addr_t gpa, size_t size);
+extern v

[PATCH v13 01/15] iommu: Introduce attach/detach_pasid_table API

2020-11-18 Thread Eric Auger
In the virtualization use case, when a guest is assigned
a host PCI device, protected by a virtual IOMMU on the guest,
the physical IOMMU must be programmed to be consistent with
the guest mappings. If the physical IOMMU supports two
translation stages, it makes sense to program guest mappings
onto the first stage/level (ARM/Intel terminology) while the host
owns the stage/level 2.

In that case, guest configuration settings must be trapped and
passed to the physical IOMMU driver.

This patch adds a new API to the iommu subsystem that allows
setting/unsetting the PASID table information.

A generic iommu_pasid_table_config struct is introduced in
a new iommu.h uapi header. This is going to be used by the VFIO
user API.
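
For illustration, a hypothetical in-kernel usage sketch for the SMMUv3
format (the struct, fields and constants are from this series; the values
such as cd_table_gpa are placeholders):

	struct iommu_pasid_table_config pasid_cfg = {
		.argsz      = sizeof(pasid_cfg),
		.version    = PASID_TABLE_CFG_VERSION_1,
		.format     = IOMMU_PASID_FORMAT_SMMUV3,
		.base_ptr   = cd_table_gpa,	/* GPA of the guest CD table */
		.pasid_bits = 0,		/* single CD to start with */
		.config     = IOMMU_PASID_CONFIG_TRANSLATE,
	};

	pasid_cfg.vendor_data.smmuv3.version = PASID_TABLE_SMMUV3_CFG_VERSION_1;

	ret = iommu_attach_pasid_table(domain, &pasid_cfg);

	/* and later, e.g. on guest reset: */
	iommu_detach_pasid_table(domain);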

Signed-off-by: Jean-Philippe Brucker 
Signed-off-by: Liu, Yi L 
Signed-off-by: Ashok Raj 
Signed-off-by: Jacob Pan 
Signed-off-by: Eric Auger 

---

v12 -> v13:
- Fix config check

v11 -> v12:
- add argsz, name the union
---
 drivers/iommu/iommu.c  | 68 ++
 include/linux/iommu.h  | 21 
 include/uapi/linux/iommu.h | 54 ++
 3 files changed, 143 insertions(+)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index b53446bb8c6b..978fe34378fb 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2171,6 +2171,74 @@ int iommu_uapi_sva_unbind_gpasid(struct iommu_domain 
*domain, struct device *dev
 }
 EXPORT_SYMBOL_GPL(iommu_uapi_sva_unbind_gpasid);
 
+int iommu_attach_pasid_table(struct iommu_domain *domain,
+struct iommu_pasid_table_config *cfg)
+{
+   if (unlikely(!domain->ops->attach_pasid_table))
+   return -ENODEV;
+
+   return domain->ops->attach_pasid_table(domain, cfg);
+}
+
+int iommu_uapi_attach_pasid_table(struct iommu_domain *domain,
+ void __user *uinfo)
+{
+   struct iommu_pasid_table_config pasid_table_data = { 0 };
+   u32 minsz;
+
+   if (unlikely(!domain->ops->attach_pasid_table))
+   return -ENODEV;
+
+   /*
+* No new spaces can be added before the variable sized union, the
+* minimum size is the offset to the union.
+*/
+   minsz = offsetof(struct iommu_pasid_table_config, vendor_data);
+
+   /* Copy minsz from user to get flags and argsz */
+   if (copy_from_user(&pasid_table_data, uinfo, minsz))
+   return -EFAULT;
+
+   /* Fields before the variable size union are mandatory */
+   if (pasid_table_data.argsz < minsz)
+   return -EINVAL;
+
+   /* PASID and address granu require additional info beyond minsz */
+   if (pasid_table_data.version != PASID_TABLE_CFG_VERSION_1)
+   return -EINVAL;
+   if (pasid_table_data.format == IOMMU_PASID_FORMAT_SMMUV3 &&
+   pasid_table_data.argsz <
+   offsetofend(struct iommu_pasid_table_config, 
vendor_data.smmuv3))
+   return -EINVAL;
+
+   /*
+* User might be using a newer UAPI header which has a larger data
+* size, we shall support the existing flags within the current
+* size. Copy the remaining user data _after_ minsz but not more
+* than the current kernel supported size.
+*/
+   if (copy_from_user((void *)&pasid_table_data + minsz, uinfo + minsz,
+  min_t(u32, pasid_table_data.argsz, 
sizeof(pasid_table_data)) - minsz))
+   return -EFAULT;
+
+   /* Now the argsz is validated, check the content */
+   if (pasid_table_data.config < IOMMU_PASID_CONFIG_TRANSLATE ||
+   pasid_table_data.config > IOMMU_PASID_CONFIG_ABORT)
+   return -EINVAL;
+
+   return domain->ops->attach_pasid_table(domain, &pasid_table_data);
+}
+EXPORT_SYMBOL_GPL(iommu_uapi_attach_pasid_table);
+
+void iommu_detach_pasid_table(struct iommu_domain *domain)
+{
+   if (unlikely(!domain->ops->detach_pasid_table))
+   return;
+
+   domain->ops->detach_pasid_table(domain);
+}
+EXPORT_SYMBOL_GPL(iommu_detach_pasid_table);
+
 static void __iommu_detach_device(struct iommu_domain *domain,
  struct device *dev)
 {
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index b95a6f8db6ff..464fcbecf841 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -223,6 +223,8 @@ struct iommu_iotlb_gather {
  * @cache_invalidate: invalidate translation caches
  * @sva_bind_gpasid: bind guest pasid and mm
  * @sva_unbind_gpasid: unbind guest pasid and mm
+ * @attach_pasid_table: attach a pasid table
+ * @detach_pasid_table: detach the pasid table
  * @def_domain_type: device default domain type, return value:
  * - IOMMU_DOMAIN_IDENTITY: must use an identity domain
  * - IOMMU_DOMAIN_DMA: must use a dma domain
@@ -287,6 +289,9 @@ struct iommu_ops {
  void *drvdata);
void (*sva_unbind)(struct iommu_sva *ha

[PATCH v13 00/15] SMMUv3 Nested Stage Setup (IOMMU part)

2020-11-18 Thread Eric Auger
This series brings the IOMMU part of HW nested paging support
in the SMMUv3. The VFIO part is submitted separately.

The IOMMU API is extended to support 2 new API functionalities:
1) pass the guest stage 1 configuration
2) pass stage 1 MSI bindings

Then those capabilities get implemented in the SMMUv3 driver.

The virtualizer passes information through the VFIO user API,
which cascades it to the iommu subsystem. This allows the guest
to own stage 1 tables and context descriptors (so-called PASID
table) while the host owns stage 2 tables and main configuration
structures (STE).

Best Regards

Eric

This series can be found at:
https://github.com/eauger/linux/tree/5.10-rc4-2stage-v13
(including the VFIO part in its latest version: v11)

The series includes a patch from Jean-Philippe. It is better to
review the original patch:
[PATCH v8 2/9] iommu/arm-smmu-v3: Maintain a SID->device structure

The VFIO series is sent separately.

History:

v12 -> v13:
- fixed compilation issue with CONFIG_ARM_SMMU_V3_SVA
  reported by Shameer. This urged me to revisit patch 4,
  "iommu/smmuv3: Allow s1 and s2 configs to coexist", where
  s1_cfg and s2_cfg are not dynamically allocated anymore.
  Instead I use a new set field in existing structs
- fixed 2 others config checks
- Updated "iommu/arm-smmu-v3: Maintain a SID->device structure"
  according to the last version

v11 -> v12:
- rebase on top of v5.10-rc4

Eric Auger (14):
  iommu: Introduce attach/detach_pasid_table API
  iommu: Introduce bind/unbind_guest_msi
  iommu/smmuv3: Allow s1 and s2 configs to coexist
  iommu/smmuv3: Get prepared for nested stage support
  iommu/smmuv3: Implement attach/detach_pasid_table
  iommu/smmuv3: Allow stage 1 invalidation with unmanaged ASIDs
  iommu/smmuv3: Implement cache_invalidate
  dma-iommu: Implement NESTED_MSI cookie
  iommu/smmuv3: Nested mode single MSI doorbell per domain enforcement
  iommu/smmuv3: Enforce incompatibility between nested mode and HW MSI
regions
  iommu/smmuv3: Implement bind/unbind_guest_msi
  iommu/smmuv3: Report non recoverable faults
  iommu/smmuv3: Accept configs with more than one context descriptor
  iommu/smmuv3: Add PASID cache invalidation per PASID

Jean-Philippe Brucker (1):
  iommu/arm-smmu-v3: Maintain a SID->device structure

 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 659 ++--
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 103 ++-
 drivers/iommu/dma-iommu.c   | 142 -
 drivers/iommu/iommu.c   | 105 
 include/linux/dma-iommu.h   |  16 +
 include/linux/iommu.h   |  41 ++
 include/uapi/linux/iommu.h  |  54 ++
 7 files changed, 1042 insertions(+), 78 deletions(-)

-- 
2.21.3
