[PATCH 1/2] iommu/vt-d: Split iommu_prepare_identity_map

2015-10-01 Thread Joerg Roedel
From: Joerg Roedel 

Split out the part of the function that fetches the domain
and move the rest into a new domain_prepare_identity_map(),
so that the code can also be used when the domain is
already known.

Signed-off-by: Joerg Roedel 
---
 drivers/iommu/intel-iommu.c | 40 ++--
 1 file changed, 22 insertions(+), 18 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 2d7349a..e182d81 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -2429,17 +2429,13 @@ static int iommu_domain_identity_map(struct dmar_domain *domain,
  DMA_PTE_READ|DMA_PTE_WRITE);
 }
 
-static int iommu_prepare_identity_map(struct device *dev,
- unsigned long long start,
- unsigned long long end)
+static int domain_prepare_identity_map(struct device *dev,
+  struct dmar_domain *domain,
+  unsigned long long start,
+  unsigned long long end)
 {
-   struct dmar_domain *domain;
int ret;
 
-   domain = get_domain_for_dev(dev, DEFAULT_DOMAIN_ADDRESS_WIDTH);
-   if (!domain)
-   return -ENOMEM;
-
/* For _hardware_ passthrough, don't bother. But for software
   passthrough, we do it anyway -- it may indicate a memory
   range which is reserved in E820, so which didn't get set
@@ -2459,8 +2455,7 @@ static int iommu_prepare_identity_map(struct device *dev,
dmi_get_system_info(DMI_BIOS_VENDOR),
dmi_get_system_info(DMI_BIOS_VERSION),
 dmi_get_system_info(DMI_PRODUCT_VERSION));
-   ret = -EIO;
-   goto error;
+   return -EIO;
}
 
if (end >> agaw_to_width(domain->agaw)) {
@@ -2470,18 +2465,27 @@ static int iommu_prepare_identity_map(struct device *dev,
 dmi_get_system_info(DMI_BIOS_VENDOR),
 dmi_get_system_info(DMI_BIOS_VERSION),
 dmi_get_system_info(DMI_PRODUCT_VERSION));
-   ret = -EIO;
-   goto error;
+   return -EIO;
}
 
-   ret = iommu_domain_identity_map(domain, start, end);
-   if (ret)
-   goto error;
+   return iommu_domain_identity_map(domain, start, end);
+}
 
-   return 0;
+static int iommu_prepare_identity_map(struct device *dev,
+ unsigned long long start,
+ unsigned long long end)
+{
+   struct dmar_domain *domain;
+   int ret;
+
+   domain = get_domain_for_dev(dev, DEFAULT_DOMAIN_ADDRESS_WIDTH);
+   if (!domain)
+   return -ENOMEM;
+
+   ret = domain_prepare_identity_map(dev, domain, start, end);
+   if (ret)
+   domain_exit(domain);
 
- error:
-   domain_exit(domain);
return ret;
 }
 
-- 
1.9.1

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH 0/2] Fix remaining RMRR issues

2015-10-01 Thread Joerg Roedel
Hi,

here is a patch-set to fix a remaining RMRR issue in the
VT-d driver. The problem was that RMRR mappings are only
created for boot-time (or hotplug-time) allocated domains.

Domains allocated on demand don't get RMRR mappings, which
is bad for the kdump case, for example. With this patch-set
the on-demand domains also get RMRR mappings.

Regards,

Joerg

Joerg Roedel (2):
  iommu/vt-d: Split iommu_prepare_identity_map
  iommu/vt-d: Create RMRR mappings in newly allocated domains

 drivers/iommu/intel-iommu.c | 60 +++--
 1 file changed, 42 insertions(+), 18 deletions(-)

-- 
1.9.1



[PATCH 2/2] iommu/vt-d: Create RMRR mappings in newly allocated domains

2015-10-01 Thread Joerg Roedel
From: Joerg Roedel 

Currently the RMRR entries are created only at boot time.
This means they will vanish when the domain allocated at
boot time is destroyed.
This patch makes sure that newly allocated domains also
get RMRR mappings.

Signed-off-by: Joerg Roedel 
---
 drivers/iommu/intel-iommu.c | 20 
 1 file changed, 20 insertions(+)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index e182d81..e9ace17 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3243,7 +3243,10 @@ static struct iova *intel_alloc_iova(struct device *dev,
 
 static struct dmar_domain *__get_valid_domain_for_dev(struct device *dev)
 {
+   struct dmar_rmrr_unit *rmrr;
struct dmar_domain *domain;
+   struct device *i_dev;
+   int i, ret;
 
domain = get_domain_for_dev(dev, DEFAULT_DOMAIN_ADDRESS_WIDTH);
if (!domain) {
@@ -3252,6 +3255,23 @@ static struct dmar_domain *__get_valid_domain_for_dev(struct device *dev)
return NULL;
}
 
+   /* We have a new domain - setup possible RMRRs for the device */
+   rcu_read_lock();
+   for_each_rmrr_units(rmrr) {
+   for_each_active_dev_scope(rmrr->devices, rmrr->devices_cnt,
+ i, i_dev) {
+   if (i_dev != dev)
+   continue;
+
+   ret = domain_prepare_identity_map(dev, domain,
+ rmrr->base_address,
+ rmrr->end_address);
+   if (ret)
dev_err(dev, "Mapping reserved region failed\n");
+   }
+   }
+   rcu_read_unlock();
+
return domain;
 }
 
-- 
1.9.1



Re: [PATCH 1/2] iommu/vt-d: Split iommu_prepare_identity_map

2015-10-01 Thread kbuild test robot
Hi Joerg,

[auto build test results on v4.3-rc3 -- if it's an inappropriate base, please ignore]

config: ia64-allyesconfig (attached as .config)
reproduce:
wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
git checkout 75170658f4350889e0d0d404c5bb17179797bd47
# save the attached .config to linux build tree
make.cross ARCH=ia64 

All warnings (new ones prefixed by >>):

   drivers/iommu/intel-iommu.c: In function 'domain_prepare_identity_map':
>> drivers/iommu/intel-iommu.c:2437:6: warning: unused variable 'ret' [-Wunused-variable]
 int ret;
 ^

vim +/ret +2437 drivers/iommu/intel-iommu.c

ba395927 drivers/pci/intel-iommu.c   Keshavamurthy, Anil S 2007-10-21  2421  	/*
ba395927 drivers/pci/intel-iommu.c   Keshavamurthy, Anil S 2007-10-21  2422  	 * RMRR range might have overlap with physical memory range,
ba395927 drivers/pci/intel-iommu.c   Keshavamurthy, Anil S 2007-10-21  2423  	 * clear it first
ba395927 drivers/pci/intel-iommu.c   Keshavamurthy, Anil S 2007-10-21  2424  	 */
c5395d5c drivers/pci/intel-iommu.c   David Woodhouse       2009-06-28  2425  	dma_pte_clear_range(domain, first_vpfn, last_vpfn);
ba395927 drivers/pci/intel-iommu.c   Keshavamurthy, Anil S 2007-10-21  2426  
c5395d5c drivers/pci/intel-iommu.c   David Woodhouse       2009-06-28  2427  	return domain_pfn_mapping(domain, first_vpfn, first_vpfn,
c5395d5c drivers/pci/intel-iommu.c   David Woodhouse       2009-06-28  2428  				  last_vpfn - first_vpfn + 1,
ba395927 drivers/pci/intel-iommu.c   Keshavamurthy, Anil S 2007-10-21  2429  				  DMA_PTE_READ|DMA_PTE_WRITE);
b213203e drivers/pci/intel-iommu.c   David Woodhouse       2009-06-26  2430  }
b213203e drivers/pci/intel-iommu.c   David Woodhouse       2009-06-26  2431  
75170658 drivers/iommu/intel-iommu.c Joerg Roedel          2015-10-01  2432  static int domain_prepare_identity_map(struct device *dev,
75170658 drivers/iommu/intel-iommu.c Joerg Roedel          2015-10-01  2433  				       struct dmar_domain *domain,
b213203e drivers/pci/intel-iommu.c   David Woodhouse       2009-06-26  2434  				       unsigned long long start,
b213203e drivers/pci/intel-iommu.c   David Woodhouse       2009-06-26  2435  				       unsigned long long end)
b213203e drivers/pci/intel-iommu.c   David Woodhouse       2009-06-26  2436  {
b213203e drivers/pci/intel-iommu.c   David Woodhouse       2009-06-26 @2437  	int ret;
b213203e drivers/pci/intel-iommu.c   David Woodhouse       2009-06-26  2438  
19943b0e drivers/pci/intel-iommu.c   David Woodhouse       2009-08-04  2439  	/* For _hardware_ passthrough, don't bother. But for software
19943b0e drivers/pci/intel-iommu.c   David Woodhouse       2009-08-04  2440  	   passthrough, we do it anyway -- it may indicate a memory
19943b0e drivers/pci/intel-iommu.c   David Woodhouse       2009-08-04  2441  	   range which is reserved in E820, so which didn't get set
19943b0e drivers/pci/intel-iommu.c   David Woodhouse       2009-08-04  2442  	   up to start with in si_domain */
19943b0e drivers/pci/intel-iommu.c   David Woodhouse       2009-08-04  2443  	if (domain == si_domain && hw_pass_through) {
9f10e5bf drivers/iommu/intel-iommu.c Joerg Roedel          2015-06-12  2444  		pr_warn("Ignoring identity map for HW passthrough device %s [0x%Lx - 0x%Lx]\n",
0b9d9753 drivers/iommu/intel-iommu.c David Woodhouse       2014-03-09  2445  			dev_name(dev), start, end);

:: The code at line 2437 was first introduced by commit
:: b213203e475212a69ad6fedfb73464087e317148 intel-iommu: Create new iommu_domain_identity_map() function

:: TO: David Woodhouse 
:: CC: David Woodhouse 

---
0-DAY kernel test infrastructure            Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all   Intel Corporation



Re: [PATCH] iommu/s390: add iommu api for s390 pci devices

2015-10-01 Thread Gerald Schaefer
On Tue, 29 Sep 2015 14:40:30 +0200
Joerg Roedel  wrote:

> Hi Gerald,
> 
> thanks for your patch. It looks pretty good and addresses my previous
> review comments. I have a few questions, first one is how this
> operates with DMA-API on s390. Is there a seperate DMA-API
> implementation besides the IOMMU-API one for PCI devices?

Yes, the DMA API is already implemented in arch/s390/pci/pci_dma.c.
I thought about moving it over to the new location in drivers/iommu/,
but I don't see any benefit from it.

Also, the two APIs are quite different on s390 and must not be mixed up.
For example, we have optimizations in the DMA API to reduce TLB flushes
based on iommu bitmap wrap-around, which is not possible for the map/unmap
logic in the IOMMU API. There is also the requirement that each device has
its own DMA page table (not shared), which is important for DMA API device
recovery and map/unmap on s390.

> 
> My other question is inline:
> 
> On Thu, Aug 27, 2015 at 03:33:03PM +0200, Gerald Schaefer wrote:
> > +struct s390_domain_device {
> > +   struct list_headlist;
> > +   struct zpci_dev *zdev;
> > +};
> 
> Instead of using your own struct here, have you considered using the
> struct iommu_group instead? The struct devices contains a pointer to an
> iommu_group and the struct itself contains pointers to the domain it is
> currently bound to.

Hmm, not sure how this can replace my own struct. I need the struct to
maintain a list of all devices that share a dma page table. And the
devices need to be added and removed to/from that list in attach/detach_dev.

I also need that list during map/unmap, in order to do a TLB flush for
all affected devices, and this happens under a spin lock.

So I guess I cannot use the iommu_group->devices list, which is managed
in add/remove_device and under a mutex, if that was on your mind.

Regards,
Gerald



[PATCH v6 2/3] arm64: Add IOMMU dma_ops

2015-10-01 Thread Robin Murphy
Taking some inspiration from the arch/arm code, implement the
arch-specific side of the DMA mapping ops using the new IOMMU-DMA layer.

Since there is still work to do elsewhere to make DMA configuration happen
in a more appropriate order and properly support platform devices in the
IOMMU core, the device setup code unfortunately starts out carrying some
workarounds to ensure it works correctly in the current state of things.

Signed-off-by: Robin Murphy 
---
 arch/arm64/mm/dma-mapping.c | 435 
 1 file changed, 435 insertions(+)

diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 0bcc4bc..dd2d6e6 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -533,3 +533,438 @@ static int __init dma_debug_do_init(void)
return 0;
 }
 fs_initcall(dma_debug_do_init);
+
+
+#ifdef CONFIG_IOMMU_DMA
+#include 
+#include 
+#include 
+
+/* Thankfully, all cache ops are by VA so we can ignore phys here */
+static void flush_page(struct device *dev, const void *virt, phys_addr_t phys)
+{
+   __dma_flush_range(virt, virt + PAGE_SIZE);
+}
+
+static void *__iommu_alloc_attrs(struct device *dev, size_t size,
+dma_addr_t *handle, gfp_t gfp,
+struct dma_attrs *attrs)
+{
+   bool coherent = is_device_dma_coherent(dev);
+   int ioprot = dma_direction_to_prot(DMA_BIDIRECTIONAL, coherent);
+   void *addr;
+
+   if (WARN(!dev, "cannot create IOMMU mapping for unknown device\n"))
+   return NULL;
+   /*
+* Some drivers rely on this, and we probably don't want the
+* possibility of stale kernel data being read by devices anyway.
+*/
+   gfp |= __GFP_ZERO;
+
+   if (gfp & __GFP_WAIT) {
+   struct page **pages;
+   pgprot_t prot = __get_dma_pgprot(attrs, PAGE_KERNEL, coherent);
+
+   pages = iommu_dma_alloc(dev, size, gfp, ioprot, handle,
+   flush_page);
+   if (!pages)
+   return NULL;
+
+   addr = dma_common_pages_remap(pages, size, VM_USERMAP, prot,
+ __builtin_return_address(0));
+   if (!addr)
+   iommu_dma_free(dev, pages, size, handle);
+   } else {
+   struct page *page;
+   /*
+* In atomic context we can't remap anything, so we'll only
+* get the virtually contiguous buffer we need by way of a
+* physically contiguous allocation.
+*/
+   if (coherent) {
+   page = alloc_pages(gfp, get_order(size));
+   addr = page ? page_address(page) : NULL;
+   } else {
+   addr = __alloc_from_pool(size, &page, gfp);
+   }
+   if (!addr)
+   return NULL;
+
+   *handle = iommu_dma_map_page(dev, page, 0, size, ioprot);
+   if (iommu_dma_mapping_error(dev, *handle)) {
+   if (coherent)
+   __free_pages(page, get_order(size));
+   else
+   __free_from_pool(addr, size);
+   addr = NULL;
+   }
+   }
+   return addr;
+}
+
+static void __iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr,
+  dma_addr_t handle, struct dma_attrs *attrs)
+{
+   /*
+* @cpu_addr will be one of 3 things depending on how it was allocated:
+* - A remapped array of pages from iommu_dma_alloc(), for all
+*   non-atomic allocations.
+* - A non-cacheable alias from the atomic pool, for atomic
+*   allocations by non-coherent devices.
+* - A normal lowmem address, for atomic allocations by
+*   coherent devices.
+* Hence how dodgy the below logic looks...
+*/
+   if (__in_atomic_pool(cpu_addr, size)) {
+   iommu_dma_unmap_page(dev, handle, size, 0, NULL);
+   __free_from_pool(cpu_addr, size);
+   } else if (is_vmalloc_addr(cpu_addr)){
+   struct vm_struct *area = find_vm_area(cpu_addr);
+
+   if (WARN_ON(!area || !area->pages))
+   return;
+   iommu_dma_free(dev, area->pages, size, &handle);
+   dma_common_free_remap(cpu_addr, size, VM_USERMAP);
+   } else {
+   iommu_dma_unmap_page(dev, handle, size, 0, NULL);
+   __free_pages(virt_to_page(cpu_addr), get_order(size));
+   }
+}
+
+static int __iommu_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
+ void *cpu_addr, dma_addr_t dma_addr, size_t size,
+ struct dma_attrs *attrs)
+{
+   struct vm_struct *area;
+   int ret;
+
+

[PATCH v6 3/3] arm64: Hook up IOMMU dma_ops

2015-10-01 Thread Robin Murphy
With iommu_dma_ops in place, hook them up to the configuration code, so
IOMMU-fronted devices will get them automatically.

Acked-by: Catalin Marinas 
Signed-off-by: Robin Murphy 
---
 arch/arm64/Kconfig   |  1 +
 arch/arm64/include/asm/dma-mapping.h | 15 +++
 arch/arm64/mm/dma-mapping.c  | 22 ++
 3 files changed, 30 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7d95663..6597311 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -74,6 +74,7 @@ config ARM64
select HAVE_PERF_USER_STACK_DUMP
select HAVE_RCU_TABLE_FREE
select HAVE_SYSCALL_TRACEPOINTS
+   select IOMMU_DMA if IOMMU_SUPPORT
select IRQ_DOMAIN
select IRQ_FORCED_THREADING
select MODULES_USE_ELF_RELA
diff --git a/arch/arm64/include/asm/dma-mapping.h b/arch/arm64/include/asm/dma-mapping.h
index cfdb34b..54d0ead 100644
--- a/arch/arm64/include/asm/dma-mapping.h
+++ b/arch/arm64/include/asm/dma-mapping.h
@@ -54,16 +54,15 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
return __generic_dma_ops(dev);
 }
 
-static inline void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- struct iommu_ops *iommu, bool coherent)
-{
-   if (!acpi_disabled && !dev->archdata.dma_ops)
-   dev->archdata.dma_ops = dma_ops;
-
-   dev->archdata.dma_coherent = coherent;
-}
+void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
+   struct iommu_ops *iommu, bool coherent);
 #define arch_setup_dma_ops arch_setup_dma_ops
 
+#ifdef CONFIG_IOMMU_DMA
+void arch_teardown_dma_ops(struct device *dev);
+#define arch_teardown_dma_ops  arch_teardown_dma_ops
+#endif
+
 /* do not use this function in a driver */
 static inline bool is_device_dma_coherent(struct device *dev)
 {
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index dd2d6e6..02ef19d 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -960,6 +960,19 @@ static void __iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
}
 }
 
+void arch_teardown_dma_ops(struct device *dev)
+{
+   struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
+
+   if (domain) {
+   iommu_detach_device(domain, dev);
+   if (domain->type & __IOMMU_DOMAIN_FAKE_DEFAULT)
+   iommu_domain_free(domain);
+   }
+
+   dev->archdata.dma_ops = NULL;
+}
+
 #else
 
 static void __iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
@@ -968,3 +981,12 @@ static void __iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 
 #endif  /* CONFIG_IOMMU_DMA */
 
+void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
+   struct iommu_ops *iommu, bool coherent)
+{
+   if (!acpi_disabled && !dev->archdata.dma_ops)
+   dev->archdata.dma_ops = dma_ops;
+
+   dev->archdata.dma_coherent = coherent;
+   __iommu_setup_dma_ops(dev, dma_base, size, iommu);
+}
-- 
1.9.1



[PATCH v6 1/3] iommu: Implement common IOMMU ops for DMA mapping

2015-10-01 Thread Robin Murphy
Taking inspiration from the existing arch/arm code, break out some
generic functions to interface the DMA-API to the IOMMU-API. This will
do the bulk of the heavy lifting for IOMMU-backed dma-mapping.

Since associating an IOVA allocator with an IOMMU domain is a fairly
common need, rather than introduce yet another private structure just to
do this for ourselves, extend the top-level struct iommu_domain with the
notion. A simple opaque cookie allows reuse by other IOMMU API users
with their various different incompatible allocator types.

Signed-off-by: Robin Murphy 
---
 drivers/iommu/Kconfig |   7 +
 drivers/iommu/Makefile|   1 +
 drivers/iommu/dma-iommu.c | 524 ++
 include/linux/dma-iommu.h |  85 
 include/linux/iommu.h |   1 +
 5 files changed, 618 insertions(+)
 create mode 100644 drivers/iommu/dma-iommu.c
 create mode 100644 include/linux/dma-iommu.h

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 3dc1bcb..27d4d4b 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -48,6 +48,13 @@ config OF_IOMMU
def_bool y
depends on OF && IOMMU_API
 
+# IOMMU-agnostic DMA-mapping layer
+config IOMMU_DMA
+   bool
+   depends on NEED_SG_DMA_LENGTH
+   select IOMMU_API
+   select IOMMU_IOVA
+
 config FSL_PAMU
bool "Freescale IOMMU support"
depends on PPC32
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index c6dcc51..f465cfb 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -1,6 +1,7 @@
 obj-$(CONFIG_IOMMU_API) += iommu.o
 obj-$(CONFIG_IOMMU_API) += iommu-traces.o
 obj-$(CONFIG_IOMMU_API) += iommu-sysfs.o
+obj-$(CONFIG_IOMMU_DMA) += dma-iommu.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE) += io-pgtable.o
 obj-$(CONFIG_IOMMU_IO_PGTABLE_LPAE) += io-pgtable-arm.o
 obj-$(CONFIG_IOMMU_IOVA) += iova.o
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
new file mode 100644
index 000..3a20db4
--- /dev/null
+++ b/drivers/iommu/dma-iommu.c
@@ -0,0 +1,524 @@
+/*
+ * A fairly generic DMA-API to IOMMU-API glue layer.
+ *
+ * Copyright (C) 2014-2015 ARM Ltd.
+ *
+ * based in part on arch/arm/mm/dma-mapping.c:
+ * Copyright (C) 2000-2004 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see .
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+int iommu_dma_init(void)
+{
+   return iova_cache_get();
+}
+
+/**
+ * iommu_get_dma_cookie - Acquire DMA-API resources for a domain
+ * @domain: IOMMU domain to prepare for DMA-API usage
+ *
+ * IOMMU drivers should normally call this from their domain_alloc
+ * callback when domain->type == IOMMU_DOMAIN_DMA.
+ */
+int iommu_get_dma_cookie(struct iommu_domain *domain)
+{
+   struct iova_domain *iovad;
+
+   if (domain->iova_cookie)
+   return -EEXIST;
+
+   iovad = kzalloc(sizeof(*iovad), GFP_KERNEL);
+   domain->iova_cookie = iovad;
+
+   return iovad ? 0 : -ENOMEM;
+}
+EXPORT_SYMBOL(iommu_get_dma_cookie);
+
+/**
+ * iommu_put_dma_cookie - Release a domain's DMA mapping resources
+ * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
+ *
+ * IOMMU drivers should normally call this from their domain_free callback.
+ */
+void iommu_put_dma_cookie(struct iommu_domain *domain)
+{
+   struct iova_domain *iovad = domain->iova_cookie;
+
+   if (!iovad)
+   return;
+
+   put_iova_domain(iovad);
+   kfree(iovad);
+   domain->iova_cookie = NULL;
+}
+EXPORT_SYMBOL(iommu_put_dma_cookie);
+
+/**
+ * iommu_dma_init_domain - Initialise a DMA mapping domain
+ * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
+ * @base: IOVA at which the mappable address space starts
+ * @size: Size of IOVA space
+ *
+ * @base and @size should be exact multiples of IOMMU page granularity to
+ * avoid rounding surprises. If necessary, we reserve the page at address 0
+ * to ensure it is an invalid IOVA. It is safe to reinitialise a domain, but
+ * any change which could make prior IOVAs invalid will fail.
+ */
+int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base, u64 size)
+{
+   struct iova_domain *iovad = domain->iova_cookie;
+   unsigned long order, base_pfn, end_pfn;
+
+   if (!iovad)
+   return -ENODEV;
+
+   /* Use the smallest supported page size for IOVA granularity */
+   order = __ffs(domain->ops->pgsize_bitmap);
+ 

[PATCH v6 0/3] arm64: IOMMU-backed DMA mapping

2015-10-01 Thread Robin Murphy
Hi all,

Here's the latest, and hopefully last, revision of the initial arm64
IOMMU dma_ops support.

There are a couple of dependencies still currently in -next and the
intel-iommu tree[0]: "iommu: iova: Move iova cache management to the
iova library" is necessary for the rename of iova_cache_get(), and
"iommu/iova: Avoid over-allocating when size-aligned" will be needed
with some IOMMU drivers to prevent unmapping errors.

Changes from v5[1]:
- Change __iommu_dma_unmap() from BUG to WARN when things go wrong, and
  prevent a NULL dereference on double-free.
- Fix iommu_dma_map_sg() to ensure segments can never inadvertently end
  mapped across a segment boundary. As a result, we have to lose the
  segment-merging optimisation from before (I might revisit that if
  there's some evidence it's really worthwhile, though).
- Cleaned up the platform device workarounds for config order and
  default domains, and removed the other hacks. Demanding that the IOMMU
  drivers assign groups, and support IOMMU_DOMAIN_DMA via the methods
  provided, keeps things bearable, and the behaviour should now be
  consistent across all cases.

As a bonus, whilst the underlying of_iommu_configure() code only supports
platform devices at the moment, I can also say that this has now been
tested to work for PCI devices too, via some horrible hacks on a Juno r1.

Thanks,
Robin.

[0]:http://thread.gmane.org/gmane.linux.kernel.iommu/11033
[1]:http://thread.gmane.org/gmane.linux.kernel.iommu/10439

Robin Murphy (3):
  iommu: Implement common IOMMU ops for DMA mapping
  arm64: Add IOMMU dma_ops
  arm64: Hook up IOMMU dma_ops

 arch/arm64/Kconfig   |   1 +
 arch/arm64/include/asm/dma-mapping.h |  15 +-
 arch/arm64/mm/dma-mapping.c  | 457 ++
 drivers/iommu/Kconfig|   7 +
 drivers/iommu/Makefile   |   1 +
 drivers/iommu/dma-iommu.c| 524 +++
 include/linux/dma-iommu.h|  85 ++
 include/linux/iommu.h|   1 +
 8 files changed, 1083 insertions(+), 8 deletions(-)
 create mode 100644 drivers/iommu/dma-iommu.c
 create mode 100644 include/linux/dma-iommu.h

-- 
1.9.1



Re: [GIT PULL] IOVA fix for 4.3

2015-10-01 Thread Linus Torvalds
On Wed, Sep 30, 2015 at 11:49 AM, David Woodhouse  wrote:
>
> Linus, please pull from
>
> git://git.infradead.org/intel-iommu.git

Umm. What happened to this pull request? I want to see the expected
diffstat in addition to the shortlog...

  Linus


Re: [GIT PULL] IOVA fix for 4.3

2015-10-01 Thread David Woodhouse
On Thu, 2015-10-01 at 16:42 -0400, Linus Torvalds wrote:
> 
> Umm. What happened to this pull request? I want to see the expected
> diffstat in addition to the shortlog...

Ah crap, sorry. Here it is...

Linus, please pull from

git://git.infradead.org/intel-iommu.git

Robin Murphy (1):
  iommu/iova: Avoid over-allocating when size-aligned

Sakari Ailus (3):
  iommu: iova: Move iova cache management to the iova library
  iommu: iova: Export symbols
  iommu: Make the iova library a module

 drivers/iommu/Kconfig   |   2 +-
 drivers/iommu/intel-iommu.c |   8 +--
 drivers/iommu/iova.c| 120 +---
 include/linux/iova.h|   4 +-
 4 files changed, 77 insertions(+), 57 deletions(-)

The main fix here is the first one, fixing the over-allocation of
size-aligned requests. The other patches simply make the existing
IOVA code available to users other than the Intel VT-d driver, with
no functional change.

I concede the latter really *should* have been submitted during the
merge window, but since it's basically risk-free and people are
waiting to build on top of it and it's my fault I didn't get it in,
I (and they) would be grateful if you'd take it.

Thanks.

-- 
dwmw2



