Re: [PATCH v4] devicetree: Add generic IOMMU device tree bindings

2014-07-16 Thread Will Deacon
On Wed, Jul 16, 2014 at 02:25:20AM +0100, Olav Haugan wrote:
 On 7/13/2014 4:43 AM, Rob Clark wrote:
  On Sun, Jul 13, 2014 at 5:43 AM, Will Deacon will.dea...@arm.com wrote:
  My plan for the ARM SMMU driver is:
 
(1) Change ->probe() to walk the device-tree looking for all masters with
phandles back to the SMMU instance being probed
 
(2) For each master, extract the Stream IDs and add them to the internal
SMMU driver data structures (an rbtree per SMMU instance). For
hotpluggable buses, we'll need a way for the bus controller to
reserve a range of IDs -- this will likely be a later extension to
the binding.
 
(3) When we get an ->add() call, warn if it's a device we haven't seen
and reject the addition.
 
  That way, ->attach() should be the same as it is now, I think.
 
  Have you tried implementing something like that? We agreed that (1) isn't
  pretty, but I don't have a good alternative and it's only done at
  probe-time.
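
(As a rough, hypothetical sketch of step (1) above -- not code from any posted
patch -- probe could walk every DT node carrying an "iommus" phandle and pick
out the entries that point back at the SMMU being probed. The helper
arm_smmu_register_stream_id() and the exact structure fields are made up here:)

#include <linux/of.h>

static int arm_smmu_collect_masters(struct arm_smmu_device *smmu)
{
	struct device_node *np;
	struct of_phandle_args args;
	int i, err;

	for_each_node_with_property(np, "iommus") {
		for (i = 0; !of_parse_phandle_with_args(np, "iommus",
							"#iommu-cells",
							i, &args); i++) {
			if (args.np != smmu->dev->of_node) {
				of_node_put(args.np);
				continue;
			}
			/* args.args[0] carries this master's Stream ID */
			err = arm_smmu_register_stream_id(smmu, np,
							  args.args[0]);
			of_node_put(args.np);
			if (err)
				return err;
		}
	}
	return 0;
}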
  
  I haven't tried implementing that yet, but I'm sure it would work.  I
  was just hoping to avoid having to do that ;-)
 
 Is the reason you want to do it this way because you want to guarantee
 that all masters (and stream IDs) have been identified before the first
 attach call? I am just wondering why you cannot continue doing the
 master/streamID discovery during add_device() callback?

That's fine if we have one SMR per ID, but the moment we want to do more
involved matching, we're going to need complete system knowledge prior to
the first attach. I don't think we can safely reconfigure live streams on
the fly.
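
(For reference: ARM SMMU stream matching compares the incoming Stream ID
against an SMR's ID while ignoring any bits set in the SMR's mask, roughly as
in this illustrative helper -- smr_matches() is not a real driver function:)

static bool smr_matches(u16 smr_id, u16 smr_mask, u16 stream_id)
{
	/* One SMR with id 0x100 and mask 0x001 covers both 0x100 and
	 * 0x101; choosing masks like that requires knowing every Stream
	 * ID that will share the group before it goes live. */
	return ((smr_id ^ stream_id) & ~smr_mask) == 0;
}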

  BTW: Is the msm-v0 IOMMU compatible with the ARM SMMU driver, or is it a
  completely different design requiring a different driver?
  
  My understanding is that it is different from msm v1 IOMMU, although I
  think it shares the same pagetable format with v1.  Not sure if that
  is the same as arm-smmu?   If so it might be nice to try to extract
  out some shared helper fxns for map/unmap as well.
  
  I expect Olav knows better the similarities/differences.
  
 
 The msm-v0 IOMMU is not compatible with ARM SMMUv1 specification.
 However, it is a close cousin. The hardware was designed before the ARM
 SMMUv1 specification was available I believe. But it shares many of the
 same concepts as the ARM SMMUv1.
 
 msm-v0 IOMMU supports V7S page table format only. The ARM SMMU driver
 does not support V7S at this time. However, I believe we need to support
 this.
 
 Will, this reminds me. We definitely have a need to use different page
 tables in the ARM SMMU driver vs. the ARM CPU. We have an SoC with ARMv8
 cores (and thus ARMv8 page tables) but the SMMUs (SMMUv1) on this SoC
 only have support for V7S/V7L page table format. So we cannot use the
 same page table format as the CPU.

That sounds like a sane use-case. The best thing to do would be to add
some ARM page-table code as a library under drivers/iommu/, then we can
move the ARM IOMMUs over to using that. Since we'd be moving away from the
CPU page table helpers, we could also take the opportunity to use the
coherent DMA API instead of the hack I currently use for flushing out tables
to non-coherent walkers.
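
(A minimal sketch of how the streaming DMA API could replace that hack,
assuming each table was mapped with dma_map_single() at allocation time; the
helper name and parameters below are hypothetical:)

#include <linux/dma-mapping.h>

static void flush_pgtable(struct device *dev, dma_addr_t table_dma,
			  void *table_base, void *ptes, size_t size)
{
	/* Push freshly written PTEs out to a non-coherent walker:
	 * 'table_dma' is the dma_addr_t returned by dma_map_single()
	 * for 'table_base', and 'ptes' lies inside that table. */
	dma_sync_single_for_device(dev, table_dma + (ptes - table_base),
				   size, DMA_TO_DEVICE);
}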

Will


Re: [Patch Part3 V4 21/21] pci, ACPI, iommu: Enhance pci_root to support DMAR device hotplug

2014-07-16 Thread Bjorn Helgaas
On Fri, Jul 11, 2014 at 12:19 AM, Jiang Liu jiang@linux.intel.com wrote:
 Finally enhance pci_root driver to support DMAR device hotplug when
 hot-plugging PCI host bridges.

 Signed-off-by: Jiang Liu jiang@linux.intel.com

Acked-by: Bjorn Helgaas bhelg...@google.com

I assume you'll merge this along with the rest of the series via some
non-PCI tree.

 ---
  drivers/acpi/pci_root.c |   16 ++--
  1 file changed, 14 insertions(+), 2 deletions(-)

 diff --git a/drivers/acpi/pci_root.c b/drivers/acpi/pci_root.c
 index d388f13d48b4..99c2b9761c12 100644
 --- a/drivers/acpi/pci_root.c
 +++ b/drivers/acpi/pci_root.c
 @@ -33,6 +33,7 @@
  #include <linux/pci.h>
  #include <linux/pci-acpi.h>
  #include <linux/pci-aspm.h>
 +#include <linux/dmar.h>
  #include <linux/acpi.h>
  #include <linux/slab.h>
  #include <acpi/apei.h> /* for acpi_hest_init() */
 @@ -511,6 +512,7 @@ static int acpi_pci_root_add(struct acpi_device *device,
 struct acpi_pci_root *root;
 acpi_handle handle = device->handle;
 int no_aspm = 0, clear_aspm = 0;
 +   bool hotadd = system_state != SYSTEM_BOOTING;

 root = kzalloc(sizeof(struct acpi_pci_root), GFP_KERNEL);
 if (!root)
 @@ -557,6 +559,11 @@ static int acpi_pci_root_add(struct acpi_device *device,
 strcpy(acpi_device_class(device), ACPI_PCI_ROOT_CLASS);
 device->driver_data = root;

 +   if (hotadd && dmar_device_add(handle)) {
 +   result = -ENXIO;
 +   goto end;
 +   }
 +
 pr_info(PREFIX "%s [%s] (domain %04x %pR)\n",
acpi_device_name(device), acpi_device_bid(device),
root->segment, &root->secondary);
 @@ -583,7 +590,7 @@ static int acpi_pci_root_add(struct acpi_device *device,
 root->segment, (unsigned int)root->secondary.start);
 device->driver_data = NULL;
 result = -ENODEV;
 -   goto end;
 +   goto remove_dmar;
 }

 if (clear_aspm) {
 @@ -597,7 +604,7 @@ static int acpi_pci_root_add(struct acpi_device *device,
 if (device->wakeup.flags.run_wake)
 device_set_run_wake(root->bus->bridge, true);

 -   if (system_state != SYSTEM_BOOTING) {
 +   if (hotadd) {
 pcibios_resource_survey_bus(root->bus);
 pci_assign_unassigned_root_bus_resources(root->bus);
 }
 @@ -607,6 +614,9 @@ static int acpi_pci_root_add(struct acpi_device *device,
 pci_unlock_rescan_remove();
 return 1;

 +remove_dmar:
 +   if (hotadd)
 +   dmar_device_remove(handle);
  end:
 kfree(root);
 return result;
 @@ -625,6 +635,8 @@ static void acpi_pci_root_remove(struct acpi_device *device)

 pci_remove_root_bus(root->bus);

 +   dmar_device_remove(device->handle);
 +
 pci_unlock_rescan_remove();

 kfree(root);
 --
 1.7.10.4



Re: [PATCH v4] devicetree: Add generic IOMMU device tree bindings

2014-07-16 Thread Rob Clark
On Tue, Jul 15, 2014 at 9:25 PM, Olav Haugan ohau...@codeaurora.org wrote:
 On 7/13/2014 4:43 AM, Rob Clark wrote:
 On Sun, Jul 13, 2014 at 5:43 AM, Will Deacon will.dea...@arm.com wrote:
 On Sat, Jul 12, 2014 at 01:57:31PM +0100, Rob Clark wrote:
 On Sat, Jul 12, 2014 at 8:22 AM, Arnd Bergmann a...@arndb.de wrote:
 On Saturday 12 July 2014, Rob Clark wrote:
 Was there actually a good reason for having the device link to the
 iommu rather than the other way around?  How much would people hate it
 if I just ignore the generic bindings and use something that works for
 me instead.  I mean, it isn't exactly like there is going to be .dts
 re-use across different SoC's..  and at least with current IOMMU API
 some sort of of_get_named_iommu() API doesn't really make sense.

 The thing is, if you end up ignoring the generic binding then we have two
 IOMMUs using the same (ARM SMMU) binding and it begs the question as to
 which is the more generic! I know we're keen to get this merged, but
 merging something that people won't use and calling it generic doesn't
 seem ideal either. We do, however, desperately need a generic binding.

 yeah, ignoring the generic binding is not my first choice.  I'd rather
 have something that works well for everyone.  But I wasn't really sure
 if the current proposal was arbitrary, or if there are some
 conflicting requirements between different platforms.

 The common case that needs to be simple is attaching one (master) device
 to an IOMMU using the shared global context for the purposes of
 implementing the dma-mapping API.

 well, I don't disagree that IOMMU API has some problems.  It is too
 tied to the bus type, which doesn't really seem to make sense for
 platform devices.  (Unless we start having multiple platform busses?)

 But at least given the current IOMMU API I'm not really sure how it
 makes a difference which way the link goes.  But if there has already
 been some discussion about how you want to handle the tie in with
 dma-mapping, if you could point me at that then maybe your point will
 make more sense to me.

 If you look at the proposed binding in isolation, I think it *is* cleaner
 than the ARM SMMU binding (I've acked it...) and I believe it's more
 consistent with the way we describe linkages elsewhere.

 However, the issue you're raising is that it's more difficult to make use of
 in a Linux IOMMU driver. The reward you'll get for using it will come
 eventually when the DMA ops are automatically swizzled for devices using the
 generic binding.

 My plan for the ARM SMMU driver is:

   (1) Change ->probe() to walk the device-tree looking for all masters with
   phandles back to the SMMU instance being probed

   (2) For each master, extract the Stream IDs and add them to the internal
   SMMU driver data structures (an rbtree per SMMU instance). For
   hotpluggable buses, we'll need a way for the bus controller to
   reserve a range of IDs -- this will likely be a later extension to
   the binding.

   (3) When we get an ->add() call, warn if it's a device we haven't seen
   and reject the addition.

 That way, ->attach() should be the same as it is now, I think.

 Have you tried implementing something like that? We agreed that (1) isn't
 pretty, but I don't have a good alternative and it's only done at
 probe-time.

 I haven't tried implementing that yet, but I'm sure it would work.  I
 was just hoping to avoid having to do that ;-)

 Is the reason you want to do it this way because you want to guarantee
 that all masters (and stream IDs) have been identified before the first
 attach call? I am just wondering why you cannot continue doing the
 master/streamID discovery during add_device() callback?

it was mostly because I couldn't think of a sane way to differentiate
between the first and second time a device attaches (without keeping a
reference to the device).  But I guess it is ok to assume no hotplug
(since walking the device tree also seems acceptable).

BR,
-R


 BTW: Is the msm-v0 IOMMU compatible with the ARM SMMU driver, or is it a
 completely different design requiring a different driver?

 My understanding is that it is different from msm v1 IOMMU, although I
 think it shares the same pagetable format with v1.  Not sure if that
 is the same as arm-smmu?   If so it might be nice to try to extract
 out some shared helper fxns for map/unmap as well.

 I expect Olav knows better the similarities/differences.


 The msm-v0 IOMMU is not compatible with ARM SMMUv1 specification.
 However, it is a close cousin. The hardware was designed before the ARM
 SMMUv1 specification was available I believe. But it shares many of the
 same concepts as the ARM SMMUv1.

 msm-v0 IOMMU supports V7S page table format only. The ARM SMMU driver
 does not support V7S at this time. However, I believe we need to support
 this.

 Will, this reminds me. We definitely have a need to use different page
 tables in the ARM SMMU driver vs. the ARM CPU. We have an SoC with ARMv8
 cores (and thus ARMv8 page tables) but the SMMUs (SMMUv1) on this SoC
 only have support for V7S/V7L page table format. So we cannot use the
 same page table format as the CPU.

[PATCH v2 1/1] iommu-api: Add map_range/unmap_range functions

2014-07-16 Thread Olav Haugan
Mapping and unmapping are more often than not in the critical path.
map_range and unmap_range allow SMMU driver implementations to optimize
the process of mapping and unmapping buffers into the SMMU page tables.
Instead of mapping one physical address, doing a TLB operation
(expensive), mapping the next, doing another TLB operation, and so on,
the driver can map a scatter-gather list of physically contiguous pages
into one virtually contiguous range and then do a single TLB operation
at the end.

Additionally, the mapping operation will generally be faster since
clients do not have to keep calling the map API over and over again for
each physically contiguous chunk of memory that needs to be mapped to a
virtually contiguous region.

Signed-off-by: Olav Haugan ohau...@codeaurora.org
---
 drivers/iommu/iommu.c | 48 
 include/linux/iommu.h | 25 +
 2 files changed, 73 insertions(+)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 1698360..a0eebb7 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1089,6 +1089,54 @@ size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
 EXPORT_SYMBOL_GPL(iommu_unmap);
 
 
+int iommu_map_range(struct iommu_domain *domain, unsigned int iova,
+   struct scatterlist *sg, unsigned int len, int opt)
+{
+   s32 ret = 0;
+   u32 offset = 0;
+   u32 start_iova = iova;
+
+   BUG_ON(iova & (~PAGE_MASK));
+
+   if (unlikely(domain->ops->map_range == NULL)) {
+   while (offset < len) {
+   phys_addr_t phys = page_to_phys(sg_page(sg));
+   u32 page_len = PAGE_ALIGN(sg->offset + sg->length);
+
+   ret = iommu_map(domain, iova, phys, page_len, opt);
+   if (ret)
+   goto fail;
+
+   iova += page_len;
+   offset += page_len;
+   if (offset < len)
+   sg = sg_next(sg);
+   }
+   } else {
+   ret = domain->ops->map_range(domain, iova, sg, len, opt);
+   }
+   goto out;
+
+fail:
+   /* undo mappings already done in case of error */
+   iommu_unmap(domain, start_iova, offset);
+out:
+   return ret;
+}
+EXPORT_SYMBOL_GPL(iommu_map_range);
+
+int iommu_unmap_range(struct iommu_domain *domain, unsigned int iova,
+ unsigned int len, int opt)
+{
+   BUG_ON(iova & (~PAGE_MASK));
+
+   if (unlikely(domain->ops->unmap_range == NULL))
+   return iommu_unmap(domain, iova, len);
+   else
+   return domain->ops->unmap_range(domain, iova, len, opt);
+}
+EXPORT_SYMBOL_GPL(iommu_unmap_range);
+
 int iommu_domain_window_enable(struct iommu_domain *domain, u32 wnd_nr,
   phys_addr_t paddr, u64 size, int prot)
 {
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index c7097d7..54c836e 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -22,6 +22,7 @@
 #include <linux/errno.h>
 #include <linux/err.h>
 #include <linux/types.h>
+#include <linux/scatterlist.h>
 #include <trace/events/iommu.h>
 
 #define IOMMU_READ (1 << 0)
@@ -93,6 +94,8 @@ enum iommu_attr {
  * @detach_dev: detach device from an iommu domain
  * @map: map a physically contiguous memory region to an iommu domain
  * @unmap: unmap a physically contiguous memory region from an iommu domain
+ * @map_range: map a scatter-gather list of physically contiguous memory chunks to an iommu domain
+ * @unmap_range: unmap a scatter-gather list of physically contiguous memory chunks from an iommu domain
  * @iova_to_phys: translate iova to physical address
  * @domain_has_cap: domain capabilities query
  * @add_device: add device to iommu grouping
@@ -110,6 +113,10 @@ struct iommu_ops {
   phys_addr_t paddr, size_t size, int prot);
size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
 size_t size);
+   int (*map_range)(struct iommu_domain *domain, unsigned int iova,
+   struct scatterlist *sg, unsigned int len, int opt);
+   int (*unmap_range)(struct iommu_domain *domain, unsigned int iova,
+ unsigned int len, int opt);
phys_addr_t (*iova_to_phys)(struct iommu_domain *domain, dma_addr_t iova);
int (*domain_has_cap)(struct iommu_domain *domain,
  unsigned long cap);
@@ -153,6 +160,10 @@ extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
 phys_addr_t paddr, size_t size, int prot);
 extern size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova,
   size_t size);
+extern int iommu_map_range(struct iommu_domain *domain, unsigned int iova,
+   struct scatterlist *sg, unsigned int len, int opt);
+extern int iommu_unmap_range(struct iommu_domain *domain, unsigned int iova,
+   unsigned int len, int opt);
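
(To illustrate how a client might call the new API -- a hypothetical caller,
with opt passed as 0 since the patch leaves the option values up to the
individual IOMMU drivers:)

static int map_buffer(struct iommu_domain *domain, unsigned int iova,
		      struct scatterlist *sg, unsigned int total_len)
{
	int ret;

	/* iova must be page aligned; iommu_map_range() BUGs otherwise. */
	ret = iommu_map_range(domain, iova, sg, total_len, 0);
	if (ret)
		return ret;

	/* ... hand the IOVA range to the device and do the work ... */

	return iommu_unmap_range(domain, iova, total_len, 0);
}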

[PATCH v2 0/1] Add iommu map_range/unmap_range calls

2014-07-16 Thread Olav Haugan
Patch for adding map_range/unmap range to the generic IOMMU API.

v1 -> v2:
* Added support for an option argument to the unmap call. This can be used by
  IOMMU driver implementations to allow clients to signal to the driver not
  to do TLB invalidation, for example.
* Added fallback in case iommu_{map,unmap}_range is called for an IOMMU driver
  that does not have these implemented.
* Rebased on top of Joerg's tree.
* Split out patch from the rest of the patches.

Olav Haugan (1):
  iommu-api: Add map_range/unmap_range functions

 drivers/iommu/iommu.c | 48 
 include/linux/iommu.h | 25 +
 2 files changed, 73 insertions(+)

--
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation
