Re: FW: [PATCH 2/3] iommu: mtk: constify iommu_ops

2017-08-28 Thread Yong Wu
> From: Linux-mediatek [mailto:linux-mediatek-boun...@lists.infradead.org] On 
> Behalf Of Arvind Yadav
> Sent: Monday, August 28, 2017 8:12 PM
> To: m.szyprow...@samsung.com; j...@8bytes.org; kg...@kernel.org; 
> k...@kernel.org; matthias@gmail.com; gerald.schae...@de.ibm.com
> Cc: iommu@lists.linux-foundation.org; linux-media...@lists.infradead.org; 
> linux-ker...@vger.kernel.org; linux-arm-ker...@lists.infradead.org
> Subject: [PATCH 2/3] iommu: mtk: constify iommu_ops
> 
> iommu_ops are not supposed to change at runtime.
> Functions 'iommu_device_set_ops' and 'bus_set_iommu' work with
> const iommu_ops provided by . So mark the non-const
> structs as const.
> 
> Signed-off-by: Arvind Yadav 


Thanks.

Reviewed-by: Yong Wu 

Nit: the title should be: iommu/mediatek: xxx

Also, the file mtk_iommu.c may need a rebase on [1].

[1] https://lists.linuxfoundation.org/pipermail/iommu/2017-August/023847.html

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v6] vfio: platform: reset: Add Broadcom FlexRM reset module

2017-08-28 Thread Anup Patel via iommu
This patch adds Broadcom FlexRM low-level reset for
VFIO platform.

It will do the following:
1. Disable/Deactivate each FlexRM ring
2. Flush each FlexRM ring

The cleanup sequence for FlexRM rings is adapted from
Broadcom FlexRM mailbox driver.

Signed-off-by: Anup Patel 
Reviewed-by: Oza Oza 
Reviewed-by: Scott Branden 
Reviewed-by: Eric Auger 
---
 drivers/vfio/platform/reset/Kconfig|   9 ++
 drivers/vfio/platform/reset/Makefile   |   1 +
 .../vfio/platform/reset/vfio_platform_bcmflexrm.c  | 100 +
 3 files changed, 110 insertions(+)
 create mode 100644 drivers/vfio/platform/reset/vfio_platform_bcmflexrm.c

diff --git a/drivers/vfio/platform/reset/Kconfig 
b/drivers/vfio/platform/reset/Kconfig
index 705..392e3c0 100644
--- a/drivers/vfio/platform/reset/Kconfig
+++ b/drivers/vfio/platform/reset/Kconfig
@@ -13,3 +13,12 @@ config VFIO_PLATFORM_AMDXGBE_RESET
  Enables the VFIO platform driver to handle reset for AMD XGBE
 
  If you don't know what to do here, say N.
+
+config VFIO_PLATFORM_BCMFLEXRM_RESET
+   tristate "VFIO support for Broadcom FlexRM reset"
+   depends on VFIO_PLATFORM && (ARCH_BCM_IPROC || COMPILE_TEST)
+   default ARCH_BCM_IPROC
+   help
+ Enables the VFIO platform driver to handle reset for Broadcom FlexRM
+
+ If you don't know what to do here, say N.
diff --git a/drivers/vfio/platform/reset/Makefile 
b/drivers/vfio/platform/reset/Makefile
index 93f4e23..8d9874b 100644
--- a/drivers/vfio/platform/reset/Makefile
+++ b/drivers/vfio/platform/reset/Makefile
@@ -5,3 +5,4 @@ ccflags-y += -Idrivers/vfio/platform
 
 obj-$(CONFIG_VFIO_PLATFORM_CALXEDAXGMAC_RESET) += vfio-platform-calxedaxgmac.o
 obj-$(CONFIG_VFIO_PLATFORM_AMDXGBE_RESET) += vfio-platform-amdxgbe.o
+obj-$(CONFIG_VFIO_PLATFORM_BCMFLEXRM_RESET) += vfio_platform_bcmflexrm.o
diff --git a/drivers/vfio/platform/reset/vfio_platform_bcmflexrm.c 
b/drivers/vfio/platform/reset/vfio_platform_bcmflexrm.c
new file mode 100644
index 000..966a813
--- /dev/null
+++ b/drivers/vfio/platform/reset/vfio_platform_bcmflexrm.c
@@ -0,0 +1,100 @@
+/*
+ * Copyright (C) 2017 Broadcom
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see .
+ */
+
+/*
+ * This driver provides reset support for Broadcom FlexRM ring manager
+ * to VFIO platform.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "vfio_platform_private.h"
+
+/* FlexRM configuration */
+#define RING_REGS_SIZE 0x10000
+#define RING_VER_MAGIC 0x76303031
+
+/* Per-Ring register offsets */
+#define RING_VER   0x000
+#define RING_CONTROL   0x034
+#define RING_FLUSH_DONE0x038
+
+/* Register RING_CONTROL fields */
+#define CONTROL_FLUSH_SHIFT5
+
+/* Register RING_FLUSH_DONE fields */
+#define FLUSH_DONE_MASK0x1
+
+static int vfio_platform_bcmflexrm_shutdown(void __iomem *ring)
+{
+   unsigned int timeout;
+
+   /* Disable/inactivate ring */
+   writel_relaxed(0x0, ring + RING_CONTROL);
+
+   /* Flush ring with timeout of 1s */
+   timeout = 1000;
+   writel_relaxed(BIT(CONTROL_FLUSH_SHIFT), ring + RING_CONTROL);
+   do {
+   if (readl_relaxed(ring + RING_FLUSH_DONE) & FLUSH_DONE_MASK)
+   break;
+   mdelay(1);
+   } while (--timeout);
+
+   if (!timeout) {
+   pr_warn("VFIO FlexRM shutdown timeout\n");
+   return -ETIMEDOUT;
+   }
+
+   return 0;
+}
+
+static int vfio_platform_bcmflexrm_reset(struct vfio_platform_device *vdev)
+{
+   int rc = 0;
+   void __iomem *ring;
+   struct vfio_platform_region *reg = &vdev->regions[0];
+
+   /* Map FlexRM ring registers if not mapped */
+   if (!reg->ioaddr) {
+   reg->ioaddr = ioremap_nocache(reg->addr, reg->size);
+   if (!reg->ioaddr)
+   return -ENOMEM;
+   }
+
+   /* Discover and shutdown each FlexRM ring */
+   for (ring = reg->ioaddr;
+ring < (reg->ioaddr + reg->size); ring += RING_REGS_SIZE) {
+   if (readl_relaxed(ring + RING_VER) == RING_VER_MAGIC) {
+   

[PATCH v6] FlexRM support in VFIO platform

2017-08-28 Thread Anup Patel via iommu
This patchset primarily adds Broadcom FlexRM reset module for
VFIO platform driver.

The patches are based on Linux-4.13-rc3 and can also be
found at flexrm-vfio-v6 branch of
https://github.com/Broadcom/arm64-linux.git

Changes since v5:
 - Make kconfig option VFIO_PLATFORM_BCMFLEXRM_RESET
   default to ARCH_BCM_IPROC

Changes since v4:
 - Use "--timeout" instead of "timeout--" in
   vfio_platform_bcmflexrm_shutdown()

Changes since v3:
 - Improve "depends on" for Kconfig option
   VFIO_PLATFORM_BCMFLEXRM_RESET
 - Fix typo in pr_warn() called by
   vfio_platform_bcmflexrm_shutdown()
 - Return error from vfio_platform_bcmflexrm_shutdown()
   when FlexRM ring flush timeout happens

Changes since v2:
 - Remove PATCH1 because fixing VFIO no-IOMMU mode is
   a separate topic

Changes since v1:
 - Remove iommu_present() check in vfio_iommu_group_get()
 - Drop PATCH1-to-PATCH3 because IOMMU_CAP_BYPASS is not
   required
 - Move additional comments out of license header in
   vfio_platform_bcmflexrm.c

Anup Patel (1):
  vfio: platform: reset: Add Broadcom FlexRM reset module

 drivers/vfio/platform/reset/Kconfig|   9 ++
 drivers/vfio/platform/reset/Makefile   |   1 +
 .../vfio/platform/reset/vfio_platform_bcmflexrm.c  | 100 +
 3 files changed, 110 insertions(+)
 create mode 100644 drivers/vfio/platform/reset/vfio_platform_bcmflexrm.c

-- 
2.7.4



Re: [RFC PATCH v5 0/5] vfio-pci: Add support for mmapping MSI-X table

2017-08-28 Thread Alexey Kardashevskiy
On 21/08/17 12:47, Alexey Kardashevskiy wrote:
> Folks,
> 
> Ok, people did talk, exchanged ideas, lovely :) What happens now? Do I
> repost this or go back to PCI bus flags or something else? Thanks.


Anyone, any help? How do we proceed with this? Thanks.



> 
> 
> 
> On 14/08/17 19:45, Alexey Kardashevskiy wrote:
>> Folks,
>>
>> Is there anything to change besides those compiler errors and David's
>> comment in 5/5? Or is the whole patchset too bad? Thanks.
>>
>>
>>
>> On 07/08/17 17:25, Alexey Kardashevskiy wrote:
>>> This is a followup for "[PATCH kernel v4 0/6] vfio-pci: Add support for 
>>> mmapping MSI-X table"
>>> http://www.spinics.net/lists/kvm/msg152232.html
>>>
>>> This time it is using "caps" in IOMMU groups. The main question is if PCI
>>> bus flags or IOMMU domains are still better (and which one).
>>
>>>
>>>
>>>
>>> Here is some background:
>>>
>>> The current vfio-pci implementation disallows mmapping the page
>>> containing the MSI-X table, since users could write directly
>>> to the MSI-X table and generate incorrect MSIs.
>>>
>>> However, this causes a performance issue when there
>>> are critical device registers in the same page as the
>>> MSI-X table. We have to handle the MMIO access to these
>>> registers in QEMU emulation rather than in the guest.
>>>
>>> To solve this issue, this series allows exposing the MSI-X table
>>> to userspace when the hardware supports interrupt
>>> remapping, which ensures that a given PCI device can only
>>> shoot the MSIs assigned to it. We introduce a new bus flag
>>> PCI_BUS_FLAGS_MSI_REMAP to test this capability on the PCI side
>>> for different archs.
>>>
>>>
>>> This is based on sha1
>>> 26c5cebfdb6c "Merge branch 'parisc-4.13-4' of 
>>> git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux"
>>>
>>> Please comment. Thanks.
>>>
>>> Changelog:
>>>
>>> v5:
>>> * redid the whole thing via so-called IOMMU group capabilities
>>>
>>> v4:
>>> * rebased on recent upstream
>>> * got all 6 patches from v2 (v3 was missing some)
>>>
>>>
>>>
>>>
>>> Alexey Kardashevskiy (5):
>>>   iommu: Add capabilities to a group
>>>   iommu: Set IOMMU_GROUP_CAP_ISOLATE_MSIX if MSI controller enables IRQ
>>> remapping
>>>   iommu/intel/amd: Set IOMMU_GROUP_CAP_ISOLATE_MSIX if IRQ remapping is
>>> enabled
>>>   powerpc/iommu: Set IOMMU_GROUP_CAP_ISOLATE_MSIX
>>>   vfio-pci: Allow to expose MSI-X table to userspace when safe
>>>
>>>  include/linux/iommu.h| 20 
>>>  include/linux/vfio.h |  1 +
>>>  arch/powerpc/kernel/iommu.c  |  1 +
>>>  drivers/iommu/amd_iommu.c|  3 +++
>>>  drivers/iommu/intel-iommu.c  |  3 +++
>>>  drivers/iommu/iommu.c| 35 +++
>>>  drivers/vfio/pci/vfio_pci.c  | 20 +---
>>>  drivers/vfio/pci/vfio_pci_rdwr.c |  5 -
>>>  drivers/vfio/vfio.c  | 15 +++
>>>  9 files changed, 99 insertions(+), 4 deletions(-)
>>>
>>
>>
> 
> 


-- 
Alexey


Re: [PATCH 2/2] iommu: Introduce Interface for IOMMU TLB Flushing

2017-08-28 Thread Leizhen (ThunderTown)


On 2017/8/23 21:50, Joerg Roedel wrote:
> From: Joerg Roedel 
> 
> With the current IOMMU-API the hardware TLBs have to be
> flushed in every iommu_ops->unmap() call-back.
> 
> For unmapping large amounts of address space, like it
> happens when a KVM domain with assigned devices is
> destroyed, this causes thousands of unnecessary TLB flushes
> in the IOMMU hardware because the unmap call-back runs for
> every unmapped physical page.
> 
> With the TLB Flush Interface and the new iommu_unmap_fast()
> function introduced here the need to clean the hardware TLBs
> is removed from the unmapping code-path. Users of
> iommu_unmap_fast() have to explicitly call the TLB-Flush
> functions to sync the page-table changes to the hardware.
> 
> Three functions for TLB-Flushes are introduced:
> 
>   * iommu_flush_tlb_all() - Flushes all TLB entries
> associated with that
> domain. TLBs entries are
> flushed when this function
> returns.
> 
>   * iommu_tlb_range_add() - This will add a given
> range to the flush queue
> for this domain.
> 
>   * iommu_tlb_sync() - Flushes all queued ranges from
>the hardware TLBs. Returns when
>the flush is finished.
> 
> The semantics of this interface are intentionally similar to
> the iommu_gather_ops from the io-pgtable code.
> 
> Cc: Alex Williamson 
> Cc: Will Deacon 
> Cc: Robin Murphy 
> Signed-off-by: Joerg Roedel 
> ---
>  drivers/iommu/iommu.c | 32 ---
>  include/linux/iommu.h | 72 
> ++-
>  2 files changed, 99 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 3f6ea16..0f68342 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -527,6 +527,8 @@ static int iommu_group_create_direct_mappings(struct 
> iommu_group *group,
>  
>   }
>  
> + iommu_flush_tlb_all(domain);
> +
>  out:
>   iommu_put_resv_regions(dev, &mappings);
>  
> @@ -1556,13 +1558,16 @@ int iommu_map(struct iommu_domain *domain, unsigned 
> long iova,
>  }
>  EXPORT_SYMBOL_GPL(iommu_map);
>  
> -size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t 
> size)
> +static size_t __iommu_unmap(struct iommu_domain *domain,
> + unsigned long iova, size_t size,
> + bool sync)
>  {
> + const struct iommu_ops *ops = domain->ops;
>   size_t unmapped_page, unmapped = 0;
> - unsigned int min_pagesz;
>   unsigned long orig_iova = iova;
> + unsigned int min_pagesz;
>  
> - if (unlikely(domain->ops->unmap == NULL ||
> + if (unlikely(ops->unmap == NULL ||
>domain->pgsize_bitmap == 0UL))
>   return -ENODEV;
>  
> @@ -1592,10 +1597,13 @@ size_t iommu_unmap(struct iommu_domain *domain, 
> unsigned long iova, size_t size)
>   while (unmapped < size) {
>   size_t pgsize = iommu_pgsize(domain, iova, size - unmapped);
>  
> - unmapped_page = domain->ops->unmap(domain, iova, pgsize);
> + unmapped_page = ops->unmap(domain, iova, pgsize);
>   if (!unmapped_page)
>   break;
>  
> + if (sync && ops->iotlb_range_add)
> + ops->iotlb_range_add(domain, iova, pgsize);
> +
>   pr_debug("unmapped: iova 0x%lx size 0x%zx\n",
>iova, unmapped_page);
>  
> @@ -1603,11 +1611,27 @@ size_t iommu_unmap(struct iommu_domain *domain, 
> unsigned long iova, size_t size)
>   unmapped += unmapped_page;
>   }
>  
> + if (sync && ops->iotlb_sync)
> + ops->iotlb_sync(domain);
> +
>   trace_unmap(orig_iova, size, unmapped);
>   return unmapped;
>  }
> +
> +size_t iommu_unmap(struct iommu_domain *domain,
> +unsigned long iova, size_t size)
> +{
> + return __iommu_unmap(domain, iova, size, true);
> +}
>  EXPORT_SYMBOL_GPL(iommu_unmap);
>  
> +size_t iommu_unmap_fast(struct iommu_domain *domain,
> + unsigned long iova, size_t size)
> +{
Do we need to add a check "if (!domain->ops->iotlb_sync)"? If the three newly
added hooks are not registered, we should fall back to iommu_unmap().

> + return __iommu_unmap(domain, iova, size, false);
> +}
> +EXPORT_SYMBOL_GPL(iommu_unmap_fast);
> +
>  size_t default_iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
>struct scatterlist *sg, unsigned int nents, int prot)
>  {
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index 2cb54ad..67fa954 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -167,6 +167,10 @@ struct 

RE: Support SVM without PASID

2017-08-28 Thread Tian, Kevin
> From: Bharat Kumar Gogada [mailto:bhara...@xilinx.com]
> Sent: Monday, August 28, 2017 9:10 PM
> 
> > Subject: RE: Support SVM without PASID
> >
> > > From: valmiki [mailto:valmiki...@gmail.com]
> > > Sent: Saturday, August 12, 2017 8:11 PM
> > >
> > > On 8/7/2017 4:01 PM, Jean-Philippe Brucker wrote:
> > > > On 05/08/17 06:14, valmiki wrote:
> > > > [...]
> > > >> Hi Jean, Thanks a lot, now i understood the flow. From vfio kernel
> > > >> documentation we fill vaddr and iova in struct
> > > vfio_iommu_type1_dma_map
> > > >> and pass them to VFIO. But if we use dynamic allocation in
> > > >> application (say malloc), do we need to use dma API to get iova and
> > > >> then call VFIO_IOMMU_MAP ioctl ?
> > > >> If application needs multiple such dynamic allocations, then it
> > > >> need to allocate large chunk and program it via VFIO_IOMMU_MAP
> > > >> ioctl and
> > > then
> > > >> manage rest allocations requirements from this buffer ?
> > > >
> > > > Yes, without SVM, the application allocates large buffers, allocates
> > > > IOVAs itself, and maps them with VFIO_IOMMU_MAP. Userspace
> doesn't
> > > > rely
> > > on the
> > > > DMA API at all, it manages IOVAs as it wants. Sizes passed to
> > > > VFIO_IOMMU_MAP have to be multiples of the MMU or IOMMU page
> > > granularity
> > > > (that is at least 4kB), and both iova and vaddr have to be aligned
> > > > on that granularity as well. So malloc isn't really suitable in this
> > > > case, you'll need mmap. The application can then implement a small
> > > > allocator to
> > > manage
> > > > the DMA pool created with VFIO_IOMMU_MAP.
> > >
> > > Thanks Jean. I am confused about what allocating IOVAs in userspace
> > > means: how can a user application decide the IOVA address? Can it
> > > pick any random IOVA address?
> > >
> >
> > yes, any address. In this mode the whole IOVA address space is owned by
> > application, which just needs to use VFIO_IOMMU_MAP to setup
> > IOVA->PA mapping in IOMMU (As Jean pointed out, input paramters
> > are iova and vaddr. VFIO will figure out pa corresponding to vaddr).
> >
> Hi Kevin, I have a doubt in this case: what if someone assigns the
> mmap-returned virtual address as the IOVA? The EP will assume it is
> generating requests on an IOVA, but in reality we are using the
> application-allocated virtual address, so it looks like we are working
> with the application's virtual address directly, without PASID. Is this
> valid?
> 

The IOMMU doesn't care what the address in a DMA transaction from
a device is. It cares only whether the DMA transaction is tagged
with a PASID or not, and then walks the corresponding structure of
the matching I/O page table to get a translated address. Each I/O
page table hosts a standalone virtual address space. An application
can implement a new allocator to manage the address space
(iova != vaddr), or simply reuse the malloc-ed virtual address
(iova == vaddr); the hardware does not care either way.

Thanks
Kevin


[PATCH] iommu/ipmmu-vmsa: make ipmmu_gather_ops const

2017-08-28 Thread Bhumika Goyal
Make these const as they are not modified anywhere.

Signed-off-by: Bhumika Goyal 
---
 drivers/iommu/ipmmu-vmsa.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 5093d1c..0423d17 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -295,7 +295,7 @@ static void ipmmu_tlb_add_flush(unsigned long iova, size_t 
size,
/* The hardware doesn't support selective TLB flush. */
 }
 
-static struct iommu_gather_ops ipmmu_gather_ops = {
+static const struct iommu_gather_ops ipmmu_gather_ops = {
.tlb_flush_all = ipmmu_tlb_flush_all,
.tlb_add_flush = ipmmu_tlb_add_flush,
.tlb_sync = ipmmu_tlb_flush_all,
-- 
1.9.1



Re: [PATCH 3/3] iommu: s390: constify iommu_ops

2017-08-28 Thread Gerald Schaefer
On Mon, 28 Aug 2017 17:42:50 +0530
Arvind Yadav  wrote:

> iommu_ops are not supposed to change at runtime.
> The function 'bus_set_iommu' works with const iommu_ops provided
> by . So mark the non-const structs as const.
> 
> Signed-off-by: Arvind Yadav 
> ---
>  drivers/iommu/s390-iommu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Acked-by: Gerald Schaefer 

> 
> diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
> index 8788640..400d75f 100644
> --- a/drivers/iommu/s390-iommu.c
> +++ b/drivers/iommu/s390-iommu.c
> @@ -327,7 +327,7 @@ static size_t s390_iommu_unmap(struct iommu_domain 
> *domain,
>   return size;
>  }
> 
> -static struct iommu_ops s390_iommu_ops = {
> +static const struct iommu_ops s390_iommu_ops = {
>   .capable = s390_iommu_capable,
>   .domain_alloc = s390_domain_alloc,
>   .domain_free = s390_domain_free,



[PATCH] intel-iommu: Don't be too aggressive when clearing one context entry

2017-08-28 Thread Filippo Sironi via iommu
Previously, we were invalidating the context cache and IOTLB globally when
clearing one context entry.  This is a tad too aggressive.
Invalidate the context cache and IOTLB for the affected device only.

Signed-off-by: Filippo Sironi 
Cc: David Woodhouse 
Cc: David Woodhouse 
Cc: Joerg Roedel 
Cc: iommu@lists.linux-foundation.org
Cc: linux-ker...@vger.kernel.org
---
 drivers/iommu/intel-iommu.c | 25 ++---
 1 file changed, 22 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 3e8636f1220e..4bf3e59b0929 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -2351,13 +2351,32 @@ static inline int domain_pfn_mapping(struct dmar_domain 
*domain, unsigned long i
 
 static void domain_context_clear_one(struct intel_iommu *iommu, u8 bus, u8 
devfn)
 {
+   unsigned long flags;
+   struct context_entry *context;
+   u16 did_old;
+
if (!iommu)
return;
 
+   spin_lock_irqsave(&iommu->lock, flags);
+   context = iommu_context_addr(iommu, bus, devfn, 0);
+   if (!context) {
+   spin_unlock_irqrestore(&iommu->lock, flags);
+   return;
+   }
+   did_old = context_domain_id(context);
+   spin_unlock_irqrestore(&iommu->lock, flags);
clear_context_table(iommu, bus, devfn);
-   iommu->flush.flush_context(iommu, 0, 0, 0,
-  DMA_CCMD_GLOBAL_INVL);
-   iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
+   iommu->flush.flush_context(iommu,
+  did_old,
+  (((u16)bus) << 8) | devfn,
+  DMA_CCMD_MASK_NOBIT,
+  DMA_CCMD_DEVICE_INVL);
+   iommu->flush.flush_iotlb(iommu,
+did_old,
+0,
+0,
+DMA_TLB_DSI_FLUSH);
 }
 
 static inline void unlink_domain_info(struct device_domain_info *info)
-- 
2.7.4



RE: Support SVM without PASID

2017-08-28 Thread Bharat Kumar Gogada
> Subject: RE: Support SVM without PASID
> 
> > From: valmiki [mailto:valmiki...@gmail.com]
> > Sent: Saturday, August 12, 2017 8:11 PM
> >
> > On 8/7/2017 4:01 PM, Jean-Philippe Brucker wrote:
> > > On 05/08/17 06:14, valmiki wrote:
> > > [...]
> > >> Hi Jean, Thanks a lot, now i understood the flow. From vfio kernel
> > >> documentation we fill vaddr and iova in struct
> > vfio_iommu_type1_dma_map
> > >> and pass them to VFIO. But if we use dynamic allocation in
> > >> application (say malloc), do we need to use dma API to get iova and
> > >> then call VFIO_IOMMU_MAP ioctl ?
> > >> If application needs multiple such dynamic allocations, then it
> > >> need to allocate large chunk and program it via VFIO_IOMMU_MAP
> > >> ioctl and
> > then
> > >> manage rest allocations requirements from this buffer ?
> > >
> > > Yes, without SVM, the application allocates large buffers, allocates
> > > IOVAs itself, and maps them with VFIO_IOMMU_MAP. Userspace doesn't
> > > rely
> > on the
> > > DMA API at all, it manages IOVAs as it wants. Sizes passed to
> > > VFIO_IOMMU_MAP have to be multiples of the MMU or IOMMU page
> > granularity
> > > (that is at least 4kB), and both iova and vaddr have to be aligned
> > > on that granularity as well. So malloc isn't really suitable in this
> > > case, you'll need mmap. The application can then implement a small
> > > allocator to
> > manage
> > > the DMA pool created with VFIO_IOMMU_MAP.
> >
> > Thanks Jean. I am confused about what allocating IOVAs in userspace
> > means: how can a user application decide the IOVA address? Can it
> > pick any random IOVA address?
> >
> 
> yes, any address. In this mode the whole IOVA address space is owned by
> application, which just needs to use VFIO_IOMMU_MAP to setup
> IOVA->PA mapping in IOMMU (As Jean pointed out, input paramters
> are iova and vaddr. VFIO will figure out pa corresponding to vaddr).
> 
Hi Kevin, I have a doubt in this case: what if someone assigns the
mmap-returned virtual address as the IOVA? The EP will assume it is
generating requests on an IOVA, but in reality we are using the
application-allocated virtual address, so it looks like we are working
with the application's virtual address directly, without PASID. Is this
valid?

Regards,
Bharat


Re: [PATCH v2] iommu/ipmmu-vmsa: Rereserving a free context before setting up a pagetable

2017-08-28 Thread Oleksandr Tyshchenko
Hi, all.

Any comments?

On Wed, Aug 23, 2017 at 5:31 PM, Oleksandr Tyshchenko
 wrote:
> From: Oleksandr Tyshchenko 
>
> Reserving a free context is both quicker and more likely to fail
> (due to limited hardware resources) than setting up a pagetable.
> What is more the pagetable init/cleanup code could require
> the context to be set up.
>
> Signed-off-by: Oleksandr Tyshchenko 
> CC: Robin Murphy 
> CC: Laurent Pinchart 
> CC: Joerg Roedel 
>
> ---
> This patch fixes a bug during rollback logic:
> In ipmmu_domain_init_context() we are trying to find an unused context
> and if operation fails we will call free_io_pgtable_ops(),
> but "domain->context_id" hasn't been initialized yet (likely it is 0
> because of kzalloc). Having the following call stack:
> free_io_pgtable_ops() -> io_pgtable_tlb_flush_all() ->
> ipmmu_tlb_flush_all() -> ipmmu_tlb_invalidate()
> we will get a mistaken cache flush for a context pointed by
> uninitialized "domain->context_id".
>
>
> v1 here:
> https://lists.linuxfoundation.org/pipermail/iommu/2017-August/023857.html
> ---
>  drivers/iommu/ipmmu-vmsa.c | 42 +-
>  1 file changed, 21 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
> index 2a38aa1..382f387 100644
> --- a/drivers/iommu/ipmmu-vmsa.c
> +++ b/drivers/iommu/ipmmu-vmsa.c
> @@ -341,6 +341,19 @@ static int ipmmu_domain_allocate_context(struct 
> ipmmu_vmsa_device *mmu,
> return ret;
>  }
>
> +static void ipmmu_domain_free_context(struct ipmmu_vmsa_device *mmu,
> + unsigned int context_id)
> +{
> +   unsigned long flags;
> +
> +   spin_lock_irqsave(&mmu->lock, flags);
> +
> +   clear_bit(context_id, mmu->ctx);
> +   mmu->domains[context_id] = NULL;
> +
> +   spin_unlock_irqrestore(&mmu->lock, flags);
> +}
> +
>  static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain)
>  {
> u64 ttbr;
> @@ -370,22 +383,22 @@ static int ipmmu_domain_init_context(struct 
> ipmmu_vmsa_domain *domain)
>  */
> domain->cfg.iommu_dev = domain->mmu->dev;
>
> -   domain->iop = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &domain->cfg,
> -  domain);
> -   if (!domain->iop)
> -   return -EINVAL;
> -
> /*
>  * Find an unused context.
>  */
> ret = ipmmu_domain_allocate_context(domain->mmu, domain);
> -   if (ret == IPMMU_CTX_MAX) {
> -   free_io_pgtable_ops(domain->iop);
> +   if (ret == IPMMU_CTX_MAX)
> return -EBUSY;
> -   }
>
> domain->context_id = ret;
>
> +   domain->iop = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &domain->cfg,
> +  domain);
> +   if (!domain->iop) {
> +   ipmmu_domain_free_context(domain->mmu, domain->context_id);
> +   return -EINVAL;
> +   }
> +
> /* TTBR0 */
> ttbr = domain->cfg.arm_lpae_s1_cfg.ttbr[0];
> ipmmu_ctx_write(domain, IMTTLBR0, ttbr);
> @@ -426,19 +439,6 @@ static int ipmmu_domain_init_context(struct 
> ipmmu_vmsa_domain *domain)
> return 0;
>  }
>
> -static void ipmmu_domain_free_context(struct ipmmu_vmsa_device *mmu,
> - unsigned int context_id)
> -{
> -   unsigned long flags;
> -
> -   spin_lock_irqsave(&mmu->lock, flags);
> -
> -   clear_bit(context_id, mmu->ctx);
> -   mmu->domains[context_id] = NULL;
> -
> -   spin_unlock_irqrestore(&mmu->lock, flags);
> -}
> -
>  static void ipmmu_domain_destroy_context(struct ipmmu_vmsa_domain *domain)
>  {
> /*
> --
> 2.7.4
>



-- 
Regards,

Oleksandr Tyshchenko


Re: [PATCH 1/3] iommu: exynos: constify iommu_ops

2017-08-28 Thread Marek Szyprowski

Hi Arvind,

On 2017-08-28 14:12, Arvind Yadav wrote:

iommu_ops are not supposed to change at runtime.
Functions 'iommu_device_set_ops' and 'bus_set_iommu' working with
const iommu_ops provided by . So mark the non-const
structs as const.

Signed-off-by: Arvind Yadav 


Acked-by: Marek Szyprowski 

I remember that in the past there was a reason for non-const iommu_ops
in the IOMMU API, but I hope it has been finally resolved.


---
  drivers/iommu/exynos-iommu.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
index 2395478..a540016 100644
--- a/drivers/iommu/exynos-iommu.c
+++ b/drivers/iommu/exynos-iommu.c
@@ -569,7 +569,7 @@ static void sysmmu_tlb_invalidate_entry(struct 
sysmmu_drvdata *data,
	spin_unlock_irqrestore(&data->lock, flags);
  }
  
-static struct iommu_ops exynos_iommu_ops;
+static const struct iommu_ops exynos_iommu_ops;
  
  static int __init exynos_sysmmu_probe(struct platform_device *pdev)

  {
@@ -1323,7 +1323,7 @@ static int exynos_iommu_of_xlate(struct device *dev,
return 0;
  }
  
-static struct iommu_ops exynos_iommu_ops = {
+static const struct iommu_ops exynos_iommu_ops = {
.domain_alloc = exynos_iommu_domain_alloc,
.domain_free = exynos_iommu_domain_free,
.attach_dev = exynos_iommu_attach_device,


Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland



[PATCH 3/3] iommu: s390: constify iommu_ops

2017-08-28 Thread Arvind Yadav
iommu_ops are not supposed to change at runtime.
The function 'bus_set_iommu' works with const iommu_ops provided
by . So mark the non-const structs as const.

Signed-off-by: Arvind Yadav 
---
 drivers/iommu/s390-iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index 8788640..400d75f 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -327,7 +327,7 @@ static size_t s390_iommu_unmap(struct iommu_domain *domain,
return size;
 }
 
-static struct iommu_ops s390_iommu_ops = {
+static const struct iommu_ops s390_iommu_ops = {
.capable = s390_iommu_capable,
.domain_alloc = s390_domain_alloc,
.domain_free = s390_domain_free,
-- 
1.9.1



[PATCH 2/3] iommu: mtk: constify iommu_ops

2017-08-28 Thread Arvind Yadav
iommu_ops are not supposed to change at runtime.
Functions 'iommu_device_set_ops' and 'bus_set_iommu' work with
const iommu_ops provided by . So mark the non-const
structs as const.

Signed-off-by: Arvind Yadav 
---
 drivers/iommu/mtk_iommu.c| 4 ++--
 drivers/iommu/mtk_iommu_v1.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index 91c6d36..eac99a3 100644
--- a/drivers/iommu/mtk_iommu.c
+++ b/drivers/iommu/mtk_iommu.c
@@ -102,7 +102,7 @@ struct mtk_iommu_domain {
struct iommu_domain domain;
 };
 
-static struct iommu_ops mtk_iommu_ops;
+static const struct iommu_ops mtk_iommu_ops;
 
 static struct mtk_iommu_domain *to_mtk_domain(struct iommu_domain *dom)
 {
@@ -437,7 +437,7 @@ static int mtk_iommu_of_xlate(struct device *dev, struct 
of_phandle_args *args)
return iommu_fwspec_add_ids(dev, args->args, 1);
 }
 
-static struct iommu_ops mtk_iommu_ops = {
+static const struct iommu_ops mtk_iommu_ops = {
.domain_alloc   = mtk_iommu_domain_alloc,
.domain_free= mtk_iommu_domain_free,
.attach_dev = mtk_iommu_attach_device,
diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
index bc1efbf..40b231f 100644
--- a/drivers/iommu/mtk_iommu_v1.c
+++ b/drivers/iommu/mtk_iommu_v1.c
@@ -363,7 +363,7 @@ static phys_addr_t mtk_iommu_iova_to_phys(struct 
iommu_domain *domain,
return pa;
 }
 
-static struct iommu_ops mtk_iommu_ops;
+static const struct iommu_ops mtk_iommu_ops;
 
 /*
  * MTK generation one iommu HW only support one iommu domain, and all the 
client
@@ -536,7 +536,7 @@ static int mtk_iommu_hw_init(const struct mtk_iommu_data 
*data)
return 0;
 }
 
-static struct iommu_ops mtk_iommu_ops = {
+static const struct iommu_ops mtk_iommu_ops = {
.domain_alloc   = mtk_iommu_domain_alloc,
.domain_free= mtk_iommu_domain_free,
.attach_dev = mtk_iommu_attach_device,
-- 
1.9.1



[PATCH 1/3] iommu: exynos: constify iommu_ops

2017-08-28 Thread Arvind Yadav
iommu_ops are not supposed to change at runtime.
Functions 'iommu_device_set_ops' and 'bus_set_iommu' work with
const iommu_ops provided by <linux/iommu.h>. So mark the non-const
structs as const.

Signed-off-by: Arvind Yadav 
---
 drivers/iommu/exynos-iommu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
index 2395478..a540016 100644
--- a/drivers/iommu/exynos-iommu.c
+++ b/drivers/iommu/exynos-iommu.c
@@ -569,7 +569,7 @@ static void sysmmu_tlb_invalidate_entry(struct 
sysmmu_drvdata *data,
spin_unlock_irqrestore(>lock, flags);
 }
 
-static struct iommu_ops exynos_iommu_ops;
+static const struct iommu_ops exynos_iommu_ops;
 
 static int __init exynos_sysmmu_probe(struct platform_device *pdev)
 {
@@ -1323,7 +1323,7 @@ static int exynos_iommu_of_xlate(struct device *dev,
return 0;
 }
 
-static struct iommu_ops exynos_iommu_ops = {
+static const struct iommu_ops exynos_iommu_ops = {
.domain_alloc = exynos_iommu_domain_alloc,
.domain_free = exynos_iommu_domain_free,
.attach_dev = exynos_iommu_attach_device,
-- 
1.9.1



Re: [PATCH v2 1/2] iommu/mediatek: Fix a build fail of m4u_type

2017-08-28 Thread Joerg Roedel
On Thu, Aug 24, 2017 at 03:42:11PM +0800, Yong Wu wrote:
> The commit ("iommu/mediatek: Enlarge the validate PA range
> for 4GB mode") introduce the following build error:
> 
>drivers/iommu/mtk_iommu.c: In function 'mtk_iommu_hw_init':
> >> drivers/iommu/mtk_iommu.c:536:30: error: 'const struct mtk_iommu_data'
>  has no member named 'm4u_type'; did you mean 'm4u_dom'?
>  if (data->enable_4GB && data->m4u_type != M4U_MT8173) {
> 
> This patch fixes it, using "m4u_plat" instead of "m4u_type".
> 
> Reported-by: kernel test robot 
> Signed-off-by: Yong Wu 

Applied both, thanks.



Re: [PATCH] iommu: qcom: annotate PM functions as __maybe_unused

2017-08-28 Thread Joerg Roedel
On Wed, Aug 23, 2017 at 03:42:45PM +0200, Arnd Bergmann wrote:
> The qcom_iommu_disable_clocks() function is only called from PM
> code that is hidden in an #ifdef, causing a harmless warning without
> CONFIG_PM:
> 
> drivers/iommu/qcom_iommu.c:601:13: error: 'qcom_iommu_disable_clocks' defined 
> but not used [-Werror=unused-function]
>  static void qcom_iommu_disable_clocks(struct qcom_iommu_dev *qcom_iommu)
> drivers/iommu/qcom_iommu.c:581:12: error: 'qcom_iommu_enable_clocks' defined 
> but not used [-Werror=unused-function]
>  static int qcom_iommu_enable_clocks(struct qcom_iommu_dev *qcom_iommu)
> 
> Replacing that #ifdef with __maybe_unused annotations lets the compiler
> drop the functions silently instead.
> 
> Fixes: 0ae349a0f33f ("iommu/qcom: Add qcom_iommu")
> Signed-off-by: Arnd Bergmann 
> ---
>  drivers/iommu/qcom_iommu.c | 6 ++
>  1 file changed, 2 insertions(+), 4 deletions(-)

Applied, thanks.
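The fix discussed above replaces an #ifdef CONFIG_PM guard with the __maybe_unused attribute, so the compiler itself discards the PM helpers when nothing references them instead of the preprocessor hiding them. A userspace sketch of the idiom (the clock helpers here are simplified stand-ins, not the real qcom_iommu functions):

```c
#include <assert.h>

/* The kernel defines __maybe_unused roughly like this: */
#define __maybe_unused __attribute__((unused))

/* Annotated helpers: if the only caller (e.g. a SET_SYSTEM_SLEEP_PM_OPS
 * reference) compiles away in !CONFIG_PM builds, the compiler silently
 * drops these functions instead of warning about them. */
static __maybe_unused int enable_clocks(int *clk)
{
	*clk = 1;	/* stand-in for clk_prepare_enable() */
	return 0;
}

static __maybe_unused void disable_clocks(int *clk)
{
	*clk = 0;	/* stand-in for clk_disable_unprepare() */
}
```

Compared with #ifdef, this keeps the functions visible to the compiler in every configuration, so type errors are caught even in builds that do not use them.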



RE: [RFC] virtio-iommu version 0.4

2017-08-28 Thread Tian, Kevin
> From: Jean-Philippe Brucker [mailto:jean-philippe.bruc...@arm.com]
> Sent: Wednesday, August 23, 2017 6:01 PM
> 
> On 04/08/17 19:19, Jean-Philippe Brucker wrote:
> > Other extensions are in preparation. I won't detail them here because
> v0.4
> > already is a lot to digest, but in short, building on top of PROBE:
> >
> > * First, since the IOMMU is paravirtualized, the device can expose some
> >   properties of the physical topology to the guest, and let it allocate
> >   resources more efficiently. For example, when the virtio-iommu
> manages
> >   both physical and emulated endpoints, with different underlying
> IOMMUs,
> >   we now have a way to describe multiple page and block granularities,
> >   instead of forcing the guest to use the most restricted one for all
> >   endpoints. This will most likely be in v0.5.
> 
> In order to extend requests with PASIDs and (later) nested mode, I intend
> to rename "address_space" field to "domain", since it is a lot more
> precise about what the field is referring to and the current name would
> make these extensions confusing. Please find the rationale at [1].
> "ioasid_bits" will be "domain_bits" and "VIRTIO_IOMMU_F_IOASID_BITS"
> will
> be "VIRTIO_IOMMU_F_DOMAIN_BITS".
> 
> For those that had time to read this version, do you have other comments
> and suggestions about v0.4? Otherwise it is the only update I have for
> v0.5 (along with fine-grained address range and page size properties from
> the quoted text) and I will send it soon.
> 
> In particular, please tell me now if you see the need for other
> destructive changes like this one. They will be impossible to introduce
> once a driver or device is upstream.
> 
> Thanks,
> Jean
> 
> [1] https://www.spinics.net/lists/kvm/msg154573.html

Here are some comments:

1.1 Motivation

You describe I/O page fault handling as future work. It seems you considered
only recoverable faults (since "aka. PCI PRI" is used). What about other,
unrecoverable faults, e.g. what to do if a virtual DMA request doesn't find
a valid mapping? Even when there is no PRI support, we need some basic
form of fault reporting mechanism to indicate such errors to the guest.

2.6.8.2 Property RESV_MEM

I'm not immediately clear on when VIRTIO_IOMMU_PROBE_RESV_MEM_T_ABORT
should be explicitly reported. Is there any real example on a bare-metal IOMMU?
Usually reserved memory is reported to the CPU through other methods (e.g. e820
on the x86 platform). Of course MSI is a special case which is covered by the BYPASS
and MSI flags... If yes, maybe you can also include an example in the implementation
notes.

Another thing I want to ask your opinion on: whether there is value in
adding another subtype (MEM_T_IDENTITY), asking for identity mapping
in the address space. It's similar to the Reserved Memory Region Reporting
(RMRR) structure defined in VT-d, which indicates BIOS-allocated reserved
memory ranges that may be DMA targets and have to be identity-mapped
when DMA remapping is enabled. I'm not sure whether ARM has a similar
capability and whether there might be a general usage beyond VT-d. For
now the only usage in my mind is assigning a device with an RMRR associated
on VT-d (Intel GPU, or some USB controllers), where the RMRR info needs
to be propagated to the guest (since identity mapping also means reservation
of virtual address space).
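As an illustration of this MEM_T_IDENTITY idea, a hypothetical probe-property layout might look like the sketch below. The structure fields and subtype values here are assumptions made for illustration only; they are not part of the v0.4 specification.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical subtype values for the RESV_MEM probe property; the
 * IDENTITY subtype is the proposed addition, everything else is an
 * illustrative guess at the encoding, not the real spec. */
#define RESV_MEM_T_ABORT    0
#define RESV_MEM_T_BYPASS   1
#define RESV_MEM_T_IDENTITY 2	/* proposed: RMRR-like region */

struct resv_mem_prop {
	uint8_t  subtype;
	uint8_t  reserved[3];
	uint64_t addr;	/* start of the reserved region */
	uint64_t size;
};

/* A guest driver walking the probe output would identity-map regions
 * flagged IDENTITY when attaching the endpoint, mirroring how VT-d
 * treats RMRR ranges (virtual address == physical address, and the
 * range is carved out of the IOVA allocator). */
static int needs_identity_map(const struct resv_mem_prop *r)
{
	return r->subtype == RESV_MEM_T_IDENTITY;
}
```

The practical difference from BYPASS is that an IDENTITY region stays valid inside an attached address space, so the guest must both map it 1:1 and reserve the corresponding virtual range.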

2.6.8.2.3 Device Requirements: Property RESV_MEM

--citation start--
If an endpoint is attached to an address space, the device SHOULD leave 
any access targeting one of its VIRTIO_IOMMU_PROBE_RESV_MEM_T_BYPASS 
regions pass through untranslated. In other words, the device SHOULD 
handle such a region as if it was identity-mapped (virtual address equal to
physical address). If the endpoint is not attached to any address space, 
then the device MAY abort the transaction.
--citation end--

I have a question about the last sentence. From the definition of BYPASS, it's
orthogonal to whether there is an address space attached, so should
we still allow the "MAY abort" behavior?

Thanks
Kevin 


Re: [PATCH 12/12] dma-mapping: turn dma_cache_sync into a dma_map_ops method

2017-08-28 Thread Geert Uytterhoeven
Hi Christoph,

On Sun, Aug 27, 2017 at 6:10 PM, Christoph Hellwig  wrote:
> After we removed all the dead wood it turns out only two architectures
> actually implement dma_cache_sync as a no-op: mips and parisc.  Add

s/no-op/real op/

> a cache_sync method to struct dma_map_ops and implement it for the
> mips defualt DMA ops, and the parisc pa11 ops.

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- ge...@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds
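The change under discussion turns dma_cache_sync from a per-architecture free function into an optional method on the ops table, so only architectures that actually need it (mips default ops, parisc pa11) implement it. A simplified userspace sketch of that shape (the types and names are stand-ins, not the real kernel API):

```c
#include <assert.h>
#include <stddef.h>

enum dma_data_direction { DMA_TO_DEVICE, DMA_FROM_DEVICE };

/* Simplified ops table with the new optional method. */
struct dma_map_ops {
	void (*cache_sync)(void *vaddr, size_t size,
			   enum dma_data_direction dir);
};

static int sync_calls;

/* Stand-in for a real writeback/invalidate on a non-coherent arch. */
static void noncoherent_cache_sync(void *vaddr, size_t size,
				   enum dma_data_direction dir)
{
	(void)vaddr; (void)size; (void)dir;
	sync_calls++;
}

static const struct dma_map_ops noncoherent_ops = {
	.cache_sync = noncoherent_cache_sync,
};
static const struct dma_map_ops coherent_ops = { 0 };	/* no method needed */

/* Generic wrapper: dispatch only when the architecture provides
 * the method, so coherent architectures pay nothing. */
static void dma_cache_sync(const struct dma_map_ops *ops, void *vaddr,
			   size_t size, enum dma_data_direction dir)
{
	if (ops->cache_sync)
		ops->cache_sync(vaddr, size, dir);
}
```

This is the usual pattern for retiring arch hooks: the generic wrapper checks for a NULL method, and dead per-arch implementations can be deleted instead of stubbed out.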