On Fri, Oct 13, 2017 at 8:23 PM, Robin Murphy wrote:
> Arnd reports a build warning[1] thanks to me missing ipmmu-vmsa's second
> set of ops when converting io-pgtable-arm users to the new iommu_iotlb_*
> callbacks. Rather than just treat the symptom with a point fix, this
> seemed like a good exc
On Fri, Oct 06, 2017 at 03:04:48PM +0100, Shameer Kolothum wrote:
> IOMMU drivers can use this to implement their .get_resv_regions callback
> for HW MSI specific reservations (e.g. the ARM GICv3 ITS MSI region).
>
> Signed-off-by: Shameer Kolothum
> ---
> drivers/iommu/dma-iommu.c | 20 +
On Fri, Oct 06, 2017 at 03:04:50PM +0100, Shameer Kolothum wrote:
> The HiSilicon erratum 161010801 describes a limitation of the
> HiSilicon hip06/hip07 platforms that prevents SMMUv3 mappings
> for MSI transactions.
>
> PCIe controller on these platforms has to differentiate the MSI
> payload aga
On Fri, Oct 06, 2017 at 02:31:39PM +0100, Jean-Philippe Brucker wrote:
> On ARM systems, some platform devices behind an IOMMU may support stall
> and PASID features. Stall is the ability to recover from page faults and
> PASID offers multiple process address spaces to the device. Together they
> a
On Wed, Sep 06, 2017 at 11:07:35AM +0530, Vivek Gautam wrote:
> We don't want to touch the TLB when the SMMU is suspended, so
> defer TLB maintenance until the SMMU is resumed.
> On resume, we issue arm_smmu_device_reset() to restore the
> configuration and flush the TLBs.
>
> Signed-off-by: Vivek Gau
Hi Robin,
On Thu, Aug 31, 2017 at 02:44:24PM +0100, Robin Murphy wrote:
> Since Nate reported a reasonable performance boost from the out-of-line
> MSI polling in v1 [1], I've now implemented the equivalent for cons
> polling as well - that has been boot-tested on D05 with some trivial I/O
> and a
Hi Robin,
Some of my comments on patch 3 are addressed here, but I'm really struggling
to convince myself that this algorithm is correct. My preference would
be to leave the code as it is for SMMUs that don't implement MSIs, but
comments below anyway because it's an interesting idea.
On Thu, Aug
Hi Robin,
This mostly looks good. Just a few comments below.
On Thu, Aug 31, 2017 at 02:44:27PM +0100, Robin Murphy wrote:
> As an IRQ, the CMD_SYNC interrupt is not particularly useful, not least
> because we often need to wait for sync completion within someone else's
> IRQ handler anyway. Howe
The remaining difference between the ARM-specific and iommu-dma ops is
in the {add,remove}_device implementations, but even those have some
overlap and duplication. By stubbing out the few arm_iommu_*() calls,
we can get rid of the rest of the inline #ifdeffery to both simplify the
code and improve
We go through quite the merry dance in order to find masters behind the
same IPMMU instance, so that we can ensure they are grouped together.
None of which is really necessary, since the master's private data
already points to the particular IPMMU it is associated with, and that
IPMMU instance data
Now that the IPMMU instance pointer is the only thing remaining in the
private data structure, we no longer need the extra level of indirection
and can simply stash that directly in the fwspec.
Signed-off-by: Robin Murphy
---
drivers/iommu/ipmmu-vmsa.c | 36
We have two implementations for ipmmu_ops->alloc depending on
CONFIG_IOMMU_DMA, the difference being whether they accept the
IOMMU_DOMAIN_DMA type or not. However, iommu_dma_get_cookie() is
guaranteed to return an error when !CONFIG_IOMMU_DMA, so if
ipmmu_domain_alloc_dma() was actually checking an
Arnd reports a build warning[1] thanks to me missing ipmmu-vmsa's second
set of ops when converting io-pgtable-arm users to the new iommu_iotlb_*
callbacks. Rather than just treat the symptom with a point fix, this
seemed like a good excuse to clean up the messy #ifdeffery and
duplication in the dr
On Fri, 13 Oct 2017 16:40:13 +0200
Joerg Roedel wrote:
> From: Joerg Roedel
>
> After every unmap VFIO unpins the pages that were mapped by
> the IOMMU. This requires an IOTLB flush after every unmap
> and puts a high load on the IOMMU hardware and the device
> TLBs.
>
> Gather up to 32 range
Hi Linus,
The following changes since commit 8a5776a5f49812d29fe4b2d0a2d71675c3facf3f:
Linux 4.14-rc4 (2017-10-08 20:53:29 -0700)
are available in the git repository at:
git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
tags/iommu-fixes-v4.14-rc4
for you to fetch changes up to
Hi Joerg,
On 20/09/17 15:13, Liviu Dudau wrote:
> If the IPMMU driver is compiled into the kernel, it will replace the
> platform bus IOMMU ops when ipmmu_init() runs, regardless of whether
> any IPMMU hardware is present. This screws up systems that just want
> to build a generic
From: Joerg Roedel
The function only sends the flush command to the IOMMU(s),
but does not wait for its completion when it returns. Fix
that.
Fixes: 601367d76bd1 ('x86/amd-iommu: Remove iommu_flush_domain function')
Cc: sta...@vger.kernel.org # >= 2.6.33
Signed-off-by: Joerg Roedel
---
drivers
From: Joerg Roedel
Make use of the new IOTLB flush-interface in the IOMMU-API.
We don't implement the iotlb_range_add() call-back for now,
as this will put too much pressure on the command buffer.
Instead, we do a full TLB flush in the iotlb_sync()
call-back.
Signed-off-by: Joerg Roedel
---
dr
From: Joerg Roedel
Switch from using iommu_unmap() to iommu_unmap_fast() and add
the necessary calls to the IOTLB invalidation routines.
Signed-off-by: Joerg Roedel
---
drivers/vfio/vfio_iommu_type1.c | 24 ++--
1 file changed, 18 insertions(+), 6 deletions(-)
diff --git a/
From: Joerg Roedel
After every unmap VFIO unpins the pages that were mapped by
the IOMMU. This requires an IOTLB flush after every unmap
and puts a high load on the IOMMU hardware and the device
TLBs.
Gather up to 32 ranges to flush and unpin and do the IOTLB
flush once for all these ranges. Th
Hi,
these patches implement the new IOTLB flush interface in the
AMD IOMMU driver. But for it to take effect, changes in VFIO
are also necessary, because VFIO unpins the pages after
every successful iommu_unmap() call. That forces an IOTLB
flush after each unmap, so no flushes would be saved.
So I implemented
On Wed, Oct 04, 2017 at 02:33:08PM +0200, Geert Uytterhoeven wrote:
> Use the preferred generic node name in the example.
>
> Signed-off-by: Geert Uytterhoeven
> ---
> Documentation/devicetree/bindings/iommu/renesas,ipmmu-vmsa.txt | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
Applied.
On 13.10.2017 11:38, Joerg Roedel wrote:
> On Thu, Oct 12, 2017 at 05:27:26PM +0300, Dmitry Osipenko wrote:
>> I'm not talking about any specific bug, but in general if allocator re-maps
>> already mapped region or unmaps the wrong-and-used region. I had those
>> bug-cases
>> during development
Hi Marek,
On 13/10/17 09:15, Marek Szyprowski wrote:
> Hi Robin,
>
> On 2017-10-11 15:56, Robin Murphy wrote:
>> xHCI requires that data buffers do not cross 64KB boundaries (and are
>> thus at most 64KB long as well) - whilst xhci_queue_{bulk,isoc}_tx()
>> already split their input buffers into
On Thu, Oct 12, 2017 at 05:27:26PM +0300, Dmitry Osipenko wrote:
> I'm not talking about any specific bug, but in general if allocator re-maps
> already mapped region or unmaps the wrong-and-used region. I had those
> bug-cases
> during development of the 'scattered' graphics allocations for Te
Hi Robin,
On 2017-10-11 15:56, Robin Murphy wrote:
> xHCI requires that data buffers do not cross 64KB boundaries (and are
> thus at most 64KB long as well) - whilst xhci_queue_{bulk,isoc}_tx()
> already split their input buffers into individual TRBs as necessary,
> it's still a good idea to advertise t