Commit e61d98d8dad00 ("x64, x2apic/intr-remap: Intel vt-d, IOMMU
code reorganization") moved dma_remapping.h from drivers/pci/ to
its current place. It is entirely VT-d specific but uses a generic
name. This merges dma_remapping.h with include/linux/intel-iommu.h
and removes dma_remapping.h as the
From: Christoph Hellwig
Date: Fri, 9 Nov 2018 09:46:30 +0100
> Error reporting for the dma_map_single and dma_map_page operations is
> currently a mess. Both APIs directly return the dma_addr_t to be used for
> the DMA, with a magic error escape that is specific to the instance and
> checked
On Thu, 8 Nov 2018 11:30:04 +
James Sewart wrote:
> Hey,
>
> > On 8 Nov 2018, at 01:42, Lu Baolu wrote:
> >
> > Hi,
> >
> > On 11/8/18 1:55 AM, James Sewart wrote:
> >> Hey,
> >>> On 7 Nov 2018, at 02:10, Lu Baolu
> >>> wrote:
> >>>
> >>> Hi,
> >>>
> >>> On 11/6/18 6:40 PM, James
On Mon, Oct 29, 2018 at 3:09 PM Will Deacon wrote:
>
> On Thu, Sep 27, 2018 at 06:46:07PM -0400, Rob Clark wrote:
> > We seem to need to set either this or CFCFG (stall), otherwise gpu
> > faults trigger problems with other in-flight transactions from the
> > GPU causing CP errors, etc.
> >
> >
On 09/11/2018 07:49, Christoph Hellwig wrote:
On Tue, Nov 06, 2018 at 05:27:14PM -0800, John Stultz wrote:
But at that point if I just re-apply "swiotlb: use swiotlb_map_page in
swiotlb_map_sg_attrs", I reproduce the hangs.
Any suggestions for how to further debug what might be going wrong
On Fri, 9 Nov 2018 12:07:12 +0100
Joerg Roedel wrote:
> From: Joerg Roedel
>
> The AMD IOMMU driver can now map a huge-page where smaller
> mappings existed before, so this code-path is no longer
> triggered.
>
> Signed-off-by: Joerg Roedel
> ---
> drivers/vfio/vfio_iommu_type1.c | 33
On 09/11/2018 08:46, Christoph Hellwig wrote:
[...]
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 1167ff0416cf..cfb422e17049 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -55,8 +55,6 @@
#include "amd_iommu_types.h"
#include
On 11/9/18 12:57 PM, Nicolas Boichat wrote:
> On Fri, Nov 9, 2018 at 6:43 PM Vlastimil Babka wrote:
>> Also I'm probably missing the point of this all. In patch 3 you use
>> __get_dma32_pages() thus __get_free_pages(__GFP_DMA32), which uses
>> alloc_pages, thus the page allocator directly, and
On Fri, Nov 9, 2018 at 6:43 PM Vlastimil Babka wrote:
>
> On 11/9/18 9:24 AM, Nicolas Boichat wrote:
> > Some callers, namely iommu/io-pgtable-arm-v7s, expect the physical
> > address returned by kmem_cache_alloc with GFP_DMA parameter to be
> > a 32-bit address.
> >
> > Instead of adding a
Hey Yi,
> On 9 Nov 2018, at 06:54, Liu, Yi L wrote:
>
> Hi James,
>
> Regarding the relationship of iommu group and domain, the blog written by
> Alex may help you. The blog explains very well how an iommu group is
> determined and why.
>
>
Hi,
the AMD IOMMU driver had an issue for a long time where it
did not allow mapping a huge-page when smaller mappings
already existed in that address range. The VFIO driver even
had a workaround for that behavior.
These patches fix the issue and remove the workaround from
the VFIO driver.
Please
From: Joerg Roedel
The AMD IOMMU driver can now map a huge-page where smaller
mappings existed before, so this code-path is no longer
triggered.
Signed-off-by: Joerg Roedel
---
drivers/vfio/vfio_iommu_type1.c | 33 ++---
1 file changed, 2 insertions(+), 31
From: Joerg Roedel
This makes sure that __pte always contains the correct value
when the pointer to the next page-table level is derived.
Signed-off-by: Joerg Roedel
---
drivers/iommu/amd_iommu.c | 11 +--
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git
From: Joerg Roedel
Before this patch the iommu_map_page() function failed when
it tried to map a huge-page where smaller mappings existed
before.
With this change the page-table pages of the old mappings
are torn down, so that the huge-page can be mapped.
Signed-off-by: Joerg Roedel
---
From: Joerg Roedel
Collect all pages that belong to a page-table in a list and
free them after the tree has been traversed. This makes it
possible to implement safer page-table updates in subsequent patches.
Also move the functions for page-table freeing a bit upwards
in the file so that they are usable
From: Joerg Roedel
Page-mode 7 is a special one: it marks a final PTE mapping
a page of intermediate size.
Signed-off-by: Joerg Roedel
---
drivers/iommu/amd_iommu.c | 4
drivers/iommu/amd_iommu_types.h | 1 +
2 files changed, 5 insertions(+)
diff --git
From: Joerg Roedel
The function is a more generic version of free_pagetable()
and will be used to free only specific sub-trees of a
page-table.
Signed-off-by: Joerg Roedel
---
drivers/iommu/amd_iommu.c | 18 +-
1 file changed, 13 insertions(+), 5 deletions(-)
diff --git
From: Joerg Roedel
Before this patch it was not possible to downgrade a
mapping established with page-mode 7 to a mapping using
smaller page-sizes, because the pte_level != level check
prevented that.
Treat page-mode 7 like a non-present mapping and allow it
to be overwritten in alloc_pte().
On 11/9/18 9:24 AM, Nicolas Boichat wrote:
> Some callers, namely iommu/io-pgtable-arm-v7s, expect the physical
> address returned by kmem_cache_alloc with GFP_DMA parameter to be
> a 32-bit address.
>
> Instead of adding a separate SLAB_CACHE_DMA32 (and then audit
> all the calls to check if
Hi Robin/Yong,
On Fri, Nov 9, 2018 at 3:51 PM Yong Wu wrote:
>
> On Thu, 2018-11-08 at 13:49 +, Robin Murphy wrote:
> > On 08/11/2018 07:31, Yong Wu wrote:
> > > Hi Robin,
> > >
> > > After the commit ad67f5a6545f ("arm64: replace ZONE_DMA with
> > > ZONE_DMA32"), we don't have ZONE_DMA in
Some callers, namely iommu/io-pgtable-arm-v7s, expect the physical
address returned by kmem_cache_alloc with GFP_DMA parameter to be
a 32-bit address.
Instead of adding a separate SLAB_CACHE_DMA32 (and then auditing
all the calls to check if they require memory from the DMA or DMA32
zone), we simply
For level-1 pages, use __get_dma32_pages to make sure the physical
memory address is 32-bit.
For level-2 pages, kmem_cache_zalloc with GFP_DMA was modified in a
previous patch to allocate from the DMA32 zone.
Also, print an error when the physical address does not fit in
32 bits, to make debugging
Some callers (e.g. iommu/io-pgtable-arm-v7s) require DMA32 memory
when calling __get_dma_pages. Add a new macro for that purpose.
Fixes: ad67f5a6545f ("arm64: replace ZONE_DMA with ZONE_DMA32")
Signed-off-by: Nicolas Boichat
---
include/linux/gfp.h | 2 ++
1 file changed, 2 insertions(+)
diff
This is a follow-up to the discussion in [1], to make sure that the page tables
allocated by iommu/io-pgtable-arm-v7s are contained within 32-bit physical
address space.
[1] https://lists.linuxfoundation.org/pipermail/iommu/2018-November/030876.html
Nicolas Boichat (3):
mm: When
These days architectures are mostly out of the business of dealing with
struct scatterlist at all, unless they have architecture specific iommu
drivers. Replace the ARCH_HAS_SG_CHAIN symbol with an ARCH_NO_SG_CHAIN
one only enabled for architectures with horrible legacy iommu drivers
like alpha
Remove leftovers, and switch the default to enabling SG chaining.
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
There has been no for a long time, which also means
there is no point in using it from asm-generic.
Signed-off-by: Christoph Hellwig
---
arch/csky/include/asm/Kbuild | 1 -
arch/h8300/include/asm/Kbuild | 1 -
arch/riscv/include/asm/Kbuild | 1 -
3 files changed, 3 deletions(-)
diff --git
Error reporting for the dma_map_single and dma_map_page operations is
currently a mess. Both APIs directly return the dma_addr_t to be used for
the DMA, with a magic error escape that is specific to the instance and
checked by another method provided. This has a few downsides:
- the error
There is no need to perform an indirect function call to check if a
DMA mapping resulted in an error, if we always return the last
possible dma address as the error code. While that could in theory
be a valid DMAable region, it would have to assume we want to
support unaligned DMAs of size 1,
The current DMA API map_page and map_single routines use a very bad API
pattern that makes error checking hard. The calls to them are far too
many and too complex to easily change that, but the relatively new _attrs
variants that take an additional attributes argument only have a few
callers and