[PATCH 1/1] iommu/vt-d: Cleanup dma_remapping.h header

2018-11-09 Thread Lu Baolu
Commit e61d98d8dad00 ("x64, x2apic/intr-remap: Intel vt-d, IOMMU code reorganization") moved dma_remapping.h from drivers/pci/ to its current place. It is entirely VT-d specific, but uses a generic name. This merges dma_remapping.h into include/linux/intel-iommu.h and removes dma_remapping.h as the
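
For drivers, the visible effect of such a merge is just an include swap; a minimal sketch (assuming the definitions themselves are unchanged, only their location moves):

    /* before: VT-d specific definitions behind a generically named header */
    #include <linux/dma_remapping.h>

    /* after: the same definitions are picked up from the VT-d header */
    #include <linux/intel-iommu.h>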

Re: [RFC] remove the ->mapping_error method from dma_map_ops

2018-11-09 Thread David Miller
From: Christoph Hellwig Date: Fri, 9 Nov 2018 09:46:30 +0100 > Error reporting for the dma_map_single and dma_map_page operations is > currently a mess. Both APIs directly return the dma_addr_t to be used for > the DMA, with a magic error escape that is specific to the instance and > checked

Re: [RFC] iommu/vt-d: Group and domain relationship

2018-11-09 Thread Jacob Pan
On Thu, 8 Nov 2018 11:30:04 + James Sewart wrote: > Hey, > > > On 8 Nov 2018, at 01:42, Lu Baolu wrote: > > > > Hi, > > > > On 11/8/18 1:55 AM, James Sewart wrote: > >> Hey, > >>> On 7 Nov 2018, at 02:10, Lu Baolu > >>> wrote: > >>> > >>> Hi, > >>> > >>> On 11/6/18 6:40 PM, James
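
For readers joining the thread, the group/domain relationship under discussion is the one exposed by the core IOMMU API: each device belongs to an iommu_group, and the group is attached to an iommu_domain that owns the page tables. A minimal sketch using the stock helpers (iommu_group_get(), iommu_group_id() and iommu_get_domain_for_dev() are the standard API; the function itself is only illustrative):

    #include <linux/device.h>
    #include <linux/iommu.h>

    static void show_group_and_domain(struct device *dev)
    {
            struct iommu_group *group = iommu_group_get(dev);
            struct iommu_domain *domain = iommu_get_domain_for_dev(dev);

            if (!group)
                    return;         /* device is not behind an IOMMU */

            /* All devices in this group share the attached domain. */
            dev_info(dev, "iommu group %d, domain %s\n",
                     iommu_group_id(group), domain ? "attached" : "none");
            iommu_group_put(group);
    }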

Re: [PATCH] iommu: arm-smmu: Set SCTLR.HUPCF bit

2018-11-09 Thread Rob Clark
On Mon, Oct 29, 2018 at 3:09 PM Will Deacon wrote: > > On Thu, Sep 27, 2018 at 06:46:07PM -0400, Rob Clark wrote: > > We seem to need to set either this or CFCFG (stall), otherwise gpu > > faults trigger problems with other in-flight transactions from the > > GPU causing CP errors, etc. > > > >
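
For context, HUPCF ("Hit Under Previous Context Fault") is a per-context-bank control bit in SMMU_CBn_SCTLR that keeps client transactions flowing after a context fault instead of terminating them. The change being debated is essentially a one-bit tweak of the SCTLR value the driver programs; a rough sketch only (the bit position follows the SMMUv2 architecture, the helper is not the driver's actual code):

    /* SMMUv2: SMMU_CBn_SCTLR.HUPCF is bit 8 */
    #define SCTLR_HUPCF             (1 << 8)

    static u32 sctlr_with_hupcf(u32 sctlr)
    {
            /* keep servicing in-flight client transactions after a fault */
            return sctlr | SCTLR_HUPCF;
    }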

Re: [PATCH 06/10] swiotlb: use swiotlb_map_page in swiotlb_map_sg_attrs

2018-11-09 Thread Robin Murphy
On 09/11/2018 07:49, Christoph Hellwig wrote: On Tue, Nov 06, 2018 at 05:27:14PM -0800, John Stultz wrote: But at that point if I just re-apply "swiotlb: use swiotlb_map_page in swiotlb_map_sg_attrs", I reproduce the hangs. Any suggestions for how to further debug what might be going wrong

Re: [PATCH 7/7] vfio/type1: Remove map_try_harder() code path

2018-11-09 Thread Alex Williamson
On Fri, 9 Nov 2018 12:07:12 +0100 Joerg Roedel wrote: > From: Joerg Roedel > > The AMD IOMMU driver can now map a huge-page where smaller > mappings existed before, so this code-path is no longer > triggered. > > Signed-off-by: Joerg Roedel > --- > drivers/vfio/vfio_iommu_type1.c | 33

Re: [PATCH 1/2] dma-mapping: remove ->mapping_error

2018-11-09 Thread Robin Murphy
On 09/11/2018 08:46, Christoph Hellwig wrote: [...] diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c index 1167ff0416cf..cfb422e17049 100644 --- a/drivers/iommu/amd_iommu.c +++ b/drivers/iommu/amd_iommu.c @@ -55,8 +55,6 @@ #include "amd_iommu_types.h" #include

Re: [PATCH RFC 1/3] mm: When CONFIG_ZONE_DMA32 is set, use DMA32 for SLAB_CACHE_DMA

2018-11-09 Thread Vlastimil Babka
On 11/9/18 12:57 PM, Nicolas Boichat wrote: > On Fri, Nov 9, 2018 at 6:43 PM Vlastimil Babka wrote: >> Also I'm probably missing the point of this all. In patch 3 you use >> __get_dma32_pages() thus __get_free_pages(__GFP_DMA32), which uses >> alloc_pages, thus the page allocator directly, and

Re: [PATCH RFC 1/3] mm: When CONFIG_ZONE_DMA32 is set, use DMA32 for SLAB_CACHE_DMA

2018-11-09 Thread Nicolas Boichat
On Fri, Nov 9, 2018 at 6:43 PM Vlastimil Babka wrote: > > On 11/9/18 9:24 AM, Nicolas Boichat wrote: > > Some callers, namely iommu/io-pgtable-arm-v7s, expect the physical > > address returned by kmem_cache_alloc with GFP_DMA parameter to be > > a 32-bit address. > > > > Instead of adding a

Re: [RFC] iommu/vt-d: Group and domain relationship

2018-11-09 Thread James Sewart via iommu
Hey Yi, > On 9 Nov 2018, at 06:54, Liu, Yi L wrote: > > Hi James, > > Regards to the relationship of iommu group and domain, the blog written by > Alex > may help you. The blog explained very well on how iommu group is determined > and > why. > >

[PATCH 0/7] iommu/amd: Always allow to map huge pages

2018-11-09 Thread Joerg Roedel
Hi, the AMD IOMMU driver had an issue for a long time where it didn't allow mapping a huge-page when smaller mappings existed in that address range before. The VFIO driver even had a workaround for that behavior. These patches fix the issue and remove the workaround from the VFIO driver. Please

[PATCH 7/7] vfio/type1: Remove map_try_harder() code path

2018-11-09 Thread Joerg Roedel
From: Joerg Roedel The AMD IOMMU driver can now map a huge-page where smaller mappings existed before, so this code-path is no longer triggered. Signed-off-by: Joerg Roedel --- drivers/vfio/vfio_iommu_type1.c | 33 ++--- 1 file changed, 2 insertions(+), 31
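
For readers without the VFIO source at hand: map_try_harder() was a fallback that, when a large iommu_map() failed, retried the same range one page at a time and unmapped again on error. A condensed sketch of the pattern being removed (simplified; see the driver for the exact version):

    static int map_try_harder(struct vfio_domain *domain, dma_addr_t iova,
                              unsigned long pfn, long npage, int prot)
    {
            long i;
            int ret = 0;

            /* Retry the failed large mapping page by page. */
            for (i = 0; i < npage; i++, pfn++, iova += PAGE_SIZE) {
                    ret = iommu_map(domain->domain, iova,
                                    (phys_addr_t)pfn << PAGE_SHIFT,
                                    PAGE_SIZE, prot | domain->prot);
                    if (ret)
                            break;
            }

            /* Roll back whatever was mapped before the failure. */
            for (; ret && i > 0; i--, iova -= PAGE_SIZE)
                    iommu_unmap(domain->domain, iova, PAGE_SIZE);

            return ret;
    }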

[PATCH 5/7] iommu/amd: Restart loop if cmpxchg64 succeeded in alloc_pte()

2018-11-09 Thread Joerg Roedel
From: Joerg Roedel This makes sure that __pte always contains the correct value when the pointer to the next page-table level is derived. Signed-off-by: Joerg Roedel --- drivers/iommu/amd_iommu.c | 11 +-- 1 file changed, 5 insertions(+), 6 deletions(-) diff --git
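
The underlying pattern is a lockless page-table install: read the PTE, allocate a new table page if the slot is empty, try to cmpxchg64() it in, and then restart so the next-level pointer is always derived from the value that actually ended up in the PTE. A simplified sketch of the idea (helper names, flag and mask constants are illustrative, not the driver's exact code):

    static u64 *descend_or_install(u64 *pte, gfp_t gfp)
    {
            u64 __pte, *page;

            for (;;) {
                    __pte = READ_ONCE(*pte);
                    if (IOMMU_PTE_PRESENT(__pte))
                            return phys_to_virt(__pte & PAGE_MASK);

                    /* Empty slot: try to install a freshly allocated table. */
                    page = (u64 *)get_zeroed_page(gfp);
                    if (!page)
                            return NULL;

                    if (cmpxchg64(pte, __pte,
                                  virt_to_phys(page) | PTE_TABLE_FLAGS) != __pte)
                            free_page((unsigned long)page);  /* lost the race */

                    /*
                     * Restart either way: re-read *pte so the next-level
                     * pointer is derived from the value that actually won.
                     */
            }
    }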

[PATCH 6/7] iommu/amd: Allow to upgrade page-size

2018-11-09 Thread Joerg Roedel
From: Joerg Roedel Before this patch the iommu_map_page() function failed when it tried to map a huge-page where smaller mappings existed before. With this change the page-table pages of the old mappings are torn down, so that the huge-page can be mapped. Signed-off-by: Joerg Roedel ---

[PATCH 1/7] iommu/amd: Collect page-table pages in freelist

2018-11-09 Thread Joerg Roedel
From: Joerg Roedel Collect all pages that belong to a page-table in a list and free them after the tree has been traversed. This allows safer page-table updates to be implemented in subsequent patches. Also move the functions for page-table freeing a bit upwards in the file so that they are usable
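
The mechanism is to thread the page-table pages through struct page and free them only once the walk (and any needed TLB flush) is complete. A rough sketch of the idea (helper names abbreviated, close in spirit to what the driver ends up doing):

    /* Queue a page-table page for later freeing instead of freeing it now. */
    static struct page *free_pt_page(unsigned long pt, struct page *freelist)
    {
            struct page *p = virt_to_page((void *)pt);

            p->freelist = freelist;         /* chain through struct page */
            return p;
    }

    /* After the tree has been traversed (and flushed), release the list. */
    static void free_page_list(struct page *freelist)
    {
            while (freelist != NULL) {
                    struct page *p = freelist;

                    freelist = p->freelist;
                    free_page((unsigned long)page_address(p));
            }
    }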

[PATCH 3/7] iommu/amd: Ignore page-mode 7 in free_sub_pt()

2018-11-09 Thread Joerg Roedel
From: Joerg Roedel Page-mode 7 is a special one, as it marks a final PTE that maps a page of an intermediary size. Signed-off-by: Joerg Roedel --- drivers/iommu/amd_iommu.c | 4 drivers/iommu/amd_iommu_types.h | 1 + 2 files changed, 5 insertions(+) diff --git

[PATCH 2/7] iommu/amd: Introduce free_sub_pt() function

2018-11-09 Thread Joerg Roedel
From: Joerg Roedel The function is a more generic version of free_pagetable() and will be used to free only specific sub-trees of a page-table. Signed-off-by: Joerg Roedel --- drivers/iommu/amd_iommu.c | 18 +- 1 file changed, 13 insertions(+), 5 deletions(-) diff --git
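
Conceptually, free_sub_pt() dispatches on the page-table mode encoded in a PTE and hands the pages below it to the freelist helpers from patch 1. A condensed sketch (the PAGE_MODE_* constants follow the driver's naming, the body is abbreviated to two levels):

    static struct page *free_sub_pt(unsigned long root, int mode,
                                    struct page *freelist)
    {
            switch (mode) {
            case PAGE_MODE_NONE:
                    break;                  /* nothing mapped below this PTE */
            case PAGE_MODE_1_LEVEL:
                    freelist = free_pt_page(root, freelist);
                    break;
            case PAGE_MODE_2_LEVEL:
                    freelist = free_pt_l2(root, freelist);
                    break;
            /* PAGE_MODE_3_LEVEL .. PAGE_MODE_6_LEVEL handled the same way */
            default:
                    BUG();
            }

            return freelist;
    }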

[PATCH 4/7] iommu/amd: Allow downgrading page-sizes in alloc_pte()

2018-11-09 Thread Joerg Roedel
From: Joerg Roedel Before this patch it was not possible to downgrade a mapping established with page-mode 7 to a mapping using smaller page-sizes, because the pte_level != level check prevented that. Treat page-mode 7 like a non-present mapping and allow it to be overwritten in alloc_pte().

Re: [PATCH RFC 1/3] mm: When CONFIG_ZONE_DMA32 is set, use DMA32 for SLAB_CACHE_DMA

2018-11-09 Thread Vlastimil Babka
On 11/9/18 9:24 AM, Nicolas Boichat wrote: > Some callers, namely iommu/io-pgtable-arm-v7s, expect the physical > address returned by kmem_cache_alloc with GFP_DMA parameter to be > a 32-bit address. > > Instead of adding a separate SLAB_CACHE_DMA32 (and then audit > all the calls to check if

Re: iommu/io-pgtable-arm-v7s: About pagetable 33bit PA

2018-11-09 Thread Nicolas Boichat
Hi Robin/Yong, On Fri, Nov 9, 2018 at 3:51 PM Yong Wu wrote: > > On Thu, 2018-11-08 at 13:49 +, Robin Murphy wrote: > > On 08/11/2018 07:31, Yong Wu wrote: > > > Hi Robin, > > > > > > After the commit ad67f5a6545f ("arm64: replace ZONE_DMA with > > > ZONE_DMA32"), we don't have ZONE_DMA in

[PATCH RFC 1/3] mm: When CONFIG_ZONE_DMA32 is set, use DMA32 for SLAB_CACHE_DMA

2018-11-09 Thread Nicolas Boichat
Some callers, namely iommu/io-pgtable-arm-v7s, expect the physical address returned by kmem_cache_alloc with the GFP_DMA parameter to be a 32-bit address. Instead of adding a separate SLAB_CACHE_DMA32 (and then auditing all the calls to check if they require memory from the DMA or DMA32 zone), we simply
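
In one sentence, the RFC's approach: when ZONE_DMA32 exists, back SLAB_CACHE_DMA caches with DMA32 memory instead of DMA memory, on the assumption that 32-bit addressability is what those callers need (one of the points debated in the replies). The shape of the change is roughly this (a sketch of the proposal's direction, with illustrative helper names, not the actual slab diff):

    /* slab internals: which zone does a SLAB_CACHE_DMA cache allocate from? */
    #ifdef CONFIG_ZONE_DMA32
    #define SLAB_CACHE_DMA_GFP      GFP_DMA32       /* 32-bit addressable */
    #else
    #define SLAB_CACHE_DMA_GFP      GFP_DMA         /* fall back to ZONE_DMA */
    #endif

    static inline gfp_t slab_dma_gfp(unsigned long cache_flags, gfp_t gfp)
    {
            if (cache_flags & SLAB_CACHE_DMA)
                    gfp |= SLAB_CACHE_DMA_GFP;
            return gfp;
    }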

[PATCH RFC 3/3] iommu/io-pgtable-arm-v7s: Request DMA32 memory, and improve debugging

2018-11-09 Thread Nicolas Boichat
For level 1 pages, use __get_dma32_pages to make sure the physical address is 32-bit. For level 2 pages, kmem_cache_zalloc with GFP_DMA has been modified in a previous patch to allocate from the DMA32 zone. Also, print an error when the physical address does not fit in 32-bit, to make debugging
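
Put together with the previous two patches, the allocation path for v7s tables then looks roughly like this (a sketch assuming the __get_dma32_pages() macro from patch 2; cache and size handling is simplified compared to the driver):

    static void *v7s_alloc_table_sketch(int lvl, size_t size, gfp_t gfp,
                                        struct kmem_cache *l2_cache)
    {
            void *table;

            if (lvl == 1)   /* level-1 table: a single 16KB allocation */
                    table = (void *)__get_dma32_pages(gfp, get_order(size));
            else            /* level-2 table: from the DMA(32)-backed cache */
                    table = kmem_cache_zalloc(l2_cache, gfp | GFP_DMA);

            /* v7s descriptors can only hold 32-bit next-level pointers. */
            if (table && WARN_ON(virt_to_phys(table) + size - 1 > U32_MAX)) {
                    if (lvl == 1)
                            free_pages((unsigned long)table, get_order(size));
                    else
                            kmem_cache_free(l2_cache, table);
                    table = NULL;
            }

            return table;
    }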

[PATCH RFC 2/3] include/linux/gfp.h: Add __get_dma32_pages macro

2018-11-09 Thread Nicolas Boichat
Some callers (e.g. iommu/io-pgtable-arm-v7s) require DMA32 memory when calling __get_dma_pages. Add a new macro for that purpose. Fixes: ad67f5a6545f ("arm64: replace ZONE_DMA with ZONE_DMA32") Signed-off-by: Nicolas Boichat --- include/linux/gfp.h | 2 ++ 1 file changed, 2 insertions(+) diff
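
The new macro presumably mirrors the existing __get_dma_pages() helper in include/linux/gfp.h, with GFP_DMA32 substituted; something along these lines (a sketch based on the existing macro, not the actual hunk):

    /* existing helper in include/linux/gfp.h */
    #define __get_dma_pages(gfp_mask, order) \
                    __get_free_pages((gfp_mask) | GFP_DMA, (order))

    /* proposed sibling: allocate from ZONE_DMA32 instead of ZONE_DMA */
    #define __get_dma32_pages(gfp_mask, order) \
                    __get_free_pages((gfp_mask) | GFP_DMA32, (order))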

[PATCH RFC 0/3] iommu/io-pgtable-arm-v7s: Use DMA32 zone for page tables

2018-11-09 Thread Nicolas Boichat
This is a follow-up to the discussion in [1], to make sure that the page tables allocated by iommu/io-pgtable-arm-v7s are contained within 32-bit physical address space. [1] https://lists.linuxfoundation.org/pipermail/iommu/2018-November/030876.html Nicolas Boichat (3): mm: When

[PATCH 2/2] arch: switch the default on ARCH_HAS_SG_CHAIN

2018-11-09 Thread Christoph Hellwig
These days architectures are mostly out of the business of dealing with struct scatterlist at all, unless they have architecture-specific iommu drivers. Replace the ARCH_HAS_SG_CHAIN symbol with an ARCH_NO_SG_CHAIN one that is only enabled for architectures with horrible legacy iommu drivers like alpha
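
The practical effect is a polarity flip: instead of every capable architecture selecting ARCH_HAS_SG_CHAIN, chaining becomes the default and only the handful of legacy architectures opt out. On the C side the guard inverts roughly like this (a sketch of the direction, not the exact hunks):

    /* include/linux/scatterlist.h, before: chaining is opt-in */
    #ifdef CONFIG_ARCH_HAS_SG_CHAIN
    /* sg chaining support enabled */
    #endif

    /* after: chaining is the default, legacy IOMMU architectures opt out */
    #ifndef CONFIG_ARCH_NO_SG_CHAIN
    /* sg chaining support enabled */
    #endif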

scatterlist arch cleanups

2018-11-09 Thread Christoph Hellwig
Remove leftovers, and switch the default on enabling SG chaining.

[PATCH 1/2] csky, h8300, riscv: remove leftovers

2018-11-09 Thread Christoph Hellwig
There has been no <asm-generic/scatterlist.h> for a long time, which also means there is no point in using it from asm-generic. Signed-off-by: Christoph Hellwig --- arch/csky/include/asm/Kbuild | 1 - arch/h8300/include/asm/Kbuild | 1 - arch/riscv/include/asm/Kbuild | 1 - 3 files changed, 3 deletions(-) diff --git

[RFC] remove the ->mapping_error method from dma_map_ops

2018-11-09 Thread Christoph Hellwig
Error reporting for the dma_map_single and dma_map_page operations is currently a mess. Both APIs directly return the dma_addr_t to be used for the DMA, with a magic error escape that is specific to the instance and checked by another method provided. This has a few downsides: - the error
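
To make the complaint concrete: every caller of dma_map_page()/dma_map_single() has to follow up with dma_mapping_error(), which today bounces through the instance-specific ops->mapping_error() to compare against whatever magic value that implementation uses. The direction the RFC argues for is a single well-known sentinel instead; a sketch (the DMA_MAPPING_ERROR definition shown is the one proposed in the follow-up patches, not yet the mainline API at the time of this thread):

    /* caller side today: mandatory second step through an indirect call */
    static int map_one(struct device *dev, struct page *page, size_t size,
                       dma_addr_t *out)
    {
            dma_addr_t addr = dma_map_page(dev, page, 0, size, DMA_TO_DEVICE);

            if (dma_mapping_error(dev, addr))       /* -> ops->mapping_error() */
                    return -ENOMEM;

            *out = addr;
            return 0;
    }

    /* proposed: one instance-independent error value, no indirection needed */
    #define DMA_MAPPING_ERROR       (~(dma_addr_t)0)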

[PATCH 1/2] dma-mapping: remove ->mapping_error

2018-11-09 Thread Christoph Hellwig
There is no need to perform an indirect function call to check if a DMA mapping resulted in an error, if we always return the last possible dma address as the error code. While that could in theory be a valid DMAable region, it would have to assume we want to support unaligned DMAs of size 1,
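
On the implementation side this means each dma_map_ops instance returns the common sentinel instead of its private magic value and drops its .mapping_error callback; the per-driver pattern looks roughly like this (do_hw_mapping() and the failure check are placeholders, not a real driver):

    static dma_addr_t foo_map_page(struct device *dev, struct page *page,
                                   unsigned long offset, size_t size,
                                   enum dma_data_direction dir,
                                   unsigned long attrs)
    {
            dma_addr_t dma = do_hw_mapping(dev, page, offset, size, dir);

            if (dma == 0)                           /* mapping failed */
                    return DMA_MAPPING_ERROR;       /* common sentinel */

            return dma;
    }
    /* ...and the .mapping_error member is removed from foo_dma_ops. */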

[PATCH 2/2] dma-mapping: return errors from dma_map_page and dma_map_attrs

2018-11-09 Thread Christoph Hellwig
The current DMA API map_page and map_single routines use a very bad API pattern that makes error checking hard. The calls to them are far too many and too complex to easily change that, but the relatively new _attrs variants that take an additional attributes argument only have a few callers and
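
The preview cuts off, but the direction is clear: because the _attrs variants have only a few callers, their signatures can be changed so failure is reported as a real return value. Purely as an illustration of that idea (this wrapper and its signature are hypothetical, built on the sentinel from patch 1, not the interface the patch actually introduces):

    static inline int dma_map_page_attrs_checked(struct device *dev,
                    struct page *page, size_t offset, size_t size,
                    enum dma_data_direction dir, unsigned long attrs,
                    dma_addr_t *dma_handle)
    {
            dma_addr_t addr = dma_map_page_attrs(dev, page, offset, size,
                                                 dir, attrs);

            if (addr == DMA_MAPPING_ERROR)
                    return -ENOMEM;

            *dma_handle = addr;
            return 0;
    }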