On Wed, 2018-10-03 at 16:10 -0700, Alexander Duyck wrote:
> > -* Because 32-bit DMA masks are so common we expect every architecture
> > -* to be able to satisfy them - either by not supporting more physical
> > -* memory, or by providing a ZONE_DMA32. If neither
On Mon, 2018-10-08 at 09:03 +0200, Christoph Hellwig wrote:
> Ben, does this resolve your issues with the confusing zone selection?
The comment does make things a tad clearer yes :)
Thanks !
Cheers,
Ben.
> On Mon, Oct 01, 2018 at 01:10:16PM -0700, Christoph Hellwig wrote:
> > What we are doing
4.18-stable review patch. If anyone has any objections, please let me know.
--
From: Singh, Brijesh
commit b3e9b515b08e407ab3a026dc2e4d935c48d05f69 upstream.
Boris Ostrovsky reported a memory leak with device passthrough when SME
is active.
The VFIO driver uses iommu_iova_to_
4.14-stable review patch. If anyone has any objections, please let me know.
--
From: Singh, Brijesh
commit b3e9b515b08e407ab3a026dc2e4d935c48d05f69 upstream.
Boris Ostrovsky reported a memory leak with device passthrough when SME
is active.
The VFIO driver uses iommu_iova_to_
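For context, a minimal hedged sketch of the kind of fix the commit title describes (not the upstream diff; the PTE address mask and function name below are illustrative only): a physical address decoded from an AMD IOMMU page-table entry still carries the SME encryption bit (the "C-bit"), which must be cleared before the address is handed back to callers such as VFIO. __sme_clr() is the existing kernel helper for that.

#include <linux/mem_encrypt.h>
#include <linux/types.h>

static phys_addr_t example_pte_to_paddr(u64 pte)
{
	phys_addr_t paddr = pte & 0x000ffffffffff000ULL;	/* address bits of the PTE (illustrative mask) */

	return __sme_clr(paddr);				/* strip the SME C-bit */
}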
This is a note to let you know that I've just added the patch titled
iommu/amd: Clear memory encryption mask from physical address
to the 4.18-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is
This is a note to let you know that I've just added the patch titled
iommu/amd: Clear memory encryption mask from physical address
to the 4.14-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is
On Fri Oct 05 18, Raj, Ashok wrote:
On Thu, Oct 04, 2018 at 03:07:46PM -0700, Jacob Pan wrote:
On Thu, 4 Oct 2018 13:57:24 -0700
Jerry Snitselaar wrote:
> On Thu Oct 04 18, Joerg Roedel wrote:
> >Hi Jerry,
> >
> >thanks for the report.
> >
> >On Tue, Oct 02, 2018 at 10:25:29AM -0700, Jerry Sni
On Mon, Oct 08, 2018 at 10:24:19AM +0800, Lu Baolu wrote:
> Recent gcc warns about switching on an enumeration, but not having
> an explicit case statement for all members of the enumeration. To
> show the compiler this is intentional, we simply add a default case
> with nothing more than a break statement.
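A small illustration of the pattern being described (the enum and function here are made up for the example): an explicit "default: break;" tells gcc that leaving some enumerators unhandled is intentional, which silences the switch-on-enum warning without listing every member.

enum fault_reason { FAULT_READ, FAULT_WRITE, FAULT_EXEC, FAULT_UNKNOWN };

static const char *fault_reason_string(enum fault_reason reason)
{
	switch (reason) {
	case FAULT_READ:
		return "read";
	case FAULT_WRITE:
		return "write";
	default:
		break;		/* intentionally not handling the rest */
	}
	return "unknown";
}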
On Thu, Oct 04, 2018 at 05:25:47PM +0100, Biju Das wrote:
> Document RZ/G1N (R8A7744) SoC bindings.
>
> Signed-off-by: Biju Das
> Reviewed-by: Chris Paterson
Applied, thanks.
Hi Jean,
> From: Jean-Philippe Brucker [mailto:jean-philippe.bruc...@arm.com]
> Sent: Thursday, September 27, 2018 9:38 PM
> To: Liu, Yi L ; Joerg Roedel
> Subject: Re: [PATCH v3 03/10] iommu/sva: Manage process address spaces
>
> On 27/09/2018 04:22, Liu, Yi L wrote:
> >> For the "classic" vfio
Now that the generic swiotlb code supports non-coherent DMA we can switch
to it for arm64. For that we need to refactor the existing
alloc/free/mmap/pgprot helpers to be used as the architecture hooks,
and implement the standard arch_sync_dma_for_{device,cpu} hooks for
cache maintenance in the s
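A hedged sketch of what those architecture hooks look like, reusing arm64's existing __dma_map_area()/__dma_unmap_area() cache-maintenance helpers (a sketch under those assumptions, not the actual patch): the generic code calls them around DMA transfers for devices that are not cache coherent.

#include <linux/dma-noncoherent.h>
#include <asm/cacheflush.h>

/* Called before the device touches the buffer: write CPU caches back. */
void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
			      size_t size, enum dma_data_direction dir)
{
	__dma_map_area(phys_to_virt(paddr), size, dir);
}

/* Called before the CPU reads the buffer again: invalidate stale lines. */
void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
			   size_t size, enum dma_data_direction dir)
{
	__dma_unmap_area(phys_to_virt(paddr), size, dir);
}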
Handle architectures that are not cache coherent directly in the main
swiotlb code by calling arch_sync_dma_for_{device,cpu} in all the right
places from the various dma_map/unmap/sync methods when the device is
non-coherent.
Because swiotlb now uses dma_direct_alloc for the coherent allocation
th
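A hedged sketch of where those calls land on the map side (helper names such as swiotlb_bounce_page() and the error value are assumptions following the rest of this series, not the exact upstream code): after the target address or bounce buffer is chosen, a non-coherent device needs its CPU caches written back before the hardware starts the transfer.

static dma_addr_t example_swiotlb_map_page(struct device *dev, struct page *page,
					   unsigned long offset, size_t size,
					   enum dma_data_direction dir,
					   unsigned long attrs)
{
	phys_addr_t phys = page_to_phys(page) + offset;
	dma_addr_t dma_addr = phys_to_dma(dev, phys);

	/* Bounce through the swiotlb pool if the device can't reach the page. */
	if (unlikely(!dma_capable(dev, dma_addr, size))) {
		dma_addr = swiotlb_bounce_page(dev, &phys, size, dir, attrs);
		if (dma_addr == DIRECT_MAPPING_ERROR)	/* assumed error value */
			return dma_addr;
	}

	/* Non-coherent device: write CPU caches back before the DMA starts. */
	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		arch_sync_dma_for_device(dev, phys, size, dir);

	return dma_addr;
}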
Remove the somewhat useless map_single function, and replace it with a
swiotlb_bounce_page handler that handles everything related to actually
bouncing a page.
Signed-off-by: Christoph Hellwig
---
kernel/dma/swiotlb.c | 77 +---
1 file changed, 36 insertio
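A hedged sketch of what such a consolidated bounce handler could look like (the function name, the error constant, and the exact flow are assumptions built on the existing swiotlb_tbl_* APIs, not the upstream diff):

static dma_addr_t swiotlb_bounce_page(struct device *dev, phys_addr_t *phys,
				      size_t size, enum dma_data_direction dir,
				      unsigned long attrs)
{
	dma_addr_t dma_addr;

	/* Allocate a slot in the bounce pool and copy the data in if needed. */
	*phys = swiotlb_tbl_map_single(dev, __phys_to_dma(dev, io_tlb_start),
				       *phys, size, dir, attrs);
	if (*phys == SWIOTLB_MAP_ERROR)
		return DIRECT_MAPPING_ERROR;		/* assumed error value */

	/* The bounce slot itself must also be reachable by the device. */
	dma_addr = __phys_to_dma(dev, *phys);
	if (unlikely(!dma_capable(dev, dma_addr, size))) {
		swiotlb_tbl_unmap_single(dev, *phys, size, dir,
					 attrs | DMA_ATTR_SKIP_CPU_SYNC);
		return DIRECT_MAPPING_ERROR;
	}

	return dma_addr;
}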
All architectures that support swiotlb also have a zone that backs up
these less than full addressing allocations (usually ZONE_DMA32).
Because of that it is rather pointless to fall back to the global swiotlb
buffer if the normal dma direct allocation failed - the only thing this
will do is to ea
Signed-off-by: Christoph Hellwig
---
kernel/dma/swiotlb.c | 15 ++++-----------
1 file changed, 4 insertions(+), 11 deletions(-)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 11dbcd80b4a6..15335f3a1bf3 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -765,9 +765,9
No need to duplicate the code - map_sg is equivalent to map_page
for each page in the scatterlist.
Signed-off-by: Christoph Hellwig
---
kernel/dma/swiotlb.c | 34 ++++++++++++----------------------
1 file changed, 12 insertions(+), 22 deletions(-)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/
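A sketch of the equivalence the patch exploits (the error value is an assumption; the rest follows the then-current swiotlb API): map_sg simply walks the scatterlist and maps each entry through the map_page path, unwinding on failure.

static int example_swiotlb_map_sg(struct device *dev, struct scatterlist *sgl,
				  int nelems, enum dma_data_direction dir,
				  unsigned long attrs)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(sgl, sg, nelems, i) {
		sg->dma_address = swiotlb_map_page(dev, sg_page(sg), sg->offset,
						   sg->length, dir, attrs);
		if (sg->dma_address == DIRECT_MAPPING_ERROR)
			goto out_unmap;
		sg_dma_len(sg) = sg->length;
	}
	return nelems;

out_unmap:
	/* Undo what was mapped so far; returning 0 signals failure to the caller. */
	swiotlb_unmap_sg_attrs(dev, sgl, i, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC);
	return 0;
}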
Like all other dma mapping drivers just return an error code instead
of an actual memory buffer. The reason for the overflow buffer was
that at the time swiotlb was invented there was no way to check for
dma mapping errors, but this has long been fixed.
Signed-off-by: Christoph Hellwig
---
arch
Signed-off-by: Christoph Hellwig
---
include/linux/swiotlb.h | 1 -
kernel/dma/swiotlb.c   | 2 +-
2 files changed, 1 insertion(+), 2 deletions(-)
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 965be92c33b5..7ef541ce8f34 100644
--- a/include/linux/swiotlb.h
+++ b/include/l
All properly written drivers now have error handling in the
dma_map_single / dma_map_page callers. As swiotlb_tbl_map_single already
prints a useful warning when running out of swiotlb pool space, we can
also remove swiotlb_full entirely as it serves no purpose now.
Signed-off-by: Christoph Hellwi
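For reference, a minimal example of the caller-side error handling this relies on (device and buffer names are made up): drivers check dma_mapping_error() instead of depending on a magic overflow address.

static int example_send_buffer(struct device *dev, void *buf, size_t len)
{
	dma_addr_t addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, addr))
		return -ENOMEM;		/* e.g. swiotlb pool exhausted */

	/* ... kick off the transfer and wait for it to complete ... */

	dma_unmap_single(dev, addr, len, DMA_TO_DEVICE);
	return 0;
}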
Hi all,
this series starts with various swiotlb cleanups, then adds support for
non-cache coherent devices to the generic swiotlb support, and finally
switches arm64 to use the generic code.
Given that this series depends on patches in the dma-mapping tree, or
pending for it, I've also published a
This comment describes an aspect of the map_sg interface that isn't
even exploited by swiotlb.
Signed-off-by: Christoph Hellwig
---
kernel/dma/swiotlb.c | 6 ------
1 file changed, 6 deletions(-)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 4f8a6dbf0b60..9062b14bc7f4 100644
-
Thanks,
applied to the dma-mapping tree for 4.20.
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
On Thu, Oct 04, 2018 at 05:25:47PM +0100, Biju Das wrote:
> Document RZ/G1N (R8A7744) SoC bindings.
>
> Signed-off-by: Biju Das
> Reviewed-by: Chris Paterson
Reviewed-by: Simon Horman
I recently debugged a DMA mapping oops where a driver was trying to map
a buffer returned from request_firmware() with dma_map_single(). Memory
returned from request_firmware() is mapped into the vmalloc region and
this isn't a valid region to map with dma_map_single() per the DMA
documentation's "
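A hedged illustration of that pitfall and the usual workaround (names other than the core APIs are made up): fw->data lives in vmalloc space, so it has to be copied into DMA-able memory before it can be mapped with the streaming DMA API.

static int example_load_firmware(struct device *dev, const struct firmware *fw)
{
	void *buf;
	dma_addr_t dma;

	/* WRONG: fw->data is vmalloc memory, dma_map_single() can't take it. */
	/* dma = dma_map_single(dev, (void *)fw->data, fw->size, DMA_TO_DEVICE); */

	buf = kmemdup(fw->data, fw->size, GFP_KERNEL);	/* DMA-able copy */
	if (!buf)
		return -ENOMEM;

	dma = dma_map_single(dev, buf, fw->size, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma)) {
		kfree(buf);
		return -ENOMEM;
	}

	/* ... hand "dma" to the hardware, then unmap and free ... */
	dma_unmap_single(dev, dma, fw->size, DMA_TO_DEVICE);
	kfree(buf);
	return 0;
}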
Ben, does this resolve your issues with the confusing zone selection?
On Mon, Oct 01, 2018 at 01:10:16PM -0700, Christoph Hellwig wrote:
> What we are doing here isn't quite obvious, so add a comment explaining
> it.
>
> Signed-off-by: Christoph Hellwig
> ---
> kernel/dma/direct.c | 9 -
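For readers following along, a hedged and simplified sketch of the zone selection that comment documents (the real code in kernel/dma/direct.c also checks the CONFIG_ZONE_DMA/CONFIG_ZONE_DMA32 options; the function name and mask cut-offs here are illustrative):

static gfp_t example_dma_direct_gfp(struct device *dev, gfp_t gfp)
{
	if (dev->coherent_dma_mask <= DMA_BIT_MASK(24))
		gfp |= GFP_DMA;		/* tiny masks need ZONE_DMA */
	else if (dev->coherent_dma_mask <= DMA_BIT_MASK(32))
		gfp |= GFP_DMA32;	/* 32-bit masks are served by ZONE_DMA32 */

	return gfp;			/* 64-bit capable devices can use any zone */
}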
Any comments on these rather trivial patches?
On Mon, Oct 01, 2018 at 01:12:55PM -0700, Christoph Hellwig wrote:
> Hi all,
>
> this series sorts out how we deal with the nowarn flags in the dma
> mapping code. We still support __GFP_NOWARN for the legacy APIs that
> don't support passing the dma