On 2021/10/20 0:37, Sven Peter via iommu wrote:
The iova allocator is capable of handling any granularity which is a power
of two. Remove the much stronger condition that the granularity must be
smaller than or equal to the CPU page size from a BUG_ON there.
Instead, check this condition during __iommu_attach_device and fail
gracefully.
On Tue, 19 Oct 2021, Christoph Hellwig wrote:
> Split the code for DMA_ATTR_NO_KERNEL_MAPPING allocations into a separate
> helper to make dma_direct_alloc a little more readable.
>
> Signed-off-by: Christoph Hellwig
Acked-by: David Rientjes
(I think my name got mangled in your To: field on
On Tue, 19 Oct 2021, Christoph Hellwig wrote:
> We must never let unencrypted memory go back into the general page pool.
> So if we fail to set it back to encrypted when freeing DMA memory, leak
> the memory instead and warn the user.
>
> Signed-off-by: Christoph Hellwig
> ---
> kernel/dma/direct.c
On Tue, 19 Oct 2021, Christoph Hellwig wrote:
> Factor out helpers that make dealing with memory encryption a little less
> cumbersome.
>
> Signed-off-by: Christoph Hellwig
> ---
> kernel/dma/direct.c | 55 +
> 1 file changed, 25 insertions(+), 30 deletions(-)
On 20/10/2021 01.22, Sven Peter wrote:
DART has an additional global register to control which streams are
isolated. This register is a bit redundant since DART_TCR can already
be used to control isolation and is usually initialized to DART_STREAM_ALL
by the time we get control. Some DARTs (namely the one used for the audio
controller)
On Tue, Oct 19, 2021 at 10:11:34AM -0700, Jacob Pan wrote:
> Hi Jason,
>
> On Tue, 19 Oct 2021 13:57:47 -0300, Jason Gunthorpe wrote:
>
> > On Tue, Oct 19, 2021 at 09:57:34AM -0700, Jacob Pan wrote:
> > > Hi Jason,
> > >
> > > On Fri, 15 Oct 2021 08:18:07 -0300, Jason Gunthorpe
> > > wrote:
Hi Jason,
On Tue, 19 Oct 2021 13:57:47 -0300, Jason Gunthorpe wrote:
> On Tue, Oct 19, 2021 at 09:57:34AM -0700, Jacob Pan wrote:
> > Hi Jason,
> >
> > On Fri, 15 Oct 2021 08:18:07 -0300, Jason Gunthorpe
> > wrote:
> > > On Fri, Oct 15, 2021 at 09:18:06AM +, Liu, Yi L wrote:
> > >
> >
On Tue, Oct 19, 2021 at 09:57:34AM -0700, Jacob Pan wrote:
> Hi Jason,
>
> On Fri, 15 Oct 2021 08:18:07 -0300, Jason Gunthorpe wrote:
>
> > On Fri, Oct 15, 2021 at 09:18:06AM +, Liu, Yi L wrote:
> >
> > > > Acquire from the xarray is
> > > >    rcu_lock()
> > > >    ioas = xa_load()
> >
Hi Jason,
On Fri, 15 Oct 2021 08:18:07 -0300, Jason Gunthorpe wrote:
> On Fri, Oct 15, 2021 at 09:18:06AM +, Liu, Yi L wrote:
>
> > > Acquire from the xarray is
> > >    rcu_lock()
> > >    ioas = xa_load()
> > >    if (ioas)
> > >       if (down_read_trylock(&ioas->destroying_lock))
> >
>
Now that the dma-iommu API supports IOMMU granules which are larger than
the CPU page size and that the kernel no longer runs into a BUG_ON when
devices are attached to a domain with such a granule, there's no need to
force bypass mode anymore.
Signed-off-by: Sven Peter
---
__IOMMU_DOMAIN_LP (large pages) indicates that a domain can handle
conditions where PAGE_SIZE might be smaller than the IOMMU page size.
Always allow attaching trusted devices to such domains and set the flag for
IOMMU_DOMAIN_DMA, which can now handle these situations.
Note that untrusted devices
The iova allocator is capable of handling any granularity which is a power
of two. Remove the much stronger condition that the granularity must be
smaller than or equal to the CPU page size from a BUG_ON there.
Instead, check this condition during __iommu_attach_device and fail
gracefully.
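The split this patch describes can be sketched in a few lines: the iova
allocator keeps only the power-of-two requirement, while the granule vs.
PAGE_SIZE comparison becomes a recoverable error at attach time. The
constants and the 'large_pages' flag (standing in for the series'
__IOMMU_DOMAIN_LP capability) are illustrative, not the actual kernel code.

```c
#include <stdbool.h>

#define PAGE_SIZE 4096UL	/* illustrative: 4K CPU pages */

static bool is_power_of_2(unsigned long n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

/* The allocator's remaining requirement: any power-of-two granule works. */
static bool iova_granule_ok(unsigned long granule)
{
	return is_power_of_2(granule);
}

/*
 * The check that moves out of the BUG_ON: attaching fails gracefully when
 * the granule exceeds the CPU page size and the domain cannot handle that.
 */
static int attach_check(unsigned long granule, bool large_pages)
{
	if (!iova_granule_ok(granule))
		return -1;
	if (granule > PAGE_SIZE && !large_pages)
		return -1;	/* was: BUG_ON(granule > PAGE_SIZE) */
	return 0;
}
```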
Noncontiguous allocations must be made up of individual blocks
in a way that allows those blocks to be mapped contiguously in IOVA space.
For IOMMU page sizes larger than the CPU page size this can be done
by allocating all individual blocks from pools with
order >= get_order(iovad->granule). Some
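The "order >= get_order(iovad->granule)" constraint above can be modeled
with a userspace stand-in for the kernel's get_order(): the minimum
allocation order is the one whose block size first reaches the IOMMU
granule, so every block is granule-sized (or larger) and can be mapped
contiguously in IOVA space. A 4K CPU page size is assumed here for
illustration.

```c
#define PAGE_SIZE 4096UL	/* illustrative: 4K CPU pages */

/*
 * Userspace model of the kernel's get_order(): the smallest 'order' such
 * that (PAGE_SIZE << order) >= size.
 */
static unsigned int get_order_model(unsigned long size)
{
	unsigned int order = 0;

	while ((PAGE_SIZE << order) < size)
		order++;
	return order;
}

/*
 * Minimum block order for noncontiguous allocations so each block covers
 * at least one full IOMMU granule.
 */
static unsigned int min_block_order(unsigned long granule)
{
	return get_order_model(granule);
}
```

For example, with a 16K granule on a 4K-page CPU, blocks must come from
order-2 (or higher) pools.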
Add support to iommu_dma_map_sg's impedance matching to also align
sg_lists correctly when the IOMMU granule is larger than PAGE_SIZE.
Co-developed-by: Robin Murphy
Signed-off-by: Robin Murphy
Signed-off-by: Sven Peter
---
drivers/iommu/dma-iommu.c | 25 -
1 file
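The heart of the impedance matching described above is that segment
boundaries get rounded up to the IOMMU granule rather than to PAGE_SIZE.
A minimal sketch of that rounding (the helper name is made up for
illustration; the granule is assumed to be a power of two):

```c
/* Round a length or offset up to the IOMMU granule (a power of two). */
static unsigned long iova_align_to(unsigned long granule, unsigned long x)
{
	return (x + granule - 1) & ~(granule - 1);
}
```

With a 16K granule, a 5000-byte segment occupies one full 16K slot in IOVA
space, where a 4K-granule layout would have used two 4K pages.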
While this function *probably* works correctly without any changes for
granule > PAGE_SIZE, I don't have any code to actually test it and cannot
reason about how the function is supposed to work.
Disable it instead until we run into a use case where it's required.
Signed-off-by: Sven Peter
---
Hi,
RFC:
https://lore.kernel.org/linux-iommu/20210806155523.50429-1-s...@svenpeter.dev/
v2:
https://lore.kernel.org/linux-iommu/20210828153642.19396-1-s...@svenpeter.dev/
Time to revive this series:
v2 -> v3:
- Dropped support for untrusted devices since swiotlb currently does not
DART has an additional global register to control which streams are
isolated. This register is a bit redundant since DART_TCR can already
be used to control isolation and is usually initialized to DART_STREAM_ALL
by the time we get control. Some DARTs (namely the one used for the audio
controller)
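The global stream-select register described above can be pictured as one
enable bit per stream, with DART_STREAM_ALL (from the quoted text) covering
all 16 of them. The bit layout below is an assumption for illustration, not
the documented hardware format.

```c
#include <stdint.h>

#define DART_MAX_STREAMS	16
#define DART_STREAM_ALL		0xffffu	/* all 16 streams enabled */

/*
 * Model of the stream-select register: setting bit 'sid' enables
 * translation for that stream. Layout is assumed, not documented.
 */
static uint32_t dart_stream_enable(uint32_t reg, unsigned int sid)
{
	return reg | (1u << sid);
}
```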
On Tue, Oct 19, 2021 at 09:22:13AM +0800, Jason Wang wrote:
> > > So I think clarifying system reset should address your questions.
> > > I believe we should leave bypass sticky across device reset, so a FW->OS
> > > transition, where the OS resets the device, does not open a vulnerability
> > >
Hi Joerg,
Please pull this tiny batch of Arm SMMU updates for 5.16. It's dominated
by compatible string additions for Qualcomm SMMUv2 implementations, but
there's a bit of cleanup on the SMMUv3 command-submission side as well.
Cheers,
Will
--->8
The following changes since commit
Split the code for DMA_ATTR_NO_KERNEL_MAPPING allocations into a separate
helper to make dma_direct_alloc a little more readable.
Signed-off-by: Christoph Hellwig
---
kernel/dma/direct.c | 31 ---
1 file changed, 20 insertions(+), 11 deletions(-)
diff --git
Add a local variable to track if we want to remap the returned address
using vmap and use that to simplify the code flow.
Signed-off-by: Christoph Hellwig
---
kernel/dma/direct.c | 44 +++-
1 file changed, 23 insertions(+), 21 deletions(-)
diff --git
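The simplification this patch describes, computing once whether the buffer
needs a vmap()ed alias and then branching on a single local flag, can be
sketched as below. The two conditions feeding the flag are illustrative
stand-ins, not the actual predicates in dma_direct_alloc.

```c
#include <stdbool.h>

/*
 * Sketch of the flow change: fold the scattered "do we need to remap?"
 * checks into one local variable up front, then test only that flag.
 * The inputs here are placeholders for the real conditions.
 */
static bool needs_remap(bool force_uncached, bool must_be_contiguous)
{
	return force_uncached || must_be_contiguous;
}
```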
We must never let unencrypted memory go back into the general page pool.
So if we fail to set it back to encrypted when freeing DMA memory, leak
the memory instead and warn the user.
Signed-off-by: Christoph Hellwig
---
kernel/dma/direct.c | 17 +
1 file changed, 13 insertions(+), 4 deletions(-)
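The leak-rather-than-free decision this patch describes can be modeled in
userspace with stubs: the real code calls set_memory_encrypted() and, on
failure, warns and deliberately never returns the pages to the allocator.
All names below are stand-ins for illustration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

/* Stub standing in for set_memory_encrypted(); nonzero means failure. */
static int set_memory_encrypted_stub(void *addr, bool fail)
{
	(void)addr;
	return fail ? -1 : 0;
}

/*
 * Returns true if the buffer was freed, false if it was deliberately
 * leaked because it could not be flipped back to encrypted.
 */
static bool dma_free_model(void *addr, bool reencrypt_fails)
{
	if (set_memory_encrypted_stub(addr, reencrypt_fails)) {
		fprintf(stderr, "leaking pages that failed re-encryption\n");
		return false;	/* never return unencrypted pages to the pool */
	}
	free(addr);
	return true;
}
```

Leaking is the safe failure mode here: a bounded loss of memory is
preferable to handing still-unencrypted pages to an unsuspecting later
allocation.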
Factor out helpers that make dealing with memory encryption a little less
cumbersome.
Signed-off-by: Christoph Hellwig
---
kernel/dma/direct.c | 55 +
1 file changed, 25 insertions(+), 30 deletions(-)
diff --git a/kernel/dma/direct.c
Hi all,
Linus complained about the complex flow in dma_direct_alloc, so this
tries to simplify it a bit, and while I was at it I also made sure that
unencrypted pages never leak back into the page allocator.
Diffstat
direct.c | 133