On 2020/4/14 21:15, Joerg Roedel wrote:
From: Joerg Roedel
Add call-backs to 'struct iommu_ops' as an alternative to the
add_device() and remove_device() call-backs, which will be removed when
all drivers are converted.
The new call-backs will not set up IOMMU groups and domains anymore,
so als
On Wed, Apr 15, 2020 at 12:26:04PM +1000, Alexey Kardashevskiy wrote:
> Maybe this is correct and allowed (no idea), but removing exported
> symbols at least deserves a mention in the commit log, doesn't it?
>
> The rest of the series is fine and works. Thanks,
Maybe I can throw in a line, but t
On 2020/4/14 21:15, Joerg Roedel wrote:
From: Joerg Roedel
Add a check to the bus_iommu_probe() call-path to make sure it ignores
devices which have already been successfully probed. Then export the
bus_iommu_probe() function so it can be used by IOMMU drivers.
Signed-off-by: Joerg Roedel
---
When a PASID is stopped or terminated, there can be pending
PRQs (requests that haven't received responses) in remapping
hardware. This adds the interface to drain page requests and
call it when a PASID is terminated.
Signed-off-by: Jacob Pan
Signed-off-by: Liu Yi L
Signed-off-by: Lu Baolu
---
Export the invalidation queue internals of each IOMMU device through
debugfs.
Example of such dump on a Skylake machine:
$ sudo cat /sys/kernel/debug/iommu/intel/invalidation_queue
Invalidation queue on IOMMU: dmar1
Base: 0x1672c9000  Head: 80  Tail: 80
Index qw0
Currently, the page request interrupt thread handles the page
requests in the queue in this way:
- Clear the PPR bit to ensure a new interrupt can come in;
- Read and record the head and tail registers;
- Handle all descriptors between head and tail;
- Write tail to head register.
This might cause so
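For reference, a minimal sketch of the current flow described above, in the style of the VT-d page-request thread; the register names and accessors (DMAR_PRS_REG, DMAR_PQH_REG, DMAR_PQT_REG, dmar_readq/dmar_writeq) come from the existing driver, while handle_single_prq() is a hypothetical stand-in for the per-descriptor handling:

static irqreturn_t prq_event_thread_sketch(int irq, void *d)
{
	struct intel_iommu *iommu = d;
	u64 head, tail;

	/* 1) Clear the PPR bit so that a new interrupt can be raised. */
	writel(DMA_PRS_PPR, iommu->reg + DMAR_PRS_REG);

	/* 2) Read and record the head and tail registers once. */
	head = dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK;
	tail = dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK;

	/* 3) Handle all descriptors between head and tail. */
	while (head != tail) {
		struct page_req_dsc *req = &iommu->prq[head / sizeof(*req)];

		handle_single_prq(iommu, req);	/* hypothetical helper */
		head = (head + sizeof(*req)) & PRQ_RING_MASK;
	}

	/* 4) Write the recorded tail to the head register to mark all the
	 *    handled descriptors as consumed.
	 */
	dmar_writeq(iommu->reg + DMAR_PQH_REG, tail);

	return IRQ_HANDLED;
}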
Extend qi_submit_sync() function to support multiple descriptors.
Signed-off-by: Jacob Pan
Signed-off-by: Lu Baolu
---
drivers/iommu/dmar.c | 39 +++--
include/linux/intel-iommu.h | 1 +
2 files changed, 25 insertions(+), 15 deletions(-)
diff --git a/dri
Currently, qi_submit_sync() supports a single invalidation descriptor
per submission and appends a wait descriptor after each submission
to poll for hardware completion. This patch adjusts the parameters
of this function so that multiple descriptors per submission can
be supported.
Signed-off-by: Jacob Pan
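As an illustration of the new calling convention, a hedged sketch of how a caller might batch two descriptors in one submission; the build_*() helpers are hypothetical stand-ins for filling in the descriptor fields, and the exact parameter order and options of the merged series may differ:

	struct qi_desc desc[2] = {};

	/* e.g. a PASID-cache invalidation followed by a PASID-based
	 * IOTLB invalidation for the same (did, pasid) pair.
	 */
	build_pasid_cache_inval(&desc[0], did, pasid);	/* hypothetical */
	build_pasid_iotlb_inval(&desc[1], did, pasid);	/* hypothetical */

	/* Both descriptors go down in a single submission; a single wait
	 * descriptor is appended and polled for completion.
	 */
	qi_submit_sync(iommu, desc, ARRAY_SIZE(desc), 0);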
Move the software processing of page request descriptors from
prq_event_thread() into a separate function. No functional
changes.
Signed-off-by: Lu Baolu
---
drivers/iommu/intel-svm.c | 256 --
1 file changed, 135 insertions(+), 121 deletions(-)
diff -
When a PASID is stopped or terminated, there can be pending PRQs
(requests that haven't received responses) in the software and
remapping hardware. The pending page requests must be drained
so that the PASID can be reused. The register-level interface
for page request draining is defined in 7.11
The IOTLB flush is already included in the PASID tear-down and the
page request drain process. There is no need to flush again.
Signed-off-by: Jacob Pan
Signed-off-by: Lu Baolu
---
drivers/iommu/intel-svm.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu
On 14/04/2020 22:25, Christoph Hellwig wrote:
> For a long time the DMA API has been implemented inline in dma-mapping.h,
> but the function bodies can be quite large. Move them all out of line.
>
> Signed-off-by: Christoph Hellwig
> ---
> include/linux/dma-direct.h | 58 +
> inclu
Hi Robin,
> From: Robin Murphy, Sent: Wednesday, April 15, 2020 2:16 AM
>
> On 2020-04-13 11:25 am, Yoshihiro Shimoda wrote:
> [...]
> > -Each bus master connected to an IPMMU must reference the IPMMU in its
> > device
> > -node with the following property:
> > -
> > - - iommus: A reference to
> From: Alex Williamson
> Sent: Wednesday, April 15, 2020 8:36 AM
>
> On Tue, 14 Apr 2020 23:57:33 +
> "Tian, Kevin" wrote:
>
> > > From: Alex Williamson
> > > Sent: Tuesday, April 14, 2020 11:24 PM
> > >
> > > On Tue, 14 Apr 2020 03:42:42 +
> > > "Tian, Kevin" wrote:
> > >
> > > > >
On Tue, 14 Apr 2020 23:57:33 +
"Tian, Kevin" wrote:
> > From: Alex Williamson
> > Sent: Tuesday, April 14, 2020 11:24 PM
> >
> > On Tue, 14 Apr 2020 03:42:42 +
> > "Tian, Kevin" wrote:
> >
> > > > From: Alex Williamson
> > > > Sent: Tuesday, April 14, 2020 11:29 AM
> > > >
> > > >
When a device requires unencrypted memory and the context does not allow
blocking, memory must be returned from the atomic coherent pools.
This avoids the remap when CONFIG_DMA_DIRECT_REMAP is not enabled and the
config only requires CONFIG_DMA_COHERENT_POOL. This will be used for
CONFIG_AMD_MEM_
DMA atomic pools will be needed beyond only CONFIG_DMA_DIRECT_REMAP so
separate them out into their own file.
This also adds a new Kconfig option that can be subsequently used for
options, such as CONFIG_AMD_MEM_ENCRYPT, that will utilize the coherent
pools but do not have a dependency on direct r
When an atomic pool becomes fully depleted because it is now relied upon
for all non-blocking allocations through the DMA API, allow background
expansion of each pool by a kworker.
When an atomic pool has less than the default size of memory left, kick
off a kworker to dynamically expand the pool
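A minimal sketch of that idea, assuming one gen_pool per GFP zone (atomic_pool_dma, atomic_pool_dma32, atomic_pool_kernel), a global default size atomic_pool_size, and a helper atomic_pool_expand() that allocates and adds more backing pages; all of these names are assumptions here:

static struct work_struct atomic_pool_work;

static void atomic_pool_work_fn(struct work_struct *work)
{
	/* Grow each pool back up towards its default size. */
	atomic_pool_expand(atomic_pool_dma, GFP_DMA);
	atomic_pool_expand(atomic_pool_dma32, GFP_DMA32);
	atomic_pool_expand(atomic_pool_kernel, GFP_KERNEL);
}

/* Called from the allocation path, e.g. after a successful
 * gen_pool_alloc(); atomic_pool_work is INIT_WORK()ed at init time.
 */
static void atomic_pool_check_size(struct gen_pool *pool)
{
	if (gen_pool_avail(pool) < atomic_pool_size)
		schedule_work(&atomic_pool_work);
}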
When CONFIG_AMD_MEM_ENCRYPT is enabled and a device requires unencrypted
DMA, all non-blocking allocations must originate from the atomic DMA
coherent pools.
Select CONFIG_DMA_COHERENT_POOL for CONFIG_AMD_MEM_ENCRYPT.
Signed-off-by: David Rientjes
---
arch/x86/Kconfig | 1 +
1 file changed, 1 i
set_memory_decrypted() may block, so it is not possible to do non-blocking
allocations through the DMA API for devices that require unencrypted
memory.
The solution is to expand the atomic DMA pools for the various possible
gfp requirements as a means to prevent an unnecessary depletion of lowmem.
When AMD memory encryption is enabled, some devices may use more than
256KB/sec from the atomic pools. It would be more appropriate to scale
the default size based on memory capacity unless the coherent_pool
option is used on the kernel command line.
This provides a slight optimization on initial
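A hedged sketch of such scaling, assuming the default is derived once at init time and only when coherent_pool= was not given on the command line; the 128K-per-GB factor and the clamp bounds are illustrative, not the values chosen by the series:

static unsigned long __init dma_atomic_pool_default_size(void)
{
	unsigned long gigabytes = totalram_pages() >> (30 - PAGE_SHIFT);
	unsigned long size;

	/* Roughly 128K of atomic pool per GB of memory... */
	size = gigabytes * SZ_128K;

	/* ...clamped to something sane on very small and very large boxes. */
	return clamp(size, (unsigned long)SZ_128K, (unsigned long)SZ_4M);
}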
The atomic DMA pools can dynamically expand based on non-blocking
allocations that need to use it.
Export the sizes of each of these pools, in bytes, through debugfs for
measurement.
Suggested-by: Christoph Hellwig
Signed-off-by: David Rientjes
---
kernel/dma/pool.c | 41 ++
The single atomic pool is allocated from the lowest zone possible since
it is guaranteed to be applicable for any DMA allocation.
Devices may allocate through the DMA API but not have a strict reliance
on GFP_DMA memory. Since the atomic pool will be used for all
non-blockable allocations, return
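The (truncated) idea reads as returning memory from the pool that matches the caller's GFP flags rather than always using the lowest zone. A minimal sketch, assuming the same three per-zone pools as above; the helper name is an assumption:

static struct gen_pool *dma_pool_for_gfp(gfp_t gfp)
{
	if (IS_ENABLED(CONFIG_ZONE_DMA) && (gfp & GFP_DMA))
		return atomic_pool_dma;
	if (IS_ENABLED(CONFIG_ZONE_DMA32) && (gfp & GFP_DMA32))
		return atomic_pool_dma32;
	return atomic_pool_kernel;
}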
> From: Alex Williamson
> Sent: Tuesday, April 14, 2020 11:24 PM
>
> On Tue, 14 Apr 2020 03:42:42 +
> "Tian, Kevin" wrote:
>
> > > From: Alex Williamson
> > > Sent: Tuesday, April 14, 2020 11:29 AM
> > >
> > > On Tue, 14 Apr 2020 02:40:58 +
> > > "Tian, Kevin" wrote:
> > >
> > > > > F
> From: Jacob Pan
> Sent: Wednesday, April 15, 2020 6:32 AM
>
> On Tue, 14 Apr 2020 10:13:04 -0700
> Jacob Pan wrote:
>
> > > > > In any of the proposed solutions, the
> > > > > IOMMU driver is ultimately responsible for validating the user
> > > > > data, so do we want vfio performing the cop
On Tue, 14 Apr 2020 10:13:04 -0700
Jacob Pan wrote:
> > > > In any of the proposed solutions, the
> > > > IOMMU driver is ultimately responsible for validating the user
> > > > data, so do we want vfio performing the copy_from_user() to an
> > > > object that could later be assumed to be sanitiz
Hi Eric,
There are some discussions about how to size the uAPI data.
https://lkml.org/lkml/2020/4/14/939
I think the problem with the current scheme is that when uAPI data gets
extended, if VFIO continues to use:
minsz = offsetofend(struct vfio_iommu_type1_set_pasid_table, config);
if (copy_from_
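For context, a minimal sketch of the copy/check pattern being referred to; vfio_iommu_type1_set_pasid_table and its fields follow the series under discussion and are not an upstream uAPI:

	struct vfio_iommu_type1_set_pasid_table spt;
	unsigned long minsz;

	minsz = offsetofend(struct vfio_iommu_type1_set_pasid_table, config);

	if (copy_from_user(&spt, (void __user *)arg, minsz))
		return -EFAULT;

	/* Older userspace passing a smaller argsz is rejected; newer
	 * userspace passing a larger struct has its extra fields ignored,
	 * which is exactly the extensibility problem raised above.
	 */
	if (spt.argsz < minsz)
		return -EINVAL;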
On Tue, 14 Apr 2020, Christoph Hellwig wrote:
> > I'll rely on Christoph to determine whether it makes sense to add some
> > periodic scavenging of the atomic pools, whether that's needed for this to
> > be merged, or whether we should enforce some maximum pool size.
>
> I don't really see the po
On Tue, 14 Apr 2020, Christoph Hellwig wrote:
> > +	/*
> > +	 * Unencrypted memory must come directly from DMA atomic pools if
> > +	 * blocking is not allowed.
> > +	 */
> > +	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
> > +	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(
On Thu, 9 Apr 2020, Tom Lendacky wrote:
> > When a device required unencrypted memory and the context does not allow
>
> required => requires
>
Fixed, thanks.
> > blocking, memory must be returned from the atomic coherent pools.
> >
> > This avoids the remap when CONFIG_DMA_DIRECT_REMAP is no
On Tue, Apr 14, 2020 at 07:02:29PM +0200, Jean-Philippe Brucker wrote:
> The new allocation scheme introduced by commit 2c7933f53f6b
> ("mm/mmu_notifiers: add a get/put scheme for the registration") provides
> a convenient way for users to attach notifier data to an mm. However, it
> would be even
Hi,
On 4/14/20 10:02 AM, Jean-Philippe Brucker wrote:
The SMMUv3 driver, which can be built without CONFIG_PCI, will soon gain
support for PRI. Partially revert commit c6e9aefbf9db ("PCI/ATS: Remove
unused PRI and PASID stubs") to re-introduce the PRI stubs, and avoid
adding more #ifdefs to the
Hi,
On 4/14/20 10:02 AM, Jean-Philippe Brucker wrote:
The SMMUv3 driver uses pci_{enable,disable}_pri() and related
functions. Export those functions to allow the driver to be built as a
module.
Acked-by: Bjorn Helgaas
Signed-off-by: Jean-Philippe Brucker
Reviewed-by: Kuppuswamy Sathyanaraya
On 2020-04-13 11:25 am, Yoshihiro Shimoda wrote:
[...]
-Each bus master connected to an IPMMU must reference the IPMMU in its device
-node with the following property:
-
- - iommus: A reference to the IPMMU in two cells. The first cell is a phandle
-to the IPMMU and the second cell the numbe
On Tue, 14 Apr 2020 10:13:58 -0600
Alex Williamson wrote:
> On Mon, 13 Apr 2020 22:05:15 -0700
> Jacob Pan wrote:
>
> > Hi Alex,
> > Thanks a lot for the feedback, my comments inline.
> >
> > On Mon, 13 Apr 2020 16:21:29 -0600
> > Alex Williamson wrote:
> >
> > > On Mon, 13 Apr 2020 13:41:
ARMv8.1 extensions added Virtualization Host Extensions (VHE), which allow
running a host kernel at EL2. When using normal DMA, device and CPU address
spaces are dissociated, and do not need to implement the same
capabilities, so VHE hasn't been used in the SMMU until now.
With shared address space
The SMMUv3 driver uses pci_{enable,disable}_pri() and related
functions. Export those functions to allow the driver to be built as a
module.
Acked-by: Bjorn Helgaas
Signed-off-by: Jean-Philippe Brucker
---
drivers/pci/ats.c | 4
1 file changed, 4 insertions(+)
diff --git a/drivers/pci/ats
Some systems allow devices to handle I/O Page Faults in the core mm. For
example systems implementing the PCI PRI extension or Arm SMMU stall
model. Infrastructure for reporting these recoverable page faults was
recently added to the IOMMU core. Add a page fault handler for host SVA.
IOMMU driver
On ARM systems, some platform devices behind an IOMMU may support stall,
which is the ability to recover from page faults. Let the firmware tell us
when a device supports stall.
Reviewed-by: Rob Herring
Signed-off-by: Jean-Philippe Brucker
---
.../devicetree/bindings/iommu/iommu.txt| 18
iommu-sva calls us when an mm is modified. Perform the required ATC
invalidations.
Signed-off-by: Jean-Philippe Brucker
---
v4->v5: more comments
---
drivers/iommu/arm-smmu-v3.c | 70 ++---
1 file changed, 58 insertions(+), 12 deletions(-)
diff --git a/drivers/io
With Shared Virtual Addressing (SVA), we need to mirror CPU TTBR, TCR,
MAIR and ASIDs in SMMU contexts. Each SMMU has a single ASID space split
into two sets, shared and private. Shared ASIDs correspond to those
obtained from the arch ASID allocator, and private ASIDs are used for
"classic" map/unm
In preparation for sharing some ASIDs with the CPU, use a global xarray to
store ASIDs and their context. ASID#0 is now reserved, and the ASID
space is global.
Signed-off-by: Jean-Philippe Brucker
---
drivers/iommu/arm-smmu-v3.c | 27 ++-
1 file changed, 18 insertions(+),
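A hedged sketch of what such a global allocator can look like with an xarray; arm_smmu_ctx_desc and the asid_bits limit are taken from the SMMUv3 driver, while the function name and surrounding logic are assumptions:

static DEFINE_XARRAY_ALLOC1(arm_smmu_asid_xa);	/* ALLOC1: ASID#0 is reserved */

static int arm_smmu_asid_alloc(struct arm_smmu_ctx_desc *cd,
			       unsigned int asid_bits)
{
	u32 asid;
	int ret;

	/* Allocate in [1, 2^asid_bits - 1] and remember the owning context. */
	ret = xa_alloc(&arm_smmu_asid_xa, &asid, cd,
		       XA_LIMIT(1, (1 << asid_bits) - 1), GFP_KERNEL);
	if (ret)
		return ret;

	cd->asid = asid;
	return 0;
}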
The SMMU provides a Stall model for handling page faults in platform
devices. It is similar to PCI PRI, but doesn't require devices to have
their own translation cache. Instead, faulting transactions are parked and
the OS is given a chance to fix the page tables and retry the transaction.
Enable s
The SMMU has a single ASID space, the union of shared and private ASID
sets. This means that the SMMU driver competes with the arch allocator
for ASIDs. Shared ASIDs are those of Linux processes, allocated by the
arch, and contribute to broadcast TLB maintenance. Private ASIDs are
allocated by the
When a recoverable page fault is handled by the fault workqueue, find the
associated mm and call handle_mm_fault.
Signed-off-by: Jean-Philippe Brucker
---
v4->v5: no need to call mmput_async() anymore, since the MMU release()
doesn't flush the IOPF queue anymore.
---
drivers/iommu/io-pgf
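A hedged sketch of the core of that handler; iommu_sva_find() is assumed to look up (and take a reference on) the mm bound to the PASID, and locking and error handling are abbreviated:

static vm_fault_t iopf_handle_mm_fault_sketch(struct iommu_fault_page_request *prm)
{
	vm_fault_t status = VM_FAULT_ERROR;
	struct vm_area_struct *vma;
	struct mm_struct *mm;

	mm = iommu_sva_find(prm->pasid);		/* assumed lookup helper */
	if (IS_ERR_OR_NULL(mm))
		return status;

	down_read(&mm->mmap_sem);
	vma = find_extend_vma(mm, prm->addr);
	if (vma)
		status = handle_mm_fault(vma, prm->addr & PAGE_MASK,
					 FAULT_FLAG_USER | FAULT_FLAG_REMOTE);
	up_read(&mm->mmap_sem);

	mmput(mm);
	return status;
}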
Add a small library to help IOMMU drivers manage process address spaces
bound to their devices. Register an MMU notifier to track modifications
on each address space bound to one or more devices.
IOMMU drivers must implement the io_mm_ops and can then use the helpers
provided by this library to eas
For PCI devices that support it, enable the PRI capability and handle PRI
Page Requests with the generic fault handler. It is enabled on demand by
iommu_sva_device_init().
Signed-off-by: Jean-Philippe Brucker
---
drivers/iommu/arm-smmu-v3.c | 284 +---
1 file chan
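For illustration, a minimal sketch of the PCI side of on-demand PRI enabling; the queue-depth choice and the helper name are assumptions, and the ATS/PASID ordering requirements are left out:

static int sketch_enable_pri(struct pci_dev *pdev, u32 max_requests)
{
	/* Best effort: make sure PRI starts from a clean state. */
	pci_reset_pri(pdev);

	/* Allow up to max_requests outstanding page requests; the
	 * matching teardown is pci_disable_pri(pdev).
	 */
	return pci_enable_pri(pdev, max_requests);
}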
Shared Virtual Addressing (SVA) allows sharing process page tables with
devices using the IOMMU. Add a generic implementation of the IOMMU SVA
API, and add support in the Arm SMMUv3 driver.
Since v4 [1] I changed the PASID lifetime. It isn't released when the
corresponding process address space d
If the SMMU supports it and the kernel was built with HTTU support, enable
hardware update of access and dirty flags. This is essential for shared
page tables, to reduce the number of access faults on the fault queue.
We can enable HTTU even if CPUs don't support it, because the kernel
always chec
When handling faults from the event or PRI queue, we need to find the
struct device associated to a SID. Add a rb_tree to keep track of SIDs.
Signed-off-by: Jean-Philippe Brucker
---
drivers/iommu/arm-smmu-v3.c | 179 +---
1 file changed, 147 insertions(+), 32 del
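A hedged sketch of the SID lookup described above; apart from rb_node/rb_root, the struct and function names are assumptions for illustration:

struct arm_smmu_stream {
	u32 sid;
	struct device *dev;
	struct rb_node node;
};

static struct device *arm_smmu_find_dev_by_sid(struct rb_root *root, u32 sid)
{
	struct rb_node *n = root->rb_node;

	while (n) {
		struct arm_smmu_stream *s =
			rb_entry(n, struct arm_smmu_stream, node);

		if (sid < s->sid)
			n = n->rb_left;
		else if (sid > s->sid)
			n = n->rb_right;
		else
			return s->dev;
	}
	return NULL;
}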
The fault handler will need to find an mm given its PASID. This is the
reason we have an IDR for storing address spaces, so hook it up.
Signed-off-by: Jean-Philippe Brucker
---
include/linux/iommu.h | 9 +
drivers/iommu/iommu-sva.c | 19 +++
2 files changed, 28 inser
The SMMUv3 driver, which can be built without CONFIG_PCI, will soon gain
support for PRI. Partially revert commit c6e9aefbf9db ("PCI/ATS: Remove
unused PRI and PASID stubs") to re-introduce the PRI stubs, and avoid
adding more #ifdefs to the SMMU driver.
Acked-by: Bjorn Helgaas
Signed-off-by: Je
The new allocation scheme introduced by commit 2c7933f53f6b
("mm/mmu_notifiers: add a get/put scheme for the registration") provides
a convenient way for users to attach notifier data to an mm. However, it
would be even better to create this notifier data atomically.
Since the alloc_notifier() cal
Aggregate all sanity-checks for sharing CPU page tables with the SMMU
under a single ARM_SMMU_FEAT_SVA bit. For PCIe SVA, users also need to
check FEAT_ATS and FEAT_PRI. For platform SVA, they will most likely have
to check FEAT_STALLS.
Cc: Suzuki K Poulose
Signed-off-by: Jean-Philippe Brucker
-
When enabling SVA, register the fault handler. Device drivers will register
an I/O page fault queue before or after calling iommu_sva_enable. The
fault queue must be flushed before any io_mm is freed, to make sure that
its PASID isn't used in any fault queue, and can be reallocated.
Signed-off-by:
The SMMUv3 driver would like to read the MMFR0 PARANGE field in order to
share CPU page tables with devices. Allow the driver to be built as a
module by exporting the read_sanitized_ftr_reg() cpufeature symbol.
Cc: Suzuki K Poulose
Signed-off-by: Jean-Philippe Brucker
---
arch/arm64/kernel/cpufea
Extract some of the most generic TCR defines, so they can be reused by
the page table sharing code.
Signed-off-by: Jean-Philippe Brucker
---
drivers/iommu/io-pgtable-arm.h | 30 ++
drivers/iommu/io-pgtable-arm.c | 27 ++-
2 files changed, 32 in
Add a macro to check if an ASID is from the current generation, since a
subsequent patch will introduce a third user for this test.
Signed-off-by: Jean-Philippe Brucker
---
v4->v5: new
---
arch/arm64/mm/context.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/ar
The SMMUv3 can handle invalidation targeted at TLB entries with shared
ASIDs. If the implementation supports broadcast TLB maintenance, enable it
and keep track of it in a feature bit. The SMMU will then be affected by
inner-shareable TLB invalidations from other agents.
A major side-effect of thi
Hook SVA operations to support sharing page tables with the SMMUv3:
* dev_enable/disable/has_feature for device drivers to modify the SVA
state.
* sva_bind/unbind and sva_get_pasid to bind device and address spaces.
* The mm_attach/detach/clear/invalidate/free callbacks from iommu-sva
The clear
To enable address space sharing with the IOMMU, introduce mm_context_get()
and mm_context_put(), that pin down a context and ensure that it will keep
its ASID after a rollover. Export the symbols to let the modular SMMUv3
driver use them.
Pinning is necessary because a device constantly needs a va
Hi Evan,
On 2020-04-14 04:42, Evan Green wrote:
On Wed, Jan 22, 2020 at 3:48 AM Sai Prakash Ranjan
wrote:
From: Jordan Crouse
Some client devices want to directly map the IOMMU themselves instead
of using the DMA domain. Allow those devices to opt in to direct
mapping by way of a list of co
On Mon, 13 Apr 2020 22:05:15 -0700
Jacob Pan wrote:
> Hi Alex,
> Thanks a lot for the feedback, my comments inline.
>
> On Mon, 13 Apr 2020 16:21:29 -0600
> Alex Williamson wrote:
>
> > On Mon, 13 Apr 2020 13:41:57 -0700
> > Jacob Pan wrote:
> >
> > > Hi All,
> > >
> > > Just a gentle rem
On Tue, 14 Apr 2020 01:11:07 -0700
Christoph Hellwig wrote:
> On Mon, Apr 13, 2020 at 01:41:57PM -0700, Jacob Pan wrote:
> > Hi All,
> >
> > Just a gentle reminder, any feedback on the options I listed below?
> > New ideas will be even better.
> >
> > Christoph, does the explanation make sense
Hi Jonathan,
On Mon, Apr 13, 2020 at 10:10:50PM +, Derrick, Jonathan wrote:
> I had to add the following for initial VMD support. The new PCIe domain
> added on VMD endpoint probe didn't have the dev_iommu member set on the
> VMD subdevices, which I'm guessing is due to probe_iommu_group alrea
On Tue, 14 Apr 2020 03:42:42 +
"Tian, Kevin" wrote:
> > From: Alex Williamson
> > Sent: Tuesday, April 14, 2020 11:29 AM
> >
> > On Tue, 14 Apr 2020 02:40:58 +
> > "Tian, Kevin" wrote:
> >
> > > > From: Alex Williamson
> > > > Sent: Tuesday, April 14, 2020 3:21 AM
> > > >
> > > > O
On Tue, Apr 14, 2020 at 03:13:40PM +0200, Christoph Hellwig wrote:
> The pgprot argument to __vmalloc is always PAGE_KERNEL now, so remove
> it.
>
> Signed-off-by: Christoph Hellwig
> Reviewed-by: Michael Kelley [hyperv]
> Acked-by: Gao Xiang [erofs]
> Acked-by: Peter Zijlstra (Intel)
> ---
>
Nested mode currently is not compatible with HW MSI reserved regions.
Indeed, MSI transactions targeting these MSI doorbells bypass the SMMU.
Let's check that nested mode is not attempted in such a configuration.
Signed-off-by: Eric Auger
---
drivers/iommu/arm-smmu-v3.c | 23 +--
1 f
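A hedged sketch of that check, walking the device's reserved regions and looking for a hardware MSI region before a nested domain is attached; the helper name is an assumption:

static bool arm_smmu_has_hw_msi_resv_region(struct device *dev)
{
	struct iommu_resv_region *region;
	bool has_msi_resv = false;
	LIST_HEAD(resv_regions);

	iommu_get_resv_regions(dev, &resv_regions);
	list_for_each_entry(region, &resv_regions, list) {
		if (region->type == IOMMU_RESV_MSI) {
			has_msi_resv = true;
			break;
		}
	}
	iommu_put_resv_regions(dev, &resv_regions);

	return has_msi_resv;
}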
When a stage 1 related fault event is read from the event queue,
let's propagate it to potential external fault listeners, i.e. users
who registered a fault handler.
Signed-off-by: Eric Auger
---
v8 -> v9:
- adapt to the removal of IOMMU_FAULT_UNRECOV_PERM_VALID:
only look at IOMMU_FAULT_UNRECO
In nested mode we enforce the rule that all devices belonging
to the same iommu_domain share the same msi_domain.
Indeed if there were several physical MSI doorbells being used
within a single iommu_domain, it becomes really difficult to
resolve the nested stage mapping translating into the correc
The bind/unbind_guest_msi() callbacks check the domain
is NESTED and redirect to the dma-iommu implementation.
Signed-off-by: Eric Auger
---
v6 -> v7:
- remove device handle argument
---
drivers/iommu/arm-smmu-v3.c | 43 +
1 file changed, 43 insertions(+)
d
From: Jean-Philippe Brucker
When handling faults from the event or PRI queue, we need to find the
struct device associated to a SID. Add a rb_tree to keep track of SIDs.
Signed-off-by: Eric Auger
Signed-off-by: Jean-Philippe Brucker
---
drivers/iommu/arm-smmu-v3.c | 112 ++
Up to now, when the type was UNMANAGED, we used to
allocate IOVA pages within a reserved IOVA MSI range.
If both the host and the guest are exposed with SMMUs, each
would allocate an IOVA. The guest allocates an IOVA (gIOVA)
to map onto the guest MSI doorbell (gDB). The Host allocates
another IOVA
When nested stage translation is setup, both s1_cfg and
s2_cfg are allocated.
We introduce a new smmu domain abort field that will be set
upon guest stage1 configuration passing.
arm_smmu_write_strtab_ent() is modified to write both stage
fields in the STE and deal with the abort field.
In neste
Implement domain-selective and page-selective IOTLB invalidations.
Signed-off-by: Eric Auger
---
v7 -> v8:
- ASID based invalidation using iommu_inv_pasid_info
- check ARCHID/PASID flags in addr based invalidation
- use __arm_smmu_tlb_inv_context and __arm_smmu_tlb_inv_range_nosync
v6 -> v7
- c
On attach_pasid_table() we program STE S1 related info set
by the guest into the actual physical STEs. At minimum
we need to program the context descriptor GPA and compute
whether the stage1 is translated/bypassed or aborted.
Signed-off-by: Eric Auger
---
v7 -> v8:
- remove smmu->features check,
With nested stage support, we will soon need to invalidate
S1 contexts and ranges tagged with an unmanaged ASID, the
latter being managed by the guest. So let's introduce two helpers
that allow invalidation with externally managed ASIDs
Signed-off-by: Eric Auger
---
drivers/iommu/arm-smmu-v3.c |
On ARM, MSIs are translated by the SMMU. An IOVA is allocated
for each MSI doorbell. If both the host and the guest are exposed
with SMMUs, we end up with two different IOVAs allocated by each. The
guest allocates an IOVA (gIOVA) to map onto the guest MSI
doorbell (gDB). The host allocates another IOVA (h
In preparation for the introduction of nested stages
let's turn s1_cfg and s2_cfg fields into pointers which are
dynamically allocated depending on the smmu_domain stage.
In nested mode, both stages will coexist and s1_cfg will
be allocated when the guest configuration gets passed.
Signed-off-by:
This version fixes an issue observed by Shameer on an SMMU 3.2,
when moving from dual stage config to stage 1 only config.
The two high 64-bit words of the STE now get reset. Otherwise, leaving the
S2TTB set may cause a C_BAD_STE error.
This series can be found at:
https://github.com/eauger/linux/tree/v5.6-2
From: Jacob Pan
In the virtualization use case, when a guest is assigned
a PCI host device, protected by a virtual IOMMU on the guest,
the physical IOMMU must be programmed to be consistent with
the guest mappings. If the physical IOMMU supports two
translation stages it makes sense to program guest
On Wed, Mar 25, 2020 at 06:48:55PM +0200, Laurentiu Tudor wrote:
> Hi Lorenzo,
>
> On 3/25/2020 2:51 PM, Lorenzo Pieralisi wrote:
> > On Thu, Feb 27, 2020 at 12:05:39PM +0200, laurentiu.tu...@nxp.com wrote:
> >> From: Laurentiu Tudor
> >>
> >> The devices on this bus are not discovered by way of
Although SPAPR_TCE_IOMMU itself can be compile tested on certain PowerPC
configurations, its presence makes arch/powerpc/kvm/Makefile select
modules which do not build in such a configuration.
The arch/powerpc/kvm/ modules use kvm_arch.spapr_tce_tables which exists
only with CONFIG_PPC_BOOK3S_64.
On Thu, Apr 09, 2020 at 03:58:00PM +0200, Marek Szyprowski wrote:
> I've checked and it works fine on top of
> ff68eb23308e6538ec7864c83d39540f423bbe90. However I'm not a fan of
> removing this 'owner' structure. It gave a nice abstraction for all the
> SYSMMU controllers for the given device (a
Hi Marek,
On Fri, Apr 10, 2020 at 12:39:38PM +0200, Marek Szyprowski wrote:
> > + if (!group->default_domain)
> > + continue;
>
> It doesn't look straight from the above diff, but this continue leaks
> group->lock taken.
You are right, thanks for the review! I fixed
Open code it in __bpf_map_area_alloc, which is the only caller. Also
clean up __bpf_map_area_alloc to have a single vmalloc call with
slightly different flags instead of the current two different calls.
For this to compile for the nommu case add a __vmalloc_node_range stub
to nommu.c.
Signed-off
Hi,
here is the second version of this patch-set. The first version with
some more introductory text can be found here:
https://lore.kernel.org/lkml/20200407183742.4344-1-j...@8bytes.org/
Changes v1->v2:
* Rebased to v5.7-rc1
* Re-wrote the arm-smmu changes as suggested
From: Joerg Roedel
The function is now only used in IOMMU core code and shouldn't be used
outside of it anyway, so remove the export for it.
Signed-off-by: Joerg Roedel
---
drivers/iommu/iommu.c | 4 ++--
include/linux/iommu.h | 1 -
2 files changed, 2 insertions(+), 3 deletions(-)
diff --git
No need to export the very low-level __vmalloc_node_range when the
test module can use a slightly higher level variant.
Signed-off-by: Christoph Hellwig
Acked-by: Peter Zijlstra (Intel)
---
lib/test_vmalloc.c | 26 +++---
mm/vmalloc.c | 17 -
2 files ch
This is always PAGE_KERNEL - for long term mappings with other
properties vmap should be used.
Signed-off-by: Christoph Hellwig
Acked-by: Peter Zijlstra (Intel)
---
drivers/gpu/drm/i915/gem/selftests/mock_dmabuf.c | 2 +-
drivers/media/common/videobuf2/videobuf2-dma-sg.c | 3 +--
drivers/med
From: Joerg Roedel
Convert the Rockchip IOMMU driver to use the probe_device() and
release_device() call-backs of iommu_ops, so that the iommu core code
does the group and sysfs setup.
Signed-off-by: Joerg Roedel
---
drivers/iommu/rockchip-iommu.c | 26 +++---
1 file change
The non-cached vmalloc mapping was initially added as a hack for the
first-gen amigaone platform (6xx/book32s), which isn't fully supported
upstream and used the legacy radeon driver together with
non-coherent DMA. However, this only ever worked reliably for DRI .
Remove the hack as it is the last
From: Joerg Roedel
Convert the Renesas IOMMU driver to use the probe_device() and
release_device() call-backs of iommu_ops, so that the iommu core code
does the group and sysfs setup.
Signed-off-by: Joerg Roedel
---
drivers/iommu/ipmmu-vmsa.c | 60 +-
1 file
From: Joerg Roedel
Convert the Tegra IOMMU drivers to use the probe_device() and
release_device() call-backs of iommu_ops, so that the iommu core code
does the group and sysfs setup.
Signed-off-by: Joerg Roedel
---
drivers/iommu/tegra-gart.c | 24 ++--
drivers/iommu/tegra-s
From: Joerg Roedel
After the previous changes the iommu group may not have a default
domain when iommu_group_add_device() is called. With no default domain
iommu_group_create_direct_mappings() will do nothing and no direct
mappings will be created.
Rename iommu_group_create_direct_mappings() to
From: Joerg Roedel
Make use of generic IOMMU infrastructure to gather the same information
carried in dev_data->passthrough and remove the struct member.
Signed-off-by: Joerg Roedel
---
drivers/iommu/amd_iommu.c | 10 +-
drivers/iommu/amd_iommu_types.h | 1 -
2 files changed, 5
From: Joerg Roedel
Convert the Exynos IOMMU driver to use the probe_device() and
release_device() call-backs of iommu_ops, so that the iommu core code
does the group and sysfs setup.
Signed-off-by: Joerg Roedel
---
drivers/iommu/exynos-iommu.c | 26 ++
1 file changed, 6
From: Joerg Roedel
Convert the MSM IOMMU driver to use the probe_device() and
release_device() call-backs of iommu_ops, so that the iommu core code
does the group and sysfs setup.
Signed-off-by: Joerg Roedel
---
drivers/iommu/msm_iommu.c | 34 +++---
1 file changed,
From: Joerg Roedel
The Intel VT-d driver already has a matching function to determine the
default domain type for a device. Wire it up in intel_iommu_ops.
Signed-off-by: Joerg Roedel
---
drivers/iommu/intel-iommu.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/iommu/intel-iommu.c
From: Joerg Roedel
On Exynos platforms there can be more than one SYSMMU (IOMMU) for one
DMA master device. Since the IOMMU core code expects only one hardware
IOMMU, use the first SYSMMU in the list.
Signed-off-by: Joerg Roedel
---
drivers/iommu/exynos-iommu.c | 10 ++
1 file changed,