On Tue, Apr 09, 2019 at 05:24:48PM +, Thomas Hellstrom wrote:
> > Note that this only affects external, untrusted devices. But that
> > may include eGPU,
>
> What about discrete graphics cards, like Radeon and Nvidia? Who gets to
> determine what's trusted?
Based on firmware tables. discret
On Tue, Apr 09, 2019 at 06:59:32PM +0100, Robin Murphy wrote:
> On 27/03/2019 08:04, Christoph Hellwig wrote:
>> This keeps the code together and will simplify compiling the code
>> out on architectures that are always dma coherent.
>
> And this is where things take a turn in the direction I just c
Hi James,
On 4/6/19 2:02 AM, James Sewart wrote:
Hey Lu,
My bad, did some debugging on my end. The issue was swapping out
find_domain for iommu_get_domain_for_dev. It seems in some situations the
domain is not attached to the group but the device is expected to have the
domain still stored in i
On Tue, Apr 09, 2019 at 06:21:30PM +0300, Andriy Shevchenko wrote:
> On Tue, Apr 09, 2019 at 07:53:08AM -0700, Paul E. McKenney wrote:
> > On Tue, Apr 09, 2019 at 01:30:30PM +0300, Andriy Shevchenko wrote:
> > > On Tue, Apr 09, 2019 at 03:04:36AM -0700, Christoph Hellwig wrote:
> > > > On Tue, Apr
Hi all,
According to the routine of iommu_dma_alloc(), it allocates an iova
then does iommu_map() to map the iova to a physical address of new
allocated pages. However, in remoteproc_core.c, I see its code tries
to iommu_map() without a prior alloc_iova() or alloc_iova_fast().
Is it safe to do so
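The ordering being questioned above can be shown with a toy userspace model (not kernel code; all names and sizes are illustrative): reserve an IOVA range from an allocator first, then install the mapping. Mapping a fixed IOVA without reserving it risks the allocator later handing the same range to someone else.

```c
#include <stdint.h>

#define IOVA_BASE 0x100000u
#define MAX_MAPS  16

static uint32_t iova_next = IOVA_BASE;   /* bump-allocator cursor */
static uint32_t map_iova[MAX_MAPS], map_phys[MAX_MAPS];
static int map_count;

/* Step 1: reserve an IOVA range so nobody else can claim it. */
static uint32_t iova_alloc(uint32_t size)
{
    uint32_t iova = iova_next;
    iova_next += size;
    return iova;
}

/* Step 2: install the IOVA -> physical translation. */
static void iommu_map_toy(uint32_t iova, uint32_t phys)
{
    map_iova[map_count] = iova;
    map_phys[map_count] = phys;
    map_count++;
}

/* The safe combined operation: allocate the IOVA, then map it. */
static uint32_t dma_alloc_toy(uint32_t phys, uint32_t size)
{
    uint32_t iova = iova_alloc(size);
    iommu_map_toy(iova, phys);
    return iova;
}
```

Calling iommu_map_toy() directly with a hard-coded IOVA, as the remoteproc code appears to do with the real APIs, would bypass step 1 and leave the allocator unaware of the occupied range.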
The ARM Mali Midgard GPU uses page tables similar to standard 64-bit stage 1
page tables, but with a few differences. Add a new format type to represent
the format. The
input address size is 48-bits and the output address size is 40-bits (and
possibly less?). Note that the later bifrost GPUs follow the standard
64-b
Here's v3 of the panfrost driver. Lots of changes from review comments
and further testing. Details are in each patch. Of note, a problem with
MMU page faults has been addressed, improving stability. In the
process, the TLB invalidate has been optimized which Tomeu says has
improved the perform
On 27/03/2019 08:04, Christoph Hellwig wrote:
This keeps the code together and will simplify compiling the code
out on architectures that are always dma coherent.
And this is where things take a turn in the direction I just can't get
on with - I'm looking at the final result and the twisty maz
On Tue, 9 Apr 2019 17:57:12 +0300
Andriy Shevchenko wrote:
> On Mon, Apr 08, 2019 at 04:59:33PM -0700, Jacob Pan wrote:
> > When Shared Virtual Address (SVA) is enabled for a guest OS via
> > vIOMMU, we need to provide invalidation support at IOMMU API and
> > driver level. This patch adds Intel
On Tue, Apr 09, 2019 at 09:43:28AM -0700, Jacob Pan wrote:
> On Tue, 9 Apr 2019 13:07:18 +0300
> Andriy Shevchenko wrote:
> > On Mon, Apr 08, 2019 at 04:59:23PM -0700, Jacob Pan wrote:
> > > +int iommu_cache_invalidate(struct iommu_domain *domain, struct
> > > device *dev,
> > > +
On 09/04/2019 18:23, Christoph Hellwig wrote:
On Tue, Apr 09, 2019 at 04:07:02PM +0100, Robin Murphy wrote:
-static inline int iommu_dma_init(void)
+static inline void iommu_setup_dma_ops(struct device *dev, u64 dma_base,
+ u64 size, const struct iommu_ops *ops)
{
- return
On Tue, Apr 09, 2019 at 04:49:30PM +0100, Robin Murphy wrote:
>> *cpu_addr,
>> +size_t size)
>> +{
>> +unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
>> +struct vm_struct *area = find_vm_area(cpu_addr);
>> +
>> +if (WARN_ON(!area || !area->pages))
>> +retur
On Tue, Apr 09, 2019 at 04:29:07PM +0100, Robin Murphy wrote:
> On 27/03/2019 08:04, Christoph Hellwig wrote:
>> Move the vm_area handling into __iommu_dma_mmap, which is renamed
>> to iommu_dma_mmap_remap.
>>
>> Inline __iommu_dma_mmap_pfn into the main function to simplify the code
>> flow a bit.
On Tue, 2019-04-09 at 17:25 +0200, h...@lst.de wrote:
> On Tue, Apr 09, 2019 at 02:17:40PM +, Thomas Hellstrom wrote:
> > If that's the case, I think most of the graphics drivers will stop
> > functioning. I don't think people would want that, and even if the
> > graphics drivers are "to blame"
On Tue, Apr 09, 2019 at 04:07:02PM +0100, Robin Murphy wrote:
>> -static inline int iommu_dma_init(void)
>> +static inline void iommu_setup_dma_ops(struct device *dev, u64 dma_base,
>> +u64 size, const struct iommu_ops *ops)
>> {
>> -return 0;
>> }
>
> I don't think it makes sen
On Tue, Apr 09, 2019 at 04:07:02PM +0100, Robin Murphy wrote:
>> +static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
>> +size_t size, enum dma_data_direction dir, unsigned long attrs)
>> +{
>> +if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
>> +iommu_dma
On Fri, Apr 05, 2019 at 06:42:57PM +0100, Robin Murphy wrote:
> Other than introducing this unnecessary dupe,
Fixed.
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
On Tue, Apr 09, 2019 at 04:12:51PM +0100, Robin Murphy wrote:
> On 07/04/2019 07:59, Christoph Hellwig wrote:
>> On Fri, Apr 05, 2019 at 06:30:52PM +0100, Robin Murphy wrote:
>>> On 27/03/2019 08:04, Christoph Hellwig wrote:
The nr_pages checks should be done for all mmap requests, not just th
When removing a mapping from a domain, we need to send an invalidation to
all devices that might have stored it in their Address Translation Cache
(ATC). In addition when updating the context descriptor of a live domain,
we'll need to send invalidations for all devices attached to it.
Maintain a l
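The bookkeeping described above can be sketched as a toy userspace model (not the SMMUv3 code; names and sizes are illustrative): the domain keeps a list of attached devices so that, on unmap, an ATC invalidation reaches every device that may have cached the translation.

```c
#define MAX_DEVS 4

struct toy_dev {
    int atc_entries;                 /* translations cached in the ATC */
};

struct toy_domain {
    struct toy_dev *devs[MAX_DEVS];  /* the maintained device list */
    int ndevs;
};

static void attach(struct toy_domain *d, struct toy_dev *dev)
{
    d->devs[d->ndevs++] = dev;
}

/* Removing a mapping: walk the list and invalidate each device's ATC. */
static void unmap_and_invalidate(struct toy_domain *d)
{
    for (int i = 0; i < d->ndevs; i++)
        d->devs[i]->atc_entries = 0; /* send ATC invalidation */
}
```

Without the list, the domain would have no way to know which devices need the invalidation when a mapping is torn down.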
The arm_smmu_master_data structure already represents more than just the
firmware data associated with a master, and will be used extensively to
represent a device's state when implementing more SMMU features. Rename
the structure to arm_smmu_master.
Signed-off-by: Jean-Philippe Brucker
---
driver
The ARM architecture has a "Top Byte Ignore" (TBI) option that makes the
MMU mask out bits [63:56] of an address, allowing a userspace application
to store data in its pointers. This option is incompatible with PCI ATS.
If TBI is enabled in the SMMU and userspace triggers DMA transactions on
tagge
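The "Top Byte Ignore" behaviour described above can be expressed as a plain C sketch (illustrative only): the MMU ignores bits [63:56], so two virtual addresses differing only in the top byte name the same memory. An ATS-capable device does not perform this masking, which is the source of the conflict.

```c
#include <stdint.h>

/* Model of TBI address handling: clear bits [63:56] so that a tagged
 * pointer and its untagged form resolve to the same address. */
static uint64_t tbi_canonical(uint64_t va)
{
    return va & 0x00FFFFFFFFFFFFFFull;  /* mask out the top byte */
}
```

A device issuing an ATS translation request would use the tagged value verbatim, so `tbi_canonical(va) != va` for any tagged pointer is exactly the mismatch the patch guards against.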
As we're going to track domain-master links more closely for ATS and CD
invalidation, add a pointer to the attached domain in struct
arm_smmu_master. As a result, arm_smmu_strtab_ent is redundant and can be
removed.
Signed-off-by: Jean-Philippe Brucker
---
drivers/iommu/arm-smmu-v3.c | 92
Simplify the attach/detach code a bit by keeping a pointer to the stream
IDs in the master structure. Although not completely obvious here, it does
make the subsequent support for ATS, PRI and PASID a bit simpler.
Signed-off-by: Jean-Philippe Brucker
---
drivers/iommu/arm-smmu-v3.c | 30
PCIe devices can implement their own TLB, named Address Translation Cache
(ATC). Enable Address Translation Service (ATS) for devices that support
it and send them invalidation requests whenever we invalidate the IOTLBs.
ATC invalidation is allowed to take up to 90 seconds, according to the
PCIe s
This series enables PCI ATS in SMMUv3. Changes since v1 [1]:
* Simplify the SMMU structures (patches 2-4 are new).
* Don't enable ATS for devices that are attached to a bypass domain,
because in that case a translation request would cause F_BAD_ATS_TREQ.
Translation requests in that case caus
Root complex node in IORT has a bit telling whether it supports ATS or
not. Store this bit in the IOMMU fwspec when setting up a device, so it
can be accessed later by an IOMMU driver.
Use the negative version (NO_ATS) at the moment because it's not clear
if/how the bit needs to be integrated in o
Rob Herring writes:
> On Mon, Apr 1, 2019 at 10:43 AM Eric Anholt wrote:
>>
>> Chris Wilson writes:
>>
>> > Quoting Daniel Vetter (2019-04-01 14:06:48)
>> >> On Mon, Apr 1, 2019 at 9:47 AM Rob Herring wrote:
>> >> > +{
>> >> > + int i, ret = 0;
>> >> > + struct drm_gem_object *obj;
On Tue, 9 Apr 2019 13:03:15 +0300
Andriy Shevchenko wrote:
> On Mon, Apr 08, 2019 at 04:59:20PM -0700, Jacob Pan wrote:
> > Device faults detected by IOMMU can be reported outside the IOMMU
> > subsystem for further processing. This patch introduces
> > a generic device fault data structure.
> >
On Tue, 9 Apr 2019 13:07:18 +0300
Andriy Shevchenko wrote:
> On Mon, Apr 08, 2019 at 04:59:23PM -0700, Jacob Pan wrote:
> > From: "Liu, Yi L"
> >
> > In any virtualization use case, when the first translation stage
> > is "owned" by the guest OS, the host IOMMU driver has no knowledge
> > of ca
On 27/03/2019 08:04, Christoph Hellwig wrote:
Move the call to dma_common_pages_remap / dma_common_free_remap into
__iommu_dma_alloc / __iommu_dma_free and rename those functions to
better describe what they do. This keeps the functionality that
allocates and remaps a non-contiguous array of pag
On Tue, 9 Apr 2019 13:08:59 +0300
Andriy Shevchenko wrote:
> On Mon, Apr 08, 2019 at 04:59:24PM -0700, Jacob Pan wrote:
> > From: Lu Baolu
> >
> > If Intel IOMMU runs in caching mode, a.k.a. virtual IOMMU, the
> > IOMMU driver should rely on the emulation software to allocate
> > and free PASID
On Tue, 9 Apr 2019 12:56:23 +0300
Andriy Shevchenko wrote:
> On Mon, Apr 08, 2019 at 04:59:15PM -0700, Jacob Pan wrote:
> > Shared virtual address (SVA), a.k.a. Shared virtual memory (SVM), on
> > Intel platforms allows address space sharing between device DMA and
> > applications. SVA can reduce p
On Tue, Apr 9, 2019 at 10:56 AM Tomeu Vizoso wrote:
>
> On Mon, 8 Apr 2019 at 23:04, Rob Herring wrote:
> >
> > On Fri, Apr 5, 2019 at 7:30 AM Steven Price wrote:
> > >
> > > On 01/04/2019 08:47, Rob Herring wrote:
> > > > This adds the initial driver for panfrost which supports Arm Mali
> > > >
On Tue, Apr 09, 2019 at 01:30:30PM +0300, Andriy Shevchenko wrote:
> On Tue, Apr 09, 2019 at 03:04:36AM -0700, Christoph Hellwig wrote:
> > On Tue, Apr 09, 2019 at 01:00:49PM +0300, Andriy Shevchenko wrote:
> > > I think it makes sense to add a helper macro to rcupdate.h
> > > (and we have several
On Mon, 8 Apr 2019 at 23:04, Rob Herring wrote:
>
> On Fri, Apr 5, 2019 at 7:30 AM Steven Price wrote:
> >
> > On 01/04/2019 08:47, Rob Herring wrote:
> > > This adds the initial driver for panfrost which supports Arm Mali
> > > Midgard and Bifrost family of GPUs. Currently, only the T860 and
> >
On 27/03/2019 08:04, Christoph Hellwig wrote:
Moving this function up to its unmap counterpart helps to keep related
code together for the following changes.
Reviewed-by: Robin Murphy
Signed-off-by: Christoph Hellwig
---
drivers/iommu/dma-iommu.c | 46 +++--
On 27/03/2019 08:04, Christoph Hellwig wrote:
Move the vm_area handling into a new iommu_dma_get_sgtable_remap helper.
Inline __iommu_dma_get_sgtable_page into the main function to simplify
the code flow a bit.
Signed-off-by: Christoph Hellwig
---
drivers/iommu/dma-iommu.c | 54 +
On 27/03/2019 08:04, Christoph Hellwig wrote:
Move the vm_area handling into __iommu_dma_mmap, which is renamed
to iommu_dma_mmap_remap.
Inline __iommu_dma_mmap_pfn into the main function to simplify the code
flow a bit.
Signed-off-by: Christoph Hellwig
---
drivers/iommu/dma-iommu.c | 50 +++
On Tue, Apr 09, 2019 at 02:17:40PM +, Thomas Hellstrom wrote:
> If that's the case, I think most of the graphics drivers will stop
> functioning. I don't think people would want that, and even if the
> graphics drivers are "to blame" due to not implementing the sync calls,
> I think the work in
On Tue, Apr 09, 2019 at 07:53:08AM -0700, Paul E. McKenney wrote:
> On Tue, Apr 09, 2019 at 01:30:30PM +0300, Andriy Shevchenko wrote:
> > On Tue, Apr 09, 2019 at 03:04:36AM -0700, Christoph Hellwig wrote:
> > > On Tue, Apr 09, 2019 at 01:00:49PM +0300, Andriy Shevchenko wrote:
> > > > I think it m
On 07/04/2019 07:59, Christoph Hellwig wrote:
On Fri, Apr 05, 2019 at 06:30:52PM +0100, Robin Murphy wrote:
On 27/03/2019 08:04, Christoph Hellwig wrote:
The nr_pages checks should be done for all mmap requests, not just those
using remap_pfn_range.
Hmm, the logic in iommu_dma_mmap() inherent
On 27/03/2019 08:04, Christoph Hellwig wrote:
[...]
@@ -649,19 +696,44 @@ static dma_addr_t __iommu_dma_map(struct device *dev,
phys_addr_t phys,
return iova + iova_off;
}
-dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
+static dma_addr_t __iommu_dma_map_page(
On Mon, Apr 08, 2019 at 04:59:33PM -0700, Jacob Pan wrote:
> When Shared Virtual Address (SVA) is enabled for a guest OS via
> vIOMMU, we need to provide invalidation support at IOMMU API and driver
> level. This patch adds Intel VT-d specific function to implement
> iommu passdown invalidate API f
On Mon, Apr 08, 2019 at 04:59:30PM -0700, Jacob Pan wrote:
> When supporting guest SVA with emulated IOMMU, the guest PASID
> table is shadowed in VMM. Updates to guest vIOMMU PASID table
> will result in PASID cache flush which will be passed down to
> the host as bind guest PASID calls.
>
> For
On Mon, Apr 08, 2019 at 04:59:31PM -0700, Jacob Pan wrote:
Commit message?
> Signed-off-by: Jacob Pan
> ---
> include/uapi/linux/iommu.h | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/include/uapi/linux/iommu.h b/include/uapi/linux/iommu.h
> index 9344cbb..59569d6 100644
> --- a/in
On Tue, 2019-04-09 at 15:31 +0200, h...@lst.de wrote:
> On Tue, Apr 09, 2019 at 01:04:51PM +, Thomas Hellstrom wrote:
> > On the VMware platform we have two possible vIOMMUs, the AMD IOMMU
> > and Intel VT-d. Given those conditions I believe the patch is
> > functionally correct. We can't
Hi Muli and Jon,
do you know if there are users of systems with the Calgary IOMMU
still around? It seems like the last non-drive-by changes to it
are from 2010 and I'm not sure how common these systems were.
On Tue, 2019-04-09 at 15:59 +0200, Christoph Hellwig wrote:
> Hi David and Joerg,
>
> do you remember a good reason why intel-iommu is not using per-device
> dma_map_ops like the AMD iommu or the various ARM iommus?
>
> Right now intel-iommu.c contains a half-assed reimplementation of the
> dma d
Hi David and Joerg,
do you remember a good reason why intel-iommu is not using per-device
dma_map_ops like the AMD iommu or the various ARM iommus?
Right now intel-iommu.c contains a half-assed reimplementation of the
dma direct code for the iommu_no_mapping() case, and it would seem
much nicer t
On Tue, Apr 09, 2019 at 01:04:51PM +, Thomas Hellstrom wrote:
> On the VMware platform we have two possible vIOMMUs, the AMD IOMMU and
> Intel VT-d. Given those conditions I believe the patch is functionally
> correct. We can't cover the AMD case with intel_iommu_enabled.
> Furthermore the only f
On Tue, 2019-04-09 at 11:57 +0200, h...@lst.de wrote:
> On Mon, Apr 08, 2019 at 06:47:52PM +, Thomas Hellstrom wrote:
> > We HAVE discussed our needs, although admittedly some of my emails
> > ended up unanswered.
>
> And then you haven't followed up, and instead ignored the layering
> instruc
The following equivalence or replacement relationships exist:
iommu=pt <--> iommu.dma_mode=passthrough.
iommu=nopt can be replaced with iommu.dma_mode=lazy.
intel_iommu=strict <--> iommu.dma_mode=strict.
amd_iommu=fullflush <--> iommu.dma_mode=strict.
Signed-off-by: Zhen Lei
---
Documentation/ad
s390_iommu=strict is equivalent to iommu.dma_mode=strict.
Signed-off-by: Zhen Lei
---
Documentation/admin-guide/kernel-parameters.txt | 6 +++---
arch/s390/pci/pci_dma.c | 14 +++---
drivers/iommu/Kconfig | 1 +
3 files changed, 11 inse
iommu=nobypass can be replaced with iommu.dma_mode=strict.
Signed-off-by: Zhen Lei
---
Documentation/admin-guide/kernel-parameters.txt | 2 +-
arch/powerpc/platforms/powernv/pci-ioda.c | 5 ++---
drivers/iommu/Kconfig | 1 +
3 files changed, 4 insertions(+), 4 del
Currently the IOMMU DMA layer supports three modes: passthrough, lazy and
strict. The passthrough mode bypasses the IOMMU, the lazy mode defers the
invalidation of hardware TLBs, and the strict mode invalidates IOMMU
hardware TLBs synchronously. The three modes are mutually exclusive. But
the current boot options a
Also add IOMMU_DMA_MODE_IS_{STRICT|LAZY|PASSTHROUGH}() to make the code
look cleaner.
There is no functional change, just prepare for the following patches.
Signed-off-by: Zhen Lei
---
drivers/iommu/iommu.c | 18 ++
include/linux/iommu.h | 18 ++
2 files changed
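A guess at the shape of the helpers the patch describes (the real names and values live in the series' changes to include/linux/iommu.h and may differ): one global mode variable compared against three mutually exclusive constants.

```c
/* Illustrative constants; the series' actual values may differ. */
#define IOMMU_DMA_MODE_PASSTHROUGH 0
#define IOMMU_DMA_MODE_LAZY        1
#define IOMMU_DMA_MODE_STRICT      2

static int iommu_dma_mode = IOMMU_DMA_MODE_LAZY;

/* The helpers: each checks the single mode variable, so call sites
 * read as a question rather than an integer comparison. */
#define IOMMU_DMA_MODE_IS_PASSTHROUGH() \
    (iommu_dma_mode == IOMMU_DMA_MODE_PASSTHROUGH)
#define IOMMU_DMA_MODE_IS_LAZY() \
    (iommu_dma_mode == IOMMU_DMA_MODE_LAZY)
#define IOMMU_DMA_MODE_IS_STRICT() \
    (iommu_dma_mode == IOMMU_DMA_MODE_STRICT)
```

Because all three helpers test one variable, at most one of them can be true at a time, matching the "mutually exclusive" requirement from the cover letter.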
First, add build option IOMMU_DEFAULT_{LAZY|STRICT}, so that we have the
opportunity to set {lazy|strict} mode as the default at build time. Then
put the three config options in a choice, so that people can only choose
one of the three at a time, matching the boot option iommu.dma_mode.
Signed-off-b
v4 --> v5:
As per Hanjun's and Thomas Gleixner's suggestions:
1. Keep the old arch-specific boot options unchanged.
2. Keep the build option CONFIG_IOMMU_DEFAULT_PASSTHROUGH unchanged.
v4:
As per Robin Murphy's suggestion:
"It's also not necessarily obvious to the user how this interacts with
IOMMU_DEFAULT_PASST
On Tue, Apr 09, 2019 at 03:04:36AM -0700, Christoph Hellwig wrote:
> On Tue, Apr 09, 2019 at 01:00:49PM +0300, Andriy Shevchenko wrote:
> > I think it makes sense to add a helper macro to rcupdate.h
> > (and we have several cases in kernel that can utilize it)
> >
> > #define kfree_non_null_rcu(pt
On Mon, Apr 08, 2019 at 04:59:24PM -0700, Jacob Pan wrote:
> From: Lu Baolu
>
> If Intel IOMMU runs in caching mode, a.k.a. virtual IOMMU, the
> IOMMU driver should rely on the emulation software to allocate
> and free PASID IDs. The Intel vt-d spec revision 3.0 defines a
> register set to suppor
On Mon, Apr 08, 2019 at 04:59:23PM -0700, Jacob Pan wrote:
> From: "Liu, Yi L"
>
> In any virtualization use case, when the first translation stage
> is "owned" by the guest OS, the host IOMMU driver has no knowledge
> of caching structure updates unless the guest invalidation activities
> are tr
On Tue, Apr 09, 2019 at 01:00:49PM +0300, Andriy Shevchenko wrote:
> I think it makes sense to add a helper macro to rcupdate.h
> (and we have several cases in kernel that can utilize it)
>
> #define kfree_non_null_rcu(ptr, rcu_head) \
> do {
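The helper being proposed can be illustrated with a userspace analog (the macro name, body, and toy_free() bookkeeping here are assumptions for illustration; the kernel version would call kfree_rcu(ptr, rcu_head) rather than a plain free function): wrap the call in a NULL check so callers can drop their open-coded `if (ptr)` guards.

```c
static int toy_freed_count;

static void toy_free(void *p)
{
    (void)p;                 /* a real version would release memory */
    toy_freed_count++;
}

/* Skip the free call entirely when the pointer is NULL, so the guard
 * lives in one place instead of at every call site. */
#define toy_free_non_null(ptr)    \
    do {                          \
        if (ptr)                  \
            toy_free(ptr);        \
    } while (0)
```

The `do { ... } while (0)` wrapper keeps the macro usable as a single statement, e.g. inside an unbraced `if`/`else`.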
On Mon, Apr 08, 2019 at 04:59:20PM -0700, Jacob Pan wrote:
> Device faults detected by IOMMU can be reported outside the IOMMU
> subsystem for further processing. This patch introduces
> a generic device fault data structure.
>
> The fault can be either an unrecoverable fault or a page request,
>
On Mon, Apr 08, 2019 at 04:59:16PM -0700, Jacob Pan wrote:
> From: Jean-Philippe Brucker
>
> Some devices might support multiple DMA address spaces, in particular
> those that have the PCI PASID feature. PASID (Process Address Space ID)
> allows process address spaces to be shared with devices (SVA),
> index 74e944bd4a8d..81d449451494 100644
> --- a/drivers/iommu/arm-smmu.c
> +++ b/drivers/iommu/arm-smmu.c
> @@ -1484,8 +1484,7 @@ static int arm_smmu_add_device(struct device *dev)
> }
>
> ret = -ENOMEM;
> - cfg = kzalloc(offsetof(struct arm_smmu_master_cfg, smendx[i]),
> -
On Mon, Apr 08, 2019 at 06:47:52PM +, Thomas Hellstrom wrote:
> We HAVE discussed our needs, although admittedly some of my emails
> ended up unanswered.
And then you haven't followed up, and instead ignored the layering
instructions and just commited a broken patch?
> We've as you're well aw
On Mon, Apr 08, 2019 at 04:59:15PM -0700, Jacob Pan wrote:
> Shared virtual address (SVA), a.k.a. Shared virtual memory (SVM), on Intel
> platforms allows address space sharing between device DMA and applications.
> SVA can reduce programming complexity and enhance security.
> This series is intended