Hi,

On 1/2/20 10:25 AM, Roland Dreier wrote:
> We saw more devices with the same mismatch quirk. So maintaining them in
> a quirk table will make it more readable and maintainable.

I guess I disagree about the maintainable part, given that this patch
already regresses Broadwell NTB. I'm not even sure what the DMAR table
says about NTB on
Hi Yi,
On 1/2/20 10:31 AM, Liu, Yi L wrote:
> From: Lu Baolu [mailto:baolu...@linux.intel.com]
> Sent: Thursday, January 2, 2020 7:38 AM
> To: Joerg Roedel ; David Woodhouse ;
> Alex Williamson
> Subject: Re: [PATCH v5 0/9] Use 1st-level for IOVA translation
>
> On 12/24/19 3:44 PM, Lu Baolu wrote:
> > Intel VT-d in scalable mode supports
> +DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2f0d, /* NTB devices */
> +			 quirk_dmar_scope_mismatch);
> +DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2020, /* NVME host */
> +			 quirk_dmar_scope_mismatch);

What's the motivation for changing
Hi,
On 1/2/20 10:11 AM, Roland Dreier wrote:
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2f0d, /* NTB devices */
+			 quirk_dmar_scope_mismatch);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2020, /* NVME host */
+			 quirk_dmar_scope_mismatch);
From: Jacob Pan
Use the combined macro for_each_svm_dev() to simplify SVM device
iteration and error checking.
Suggested-by: Andy Shevchenko
Signed-off-by: Jacob Pan
Reviewed-by: Eric Auger
Signed-off-by: Lu Baolu
---
drivers/iommu/intel-svm.c | 79 +++
1
Now that all map/unmap paths support the first-level page table,
let's turn it on if the hardware supports scalable mode.
Signed-off-by: Lu Baolu
---
drivers/iommu/intel-iommu.c | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/iommu/intel-iommu.c
First-level translation may map input addresses to 4-KByte pages,
2-MByte pages, or 1-GByte pages. Support for 4-KByte pages and
2-MByte pages is mandatory for first-level translation. Hardware
support for 1-GByte pages is reported through the FL1GP field in
the Capability Register.
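As a rough illustration of that capability check, a helper like the
following models extracting the FL1GP field from a raw capability-register
value. The bit position (56) follows my reading of the spec's Capability
Register layout, and the helper name is ours; neither is the driver's
authoritative definition:

```c
#include <stdint.h>

/* Illustrative sketch, not driver code: test the FL1GP (first-level
 * 1-GByte page support) bit in a VT-d Capability Register value.
 * CAP_FL1GP_BIT is an assumption based on the spec's register layout. */
#define CAP_FL1GP_BIT 56

int fl1gp_supported(uint64_t cap)
{
	return (int)((cap >> CAP_FL1GP_BIT) & 1ULL);
}
```

With this shape, the driver would only attempt a 1-GByte first-level
mapping when the helper returns nonzero; 4-KByte and 2-MByte support
needs no such gate since it is mandatory.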
From: Jacob Pan
After each PASID entry setup, the related translation caches must be
flushed. We can combine the duplicated code into one function, which is
less error prone.
Signed-off-by: Jacob Pan
Reviewed-by: Eric Auger
Signed-off-by: Lu Baolu
---
drivers/iommu/intel-pasid.c | 48
If Intel IOMMU strict mode is enabled by the user, it's unnecessary
to create the IOVA flush queue.
Signed-off-by: Lu Baolu
---
drivers/iommu/intel-iommu.c | 24 +++-
1 file changed, 15 insertions(+), 9 deletions(-)
diff --git a/drivers/iommu/intel-iommu.c
First-level translation restricts the input address to a canonical
address (i.e., address bits 63:N have the same value as address
bit N-1, where N is 48 with 4-level paging and 57 with 5-level
paging; see Section 3.6 of the spec).
This makes first-level IOVA canonical by using IOVA with
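The canonicality rule above can be sketched as a check that sign-extending
the address from bit N-1 reproduces it unchanged. The helper name and shape
are illustrative, not the driver's code, and it assumes arithmetic right
shift for signed values (true on mainstream compilers):

```c
#include <stdint.h>

/* Sketch: an address is canonical for n implemented bits (48 for
 * 4-level paging, 57 for 5-level) when bits 63:n replicate bit n-1,
 * i.e. sign-extending from bit n-1 yields the original address. */
int is_canonical(uint64_t addr, unsigned int n)
{
	/* Shift bit n-1 up to bit 63, then arithmetic-shift back down. */
	uint64_t ext = (uint64_t)(((int64_t)(addr << (64 - n))) >> (64 - n));
	return ext == addr;
}
```

For n = 48 this accepts the low half up to 0x00007fffffffffff and the high
half from 0xffff800000000000, rejecting everything in between.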
Hi Joerg,
Below patches have been piled up for v5.6.
- Some preparation patches for VT-d nested mode support
- VT-d Native Shared virtual memory cleanup and fixes
- Use 1st-level for IOVA translation
- VT-d debugging and tracing
- Extend map_sg trace event for more information
-
From: Jacob Pan
When setting up first-level page tables for sharing with the CPU, we
need to ensure that the IOMMU supports no fewer paging levels than the
CPU does.
It is not adequate, as in the current code, to set up the 5-level paging
mode in the PASID entry's First Level Paging Mode (FLPM) field based
solely on the CPU.
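The rule being described could be sketched as below. This is a hypothetical
helper, not the driver's API; the FLPM encoding (1 for 5-level, 0 for
4-level first-level paging) follows the text above:

```c
#include <errno.h>

/* Sketch of the CPU/IOMMU paging-level match: 5-level first-level
 * paging may only be programmed into the PASID entry's FLPM field
 * when the IOMMU reports support for it as well. */
int choose_flpm(int cpu_paging_levels, int iommu_supports_5lp)
{
	if (cpu_paging_levels == 5)
		return iommu_supports_5lp ? 1 : -EINVAL;
	return 0; /* 4-level paging: FLPM = 0 */
}
```

The failure case is the point of the patch: a 5-level CPU with a 4-level-only
IOMMU must refuse the setup rather than silently program 5-level FLPM.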
We expect devices with endpoint scope to have normal PCI headers,
and devices with bridge scope to have bridge PCI headers. However,
some PCI devices may be listed in the DMAR table with bridge scope,
even though they have a normal PCI header. Add a quirk flag for
those special devices.
Cc:
The current map_sg trace stores messages in a coarse manner. This
extends it so that more detailed messages can be traced.
The map_sg trace messages look like:
map_sg: dev=0000:00:17.0 [1/9] dev_addr=0xf8f90000 phys_addr=0x158051000
size=4096
map_sg: dev=0000:00:17.0 [2/9] dev_addr=0xf8f91000
This adds the Intel VT-d specific callback for setting the
DOMAIN_ATTR_NESTING domain attribute. It is necessary
to let the VT-d driver know that the domain represents
a virtual machine which requires the IOMMU hardware to
support nested translation mode. Return success if the
IOMMU hardware supports
From: Jacob Pan
Page responses should only be sent when the last page in group (LPIG)
flag or private data is present in the page request. This patch avoids
sending invalid descriptors.
Fixes: 5d308fc1ecf53 ("iommu/vt-d: Add 256-bit invalidation descriptor support")
Signed-off-by: Jacob Pan
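The send/skip decision described above can be modeled as a small predicate.
The struct and field names here are illustrative stand-ins, not the actual
page request descriptor layout from the driver or the spec:

```c
/* Sketch: respond to a page request only when it is the last page in
 * group (LPIG) or carries private data; otherwise no response
 * descriptor should be emitted at all. */
struct page_req {
	unsigned int lpig : 1;
	unsigned int priv_data_present : 1;
};

int should_send_response(const struct page_req *req)
{
	return req->lpig || req->priv_data_present;
}
```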
From: Jacob Pan
The PASID allocator uses an IDR, whose upper allocation bound is
exclusive. There is no need to decrement pasid_max.
Fixes: af39507305fb ("iommu/vt-d: Apply global PASID in SVA")
Reported-by: Eric Auger
Signed-off-by: Jacob Pan
Reviewed-by: Eric Auger
Signed-off-by: Lu Baolu
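The off-by-one being fixed comes from exclusive-end allocator semantics,
which a toy model makes concrete. This is not idr_alloc() itself, just a
sketch of the same [start, end) contract:

```c
/* Toy model of an allocator with an exclusive upper bound, like the
 * IDR: IDs are handed out from [start, end). Passing pasid_max as
 * 'end' already excludes pasid_max itself, so decrementing it first
 * would wrongly shrink the usable range by one. */
int alloc_id(int *next, int start, int end)
{
	if (*next < start)
		*next = start;
	if (*next >= end)
		return -1; /* range exhausted */
	return (*next)++;
}

/* Drain the range and report the highest ID actually handed out. */
int highest_allocatable(int start, int end)
{
	int next = start, id, last = -1;

	while ((id = alloc_id(&next, start, end)) >= 0)
		last = id;
	return last;
}
```

With end = pasid_max the largest PASID returned is pasid_max - 1, which is
exactly the intended maximum; subtracting one beforehand would cap it at
pasid_max - 2.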
Currently intel_pasid_setup_first_level() uses 5-level paging for
first-level translation if the CPUs use 5-level paging mode too.
This makes sense for SVA usages since the page table is shared
between CPUs and IOMMUs. But it makes no sense if we only want
to use the first level for IOVA translation. Add
From: Jacob Pan
Add a check during SVM bind to ensure CPU and IOMMU hardware capabilities
are met.
Signed-off-by: Jacob Pan
Reviewed-by: Eric Auger
Signed-off-by: Lu Baolu
---
drivers/iommu/intel-svm.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/iommu/intel-svm.c
From: Jacob Pan
Shared Virtual Memory (SVM) is based on a collective set of hardware
features detected at runtime. There are requirements for matching CPU
and IOMMU capabilities.
The current code checks the CPU and IOMMU feature sets for SVM support,
but the result is never stored nor used. Therefore,
Currently, if flush queue initialization fails, we return an error
or enforce the system-wide strict mode. These are unnecessary
because we always check the existence of a flush queue before
queuing any IOVAs for lazy flushing. Printing an informational
message is enough.
Signed-off-by: Lu Baolu
---
When software has changed first-level tables, it should invalidate
the affected IOTLB and the paging-structure caches using the
PASID-based-IOTLB Invalidate Descriptor defined in Section 6.5.2.4
of the spec.
Signed-off-by: Lu Baolu
---
drivers/iommu/dmar.c | 41 +++
Intel VT-d in scalable mode supports two types of page tables for
IOVA translation: first level and second level. The IOMMU driver
can choose either of them for IOVA translation according to the use
case. This sets up the pasid entry if a domain is selected to use
the first-level page table for
Export page table internals of the domain attached to each device.
Example of such dump on a Skylake machine:
$ sudo cat /sys/kernel/debug/iommu/intel/domain_translation_struct
[ ... ]
Device 0000:00:14.0 with pasid 0 @0x15f3d9000
IOVA_PFN		PML5E			PML4E
From: Jacob Pan
Make use of the generic IOASID code to manage PASID allocation,
freeing, and lookup, replacing the Intel-specific code.
Signed-off-by: Jacob Pan
Reviewed-by: Eric Auger
Signed-off-by: Lu Baolu
---
drivers/iommu/Kconfig | 1 +
drivers/iommu/intel-iommu.c | 13 +++--
This checks whether a domain should use the first level page
table for map/unmap and marks it in the domain structure.
Signed-off-by: Lu Baolu
---
drivers/iommu/intel-iommu.c | 39 +
1 file changed, 39 insertions(+)
diff --git a/drivers/iommu/intel-iommu.c
This adds Kconfig option INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON
to make it easier for distributions to enable or disable the
Intel IOMMU scalable mode by default during kernel build.
Signed-off-by: Lu Baolu
---
drivers/iommu/Kconfig | 12
drivers/iommu/intel-iommu.c | 7
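A Kconfig entry of the kind described might look as follows. The option
name is taken from the text above, but the prompt and help text are an
illustrative sketch, not the patch's actual wording:

```kconfig
config INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON
	bool "Enable Intel IOMMU scalable mode by default"
	depends on INTEL_IOMMU
	help
	  Selecting this option will enable scalable mode by default when
	  the hardware advertises the capability. If this option is not
	  selected, scalable mode can still be requested at boot time via
	  the intel_iommu= kernel command-line parameter.
```

Distributions can then flip the default in their config without carrying
a command-line change.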
On 12/24/19 3:44 PM, Lu Baolu wrote:
Intel VT-d in scalable mode supports two types of page tables
for DMA translation: the first level page table and the second
level page table. The first level page table uses the same
format as the CPU page table, while the second level page table
keeps
On 12/24/19 2:22 PM, Lu Baolu wrote:
We expect devices with endpoint scope to have normal PCI headers,
and devices with bridge scope to have bridge PCI headers. However,
some PCI devices may be listed in the DMAR table with bridge scope,
even though they have a normal PCI header. Add a quirk
On Tue, Dec 31, 2019 at 10:39:49PM -0500, Brian Masney wrote:
[...]
> (kernel_init) from ret_from_fork (arch/arm/kernel/entry-common.S:156)
> Exception stack(0xee89dfb0 to 0xee89dff8)
On Tue, Dec 31, 2019 at 10:39:49PM -0500, Brian Masney wrote:
> When attempting to load the qcom-iommu driver, and an -EPROBE_DEFER
> error occurs, the following attempted NULL pointer deference occurs:
>
> Unable to handle kernel NULL pointer dereference at virtual address
> 00000014
>