i.org/next/master/next-20200511/arm/tegra_defconfig/gcc-8/lab-collabora/baseline-tegra124-jetson-tk1.html#L573
Stack trace:
<0>[2.953683] [] (__iommu_probe_device) from []
(iommu_probe_device+0x18/0x124)
<0>[2.962810] [] (iommu_probe_device) from []
(of_iommu_con
On Sat 09 May 06:08 PDT 2020, Shawn Guo wrote:
> On some SoCs like MSM8939 with the A405 Adreno, there is a gfx_tbu clock
> that needs to be on while doing a TLB invalidate. Otherwise, the TLBSYNC status
> will not be correctly reflected, causing the system to go into a bad
> state. Add it as an optional clock,
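A minimal sketch of how such an optional clock is usually handled, assuming a clock named "tbu" and illustrative driver names (not taken from the patch itself):

#include <linux/clk.h>
#include <linux/device.h>

struct example_iommu {
        struct device *dev;
        struct clk *tbu_clk;    /* NULL if the SoC has no such clock */
};

static int example_iommu_init_clocks(struct example_iommu *iommu)
{
        /* devm_clk_get_optional() returns NULL, not an error, when the
         * clock is not described for this device.
         */
        iommu->tbu_clk = devm_clk_get_optional(iommu->dev, "tbu");
        if (IS_ERR(iommu->tbu_clk))
                return PTR_ERR(iommu->tbu_clk);
        return 0;
}

static void example_iommu_tlb_sync(struct example_iommu *iommu)
{
        /* the clk API treats a NULL clock as a no-op, so SoCs without
         * the clock take the same path
         */
        if (clk_prepare_enable(iommu->tbu_clk))
                return;

        /* ... issue TLBSYNC and poll for completion here ... */

        clk_disable_unprepare(iommu->tbu_clk);
}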
On Mon 11 May 19:17 PDT 2020, Samuel Zou wrote:
> Fix the following sparse warning:
>
> drivers/iommu/msm_iommu.c:37:1: warning: symbol 'msm_iommu_lock' was not
> declared.
>
> The msm_iommu_lock is only referenced within msm_iommu.c,
> so it should be static.
>
Reviewed-by: Bjorn Andersson
Rega
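The change the warning asks for is a one-line linkage fix; a sketch of its shape, assuming the lock is defined with DEFINE_SPINLOCK():

-DEFINE_SPINLOCK(msm_iommu_lock);
+static DEFINE_SPINLOCK(msm_iommu_lock);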
Call pci_fixup_final after of_pci_iommu_init, which allocates the
iommu_fwnode. Some platform devices appear as PCI devices but are
actually on the AMBA bus, and they need a fixup in
drivers/pci/quirks.c that handles the iommu_fwnode.
So call pci_fixup_final after the iommu_fwnode has been allocated.
Signed-off-by: Zhangfei Ga
Call pci_fixup_final after iommu_fwspec_init, which allocates the
iommu_fwnode. Some platform devices appear as PCI devices but are
actually on the AMBA bus, and they need a fixup in
drivers/pci/quirks.c that handles the iommu_fwnode.
So call pci_fixup_final after the iommu_fwnode has been allocated.
Signed-off-by: Zhangfei Ga
Some platform devices appear as PCI devices but are
actually on the AMBA bus, and they need a fixup in
drivers/pci/quirks.c that handles the iommu_fwnode.
So call pci_fixup_final after the iommu_fwnode has been allocated.
For example:
Hisilicon platform devices need a fixup in
drivers/pci/quirks.c
+static void quirk_huawe
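The hunk above is cut off; as a rough sketch, a final-phase PCI quirk generally has this shape (the vendor/device IDs and body below are placeholders, not the actual HiSilicon quirk):

static void quirk_example_amba_dev(struct pci_dev *pdev)
{
        /* runs only once pci_fixup_final is invoked, i.e. after the
         * iommu_fwnode has been set up and can be consulted here
         */
        dev_info(&pdev->dev, "applying AMBA-behind-PCI fixup\n");
}
DECLARE_PCI_FIXUP_FINAL(0x1234, 0x5678, quirk_example_amba_dev);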
When a PASID is stopped or terminated, there can be pending PRQs
(requests that haven't received responses) in the software and the
remapping hardware. The pending page requests must be drained
so that the PASID can be reused. Chapter 7.10 of the VT-d
specification describes the software steps to
The current qi_submit_sync() only supports a single invalidation descriptor
per submission and appends a wait descriptor after each submission to
poll for hardware completion. This extends the qi_submit_sync() helper
to support multiple descriptors, and adds an option so that the caller
could specify the Pa
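A rough sketch of the calling pattern such a batched helper enables; the exact extended signature and option flags are assumed here, not quoted from the patch:

/* two invalidation descriptors submitted in one go, with a single wait
 * descriptor appended by the helper instead of one per descriptor
 */
struct qi_desc desc[2] = {};

/* fill desc[0] (e.g. a PASID-cache invalidation) and desc[1] (e.g. an
 * IOTLB invalidation for the same PASID) ... */

qi_submit_sync(iommu, desc, ARRAY_SIZE(desc), 0 /* options */);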
Export the invalidation queue internals of each IOMMU device through
debugfs.
Example of such dump on a Skylake machine:
$ sudo cat /sys/kernel/debug/iommu/intel/invalidation_queue
Invalidation queue on IOMMU: dmar1
Base: 0x1672c9000  Head: 80  Tail: 80
Index qw0
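A generic sketch of how such a read-only dump is typically wired up in debugfs (names below are illustrative, not the actual file added by the patch):

#include <linux/debugfs.h>
#include <linux/seq_file.h>

static int invalidation_queue_show(struct seq_file *m, void *unused)
{
        /* print base, head, tail and the queued descriptors per IOMMU */
        seq_puts(m, "Invalidation queue on IOMMU: ...\n");
        return 0;
}
DEFINE_SHOW_ATTRIBUTE(invalidation_queue);

static void example_debugfs_init(struct dentry *intel_dir)
{
        debugfs_create_file("invalidation_queue", 0444, intel_dir,
                            NULL, &invalidation_queue_fops);
}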
When a PASID is stopped or terminated, there can be pending PRQs
(requests that haven't received responses) in remapping hardware.
This adds an interface to drain page requests and calls it when a
PASID is terminated.
Signed-off-by: Jacob Pan
Signed-off-by: Liu Yi L
Signed-off-by: Lu Baolu
---
An IOTLB flush is already included in the PASID tear-down and page request
drain process. There is no need to flush again.
Signed-off-by: Jacob Pan
Signed-off-by: Lu Baolu
Reviewed-by: Kevin Tian
---
drivers/iommu/intel-svm.c | 6 +-
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git
When a PASID is used for SVA by the device, it's possible that the PASID
entry is cleared before the device flushes all ongoing DMA requests. The
IOMMU should tolerate and ignore the non-recoverable faults caused by the
untranslated requests from this device.
For example, when an exception happens
Quoting Sibi Sankar (2020-05-11 10:55:32)
> The modem remote processor has two access paths to DDR. One path is
> directly connected to DDR and another path goes through an SMMU. The
> SMMU path is configured to be a direct mapping because it's used by
> various peripherals in the modem subsystem.
On Mon 11 May 10:55 PDT 2020, Sibi Sankar wrote:
> The modem remote processor has two access paths to DDR. One path is
> directly connected to DDR and another path goes through an SMMU. The
> SMMU path is configured to be a direct mapping because it's used by
> various peripherals in the modem sub
On 2020-05-06 6:46 pm, Jean-Philippe Brucker wrote:
Some SMMUv3 implementations embed the Perf Monitor Group Registers (PMCG)
inside the first 64kB region of the SMMU. Since the PMCG is managed by a
separate driver, this layout causes resource reservation conflicts
during boot.
To avoid this conflic
The modem remote processor has two access paths to DDR. One path is
directly connected to DDR and another path goes through an SMMU. The
SMMU path is configured to be a direct mapping because it's used by
various peripherals in the modem subsystem. Typically this direct
mapping is configured static
On Fri, May 08, 2020 at 08:40:40AM -0700, Rob Clark wrote:
> On Fri, May 8, 2020 at 8:32 AM Rob Clark wrote:
> >
> > On Thu, May 7, 2020 at 5:54 AM Will Deacon wrote:
> > >
> > > On Thu, May 07, 2020 at 11:55:54AM +0100, Robin Murphy wrote:
> > > > On 2020-05-07 11:14 am, Sai Prakash Ranjan wrote
acpi_dev_hid_uid_match() expects a NULL pointer for the UID if it doesn't
exist. The acpihid_map_entry contains a char buffer for holding the
UID. If no UID was provided in the IVRS table, this buffer will be
zeroed. If we pass in a null string, acpi_dev_hid_uid_match() will
return false because it wil
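A sketch of the calling pattern described above, assuming a map entry with a zero-filled UID buffer (the helper name is hypothetical):

static bool entry_matches(struct acpi_device *adev, const char *hid,
                          const char *uid_buf)
{
        /* an all-zero UID buffer means "no UID was given in IVRS":
         * pass NULL rather than an empty string so the match can succeed
         */
        return acpi_dev_hid_uid_match(adev, hid, *uid_buf ? uid_buf : NULL);
}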
From: Thierry Reding
The host1x bus implemented on Tegra SoCs is primarily an abstraction to
create a logical device from multiple platform devices. Since the devices
in such a setup are typically hierarchical, DMA setup still needs to be
done so that DMA masks can be properly inherited, but we don
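One common way to give such bus-created logical devices a DMA setup is to run the standard OF DMA configuration against their device tree node; a sketch under that assumption (not the literal host1x change):

static int example_bus_dma_configure(struct device *dev)
{
        /* derive DMA masks and offsets from the DT node so they are
         * inherited by the logical device
         */
        if (dev->of_node)
                return of_dma_configure(dev, dev->of_node, true);
        return 0;
}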
On 05/05/2020 09:45, Marek Szyprowski wrote:
The Documentation/DMA-API-HOWTO.txt states that dma_map_sg returns the
number of entries created in the DMA address space. However, the
subsequent calls to dma_sync_sg_for_{device,cpu} and dma_unmap_sg must be
called with the original number of the e
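A short sketch of the rule being restated here: the value returned by dma_map_sg() is only for walking the mapped segments, while the sync and unmap calls take the original entry count:

int mapped = dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);

if (mapped == 0)
        return -ENOMEM;

/* hand "mapped" segments to the hardware ... */

dma_sync_sg_for_cpu(dev, sgl, nents, DMA_TO_DEVICE);    /* original nents */
dma_unmap_sg(dev, sgl, nents, DMA_TO_DEVICE);           /* original nents */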
The IVRS parsing code always tries to read 255 bytes from memory when
retrieving the ACPI device path, and assumes that firmware
provides a zero-terminated string. Both of those are bugs: the entry
is likely to be shorter than 255 bytes, and zero-termination is not
guaranteed.
With Acer SF314-
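A sketch of the safer pattern implied above: copy no more than the entry actually contains and terminate explicitly (names are placeholders):

static void copy_acpi_path(char *dst, size_t dst_size,
                           const u8 *src, size_t entry_len)
{
        size_t n = min(entry_len, dst_size - 1);

        memcpy(dst, src, n);
        dst[n] = '\0';
}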
On 5/9/2020 12:25 AM, Vijayanand Jitta wrote:
>
>
> On 5/7/2020 6:54 PM, Robin Murphy wrote:
>> On 2020-05-06 9:01 pm, vji...@codeaurora.org wrote:
>>> From: Vijayanand Jitta
>>>
>>> Whenever a new IOVA alloc request comes, the IOVA is always searched
>>> from the cached node and the nodes which a
From: Vijayanand Jitta
Whenever a new IOVA alloc request comes, the IOVA is always searched
from the cached node and the nodes which are previous to the cached
node. So, even if there is free IOVA space available in the nodes
which are next to the cached node, IOVA allocation can still fail
because of thi
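A generic illustration of the retry idea, not the actual iova.c code: search below the cached hint first, then fall back to the whole range so free space above the hint is not missed:

static long alloc_range(const unsigned long *free_map, long n,
                        long cached_hint, long want)
{
        long i, run = 0;

        /* pass 1: only look below the cached hint */
        for (i = cached_hint - 1; i >= 0; i--) {
                run = free_map[i] ? run + 1 : 0;
                if (run == want)
                        return i;
        }

        /* pass 2: retry over the full range, so space above the hint
         * (never examined in pass 1) is not missed
         */
        run = 0;
        for (i = n - 1; i >= 0; i--) {
                run = free_map[i] ? run + 1 : 0;
                if (run == want)
                        return i;
        }
        return -1;
}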