From: Xiyu Yang
[ Upstream commit 7c8f176d6a3fa18aa0f8875da6f7c672ed2a8554 ]
The reference counting issue happens in several exception handling paths
of arm_smmu_iova_to_phys_hard(). When those error scenarios occur, the
function forgets to decrease the refcount of "smmu" increased by
From: Xiyu Yang
[ Upstream commit 1adf30f198c26539a62d761e45af72cde570413d ]
arm_smmu_rpm_get() invokes pm_runtime_get_sync(), which increases the
refcount of the "smmu" even though the return value is less than 0.
The reference counting issue happens in some error handling paths of
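Both fixes above apply the same cleanup pattern. A minimal sketch of the idea, not the upstream diffs (atos_translate() is a hypothetical stand-in for the hardware translation sequence):

static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain,
					      dma_addr_t iova)
{
	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
	struct arm_smmu_device *smmu = smmu_domain->smmu;
	phys_addr_t phys = 0;

	if (arm_smmu_rpm_get(smmu) < 0)
		return 0;

	if (atos_translate(smmu, iova, &phys)) {	/* hypothetical helper */
		phys = 0;
		goto out;	/* before the fix: "return 0;" leaked the ref */
	}
out:
	arm_smmu_rpm_put(smmu);	/* balances arm_smmu_rpm_get() on every path */
	return phys;
}

For the second commit the subtlety is in arm_smmu_rpm_get() itself: pm_runtime_get_sync() takes the reference even when it fails, so the early "return 0" above is only safe if the wrapper drops the reference on error, which is what pm_runtime_resume_and_get() does.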
From: Eric Anholt
[ Upstream commit a242f4297cfe3f4589a7620dcd42cc503607fc6b ]
db820c wants to use the qcom smmu path to get HUPCF set (which keeps
the GPU from wedging and then sometimes wedging the kernel after a
page fault), but it doesn't have separate pagetables support yet in
drm/msm so
Hi Kevin,
A couple first pass comments...
On Fri, 9 Jul 2021 07:48:44 +
"Tian, Kevin" wrote:
> 2.2. /dev/vfio device uAPI
> ++++++++++++++++++++++++++
>
> /*
> * Bind a vfio_device to the specified IOMMU fd
> *
> * The user should provide a device cookie when calling this ioctl. The
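For illustration, one plausible shape for that bind call's argument; every name and field below is a guess sketched from the quoted text, not the RFC's actual definition:

/* Hypothetical layout, for illustration only -- not the proposed uAPI */
struct vfio_device_bind_iommu_fd {
	__u32	argsz;
	__u32	flags;
	__s32	iommu_fd;	/* fd obtained by opening /dev/iommu */
	__u32	pad;
	__u64	dev_cookie;	/* user-chosen cookie identifying this device */
};

The cookie would be how later operations and events on the IOMMU fd refer back to this particular device.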
On 2021-07-08 09:08, Joerg Roedel wrote:
> On Wed, Jul 07, 2021 at 01:00:13PM -0700, Doug Anderson wrote:
> > a) Nothing is inherently broken with my current approach.
> > b) My current approach doesn't make anybody terribly upset even if
> > nobody is totally in love with it.
Well, no, sorry :)
I don't
From: Ville Syrjälä
With the iommu driver disabling VT-d superpage it should be
safe to use FBC on SKL/BXT with VT-d otherwise enabled.
Cc: David Woodhouse
Cc: Lu Baolu
Cc: iommu@lists.linux-foundation.org
Signed-off-by: Ville Syrjälä
---
drivers/gpu/drm/i915/display/intel_fbc.c | 16
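The check being relaxed might be pictured like this sketch (not the actual i915 diff; intel_vtd_superpage_enabled() is a hypothetical stand-in for however the driver learns the superpage state):

static bool fbc_needs_vtd_wa(struct drm_i915_private *i915)
{
	/* FBC is only unsafe on SKL/BXT while VT-d superpage is in use */
	return intel_vtd_active() &&
	       intel_vtd_superpage_enabled() &&		/* hypothetical */
	       (IS_SKYLAKE(i915) || IS_BROXTON(i915));
}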
From: Ville Syrjälä
Skylake has known issues with VT-d superpage. Namely frame buffer
compression (FBC) can't be safely used when superpage is enabled.
Currently we're disabling FBC entirely when VT-d is active, but
I think just disabling superpage would be better since FBC can
save some power.
From: Ville Syrjälä
Broxton has known issues with VT-d superpage. Namely frame buffer
compression (FBC) can't be safely used when superpage is enabled.
Currently we're disabling FBC entirely when VT-d is active, but
I think just disabling superpage would be better since FBC can
save some power.
From: Ville Syrjälä
While running "gem_exec_big --r single" from igt-gpu-tools on
Geminilake as soon as a 2M mapping is made I tend to get a DMAR
write fault. Strangely the faulting address is always a 4K page
and usually very far away from the 2M page that got mapped.
But if no 2M mappings get
From: Ville Syrjälä
I ran into some kind of fail with VT-d superpage on Geminilake igfx,
so without any better ideas let's just disable it.
Additionally Skylake/Broxton igfx have known issues with VT-d
superpage as well, so let's disable it there as well. This should
let us re-enable frame
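On the iommu side, such a quirk could take roughly this shape (an illustrative sketch, not the posted patch):

/* Sketch: force 4K-only mappings for integrated graphics on affected parts */
static int domain_superpage_level(struct intel_iommu *iommu, struct device *dev)
{
	if (dev && dev_is_pci(dev) && IS_GFX_DEVICE(to_pci_dev(dev)))
		return 0;			/* no 2M/1G pages for igfx */
	return cap_super_page_val(iommu->cap);	/* levels the HW advertises */
}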
On Fri, Jul 09, 2021 at 11:26:53AM +0100, Robin Murphy wrote:
> On 2021-07-09 09:38, Ming Lei wrote:
> > Hello,
> >
> > I observed that NVMe performance is very bad when running fio on one
> > CPU(aarch64) in remote numa node compared with the nvme pci numa node.
> >
> > Please see the test
On Fri, Jul 09, 2021 at 11:16:14AM +0100, Russell King (Oracle) wrote:
> On Fri, Jul 09, 2021 at 04:38:09PM +0800, Ming Lei wrote:
> > I observed that NVMe performance is very bad when running fio on one
> > CPU(aarch64) in remote numa node compared with the nvme pci numa node.
>
> Have you
On Fri, Jul 09, 2021 at 10:17:25PM +0800, Lu Baolu wrote:
> On 2021/7/9 19:43, Wei Liu wrote:
> > When Microsoft Hypervisor runs on Intel platforms it needs to know the
> > reserved regions to program devices correctly. There is no reason to
> > duplicate intel_iommu_get_resv_regions. Export it.
>
On 2021/7/9 19:43, Wei Liu wrote:
> When Microsoft Hypervisor runs on Intel platforms it needs to know the
> reserved regions to program devices correctly. There is no reason to
> duplicate intel_iommu_get_resv_regions. Export it.
Why not use iommu_get_resv_regions()?
Best regards,
baolu
On Fri, Jul 09, 2021 at 01:56:46PM +0100, Robin Murphy wrote:
> On 2021-07-09 12:43, Wei Liu wrote:
> > Microsoft Hypervisor provides a set of hypercalls to manage device
> > domains. The root kernel should parse the DMAR so that it can program
> > the IOMMU (with hypercalls) correctly.
> >
> >
On Fri, Jul 09, 2021 at 01:46:19PM +0100, Robin Murphy wrote:
> On 2021-07-09 12:43, Wei Liu wrote:
> > Some devices may have been claimed by the hypervisor already. One such
> > example is that a user can assign a NIC for debugging purposes.
> >
> > Ideally Linux should be able to retrieve that
On 2021-07-09 12:43, Wei Liu wrote:
> Microsoft Hypervisor provides a set of hypercalls to manage device
> domains. The root kernel should parse the DMAR so that it can program
> the IOMMU (with hypercalls) correctly.
> The DMAR code was designed to work with Intel IOMMU only. Add two more
> parameters
On 2021-07-09 12:43, Wei Liu wrote:
> Some devices may have been claimed by the hypervisor already. One such
> example is that a user can assign a NIC for debugging purposes.
> Ideally Linux should be able to retrieve that information, but
> there is no way to do that yet. And designing that new
On 2021-07-09 12:04, John Garry wrote:
> On 09/07/2021 11:26, Robin Murphy wrote:
> > On 2021-07-09 09:38, Ming Lei wrote:
> > > Hello,
> > > I observed that NVMe performance is very bad when running fio on one
> > > CPU(aarch64) in remote numa node compared with the nvme pci numa node.
> > > Please see the test result[1]
Some devices may have been claimed by the hypervisor already. One such
example is that a user can assign a NIC for debugging purposes.
Ideally Linux should be able to retrieve that information, but
there is no way to do that yet. And designing that new mechanism is
going to take time.
Provide a
When Microsoft Hypervisor runs on Intel platforms it needs to know the
reserved regions to program devices correctly. There is no reason to
duplicate intel_iommu_get_resv_regions. Export it.
Signed-off-by: Wei Liu
---
drivers/iommu/intel/iommu.c | 5 +++--
include/linux/intel-iommu.h | 4
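The export itself is small; roughly (sketched from the description above, not the exact diff):

/* drivers/iommu/intel/iommu.c: was static, now shared */
void intel_iommu_get_resv_regions(struct device *device,
				  struct list_head *head)
{
	/* body unchanged: collect the RMRR/ISA regions relevant to @device */
}
EXPORT_SYMBOL_GPL(intel_iommu_get_resv_regions);

/* include/linux/intel-iommu.h */
extern void intel_iommu_get_resv_regions(struct device *device,
					 struct list_head *head);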
Microsoft Hypervisor provides a set of hypercalls to manage device
domains. Implement a type-1 IOMMU using those hypercalls.
Implement DMA remapping as the first step for this driver. Interrupt
remapping will come in a later stage.
Signed-off-by: Wei Liu
---
drivers/iommu/Kconfig | 14
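A hypothetical skeleton of how such a driver plugs into the IOMMU core; every hv_iommu_* symbol below is illustrative rather than taken from the posted series, while the ops fields match the iommu_ops of that era:

static const struct iommu_ops hv_iommu_ops = {
	.domain_alloc	= hv_iommu_domain_alloc,
	.domain_free	= hv_iommu_domain_free,
	.attach_dev	= hv_iommu_attach_dev,	/* attach-device-domain hypercall */
	.detach_dev	= hv_iommu_detach_dev,
	.map		= hv_iommu_map,		/* DMA-remapping map hypercall */
	.unmap		= hv_iommu_unmap,
	.probe_device	= hv_iommu_probe_device,
	.release_device	= hv_iommu_release_device,
	.pgsize_bitmap	= SZ_4K,		/* start with 4K granules only */
};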
Microsoft Hypervisor provides a set of hypercalls to manage device
domains. The root kernel should parse the DMAR so that it can program
the IOMMU (with hypercalls) correctly.
The DMAR code was designed to work with Intel IOMMU only. Add two more
parameters to make it useful to Microsoft
On 09/07/2021 11:26, Robin Murphy wrote:
> On 2021-07-09 09:38, Ming Lei wrote:
> > Hello,
> > I observed that NVMe performance is very bad when running fio on one
> > CPU(aarch64) in remote numa node compared with the nvme pci numa node.
> > Please see the test result[1] 327K vs. 34.9K.
> > Latency trace shows
On 2021-07-09 09:38, Ming Lei wrote:
> Hello,
> I observed that NVMe performance is very bad when running fio on one
> CPU(aarch64) in remote numa node compared with the nvme pci numa node.
> Please see the test result[1] 327K vs. 34.9K.
> Latency trace shows that one big difference is in
On Fri, Jul 09, 2021 at 04:38:09PM +0800, Ming Lei wrote:
> I observed that NVMe performance is very bad when running fio on one
> CPU(aarch64) in remote numa node compared with the nvme pci numa node.
Have you checked the effect of running a memory-heavy process using
memory from node 1 while
Hello,
I observed that NVMe performance is very bad when running fio on one
CPU(aarch64) in remote numa node compared with the nvme pci numa node.
Please see the test result[1] 327K vs. 34.9K.
Latency trace shows that one big difference is in iommu_dma_unmap_sg(),
nsecs vs 25437 nsecs.
/dev/iommu provides a unified interface for managing I/O page tables for
devices assigned to userspace. Device passthrough frameworks (VFIO, vDPA,
etc.) are expected to use this interface instead of creating their own logic to
isolate untrusted device DMAs initiated by userspace.
This
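As a purely hypothetical user-space flow implied by that summary (all request codes below are placeholders, not the RFC's uAPI):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

/* placeholder request codes for illustration -- NOT the proposed uAPI */
#define IOMMU_ALLOC_IOASID	_IO('@', 0)
#define VFIO_BIND_IOMMU_FD	_IO('@', 1)
#define IOMMU_MAP_DMA		_IO('@', 2)

static int setup_ioas(int vfio_device_fd, void *map_args)
{
	int iommu_fd = open("/dev/iommu", O_RDWR);	  /* one fd per I/O context */
	int ioasid = ioctl(iommu_fd, IOMMU_ALLOC_IOASID); /* new I/O address space */

	ioctl(vfio_device_fd, VFIO_BIND_IOMMU_FD, &iommu_fd); /* hand device over */
	ioctl(iommu_fd, IOMMU_MAP_DMA, map_args);	  /* populate the mappings */
	return ioasid;
}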
On Fri, Jul 9, 2021 at 2:14 AM Robin Murphy wrote:
>
> On 2021-07-08 10:29, Joerg Roedel wrote:
> > Adding Robin too.
> >
> > On Wed, Jul 07, 2021 at 04:55:01PM +0900, David Stevens wrote:
> >> Add support for per-domain dynamic pools of iommu bounce buffers to the
> >> dma-iommu API. This allows
On Thu, Jul 8, 2021 at 10:38 PM Lu Baolu wrote:
>
> Hi David,
>
> I like this idea. Thanks for proposing this.
>
> On 2021/7/7 15:55, David Stevens wrote:
> > Add support for per-domain dynamic pools of iommu bounce buffers to the
> > dma-iommu API. This allows iommu mappings to be reused while
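The concept under discussion, as a data-structure sketch (hypothetical names, not the posted code):

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* per-domain pool of bounce slots whose IOVAs are mapped once, then reused */
struct bounce_slot {
	struct list_head node;
	void *vaddr;		/* kernel buffer the CPU copies through */
	dma_addr_t iova;	/* pre-mapped IOVA handed to the device */
};

struct bounce_pool {
	spinlock_t lock;
	size_t slot_size;		/* fixed payload size for this pool */
	struct list_head free_slots;	/* idle slots awaiting the next map */
};

A map then takes a slot, copies the data in, and hands out slot->iova; unmap copies back and returns the slot, avoiding a per-request IOTLB flush.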