From: Nadav Amit
When running on an AMD vIOMMU, it is better to avoid TLB flushes
of unmodified PTEs. vIOMMUs require the hypervisor to synchronize the
virtualized IOMMU's PTEs with the physical ones. This process induces
overheads.
AMD IOMMU allows us to flush any range that is aligned to
From: Nadav Amit
On virtual machines, software must flush the IOTLB after each page table
entry update.
The iommu_map_sg() code iterates through the given scatter-gather list
and invokes iommu_map() for each element in the scatter-gather list,
which calls into the vendor IOMMU driver through
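For illustration, a compilable userspace sketch of the cost difference this describes; the function names are hypothetical stand-ins, not the kernel API: flushing once per scatter-gather element versus mapping everything and flushing the covering range once.

#include <stddef.h>

struct sg_ent { unsigned long iova; size_t len; };

/* Hypothetical hooks standing in for driver map/flush callbacks. */
extern void map_one(unsigned long iova, size_t len);
extern void flush_range(unsigned long iova, size_t len);

/* Naive: one IOTLB flush per scatter-gather element (n flushes). */
static void map_sg_naive(const struct sg_ent *sg, int n)
{
	for (int i = 0; i < n; i++) {
		map_one(sg[i].iova, sg[i].len);
		flush_range(sg[i].iova, sg[i].len);
	}
}

/* Batched: map all elements, then flush the covering range once.
 * Assumes n >= 1 and an iova-sorted list. */
static void map_sg_batched(const struct sg_ent *sg, int n)
{
	unsigned long end = sg[n - 1].iova + sg[n - 1].len;

	for (int i = 0; i < n; i++)
		map_one(sg[i].iova, sg[i].len);
	flush_range(sg[0].iova, end - sg[0].iova);
}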
From: Nadav Amit
AMD's IOMMU can flush efficiently (i.e., in a single flush) any range.
This is in contrast, for instance, to Intel IOMMUs that have a limit on
the number of pages that can be flushed in a single flush. In addition,
AMD's IOMMU does not care about the page-size, so chan
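A small sketch of what "flush any range in one flush" buys: computing the smallest naturally aligned power-of-two block covering an arbitrary range, so a single command suffices at the cost of possible (harmless) over-invalidation. Plain C for illustration, not the driver's code.

#include <stdint.h>
#include <stdio.h>

/* Smallest naturally aligned power-of-two block covering [start, end]. */
static void covering_block(uint64_t start, uint64_t end,
			   uint64_t *base, uint64_t *size)
{
	int msb;

	if (start == end) {		/* degenerate single-byte range */
		*base = start;
		*size = 1;
		return;
	}
	/* The highest bit where start and end differ sets the block size. */
	msb = 63 - __builtin_clzll(start ^ end);
	*size = 1ull << (msb + 1);
	*base = start & ~(*size - 1);
}

int main(void)
{
	uint64_t base, size;

	covering_block(0x12000, 0x16fff, &base, &size);	/* 5 pages */
	printf("one flush: base=%#llx size=%#llx\n",
	       (unsigned long long)base, (unsigned long long)size);
	return 0;
}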
From: Nadav Amit
Refactor iommu_iotlb_gather_add_page() and factor out the logic that
detects whether an IOTLB gather range and a new range are disjoint. To be
used by the next patch that implements different gathering logic for
AMD.
Note that updating gather->pgsize unconditionally does
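A sketch of the factored-out disjointness test, modeled on the description above with assumed field names. Ranges are kept with inclusive ends, and "disjoint" means neither overlapping nor adjacent, so adjacent ranges still merge.

#include <stdbool.h>

struct gather { unsigned long start, end; };	/* end is inclusive */

static bool gather_is_disjoint(const struct gather *g,
			       unsigned long iova, unsigned long size)
{
	unsigned long start = iova, end = start + size - 1;

	/* An empty gather (end == 0) is never disjoint from anything. */
	return g->end != 0 &&
	       (end + 1 < g->start || start > g->end + 1);
}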
From: Nadav Amit
A recent patch attempted to enable selective page flushes on AMD IOMMU but
neglected to adapt amd_iommu_iotlb_sync() to use the selective flushes.
Adapt amd_iommu_iotlb_sync() to use selective flushes and change
amd_iommu_unmap() to collect the flushes. As a defensive measure, to
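A sketch of the shape such an adaptation could take; names are assumed and this is not the actual driver diff: unmap only records the range into the gather, and sync issues one ranged flush for what was gathered instead of flushing the whole domain.

struct gather { unsigned long start, end; };	/* start > end when empty */

extern void domain_flush_pages(unsigned long iova, unsigned long size);

static void iotlb_sync_selective(struct gather *g)
{
	if (g->end < g->start)
		return;			/* nothing was unmapped */
	domain_flush_pages(g->start, g->end - g->start + 1);
}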
of these helpers for clarity.
Cc: Joerg Roedel
Cc: Will Deacon
Cc: Jiajun Cao
Cc: Robin Murphy
Cc: Lu Baolu
Cc: iommu@lists.linux-foundation.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Robin Murphy
Signed-off-by: Nadav Amit
---
drivers/iommu/mtk_iommu.c | 6 +-
include/linux/i
From: Nadav Amit
Do not use flush-queue on virtualized environments, where the NpCache
capability of the IOMMU is set. This is required to reduce
virtualization overheads.
This change follows a similar change to Intel's VT-d; a detailed
explanation of the rationale is described in c
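A sketch of the decision being described, assuming an np_cache flag derived from the IOMMU's NpCache capability bit: when the (virtual) IOMMU caches not-present entries, every update needs an immediate flush anyway, so a deferred flush-queue only adds latency.

#include <stdbool.h>

extern bool np_cache;	/* assumed: set when the NpCache capability is found */

static bool use_flush_queue(void)
{
	/* Strict (immediate) flushing when running on a caching vIOMMU. */
	return !np_cache;
}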
From: Nadav Amit
The previous patch, commit 268aa4548277 ("iommu/amd: Page-specific
invalidations for more than one page") was supposed to enable
page-selective IOTLB flushes on AMD.
Besides the bug that was already fixed by commit a017c567915f
("iommu/amd: Fix wrong parentheses on page-specific invalidations")
> On Jul 13, 2021, at 11:40 AM, Robin Murphy wrote:
>
> On 2021-07-13 10:41, Nadav Amit wrote:
>> From: Nadav Amit
>> AMD's IOMMU can flush efficiently (i.e., in a single flush) any range.
>> This is in contrast, for instance, to Intel IOMMUs that have a limit
of these helpers for clarity.
Cc: Joerg Roedel
Cc: Will Deacon
Cc: Jiajun Cao
Cc: Robin Murphy
Cc: Lu Baolu
Cc: iommu@lists.linux-foundation.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Robin Murphy
Signed-off-by: Nadav Amit
---
Changes from Robin's version:
* Added iommu_io
> On Jun 15, 2021, at 12:20 PM, Robin Murphy wrote:
>
> On 2021-06-15 19:14, Nadav Amit wrote:
>>> On Jun 15, 2021, at 5:55 AM, Robin Murphy wrote:
>>>
>>> On 2021-06-07 19:25, Nadav Amit wrote:
>>>> From: Nadav Amit
>>>> AMD'
> On Jun 15, 2021, at 12:05 PM, Nadav Amit wrote:
>
>
>
>> On Jun 15, 2021, at 3:42 AM, Robin Murphy wrote:
>>
>> On 2021-06-07 19:25, Nadav Amit wrote:
>>> From: Robin Murphy
>>> The Mediatek driver is not the only one which might want a
> On Jun 15, 2021, at 3:42 AM, Robin Murphy wrote:
>
> On 2021-06-07 19:25, Nadav Amit wrote:
>> From: Robin Murphy
>> The Mediatek driver is not the only one which might want a basic
>> address-based gathering behaviour, so although it's arguably simple
>
> On Jun 15, 2021, at 3:29 AM, Will Deacon wrote:
>
> On Fri, Jun 11, 2021 at 09:50:31AM -0700, Nadav Amit wrote:
>>
>>
>>> On Jun 11, 2021, at 6:57 AM, Will Deacon wrote:
>>>
>>> On Mon, Jun 07, 2021 at 11:25:39AM -0700, Nadav Amit w
> On Jun 15, 2021, at 4:25 AM, Robin Murphy wrote:
>
> On 2021-06-07 19:25, Nadav Amit wrote:
>> From: Nadav Amit
>> On virtual machines, software must flush the IOTLB after each page table
>> entry update.
>> The iommu_map_sg() code iterates through the
> On Jun 15, 2021, at 6:08 AM, Robin Murphy wrote:
>
> On 2021-06-07 19:25, Nadav Amit wrote:
>> From: Nadav Amit
>> Do not use flush-queue on virtualized environments, where the NpCache
>> capability of the IOMMU is set. This is required to reduce
>> virtual
> On Jun 15, 2021, at 5:55 AM, Robin Murphy wrote:
>
> On 2021-06-07 19:25, Nadav Amit wrote:
>> From: Nadav Amit
>> AMD's IOMMU can flush efficiently (i.e., in a single flush) any range.
>> This is in contrast, for instance, to Intel IOMMUs that have a limit
> On Jun 11, 2021, at 6:57 AM, Will Deacon wrote:
>
> On Mon, Jun 07, 2021 at 11:25:39AM -0700, Nadav Amit wrote:
>> From: Nadav Amit
>>
>> Refactor iommu_iotlb_gather_add_page() and factor out the logic that
>> detects whether an IOTLB gather range and a new
From: Robin Murphy
The Mediatek driver is not the only one which might want a basic
address-based gathering behaviour, so although it's arguably simple
enough to open-code, let's factor it out for the sake of cleanliness.
Let's also take this opportunity to document the intent of these
helpers for clarity.
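A sketch of the basic address-based gathering helper being factored out; close in spirit to the kernel helper but not checked against the final code. The gather starts as an empty range with start = ULONG_MAX and end = 0, so the first addition simply adopts the new range.

#include <limits.h>

struct gather { unsigned long start, end; };	/* end is inclusive */

static void gather_init(struct gather *g)
{
	g->start = ULONG_MAX;
	g->end = 0;
}

static void gather_add_range(struct gather *g,
			     unsigned long iova, unsigned long size)
{
	unsigned long end = iova + size - 1;

	if (g->start > iova)
		g->start = iova;
	if (g->end < end)
		g->end = end;
}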
From: Nadav Amit
Refactor iommu_iotlb_gather_add_page() and factor out the logic that
detects whether an IOTLB gather range and a new range are disjoint. To be
used by the next patch that implements different gathering logic for
AMD.
Cc: Joerg Roedel
Cc: Will Deacon
Cc: Jiajun Cao
Cc: Robin
> On Jun 4, 2021, at 11:53 AM, Robin Murphy wrote:
>
> On 2021-06-04 18:10, Nadav Amit wrote:
>>> On Jun 4, 2021, at 8:38 AM, Joerg Roedel wrote:
>>>
>>> Hi Nadav,
>>>
>>> [Adding Robin]
>>>
>>> On Mon, May 24, 2021
> On Jun 4, 2021, at 8:38 AM, Joerg Roedel wrote:
>
> Hi Nadav,
>
> [Adding Robin]
>
> On Mon, May 24, 2021 at 03:41:55PM -0700, Nadav Amit wrote:
>> Nadav Amit (4):
>> iommu/amd: Fix wrong parentheses on page-specific invalidations
>
> This patch i
> On Jun 1, 2021, at 10:27 AM, Robin Murphy wrote:
>
> On 2021-06-01 17:39, Nadav Amit wrote:
>>> On Jun 1, 2021, at 8:59 AM, Robin Murphy wrote:
>>>
>>> On 2021-05-02 07:59, Nadav Amit wrote:
>>>> From: Nadav Amit
>>>> Some IOMM
> On Jun 1, 2021, at 8:59 AM, Robin Murphy wrote:
>
> On 2021-05-02 07:59, Nadav Amit wrote:
>> From: Nadav Amit
>> Some IOMMU architectures perform invalidations regardless of the page
>> size. In such architectures there is no need to sync when the page size
>
> On May 18, 2021, at 2:23 AM, Joerg Roedel wrote:
>
> On Sat, May 01, 2021 at 11:59:56PM -0700, Nadav Amit wrote:
>> From: Nadav Amit
>>
>> The logic to determine the mask of page-specific invalidations was
>> tested in userspace. As the code was copied in
> On May 27, 2021, at 10:57 AM, Joerg Roedel wrote:
>
> Hi Linus,
>
> The following changes since commit d07f6ca923ea0927a1024dfccafc5b53b61cfecc:
>
> Linux 5.13-rc2 (2021-05-16 15:27:44 -0700)
For 5.13-rc3? Not -rc4?
From: Nadav Amit
Some IOMMU architectures perform invalidations regardless of the page
size. In such architectures there is no need to sync when the page size
changes or to regard pgsize when making an interim flush in
iommu_iotlb_gather_add_page().
Add an "ignore_gather_pgsize" property
From: Nadav Amit
The logic to determine the mask of page-specific invalidations was
tested in userspace. As the code was copied into the kernel, the
parentheses were mistakenly set in the wrong place, resulting in the
wrong mask.
Fix it.
Cc: Joerg Roedel
Cc: Will Deacon
Cc: Jiajun Cao
Cc
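The class of bug being fixed, in a compilable toy form rather than the kernel's exact lines: misplaced parentheses turn an "all low bits set" mask into a single bit, producing the wrong invalidation range.

#include <assert.h>
#include <stdint.h>

static uint64_t mask_intended(unsigned n) { return (1ull << n) - 1; }
static uint64_t mask_buggy(unsigned n)    { return 1ull << (n - 1); }

int main(void)
{
	assert(mask_intended(3) == 0x7);	/* bits 0..2 set */
	assert(mask_buggy(3) == 0x4);		/* only bit 2: wrong mask */
	return 0;
}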
From: Nadav Amit
The previous patch, commit 268aa4548277 ("iommu/amd: Page-specific
invalidations for more than one page") was supposed to enable
page-selective IOTLB flushes on AMD.
The patch had an embarrassing bug, and I apologize for it.
Analysis as for why this bug did not
From: Nadav Amit
Some IOMMU architectures perform invalidations regardless of the page
size. In such architectures there is no need to sync when the page size
changes. In such architectures, there is no need to regard pgsize when
making an interim flush in iommu_iotlb_gather_add_page().
Add a
> On Apr 15, 2021, at 7:13 AM, Joerg Roedel wrote:
>
> On Thu, Apr 15, 2021 at 08:46:28AM +0800, Longpeng(Mike) wrote:
>> Fixes: 6491d4d02893 ("intel-iommu: Free old page tables before creating
>> superpage")
>> Cc: # v3.0+
>> Link:
>> https://lore.kernel.org/linux-iommu/670baaf8-4ff8-4e84-4
> On Apr 8, 2021, at 12:18 AM, Joerg Roedel wrote:
>
> Hi Nadav,
>
> On Wed, Apr 07, 2021 at 05:57:31PM +0000, Nadav Amit wrote:
>> I tested it on real bare-metal hardware. I ran some basic I/O workloads
>> with the IOMMU enabled, checkers enabled/disabled, and so
> On Apr 7, 2021, at 3:01 AM, Joerg Roedel wrote:
>
> On Tue, Mar 23, 2021 at 02:06:19PM -0700, Nadav Amit wrote:
>> From: Nadav Amit
>>
>> Currently, IOMMU invalidations and device-IOTLB invalidations using
>> AMD IOMMU fall back to full address-space inva
> On Mar 26, 2021, at 7:31 PM, Lu Baolu wrote:
>
> Hi Nadav,
>
> On 3/19/21 12:46 AM, Nadav Amit wrote:
>> So here is my guess:
>> Intel probably used as a basis for the IOTLB an implementation of
>> some other (regular) TLB design.
>> Intel SDM say
From: Nadav Amit
Currently, IOMMU invalidations and device-IOTLB invalidations using
AMD IOMMU fall back to full address-space invalidation if more than a
single page needs to be flushed.
Full flushes are especially inefficient when the IOMMU is virtualized by
a hypervisor, since it requires the
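A sketch of the behavior being replaced, with hypothetical helper names: anything larger than one page used to become a full address-space flush, which is what the patch avoids by issuing one ranged command.

#include <stddef.h>

extern void flush_one_page(unsigned long iova);
extern void flush_whole_address_space(void);	/* the costly fallback */
extern void flush_range_one_command(unsigned long iova, size_t size);

static void flush_before(unsigned long iova, size_t pages)
{
	if (pages == 1)
		flush_one_page(iova);
	else
		flush_whole_address_space();	/* punishing under a vIOMMU */
}

static void flush_after(unsigned long iova, size_t pages)
{
	flush_range_one_command(iova, pages * 4096);	/* assumed 4KB pages */
}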
> On Mar 17, 2021, at 9:46 PM, Longpeng (Mike, Cloud Infrastructure Service
> Product Dept.) wrote:
>
[Snip]
>
> NOTE, the magical thing happens...(*Operation-4*) we write the PTE
> of Operation-1 from 0 to 0x3, which means it can Read/Write, and then
> we trigger DMA read again; it succeeds and r
> On Mar 17, 2021, at 2:35 AM, Longpeng (Mike, Cloud Infrastructure Service
> Product Dept.) wrote:
>
> Hi Nadav,
>
>> -Original Message-
>> From: Nadav Amit [mailto:nadav.a...@gmail.com]
>>> reproduce the problem with high probability (~50%).
>
> On Mar 16, 2021, at 8:16 PM, Longpeng (Mike, Cloud Infrastructure Service
> Product Dept.) wrote:
>
> Hi guys,
>
> We find that the Intel iommu cache (i.e. iotlb) may work incorrectly in a special
> situation; it can cause DMA failures or return wrong data.
>
> The reproducer (based on Alex's vfio tes
> On Jan 27, 2021, at 3:25 AM, Lu Baolu wrote:
>
> On 2021/1/27 14:17, Nadav Amit wrote:
>> From: Nadav Amit
>> When an Intel IOMMU is virtualized, and a physical device is
>> passed-through to the VM, changes of the virtual IOMMU need to be
>> propagated to t
From: Nadav Amit
When an Intel IOMMU is virtualized, and a physical device is
passed-through to the VM, changes of the virtual IOMMU need to be
propagated to the physical IOMMU. The hypervisor therefore needs to
monitor PTE mappings in the IOMMU page-tables. Intel specifications
provide "ca
> On Jan 26, 2021, at 4:26 PM, Lu Baolu wrote:
>
> Hi Nadav,
>
> On 1/27/21 4:38 AM, Nadav Amit wrote:
>> From: Nadav Amit
>> When an Intel IOMMU is virtualized, and a physical device is
>> passed-through to the VM, changes of the virtual IOMMU need to be
iommu/vt-d: Allow interrupts from the entire bus for
aliased devices")
Cc: sta...@vger.kernel.org
Cc: Logan Gunthorpe
Cc: David Woodhouse
Cc: Joerg Roedel
Cc: Jacob Pan
Signed-off-by: Nadav Amit
---
drivers/iommu/intel_irq_remapping.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
> On Apr 17, 2019, at 10:26 AM, Ingo Molnar wrote:
>
>
> * Nadav Amit wrote:
>
>>> On Apr 17, 2019, at 10:09 AM, Ingo Molnar wrote:
>>>
>>>
>>> * Khalid Aziz wrote:
>>>
>>>>> I.e. the original motivation of the
> On Apr 17, 2019, at 10:09 AM, Ingo Molnar wrote:
>
>
> * Khalid Aziz wrote:
>
>>> I.e. the original motivation of the XPFO patches was to prevent execution
>>> of direct kernel mappings. Is this motivation still present if those
>>> mappings are non-executable?
>>>
>>> (Sorry if this has
> On Dec 6, 2018, at 9:43 AM, Jesper Dangaard Brouer wrote:
>
> On Thu, 6 Dec 2018 07:37:19 -0800
> Christoph Hellwig wrote:
>
>> Hi all,
>>
>> a while ago Jesper reported major performance regressions due to the
>> spectre v2 mitigations in his XDP forwarding workloads. A large part
>> of t
Sinan Kaya wrote:
> +Bjorn,
>
> On 5/3/2018 9:59 AM, Joerg Roedel wrote:
>> On Thu, May 03, 2018 at 09:46:34AM -0400, Sinan Kaya wrote:
>>> I also like the idea in general.
>>> Minor nit..
>>>
>>> Shouldn't this be an iommu parameter rather than a PCI kernel command line
>>> parameter?
>>> We
Jerome Glisse wrote:
> On Wed, Oct 04, 2017 at 01:42:15AM +0200, Andrea Arcangeli wrote:
>
>> I'd like some more explanation about the inner working of "that new
>> user" as per comment above.
>>
>> It would be enough to drop mmu_notifier_invalidate_range from above
>> without adding it to the
Andrea Arcangeli wrote:
> On Wed, Aug 30, 2017 at 08:47:19PM -0400, Jerome Glisse wrote:
>> On Wed, Aug 30, 2017 at 04:25:54PM -0700, Nadav Amit wrote:
>>> For both CoW and KSM, the correctness is maintained by calling
>>> ptep_clear_flush_notify(). If you defer the
[cc’ing IOMMU people, who for some reason were not cc’d]
Andrea Arcangeli wrote:
> On Wed, Aug 30, 2017 at 11:00:32AM -0700, Nadav Amit wrote:
>> It is not trivial to flush TLBs (primary or secondary) without holding the
>> page-table lock, and as we recently encountered t
Paolo Bonzini wrote:
>
>
> On 05/07/2016 18:27, Nadav Amit wrote:
>>> Although such hardware is old, there are some hypervisors that do not set
>>> the ecap.coherency of emulated IOMMUs. Yes, it is unwise, but there is no
>>> reason to further punish these
Nadav Amit wrote:
> Joerg Roedel wrote:
>
>> On Fri, Jun 24, 2016 at 06:13:14AM -0700, Nadav Amit wrote:
>>> According to the manual: "Hardware access to ... invalidation queue ...
>>> are always coherent."
>>>
>>> Remove unnecessary clflushes accordingly.
Joerg Roedel wrote:
> On Fri, Jun 24, 2016 at 06:13:14AM -0700, Nadav Amit wrote:
>> According to the manual: "Hardware access to ... invalidation queue ...
>> are always coherent."
>>
>> Remove unnecessary clflushes accordingly.
>
> It is one thing
According to the manual: "Hardware access to ... invalidation queue ...
are always coherent."
Remove unnecessary clflushes accordingly.
Signed-off-by: Nadav Amit
---
Build-tested since I do not have an IOMMU that does not support
coherency.
---
drivers/iommu/dmar.c | 5 -
1 fi
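The idiom in question, sketched with assumed names: cache-line flushes of descriptors are only needed when the IOMMU lacks coherent access, so structures the manual guarantees coherent access to can skip the flush entirely.

#include <stdbool.h>

extern bool hw_coherent;	/* assumed: from the coherency capability */
extern void clflush_range(void *addr, unsigned int size);

static void sync_for_hw(void *desc, unsigned int size)
{
	if (!hw_coherent)
		clflush_range(desc, size);	/* only for non-coherent IOMMUs */
}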
Avoid this scenario by using WRITE_ONCE, and order the writes on
32-bit kernels.
Signed-off-by: Nadav Amit
---
V3: Move split_dma_pte struct to dma_clear_pte (Joerg)
Add comments (Joerg)
V2: Use two WRITE_ONCE on 32-bit to avoid reordering
---
drivers/iommu/intel-iommu.c | 23
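A userspace sketch of the fix described above, not the exact kernel diff: one aligned 64-bit store on 64-bit builds; on 32-bit, two ordered 32-bit stores, clearing the word holding the present bit first so the hardware never sees a half-cleared, still-present PTE. Little-endian layout is assumed.

#include <stdint.h>

typedef uint64_t pteval_t;

static void pte_clear_safe(volatile pteval_t *pte)
{
#if UINTPTR_MAX > 0xffffffffu	/* 64-bit: a single store cannot tear */
	*pte = 0;
#else				/* 32-bit: order the two halves */
	volatile uint32_t *p = (volatile uint32_t *)pte;

	p[0] = 0;	/* low word first: clears the present bit */
	p[1] = 0;	/* then the high word (pfn bits) */
#endif
	/* The kernel would use WRITE_ONCE() for the same effect. */
}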
Avoid this scenario by using WRITE_ONCE, and order the writes on
32-bit kernels.
Signed-off-by: Nadav Amit
---
V2: Use two WRITE_ONCE on 32-bit to avoid reordering
---
drivers/iommu/intel-iommu.c | 19 ++-
1 file changed, 18 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu
Alan Stern wrote:
> On Thu, 9 Jun 2016, M G Berberich wrote:
>
>> Hello,
>>
>> With 4.7-rc2, after detecting a USB Mass Storage device
>>
>> [ 11.589843] usb-storage 4-2:1.0: USB Mass Storage device detected
>>
>> a constant flow of kernel-BUGS is reported (several per second).
>>
>> [
Ping?
Nadav Amit wrote:
> When a PTE is cleared, the write may be torn or performed by multiple
> writes. In addition, on 32-bit kernels, writes are currently performed
> using a single 64-bit write, which does not guarantee order.
>
> The byte-code right now does not seem to
Avoid this scenario by using WRITE_ONCE, and order the writes on
32-bit kernels.
Signed-off-by: Nadav Amit
---
drivers/iommu/intel-iommu.c | 19 ++-
1 file changed, 18 insertions(+), 1 deletion(-)
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index e1852e8
Radim Krčmář wrote:
> 2015-01-14 01:27+, Wu, Feng:
>>> the new hardware even doesn't consider the TPR for lowest priority
>>> interrupts delivery.
>>>
>>> A bold move ... what hardware was the first to do so?
>>
>> I think it was starting with Nehalem.
>
> Thanks, (Could be that QPI