When a PTE is cleared, the write may be torn, or performed by multiple writes.
In addition, on 32-bit kernels, the clearing is currently expressed as a single
64-bit write, which the compiler splits into two 32-bit stores whose order is
not guaranteed.
The generated code does not currently seem to cause a problem, but one may
still occur in theory. Avoid this scenario by using WRITE_ONCE(), and order
the writes on 32-bit kernels.

Signed-off-by: Nadav Amit <na...@vmware.com>
---
 drivers/iommu/intel-iommu.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index e1852e8..4f488a5 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -326,9 +326,26 @@ struct dma_pte {
 	u64 val;
 };
 
+#ifndef CONFIG_64BIT
+union split_dma_pte {
+	struct {
+		u32 val_low;
+		u32 val_high;
+	};
+	u64 val;
+};
+#endif
+
 static inline void dma_clear_pte(struct dma_pte *pte)
 {
-	pte->val = 0;
+#ifdef CONFIG_64BIT
+	WRITE_ONCE(pte->val, 0);
+#else
+	union split_dma_pte *sdma_pte = (union split_dma_pte *)pte;
+
+	WRITE_ONCE(sdma_pte->val_low, 0);
+	sdma_pte->val_high = 0;
+#endif
 }
 
 static inline u64 dma_pte_addr(struct dma_pte *pte)
-- 
2.7.4

_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu