On Tue, Nov 06, 2018 at 09:20:19AM +0100, Peter Zijlstra wrote:

> By our current way of thinking, kmap_atomic simply is not correct.

Something like the below, which weirdly still builds an x86_32 kernel;
although I imagine a very sad one.

---

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ba7e3464ee92..e273f3879d04 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1449,6 +1449,16 @@ config PAGE_OFFSET
 config HIGHMEM
        def_bool y
        depends on X86_32 && (HIGHMEM64G || HIGHMEM4G)
+       depends on !SMP || BROKEN
+       help
+         By current thinking kmap_atomic() is broken, since it relies on
+         per-CPU PTEs in the global (kernel) address space and on CPU-local
+         TLB invalidates to completely invalidate those PTEs. However,
+         nothing guarantees that other CPUs will not speculatively touch
+         'our' fixmap PTEs and load them into their TLBs, after which our
+         local TLB invalidate will not invalidate them.
+
+         There are AMD chips that will #MC on inconsistent TLB states.
 
 config X86_PAE
        bool "PAE (Physical Address Extension) Support"
