DMI cacheability is very confused on x86.

dmi_early_remap() uses early_ioremap(), which uses FIXMAP_PAGE_IO, which
is __PAGE_KERNEL_IO, which is __PAGE_KERNEL, which is a cached (WB)
mapping.  Don't ask me why this makes any sense.

dmi_remap() uses ioremap(), which requests an uncached mapping.
However, on non-EFI systems, the DMI data generally lives between
0xf0000 and 0x100000.  That is inside the legacy ISA range, which
triggers a special case in the PAT code that overrides the cache
mode requested by ioremap() and forces a WB mapping.
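
The override in question looks roughly like this.  It's a simplified
paraphrase of the check in arch/x86/mm/pat.c's reserve_memtype(); the
names and details here are from memory, not verbatim:

    /* Low ISA region is always mapped WB; don't track it. */
    if (x86_platform.is_untracked_pat_range(start, end)) {
        /* 0xf0000-0x100000 falls in here on the default platform,
         * so the uncached type that ioremap() asked for is
         * silently replaced. */
        if (new_type)
            *new_type = _PAGE_CACHE_MODE_WB;
        return 0;
    }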

On a UEFI boot, however, the DMI table can live at any physical
address.  On my laptop it's around 0x77dd0000.  That's nowhere near
the legacy ISA range, so the uncached type that ioremap() implicitly
requests is honored and we end up with a UC- mapping.

UC- is a very, very slow way to read from main memory, so dmi_walk()
is likely to take much longer than necessary.
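
To see why, dmi_walk() in drivers/firmware/dmi_scan.c is roughly the
following (paraphrased, not verbatim): it maps the whole table with
dmi_remap() and reads every structure through that mapping, so the
mapping's cache mode dominates its cost:

    int dmi_walk(void (*decode)(const struct dmi_header *, void *),
                 void *private_data)
    {
        u8 *buf;

        if (!dmi_available)
            return -1;

        /* With dmi_remap == ioremap, this is a UC- mapping on UEFI. */
        buf = dmi_remap(dmi_base, dmi_len);
        if (buf == NULL)
            return -1;

        /* Every byte of the table is fetched through buf. */
        dmi_decode_table(buf, decode, private_data);

        dmi_unmap(buf);
        return 0;
    }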

Given that we already do early cached DMI reads even on UEFI, it seems
safe to just ask for cached access here too.  Switch dmi_remap() to
ioremap_cache().

I haven't tried to benchmark this, but I'd guess it saves several
milliseconds of boot time.

Signed-off-by: Andy Lutomirski <l...@kernel.org>
---
 arch/x86/include/asm/dmi.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/dmi.h b/arch/x86/include/asm/dmi.h
index 535192f6bfad..3c69fed215c5 100644
--- a/arch/x86/include/asm/dmi.h
+++ b/arch/x86/include/asm/dmi.h
@@ -15,7 +15,7 @@ static __always_inline __init void *dmi_alloc(unsigned len)
 /* Use early IO mappings for DMI because it's initialized early */
 #define dmi_early_remap                early_ioremap
 #define dmi_early_unmap                early_iounmap
-#define dmi_remap              ioremap
+#define dmi_remap              ioremap_cache
 #define dmi_unmap              iounmap
 
 #endif /* _ASM_X86_DMI_H */
-- 
2.5.0
