On 01/23/2018 08:52 AM, David Woodhouse wrote:
> When they advertise the IA32_ARCH_CAPABILITIES MSR and it has the RDCL_NO
> bit set, they don't need KPTI either.
>
> Signed-off-by: David Woodhouse <d...@amazon.co.uk>
> ---
>  arch/x86/kernel/cpu/common.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index e5d66e9..c05d0fe 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -900,8 +900,14 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
>
>  	setup_force_cpu_cap(X86_FEATURE_ALWAYS);
>
> -	if (c->x86_vendor != X86_VENDOR_AMD)
> -		setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
> +	if (c->x86_vendor != X86_VENDOR_AMD) {
> +		u64 ia32_cap = 0;
> +
> +		if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
> +			rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
> +		if (!(ia32_cap & ARCH_CAP_RDCL_NO))
> +			setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
> +	}
I'd really rather we break this out into a nice, linear set of
true/false conditions:

bool early_cpu_vulnerable_meltdown(struct cpuinfo_x86 *c)
{
	u64 ia32_cap = 0;

	/* AMD processors are not subject to Meltdown exploit: */
	if (c->x86_vendor == X86_VENDOR_AMD)
		return false;

	/* Assume all remaining CPUs not enumerating are vulnerable: */
	if (!cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
		return true;

	/*
	 * Does the CPU explicitly enumerate that it is not vulnerable
	 * to Rogue Data Cache Load (aka Meltdown)?
	 */
	rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
	if (ia32_cap & ARCH_CAP_RDCL_NO)
		return false;

	/* Assume everything else is vulnerable. */
	return true;
}

Then we get a nice:

	if (early_cpu_vulnerable_meltdown(c))
		setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
	setup_force_cpu_bug(X86_BUG_SPECTRE_V2);

which clearly shows that Meltdown is special.