CVSROOT:        /cvs
Module name:    src
Changes by:     guent...@cvs.openbsd.org        2018/07/12 08:11:11

Modified files:
        sys/arch/amd64/amd64: cpu.c identcpu.c locore.S machdep.c pmap.c 
                              vector.S 
        sys/arch/amd64/conf: ld.script 
        sys/arch/amd64/include: asm.h codepatch.h 

Log message:
Reorganize the Meltdown entry and exit trampolines for syscall and
traps so that the "mov %rax,%cr3" is followed by an infinite loop
which is avoided because the mapping of the code being executed is
changed.  This means the sysretq/iretq isn't even present in that
flow of instructions in the kernel mapping, so userspace code can't
be speculatively reached on the kernel mapping; it also totally
eliminates the conditional jump over the %cr3 change that supported
CPUs without the Meltdown vulnerability.  The return paths were probably
vulnerable to Spectre v1 (and v1.1/1.2) style attacks, speculatively
executing user code post-system-call with the kernel mappings, thus
creating cache/TLB/etc side-effects.
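
As a rough sketch of the layout described above (the section and label
names below are invented for illustration, not the actual locore.S or
ld.script symbols), assuming two page-aligned copies of the return
trampoline sit at the same virtual address, one mapped by the kernel
page tables and one by the minimal user page tables:

	/* Copy visible through the kernel page tables. */
	.section .kern_copy,"ax",@progbits
	.balign	4096
	.globl	tramp_sysret
tramp_sysret:
	movq	%rax,%cr3	/* %rax holds the user page table; fetch now uses the other copy */
	/* Kernel view: no sysretq after the %cr3 write, only a spin. */
1:	pause
	jmp	1b

	/* Copy visible through the user page tables, identical layout. */
	.section .user_copy,"ax",@progbits
	.balign	4096
	movq	%rax,%cr3	/* same bytes as above, only present to keep offsets equal */
	sysretq			/* what actually runs once the mapping has changed */

Speculation on the kernel mapping can only fall into the spin loop;
the sysretq is reachable solely through the user mapping, i.e. after
the %cr3 write has architecturally taken effect.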

Would like to apply this technique to the interrupt stubs too, but
I'm hitting a bug in clang's assembler which misaligns the code and
symbols.

While here, when on a CPU not vulnerable to Meltdown, codepatch out
the unnecessary bits in cpu_switchto().
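
A minimal sketch of such a codepatch region (the tag name and the
instructions inside the range are placeholders, not the actual
cpu_switchto() sequence; assumes <machine/codepatch.h> is included as
locore.S does).  At boot, a CPU without the Meltdown vulnerability
would get the bracketed range rewritten to NOPs, e.g. via
codepatch_nop():

	CODEPATCH_START
	movq	%rcx,%rax		/* placeholder for the Meltdown-only work */
	movq	%rax,CPUVAR(USER_CR3)	/* USER_CR3 offset here is illustrative */
	CODEPATCH_END(CPTAG_EXAMPLE)	/* CPTAG_EXAMPLE is a made-up tag */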

Inspiration from sf@, refined over dinner with theo
ok mlarkin@ deraadt@
