On Wed, 2018-01-03 at 18:00 -0800, Andi Kleen wrote:
> @@ -269,8 +270,9 @@ entry_SYSCALL_64_fastpath:
>  	 * This call instruction is handled specially in stub_ptregs_64.
>  	 * It might end up jumping to the slow path.  If it jumps, RAX
>  	 * and all argument registers are clobbered.
> -	 */
> -	call	*sys_call_table(, %rax, 8)
> +	 */
> +	movq	sys_call_table(, %rax, 8), %rax
> +	NOSPEC_CALL %rax
>  .Lentry_SYSCALL_64_after_fastpath_call:
> 
>  	movq	%rax, RAX(%rsp)
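For anyone following along: the quoted hunk loads the syscall target into %rax and dispatches through a NOSPEC_CALL macro instead of an indirect call. A minimal sketch of the kind of retpoline thunk such a macro could expand to, per the published retpoline technique (the thunk name and labels here are illustrative, not the macro's actual expansion):

```asm
/* Call site: instead of "call *%rax", call through a per-register thunk. */
	call	__x86_indirect_thunk_rax

__x86_indirect_thunk_rax:
	call	.Ldo_rop	/* push a safe return address */
.Lspec_trap:
	pause			/* speculation is captured in this loop */
	lfence
	jmp	.Lspec_trap
.Ldo_rop:
	movq	%rax, (%rsp)	/* overwrite return address with real target */
	ret			/* "return" to the target, via the RSB */
```

The point of the construction is that the indirect branch predictor is never consulted: the `ret` is steered by the return stack buffer, and any mis-speculation lands in the pause/lfence trap loop.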
Now I *know* you're working from an older version of my patches, and haven't just deliberately reverted the CET support (and alternatives?). I fixed that mis-indentation of the closing */, and Intel were even shipping that version in the latest pre-embargo patch tarball. Should I post a new series and call it 'v3'? Was there anything specific you changed other than cosmetics like s/CALL_THUNK/NOSPEC_CALL/? And, obviously, the fact that it stands alone instead of being based on the IBRS microcode support?