On Tue, Sep 04, 2018 at 06:18:55PM +0200, Thomas Gleixner wrote:
> On Tue, 4 Sep 2018, Jiri Kosina wrote:
> >  	if (tsk && tsk->mm &&
> >  	    tsk->mm->context.ctx_id != last_ctx_id &&
> > -	    get_dumpable(tsk->mm) != SUID_DUMP_USER)
> > +	    ___ptrace_may_access(current, tsk, PTRACE_MODE_IBPB))
> 
> Uurgh. If X86_FEATURE_USE_IBPB is not enabled, then the whole
> __ptrace_may_access() overhead is just done for nothing.
> 
> >  		indirect_branch_prediction_barrier();
> 
> This really wants to be runtime patched:
> 
> 	if (static_cpu_has(X86_FEATURE_USE_IBPB))
> 		stop_speculation(tsk, last_ctx_id);
> 
> and have an inline for that:
> 
> static inline void stop_speculation(struct task_struct *tsk, u64 last_ctx_id)
> {
> 	if (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
> 	    ___ptrace_may_access(current, tsk, PTRACE_MODE_IBPB))
> 		indirect_branch_prediction_barrier();
> }
> 
> which also makes the whole mess readable.
How about something like:

	if (static_cpu_has(X86_FEATURE_USE_IBPB) &&
	    need_ibpb(tsk, last_ctx_id))
		indirect_branch_prediction_barrier();

where:

static inline bool need_ibpb(struct task_struct *next, u64 last_ctx_id)
{
	return next && next->mm &&
	       next->mm->context.ctx_id != last_ctx_id &&
	       __ptrace_may_access(next, PTRACE_MODE_IBPB);
}

I don't much like "stop_speculation" for a name here.
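To make the shape of the proposal concrete, here is a minimal userspace sketch of the gating pattern being discussed: a cheap feature test short-circuits the more expensive predicate, and the barrier fires only when switching to a different user mm. All the types and helpers below are hypothetical stand-ins for the kernel's `task_struct`, `static_cpu_has()`, and `__ptrace_may_access()`, not the real implementations.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for the kernel structures discussed above. */
struct mm_struct { uint64_t ctx_id; };
struct task_struct { struct mm_struct *mm; };

/* Models static_cpu_has(X86_FEATURE_USE_IBPB). */
static bool cpu_has_use_ibpb = true;

/* Counts barriers so the behavior is observable in this sketch. */
static int barriers_issued;

static void indirect_branch_prediction_barrier(void)
{
	barriers_issued++;
}

/* Models __ptrace_may_access(next, PTRACE_MODE_IBPB); always allows here. */
static bool ptrace_may_access_ibpb(struct task_struct *next)
{
	(void)next;
	return true;
}

/*
 * The predicate from the reply: an IBPB is needed only when switching to
 * a task with a user mm whose ctx_id differs from the last one that ran.
 */
static bool need_ibpb(struct task_struct *next, uint64_t last_ctx_id)
{
	return next && next->mm &&
	       next->mm->ctx_id != last_ctx_id &&
	       ptrace_may_access_ibpb(next);
}

static void switch_mm_sketch(struct task_struct *next, uint64_t last_ctx_id)
{
	/*
	 * Cheap feature test first: when IBPB is not in use, the
	 * ptrace-access check is never evaluated at all, which is the
	 * point of the static_cpu_has() gate.
	 */
	if (cpu_has_use_ibpb && need_ibpb(next, last_ctx_id))
		indirect_branch_prediction_barrier();
}
```

The short-circuit ordering matters: with `&&`, disabling the feature bit skips `need_ibpb()` entirely, matching the concern about paying the `__ptrace_may_access()` cost for nothing.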