On Wed, Jan 24, 2018 at 04:36:41PM -0800, Tim Chen wrote:

> These two patches provide optimization to skip IBPB for this
> commonly encountered scenario:
> We could switch to a kernel idle thread and then back to the original
> process such as:
> process A -> idle -> process A
>
> In such scenario, we do not have to do IBPB here even though the process
> is non-dumpable, as we are switching back to the same process after
> an hiatus.
>
> The cost is to have an extra pointer to track the last mm we were using before
> switching to the init_mm used by idle. But avoiding the extra IBPB
> is probably worth the extra memory for such a common scenario.
So we already track active_mm for kernel threads. I can't immediately
see where this fails for idle and your changelog doesn't say.

> @@ -229,15 +230,17 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>  	 * As an optimization flush indirect branches only when
>  	 * switching into processes that disable dumping.
>  	 *
> -	 * This will not flush branches when switching into kernel
> -	 * threads, but it would flush them when switching to the
> -	 * idle thread and back.
> +	 * This will not flush branches when switching into kernel
> +	 * threads. It will also not flush if we switch to idle
> +	 * thread and back to the same process. It will flush if we
> +	 * switch to a different non-dumpable process.

Whitespace damage.

>  	 *
>  	 * It might be useful to have a one-off cache here
>  	 * to also not flush the idle case, but we would need some
>  	 * kind of stable sequence number to remember the previous mm.
>  	 */
> -	if (tsk && tsk->mm && get_dumpable(tsk->mm) != SUID_DUMP_USER)
> +	if (tsk && tsk->mm && (tsk->mm != last)
> +		&& get_dumpable(tsk->mm) != SUID_DUMP_USER)

Broken coding style, operators go at the end of the previous line.

>  		indirect_branch_prediction_barrier();
>
>  	if (IS_ENABLED(CONFIG_VMAP_STACK)) {