On Tue, Sep 17, 2013 at 12:53:44PM +0200, Ingo Molnar wrote:
> 
> * Peter Zijlstra <[email protected]> wrote:
> 
> > These patches optimize preempt_enable by firstly folding the preempt and
> > need_resched tests into one -- this should work for all architectures. And
> > secondly by providing per-arch preempt_count implementations; with x86 using
> > per-cpu preempt_count for fastest access.
> > 
> > These patches have been boot tested on CONFIG_PREEMPT=y x86_64 and survive
> > building an x86_64-defconfig kernel.
> > 
> >    text    data     bss     filename
> > 11387014  1454776 1187840 defconfig-build/vmlinux.before
> > 11352294  1454776 1187840 defconfig-build/vmlinux.after
> 
> That's a 0.3% size improvement (and most of the improvement is in
> hotpaths), despite GCC being somewhat stupid about not allowing us to
> mark asm goto targets as cold paths, which causes some unnecessary
> register shuffling in some cases, right?
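
To illustrate the folding mentioned in the cover letter above: the
need_resched state lives as an inverted bit inside the preempt count
itself, so a single decrement-and-test answers both "did the count hit
zero" and "is a reschedule pending" at once. A rough userspace sketch of
that idea -- the names are illustrative only, and the real series keeps a
per-cpu __preempt_count on x86 rather than a plain global:

#include <stdbool.h>
#include <stdio.h>

#define DEMO_NEED_RESCHED	0x80000000u	/* inverted: cleared when a resched is wanted */

/* a plain global here; per-cpu __preempt_count in the actual patches */
static unsigned int demo_count = DEMO_NEED_RESCHED;

static void demo_set_need_resched(void)
{
	demo_count &= ~DEMO_NEED_RESCHED;	/* clearing the bit means "resched needed" */
}

static void demo_preempt_disable(void)
{
	demo_count++;				/* low bits hold the usual nesting count */
}

static bool demo_preempt_enable(void)
{
	/* one decrement, one test: true means "drop into the scheduler" */
	return --demo_count == 0;
}

int main(void)
{
	demo_preempt_disable();
	printf("%d\n", demo_preempt_enable());	/* 0: count hit zero, but no resched wanted */

	demo_preempt_disable();
	demo_set_need_resched();
	printf("%d\n", demo_preempt_enable());	/* 1: count hit zero and a resched is wanted */

	return 0;
}

That's the whole trick: there is no separate TIF_NEED_RESCHED test left
on the preempt_enable() path.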

I'm not entirely sure where the bloat in 1/11 comes from; several
functions look like they avoid stack variables in favour of using more
registers, which creates more push/pop on the entry/exit paths. Others
I'm not entirely sure what happens with.

But it does look like the unlikely() thing still works even with the
asm goto; you'll note that the call to preempt_schedule is out of line.
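
As a rough illustration of that shape (GCC asm goto on x86-64; the names
and the plain global counter are mine, not the series'), the slow-path
call sits behind a forward conditional jump and never appears on the
straight-line fast path:

#include <stdio.h>

static unsigned int demo_count = 1;		/* count 1, resched pending (inverted bit clear) */

static void demo_preempt_schedule(void)
{
	puts("reschedule");			/* stand-in for the out-of-line slow path */
}

static void demo_preempt_enable(void)
{
	asm goto("decl %0\n\t"
		 "jz %l[resched]"
		 : /* no outputs allowed with plain asm goto */
		 : "m" (demo_count)
		 : "memory", "cc"
		 : resched);
	return;					/* fast path: fall straight through */
resched:
	demo_preempt_schedule();		/* reached only via the jz */
}

int main(void)
{
	demo_preempt_enable();			/* count hits zero -> "reschedule" */
	return 0;
}

(Compile with something like gcc -O2; both GCC and recent clang accept
asm goto.)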