Re: [PATCH 7/8] sched, net: Fixup busy_loop_us_clock()

2013-11-29 Thread Eliezer Tamir
On 28/11/2013 19:40, Peter Zijlstra wrote:
> On Thu, Nov 28, 2013 at 06:49:00PM +0200, Eliezer Tamir wrote:
>> I have tested this patch and I see a performance regression of about
>> 1.5%.
>
> Cute, can you qualify your metric? Since this is a poll loop the only
> metric that would be interesting

Re: [PATCH 7/8] sched, net: Fixup busy_loop_us_clock()

2013-11-28 Thread Peter Zijlstra
On Thu, Nov 28, 2013 at 06:40:01PM +0100, Peter Zijlstra wrote:
> That said; let me see if I can come up with a few patches to optimize
> the entire thing; that'd be something we all benefit from.

OK, so the below compiles, I currently haven't got time to see if it
runs or not. I've got it as ser

Re: [PATCH 7/8] sched, net: Fixup busy_loop_us_clock()

2013-11-28 Thread Peter Zijlstra
On Thu, Nov 28, 2013 at 06:49:00PM +0200, Eliezer Tamir wrote:
> I have tested this patch and I see a performance regression of about
> 1.5%.

Cute, can you qualify your metric? Since this is a poll loop the only
metric that would be interesting is the response latency. Is that
what's increased by

Re: [PATCH 7/8] sched, net: Fixup busy_loop_us_clock()

2013-11-28 Thread Eliezer Tamir
On 26/11/2013 17:57, Peter Zijlstra wrote:
>
> Replace sched_clock() usage with local_clock() which has a bounded
> drift between CPUs (<2 jiffies).
>

Peter,

I have tested this patch and I see a performance regression of about
1.5%. Maybe it would be better, rather than testing in the fast pat
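
For context, a minimal sketch of where that clock read sits in the busy-poll
fast path. This is a simplified rendering of the helpers in
include/net/busy_poll.h from that era, written from memory rather than quoted
from the thread (the exact names and the ACCESS_ONCE() usage are assumptions);
the point is that busy_loop_us_clock() is called on every spin of the poll
loop, so even a small per-call cost is measurable:

/* Deadline is computed once, when busy polling starts; the sysctl value
 * is in microseconds, matching busy_loop_us_clock().
 */
static inline unsigned long busy_loop_end_time(void)
{
	return busy_loop_us_clock() + ACCESS_ONCE(sysctl_net_busy_poll);
}

/* Re-read the clock on every iteration of the poll loop and stop once
 * the deadline has passed.
 */
static inline bool busy_loop_timeout(unsigned long end_time)
{
	unsigned long now = busy_loop_us_clock();

	return time_after(now, end_time);
}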

[PATCH 7/8] sched, net: Fixup busy_loop_us_clock()

2013-11-26 Thread Peter Zijlstra
The only valid use of preempt_enable_no_resched() is if the very next line is
schedule() or if we know preemption cannot actually be enabled by that
statement due to known more preempt_count 'refs'.

This busy_poll stuff looks to be completely and utterly broken, sched_clock()
can return utter garb
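
For reference, a sketch of what the helper reduces to after a fixup along
these lines: the open-coded preempt_disable_notrace() / sched_clock() /
preempt_enable_no_resched_notrace() sequence is dropped in favour of
local_clock(), which handles preemption itself and has bounded cross-CPU
drift. This is an illustration against include/net/busy_poll.h as it looked
at the time, assuming the existing >>10 shift is kept; it is not a verbatim
copy of the posted patch:

/*
 * Post-fixup busy_loop_us_clock(): local_clock() replaces sched_clock()
 * plus the preempt games, since it already does the right thing with
 * preemption and its cross-CPU drift is bounded (< 2 jiffies).
 */
static inline u64 busy_loop_us_clock(void)
{
	/* >> 10 is the existing cheap ns-to-approximate-us conversion. */
	return local_clock() >> 10;
}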