On Sun, Feb 15, 2015 at 03:46:54PM +0800, Huang Ying wrote:
> FYI, we noticed the below changes on
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git locking/core
> commit 871a6bb4916fef3123b6ff749b0dc82680fb0d2a ("mutex: In mutex_spin_on_owner(), return true when owner changes")
> 
> 
> testbox/testcase/testparams: wsm/will-it-scale/performance-writeseek3
> 
> e07e0d4cb0c4bfe8  871a6bb4916fef3123b6ff749b  
> ----------------  --------------------------  
>          %stddev     %change         %stddev
>              \          |                \  
>   24972759 ±  2%     -98.3%     417134 ±  9%  will-it-scale.time.voluntary_context_switches
>       2223 ± 49%    +209.6%       6884 ± 10%  will-it-scale.time.involuntary_context_switches
>        542 ± 32%     +91.3%       1037 ±  0%  will-it-scale.time.system_time
>        186 ± 30%     +86.3%        347 ±  0%  will-it-scale.time.percent_of_cpu_this_job_got
>      26.11 ±  5%     -22.7%      20.18 ±  2%  will-it-scale.time.user_time
>       0.09 ±  1%     -18.2%       0.07 ±  1%  will-it-scale.scalability
>     783528 ±  0%      -1.8%     769550 ±  0%  will-it-scale.per_process_ops

>      12.27 ± 10%    +492.7%      72.73 ±  1%  perf-profile.cpu-cycles.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.generic_file_write_iter.new_sync_write
>       3.22 ± 26%   +1718.0%      58.50 ±  1%  perf-profile.cpu-cycles.osq_lock.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.generic_file_write_iter


Do you have the results for more/different performance tests for this
commit?

Jason mentioned seeing +5% on one (fserver).

So if, across multiple tests, we see an average improvement, we might
trade that for this one regression.

If, on the other hand, we see a net negative, we'll have to do something
about this.

