On Thu, Jul 18, 2013 at 12:06:39PM -0700, Jason Low wrote:

> N = 1
> -----
> 19.21%  reaim  [k] __read_lock_failed                     
> 14.79%  reaim  [k] mspin_lock                             
> 12.19%  reaim  [k] __write_lock_failed                    
> 7.87%   reaim  [k] _raw_spin_lock                          
> 2.03%   reaim  [k] start_this_handle                       
> 1.98%   reaim  [k] update_sd_lb_stats                      
> 1.92%   reaim  [k] mutex_spin_on_owner                     
> 1.86%   reaim  [k] update_cfs_rq_blocked_load              
> 1.14%   swapper  [k] intel_idle                              
> 1.10%   reaim  [.] add_long                                
> 1.09%   reaim  [.] add_int                                 
> 1.08%   reaim  [k] load_balance                            

But but but but.. what is causing this? The only thing we do more of with
N=1 is idle_balance(); where would that cause __{read,write}_lock_failed
and/or mspin_lock() contention like that?
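
(For reference: mspin_lock() is the MCS-style queued spinner used by the
mutex optimistic-spinning code, and __{read,write}_lock_failed are the arch
rwlock slowpaths. A generic MCS queue-lock sketch -- illustrative only, the
names and layout are not the kernel's mspin_lock() -- looks roughly like
this:)

/* Generic MCS queue-lock sketch; not the kernel's mspin_lock(). */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
        _Atomic(struct mcs_node *) next;
        atomic_bool locked;     /* true while this waiter must keep spinning */
};

static _Atomic(struct mcs_node *) mcs_tail = NULL;

static void mcs_lock(struct mcs_node *node)
{
        struct mcs_node *prev;

        atomic_store(&node->next, NULL);
        atomic_store(&node->locked, true);

        /* Join the tail of the queue; no previous waiter means we own the lock. */
        prev = atomic_exchange(&mcs_tail, node);
        if (!prev)
                return;

        atomic_store(&prev->next, node);

        /* Spin on our *own* node, not a shared word -- this loop is where the
         * cycles land in a profile when many CPUs wait on one mutex. */
        while (atomic_load(&node->locked))
                ;
}

static void mcs_unlock(struct mcs_node *node)
{
        struct mcs_node *next = atomic_load(&node->next);

        if (!next) {
                /* No known successor: clear the tail if we are still it. */
                struct mcs_node *expected = node;
                if (atomic_compare_exchange_strong(&mcs_tail, &expected,
                                                   (struct mcs_node *)NULL))
                        return;
                /* A successor is linking itself in; wait for the link. */
                while (!(next = atomic_load(&node->next)))
                        ;
        }
        atomic_store(&next->locked, false);
}

(Each waiter spinning on its own node is exactly why a pile-up on one mutex
shows up as mspin_lock() near the top of the profile rather than as cacheline
ping-pong on the mutex itself.)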

There shouldn't be a rwlock_t in the entire scheduler; those things suck
worse than quicksand.
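
(The read side is still an atomic RMW on the one shared lock word, so a pile
of readers bounce that cacheline between CPUs and a writer sits behind all of
them. A hypothetical userspace toy -- pthread_rwlock_t standing in for
rwlock_t, nothing to do with the actual scheduler code -- that shows the
effect:)

/* Hypothetical illustration: readers hammering a rwlock while one writer
 * tries to get in. Build with: cc -O2 -pthread rwlock_toy.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define READERS 8

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
static atomic_int stop;

static void *reader(void *arg)
{
        (void)arg;
        while (!atomic_load(&stop)) {
                pthread_rwlock_rdlock(&lock);   /* atomic RMW on the shared lock word */
                pthread_rwlock_unlock(&lock);   /* and another one on the way out */
        }
        return NULL;
}

static double now(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
        pthread_t tid[READERS];
        double t0, t1;
        int i;

        for (i = 0; i < READERS; i++)
                pthread_create(&tid[i], NULL, reader, NULL);

        t0 = now();
        pthread_rwlock_wrlock(&lock);   /* how long until the readers let us in? */
        t1 = now();
        pthread_rwlock_unlock(&lock);

        printf("writer waited %.6f s behind %d readers\n", t1 - t0, READERS);

        atomic_store(&stop, 1);
        for (i = 0; i < READERS; i++)
                pthread_join(tid[i], NULL);
        return 0;
}

(With glibc's default reader-preferring rwlock the writer can sit there a
surprisingly long time, and the readers spend their cycles bouncing the lock
word around even though they never conflict with each other.)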

If, as Rik thought, we had more rq->lock contention, then I'd have
expected _raw_spin_lock to be the highest entry.