Waiman Long <[email protected]> writes:

> On 12/02/2015 03:02 PM, [email protected] wrote:
>> Waiman Long <[email protected]> writes:
>>> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
>>> index efd3bfc..e679895 100644
>>> --- a/kernel/sched/sched.h
>>> +++ b/kernel/sched/sched.h
>>> @@ -248,7 +248,12 @@ struct task_group {
>>>     unsigned long shares;
>>>
>>>   #ifdef    CONFIG_SMP
>>> -   atomic_long_t load_avg;
>>> +   /*
>>> +    * load_avg can be heavily contended at clock tick time, so put
>>> +    * it in its own cacheline separated from the fields above which
>>> +    * will also be accessed at each tick.
>>> +    */
>>> +   atomic_long_t load_avg ____cacheline_aligned;
>>>   #endif
>>>   #endif
>> I suppose the question is whether it would be better to just move this so
>> that it winds up on a separate cacheline without the extra empty space, though
>> that would likely be more fragile and unclear.
>
> I have been thinking about that too. The problem is that anything in the
> same cacheline as load_avg that also has to be accessed at clock tick time
> will cause the same contention problem. In the current layout, the fields
> after load_avg are the rt stuff as well as some list head structures and
> pointers. The rt stuff should be more or less mutually exclusive with the
> CFS load_avg in terms of usage, and the list head structures and pointers
> don't seem to be that frequently accessed. So this is the right place to
> start a new cacheline boundary.
>
> Cheers,
> Longman

Yeah, this is a good place to start a new boundary; I was just saying that
you could probably remove the empty space by reordering the fields, though
the resulting layout would be less logical and harder for readers to follow.
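Purely to illustrate the trade-off, here is a toy user-space sketch. It is
not the real task_group layout: the fields around load_avg are placeholders,
and 64 bytes stands in for the cacheline size that ____cacheline_aligned
actually derives from SMP_CACHE_BYTES in the kernel.

/*
 * Toy sketch only; not the kernel struct.  Field names other than
 * shares/load_avg are made up, CACHELINE is an assumed value.
 */
#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>

#define CACHELINE 64

/* What the patch does: pad so load_avg starts a fresh cacheline,
 * regardless of what sits in front of it. */
struct tg_padded {
	unsigned long shares;			/* accessed at each tick */
	_Alignas(CACHELINE) atomic_long load_avg; /* own cacheline */
	struct { void *next, *prev; } list;	/* colder fields follow */
};

/* The reordering alternative: push colder fields in front of load_avg
 * so it lands on a different line without explicit padding.  In this
 * toy the preceding fields are far too small to actually push load_avg
 * onto its own line, which is exactly why the approach is fragile. */
struct tg_reordered {
	unsigned long shares;			/* accessed at each tick */
	struct { void *next, *prev; } list;	/* colder fields */
	atomic_long load_avg;
};

int main(void)
{
	printf("padded:    size %zu, load_avg at offset %zu\n",
	       sizeof(struct tg_padded),
	       offsetof(struct tg_padded, load_avg));
	printf("reordered: size %zu, load_avg at offset %zu\n",
	       sizeof(struct tg_reordered),
	       offsetof(struct tg_reordered, load_avg));
	return 0;
}

On a typical 64-bit build the first line shows the hole the alignment buys
(load_avg at offset 64, the struct blown up to two cachelines), while the
second shows load_avg still packed into the first line unless the fields in
front of it happen to fill it out, which is the fragility I meant.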