> On Jul 19, 2019, at 11:38 AM, Dave Hansen <dave.han...@intel.com> wrote:
> 
> On 7/18/19 5:58 PM, Nadav Amit wrote:
>> +struct tlb_state_shared {
>> +    /*
>> +     * We can be in one of several states:
>> +     *
>> +     *  - Actively using an mm.  Our CPU's bit will be set in
>> +     *    mm_cpumask(loaded_mm) and is_lazy == false;
>> +     *
>> +     *  - Not using a real mm.  loaded_mm == &init_mm.  Our CPU's bit
>> +     *    will not be set in mm_cpumask(&init_mm) and is_lazy == false.
>> +     *
>> +     *  - Lazily using a real mm.  loaded_mm != &init_mm, our bit
>> +     *    is set in mm_cpumask(loaded_mm), but is_lazy == true.
>> +     *    We're heuristically guessing that the CR3 load we
>> +     *    skipped more than makes up for the overhead added by
>> +     *    lazy mode.
>> +     */
>> +    bool is_lazy;
>> +};
>> +DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_state_shared, cpu_tlbstate_shared);
> 
> Could we get a comment about what "shared" means and why we need shared
> state?
> 
> Should we change 'tlb_state' to 'tlb_state_private'?

Andy said that the lazy TLB optimizations might soon need more shared
state. If you prefer, I can move is_lazy outside of tlb_state and not put
it in any alternative struct.
