On 05/14/2020 05:33 PM, Mike Rapoport wrote:
> On Thu, May 14, 2020 at 10:17:57AM +0800, Bibo Mao wrote:
>> From: bibo mao <maob...@loongson.cn>
>>
>> If two threads read the same memory and a TLB miss happens, one thread
>> fills the PTE entry while the other reads the new PTE value during page
>> fault handling. The PTE value may be updated before the page fault is
>> handled, so the process still needs to update the TLB.
>>
>> This patch also defines flush_tlb_fix_spurious_fault as empty, since it is
>> not necessary to flush the page on all CPUs.
> 
> I'm not familiar enough with MIPS TLB workings, but it seems that adding
> a MIPS version of ptep_set_access_flags() and relaxing
> flush_tlb_fix_spurious_fault() are two separate patches.

Well, I will split it into two patches.
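
For reference, the generic fallback in mm/pgtable-generic.c behaves roughly
like the sketch below (paraphrased from memory, not a verbatim copy): when
the PTE is already up to date it returns without touching the TLB, so on a
software-managed TLB such as MIPS the stale entry that caused the fault can
stay installed. That is the case the update_mmu_cache() call in the else
branch of the patch is meant to handle.

/* Rough sketch of the generic ptep_set_access_flags() fallback
 * (paraphrased, not a verbatim copy of mm/pgtable-generic.c).
 */
int ptep_set_access_flags(struct vm_area_struct *vma,
                          unsigned long address, pte_t *ptep,
                          pte_t entry, int dirty)
{
        int changed = !pte_same(*ptep, entry);

        if (changed) {
                set_pte_at(vma->vm_mm, address, ptep, entry);
                flush_tlb_fix_spurious_fault(vma, address);
        }
        /* When the PTE did not change, nothing is done here, so a
         * software-managed TLB may still hold the stale entry that
         * triggered the fault.
         */
        return changed;
}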


regards
bibo, mao
> 
>> Signed-off-by: Bibo Mao <maob...@loongson.cn>
>> ---
>>  arch/mips/include/asm/pgtable.h | 20 ++++++++++++++++++++
>>  1 file changed, 20 insertions(+)
>>
>> diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
>> index aab0ec1..d0a4940 100644
>> --- a/arch/mips/include/asm/pgtable.h
>> +++ b/arch/mips/include/asm/pgtable.h
>> @@ -635,6 +635,26 @@ static inline pmd_t pmd_mknotpresent(pmd_t pmd)
>>      return pmd;
>>  }
>>  
>> +#define flush_tlb_fix_spurious_fault(vma, address) do { } while (0)
>> +
>> +#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
>> +int ptep_set_access_flags(struct vm_area_struct *vma,
>> +                    unsigned long address, pte_t *ptep,
>> +                    pte_t entry, int dirty)
>> +{
>> +    int changed = !pte_same(*ptep, entry);
>> +
>> +    if (changed)
>> +            set_pte_at(vma->vm_mm, address, ptep, entry);
>> +    else
>> +            /* Update the TLB with the latest PTE value; the TLB entry
>> +             * is stale since it was loaded before the page fault.
>> +             */
>> +            update_mmu_cache(vma, address, ptep);
>> +
>> +    return changed;
>> +}
>> +
>>  /*
>>   * The generic version pmdp_huge_get_and_clear uses a version of pmd_clear() with a
>>   * different prototype.
>> -- 
>> 1.8.3.1
>>
> 
