On 3/21/21 8:44 PM, Matthew Wilcox wrote:
> On Mon, Mar 22, 2021 at 03:51:52AM +0100, Ingo Molnar wrote:
>> +++ b/mm/huge_memory.c
>> @@ -1794,7 +1794,7 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
>>  /*
>>   * Returns
>>   *  - 0 if PMD could not be locked
>> - *  - 1 if PMD was locked but protections unchange and TLB flush unnecessary
>> + *  - 1 if PMD was locked but protections unchanged and TLB flush unnecessary
>>   *  - HPAGE_PMD_NR is protections changed and TLB flush necessary
> 
> s/is/if/
> 
>> @@ -5306,7 +5306,7 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
>>  
>>      /*
>>       * vma need span at least one aligned PUD size and the start,end range
>> -     * must at least partialy within it.
>> +     * must at least partially within it.
> 
>        * vma needs to span at least one aligned PUD size, and the range
>        * must be at least partially within it.
> 
>>  /*
>>   * swapon tell device that all the old swap contents can be discarded,
>> - * to allow the swap device to optimize its wear-levelling.
>> + * to allow the swap device to optimize its wear-leveling.
>>   */
> 
> Levelling is British English, leveling is American English.  We don't
> usually "correct" one into the other.

How about "labelled" (from mm/kasan/shadow.c):

@@ -384,7 +384,7 @@ static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
  * How does this work?
  * -------------------
  *
- * We have a region that is page aligned, labelled as A.
+ * We have a region that is page aligned, labeled as A.
  * That might not map onto the shadow in a way that is page-aligned:


-- 
~Randy
