Sorry for replying after such a long time, and thanks for your advice. Your
proposed change does seem to make the code cleaner and more efficient.

    I repeated move_freepages_block() 2,000,000 times on the VM and counted
jiffies. The average was about 12,000 both before and after the change. I
think that is probably because I'm using the Sparse Memory Model, so
pfn_to_page() is not time-consuming.
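
The measurement was roughly along the following lines. The exact test harness
is not reproduced here; the helper below is only an illustrative sketch and
assumes it runs from a debug hook inside mm/ with zone->lock held, and the
MIGRATE_MOVABLE target type is an arbitrary choice for the example:

/*
 * Illustrative sketch only: not the actual harness used for the numbers
 * above. Assumes zone->lock is held by the caller and that the page lies
 * inside the given zone.
 */
#include <linux/jiffies.h>
#include <linux/mmzone.h>
#include <linux/page-isolation.h>
#include <linux/printk.h>

static void time_move_freepages_block(struct zone *zone, struct page *page)
{
	unsigned long start = jiffies;
	int i;

	/* Call the function under test repeatedly and report elapsed jiffies. */
	for (i = 0; i < 2000000; i++)
		move_freepages_block(zone, page, MIGRATE_MOVABLE, NULL);

	pr_info("move_freepages_block x 2000000: %lu jiffies\n",
		jiffies - start);
}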


On 2021/3/23 20:54, Matthew Wilcox wrote:
> On Tue, Mar 23, 2021 at 09:12:15PM +0800, Liu Shixin wrote:
>> From: Kefeng Wang <wangkefeng.w...@huawei.com>
>>
>> The start_pfn and end_pfn are already available in move_freepages_block(),
>> there is no need to go back and forth between page and pfn in move_freepages
>> and move_freepages_block, and pfn_valid_within() should validate pfn first
>> before touching the page.
> This looks good to me:
>
> Reviewed-by: Matthew Wilcox (Oracle) <wi...@infradead.org>
>
>>  static int move_freepages(struct zone *zone,
>> -                      struct page *start_page, struct page *end_page,
>> +                      unsigned long start_pfn, unsigned long end_pfn,
>>                        int migratetype, int *num_movable)
>>  {
>>      struct page *page;
>> +    unsigned long pfn;
>>      unsigned int order;
>>      int pages_moved = 0;
>>  
>> -    for (page = start_page; page <= end_page;) {
>> -            if (!pfn_valid_within(page_to_pfn(page))) {
>> -                    page++;
>> +    for (pfn = start_pfn; pfn <= end_pfn;) {
>> +            if (!pfn_valid_within(pfn)) {
>> +                    pfn++;
>>                      continue;
>>              }
>>  
>> +            page = pfn_to_page(pfn);
> I wonder if this wouldn't be even better if we did:
>
>       struct page *start_page = pfn_to_page(start_pfn);
>
>       for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
>               struct page *page = start_page + pfn - start_pfn;
>
>               if (!pfn_valid_within(pfn))
>                       continue;
>
>> -
>> -                    page++;
>> +                    pfn++;
>>                      continue;
> ... then we can drop the increment of pfn here
>
>>              }
>>  
>> @@ -2458,7 +2459,7 @@ static int move_freepages(struct zone *zone,
>>  
>>              order = buddy_order(page);
>>              move_to_free_list(page, zone, order, migratetype);
>> -            page += 1 << order;
>> +            pfn += 1 << order;
> ... and change this to pfn += (1 << order) - 1;
>
> Do you have any numbers to quantify the benefit of this change?
> .
>
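
Folding your suggestion in, the loop would look roughly like this. This is
only a sketch, with the VM_BUG_ON zone checks from the original function
abbreviated as comments:

static int move_freepages(struct zone *zone,
			  unsigned long start_pfn, unsigned long end_pfn,
			  int migratetype, int *num_movable)
{
	struct page *start_page = pfn_to_page(start_pfn);
	unsigned long pfn;
	unsigned int order;
	int pages_moved = 0;

	for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
		struct page *page = start_page + (pfn - start_pfn);

		if (!pfn_valid_within(pfn))
			continue;

		if (!PageBuddy(page)) {
			/*
			 * As in the original: pages that look isolatable for
			 * migration are assumed movable without trying to
			 * actually isolate them.
			 */
			if (num_movable &&
			    (PageLRU(page) || __PageMovable(page)))
				(*num_movable)++;
			continue;
		}

		/* VM_BUG_ON zone checks from the original are elided here. */

		order = buddy_order(page);
		move_to_free_list(page, zone, order, migratetype);
		/* Skip the rest of this buddy; the loop's pfn++ adds the 1. */
		pfn += (1 << order) - 1;
		pages_moved += 1 << order;
	}

	return pages_moved;
}

Since a pageblock never crosses a memory section, the start_page +
(pfn - start_pfn) arithmetic should hold even without SPARSEMEM_VMEMMAP.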
