Re: [PATCH 2/2] mm: numa: Do not clear PTEs or PMDs for NUMA hinting faults

2015-03-05 Thread Dave Chinner
On Thu, Mar 05, 2015 at 11:54:52PM +, Mel Gorman wrote:
> Dave Chinner reported the following on https://lkml.org/lkml/2015/3/1/226
> 
>Across the board the 4.0-rc1 numbers are much slower, and the
>degradation is far worse when using the large memory footprint
>configs. Perf points straight at the cause - this is from 4.0-rc1
>on the "-o bhash=101073" config:
> 
>-   56.07%  56.07%  [kernel]  [k] default_send_IPI_mask_sequence_phys
>   - default_send_IPI_mask_sequence_phys
>  - 99.99% physflat_send_IPI_mask
> - 99.37% native_send_call_func_ipi
>  smp_call_function_many
>- native_flush_tlb_others
>   - 99.85% flush_tlb_page
>ptep_clear_flush
>try_to_unmap_one
>rmap_walk
>try_to_unmap
>migrate_pages
>migrate_misplaced_page
>  - handle_mm_fault
> - 99.73% __do_page_fault
>  trace_do_page_fault
>  do_async_page_fault
>+ async_page_fault
>   0.63% native_send_call_func_single_ipi
>  generic_exec_single
>  smp_call_function_single
> 
> This was bisected to commit 4d9424669946 ("mm: convert p[te|md]_mknonnuma
> and remaining page table manipulations") which clears PTEs and PMDs to make
> them PROT_NONE. This is tidy but tests on some benchmarks indicate that
> there are many more hinting faults trapped resulting in excessive migration.
> This is the result for the old autonuma benchmark for example.

[snip]

Doesn't fix the problem. Runtime is slightly improved (16m45s vs 17m35s)
but it's still much slower than 3.19 (6m5s).

Stats and profiles still roughly the same:

360,228  migrate:mm_migrate_pages ( +-  4.28% )

-   52.69%  52.69%  [kernel]  [k] default_send_IPI_mask_sequence_phys
 default_send_IPI_mask_sequence_phys
   - physflat_send_IPI_mask
  - 97.28% native_send_call_func_ipi
   smp_call_function_many
   native_flush_tlb_others
   flush_tlb_page
   ptep_clear_flush
   try_to_unmap_one
   rmap_walk
   try_to_unmap
   migrate_pages
   migrate_misplaced_page
 - handle_mm_fault
- 99.59% __do_page_fault
 trace_do_page_fault
 do_async_page_fault
   + async_page_fault
  + 2.72% native_send_call_func_single_ipi

numa_hit 36678767
numa_miss 905234
numa_foreign 905234
numa_interleave 14802
numa_local 36656791
numa_other 927210
numa_pte_updates 92168450
numa_huge_pte_updates 0
numa_hint_faults 87573926
numa_hint_faults_local 29730293
numa_pages_migrated 30195890
pgmigrate_success 30195890
pgmigrate_fail 0

Cheers,

Dave.
-- 
Dave Chinner
da...@fromorbit.com
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 2/2] mm: numa: Do not clear PTEs or PMDs for NUMA hinting faults

2015-03-05 Thread Linus Torvalds
On Thu, Mar 5, 2015 at 3:54 PM, Mel Gorman  wrote:
> if (!prot_numa || !pmd_protnone(*pmd)) {
> -                       entry = pmdp_get_and_clear_notify(mm, addr, pmd);
> -                       entry = pmd_modify(entry, newprot);
> +                       /*
> +                        * NUMA hinting update can avoid a clear and defer the
> +                        * flush as it is not a functional correctness issue if
> +                        * access occurs after the update and this avoids
> +                        * spurious faults.
> +                        */
> +                       if (prot_numa) {
> +                               entry = *pmd;
> +                               entry = pmd_mkprotnone(entry);
> +                       } else {
> +                               entry = pmdp_get_and_clear_notify(mm, addr, pmd);
> +                               entry = pmd_modify(entry, newprot);
> +                               BUG_ON(pmd_write(entry));
> +                       }
> +
>                         ret = HPAGE_PMD_NR;
>                         set_pmd_at(mm, addr, pmd, entry);
> -                       BUG_ON(pmd_write(entry));

So I don't think this is right, nor is the new pte code.

You cannot just read the old pte entry, change it, and write it back.
That's fundamentally racy, and can drop any concurrent dirty or
accessed bit setting. And there are no locks you can use to protect
against that, since the accessed and dirty bits are set by hardware.

Now, losing the accessed bit doesn't matter - it's a small race, and
not a correctness issue. But potentially losing dirty bits is a data
loss problem.

Did the old prot_numa code do this too? Because if it did, it sounds
like it was just buggy.

Linus