Re: [PATCH 1/3] balancenuma: add stats for huge pmd numa faults

2012-11-26 Thread Hillf Danton
On 11/26/12, Mel Gorman  wrote:
> In my mind, the primary use of the counter is to estimate how many MB/sec
> are being copied.

The new counters are dropped, then, as they would help little.

Hillf
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 1/3] balancenuma: add stats for huge pmd numa faults

2012-11-26 Thread Mel Gorman
On Sun, Nov 25, 2012 at 02:14:12PM +0800, Hillf Danton wrote:
> On 11/24/12, Mel Gorman  wrote:
> > On Sat, Nov 24, 2012 at 12:17:03PM +0800, Hillf Danton wrote:
> >> A thp contributes 512 times more than a regular page to numa fault stats,
> >> so deserves its own vm event counter. THP migration is also accounted.
> >>
> >
> > I agree and mentioned it needed fixing. I did not create a new counter
> > but I properly account for PGMIGRATE_SUCCESS and PGMIGRATE_FAIL now. I
> > did not create a new NUMA_PAGE_MIGRATE counter because I didn't feel it
> > was necessary. Instead I just do this
> >
> > count_vm_events(PGMIGRATE_SUCCESS, HPAGE_PMD_NR);
> >
>
> It could be read as: 512 pages are successfully migrated (though in
> fact it is just one huge page).
> 

512 pages had to be copied, and copying is a big part of the cost.

> > count_vm_numa_events(NUMA_PAGE_MIGRATE, HPAGE_PMD_NR);
> >
>
> ditto, 512 pages go through migration (though actually only one page
> takes the hard journey).
> 

In my mind, the primary use of the counter is to estimate how many MB/sec
are being copied. If measured just once, the copy rate is averaged over the
duration of the test. If it is measured regularly, it can be determined
whether the copying happens in bursts or is steady throughout. For this,
just one counter is necessary as long as it counts the number of base
pages properly.

> That said, in short, the new counters are different and clearer.
> 

What new information does an extra counter give us that we can draw useful
conclusions from? It does not tell us much that is new about the data being
copied. It also does not tell us very much that is useful about THP because
the number of THP splits or collapses is more interesting (higher splits
or fewer collapses implies problems with THP, which may be a net loss).

-- 
Mel Gorman
SUSE Labs


Re: [PATCH 1/3] balancenuma: add stats for huge pmd numa faults

2012-11-24 Thread Hillf Danton
On 11/24/12, Mel Gorman  wrote:
> On Sat, Nov 24, 2012 at 12:17:03PM +0800, Hillf Danton wrote:
>> A thp contributes 512 times more than a regular page to numa fault stats,
>> so deserves its own vm event counter. THP migration is also accounted.
>>
>
> I agree and mentioned it needed fixing. I did not create a new counter
> but I properly account for PGMIGRATE_SUCCESS and PGMIGRATE_FAIL now. I
> did not create a new NUMA_PAGE_MIGRATE counter because I didn't feel it
> was necessary. Instead I just do this
>
> count_vm_events(PGMIGRATE_SUCCESS, HPAGE_PMD_NR);
>
It could be read as: 512 pages are successfully migrated (though in
fact it is just one huge page).

> count_vm_numa_events(NUMA_PAGE_MIGRATE, HPAGE_PMD_NR);
>
ditto, 512 pages go through migration (though actually only one page
takes the hard journey).

That said, in short, the new counters are different and clearer.

Hillf


Re: [PATCH 1/3] balancenuma: add stats for huge pmd numa faults

2012-11-24 Thread Mel Gorman
On Sat, Nov 24, 2012 at 12:17:03PM +0800, Hillf Danton wrote:
> A thp contributes 512 times more than a regular page to numa fault stats,
> so deserves its own vm event counter. THP migration is also accounted.
> 

I agree and mentioned it needed fixing. I did not create a new counter
but I properly account for PGMIGRATE_SUCCESS and PGMIGRATE_FAIL now. I
did not create a new NUMA_PAGE_MIGRATE counter because I didn't feel it
was necessary. Instead I just do this

count_vm_events(PGMIGRATE_SUCCESS, HPAGE_PMD_NR);
count_vm_numa_events(NUMA_PAGE_MIGRATE, HPAGE_PMD_NR);

> [A duplicated computation of page node idx is cleaned up]
> 

Got it. Thanks

-- 
Mel Gorman
SUSE Labs


[PATCH 1/3] balancenuma: add stats for huge pmd numa faults

2012-11-23 Thread Hillf Danton
A thp contributes 512 times more than a regular page to numa fault stats,
so deserves its own vm event counter. THP migration is also accounted.

[A duplicated computation of page node idx is cleaned up]

Signed-off-by: Hillf Danton 
---

--- a/include/linux/vm_event_item.h Fri Nov 23 21:24:12 2012
+++ b/include/linux/vm_event_item.h Fri Nov 23 21:37:32 2012
@@ -40,6 +40,12 @@ enum vm_event_item { PGPGIN, PGPGOUT, PS
PAGEOUTRUN, ALLOCSTALL, PGROTATED,
 #ifdef CONFIG_BALANCE_NUMA
NUMA_PTE_UPDATES,
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+   NUMA_THP_HINT_FAULTS,
+   NUMA_THP_HINT_FAULTS_LOCAL,
+   NUMA_THP_MIGRATE_SUCCESS,
+   NUMA_THP_MIGRATE_FAIL,
+#endif
NUMA_HINT_FAULTS,
NUMA_HINT_FAULTS_LOCAL,
NUMA_PAGE_MIGRATE,
--- a/mm/huge_memory.c  Fri Nov 23 21:28:04 2012
+++ b/mm/huge_memory.c  Fri Nov 23 21:52:06 2012
@@ -1035,12 +1035,13 @@ int do_huge_pmd_numa_page(struct mm_stru

page = pmd_page(pmd);
get_page(page);
-   count_vm_numa_event(NUMA_HINT_FAULTS);
current_nid = page_to_nid(page);
+   count_vm_numa_event(NUMA_THP_HINT_FAULTS);
+   if (current_nid == numa_node_id())
+   count_vm_numa_event(NUMA_THP_HINT_FAULTS_LOCAL);

target_nid = mpol_misplaced(page, vma, haddr);
if (target_nid == -1) {
-   current_nid = page_to_nid(page);
put_page(page);
goto clear_pmdnuma;
}
@@ -1063,9 +1064,11 @@ int do_huge_pmd_numa_page(struct mm_stru
migrated = migrate_misplaced_transhuge_page(mm, vma,
pmdp, pmd, addr,
page, target_nid);
-   if (migrated)
+   if (migrated) {
+   count_vm_numa_event(NUMA_THP_MIGRATE_SUCCESS);
current_nid = target_nid;
-   else {
+   } else {
+   count_vm_numa_event(NUMA_THP_MIGRATE_FAIL);
spin_lock(&mm->page_table_lock);
if (unlikely(!pmd_same(pmd, *pmdp))) {
unlock_page(page);
--- a/mm/vmstat.c   Fri Nov 23 21:30:04 2012
+++ b/mm/vmstat.c   Fri Nov 23 21:57:32 2012
@@ -776,6 +776,12 @@ const char * const vmstat_text[] = {

 #ifdef CONFIG_BALANCE_NUMA
"numa_pte_updates",
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+   "numa_thp_hint_faults",
+   "numa_thp_hint_faults_local",
+   "numa_thp_migrated_success",
+   "numa_thp_migrated_fail",
+#endif
"numa_hint_faults",
"numa_hint_faults_local",
"numa_pages_migrated",
--

