On Thu 20-08-15 08:26:26, Naoya Horiguchi wrote:
> Currently /proc/PID/smaps provides no usage info for vma(VM_HUGETLB), which
> is inconvenient when we want to know per-task or per-vma base hugetlb usage.
> To solve this, this patch adds a new line for hugetlb usage like below:
> 
>   Size:              20480 kB
>   Rss:                   0 kB
>   Pss:                   0 kB
>   Shared_Clean:          0 kB
>   Shared_Dirty:          0 kB
>   Private_Clean:         0 kB
>   Private_Dirty:         0 kB
>   Referenced:            0 kB
>   Anonymous:             0 kB
>   AnonHugePages:         0 kB
>   HugetlbPages:      18432 kB
>   Swap:                  0 kB
>   KernelPageSize:     2048 kB
>   MMUPageSize:        2048 kB
>   Locked:                0 kB
>   VmFlags: rd wr mr mw me de ht

I have only now gotten to this thread. This is indeed very helpful. I would
just suggest updating Documentation/filesystems/proc.txt to be explicit
that Rss: doesn't count hugetlb pages for historical reasons.
 
> Signed-off-by: Naoya Horiguchi <[email protected]>
> Acked-by: Joern Engel <[email protected]>
> Acked-by: David Rientjes <[email protected]>

Acked-by: Michal Hocko <[email protected]>

> ---
> v3 -> v4:
> - suspended the Acked-by tags because the v3 -> v4 change is not trivial
> - I stated in the previous discussion that the HugetlbPages line could
>   contain page size info, but that's not necessary because we already have
>   the KernelPageSize info.
> - merged the documentation update; the current documentation doesn't mention
>   AnonHugePages, so that is also added.
> ---
>  Documentation/filesystems/proc.txt |  7 +++++--
>  fs/proc/task_mmu.c                 | 29 +++++++++++++++++++++++++++++
>  2 files changed, 34 insertions(+), 2 deletions(-)
> 
> diff --git v4.2-rc4/Documentation/filesystems/proc.txt v4.2-rc4_patched/Documentation/filesystems/proc.txt
> index 6f7fafde0884..22e40211ef64 100644
> --- v4.2-rc4/Documentation/filesystems/proc.txt
> +++ v4.2-rc4_patched/Documentation/filesystems/proc.txt
> @@ -423,6 +423,8 @@ Private_Clean:         0 kB
>  Private_Dirty:         0 kB
>  Referenced:          892 kB
>  Anonymous:             0 kB
> +AnonHugePages:         0 kB
> +HugetlbPages:          0 kB
>  Swap:                  0 kB
>  KernelPageSize:        4 kB
>  MMUPageSize:           4 kB
> @@ -440,8 +442,9 @@ indicates the amount of memory currently marked as referenced or accessed.
>  "Anonymous" shows the amount of memory that does not belong to any file.  Even
>  a mapping associated with a file may contain anonymous pages: when MAP_PRIVATE
>  and a page is modified, the file page is replaced by a private anonymous copy.
> -"Swap" shows how much would-be-anonymous memory is also used, but out on
> -swap.
> +"AnonHugePages" shows the amount of memory backed by transparent hugepages.
> +"HugetlbPages" shows the amount of memory backed by hugetlbfs pages.
> +"Swap" shows how much would-be-anonymous memory is also used, but out on swap.
>  
>  "VmFlags" field deserves a separate description. This member represents the kernel
>  flags associated with the particular virtual memory area in two letter encoded
> diff --git v4.2-rc4/fs/proc/task_mmu.c v4.2-rc4_patched/fs/proc/task_mmu.c
> index ca1e091881d4..2c37938b82ee 100644
> --- v4.2-rc4/fs/proc/task_mmu.c
> +++ v4.2-rc4_patched/fs/proc/task_mmu.c
> @@ -445,6 +445,7 @@ struct mem_size_stats {
>       unsigned long anonymous;
>       unsigned long anonymous_thp;
>       unsigned long swap;
> +     unsigned long hugetlb;
>       u64 pss;
>  };
>  
> @@ -610,12 +611,38 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
>       seq_putc(m, '\n');
>  }
>  
> +#ifdef CONFIG_HUGETLB_PAGE
> +static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask,
> +                              unsigned long addr, unsigned long end,
> +                              struct mm_walk *walk)
> +{
> +     struct mem_size_stats *mss = walk->private;
> +     struct vm_area_struct *vma = walk->vma;
> +     struct page *page = NULL;
> +
> +     if (pte_present(*pte)) {
> +             page = vm_normal_page(vma, addr, *pte);
> +     } else if (is_swap_pte(*pte)) {
> +             swp_entry_t swpent = pte_to_swp_entry(*pte);
> +
> +             if (is_migration_entry(swpent))
> +                     page = migration_entry_to_page(swpent);
> +     }
> +     if (page)
> +             mss->hugetlb += huge_page_size(hstate_vma(vma));
> +     return 0;
> +}
> +#endif /* CONFIG_HUGETLB_PAGE */
> +
>  static int show_smap(struct seq_file *m, void *v, int is_pid)
>  {
>       struct vm_area_struct *vma = v;
>       struct mem_size_stats mss;
>       struct mm_walk smaps_walk = {
>               .pmd_entry = smaps_pte_range,
> +#ifdef CONFIG_HUGETLB_PAGE
> +             .hugetlb_entry = smaps_hugetlb_range,
> +#endif
>               .mm = vma->vm_mm,
>               .private = &mss,
>       };
> @@ -637,6 +664,7 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
>                  "Referenced:     %8lu kB\n"
>                  "Anonymous:      %8lu kB\n"
>                  "AnonHugePages:  %8lu kB\n"
> +                "HugetlbPages:   %8lu kB\n"
>                  "Swap:           %8lu kB\n"
>                  "KernelPageSize: %8lu kB\n"
>                  "MMUPageSize:    %8lu kB\n"
> @@ -651,6 +679,7 @@ static int show_smap(struct seq_file *m, void *v, int is_pid)
>                  mss.referenced >> 10,
>                  mss.anonymous >> 10,
>                  mss.anonymous_thp >> 10,
> +                mss.hugetlb >> 10,
>                  mss.swap >> 10,
>                  vma_kernel_pagesize(vma) >> 10,
>                  vma_mmu_pagesize(vma) >> 10,
> -- 
> 2.4.3

-- 
Michal Hocko
SUSE Labs