On 1/25/24 22:12, Alexandru Elisei wrote:
> The arm64 MTE code uses the PG_arch_2 page flag, which it renames to
> PG_mte_tagged, to track if a page has been mapped with tagging enabled.
> That flag is cleared by free_pages_prepare() by doing:
> 
>       page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
> 
> When tag storage management is added, tag storage will be reserved for a
> page if and only if the page is mapped as tagged (the page flag
> PG_mte_tagged is set). When a page is freed, likewise, the code will have
> to look at the page flags to determine if the page has tag storage
> reserved, which should also be freed.
> 
> For this purpose, add an arch_free_pages_prepare() hook that is called
> before the page flags are cleared. The function arch_free_page() has also
> been considered for this purpose, but it is called after the flags are
> cleared.

arch_free_pages_prepare() makes sense as a prologue to arch_free_page().  

s/arch_free_pages_prepare/arch_free_page_prepare/ to match similarly named
functions like arch_free_page().
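
For context, a minimal sketch of what the arm64 side could look like once
tag storage management lands (free_tag_storage() is a made-up placeholder,
not something from this series):

	void arch_free_pages_prepare(struct page *page, int order)
	{
		/*
		 * Called before free_pages_prepare() clears
		 * PAGE_FLAGS_CHECK_AT_PREP, so PG_mte_tagged can
		 * still be tested here.
		 */
		if (system_supports_mte() && page_mte_tagged(page))
			free_tag_storage(page, order);
	}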

> 
> Signed-off-by: Alexandru Elisei <alexandru.eli...@arm.com>
> ---
> 
> Changes since rfc v2:
> 
> * Expanded commit message (David Hildenbrand).
> 
>  include/linux/pgtable.h | 4 ++++
>  mm/page_alloc.c         | 1 +
>  2 files changed, 5 insertions(+)
> 
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index f6d0e3513948..6d98d5fdd697 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -901,6 +901,10 @@ static inline void arch_do_swap_page(struct mm_struct *mm,
>  }
>  #endif
>  
> +#ifndef __HAVE_ARCH_FREE_PAGES_PREPARE

I guess new __HAVE_ARCH_ constructs are not being added anymore. Instead,
something like '#ifndef arch_free_pages_prepare' might be better suited.
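
Something along these lines (just a sketch of the usual pattern; an
architecture that wants its own version would declare the function in its
headers and add '#define arch_free_pages_prepare arch_free_pages_prepare'):

	/* generic fallback in include/linux/pgtable.h */
	#ifndef arch_free_pages_prepare
	static inline void arch_free_pages_prepare(struct page *page, int order)
	{
	}
	#endif

	/* arch override, e.g. in the arch's asm/pgtable.h */
	void arch_free_pages_prepare(struct page *page, int order);
	#define arch_free_pages_prepare arch_free_pages_prepare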

> +static inline void arch_free_pages_prepare(struct page *page, int order) { }
> +#endif
> +
>  #ifndef __HAVE_ARCH_UNMAP_ONE
>  /*
>   * Some architectures support metadata associated with a page. When a
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 2c140abe5ee6..27282a1c82fe 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1092,6 +1092,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
>  
>       trace_mm_page_free(page, order);
>       kmsan_free_page(page, order);
> +     arch_free_pages_prepare(page, order);
>  
>       if (memcg_kmem_online() && PageMemcgKmem(page))
>               __memcg_kmem_uncharge_page(page, order);
