On Fri 19-01-18 08:09:08, Petr Tesarik wrote:
[...]
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 67f2e3c38939..7522a6987595 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -1166,8 +1166,16 @@ extern unsigned long usemap_size(void);
>  
>  /*
>   * We use the lower bits of the mem_map pointer to store
> - * a little bit of information.  There should be at least
> - * 3 bits here due to 32-bit alignment.
> + * a little bit of information.  The pointer is calculated
> + * as mem_map - section_nr_to_pfn(pnum).  The result is
> + * aligned to the minimum alignment of the two values:
> + *   1. All mem_map arrays are page-aligned.
> + *   2. section_nr_to_pfn() always clears PFN_SECTION_SHIFT
> + *      lowest bits.  PFN_SECTION_SHIFT is arch-specific
> + *      (equal to SECTION_SIZE_BITS - PAGE_SHIFT), and the
> + *      worst combination is powerpc with 256k pages,
> + *      which results in PFN_SECTION_SHIFT equal to 6.
> + * To sum it up, at least 6 bits are available.
>   */
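For reference, the encode/decode pair that this comment documents lives
in mm/sparse.c; a simplified sketch from memory (not the verbatim
source):

	static unsigned long sparse_encode_mem_map(struct page *mem_map,
						   unsigned long pnum)
	{
		/*
		 * mem_map is page-aligned and section_nr_to_pfn(pnum) is a
		 * multiple of 1 << PFN_SECTION_SHIFT, so the low bits of
		 * the result stay clear for the SECTION_* flags.
		 */
		return (unsigned long)(mem_map - section_nr_to_pfn(pnum));
	}

	struct page *sparse_decode_mem_map(unsigned long coded_mem_map,
					   unsigned long pnum)
	{
		/* Mask off the extra low bits of information before use. */
		coded_mem_map &= SECTION_MAP_MASK;
		return ((struct page *)coded_mem_map) + section_nr_to_pfn(pnum);
	}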

This is _much_ better indeed. Do you think we can go one step further
and add a BUG_ON into the sparse code to guarantee that every memmap
is indeed aligned properly, so that the low bits covered by
SECTION_MAP_LAST_BIT-1 are never used? I am thinking of something like
the sketch below.
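A minimal version, on top of sparse_encode_mem_map() as it is today
(completely untested, just to illustrate the idea):

	static unsigned long sparse_encode_mem_map(struct page *mem_map,
						   unsigned long pnum)
	{
		unsigned long coded_mem_map =
			(unsigned long)(mem_map - section_nr_to_pfn(pnum));

		/*
		 * All flag bits must fit into the alignment derived in the
		 * comment above: at least PFN_SECTION_SHIFT low bits free.
		 */
		BUILD_BUG_ON(SECTION_MAP_LAST_BIT > (1UL << PFN_SECTION_SHIFT));
		/* Catch any mem_map not aligned as the comment claims. */
		BUG_ON(coded_mem_map & ~SECTION_MAP_MASK);

		return coded_mem_map;
	}

The BUILD_BUG_ON turns a flag count that outgrows the guaranteed
alignment into a compile-time failure; the BUG_ON catches a misaligned
mem_map at boot rather than letting the flags be corrupted silently.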

Thanks!

>  #define SECTION_MARKED_PRESENT	(1UL<<0)
>  #define SECTION_HAS_MEM_MAP	(1UL<<1)
> -- 
> 2.13.6

-- 
Michal Hocko
SUSE Labs
