On Mon, May 11, 2026 at 05:01:49AM -0400, Michael S. Tsirkin wrote:
> The NUMA interleave index formula (addr - vm_start) >> shift
> gives wrong results when vm_start is not aligned to the folio
> size: the subtraction before the shift allows low bits to
> affect the result via borrows.
> 
> Use (addr >> shift) - (vm_start >> shift) instead, which
> independently aligns both values before computing the
> difference.
> 
> No functional change for current callers: the fix only affects
> NUMA interleave and weighted-interleave policies. Current
> large-order callers either pre-align the address
> (vma_alloc_anon_folio_pmd) or do not use NUMA interleave
> (drm_pagemap). All other callers use order 0 where the old
> and new formulas are equivalent. However, subsequent patches
> in this series add large-order callers that pass unaligned
> fault addresses, making this fix necessary.
> 
> Signed-off-by: Michael S. Tsirkin <[email protected]>

Reviewed-by: Gregory Price <[email protected]>

Should this just be pulled out ahead as a standalone fix (assuming this
set takes more time to bake)?  I get that it's not causing issues today,
but the interface is technically broken either way.
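
Not that it needs more justification, but here's a quick userspace
sketch of the borrow effect, with made-up shift and addresses (shift
standing in for PAGE_SHIFT + order):

	#include <stdio.h>

	int main(void)
	{
		unsigned long shift    = 21;       /* e.g. PAGE_SHIFT (12) + order (9) */
		unsigned long vm_start = 0x201000; /* not aligned to 1UL << shift */
		unsigned long addr     = 0x400000; /* first address in the next chunk */

		unsigned long old_idx = (addr - vm_start) >> shift;
		unsigned long new_idx = (addr >> shift) - (vm_start >> shift);

		/* Prints old=0 new=1: the borrow from the unaligned low
		 * bits of vm_start hides the chunk-boundary crossing in
		 * the old formula. */
		printf("old=%lu new=%lu\n", old_idx, new_idx);
		return 0;
	}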

> ---
>  mm/mempolicy.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 6832cc68120f..39e556e3d263 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -2049,7 +2049,8 @@ struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
>       if (pol->mode == MPOL_INTERLEAVE ||
>           pol->mode == MPOL_WEIGHTED_INTERLEAVE) {
>               *ilx += vma->vm_pgoff >> order;
> -             *ilx += (addr - vma->vm_start) >> (PAGE_SHIFT + order);
> +             *ilx += (addr >> (PAGE_SHIFT + order)) -
> +                     (vma->vm_start >> (PAGE_SHIFT + order));

There's enough (PAGE_SHIFT + ...) spread around the kernel that I wonder
if it's worth a define or a helper, something like the sketch below (not
in scope, just pondering).
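
Something like this is what I was picturing (name purely hypothetical,
just to make the idea concrete):

	/* Index of the order-sized chunk containing addr. */
	static inline unsigned long chunk_index(unsigned long addr,
						unsigned int order)
	{
		return addr >> (PAGE_SHIFT + order);
	}

with the hunk above then becoming:

	*ilx += chunk_index(addr, order) - chunk_index(vma->vm_start, order);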

~Gregory
