On Wed, Sep 19, 2018 at 10:20 AM Dave Jiang <[email protected]> wrote:
>
> During fakenuma processing in numa_emulation(), pi gets passed in and
> processed as new fake numa nodes are being split out. Once the original
> memory region is processed, it gets removed from pi by
> numa_remove_memblk_from() in emu_setup_memblk(). So entry 0 gets deleted
> and the rest of the entries get moved up. Therefore we should always pass
> in entry 0 for the next entry to process.
>
> Fixes: 1f6a2c6d9f121 ("x86/numa_emulation: Introduce uniform split capability")
>
> Cc: Dan Williams <[email protected]>
> Signed-off-by: Dave Jiang <[email protected]>

Thanks Dave! I missed this behavior in my testing.
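
For anyone else reading along: numa_remove_memblk_from() just shifts the
remaining entries down over the removed slot, which is why the next block
to process always ends up at index 0. From memory it looks roughly like
the following (see arch/x86/mm/numa.c for the authoritative version):

	void __init numa_remove_memblk_from(int idx, struct numa_meminfo *mi)
	{
		mi->nr_blks--;
		memmove(&mi->blk[idx], &mi->blk[idx + 1],
			(mi->nr_blks - idx) * sizeof(mi->blk[0]));
	}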

Reviewed-by: Dan Williams <[email protected]>

> ---
>  arch/x86/mm/numa_emulation.c |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/mm/numa_emulation.c b/arch/x86/mm/numa_emulation.c
> index b54d52a2d00a..a3ca8bf5afcb 100644
> --- a/arch/x86/mm/numa_emulation.c
> +++ b/arch/x86/mm/numa_emulation.c
> @@ -401,8 +401,8 @@ void __init numa_emulation(struct numa_meminfo *numa_meminfo, int numa_dist_cnt)
>                 ret = -1;
>                 for_each_node_mask(i, physnode_mask) {

We might put a comment here because the use of 0 is non-obvious at first glance.
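
Something along these lines, perhaps (just a rough sketch, wording up to you):

                        /*
                         * emu_setup_memblk() removes each processed memblk
                         * from pi via numa_remove_memblk_from() and the
                         * remaining entries shift up, so the next block to
                         * split is always at index 0.
                         */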

>                         ret = split_nodes_size_interleave_uniform(&ei, &pi,
> -                                       pi.blk[i].start, pi.blk[i].end, 0,
> -                                       n, &pi.blk[i], nid);
> +                                       pi.blk[0].start, pi.blk[0].end, 0,
> +                                       n, &pi.blk[0], nid);
>                         if (ret < 0)
>                                 break;
>                         if (ret < n) {
>
