On Tue, 16 Jul 2024 14:13:45 +0300
Mike Rapoport <r...@kernel.org> wrote:

> From: "Mike Rapoport (Microsoft)" <r...@kernel.org>
> 
> Until now arch_numa was directly translating firmware NUMA information
> to memblock.
> 
> Using numa_memblks as an intermediate step has a few advantages:
> * alignment with more battle tested x86 implementation
> * availability of NUMA emulation
> * maintaining node information for not yet populated memory
> 
> Replace current functionality related to numa_add_memblk() and
> __node_distance() with the implementation based on numa_memblks and add
> functions required by numa_emulation.
> 
> Signed-off-by: Mike Rapoport (Microsoft) <r...@kernel.org>

One trivial comment inline,

Jonathan
>  /*
>   * Initialize NODE_DATA for a node on the local memory
>   */
> @@ -226,116 +204,9 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
>       NODE_DATA(nid)->node_spanned_pages = end_pfn - start_pfn;
>  }

>  
> @@ -454,3 +321,54 @@ void __init arch_numa_init(void)
>  
>       numa_init(dummy_numa_init);
>  }
> +
> +#ifdef CONFIG_NUMA_EMU
> +void __init numa_emu_update_cpu_to_node(int *emu_nid_to_phys,
> +                                     unsigned int nr_emu_nids)
> +{
> +     int i, j;
> +
> +     /*
> +      * Transform __apicid_to_node table to use emulated nids by

Comment needs an update seeing as there is no __apicid_to_node table
here.
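
Perhaps something along the lines of:

	 * Transform the cpu_to_node_map table to use emulated nids by

with the rest of the comment left unchanged - but word it however you like.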

> +      * reverse-mapping phys_nid.  The maps should always exist but fall
> +      * back to zero just in case.
> +      */
> +     for (i = 0; i < ARRAY_SIZE(cpu_to_node_map); i++) {
> +             if (cpu_to_node_map[i] == NUMA_NO_NODE)
> +                     continue;
> +             for (j = 0; j < nr_emu_nids; j++)
> +                     if (cpu_to_node_map[i] == emu_nid_to_phys[j])
> +                             break;
> +             cpu_to_node_map[i] = j < nr_emu_nids ? j : 0;
> +     }
> +}
> +
> +u64 __init numa_emu_dma_end(void)
> +{
> +     return PFN_PHYS(memblock_start_of_DRAM() + SZ_4G);
> +}