* Nathan Zimmer <nzim...@sgi.com> wrote:

> The memory we set aside in the previous patch needs to be reinserted.
> We start this process via a late_initcall so that multiple CPUs are
> available to do the work.
> 
> Signed-off-by: Mike Travis <tra...@sgi.com>
> Signed-off-by: Nathan Zimmer <nzim...@sgi.com>
> Cc: Thomas Gleixner <t...@linutronix.de>
> Cc: Ingo Molnar <mi...@redhat.com>
> Cc: "H. Peter Anvin" <h...@zytor.com>
> Cc: Greg Kroah-Hartman <gre...@linuxfoundation.org>
> Cc: Andrew Morton <a...@linux-foundation.org> 
> Cc: Yinghai Lu <ying...@kernel.org>
> ---
>  arch/x86/kernel/e820.c | 129 +++++++++++++++++++++++++++++++++++++++++++++++++
>  drivers/base/memory.c  |  83 +++++++++++++++++++++++++++++++
>  include/linux/memory.h |   5 ++
>  3 files changed, 217 insertions(+)
> 
> diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
> index 3752dc5..d31039d 100644
> --- a/arch/x86/kernel/e820.c
> +++ b/arch/x86/kernel/e820.c
> @@ -23,6 +23,7 @@
>  
>  #ifdef CONFIG_DELAY_MEM_INIT
>  #include <linux/memory.h>
> +#include <linux/delay.h>
>  #endif
>  
>  #include <asm/e820.h>
> @@ -397,6 +398,22 @@ static u64 min_region_size;      /* min size of region to slice from */
>  static u64 pre_region_size;  /* multiply bsize for node low memory */
>  static u64 post_region_size; /* multiply bsize for node high memory */
>  
> +static unsigned long add_absent_work_start_time;
> +static unsigned long add_absent_work_stop_time;
> +static unsigned int add_absent_job_count;
> +static atomic_t add_absent_work_count;
> +
> +struct absent_work {
> +     struct work_struct      work;
> +     struct absent_work      *next;
> +     atomic_t                busy;
> +     int                     cpu;
> +     int                     node;
> +     int                     index;
> +};
> +static DEFINE_PER_CPU(struct absent_work, absent_work);
> +static struct absent_work *first_absent_work;

That's about 4.5 GB/sec initialization speed (32 TB in 2 hours is 
roughly 32768 GB / 7200 s ≈ 4.5 GB/s) - that feels a bit slow, and the 
boot time effect should be felt on smaller 'couple of gigabytes' 
desktop boxes as well. Do we know exactly where the 2 hours of boot 
time on a 32 TB system are spent?

While you cannot profile the boot process (yet), you could apply your 
delayed patch and run "perf record -g" call-graph profiling of the 
late-time initialization routines. What does 'perf report' show?
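
For instance, something like this (system-wide, capturing while the 
late init work runs; the 60-second duration is just a placeholder):

  perf record -g -a sleep 60
  perf report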

Delayed initialization makes sense I guess, because 32 TB is a lot of 
memory - I'm just wondering whether there's some low-hanging fruit 
left in the mem init code; that code is certainly not optimized for 
performance.

Plus, with a struct page size of around 64 bytes (?), 32 TB of RAM has 
512 GB of struct page arrays alone. Initializing those will take quite 
some time as well - and I suspect they are allocated by zeroing them 
first. If that memset() exists, then getting rid of it might be a good 
move as well.
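
(The arithmetic: 32 TB / 4 KB per page = 2^33 struct pages; at 64 
bytes each that is 2^33 * 64 bytes = 512 GB of struct page arrays.)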

Yet another thing to consider would be an initialization speedup of 
almost 3 orders of magnitude: initialize at large page (2MB) 
granularity and delay, on demand, the initialization of the 4K 
granular struct pages [but still allocating them] - which I suspect 
are a good chunk of the overhead. That way we could initialize in 2MB 
steps - a 512x (2 MB / 4 KB) reduction in up-front work - and cut the 
2-hour bootup of 32 TB of RAM down to about 14 seconds 
(7200 s / 512 ≈ 14 s)...

[ The cost would be one more branch in the buddy allocator, to detect
  not-yet-initialized 2 MB chunks as we encounter them. Acceptable I 
  think. ]
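
That branch might look something like the rough sketch below - purely 
illustrative; pageblock_initialized() and init_deferred_pageblock() 
are invented names, not existing kernel interfaces:

	/*
	 * Hypothetical hot-path check in the buddy allocator: before
	 * handing out a page, see whether its 2 MB chunk still has
	 * uninitialized 4K struct pages, and initialize them on first
	 * use.  Both helpers below are made up for illustration.
	 */
	static inline struct page *check_deferred_init(struct page *page)
	{
		unsigned long pfn = page_to_pfn(page);
		/* a 2 MB chunk is 512 pages with a 4K PAGE_SIZE */
		unsigned long chunk_pfn = pfn & ~511UL;

		if (unlikely(!pageblock_initialized(chunk_pfn)))
			init_deferred_pageblock(chunk_pfn);

		return page;
	}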

Thanks,

        Ingo