From: Borislav Petkov <b...@suse.de>

Single-stepping through head_64.S made me take a closer look at the fixmap
page PTE fixup loop.

So we're going through the whole level2_fixmap_pgt 4K page, checking
whether _PAGE_PRESENT is set in each PTE and adding the delta between where
we were compiled to run and where we actually end up running.

However, if that delta is 0 (the most common case), we walk all those 512
PTEs for no reason at all. Sure, we only add 0, but that is still pointless
work.
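
In C terms, the loop is roughly the sketch below; fixup_ptes, pgt and delta
are illustrative names only (delta is what we keep in %rbp), not anything
from the kernel sources:

  /* Sketch of the PTE fixup walk done in head_64.S, not kernel code. */
  #define PTES_PER_PT   512        /* 4K page / 8-byte entries               */
  #define PTE_PRESENT   0x1UL      /* the bit the patch spells _PAGE_PRESENT */

  static void fixup_ptes(unsigned long *pgt, unsigned long delta)
  {
          int i;

          for (i = 0; i < PTES_PER_PT; i++)
                  if (pgt[i] & PTE_PRESENT)   /* only valid entries...        */
                          pgt[i] += delta;    /* ...get the load offset added */
  }

With delta == 0 the loop still tests all 512 entries (and rewrites the
present ones with an unchanged value), which is exactly the work the patch
skips.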

Skipping that useless fixup gives us a boot speedup of 0.004 seconds in my
guest. Not a lot, but considering how cheap the check is, I'll take it. Here
is the printk time difference:

before:
  ...
  [    0.000000] tsc: Marking TSC unstable due to TSCs unsynchronized
  [    0.013590] Calibrating delay loop (skipped), value calculated using timer frequency.. 8027.17 BogoMIPS (lpj=16054348)
  [    0.017094] pid_max: default: 32768 minimum: 301
  ...

after:
  ...
  [    0.000000] tsc: Marking TSC unstable due to TSCs unsynchronized
  [    0.009587] Calibrating delay loop (skipped), value calculated using timer frequency.. 8026.86 BogoMIPS (lpj=16053724)
  [    0.013090] pid_max: default: 32768 minimum: 301
  ...
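
(On the "Calibrating delay loop" line that is 0.013590 - 0.009587 ≈ 0.004 s,
and the pid_max line moves by the same 0.017094 - 0.013090 ≈ 0.004 s.)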

The other two changes merely convert naked numbers to the corresponding
defines, so the generated code does not change:

  # arch/x86/kernel/head_64.o:

   text    data     bss     dec     hex filename
   1124  290864    4096  296084   48494 head_64.o.before
   1124  290864    4096  296084   48494 head_64.o.after

md5:
   87086e202588939296f66e892414ffe2  head_64.o.before.asm
   87086e202588939296f66e892414ffe2  head_64.o.after.asm
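
For the record, those defines boil down to the same constants the raw
numbers encoded; the values below are from memory of the x86 headers (the
real macro expansions are more involved) and are shown only to make the
no-change obvious:

  #define PAGE_SIZE      4096     /* 1 << PAGE_SHIFT, PAGE_SHIFT == 12 on x86 */
  #define _PAGE_PRESENT  0x001    /* bit 0 of a PTE: the entry is present      */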

Signed-off-by: Borislav Petkov <b...@suse.de>
---
 arch/x86/kernel/head_64.S | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index a15d381e6020..90de28841242 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -142,6 +142,9 @@ startup_64:
        decl    %ecx
        jnz     1b
 
+       test %rbp, %rbp
+       jz .Lskip_fixup
+
        /*
         * Fixup the kernel text+data virtual addresses. Note that
         * we might write invalid pmds, when the kernel is relocated
@@ -149,9 +152,9 @@ startup_64:
         * beyond _end.
         */
        leaq    level2_kernel_pgt(%rip), %rdi
-       leaq    4096(%rdi), %r8
+       leaq    PAGE_SIZE(%rdi), %r8
        /* See if it is a valid page table entry */
-1:     testb   $1, 0(%rdi)
+1:     testb   $_PAGE_PRESENT, 0(%rdi)
        jz      2f
        addq    %rbp, 0(%rdi)
        /* Go to the next page */
@@ -162,6 +165,7 @@ startup_64:
        /* Fixup phys_base */
        addq    %rbp, phys_base(%rip)
 
+.Lskip_fixup:
        movq    $(early_level4_pgt - __START_KERNEL_map), %rax
        jmp 1f
 ENTRY(secondary_startup_64)
-- 
2.10.0
