The GCC toolchain simply ignores the dangling __force_order reference in kaslr_64.o, so the build ends up succeeding, whereas the LLVM toolchain with the integrated assembler (IAS) breaks and the build stops; fixing this needs commit df6d4f9db79c1a5d6f48b59db35ccd1e9ff9adfc ("x86/boot/compressed: Don't declare __force_order in kaslr_64.c") [2] explicitly reverted. With the revert, the GCC toolchain is also fine.
Maybe it is good to revert that commit?

This is with [1]:

diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 59a3e13204c3..e1c19c5ecd5e 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -17,7 +17,7 @@
  * all loads stores around it, which can hurt performance. Solution is to
  * use a variable and mimic reads and writes to it to enforce serialization
  */
-extern unsigned long __force_order;
+extern unsigned long __force_order __weak;

 void native_write_cr0(unsigned long val);

...and the patchset "x86/boot: Remove run-time relocations from compressed kernel" applied [3]. More details in [4].

- Sedat -

References:

[1] https://github.com/ClangBuiltLinux/linux/issues/1120#issuecomment-674182703
[2] https://git.kernel.org/linus/df6d4f9db79c1a5d6f48b59db35ccd1e9ff9adfc
[3] https://lore.kernel.org/patchwork/project/lkml/list/?series=456251
[4] https://github.com/ClangBuiltLinux/linux/issues/1120#issuecomment-674502114