On 28/08/2019 at 06:54, Scott Wood wrote:
On Fri, Aug 09, 2019 at 06:07:54PM +0800, Jason Yan wrote:
This patch adds support for booting the kernel from places other than
KERNELBASE. Since CONFIG_RELOCATABLE is already supported, all we need
to do is map or copy the kernel to a proper place and relocate. Freescale
Book-E parts expect lowmem to be mapped by fixed TLB entries (TLB1). The
TLB1 entries are not suitable for mapping the kernel directly in a
randomized region, so we choose to copy the kernel to a proper place and
restart to relocate.

The offset of the kernel is not randomized yet (a fixed 64M offset is
used). We will randomize it in the next patch.
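
The paragraph above implies the location-choosing helper is just a fixed-offset
placeholder at this stage. A hypothetical sketch of what that could look like
(this is only an illustration of "a fixed 64M offset", not the patch's actual
kaslr_choose_location() body):

static unsigned long __init kaslr_choose_location(void *dt_ptr,
						  phys_addr_t size,
						  unsigned long kernel_sz)
{
	/* Hypothetical body: fixed 64M offset for now, to be
	 * randomized in the next patch per the commit message. */
	return SZ_64M;
}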

Signed-off-by: Jason Yan <yanai...@huawei.com>
Cc: Diana Craciun <diana.crac...@nxp.com>
Cc: Michael Ellerman <m...@ellerman.id.au>
Cc: Christophe Leroy <christophe.le...@c-s.fr>
Cc: Benjamin Herrenschmidt <b...@kernel.crashing.org>
Cc: Paul Mackerras <pau...@samba.org>
Cc: Nicholas Piggin <npig...@gmail.com>
Cc: Kees Cook <keesc...@chromium.org>
Tested-by: Diana Craciun <diana.crac...@nxp.com>
Reviewed-by: Christophe Leroy <christophe.le...@c-s.fr>
---
  arch/powerpc/Kconfig                          | 11 ++++
  arch/powerpc/kernel/Makefile                  |  1 +
  arch/powerpc/kernel/early_32.c                |  2 +-
  arch/powerpc/kernel/fsl_booke_entry_mapping.S | 17 +++--
  arch/powerpc/kernel/head_fsl_booke.S          | 13 +++-
  arch/powerpc/kernel/kaslr_booke.c             | 62 +++++++++++++++++++
  arch/powerpc/mm/mmu_decl.h                    |  7 +++
  arch/powerpc/mm/nohash/fsl_booke.c            |  7 ++-
  8 files changed, 105 insertions(+), 15 deletions(-)
  create mode 100644 arch/powerpc/kernel/kaslr_booke.c


[...]

diff --git a/arch/powerpc/kernel/kaslr_booke.c b/arch/powerpc/kernel/kaslr_booke.c
new file mode 100644
index 000000000000..f8dc60534ac1
--- /dev/null
+++ b/arch/powerpc/kernel/kaslr_booke.c

Shouldn't this go under arch/powerpc/mm/nohash?

+/*
+ * Decide whether the kernel needs to be relocated to a random offset
+ * void *dt_ptr - address of the device tree
+ * phys_addr_t size - size of the first memory block
+ */
+notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
+{
+       unsigned long tlb_virt;
+       phys_addr_t tlb_phys;
+       unsigned long offset;
+       unsigned long kernel_sz;
+
+       kernel_sz = (unsigned long)_end - KERNELBASE;

Why KERNELBASE and not kernstart_addr?

+
+       offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
+
+       if (offset == 0)
+               return;
+
+       kernstart_virt_addr += offset;
+       kernstart_addr += offset;
+
+       is_second_reloc = 1;
+
+       if (offset >= SZ_64M) {
+               tlb_virt = round_down(kernstart_virt_addr, SZ_64M);
+               tlb_phys = round_down(kernstart_addr, SZ_64M);

If kernstart_addr wasn't 64M-aligned before adding the offset, then
"offset >= SZ_64M" is not necessarily going to detect when you've crossed
a mapping boundary.
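
For example, if kernstart_addr were 48M and the offset 32M, the kernel
would move from the 0-64M region into the 64M-128M region even though
offset < SZ_64M. A hypothetical alternative test that catches this
(old_phys standing for kernstart_addr before the offset was added):

	/* A new TLB1 mapping is needed whenever the destination lands
	 * in a different 64M-aligned region, regardless of how large
	 * the offset itself is. */
	if (round_down(old_phys + offset, SZ_64M) !=
	    round_down(old_phys, SZ_64M)) {
		/* create the TLB1 entry for the destination region */
	}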

+
+               /* Create kernel map to relocate in */
+               create_tlb_entry(tlb_phys, tlb_virt, 1);
+       }
+
+       /* Copy the kernel to its new location and run */
+       memcpy((void *)kernstart_virt_addr, (void *)KERNELBASE, kernel_sz);
+
+       reloc_kernel_entry(dt_ptr, kernstart_virt_addr);
+}

After copying, call flush_icache_range() on the destination.

The copy_and_flush() function does both the copy and the flush. I think it should be used instead of memcpy() + flush_icache_range().
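
A minimal sketch of the memcpy() + flush variant, assuming
flush_icache_range() is safe to call this early (copy_and_flush() would
fold both steps into a single call):

	/* Copy the kernel to its new location, then flush the icache
	 * over the destination range so no stale instructions are
	 * fetched after jumping there. */
	memcpy((void *)kernstart_virt_addr, (void *)KERNELBASE, kernel_sz);
	flush_icache_range(kernstart_virt_addr,
			   kernstart_virt_addr + kernel_sz);

	reloc_kernel_entry(dt_ptr, kernstart_virt_addr);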

Christophe
