Re: [PATCH] powerpc/32s: Allocate one 256k IBAT instead of two consecutives 128k IBATs
Christophe Leroy writes:
> Today we have the following IBATs allocated:
>
> ---[ Instruction Block Address Translation ]---
> 0: 0xc000-0xc03f 0x 4M Kernel x m
> 1: 0xc040-0xc05f 0x0040 2M Kernel x m
> 2: 0xc060-0xc06f 0x0060 1M Kernel x m
> 3: 0xc070-0xc077 0x0070 512K Kernel x m
> 4: 0xc078-0xc079 0x0078 128K Kernel x m
> 5: 0xc07a-0xc07b 0x007a 128K Kernel x m
> 6: -
> 7: -
>
> The two 128K should be a single 256K instead.
>
> When _etext is not aligned to 128Kbytes, the system will allocate
> all necessary BATs to the lower 128Kbytes boundary, then allocate
> an additional 128Kbytes BAT for the remaining block.
>
> Instead, align the top to 128Kbytes so that the function directly
> allocates a 256Mbytes last block:

^ I think that's meant to be 256Kbytes, I changed it when committing.

> ---[ Instruction Block Address Translation ]---
> 0: 0xc000-0xc03f 0x 4M Kernel x m
> 1: 0xc040-0xc05f 0x0040 2M Kernel x m
> 2: 0xc060-0xc06f 0x0060 1M Kernel x m
> 3: 0xc070-0xc077 0x0070 512K Kernel x m
> 4: 0xc078-0xc07b 0x0078 256K Kernel x m
> 5: -
> 6: -
> 7: -
>
> Signed-off-by: Christophe Leroy

cheers
[PATCH V2 1/2] tools/perf: Include global and local variants for p_stage_cyc sort key
Sort key p_stage_cyc is used to present the latency cycles spent in
pipeline stages. The perf tool has a local p_stage_cyc sort key to
display this info. There is no global variant available for this sort
key. The local variant shows latency in a single sample, whereas the
global value will be useful to present the total latency (sum of
latencies) in the hist entry. It represents the latency number
multiplied by the number of samples.

Add a global (p_stage_cyc) and local variant (local_p_stage_cyc) for
this sort key. Use local_p_stage_cyc as the default option for the
"mem" sort mode. Also add these to the list of dynamic sort keys, and
make "dynamic_headers" and "arch_specific_sort_keys" static.

Signed-off-by: Athira Rajeev
Reported-by: Namhyung Kim
---
Changelog:
v1 -> v2:
Addressed review comments from Jiri by making the
"dynamic_headers" and "arch_specific_sort_keys" as static.

 tools/perf/util/hist.c |  4 +++-
 tools/perf/util/hist.h |  3 ++-
 tools/perf/util/sort.c | 34 +-
 tools/perf/util/sort.h |  3 ++-
 4 files changed, 32 insertions(+), 12 deletions(-)

diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
index b776465e04ef..0a8033b09e28 100644
--- a/tools/perf/util/hist.c
+++ b/tools/perf/util/hist.c
@@ -211,7 +211,9 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
 	hists__new_col_len(hists, HISTC_MEM_BLOCKED, 10);
 	hists__new_col_len(hists, HISTC_LOCAL_INS_LAT, 13);
 	hists__new_col_len(hists, HISTC_GLOBAL_INS_LAT, 13);
-	hists__new_col_len(hists, HISTC_P_STAGE_CYC, 13);
+	hists__new_col_len(hists, HISTC_LOCAL_P_STAGE_CYC, 13);
+	hists__new_col_len(hists, HISTC_GLOBAL_P_STAGE_CYC, 13);
+
 	if (symbol_conf.nanosecs)
 		hists__new_col_len(hists, HISTC_TIME, 16);
 	else
diff --git a/tools/perf/util/hist.h b/tools/perf/util/hist.h
index 621f35ae1efa..2a15e22fb89c 100644
--- a/tools/perf/util/hist.h
+++ b/tools/perf/util/hist.h
@@ -75,7 +75,8 @@ enum hist_column {
 	HISTC_MEM_BLOCKED,
 	HISTC_LOCAL_INS_LAT,
 	HISTC_GLOBAL_INS_LAT,
-	HISTC_P_STAGE_CYC,
+	HISTC_LOCAL_P_STAGE_CYC,
+	HISTC_GLOBAL_P_STAGE_CYC,
 	HISTC_NR_COLS, /* Last entry */
 };
diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
index a111065b484e..e417e47f51b9 100644
--- a/tools/perf/util/sort.c
+++ b/tools/perf/util/sort.c
@@ -37,7 +37,7 @@
 const char default_parent_pattern[] = "^sys_|^do_page_fault";
 const char *parent_pattern = default_parent_pattern;
 const char *default_sort_order = "comm,dso,symbol";
 const char default_branch_sort_order[] = "comm,dso_from,symbol_from,symbol_to,cycles";
-const char default_mem_sort_order[] = "local_weight,mem,sym,dso,symbol_daddr,dso_daddr,snoop,tlb,locked,blocked,local_ins_lat,p_stage_cyc";
+const char default_mem_sort_order[] = "local_weight,mem,sym,dso,symbol_daddr,dso_daddr,snoop,tlb,locked,blocked,local_ins_lat,local_p_stage_cyc";
 const char default_top_sort_order[] = "dso,symbol";
 const char default_diff_sort_order[] = "dso,symbol";
 const char default_tracepoint_sort_order[] = "trace";
@@ -46,8 +46,8 @@
 const char *field_order;
 regex_t ignore_callees_regex;
 int have_ignore_callees = 0;
 enum sort_mode sort__mode = SORT_MODE__NORMAL;
-const char *dynamic_headers[] = {"local_ins_lat", "p_stage_cyc"};
-const char *arch_specific_sort_keys[] = {"p_stage_cyc"};
+static const char *const dynamic_headers[] = {"local_ins_lat", "ins_lat", "local_p_stage_cyc", "p_stage_cyc"};
+static const char *const arch_specific_sort_keys[] = {"local_p_stage_cyc", "p_stage_cyc"};

 /*
  * Replaces all occurrences of a char used with the:
@@ -1392,22 +1392,37 @@ struct sort_entry sort_global_ins_lat = {
 };

 static int64_t
-sort__global_p_stage_cyc_cmp(struct hist_entry *left, struct hist_entry *right)
+sort__p_stage_cyc_cmp(struct hist_entry *left, struct hist_entry *right)
 {
 	return left->p_stage_cyc - right->p_stage_cyc;
 }

+static int hist_entry__global_p_stage_cyc_snprintf(struct hist_entry *he, char *bf,
+					size_t size, unsigned int width)
+{
+	return repsep_snprintf(bf, size, "%-*u", width,
+			he->p_stage_cyc * he->stat.nr_events);
+}
+
 static int hist_entry__p_stage_cyc_snprintf(struct hist_entry *he, char *bf,
 					size_t size, unsigned int width)
 {
 	return repsep_snprintf(bf, size, "%-*u", width, he->p_stage_cyc);
 }

-struct sort_entry sort_p_stage_cyc = {
-	.se_header	= "Pipeline Stage Cycle",
-	.se_cmp		= sort__global_p_stage_cyc_cmp,
+struct sort_entry sort_local_p_stage_cyc = {
+	.se_header	= "Local Pipeline Stage Cycle",
+	.se_cmp		= sort__p_stage_cyc_cmp,
 	.se_snprintf	= hist_entry__p_stage_cyc_snprintf,
-	.se_width_idx	= HISTC_P_STAGE_CYC,
+	.se_width_idx
[PATCH V2 2/2] tools/perf: Update global/local variants for p_stage_cyc in powerpc
Update the arch_support_sort_key() function in powerpc to enable
presenting local and global variants of sort key p_stage_cyc. Update
the "se_header" strings for these in the arch_perf_header_entry()
function along with instruction latency.

Signed-off-by: Athira Rajeev
Reported-by: Namhyung Kim
---
 tools/perf/arch/powerpc/util/event.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/tools/perf/arch/powerpc/util/event.c b/tools/perf/arch/powerpc/util/event.c
index 3bf441257466..cf430a4c55b9 100644
--- a/tools/perf/arch/powerpc/util/event.c
+++ b/tools/perf/arch/powerpc/util/event.c
@@ -40,8 +40,12 @@ const char *arch_perf_header_entry(const char *se_header)
 {
 	if (!strcmp(se_header, "Local INSTR Latency"))
 		return "Finish Cyc";
-	else if (!strcmp(se_header, "Pipeline Stage Cycle"))
+	else if (!strcmp(se_header, "INSTR Latency"))
+		return "Global Finish_cyc";
+	else if (!strcmp(se_header, "Local Pipeline Stage Cycle"))
 		return "Dispatch Cyc";
+	else if (!strcmp(se_header, "Pipeline Stage Cycle"))
+		return "Global Dispatch_cyc";
 	return se_header;
 }

@@ -49,5 +53,7 @@ int arch_support_sort_key(const char *sort_key)
 {
 	if (!strcmp(sort_key, "p_stage_cyc"))
 		return 1;
+	if (!strcmp(sort_key, "local_p_stage_cyc"))
+		return 1;
 	return 0;
 }
-- 
2.33.0
Re: [PATCH] of: unmap memory regions in /memreserve node
Mark Rutland writes:
> On Tue, Nov 30, 2021 at 04:43:31PM -0600, Rob Herring wrote:
>> +linuxppc-dev
>>
>> Sorry missed this until now ...
>>
>> On Wed, Nov 24, 2021 at 09:33:47PM +0800, Calvin Zhang wrote:
>> > Reserved memory regions in /memreserve node aren't and shouldn't
>> > be referenced elsewhere. So mark them no-map to skip direct mapping
>> > for them.
>>
>> I suspect this has a high chance of breaking some platform. There's no
>> rule a region can't be accessed.
>
> The subtlety is that the region shouldn't be explicitly accessed (e.g.
> modified),

I think "modified" is the key there, reserved means Linux doesn't use the
range for its own data, but may still read from whatever is in the range.

On some platforms the initrd will be marked as reserved, which Linux
obviously needs to read from.

> but the OS is permitted to have the region mapped. In ePAPR this is
> described as:
>
>    This requirement is necessary because the client program is permitted to map
>    memory with storage attributes specified as not Write Through Required, not
>    Caching Inhibited, and Memory Coherence Required (i.e., WIMG = 0b001x), and
>    VLE=0 where supported. The client program may use large virtual pages that
>    contain reserved memory. However, the client program may not modify reserved
>    memory, so the boot program may perform accesses to reserved memory as Write
>    Through Required where conflicting values for this storage attribute are
>    architecturally permissible.
>
> Historically arm64 relied upon this for spin-table to work, and I *think* we
> might not need that any more.
>
> I agree that there's a high chance this will break something (especially
> on 16K or 64K page size kernels), so I'd prefer to leave it as-is.

Yeah I agree. On powerpc we still use large pages for the linear mapping
(direct map), so reserved regions will be incidentally mapped as described
above.

> If someone requires no-map behaviour, they should use a /reserved-memory entry
> with a no-map property, which will work today and document their requirement
> explicitly.

+1.

cheers
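For reference, the /reserved-memory form suggested above looks like this in a devicetree source file. This is a minimal sketch: the node name, address, and size are made up, and only the no-map property matters here.

```dts
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		/* hypothetical firmware-owned region that must not be
		 * covered by the kernel's linear mapping */
		fw-region@80000000 {
			reg = <0x0 0x80000000 0x0 0x100000>;
			no-map;
		};
	};
};
```

Unlike a bare /memreserve/ entry, this documents the no-map requirement explicitly instead of changing the semantics of every reserved region.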
Re: bug: usb: gadget: FSL_UDC_CORE Corrupted request list leads to unrecoverable loop.
On Thu, 2021-12-02 at 20:35 +, Leo Li wrote:
> > -----Original Message-----
> > From: Joakim Tjernlund
> > Sent: Wednesday, December 1, 2021 8:19 AM
> > To: regressi...@leemhuis.info; Leo Li ;
> > eugene_bordenkirc...@selinc.com; linux-...@vger.kernel.org; linuxppc-
> > d...@lists.ozlabs.org
> > Cc: gre...@linuxfoundation.org; ba...@kernel.org
> > Subject: Re: bug: usb: gadget: FSL_UDC_CORE Corrupted request list leads to
> > unrecoverable loop.
> >
> > On Tue, 2021-11-30 at 12:56 +0100, Joakim Tjernlund wrote:
> > > On Mon, 2021-11-29 at 23:48 +, Eugene Bordenkircher wrote:
> > > > Agreed,
> > > >
> > > > We are happy pick up the torch on this, but I'd like to try and hear from
> > > > Joakim first before we do. The patch set is his, so I'd like to give him the
> > > > opportunity. I think he's the only one that can add a truly proper description
> > > > as well because he mentioned that this includes a "few more fixes" than just
> > > > the one we ran into. I'd rather hear from him than try to reverse engineer
> > > > what was being addressed.
> > > >
> > > > Joakim, if you are still watching the thread, would you like to take a stab
> > > > at it? If I don't hear from you in a couple days, we'll pick up the torch and do
> > > > what we can.
> > >
> > > I am far away from this now and still on 4.19. I don't mind if you tweak
> > > the patches for better "upstreamability"
> >
> > Even better would be to migrate to the chipidea driver, I am told just a few
> > tweaks are needed but this is probably something NXP should do as they
> > have access to other SOC's using chipidea.
>
> I agree with this direction but the problem was with bandwidth. As this
> controller was only used on legacy platforms, it is harder to justify new
> effort on it now.

Legacy? All PPC is legacy and not supported now?

 Jocke
Re: [PATCH v2 3/3] soc: fsl: Replace kernel.h with the necessary inclusions
On Thu, Dec 2, 2021 at 3:30 PM Andy Shevchenko wrote:
>
> On Thu, Dec 02, 2021 at 08:01:54PM +, Leo Li wrote:
> > > From: Andy Shevchenko
> > > Sent: Thursday, December 2, 2021 3:33 AM
> > > On Wed, Dec 01, 2021 at 01:41:16PM -0600, Li Yang wrote:
> > > > On Tue, Nov 23, 2021 at 10:32 AM Andy Shevchenko
> > > > wrote:
>
> ...
>
> > > > The build test is good. I have applied it for next. Thanks.
> > >
> > > Thanks, what about MAINTAINERS updates? I don't see them neither in next
> > > nor in your tree.
> >
> > I am ok with these MAINTAINERS updates. I thought you want to send them
> > directly to Linus. I can take them if you like.
>
> I was just pointing out that it would be good that you (as a maintainer of SOC
> FSL) have them applied and pushed for the current cycle, but they are not code
> fixes anyway, so it's not critical.
>
> TL;DR: Yes, please take them, thanks!

Got it. Both applied for next. Thanks.

Regards,
Leo
Re: [PATCH v2 3/3] soc: fsl: Replace kernel.h with the necessary inclusions
On Thu, Dec 02, 2021 at 08:01:54PM +, Leo Li wrote:
> > From: Andy Shevchenko
> > Sent: Thursday, December 2, 2021 3:33 AM
> > On Wed, Dec 01, 2021 at 01:41:16PM -0600, Li Yang wrote:
> > > On Tue, Nov 23, 2021 at 10:32 AM Andy Shevchenko
> > > wrote:

...

> > > The build test is good. I have applied it for next. Thanks.
> >
> > Thanks, what about MAINTAINERS updates? I don't see them neither in next
> > nor in your tree.
>
> I am ok with these MAINTAINERS updates. I thought you want to send them
> directly to Linus. I can take them if you like.

I was just pointing out that it would be good that you (as a maintainer of SOC
FSL) have them applied and pushed for the current cycle, but they are not code
fixes anyway, so it's not critical.

TL;DR: Yes, please take them, thanks!

-- 
With Best Regards,
Andy Shevchenko
[PATCH] soc/fsl/qman: test: Make use of the helper function kthread_run_on_cpu()
Replace kthread_create/kthread_bind/wake_up_process() with
kthread_run_on_cpu() to simplify the code.

Signed-off-by: Cai Huoqing
---
 drivers/soc/fsl/qbman/qman_test_stash.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/soc/fsl/qbman/qman_test_stash.c b/drivers/soc/fsl/qbman/qman_test_stash.c
index b7e8e5ec884c..7ab259cb139e 100644
--- a/drivers/soc/fsl/qbman/qman_test_stash.c
+++ b/drivers/soc/fsl/qbman/qman_test_stash.c
@@ -108,14 +108,12 @@ static int on_all_cpus(int (*fn)(void))
 			.fn = fn,
 			.started = ATOMIC_INIT(0)
 		};
-		struct task_struct *k = kthread_create(bstrap_fn, ,
-						       "hotpotato%d", cpu);
+		struct task_struct *k = kthread_run_on_cpu(bstrap_fn, ,
+							   cpu, "hotpotato/%u");
 		int ret;

 		if (IS_ERR(k))
 			return -ENOMEM;
-		kthread_bind(k, cpu);
-		wake_up_process(k);
 		/*
 		 * If we call kthread_stop() before the "wake up" has had an
 		 * effect, then the thread may exit with -EINTR without ever
-- 
2.25.1
RE: bug: usb: gadget: FSL_UDC_CORE Corrupted request list leads to unrecoverable loop.
> -----Original Message-----
> From: Joakim Tjernlund
> Sent: Wednesday, December 1, 2021 8:19 AM
> To: regressi...@leemhuis.info; Leo Li ;
> eugene_bordenkirc...@selinc.com; linux-...@vger.kernel.org; linuxppc-
> d...@lists.ozlabs.org
> Cc: gre...@linuxfoundation.org; ba...@kernel.org
> Subject: Re: bug: usb: gadget: FSL_UDC_CORE Corrupted request list leads to
> unrecoverable loop.
>
> On Tue, 2021-11-30 at 12:56 +0100, Joakim Tjernlund wrote:
> > On Mon, 2021-11-29 at 23:48 +, Eugene Bordenkircher wrote:
> > > Agreed,
> > >
> > > We are happy pick up the torch on this, but I'd like to try and hear from
> > > Joakim first before we do. The patch set is his, so I'd like to give him the
> > > opportunity. I think he's the only one that can add a truly proper description
> > > as well because he mentioned that this includes a "few more fixes" than just
> > > the one we ran into. I'd rather hear from him than try to reverse engineer
> > > what was being addressed.
> > >
> > > Joakim, if you are still watching the thread, would you like to take a stab
> > > at it? If I don't hear from you in a couple days, we'll pick up the torch and do
> > > what we can.
> >
> > I am far away from this now and still on 4.19. I don't mind if you tweak
> > the patches for better "upstreamability"
>
> Even better would be to migrate to the chipidea driver, I am told just a few
> tweaks are needed but this is probably something NXP should do as they
> have access to other SOC's using chipidea.

I agree with this direction but the problem was with bandwidth. As this
controller was only used on legacy platforms, it is harder to justify new
effort on it now.

Regards,
Leo
RE: [PATCH v2 3/3] soc: fsl: Replace kernel.h with the necessary inclusions
> -----Original Message-----
> From: Andy Shevchenko
> Sent: Thursday, December 2, 2021 3:33 AM
> To: Leo Li
> Cc: linuxppc-dev@lists.ozlabs.org; linux-ker...@vger.kernel.org; linux-arm-
> ker...@lists.infradead.org; Qiang Zhao
> Subject: Re: [PATCH v2 3/3] soc: fsl: Replace kernel.h with the necessary
> inclusions
>
> On Wed, Dec 01, 2021 at 01:41:16PM -0600, Li Yang wrote:
> > On Tue, Nov 23, 2021 at 10:32 AM Andy Shevchenko
> > wrote:
> > >
> > > On Tue, Nov 16, 2021 at 11:38:01AM +0200, Andy Shevchenko wrote:
> > > > On Mon, Nov 15, 2021 at 10:24:36PM +, Leo Li wrote:
> > > > > > From: Andy Shevchenko
> > > > > > Sent: Monday, November 15, 2021 5:30 AM
> > > > > > On Wed, Nov 10, 2021 at 12:59:52PM +0200, Andy Shevchenko wrote:
> > > > > >
> > > > > > ...
> > > > > >
> > > > > > > v2: updated Cc list based on previous changes to MAINTAINERS
> > > > > >
> > > > > > Any comments on this, please?
> > > > > >
> > > > > > I really want to decrease amount of kernel.h usage in the common headers.
> > > > > > So others won't copy'n'paste bad example.
> > > > >
> > > > > There seems to be no problem with the patch although I didn't get time to
> > > > > really compile with it applied.
> > > > >
> > > > > Will pick them up later after build test.
> > > >
> > > > Thank you!
> > > >
> > > > Note, it has two fixes against MAINTAINERS which may be sent, I
> > > > believe, sooner than later to Linus.
> > >
> > > Any new so far?
> >
> > The build test is good. I have applied it for next. Thanks.
>
> Thanks, what about MAINTAINERS updates? I don't see them neither in next
> nor in your tree.

I am ok with these MAINTAINERS updates. I thought you want to send them
directly to Linus. I can take them if you like.

Regards,
Leo
Re: [patch 11/22] x86/hyperv: Refactor hv_msi_domain_free_irqs()
On Sat, Nov 27, 2021 at 02:18:51AM +0100, Thomas Gleixner wrote:
> No point in looking up things over and over. Just look up the associated
> irq data and work from there.
>
> No functional change.
>
> Signed-off-by: Thomas Gleixner

Acked-by: Wei Liu
Re: [PATCH v6 17/18] powerpc/64s: Move hash MMU support code under CONFIG_PPC_64S_HASH_MMU
Nicholas Piggin writes:
> Compiling out hash support code when CONFIG_PPC_64S_HASH_MMU=n saves
> 128kB kernel image size (90kB text) on powernv_defconfig minus KVM,
> 350kB on pseries_defconfig minus KVM, 40kB on a tiny config.
>
> Signed-off-by: Nicholas Piggin

Tested this series on a P9. Tried to force some invalid configs with
KVM and it held up. Also built all defconfigs from make help.

Tested-by: Fabiano Rosas

> ---
>  arch/powerpc/Kconfig                          |  2 +-
>  arch/powerpc/include/asm/book3s/64/mmu.h      | 21 ++--
>  .../include/asm/book3s/64/tlbflush-hash.h     |  6 
>  arch/powerpc/include/asm/book3s/pgtable.h     |  4 +++
>  arch/powerpc/include/asm/mmu_context.h        |  2 ++
>  arch/powerpc/include/asm/paca.h               |  8 +
>  arch/powerpc/kernel/asm-offsets.c             |  2 ++
>  arch/powerpc/kernel/entry_64.S                |  4 +--
>  arch/powerpc/kernel/exceptions-64s.S          | 16 ++
>  arch/powerpc/kernel/mce.c                     |  2 +-
>  arch/powerpc/kernel/mce_power.c               | 10 --
>  arch/powerpc/kernel/paca.c                    | 18 ---
>  arch/powerpc/kernel/process.c                 | 13 
>  arch/powerpc/kernel/prom.c                    |  2 ++
>  arch/powerpc/kernel/setup_64.c                |  5 +++
>  arch/powerpc/kexec/core_64.c                  |  4 +--
>  arch/powerpc/kexec/ranges.c                   |  4 +++
>  arch/powerpc/mm/book3s64/Makefile             | 15 +
>  arch/powerpc/mm/book3s64/hugetlbpage.c        |  2 ++
>  arch/powerpc/mm/book3s64/mmu_context.c        | 32 +++
>  arch/powerpc/mm/book3s64/pgtable.c            |  2 +-
>  arch/powerpc/mm/book3s64/radix_pgtable.c      |  4 +++
>  arch/powerpc/mm/copro_fault.c                 |  2 ++
>  arch/powerpc/mm/ptdump/Makefile               |  2 +-
>  arch/powerpc/platforms/powernv/idle.c         |  2 ++
>  arch/powerpc/platforms/powernv/setup.c        |  2 ++
>  arch/powerpc/platforms/pseries/lpar.c         | 11 +--
>  arch/powerpc/platforms/pseries/lparcfg.c      |  2 +-
>  arch/powerpc/platforms/pseries/mobility.c     |  6 
>  arch/powerpc/platforms/pseries/ras.c          |  2 ++
>  arch/powerpc/platforms/pseries/reconfig.c     |  2 ++
>  arch/powerpc/platforms/pseries/setup.c        |  6 ++--
>  arch/powerpc/xmon/xmon.c                      |  8 +++--
>  drivers/misc/lkdtm/core.c                     |  2 +-
>  34 files changed, 173 insertions(+), 52 deletions(-)
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 1fa336ec8faf..fb48823ccd62 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -129,7 +129,7 @@ config PPC
>  	select ARCH_HAS_KCOV
>  	select ARCH_HAS_MEMBARRIER_CALLBACKS
>  	select ARCH_HAS_MEMBARRIER_SYNC_CORE
> -	select ARCH_HAS_MEMREMAP_COMPAT_ALIGN	if PPC_BOOK3S_64
> +	select ARCH_HAS_MEMREMAP_COMPAT_ALIGN	if PPC_64S_HASH_MMU
>  	select ARCH_HAS_MMIOWB			if PPC64
>  	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
>  	select ARCH_HAS_PHYS_TO_DMA
> diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
> index 015d7d972d16..c480d21a146c 100644
> --- a/arch/powerpc/include/asm/book3s/64/mmu.h
> +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
> @@ -104,7 +104,9 @@ typedef struct {
>  	 * from EA and new context ids to build the new VAs.
>  	 */
>  	mm_context_id_t id;
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	mm_context_id_t extended_id[TASK_SIZE_USER64/TASK_CONTEXT_SIZE];
> +#endif
>  };
>
>  /* Number of bits in the mm_cpumask */
> @@ -116,7 +118,9 @@ typedef struct {
>  	/* Number of user space windows opened in process mm_context */
>  	atomic_t vas_windows;
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	struct hash_mm_context *hash_context;
> +#endif
>
>  	void __user *vdso;
>  	/*
> @@ -139,6 +143,7 @@ typedef struct {
>  #endif
>  } mm_context_t;
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  static inline u16 mm_ctx_user_psize(mm_context_t *ctx)
>  {
>  	return ctx->hash_context->user_psize;
> @@ -199,8 +204,15 @@ static inline struct subpage_prot_table *mm_ctx_subpage_prot(mm_context_t *ctx)
>  extern int mmu_linear_psize;
>  extern int mmu_virtual_psize;
>  extern int mmu_vmalloc_psize;
> -extern int mmu_vmemmap_psize;
>  extern int mmu_io_psize;
> +#else /* CONFIG_PPC_64S_HASH_MMU */
> +#ifdef CONFIG_PPC_64K_PAGES
> +#define mmu_virtual_psize MMU_PAGE_64K
> +#else
> +#define mmu_virtual_psize MMU_PAGE_4K
> +#endif
> +#endif
> +extern int mmu_vmemmap_psize;
>
>  /* MMU initialization */
>  void mmu_early_init_devtree(void);
> @@ -239,8 +251,9 @@ static inline void setup_initial_memory_limit(phys_addr_t first_memblock_base,
>  	 * know which translations we will pick. Hence go with hash
>  	 * restrictions.
>  	 */
> -	return
[PATCH v1 08/11] powerpc/code-patching: Move patch_exception() outside code-patching.c
patch_exception() is dedicated to book3e/64 and is nothing more than a
normal use of patch_branch(), so move it into a place dedicated to
book3e/64.

Signed-off-by: Christophe Leroy
---
 arch/powerpc/include/asm/code-patching.h     |  7 ---
 arch/powerpc/include/asm/exception-64e.h     |  4 
 arch/powerpc/include/asm/nohash/64/pgtable.h |  6 ++
 arch/powerpc/lib/code-patching.c             | 16 
 arch/powerpc/mm/nohash/book3e_pgtable.c      | 14 ++
 5 files changed, 24 insertions(+), 23 deletions(-)

diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
index 46e8c5a8ce51..275061c3c977 100644
--- a/arch/powerpc/include/asm/code-patching.h
+++ b/arch/powerpc/include/asm/code-patching.h
@@ -63,13 +63,6 @@ int instr_is_relative_link_branch(ppc_inst_t instr);
 unsigned long branch_target(const u32 *instr);
 int translate_branch(ppc_inst_t *instr, const u32 *dest, const u32 *src);
 bool is_conditional_branch(ppc_inst_t instr);
-#ifdef CONFIG_PPC_BOOK3E_64
-void __patch_exception(int exc, unsigned long addr);
-#define patch_exception(exc, name) do { \
-	extern unsigned int name; \
-	__patch_exception((exc), (unsigned long)); \
-} while (0)
-#endif

 #define OP_RT_RA_MASK	0xUL
 #define LIS_R2		(PPC_RAW_LIS(_R2, 0))
diff --git a/arch/powerpc/include/asm/exception-64e.h b/arch/powerpc/include/asm/exception-64e.h
index 40cdcb2fb057..b1ef1e92c34a 100644
--- a/arch/powerpc/include/asm/exception-64e.h
+++ b/arch/powerpc/include/asm/exception-64e.h
@@ -149,6 +149,10 @@ exc_##label##_book3e:
 	addi	r11,r13,PACA_EXTLB;	\
 	TLB_MISS_RESTORE(r11)

+#ifndef __ASSEMBLY__
+extern unsigned int interrupt_base_book3e;
+#endif
+
 #define SET_IVOR(vector_number, vector_offset)	\
 	LOAD_REG_ADDR(r3,interrupt_base_book3e);	\
 	ori	r3,r3,vector_offset@l;	\
diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
index 9d2905a47410..a3313e853e5e 100644
--- a/arch/powerpc/include/asm/nohash/64/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
@@ -313,6 +313,12 @@
 extern int __meminit vmemmap_create_mapping(unsigned long start,
 					    unsigned long phys);
 extern void vmemmap_remove_mapping(unsigned long start,
 				   unsigned long page_size);

+void __patch_exception(int exc, unsigned long addr);
+#define patch_exception(exc, name) do { \
+	extern unsigned int name; \
+	__patch_exception((exc), (unsigned long)); \
+} while (0)
+
 #endif /* __ASSEMBLY__ */
 #endif /* _ASM_POWERPC_NOHASH_64_PGTABLE_H */
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 2d878e67df3f..17e6443eb6c8 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -370,22 +370,6 @@ int translate_branch(ppc_inst_t *instr, const u32 *dest, const u32 *src)
 	return 1;
 }

-#ifdef CONFIG_PPC_BOOK3E_64
-void __patch_exception(int exc, unsigned long addr)
-{
-	extern unsigned int interrupt_base_book3e;
-	unsigned int *ibase = &interrupt_base_book3e;
-
-	/* Our exceptions vectors start with a NOP and -then- a branch
-	 * to deal with single stepping from userspace which stops on
-	 * the second instruction. Thus we need to patch the second
-	 * instruction of the exception, not the first one
-	 */
-
-	patch_branch(ibase + (exc / 4) + 1, addr, 0);
-}
-#endif
-
 #ifdef CONFIG_CODE_PATCHING_SELFTEST

 static int instr_is_branch_to_addr(const u32 *instr, unsigned long addr)
diff --git a/arch/powerpc/mm/nohash/book3e_pgtable.c b/arch/powerpc/mm/nohash/book3e_pgtable.c
index 77884e24281d..7b6db97c2bdc 100644
--- a/arch/powerpc/mm/nohash/book3e_pgtable.c
+++ b/arch/powerpc/mm/nohash/book3e_pgtable.c
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include

 #include
@@ -115,3 +116,16 @@ int __ref map_kernel_page(unsigned long ea, unsigned long pa, pgprot_t prot)
 	smp_wmb();
 	return 0;
 }
+
+void __patch_exception(int exc, unsigned long addr)
+{
+	unsigned int *ibase = &interrupt_base_book3e;
+
+	/* Our exceptions vectors start with a NOP and -then- a branch
+	 * to deal with single stepping from userspace which stops on
+	 * the second instruction. Thus we need to patch the second
+	 * instruction of the exception, not the first one
+	 */
+
+	patch_branch(ibase + (exc / 4) + 1, addr, 0);
+}
-- 
2.33.1
[PATCH v1 10/11] powerpc/code-patching: Move code patching selftests in its own file
Code patching selftests are half of code-patching.c. As they are
guarded by CONFIG_CODE_PATCHING_SELFTESTS, they'd be better in their
own file.

Also add a missing __init for instr_is_branch_to_addr()

Signed-off-by: Christophe Leroy
---
 arch/powerpc/lib/Makefile                     |   2 +
 arch/powerpc/lib/code-patching.c              | 355 --
 .../{code-patching.c => test-code-patching.c} | 353 +
 3 files changed, 3 insertions(+), 707 deletions(-)
 copy arch/powerpc/lib/{code-patching.c => test-code-patching.c} (56%)

diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
index c2654894b468..3e183f4b4bda 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -21,6 +21,8 @@ endif

 obj-y += alloc.o code-patching.o feature-fixups.o pmem.o

+obj-$(CONFIG_CODE_PATCHING_SELFTEST) += test-code-patching.o
+
 ifndef CONFIG_KASAN
 obj-y	+= string.o memcmp_$(BITS).o
 obj-$(CONFIG_PPC32)	+= strlen_32.o
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index e07de5db06c0..906d43463366 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -3,13 +3,10 @@
  * Copyright 2008 Michael Ellerman, IBM Corporation.
  */

-#include
 #include
 #include
 #include
-#include
 #include
-#include
 #include
 #include
@@ -354,355 +351,3 @@ int translate_branch(ppc_inst_t *instr, const u32 *dest, const u32 *src)

 	return 1;
 }
-
-#ifdef CONFIG_CODE_PATCHING_SELFTEST
-
-static int instr_is_branch_to_addr(const u32 *instr, unsigned long addr)
-{
-	if (instr_is_branch_iform(ppc_inst_read(instr)) ||
-	    instr_is_branch_bform(ppc_inst_read(instr)))
-		return branch_target(instr) == addr;
-
-	return 0;
-}
-
-static void __init test_trampoline(void)
-{
-	asm ("nop;nop;\n");
-}
-
-#define check(x)	do {	\
-	if (!(x))	\
-		pr_err("code-patching: test failed at line %d\n", __LINE__); \
-} while (0)
-
-static void __init test_branch_iform(void)
-{
-	int err;
-	ppc_inst_t instr;
-	u32 tmp[2];
-	u32 *iptr = tmp;
-	unsigned long addr = (unsigned long)tmp;
-
-	/* The simplest case, branch to self, no flags */
-	check(instr_is_branch_iform(ppc_inst(0x4800)));
-	/* All bits of target set, and flags */
-	check(instr_is_branch_iform(ppc_inst(0x4bff)));
-	/* High bit of opcode set, which is wrong */
-	check(!instr_is_branch_iform(ppc_inst(0xcbff)));
-	/* Middle bits of opcode set, which is wrong */
-	check(!instr_is_branch_iform(ppc_inst(0x7bff)));
-
-	/* Simplest case, branch to self with link */
-	check(instr_is_branch_iform(ppc_inst(0x4801)));
-	/* All bits of targets set */
-	check(instr_is_branch_iform(ppc_inst(0x4bfd)));
-	/* Some bits of targets set */
-	check(instr_is_branch_iform(ppc_inst(0x4bff00fd)));
-	/* Must be a valid branch to start with */
-	check(!instr_is_branch_iform(ppc_inst(0x7bfd)));
-
-	/* Absolute branch to 0x100 */
-	patch_instruction(iptr, ppc_inst(0x48000103));
-	check(instr_is_branch_to_addr(iptr, 0x100));
-	/* Absolute branch to 0x420fc */
-	patch_instruction(iptr, ppc_inst(0x480420ff));
-	check(instr_is_branch_to_addr(iptr, 0x420fc));
-	/* Maximum positive relative branch, + 20MB - 4B */
-	patch_instruction(iptr, ppc_inst(0x49fc));
-	check(instr_is_branch_to_addr(iptr, addr + 0x1FC));
-	/* Smallest negative relative branch, - 4B */
-	patch_instruction(iptr, ppc_inst(0x4bfc));
-	check(instr_is_branch_to_addr(iptr, addr - 4));
-	/* Largest negative relative branch, - 32 MB */
-	patch_instruction(iptr, ppc_inst(0x4a00));
-	check(instr_is_branch_to_addr(iptr, addr - 0x200));
-
-	/* Branch to self, with link */
-	err = create_branch(, iptr, addr, BRANCH_SET_LINK);
-	patch_instruction(iptr, instr);
-	check(instr_is_branch_to_addr(iptr, addr));
-
-	/* Branch to self - 0x100, with link */
-	err = create_branch(, iptr, addr - 0x100, BRANCH_SET_LINK);
-	patch_instruction(iptr, instr);
-	check(instr_is_branch_to_addr(iptr, addr - 0x100));
-
-	/* Branch to self + 0x100, no link */
-	err = create_branch(, iptr, addr + 0x100, 0);
-	patch_instruction(iptr, instr);
-	check(instr_is_branch_to_addr(iptr, addr + 0x100));
-
-	/* Maximum relative negative offset, - 32 MB */
-	err = create_branch(, iptr, addr - 0x200, BRANCH_SET_LINK);
-	patch_instruction(iptr, instr);
-	check(instr_is_branch_to_addr(iptr, addr - 0x200));
-
-	/* Out of range relative negative offset, - 32 MB + 4*/
-	err = create_branch(, iptr, addr - 0x204, BRANCH_SET_LINK);
-	check(err);
-
-	/* Out of range relative positive offset, + 32 MB */
-
[PATCH v1 09/11] powerpc/code-patching: Move instr_is_branch_{i/b}form() in code-patching.h
To enable moving selftests into their own C file in the following
patch, move instr_is_branch_iform() and instr_is_branch_bform() to
code-patching.h

Signed-off-by: Christophe Leroy
---
 arch/powerpc/include/asm/code-patching.h | 15 +++
 arch/powerpc/lib/code-patching.c         | 15 ---
 2 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
index 275061c3c977..e26080539c31 100644
--- a/arch/powerpc/include/asm/code-patching.h
+++ b/arch/powerpc/include/asm/code-patching.h
@@ -58,6 +58,21 @@ static inline int modify_instruction_site(s32 *site, unsigned int clr, unsigned
 	return modify_instruction((unsigned int *)patch_site_addr(site), clr, set);
 }

+static inline unsigned int branch_opcode(ppc_inst_t instr)
+{
+	return ppc_inst_primary_opcode(instr) & 0x3F;
+}
+
+static inline int instr_is_branch_iform(ppc_inst_t instr)
+{
+	return branch_opcode(instr) == 18;
+}
+
+static inline int instr_is_branch_bform(ppc_inst_t instr)
+{
+	return branch_opcode(instr) == 16;
+}
+
 int instr_is_relative_branch(ppc_inst_t instr);
 int instr_is_relative_link_branch(ppc_inst_t instr);
 unsigned long branch_target(const u32 *instr);
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 17e6443eb6c8..e07de5db06c0 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -285,21 +285,6 @@ int create_cond_branch(ppc_inst_t *instr, const u32 *addr,
 	return 0;
 }

-static unsigned int branch_opcode(ppc_inst_t instr)
-{
-	return ppc_inst_primary_opcode(instr) & 0x3F;
-}
-
-static int instr_is_branch_iform(ppc_inst_t instr)
-{
-	return branch_opcode(instr) == 18;
-}
-
-static int instr_is_branch_bform(ppc_inst_t instr)
-{
-	return branch_opcode(instr) == 16;
-}
-
 int instr_is_relative_branch(ppc_inst_t instr)
 {
 	if (ppc_inst_val(instr) & BRANCH_ABSOLUTE)
-- 
2.33.1
[PATCH v1 05/11] powerpc/code-patching: Reorganise do_patch_instruction() to ease error handling
Split do_patch_instruction() in two functions, the caller doing the spin locking and the callee doing everything else. And remove a few unnecessary initialisations and intermediate variables. This allows the callee to return from anywhere in the function. Signed-off-by: Christophe Leroy --- arch/powerpc/lib/code-patching.c | 37 ++-- 1 file changed, 21 insertions(+), 16 deletions(-) diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c index 5fa719a4ee69..a43ca22313ee 100644 --- a/arch/powerpc/lib/code-patching.c +++ b/arch/powerpc/lib/code-patching.c @@ -129,13 +129,30 @@ static void unmap_patch_area(unsigned long addr) flush_tlb_kernel_range(addr, addr + PAGE_SIZE); } +static int __do_patch_instruction(u32 *addr, ppc_inst_t instr) +{ + int err; + u32 *patch_addr; + unsigned long text_poke_addr; + + text_poke_addr = (unsigned long)__this_cpu_read(text_poke_area)->addr; + patch_addr = (u32 *)(text_poke_addr + offset_in_page(addr)); + + err = map_patch_area(addr, text_poke_addr); + if (err) + return err; + + err = __patch_instruction(addr, instr, patch_addr); + + unmap_patch_area(text_poke_addr); + + return err; +} + static int do_patch_instruction(u32 *addr, ppc_inst_t instr) { int err; - u32 *patch_addr = NULL; unsigned long flags; - unsigned long text_poke_addr; - unsigned long kaddr = (unsigned long)addr; /* * During early early boot patch_instruction is called @@ -146,19 +163,7 @@ static int do_patch_instruction(u32 *addr, ppc_inst_t instr) return raw_patch_instruction(addr, instr); local_irq_save(flags); - - text_poke_addr = (unsigned long)__this_cpu_read(text_poke_area)->addr; - err = map_patch_area(addr, text_poke_addr); - if (err) - goto out; - - patch_addr = (u32 *)(text_poke_addr + (kaddr & ~PAGE_MASK)); - - err = __patch_instruction(addr, instr, patch_addr); - - unmap_patch_area(text_poke_addr); - -out: + err = __do_patch_instruction(addr, instr); local_irq_restore(flags); return err; -- 2.33.1
[PATCH v1 07/11] powerpc/code-patching: Use test_trampoline for prefixed patch test
Use the dedicated test_trampoline function for testing prefixed patching like other tests and remove the hand-coded assembly stuff. Signed-off-by: Christophe Leroy --- arch/powerpc/lib/Makefile | 2 +- arch/powerpc/lib/code-patching.c | 24 +--- arch/powerpc/lib/test_code-patching.S | 20 3 files changed, 10 insertions(+), 36 deletions(-) delete mode 100644 arch/powerpc/lib/test_code-patching.S diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile index 9e5d0f413b71..c2654894b468 100644 --- a/arch/powerpc/lib/Makefile +++ b/arch/powerpc/lib/Makefile @@ -19,7 +19,7 @@ CFLAGS_code-patching.o += -DDISABLE_BRANCH_PROFILING CFLAGS_feature-fixups.o += -DDISABLE_BRANCH_PROFILING endif -obj-y += alloc.o code-patching.o feature-fixups.o pmem.o test_code-patching.o +obj-y += alloc.o code-patching.o feature-fixups.o pmem.o ifndef CONFIG_KASAN obj-y += string.o memcmp_$(BITS).o diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c index e7a2a41ae8eb..2d878e67df3f 100644 --- a/arch/powerpc/lib/code-patching.c +++ b/arch/powerpc/lib/code-patching.c @@ -399,7 +399,7 @@ static int instr_is_branch_to_addr(const u32 *instr, unsigned long addr) static void __init test_trampoline(void) { - asm ("nop;\n"); + asm ("nop;nop;\n"); } #define check(x) do {\ @@ -708,25 +708,19 @@ static void __init test_translate_branch(void) vfree(buf); } -#ifdef CONFIG_PPC64 static void __init test_prefixed_patching(void) { - extern unsigned int code_patching_test1[]; - extern unsigned int code_patching_test1_expected[]; - extern unsigned int end_code_patching_test1[]; + u32 *iptr = (u32 *)ppc_function_entry(test_trampoline); + u32 expected[2] = {OP_PREFIX << 26, 0}; + ppc_inst_t inst = ppc_inst_prefix(OP_PREFIX << 26, 0); - __patch_instruction(code_patching_test1, - ppc_inst_prefix(OP_PREFIX << 26, 0x00000000), - code_patching_test1); + if (!IS_ENABLED(CONFIG_PPC64)) + return; + + patch_instruction(iptr, inst); - check(!memcmp(code_patching_test1,
code_patching_test1_expected, - sizeof(unsigned int) * - (end_code_patching_test1 - code_patching_test1))); + check(!memcmp(iptr, expected, sizeof(expected))); } -#else -static inline void test_prefixed_patching(void) {} -#endif static int __init test_code_patching(void) { diff --git a/arch/powerpc/lib/test_code-patching.S b/arch/powerpc/lib/test_code-patching.S deleted file mode 100644 index a9be6107844e.. --- a/arch/powerpc/lib/test_code-patching.S +++ /dev/null @@ -1,20 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* - * Copyright (C) 2020 IBM Corporation - */ -#include - - .text - -#define globl(x) \ - .globl x; \ -x: - -globl(code_patching_test1) - nop - nop -globl(end_code_patching_test1) - -globl(code_patching_test1_expected) - .long OP_PREFIX << 26 - .long 0x00000000 -- 2.33.1
[PATCH v1 01/11] powerpc/code-patching: Remove pr_debug()/pr_devel() messages and fix check()
code-patching has been working for years now, time has come to remove debugging messages. Change the useful message to KERN_INFO and remove the other ones. Also add KERN_ERR to the check() macro and change it into a do/while to make checkpatch happy. Signed-off-by: Christophe Leroy --- This series applies on top of series "[v5,1/5] powerpc/inst: Refactor ___get_user_instr()" https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=274258 arch/powerpc/lib/code-patching.c | 16 +++- 1 file changed, 7 insertions(+), 9 deletions(-) diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c index 312324a26df3..4fe4464b7a84 100644 --- a/arch/powerpc/lib/code-patching.c +++ b/arch/powerpc/lib/code-patching.c @@ -95,7 +95,6 @@ static int map_patch_area(void *addr, unsigned long text_poke_addr) err = map_kernel_page(text_poke_addr, (pfn << PAGE_SHIFT), PAGE_KERNEL); - pr_devel("Mapped addr %lx with pfn %lx:%d\n", text_poke_addr, pfn, err); if (err) return -1; @@ -130,8 +129,6 @@ static inline int unmap_patch_area(unsigned long addr) if (unlikely(!ptep)) return -EINVAL; - pr_devel("clearing mm %p, pte %p, addr %lx\n", &init_mm, ptep, addr); - /* * In hash, pte_clear flushes the tlb, in radix, we have to */ @@ -190,10 +187,9 @@ static int do_patch_instruction(u32 *addr, ppc_inst_t instr) int patch_instruction(u32 *addr, ppc_inst_t instr) { /* Make sure we aren't patching a freed init section */ - if (init_mem_is_free && init_section_contains(addr, 4)) { - pr_debug("Skipping init section patching addr: 0x%px\n", addr); + if (init_mem_is_free && init_section_contains(addr, 4)) return 0; - } + return do_patch_instruction(addr, instr); } NOKPROBE_SYMBOL(patch_instruction); @@ -411,8 +407,10 @@ static void __init test_trampoline(void) asm ("nop;\n"); } -#define check(x) \ - if (!(x)) printk("code-patching: test failed at line %d\n", __LINE__); +#define check(x) do {\ + if (!(x)) \ + pr_err("code-patching: test failed at line %d\n", __LINE__); \ +} while (0) static void
__init test_branch_iform(void) { @@ -737,7 +735,7 @@ static inline void test_prefixed_patching(void) {} static int __init test_code_patching(void) { - printk(KERN_DEBUG "Running code patching self-tests ...\n"); + pr_info("Running code patching self-tests ...\n"); test_branch_iform(); test_branch_bform(); -- 2.33.1
[PATCH v1 02/11] powerpc/code-patching: Remove init_mem_is_free
A new state has been added by commit d2635f2012a4 ("mm: create a new system state and fix core_kernel_text()"). That state tells when initmem is about to be released and is redundant with init_mem_is_free. Remove init_mem_is_free. Signed-off-by: Christophe Leroy --- arch/powerpc/include/asm/setup.h | 1 - arch/powerpc/lib/code-patching.c | 3 +-- arch/powerpc/mm/mem.c| 2 -- 3 files changed, 1 insertion(+), 5 deletions(-) diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h index 6c1a7d217d1a..426a2d8d028f 100644 --- a/arch/powerpc/include/asm/setup.h +++ b/arch/powerpc/include/asm/setup.h @@ -9,7 +9,6 @@ extern void ppc_printk_progress(char *s, unsigned short hex); extern unsigned int rtas_data; extern unsigned long long memory_limit; -extern bool init_mem_is_free; extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask); struct device_node; diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c index 4fe4464b7a84..7bb8dd2dc19e 100644 --- a/arch/powerpc/lib/code-patching.c +++ b/arch/powerpc/lib/code-patching.c @@ -15,7 +15,6 @@ #include #include #include -#include #include static int __patch_instruction(u32 *exec_addr, ppc_inst_t instr, u32 *patch_addr) @@ -187,7 +186,7 @@ static int do_patch_instruction(u32 *addr, ppc_inst_t instr) int patch_instruction(u32 *addr, ppc_inst_t instr) { /* Make sure we aren't patching a freed init section */ - if (init_mem_is_free && init_section_contains(addr, 4)) + if (system_state >= SYSTEM_FREEING_INITMEM && init_section_contains(addr, 4)) return 0; return do_patch_instruction(addr, instr); diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c index bd5d91a31183..8e301cd8925b 100644 --- a/arch/powerpc/mm/mem.c +++ b/arch/powerpc/mm/mem.c @@ -26,7 +26,6 @@ #include unsigned long long memory_limit; -bool init_mem_is_free; unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss; EXPORT_SYMBOL(empty_zero_page); @@ -312,7 +311,6 @@ void 
free_initmem(void) { ppc_md.progress = ppc_printk_progress; mark_initmem_nx(); - init_mem_is_free = true; free_initmem_default(POISON_FREE_INITMEM); } -- 2.33.1
[PATCH v1 06/11] powerpc/code-patching: Fix patch_branch() return on out-of-range failure
Do not silently ignore a failure of create_branch() in patch_branch(). Return -ERANGE. Signed-off-by: Christophe Leroy --- arch/powerpc/lib/code-patching.c | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c index a43ca22313ee..e7a2a41ae8eb 100644 --- a/arch/powerpc/lib/code-patching.c +++ b/arch/powerpc/lib/code-patching.c @@ -191,7 +191,9 @@ int patch_branch(u32 *addr, unsigned long target, int flags) { ppc_inst_t instr; - create_branch(&instr, addr, target, flags); + if (create_branch(&instr, addr, target, flags)) + return -ERANGE; + return patch_instruction(addr, instr); } -- 2.33.1
[PATCH v1 03/11] powerpc/code-patching: Fix error handling in do_patch_instruction()
Use real error codes instead of -1, so that errors returned by callees can be propagated to callers. Signed-off-by: Christophe Leroy --- arch/powerpc/lib/code-patching.c | 13 +++-- 1 file changed, 3 insertions(+), 10 deletions(-) diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c index 7bb8dd2dc19e..4ce2b6457757 100644 --- a/arch/powerpc/lib/code-patching.c +++ b/arch/powerpc/lib/code-patching.c @@ -85,19 +85,13 @@ void __init poking_init(void) static int map_patch_area(void *addr, unsigned long text_poke_addr) { unsigned long pfn; - int err; if (is_vmalloc_or_module_addr(addr)) pfn = vmalloc_to_pfn(addr); else pfn = __pa_symbol(addr) >> PAGE_SHIFT; - err = map_kernel_page(text_poke_addr, (pfn << PAGE_SHIFT), PAGE_KERNEL); - - if (err) - return -1; - - return 0; + return map_kernel_page(text_poke_addr, (pfn << PAGE_SHIFT), PAGE_KERNEL); } static inline int unmap_patch_area(unsigned long addr) @@ -156,10 +150,9 @@ static int do_patch_instruction(u32 *addr, ppc_inst_t instr) local_irq_save(flags); text_poke_addr = (unsigned long)__this_cpu_read(text_poke_area)->addr; - if (map_patch_area(addr, text_poke_addr)) { - err = -1; + err = map_patch_area(addr, text_poke_addr); + if (err) goto out; - } patch_addr = (u32 *)(text_poke_addr + (kaddr & ~PAGE_MASK)); -- 2.33.1
[PATCH v1 04/11] powerpc/code-patching: Fix unmap_patch_area() error handling
pXd_offset() doesn't return NULL. When the base is NULL, it still adds the offset. Use pXd_none() to check validity instead. It also improves performance by folding away non-existent levels, as pXd_none() always returns 0 in that case. Such an error is unexpected, use WARN_ON() so that the caller doesn't have to worry about it, and drop the returned value. And now that unmap_patch_area() doesn't return an error, we can take into account the error returned by __patch_instruction(). While at it, remove the 'inline' keyword, which is useless. Signed-off-by: Christophe Leroy --- arch/powerpc/lib/code-patching.c | 30 +- 1 file changed, 13 insertions(+), 17 deletions(-) diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c index 4ce2b6457757..5fa719a4ee69 100644 --- a/arch/powerpc/lib/code-patching.c +++ b/arch/powerpc/lib/code-patching.c @@ -94,7 +94,7 @@ static int map_patch_area(void *addr, unsigned long text_poke_addr) return map_kernel_page(text_poke_addr, (pfn << PAGE_SHIFT), PAGE_KERNEL); } -static inline int unmap_patch_area(unsigned long addr) +static void unmap_patch_area(unsigned long addr) { pte_t *ptep; pmd_t *pmdp; @@ -103,32 +103,30 @@ static inline int unmap_patch_area(unsigned long addr) pgd_t *pgdp; pgdp = pgd_offset_k(addr); - if (unlikely(!pgdp)) - return -EINVAL; + if (WARN_ON(pgd_none(*pgdp))) + return; p4dp = p4d_offset(pgdp, addr); - if (unlikely(!p4dp)) - return -EINVAL; + if (WARN_ON(p4d_none(*p4dp))) + return; pudp = pud_offset(p4dp, addr); - if (unlikely(!pudp)) - return -EINVAL; + if (WARN_ON(pud_none(*pudp))) + return; pmdp = pmd_offset(pudp, addr); - if (unlikely(!pmdp)) - return -EINVAL; + if (WARN_ON(pmd_none(*pmdp))) + return; ptep = pte_offset_kernel(pmdp, addr); - if (unlikely(!ptep)) - return -EINVAL; + if (WARN_ON(pte_none(*ptep))) + return; /* * In hash, pte_clear flushes the tlb, in radix, we have to */ pte_clear(&init_mm, addr, ptep); flush_tlb_kernel_range(addr, addr + PAGE_SIZE); - - return 0; } static int
do_patch_instruction(u32 *addr, ppc_inst_t instr) @@ -156,11 +154,9 @@ static int do_patch_instruction(u32 *addr, ppc_inst_t instr) patch_addr = (u32 *)(text_poke_addr + (kaddr & ~PAGE_MASK)); - __patch_instruction(addr, instr, patch_addr); + err = __patch_instruction(addr, instr, patch_addr); - err = unmap_patch_area(text_poke_addr); - if (err) - pr_warn("failed to unmap %lx\n", text_poke_addr); + unmap_patch_area(text_poke_addr); out: local_irq_restore(flags); -- 2.33.1
[PATCH v1 11/11] powerpc/code-patching: Replace patch_instruction() by ppc_inst_write() in selftests
The purpose of the selftests is to check that instructions are properly formed, not that they properly run. For that, the tests can use normal memory rather than special test memory. In preparation for a future patch enforcing patch_instruction() to be used only on valid text areas, implement a ppc_inst_write() function which is the complement of ppc_inst_read(). This new function writes the formatted instruction into valid kernel memory and doesn't bother with the icache. Signed-off-by: Christophe Leroy --- arch/powerpc/include/asm/inst.h | 8 +++ arch/powerpc/lib/test-code-patching.c | 85 ++- 2 files changed, 53 insertions(+), 40 deletions(-) diff --git a/arch/powerpc/include/asm/inst.h b/arch/powerpc/include/asm/inst.h index 631436f3f5c3..21fe8594e078 100644 --- a/arch/powerpc/include/asm/inst.h +++ b/arch/powerpc/include/asm/inst.h @@ -131,6 +131,14 @@ static inline unsigned long ppc_inst_as_ulong(ppc_inst_t x) return (u64)ppc_inst_val(x) << 32 | ppc_inst_suffix(x); } +static inline void ppc_inst_write(u32 *ptr, ppc_inst_t x) +{ + if (!ppc_inst_prefixed(x)) + *ptr = ppc_inst_val(x); + else + *(u64 *)ptr = ppc_inst_as_ulong(x); +} + #define PPC_INST_STR_LEN sizeof(" ") static inline char *__ppc_inst_as_str(char str[PPC_INST_STR_LEN], ppc_inst_t x) diff --git a/arch/powerpc/lib/test-code-patching.c b/arch/powerpc/lib/test-code-patching.c index e358c9d8a03e..c44823292f73 100644 --- a/arch/powerpc/lib/test-code-patching.c +++ b/arch/powerpc/lib/test-code-patching.c @@ -54,39 +54,39 @@ static void __init test_branch_iform(void) check(!instr_is_branch_iform(ppc_inst(0x7bfffffd))); /* Absolute branch to 0x100 */ - patch_instruction(iptr, ppc_inst(0x48000103)); + ppc_inst_write(iptr, ppc_inst(0x48000103)); check(instr_is_branch_to_addr(iptr, 0x100)); /* Absolute branch to 0x420fc */ - patch_instruction(iptr, ppc_inst(0x480420ff)); + ppc_inst_write(iptr, ppc_inst(0x480420ff)); check(instr_is_branch_to_addr(iptr, 0x420fc)); /* Maximum positive relative branch, + 32 MB - 4B */ -
patch_instruction(iptr, ppc_inst(0x49fffffc)); + ppc_inst_write(iptr, ppc_inst(0x49fffffc)); check(instr_is_branch_to_addr(iptr, addr + 0x1FFFFFC)); /* Smallest negative relative branch, - 4B */ - patch_instruction(iptr, ppc_inst(0x4bfffffc)); + ppc_inst_write(iptr, ppc_inst(0x4bfffffc)); check(instr_is_branch_to_addr(iptr, addr - 4)); /* Largest negative relative branch, - 32 MB */ - patch_instruction(iptr, ppc_inst(0x4a000000)); + ppc_inst_write(iptr, ppc_inst(0x4a000000)); check(instr_is_branch_to_addr(iptr, addr - 0x2000000)); /* Branch to self, with link */ err = create_branch(&instr, iptr, addr, BRANCH_SET_LINK); - patch_instruction(iptr, instr); + ppc_inst_write(iptr, instr); check(instr_is_branch_to_addr(iptr, addr)); /* Branch to self - 0x100, with link */ err = create_branch(&instr, iptr, addr - 0x100, BRANCH_SET_LINK); - patch_instruction(iptr, instr); + ppc_inst_write(iptr, instr); check(instr_is_branch_to_addr(iptr, addr - 0x100)); /* Branch to self + 0x100, no link */ err = create_branch(&instr, iptr, addr + 0x100, 0); - patch_instruction(iptr, instr); + ppc_inst_write(iptr, instr); check(instr_is_branch_to_addr(iptr, addr + 0x100)); /* Maximum relative negative offset, - 32 MB */ err = create_branch(&instr, iptr, addr - 0x2000000, BRANCH_SET_LINK); - patch_instruction(iptr, instr); + ppc_inst_write(iptr, instr); check(instr_is_branch_to_addr(iptr, addr - 0x2000000)); /* Out of range relative negative offset, - 32 MB + 4*/ @@ -103,7 +103,7 @@ static void __init test_branch_iform(void) /* Check flags are masked correctly */ err = create_branch(&instr, iptr, addr, 0xFFFFFFFC); - patch_instruction(iptr, instr); + ppc_inst_write(iptr, instr); check(instr_is_branch_to_addr(iptr, addr)); check(ppc_inst_equal(instr, ppc_inst(0x48000000))); } @@ -143,19 +143,19 @@ static void __init test_branch_bform(void) check(!instr_is_branch_bform(ppc_inst(0x7bffffff))); /* Absolute conditional branch to 0x100 */ - patch_instruction(iptr, ppc_inst(0x43ff0103)); + ppc_inst_write(iptr, ppc_inst(0x43ff0103)); check(instr_is_branch_to_addr(iptr, 0x100));
/* Absolute conditional branch to 0x20fc */ - patch_instruction(iptr, ppc_inst(0x43ff20ff)); + ppc_inst_write(iptr, ppc_inst(0x43ff20ff)); check(instr_is_branch_to_addr(iptr, 0x20fc)); /* Maximum positive relative conditional branch, + 32 KB - 4B */ - patch_instruction(iptr, ppc_inst(0x43ff7ffc)); + ppc_inst_write(iptr, ppc_inst(0x43ff7ffc)); check(instr_is_branch_to_addr(iptr,
Re: [PATCH v2 3/3] soc: fsl: Replace kernel.h with the necessary inclusions
On Wed, Dec 01, 2021 at 01:41:16PM -0600, Li Yang wrote: > On Tue, Nov 23, 2021 at 10:32 AM Andy Shevchenko > wrote: > > > > On Tue, Nov 16, 2021 at 11:38:01AM +0200, Andy Shevchenko wrote: > > > On Mon, Nov 15, 2021 at 10:24:36PM +0000, Leo Li wrote: > > > > > From: Andy Shevchenko > > > > > Sent: Monday, November 15, 2021 5:30 AM > > > > > On Wed, Nov 10, 2021 at 12:59:52PM +0200, Andy Shevchenko wrote: > > > > > > ... > > > > > > > > > v2: updated Cc list based on previous changes to MAINTAINERS > > > > > > > > > > Any comments on this, please? > > > > > > > > > > I really want to decrease amount of kernel.h usage in the common > > > > > headers. > > > > > So others won't copy'n'paste bad example. > > > > There seems to be no problem with the patch although I didn't get time > > > > to really compile with it applied. > > > > > > > > Will pick them up later after build test. > > > > > > Thank you! > > > > > > Note, it has two fixes against MAINTAINERS which may be sent, I believe, > > > sooner than later to Linus. > > > > Any news so far? > > The build test is good. I have applied it for next. Thanks. Thanks. What about the MAINTAINERS updates? I see them neither in next nor in your tree. -- With Best Regards, Andy Shevchenko
Re: [PATCH] of: unmap memory regions in /memreserve node
On Tue, Nov 30, 2021 at 04:43:31PM -0600, Rob Herring wrote: > +linuxppc-dev > > On Wed, Nov 24, 2021 at 09:33:47PM +0800, Calvin Zhang wrote: > > Reserved memory regions in /memreserve node aren't and shouldn't > > be referenced elsewhere. So mark them no-map to skip direct mapping > > for them. > > I suspect this has a high chance of breaking some platform. There's no > rule a region can't be accessed. The subtlety is that the region shouldn't be explicitly accessed (e.g. modified), but the OS is permitted to have the region mapped. In ePAPR this is described as: This requirement is necessary because the client program is permitted to map memory with storage attributes specified as not Write Through Required, not Caching Inhibited, and Memory Coherence Required (i.e., WIMG = 0b001x), and VLE=0 where supported. The client program may use large virtual pages that contain reserved memory. However, the client program may not modify reserved memory, so the boot program may perform accesses to reserved memory as Write Through Required where conflicting values for this storage attribute are architecturally permissible. Historically arm64 relied upon this for spin-table to work, and I *think* we might not need that any more. I agree that there's a high chance this will break something (especially on 16K or 64K page size kernels), so I'd prefer to leave it as-is. If someone requires no-map behaviour, they should use a /reserved-memory entry with a no-map property, which will work today and document their requirement explicitly. Thanks, Mark.
> > Signed-off-by: Calvin Zhang > > --- > > drivers/of/fdt.c | 2 +- > > 1 file changed, 1 insertion(+), 1 deletion(-) > > > > diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c > > index bdca35284ceb..9e88cc8445f6 100644 > > --- a/drivers/of/fdt.c > > +++ b/drivers/of/fdt.c > > @@ -638,7 +638,7 @@ void __init early_init_fdt_scan_reserved_mem(void) > > fdt_get_mem_rsv(initial_boot_params, n, , ); > > if (!size) > > break; > > - early_init_dt_reserve_memory_arch(base, size, false); > > + early_init_dt_reserve_memory_arch(base, size, true); > > } > > > > fdt_scan_reserved_mem(); > > -- > > 2.30.2 > > > >