[powerpc:next-test] BUILD SUCCESS 95c63df939789153540060ead8eb5d9fd4606274
Tested configs (summary): allyes/allmod/allno/defconfig builds across
h8300, sh, parisc, s390, i386, sparc, mips and powerpc; riscv (defconfig,
rv32, nommu_k210 and nommu_virt defconfigs, allyes/allmod/allnoconfig);
x86_64 (defconfig, allyesconfig, rhel, rhel-7.6-kselftests, rhel-8.3,
kexec); plus the 2020-11-15/16 x86_64 and i386 randconfig batches
(a001-a006, a011-a016).

clang tested configs: the 2020-11-15/16 x86_64 randconfig batches
(a001-a006, a011-a016).

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org
[powerpc:merge] BUILD SUCCESS daeef940ffae4fdf0ca4865c26ce7c32cb13bd52
Tested configs (summary): platform defconfigs for m68k (hp300, sun3x,
atari, bvme6000), arm (gemini, lpc32xx, corgi, milbeaut_m10v, imx_v4_v5,
sunxi, multi_v5, cerfcube, exynos, tegra, omap1, ixp4xx, spear13xx,
eseries_pxa, h5000, simpad), powerpc/powerpc64 (defconfig, lite5200b,
ep8248e, asp8347, stx_gp3, mpc885_ads, sequoia, tqm8548, tqm5200,
tqm8560), mips (malta_kvm, loongson1b, ip32, loongson3, nlm_xlr, e55),
sh (edosk7705, apsh4ad0a, rts7751r2d1, dreamcast, r7785rp, urquell,
rsk7264, sh7710voipgw), arc (nsimosci_hs), c6x (evmc6472) and s390
(alldefconfig); allyes/allmod/allno/defconfig builds across ia64, m68k,
nios2, arc, nds32, c6x, csky, alpha, xtensa, h8300, sh, parisc, s390,
i386, sparc, mips and powerpc; plus the 2020-11-09/10/16 x86_64 and i386
randconfig batches (a001-a006) and the start of the 2020-11-15 x86_64
batch.
[powerpc:fixes-test] BUILD SUCCESS 75b49620267c700f0a07fec7f27f69852db70e46
Tested configs (summary): allyes/allmod/allno/defconfig builds across
sparc, i386, mips and powerpc; riscv (defconfig, rv32, nommu_k210 and
nommu_virt defconfigs, allyes/allmod/allnoconfig); x86_64 (defconfig,
allyesconfig, rhel, rhel-7.6-kselftests, rhel-8.3, kexec); plus the
2020-11-15/16 x86_64 and i386 randconfig batches (a001-a006, a011-a016).

clang tested configs: the 2020-11-15/16 x86_64 randconfig batches
(a001-a006, a011-a016).

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org
[PATCH 1/2] powerpc: Retire e200 core (mpc555x processor)
There is no defconfig selecting CONFIG_E200, and no platform.

e200 is an earlier version of booke, a predecessor of e500, with some
particularities like a unified cache instead of both an instruction cache
and a data cache.

Remove it.

Signed-off-by: Christophe Leroy
---
 arch/powerpc/Makefile                     |  1 -
 arch/powerpc/include/asm/cputable.h       | 11 -
 arch/powerpc/include/asm/mmu.h            |  2 +-
 arch/powerpc/include/asm/reg.h            |  5 --
 arch/powerpc/include/asm/reg_booke.h      | 12 -
 arch/powerpc/kernel/cpu_setup_fsl_booke.S |  9
 arch/powerpc/kernel/cputable.c            | 46 --
 arch/powerpc/kernel/head_booke.h          |  3 +-
 arch/powerpc/kernel/head_fsl_booke.S      | 57 +--
 arch/powerpc/kernel/setup_32.c            |  2 -
 arch/powerpc/kernel/traps.c               | 25 --
 arch/powerpc/mm/nohash/fsl_booke.c        | 12 ++---
 arch/powerpc/platforms/Kconfig.cputype    | 13 ++
 13 files changed, 11 insertions(+), 187 deletions(-)

diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index a4d56f0a41d9..16b8336f91dd 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -248,7 +248,6 @@ KBUILD_CFLAGS += $(call cc-option,-mno-string)
 cpu-as-$(CONFIG_40x) += -Wa,-m405
 cpu-as-$(CONFIG_44x) += -Wa,-m440
 cpu-as-$(CONFIG_ALTIVEC) += $(call as-option,-Wa$(comma)-maltivec)
-cpu-as-$(CONFIG_E200) += -Wa,-me200
 cpu-as-$(CONFIG_E500) += -Wa,-me500

 # When using '-many -mpower4' gas will first try and find a matching power4
diff --git a/arch/powerpc/include/asm/cputable.h b/arch/powerpc/include/asm/cputable.h
index 3d2f94afc13a..7d815a3e7206 100644
--- a/arch/powerpc/include/asm/cputable.h
+++ b/arch/powerpc/include/asm/cputable.h
@@ -41,7 +41,6 @@ extern int machine_check_4xx(struct pt_regs *regs);
 extern int machine_check_440A(struct pt_regs *regs);
 extern int machine_check_e500mc(struct pt_regs *regs);
 extern int machine_check_e500(struct pt_regs *regs);
-extern int machine_check_e200(struct pt_regs *regs);
 extern int machine_check_47x(struct pt_regs *regs);
 int machine_check_8xx(struct pt_regs *regs);
 int machine_check_83xx(struct pt_regs *regs);
@@ -383,10 +382,6 @@ static inline void cpu_feature_keys_init(void) { }
 #define CPU_FTRS_440x6	(CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE | \
	CPU_FTR_INDEXED_DCR)
 #define CPU_FTRS_47X	(CPU_FTRS_440x6)
-#define CPU_FTRS_E200	(CPU_FTR_SPE_COMP | \
-	CPU_FTR_NODSISRALIGN | CPU_FTR_COHERENT_ICACHE | \
-	CPU_FTR_NOEXECUTE | \
-	CPU_FTR_DEBUG_LVL_EXC)
 #define CPU_FTRS_E500	(CPU_FTR_MAYBE_CAN_DOZE | \
	CPU_FTR_SPE_COMP | CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_NODSISRALIGN | \
	CPU_FTR_NOEXECUTE)
@@ -535,9 +530,6 @@ enum {
 #ifdef CONFIG_PPC_47x
	CPU_FTRS_47X | CPU_FTR_476_DD2 |
 #endif
-#ifdef CONFIG_E200
-	CPU_FTRS_E200 |
-#endif
 #ifdef CONFIG_E500
	CPU_FTRS_E500 | CPU_FTRS_E500_2 |
 #endif
@@ -608,9 +600,6 @@ enum {
 #ifdef CONFIG_44x
	CPU_FTRS_44X & CPU_FTRS_440x6 &
 #endif
-#ifdef CONFIG_E200
-	CPU_FTRS_E200 &
-#endif
 #ifdef CONFIG_E500
	CPU_FTRS_E500 & CPU_FTRS_E500_2 &
 #endif
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index 255a1837e9f7..b724c38589a1 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -166,7 +166,7 @@ enum {
 #ifdef CONFIG_44x
	MMU_FTR_TYPE_44x |
 #endif
-#if defined(CONFIG_E200) || defined(CONFIG_E500)
+#ifdef CONFIG_E500
	MMU_FTR_TYPE_FSL_E | MMU_FTR_BIG_PHYS | MMU_FTR_USE_TLBILX |
 #endif
 #ifdef CONFIG_PPC_47x
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index f877a576b338..3c81a6efaf23 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -1232,14 +1232,9 @@
 #define SPRN_SPRG_WSCRATCH_MC	SPRN_SPRG1
 #define SPRN_SPRG_RSCRATCH4	SPRN_SPRG7R
 #define SPRN_SPRG_WSCRATCH4	SPRN_SPRG7W
-#ifdef CONFIG_E200
-#define SPRN_SPRG_RSCRATCH_DBG	SPRN_SPRG6R
-#define SPRN_SPRG_WSCRATCH_DBG	SPRN_SPRG6W
-#else
 #define SPRN_SPRG_RSCRATCH_DBG	SPRN_SPRG9
 #define SPRN_SPRG_WSCRATCH_DBG	SPRN_SPRG9
 #endif
-#endif

 #ifdef CONFIG_PPC_8xx
 #define SPRN_SPRG_SCRATCH0	SPRN_SPRG0
diff --git a/arch/powerpc/include/asm/reg_booke.h b/arch/powerpc/include/asm/reg_booke.h
index 29a948e0c0f2..262782f08fd4 100644
--- a/arch/powerpc/include/asm/reg_booke.h
+++ b/arch/powerpc/include/asm/reg_booke.h
@@ -281,18 +281,6 @@
 #define MSRP_PMMP	0x0004 /* Protect MSR[PMM] */
 #endif

-#ifdef CONFIG_E200
-#define MCSR_MCP	0x8000UL /* Machine Check Input Pin */
-#define MCSR_CP_PERR	0x2000UL /* Cache Push Parity Error */
-#define MCSR_CPERR	0x1000UL /* Cache Parity Error */
-#define MCSR_EXCP_ERR	0x0800UL /* ISI, ITLB, or Bus Error on 1st insn -
[PATCH 2/2] powerpc: Remove ucache_bsize
ppc601 and e200 were the users of ucache_bsize. ppc601 and e200 are now
gone.

Remove ucache_bsize.

Signed-off-by: Christophe Leroy
---
 arch/powerpc/include/asm/elf.h     | 2 +-
 arch/powerpc/kernel/setup-common.c | 4
 arch/powerpc/kernel/setup_32.c     | 1 -
 3 files changed, 1 insertion(+), 6 deletions(-)

diff --git a/arch/powerpc/include/asm/elf.h b/arch/powerpc/include/asm/elf.h
index 53ed2ca40151..900b8d7fdffa 100644
--- a/arch/powerpc/include/asm/elf.h
+++ b/arch/powerpc/include/asm/elf.h
@@ -168,7 +168,7 @@ do { \
	/* Cache size items */ \
	NEW_AUX_ENT(AT_DCACHEBSIZE, dcache_bsize); \
	NEW_AUX_ENT(AT_ICACHEBSIZE, icache_bsize); \
-	NEW_AUX_ENT(AT_UCACHEBSIZE, ucache_bsize); \
+	NEW_AUX_ENT(AT_UCACHEBSIZE, 0); \
	VDSO_AUX_ENT(AT_SYSINFO_EHDR, current->mm->context.vdso_base); \
	ARCH_DLINFO_CACHE_GEOMETRY; \
 } while (0)
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index 808ec9fab605..c23449a93fef 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -90,8 +90,6 @@ EXPORT_SYMBOL_GPL(boot_cpuid);
  */
 int dcache_bsize;
 int icache_bsize;
-int ucache_bsize;
-
 unsigned long klimit = (unsigned long) _end;
@@ -802,8 +800,6 @@ static __init void print_system_info(void)
	pr_info("dcache_bsize = 0x%x\n", dcache_bsize);
	pr_info("icache_bsize = 0x%x\n", icache_bsize);
-	if (ucache_bsize != 0)
-		pr_info("ucache_bsize = 0x%x\n", ucache_bsize);
	pr_info("cpu_features = 0x%016lx\n", cur_cpu_spec->cpu_features);
	pr_info("  possible = 0x%016lx\n",
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index 416e2c7a8b0a..8ba49a6bf515 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -222,5 +222,4 @@ __init void initialize_cache_info(void)
	 */
	dcache_bsize = cur_cpu_spec->dcache_bsize;
	icache_bsize = cur_cpu_spec->icache_bsize;
-	ucache_bsize = 0;
 }
--
2.25.0
[PATCH] powerpc: fix create_section_mapping compile warning
0day robot reports that a recent rework of how memory_add_physaddr_to_nid()
and phys_to_target_node() are declared resulted in the following new
compilation warning:

arch/powerpc/mm/mem.c:91:12: warning: no previous prototype for 'create_section_mapping' [-Wmissing-prototypes]
   91 | int __weak create_section_mapping(unsigned long start, unsigned long end,
      |            ^~~~~~~~~~~~~~~~~~~~~~

Fix this by moving the declaration of create_section_mapping() outside of
the CONFIG_NEED_MULTIPLE_NODES ifdef guard, and add an explicit include of
asm/mmzone.h in mem.c. An include of linux/mmzone.h is not sufficient.

Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Andrew Morton
Reported-by: kernel test robot
Signed-off-by: Dan Williams
---
 arch/powerpc/include/asm/mmzone.h | 7 +++--
 arch/powerpc/mm/mem.c             | 1 +
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/mmzone.h b/arch/powerpc/include/asm/mmzone.h
index 177fd18caf83..6cda76b57c5d 100644
--- a/arch/powerpc/include/asm/mmzone.h
+++ b/arch/powerpc/include/asm/mmzone.h
@@ -33,8 +33,6 @@ extern struct pglist_data *node_data[];
 extern int numa_cpu_lookup_table[];
 extern cpumask_var_t node_to_cpumask_map[];
 #ifdef CONFIG_MEMORY_HOTPLUG
-extern int create_section_mapping(unsigned long start, unsigned long end,
-				  int nid, pgprot_t prot);
 extern unsigned long max_pfn;
 u64 memory_hotplug_max(void);
 #else
@@ -48,5 +46,10 @@ u64 memory_hotplug_max(void);
 #define __HAVE_ARCH_RESERVED_KERNEL_PAGES
 #endif

+#ifdef CONFIG_MEMORY_HOTPLUG
+extern int create_section_mapping(unsigned long start, unsigned long end,
+				  int nid, pgprot_t prot);
+#endif
+
 #endif /* __KERNEL__ */
 #endif /* _ASM_MMZONE_H_ */
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 01ec2a252f09..3fc325bebe4d 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -50,6 +50,7 @@
 #include
 #include
 #include
+#include <asm/mmzone.h>
 #include
Re: [PATCH 1/6] ibmvfc: byte swap login_buf.resp values in attribute show functions
Tyrel,

> The checkpatch script only warns at 100 char lines these days. To be
> fair though, I did have two lines go over that limit by a couple
> characters, there are a couple commit log typos, and I had an if
> keyword with no space before the opening parenthesis. So, I'll
> happily re-spin.

Please tweak the little things that need fixing and resubmit.

> However, for my info going forward, is the SCSI subsystem sticking to
> 80 char lines as a hard limit?

As far as I'm concerned, the 80 char limit is mainly about ensuring that
the code is structured in a sensible way. Typesetting best practices also
suggest that longer lines are harder to read. So while I generally don't
strictly enforce the 80 char limit for drivers, I do push back if I feel
that readability could be improved by breaking the line or restructuring
the code. Use your best judgment to optimize for readability.

Thanks!

--
Martin K. Petersen	Oracle Linux Engineering
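For reference, the "warns at 100 cols, prefers 80" check is easy to approximate locally before sending a patch. A minimal sketch (the file name and helper name are illustrative, not from the mail; checkpatch.pl itself does much more):

```shell
# Flag lines longer than a given column limit, checkpatch-style.
check_line_len() {
    # $1 = file, $2 = max columns
    awk -v max="$2" 'length($0) > max {
        printf "%s:%d: line is %d columns\n", FILENAME, NR, length($0)
    }' "$1"
}

# Demo file: one short line, one line that is exactly 100 characters.
printf 'short line\n%0100d\n' 7 > /tmp/linelen_demo.c

check_line_len /tmp/linelen_demo.c 80    # flags line 2 (100 > 80)
check_line_len /tmp/linelen_demo.c 100   # prints nothing (100 is not > 100)
```

Running it against a patch's touched files with max=80 shows what a strict reviewer would see, and with max=100 what checkpatch currently warns about.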
Re: [PATCH] powerpc/powernv/sriov: fix unsigned int win compared to less than zero
Andrew Donnellan writes:
> On 10/11/20 10:19 pm, xiakaixu1...@gmail.com wrote:
>> From: Kaixu Xia
>>
>> Fix coccicheck warning:
>>
>> ./arch/powerpc/platforms/powernv/pci-sriov.c:443:7-10: WARNING: Unsigned expression compared with zero: win < 0
>> ./arch/powerpc/platforms/powernv/pci-sriov.c:462:7-10: WARNING: Unsigned expression compared with zero: win < 0
>>
>> Reported-by: Tosk Robot
>> Signed-off-by: Kaixu Xia
>
> This seems like the right fix, the value assigned to win can indeed be
> -1, so it should be signed. Thanks for sending the patch.
>
> Reviewed-by: Andrew Donnellan

I'll add:

Fixes: 39efc03e3ee8 ("powerpc/powernv/sriov: Move M64 BAR allocation into a helper")

Which I think is the culprit, as it changed:

  if (win >= phb->ioda.m64_bar_idx + 1)

to:

  if (win < 0)

cheers
Re: [PATCH 3/3] powerpc: fix -Wimplicit-fallthrough
On Sun, Nov 15, 2020 at 08:35:32PM -0800, Nick Desaulniers wrote:
> The "fallthrough" pseudo-keyword was added as a portable way to denote
> intentional fallthrough. Clang will still warn on cases where there is a
> fallthrough to an immediate break. Add explicit breaks for those cases.
>
> Link: https://github.com/ClangBuiltLinux/linux/issues/236
> Signed-off-by: Nick Desaulniers

Reviewed-by: Nathan Chancellor
Tested-by: Nathan Chancellor

> ---
>  arch/powerpc/kernel/prom_init.c | 1 +
>  arch/powerpc/kernel/uprobes.c   | 1 +
>  arch/powerpc/perf/imc-pmu.c     | 1 +
>  3 files changed, 3 insertions(+)
>
> diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
> index 38ae5933d917..e9d4eb6144e1 100644
> --- a/arch/powerpc/kernel/prom_init.c
> +++ b/arch/powerpc/kernel/prom_init.c
> @@ -355,6 +355,7 @@ static int __init prom_strtobool(const char *s, bool *res)
> 		default:
> 			break;
> 		}
> +		break;
> 	default:
> 		break;
> 	}
> diff --git a/arch/powerpc/kernel/uprobes.c b/arch/powerpc/kernel/uprobes.c
> index d200e7df7167..e8a63713e655 100644
> --- a/arch/powerpc/kernel/uprobes.c
> +++ b/arch/powerpc/kernel/uprobes.c
> @@ -141,6 +141,7 @@ int arch_uprobe_exception_notify(struct notifier_block *self,
> 	case DIE_SSTEP:
> 		if (uprobe_post_sstep_notifier(regs))
> 			return NOTIFY_STOP;
> +		break;
> 	default:
> 		break;
> 	}
> diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
> index 7b25548ec42b..e106909ff9c3 100644
> --- a/arch/powerpc/perf/imc-pmu.c
> +++ b/arch/powerpc/perf/imc-pmu.c
> @@ -1500,6 +1500,7 @@ static int update_pmu_ops(struct imc_pmu *pmu)
> 		pmu->pmu.stop = trace_imc_event_stop;
> 		pmu->pmu.read = trace_imc_event_read;
> 		pmu->attr_groups[IMC_FORMAT_ATTR] = &trace_imc_format_group;
> +		break;
> 	default:
> 		break;
> 	}
> --
> 2.29.2.299.gdc1121823c-goog
>
Re: [PATCH 2/3] Revert "lib: Revert use of fallthrough pseudo-keyword in lib/"
On Sun, Nov 15, 2020 at 08:35:31PM -0800, Nick Desaulniers wrote:
> This reverts commit 6a9dc5fd6170 ("lib: Revert use of fallthrough
> pseudo-keyword in lib/")
>
> Now that we can build arch/powerpc/boot/ free of -Wimplicit-fallthrough,
> re-enable these fixes for lib/.
>
> Link: https://github.com/ClangBuiltLinux/linux/issues/236
> Signed-off-by: Nick Desaulniers

Reviewed-by: Nathan Chancellor
Tested-by: Nathan Chancellor

> ---
>  lib/asn1_decoder.c      |  4 ++--
>  lib/assoc_array.c       |  2 +-
>  lib/bootconfig.c        |  4 ++--
>  lib/cmdline.c           | 10 +-
>  lib/dim/net_dim.c       |  2 +-
>  lib/dim/rdma_dim.c      |  4 ++--
>  lib/glob.c              |  2 +-
>  lib/siphash.c           | 36 ++--
>  lib/ts_fsm.c            |  2 +-
>  lib/vsprintf.c          | 14 +++---
>  lib/xz/xz_dec_lzma2.c   |  4 ++--
>  lib/xz/xz_dec_stream.c  | 16
>  lib/zstd/bitstream.h    | 10 +-
>  lib/zstd/compress.c     |  2 +-
>  lib/zstd/decompress.c   | 12 ++--
>  lib/zstd/huf_compress.c |  4 ++--
>  16 files changed, 64 insertions(+), 64 deletions(-)
>
> diff --git a/lib/asn1_decoder.c b/lib/asn1_decoder.c
> index 58f72b25f8e9..13da529e2e72 100644
> --- a/lib/asn1_decoder.c
> +++ b/lib/asn1_decoder.c
> @@ -381,7 +381,7 @@ int asn1_ber_decoder(const struct asn1_decoder *decoder,
> 	case ASN1_OP_END_SET_ACT:
> 		if (unlikely(!(flags & FLAG_MATCHED)))
> 			goto tag_mismatch;
> -		/* fall through */
> +		fallthrough;
>
> 	case ASN1_OP_END_SEQ:
> 	case ASN1_OP_END_SET_OF:
> @@ -448,7 +448,7 @@ int asn1_ber_decoder(const struct asn1_decoder *decoder,
> 			pc += asn1_op_lengths[op];
> 			goto next_op;
> 		}
> -		/* fall through */
> +		fallthrough;
>
> 	case ASN1_OP_ACT:
> 		ret = actions[machine[pc + 1]](context, hdr, tag, data + tdp, len);
> diff --git a/lib/assoc_array.c b/lib/assoc_array.c
> index 6f4bcf524554..04c98799c3ba 100644
> --- a/lib/assoc_array.c
> +++ b/lib/assoc_array.c
> @@ -1113,7 +1113,7 @@ struct assoc_array_edit *assoc_array_delete(struct assoc_array *array,
> 					index_key))
> 				goto found_leaf;
> 		}
> -		/* fall through */
> +		fallthrough;
> 	case assoc_array_walk_tree_empty:
> 	case assoc_array_walk_found_wrong_shortcut:
> 	default:
> diff --git a/lib/bootconfig.c b/lib/bootconfig.c
> index 649ed44f199c..9f8c70a98fcf 100644
> --- a/lib/bootconfig.c
> +++ b/lib/bootconfig.c
> @@ -827,7 +827,7 @@ int __init xbc_init(char *buf, const char **emsg, int *epos)
> 					q - 2);
> 				break;
> 			}
> -			/* fall through */
> +			fallthrough;
> 		case '=':
> 			ret = xbc_parse_kv(&p, q, c);
> 			break;
> @@ -836,7 +836,7 @@ int __init xbc_init(char *buf, const char **emsg, int *epos)
> 			break;
> 		case '#':
> 			q = skip_comment(q);
> -			/* fall through */
> +			fallthrough;
> 		case ';':
> 		case '\n':
> 			ret = xbc_parse_key(&p, q);
> diff --git a/lib/cmdline.c b/lib/cmdline.c
> index 9e186234edc0..46f2cb4ce6d1 100644
> --- a/lib/cmdline.c
> +++ b/lib/cmdline.c
> @@ -144,23 +144,23 @@ unsigned long long memparse(const char *ptr, char **retptr)
> 	case 'E':
> 	case 'e':
> 		ret <<= 10;
> -		/* fall through */
> +		fallthrough;
> 	case 'P':
> 	case 'p':
> 		ret <<= 10;
> -		/* fall through */
> +		fallthrough;
> 	case 'T':
> 	case 't':
> 		ret <<= 10;
> -		/* fall through */
> +		fallthrough;
> 	case 'G':
> 	case 'g':
> 		ret <<= 10;
> -		/* fall through */
> +		fallthrough;
> 	case 'M':
> 	case 'm':
> 		ret <<= 10;
> -		/* fall through */
> +		fallthrough;
> 	case 'K':
> 	case 'k':
> 		ret <<= 10;
> diff --git a/lib/dim/net_dim.c b/lib/dim/net_dim.c
> index a4db51c21266..06811d866775 100644
> --- a/lib/dim/net_dim.c
> +++ b/lib/dim/net_dim.c
> @@ -233,7 +233,7 @@ void net_dim(struct dim *dim, struct dim_sample end_sample)
> 			schedule_work(&dim->work);
> 			break;
> 		}
> -		/* fall through */
> +		fallthrough;
> 	case DIM_START_MEASURE:
> 		dim_update_sample(end_sample.event_ctr, end_sample.pkt_ctr,
> 				  end_sample.byte_ctr, &dim->start_sample);
> diff --git
Re: [PATCH 1/3] powerpc: boot: include compiler_attributes.h
On Sun, Nov 15, 2020 at 08:35:30PM -0800, Nick Desaulniers wrote:
> The kernel uses `-include` to include include/linux/compiler_types.h
> into all translation units (see scripts/Makefile.lib), which #includes
> compiler_attributes.h.
>
> arch/powerpc/boot/ uses different compiler flags from the rest of the
> kernel. As such, it doesn't contain the definitions from these headers,
> and redefines a few that it needs.
>
> For the purpose of enabling -Wimplicit-fallthrough for ppc, include
> compiler_types.h via `-include`.
>
> Link: https://github.com/ClangBuiltLinux/linux/issues/236
> Signed-off-by: Nick Desaulniers

Reviewed-by: Nathan Chancellor
Tested-by: Nathan Chancellor

> ---
> We could just `#include "include/linux/compiler_types.h"` in the few .c
> sources used from lib/ (there are proper header guards in
> compiler_types.h).
>
> It was also noted in 6a9dc5fd6170 that we could -D__KERNEL__ and
> -include compiler_types.h like the main kernel does, though testing that
> produces a whole sea of warnings to clean up. This approach is minimally
> invasive.
>
>  arch/powerpc/boot/Makefile     | 1 +
>  arch/powerpc/boot/decompress.c | 1 -
>  2 files changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile
> index f8ce6d2dde7b..1659963a8f1d 100644
> --- a/arch/powerpc/boot/Makefile
> +++ b/arch/powerpc/boot/Makefile
> @@ -31,6 +31,7 @@ endif
> BOOTCFLAGS	:= -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
> 		 -fno-strict-aliasing -O2 -msoft-float -mno-altivec -mno-vsx \
> 		 -pipe -fomit-frame-pointer -fno-builtin -fPIC -nostdinc \
> +		 -include $(srctree)/include/linux/compiler_attributes.h \
> 		 $(LINUXINCLUDE)
>
> ifdef CONFIG_PPC64_BOOT_WRAPPER
> diff --git a/arch/powerpc/boot/decompress.c b/arch/powerpc/boot/decompress.c
> index 8bf39ef7d2df..6098b879ac97 100644
> --- a/arch/powerpc/boot/decompress.c
> +++ b/arch/powerpc/boot/decompress.c
> @@ -21,7 +21,6 @@
>
> #define STATIC static
> #define INIT
> -#define __always_inline inline
>
> /*
>  * The build process will copy the required zlib source files and headers
> --
> 2.29.2.299.gdc1121823c-goog
>
Re: [PATCH 1/2] kbuild: Hoist '--orphan-handling' into Kconfig
On Mon, Nov 16, 2020 at 05:41:58PM -0800, Nick Desaulniers wrote:
> On Fri, Nov 13, 2020 at 11:56 AM Nathan Chancellor wrote:
> >
> > Currently, '--orphan-handling=warn' is spread out across four different
> > architectures in their respective Makefiles, which makes it a little
> > unruly to deal with in case it needs to be disabled for a specific
> > linker version (in this case, ld.lld 10.0.1).
>
> Hi Nathan,
> This patch fails to apply for me via b4 on next-20201116 due to a
> conflict in arch/Kconfig:1028. Would you mind sending a rebased V2?

Hi Nick,

This series is intended to go into v5.10, so rebasing it against -next
defeats that; please test it against v5.10-rc4, where it still applies
cleanly. The conflicts will be handled by other entities (Stephen
Rothwell and Linus).

If you want to test it against -next, 'git am -3' will allow you to
easily handle the conflict.

Cheers,
Nathan
Re: [PATCH] ocxl: Mmio invalidation support
On 14/11/20 2:33 am, Christophe Lombard wrote:

OpenCAPI 4.0/5.0 with TLBI/SLBI snooping is not used due to performance
problems caused by the PAU having to process all incoming TLBI/SLBI
commands, which will cause them to back up on the PowerBus.

When the Address Translation Mode requires TLB and SLB invalidate
operations to be initiated using MMIO registers, a set of registers like
the following is used:

• XTS MMIO ATSD0 LPARID register
• XTS MMIO ATSD0 AVA register
• XTS MMIO ATSD0 launch register, write access initiates a shoot down
• XTS MMIO ATSD0 status register

The MMIO based mechanism also blocks the NPU/PAU from snooping TLBIE
commands from the PowerBus. The shootdown commands (ATSD) will be
generated using MMIO registers in the NPU/PAU and sent to the device.

Signed-off-by: Christophe Lombard

snowpatch has reported some minor checkpatch issues:
https://openpower.xyz/job/snowpatch/job/snowpatch-linux-checkpatch/16267//artifact/linux/checkpatch.log

--
Andrew Donnellan              OzLabs, ADL Canberra
a...@linux.ibm.com             IBM Australia Limited
Re: [PATCH] powerpc/powernv/memtrace: Fake non-memblock aligned sized traces
On Mon, Nov 16, 2020 at 11:02 PM Michael Ellerman wrote:
>
> Jordan Niethe writes:
> > The hardware trace macros which use the memory provided by memtrace are
> > able to use trace sizes as small as 16MB. Only memblock aligned values
> > can be removed from each NUMA node by writing that value to
> > memtrace/enable in debugfs. This means setting up, say, a 16MB trace is
> > not possible. To allow such a trace size, instead align whatever value
> > is written to memtrace/enable to the memblock size for the purpose of
> > removing it from each NUMA node, but report the written value from
> > memtrace/enable and memtrace/x/size in debugfs.
>
> Why does it matter if the size that's removed is larger than the size
> that was requested?
>
> Is it about constraining the size of the trace? If so, that seems like
> it should be the job of the tracing tools, not the kernel.

Yeah, it's about constraining the size. I'll just do it in the trace
tools.

> cheers
Re: Error: invalid switch -me200
On Fri, Nov 13, 2020 at 06:50:15PM -0600, Segher Boessenkool wrote:
> On Fri, Nov 13, 2020 at 04:37:38PM -0800, Fāng-ruì Sòng wrote:
> > On Fri, Nov 13, 2020 at 4:23 PM Segher Boessenkool wrote:
> > > On Fri, Nov 13, 2020 at 12:14:18PM -0800, Nick Desaulniers wrote:
> > > > > > Error: invalid switch -me200
> > > > > > Error: unrecognized option -me200
> > > > > >
> > > > > > 251 cpu-as-$(CONFIG_E200) += -Wa,-me200
> > > > > >
> > > > > > Are those all broken configs, or is Kconfig messed up such that
> > > > > > randconfig can select these when it should not?
> > > > >
> > > > > Hmmm, looks like this flag does not exist in mainline binutils?
> > > > > There is a thread in 2010 about this that Segher commented on:
> > > > >
> > > > > https://lore.kernel.org/linuxppc-dev/9859e645-954d-4d07-8003-ffcd2391a...@kernel.crashing.org/
> > > > >
> > > > > Guess this config should be eliminated?
> > >
> > > The help text for this config option says that e200 is used in 55xx,
> > > and there *is* an -me5500 GAS flag (which probably does this same
> > > thing, too). But is any of this tested, or useful, or wanted?
> > >
> > > Maybe Christophe knows, cc:ed.
> >
> > CC Alan Modra, a binutils global maintainer.
> >
> > Alan, can the few -Wa,-m* options be deleted from arch/powerpc/Makefile?
>
> All the others work fine (and are needed afaics), it is only -me200 that
> doesn't exist (in mainline binutils).

Right, and a quick check says it never existed. There is e200z4, added to
binutils with commit dfdaec14b0d (2016-08-01), but the kernel's -me200 was
added in 2005. I suspect the toolchain support only existed inside
Freescale and pushing it upstream was too difficult.

--
Alan Modra
Australia Development Lab, IBM
Re: Error: invalid switch -me200
On Mon, Nov 16, 2020 at 02:27:12PM -0600, Scott Wood wrote:
> On Fri, 2020-11-13 at 18:50 -0600, Segher Boessenkool wrote:
> > All the others work fine (and are needed afaics), it is only -me200
> > that doesn't exist (in mainline binutils). Perhaps -me5500 will work
> > for it instead.
>
> According to Wikipedia, e200 is from mpc55xx (for which I don't see any
> platform support having ever been added). e5500 is completely different
> (a 64-bit version of e500mc).

Ah yes, confusing processor numbers :-( That explains it, sorry.

Segher
Re: Error: invalid switch -me200
On Fri, 2020-11-13 at 18:50 -0600, Segher Boessenkool wrote:
> On Fri, Nov 13, 2020 at 04:37:38PM -0800, Fāng-ruì Sòng wrote:
> > On Fri, Nov 13, 2020 at 4:23 PM Segher Boessenkool wrote:
> > > On Fri, Nov 13, 2020 at 12:14:18PM -0800, Nick Desaulniers wrote:
> > > > > > Error: invalid switch -me200
> > > > > > Error: unrecognized option -me200
> > > > > >
> > > > > > 251 cpu-as-$(CONFIG_E200) += -Wa,-me200
> > > > > >
> > > > > > Are those all broken configs, or is Kconfig messed up such that
> > > > > > randconfig can select these when it should not?
> > > > >
> > > > > Hmmm, looks like this flag does not exist in mainline binutils?
> > > > > There is a thread in 2010 about this that Segher commented on:
> > > > >
> > > > > https://lore.kernel.org/linuxppc-dev/9859e645-954d-4d07-8003-ffcd2391a...@kernel.crashing.org/
> > > > >
> > > > > Guess this config should be eliminated?
> > >
> > > The help text for this config option says that e200 is used in 55xx,
> > > and there *is* an -me5500 GAS flag (which probably does this same
> > > thing, too). But is any of this tested, or useful, or wanted?
> > >
> > > Maybe Christophe knows, cc:ed.
> >
> > CC Alan Modra, a binutils global maintainer.
> >
> > Alan, can the few -Wa,-m* options be deleted from arch/powerpc/Makefile?
>
> All the others work fine (and are needed afaics), it is only -me200 that
> doesn't exist (in mainline binutils). Perhaps -me5500 will work for it
> instead.

According to Wikipedia, e200 is from mpc55xx (for which I don't see any
platform support having ever been added). e5500 is completely different
(a 64-bit version of e500mc).

-Scott
Re: [PATCH] powerpc: Drop -me200 addition to build flags
On Mon, 2020-11-16 at 23:09 +1100, Michael Ellerman wrote: > Currently a build with CONFIG_E200=y will fail with: > > Error: invalid switch -me200 > Error: unrecognized option -me200 > > Upstream binutils has never supported an -me200 option. Presumably it > was supported at some point by either a fork or Freescale internal > binutils. > > We can't support code that we can't even build test, so drop the > addition of -me200 to the build flags, so we can at least build with > CONFIG_E200=y. > > Reported-by: Németh Márton > Reported-by: kernel test robot > Signed-off-by: Michael Ellerman > --- > > More discussion: > https://lore.kernel.org/lkml/202011131146.g8dplqdd-...@intel.com > --- > arch/powerpc/Makefile | 1 - > 1 file changed, 1 deletion(-) Acked-by: Scott Wood I'd go further and remove E200 code entirely, unless someone with the hardware can claim that it actually works. There doesn't appear to be any actual platform support for an e200-based system. It seems to be a long-abandoned work in progress. -Scott
Re: [PATCH 3/5] perf/core: Fix arch_perf_get_page_size()
On 11/13/2020 6:19 AM, Peter Zijlstra wrote:

The (new) page-table walker in arch_perf_get_page_size() is broken in
various ways. Specifically, while it is used in a lockless manner, it
doesn't depend on CONFIG_HAVE_FAST_GUP, nor does it use the proper
_lockless offset methods, nor is it careful to read each entry only once.

Also the hugetlb support is broken due to calling pte_page() without
first checking pte_special().

Rewrite the whole thing to be a proper lockless page-table walker and
employ the new pXX_leaf_size() pgtable functions to determine the TLB
size without looking at the page-frames.

Fixes: 51b646b2d9f8 ("perf,mm: Handle non-page-table-aligned hugetlbfs")
Fixes: 8d97e71811aa ("perf/core: Add PERF_SAMPLE_DATA_PAGE_SIZE")

The issue (https://lkml.kernel.org/r/8e88ba79-7c40-ea32-a7ed-bdc4fc04b...@linux.intel.com)
has been fixed by this patch set.

Tested-by: Kan Liang

Signed-off-by: Peter Zijlstra (Intel)
---
 arch/arm64/include/asm/pgtable.h    |   3 +
 arch/sparc/include/asm/pgtable_64.h |  13
 arch/sparc/mm/hugetlbpage.c         |  19 --
 include/linux/pgtable.h             |  16 +
 kernel/events/core.c                | 102 +---
 5 files changed, 82 insertions(+), 71 deletions(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7001,90 +7001,62 @@ static u64 perf_virt_to_phys(u64 virt)
 	return phys_addr;
 }
 
-#ifdef CONFIG_MMU
-
 /*
- * Return the MMU page size of a given virtual address.
- *
- * This generic implementation handles page-table aligned huge pages, as well
- * as non-page-table aligned hugetlbfs compound pages.
- *
- * If an architecture supports and uses non-page-table aligned pages in their
- * kernel mapping it will need to provide it's own implementation of this
- * function.
+ * Return the MMU/TLB page size of a given virtual address.
  */
-__weak u64 arch_perf_get_page_size(struct mm_struct *mm, unsigned long addr)
+static u64 perf_get_tlb_page_size(struct mm_struct *mm, unsigned long addr)
 {
-	struct page *page;
-	pgd_t *pgd;
-	p4d_t *p4d;
-	pud_t *pud;
-	pmd_t *pmd;
-	pte_t *pte;
+	u64 size = 0;
 
-	pgd = pgd_offset(mm, addr);
-	if (pgd_none(*pgd))
-		return 0;
+#ifdef CONFIG_HAVE_FAST_GUP
+	pgd_t *pgdp, pgd;
+	p4d_t *p4dp, p4d;
+	pud_t *pudp, pud;
+	pmd_t *pmdp, pmd;
+	pte_t *ptep, pte;
 
-	p4d = p4d_offset(pgd, addr);
-	if (!p4d_present(*p4d))
+	pgdp = pgd_offset(mm, addr);
+	pgd = READ_ONCE(*pgdp);
+	if (pgd_none(pgd))
 		return 0;
 
-	if (p4d_leaf(*p4d))
-		return 1ULL << P4D_SHIFT;
+	if (pgd_leaf(pgd))
+		return pgd_leaf_size(pgd);
 
-	pud = pud_offset(p4d, addr);
-	if (!pud_present(*pud))
+	p4dp = p4d_offset_lockless(pgdp, pgd, addr);
+	p4d = READ_ONCE(*p4dp);
+	if (!p4d_present(p4d))
 		return 0;
 
-	if (pud_leaf(*pud)) {
-#ifdef pud_page
-		page = pud_page(*pud);
-		if (PageHuge(page))
-			return page_size(compound_head(page));
-#endif
-		return 1ULL << PUD_SHIFT;
-	}
+	if (p4d_leaf(p4d))
+		return p4d_leaf_size(p4d);
 
-	pmd = pmd_offset(pud, addr);
-	if (!pmd_present(*pmd))
+	pudp = pud_offset_lockless(p4dp, p4d, addr);
+	pud = READ_ONCE(*pudp);
+	if (!pud_present(pud))
 		return 0;
 
-	if (pmd_leaf(*pmd)) {
-#ifdef pmd_page
-		page = pmd_page(*pmd);
-		if (PageHuge(page))
-			return page_size(compound_head(page));
-#endif
-		return 1ULL << PMD_SHIFT;
-	}
+	if (pud_leaf(pud))
+		return pud_leaf_size(pud);
 
-	pte = pte_offset_map(pmd, addr);
-	if (!pte_present(*pte)) {
-		pte_unmap(pte);
+	pmdp = pmd_offset_lockless(pudp, pud, addr);
+	pmd = READ_ONCE(*pmdp);
+	if (!pmd_present(pmd))
 		return 0;
-	}
 
-	page = pte_page(*pte);
-	if (PageHuge(page)) {
-		u64 size = page_size(compound_head(page));
-		pte_unmap(pte);
-		return size;
-	}
-
-	pte_unmap(pte);
-	return PAGE_SIZE;
-}
+	if (pmd_leaf(pmd))
+		return pmd_leaf_size(pmd);
 
-#else
+	ptep = pte_offset_map(&pmd, addr);
+	pte = ptep_get_lockless(ptep);
+	if (pte_present(pte))
+		size = pte_leaf_size(pte);
+	pte_unmap(ptep);
+#endif /* CONFIG_HAVE_FAST_GUP */
 
-static u64 arch_perf_get_page_size(struct mm_struct *mm, unsigned long addr)
-{
-	return 0;
+	return size;
 }
 
-#endif
-
 static u64 perf_get_page_size(unsigned long addr)
 {
 	struct mm_struct *mm;
@@ -7109,7 +7081,7 @@ static u64 perf_get_page_size(unsigned l
 		mm = &init_mm;
 	}
 
-	size = arch_perf_get_page_size(mm, addr);
+
Re: [PATCH net-next 04/12] ibmvnic: Introduce xmit_more support using batched subCRQ hcalls
On 11/14/20 5:46 PM, Jakub Kicinski wrote: On Thu, 12 Nov 2020 13:09:59 -0600 Thomas Falcon wrote: Include support for the xmit_more feature utilizing the H_SEND_SUB_CRQ_INDIRECT hypervisor call which allows the sending of multiple subordinate Command Response Queue descriptors in one hypervisor call via a DMA-mapped buffer. This update reduces hypervisor calls and thus hypervisor call overhead per TX descriptor. Signed-off-by: Thomas Falcon The common bug with xmit_more is not flushing the already queued notifications when there is a drop. Any time you drop a skb you need to check it's not an skb that was the end of an xmit_more train and if so flush notifications (or just always flush on error). Looking at the driver e.g. this starting goto: if (ibmvnic_xmit_workarounds(skb, netdev)) { tx_dropped++; tx_send_failed++; ret = NETDEV_TX_OK; goto out; } Does not seem to hit any flush on its way out AFAICS. Hi, I included those updates in a later patch to ease review but see now that that was a mistake. I will merge those bits back into this patch and resubmit. Thanks!
Re: [PATCH net-next 01/12] ibmvnic: Ensure that subCRQ entry reads are ordered
On Mon, 16 Nov 2020 12:28:05 -0600 Thomas Falcon wrote: > On 11/14/20 5:35 PM, Jakub Kicinski wrote: > > On Thu, 12 Nov 2020 13:09:56 -0600 Thomas Falcon wrote: > >> Ensure that received Subordinate Command-Response Queue > >> entries are properly read in order by the driver. > >> > >> Signed-off-by: Thomas Falcon > > Are you sure this is not a bug fix? > Yes, I guess it does look like a bug fix. I can omit this in v2 and > submit this as a stand-alone patch to net? Yup, that's the preferred way. Thanks!
Re: [PATCH net-next 01/12] ibmvnic: Ensure that subCRQ entry reads are ordered
On 11/14/20 5:35 PM, Jakub Kicinski wrote: On Thu, 12 Nov 2020 13:09:56 -0600 Thomas Falcon wrote: Ensure that received Subordinate Command-Response Queue entries are properly read in order by the driver. Signed-off-by: Thomas Falcon Are you sure this is not a bug fix? Yes, I guess it does look like a bug fix. I can omit this in v2 and submit this as a stand-alone patch to net?
Re: [PATCH net-next 02/12] ibmvnic: Introduce indirect subordinate Command Response Queue buffer
On 11/14/20 5:35 PM, Jakub Kicinski wrote: On Thu, 12 Nov 2020 13:09:57 -0600 Thomas Falcon wrote: This patch introduces the infrastructure to send batched subordinate Command Response Queue descriptors, which are used by the ibmvnic driver to send TX frame and RX buffer descriptors. Signed-off-by: Thomas Falcon @@ -2957,6 +2963,19 @@ static struct ibmvnic_sub_crq_queue *init_sub_crq_queue(struct ibmvnic_adapter scrq->adapter = adapter; scrq->size = 4 * PAGE_SIZE / sizeof(*scrq->msgs); + scrq->ind_buf.index = 0; + + scrq->ind_buf.indir_arr = + dma_alloc_coherent(dev, + IBMVNIC_IND_ARR_SZ, + &scrq->ind_buf.indir_dma, + GFP_KERNEL); + + if (!scrq->ind_buf.indir_arr) { + dev_err(dev, "Couldn't allocate indirect scrq buffer\n"); This warning/error is not necessary, memory allocation will trigger an OOM message already. Thanks, I can fix that in a v2. + goto reg_failed; Don't you have to do something like rc = plpar_hcall_norets(H_FREE_SUB_CRQ, adapter->vdev->unit_address, scrq->crq_num); ? Yes, you're right, I will include that in a v2 also. + } + spin_lock_init(&scrq->lock);
Re: [PATCH] ocxl: Mmio invalidation support
On 13/11/2020 16:33, Christophe Lombard wrote: OpenCAPI 4.0/5.0 with TLBI/SLBI Snooping, is not used due to performance problems caused by the PAU having to process all incoming TLBI/SLBI commands which will cause them to back up on the PowerBus. When the Address Translation Mode requires TLB and SLB Invalidate operations to be initiated using MMIO registers, a set of registers like the following is used: • XTS MMIO ATSD0 LPARID register • XTS MMIO ATSD0 AVA register • XTS MMIO ATSD0 launch register, write access initiates a shoot down • XTS MMIO ATSD0 status register The MMIO based mechanism also blocks the NPU/PAU from snooping TLBIE commands from the PowerBus. The Shootdown commands (ATSD) will be generated using MMIO registers in the NPU/PAU and sent to the device. Signed-off-by: Christophe Lombard --- arch/powerpc/include/asm/pnv-ocxl.h | 2 + arch/powerpc/platforms/powernv/ocxl.c | 19 +++ drivers/misc/ocxl/link.c | 180 ++ drivers/misc/ocxl/ocxl_internal.h | 46 ++- drivers/misc/ocxl/trace.h | 125 ++ 5 files changed, 348 insertions(+), 24 deletions(-) diff --git a/arch/powerpc/include/asm/pnv-ocxl.h b/arch/powerpc/include/asm/pnv-ocxl.h index d37ededca3ee..4a23abcc347b 100644 --- a/arch/powerpc/include/asm/pnv-ocxl.h +++ b/arch/powerpc/include/asm/pnv-ocxl.h @@ -28,4 +28,6 @@ int pnv_ocxl_spa_setup(struct pci_dev *dev, void *spa_mem, int PE_mask, void **p void pnv_ocxl_spa_release(void *platform_data); int pnv_ocxl_spa_remove_pe_from_cache(void *platform_data, int pe_handle); +extern int pnv_ocxl_map_lpar(struct pci_dev *dev, uint64_t lparid, +uint64_t lpcr); "extern" is useless #endif /* _ASM_PNV_OCXL_H */ diff --git a/arch/powerpc/platforms/powernv/ocxl.c b/arch/powerpc/platforms/powernv/ocxl.c index ecdad219d704..100546ea635f 100644 --- a/arch/powerpc/platforms/powernv/ocxl.c +++ b/arch/powerpc/platforms/powernv/ocxl.c @@ -483,3 +483,22 @@ int pnv_ocxl_spa_remove_pe_from_cache(void *platform_data, int pe_handle) return rc; } 
EXPORT_SYMBOL_GPL(pnv_ocxl_spa_remove_pe_from_cache); + +int pnv_ocxl_map_lpar(struct pci_dev *dev, uint64_t lparid, + uint64_t lpcr) +{ + struct pci_controller *hose = pci_bus_to_host(dev->bus); + struct pnv_phb *phb = hose->private_data; + u32 bdfn; + int rc; + + bdfn = (dev->bus->number << 8) | dev->devfn; I was told a bit too late that pci_dev_id() exists, so we should probably use from now on. + rc = opal_npu_map_lpar(phb->opal_id, bdfn, lparid, lpcr); + if (rc) { + dev_err(&dev->dev, "Error mapping device to LPAR: %d\n", rc); + return -EINVAL; + } + + return 0; +} +EXPORT_SYMBOL_GPL(pnv_ocxl_map_lpar); diff --git a/drivers/misc/ocxl/link.c b/drivers/misc/ocxl/link.c index fd73d3bc0eb6..9b5b77d40734 100644 --- a/drivers/misc/ocxl/link.c +++ b/drivers/misc/ocxl/link.c Overall, there are many changes in that file and it would help the review if it could be broken up in a set of smaller patches. @@ -4,6 +4,8 @@ #include #include #include +#include +#include #include #include #include @@ -33,6 +35,31 @@ #define SPA_PE_VALID 0x8000 +struct spa; + +/* + * A opencapi link can be used be by several PCI functions. We have + * one link per device slot. 
+ * + * A linked list of opencapi links should suffice, as there's a + * limited number of opencapi slots on a system and lookup is only + * done when the device is probed + */ +struct ocxl_link { + struct list_head list; + struct kref ref; + int domain; + int bus; + int dev; + u64 mmio_atsd; /* ATSD physical address */ + void __iomem *base;/* ATSD register virtual address */ + spinlock_t atsd_lock; // to serialize shootdowns + atomic_t irq_available; + struct spa *spa; + void *platform_data; +}; +static struct list_head links_list = LIST_HEAD_INIT(links_list); +static DEFINE_MUTEX(links_list_lock); struct pe_data { struct mm_struct *mm; @@ -41,6 +68,8 @@ struct pe_data { /* opaque pointer to be passed to the above callback */ void *xsl_err_data; struct rcu_head rcu; + struct ocxl_link *link; + struct mmu_notifier mmu_notifier; }; struct spa { @@ -69,27 +98,6 @@ struct spa { } xsl_fault; }; -/* - * A opencapi link can be used be by several PCI functions. We have - * one link per device slot. - * - * A linked list of opencapi links should suffice, as there's a - * limited number of opencapi slots on a system and lookup is only - * done when the device is probed - */ -struct ocxl_link { - struct list_head list; - struct kref ref; - int domain; - int bus; - int dev; - atomic_t irq_available; - struct spa *spa; - void *platform_data; -}; -static struc
Re: [PATCH 0/5] perf/mm: Fix PERF_SAMPLE_*_PAGE_SIZE
On Mon, Nov 16, 2020 at 08:36:36AM -0800, Dave Hansen wrote: > On 11/16/20 8:32 AM, Matthew Wilcox wrote: > >> > >> That's really the best we can do from software without digging into > >> microarchitecture-specific events. > > I mean this is perf. Digging into microarch specific events is what it > > does ;-) > > Yeah, totally. Sure, but the automatic promotion/demotion of TLB sizes is not visible if you don't know what you started out with. > But, if we see a bunch of 4k TLB hit events, it's still handy to know > that those 4k TLB hits originated from a 2M page table entry. This > series just makes sure that perf has the data about the page table > mapping sizes regardless of what the microarchitecture does with it. This.
Re: [PATCH 0/5] perf/mm: Fix PERF_SAMPLE_*_PAGE_SIZE
On Mon, Nov 16, 2020 at 08:28:23AM -0800, Dave Hansen wrote: > On 11/16/20 7:54 AM, Matthew Wilcox wrote: > > It gets even more complicated with CPUs with multiple levels of TLB > > which support different TLB entry sizes. My CPU reports: > > > > TLB info > > Instruction TLB: 2M/4M pages, fully associative, 8 entries > > Instruction TLB: 4K pages, 8-way associative, 64 entries > > Data TLB: 1GB pages, 4-way set associative, 4 entries > > Data TLB: 4KB pages, 4-way associative, 64 entries > > Shared L2 TLB: 4KB/2MB pages, 6-way associative, 1536 entries > > It's even "worse" on recent AMD systems. Those will coalesce multiple > adjacent PTEs into a single TLB entry. I think Alphas did something > like this back in the day with an opt-in. > > Anyway, the changelog should probably replace: ARM64 does too. > > This enables PERF_SAMPLE_{DATA,CODE}_PAGE_SIZE to report accurate TLB > > page sizes. > > with something more like: > > This enables PERF_SAMPLE_{DATA,CODE}_PAGE_SIZE to report accurate page > table mapping sizes. Sure.
Re: [PATCH 0/5] perf/mm: Fix PERF_SAMPLE_*_PAGE_SIZE
On 11/16/20 8:32 AM, Matthew Wilcox wrote: >> >> That's really the best we can do from software without digging into >> microarchitecture-specific events. > I mean this is perf. Digging into microarch specific events is what it > does ;-) Yeah, totally. But, if we see a bunch of 4k TLB hit events, it's still handy to know that those 4k TLB hits originated from a 2M page table entry. This series just makes sure that perf has the data about the page table mapping sizes regardless of what the microarchitecture does with it. I'm just saying we need to make the descriptions in this perf feature specifically about the page tables, not the TLB.
Re: [PATCH 0/5] perf/mm: Fix PERF_SAMPLE_*_PAGE_SIZE
On Mon, Nov 16, 2020 at 08:28:23AM -0800, Dave Hansen wrote: > On 11/16/20 7:54 AM, Matthew Wilcox wrote: > > It gets even more complicated with CPUs with multiple levels of TLB > > which support different TLB entry sizes. My CPU reports: > > > > TLB info > > Instruction TLB: 2M/4M pages, fully associative, 8 entries > > Instruction TLB: 4K pages, 8-way associative, 64 entries > > Data TLB: 1GB pages, 4-way set associative, 4 entries > > Data TLB: 4KB pages, 4-way associative, 64 entries > > Shared L2 TLB: 4KB/2MB pages, 6-way associative, 1536 entries > > It's even "worse" on recent AMD systems. Those will coalesce multiple > adjacent PTEs into a single TLB entry. I think Alphas did something > like this back in the day with an opt-in. I debated mentioning that ;-) We can detect in software whether that's _possible_, but we can't detect whether it's *done* it. I heard it sometimes takes several faults on the 4kB entries for the CPU to decide that it's beneficial to use a 32kB TLB entry. But this is all rumour. > Anyway, the changelog should probably replace: > > > This enables PERF_SAMPLE_{DATA,CODE}_PAGE_SIZE to report accurate TLB > > page sizes. > > with something more like: > > This enables PERF_SAMPLE_{DATA,CODE}_PAGE_SIZE to report accurate page > table mapping sizes. > > That's really the best we can do from software without digging into > microarchitecture-specific events. I mean this is perf. Digging into microarch specific events is what it does ;-)
Re: [PATCH 0/5] perf/mm: Fix PERF_SAMPLE_*_PAGE_SIZE
On 11/16/20 7:54 AM, Matthew Wilcox wrote: > It gets even more complicated with CPUs with multiple levels of TLB > which support different TLB entry sizes. My CPU reports: > > TLB info > Instruction TLB: 2M/4M pages, fully associative, 8 entries > Instruction TLB: 4K pages, 8-way associative, 64 entries > Data TLB: 1GB pages, 4-way set associative, 4 entries > Data TLB: 4KB pages, 4-way associative, 64 entries > Shared L2 TLB: 4KB/2MB pages, 6-way associative, 1536 entries It's even "worse" on recent AMD systems. Those will coalesce multiple adjacent PTEs into a single TLB entry. I think Alphas did something like this back in the day with an opt-in. Anyway, the changelog should probably replace: > This enables PERF_SAMPLE_{DATA,CODE}_PAGE_SIZE to report accurate TLB > page sizes. with something more like: This enables PERF_SAMPLE_{DATA,CODE}_PAGE_SIZE to report accurate page table mapping sizes. That's really the best we can do from software without digging into microarchitecture-specific events.
[PATCH] powerpc/32s: Handle PROTFAULT in hash_page() also for CONFIG_PPC_KUAP
On hash 32 bits, handling minor protection faults like unsetting dirty flag is heavy if done from the normal page_fault processing, because it implies hash table software lookup for flushing the entry and then a DSI is taken anyway to add the entry back. When KUAP was implemented, as explained in commit a68c31fc01ef ("powerpc/32s: Implement Kernel Userspace Access Protection"), protection faults have been diverted from hash_page() because hash_page() was not able to identify a KUAP fault. Implement KUAP verification in hash_page(), by clearing write permission when the access is a kernel access and Ks is 1. This works regardless of the address because kernel segments always have Ks set to 0 while user segments have Ks set to 0 only when kernel write to userspace is granted. Then protection faults can be handled by hash_page() even for KUAP. Signed-off-by: Christophe Leroy --- arch/powerpc/kernel/head_book3s_32.S | 8 arch/powerpc/mm/book3s32/hash_low.S | 13 +++-- 2 files changed, 11 insertions(+), 10 deletions(-) diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S index a0dda2a1f2df..a4b811044f97 100644 --- a/arch/powerpc/kernel/head_book3s_32.S +++ b/arch/powerpc/kernel/head_book3s_32.S @@ -294,11 +294,7 @@ BEGIN_MMU_FTR_SECTION stw r11, THR11(r10) mfspr r10, SPRN_DSISR mfcrr11 -#ifdef CONFIG_PPC_KUAP - andis. r10, r10, (DSISR_BAD_FAULT_32S | DSISR_DABRMATCH | DSISR_PROTFAULT)@h -#else andis. r10, r10, (DSISR_BAD_FAULT_32S | DSISR_DABRMATCH)@h -#endif mfspr r10, SPRN_SPRG_THREAD beq hash_page_dsi .Lhash_page_dsi_cont: @@ -323,11 +319,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE) EXCEPTION_PROLOG handle_dar_dsisr=1 get_and_save_dar_dsisr_on_stack r4, r5, r11 BEGIN_MMU_FTR_SECTION -#ifdef CONFIG_PPC_KUAP - andis. r0, r5, (DSISR_BAD_FAULT_32S | DSISR_DABRMATCH | DSISR_PROTFAULT)@h -#else andis. 
r0, r5, (DSISR_BAD_FAULT_32S | DSISR_DABRMATCH)@h -#endif bne handle_page_fault_tramp_2 /* if not, try to put a PTE */ rlwinm r3, r5, 32 - 15, 21, 21 /* DSISR_STORE -> _PAGE_RW */ bl hash_page diff --git a/arch/powerpc/mm/book3s32/hash_low.S b/arch/powerpc/mm/book3s32/hash_low.S index b2c912e517b9..9a56ba4f68f2 100644 --- a/arch/powerpc/mm/book3s32/hash_low.S +++ b/arch/powerpc/mm/book3s32/hash_low.S @@ -95,8 +95,6 @@ _GLOBAL(hash_page) #else rlwimi r8,r4,23,20,28 /* compute pte address */ #endif - rlwinm r0,r3,32-3,24,24/* _PAGE_RW access -> _PAGE_DIRTY */ - ori r0,r0,_PAGE_ACCESSED|_PAGE_HASHPTE /* * Update the linux PTE atomically. We do the lwarx up-front @@ -112,7 +110,18 @@ _GLOBAL(hash_page) #endif .Lretry: lwarx r6,0,r8 /* get linux-style pte, flag word */ +#ifdef CONFIG_PPC_KUAP + mfsrin r5,r4 + rlwinm r0,r9,28,_PAGE_RW /* MSR[PR] => _PAGE_RW */ + rlwinm r5,r5,12,_PAGE_RW /* Ks => _PAGE_RW */ + andcr5,r5,r0/* Ks & ~MSR[PR] */ + andcr5,r6,r5/* Clear _PAGE_RW when Ks = 1 && MSR[PR] = 0 */ + andc. r5,r3,r5/* check access & ~permission */ +#else andc. r5,r3,r6/* check access & ~permission */ +#endif + rlwinm r0,r3,32-3,24,24/* _PAGE_RW access -> _PAGE_DIRTY */ + ori r0,r0,_PAGE_ACCESSED|_PAGE_HASHPTE #ifdef CONFIG_SMP bne-.Lhash_page_out /* return if access not permitted */ #else -- 2.25.0
Re: [PATCH 0/5] perf/mm: Fix PERF_SAMPLE_*_PAGE_SIZE
On Mon, Nov 16, 2020 at 06:43:57PM +0300, Kirill A. Shutemov wrote: > On Fri, Nov 13, 2020 at 12:19:01PM +0100, Peter Zijlstra wrote: > > Hi, > > > > These patches provide generic infrastructure to determine TLB page size from > > page table entries alone. Perf will use this (for either data or code > > address) > > to aid in profiling TLB issues. > > I'm not sure it's an issue, but strictly speaking, size of page according > to page table tree doesn't mean pagewalk would fill TLB entry of the size. > CPU may support 1G pages in page table tree without 1G TLB at all. > > IIRC, current Intel CPU still don't have any 1G iTLB entries and fill 2M > iTLB instead. It gets even more complicated with CPUs with multiple levels of TLB which support different TLB entry sizes. My CPU reports: TLB info Instruction TLB: 2M/4M pages, fully associative, 8 entries Instruction TLB: 4K pages, 8-way associative, 64 entries Data TLB: 1GB pages, 4-way set associative, 4 entries Data TLB: 4KB pages, 4-way associative, 64 entries Shared L2 TLB: 4KB/2MB pages, 6-way associative, 1536 entries I'm not quite sure what the rules are for evicting a 1GB entry in the dTLB into the s2TLB. I've read them for so many different processors, I get quite confused. Some CPUs fracture them; others ditch them entirely and will look them up again if needed. I think the architecture here is fine, but it'll need a little bit of finagling to maybe pass i-vs-d to the pXd_leaf_size() routines, and x86 will need an implementation of pud_leaf_size() which interrogates the TLB info to find out what size TLB entry will actually be used.
[PATCH v2 3/5] powerpc/fault: Avoid heavy search_exception_tables() verification
search_exception_tables() is a heavy operation, we have to avoid it.
When KUAP is selected, we'll know the fault has been blocked by KUAP.
Otherwise, it behaves just as if the address was already in the TLBs
and no fault was generated.

Signed-off-by: Christophe Leroy
Reviewed-by: Nicholas Piggin
---
v2: Squashed with the preceding patch which was re-ordering tests that
get removed in this patch.
---
 arch/powerpc/mm/fault.c | 23 +++
 1 file changed, 7 insertions(+), 16 deletions(-)

diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 17665ff97469..1770b41e4730 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -210,28 +210,19 @@ static bool bad_kernel_fault(struct pt_regs *regs, unsigned long error_code,
 		return true;
 	}
 
-	if (!is_exec && address < TASK_SIZE && (error_code & DSISR_PROTFAULT) &&
-	    !search_exception_tables(regs->nip)) {
-		pr_crit_ratelimited("Kernel attempted to access user page (%lx) - exploit attempt? (uid: %d)\n",
-				    address,
-				    from_kuid(&init_user_ns, current_uid()));
-	}
-
 	// Kernel fault on kernel address is bad
 	if (address >= TASK_SIZE)
 		return true;
 
-	// Fault on user outside of certain regions (eg. copy_tofrom_user()) is bad
-	if (!search_exception_tables(regs->nip))
-		return true;
-
-	// Read/write fault in a valid region (the exception table search passed
-	// above), but blocked by KUAP is bad, it can never succeed.
-	if (bad_kuap_fault(regs, address, is_write))
+	// Read/write fault blocked by KUAP is bad, it can never succeed.
+	if (bad_kuap_fault(regs, address, is_write)) {
+		pr_crit_ratelimited("Kernel attempted to %s user page (%lx) - exploit attempt? (uid: %d)\n",
+				    is_write ? "write" : "read", address,
+				    from_kuid(&init_user_ns, current_uid()));
 		return true;
+	}
 
-	// What's left? Kernel fault on user in well defined regions (extable
-	// matched), and allowed by KUAP in the faulting context.
+	// What's left? Kernel fault on user and allowed by KUAP in the faulting context.
 	return false;
 }
-- 
2.25.0
[PATCH v2 4/5] powerpc/fault: Perform exception fixup in do_page_fault()
Exception fixup doesn't require the heavy full regs saving, do it from do_page_fault() directly. For that, split bad_page_fault() in two parts. As bad_page_fault() can also be called from other places than handle_page_fault(), it will still perform exception fixup and fallback on __bad_page_fault(). handle_page_fault() directly calls __bad_page_fault() as the exception fixup will now be done by do_page_fault(). Reviewed-by: Nicholas Piggin Signed-off-by: Christophe Leroy --- v2: Add prototype of __bad_page_fault() in asm/bug.h --- arch/powerpc/include/asm/bug.h | 1 + arch/powerpc/kernel/entry_32.S | 2 +- arch/powerpc/kernel/exceptions-64e.S | 2 +- arch/powerpc/kernel/exceptions-64s.S | 2 +- arch/powerpc/mm/fault.c | 33 5 files changed, 28 insertions(+), 12 deletions(-) diff --git a/arch/powerpc/include/asm/bug.h b/arch/powerpc/include/asm/bug.h index 338f36cd9934..919a31840e51 100644 --- a/arch/powerpc/include/asm/bug.h +++ b/arch/powerpc/include/asm/bug.h @@ -113,6 +113,7 @@ struct pt_regs; extern int do_page_fault(struct pt_regs *, unsigned long, unsigned long); extern void bad_page_fault(struct pt_regs *, unsigned long, int); +void __bad_page_fault(struct pt_regs *regs, unsigned long address, int sig); extern void _exception(int, struct pt_regs *, int, unsigned long); extern void _exception_pkey(struct pt_regs *, unsigned long, int); extern void die(const char *, struct pt_regs *, long); diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S index 8cdc8bcde703..eafcf43e3613 100644 --- a/arch/powerpc/kernel/entry_32.S +++ b/arch/powerpc/kernel/entry_32.S @@ -671,7 +671,7 @@ handle_page_fault: mr r5,r3 addir3,r1,STACK_FRAME_OVERHEAD lwz r4,_DAR(r1) - bl bad_page_fault + bl __bad_page_fault b ret_from_except_full #ifdef CONFIG_PPC_BOOK3S_32 diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S index f579ce46eef2..74d07dc0bb48 100644 --- a/arch/powerpc/kernel/exceptions-64e.S +++ 
b/arch/powerpc/kernel/exceptions-64e.S @@ -1023,7 +1023,7 @@ storage_fault_common: mr r5,r3 addir3,r1,STACK_FRAME_OVERHEAD ld r4,_DAR(r1) - bl bad_page_fault + bl __bad_page_fault b ret_from_except /* diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index f7d748b88705..2cb3bcfb896d 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -3254,7 +3254,7 @@ handle_page_fault: mr r5,r3 addir3,r1,STACK_FRAME_OVERHEAD ld r4,_DAR(r1) - bl bad_page_fault + bl __bad_page_fault b interrupt_return /* We have a data breakpoint exception - handle it */ diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c index 1770b41e4730..2e50bc1c3783 100644 --- a/arch/powerpc/mm/fault.c +++ b/arch/powerpc/mm/fault.c @@ -538,10 +538,20 @@ NOKPROBE_SYMBOL(__do_page_fault); int do_page_fault(struct pt_regs *regs, unsigned long address, unsigned long error_code) { + const struct exception_table_entry *entry; enum ctx_state prev_state = exception_enter(); int rc = __do_page_fault(regs, address, error_code); exception_exit(prev_state); - return rc; + if (likely(!rc)) + return 0; + + entry = search_exception_tables(regs->nip); + if (unlikely(!entry)) + return rc; + + instruction_pointer_set(regs, extable_fixup(entry)); + + return 0; } NOKPROBE_SYMBOL(do_page_fault); @@ -550,17 +560,10 @@ NOKPROBE_SYMBOL(do_page_fault); * It is called from the DSI and ISI handlers in head.S and from some * of the procedures in traps.c. */ -void bad_page_fault(struct pt_regs *regs, unsigned long address, int sig) +void __bad_page_fault(struct pt_regs *regs, unsigned long address, int sig) { - const struct exception_table_entry *entry; int is_write = page_fault_is_write(regs->dsisr); - /* Are we prepared to handle this fault? 
*/ - if ((entry = search_exception_tables(regs->nip)) != NULL) { - regs->nip = extable_fixup(entry); - return; - } - /* kernel has accessed a bad area */ switch (TRAP(regs)) { @@ -594,3 +597,15 @@ void bad_page_fault(struct pt_regs *regs, unsigned long address, int sig) die("Kernel access of bad area", regs, sig); } + +void bad_page_fault(struct pt_regs *regs, unsigned long address, int sig) +{ + const struct exception_table_entry *entry; + + /* Are we prepared to handle this fault? */ + entry = search_exception_tables(instruction_pointer(regs)); + if (entry) + instruction_pointer_set(regs, extable_fixup(entry)); + else + __bad_page_fault(regs,
[PATCH v2 5/5] powerpc/mm: Don't WARN() on KUAP fault
The WARN() in do_page_fault() is useless: the problem is not in do_page_fault() but at the place which generated the DSI exception. We already have a dump from the Oops, no need of a WARN() in addition. The warning emitted by bad_kernel_fault() is good enough. Signed-off-by: Christophe Leroy --- v2: New (Partly taken from patch "powerpc/mm: Kill the task on KUAP fault") --- arch/powerpc/include/asm/book3s/32/kup.h | 6 +- arch/powerpc/include/asm/book3s/64/kup-radix.h | 7 --- arch/powerpc/include/asm/nohash/32/kup-8xx.h | 3 +-- 3 files changed, 6 insertions(+), 10 deletions(-) diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h index 32fd4452e960..a0117a9d5b06 100644 --- a/arch/powerpc/include/asm/book3s/32/kup.h +++ b/arch/powerpc/include/asm/book3s/32/kup.h @@ -183,11 +183,7 @@ bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write) unsigned long begin = regs->kuap & 0xf0000000; unsigned long end = regs->kuap << 28; - if (!is_write) - return false; - - return WARN(address < begin || address >= end, - "Bug: write fault blocked by segment registers !"); + return is_write && (address < begin || address >= end); } #endif /* CONFIG_PPC_KUAP */ diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h b/arch/powerpc/include/asm/book3s/64/kup-radix.h index 3ee1ec60be84..8bdf559a4b32 100644 --- a/arch/powerpc/include/asm/book3s/64/kup-radix.h +++ b/arch/powerpc/include/asm/book3s/64/kup-radix.h @@ -161,9 +161,10 @@ static inline void restore_user_access(unsigned long flags) static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write) { - return WARN(mmu_has_feature(MMU_FTR_RADIX_KUAP) && - (regs->kuap & (is_write ? AMR_KUAP_BLOCK_WRITE : AMR_KUAP_BLOCK_READ)), - "Bug: %s fault blocked by AMR!", is_write ? "Write" : "Read"); + if (!mmu_has_feature(MMU_FTR_RADIX_KUAP)) + return false; + + return !!(regs->kuap & (is_write ? 
AMR_KUAP_BLOCK_WRITE : AMR_KUAP_BLOCK_READ)); } #else /* CONFIG_PPC_KUAP */ static inline void kuap_restore_amr(struct pt_regs *regs, unsigned long amr) diff --git a/arch/powerpc/include/asm/nohash/32/kup-8xx.h b/arch/powerpc/include/asm/nohash/32/kup-8xx.h index 567cdc557402..17a4a616436f 100644 --- a/arch/powerpc/include/asm/nohash/32/kup-8xx.h +++ b/arch/powerpc/include/asm/nohash/32/kup-8xx.h @@ -63,8 +63,7 @@ static inline void restore_user_access(unsigned long flags) static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write) { - return WARN(!((regs->kuap ^ MD_APG_KUAP) & 0xff00), - "Bug: fault blocked by AP register !"); + return !((regs->kuap ^ MD_APG_KUAP) & 0xff00); } #endif /* !__ASSEMBLY__ */ -- 2.25.0
[PATCH v2 1/5] powerpc/mm: sanity_check_fault() should work for all, not only BOOK3S
The verification and message introduced by commit 374f3f5979f9 ("powerpc/mm/hash: Handle user access of kernel address gracefully") apply to all platforms; they should not be limited to BOOK3S. Make the BOOK3S version of sanity_check_fault() the one for all, and bail out earlier if not BOOK3S. Fixes: 374f3f5979f9 ("powerpc/mm/hash: Handle user access of kernel address gracefully") Reviewed-by: Nicholas Piggin Signed-off-by: Christophe Leroy --- arch/powerpc/mm/fault.c | 8 +++- 1 file changed, 3 insertions(+), 5 deletions(-) diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c index 0add963a849b..72e1b51beb10 100644 --- a/arch/powerpc/mm/fault.c +++ b/arch/powerpc/mm/fault.c @@ -303,7 +303,6 @@ static inline void cmo_account_page_fault(void) static inline void cmo_account_page_fault(void) { } #endif /* CONFIG_PPC_SMLPAR */ -#ifdef CONFIG_PPC_BOOK3S static void sanity_check_fault(bool is_write, bool is_user, unsigned long error_code, unsigned long address) { @@ -320,6 +319,9 @@ static void sanity_check_fault(bool is_write, bool is_user, return; } + if (!IS_ENABLED(CONFIG_PPC_BOOK3S)) + return; + /* * For hash translation mode, we should never get a * PROTFAULT. Any update to pte to reduce access will result in us @@ -354,10 +356,6 @@ static void sanity_check_fault(bool is_write, bool is_user, WARN_ON_ONCE(error_code & DSISR_PROTFAULT); } -#else -static void sanity_check_fault(bool is_write, bool is_user, - unsigned long error_code, unsigned long address) { } -#endif /* CONFIG_PPC_BOOK3S */ /* * Define the correct "is_write" bit in error_code based -- 2.25.0
[PATCH v2 2/5] powerpc/fault: Unnest definition of page_fault_is_write() and page_fault_is_bad()
To make it more readable, separate page_fault_is_write() and page_fault_is_bad() to avoid several levels of #ifdefs. Reviewed-by: Nicholas Piggin Signed-off-by: Christophe Leroy --- arch/powerpc/mm/fault.c | 8 +--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c index 72e1b51beb10..17665ff97469 100644 --- a/arch/powerpc/mm/fault.c +++ b/arch/powerpc/mm/fault.c @@ -363,17 +363,19 @@ static void sanity_check_fault(bool is_write, bool is_user, */ #if (defined(CONFIG_4xx) || defined(CONFIG_BOOKE)) #define page_fault_is_write(__err) ((__err) & ESR_DST) -#define page_fault_is_bad(__err) (0) #else #define page_fault_is_write(__err) ((__err) & DSISR_ISSTORE) -#if defined(CONFIG_PPC_8xx) +#endif + +#if defined(CONFIG_4xx) || defined(CONFIG_BOOKE) +#define page_fault_is_bad(__err) (0) +#elif defined(CONFIG_PPC_8xx) #define page_fault_is_bad(__err) ((__err) & DSISR_NOEXEC_OR_G) #elif defined(CONFIG_PPC64) #define page_fault_is_bad(__err) ((__err) & DSISR_BAD_FAULT_64S) #else #define page_fault_is_bad(__err) ((__err) & DSISR_BAD_FAULT_32S) #endif -#endif /* * For 600- and 800-family processors, the error_code parameter is DSISR -- 2.25.0
Re: [PATCH 0/5] perf/mm: Fix PERF_SAMPLE_*_PAGE_SIZE
On Fri, Nov 13, 2020 at 12:19:01PM +0100, Peter Zijlstra wrote: > Hi, > > These patches provide generic infrastructure to determine TLB page size from > page table entries alone. Perf will use this (for either data or code address) > to aid in profiling TLB issues. I'm not sure it's an issue, but strictly speaking, size of page according to page table tree doesn't mean pagewalk would fill TLB entry of the size. CPU may support 1G pages in page table tree without 1G TLB at all. IIRC, current Intel CPU still don't have any 1G iTLB entries and fill 2M iTLB instead. -- Kirill A. Shutemov
Re: Error: invalid switch -me200
Quoting Michael Ellerman : Christophe Leroy writes: On 14/11/2020 at 01:20, Segher Boessenkool wrote: On Fri, Nov 13, 2020 at 12:14:18PM -0800, Nick Desaulniers wrote: Error: invalid switch -me200 Error: unrecognized option -me200 251 cpu-as-$(CONFIG_E200) += -Wa,-me200 Are those all broken configs, or is Kconfig messed up such that randconfig can select these when it should not? Hmmm, looks like this flag does not exist in mainline binutils? There is a thread in 2010 about this that Segher commented on: https://lore.kernel.org/linuxppc-dev/9859e645-954d-4d07-8003-ffcd2391a...@kernel.crashing.org/ Guess this config should be eliminated? The help text for this config option says that e200 is used in 55xx, and there *is* an -me5500 GAS flag (which probably does this same thing, too). But is any of this tested, or useful, or wanted? Maybe Christophe knows, cc:ed. I don't have much clue on this. Me either. But I see on wikipedia that e5500 is a 64-bit powerpc (https://en.wikipedia.org/wiki/PowerPC_e5500) What I see is that NXP seems to provide a GCC version that includes additional CPUs (e200z0 e200z2 e200z3 e200z4 e200z6 e200z7): valid arguments to '-mcpu=' are: 401 403 405 405fp 440 440fp 464 464fp 476 476fp 505 601 602 603 603e 604 604e 620 630 740 7400 7450 750 801 821 823 8540 8548 860 970 G3 G4 G5 a2 cell e200z0 e200z2 e200z3 e200z4 e200z6 e200z7 e300c2 e300c3 e500mc e500mc64 e5500 e6500 ec603e native power3 power4 power5 power5+ power6 power6x power7 power8 powerpc powerpc64 powerpc64le rs64 titan " https://community.nxp.com/t5/MPC5xxx/GCC-generating-not-implemented-instructions/m-p/845049 Apparently based on binutils 2.28 https://www.nxp.com/docs/en/release-note/S32DS-POWER-v1-2-RN.pdf But that's not exactly -me200 though. Now, I can't see any defconfig that selects CONFIG_E200, so is it worth keeping in the kernel at all? 
There was a commit in 2014 that suggests it worked at least to some extent then: 3477e71d5319 ("powerpc/booke: Restrict SPE exception handlers to e200/e500 cores") Not sure, that patch seems to be focussed on the new e500mc Presumably there was a non-upstream toolchain where it was supported? AFAICS the kernel builds OK with just the cpu-as modification removed: diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile index a4d56f0a41d9..16b8336f91dd 100644 --- a/arch/powerpc/Makefile +++ b/arch/powerpc/Makefile @@ -248,7 +248,6 @@ KBUILD_CFLAGS += $(call cc-option,-mno-string) cpu-as-$(CONFIG_40x) += -Wa,-m405 cpu-as-$(CONFIG_44x) += -Wa,-m440 cpu-as-$(CONFIG_ALTIVEC) += $(call as-option,-Wa$(comma)-maltivec) -cpu-as-$(CONFIG_E200) += -Wa,-me200 cpu-as-$(CONFIG_E500) += -Wa,-me500 # When using '-many -mpower4' gas will first try and find a matching power4 So that seems like the obvious fix for now. Or we could do diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype index c194c4ae8bc7..a11cf9431e1e 100644 --- a/arch/powerpc/platforms/Kconfig.cputype +++ b/arch/powerpc/platforms/Kconfig.cputype @@ -67,6 +67,7 @@ config 44x select PHYS_64BIT config E200 + depends on $(cc-option,-me200) bool "Freescale e200" endchoice --- Christophe
[PATCH -next] powerpc/powernv/sriov: Fix unsigned comparison to zero
Fixes coccicheck warnings: ./arch/powerpc/platforms/powernv/pci-sriov.c:443:7-10: WARNING: Unsigned expression compared with zero: win < 0 ./arch/powerpc/platforms/powernv/pci-sriov.c:462:7-10: WARNING: Unsigned expression compared with zero: win < 0 Signed-off-by: Zou Wei --- arch/powerpc/platforms/powernv/pci-sriov.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/powerpc/platforms/powernv/pci-sriov.c b/arch/powerpc/platforms/powernv/pci-sriov.c index c4434f2..92fc861 100644 --- a/arch/powerpc/platforms/powernv/pci-sriov.c +++ b/arch/powerpc/platforms/powernv/pci-sriov.c @@ -422,7 +422,7 @@ static int pnv_pci_vf_assign_m64(struct pci_dev *pdev, u16 num_vfs) { struct pnv_iov_data *iov; struct pnv_phb *phb; - unsigned int win; + int win; struct resource *res; int i, j; int64_t rc; -- 2.6.2
Re: [PATCH V3] sched/rt, powerpc: Prepare for PREEMPT_RT
Wang Qing writes: > PREEMPT_RT is a separate preemption model, CONFIG_PREEMPT will > be disabled when CONFIG_PREEMPT_RT is enabled, so we need > to add CONFIG_PREEMPT_RT output to __die(). > > Signed-off-by: Wang Qing Something fairly similar was posted previously. That time I said: I don't think there's any point adding the "_RT" to the __die() output until/if we ever start supporting PREEMPT_RT. https://lore.kernel.org/linuxppc-dev/87d0ext4q3@mpe.ellerman.id.au/ And I think I still feel the same way. It's not clear powerpc will ever support PREEMPT_RT, so this would just be confusing to people. And potentially someone will then send a patch to remove it as dead code. cheers > diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c > index 5006dcb..dec7b81 > --- a/arch/powerpc/kernel/traps.c > +++ b/arch/powerpc/kernel/traps.c > @@ -262,10 +262,11 @@ static int __die(const char *str, struct pt_regs *regs, > long err) > { > printk("Oops: %s, sig: %ld [#%d]\n", str, err, ++die_counter); > > - printk("%s PAGE_SIZE=%luK%s%s%s%s%s%s %s\n", > + printk("%s PAGE_SIZE=%luK%s%s%s%s%s%s%s %s\n", > IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN) ? "LE" : "BE", > PAGE_SIZE / 1024, get_mmu_str(), > IS_ENABLED(CONFIG_PREEMPT) ? " PREEMPT" : "", > +IS_ENABLED(CONFIG_PREEMPT_RT) ? " PREEMPT_RT" : "", > IS_ENABLED(CONFIG_SMP) ? " SMP" : "", > IS_ENABLED(CONFIG_SMP) ? (" NR_CPUS=" __stringify(NR_CPUS)) : "", > debug_pagealloc_enabled() ? " DEBUG_PAGEALLOC" : "", > -- > 2.7.4
Re: [PATCH] KVM: PPC: Book3S HV: XIVE: Fix possible oops when accessing ESB page
Cédric Le Goater writes: > On 11/6/20 4:19 AM, Michael Ellerman wrote: >> Cédric Le Goater writes: >>> When accessing the ESB page of a source interrupt, the fault handler >>> will retrieve the page address from the XIVE interrupt 'xive_irq_data' >>> structure. If the associated KVM XIVE interrupt is not valid, that is >>> not allocated at the HW level for some reason, the fault handler will >>> dereference a NULL pointer leading to the oops below : >>> >>> WARNING: CPU: 40 PID: 59101 at >>> arch/powerpc/kvm/book3s_xive_native.c:259 xive_native_esb_fault+0xe4/0x240 >>> [kvm] >>> CPU: 40 PID: 59101 Comm: qemu-system-ppc Kdump: loaded Tainted: G >>> W- - - 4.18.0-240.el8.ppc64le #1 >>> NIP: c0080e949fac LR: c044b164 CTR: c0080e949ec8 >>> REGS: c01f69617840 TRAP: 0700 Tainted: GW >>> - - - (4.18.0-240.el8.ppc64le) >>> MSR: 90029033 CR: 44044282 XER: >>> >>> CFAR: c044b160 IRQMASK: 0 >>> GPR00: c044b164 c01f69617ac0 c0080e96e000 >>> c01f69617c10 >>> GPR04: 05faa2b21e80 0005 >>> >>> GPR08: 0001 >>> 0001 >>> GPR12: c0080e949ec8 c01d3400 >>> >>> GPR16: >>> >>> GPR20: c01f5c065160 >>> c1c76f90 >>> GPR24: c01f06f2 c01f5c065100 0008 >>> c01f0eb98c78 >>> GPR28: c01dcab4 c01dcab403d8 c01f69617c10 >>> 0011 >>> NIP [c0080e949fac] xive_native_esb_fault+0xe4/0x240 [kvm] >>> LR [c044b164] __do_fault+0x64/0x220 >>> Call Trace: >>> [c01f69617ac0] [000137a5dc20] 0x137a5dc20 (unreliable) >>> [c01f69617b50] [c044b164] __do_fault+0x64/0x220 >>> [c01f69617b90] [c0453838] do_fault+0x218/0x930 >>> [c01f69617bf0] [c0456f50] __handle_mm_fault+0x350/0xdf0 >>> [c01f69617cd0] [c0457b1c] handle_mm_fault+0x12c/0x310 >>> [c01f69617d10] [c007ef44] __do_page_fault+0x264/0xbb0 >>> [c01f69617df0] [c007f8c8] do_page_fault+0x38/0xd0 >>> [c01f69617e30] [c000a714] handle_page_fault+0x18/0x38 >>> Instruction dump: >>> 40c2fff0 7c2004ac 2fa9 409e0118 73e90001 41820080 e8bd0008 7c2004ac >>> 7ca90074 3940 915c 7929d182 <0b09> 2fa5 419e0080 >>> e89e0018 >>> ---[ end trace 66c6ff034c53f64f ]--- >>> xive-kvm: 
xive_native_esb_fault: accessing invalid ESB page for source >>> 8 ! >>> >>> Fix that by checking the validity of the KVM XIVE interrupt structure. >>> >>> Reported-by: Greg Kurz >>> Signed-off-by: Cédric Le Goater >> >> Fixes ? > > Ah yes :/ > > Cc: sta...@vger.kernel.org # v5.2+ > Fixes: 6520ca64cde7 ("KVM: PPC: Book3S HV: XIVE: Add a mapping for the source > ESB pages") > > Since my provider changed its imap servers, my email filters are really > screwed > up and I miss emails. > > Sorry about that, No worries. It doesn't look like Paul has grabbed this, so I'll take it. cheers
Re: [PATCH] powerpc/pseries/hotplug-cpu: Fix memleak when cpus node not exist
Tyrel Datwyler writes: > On 11/10/20 6:08 AM, Nathan Lynch wrote: >> Zhang Xiaoxu writes: >>> From: zhangxiaoxu >>> >>> If the cpus node does not exist, we fail to free 'cpu_drcs', which >>> will leak memory. >>> >>> Fixes: a0ff72f9f5a7 ("powerpc/pseries/hotplug-cpu: Remove double free in >>> error path") >>> Reported-by: Hulk Robot >>> Signed-off-by: zhangxiaoxu >>> --- >>> arch/powerpc/platforms/pseries/hotplug-cpu.c | 1 + >>> 1 file changed, 1 insertion(+) >>> >>> diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c >>> b/arch/powerpc/platforms/pseries/hotplug-cpu.c >>> index f2837e33bf5d..4bb1c9f2bb11 100644 >>> --- a/arch/powerpc/platforms/pseries/hotplug-cpu.c >>> +++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c >>> @@ -743,6 +743,7 @@ static int dlpar_cpu_add_by_count(u32 cpus_to_add) >>> parent = of_find_node_by_path("/cpus"); >>> if (!parent) { >>> pr_warn("Could not find CPU root node in device tree\n"); >>> + kfree(cpu_drcs); >>> return -1; >>> } >> >> Thanks for finding this. 
>> >> a0ff72f9f5a7 ("powerpc/pseries/hotplug-cpu: Remove double free in error >> path") was posted in Sept 2019 but was not applied until July 2020: >> >> https://lore.kernel.org/linuxppc-dev/20190919231633.1344-1-nath...@linux.ibm.com/ >> >> Here is that change as posted; note the function context is >> find_dlpar_cpus_to_add(), not dlpar_cpu_add_by_count(): >> >> --- a/arch/powerpc/platforms/pseries/hotplug-cpu.c >> +++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c >> @@ -726,7 +726,6 @@ static int find_dlpar_cpus_to_add(u32 *cpu_drcs, u32 >> cpus_to_add) >> parent = of_find_node_by_path("/cpus"); >> if (!parent) { >> pr_warn("Could not find CPU root node in device tree\n"); >> -kfree(cpu_drcs); >> return -1; >> } >> >> Meanwhile b015f6bc9547dbc056edde7177c7868ca8629c4c ("powerpc/pseries: Add >> cpu DLPAR support for drc-info property") was posted in Nov 2019 and >> committed a few days later: >> >> https://lore.kernel.org/linux-pci/1573449697-5448-4-git-send-email-tyr...@linux.ibm.com/ >> >> This change reorganized the same code, removing >> find_dlpar_cpus_to_add(), and it had the effect of fixing the same >> issue. >> >> However git apparently allowed the older change to still apply on top of >> this (changing a function different from the one in the original >> patch!), leading to a real bug. > > Yikes, not sure how that happened without either the committer massaging the > patch to apply, or the line location and context matching exactly. git-am won't apply it, but patch does. I often have to fall back to using patch when things don't apply, so that's presumably what happened here. I try to manually check the result is correct but I obviously didn't do a good job here. cheers
[PATCH] powerpc: Drop -me200 addition to build flags
Currently a build with CONFIG_E200=y will fail with: Error: invalid switch -me200 Error: unrecognized option -me200 Upstream binutils has never supported an -me200 option. Presumably it was supported at some point by either a fork or Freescale internal binutils. We can't support code that we can't even build test, so drop the addition of -me200 to the build flags, so we can at least build with CONFIG_E200=y. Reported-by: Németh Márton Reported-by: kernel test robot Signed-off-by: Michael Ellerman --- More discussion: https://lore.kernel.org/lkml/202011131146.g8dplqdd-...@intel.com --- arch/powerpc/Makefile | 1 - 1 file changed, 1 deletion(-) diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile index a4d56f0a41d9..16b8336f91dd 100644 --- a/arch/powerpc/Makefile +++ b/arch/powerpc/Makefile @@ -248,7 +248,6 @@ KBUILD_CFLAGS += $(call cc-option,-mno-string) cpu-as-$(CONFIG_40x) += -Wa,-m405 cpu-as-$(CONFIG_44x) += -Wa,-m440 cpu-as-$(CONFIG_ALTIVEC) += $(call as-option,-Wa$(comma)-maltivec) -cpu-as-$(CONFIG_E200) += -Wa,-me200 cpu-as-$(CONFIG_E500) += -Wa,-me500 # When using '-many -mpower4' gas will first try and find a matching power4 -- 2.25.1
Re: [PATCH] powerpc/powernv/memtrace: Fake non-memblock aligned sized traces
Jordan Niethe writes: > The hardware trace macros which use the memory provided by memtrace are > able to use trace sizes as small as 16MB. Only memblock aligned values > can be removed from each NUMA node by writing that value to > memtrace/enable in debugfs. This means setting up, say, a 16MB trace is > not possible. To allow such a trace size, instead align whatever value > is written to memtrace/enable to the memblock size for the purpose of > removing it from each NUMA node but report the written value from > memtrace/enable and memtrace/x/size in debugfs. Why does it matter if the size that's removed is larger than the size that was requested? Is it about constraining the size of the trace? If so that seems like it should be the job of the tracing tools, not the kernel. cheers
Re: [PATCH 2/3] Revert "lib: Revert use of fallthrough pseudo-keyword in lib/"
Hi Nick, I love your patch! Perhaps something to improve: [auto build test WARNING on powerpc/next] [also build test WARNING on linus/master v5.10-rc4 next-20201116] [cannot apply to pmladek/for-next] [If your patch is applied to the wrong git tree, kindly drop us a note. And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/0day-ci/linux/commits/Nick-Desaulniers/PPC-Fix-Wimplicit-fallthrough-for-clang/20201116-123803 base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next config: x86_64-randconfig-m001-20201115 (attached as .config) compiler: gcc-9 (Debian 9.3.0-15) 9.3.0 If you fix the issue, kindly add following tag as appropriate Reported-by: kernel test robot smatch warnings: lib/zstd/huf_compress.c:559 HUF_compress1X_usingCTable() warn: inconsistent indenting vim +559 lib/zstd/huf_compress.c 529 530 #define HUF_FLUSHBITS_1(stream) \ 531 if (sizeof((stream)->bitContainer) * 8 < HUF_TABLELOG_MAX * 2 + 7) \ 532 HUF_FLUSHBITS(stream) 533 534 #define HUF_FLUSHBITS_2(stream) \ 535 if (sizeof((stream)->bitContainer) * 8 < HUF_TABLELOG_MAX * 4 + 7) \ 536 HUF_FLUSHBITS(stream) 537 538 size_t HUF_compress1X_usingCTable(void *dst, size_t dstSize, const void *src, size_t srcSize, const HUF_CElt *CTable) 539 { 540 const BYTE *ip = (const BYTE *)src; 541 BYTE *const ostart = (BYTE *)dst; 542 BYTE *const oend = ostart + dstSize; 543 BYTE *op = ostart; 544 size_t n; 545 BIT_CStream_t bitC; 546 547 /* init */ 548 if (dstSize < 8) 549 return 0; /* not enough space to compress */ 550 { 551 size_t const initErr = BIT_initCStream(&bitC, op, oend - op); 552 if (HUF_isError(initErr)) 553 return 0; 554 } 555 556 n = srcSize & ~3; /* join to mod 4 */ 557 switch (srcSize & 3) { 558 case 3: HUF_encodeSymbol(&bitC, ip[n + 2], CTable); HUF_FLUSHBITS_2(&bitC); > 559 fallthrough; 560 case 2: HUF_encodeSymbol(&bitC, ip[n + 1], CTable); HUF_FLUSHBITS_1(&bitC); 561 fallthrough; 562 case 1: 
HUF_encodeSymbol(&bitC, ip[n + 0], CTable); HUF_FLUSHBITS(&bitC); 563 case 0: 564 default:; 565 } 566 567 for (; n > 0; n -= 4) { /* note : n&3==0 at this stage */ 568 HUF_encodeSymbol(&bitC, ip[n - 1], CTable); 569 HUF_FLUSHBITS_1(&bitC); 570 HUF_encodeSymbol(&bitC, ip[n - 2], CTable); 571 HUF_FLUSHBITS_2(&bitC); 572 HUF_encodeSymbol(&bitC, ip[n - 3], CTable); 573 HUF_FLUSHBITS_1(&bitC); 574 HUF_encodeSymbol(&bitC, ip[n - 4], CTable); 575 HUF_FLUSHBITS(&bitC); 576 } 577 578 return BIT_closeCStream(&bitC); 579 } 580 --- 0-DAY CI Kernel Test Service, Intel Corporation https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org
Re: [PATCH 3/3] powerpc: fix -Wimplicit-fallthrough
On Mon, Nov 16, 2020 at 5:35 AM Nick Desaulniers wrote: > > The "fallthrough" pseudo-keyword was added as a portable way to denote > intentional fallthrough. Clang will still warn on cases where there is a > fallthrough to an immediate break. Add explicit breaks for those cases. > > Link: https://github.com/ClangBuiltLinux/linux/issues/236 > Signed-off-by: Nick Desaulniers It makes things clearer having a `break` added, so I like that warning. Reviewed-by: Miguel Ojeda Cheers, Miguel
Re: [PATCH 2/3] Revert "lib: Revert use of fallthrough pseudo-keyword in lib/"
On Mon, Nov 16, 2020 at 5:35 AM Nick Desaulniers wrote: > > This reverts commit 6a9dc5fd6170 ("lib: Revert use of fallthrough > pseudo-keyword in lib/") > > Now that we can build arch/powerpc/boot/ free of -Wimplicit-fallthrough, > re-enable these fixes for lib/. > > Link: https://github.com/ClangBuiltLinux/linux/issues/236 > Signed-off-by: Nick Desaulniers Looks fine on visual inspection: Reviewed-by: Miguel Ojeda Cheers, Miguel
Re: [PATCH 1/3] powerpc: boot: include compiler_attributes.h
On Mon, Nov 16, 2020 at 5:35 AM Nick Desaulniers wrote: > > It was also noted in 6a9dc5fd6170 that we could -D__KERNEL__ and > -include compiler_types.h like the main kernel does, though testing that > produces a whole sea of warnings to cleanup. This approach is minimally > invasive. I would add a comment noting this as a reminder -- it also helps to entice a cleanup. Acked-by: Miguel Ojeda Cheers, Miguel
Re: [PATCH 2/3] Revert "lib: Revert use of fallthrough pseudo-keyword in lib/"
On Mon, Nov 16, 2020 at 7:26 AM Gustavo A. R. Silva wrote: > > Reviewed-by: Gustavo A. R. Silva .org :-) Cheers, Miguel
[PATCH 2/3] Revert "lib: Revert use of fallthrough pseudo-keyword in lib/"
This reverts commit 6a9dc5fd6170 ("lib: Revert use of fallthrough pseudo-keyword in lib/") Now that we can build arch/powerpc/boot/ free of -Wimplicit-fallthrough, re-enable these fixes for lib/. Link: https://github.com/ClangBuiltLinux/linux/issues/236 Signed-off-by: Nick Desaulniers --- lib/asn1_decoder.c | 4 ++-- lib/assoc_array.c | 2 +- lib/bootconfig.c| 4 ++-- lib/cmdline.c | 10 +- lib/dim/net_dim.c | 2 +- lib/dim/rdma_dim.c | 4 ++-- lib/glob.c | 2 +- lib/siphash.c | 36 ++-- lib/ts_fsm.c| 2 +- lib/vsprintf.c | 14 +++--- lib/xz/xz_dec_lzma2.c | 4 ++-- lib/xz/xz_dec_stream.c | 16 lib/zstd/bitstream.h| 10 +- lib/zstd/compress.c | 2 +- lib/zstd/decompress.c | 12 ++-- lib/zstd/huf_compress.c | 4 ++-- 16 files changed, 64 insertions(+), 64 deletions(-) diff --git a/lib/asn1_decoder.c b/lib/asn1_decoder.c index 58f72b25f8e9..13da529e2e72 100644 --- a/lib/asn1_decoder.c +++ b/lib/asn1_decoder.c @@ -381,7 +381,7 @@ int asn1_ber_decoder(const struct asn1_decoder *decoder, case ASN1_OP_END_SET_ACT: if (unlikely(!(flags & FLAG_MATCHED))) goto tag_mismatch; - /* fall through */ + fallthrough; case ASN1_OP_END_SEQ: case ASN1_OP_END_SET_OF: @@ -448,7 +448,7 @@ int asn1_ber_decoder(const struct asn1_decoder *decoder, pc += asn1_op_lengths[op]; goto next_op; } - /* fall through */ + fallthrough; case ASN1_OP_ACT: ret = actions[machine[pc + 1]](context, hdr, tag, data + tdp, len); diff --git a/lib/assoc_array.c b/lib/assoc_array.c index 6f4bcf524554..04c98799c3ba 100644 --- a/lib/assoc_array.c +++ b/lib/assoc_array.c @@ -1113,7 +1113,7 @@ struct assoc_array_edit *assoc_array_delete(struct assoc_array *array, index_key)) goto found_leaf; } - /* fall through */ + fallthrough; case assoc_array_walk_tree_empty: case assoc_array_walk_found_wrong_shortcut: default: diff --git a/lib/bootconfig.c b/lib/bootconfig.c index 649ed44f199c..9f8c70a98fcf 100644 --- a/lib/bootconfig.c +++ b/lib/bootconfig.c @@ -827,7 +827,7 @@ int __init xbc_init(char *buf, const char **emsg, int *epos) q - 
2); break; } - /* fall through */ + fallthrough; case '=': ret = xbc_parse_kv(&p, q, c); break; @@ -836,7 +836,7 @@ int __init xbc_init(char *buf, const char **emsg, int *epos) break; case '#': q = skip_comment(q); - /* fall through */ + fallthrough; case ';': case '\n': ret = xbc_parse_key(&p, q); diff --git a/lib/cmdline.c b/lib/cmdline.c index 9e186234edc0..46f2cb4ce6d1 100644 --- a/lib/cmdline.c +++ b/lib/cmdline.c @@ -144,23 +144,23 @@ unsigned long long memparse(const char *ptr, char **retptr) case 'E': case 'e': ret <<= 10; - /* fall through */ + fallthrough; case 'P': case 'p': ret <<= 10; - /* fall through */ + fallthrough; case 'T': case 't': ret <<= 10; - /* fall through */ + fallthrough; case 'G': case 'g': ret <<= 10; - /* fall through */ + fallthrough; case 'M': case 'm': ret <<= 10; - /* fall through */ + fallthrough; case 'K': case 'k': ret <<= 10; diff --git a/lib/dim/net_dim.c b/lib/dim/net_dim.c index a4db51c21266..06811d866775 100644 --- a/lib/dim/net_dim.c +++ b/lib/dim/net_dim.c @@ -233,7 +233,7 @@ void net_dim(struct dim *dim, struct dim_sample end_sample) schedule_work(&dim->work); break; } - /* fall through */ + fallthrough; case DIM_START_MEASURE: dim_update_sample(end_sample.event_ctr, end_sample.pkt_ctr, end_sample.byte_ctr, &dim->start_sample); diff --git a/lib/dim/rdma_dim.c b/lib/dim/rdma_dim.c index f7e26c7b4749..15462d54758d 100644 --- a/lib/dim/rdma_dim.c +++ b/lib/dim/rdma_dim.c @@ -59,7 +59,7 @@ static bool rdma_dim_decision(struct dim_stats *curr_stats, struct dim *dim) break;
[PATCH 0/3] PPC: Fix -Wimplicit-fallthrough for clang
While cleaning up the last few -Wimplicit-fallthrough warnings in tree for Clang, I noticed commit 6a9dc5fd6170d ("lib: Revert use of fallthrough pseudo-keyword in lib/") which seemed to undo a bunch of fixes in lib/ due to breakage in arch/powerpc/boot/ not including compiler_types.h. We don't need compiler_types.h for the definition of `fallthrough`, simply compiler_attributes.h. Include that, revert the revert to lib/, and fix the last remaining cases I observed for powernv_defconfig. Nick Desaulniers (3): powerpc: boot: include compiler_attributes.h Revert "lib: Revert use of fallthrough pseudo-keyword in lib/" powerpc: fix -Wimplicit-fallthrough arch/powerpc/boot/Makefile | 1 + arch/powerpc/boot/decompress.c | 1 - arch/powerpc/kernel/uprobes.c | 1 + arch/powerpc/perf/imc-pmu.c| 1 + lib/asn1_decoder.c | 4 ++-- lib/assoc_array.c | 2 +- lib/bootconfig.c | 4 ++-- lib/cmdline.c | 10 +- lib/dim/net_dim.c | 2 +- lib/dim/rdma_dim.c | 4 ++-- lib/glob.c | 2 +- lib/siphash.c | 36 +- lib/ts_fsm.c | 2 +- lib/vsprintf.c | 14 ++--- lib/xz/xz_dec_lzma2.c | 4 ++-- lib/xz/xz_dec_stream.c | 16 +++ lib/zstd/bitstream.h | 10 +- lib/zstd/compress.c| 2 +- lib/zstd/decompress.c | 12 ++-- lib/zstd/huf_compress.c| 4 ++-- 20 files changed, 67 insertions(+), 65 deletions(-) -- 2.29.2.299.gdc1121823c-goog