[PATCH v5 5/5] arch, mm: make kernel_page_present() always available

2020-11-07 Thread Mike Rapoport
From: Mike Rapoport For architectures that enable ARCH_HAS_SET_MEMORY, having the ability to verify that a page is mapped in the kernel direct map can be useful regardless of hibernation. Add RISC-V implementation of kernel_page_present(), update its forward declarations and stubs to be a part
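
As an illustration only (not the hunk from the patch), the declaration/stub split implied here could look roughly like the sketch below; the `return true` stub assumes an architecture whose direct map is never modified:

#ifdef CONFIG_ARCH_HAS_SET_DIRECT_MAP
bool kernel_page_present(struct page *page);
#else
static inline bool kernel_page_present(struct page *page)
{
	/* the direct map is never modified on this architecture */
	return true;
}
#endif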

[PATCH v5 4/5] arch, mm: restore dependency of __kernel_map_pages() on DEBUG_PAGEALLOC

2020-11-07 Thread Mike Rapoport
From: Mike Rapoport The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never fail. With this assumption it wouldn't be safe to allow general usage of this function. Moreover, some architectures that implement __kernel_map_pages() have this function guarded by #ifdef
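
For illustration, the guard pattern referred to here looks roughly like this sketch (not copied from any particular architecture): the real implementation only exists under CONFIG_DEBUG_PAGEALLOC, with an empty stub otherwise, so code outside that scope cannot rely on it.

#ifdef CONFIG_DEBUG_PAGEALLOC
extern void __kernel_map_pages(struct page *page, int numpages, int enable);
#else
static inline void __kernel_map_pages(struct page *page,
				      int numpages, int enable) { }
#endif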

[PATCH v5 3/5] PM: hibernate: make direct map manipulations more explicit

2020-11-07 Thread Mike Rapoport
From: Mike Rapoport When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may not be present in the direct map and has to be explicitly mapped before it can be copied. Introduce hibernate_map_page() and hibernation_unmap_page() that will explicitly use
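
A minimal sketch of what such an explicit helper might look like, assuming the set_direct_map_default_noflush() and debug_pagealloc_map_pages() interfaces discussed elsewhere in this series (this is not the literal patch):

static void hibernate_map_page(struct page *page)
{
	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
		/* the page may have been removed from the direct map earlier */
		if (set_direct_map_default_noflush(page))
			pr_warn_once("Failed to remap page\n");
	} else {
		/* DEBUG_PAGEALLOC case, helper from patch 1/5 */
		debug_pagealloc_map_pages(page, 1);
	}
}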

[PATCH v5 2/5] slab: debug: split slab_kernel_map() to map and unmap variants

2020-11-07 Thread Mike Rapoport
From: Mike Rapoport Instead of using slab_kernel_map() with a 'map' parameter to remap pages when DEBUG_PAGEALLOC is enabled, use dedicated helpers slab_kernel_map() and slab_kernel_unmap(). Signed-off-by: Mike Rapoport --- mm/slab.c | 26 +++--- 1 file changed, 15
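
A sketch of how such a split might look, assuming the existing is_debug_pagealloc_cache() check in mm/slab.c (not the literal diff):

static void slab_kernel_map(struct kmem_cache *cachep, void *objp)
{
	if (!is_debug_pagealloc_cache(cachep))
		return;
	__kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, 1);
}

static void slab_kernel_unmap(struct kmem_cache *cachep, void *objp)
{
	if (!is_debug_pagealloc_cache(cachep))
		return;
	__kernel_map_pages(virt_to_page(objp), cachep->size / PAGE_SIZE, 0);
}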

[PATCH v5 1/5] mm: introduce debug_pagealloc_{map,unmap}_pages() helpers

2020-11-07 Thread Mike Rapoport
From: Mike Rapoport When CONFIG_DEBUG_PAGEALLOC is enabled, it unmaps pages from the kernel direct mapping after free_pages(). The pages then need to be mapped back before they can be used. These mapping operations use __kernel_map_pages() guarded with debug_pagealloc_enabled(). The
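
A sketch of what such helpers could look like; the debug_pagealloc_enabled_static() check used below is an assumption, not necessarily the exact form in the patch:

static inline void debug_pagealloc_map_pages(struct page *page, int numpages)
{
	if (debug_pagealloc_enabled_static())
		__kernel_map_pages(page, numpages, 1);
}

static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages)
{
	if (debug_pagealloc_enabled_static())
		__kernel_map_pages(page, numpages, 0);
}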

[PATCH v5 0/5] arch, mm: improve robustness of direct map manipulation

2020-11-07 Thread Mike Rapoport
From: Mike Rapoport Hi, During recent discussion about KVM protected memory, David raised a concern about usage of __kernel_map_pages() outside of DEBUG_PAGEALLOC scope [1]. Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is possible that __kernel_map_pages() would

[Bug 209733] Starting new KVM virtual machines on PPC64 starts to hang after box is up for a while

2020-11-07 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=209733 --- Comment #2 from Cameron (c...@neo-zeon.de) --- Verified this happens with 5.9.6 and the Debian vendor kernel linux-image-5.9.0-1-powerpc64le. Might also be worth mentioning this is occurring with qemu-system-ppc package version

[PATCH] KVM: PPC: fix comparison to bool warning

2020-11-07 Thread xiakaixu1987
From: Kaixu Xia Fix the following coccicheck warning: ./arch/powerpc/kvm/booke.c:503:6-16: WARNING: Comparison to bool ./arch/powerpc/kvm/booke.c:505:6-17: WARNING: Comparison to bool ./arch/powerpc/kvm/booke.c:507:6-16: WARNING: Comparison to bool Reported-by: Tosk Robot Signed-off-by: Kaixu
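
A generic illustration of this warning class (not the actual booke.c hunks):

static int hits;

static void example(bool crit)
{
	if (crit == true)	/* coccicheck: Comparison to bool */
		hits++;

	if (crit)		/* preferred form */
		hits++;
}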

Re: [PATCH] powerpc/64s: Remove RFI

2020-11-07 Thread Christophe Leroy
On 06/11/2020 at 12:36, Christophe Leroy wrote: Last use of RFI on PPC64 was removed by commit b8e90cb7bc04 ("powerpc/64: Convert the syscall exit path to use RFI_TO_USER/KERNEL"). Remove the macro. Forget this crazy patch. I missed two RFI in head_64.S Christophe Signed-off-by:

Re: [PATCH] powerpc/32s: Use relocation offset when setting early hash table

2020-11-07 Thread Andreas Schwab
On Nov 07 2020, Serge Belyshev wrote: > Christophe Leroy writes: > >> When calling early_hash_table(), the kernel hasn't been yet >> relocated to its linking address, so data must be addressed >> with relocation offset. >> >> Add relocation offset to write into Hash in early_hash_table(). >> >>

Re: Kernel panic from malloc() on SUSE 15.1?

2020-11-07 Thread Carl Jacobsen
On Fri, Nov 6, 2020 at 4:25 AM Michael Ellerman wrote: > So something seems to have gone wrong linking this, I see eg: > > 10004a8c : > 10004a8c: 2b 10 40 3c lis r2,4139 > 10004a90: 88 f7 42 38 addi r2,r2,-2168 > 10004a94: a6 02 08 7c mflr r0 >

[PATCH] KVM: PPC: Book3S: Assign boolean values to a bool variable

2020-11-07 Thread xiakaixu1987
From: Kaixu Xia Fix the following coccinelle warnings: ./arch/powerpc/kvm/book3s_xics.c:476:3-15: WARNING: Assignment of 0/1 to bool variable ./arch/powerpc/kvm/book3s_xics.c:504:3-15: WARNING: Assignment of 0/1 to bool variable Reported-by: Tosk Robot Signed-off-by: Kaixu Xia ---
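
A generic illustration of the warning class (not the actual book3s_xics.c lines):

static bool resend;

static void example(void)
{
	resend = 1;	/* coccinelle: Assignment of 0/1 to bool variable */
	resend = true;	/* preferred form */
}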

Re: [PATCH] powerpc: add compile-time support for lbarx, lwarx

2020-11-07 Thread Segher Boessenkool
On Sat, Nov 07, 2020 at 08:12:13AM +0100, Gabriel Paubert wrote: > On Sat, Nov 07, 2020 at 01:23:28PM +1000, Nicholas Piggin wrote: > > ISA v2.06 (POWER7 and up) as well as e6500 support lbarx and lwarx. > > Hmm, lwarx exists since original Power AFAIR, Almost: it was new on PowerPC. Segher

[PATCH] panic: don't dump stack twice on warn

2020-11-07 Thread Christophe Leroy
Before commit 3f388f28639f ("panic: dump registers on panic_on_warn"), __warn() was calling show_regs() when regs was not NULL, and show_stack() otherwise. After that commit, show_stack() is called regardless of whether show_regs() has been called or not, leading to duplicated Call Trace: [
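
A sketch of the intended logic inside __warn(), with the 5.10-era show_stack() signature assumed; not the literal hunk:

	if (regs)
		show_regs(regs);	/* on many architectures this already prints the backtrace */
	else
		show_stack(NULL, NULL, KERN_DEFAULT);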

Re: [RFC PATCH] powerpc: show registers when unwinding interrupt frames

2020-11-07 Thread Christophe Leroy
On 07/11/2020 at 03:33, Nicholas Piggin wrote: It's often useful to know the register state for interrupts in the stack frame. In the below example (with this patch applied), the important information is the state of the page fault. A blatant case like this probably rather should have the

Re: [RFC PATCH 0/9] powerpc/64s: fast interrupt exit

2020-11-07 Thread Christophe Leroy
On 06/11/2020 at 16:59, Nicholas Piggin wrote: This series attempts to improve the speed of interrupts and system calls in two major ways. Firstly, the SRR/HSRR registers do not need to be reloaded if they were not used or clobbered for the duration of the interrupt. Secondly, an

Re: [PATCH] powerpc/32s: Use relocation offset when setting early hash table

2020-11-07 Thread Serge Belyshev
Christophe Leroy writes: > When calling early_hash_table(), the kernel hasn't been yet > relocated to its linking address, so data must be addressed > with relocation offset. > > Add relocation offset to write into Hash in early_hash_table(). > > Reported-by: Erhard Furtner > Reported-by:

Re: [PATCH 18/18] powerpc/64s: move power4 idle entirely to C

2020-11-07 Thread Christophe Leroy
On 05/11/2020 at 15:34, Nicholas Piggin wrote: Christophe asked about doing this, most of the code is still in asm but maybe it's slightly nicer? I don't know if it's worthwhile. Er... I don't think I was asking for that, but why not, see later comments. At first I was just asking to

Re: [PATCH] powerpc/32s: Setup the early hash table at all time.

2020-11-07 Thread Christophe Leroy
On 29/10/2020 at 22:07, Andreas Schwab wrote: On Oct 01 2020, Christophe Leroy wrote: At the time being, an early hash table is set up when CONFIG_KASAN is selected. There is nothing wrong with setting such an early hash table all the time, even if it is not used. This is a statically

[Bug 209869] Kernel 5.10-rc1 fails to boot on a PowerMac G4 3,6 at an early stage

2020-11-07 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=209869 --- Comment #11 from Christophe Leroy (christophe.le...@csgroup.eu) --- Can (In reply to Erhard F. from comment #10) > (In reply to Christophe Leroy from comment #9) > > Ok, what about 5.10-rc1 + KASAN without reverting the patch ? > Nope, does

[PATCH] powerpc/32s: Use relocation offset when setting early hash table

2020-11-07 Thread Christophe Leroy
When calling early_hash_table(), the kernel hasn't been yet relocated to its linking address, so data must be addressed with relocation offset. Add relocation offset to write into Hash in early_hash_table(). Reported-by: Erhard Furtner Reported-by: Andreas Schwab Fixes: 69a1593abdbc
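
The real change is in head_book3s_32.S assembly; as a purely conceptual C sketch, accessing a global before relocation means adding the offset between the running address and the link-time address, along these lines (names are illustrative):

static inline void store_prereloc(unsigned long *link_addr, unsigned long val,
				  unsigned long reloc_offset)
{
	/* reloc_offset = current load address - link-time address */
	*(unsigned long *)((unsigned long)link_addr + reloc_offset) = val;
}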

Re: [PATCH] powerpc: add compile-time support for lbarx, lwarx

2020-11-07 Thread Christophe Leroy
On 07/11/2020 at 04:23, Nicholas Piggin wrote: ISA v2.06 (POWER7 and up) as well as e6500 support lbarx and lwarx. Add a compile option that allows code to use it, and add support in cmpxchg and xchg 8 and 16 bit values. Do you mean lharx? Because lwarx exists on all powerpcs I think.
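
For reference, a byte-sized cmpxchg built on lbarx/stbcx. could be sketched as below; this assumes ISA v2.06 (POWER7 and up) or e6500 and is not Nicholas Piggin's actual patch:

static inline u8 cmpxchg_u8_relaxed_sketch(u8 *p, u8 old, u8 new)
{
	u8 prev;

	asm volatile(
"1:	lbarx	%0,0,%2\n"	/* load byte and reserve */
"	cmpw	%0,%3\n"
"	bne-	2f\n"
"	stbcx.	%4,0,%2\n"	/* store conditional byte */
"	bne-	1b\n"		/* lost reservation, retry */
"2:"
	: "=&r" (prev), "+m" (*p)
	: "r" (p), "r" (old), "r" (new)
	: "cc", "memory");
	return prev;
}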