Christophe Leroy writes:
> PAGE_KERNEL_TEXT is an old macro that is used to tell the kernel whether
> kernel text has to be mapped read-only or read-write based on build
> time options.
>
> But nowadays, with functionalities like jump_labels, static calls,
> etc ... more or less all kernels need t
Christophe Leroy writes:
> Le 05/09/2025 à 05:55, Ritesh Harjani a écrit :
>> Christophe Leroy writes:
>>
>>> PAGE_KERNEL_TEXT is an old macro that is used to tell the kernel whether
>>> kernel text has to be mapped read-only or read-write based on build
>>> time options.
>>>
>>> But nowadays, with
Andrew Donnellan writes:
> If patch_branch() or patch_instruction() fails while updating a jump
> label, we presently fail silently, leading to unpredictable behaviour
> later on.
>
> Change arch_jump_label_transform() to panic on a code patching failure,
> matching the existing behaviour of arch
> #include
> #include
>
> -DEFINE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key);
> -
> static void __init kasan_init_phys_region(void *start, void *end)
> {
> unsigned long k_start, k_end, k_cur;
> @@ -92,11 +90,9 @@ void __init kasan_init(void)
>*/
>
ress()
here and then free_pages() doing virt_to_page() internally..
The change looks good to me. Please feel free to add:
Reviewed-by: Ritesh Harjani (IBM)
>
> diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c
> b/arch/powerpc/mm/book3s64/radix_pgtable.c
> index be523e5fe9c5..
Christophe Leroy writes:
> Le 30/08/2025 à 05:51, Ritesh Harjani (IBM) a écrit :
>> The no_slb_preload cmdline can come in useful for quickly disabling and/or
>> testing the performance impact of userspace slb preloads. Recently there
>> was a slb multi-hit issue due to slb preload
Christophe Leroy writes:
> Le 30/08/2025 à 05:51, Ritesh Harjani (IBM) a écrit :
>> We get the below errors when we try to enable debug logs in book3s64/hash_utils.c.
>> This patch fixes these errors related to phys_addr_t printf format.
>>
>> arch/powerpc/mm/book3s6
Christophe Leroy writes:
> Le 30/08/2025 à 05:51, Ritesh Harjani (IBM) a écrit :
>> We dropped preload_new_slb_context() in the previous patch. That means
>
> slb_setup_new_exec() was also checking preload_add()'s return value, but
> it is also gone.
>
Right. Will add that.
Christophe Leroy writes:
> Le 30/08/2025 à 05:51, Ritesh Harjani (IBM) a écrit :
>> This patch adds PGD/PUD/PMD/PTE level information while dumping kernel
>> page tables. Before this patch it was hard to identify which entries
>> belong to which page table level, e.g.
>
: Michael Ellerman
Cc: Nicholas Piggin
Cc: Christophe Leroy
Cc: Paul Mackerras
Cc: "Aneesh Kumar K.V"
Cc: Donet Tom
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/ptdump/8xx.c | 5 +
arch/powerpc/mm/ptdump/book3s64.c | 5 +
arch/
Stephen Rothwell writes:
> Hi Ritesh,
>
> On Sat, 30 Aug 2025 09:21:47 +0530 "Ritesh Harjani (IBM)"
> wrote:
>>
>> diff --git a/mm/vmstat.c b/mm/vmstat.c
>> index 71cd1ceba191..8cd17a5fc72b 100644
>> --- a/mm/vmstat.c
>> +++ b/mm/v
: Fix SLB multihit issue during SLB preload
Ritesh Harjani (IBM) (7):
book3s64/hash: Restrict stress_hpt_struct memblock region to within RMA limit
book3s64/hash: Fix phys_addr_t printf format in htab_initialize()
powerpc/ptdump/64: Fix kernel_hash_pagetable dump for ISA v3.00 HPTE format
d.au/
Cc: Madhavan Srinivasan
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Christophe Leroy
Cc: Donet Tom
Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Lorenzo Stoakes
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-ker...@vger.kernel.org
Cc: linux...@kvack.org
Signed-off-by: Ritesh Harjani
pc/64: Simplify adaptation to new ISA v3.00 HPTE
format")
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/ptdump/hashpagetable.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/arch/powerpc/mm/ptdump/hashpagetable.c
b/arch/powerpc/mm/ptdump/hashpagetable.c
index a6baa6166d94.
Kumar K.V"
Cc: Donet Tom
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Ritesh Harjani (IBM)
---
Documentation/admin-guide/kernel-parameters.txt | 3 +++
arch/powerpc/mm/book3s64/hash_utils.c | 3 +++
arch/powerpc/mm/book3s64/internal.h | 7 +++
arch/powerpc/m
: "Aneesh Kumar K.V"
Cc: Donet Tom
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/slb.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/slb.c b/arch/powerpc/mm/book3s64/slb.c
index 7
4s/hash: add stress_hpt kernel boot option to
increase hash faults")
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch/powerpc/mm
lerman
Cc: Nicholas Piggin
Cc: Christophe Leroy
Cc: Paul Mackerras
Cc: "Aneesh Kumar K.V"
Cc: Donet Tom
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch
Modules linked in:
CPU: 0 UID: 0 PID: 1810970 Comm: dd Not tainted 6.16.0-rc3-dirty #12
VOLUNTARY
Hardware name: IBM pSeries (emulated by qemu) POWER8 (architected)
0x4d0200 0xf04 of:SLOF,HEAD hv:linux,kvm pSeries
NIP: c015426c LR: c01543b4 CTR: 00
: Hash
e.g. below shows that struct page pointers come from the vmemmap area, i.e.
(gdb) p vmemmap
$5 = (struct page *) 0xc00c
(gdb) lx-pfn_to_page 0
pfn_to_page(0x0) = 0xc00c
(gdb) lx-pfn_to_page 1
pfn_to_page(0x1) = 0xc00c0040
Signed-off-by: Ritesh Harjani (IBM
Signed-off-by: Ritesh Harjani (IBM)
---
scripts/gdb/linux/constants.py.in | 1 +
scripts/gdb/linux/cpus.py | 17 -
2 files changed, 17 insertions(+), 1 deletion(-)
diff --git a/scripts/gdb/linux/constants.py.in
b/scripts/gdb/linux/constants.py.in
index c388
my book3s64 and ppc32 platform.
I think we should fix the subject line: s/ptdump_pglevel/ptdump_pg_level
Otherwise the changes look good to me. So please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
>
> diff --git a/arch/powerpc/mm/ptdump/8xx.c b/arch/powerpc/mm/ptdump/8xx.c
>
def CONFIG_PPC_BOOK3S and #else block. This patch takes those
duplicate definitions out and moves them to a common place.
BTW, there is nothing sndmsg specific in this, so this could be open
coded as well. But as far as this patch is concerned, it looks good.
So please feel free to add:
Reviewe
etting up the mmu context properly. But we didn't do
>> * that since the prev mm_struct running on cpu-0 was same as the
>> * next mm_struct (which is true for swapper / kernel threads). So
>> * now when we try to add this new entry into the HW SLB of cpu-0,
> * we hit a SLB multi-hit error.
> */
>
> WARNING: CPU: 0 PID: 1810970 at arch/powerpc/mm/book3s64/slb.c:62
> assert_slb_presence+0x2c/0x50
> Modules linked in:
> CPU: 0 UID: 0 PID: 1810970 Co
dd this new entry into the HW SLB of cpu-0,
* we hit a SLB multi-hit error.
*/
WARNING: CPU: 0 PID: 1810970 at arch/powerpc/mm/book3s64/slb.c:62
assert_slb_presence+0x2c/0x50
Modules linked in:
CPU: 0 UID: 0 PID: 1810970 Comm: dd Not tainted 6.16.0-rc3-dirt
olved. For
> s390 this requires forcing a couple functions to be inline with
> __always_inline.
>
> Signed-off-by: Kees Cook
> ---
> Cc: Madhavan Srinivasan
> Cc: Michael Ellerman
> Cc: Nicholas Piggin
> Cc: Christophe Leroy
> Cc: Naveen N Rao
> Cc: "Ritesh
o me. Please feel free to add:
Reviewed-by: Ritesh Harjani (IBM)
> Signed-off-by: Gautam Menghani
> ---
> arch/powerpc/kvm/trace_book3s.h | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/powerpc/kvm/trace_book3s.h b/arch/powerpc/kvm/trace_book3s.h
> index 372
reate mappings for vmemmap area. In this, we first try
to allocate a pmd entry using vmemmap_alloc_block_buf() of PMD_SIZE. If we
couldn't allocate it, we should definitely fall back to base page mapping.
Looks good to me. Feel free to add:
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
> Signed-off-by
ore calling vfree().
nitpick: I might have re-phrased the commit msg as:
powerpc/pseries/iommu: Fix kmemleak in TCE table userspace view
The patch looks good to me purely from the kmemleak bug perspective.
So feel free to take:
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
but by
> cumulative
> operations during the test sequence.
>
>
> Environment Details:
> Kernel: 6.15.0-rc1-g521d54901f98
> Reproducible with: 6.15.0-rc2-gf3a2e2a79c9d
Looks like the issue is happening on 6.15-rc2. Did git bisect reveal a
faulty commit?
>
Dan Horák writes:
> Hi,
>
> after updating to Fedora built 6.15-rc2 kernel from 6.14 I am getting a
> soft lockup early in the boot and NVME related timeout/crash later
> (could it be related?). I am first checking if this is a known issue
> as I have not started bisecting yet.
>
> [2.866399]
n the shared link, indeed had an unmet
dependency. i.e.
CONFIG_PPC_64S_HASH_MMU=y
# CONFIG_PPC_RADIX_MMU is not set
CONFIG_PPC_RADIX_BROADCAST_TLBIE=y
So, the fix looks good to me. Please feel free to take:
Reviewed-by: Ritesh Harjani (IBM)
> ---
> arch/powerpc/platforms/powernv/Kconfig | 2 +-
Christophe Leroy writes:
> Le 07/04/2025 à 21:10, Ritesh Harjani (IBM) a écrit :
>> Madhavan Srinivasan writes:
>>
>>> Commit 3d45a3d0d2e6 ("powerpc: Define config option for processors with
>>> broadcast TLBIE")
>>
>> We may need to add
Stefan Berger writes:
> I bisected Linux between 6.13.0 and 6.12.0 due to failing kexec on a
> Power8 baremetal host on 6.13.0:
>
> 8fec58f503b296af87ffca3898965e3054f2b616 is the first bad commit
> commit 8fec58f503b296af87ffca3898965e3054f2b616
> Author: Ritesh Harjani (I
+linux-btrfs
Venkat Rao Bagalkote writes:
> Greetings!!!
>
>
> I am observing a kernel oops while running btrfs/108 TC on IBM Power System.
>
> Repo: Linux-Next (next-20250320)
Looks like this next tag had many btrfs related changes -
https://web.git.kernel.org/pub/scm/lin
Christophe Leroy writes:
> Le 10/03/2025 à 13:44, Donet Tom a écrit :
>> From: "Ritesh Harjani (IBM)"
>>
>> Fix compile errors when CONFIG_ARCH_WANT_OPTIMIZE_DAX_VMEMMAP=n
>
> I don't understand your patch.
>
> As far as I can see, CONFIG_AR
Sourabh Jain writes:
> Hello Ritesh,
>
>
> On 04/03/25 10:27, Ritesh Harjani (IBM) wrote:
>> Sourabh Jain writes:
>>
>>> Hello Ritesh,
>>>
>>> Thanks for the review.
>>>
>>> On 02/03/25 12:05, Ritesh Harjani (IBM) wrote:
>&
Sourabh Jain writes:
> Hello Ritesh,
>
> Thanks for the review.
>
> On 02/03/25 12:05, Ritesh Harjani (IBM) wrote:
>> Sourabh Jain writes:
>>
>>> The fadump kernel boots with limited memory solely to collect the kernel
>>> core dump. Having giganti
0.00] HugeTLB: hugepages=1 does not follow a valid hugepagesz,
> ignoring
> [0.706375] HugeTLB support is disabled!
> [0.773530] hugetlbfs: disabling because there are no supported hugepage
> sizes
>
> $ cat /proc/meminfo | grep -i "hugetlb"
> -
Erhard Furtner writes:
> Greetings!
>
> At boot with a KASAN-enabled v6.14-rc4 kernel on my PowerMac G4 DP I get:
>
> [...]
> vmalloc_node_range for size 4198400 failed: Address range restricted to
> 0xf100 - 0xf511
> swapon: vmalloc error: size 4194304, vm_struct allocation failed,
> m
n [1].
But looks good otherwise. With that addressed in the commit message,
please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
>
> arch/powerpc/kvm/powerpc.c | 5 +
> 1 file changed, 1 insertion(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/kvm/powerpc.c b/
ring_choices.h i.e.
include/linux/seq_file.h -> linux/string_helpers.h ->
linux/string_choices.h
Directly including string_choices.h could be better.
#include
However no hard preferences. The patch functionally looks correct to me.
Please feel free to add -
Reviewed
lot information at the
> right offset for hugetlb")
> Signed-off-by: Christophe Leroy
> ---
> v2: Also inline __rpte_to_hidx() for the same reason
Thanks for addressing the other warning too in v2. I also tested the
changes on my system and this fixes both the reported warnings.
> arch/powerpc/sysdev/xics/icp-native.c | 21 -
> 2 files changed, 22 deletions(-)
Indeed there are no callers left of this function. Great catch!
Looks good to me. Please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
Christophe Leroy writes:
> Rewrite __real_pte() as a static inline in order to avoid
> following warning/error when building with 4k page size:
>
> CC arch/powerpc/mm/book3s64/hash_tlb.o
> arch/powerpc/mm/book3s64/hash_tlb.c: In function 'hpte_need_flush':
> arch/powerpc/
Amit Machhiwal writes:
> Currently, on book3s-hv, the capability KVM_CAP_SPAPR_TCE_VFIO is only
> available for KVM Guests running on PowerNV and not for the KVM guests
> running on pSeries hypervisors.
IIUC it was said here [1] that this capability is not available on
pSeries, hence it got rem
uot;off");
> + str_on_off(KERNEL_COHERENCY),
> + str_on_off(devtree_coherency));
> BUG();
> }
Looks good to me. Please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
Ritesh Harjani (IBM) writes:
> Sourabh Jain writes:
>
>> Commit 8597538712eb ("powerpc/fadump: Do not use hugepages when fadump
>> is active") disabled hugetlb support when fadump is active by returning
>> early from hugetlbpage_init():arch/powerpc/mm/h
gt; CC: Hari Bathini
> CC: Madhavan Srinivasan
> Cc: Mahesh Salgaonkar
> Cc: Michael Ellerman
> CC: Ritesh Harjani (IBM)
> Signed-off-by: Sourabh Jain
> ---
>
> Note: Even with this fix included, it is possible to enable gigantic
> pages in the fadump kernel. IIUC
ional macros pointed out by Ritesh
>which are duplicates and are available in "pkeys.h"
Thanks! The changes look good to me.
Please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
Gave a quick run on my lpar too -
# selftests: powerpc/ptrace: core-pkey
# test:
Madhavan Srinivasan writes:
> Both core-pkey.c and ptrace-pkey.c tests have similar macro
> definitions, move them to "pkeys.h" and remove the macro
> definitions from the C file.
>
> Signed-off-by: Madhavan Srinivasan
> ---
> tools/testing/selftests/powerpc/include/pkeys.h | 8
>
y.c | 14 +-
> 1 file changed, 1 insertion(+), 13 deletions(-)
>
Similar to previous patch. Cleanup looks good to me.
Please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
an this up and consolidate the common header definitions
into pkeys.h header file. The changes look good to me. Please feel free
to add -
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
series as well for the callers to know whether the EEH recovery is
completed.
This looks good to me. Please feel free to add -
Reviewed-by: Ritesh Harjani (IBM)
-ritesh
Vaibhav Jain writes:
> Hi Ritesh,
>
> Thanks for looking into this patch. My responses on behalf of Narayana
> below:
>
> "Ritesh Harjani (IBM)" writes:
>
>> Narayana Murty N writes:
>>
>>> The PE Reset State "0" obtained from RT
mod
> fuse loop nfnetlink xfs sd_mod nvme nvme_core ibmvscsi scsi_transport_srp
> nvme_auth [last unloaded: scsi_debug]
> [16631.058617] CPU: 1 UID: 0 PID: 0 Comm: swapper/1 Kdump: loaded Tainted: G
> W 6.12.0-rc6+ #1
> [16631.058623] Tainted: [W]=WARN
> [16631.05862
/run/ext4
# 4k kernel
du -sh /run/ext4
84K /run/ext4
>
> It seems fraught to rely on the ext4.img taking less space on disk than
> the allocated size, so instead create the tmpfs with a size of 2MB. With
> that all 21 tests pass on 64K PAGE_SIZE kernels.
That looks like the right th
Narayana Murty N writes:
> The PE Reset State "0" obtained from RTAS calls
> ibm_read_slot_reset_[state|state2] indicates that
> the Reset is deactivated and the PE is not in the MMIO
> Stopped or DMA Stopped state.
>
> With PE Reset State "0", the MMIO and DMA is allowed for
> the PE.
Looking a
let's enforce pageblock_order to be non-zero during
cma_init_reserved_mem() to catch such wrong usages.
Acked-by: David Hildenbrand
Acked-by: Zi Yan
Reviewed-by: Anshuman Khandual
Signed-off-by: Ritesh Harjani (IBM)
---
RFCv3 -> v4:
1. Dropped RFC tagged as requested by Andrew.
2. Upd
Marco Elver writes:
> On Fri, 18 Oct 2024 at 19:46, Ritesh Harjani (IBM)
> wrote:
>>
>> From: Nirjhar Roy
>>
>> Faults from copy_from_kernel_nofault() needs to be handled by fixup
>> table and should not be handled by kfence. Otherwise whi
"Ritesh Harjani (IBM)" writes:
> cma_init_reserved_mem() checks base and size alignment with
> CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
> early boot when pageblock_order is 0. That means if base and size do
> not have pageblock_order a
Sourabh Jain writes:
> Hello Ritesh,
>
>
> On 12/11/24 17:23, Ritesh Harjani (IBM) wrote:
>> Ritesh Harjani (IBM) writes:
>>
>>> Sourabh Jain writes:
>>>
>>>> Hello Ritesh,
>>>>
>>>>
>>>> On 12/11/24 11:
Ritesh Harjani (IBM) writes:
> Sourabh Jain writes:
>
>> Hello Ritesh,
>>
>>
>> On 12/11/24 11:51, Ritesh Harjani (IBM) wrote:
>>> Sourabh Jain writes:
>>>
>>>> The param area is a memory region where the kernel places additional
&
Sourabh Jain writes:
> Hello Ritesh,
>
>
> On 12/11/24 11:51, Ritesh Harjani (IBM) wrote:
>> Sourabh Jain writes:
>>
>>> The param area is a memory region where the kernel places additional
>>> command-line arguments for fadump kernel. Currently, the p
erpc/kernel/prom.c
> @@ -908,6 +908,9 @@ void __init early_init_devtree(void *params)
>
> mmu_early_init_devtree();
>
> + /* Setup param area for passing additional parameters to fadump capture
> kernel. */
> + fadump_setup_param_area();
> +
Maybe we should add
Sourabh Jain writes:
> The param area is a memory region where the kernel places additional
> command-line arguments for fadump kernel. Currently, the param memory
> area is reserved in fadump kernel if it is above boot_mem_top. However,
> it should be reserved if it is below boot_mem_top because
> pending
> + * external interrupts. Hence, explicitly mask off MER
> bit
> + * here as otherwise it may generate spurious
> interrupts in L2 KVM
> + * causing an endless loop, which results in L2 guest
> g
.c: flags & HT_MSI_FLAGS_ENABLE ? "enabled"
: "disabled", addr);
> Signed-off-by: Thorsten Blum
> ---
> arch/powerpc/kernel/secure_boot.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
For this patch it looks good to me. P
Gautam Menghani writes:
> Mask off the LPCR_MER bit before running a vCPU to ensure that it is not
> set if there are no pending interrupts. Running a vCPU with LPCR_MER bit
> set and no pending interrupts results in L2 vCPU getting an infinite flood
> of spurious interrupts. The 'if check' in kv
Michael Ellerman writes:
> Hi Ritesh,
>
> "Ritesh Harjani (IBM)" writes:
>> copy_from_kernel_nofault() can be called when doing read of /proc/kcore.
>> /proc/kcore can have some unmapped kfence objects which when read via
>> copy_from_kernel_nofault() c
ned-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/fault.c | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 81c77ddce2e3..316f5162ffc4 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
unmapped address from kfence pool.
Let's add a testcase to cover this case.
Co-developed-by: Ritesh Harjani (IBM)
Signed-off-by: Nirjhar Roy
Signed-off-by: Ritesh Harjani (IBM)
---
Will be nice if we can get some feedback on this.
v2 -> v3:
=
1. Separated out this kfence kunit t
_thread+0x14/0x1c
Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment")
Suggested-by: David Hildenbrand
Reported-by: Sachin P Bappalige
Acked-by: Hari Bathini
Reviewed-by: Madhavan Srinivasan
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/fad
decide
linear map pagesize if hash supports either debug_pagealloc or
kfence.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 25 +
1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch
d for kernel
linear map in book3s64.
This patch refactors out the common functions required to detect whether
kfence early init is enabled.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/kfence.h| 8 ++--
arch/powerpc/mm/book3s64/pgtable.c | 13 +
if kfence early init is not
enabled.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch/powerpc/mm/book3s64/hash_utils.c
index 558d6f5202b9..2f5dd6310a8f 10
= 32MB)
4. The hash slot information for kfence memory gets added in linear map
in hash_linear_map_add_slot() (which also adds for debug_pagealloc).
Reported-by: Pavithra Prakash
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/kfence.h | 5 -
arch/powerpc/mm/book3s64/has
Make size of the linear map to be allocated in RMA region to be of
ppc64_rma_size / 4. If debug_pagealloc requires more memory than that
then do not allocate any memory and disable debug_pagealloc.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 15
arate out kfence from debug_pagealloc
infrastructure.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 47 ++-
1 file changed, 25 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch/powerpc/mm/boo
This refactors hash__kernel_map_pages() function to call
hash_debug_pagealloc_map_pages(). This will come in useful when we add
kfence support.
No functionality changes in this patch.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 9 -
1 file changed
linear_map_hash_slots and linear_map_hash_count
variables under the same config too.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 29 ---
1 file changed, 17 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
This adds hash_debug_pagealloc_add_slot() function instead of open
coding that in htab_bolt_mapping(). This is required since we will be
separating kfence functionality to not depend upon debug_pagealloc.
No functionality change in this patch.
Signed-off-by: Ritesh Harjani (IBM)
---
arch
This just brings all linear map related handling to one place instead of
having those functions scattered in the hash_utils file.
This makes review easier.
No functionality changes in this patch.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 164
eeds some refactoring.
We will bring in kfence on Hash support in later patches.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/kfence.h | 5 +
arch/powerpc/mm/book3s64/hash_utils.c | 16 +++-
2 files changed, 16 insertions(+), 5 deletions(-)
diff --git a/arc
kunit testcase patch-1.
2. Fixed a false negative with copy_from_kernel_nofault() in patch-2.
3. Addressed review comments from Christophe Leroy.
4. Added patch-13.
Ritesh Harjani (IBM) (12):
powerpc: mm/fault: Fix kfence page fault reporting
book3s64/hash: Remove kfence support temporarily
boo
s false or dump_active, so
that in later patches we can call fadump_cma_init() separately from
setup_arch().
Acked-by: Hari Bathini
Reviewed-by: Madhavan Srinivasan
Signed-off-by: Ritesh Harjani (IBM)
---
v3 -> v4
=
1. Dropped RFC tag.
2. Updated commit subject from fadump: <>
later in setup_arch() where pageblock_order is non-zero.
Suggested-by: Sourabh Jain
Acked-by: Hari Bathini
Reviewed-by: Madhavan Srinivasan
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/kernel/fadump.c | 34 ++
1 file changed, 22 insertions(+), 12
Madhavan Srinivasan writes:
>
> Patchset looks fine to me.
>
> Reviewed-by: Madhavan Srinivasan for the series.
>
Thanks Maddy for the reviews!
I will spin PATCH v4 with these minor suggested changes (No code changes)
-ritesh
Christophe Leroy writes:
> Le 15/10/2024 à 03:33, Ritesh Harjani (IBM) a écrit :
>> copy_from_kernel_nofault() can be called when doing read of /proc/kcore.
>> /proc/kcore can have some unmapped kfence objects which when read via
>> copy_from_kernel_nofault() can cau
decide
linear map pagesize if hash supports either debug_pagealloc or
kfence.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 25 +
1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch
if kfence early init is not
enabled.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch/powerpc/mm/book3s64/hash_utils.c
index 53e6f3a524eb..b6da25719e37 10
d for kernel
linear map in book3s64.
This patch refactors out the common functions required to detect whether
kfence early init is enabled.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/kfence.h| 8 ++--
arch/powerpc/mm/book3s64/pgtable.c | 13 +
= 32MB)
4. The hash slot information for kfence memory gets added in linear map
in hash_linear_map_add_slot() (which also adds for debug_pagealloc).
Reported-by: Pavithra Prakash
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/include/asm/kfence.h | 5 -
arch/powerpc/mm/book3s64/has
Make size of the linear map to be allocated in RMA region to be of
ppc64_rma_size / 4. If debug_pagealloc requires more memory than that
then do not allocate any memory and disable debug_pagealloc.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 15
arate out kfence from debug_pagealloc
infrastructure.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 47 ++-
1 file changed, 25 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
b/arch/powerpc/mm/boo
This refactors hash__kernel_map_pages() function to call
hash_debug_pagealloc_map_pages(). This will come in useful when we add
kfence support.
No functionality changes in this patch.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 9 -
1 file changed
linear_map_hash_slots and linear_map_hash_count
variables under the same config too.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 29 ---
1 file changed, 17 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c
This adds hash_debug_pagealloc_add_slot() function instead of open
coding that in htab_bolt_mapping(). This is required since we will be
separating kfence functionality to not depend upon debug_pagealloc.
No functionality change in this patch.
Signed-off-by: Ritesh Harjani (IBM)
---
arch
This just brings all linear map related handling to one place instead of
having those functions scattered in the hash_utils file.
This makes review easier.
No functionality changes in this patch.
Signed-off-by: Ritesh Harjani (IBM)
---
arch/powerpc/mm/book3s64/hash_utils.c | 164