s will see the new aligned value of the memory limit.
Signed-off-by: Aneesh Kumar K.V (IBM)
---
arch/powerpc/kernel/prom.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 7451bedad1f4..b8f764453eaa 100644
.
Cc: Mahesh Salgaonkar
Signed-off-by: Aneesh Kumar K.V (IBM)
---
arch/powerpc/kernel/fadump.c | 16
1 file changed, 16 deletions(-)
diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index d14eda1e8589..4e768d93c6d4 100644
--- a/arch/powerpc/kernel
. This alignment value will work for both
hash and radix translations.
Signed-off-by: Aneesh Kumar K.V (IBM)
---
arch/powerpc/kernel/prom.c | 7 +--
arch/powerpc/kernel/prom_init.c | 4 ++--
2 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kernel/prom.c b/arch
On 3/2/24 4:53 AM, Michael Ellerman wrote:
> Hi Joel,
>
> Joel Savitz writes:
>> On 64-bit powerpc, usage of a non-16MB-aligned value for the mem= kernel
>> cmdline parameter results in a system hang at boot.
>
> Can you give us any more details on that? It might be a bug we can fix.
>
>> For e
On 2/20/24 8:16 AM, Andrew Morton wrote:
> On Mon, 29 Jan 2024 13:43:39 +0530 "Aneesh Kumar K.V"
> wrote:
>
>>> return (pud_val(pud) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
>>> }
>>> #endif
>>>
>>> #ifdef CONFIG_HAVE_
On 1/29/24 12:23 PM, Anshuman Khandual wrote:
>
>
> On 1/29/24 11:56, Aneesh Kumar K.V wrote:
>> On 1/29/24 11:52 AM, Anshuman Khandual wrote:
>>>
>>>
>>> On 1/29/24 11:30, Aneesh Kumar K.V (IBM) wrote:
>>>> Architectures like powerpc add d
On 1/29/24 11:52 AM, Anshuman Khandual wrote:
>
>
> On 1/29/24 11:30, Aneesh Kumar K.V (IBM) wrote:
>> Architectures like powerpc add debug checks to ensure we find only devmap
>> PUD pte entries. These debug checks are only done with CONFIG_DEBUG_VM.
>> This patch
tests+0x1b4/0x334
[c4a2fa40] [c206db34] debug_vm_pgtable+0xcbc/0x1c48
[c4a2fc10] [c000fd28] do_one_initcall+0x60/0x388
Fixes: 27af67f35631 ("powerpc/book3s64/mm: enable transparent pud hugepage")
Signed-off-by: Aneesh Kumar K.V (IBM)
---
mm/debug_v
On 1/25/24 3:16 PM, Kunwu Chan wrote:
> This part was commented in about 17 years before.
> If there are no plans to enable this part code in the future,
> we can remove this dead code.
>
> Signed-off-by: Kunwu Chan
> ---
> arch/powerpc/include/asm/book3s/64/mmu-hash.h | 22 ---
>
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
Hi Linus,
Please pull powerpc fixes for 6.8:
The following changes since commit d2441d3e8c0c076d0a2e705fa235c76869a85140:
MAINTAINERS: powerpc: Add Aneesh & Naveen (2023-12-13 22:35:57 +1100)
are available in the git repository at:
https:/
On 12/11/23 9:26 AM, Vaibhav Jain wrote:
> Hi Aneesh,
>
> Thanks for looking into this patch. My responses inline:
>
> "Aneesh Kumar K.V (IBM)" writes:
>
>
>> May be we should use
>> firmware_has_feature(FW_FEATURE_H_COPY_TOFROM_GUEST))?
>>
Vishal Chourasia writes:
> This patch modifies the ARCH_HIBERNATION_POSSIBLE option to ensure that it
> correctly depends on these PowerPC configurations being enabled. As a result,
> it prevents the HOTPLUG_CPU from being selected when the required dependencies
> are not satisfied.
>
> This chan
Srikar Dronamraju writes:
> If there are shared processor LPARs, underlying Hypervisor can have more
> virtual cores to handle than actual physical cores.
>
> Starting with Power 9, a big core (aka SMT8 core) has 2 nearly
> independent thread groups. On a shared processors LPARs, it helps to
> pa
Srikar Dronamraju writes:
> PowerVM systems configured in shared processors mode have some unique
> challenges. Some device-tree properties will be missing on a shared
> processor. Hence some sched domains may not make sense for shared processor
> systems.
>
> Most shared processor systems are ov
Srikar Dronamraju writes:
> If there are shared processor LPARs, underlying Hypervisor can have more
> virtual cores to handle than actual physical cores.
>
> Starting with Power 9, a big core (aka SMT8 core) has 2 nearly
> independent thread groups. On a shared processors LPARs, it helps to
> pa
Sourabh Jain writes:
> diff --git a/arch/powerpc/include/asm/fadump-internal.h
> b/arch/powerpc/include/asm/fadump-internal.h
> index 27f9e11eda28..7be3d8894520 100644
> --- a/arch/powerpc/include/asm/fadump-internal.h
> +++ b/arch/powerpc/include/asm/fadump-internal.h
> @@ -42,7 +42,25 @@
No functional change in this patch. A helper is added to find if a
vcpu is dispatched by the hypervisor. Use that instead of open-coding
it. Also clarify some of the comments.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/paravirt.h | 33 ++---
1 file changed, 25
age fault
path")
explains the details.
Also revert commit 1abce0580b89 ("powerpc/64s: Fix __pte_needs_flush() false
positive warning")
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 9 +++--
arch/powerpc/include/asm/book3s/64/tlbflush.h | 9 ++---
Christophe Leroy writes:
> Le 07/11/2023 à 14:34, Aneesh Kumar K.V a écrit :
>> Christophe Leroy writes:
>>
>>> Le 31/10/2023 à 11:15, Aneesh Kumar K.V a écrit :
>>>> Christophe Leroy writes:
>>
>>
>> We are adding the pte flags
Hello,
Some architectures can now support EXEC_ONLY mappings and I am wondering
what get_user_pages() on those addresses should return. Earlier
PROT_EXEC implied PROT_READ and pte_access_permitted() returned true for
that. But arm64 does have this explicit comment that says
/*
* p??_access_pe
Christophe Leroy writes:
> Le 31/10/2023 à 11:15, Aneesh Kumar K.V a écrit :
>> Christophe Leroy writes:
>>
>>> pte_user() is now only used in pte_access_permitted() to check
>>> access on vmas. User flag is cleared to make a page unreadable.
>>>
>&
leared (no-access). This also removes pte_user() from
book3s/64.
pte_access_permitted() now checks for _PAGE_EXEC because we now support
EXECONLY mappings.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 23 +---
arch/powerpc/mm/book3s64/ha
Christophe Leroy writes:
> Introduce PAGE_EXECONLY_X macro which provides exec-only rights.
> The _X may be seen as redundant with the EXECONLY but it helps
> keep consistancy, all macros having the EXEC right have _X.
>
> And put it next to PAGE_NONE as PAGE_EXECONLY_X is
> somehow PAGE_NONE + E
Christophe Leroy writes:
> pte_user() is now only used in pte_access_permitted() to check
> access on vmas. User flag is cleared to make a page unreadable.
>
> So rename it pte_read() and remove pte_user() which isn't used
> anymore.
>
> For the time being it checks _PAGE_USER but in the near fut
gt;
> if (ret == H_SUCCESS)
> return retbuf[0];
>
There is no functional change in this patch. It clarifies that buf is
expected to be in big-endian format while retbuf contains the
native-endian format.
Not sure why this was not picked up.
Reviewed-by: Aneesh Kumar K.V
Hari Bathini writes:
> patch_instruction() entails setting up pte, patching the instruction,
> clearing the pte and flushing the tlb. If multiple instructions need
> to be patched, every instruction would have to go through the above
> drill unnecessarily. Instead, introduce patch_instructions()
1 ("powerpc: implement the new page table range API")
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/pgtable.c | 32 ++--
1 file changed, 22 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index 3ba9fe4116
Aneesh Kumar K V writes:
> On 10/18/23 11:25 AM, Christophe Leroy wrote:
>>
>>
>> Le 18/10/2023 à 06:55, Aneesh Kumar K.V a écrit :
>>> With commit 9fee28baa601 ("powerpc: implement the new page table range
>>> API") we added set_ptes to power
e expensive tlb invalidate which
is not needed when you are setting up the pte for the first time. See
commit 56eecdb912b5 ("mm: Use ptep/pmdp_set_numa() for updating
_PAGE_NUMA bit") for more details
Fixes: 9fee28baa601 ("powerpc: implement the new page table range API")
Signed-
Erhard Furtner writes:
> On Thu, 12 Oct 2023 20:54:13 +0100
> "Matthew Wilcox (Oracle)" wrote:
>
>> Dave Woodhouse reported that we now nest calls to
>> arch_enter_lazy_mmu_mode(). That was inadvertent, but in principle we
>> should allow it. On further investigation, Juergen already fixed it
Erhard Furtner writes:
> On Fri, 06 Oct 2023 11:04:15 +0530
> "Aneesh Kumar K.V" wrote:
>
>> Can you check this change?
>>
>> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
>> index 3ba9fe411604..6d144fedd557 100644
>
..
Hi,
Erhard Furtner writes:
> Greetings!
>
> Kernel 6.5.5 boots fine on my PowerMac G5 11,2 but kernel 6.6-rc3 fails to
> boot with following dmesg shown on the OpenFirmware console (transcribed
> screenshot):
> I bisected the issue and got 9fee28baa601f4dbf869b1373183b312d2d5ef3d as 1st
>
Aditya Gupta writes:
> On Wed, Sep 20, 2023 at 05:45:36PM +0530, Aneesh Kumar K.V wrote:
>> Aditya Gupta writes:
>>
>> > Since below commit, address mapping for vmemmap has changed for Radix
>> > MMU, where address mapping is stored in kernel page table its
Aditya Gupta writes:
> Since below commit, address mapping for vmemmap has changed for Radix
> MMU, where address mapping is stored in kernel page table itself,
> instead of earlier used 'vmemmap_list'.
>
> commit 368a0590d954 ("powerpc/book3s64/vmemmap: switch radix to use
> a different
can still map them using a 256MB memory block size.
Fixes: 4d15721177d5 ("powerpc/mm: Cleanup memory block size probing")
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/init_64.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/mm/init_64
ck size probing")
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/init_64.c | 19 +++
1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index fcda46c2b8df..e3d7379ef480 100644
--- a/arch/powerpc/mm/init_64.c
+
block
size, we require 4 pages to map vmemmap pages. In order to align things
correctly we end up adding a reserve of 28 pages, i.e. for every 4096
pages, 28 pages get reserved.
Reviewed-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/Kconfig | 1
: Michal Hocko
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
drivers/base/memory.c | 27 +
include/linux/memory.h | 8 ++-
mm/memory_hotplug.c| 54 ++
3 files changed, 52 insertions(+), 37 deletions(-)
diff
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
.../admin-guide/mm/memory-hotplug.rst | 12 ++
mm/memory_hotplug.c | 120 +++---
2 files changed, 113 insertions(+), 19 deletions(-)
diff --git a/Documentation/admin-guide/mm/memory
Some architectures would want different restrictions. Hence add an
architecture-specific override.
The PMD_SIZE check is moved there.
Acked-by: Michal Hocko
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
mm/memory_hotplug.c | 24
1 file changed, 20
If not supported, fall back to not using memmap on memory. This avoids
the need for callers to do the fallback.
Acked-by: Michal Hocko
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
drivers/acpi/acpi_memhotplug.c | 3 +--
include/linux/memory_hotplug.h | 3 ++-
mm
Instead of adding a menu entry with all supported architectures, add an
mm/Kconfig variable and select it from the supported architectures.
No functional change in this patch.
Acked-by: Michal Hocko
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
arch/arm64/Kconfig | 4
we remove the memory we can find the altmap details which
is needed on some architectures.
* rebase to latest linus tree
Aneesh Kumar K.V (6):
mm/memory_hotplug: Simplify ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE kconfig
mm/memory_hotplug: Allow memmap on memory hotplug request to fallback
mm
also be more than the section size.
Reviewed-by: Reza Arbab
Signed-off-by: Aneesh Kumar K.V
---
.../admin-guide/kernel-parameters.txt | 3 +++
arch/powerpc/kernel/setup_64.c| 23 +++
arch/powerpc/mm/init_64.c | 17 ++
3
block size value.
Add a workaround to force a 256MB memory block size if device-driver
managed memory such as GPU memory is present. This helps to add GPU
memory that is not aligned to 1G.
Co-developed-by: Reza Arbab
Signed-off-by: Reza Arbab
Signed-off-by: Aneesh Kumar K.V
---
Changes from v3
: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
drivers/base/memory.c | 27 +
include/linux/memory.h | 8 ++
mm/memory_hotplug.c| 55 ++
3 files changed, 53 insertions(+), 37 deletions(-)
diff --git a/drivers/base
Allow updating the memmap_on_memory mode after kernel boot. Memory
hotplug done after the mode update will use the new memmap_on_memory
value.
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
mm/memory_hotplug.c | 33 +
1 file changed, 17
block
size, we require 4 pages to map vmemmap pages. In order to align things
correctly we end up adding a reserve of 28 pages, i.e. for every 4096
pages, 28 pages get reserved.
Reviewed-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/Kconfig | 1
Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
.../admin-guide/mm/memory-hotplug.rst | 12 ++
mm/memory_hotplug.c | 120 +++---
2 files changed, 113 insertions(+), 19 deletions(-)
diff --git a/Documentation/admin-guide/mm/memory-hotplug.rst
b
Some architectures would want different restrictions. Hence add an
architecture-specific override.
The PMD_SIZE check is moved there.
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
mm/memory_hotplug.c | 24
1 file changed, 20 insertions(+), 4
If not supported, fall back to not using memmap on memory. This avoids
the need for callers to do the fallback.
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
drivers/acpi/acpi_memhotplug.c | 3 +--
include/linux/memory_hotplug.h | 3 ++-
mm/memory_hotplug.c| 13
Instead of adding a menu entry with all supported architectures, add an
mm/Kconfig variable and select it from the supported architectures.
No functional change in this patch.
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
arch/arm64/Kconfig | 4 +---
arch/x86/Kconfig | 4
linus tree
Aneesh Kumar K.V (7):
mm/memory_hotplug: Simplify ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE kconfig
mm/memory_hotplug: Allow memmap on memory hotplug request to fallback
mm/memory_hotplug: Allow architecture to override memmap on memory
support check
mm/memory_hotplug: Support
also be more than the section size.
Reviewed-by: Reza Arbab
Signed-off-by: Aneesh Kumar K.V
---
.../admin-guide/kernel-parameters.txt | 3 +++
arch/powerpc/kernel/setup_64.c| 23 +++
arch/powerpc/mm/init_64.c | 17 ++
3
block size value.
Add a workaround to force a 256MB memory block size if device-driver
managed memory such as GPU memory is present. This helps to add GPU
memory that is not aligned to 1G.
Signed-off-by: Aneesh Kumar K.V
---
Changes from v2:
* Add workaround for forcing 256MB memory blocksize with
From 2d37f0570983bfa710e73a6485e178658e8f4b38 Mon Sep 17 00:00:00 2001
From: "Aneesh Kumar K.V"
Date: Fri, 28 Jul 2023 14:47:46 +0530
Subject: [PATCH] powerpc/mm: Fix kernel build error
arch/powerpc/mm/init_64.c:201:15: error: no previous prototype for function
'__vmemmap_
From a3f49a79ffa78a7de736af77e13fdbb272c9f221 Mon Sep 17 00:00:00 2001
From: "Aneesh Kumar K.V"
Date: Fri, 28 Jul 2023 15:36:53 +0530
Subject: [PATCH] powerpc/mm: Fix kernel build error
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
mm/memory_hotplug.c | 35 +++
1 file changed, 19 insertions(+), 16 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index aa8724bd1d53..7c877756b363 100644
--- a/mm
block
size, we require 4 pages to map vmemmap pages. In order to align things
correctly we end up adding a reserve of 28 pages, i.e. for every 4096
pages, 28 pages get reserved.
Reviewed-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/Kconfig | 1
functional change in this patch
Signed-off-by: Aneesh Kumar K.V
---
drivers/base/memory.c | 25 +++---
include/linux/memory.h | 8 ++
mm/memory_hotplug.c| 58 +++---
3 files changed, 55 insertions(+), 36 deletions(-)
diff --git a/drivers/base
If not supported, fall back to not using memmap on memory. This avoids
the need for callers to do the fallback.
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
drivers/acpi/acpi_memhotplug.c | 3 +--
include/linux/memory_hotplug.h | 3 ++-
mm/memory_hotplug.c| 13
Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
.../admin-guide/mm/memory-hotplug.rst | 12 ++
mm/memory_hotplug.c | 120 +++---
2 files changed, 113 insertions(+), 19 deletions(-)
diff --git a/Documentation/admin-guide/mm/memory-hotplug.rst
b
Some architectures would want different restrictions. Hence add an
architecture-specific override.
The PMD_SIZE check is moved there.
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
mm/memory_hotplug.c | 24
1 file changed, 20 insertions(+), 4
Instead of adding a menu entry with all supported architectures, add an
mm/Kconfig variable and select it from the supported architectures.
No functional change in this patch.
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
arch/arm64/Kconfig | 4 +---
arch/x86/Kconfig | 4
patchset.
Changes from v1:
* update the memblock to store vmemmap_altmap details. This is required
so that when we remove the memory we can find the altmap details which
is needed on some architectures.
* rebase to latest linus tree
Aneesh Kumar K.V (7):
mm/memory_hotplug: Simplify
Andrew Morton writes:
> On Wed, 26 Jul 2023 10:59:32 +0530 Aneesh Kumar K V
> wrote:
>
>> On 7/26/23 12:59 AM, Andrew Morton wrote:
>> > On Tue, 25 Jul 2023 00:37:46 +0530 "Aneesh Kumar K.V"
>> > wrote:
>> >
>> >> This
From 9125b1815758ab3b83966aeead6f486c0708ea73 Mon Sep 17 00:00:00 2001
From: "Aneesh Kumar K.V"
Date: Thu, 27 Jul 2023 10:02:37 +0530
Subject: [PATCH] powerpc/mm: Fix section mismatch warning
remove_pte_table is only called from remove_pmd_table which is marked
__meminit. These
From 9252360e483246e13e6bb28cd6773af2b99eeb55 Mon Sep 17 00:00:00 2001
From: "Aneesh Kumar K.V"
Date: Wed, 26 Jul 2023 10:54:14 +0530
Subject: [PATCH] -next build fixup
Fix build error
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/radix.h | 2 ++
1 fi
From 81719b31a4e86d2f7352da653175b7c508a94303 Mon Sep 17 00:00:00 2001
From: "Aneesh Kumar K.V"
Date: Wed, 26 Jul 2023 13:45:28 +0530
Subject: [PATCH] mm/debug_vm_pgtable: Use the new
has_transparent_pud_hugepage()
Use the new helper to check pud hugepage support. Architecture li
David Hildenbrand writes:
> On 25.07.23 12:02, Aneesh Kumar K.V wrote:
>> With memmap on memory, some architecture needs more details w.r.t altmap
>> such as base_pfn, end_pfn, etc to unmap vmemmap memory. Instead of
>> computing them again when we remove a memory blo
David Hildenbrand writes:
> On 25.07.23 12:02, Aneesh Kumar K.V wrote:
>> Currently, memmap_on_memory feature is only supported with memory block
>> sizes that result in vmemmap pages covering full page blocks. This is
>> because memory onlining/offlining code requires ap
Some architectures would want different restrictions. Hence add an
architecture-specific override.
The PMD_SIZE check is moved there.
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
mm/memory_hotplug.c | 21 ++---
1 file changed, 18 insertions(+), 3 deletions
Signed-off-by: Aneesh Kumar K.V
---
mm/memory_hotplug.c | 27 +++
1 file changed, 15 insertions(+), 12 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 96e794f39313..6cb6eac1aee5 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -95,7
block
size, we require 4 pages to map vmemmap pages. In order to align things
correctly we end up adding a reserve of 28 pages, i.e. for every 4096
pages, 28 pages get reserved.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/pgtable.h
functional change in this patch
Signed-off-by: Aneesh Kumar K.V
---
drivers/base/memory.c | 32 +++-
include/linux/memory.h | 8 ++--
mm/memory_hotplug.c| 41 ++---
3 files changed, 47 insertions(+), 34 deletions(-)
diff --git
Kumar K.V
---
.../admin-guide/mm/memory-hotplug.rst | 12 ++
mm/memory_hotplug.c | 121 --
2 files changed, 119 insertions(+), 14 deletions(-)
diff --git a/Documentation/admin-guide/mm/memory-hotplug.rst
b/Documentation/admin-guide/mm/memory
If not supported, fall back to not using memmap on memory. This avoids
the need for callers to do the fallback.
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
drivers/acpi/acpi_memhotplug.c | 3 +--
include/linux/memory_hotplug.h | 3 ++-
mm/memory_hotplug.c| 13
Instead of adding a menu entry with all supported architectures, add an
mm/Kconfig variable and select it from the supported architectures.
No functional change in this patch.
Acked-by: David Hildenbrand
Signed-off-by: Aneesh Kumar K.V
---
arch/arm64/Kconfig | 4 +---
arch/x86/Kconfig | 4
can find the altmap details which
is needed on some architectures.
* rebase to latest linus tree
Aneesh Kumar K.V (7):
mm/hotplug: Simplify ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE kconfig
mm/hotplug: Allow memmap on memory hotplug request to fallback
mm/hotplug: Allow architecture to override memmap
We will use this in a later patch to do a TLB flush when clearing pud
entries on powerpc. This is similar to commit 93a98695f2f9 ("mm: change
pmdp_huge_get_and_clear_full take vm_area_struct as arg")
Reviewed-by: Christophe Leroy
Signed-off-by: Aneesh Kumar K.V
---
include/linux/pgt
pudp_set_wrprotect and move_huge_pud helpers are only used when
CONFIG_TRANSPARENT_HUGEPAGE is enabled. Similar to the pmdp_set_wrprotect
and move_huge_pmd helpers, use the architecture override only if
CONFIG_TRANSPARENT_HUGEPAGE is set.
Reviewed-by: Christophe Leroy
Signed-off-by: Aneesh Kumar K.V
This is not used by radix anymore.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/book3s64/radix_pgtable.c | 11 ---
arch/powerpc/mm/init_64.c| 21 ++---
2 files changed, 14 insertions(+), 18 deletions(-)
diff --git a/arch/powerpc/mm/book3s64
vmemmap mapping
[ 293.550032] radix-mmu: PMD_SIZE vmemmap mapping
[ 293.550076] radix-mmu: PMD_SIZE vmemmap mapping
[ 293.550117] radix-mmu: PMD_SIZE vmemmap mapping
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/book3s64/radix_pgtable.c | 3 +++
1 file changed, 3 insertions(+)
diff
With 2M PMD-level mapping, we require 32 struct pages and a single vmemmap
page can contain 1024 struct pages (PAGE_SIZE/sizeof(struct page)). Hence
with 64K page size, we don't use vmemmap deduplication for PMD-level
mapping.
Signed-off-by: Aneesh Kumar K.V
---
Documentati
page size, we need to do the above check even at the
PAGE_SIZE granularity.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/radix.h | 2 +
arch/powerpc/include/asm/pgtable.h | 6 +
arch/powerpc/mm/book3s64/radix_pgtable.c | 325 +++--
arch/
expected pte bit combination is _PAGE_PTE | _PAGE_DEVMAP.
Some of the helpers are never expected to get called on hash translation
and hence is marked to call BUG() in such a case.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/hash.h | 9 +
arch/powerpc/include/asm
A follow-up patch will add a pud variant for this same event.
Using event class makes that addition simpler.
No functional change in this patch.
Reviewed-by: Christophe Leroy
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/book3s64/hash_pgtable.c | 2 +-
arch/powerpc/mm/book3s64
config is not
enabled for them. With this change, arm64 should be able to select DAX
optimization
[1] commit 060a2c92d1b6 ("arm64: mm: hugetlb: Disable
HUGETLB_PAGE_OPTIMIZE_VMEMMAP")
Signed-off-by: Aneesh Kumar K.V
---
arch/loongarch/Kconfig | 2 +-
arch/riscv/Kconfig | 2 +
This helps architectures to override pmd_same and pud_same independently.
Signed-off-by: Aneesh Kumar K.V
---
include/linux/pgtable.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 124427ece520..0af8bc4ce258 100644
--- a/include
Architectures like powerpc would like to use different page table
allocators and mapping mechanisms to implement vmemmap optimization.
Similar to vmemmap_populate, allow architectures to implement
vmemmap_populate_compound_pages.
Signed-off-by: Aneesh Kumar K.V
---
mm/sparse-vmemmap.c | 3 +++
1
MMU translation). Hence allow architecture
override.
Reviewed-by: Christophe Leroy
Signed-off-by: Aneesh Kumar K.V
---
include/linux/mm.h | 27 +++
mm/mm_init.c | 2 +-
2 files changed, 24 insertions(+), 5 deletions(-)
diff --git a/include/linux/mm.h b/include/linux
Architectures like powerpc would like to enable transparent huge page pud
support only with radix translation. To support that add
has_transparent_pud_hugepage() helper that architectures can override.
Reviewed-by: Christophe Leroy
Signed-off-by: Aneesh Kumar K.V
---
drivers/nvdimm/pfn_devs.c
g
Changes from v2:
* Rebase to latest linus tree
* Address review feedback
Changes from V1:
* Fix make htmldocs warning
* Fix vmemmap allocation bugs with different alignment values.
* Correctly check for section validity to before we free vmemmap area
Aneesh Kumar K.V (13):
mm/hugepage
"Aneesh Kumar K.V" writes:
> This is in preparation to update radix to implement vmemmap optimization
> for devdax. Below are the rules w.r.t radix vmemmap mapping
>
> 1. First try to map things using PMD (2M)
> 2. With altmap if altmap cross-boundary check r
ltmap is unusable")
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/init_64.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index fe1b83020e0d..0ec5b45b1e86 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc
David Hildenbrand writes:
> On 24.07.23 18:02, Aneesh Kumar K V wrote:
>> On 7/24/23 9:11 PM, David Hildenbrand wrote:
>>> On 24.07.23 17:16, Aneesh Kumar K V wrote:
>>>
>
> /*
> * In "forced" memmap_on_memory mode, we always align the vmemmap size
> up to cover
> * ful
Signed-off-by: Aneesh Kumar K.V
---
This is dependent on patches posted at
https://lore.kernel.org/linux-mm/20230718024409.95742-1-aneesh.ku...@linux.ibm.com/
mm/memory_hotplug.c | 27 +++
1 file changed, 15 insertions(+), 12 deletions(-)
diff --git a/mm
Hugh Dickins writes:
> Instead of pte_lockptr(), use the recently added pte_offset_map_nolock()
> in assert_pte_locked(). BUG if pte_offset_map_nolock() fails: this is
> stricter than the previous implementation, which skipped when pmd_none()
> (with a comment on khugepaged collapse transitions)
functional change in this patch
Signed-off-by: Aneesh Kumar K.V
---
drivers/base/memory.c | 32 +++-
include/linux/memory.h | 8 ++--
mm/memory_hotplug.c| 38 ++
3 files changed, 43 insertions(+), 35 deletions(-)
diff --git a
block
size, we require 4 pages to map vmemmap pages. In order to align things
correctly we end up adding a reserve of 28 pages, i.e. for every 4096
pages, 28 pages get reserved.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/pgtable.h