This will be used by the following patches
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/feature-fixups.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h
index fbd406cd6916..5cdba929a8ae
out of a pfn is to
make a huge-page."
message-id: CAHk-=whG+Z2mBFTT026PZAdjn=gsslk9bk0wnyj5peyuvgf...@mail.gmail.com
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 17 -
arch/powerpc/mm/book3s64/pgtable.c | 8 +++-
2 file
Christophe Leroy writes:
> On 12/10/2020 at 17:39, Christophe Leroy wrote:
>> On the same principle as commit 773edeadf672 ("powerpc/mm: Add mask
>> of possible MMU features"), add mask for MMU features that are
>> always there in order to optimise out dead branches.
>>
>> Signed-off-by: Chris
Hi Michal,
On 10/15/20 8:16 PM, Michal Suchánek wrote:
Hello,
On Thu, Feb 06, 2020 at 12:25:18AM -0300, Leonardo Bras wrote:
On Thu, 2020-02-06 at 00:08 -0300, Leonardo Bras wrote:
gup_pgd_range(addr, end, gup_flags, pages, &nr);
- local_irq_enable();
+
callers
are returned false. Do the final kobject delete checking
the return value of sysfs_remove_file_self().
Cc: Mahesh Salgaonkar
Cc: Oliver O'Halloran
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/platforms/powernv/opal-elog.c | 11 ---
1 file changed, 8 insertions(+), 3 dele
On 10/13/20 3:45 PM, Michael Ellerman wrote:
Christophe Leroy writes:
On 13/10/2020 at 09:23, Aneesh Kumar K.V wrote:
Christophe Leroy writes:
CPU_FTR_NODSISRALIGN has not been used since
commit 31bfdb036f12 ("powerpc: Use instruction emulation
infrastructure to handle alignment f
On 10/14/20 2:28 AM, Andrew Morton wrote:
On Wed, 2 Sep 2020 17:12:09 +0530 "Aneesh Kumar K.V"
wrote:
This patch series includes fixes for debug_vm_pgtable test code so that
they follow page table update rules correctly. The first two patches introduce
changes w.r.t ppc64. The p
Christophe Leroy writes:
> CPU_FTR_NODSISRALIGN has not been used since
> commit 31bfdb036f12 ("powerpc: Use instruction emulation
> infrastructure to handle alignment faults")
>
> Remove it.
>
> Signed-off-by: Christophe Leroy
> ---
> arch/powerpc/include/asm/cputable.h | 22 ++
Guenter Roeck writes:
> On Wed, Sep 02, 2020 at 05:12:22PM +0530, Aneesh Kumar K.V wrote:
>> pte_clear_tests operate on an existing pte entry. Make sure that
>> is not a none pte entry.
>>
>> Signed-off-by: Aneesh Kumar K.V
>
> This patch causes all riscv64 i
On 10/8/20 10:32 PM, Linus Torvalds wrote:
On Thu, Oct 8, 2020 at 2:27 AM Aneesh Kumar K.V
wrote:
In copy_present_page, after we mark the pte non-writable, we should
check for previous dirty bit updates and make sure we don't lose the dirty
bit on reset.
No, we'll just remove tha
horpe
Cc: John Hubbard
Cc: linux...@kvack.org
Cc: linux-ker...@vger.kernel.org
Cc: Andrew Morton
Cc: Jan Kara
Cc: Michal Hocko
Cc: Kirill Shutemov
Cc: Hugh Dickins
Cc: Linus Torvalds
Signed-off-by: Aneesh Kumar K.V
---
mm/memory.c | 8
1 file changed, 8 insertions(+)
diff --git
Cc: John Hubbard
Cc: linux...@kvack.org
Cc: linux-ker...@vger.kernel.org
Cc: Andrew Morton
Cc: Jan Kara
Cc: Michal Hocko
Cc: Kirill Shutemov
Cc: Hugh Dickins
Cc: Linus Torvalds
Signed-off-by: Aneesh Kumar K.V
---
mm/memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
Make it consistent with other usages.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/book3s64/radix_pgtable.c| 7 ---
arch/powerpc/platforms/pseries/hotplug-memory.c | 13 +
2 files changed, 13 insertions(+), 7 deletions(-)
diff --git a/arch/powerpc/mm/book3s64
Similar to commit 89c140bbaeee ("pseries: Fix 64 bit logical memory block
panic"),
make sure the different variables tracking lmb_size are updated to be 64 bit.
Fixes: af9d00e93a4f ("powerpc/mm/radix: Create separate mappings for
hot-plugged memory")
Signed-off-by: Aneesh
Similar to commit 89c140bbaeee ("pseries: Fix 64 bit logical memory block
panic"),
make sure the different variables tracking lmb_size are updated to be 64 bit.
This was found by code audit.
Cc: sta...@vger.kernel.org
Signed-off-by: Aneesh Kumar K.V
---
.../platforms/pseries/hotplu
Similar to commit 89c140bbaeee ("pseries: Fix 64 bit logical memory block
panic"),
make sure the different variables tracking lmb_size are updated to be 64 bit.
This was found by code audit.
Cc: sta...@vger.kernel.org
Acked-by: Nathan Lynch
Signed-off-by: Aneesh Kumar K.V
---
arch/power
Changes from v2:
* Don't use root addr and size cells during runtime. Walk up the
device tree and use the first addr and size cells value (of_n_addr_cells()/
of_n_size_cells())
Aneesh Kumar K.V (4):
powerpc/drmem: Make lmb_size 64 bit
powerpc/memhotplug: Make lmb size 64bit
po
With POWER10, a single tlbiel instruction invalidates all the congruence
classes of the TLB and hence we need to issue only one tlbiel with SET=0.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/kvm/book3s_hv.c | 7 ++-
arch/powerpc/kvm/book3s_hv_builtin.c | 11 ++-
arch
With POWER10, tlbiel invalidates all the congruence classes of the TLB
and hence we need to issue only one tlbiel with SET=0. Update
POWER10_TLB_SETS to 1 and use that in the rest of the code.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 1 +
arch/powerpc/kvm
MR into the stack before treclaim and
> before trechkpt, restoring it later, just before returning from tm_reclaim
> and __tm_recheckpoint.
>
> It also fixes two unrelated comments about CR and MSR.
>
Tested-by: Aneesh Kumar K.V
> Signed-off-by: Gustavo Romero
> ---
>
On 9/18/20 9:35 AM, Gustavo Romero wrote:
Although AMR is stashed in the checkpoint area, currently we don't save
it to the per thread checkpoint struct after a treclaim and so we don't
restore it either from that struct when we trechkpt. As a consequence when
the transaction is later rolled bac
] ? kernel_init_freeable+0x72/0xa3
[9.423539] ? rest_init+0x134/0x134
[9.424055] ? kernel_init+0x5/0x12c
[9.424574] ? ret_from_fork+0x19/0x30
Reported-by: kernel test robot
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 11 ---
1 file changed, 8 insertions(+), 3 deletions(-)
diff
Nathan Chancellor writes:
> On Wed, Sep 02, 2020 at 05:12:22PM +0530, Aneesh Kumar K.V wrote:
>> pte_clear_tests operate on an existing pte entry. Make sure that
>> is not a none pte entry.
>>
>> Signed-off-by: Aneesh Kumar K.V
>> ---
>> mm/debug_vm_pgta
Matthew Wilcox writes:
> PowerPC has special handling of hugetlbfs pages. Well, that's what
> the config option says, but actually it handles THP as well. If
> the config option is enabled.
>
> #ifdef CONFIG_HUGETLB_PAGE
> if (PageCompound(page)) {
> flush_dcache_icache_
Gerald Schaefer writes:
> On Fri, 4 Sep 2020 18:01:15 +0200
> Gerald Schaefer wrote:
>
> [...]
>>
>> BTW2, a quick test with this change (so far) made the issues on s390
>> go away:
>>
>> @@ -1069,7 +1074,7 @@ static int __init debug_vm_pgtable(void)
>> spin_unlock(ptl);
>>
>> #ifnde
Christophe Leroy writes:
> search_exception_tables() is a heavy operation, we have to avoid it.
> When KUAP is selected, we'll know the fault has been blocked by KUAP.
> Otherwise, it behaves just as if the address was already in the TLBs
> and no fault was generated.
>
> Signed-off-by: Christop
es that are not aligned. This helps to catch
access to these partially mapped pages early.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/book3s64/hash_utils.c| 12 +---
arch/powerpc/mm/book3s64/radix_pgtable.c | 1 +
2 files changed, 10 insertions(+), 3 deletions(-)
diff --git a/
Anshuman Khandual writes:
> On 09/01/2020 12:00 PM, Aneesh Kumar K.V wrote:
>> On 9/1/20 9:33 AM, Anshuman Khandual wrote:
>>>
>>>
>>> On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
>>>> This seems to be missing quite a lot of details
On 9/2/20 6:10 PM, Christophe Leroy wrote:
On 02/09/2020 at 13:42, Aneesh Kumar K.V wrote:
ppc64 supports huge vmap only with radix translation. Hence use arch
helper
to determine the huge vmap support.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 14 --
1
pte_clear_tests operate on an existing pte entry. Make sure that
is not a none pte entry.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 9afa1354326b
pte_clear_tests operate on an existing pte entry. Make sure that
is not a none pte entry.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 9afa1354326b
ppc64.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 4
1 file changed, 4 insertions(+)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index b53903fdee85..9afa1354326b 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -811,6 +811,7 @@ static void
pmd_clear() should not be used to clear pmd level pte entries.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 26023d990bd0..b53903fdee85 100644
--- a/mm
Architectures like ppc64 use deposited page table while updating the
huge pte entries.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 10 +++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 2bc1952e5f83
Make sure we call pte accessors with the correct lock held.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 35 ++-
1 file changed, 22 insertions(+), 13 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index f59cf6a9b05e
This will help in adding proper locks in a later patch
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 51 ---
1 file changed, 28 insertions(+), 23 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index de333871f407
set_pte_at() should not be used to set a pte entry at locations that
already hold a valid pte entry. Architectures like ppc64 don't do TLB
invalidation in set_pte_at() and hence expect it to be used to set locations
that are not a valid PTE.
Signed-off-by: Aneesh Kumar K.V
--
kernel expects entries to be marked huge before we use
set_pmd_at()/set_pud_at().
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 20 +++-
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 8704901f6bd8
enable the test only when CONFIG_NUMA_BALANCING is enabled and
use protnone protflags.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 4c73e63b4ceb
ppc64 supports huge vmap only with radix translation. Hence use arch helper
to determine the huge vmap support.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 14 --
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm
ppc64 uses bit 62 to indicate a pte entry (_PAGE_PTE). Avoid setting
that bit in the random value.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 13 ++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index
With the hash page table, the kernel should not use pmd_clear for clearing
huge pte entries. Add a DEBUG_VM WARN to catch the wrong usage.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 14 ++
1 file changed, 14 insertions(+)
diff --git a/arch
setting _PAGE_PTE bit. We will remove that after a few releases.
With respect to huge pmd entries, pmd_mkhuge() takes care of adding the
_PAGE_PTE bit.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 15 +--
arch/powerpc/include/asm/nohash/pgtable.h
bisect failure.
Changes from v2:
* Fix build failure with different configs and architecture.
Changes from v1:
* Address review feedback
* drop test specific pfn_pte and pfn_pmd.
* Update ppc64 page table helper to add _PAGE_PTE
Aneesh Kumar K.V (13):
powerpc/mm: Add DEBUG_VM WARN for pmd_clear
On 9/2/20 1:41 PM, Christophe Leroy wrote:
On 02/09/2020 at 05:23, Aneesh Kumar K.V wrote:
Christophe Leroy writes:
The following random segfault is observed from time to time with
map_hugetlb selftest:
root@localhost:~# ./map_hugetlb 1 19
524288 kB hugepages
Mapping 1 Mbytes
+0x4dc/0x5a4
[c00c6d19fdb0] c0012474 kernel_init+0x24/0x160
[c00c6d19fe20] c000cbd0 ret_from_kernel_thread+0x5c/0x6c
33:mon>
Signed-off-by: Aneesh Kumar K.V
---
Documentation/features/debug/debug-vm-pgtable/arch-support.txt | 2 +-
arch/powerpc/Kcon
On 9/2/20 9:19 AM, Anshuman Khandual wrote:
On 09/01/2020 03:28 PM, Aneesh Kumar K.V wrote:
On 9/1/20 1:08 PM, Anshuman Khandual wrote:
On 09/01/2020 12:07 PM, Aneesh Kumar K.V wrote:
On 9/1/20 8:55 AM, Anshuman Khandual wrote:
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote
not be done in hugetlb_free_pgd_range(), it
> must be done in hugetlb_free_pte_range().
>
Reviewed-by: Aneesh Kumar K.V
> Fixes: b250c8c08c79 ("powerpc/8xx: Manage 512k huge pages as standard pages.")
> Cc: sta...@vger.kernel.org
> Signed-off-by: Christophe Leroy
&g
On 9/1/20 1:08 PM, Anshuman Khandual wrote:
On 09/01/2020 12:07 PM, Aneesh Kumar K.V wrote:
On 9/1/20 8:55 AM, Anshuman Khandual wrote:
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
pte_clear_tests operate on an existing pte entry. Make sure that is not a none
pte entry.
Signed-off-by
+0x4dc/0x5a4
[c00c6d19fdb0] c0012474 kernel_init+0x24/0x160
[c00c6d19fe20] c000cbd0 ret_from_kernel_thread+0x5c/0x6c
33:mon>
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/Kconfig | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/powerpc/Kconfig b/arch/powe
[ 17.080644] [ cut here ]
[ 17.081342] kernel BUG at mm/pgtable-generic.c:164!
[ 17.082091] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[ 17.082977] Modules linked in:
[ 17.083481] CPU: 79 PID: 1 Comm: swapper/0 Tainted: G W
5.9.0-rc2-00105-
On 9/1/20 2:40 PM, Christophe Leroy wrote:
On 01/09/2020 at 10:15, Christophe Leroy wrote:
On 01/09/2020 at 10:12, Aneesh Kumar K.V wrote:
On 9/1/20 1:40 PM, Christophe Leroy wrote:
On 01/09/2020 at 10:02, Aneesh Kumar K.V wrote:
The test is broken w.r.t page table update rules
On 9/1/20 1:40 PM, Christophe Leroy wrote:
On 01/09/2020 at 10:02, Aneesh Kumar K.V wrote:
The test is broken w.r.t page table update rules and results in kernel
crash as below. Disable the support until we get the tests updated.
Signed-off-by: Aneesh Kumar K.V
Any Fixes: tag
>
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/Kconfig | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 65bed1fdeaad..787e829b6f25 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -116,7 +116,6 @@ config
On 9/1/20 1:16 PM, Anshuman Khandual wrote:
On 09/01/2020 01:06 PM, Aneesh Kumar K.V wrote:
On 9/1/20 1:02 PM, Anshuman Khandual wrote:
On 09/01/2020 11:51 AM, Aneesh Kumar K.V wrote:
On 9/1/20 8:45 AM, Anshuman Khandual wrote:
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
ppc64 use
On 9/1/20 12:20 PM, Christophe Leroy wrote:
On 01/09/2020 at 08:25, Aneesh Kumar K.V wrote:
On 9/1/20 8:52 AM, Anshuman Khandual wrote:
There is a checkpatch.pl warning here.
WARNING: Possible unwrapped commit description (prefer a maximum 75
chars per line)
#7:
Architectures like
On 9/1/20 1:02 PM, Anshuman Khandual wrote:
On 09/01/2020 11:51 AM, Aneesh Kumar K.V wrote:
On 9/1/20 8:45 AM, Anshuman Khandual wrote:
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
ppc64 use bit 62 to indicate a pte entry (_PAGE_PTE). Avoid setting that bit in
random value.
Signed-off
On 9/1/20 9:11 AM, Anshuman Khandual wrote:
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
This will help in adding proper locks in a later patch
It really makes sense to classify these tests here as static and dynamic.
Static are the ones that test via page table entry values modification
On 9/1/20 8:55 AM, Anshuman Khandual wrote:
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
pte_clear_tests operate on an existing pte entry. Make sure that is not a none
pte entry.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 6 --
1 file changed, 4 insertions(+), 2
On 9/1/20 9:33 AM, Anshuman Khandual wrote:
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
This seems to be missing quite a lot of details w.r.t allocating
the correct pgtable_t page (huge_pte_alloc()), holding the right
lock (huge_pte_lock()) etc. The vma used is also not a hugetlb VMA
On 9/1/20 8:52 AM, Anshuman Khandual wrote:
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
Architectures like ppc64 use deposited page table while updating the huge pte
entries.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 10 +++---
1 file changed, 7 insertions(+), 3
On 9/1/20 8:51 AM, Anshuman Khandual wrote:
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
kernel expects entries to be marked huge before we use
set_pmd_at()/set_pud_at().
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 21 -
1 file changed, 12
On 9/1/20 8:45 AM, Anshuman Khandual wrote:
On 08/27/2020 01:34 PM, Aneesh Kumar K.V wrote:
ppc64 use bit 62 to indicate a pte entry (_PAGE_PTE). Avoid setting that bit in
random value.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 13 ++---
1 file changed, 10
thini
Reported-by: Shirisha Ganta
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/mmu.h | 10 +-
arch/powerpc/mm/book3s64/radix_pgtable.c | 15 ---
arch/powerpc/mm/init_64.c| 11 +--
3 files changed, 14 insertions(+), 22 deletions(-)
pte_clear_tests operate on an existing pte entry. Make sure that is not a none
pte entry.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 21329c7d672f
ppc64.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 4
1 file changed, 4 insertions(+)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index a188b6e4e37e..21329c7d672f 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -813,6 +813,7 @@ static void
pmd_clear() should not be used to clear pmd level pte entries.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 0a6e771ebd13..a188b6e4e37e 100644
--- a/mm
This will help in adding proper locks in a later patch
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 52 ---
1 file changed, 29 insertions(+), 23 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 0ce5c6a24c5b
Make sure we call pte accessors with the correct lock held.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 34 --
1 file changed, 20 insertions(+), 14 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 78c8af3445ac
Architectures like ppc64 use deposited page table while updating the huge pte
entries.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 10 +++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index f9f6358899a8
set_pte_at() should not be used to set a pte entry at locations that
already hold a valid pte entry. Architectures like ppc64 don't do TLB
invalidation in set_pte_at() and hence expect it to be used to set locations
that are not a valid PTE.
Signed-off-by: Aneesh Kumar K.V
--
kernel expects entries to be marked huge before we use
set_pmd_at()/set_pud_at().
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 21 -
1 file changed, 12 insertions(+), 9 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index
enable the test only
when CONFIG_NUMA_BALANCING is enabled and use protnone protflags.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 28f9d0558c20
ppc64 supports huge vmap only with radix translation. Hence use arch helper
to determine the huge vmap support.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 15 +--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm
ppc64 uses bit 62 to indicate a pte entry (_PAGE_PTE). Avoid setting that bit in
the random value.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 13 ++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index
setting
_PAGE_PTE bit. We will remove that after a few releases.
With respect to huge pmd entries, pmd_mkhuge() takes care of adding the
_PAGE_PTE bit.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 15 +--
arch/powerpc/include/asm/nohash/pgtable.h
With the hash page table, the kernel should not use pmd_clear for clearing
huge pte entries. Add a DEBUG_VM WARN to catch the wrong usage.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 14 ++
1 file changed, 14 insertions(+)
diff --git a/arch
test specific pfn_pte and pfn_pmd.
* Update ppc64 page table helper to add _PAGE_PTE
Aneesh Kumar K.V (13):
powerpc/mm: Add DEBUG_VM WARN for pmd_clear
powerpc/mm: Move setting pte specific flags to pfn_pte
mm/debug_vm_pgtable/ppc64: Avoid setting top bits in radom value
mm
cycles
With smap/smep enabled:
Without patch:
1017.26 ns  2950.36 cycles
With patch:
1021.51 ns  2962.44 cycles
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/kup.h | 61 +---
arch/powerpc/kernel/entry_64.S | 2 +-
arch
Make the KUAP/KUEP key a variable and also check whether the platform
limits the max key such that we can't use the key for KUAP/KUEP.
Signed-off-by: Aneesh Kumar K.V
---
.../powerpc/include/asm/book3s/64/hash-pkey.h | 22 +---
arch/powerpc/include/asm/book3s/64/pkeys.h| 1 +
arch/po
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/book3s64/pkeys.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index 16ea0b2f0ea5..b862d5cd78ff 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/book3s64/pkeys.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index 391230f93da2..16ea0b2f0ea5 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b
Radix uses IAMR key 0 and hash translation uses IAMR key 3.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/kup.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index
Radix uses AMR key 0 and hash translation uses AMR key 3.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/kup.h | 9 -
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
If an application has configured address protection such that read/write is
denied using a pkey, even the kernel should receive a FAULT on accessing the
same. This patch uses the user AMR value stored in pt_regs.kuap to achieve that.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm
With hash translation use DSISR_KEYFAULT to identify a wrong access.
With Radix we look at the AMR value and type of fault.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/32/kup.h | 4 +--
arch/powerpc/include/asm/book3s/64/kup.h | 27
arch
Now that the kernel correctly stores/restores userspace AMR/IAMR values, avoid
manipulating AMR and IAMR from the kernel on behalf of userspace.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/kup.h | 18
arch/powerpc/include/asm/processor.h | 4 --
arch/powerpc
We will remove thread.amr/iamr/uamor in a later patch
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/kernel/ptrace/ptrace-view.c | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kernel/ptrace/ptrace-view.c b/arch/powerpc/kernel/ptrace/ptrace-view.c
On fork, we inherit from the parent and on exec, we should switch to
default_amr values.
Also, avoid changing the AMR register value within the kernel. The kernel
now runs with different AMR values.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/pkeys.h | 2 ++
arch
Child thread.kuap value is inherited from the parent in copy_thread_tls.
We still need to make sure that when the child returns from a fork in the
kernel we start with the kernel default AMR value.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/kernel/process.c | 9 +
1 file changed, 9
ernel. This is required so that if we get interrupted
within copy_to/from_user we continue with the right AMR value.
If we have MMU_FTR_KUEP enabled we need to restore IAMR on return to
userspace because the kernel will be running with a different IAMR value.
Signed-off-by: Aneesh Kumar K.V
---
arch/po
In later patches during exec, we would like to access default regs.kuap to
control access to the user mapping. Having thread.regs set early makes the
code changes simpler.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/thread_info.h | 2 --
arch/powerpc/kernel/process.c
This is in preparation for adding support for kuap with hash translation.
In preparation for that rename/move kuap related functions to
non radix names. Also move the feature bit closer to MMU_FTR_KUEP.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/kup.h | 18
.
Signed-off-by: Aneesh Kumar K.V
---
.../powerpc/include/asm/book3s/64/hash-pkey.h | 24 ++-
arch/powerpc/include/asm/book3s/64/hash.h | 2 +-
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 1 +
arch/powerpc/include/asm/mmu_context.h| 2 +-
arch/powerpc/mm/book3s64
The next set of patches adds support for kuep with hash translation.
In preparation for that rename/move kuap related functions to
non radix names.
Also set MMU_FTR_KUEP and add the missing isync().
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/kup.h | 1 +
arch
Use CONFIG_PPC_BOOK3S_64 instead of CONFIG_PPC64. This avoids wrong inclusion
with other 64-bit platforms. To fix the booke 64 build error, add the macro
kuap_check_amr.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/kup.h | 8
1 file changed, 8 insertions(+)
diff --git a/arch
The next set of patches adds support for kuap with hash translation.
In preparation for that rename/move kuap related functions to
non radix names.
Signed-off-by: Aneesh Kumar K.V
---
.../asm/book3s/64/{kup-radix.h => kup.h} | 6 ++---
arch/powerpc/include/asm/kup.h|
h all CPUs supporting radix translation.
The old code was not updating UAMOR if we had smap disabled and smep enabled.
This change handles that case.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/book3s64/radix_pgtable.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --
initialization code to pkeys.c
Signed-off-by: Aneesh Kumar K.V
---
.../powerpc/include/asm/book3s/64/kup-radix.h | 33 +++
arch/powerpc/include/asm/book3s/64/mmu.h | 2 +-
arch/powerpc/include/asm/ptrace.h | 2 +-
arch/powerpc/kernel/asm-offsets.c | 2
ignore access to them and for mfspr return
0, indicating no AMR/IAMR update is allowed.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/kvm/book3s_emulate.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index