On 12/08/2020 at 21:37, Segher Boessenkool wrote:
On Wed, Aug 12, 2020 at 12:25:17PM +, Christophe Leroy wrote:
Enable pre-update addressing mode in __get_user_asm() and __put_user_asm()
Signed-off-by: Christophe Leroy
---
v3: new, split out from patch 1.
It still looks fine to
On 08/12/2020 12:03 PM, Aneesh Kumar K.V wrote:
> The tests expect the _PAGE_PTE bit to be set by different page table
> accessors. This is not true for the kernel. Within the kernel, _PAGE_PTE is
> usually set by set_pte_at(). To make the tests below work correctly, add
> test-specific
On 08/12/2020 12:03 PM, Aneesh Kumar K.V wrote:
> pmd_clear() should not be used to clear pmd level pte entries.
Could you please elaborate on this. The proposed change set does
not match the description here.
>
> Signed-off-by: Aneesh Kumar K.V
> ---
> mm/debug_vm_pgtable.c | 7 ---
>
On 08/12/2020 12:03 PM, Aneesh Kumar K.V wrote:
> Architectures like ppc64 use a deposited page table while updating huge pte
> entries.
>
> Signed-off-by: Aneesh Kumar K.V
> ---
> mm/debug_vm_pgtable.c | 8 ++--
> 1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git
The newly introduced 'perf_stats' attribute uses the default access
mode of 0444 letting non-root users access performance stats of an
nvdimm and potentially force the kernel into issuing large number of
expensive HCALLs. Since the information exposed by this attribute
cannot be cached hence its
This test creates some threads, which write to TM SPRs, and then makes
sure the registers maintain the correct values across context switches
and contention with other threads.
But currently the test finishes almost instantaneously, which reduces
the chance of it hitting an interesting condition.
This test tries to set affinity to CPUs that don't exist, especially
if the set of online CPUs doesn't start at 0.
But there's no real reason for it to use setaffinity in the first
place; it's just trying to create lots of threads to cause contention.
So drop the setaffinity entirely.
Several of the TM tests fail spuriously if CPU 0 is offline, because
they blindly try to affinitise to CPU 0.
Fix them by picking any online CPU and using that instead.
Signed-off-by: Michael Ellerman
---
tools/testing/selftests/powerpc/tm/tm-poison.c | 11 +++
Michael Ellerman writes:
> Tyrel Datwyler writes:
>> On 8/11/20 6:20 PM, Nathan Lynch wrote:
>>>
>>> +static inline struct drmem_lmb *drmem_lmb_next(struct drmem_lmb *lmb)
>>> +{
>>> + const unsigned int resched_interval = 20;
>>> +
>>> + BUG_ON(lmb < drmem_info->lmbs);
>>> + BUG_ON(lmb
On Mon, 2020-08-03 at 22:41 +1000, Michael Ellerman wrote:
> Michael Neuling writes:
> > On POWER10 bit 12 in the PVR indicates if the core is SMT4 or
> > SMT8. Bit 12 is set for SMT4.
> >
> > Without this patch, /proc/cpuinfo on a SMT4 DD1 POWER10 looks like
> > this:
> > cpu :
Tyrel Datwyler writes:
> On 8/11/20 6:20 PM, Nathan Lynch wrote:
>> The drmem lmb list can have hundreds of thousands of entries, and
>> unfortunately lookups take the form of linear searches. As long as
>> this is the case, traversals have the potential to monopolize the CPU
>> and provoke
This patch adds a DTSI file that can be used by various device-tree
files for APM82181-based devices.
Some of the nodes (like UART, PCIE, SATA) are used by uboot and
need to stick with the naming conventions of the old times.
I've added comments whenever this was the case. But
This patch adds the device-tree definitions for Meraki MR24
access point devices.
Ready-to-go images and installation instructions can be found at OpenWrt.
Signed-off-by: Chris Blake
Signed-off-by: Christian Lamparter
---
arch/powerpc/boot/dts/meraki-mr24.dts | 237 +
Hello,
I've been holding on to these devices' dts files for a while now.
But ever since the recent purge of the PPC405, I'm feeling
the urge to move forward.
The devices in question have been running with OpenWrt since
around 2016/2017. Back then it was linux v4.4 and required
many out-of-tree patches
This patch tries to integrate the existing bluestone.dts into the
apm82181.dtsi framework.
The original bluestone.dts produces a peculiar warning message:
> bluestone.dts:120.10-125.4: Warning (i2c_bus_reg):
> /plb/opb/i2c@ef600700/sttm@4C: I2C bus unit address format error, expected
> "4c"
This patch adds the device-tree definitions for
Western Digital MyBook Live NAS devices.
Technically, this devicetree file is shared by two very
similar devices.
There's the My Book Live and the My Book Live Duo. WD's uboot
on the device will enable/disable the nodes for the device.
Ready to
On Wed, Aug 12, 2020 at 12:25:17PM +, Christophe Leroy wrote:
> Enable pre-update addressing mode in __get_user_asm() and __put_user_asm()
>
> Signed-off-by: Christophe Leroy
> ---
> v3: new, split out from patch 1.
It still looks fine to me, you can keep my Reviewed-by: :-)
Segher
On Wed, Aug 12, 2020 at 02:32:51PM +0200, Christophe Leroy wrote:
> Anyway, it seems that GCC doesn't make much use of the "m<>" and the
> pre-update form.
GCC does not use update form outside of inner loops much. Did you
expect anything else?
> Most of the benefit of flexible addressing seems
On 8/11/20 6:20 PM, Nathan Lynch wrote:
> The drmem lmb list can have hundreds of thousands of entries, and
> unfortunately lookups take the form of linear searches. As long as
> this is the case, traversals have the potential to monopolize the CPU
> and provoke lockup reports, workqueue stalls,
On Wed, 12 Aug 2020 13:25:24 +0100
Jonathan Cameron wrote:
> On Mon, 10 Aug 2020 12:27:32 +1000
> Nicholas Piggin wrote:
>
> > On platforms that define HAVE_ARCH_HUGE_VMAP and support PMD vmaps,
> > vmalloc will attempt to allocate PMD-sized pages first, before falling
> > back to small pages.
On 12/08/2020 at 15:46, Nathan Lynch wrote:
Hi Christophe,
Christophe Leroy writes:
+static inline struct drmem_lmb *drmem_lmb_next(struct drmem_lmb *lmb)
+{
+ const unsigned int resched_interval = 20;
+
+ BUG_ON(lmb < drmem_info->lmbs);
+ BUG_ON(lmb >= drmem_info->lmbs
On 8/12/20 7:04 PM, Anshuman Khandual wrote:
On 08/12/2020 06:46 PM, Aneesh Kumar K.V wrote:
On 8/12/20 6:33 PM, Anshuman Khandual wrote:
On 08/12/2020 12:03 PM, Aneesh Kumar K.V wrote:
This seems to be missing quite a lot of details w.r.t allocating
the correct pgtable_t page
Hi Christophe,
Christophe Leroy writes:
>> +static inline struct drmem_lmb *drmem_lmb_next(struct drmem_lmb *lmb)
>> +{
>> +const unsigned int resched_interval = 20;
>> +
>> +BUG_ON(lmb < drmem_info->lmbs);
>> +BUG_ON(lmb >= drmem_info->lmbs + drmem_info->n_lmbs);
>
> BUG_ON() shall
On Tue, 11 Aug 2020 11:15:44 -0500
Michael Roth wrote:
> For a power9 KVM guest with XIVE enabled, running a test loop
> where we hotplug 384 vcpus and then unplug them, the following traces
> can be seen (generally within a few loops) either from the unplugged
> vcpu:
>
> [ 1767.353447] cpu
On 08/12/2020 06:46 PM, Aneesh Kumar K.V wrote:
> On 8/12/20 6:33 PM, Anshuman Khandual wrote:
>>
>>
>> On 08/12/2020 12:03 PM, Aneesh Kumar K.V wrote:
> >>> This seems to be missing quite a lot of details w.r.t allocating
>>> the correct pgtable_t page (huge_pte_alloc()), holding the right
>>>
On 8/12/20 6:33 PM, Anshuman Khandual wrote:
On 08/12/2020 12:03 PM, Aneesh Kumar K.V wrote:
This seems to be missing quite a lot of details w.r.t allocating
the correct pgtable_t page (huge_pte_alloc()), holding the right
lock (huge_pte_lock()) etc. The vma used is also not a hugetlb VMA.
Hello Daniel,
The VirtIO-GPU doesn't work anymore with the latest Git kernel in a
virtual e5500 PPC64 QEMU machine [1,2] after the commit "drm/virtio:
Call the right shmem helpers". [3]
The kernel 5.8 works with the VirtIO-GPU in this virtual machine.
I bisected today [4].
Result:
On 08/12/2020 12:03 PM, Aneesh Kumar K.V wrote:
> Saved write support was added to track the write bit of a pte after marking
> the pte protnone. This was done so that AUTONUMA can convert a write pte to
> protnone and still track the old write bit. When converting it back we set
> the pte
On 08/12/2020 12:03 PM, Aneesh Kumar K.V wrote:
> This seems to be missing quite a lot of details w.r.t allocating
> the correct pgtable_t page (huge_pte_alloc()), holding the right
> lock (huge_pte_lock()) etc. The vma used is also not a hugetlb VMA.
>
> ppc64 does have runtime checks within
On 08/12/2020 12:03 PM, Aneesh Kumar K.V wrote:
> set_pud_at() should not be used to set a pte entry at locations that
> already hold a valid pte entry. Architectures like ppc64 don't do TLB
> invalidate in set_pud_at() and hence expect it to be used to set locations
> that are not a valid
On 08/12/2020 12:03 PM, Aneesh Kumar K.V wrote:
> set_pmd_at() should not be used to set a pte entry at locations that
> already hold a valid pte entry. Architectures like ppc64 don't do TLB
> invalidate in set_pmd_at() and hence expect it to be used to set locations
> that are not a valid
On 08/07/2020 at 06:49, Christophe Leroy wrote:
On 07/07/2020 at 21:02, Christophe Leroy wrote:
On 07/07/2020 at 14:44, Christophe Leroy wrote:
On 30/06/2020 at 03:19, Michael Ellerman wrote:
Michael Ellerman writes:
Christophe Leroy writes:
Hi Michael,
I see this patch
On Mon, 10 Aug 2020 12:27:32 +1000
Nicholas Piggin wrote:
> On platforms that define HAVE_ARCH_HUGE_VMAP and support PMD vmaps,
> vmalloc will attempt to allocate PMD-sized pages first, before falling
> back to small pages.
>
> Allocations which use something other than PAGE_KERNEL protections
Currently, __put_user()/__get_user() and friends only use
D-form addressing, with 0 offset. Ex:
lwz reg1, 0(reg2)
Give the compiler the opportunity to use other addressing modes
whenever possible, to get more optimised code.
Below is a small example:
struct test {
Enable pre-update addressing mode in __get_user_asm() and __put_user_asm()
Signed-off-by: Christophe Leroy
---
v3: new, split out from patch 1.
---
arch/powerpc/include/asm/uaccess.h | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/include/asm/uaccess.h
put_sigset_t() calls copy_to_user() for copying two words.
Because INLINE_COPY_TO_USER is not defined on powerpc,
copy_to_user() doesn't get optimised and falls back to
copy_tofrom_user() with the relevant glue. This is terribly
inefficient for copying two words.
By switching to
As this was the last user of put_sigset_t(), remove it as well.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/signal_32.c | 24 ++--
1 file changed, 10 insertions(+), 14 deletions(-)
diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c
Implement an 'unsafe' version of put_compat_sigset().
For big-endian, use unsafe_put_user() directly
to avoid an intermediate copy through the stack.
For little-endian, use a straight unsafe_copy_to_user().
Signed-off-by: Christophe Leroy
Cc: Dmitry V. Levin
Cc: Al Viro
---
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/signal_32.c | 47 +
1 file changed, 30 insertions(+), 17 deletions(-)
diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c
index 4ea83578ba9d..d03ba3d8eb68 100644
---
If something is bad in the frame, there is no point in
knowing which part of the frame exactly is wrong as it
got allocated as a single block.
Always print the root address of the frame in case of
a failed user access, just like handle_signal32().
Signed-off-by: Christophe Leroy
---
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/signal_32.c | 19 +++
1 file changed, 11 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c
index 0d076c2a9f6c..4ea83578ba9d 100644
---
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/signal_32.c | 168
1 file changed, 84 insertions(+), 84 deletions(-)
diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c
index 2c3d5d4400ec..0d076c2a9f6c 100644
---
For the non-VSX version, that's trivial. Just use unsafe_copy_to_user()
instead of __copy_to_user().
For the VSX version, remove the intermediate step through a buffer and
use unsafe_put_user() directly. This generates far smaller code, which
is acceptable to inline; see below:
Standard VSX
Reorder actions in save_user_regs() and save_tm_user_regs() to
regroup copies together in order to switch to user_access_begin()
logic in a later patch.
In save_tm_user_regs(), first perform copies to frame, then
perform copies to tm_frame.
Signed-off-by: Christophe Leroy
---
There is no point in copying floating point regs when there
is no FPU and MATH_EMULATION is not selected.
Create a new CONFIG_PPC_FPU_REGS bool that is selected by
CONFIG_MATH_EMULATION and CONFIG_PPC_FPU, and use it to
opt out everything related to fp_state in thread_struct.
The asm const used
The logging of a bad frame appears half a dozen times
and is pretty similar.
Create a signal_fault() function to perform that logging.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/signal.c| 11 +++
arch/powerpc/kernel/signal.h| 3 +++
arch/powerpc/kernel/signal_32.c
Instead of calling get_tm_stackpointer() from the caller, call it
directly from get_sigframe(). This avoids a double call and
allows get_tm_stackpointer() to become static and be inlined
into get_sigframe() by GCC.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/signal.c| 9
The e300c2 core which is embedded in mpc832x CPU doesn't have
an FPU.
Make it possible to not select CONFIG_PPC_FPU when building a
kernel dedicated to that target.
Signed-off-by: Christophe Leroy
---
arch/powerpc/platforms/Kconfig.cputype | 11 +--
1 file changed, 9 insertions(+), 2
This access_ok() will soon be performed by user_access_begin().
So move it out of get_sigframe().
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/signal.c| 4
arch/powerpc/kernel/signal_32.c | 4 ++--
arch/powerpc/kernel/signal_64.c | 2 +-
3 files changed, 3 insertions(+), 7
get_clean_sp() is only used once, in kernel/signal.c.
And GCC is smart enough to see that x & 0x is a nop
calculation on PPC32, no need of a special PPC32 trivial version.
Include the logic from the PPC64 version of get_clean_sp() directly
in get_sigframe()
Signed-off-by: Christophe
To really be inlined, functions need to be defined in the
same C file as the caller, or in an included header.
Move the functions from signal.c that are meant to be inlined into signal.h.
Signed-off-by: Christophe Leroy
Fixes: 3dd4eb83a9c0 ("powerpc: move common register copy functions from
signal_32.c to
ptrace_get_reg() and ptrace_set_reg() are only used internally by
ptrace.
Move them into arch/powerpc/kernel/ptrace/ptrace-decl.h
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/ptrace.h| 6 --
arch/powerpc/kernel/ptrace/ptrace-decl.h | 3 +++
2 files changed, 3
This series replaces copies to user memory with unsafe_put_user() and friends
plus the user_write_access_begin() dance in signal32.
The advantages are:
- No KUAP unlock/lock at every copy
- More readable code.
- Better generated code.
Copying Al Viro who did it on x86 and may have suggestions,
and Dmitry V.
On the same model as ptrace_get_reg() and ptrace_put_reg(),
create ptrace_get_fpr() and ptrace_put_fpr() to get/set
the floating points registers.
We move the boundary checks into them.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/ptrace/Makefile | 1 +
Today we have:
#ifdef CONFIG_PPC32
index = addr >> 2;
if ((addr & 3) || child->thread.regs == NULL)
#else
index = addr >> 3;
if ((addr & 7))
#endif
sizeof(long) has value 4 for PPC32 and value 8 for PPC64.
On 08/12/2020 12:03 PM, Aneesh Kumar K.V wrote:
> The kernel expects entries to be marked huge before we use set_pud_at().
>
> Signed-off-by: Aneesh Kumar K.V
> ---
> mm/debug_vm_pgtable.c | 11 +++
> 1 file changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/mm/debug_vm_pgtable.c
On 08/12/2020 12:03 PM, Aneesh Kumar K.V wrote:
> Saved write support was added to track the write bit of a pte after marking
> the pte protnone. This was done so that AUTONUMA can convert a write pte to
> protnone and still track the old write bit. When converting it back we set
> the pte
On Wed, Aug 12, 2020 at 06:18:28PM +1000, Nicholas Piggin wrote:
> Excerpts from pet...@infradead.org's message of August 7, 2020 9:11 pm:
> >
> > What's wrong with something like this?
> >
> > AFAICT there's no reason to actually try and add IRQ tracing here, it's
> > just a hand full of
On 08/12/2020 12:03 PM, Aneesh Kumar K.V wrote:
> ppc64 supports huge vmap only with radix translation. Hence use arch helper
> to determine the huge vmap support.
>
> Signed-off-by: Aneesh Kumar K.V
> ---
> mm/debug_vm_pgtable.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>
On 8/12/20 2:42 PM, Anshuman Khandual wrote:
On 08/12/2020 12:03 PM, Aneesh Kumar K.V wrote:
set_pte_at() should not be used to set a pte entry at locations that
already hold a valid pte entry. Architectures like ppc64 don't do TLB
invalidate in set_pte_at() and hence expect it to be used to
On 08/12/2020 12:03 PM, Aneesh Kumar K.V wrote:
> set_pte_at() should not be used to set a pte entry at locations that
> already hold a valid pte entry. Architectures like ppc64 don't do TLB
> invalidate in set_pte_at() and hence expect it to be used to set locations
> that are not a valid
On 8/12/20 1:16 PM, Anshuman Khandual wrote:
On 08/12/2020 12:03 PM, Aneesh Kumar K.V wrote:
With the hash page table, the kernel should not use pmd_clear for clearing
huge pte entries. Add a DEBUG_VM WARN to catch the wrong usage.
Signed-off-by: Aneesh Kumar K.V
This particular change is
On 8/12/20 1:42 PM, Anshuman Khandual wrote:
On 08/12/2020 12:03 PM, Aneesh Kumar K.V wrote:
ppc64 uses bit 62 to indicate a pte entry (_PAGE_PTE). Avoid setting that bit
in the random value.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 5 -
1 file changed, 4 insertions(+),
Excerpts from pet...@infradead.org's message of August 7, 2020 9:11 pm:
>
> What's wrong with something like this?
>
> AFAICT there's no reason to actually try and add IRQ tracing here, it's
> just a hand full of instructions at the most.
Because we may want to use that in other places as well,
On 08/12/2020 12:03 PM, Aneesh Kumar K.V wrote:
> ppc64 uses bit 62 to indicate a pte entry (_PAGE_PTE). Avoid setting that
> bit in the random value.
>
> Signed-off-by: Aneesh Kumar K.V
> ---
> mm/debug_vm_pgtable.c | 5 -
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git
Excerpts from Zefan Li's message of August 12, 2020 11:07 am:
> On 2020/8/12 0:32, Jonathan Cameron wrote:
>> On Mon, 10 Aug 2020 12:27:24 +1000
>> Nicholas Piggin wrote:
>>
>>> Not tested on x86 or arm64, would appreciate a quick test there so I can
>>> ask Andrew to put it in -mm. Other option
Currently, using llvm-objdump, this script just silently succeeds without
actually doing the intended checking. So this updates it to work properly.
Firstly, llvm-objdump does not add target symbol names to the end
of branches in its asm output, so we have to drop the branch to
This is considerably faster than parsing the objdump asm output. It will
also make the enabling of llvm-objdump a little easier.
Cc: Nicholas Piggin
Cc: Bill Wendling
Signed-off-by: Stephen Rothwell
---
arch/powerpc/Makefile.postlink | 2 +-
These 2 patches enable this script to work properly when llvm-objdump
is being used.
They depend on my previous series that makes this script suck less.
Cc: Nicholas Piggin
Cc: Bill Wendling
On 08/12/2020 12:03 PM, Aneesh Kumar K.V wrote:
> With the hash page table, the kernel should not use pmd_clear for clearing
> huge pte entries. Add a DEBUG_VM WARN to catch the wrong usage.
>
> Signed-off-by: Aneesh Kumar K.V
This particular change is very much powerpc specific. Hence please
Hi Mpe,
Thanks for reviewing this patch. My responses below:
Michael Ellerman writes:
> Vaibhav Jain writes:
>> The newly introduced 'perf_stats' attribute uses the default access
>> mode of 0444 letting non-root users access performance stats of an
>> nvdimm and potentially force the kernel
On 8/12/20 12:10 PM, Christophe Leroy wrote:
On 12/08/2020 at 08:33, Aneesh Kumar K.V wrote:
ppc64 uses bit 62 to indicate a pte entry (_PAGE_PTE). Avoid setting
that bit in the random value.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 5 -
1 file changed, 4
On 12/08/2020 at 08:33, Aneesh Kumar K.V wrote:
ppc64 uses bit 62 to indicate a pte entry (_PAGE_PTE). Avoid setting that bit
in the random value.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git
The tests expect the _PAGE_PTE bit to be set by the different page table
accessors. This is not true for the kernel. Within the kernel, _PAGE_PTE is
usually set by set_pte_at(). To make the tests below work correctly, add
test-specific pfn_pte/pmd helpers that set the _PAGE_PTE bit.
pte_t pte =
Saved write support was added to track the write bit of a pte after marking the
pte protnone. This was done so that AUTONUMA can convert a write pte to protnone
and still track the old write bit. When converting it back we set the pte write
bit correctly thereby avoiding a write fault again.
This seems to be missing quite a lot of details w.r.t allocating
the correct pgtable_t page (huge_pte_alloc()), holding the right
lock (huge_pte_lock()) etc. The vma used is also not a hugetlb VMA.
ppc64 does have runtime checks within CONFIG_DEBUG_VM for most of these.
Hence disable the test on
pmd_clear() should not be used to clear pmd level pte entries.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 061c19bba7f0..529892b9be2f 100644
---
Make sure we call pte accessors with the correct lock held.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 34 --
1 file changed, 20 insertions(+), 14 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index
This will help in adding proper locks in a later patch.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 53 +++
1 file changed, 29 insertions(+), 24 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index
Architectures like ppc64 use a deposited page table while updating huge pte
entries.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index
set_pud_at() should not be used to set a pte entry at locations that
already hold a valid pte entry. Architectures like ppc64 don't do TLB
invalidate in set_pud_at() and hence expect it to be used to set locations
that are not a valid PTE.
Signed-off-by: Aneesh Kumar K.V
---
set_pmd_at() should not be used to set a pte entry at locations that
already hold a valid pte entry. Architectures like ppc64 don't do TLB
invalidate in set_pmd_at() and hence expect it to be used to set locations
that are not a valid PTE.
Signed-off-by: Aneesh Kumar K.V
---
The kernel expects entries to be marked huge before we use set_pud_at().
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index b6aca2526e01..cd609a212dd4 100644
---
The kernel expects entries to be marked huge before we use set_pmd_at().
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index de8a62d0a931..b6aca2526e01 100644
---
Saved write support was added to track the write bit of a pte after marking the
pte protnone. This was done so that AUTONUMA can convert a write pte to protnone
and still track the old write bit. When converting it back we set the pte write
bit correctly thereby avoiding a write fault again. Hence
ppc64 supports huge vmap only with radix translation. Hence use the arch helper
to determine huge vmap support.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index
set_pte_at() should not be used to set a pte entry at locations that
already hold a valid pte entry. Architectures like ppc64 don't do TLB
invalidate in set_pte_at() and hence expect it to be used to set locations
that are not a valid PTE.
Signed-off-by: Aneesh Kumar K.V
---
ppc64 uses bit 62 to indicate a pte entry (_PAGE_PTE). Avoid setting that bit
in the random value.
Signed-off-by: Aneesh Kumar K.V
---
mm/debug_vm_pgtable.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index
With the hash page table, the kernel should not use pmd_clear for clearing
huge pte entries. Add a DEBUG_VM WARN to catch the wrong usage.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 14 ++
1 file changed, 14 insertions(+)
diff --git
Hi Andrew, Michal, David
* Andrew Morton [2020-08-06 21:32:11]:
> On Fri, 3 Jul 2020 18:28:23 +0530 Srikar Dronamraju
> wrote:
>
> > > The memory hotplug changes that somehow because you can hotremove numa
> > > nodes and therefore make the nodemask sparse but that is not a common
> > >