> struct vmem_altmap *altmap = restrictions->altmap;
> > >
> > > + err = check_hotplug_memory_addressable(pfn, nr_pages);
> > > + if (err)
> > > + return err;
> > > +
> > > if (altmap) {
> > > /*
> > >
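For context, the quoted hunk only shows the call site. Below is a minimal standalone model of the kind of bounds check such a helper performs; the PAGE_SHIFT and MAX_PHYSMEM_BITS values are illustrative assumptions, and the real kernel helper differs in detail (it warns and returns an errno rather than -1).

/* Standalone model (not the kernel implementation) of the check named
 * in the hunk above: reject a pfn range whose last byte lies beyond
 * the physically addressable limit. PAGE_SHIFT and MAX_PHYSMEM_BITS
 * below are illustrative assumptions. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT        16   /* assume 64K pages */
#define MAX_PHYSMEM_BITS  47   /* assume a 47-bit physical address limit */

static int check_hotplug_memory_addressable(uint64_t pfn, uint64_t nr_pages)
{
	uint64_t max_addr = ((pfn + nr_pages) << PAGE_SHIFT) - 1;

	/* Any bits above MAX_PHYSMEM_BITS mean the range is out of bounds. */
	return (max_addr >> MAX_PHYSMEM_BITS) ? -1 : 0;
}

int main(void)
{
	printf("%d\n", check_hotplug_memory_addressable(1ULL << 30, 1)); /* 0: fits */
	printf("%d\n", check_hotplug_memory_addressable(1ULL << 31, 1)); /* -1: too high */
	return 0;
}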
From: Alastair D'Silva
This series adds bounds checks for hotplugged memory, ensuring that
it is within the physically addressable range (for platforms that
define MAX_(POSSIBLE_)PHYSMEM_BITS).
This allows for early failure, rather than attempting to access
bogus section numbers.
Changelog
From: Alastair D'Silva
On PowerPC, the address ranges allocated to OpenCAPI LPC memory
are allocated from firmware. These address ranges may be higher
than what older kernels permit, as we increased the maximum
permissible address in commit 4ffe713b7587
("powerpc/mm: Increase the max a
From: Alastair D'Silva
On PowerPC, the address ranges allocated to OpenCAPI LPC memory
are allocated from firmware. These address ranges may be higher
than what older kernels permit, as we increased the maximum
permissible address in commit 4ffe713b7587
("powerpc/mm: Increase the max a
From: Alastair D'Silva
This series adds bounds checks for hotplugged memory, ensuring that
it is within the physically addressable range (for platforms that
define MAX_(POSSIBLE_)PHYSMEM_BITS).
This allows for early failure, rather than attempting to access
bogus section numbers.
Changelog
On Mon, 2019-09-30 at 12:21 +1000, Alastair D'Silva wrote:
> From: Alastair D'Silva
>
> On PowerPC, the address ranges allocated to OpenCAPI LPC memory
> are allocated from firmware. These address ranges may be higher
> than what older kernels permit, as we increased
From: Alastair D'Silva
This series adds bounds checks for hotplugged memory, ensuring that
it is within the physically addressable range (for platforms that
define MAX_(POSSIBLE_)PHYSMEM_BITS).
This allows for early failure, rather than attempting to access
bogus section numbers.
Changelog
From: Alastair D'Silva
On PowerPC, the address ranges allocated to OpenCAPI LPC memory
are allocated from firmware. These address ranges may be higher
than what older kernels permit, as we increased the maximum
permissible address in commit 4ffe713b7587
("powerpc/mm: Increase the max a
On Fri, 2019-09-27 at 08:37 +0200, Mark Marshall wrote:
> Comment below...
>
> On Thu, 26 Sep 2019 at 12:18, Alastair D'Silva
> wrote:
> > From: Alastair D'Silva
> >
> > When presented with large amounts of memory being hotplugged
> > (in my test
On Thu, 2019-09-26 at 09:46 +0200, David Hildenbrand wrote:
> On 26.09.19 09:43, Michal Hocko wrote:
> > On Thu 26-09-19 09:12:50, David Hildenbrand wrote:
> > > On 26.09.19 03:34, Alastair D'Silva wrote:
> > > > From: Alastair D'Silva
> > > >
On Thu, 2019-09-26 at 09:53 +0200, Oscar Salvador wrote:
> On Thu, Sep 26, 2019 at 11:34:05AM +1000, Alastair D'Silva wrote:
> > From: Alastair D'Silva
> >
> > On PowerPC, the address ranges allocated to OpenCAPI LPC memory
> > are allocated from firmwar
From: Alastair D'Silva
This operation takes a significant amount of time when hotplugging
large amounts of memory (~50 seconds with 890GB of persistent memory).
This was originally in commit fb5924fddf9e
("powerpc/mm: Flush cache on memory hot(un)plug") to support memtrace,
but t
From: Alastair D'Silva
When presented with large amounts of memory being hotplugged
(in my test case, ~890GB), the call to flush_dcache_range takes
a while (~50 seconds), triggering RCU stalls.
This patch breaks up the call into 1GB chunks, calling
cond_resched() in between to allo
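The shape of that change is roughly the following kernel-style sketch (not the literal patch): flush in fixed-size chunks and yield between chunks so the scheduler and RCU get a chance to run. FLUSH_CHUNK_SIZE and the helper name are assumptions here; flush_dcache_range() and cond_resched() are existing kernel interfaces.

/* Sketch only, not the literal patch: chunked dcache flush that
 * yields between chunks. */
#define FLUSH_CHUNK_SIZE	SZ_1G

static void flush_dcache_range_chunked(unsigned long start, unsigned long stop)
{
	unsigned long i;

	for (i = start; i < stop; i += FLUSH_CHUNK_SIZE) {
		/* Flush at most one chunk, then let other work run. */
		flush_dcache_range(i, min(stop, i + FLUSH_CHUNK_SIZE));
		cond_resched();
	}
}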
From: Alastair D'Silva
This patch adds helpers to retrieve icache sizes, and renames the existing
helpers to make it clear that they are for dcache.
Signed-off-by: Alastair D'Silva
---
arch/powerpc/include/asm/cache.h | 29 +++
arch/powerpc/i
From: Alastair D'Silva
When calling __kernel_sync_dicache with a size >4GB, we were masking
off the upper 32 bits, so we would incorrectly flush a range smaller
than intended.
This patch replaces the 32 bit shifts with 64 bit ones, so that
the full size is accounted for.
Signe
From: Alastair D'Silva
Similar to commit 22e9c88d486a
("powerpc/64: reuse PPC32 static inline flush_dcache_range()")
this patch converts the following ASM symbols to C:
flush_icache_range()
__flush_dcache_icache()
__flush_dcache_icache_phys()
This was done as we di
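For readers unfamiliar with what these routines do, the C form of such a flush follows this general pattern (a hedged sketch, not the actual converted kernel code): write back the dcache lines covering the range, then invalidate the matching icache lines, with sync/isync barriers between the phases. The function name, parameters, and line-size handling here are illustrative.

/* Sketch of the general pattern only, not the kernel's converted code.
 * dline/iline are the d/i-cache line sizes (illustrative parameters). */
static void flush_icache_range_sketch(unsigned long start, unsigned long stop,
				      unsigned long dline, unsigned long iline)
{
	unsigned long addr;

	/* Phase 1: push dirty dcache lines covering the range to memory. */
	for (addr = start & ~(dline - 1); addr < stop; addr += dline)
		asm volatile("dcbst 0, %0" : : "r" (addr) : "memory");
	asm volatile("sync" : : : "memory");

	/* Phase 2: invalidate the icache lines for the same range. */
	for (addr = start & ~(iline - 1); addr < stop; addr += iline)
		asm volatile("icbi 0, %0" : : "r" (addr) : "memory");
	asm volatile("sync; isync" : : : "memory");
}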
From: Alastair D'Silva
When calling flush_icache_range with a size >4GB, we were masking
off the upper 32 bits, so we would incorrectly flush a range smaller
than intended.
This patch replaces the 32 bit shifts with 64 bit ones, so that
the full size is accounted for.
Signed-off-by:
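A standalone illustration of the truncation being fixed: if the cache-line count is derived from a 32-bit view of the length, a >4GB flush quietly shrinks. The kernel bug was in assembler, not C, and the line size and length below are arbitrary demo values.

/* Demo of the truncation effect only. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t len = 5ULL << 30;	/* request a 5 GB flush */
	unsigned int line_shift = 7;	/* assume 128-byte cache lines */

	uint32_t lines_buggy = (uint32_t)len >> line_shift; /* upper bits lost */
	uint64_t lines_fixed = len >> line_shift;           /* full 64-bit size */

	printf("buggy line count: %u\n", lines_buggy);            /* 8388608, ~1 GB  */
	printf("fixed line count: %llu\n",
	       (unsigned long long)lines_fixed);                  /* 41943040, ~5 GB */
	return 0;
}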
From: Alastair D'Silva
This series addresses a few issues discovered in how we flush caches:
1. Flushes were truncated at 4GB, so larger flushes were incorrect.
2. Flushing the dcache in arch_add_memory was unnecessary
This series also converts much of the cache assembler to C, with the
a
> > +/**
> > + * Map the LPC system & special purpose memory for an AFU
> > + *
> > + * Do not call this during device discovery, as there may be
> > multiple
> > + * devices on a link, and the memory is mapped for the whole link,
> > not
> > + * just one
From: Alastair D'Silva
This series adds bounds checks for hotplugged memory, ensuring that
it is within the physically addressable range (for platforms that
define MAX_(POSSIBLE_)PHYSMEM_BITS).
This allows for early failure, rather than attempting to access
bogus section numbers.
Changelog
From: Alastair D'Silva
On PowerPC, the address ranges allocated to OpenCAPI LPC memory
are allocated from firmware. These address ranges may be higher
than what older kernels permit, as we increased the maximum
permissible address in commit 4ffe713b7587
("powerpc/mm: Increase the max a
On Mon, 2019-09-23 at 14:25 +0200, Michal Hocko wrote:
> On Tue 17-09-19 11:07:47, Alastair D'Silva wrote:
> > From: Alastair D'Silva
> >
> > On PowerPC, the address ranges allocated to OpenCAPI LPC memory
> > are allocated from firmware. These address ran
On Wed, 2019-09-18 at 16:02 +0200, Frederic Barrat wrote:
>
> Le 17/09/2019 à 03:42, Alastair D'Silva a écrit :
> > From: Alastair D'Silva
> >
> > Tally up the LPC memory on an OpenCAPI link & allow it to be mapped
> >
> > Signed-off-by: Alasta
On Thu, 2019-09-19 at 13:43 +1000, Michael Ellerman wrote:
> "Alastair D'Silva" writes:
> > From: Alastair D'Silva
> >
> > When calling flush_icache_range with a size >4GB, we were masking
> > off the upper 32 bits, so we would incor
On Tue, 2019-09-17 at 11:43 +1000, Alastair D'Silva wrote:
> From: Alastair D'Silva
>
> Add functions to map/unmap LPC memory
>
> Signed-off-by: Alastair D'Silva
> ---
> drivers/misc/ocxl/config.c| 4 +++
>
On Wed, 2019-09-18 at 16:03 +0200, Frederic Barrat wrote:
>
> Le 17/09/2019 à 03:43, Alastair D'Silva a écrit :
> > From: Alastair D'Silva
> >
> > Add functions to map/unmap LPC memory
> >
> > Signed-off-by: Alastair D'Silva
On Wed, 2019-09-18 at 16:03 +0200, Frederic Barrat wrote:
>
> Le 17/09/2019 à 03:42, Alastair D'Silva a écrit :
> > From: Alastair D'Silva
> >
> > Map & release OpenCAPI LPC memory.
> >
> > Signed-off-by: Alastair D'Silva
> > ---
From: Alastair D'Silva
This operation takes a significant amount of time when hotplugging
large amounts of memory (~50 seconds with 890GB of persistent memory).
This was originally in commit fb5924fddf9e
("powerpc/mm: Flush cache on memory hot(un)plug") to support memtrace,
but t
From: Alastair D'Silva
Similar to commit 22e9c88d486a
("powerpc/64: reuse PPC32 static inline flush_dcache_range()")
this patch converts the following ASM symbols to C:
flush_icache_range()
__flush_dcache_icache()
__flush_dcache_icache_phys()
This was done as we di
From: Alastair D'Silva
When presented with large amounts of memory being hotplugged
(in my test case, ~890GB), the call to flush_dcache_range takes
a while (~50 seconds), triggering RCU stalls.
This patch breaks up the call into 1GB chunks, calling
cond_resched() in between to allo
From: Alastair D'Silva
This patch adds helpers to retrieve icache sizes, and renames the existing
helpers to make it clear that they are for dcache.
Signed-off-by: Alastair D'Silva
---
arch/powerpc/include/asm/cache.h | 29 +++
arch/powerpc/i
From: Alastair D'Silva
This series addresses a few issues discovered in how we flush caches:
1. Flushes were truncated at 4GB, so larger flushes were incorrect.
2. Flushing the dcache in arch_add_memory was unnecessary
This series also converts much of the cache assembler to C, with the
a
From: Alastair D'Silva
When calling flush_icache_range with a size >4GB, we were masking
off the upper 32 bits, so we would incorrectly flush a range smaller
than intended.
__kernel_sync_dicache in the 64 bit VDSO has the same bug.
This patch replaces the 32 bit shifts with 64 bit
On Tue, 2019-09-17 at 11:43 +1000, Alastair D'Silva wrote:
> From: Alastair D'Silva
>
> This patch exposes the OpenCAPI device serial number to
> userspace.
>
> It also includes placeholders for the LPC & special purpose
> memory information (which will be po
From: Alastair D'Silva
This patch exposes the OpenCAPI device serial number to
userspace.
It also includes placeholders for the LPC & special purpose
memory information (which will be populated in a subsequent patch)
to avoid creating excessive versions of the IOCTL.
Signed-off-by:
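The "placeholders ... to avoid creating excessive versions of the IOCTL" point is a common UAPI pattern; the struct below is purely illustrative (not the real ocxl UAPI layout or field names) and only shows the idea.

/* Illustrative only -- not the actual ocxl uapi structure. The point:
 * reserve fields now so later kernels can report LPC / special purpose
 * memory sizes without defining yet another ioctl version. */
#include <stdint.h>

struct example_afu_metadata {
	uint16_t version;            /* bumped when new fields become valid */
	uint8_t  serial[24];         /* device serial number, NUL padded */
	uint64_t lpc_mem_size;       /* placeholder, populated by a later patch */
	uint64_t special_mem_size;   /* placeholder, populated by a later patch */
	uint64_t reserved[4];        /* room for future growth */
};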
From: Alastair D'Silva
Add functions to map/unmap LPC memory
Signed-off-by: Alastair D'Silva
---
drivers/misc/ocxl/config.c| 4 +++
drivers/misc/ocxl/core.c | 50 +++
drivers/misc/ocxl/link.c | 4 +--
drivers/misc/ocxl/ocxl_
From: Alastair D'Silva
Map & release OpenCAPI LPC memory.
Signed-off-by: Alastair D'Silva
---
arch/powerpc/include/asm/pnv-ocxl.h | 2 ++
arch/powerpc/platforms/powernv/ocxl.c | 42 +++
2 files changed, 44 insertions(+)
diff --git a/arch/powerpc/i
From: Alastair D'Silva
Add OPAL calls for LPC memory alloc/release
Signed-off-by: Alastair D'Silva
---
arch/powerpc/include/asm/opal-api.h| 4 +++-
arch/powerpc/include/asm/opal.h| 3 +++
arch/powerpc/platforms/powernv/opal-call.c | 2 ++
3 files changed, 8
From: Alastair D'Silva
Tally up the LPC memory on an OpenCAPI link & allow it to be mapped
Signed-off-by: Alastair D'Silva
---
drivers/misc/ocxl/core.c | 9 +
drivers/misc/ocxl/link.c | 61 +++
drivers/misc/ocxl/ocxl_i
From: Alastair D'Silva
This series provides the prerequisite infrastructure to allow
external drivers to map & access OpenCAPI LPC memory.
Alastair D'Silva (5):
powerpc: Add OPAL calls for LPC memory alloc/release
powerpc: Map & release OpenCAPI LPC memory
ocxl: Tally up
From: Alastair D'Silva
The call to check_hotplug_memory_addressable() validates that the memory
is fully addressable.
Without this call, it is possible that we may remap pages that are
not physically addressable, resulting in bogus section numbers
being returned from __section_nr().
Signe
From: Alastair D'Silva
On PowerPC, the address ranges allocated to OpenCAPI LPC memory
are allocated from firmware. These address ranges may be higher
than what older kernels permit, as we increased the maximum
permissible address in commit 4ffe713b7587
("powerpc/mm: Increase the max a
From: Alastair D'Silva
This series adds bounds checks for hotplugged memory, ensuring that
it is within the physically addressable range (for platforms that
define MAX_(POSSIBLE_)PHYSMEM_BITS).
This allows for early failure, rather than attempting to access
bogus section numbers.
Changelog
From: Alastair D'Silva
The call to check_hotplug_memory_addressable() validates that the memory
is fully addressable.
Without this call, it is possible that we may remap pages that are
not physically addressable, resulting in bogus section numbers
being returned from __section_nr().
Signe
From: Alastair D'Silva
On PowerPC, the address ranges allocated to OpenCAPI LPC memory
are allocated from firmware. These address ranges may be higher
than what older kernels permit, as we increased the maximum
permissible address in commit 4ffe713b7587
("powerpc/mm: Increase the max a
From: Alastair D'Silva
This series adds bounds checks for hotplugged memory, ensuring that
it is within the physically addressable range (for platforms that
define MAX_(POSSIBLE_)PHYSMEM_BITS).
This allows for early failure, rather than attempting to access
bogus section numbers.
Changelog
On Sat, 2019-09-14 at 09:46 +0200, Christophe Leroy wrote:
>
> Le 03/09/2019 à 07:23, Alastair D'Silva a écrit :
> > From: Alastair D'Silva
> >
> > When calling flush_icache_range with a size >4GB, we were masking
> > off the upper 32 bits, so we
> -Original Message-
> From: Kirill A. Shutemov
> Sent: Tuesday, 10 September 2019 8:15 PM
> To: Alastair D'Silva
> Cc: alast...@d-silva.org; Andrew Morton ;
> David Hildenbrand ; Oscar Salvador
> ; Michal Hocko ; Pavel Tatashin
> ; Wei Yang ;
>
> -Original Message-
> From: David Hildenbrand
> Sent: Tuesday, 10 September 2019 5:46 PM
> To: Alastair D'Silva ; alast...@d-silva.org
> Cc: Andrew Morton ; Oscar Salvador
> ; Michal Hocko ; Pavel Tatashin
> ; Wei Yang ;
> Dan Williams ; Qian Cai ; Jason
> -Original Message-
> From: David Hildenbrand
> Sent: Tuesday, 10 September 2019 5:39 PM
> To: Alastair D'Silva ; alast...@d-silva.org
> Cc: Andrew Morton ; Oscar Salvador
> ; Michal Hocko ; Pavel Tatashin
> ; Dan Williams ;
> Wei Yang ; Qian Cai ; Jason
From: Alastair D'Silva
This series adds bounds checks for hotplugged memory, ensuring that
it is within the physically addressable range (for platforms that
define MAX_(POSSIBLE_)PHYSMEM_BITS).
This allows for early failure, rather than attempting to access
bogus section numbers.
Ala
From: Alastair D'Silva
The call to check_hotplug_memory_addressable() validates that the memory
is fully addressable.
Without this call, it is possible that we may remap pages that are
not physically addressable, resulting in bogus section numbers
being returned from __section_nr().
Signe
From: Alastair D'Silva
On PowerPC, the address ranges allocated to OpenCAPI LPC memory
are allocated from firmware. These address ranges may be higher
than what older kernels permit, as we increased the maximum
permissible address in commit 4ffe713b7587
("powerpc/mm: Increase the max a
On Mon, 2019-09-02 at 09:28 +0200, David Hildenbrand wrote:
> On 02.09.19 01:54, Alastair D'Silva wrote:
> > On Tue, 2019-08-27 at 09:13 +0200, David Hildenbrand wrote:
> > > On 27.08.19 08:39, Alastair D'Silva wrote:
> > > > On Tue, 2019-08-27 at 08:28 +02
>
> But you tell me that you leave to people the opportunity to not
> apply
> that subsequent patch, and that's the reason you didn't put that
> patch
> before this one. In that case adding a helper is worth it.
>
> Christophe
I factored it out anyway, since it
3", it is always possible to copy %0 to %3 and use it
> as
> an address register for the second loop. One register less to
> allocate
> for the compiler. Constraints of course have to be adjusted.
>
>
Given that we're dealing with registers holding data that has be
On Tue, 2019-09-03 at 11:04 -0500, Segher Boessenkool wrote:
> On Tue, Sep 03, 2019 at 04:28:09PM +0200, Christophe Leroy wrote:
> > Le 03/09/2019 à 15:04, Segher Boessenkool a écrit :
> > > On Tue, Sep 03, 2019 at 03:23:57PM +1000, Alastair D'Silva wrote:
>
On Tue, 2019-09-03 at 08:08 +0200, Christophe Leroy wrote:
>
> Le 03/09/2019 à 07:23, Alastair D'Silva a écrit :
> > From: Alastair D'Silva
> >
> > Similar to commit 22e9c88d486a
> > ("powerpc/64: reuse PPC32 static inline flush_dcache_range()")
On Tue, 2019-09-03 at 08:23 +0200, Christophe Leroy wrote:
>
> Le 03/09/2019 à 07:24, Alastair D'Silva a écrit :
> > From: Alastair D'Silva
> >
> > This operation takes a significant amount of time when hotplugging
> > large amounts of memory (~50 sec
On Tue, 2019-09-03 at 08:19 +0200, Christophe Leroy wrote:
>
> Le 03/09/2019 à 07:23, Alastair D'Silva a écrit :
> > From: Alastair D'Silva
> >
> > When presented with large amounts of memory being hotplugged
> > (in my test case, ~890GB), the call to f
From: Alastair D'Silva
Similar to commit 22e9c88d486a
("powerpc/64: reuse PPC32 static inline flush_dcache_range()")
this patch converts the following ASM symbols to C:
flush_icache_range()
__flush_dcache_icache()
__flush_dcache_icache_phys()
This was done as we di
From: Alastair D'Silva
This operation takes a significant amount of time when hotplugging
large amounts of memory (~50 seconds with 890GB of persistent memory).
This was originally in commit fb5924fddf9e
("powerpc/mm: Flush cache on memory hot(un)plug") to support memtrace,
but t
From: Alastair D'Silva
The 'extern' keyword does not value-add for function prototypes.
Signed-off-by: Alastair D'Silva
---
arch/powerpc/include/asm/cache.h | 8
arch/powerpc/include/asm/cacheflush.h | 6 +++---
2 files changed, 7 insertions(+), 7 deletions(-
From: Alastair D'Silva
When presented with large amounts of memory being hotplugged
(in my test case, ~890GB), the call to flush_dcache_range takes
a while (~50 seconds), triggering RCU stalls.
This patch breaks up the call into 1GB chunks, calling
cond_resched() in between to allo
From: Alastair D'Silva
This patch adds helpers to retrieve icache sizes, and renames the existing
helpers to make it clear that they are for dcache.
Signed-off-by: Alastair D'Silva
---
arch/powerpc/include/asm/cache.h | 29 +++
arch/powerpc/i
From: Alastair D'Silva
This series addresses a few issues discovered in how we flush caches:
1. Flushes were truncated at 4GB, so larger flushes were incorrect.
2. Flushing the dcache in arch_add_memory was unnecessary
This series also converts much of the cache assembler to C, with the
a
From: Alastair D'Silva
When calling flush_icache_range with a size >4GB, we were masking
off the upper 32 bits, so we would incorrectly flush a range smaller
than intended.
This patch replaces the 32 bit shifts with 64 bit ones, so that
the full size is accounted for.
Signed-off-by:
On Tue, 2019-08-27 at 09:13 +0200, David Hildenbrand wrote:
> On 27.08.19 08:39, Alastair D'Silva wrote:
> > On Tue, 2019-08-27 at 08:28 +0200, Michal Hocko wrote:
> > > On Tue 27-08-19 15:20:46, Alastair D'Silva wrote:
> > > > From: Alastair D'Silva
e passed a NULL has been removed, so there
is no longer a possibility that memmap can be NULL.
Signed-off-by: Alastair D'Silva
---
mm/sparse.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/mm/sparse.c b/mm/sparse.c
index 78979c142b7d..9f7e3682cdcb 100644
--- a/mm/sparse.c
++
On Tue, 2019-08-27 at 08:24 +0200, Michal Hocko wrote:
> On Tue 27-08-19 15:36:55, Alastair D'Silva wrote:
> > From: Alastair D'Silva
> >
> > By adding offset to memmap before passing it in to
> > clear_hwpoisoned_pages,
> > we hide a theoretic
On Tue, 2019-08-27 at 08:28 +0200, Michal Hocko wrote:
> On Tue 27-08-19 15:20:46, Alastair D'Silva wrote:
> > From: Alastair D'Silva
> >
> > It is possible for firmware to allocate memory ranges outside
> > the range of physical memory that we support (MAX_P
From: Alastair D'Silva
Use the function written to do it instead.
Signed-off-by: Alastair D'Silva
---
mm/sparse.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/sparse.c b/mm/sparse.c
index 72f010d9bff5..e41917a7e844 100644
--- a/mm/sparse.c
+++ b/m
From: Alastair D'Silva
By adding offset to memmap before passing it in to clear_hwpoisoned_pages,
we hide a theoretically null memmap from the null check inside
clear_hwpoisoned_pages.
This patch passes the offset to clear_hwpoisoned_pages instead, allowing
memmap to successfully perform
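A sketch of the idea (names modelled on mm/sparse.c; the exact signature is an assumption and the hwpoison accounting is elided): with the offset passed separately, the callee's NULL check still sees the raw memmap pointer.

/* Sketch, not the actual mm/sparse.c code; poison accounting elided. */
static void clear_hwpoisoned_pages(struct page *memmap,
				   unsigned long offset, int nr_pages)
{
	int i;

	/* Meaningful again: the caller no longer pre-adds the offset. */
	if (!memmap)
		return;

	for (i = 0; i < nr_pages; i++)
		if (PageHWPoison(memmap + offset + i))
			ClearPageHWPoison(memmap + offset + i);
}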
From: Alastair D'Silva
This series addresses some minor issues & obsoletes:
mm: Cleanup & allow modules to hotplug memory
Alastair D'Silva (2):
mm: Don't manually decrement num_poisoned_pages
mm: don't hide potentially null memmap pointer in
sparse_remov
From: Alastair D'Silva
It is possible for firmware to allocate memory ranges outside
the range of physical memory that we support (MAX_PHYSMEM_BITS).
This patch adds a bounds check to ensure that any hotplugged
memory is addressable.
Signed-off-by: Alastair D'Silva
---
arch/powerp
On Thu, 2019-08-22 at 07:06 +0200, Christophe Leroy wrote:
>
> Le 22/08/2019 à 02:27, Alastair D'Silva a écrit :
> > On Wed, 2019-08-21 at 22:27 +0200, Christophe Leroy wrote:
> > > Le 20/08/2019 à 06:36, Alastair D'Silva a écrit :
> > > > On Fri, 2019-0
On Wed, 2019-08-21 at 22:27 +0200, Christophe Leroy wrote:
>
> Le 20/08/2019 à 06:36, Alastair D'Silva a écrit :
> > On Fri, 2019-08-16 at 15:52 +, Christophe Leroy wrote:
>
> [...]
>
> >
> > Thanks Christophe,
> >
> > I'm
From: Alastair D'Silva
The upstream commit:
22e9c88d486a ("powerpc/64: reuse PPC32 static inline flush_dcache_range()")
has a similar effect, but since it is a rewrite of the assembler to C, is
too invasive for stable. This patch is a minimal fix to address the issue in
assembl
uction cache */
address = addr;
for (i = 0; i < ilines; i++, address += ibytes)
icbi((void *)address);
mtmsr(msr);
}
void test_flush_phys(unsigned long addr)
{
flush_dcache_icache_phys(addr);
}
This gives the following assembler (using pmac32_
On Thu, 2019-08-15 at 09:36 +0200, christophe leroy wrote:
>
> Le 15/08/2019 à 06:10, Alastair D'Silva a écrit :
> > From: Alastair D'Silva
> >
> > When presented with large amounts of memory being hotplugged
> > (in my test case, ~890GB), the call to f
On Thu, 2019-08-15 at 09:29 +0200, christophe leroy wrote:
>
> Le 15/08/2019 à 06:10, Alastair D'Silva a écrit :
> > From: Alastair D'Silva
> >
> > Similar to commit 22e9c88d486a
> > ("powerpc/64: reuse PPC32 static inline flush_dcache_range()")
From: Alastair D'Silva
Heads Up: This patch cannot be submitted to Linus's tree, as the affected
assembler functions have already been converted to C.
When calling flush_(inval_)dcache_range with a size >4GB, we were masking
off the upper 32 bits, so we would incorrectly flush a
From: Alastair D'Silva
This series addresses a few issues discovered in how we flush caches:
1. Flushes were truncated at 4GB, so larger flushes were incorrect.
2. Flushing the dcache in arch_add_memory was unnecessary
This series also converts much of the cache assembler to C, with the
a
From: Alastair D'Silva
This operation takes a significant amount of time when hotplugging
large amounts of memory (~50 seconds with 890GB of persistent memory).
This was originally in commit fb5924fddf9e
("powerpc/mm: Flush cache on memory hot(un)plug") to support memtrace,
but t
From: Alastair D'Silva
The 'extern' keyword does not value-add for function prototypes.
Signed-off-by: Alastair D'Silva
---
arch/powerpc/include/asm/cache.h | 8
arch/powerpc/include/asm/cacheflush.h | 6 +++---
2 files changed, 7 insertions(+), 7 deletions(-
From: Alastair D'Silva
Similar to commit 22e9c88d486a
("powerpc/64: reuse PPC32 static inline flush_dcache_range()")
this patch converts flush_icache_range() to C, and reimplements the
following functions as wrappers around it:
__flush_dcache_icache
__flush_dcache_icache_phys
T
From: Alastair D'Silva
When presented with large amounts of memory being hotplugged
(in my test case, ~890GB), the call to flush_dcache_range takes
a while (~50 seconds), triggering RCU stalls.
This patch breaks up the call into 16GB chunks, calling
cond_resched() in between to allo
From: Alastair D'Silva
This patch adds helpers to retrieve icache sizes, and renames the existing
helpers to make it clear that they are for dcache.
Signed-off-by: Alastair D'Silva
---
arch/powerpc/include/asm/cache.h | 29 +++
arch/powerpc/i
From: Alastair D'Silva
When calling flush_icache_range with a size >4GB, we were masking
off the upper 32 bits, so we would incorrectly flush a range smaller
than intended.
This patch replaces the 32 bit shifts with 64 bit ones, so that
the full size is accounted for.
Signed-off-by:
On Fri, 2019-08-09 at 10:59 +0200, Christophe Leroy wrote:
>
> Le 09/08/2019 à 02:45, Alastair D'Silva a écrit :
> > From: Alastair D'Silva
> >
> > When calling flush_icache_range with a size >4GB, we were masking
> > off the upper 32 bits, so we
From: Alastair D'Silva
Similar to commit 22e9c88d486a
("powerpc/64: reuse PPC32 static inline flush_dcache_range()")
this patch converts flush_icache_range to C.
This was done as we discovered a long-standing bug where the
length of the range was truncated due to using a 32 bit s
From: Alastair D'Silva
When calling flush_icache_range with a size >4GB, we were masking
off the upper 32 bits, so we would incorrectly flush a range smaller
than intended.
This patch replaces the 32 bit shifts with 64 bit ones, so that
the full size is accounted for.
Heads-up for bac
On Tue, 2019-07-02 at 08:13 +0200, Michal Hocko wrote:
> On Tue 02-07-19 14:13:25, Alastair D'Silva wrote:
> > On Mon, 2019-07-01 at 12:46 +0200, Michal Hocko wrote:
> > > On Fri 28-06-19 10:46:28, Alastair D'Silva wrote:
> > > [...]
> > > > Given t
On Mon, 2019-07-01 at 12:46 +0200, Michal Hocko wrote:
> On Fri 28-06-19 10:46:28, Alastair D'Silva wrote:
> [...]
> > Given that there is already a VM_BUG_ON in the code, how do you
> > feel
> > about broadening the scope from 'VM_BUG_ON(!root)' t
On Thu, 2019-06-27 at 10:10 +0200, Michal Hocko wrote:
> On Thu 27-06-19 10:50:57, Alastair D'Silva wrote:
> > On Wed, 2019-06-26 at 08:57 +0200, Michal Hocko wrote:
> > > On Wed 26-06-19 16:27:30, Alastair D'Silva wrote:
> > > > On Wed, 2019-06-26 at 08:21
On Wed, 2019-06-26 at 00:57 -0700, Christoph Hellwig wrote:
> On Wed, Jun 26, 2019 at 04:11:20PM +1000, Alastair D'Silva wrote:
> > - Drop mm/hotplug: export try_online_node
> > (not necessary)
>
> With this the subject line of the cover letter seems incorre
On Wed, 2019-06-26 at 08:57 +0200, Michal Hocko wrote:
> On Wed 26-06-19 16:27:30, Alastair D'Silva wrote:
> > On Wed, 2019-06-26 at 08:21 +0200, Michal Hocko wrote:
> > > On Wed 26-06-19 16:11:21, Alastair D'Silva wrote:
> > > > From: Alastair D'Silv
On Wed, 2019-06-26 at 08:23 +0200, Michal Hocko wrote:
> On Wed 26-06-19 16:11:22, Alastair D'Silva wrote:
> > From: Alastair D'Silva
> >
> > By adding offset to memmap before passing it in to
> > clear_hwpoisoned_pages,
> > we hide a potenti
On Wed, 2019-06-26 at 08:21 +0200, Michal Hocko wrote:
> On Wed 26-06-19 16:11:21, Alastair D'Silva wrote:
> > From: Alastair D'Silva
> >
> > If a memory section comes in where the physical address is greater
> > than
> > that which is managed by the
From: Alastair D'Silva
Use the function written to do it instead.
Signed-off-by: Alastair D'Silva
---
mm/sparse.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/sparse.c b/mm/sparse.c
index 1ec32aef5590..d9b3625bfdf0 100644
--- a/mm/sparse.c
+++ b/m