On 2021-03-20 12:54, Matthew Wilcox wrote:
On Sat, Mar 20, 2021 at 10:20:09AM -0700, Minchan Kim wrote:
> > Tested-by: Oliver Sang
> > Reported-by: kernel test robot
> > Signed-off-by: Chris Goldsworthy
> > Signed-off-by: Minchan Kim
>
> The signoff chain
On 2021-03-11 14:41, Chris Goldsworthy wrote:
On 2021-03-10 08:14, Minchan Kim wrote:
The LRU pagevec holds a refcount on its pages until the pagevec is
drained. This can prevent migration, since the refcount of the page is
then greater than the expectation in the migration logic. To mitigate
the issue, callers of migrate_pages drain the LRU pagevec via
On 2021-02-11 06:09, Matthew Wilcox wrote:
On Wed, Feb 10, 2021 at 09:35:40PM -0800, Chris Goldsworthy wrote:
+/* These are used to control the BH LRU invalidation during page migration */
+static struct cpumask lru_needs_invalidation;
+static bool bh_lru_disabled = false;
As I asked before
being cached in the LRU caches,
until migration has finished.
Signed-off-by: Chris Goldsworthy
Cc: Minchan Kim
Cc: Matthew Wilcox
Cc: Andrew Morton
---
fs/buffer.c | 54 +++--
include/linux/buffer_head.h | 8 +++
include/linux
[1] https://elixir.bootlin.com/linux/latest/source/fs/buffer.c#L1238
[2]
https://lore.kernel.org/linux-fsdevel/cover.1611642038.git.cgold...@codeaurora.org/
[3] https://lkml.org/lkml/2021/2/2/68
Chris Goldsworthy (1):
[RFC] mm: fs: Invalidate BH LRU during page migration
fs/buffer.c | 54
Pages containing buffer_heads that are in the buffer_head LRU cache
will be pinned and thus cannot be migrated. Correspondingly,
invalidate the BH LRU before a migration starts, and stop any
buffer_head from being cached in the LRU until migration has
finished.
Signed-off-by: Chris Goldsworthy
...@codeaurora.org/
Chris Goldsworthy (1):
[RFC] mm: fs: Invalidate BH LRU during page migration
fs/buffer.c | 6 ++
include/linux/buffer_head.h | 3 +++
include/linux/migrate.h | 2 ++
mm/migrate.c | 18 ++
mm/page_alloc.c | 3
On 2021-01-28 09:08, Minchan Kim wrote:
On Thu, Jan 28, 2021 at 12:28:37AM -0800, Chris Goldsworthy wrote:
On 2021-01-26 18:59, Matthew Wilcox wrote:
On Tue, Jan 26, 2021 at 02:59:17PM -0800, Minchan Kim wrote:
Releasing the buffer_head from the LRU is a great improvement from the
migration point of view.
A question:
Hey guys,
Can't we invalidate the bh_lru (e.g., via invalidate_bh_lrus) in
migrate_prep or elsewhere?
s_for_each() to xa_for_each().
Signed-off-by: Laura Abbott
Signed-off-by: Chris Goldsworthy
Cc: Matthew Wilcox
Reported-by: kernel test robot
---
fs/buffer.c | 79 +
1 file changed, 74 insertions(+), 5 deletions(-)
diff --git a/fs/buf
It is possible for file-backed pages to end up in a contiguous memory area
(CMA), such that the relevant page must be migrated using the .migratepage()
callback when its backing physical memory is selected for use in a CMA
allocation (through cma_alloc()). However, if a set of address space
s_for_each() to xa_for_each().
Signed-off-by: Laura Abbott
Signed-off-by: Chris Goldsworthy
Cc: Matthew Wilcox
Reported-by: kernel test robot
---
fs/buffer.c | 81 +
1 file changed, 76 insertions(+), 5 deletions(-)
diff --git a/fs/buf
On 2020-11-24 07:39, Matthew Wilcox wrote:
On Mon, Nov 23, 2020 at 10:49:38PM -0800, Chris Goldsworthy wrote:
+static void __evict_bh_lru(void *arg)
+{
+	struct bh_lru *b = &get_cpu_var(bh_lrus);
+	struct buffer_head *bh = arg;
+	int i;
+
+	for (i = 0; i < BH_LRU_SIZE
to drop it. There is still the possibility that the buffer
could be added back on the list, but that indicates the buffer is
still in use and would probably have other 'in use' indicators to
prevent dropping.
Signed-off-by: Laura Abbott
Signed-off-by: Chris Goldsworthy
Cc: Matthew Wilcox
---
fs
Signed-off-by: Laura Abbott
Signed-off-by: Chris Goldsworthy
---
fs/buffer.c | 47
The current approach to increasing CMA utilization introduced in
commit 16867664936e ("mm,page_alloc,cma: conditionally prefer cma
pageblocks for movable allocations") increases CMA utilization by
redirecting MIGRATE_MOVABLE allocations to a CMA region, when
greater than half of the free pages in
of free cma pages, resulting
in kswapd or direct reclaim not making enough progress.
Signed-off-by: Vinayak Menon
Signed-off-by: Chris Goldsworthy
---
drivers/block/zram/zram_drv.c | 5 +++--
mm/zsmalloc.c | 4 ++--
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers
Signed-off-by: Vinayak Menon
[cgold...@codeaurora.org: Place in bugfixes]
Signed-off-by: Chris Goldsworthy
---
include/linux/gfp.h | 15 +
include/linux/highmem.h | 4 ++-
include/linux/mmzone.h | 4 +++
mm/page_alloc.c | 83 +++--
4
On 2020-09-28 22:59, Christoph Hellwig wrote:
On Mon, Sep 28, 2020 at 01:30:27PM -0700, Chris Goldsworthy wrote:
CMA allocations will fail if 'pinned' pages are in a CMA area, since we
cannot migrate pinned pages. The _refcount of a struct page being greater
than _mapcount for that page can
On 2020-10-08 23:42, Chris Goldsworthy wrote:
Hi there,
ext4_aops and ext4_da_aops both have a migratepage callback, whereas
ext4_journalled_aops lacks such a callback. Why is this so? I’m asking
this due to the following: when a page containing EXT4 journal buffer
heads ends up being migrated, fallback_migrate_page() is used, which
On 2020-09-29 21:46, Chris Goldsworthy wrote:
On 2020-09-25 21:24, John Stultz wrote:
Reuse/abuse the pagepool code from the network code to speed
up allocation performance.
This is similar to the ION pagepool usage, but tries to
utilize generic code instead of a custom implementation.
Cc: Sumit Semwal
Cc: Liam Mark
Cc: Laura Abbott
how long the sleep lasts for (it is now a fixed 100 ms):
https://lkml.org/lkml/2020/9/28/1144
V5: Add missing fatal_signal_pending() check
Chris Goldsworthy (1):
mm: cma: indefinitely retry allocations in cma_alloc
arch/powerpc/kvm/book3s_hv_builtin.c | 2 +-
drivers/dma-buf/heaps
page.
So, inside of cma_alloc(), add the option of letting users pass in
__GFP_NOFAIL to indicate that we should retry CMA allocations indefinitely,
in the event that alloc_contig_range() returns -EBUSY after having scanned
a whole CMA-region bitmap.
Signed-off-by: Chris Goldsworthy
Co-developed-by
how long the sleep lasts for (it is now a fixed 100 ms).
Chris Goldsworthy (1):
mm: cma: indefinitely retry allocations in cma_alloc
arch/powerpc/kvm/book3s_hv_builtin.c | 2 +-
drivers/dma-buf/heaps/cma_heap.c | 2 +-
drivers/s390/char/vmcp.c | 2 +-
drivers
On 2020-09-27 12:23, Minchan Kim wrote:
On Wed, Sep 23, 2020 at 10:16:25PM -0700, Chris Goldsworthy wrote:
CMA allocations will fail if 'pinned' pages are in a CMA area, since
we
+config CMA_RETRY_SLEEP_DURATION
+ int "Sleep duration between retries"
+ depe
On 2020-09-25 05:18, David Hildenbrand wrote:
On 24.09.20 07:16, Chris Goldsworthy wrote:
-GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
+GFP_KERNEL | (gfp_mask & __GFP_NOWARN));
Right, we definitely don't want to
if they want
to perform indefinite retries. Also introduces a config option for
controlling the duration of the sleep between retries.
Chris Goldsworthy (1):
mm: cma: indefinitely retry allocations in cma_alloc
arch/powerpc/kvm/book3s_hv_builtin.c | 2 +-
drivers/dma-buf/heaps/cma_heap.c
page.
So, inside of cma_alloc(), add the option of letting users pass in
__GFP_NOFAIL to indicate that we should retry CMA allocations indefinitely,
in the event that alloc_contig_range() returns -EBUSY after having scanned
a whole CMA-region bitmap.
Signed-off-by: Chris Goldsworthy
Co-developed-by
On 2020-09-17 10:54, Chris Goldsworthy wrote:
On 2020-09-15 00:53, David Hildenbrand wrote:
On 14.09.20 20:33, Chris Goldsworthy wrote:
On 2020-09-14 02:31, David Hildenbrand wrote:
On 11.09.20 21:17, Chris Goldsworthy wrote:
So, inside of cma_alloc(), instead of giving up when
On 2020-09-15 00:53, David Hildenbrand wrote:
On 14.09.20 20:33, Chris Goldsworthy wrote:
On 2020-09-14 02:31, David Hildenbrand wrote:
On 11.09.20 21:17, Chris Goldsworthy wrote:
So, inside of cma_alloc(), instead of giving up when
alloc_contig_range()
returns -EBUSY after having scanned
On 2020-09-14 11:33, Chris Goldsworthy wrote:
On 2020-09-14 02:31, David Hildenbrand wrote:
What about long-term pinnings? IIRC, that can happen easily e.g., with
vfio (and I remember there is a way via vmsplice).
Not convinced trying forever is a sane approach in the general case
...
Hi
On 2020-09-11 14:42, Randy Dunlap wrote:
On 9/11/20 2:37 PM, Florian Fainelli wrote:
I am by no means an authoritative CMA person but this behavior does
not seem acceptable, there is no doubt the existing one is sub-optimal
under specific circumstances, but an indefinite retry, as well as a
On 2020-09-11 14:37, Florian Fainelli wrote:
On 9/11/2020 1:54 PM, Chris Goldsworthy wrote:
CMA allocations will fail if 'pinned' pages are in a CMA area, since
we
cannot migrate pinned pages. The _refcount of a struct page being
greater
than _mapcount for that page can cause pinning
On 2020-09-14 02:31, David Hildenbrand wrote:
On 11.09.20 21:17, Chris Goldsworthy wrote:
So, inside of cma_alloc(), instead of giving up when
alloc_contig_range()
returns -EBUSY after having scanned a whole CMA-region bitmap, perform
retries indefinitely, with sleeps, to give the system
On 2020-09-11 13:54, Chris Goldsworthy wrote:
[PATCH v2] cma_alloc(), indefinitely retry allocations for -EBUSY
failures
On mobile devices, failure to allocate from a CMA area constitutes a
functional failure. Sometimes during CMA allocations, we have observed
that pages in a CMA area
/2020/8/5/1096
https://lkml.org/lkml/2020/8/21/1490
v2: To address this concern, we switched to retrying indefinitely, as opposed
to retrying the allocation a limited number of times.
Chris Goldsworthy (1):
mm: cma: indefinitely retry allocations in cma_alloc
mm/cma.c | 25
unt, then the page
will be temporarily pinned.
So, inside of cma_alloc(), instead of giving up when alloc_contig_range()
returns -EBUSY after having scanned a whole CMA-region bitmap, perform
retries indefinitely, with sleeps, to give the system an opportunity to
unpin any pinned pages.
Signed-off-by: Ch
this concern, we switched to retrying indefinitely, as opposed
to retrying the allocation a limited number of times.
Chris Goldsworthy (1):
mm: cma: indefinitely retry allocations in cma_alloc
mm/cma.c | 25 +++--
1 file changed, 23 insertions(+), 2 deletions
performing retries of the allocation a fixed number of times.
Andrew Morton disliked this, as it didn't guarantee that the allocation would
succeed.
v2: To address this concern, we switched to retrying indefinitely, as opposed
to retrying the allocation a limited number of times.
Chris
On 2020-08-21 15:01, Andrew Morton wrote:
On Tue, 11 Aug 2020 15:20:47 -0700 cgold...@codeaurora.org wrote:
One thing to stress is that there are other instances of CMA page
pinning, that this patch isn't attempting to address.
Oh. How severe are these?
Hey Andrew,
- get_user_pages()
On mobile devices, failure to allocate from a CMA area constitutes a
functional failure. Sometimes during CMA allocations, we have observed
that pages in a CMA area allocated through alloc_pages(), that we're trying
to migrate away to make room for a CMA allocation, are temporarily pinned.
This
unt, then the page
will be temporarily pinned.
So, inside of cma_alloc(), instead of giving up when alloc_contig_range()
returns -EBUSY after having scanned a whole CMA-region bitmap, perform
retries with sleeps to give the system an opportunity to unpin any pinned
pages.
Signed-off-by: Chris Goldswor
it.
Fixes: d698a388146c ("of: reserved-memory: ignore disabled memory-region nodes")
Signed-off-by: Chris Goldsworthy
To: Rob Herring
Cc: devicet...@vger.kernel.org
Cc: sta...@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-arm-...@vger.kernel.org
Cc: linux-arm-ker...@lists.infrad
it.
Fixes: d698a388146c ("of: reserved-memory: ignore disabled memory-region nodes")
Signed-off-by: Chris Goldsworthy
To: Rob Herring
Cc: devicet...@vger.kernel.org
Cc: sta...@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-arm-ker...@lists.infradead.org
---
drivers/of/of_reserved_me