On 3/16/21 11:07 AM, Christoph Lameter wrote:
> On Mon, 15 Mar 2021, Yang Shi wrote:
>
>> > It seems like CONFIG_SLUB_DEBUG is a more popular option than
>> > CONFIG_SLUB_STATS.
>> > CONFIG_SLUB_DEBUG is enabled on my Fedora workstation, CONFIG_SLUB_STATS
>> > is off.
>> > I doubt an average
On 3/9/21 4:25 PM, Xunlei Pang wrote:
> count_partial() can hold the n->list_lock spinlock for quite a long time, which
> causes a lot of trouble for the system. This series eliminates this problem.
Before I check the details, I have two high-level comments:
- patch 1 introduces some counting scheme that patch 4
On 3/15/21 6:32 PM, Paul E. McKenney wrote:
> On Mon, Mar 15, 2021 at 06:28:42PM +0100, Vlastimil Babka wrote:
>> On 3/15/21 6:16 PM, David Rientjes wrote:
>> > On Mon, 15 Mar 2021, Vlastimil Babka wrote:
>> >
>> >> Commit ca0cab65ea2b ("m
On 3/15/21 6:16 PM, David Rientjes wrote:
> On Mon, 15 Mar 2021, Vlastimil Babka wrote:
>
>> Commit ca0cab65ea2b ("mm, slub: introduce static key for slub_debug()")
>> introduced a static key to optimize the case where no debugging is enabled
>> for
>> a
ing cpu hotplug lock"),
static_branch_enable_cpuslocked() should be used.
[1] https://lore.kernel.org/linux-btrfs/20210315141824.26099-1-dste...@suse.com/
Reported-by: Oliver Glitta
Fixes: ca0cab65ea2b ("mm, slub: introduce static key for slub_debug()")
Signed-off-by: Vlast
pfn to be
> scanned, we reuse the cc->migrate_pfn field to keep track of that.
>
> Signed-off-by: Oscar Salvador
Acked-by: Vlastimil Babka
> ---
> mm/compaction.c | 48
> mm/internal.h | 2 +-
> mm/page_alloc.c
s (5 at the moment) instead of bailing out.
>
> migrate_pages() bails out right away on -ENOMEM because it is considered a fatal
> error. Do the same here instead of continuing and retrying.
>
> Signed-off-by: Oscar Salvador
Acked-by: Vlastimil Babka
> ---
> mm/page_al
On 3/11/21 11:51 AM, Maninder Singh wrote:
> Hi,
>
>
>
>> Instead of your changes to SL*B, could you check mem_dump_obj() and others
>> added
>> by Paul in 5.12-rc1?
>
>> (+CC Paul, thus not trimming)
>
>
>
> checked mem_dump_obj(), but it only provides path of allocation and not of
>
On 2/25/21 8:56 AM, Maninder Singh wrote:
> In case of a "Use After Free" kernel oops, the free path of the object
> is required to debug further.
> And in most of cases object address is present in one of registers.
>
> Thus check for register address and if it belongs to slab,
> print its alloc and free
On 3/9/21 7:14 PM, Georgi Djakov wrote:
> Hi Vlastimil,
>
> Thanks for the comment!
>
> On 3/9/21 17:09, Vlastimil Babka wrote:
>> On 3/9/21 2:47 PM, Georgi Djakov wrote:
>>> Being able to stop the system immediately when a memory corruption
>>> is de
On 3/9/21 2:47 PM, Georgi Djakov wrote:
> Being able to stop the system immediately when a memory corruption
> is detected is crucial to finding the source of it. This is very
> useful when the memory can be inspected with kdump or other tools.
Is this in some testing scenarios where you would
> This will also eliminate the extern declaration from the header file.
> No functionality is broken or changed this way.
>
> Signed-off-by: Pintu Kumar
> Signed-off-by: Pintu Agarwal
Reviewed-by: Vlastimil Babka
> ---
> v2: completely get rid of this variable and set .data t
On 3/2/21 6:56 PM, Pintu Kumar wrote:
> The sysctl_compact_memory is mostly unused in mm/compaction.c.
> It just acts as a placeholder for sysctl.
>
> Thus we can remove it from here and move the declaration directly
> in kernel/sysctl.c itself.
> This will also eliminate the extern declaration
On 3/2/21 2:29 PM, Petr Mladek wrote:
> On Tue 2021-03-02 13:51:35, Geert Uytterhoeven wrote:
>> > > > +
>> > > > +
>> > > > pr_warn("**\n");
>> > > > + pr_warn("** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE
>> > > >
ady. I have verified
the POC no longer reproduces afterwards.
[1] https://bugs.chromium.org/p/project-zero/issues/detail?id=2045
Reported-by: Nicolai Stange
Signed-off-by: Vlastimil Babka
---
mm/huge_memory.c | 15 +++
1 file changed, 15 insertions(+)
diff --git a/mm/huge_memory.c b/mm
he DMA32 zone. Ensure the allocations resulting from
> the gfp_mask returned by limit_gfp_mask use the zone flags that were
> originally passed to shmem_getpage_gfp.
>
> Signed-off-by: Rik van Riel
> Suggested-by: Hugh Dickins
Acked-by: Vlastimil Babka
> ---
> mm/shmem.c
On 2/26/21 11:59 AM, Mike Rapoport wrote:
> On Thu, Feb 25, 2021 at 07:38:44PM +0100, Vlastimil Babka wrote:
>> On 2/25/21 7:05 PM, Mike Rapoport wrote:
>> >>
>> >> What if two zones are adjacent? I.e. if the hole was at a boundary
>> >> between
On 2/26/21 10:17 AM, Yu Zhao wrote:
> Patch series "mm: lru related cleanups" starting at commit 42895ea73bcd
> ("mm/vmscan.c: use add_page_to_lru_list()") bloated vmlinux by 1777
> bytes, according to:
>
> https://lore.kernel.org/linux-mm/85b3e8f2-5982-3329-c20d-cf062b8da...@suse.cz/
Huh, I
On 2/25/21 7:05 PM, Mike Rapoport wrote:
> On Thu, Feb 25, 2021 at 06:51:53PM +0100, Vlastimil Babka wrote:
>> >
>> > unset zone link in struct page will trigger
>> >
>> >VM_BUG_ON_PAGE(!zone_spans_pfn(page_zone(page), pfn), page);
>>
>
> links to the adjacent zone/node.
What if two zones are adjacent? I.e. if the hole was at a boundary between two
zones.
> Fixes: 73a6e474cb37 ("mm: memmap_init: iterate over memblock regions rather
> that check each PFN")
> Signed-off-by: Mike
should be
only used for procfs and similar files, not dmesg buffer. This patch clarifies
the documentation in that regard.
Signed-off-by: Vlastimil Babka
---
Documentation/core-api/printk-formats.rst | 26 ++-
lib/vsprintf.c| 7 --
2 files changed
heir kernels with everything that's needed to decode stack
> traces later.
Looks good!
> Signed-off-by: Thorsten Leemhuis
> Reviewed-by: Qais Yousef
Acked-by: Vlastimil Babka
Thanks!
On 2/18/21 6:24 PM, Charan Teja Reddy wrote:
> I would like to start discussion about balancing the occupancy of
> memory zones in a node in the system, whose imbalance may be caused by
> migration of pages to other zones during hotremove and then hotadding the
> same memory. In this case there is a
On 2/17/21 6:33 PM, Vlastimil Babka wrote:
> Compaction always operates on pages from a single given zone when isolating
> both pages to migrate and freepages. Pageblock boundaries are intersected with
> zone boundaries to be safe in case zone starts or ends in the middle of
> pagebl
Let's add include/uapi/ and arch/*/include/uapi/ to API/ABI section, so that
for patches modifying them, get_maintainers.pl suggests CCing linux-api@ so
people don't forget.
Reported-by: David Hildenbrand
Signed-off-by: Vlastimil Babka
---
MAINTAINERS | 2 ++
1 file changed, 2 insertions
() on a range of pfn's
from two different zones and end up e.g. isolating freepages under the wrong
zone's lock.
This patch should fix the above issues.
Fixes: 5a811889de10 ("mm, compaction: use free lists to quickly locate a
migration target")
Cc:
Signed-off-by: Vlastimil Babk
ave to route every fault while
> populating via the userfaultfd handler.
>
> [1] https://lkml.org/lkml/2013/6/27/698
>
> Cc: Andrew Morton
> Cc: Arnd Bergmann
> Cc: Michal Hocko
> Cc: Oscar Salvador
> Cc: Matthew Wilcox (Oracle)
> Cc: Andrea Arcangeli
> Cc:
On 2/16/21 6:49 PM, Mike Rapoport wrote:
> Hi Vlastimil,
>
> On Tue, Feb 16, 2021 at 05:39:12PM +0100, Vlastimil Babka wrote:
>>
>>
>> So, Andrea could you please check if this fixes the original
>> fast_isolate_around() issue for you? With the VM_BUG_ON
On 2/16/21 2:11 PM, Michal Hocko wrote:
> On Tue 16-02-21 13:34:56, Vlastimil Babka wrote:
>> On 2/16/21 12:01 PM, Mike Rapoport wrote:
>> >>
>> >> I do understand that. And I am not objecting to the patch. I have to
>> >> confess I haven't
On 2/16/21 1:34 PM, Vlastimil Babka wrote:
> On 2/16/21 12:01 PM, Mike Rapoport wrote:
>>>
>>> I do understand that. And I am not objecting to the patch. I have to
>>> confess I haven't digested it yet. Any changes to early memory
>>> initialization have t
On 2/16/21 12:01 PM, Mike Rapoport wrote:
>>
>> I do understand that. And I am not objecting to the patch. I have to
>> confess I haven't digested it yet. Any changes to early memory
>> intialization have turned out to be subtle and corner cases only pop up
>> later. This is almost impossible to
On 6/19/20 4:33 PM, Greg Kroah-Hartman wrote:
> From: Andrea Arcangeli
>
> commit c444eb564fb16645c172d550359cb3d75fe8a040 upstream.
>
> Write protect anon page faults require an accurate mapcount to decide
> if to break the COW or not. This is implemented in the THP path with
>
n capture_control.
>
> Signed-off-by: Charan Teja Reddy
Acked-by: Vlastimil Babka
Thanks!
> ---
>
> changes in V1: https://lore.kernel.org/patchwork/patch/1373665/
>
> mm/compaction.c | 8
> mm/page_alloc.c | 2 ++
> 2 files changed, 10 insertions(+)
>
On 2/11/21 6:29 PM, Yang Shi wrote:
> On Thu, Feb 11, 2021 at 5:10 AM Vlastimil Babka wrote:
>> > trace_mm_shrink_slab_start(shrinker, shrinkctl, nr,
>> > freeable, delta, total_scan, priority);
>> > @@ -737,10 +708,9 @@ stati
On 2/9/21 6:46 PM, Yang Shi wrote:
> The number of deferred objects might wind up at an absurd number, and it
> results in clamping of slab objects. It is undesirable for sustaining the
> working set.
>
> So shrink deferred objects proportional to priority and cap nr_deferred to
> twice
> of cache
heir
> shrinker->nr_deferred would always be NULL. This would prevent the shrinkers
> from unregistering correctly.
>
> Remove SHRINKER_REGISTERING since we could check if shrinker is registered
> successfully by the new flag.
>
> Acked-by: Kirill Tkhai
> Signed-of
And the later patch
> will add more dereference places.
>
> So extract the dereference into a helper to make the code more readable. No
> functional change.
>
> Signed-off-by: Yang Shi
Acked-by: Vlastimil Babka
>> keep both.
>> Remove memcg_shrinker_map_size since shrinker_nr_max is also used by
>> iterating the
>> bit map.
>>
>> Acked-by: Kirill Tkhai
>> Signed-off-by: Yang Shi
Acked-by: Vlastimil Babka
>> ---
>> mm/vmscan.c | 18 +-
On 2/1/21 8:19 PM, Milan Broz wrote:
> On 01/02/2021 19:55, Vlastimil Babka wrote:
>> On 2/1/21 7:00 PM, Milan Broz wrote:
>>> On 01/02/2021 14:08, Vlastimil Babka wrote:
>>>> On 1/8/21 3:39 PM, Milan Broz wrote:
>>>>> On 08/01/2021 14:41, Michal Hocko
On 2/9/21 8:03 PM, Oscar Salvador wrote:
> On Tue, Feb 09, 2021 at 07:17:59PM +0100, David Hildenbrand wrote:
>> I was expecting some magical reason why this is still required but I am not
>> able to find a compelling one. Maybe this is really some historical
>> artifact.
>>
>> Let's see if other
is pointless.
>
> This patch removes it.
>
> Signed-off-by: Minchan Kim
Acked-by: Vlastimil Babka
> ---
> mm/page_alloc.c | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 6446778cbc6b..f8fbee73dd6d 100644
>
Randy Dunlap
> Acked-by: Sergey Senozhatsky
Acked-by: Vlastimil Babka
Thanks!
> ---
> .../admin-guide/kernel-parameters.txt | 15
> lib/test_printf.c | 8
> lib/vsprintf.c| 38 ++-
On 2/9/21 4:13 PM, Marco Elver wrote:
> We cannot rely on CONFIG_DEBUG_KERNEL to decide if we're running a
> "debug kernel" where we can safely show potentially sensitive
> information in the kernel log.
>
> Therefore, add the option CONFIG_KFENCE_REPORT_SENSITIVE to decide if we
> should add
On 2/5/21 11:28 PM, David Rientjes wrote:
> On Tue, 2 Feb 2021, Charan Teja Kalla wrote:
>
>> >> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> >> index 519a60d..531f244 100644
>> >> --- a/mm/page_alloc.c
>> >> +++ b/mm/page_alloc.c
>> >> @@ -4152,6 +4152,8 @@
on both arguments :)
> Fixes: f289041ed4 ("mm, page_poison: remove CONFIG_PAGE_POISONING_ZERO")
> Signed-off-by: David Gow
Acked-by: Vlastimil Babka
...
> Disabling PAGE_POISONING fixes this. The issue can't be reproduced with
> just PAGE_POISONING, there's clearly some
388 tests passed
> [ 501.488762] test_printf: unloaded.
>
> Signed-off-by: Yafang Shao
> Cc: David Hildenbrand
> Cc: Joe Perches
> Cc: Miaohe Lin
> Cc: Vlastimil Babka
> Cc: Andy Shevchenko
> Cc: Matthew Wilcox
Acked-by: Vlastimil Babka
The 'pfl' array should even be useful in kernel crash dump tools!
On 2/8/21 6:26 PM, Matthew Wilcox wrote:
> On Mon, Feb 08, 2021 at 06:14:38PM +0800, Yafang Shao wrote:
>> It is strange to combine "pr_err" with "INFO", so let's remove the
>> prefix completely.
>
> So is this the right thing to do? Should it be pr_info() instead?
> Many of these messages do
used=3
> fp=0x60d32ca8 flags=0x17c0010200(slab|head)
>
> - after the patch
> [ 6343.396602] Slab 0x4382e02b objects=33 used=3
> fp=0x9ae06ffc flags=0x17c0010200(slab|head)
>
> [1].
> https://lore.kernel.org/linux-mm/b9c0f2b6-e9b0-0c36-ebdd
c9487b ("mm/slub: let number of online CPUs determine the slub
page order")
Reported-by: Vincent Guittot
Reported-by: Mel Gorman
Cc:
Signed-off-by: Vlastimil Babka
---
OK, this is a 5.11 regression, so we should try to fix it by 5.12. I've also
Cc'd stable for that reason although it's n
On 2/3/21 2:41 AM, Abel Wu wrote:
>> On Feb 2, 2021, at 6:11 PM, Christoph Lameter wrote:
>>
>> On Tue, 2 Feb 2021, Abel Wu wrote:
>>
>>> Since slab_alloc_node() is the only caller of __slab_alloc(), embed
>>> __slab_alloc() into its caller to save function call overhead. This
>>> will also
s for debugging
> page migration issues. For example both alloc and free timestamps
> being the same can give hints that there is an issue with migrating
> memory, as opposed to a page just being dropped during migration.
>
> Signed-off-by: Georgi Djakov
Acked-by: Vlastimil Babka
Thanks.
On 2/2/21 10:36 PM, Timur Tabi wrote:
> If the make-printk-non-secret command-line parameter is set, then
> printk("%p") will print addresses as unhashed. This is useful for
> debugging purposes.
>
> A large warning message is displayed if this option is enabled,
> because unhashed addresses,
On 2/3/21 12:10 PM, Bharata B Rao wrote:
> On Wed, Jan 27, 2021 at 12:04:01PM +0100, Vlastimil Babka wrote:
>> Yes, but it's tricky to do the retuning safely, e.g. if freelist
>> randomization
>> is enabled, see [1].
>>
>> But as a quick fix for the regressi
On 2/1/21 7:00 PM, Milan Broz wrote:
> On 01/02/2021 14:08, Vlastimil Babka wrote:
>> On 1/8/21 3:39 PM, Milan Broz wrote:
>>> On 08/01/2021 14:41, Michal Hocko wrote:
>>>> On Wed 06-01-21 16:20:15, Milan Broz wrote:
>>>>> Hi,
>>>>>
On 1/29/21 7:04 PM, Yang Shi wrote:
>> > > @@ -209,9 +214,15 @@ static int expand_one_shrinker_info(struct
>> > > mem_cgroup *memcg,
>> > > if (!new)
>> > > return -ENOMEM;
>> > >
>> > > - /* Set all old bits, clear all new bits */
>> > > -
On 1/8/21 3:39 PM, Milan Broz wrote:
> On 08/01/2021 14:41, Michal Hocko wrote:
>> On Wed 06-01-21 16:20:15, Milan Broz wrote:
>>> Hi,
>>>
>>> we use mlockall(MCL_CURRENT | MCL_FUTURE) / munlockall() in cryptsetup code
>>> and someone tried to use it with hardened memory allocator library.
>>>
>>>
e in the proper
> order, since otherwise it would be misleading to somebody who is actively
> reading and trying to understand the logic of the code - like it
> happened to me.
>
> Signed-off-by: Oscar Salvador
> Acked-by: Johannes Weiner
Acked-by: Vlastimil Babka
> ---
>
On 1/30/21 12:45 AM, Georgi Djakov wrote:
> Collect the time when each allocation is freed, to help with memory
> analysis with kdump/ramdump.
>
> Having another timestamp when we free the page helps for debugging
> page migration issues. For example both alloc and free timestamps
> being the
On 1/28/21 12:33 AM, Yang Shi wrote:
> Currently registered shrinker is indicated by non-NULL shrinker->nr_deferred.
> This approach is fine with nr_deferred at the shrinker level, but the
> following
> patches will move MEMCG_AWARE shrinkers' nr_deferred to memcg level, so their
>
> and make
> review easier. Rename "memcg_shrinker_info" to "shrinker_info" as well.
You mean rename struct memcg_shrinker_map, not "memcg_shrinker_info", right?
>
> Signed-off-by: Yang Shi
Acked-by: Vlastimil Babka
On 1/28/21 12:33 AM, Yang Shi wrote:
> Both memcg_shrinker_map_size and shrinker_nr_max are maintained, but actually
> the
> map size can be calculated via shrinker_nr_max, so it seems unnecessary to
> keep both.
> Remove memcg_shrinker_map_size since shrinker_nr_max is also used by
> iterating
m it? Yes, sure,
> but this is not the thing we want to remember in the future, since this
> spreads modularity.
>
> And a test with heavy paging workload didn't show write lock makes things
> worse.
>
> Signed-off-by: Yang Shi
Acked-by: Vlastimil Babka
structure. So
> move the
> shrinker_maps handling code into vmscan.c for tighter integration with
> shrinker code,
> and remove the "memcg_" prefix. There is no functional change.
>
> Signed-off-by: Yang Shi
Acked-by: Vlastimil Babka
Nits below:
shrink happens
> on one
> node but end up on the other node. It seems confusing. And the following
> patch
> will remove using nid directly in do_shrink_slab(), this patch also helps
> clean up
> the code.
>
> Signed-off-by: Yang Shi
Acked-by: Vlastimil Babka
> ---
On 1/28/21 3:17 PM, Colin King wrote:
> From: Colin Ian King
>
> In the case where zpool_can_sleep_mapped(pool) returns 0
> then tmp is not allocated and tmp is then an uninitialized
> pointer. Later if entry is null, tmp is freed, hence free'ing
> an uninitialized pointer. Fix this by ensuring
syzbot+d0bd96b4696c1ef67...@syzkaller.appspotmail.com
> Fixes: dde3c6b72a16 ("mm/slub: fix a memory leak in sysfs_slab_add()")
> Signed-off-by: Wang Hai
Cc:
Acked-by: Vlastimil Babka
Double-free is worse than a rare small memory leak. Which would still be nice to
fix, b
On 1/28/21 3:19 AM, Yafang Shao wrote:
> Currently the pGp only shows the names of page flags, rather than
> the full information including section, node, zone, last cpupid and
> kasan tag. But it is not easy to parse this information manually
> because there are so many flavors. Let's interpret
On 1/28/21 3:19 AM, Yafang Shao wrote:
> It is strange to combine "pr_err" with "INFO", so let's clean them up.
> This patch is motivated by David's comment[1].
>
> - before the patch
> [ 8846.517809] INFO: Slab 0xf42a2c60 objects=33 used=3
> fp=0x60d32ca8
s=0x17c0010200
>
> While after this change, the output is,
> [ 8846.517809] INFO: Slab 0xf42a2c60 objects=33 used=3
> fp=0x60d32ca8 flags=0x17c0010200(slab|head)
>
> Reviewed-by: David Hildenbrand
> Signed-off-by: Yafang Shao
Reviewed-by: Vlastimil Ba
On 1/26/21 2:59 PM, Michal Hocko wrote:
>>
>> On 8 CPUs, I run hackbench with up to 16 groups which means 16*40
>> threads. But I raise up to 256 groups, which means 256*40 threads, on
>> the 224 CPUs system. In fact, hackbench -g 1 (with 1 group) doesn't
>> regress on the 224 CPUs system. The
The boot param and config determine the value of memcg_sysfs_enabled, which is
unused since commit 10befea91b61 ("mm: memcg/slab: use a single set of
kmem_caches for all allocations") as there are no per-memcg kmem caches
anymore.
Signed-off-by: Vlastimil Babka
---
Documentation/a
On 1/27/21 10:10 AM, Christoph Lameter wrote:
> On Tue, 26 Jan 2021, Will Deacon wrote:
>
>> > Hm, but booting the secondaries is just a software (kernel) action? They
>> > are
>> > already physically there, so it seems to me as if the cpu_present_mask is
>> > not
>> > populated correctly on
On 1/26/21 10:34 PM, Yu Zhao wrote:
> On Tue, Jan 26, 2021 at 08:13:11PM +0100, Vlastimil Babka wrote:
>> On 1/22/21 11:05 PM, Yu Zhao wrote:
>> > The "enum lru_list" parameter to add_page_to_lru_list() and
>> > add_page_to_lru_list_tail() is redundant in th
On 1/27/21 11:11 AM, Petr Mladek wrote:
> On Tue 2021-01-26 12:40:32, Steven Rostedt wrote:
>> On Tue, 26 Jan 2021 12:39:12 -0500
>> Steven Rostedt wrote:
>>
>> > On Tue, 26 Jan 2021 11:30:02 -0600
>> > Timur Tabi wrote:
>> >
On 1/22/21 11:05 PM, Yu Zhao wrote:
> The "enum lru_list" parameter to add_page_to_lru_list() and
> add_page_to_lru_list_tail() is redundant in the sense that it can
> be extracted from the "struct page" parameter by page_lru().
Okay, however, it means repeated extraction of a value that we
y: Yu Zhao
Acked-by: Vlastimil Babka
On 1/22/21 11:05 PM, Yu Zhao wrote:
> There is add_page_to_lru_list(), and move_pages_to_lru() should reuse
> it, not duplicate it.
>
> Link:
> https://lore.kernel.org/linux-mm/20201207220949.830352-2-yuz...@google.com/
> Signed-off-by: Yu Zhao
> Reviewed-by: Alex Shi
On 1/26/21 5:59 PM, Timur Tabi wrote:
> On 1/26/21 10:47 AM, Vlastimil Babka wrote:
>> Given Linus' current stance later in this thread, could we revive the idea
>> of a
>> boot time option, or at least a CONFIG (I assume a runtime toggle would be
>> t
On 1/19/21 11:38 AM, Sergey Senozhatsky wrote:
> On (21/01/19 01:47), Matthew Wilcox wrote:
> [..]
>>
>> > So maybe DUMP_PREFIX_UNHASHED can do the unhashed dump only when
>> > CONFIG_DEBUG_KERNEL=y and fallback to DUMP_PREFIX_ADDRESS otherwise?
>>
>> Distros enable CONFIG_DEBUG_KERNEL.
>
> Oh,
to be above wmark_high. Thus it avoids the
> unnecessary triggering and deferral of proactive compaction.
>
> Fix-suggested-by: Vlastimil Babka
> Signed-off-by: Charan Teja Reddy
> ---
>
> Changes in V3: Addressed suggestions from Vlastimil
>
> Changes in V2: htt
ohe Lin
Acked-by: Vlastimil Babka
> ---
> mm/workingset.c | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/mm/workingset.c b/mm/workingset.c
> index 10e96de945b3..7db8f3dad13c 100644
> --- a/mm/workingset.c
> +++ b/mm/workingset.c
>
On 1/22/21 1:48 AM, Roman Gushchin wrote:
> On Thu, Jan 21, 2021 at 06:21:54PM +0100, Vlastimil Babka wrote:
>
> Hi Vlastimil!
>
> This makes a lot of sense, however it looks a bit as an overkill to me (on
> 5.9+).
> Isn't limiting a number of pages (instead of number o
On 1/23/21 1:32 PM, Vincent Guittot wrote:
>> PowerPC PowerNV Host: (160 cpus)
>> num_online_cpus 1 num_present_cpus 160 num_possible_cpus 160 nr_cpu_ids 160
>>
>> PowerPC pseries KVM guest: (-smp 16,maxcpus=160)
>> num_online_cpus 1 num_present_cpus 16 num_possible_cpus 160 nr_cpu_ids 160
>>
>>
On 1/22/21 2:05 PM, Jann Horn wrote:
> On Thu, Jan 21, 2021 at 7:19 PM Vlastimil Babka wrote:
>> On 1/21/21 11:01 AM, Christoph Lameter wrote:
>> > On Thu, 21 Jan 2021, Bharata B Rao wrote:
>> >
>> >> > The problem is that calculate_order() is called a
On 1/22/21 9:03 AM, Vincent Guittot wrote:
> On Thu, 21 Jan 2021 at 19:19, Vlastimil Babka wrote:
>>
>> On 1/21/21 11:01 AM, Christoph Lameter wrote:
>> > On Thu, 21 Jan 2021, Bharata B Rao wrote:
>> >
>> >> > The problem is that calculate_or
On 1/21/21 11:01 AM, Christoph Lameter wrote:
> On Thu, 21 Jan 2021, Bharata B Rao wrote:
>
>> > The problem is that calculate_order() is called a number of times
>> >> > before secondary CPUs are booted and it returns 1 instead of 224.
>> > This makes the use of num_online_cpus() irrelevant for
rinking is
now also called as part of the drop_caches sysctl operation.
[1]
https://lore.kernel.org/linux-mm/CAG48ez2Qx5K1Cab-m8BdSibp6wLTip6ro4=-umr7blsegje...@mail.gmail.com/
Reported-by: Jann Horn
Signed-off-by: Vlastimil Babka
---
include/linux/slub_def.h |
On 1/12/21 12:12 AM, Jann Horn wrote:
> At first I thought that this wasn't a significant issue because SLUB
> has a reclaim path that can trim the percpu partial lists; but as it
> turns out, that reclaim path is not actually wired up to the page
> allocator's reclaim logic. The SLUB reclaim
shrink_slab() and trace_mm_shrink_slab_start().
Signed-off-by: Vlastimil Babka
---
include/linux/shrinker.h | 3 +++
include/trace/events/vmscan.h | 8 +++-
mm/vmscan.c | 14 --
3 files changed, 14 insertions(+), 11 deletions(-)
diff --git a/include
On 1/19/21 8:26 PM, David Rientjes wrote:
> On Mon, 18 Jan 2021, Charan Teja Reddy wrote:
>
>> should_proactive_compact_node() returns true when sum of the
>> weighted fragmentation score of all the zones in the node is greater
>> than the wmark_high of compaction, which then triggers the
to be above wmark_high. Thus it avoids the
> unnecessary triggering and deferral of proactive compaction.
>
> Fix-suggested-by: Vlastimil Babka
> Signed-off-by: Charan Teja Reddy
Acked-by: Vlastimil Babka
Thanks!
978d480e2843 ("kernel/sysctl: support setting sysctl parameters
> from kernel command line")
> Cc: sta...@kernel.org # v5.8-rc1+
> Signed-off-by: Xiaoming Ni
Acked-by: Vlastimil Babka
Thanks!
>
> -
> v4: According to Vlastimil Babka's recommendations
> add c
to be above wmark_high. Thus it avoids the
> unnecessary triggering and deferral of proactive compaction.
>
> Fix-suggested-by: Vlastimil Babka
> Signed-off-by: Charan Teja Reddy
Acked-by: Vlastimil Babka
But I would move fragmentation_score_zone() above
fragmentati
On 1/17/21 3:59 AM, Xiaoming Ni wrote:
> On 2021/1/12 19:42, Vlastimil Babka wrote:
>> On 1/12/21 8:24 AM, Michal Hocko wrote:
>>>>>>
>>>>>> If we're going to do a separate "patch: make process_sysctl_arg()
>>>>>> ret
a single cmpxchg_double().
Signed-off-by: Vlastimil Babka
---
Hi,
I stumbled on the optimization while pondering over what to do with the percpu
partial list memory wastage [1], but it should be useful on its own. I haven't
run any measurements yet, but eliminating cmpxchg_double() operatio
Should have CCd linux-api@, please do next time
On 1/15/21 2:03 PM, Alexander Potapenko wrote:
> This patchset adds a library that captures error reports from debugging
> tools like KASAN or KFENCE and exposes those reports to userspace via
> sysfs. Report capturing is controlled by two new types
On 1/15/21 10:03 AM, David Rientjes wrote:
> On Thu, 14 Jan 2021, Vlastimil Babka wrote:
>
>> On 1/8/21 7:46 PM, Christoph Lameter wrote:
>> > I am ok with you as a slab maintainer. I have seen some good work from
>> > you.
>> >
>> > Acked-by:
On 1/13/21 3:03 PM, Charan Teja Reddy wrote:
> should_proactive_compact_node() returns true when sum of the
> fragmentation score of all the zones in the node is greater than the
> wmark_high of compaction which then triggers the proactive compaction
> that operates on the individual zones of the
On 1/8/21 7:46 PM, Christoph Lameter wrote:
> I am ok with you as a slab maintainer. I have seen some good work from
> you.
>
> Acked-by: Christoph Lameter
Thanks!
Vlastimil
> Signed-off-by: Johannes Berg
Acked-by: Vlastimil Babka
> ---
> Perhaps instead it should go the other way around, and kmemleak
> could even use/access the stack trace that's already in there ...
> But I don't really care too much, I can just turn off slub debug
> for the kmemleak cach