> links to the adjacent zone/node.
What if two zones are adjacent? I.e. if the hole was at a boundary between two
zones.
> Fixes: 73a6e474cb37 ("mm: memmap_init: iterate over memblock regions rather
> that check each PFN")
> Signed-off-by: Mike
should be
only used for procfs and similar files, not dmesg buffer. This patch clarifies
the documentation in that regard.
Signed-off-by: Vlastimil Babka
---
Documentation/core-api/printk-formats.rst | 26 ++-
lib/vsprintf.c | 7 --
2 files changed
heir kernels with everything that's needed to decode stack
> traces later.
Looks good!
> Signed-off-by: Thorsten Leemhuis
> Reviewed-by: Qais Yousef
Acked-by: Vlastimil Babka
Thanks!
On 2/18/21 6:24 PM, Charan Teja Reddy wrote:
> I would like to start discussion about balancing the occupancy of
> memory zones in a node in the system whose imbalance may be caused by
> migration of pages to other zones during hotremove and then hotadding
> same memory. In this case there is a
On 2/17/21 6:33 PM, Vlastimil Babka wrote:
> Compaction always operates on pages from a single given zone when isolating
> both pages to migrate and freepages. Pageblock boundaries are intersected with
> zone boundaries to be safe in case zone starts or ends in the middle of
> pagebl
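The zone-clamping rule this snippet describes can be sketched as a tiny model (Python purely for illustration; `PAGEBLOCK_ORDER = 9` is an assumed value, the real one is config-dependent):

```python
PAGEBLOCK_ORDER = 9                        # assumed; config-dependent in the kernel
PAGEBLOCK_NR_PAGES = 1 << PAGEBLOCK_ORDER

def clamp_pageblock_to_zone(pfn, zone_start_pfn, zone_end_pfn):
    """Return pfn's pageblock range, clipped to the zone.

    A zone may start or end in the middle of a pageblock, so the scan
    window must never cross into a neighbouring zone, whose free lists
    are protected by a different zone lock."""
    block_start = pfn & ~(PAGEBLOCK_NR_PAGES - 1)
    block_end = block_start + PAGEBLOCK_NR_PAGES
    return max(block_start, zone_start_pfn), min(block_end, zone_end_pfn)
```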
Let's add include/uapi/ and arch/*/include/uapi/ to API/ABI section, so that
for patches modifying them, get_maintainers.pl suggests CCing linux-api@ so
people don't forget.
Reported-by: David Hildenbrand
Signed-off-by: Vlastimil Babka
---
MAINTAINERS | 2 ++
1 file changed, 2 insertions
() on a range of pfn's
from two different zones and end up e.g. isolating freepages under the wrong
zone's lock.
This patch should fix the above issues.
Fixes: 5a811889de10 ("mm, compaction: use free lists to quickly locate a
migration target")
Cc:
Signed-off-by: Vlastimil Babka
ave to route every fault while
> populating via the userfaultfd handler.
>
> [1] https://lkml.org/lkml/2013/6/27/698
>
> Cc: Andrew Morton
> Cc: Arnd Bergmann
> Cc: Michal Hocko
> Cc: Oscar Salvador
> Cc: Matthew Wilcox (Oracle)
> Cc: Andrea Arcangeli
> Cc:
On 2/16/21 6:49 PM, Mike Rapoport wrote:
> Hi Vlastimil,
>
> On Tue, Feb 16, 2021 at 05:39:12PM +0100, Vlastimil Babka wrote:
>>
>>
>> So, Andrea could you please check if this fixes the original
>> fast_isolate_around() issue for you? With the VM_BUG_ON
On 2/16/21 2:11 PM, Michal Hocko wrote:
> On Tue 16-02-21 13:34:56, Vlastimil Babka wrote:
>> On 2/16/21 12:01 PM, Mike Rapoport wrote:
>> >>
>> >> I do understand that. And I am not objecting to the patch. I have to
>> >> confess I haven't
On 2/16/21 1:34 PM, Vlastimil Babka wrote:
> On 2/16/21 12:01 PM, Mike Rapoport wrote:
>>>
>>> I do understand that. And I am not objecting to the patch. I have to
>>> confess I haven't digested it yet. Any changes to early memory
>>> initialization have t
On 2/16/21 12:01 PM, Mike Rapoport wrote:
>>
>> I do understand that. And I am not objecting to the patch. I have to
>> confess I haven't digested it yet. Any changes to early memory
>> intialization have turned out to be subtle and corner cases only pop up
>> later. This is almost impossible to
On 6/19/20 4:33 PM, Greg Kroah-Hartman wrote:
> From: Andrea Arcangeli
>
> commit c444eb564fb16645c172d550359cb3d75fe8a040 upstream.
>
> Write protect anon page faults require an accurate mapcount to decide
> if to break the COW or not. This is implemented in the THP path with
>
n capture_control.
>
> Signed-off-by: Charan Teja Reddy
Acked-by: Vlastimil Babka
Thanks!
> ---
>
> changes in V1: https://lore.kernel.org/patchwork/patch/1373665/
>
> mm/compaction.c | 8
> mm/page_alloc.c | 2 ++
> 2 files changed, 10 insertions(+)
>
On 2/11/21 6:29 PM, Yang Shi wrote:
> On Thu, Feb 11, 2021 at 5:10 AM Vlastimil Babka wrote:
>> > trace_mm_shrink_slab_start(shrinker, shrinkctl, nr,
>> > freeable, delta, total_scan, priority);
>> > @@ -737,10 +708,9 @@ stati
On 2/9/21 6:46 PM, Yang Shi wrote:
> The number of deferred objects might wind up to an absurd number, and it
> results in clamping of slab objects. It is undesirable for sustaining
> workingset.
>
> So shrink deferred objects proportional to priority and cap nr_deferred to
> twice
> of cache
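A hedged model of the two ideas in this snippet — scan work proportional to reclaim priority, and the deferred count capped at twice the freeable objects (function names here are illustrative, not the real mm/vmscan.c ones):

```python
def scan_delta(freeable, priority):
    # Lower priority value means more reclaim pressure: each pass scans
    # roughly freeable >> priority objects, per the quoted mail.
    return freeable >> priority

def cap_nr_deferred(nr_deferred, freeable):
    # Cap the carried-over deferred work at twice the freeable objects,
    # so nr_deferred cannot wind up to an absurd number.
    return min(nr_deferred, 2 * freeable)
```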
heir
> shrinker->nr_deferred would always be NULL. This would prevent the shrinkers
> from unregistering correctly.
>
> Remove SHRINKER_REGISTERING since we could check if shrinker is registered
> successfully by the new flag.
>
> Acked-by: Kirill Tkhai
> Signed-of
And the later patch
> will add more dereference places.
>
> So extract the dereference into a helper to make the code more readable. No
> functional change.
>
> Signed-off-by: Yang Shi
Acked-by: Vlastimil Babka
>> keep both.
>> Remove memcg_shrinker_map_size since shrinker_nr_max is also used by
>> iterating the
>> bit map.
>>
>> Acked-by: Kirill Tkhai
>> Signed-off-by: Yang Shi
Acked-by: Vlastimil Babka
>> ---
>> mm/vmscan.c | 18 +-
On 2/1/21 8:19 PM, Milan Broz wrote:
> On 01/02/2021 19:55, Vlastimil Babka wrote:
>> On 2/1/21 7:00 PM, Milan Broz wrote:
>>> On 01/02/2021 14:08, Vlastimil Babka wrote:
>>>> On 1/8/21 3:39 PM, Milan Broz wrote:
>>>>> On 08/01/2021 14:41, Michal Hocko
On 2/9/21 8:03 PM, Oscar Salvador wrote:
> On Tue, Feb 09, 2021 at 07:17:59PM +0100, David Hildenbrand wrote:
>> I was expecting some magical reason why this is still required but I am not
>> able to find a compelling one. Maybe this is really some historical
>> artifact.
>>
>> Let's see if other
is pointless.
>
> This patch removes it.
>
> Signed-off-by: Minchan Kim
Acked-by: Vlastimil Babka
> ---
> mm/page_alloc.c | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 6446778cbc6b..f8fbee73dd6d 100644
>
Randy Dunlap
> Acked-by: Sergey Senozhatsky
Acked-by: Vlastimil Babka
Thanks!
> ---
> .../admin-guide/kernel-parameters.txt | 15
> lib/test_printf.c | 8
> lib/vsprintf.c | 38 ++-
On 2/9/21 4:13 PM, Marco Elver wrote:
> We cannot rely on CONFIG_DEBUG_KERNEL to decide if we're running a
> "debug kernel" where we can safely show potentially sensitive
> information in the kernel log.
>
> Therefore, add the option CONFIG_KFENCE_REPORT_SENSITIVE to decide if we
> should add
On 2/5/21 11:28 PM, David Rientjes wrote:
> On Tue, 2 Feb 2021, Charan Teja Kalla wrote:
>
>> >> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> >> index 519a60d..531f244 100644
>> >> --- a/mm/page_alloc.c
>> >> +++ b/mm/page_alloc.c
>> >> @@ -4152,6 +4152,8 @@
on both arguments :)
> Fixes: f289041ed4 ("mm, page_poison: remove CONFIG_PAGE_POISONING_ZERO")
> Signed-off-by: David Gow
Acked-by: Vlastimil Babka
...
> Disabling PAGE_POISONING fixes this. The issue can't be reproduced with
> just PAGE_POISONING, there's clearly some
388 tests passed
> [ 501.488762] test_printf: unloaded.
>
> Signed-off-by: Yafang Shao
> Cc: David Hildenbrand
> Cc: Joe Perches
> Cc: Miaohe Lin
> Cc: Vlastimil Babka
> Cc: Andy Shevchenko
> Cc: Matthew Wilcox
Acked-by: Vlastimil Babka
The 'pfl' array should even be useful in kernel crash dump tools!
On 2/8/21 6:26 PM, Matthew Wilcox wrote:
> On Mon, Feb 08, 2021 at 06:14:38PM +0800, Yafang Shao wrote:
>> It is strange to combine "pr_err" with "INFO", so let's remove the
>> prefix completely.
>
> So is this the right thing to do? Should it be pr_info() instead?
> Many of these messages do
used=3
> fp=0x60d32ca8 flags=0x17c0010200(slab|head)
>
> - after the patch
> [ 6343.396602] Slab 0x4382e02b objects=33 used=3
> fp=0x9ae06ffc flags=0x17c0010200(slab|head)
>
> [1].
> https://lore.kernel.org/linux-mm/b9c0f2b6-e9b0-0c36-ebdd
c9487b ("mm/slub: let number of online CPUs determine the slub
page order")
Reported-by: Vincent Guittot
Reported-by: Mel Gorman
Cc:
Signed-off-by: Vlastimil Babka
---
OK, this is a 5.11 regression, so we should try to fix it by 5.12. I've also
Cc'd stable for that reason although it's n
On 2/3/21 2:41 AM, Abel Wu wrote:
>> On Feb 2, 2021, at 6:11 PM, Christoph Lameter wrote:
>>
>> On Tue, 2 Feb 2021, Abel Wu wrote:
>>
>>> Since slab_alloc_node() is the only caller of __slab_alloc(), embed
>>> __slab_alloc() to its caller to save function call overhead. This
>>> will also
s for debugging
> page migration issues. For example both alloc and free timestamps
> being the same can give hints that there is an issue with migrating
> memory, as opposed to a page just being dropped during migration.
>
> Signed-off-by: Georgi Djakov
Acked-by: Vlastimil Babka
Thanks.
On 2/2/21 10:36 PM, Timur Tabi wrote:
> If the make-printk-non-secret command-line parameter is set, then
> printk("%p") will print addresses as unhashed. This is useful for
> debugging purposes.
>
> A large warning message is displayed if this option is enabled,
> because unhashed addresses,
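The behaviour under discussion can be modelled in a few lines (a sketch only: the kernel hashes `%p` output with siphash keyed by a boot-time secret; the stand-in hash and the function name below are assumptions):

```python
import hashlib

def printk_p(ptr, unhashed=False):
    """Model of '%p' printing as discussed in the thread: by default the
    address is obscured through a keyed hash so kernel addresses do not
    leak into logs; the proposed make-printk-non-secret parameter would
    print the raw value instead. blake2b is only a stand-in for the
    kernel's siphash."""
    if unhashed:
        return "0x%016x" % ptr
    digest = hashlib.blake2b(ptr.to_bytes(8, "little"), digest_size=8).digest()
    return "0x%016x" % int.from_bytes(digest, "little")
```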
On 2/3/21 12:10 PM, Bharata B Rao wrote:
> On Wed, Jan 27, 2021 at 12:04:01PM +0100, Vlastimil Babka wrote:
>> Yes, but it's tricky to do the retuning safely, e.g. if freelist
>> randomization
>> is enabled, see [1].
>>
>> But as a quick fix for the regressi
On 2/1/21 7:00 PM, Milan Broz wrote:
> On 01/02/2021 14:08, Vlastimil Babka wrote:
>> On 1/8/21 3:39 PM, Milan Broz wrote:
>>> On 08/01/2021 14:41, Michal Hocko wrote:
>>>> On Wed 06-01-21 16:20:15, Milan Broz wrote:
>>>>> Hi,
>>>>>
On 1/29/21 7:04 PM, Yang Shi wrote:
>> > > @@ -209,9 +214,15 @@ static int expand_one_shrinker_info(struct
>> > > mem_cgroup *memcg,
>> > > if (!new)
>> > > return -ENOMEM;
>> > >
>> > > - /* Set all old bits, clear all new bits */
>> > > -
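The comment being removed in the quoted hunk ("Set all old bits, clear all new bits") describes the map-growing step; a minimal model of it (names illustrative, not the kernel's):

```python
def expand_shrinker_map(old_nbits, new_nbits):
    """When the per-memcg shrinker bitmap grows, every old bit is set
    (so each already-registered shrinker gets called at least once and
    can clear its own bit), while the newly added bits start cleared."""
    assert new_nbits >= old_nbits
    return [True] * old_nbits + [False] * (new_nbits - old_nbits)
```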
On 1/8/21 3:39 PM, Milan Broz wrote:
> On 08/01/2021 14:41, Michal Hocko wrote:
>> On Wed 06-01-21 16:20:15, Milan Broz wrote:
>>> Hi,
>>>
>>> we use mlockall(MCL_CURRENT | MCL_FUTURE) / munlockall() in cryptsetup code
>>> and someone tried to use it with hardened memory allocator library.
>>>
>>>
e in the proper
> order since otherwise would be misleading to somebody who is actively
> reading and trying to understand the logic of the code - like it
> happened to me.
>
> Signed-off-by: Oscar Salvador
> Acked-by: Johannes Weiner
Acked-by: Vlastimil Babka
> ---
>
On 1/30/21 12:45 AM, Georgi Djakov wrote:
> Collect the time when each allocation is freed, to help with memory
> analysis with kdump/ramdump.
>
> Having another timestamp when we free the page helps for debugging
> page migration issues. For example both alloc and free timestamps
> being the
On 1/28/21 12:33 AM, Yang Shi wrote:
> Currently registered shrinker is indicated by non-NULL shrinker->nr_deferred.
> This approach is fine with nr_deferred at the shrinker level, but the
> following
> patches will move MEMCG_AWARE shrinkers' nr_deferred to memcg level, so their
>
> and make
> review easier. Rename "memcg_shrinker_info" to "shrinker_info" as well.
You mean rename struct memcg_shrinker_map, not "memcg_shrinker_info", right?
>
> Signed-off-by: Yang Shi
Acked-by: Vlastimil Babka
On 1/28/21 12:33 AM, Yang Shi wrote:
> Both memcg_shrinker_map_size and shrinker_nr_max is maintained, but actually
> the
> map size can be calculated via shrinker_nr_max, so it seems unnecessary to
> keep both.
> Remove memcg_shrinker_map_size since shrinker_nr_max is also used by
> iterating
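The point of this snippet — that the map size is derivable from shrinker_nr_max — in a one-function sketch (names illustrative):

```python
def shrinker_map_size(shrinker_nr_max, bits_per_long=64):
    # Bytes needed for a bitmap with one bit per shrinker id, rounded up
    # to whole longs (the shape of DIV_ROUND_UP(nr, BITS_PER_LONG) *
    # sizeof(long)). Since this is pure arithmetic on shrinker_nr_max, a
    # separate memcg_shrinker_map_size variable is redundant.
    longs = (shrinker_nr_max + bits_per_long - 1) // bits_per_long
    return longs * (bits_per_long // 8)
```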
m it? Yes, sure,
> but this is not the thing we want to remember in the future, since this
> spreads modularity.
>
> And a test with heavy paging workload didn't show write lock makes things
> worse.
>
> Signed-off-by: Yang Shi
Acked-by: Vlastimil Babka
structure. So
> move the
> shrinker_maps handling code into vmscan.c for tighter integration with
> shrinker code,
> and remove the "memcg_" prefix. There is no functional change.
>
> Signed-off-by: Yang Shi
Acked-by: Vlastimil Babka
Nits below:
shrink happens
> on one
> node but end up on the other node. It seems confusing. And the following
> patch
> will remove using nid directly in do_shrink_slab(), this patch also helps
> cleanup
> the code.
>
> Signed-off-by: Yang Shi
Acked-by: Vlastimil Babka
> ---
On 1/28/21 3:17 PM, Colin King wrote:
> From: Colin Ian King
>
> In the case where zpool_can_sleep_mapped(pool) returns 0
> then tmp is not allocated and tmp is then an uninitialized
> pointer. Later if entry is null, tmp is freed, hence free'ing
> an uninitialized pointer. Fix this by ensuring
syzbot+d0bd96b4696c1ef67...@syzkaller.appspotmail.com
> Fixes: dde3c6b72a16 ("mm/slub: fix a memory leak in sysfs_slab_add()")
> Signed-off-by: Wang Hai
Cc:
Acked-by: Vlastimil Babka
Double-free is worse than a rare small memory leak. Which would still be nice to
fix, b
On 1/28/21 3:19 AM, Yafang Shao wrote:
> Currently the pGp only shows the names of page flags, rather than
> the full information including section, node, zone, last cpupid and
> kasan tag. It is not easy to parse this information manually
> because there are so many flavors. Let's interpret
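What such an interpretation looks like, as a sketch (the flag bit positions below are an illustrative subset, not the real page-flags.h layout):

```python
# Illustrative subset of page-flag bit positions; the real layout lives
# in include/linux/page-flags.h and varies by kernel config.
FLAG_NAMES = {0: "locked", 4: "lru", 9: "slab", 15: "head"}

def format_pGp(flags):
    """Sketch of what the %pGp extension under discussion does: turn a
    raw flags word into '0x...(name|name)' so readers need not decode
    the bits by hand."""
    names = [n for bit, n in sorted(FLAG_NAMES.items()) if (flags >> bit) & 1]
    return "0x%x(%s)" % (flags, "|".join(names))
```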
On 1/28/21 3:19 AM, Yafang Shao wrote:
> It is strange to combine "pr_err" with "INFO", so let's clean them up.
> This patch is motivated by David's comment[1].
>
> - before the patch
> [ 8846.517809] INFO: Slab 0xf42a2c60 objects=33 used=3
> fp=0x60d32ca8
s=0x17c0010200
>
> While after this change, the output is,
> [ 8846.517809] INFO: Slab 0xf42a2c60 objects=33 used=3
> fp=0x60d32ca8 flags=0x17c0010200(slab|head)
>
> Reviewed-by: David Hildenbrand
> Signed-off-by: Yafang Shao
Reviewed-by: Vlastimil Babka
On 1/26/21 2:59 PM, Michal Hocko wrote:
>>
>> On 8 CPUs, I run hackbench with up to 16 groups which means 16*40
>> threads. But I raise up to 256 groups, which means 256*40 threads, on
>> the 224 CPUs system. In fact, hackbench -g 1 (with 1 group) doesn't
>> regress on the 224 CPUs system. The
The boot param and config determine the value of memcg_sysfs_enabled, which is
unused since commit 10befea91b61 ("mm: memcg/slab: use a single set of
kmem_caches for all allocations") as there are no per-memcg kmem caches
anymore.
Signed-off-by: Vlastimil Babka
---
Documentation/a
On 1/27/21 10:10 AM, Christoph Lameter wrote:
> On Tue, 26 Jan 2021, Will Deacon wrote:
>
>> > Hm, but booting the secondaries is just a software (kernel) action? They
>> > are
>> > already physically there, so it seems to me as if the cpu_present_mask is
>> > not
>> > populated correctly on
On 1/26/21 10:34 PM, Yu Zhao wrote:
> On Tue, Jan 26, 2021 at 08:13:11PM +0100, Vlastimil Babka wrote:
>> On 1/22/21 11:05 PM, Yu Zhao wrote:
>> > The "enum lru_list" parameter to add_page_to_lru_list() and
>> > add_page_to_lru_list_tail() is redundant in th
On 1/27/21 11:11 AM, Petr Mladek wrote:
> On Tue 2021-01-26 12:40:32, Steven Rostedt wrote:
>> On Tue, 26 Jan 2021 12:39:12 -0500
>> Steven Rostedt wrote:
>>
>> > On Tue, 26 Jan 2021 11:30:02 -0600
>> > Timur Tabi wrote:
>> >
&
On 1/22/21 11:05 PM, Yu Zhao wrote:
> The "enum lru_list" parameter to add_page_to_lru_list() and
> add_page_to_lru_list_tail() is redundant in the sense that it can
> be extracted from the "struct page" parameter by page_lru().
Okay, however, it means repeated extraction of a value that we
y: Yu Zhao
Acked-by: Vlastimil Babka
On 1/22/21 11:05 PM, Yu Zhao wrote:
> There is add_page_to_lru_list(), and move_pages_to_lru() should reuse
> it, not duplicate it.
>
> Link:
> https://lore.kernel.org/linux-mm/20201207220949.830352-2-yuz...@google.com/
> Signed-off-by: Yu Zhao
> Reviewed-by: Alex Shi
On 1/26/21 5:59 PM, Timur Tabi wrote:
> On 1/26/21 10:47 AM, Vlastimil Babka wrote:
>> Given Linus' current stance later in this thread, could we revive the idea
>> of a
>> boot time option, or at least a CONFIG (I assume a runtime toggle would be
>> t
On 1/19/21 11:38 AM, Sergey Senozhatsky wrote:
> On (21/01/19 01:47), Matthew Wilcox wrote:
> [..]
>>
>> > So maybe DUMP_PREFIX_UNHASHED can do the unhashed dump only when
>> > CONFIG_DEBUG_KERNEL=y and fallback to DUMP_PREFIX_ADDRESS otherwise?
>>
>> Distros enable CONFIG_DEBUG_KERNEL.
>
> Oh,
to be above wmark_high. Thus it avoids the
> unnecessary trigger and deferrals of the proactive compaction.
>
> Fix-suggested-by: Vlastimil Babka
> Signed-off-by: Charan Teja Reddy
> ---
>
> Changes in V3: Addressed suggestions from Vlastimil
>
> Changes in V2: htt
ohe Lin
Acked-by: Vlastimil Babka
> ---
> mm/workingset.c | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/mm/workingset.c b/mm/workingset.c
> index 10e96de945b3..7db8f3dad13c 100644
> --- a/mm/workingset.c
> +++ b/mm/workingset.c
>
On 1/22/21 1:48 AM, Roman Gushchin wrote:
> On Thu, Jan 21, 2021 at 06:21:54PM +0100, Vlastimil Babka wrote:
>
> Hi Vlastimil!
>
> This makes a lot of sense, however it looks a bit as an overkill to me (on
> 5.9+).
> Isn't limiting a number of pages (instead of number o
On 1/23/21 1:32 PM, Vincent Guittot wrote:
>> PowerPC PowerNV Host: (160 cpus)
>> num_online_cpus 1 num_present_cpus 160 num_possible_cpus 160 nr_cpu_ids 160
>>
>> PowerPC pseries KVM guest: (-smp 16,maxcpus=160)
>> num_online_cpus 1 num_present_cpus 16 num_possible_cpus 160 nr_cpu_ids 160
>>
>>
On 1/22/21 2:05 PM, Jann Horn wrote:
> On Thu, Jan 21, 2021 at 7:19 PM Vlastimil Babka wrote:
>> On 1/21/21 11:01 AM, Christoph Lameter wrote:
>> > On Thu, 21 Jan 2021, Bharata B Rao wrote:
>> >
>> >> > The problem is that calculate_order() is called a
On 1/22/21 9:03 AM, Vincent Guittot wrote:
> On Thu, 21 Jan 2021 at 19:19, Vlastimil Babka wrote:
>>
>> On 1/21/21 11:01 AM, Christoph Lameter wrote:
>> > On Thu, 21 Jan 2021, Bharata B Rao wrote:
>> >
>> >> > The problem is that calculate_or
On 1/21/21 11:01 AM, Christoph Lameter wrote:
> On Thu, 21 Jan 2021, Bharata B Rao wrote:
>
>> > The problem is that calculate_order() is called a number of times
>> > before secondaries CPUs are booted and it returns 1 instead of 224.
>> > This makes the use of num_online_cpus() irrelevant for
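For context, the sizing heuristic being discussed is believed to have this shape (a model, not the verbatim mm/slub.c code): min_objects grows with the logarithm of the CPU count, so the value computed while only the boot CPU is online is far too small for a large machine.

```python
def min_objects_for(nr_cpus):
    """Model of SLUB's calculate_order() input: min_objects =
    4 * (fls(nr_cpus) + 1), where fls() is the position of the highest
    set bit. At early boot num_online_cpus() is still 1, so the slab
    page order gets sized for a single CPU even on a 224-CPU system."""
    fls = nr_cpus.bit_length()   # Python equivalent of fls() for nr_cpus > 0
    return 4 * (fls + 1)
```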
rinking is
now also called as part of the drop_caches sysctl operation.
[1]
https://lore.kernel.org/linux-mm/CAG48ez2Qx5K1Cab-m8BdSibp6wLTip6ro4=-umr7blsegje...@mail.gmail.com/
Reported-by: Jann Horn
Signed-off-by: Vlastimil Babka
---
include/linux/slub_def.h |
On 1/12/21 12:12 AM, Jann Horn wrote:
> At first I thought that this wasn't a significant issue because SLUB
> has a reclaim path that can trim the percpu partial lists; but as it
> turns out, that reclaim path is not actually wired up to the page
> allocator's reclaim logic. The SLUB reclaim
shrink_slab() and trace_mm_shrink_slab_start().
Signed-off-by: Vlastimil Babka
---
include/linux/shrinker.h | 3 +++
include/trace/events/vmscan.h | 8 +++-
mm/vmscan.c | 14 --
3 files changed, 14 insertions(+), 11 deletions(-)
diff --git a/include
On 1/19/21 8:26 PM, David Rientjes wrote:
> On Mon, 18 Jan 2021, Charan Teja Reddy wrote:
>
>> should_proactive_compact_node() returns true when sum of the
>> weighted fragmentation score of all the zones in the node is greater
>> than the wmark_high of compaction, which then triggers the
to be above wmark_high. Thus it avoids the
> unnecessary trigger and deferrals of the proactive compaction.
>
> Fix-suggested-by: Vlastimil Babka
> Signed-off-by: Charan Teja Reddy
Acked-by: Vlastimil Babka
Thanks!
978d480e2843 ("kernel/sysctl: support setting sysctl parameters
> from kernel command line")
> Cc: sta...@kernel.org # v5.8-rc1+
> Signed-off-by: Xiaoming Ni
Acked-by: Vlastimil Babka
Thanks!
>
> -
> v4: According to Vlastimil Babka's recommendations
> add c
to be above wmark_high. Thus it avoids the
> unnecessary trigger and deferrals of the proactive compaction.
>
> Fix-suggested-by: Vlastimil Babka
> Signed-off-by: Charan Teja Reddy
Acked-by: Vlastimil Babka
But I would move fragmentation_score_zone() above
fragmentati
On 1/17/21 3:59 AM, Xiaoming Ni wrote:
> On 2021/1/12 19:42, Vlastimil Babka wrote:
>> On 1/12/21 8:24 AM, Michal Hocko wrote:
>>>>>>
>>>>>> If we're going to do a separate "patch: make process_sysctl_arg()
>>>>>> ret
a single cmpxchg_double().
Signed-off-by: Vlastimil Babka
---
Hi,
I stumbled on the optimization while pondering over what to do with the percpu
partial list memory wastage [1], but it should be useful on its own. I haven't
run any measurements yet, but eliminating cmpxchg_double() operatio
Should have CCd linux-api@, please do next time
On 1/15/21 2:03 PM, Alexander Potapenko wrote:
> This patchset adds a library that captures error reports from debugging
> tools like KASAN or KFENCE and exposes those reports to userspace via
> sysfs. Report capturing is controlled by two new types
On 1/15/21 10:03 AM, David Rientjes wrote:
> On Thu, 14 Jan 2021, Vlastimil Babka wrote:
>
>> On 1/8/21 7:46 PM, Christoph Lameter wrote:
>> > I am ok with you as a slab maintainer. I have seen some good work from
>> > you.
>> >
>> > Acked-by:
On 1/13/21 3:03 PM, Charan Teja Reddy wrote:
> should_proactive_compact_node() returns true when sum of the
> fragmentation score of all the zones in the node is greater than the
> wmark_high of compaction which then triggers the proactive compaction
> that operates on the individual zones of the
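The weighted sum described in this thread, as a sketch (names illustrative; each zone is given as a (score, present_pages) pair):

```python
def node_fragmentation_score(zones, wmark_high):
    """Each zone's fragmentation score is scaled by the zone's share of
    the node's pages before summing, so a small-but-fragmented zone
    cannot by itself keep the node above wmark_high and trigger
    proactive compaction over and over."""
    node_pages = sum(present for _, present in zones)
    score = sum(zone_score * present // node_pages
                for zone_score, present in zones)
    return score, score > wmark_high
```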
On 1/8/21 7:46 PM, Christoph Lameter wrote:
> I am ok with you as a slab maintainer. I have seen some good work from
> you.
>
> Acked-by: Christoph Lameter
Thanks!
Vlastimil
> Signed-off-by: Johannes Berg
Acked-by: Vlastimil Babka
> ---
> Perhaps instead it should go the other way around, and kmemleak
> could even use/access the stack trace that's already in there ...
> But I don't really care too much, I can just turn off slub debug
> for the kmemleak cach
On 1/12/21 5:35 PM, Christoph Lameter wrote:
> On Tue, 12 Jan 2021, Jann Horn wrote:
>
>> [This is not something I intend to work on myself. But since I
>> stumbled over this issue, I figured I should at least document/report
>> it, in case anyone is willing to pick it up.]
>
> Well yeah all
On 1/12/21 12:12 AM, Jann Horn wrote:
> [This is not something I intend to work on myself. But since I
> stumbled over this issue, I figured I should at least document/report
> it, in case anyone is willing to pick it up.]
>
> Hi!
Hi, thanks for saving me a lot of typing!
...
> This means that
false-positives by resetting pointer tags during these accesses.
>
> Link:
> https://linux-review.googlesource.com/id/I50dd32838a666e173fe06c3c5c766f2c36aae901
> Fixes: aa1ef4d7b3f67 ("kasan, mm: reset tags when accessing metadata")
> Reported-by: Dmitry Vyukov
> Signed-off-by: A
On 1/13/21 5:09 PM, Johannes Berg wrote:
> From: Johannes Berg
>
> If kmemleak is enabled, it uses a kmem cache for its own objects.
> These objects are used to hold information kmemleak uses, including
> a stack trace. If slub_debug is also turned on, each of them has
> *another* stack trace,
On 1/12/21 10:21 AM, Faiyaz Mohammed wrote:
> Reading the sys slab alloc_calls, free_calls returns the available object
> owners, but the size of this file is limited to PAGE_SIZE
> because of the limitation of sysfs attributes, it is returning the
> partial owner info, which is not sufficient to
tplug callback takes the
slab_mutex.
To sum up, this patch removes get/put_online_cpus() calls from slab as it
should be safe without further adjustments.
Signed-off-by: Vlastimil Babka
---
mm/slab_common.c | 10 --
1 file changed, 10 deletions(-)
diff --git a/mm/slab_common.c b/mm/slab_common.
elying on N_NORMAL_MEMORY doesn't apply to SLAB, as its
setup_kmem_cache_nodes relies on N_ONLINE, and the new node is already set
there during the MEM_GOING_ONLINE callback, so no special care is needed
for SLAB.
As such, this patch removes all get/put_online_mems() usage by the slab
subsystem
able in order to succeed hotremove in the first place, and
thus the GFP_KERNEL allocated kmem_cache_node will come from elsewhere.
[1] https://lore.kernel.org/linux-mm/20190924151147.gb23...@dhcp22.suse.cz/
Signed-off-by: Vlastimil Babka
---
mm/slub.c | 28 +++-
1 fil
), but the most sane solution is not to introduce more of them, but
rather accept some wasted memory in scenarios that should be rare anyway (full
memory hot remove), as we do the same in other contexts already.
Vlastimil Babka (3):
mm, slub: stop freeing kmem_cache_node structures on node offline
mm
On 1/12/21 8:24 AM, Michal Hocko wrote:
>> > >
>> > > If we're going to do a separate "patch: make process_sysctl_arg()
>> > > return an errno instead of 0" then fine, we can discuss that. But it's
>> > > conceptually a different work from fixing this situation.
>> > > .
>> > >
>> > However,
tch when call move_freelist_head in
> fast_isolate_freepages().
>
> Link:
> http://lkml.kernel.org/r/20190118175136.31341-12-mgor...@techsingularity.net
> Fixes: 5a811889de10f1eb ("mm, compaction: use free lists to quickly locate a
> migration target")
Sounds serious
On 1/6/21 8:09 PM, Christoph Lameter wrote:
> On Wed, 6 Jan 2021, Vlastimil Babka wrote:
>
>> rather accept some wasted memory in scenarios that should be rare anyway
>> (full
>> memory hot remove), as we do the same in other contexts already. It's all RFC
>> f
On 1/8/21 8:01 PM, Paul E. McKenney wrote:
>
> Andrew pushed this to an upstream maintainer, but I have not seen these
> patches appear anywhere. So if that upstream maintainer was Linus, I can
> send a follow-up patch once we converge. If the upstream maintainer was
> in fact me, I can of
On 1/8/21 1:26 AM, Paul E. McKenney wrote:
> On Wed, Jan 06, 2021 at 03:42:12PM -0800, Paul E. McKenney wrote:
>> On Wed, Jan 06, 2021 at 01:48:43PM -0800, Andrew Morton wrote:
>> > On Tue, 5 Jan 2021 17:16:03 -0800 "Paul E. McKenney"
>> > wrote:
>> >
>> > > This is v4 of the series the
rted-by: Andrii Nakryiko
> Suggested-by: Vlastimil Babka
> Signed-off-by: Paul E. McKenney
Acked-by: Vlastimil Babka
> ---
> mm/vmalloc.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index c274ea4..e3229ff
as
> vmalloc() storage from kernel_clone() or similar, depending on the degree
> of inlining that your compiler does. This is likely more helpful than
> the earlier "non-paged (local) memory".
>
> Cc: Andrew Morton
> Cc: Joonsoo Kim
> Cc:
> Reported-by: Andrii Nakryiko
> Signed-off-by: Paul E. McKenney
Acked-by: Vlastimil Babka
id Rientjes
> Cc: Joonsoo Kim
> Cc: Andrew Morton
> Cc:
> Reported-by: Andrii Nakryiko
> [ paulmck: Convert to printing and change names per Joonsoo Kim. ]
> [ paulmck: Move slab definition per Stephen Rothwell and kbuild test robot. ]
> [ paulmck: Handle CONFIG_MMU=n case wher
the kmemcg
accounting rewrite last year.
Signed-off-by: Vlastimil Babka
---
Hi,
this might look perhaps odd with 4 people (plus Andrew) already listed, but on
closer look we have 2 (or 3 if you count SLOB) allocators and the focus of each
maintainer varies. Maybe this would be also
vec() was once useful in detecting a KSM
> charge bug, so may be worth keeping: but skip if mem_cgroup_disabled().
>
> Fixes: 9a1ac2288cf1 ("mm/memcontrol:rewrite mem_cgroup_page_lruvec()")
> Signed-off-by: Hugh Dickins
Acked-by: Vlastimil Babka
> ---
>
> include/lin