Kirill Tkhai wrote:
> Hi, Greg,
>
> good finding. See comments below.
>
> On 01.06.2020 06:22, Greg Thelen wrote:
>> Since v4.19 commit b0dedc49a2da ("mm/vmscan.c: iterate only over charged
>> shrinkers during memcg shrink_slab()") a memcg aware shrinker is
On Tue, Jul 28, 2020 at 12:32 AM Greg Thelen wrote:
>
> selftests can be built from the toplevel kernel makefile (e.g. make
> kselftest-all) or directly (make -C tools/testing/selftests all).
>
> The toplevel kernel makefile explicitly disables implicit rules with
> "MAKE
Shuah Khan wrote:
> On 8/5/20 1:36 PM, Greg Thelen wrote:
>> On Tue, Jul 28, 2020 at 12:32 AM Greg Thelen wrote:
>>>
>>> selftests can be built from the toplevel kernel makefile (e.g. make
>>> kselftest-all) or directly (make -C tools/testing/selftests all).
G_DEBUG_INFO_BTF=y.
>
> Link:
> https://lkml.kernel.org/r/caadnvqj6tmzbxvtrobueh6qa0h+q7yaskxrvvvxhqr3kbzd...@mail.gmail.com
> Cc: Michal Kubecek
> Cc: Justin Forbes
> Cc: Alex Shi
> Cc: Souptick Joarder
> Cc: Alexei Starovoitov
> Cc: Daniel Borkmann
> Cc: Josef
r, give it a
> logically significant name, and check for the possibility of page
> demotion.
Reviewed-by: Greg Thelen
> Signed-off-by: Dave Hansen
> Cc: David Rientjes
> Cc: Huang Ying
> Cc: Dan Williams
> Cc: David Hildenbrand
> Cc: osalvador
> ---
>
>
Alex Shi wrote:
> On 2020/11/11 at 3:50 AM, Andrew Morton wrote:
>> On Tue, 10 Nov 2020 08:39:24 +0530 Souptick Joarder
>> wrote:
>>
>>> On Fri, Nov 6, 2020 at 4:55 PM Alex Shi wrote:
Otherwise it causes a gcc warning:
^~~
../mm/filemap.c:830:14: warning: no pre
Axel Rasmussen wrote:
> On Mon, Nov 30, 2020 at 5:34 PM Shakeel Butt wrote:
>>
>> On Mon, Nov 30, 2020 at 3:43 PM Axel Rasmussen
>> wrote:
>> >
>> > syzbot reported[1] a use-after-free introduced in 0f818c4bc1f3. The bug
>> > is that an ongoing trace event might race with the tracepoint being
On Mon, Nov 04 2013, Andrew Morton wrote:
> On Sun, 27 Oct 2013 10:30:15 -0700 Greg Thelen wrote:
>
>> Tests various percpu operations.
>
> Could you please take a look at the 32-bit build (this is i386):
>
> lib/percpu_test.c: In function 'percpu_test_init'
On Tue, Apr 01 2014, Vladimir Davydov wrote:
> Currently to allocate a page that should be charged to kmemcg (e.g.
> threadinfo), we pass __GFP_KMEMCG flag to the page allocator. The page
> allocated is then to be freed by free_memcg_kmem_pages. Apart from
> looking asymmetrical, this also requi
On Tue, Apr 01 2014, Davidlohr Bueso wrote:
> On Tue, 2014-04-01 at 19:56 -0400, KOSAKI Motohiro wrote:
>> >> > Ah-hah, that's interesting info.
>> >> >
>> >> > Let's make the default 64GB?
>> >>
>> >> 64GB was infinity at that time, but it is no longer near infinity today. I
>> >> like
>> >> very
o eliminates any possibility of misaccounting an allocation
> going from one memcg's cache to another memcg, because now we always
> charge slabs against the memcg the cache belongs to. That's why this
> patch removes the big comment to memcg_kmem_get_cache.
>
> Signed-off-by: V
>>
>>>> Sure thing. Why not. :)
>>>
>>> *sigh* actually, the plot thickens a bit with SHMALL (total size of shm
>>> segments system wide, in pages). Currently by default:
>>>
>>> #define SHMALL (SHMMAX/getpagesize()*(SHMMNI/16))
>>&
d-off-by: Vladimir Davydov
One comment nit below, otherwise looks good to me.
Acked-by: Greg Thelen
> Cc: Johannes Weiner
> Cc: Michal Hocko
> Cc: Glauber Costa
> Cc: Christoph Lameter
> Cc: Pekka Enberg
> ---
> Changes in v2.1:
> - add missing kmalloc_order forward decl
tical over aggressive shrinking of dm bufio objects.
If the uninitialized dm_bufio_client.shrinker.flags contains
SHRINKER_NUMA_AWARE then shrink_slab() would call the dm shrinker for
each numa node rather than just once. This has been broken since 3.12.
Signed-off-by: Greg Thelen
---
drivers/md/
On Tue, Sep 16 2014, Vladimir Davydov wrote:
> Hi Suleiman,
>
> On Mon, Sep 15, 2014 at 12:13:33PM -0700, Suleiman Souhlal wrote:
>> On Mon, Sep 15, 2014 at 3:44 AM, Vladimir Davydov
>> wrote:
>> > Hi,
>> >
>> > I'd like to discuss downsides of the kmem accounting part of the memory
>> > cgroup
On Fri, Sep 19 2014, Johannes Weiner wrote:
> In a memcg with even just moderate cache pressure, success rates for
> transparent huge page allocations drop to zero, wasting a lot of
> effort that the allocator puts into assembling these pages.
>
> The reason for this is that the memcg reclaim cod
On Tue, Sep 23 2014, Johannes Weiner wrote:
> On Mon, Sep 22, 2014 at 10:52:50PM -0700, Greg Thelen wrote:
>>
>> On Fri, Sep 19 2014, Johannes Weiner wrote:
>>
>> > In a memcg with even just moderate cache pressure, success rates for
>> > transparent huge
On Mon, Apr 28 2014, Roman Gushchin wrote:
> 28.04.2014, 16:27, "Michal Hocko" :
>> The series is based on top of the current mmotm tree. Once the series
>> gets accepted I will post a patch which will mark the soft limit as
>> deprecated with a note that it will be eventually dropped. Let me kn
On Tue, May 13 2014, Michal Hocko wrote:
> force_empty has been introduced primarily to drop memory before it gets
> reparented on the group removal. This alone doesn't sound fully
> justified because reparented pages which are not in use can be reclaimed
> also later when there is a memory pres
On Wed, Mar 26 2014, Vladimir Davydov wrote:
> We don't track any random page allocation, so we shouldn't track kmalloc
> that falls back to the page allocator.
This seems like a change which will lead to confusing (and arguably
improper) kernel behavior. I prefer the behavior prior to this p
6b208e3f6e35 ("mm: memcg: remove unused node/section info from
pc->flags") deleted the lookup_cgroup_page() function but left a
prototype for it.
Kill the vestigial prototype.
Signed-off-by: Greg Thelen
---
include/linux/page_cgroup.h | 1 -
1 file changed, 1 deletion(-)
diff --
On Wed, Jul 9, 2014 at 9:36 AM, Vladimir Davydov wrote:
> Hi Tim,
>
> On Wed, Jul 09, 2014 at 08:08:07AM -0700, Tim Hockin wrote:
>> How is this different from RLIMIT_AS? You specifically mentioned it
>> earlier but you don't explain how this is different.
>
> The main difference is that RLIMIT_A
On Fri, Jun 06 2014, Michal Hocko wrote:
> Some users (e.g. Google) would like to have stronger semantic than low
> limit offers currently. The fallback mode is not desirable and they
> prefer hitting OOM killer rather than ignoring low limit for protected
> groups. There are other possible usec
On Tue, Jun 10 2014, Johannes Weiner wrote:
> On Mon, Jun 09, 2014 at 03:52:51PM -0700, Greg Thelen wrote:
>>
>> On Fri, Jun 06 2014, Michal Hocko wrote:
>>
>> > Some users (e.g. Google) would like to have stronger semantic than low
>> > limit off
On Thu, Mar 27, 2014 at 12:37 AM, Vladimir Davydov
wrote:
> Hi Greg,
>
> On 03/27/2014 08:31 AM, Greg Thelen wrote:
>> On Wed, Mar 26 2014, Vladimir Davydov wrote:
>>
>>> We don't track any random page allocation, so we shouldn't track kmalloc
>>&g
On Wed, May 28 2014, Johannes Weiner wrote:
> On Wed, May 28, 2014 at 04:21:44PM +0200, Michal Hocko wrote:
>> On Wed 28-05-14 09:49:05, Johannes Weiner wrote:
>> > On Wed, May 28, 2014 at 02:10:23PM +0200, Michal Hocko wrote:
>> > > Hi Andrew, Johannes,
>> > >
>> > > On Mon 28-04-14 14:26:41,
> migration fails. This prevents unnecessary work done by the freeing scanner
>> but
>> also encourages memory to be as compacted as possible at the end of the zone.
>>
>> Reported-by: Greg Thelen
>
> What did Greg actually report? IOW, what if any ob
On Wed, Dec 11 2013, Michal Hocko wrote:
> Hi,
> previous discussions have shown that soft limits cannot be reformed
> (http://lwn.net/Articles/555249/). This series introduces an alternative
> approach to protecting memory allocated to processes executing within
> a memory cgroup controller. It i
On Thu, Jan 30 2014, Michal Hocko wrote:
> On Wed 29-01-14 11:08:46, Greg Thelen wrote:
> [...]
>> The series looks useful. We (Google) have been using something similar.
>> In practice such a low_limit (or memory guarantee), doesn't nest very
>> well.
>>
>
On Mon, Feb 03 2014, Michal Hocko wrote:
> On Thu 30-01-14 16:28:27, Greg Thelen wrote:
>> On Thu, Jan 30 2014, Michal Hocko wrote:
>>
>> > On Wed 29-01-14 11:08:46, Greg Thelen wrote:
>> > [...]
>> >> The series looks useful. We (Google) have be
f 88 a6 00 00 00 48 8b b2 30 02 00 00 45 89 ca <4c> 39 56 18 0f 8c 36
>> 01 00 00 44 89 c9
>> f7 d9 89 cf 65 48 01 7e
>> [ 7691.528638] RIP [] mem_cgroup_move_account+0xf4/0x290
>>
>> Add the required __this_cpu_read().
>
> Sorry for my mistake and thanks for
);
Signed-off-by: Greg Thelen
---
arch/x86/include/asm/percpu.h | 3 ++-
include/linux/percpu.h| 8
lib/percpu_test.c | 2 +-
3 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index 0da5200
Tests various percpu operations.
Enable with CONFIG_PERCPU_TEST=m.
Signed-off-by: Greg Thelen
---
lib/Kconfig.debug | 9
lib/Makefile | 2 +
lib/percpu_test.c | 138 ++
3 files changed, 149 insertions(+)
create mode 100644 lib
(counter, -(int)nr_pages)
admitting that __this_cpu_add/sub() doesn't work with unsigned adjustments. But
I felt like fixing the core services to prevent this in the future.
Greg Thelen (3):
percpu counter: test module
percpu counter: cast this_cpu_sub() adjustment
memcg: use __this_cpu_su
The fix is to subtract the unsigned page count rather than adding its
negation. This only works with the "percpu counter: cast
this_cpu_sub() adjustment" patch which fixes this_cpu_sub().
Signed-off-by: Greg Thelen
---
mm/memcontrol.c | 2 +-
1 file changed, 1 insertion(+), 1 deletio
On Sun, Oct 27 2013, Tejun Heo wrote:
> On Sun, Oct 27, 2013 at 05:04:29AM -0700, Andrew Morton wrote:
>> On Sun, 27 Oct 2013 07:22:55 -0400 Tejun Heo wrote:
>>
>> > We probably want to cc stable for this and the next one. How should
>> > these be routed? I can take these through percpu tree o
On Sun, Oct 27 2013, Greg Thelen wrote:
> this_cpu_sub() is implemented as negation and addition.
>
> This patch casts the adjustment to the counter type before negation to
> sign extend the adjustment. This helps in cases where the counter
> type is wider than an unsigned
ogs, and test module description now
referring to per cpu operations rather than per cpu counters.
- move small test code update from patch 2 to patch 1 (where the test is
introduced).
Greg Thelen (3):
percpu: add test module for various percpu operations
percpu: fix this_cpu_sub() subtrahend c
);
Signed-off-by: Greg Thelen
Acked-by: Tejun Heo
---
arch/x86/include/asm/percpu.h | 3 ++-
include/linux/percpu.h| 8
2 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index 0da5200..b3e18f8 100644
--- a/arch
Tests various percpu operations.
Enable with CONFIG_PERCPU_TEST=m.
Signed-off-by: Greg Thelen
Acked-by: Tejun Heo
---
lib/Kconfig.debug | 9
lib/Makefile | 2 +
lib/percpu_test.c | 138 ++
3 files changed, 149 insertions
The fix is to subtract the unsigned page count rather than adding its
negation. This only works once "percpu: fix this_cpu_sub() subtrahend
casting for unsigneds" is applied to fix this_cpu_sub().
Signed-off-by: Greg Thelen
Acked-by: Tejun Heo
---
mm/memcontrol.c | 2 +-
1 file cha
s got sorted out by re-introducing the old test within
> the racy critical sections.
>
> This patch introduces ipc_valid_object() to consolidate the way we cope with
> IPC_RMID races by using the same abstraction across the API implementation.
>
> Signed-off-by: Rafael Aquini
Acked
uot;)
Fixes: 2caacaa82a51 ("ipc,shm: shorten critical region for shmctl")
Signed-off-by: Greg Thelen
Cc: # 3.10.17+ 3.11.6+
---
ipc/shm.c | 28 +++-
1 file changed, 23 insertions(+), 5 deletions(-)
diff --git a/ipc/shm.c b/ipc/shm.c
index d69739610fd4..
On Thu, Aug 07 2014, Johannes Weiner wrote:
> On Thu, Aug 07, 2014 at 03:08:22PM +0200, Michal Hocko wrote:
>> On Mon 04-08-14 17:14:54, Johannes Weiner wrote:
>> > Instead of passing the request size to direct reclaim, memcg just
>> > manually loops around reclaiming SWAP_CLUSTER_MAX pages until
mho...@kernel.org wrote:
> From: Michal Hocko
>
> Journal transaction might fail prematurely because the frozen_buffer
> is allocated by GFP_NOFS request:
> [ 72.440013] do_get_write_access: OOM for frozen_buffer
> [ 72.440014] EXT4-fs: ext4_reserve_inode_write:4729: aborting transaction:
>
commit f61c42a7d911 ("memcg: remove tasks/children test from
mem_cgroup_force_empty()") removed memory reparenting from the function.
Fix the function's comment.
Signed-off-by: Greg Thelen
---
mm/memcontrol.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/
On Fri, Oct 31 2014, Junjie Mao wrote:
> When choosing a random address, the current implementation does not take into
> account the reversed space for .bss and .brk sections. Thus the relocated
> kernel
> may overlap other components in memory. Here is an example of the overlap
> from a
> x86_6
On Mon, Nov 17 2014, Greg Thelen wrote:
[...]
> Given that bss and brk are nobits (i.e. only ALLOC) sections, does
> file_offset make sense as a load address. This fails with gold:
>
> $ git checkout v3.18-rc5
> $ make # with gold
> [...]
> ..bss and .brk lack common fi
On Mon, Mar 09 2015, David Rientjes wrote:
> If __get_user_pages() is faulting a significant number of hugetlb pages,
> usually as the result of mmap(MAP_LOCKED), it can potentially allocate a
> very large amount of memory.
>
> If the process has been oom killed, this will cause a lot of memory t
> allocating user memory if TIF_MEMDIE is set"), hugetlb page faults now
> terminate when the process has been oom killed.
>
> Cc: Greg Thelen
> Cc: Naoya Horiguchi
> Cc: Davidlohr Bueso
> Acked-by: "Kirill A. Shutemov"
> Signed-off-by: David Rientjes
L
On Fri, Feb 6, 2015 at 6:17 AM, Tejun Heo wrote:
> Hello, Greg.
>
> On Thu, Feb 05, 2015 at 04:03:34PM -0800, Greg Thelen wrote:
>> So this is a system which charges all cgroups using a shared inode
>> (recharge on read) for all resident pages of that shared inode. The
On Wed, Feb 04 2015, Tejun Heo wrote:
> Hello,
>
> On Tue, Feb 03, 2015 at 03:30:31PM -0800, Greg Thelen wrote:
>> If a machine has several top level memcg trying to get some form of
>> isolation (using low, min, soft limit) then a shared libc will be
>> moved to th
On Thu, Feb 05 2015, Tejun Heo wrote:
> Hello, Greg.
>
> On Wed, Feb 04, 2015 at 03:51:01PM -0800, Greg Thelen wrote:
>> I think the linux-next low (and the TBD min) limits also have the
>> problem for more than just the root memcg. I'm thinking of a 2M file
>&
On Thu, Feb 05 2015, Tejun Heo wrote:
> Hey,
>
> On Thu, Feb 05, 2015 at 02:05:19PM -0800, Greg Thelen wrote:
>> >A
>> >+-B(usage=2M lim=3M min=2M hosted_usage=2M)
>> > +-C (usage=0 lim=2M min=1M shared_usage=2M)
>> >
On Mon, Feb 2, 2015 at 11:46 AM, Tejun Heo wrote:
> Hey,
>
> On Mon, Feb 02, 2015 at 10:26:44PM +0300, Konstantin Khlebnikov wrote:
>
>> Keeping shared inodes in common ancestor is reasonable.
>> We could schedule asynchronous moving when somebody opens or mmaps
>> inode from outside of its curren
On Tue, Feb 10, 2015 at 6:19 PM, Tejun Heo wrote:
> Hello, again.
>
> On Sat, Feb 07, 2015 at 09:38:39AM -0500, Tejun Heo wrote:
>> If we can argue that memcg and blkcg having different views is
>> meaningful and characterize and justify the behaviors stemming from
>> the deviation, sure, that'd b
On Wed, Feb 11, 2015 at 12:33 PM, Tejun Heo wrote:
[...]
>> page count to throttle based on blkcg's bandwidth. Note: memcg
>> doesn't yet have dirty page counts, but several of us have made
>> attempts at adding the counters. And it shouldn't be hard to get them
>> merged.
>
> Can you please pos
On Thu, Jan 29 2015, Tejun Heo wrote:
> Hello,
>
> Since the cgroup writeback patchset[1] have been posted, several
> people brought up concerns about the complexity of allowing an inode
> to be dirtied against multiple cgroups is necessary for the purpose of
> writeback and it is true that a sig
Theodore Ts'o wrote:
> The following changes since commit 243d50678583100855862bc084b8b307eea67f68:
>
> Merge branch 'overlayfs-linus' of
> git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs (2016-03-22
> 13:11:15 -0700)
>
n> are available in the git repository at:
>
> git://git.ker
Vladimir Davydov wrote:
> Currently, to charge a page to kmemcg one should use alloc_kmem_pages
> helper. When the page is not needed anymore it must be freed with
> free_kmem_pages helper, which will uncharge the page before freeing it.
> Such a design is acceptable for thread info pages and kma
rcpu_counter. And the
percpu_counter spinlocks are more heavyweight than is required.
It probably also makes sense to use exact dirty and writeback counters
in memcg oom reports. But that is saved for later.
Cc: sta...@vger.kernel.org # v4.16+
Signed-off-by: Greg Thelen
---
Changelog since v1
Andrew Morton wrote:
> On Thu, 7 Mar 2019 08:56:32 -0800 Greg Thelen wrote:
>
>> Since commit a983b5ebee57 ("mm: memcontrol: fix excessive complexity in
>> memory.stat reporting") memcg dirty and writeback counters are managed
>> as:
>> 1) per-memcg p
Johannes Weiner wrote:
> On Thu, Mar 07, 2019 at 08:56:32AM -0800, Greg Thelen wrote:
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -3880,6 +3880,7 @@ struct wb_domain *mem_cgroup_wb_domain(struct
>> bdi_writeback *wb)
>> * @pheadroom: out paramete
delete all possible compression formats.
Once patched usr/initramfs_data.cpio.gz and friends are deleted by
"make clean".
Fixes: 9e3596b0c653 ("kbuild: initramfs cleanup, set target from Kconfig")
Signed-off-by: Greg Thelen
---
usr/Makefile | 3 +++
1 file changed, 3
transaction and
use it for stat updates")
Signed-off-by: Greg Thelen
---
include/linux/backing-dev.h | 2 +-
include/linux/fs.h | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index c28a47cbe355..f9b02
es cpus going offline,
so no need for that overhead from percpu_counter. And the
percpu_counter spinlocks are more heavyweight than is required.
It probably also makes sense to include exact dirty and writeback
counters in memcg oom reports. But that is saved for later.
Signed-off-by: Gre
akenly
set.
Relocate endif to balance the newly added -record-mcount check.
Fixes: 96f60dfa5819 ("trace: Use -mcount-record for dynamic ftrace")
Signed-off-by: Greg Thelen
---
scripts/Makefile.build | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/Makefile.build b/sc
.
RDMA_USER_CM_CMD_JOIN_MCAST is interface for AF_IB multicast.
And add a buffer length safety check.
Fixes: 5bc2b7b397b0 ("RDMA/ucma: Allow user space to specify AF_IB when joining
multicast")
Signed-off-by: Greg Thelen
---
drivers/infiniband/core/ucma.c | 10 +-
1 file changed, 9 insert
On Thu, Mar 29, 2018 at 9:24 PM, Greg Thelen wrote:
> syzbot discovered that ucma_join_ip_multicast() mishandles AF_IB request
> addresses. If an RDMA_USER_CM_CMD_JOIN_IP_MCAST request has
> cmd.addr.sa_family=AF_IB then ucma_join_ip_multicast() reads beyond the
> end of its cmd.addr
j/memory.${LIM}"
echo 1G > "${CGPATH}/i/K/memory.${LIM}"
echo 2G > "${CGPATH}/L/memory.${LIM}"
echo 4G > "${CGPATH}/L/memory.max"
echo 3G > "${CGPATH}/L/m/memory.${LIM}"
echo 1G > "${CGPATH}/L/N/memory.${LIM}"
vmtouch
When targeting reclaim to a memcg, protect that memcg from reclaim if
memory consumption at every level is below the respective memory.low.
Signed-off-by: Greg Thelen
---
mm/memcontrol.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
The new memory.min limit is similar to memory.low, except there is no
bypassing it when reclaim is desperate. Prefer oom kills over reclaiming
memory below memory.min. Sharing more code with memory_cgroup_low() is
possible, but the idea is posted here for simplicity.
Signed-off-by: Greg Thelen
On Mon, Apr 23, 2018 at 3:38 AM Roman Gushchin wrote:
> Hi, Greg!
> On Sun, Apr 22, 2018 at 01:26:10PM -0700, Greg Thelen wrote:
> > Roman's previously posted memory.low,min patches add per memcg effective
> > low limit to detect overcommitment of parental limits. But
Mark memcg1_events static: it's only used by memcontrol.c.
And mark it const: it's not modified.
Signed-off-by: Greg Thelen
---
mm/memcontrol.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2bd3df3d101a..c9c7e5ea0e2f 10064
INFINIBAND_ADDR_TRANS depends on INFINIBAND. So there's no need for
options which depend INFINIBAND_ADDR_TRANS to also depend on INFINIBAND.
Remove the unnecessary INFINIBAND depends.
Signed-off-by: Greg Thelen
---
drivers/infiniband/ulp/srpt/Kconfig | 2 +-
drivers/nvme/host/Kc
On Tue, May 1, 2018 at 1:48 PM Jason Gunthorpe wrote:
> On Tue, May 01, 2018 at 03:08:57AM +0000, Greg Thelen wrote:
> > On Mon, Apr 30, 2018 at 4:35 PM Jason Gunthorpe wrote:
> >
> > > On Wed, Apr 25, 2018 at 03:33:39PM -0700, Greg Thelen wrote:
> > >
Allow INFINIBAND without INFINIBAND_ADDR_TRANS.
Signed-off-by: Greg Thelen
Cc: Tarick Bedeir
Change-Id: I6fbbf8a432e467710fa65e4904b7d61880b914e5
---
drivers/infiniband/Kconfig | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/Kconfig b/drivers
On Sat, Apr 14, 2018 at 8:13 AM Dennis Dalessandro <
dennis.dalessan...@intel.com> wrote:
> On 4/13/2018 1:27 PM, Greg Thelen wrote:
> > Allow INFINIBAND without INFINIBAND_ADDR_TRANS.
> >
> > Signed-off-by: Greg Thelen
> >
Allow INFINIBAND without INFINIBAND_ADDR_TRANS.
Signed-off-by: Greg Thelen
Cc: Tarick Bedeir
---
drivers/infiniband/Kconfig | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/Kconfig b/drivers/infiniband/Kconfig
index ee270e065ba9..2a972ed6851b 100644
On Sun, Apr 15, 2018 at 5:06 AM Christoph Hellwig wrote:
> On Fri, Apr 13, 2018 at 12:06:44AM -0700, Greg Thelen wrote:
> > Allow INFINIBAND without INFINIBAND_ADDR_TRANS.
> Why? We are pushing everyone heavily to use RDMA/CM, so making it
> optional seems rather counter-intuit
mem_caches().
This leak only affects destroyed SLAB_ACCOUNT kmem caches when kasan is
enabled. So I don't think it's worth patching stable kernels.
Signed-off-by: Greg Thelen
---
include/linux/kasan.h | 4 ++--
mm/kasan/kasan.c | 2 +-
mm/kasan/quarantine.c | 1 +
mm/slab_com
mcg accounted
object
[ 124.456789] kmem_cache_destroy test_cache: Slab cache still has objects
Kernels with fix [1] don't have the "Slab cache still has objects"
warning or the underlying leak.
The new test runs and passes in the default (root) memcg, though in the
root memcg it won't
Michal Hocko wrote:
> On Tue 22-09-15 15:16:32, Greg Thelen wrote:
>> mem_cgroup_read_stat() returns a page count by summing per cpu page
>> counters. The summing is racy wrt. updates, so a transient negative sum
>> is possible. Callers don't want negative values
Dave Hansen wrote:
> I've been seeing some strange behavior with 4.3-rc1 kernels on my Ubuntu
> 14.04.3 system. The system will run fine for a few hours, but suddenly
> start becoming horribly I/O bound. A compile of perf for instance takes
> 20-30 minutes and the compile seems entirely I/O bou
Greg Thelen wrote:
> Dave Hansen wrote:
>
>> I've been seeing some strange behavior with 4.3-rc1 kernels on my Ubuntu
>> 14.04.3 system. The system will run fine for a few hours, but suddenly
>> start becoming horribly I/O bound. A compile of perf for instance t
Andrew Morton wrote:
> On Tue, 22 Sep 2015 15:16:32 -0700 Greg Thelen wrote:
>
>> mem_cgroup_read_stat() returns a page count by summing per cpu page
>> counters. The summing is racy wrt. updates, so a transient negative sum
>> is possible. Callers do
Commit 733a572e66d2 ("memcg: make mem_cgroup_read_{stat|event}() iterate
possible cpus instead of online") removed the last use of the per memcg
pcp_counter_lock but forgot to remove the variable.
Kill the vestigial variable.
Signed-off-by: Greg Thelen
---
include/linux/memcontrol.h
Andrew Morton wrote:
> On Tue, 22 Sep 2015 17:42:13 -0700 Greg Thelen wrote:
>
>> Andrew Morton wrote:
>>
>> > On Tue, 22 Sep 2015 15:16:32 -0700 Greg Thelen wrote:
>> >
>> >> mem_cgroup_read_stat() returns a page count by summing per cpu page
&
but
larger files use the oom killer to avoid ENOMEM.
Memory overcommit requires use of the oom killer to select a victim
regardless of file size.
Enable oom killer for small seq_buf_alloc() allocations.
Signed-off-by: David Rientjes
Signed-off-by: Greg Thelen
---
fs/seq_file.c | 11 -
Dave Hansen wrote:
> On 09/17/2015 11:09 PM, Greg Thelen wrote:
>> I'm not denying the issue, bug the WARNING splat isn't necessarily
>> catching a problem. The corresponding code comes from your debug patch:
>> +
>> WARN_ONCE(__this_cpu_read(mem
emory.stat shouldn't show confusing negative usage.
- tree_usage() already avoids negatives.
Avoid returning negative page counts from mem_cgroup_read_stat() and
convert it to unsigned.
Signed-off-by: Greg Thelen
---
mm/memcontrol.c | 30 ++
1 file changed, 18 in
:13: error:
'pnv_ioda_setup_bus_dma' defined but not used
Add CONFIG_IOMMU_API ifdef guard to avoid dead code.
Fixes: dc3d8f85bb57 ("powerpc/powernv/pci: Re-work bus PE configuration")
Signed-off-by: Greg Thelen
---
arch/powerpc/platforms/powernv/pci-ioda.c | 2 ++
1 file changed, 2 insertions
eck_me'
defined but not used [-Wunused-function]
Add CONFIG_PM_SLEEP ifdef guard to avoid dead code.
Fixes: e086ba2fccda ("e1000e: disable s0ix entry and exit flows for ME systems")
Signed-off-by: Greg Thelen
---
drivers/net/ethernet/intel/e1000e/netdev.c | 2 ++
1 file cha
:13: error:
'pnv_ioda_setup_bus_dma' defined but not used
Move pnv_ioda_setup_bus_dma() under CONFIG_IOMMU_API to avoid dead code.
Fixes: dc3d8f85bb57 ("powerpc/powernv/pci: Re-work bus PE configuration")
Signed-off-by: Greg Thelen
---
arch/powerpc/platforms/powernv/pci-ioda.c | 26 +
Yang Shi wrote:
> On Sun, May 31, 2020 at 8:22 PM Greg Thelen wrote:
>>
>> Since v4.19 commit b0dedc49a2da ("mm/vmscan.c: iterate only over charged
>> shrinkers during memcg shrink_slab()") a memcg aware shrinker is only
>> called when the per-memcg per
SeongJae Park wrote:
> From: SeongJae Park
>
> This commit adds documents for DAMON under
> `Documentation/admin-guide/mm/damon/` and `Documentation/vm/damon/`.
>
> Signed-off-by: SeongJae Park
> ---
> Documentation/admin-guide/mm/damon/guide.rst | 157 ++
> Documentation/admin-guide/m
SeongJae Park wrote:
> From: SeongJae Park
>
> This commit introduces a reference implementation of the address space
> specific low level primitives for the virtual address space, so that
> users of DAMON can easily monitor the data accesses on virtual address
> spaces of specific processes by
Oliver O'Halloran wrote:
> On Mon, Jun 15, 2020 at 9:33 AM Greg Thelen wrote:
>>
>> Commit dc3d8f85bb57 ("powerpc/powernv/pci: Re-work bus PE
>> configuration") removed a couple pnv_ioda_setup_bus_dma() calls. The
>> only remaining calls are beh
irect side effect of "make -R". This enables arbitrary makefile
nesting.
Signed-off-by: Greg Thelen
---
tools/testing/selftests/Makefile | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
in
Dave Hansen wrote:
> From: Keith Busch
>
> Migrating pages had been allocating the new page before it was actually
> needed. Subsequent operations may still fail, which would have to handle
> cleaning up the newly allocated page when it was never used.
>
> Defer allocating the page until we are