> kmem_cache_iter/open_coded_iter:OK
> #145 kmem_cache_iter:OK
> Summary: 1/3 PASSED, 0 SKIPPED, 0 FAILED
>
> [...]
Here is the summary with links:
- selftests/bpf: Fix kmem_cache iterator draining
https://git.kernel.org/bpf/bpf-next/c/38d976c32d85
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
On Mon, Apr 28, 2025 at 1:06 PM Namhyung Kim wrote:
>
> Hello,
>
> On Mon, Apr 28, 2025 at 06:02:54PM +0000, T.J. Mercier wrote:
> > The closing parenthesis around the read syscall is misplaced, causing
> > single byte reads from the iterator instead of buf sized reads. While
> > the end result is the same, many more read calls than necessary are
> > performed.
The closing parenthesis around the read syscall is misplaced, causing
single byte reads from the iterator instead of buf sized reads. While
the end result is the same, many more read calls than necessary are
performed.
$ tools/testing/selftests/bpf/vmtest.sh "./test_progs -t kmem_cache_iter"
kmem_cache_iter/open_coded_iter:OK
#145 kmem_cache_iter:OK
Summary: 1/3 PASSED, 0 SKIPPED, 0 FAILED
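A minimal sketch of the mistake being fixed (illustrative only, not the
selftest's exact source; iter_fd stands for an open iterator fd):

        char buf[256];

        /* Buggy: the closing paren lands inside read()'s size argument,
         * so the third argument is the boolean (sizeof(buf) > 0) == 1
         * and the iterator is drained one byte at a time. */
        while (read(iter_fd, buf, sizeof(buf) > 0))
                ;

        /* Fixed: compare read()'s return value instead. */
        while (read(iter_fd, buf, sizeof(buf)) > 0)
                ;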
Committer: Peter Zijlstra
CommitterDate: Tue, 16 Mar 2021 21:44:42 +01:00
perf core: Add a kmem_cache for struct perf_event
The kernel can allocate a lot of struct perf_event when profiling. For
example, 256 cpu x 8 events x 20 cgroups = 40K instances of the struct
would be allocated on a large system.
The size of struct perf_event in my setup is 1152 bytes. As it's
allocated by kmalloc, the actual allocation size would be rounded up
to 2K.
Then there's 896 bytes (~43%) of waste per instance, resulting in
~35MB total with 40K instances. We can create a dedicated kmem_cache
to avoid such big unnecessary memory consumption.
With this change, I can see below [...]
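A minimal sketch of the approach described above (illustrative; the
function and variable names are not taken from the actual commit):

        #include <linux/slab.h>

        /* A dedicated cache is sized exactly to the object, so a
         * 1152-byte struct no longer gets rounded up to kmalloc's
         * 2K bucket. */
        static struct kmem_cache *perf_event_cache;

        static int __init perf_event_cache_init(void)
        {
                perf_event_cache = kmem_cache_create("perf_event",
                                sizeof(struct perf_event), 0,
                                SLAB_HWCACHE_ALIGN, NULL);
                return perf_event_cache ? 0 : -ENOMEM;
        }

        /* allocation and free sites then become: */
        /*   event = kmem_cache_zalloc(perf_event_cache, GFP_KERNEL); */
        /*   kmem_cache_free(perf_event_cache, event); */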
On Tue, Jul 28, 2020 at 08:56:41PM +0800, Muchun Song wrote:
> On Mon, Jul 27, 2020 at 10:12 PM Greg Kroah-Hartman wrote:
> >
> > From: Muchun Song
> >
> > commit d38a2b7a9c939e6d7329ab92b96559ccebf7b135 upstream.
> >
> > If the kmem_cache refcount is greater than one, we should not mark the
> > root kmem_cache as dying.
From: Muchun Song
commit d38a2b7a9c939e6d7329ab92b96559ccebf7b135 upstream.
If the kmem_cache refcount is greater than one, we should not mark the
root kmem_cache as dying. If we mark the root kmem_cache dying
incorrectly, the non-root kmem_cache can never be destroyed. It
resulted in a memory leak when the memcg was destroyed.
On Thu, Jul 16, 2020 at 07:21:42PM +0200, Vlastimil Babka wrote:
> On 7/16/20 6:51 PM, Muchun Song wrote:
> > If the kmem_cache refcount is greater than one, we should not
> > mark the root kmem_cache as dying. If we mark the root kmem_cache
> > dying incorrectly, the non-root kmem_cache can never be destroyed.
If the kmem_cache refcount is greater than one, we should not
mark the root kmem_cache as dying. If we mark the root kmem_cache
dying incorrectly, the non-root kmem_cache can never be destroyed.
It resulted in a memory leak when the memcg was destroyed. We can use
the following steps to reproduce.
1) [...]
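The logic of the fix, as a simplified sketch (not the upstream diff;
error handling and locking details elided):

        /* kmem_cache_create() may alias an existing cache and only bump
         * its refcount, so destroy must not mark the root cache dying
         * until the last reference is being dropped. */
        void kmem_cache_destroy(struct kmem_cache *s)
        {
                if (unlikely(!s))
                        return;

                mutex_lock(&slab_mutex);
                s->refcount--;
                if (s->refcount) {
                        /* other users remain: do NOT mark it dying */
                        mutex_unlock(&slab_mutex);
                        return;
                }
                mutex_unlock(&slab_mutex);

                flush_memcg_workqueue(s);   /* marks dying, drains work */
                /* ... shutdown of root and per-memcg caches ... */
        }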
On Thu, Jul 16, 2020 at 12:50:22AM +0800, Muchun Song wrote:
> If the kmem_cache refcount is greater than one, we should not
> mark the root kmem_cache as dying. If we mark the root kmem_cache
> dying incorrectly, the non-root kmem_cache can never be destroyed.
> It resulted in a memory leak when the memcg was destroyed.
On Wed, Jul 15, 2020 at 01:32:00PM +0200, Vlastimil Babka wrote:
> On 7/7/20 8:27 AM, Muchun Song wrote:
> > If the kmem_cache refcount is greater than one, we should not
> > mark the root kmem_cache as dying. If we mark the root kmem_cache
> > dying incorrectly, the non-root kmem_cache can never be destroyed.
On 7/15/20 5:13 PM, Muchun Song wrote:
> On Wed, Jul 15, 2020 at 7:32 PM Vlastimil Babka wrote:
>>
>> On 7/7/20 8:27 AM, Muchun Song wrote:
>> > If the kmem_cache refcount is greater than one, we should not
>> > mark the root kmem_cache as dying.
Sorry I missed this email.
On Mon, Jul 6, 2020 at 11:28 PM Muchun Song wrote:
>
> If the kmem_cache refcount is greater than one, we should not
> mark the root kmem_cache as dying. If we mark the root kmem_cache
> dying incorrectly, the non-root kmem_cache can never be destroyed.
Committer: Peter Zijlstra
CommitterDate: Wed, 08 Jul 2020 11:38:55 +02:00
perf/x86/intel/lbr: Create kmem_cache for the LBR context data
A new kmem_cache method is introduced to allocate the PMU specific data
task_ctx_data, which requires the PMU specific code to create a
kmem_cache.
Currently, the task_ctx_data is only used by the Intel LBR call stack
feature, which was introduced with Haswell. [...]
Committer: Peter Zijlstra
CommitterDate: Wed, 08 Jul 2020 11:38:55 +02:00
perf/core: Use kmem_cache to allocate the PMU specific data
Currently, the PMU specific data task_ctx_data is allocated by the
function kzalloc() in the perf generic code. When there is no specific
alignment requirement for the [...]
As a result, both the generic structure and the PMU specific structure
will become bigger. Besides, extra function calls are added when
allocating/freeing the buffer. This option will increase both the
space overhead and the CPU overhead.
- The third option is to use a kmem_cache to allocate a buffer [...]
From: Kan Liang
A new kmem_cache method is introduced to allocate the PMU specific data
task_ctx_data, which requires the PMU specific code to create a
kmem_cache.
Currently, the task_ctx_data is only used by the Intel LBR call stack
feature, which was introduced with Haswell. The kmem_cache [...]
Track the kmem_cache used for non-page KVM MMU memory caches instead of
passing in the associated kmem_cache when filling the cache. This will
allow consolidating code and other cleanups.
No functional change intended.
Reviewed-by: Ben Gardon
Signed-off-by: Sean Christopherson
---
arch/x86 [...]
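A rough sketch of the shape of that change (field and helper names
approximated from the description above, not copied from the diff):

        /* Before, callers passed a kmem_cache on every fill; after, the
         * cache tracks it, so the topup helper needs no extra argument. */
        struct kvm_mmu_memory_cache {
                int nobjs;
                struct kmem_cache *kmem_cache;  /* set once at init */
                void *objects[40];
        };

        static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc,
                                          int min)
        {
                void *obj;

                while (mc->nobjs < min) {
                        obj = kmem_cache_zalloc(mc->kmem_cache, GFP_KERNEL);
                        if (!obj)
                                return -ENOMEM;
                        mc->objects[mc->nobjs++] = obj;
                }
                return 0;
        }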
On Fri, Jun 5, 2020 at 2:39 PM Sean Christopherson wrote:
>
> Track the kmem_cache used for non-page KVM MMU memory caches instead of
> passing in the associated kmem_cache when filling the cache. This will
> allow consolidating code and other cleanups.
>
> No functional change intended.
On Thu, Aug 29, 2019 at 01:49:53PM +0800, Peikan Tsai wrote:
> > > [snip]
> > > > > The allocated size for each binder_thread is 512 bytes by kzalloc.
> > > > > Because the size of binder_thread is fixed and it's only 304 bytes.
On Thu, Aug 29, 2019 at 01:49:53PM +0800, Peikan Tsai wrote:
> Hi,
>
> The allocated size for each binder_thread is 512 bytes by kzalloc.
> Because the size of binder_thread is fixed and it's only 304 bytes,
> creating a kmem_cache for the binder_thread will save 208 bytes per
> binder_thread.

Are you _sure_ it really will save that much memory? You want to do
allocations based on a nice alignment for lots of good reasons,
especially for something that needs quick accesses.

Did you test your change on a system that [...]
Hi,
The allocated size for each binder_thread is 512 bytes by kzalloc.
Because the size of binder_thread is fixed and it's only 304 bytes,
creating a kmem_cache for the binder_thread will save 208 bytes per
binder_thread.
Signed-off-by: Peikan Tsai
---
drivers/android/binder.c
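One way to check the rounding claim empirically (illustrative snippet,
not part of the patch): ksize() reports the slab bucket actually
backing an allocation.

        struct binder_thread *thread = kzalloc(sizeof(*thread), GFP_KERNEL);

        if (thread)
                /* e.g. "requested 304 bytes, backed by 512" */
                pr_info("requested %zu bytes, backed by %zu\n",
                        sizeof(*thread), ksize(thread));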
 	size_t capacity;
+	struct kmem_cache *cache;
 	u8 *sdata;
 };
diff --git a/include/net/9p/client.h b/include/net/9p/client.h
index c2671d40bb6b..735f3979d559 100644
--- a/include/net/9p/client.h
+++ b/include/net/9p/client.h
@@ -123,6 +123,7 @@ struct p9_client {
 	struct [...]
On Tue, Jun 11, 2019 at 4:18 PM Roman Gushchin wrote:
>
> There is no point in checking the root_cache->memcg_params.dying
> flag on kmem_cache creation path. New allocations shouldn't be
> performed using a dead root kmem_cache.

Yes, it's the user's responsibility [...]
On Tue, Jun 11, 2019 at 04:18:10PM -0700, Roman Gushchin wrote:
> Currently the memcg_params.dying flag and the corresponding
> workqueue used for the asynchronous deactivation of kmem_caches
> is synchronized using the slab_mutex.
>
> It makes it impossible to check this flag from the irq context,
> which [...]
On Tue, Jun 11, 2019 at 04:18:09PM -0700, Roman Gushchin wrote:
> There is no point in checking the root_cache->memcg_params.dying
> flag on kmem_cache creation path. New allocations shouldn't be
> performed using a dead root kmem_cache, so no new memcg kmem_cache
> creation can be scheduled after the flag is set. [...]
On Tue, 11 Jun 2019 16:18:04 -0700 Roman Gushchin wrote:
> Subject: [PATCH v7 01/10] mm: postpone kmem_cache memcg pointer
> initialization to memcg_link_cache()

I think mm is too large a place for patches to be described as
affecting simply "mm". So I'll irritatingly [...]
[...] are released unless at least one reference to the memcg exists,
which is very far from optimal.
Let's rework it in a way that allows releasing individual kmem_caches
as soon as the cgroup is offline, the kmem_cache is empty and there
are no pending allocations.
To make it possible, let's introduce a new percpu refcounter for
non-root kmem caches. The counter is initialized [...]
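A minimal sketch of that scheme (names illustrative, not the series'
actual code):

        #include <linux/percpu-refcount.h>

        /* Each non-root cache owns a percpu ref; when the cgroup goes
         * offline the ref is killed, and the release callback runs once
         * the last outstanding allocation has been freed. */
        static void kmemcg_cache_release(struct percpu_ref *ref)
        {
                /* safe point to schedule kmem_cache destruction */
        }

        int kmemcg_cache_init(struct kmem_cache *s)
        {
                return percpu_ref_init(&s->memcg_params.refcnt,
                                       kmemcg_cache_release, 0, GFP_KERNEL);
        }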
There is no point in checking the root_cache->memcg_params.dying
flag on kmem_cache creation path. New allocations shouldn't be
performed using a dead root kmem_cache, so no new memcg kmem_cache
creation can be scheduled after the flag is set. And if it was
scheduled before, flush_memcg_workqueue() [...]
Currently SLUB uses a work scheduled after an RCU grace period
to deactivate a non-root kmem_cache. This mechanism can be reused
for kmem_caches release, but requires generalization for the SLAB
case.
Introduce the kmemcg_cache_deactivate() function, which calls the
allocator-specific __kmem_cache_deactivate() [...]
Initialize the kmem_cache->memcg_params.memcg pointer in
memcg_link_cache() rather than in init_memcg_params().
Once the kmem_cache holds a reference to the memory cgroup,
it will simplify the refcounting.
For non-root kmem_caches memcg_link_cache() is always called
before the kmem_cache becomes [...]
On Sun, Jun 09, 2019 at 03:10:52PM +0300, Vladimir Davydov wrote:
> On Tue, Jun 04, 2019 at 07:44:45PM -0700, Roman Gushchin wrote:
> > Johannes noticed that reading the memcg kmem_cache pointer in
> > cache_from_memcg_idx() is performed using the READ_ONCE() macro,
> > which doesn't implement an SMP barrier [...]
On Tue, Jun 04, 2019 at 07:44:48PM -0700, Roman Gushchin wrote:
> Currently SLUB uses a work scheduled after an RCU grace period
> to deactivate a non-root kmem_cache. This mechanism can be reused
> for kmem_caches release, but requires generalization for the SLAB
> case.
On Tue, Jun 04, 2019 at 07:44:45PM -0700, Roman Gushchin wrote:
> Johannes noticed that reading the memcg kmem_cache pointer in
> cache_from_memcg_idx() is performed using the READ_ONCE() macro,
> which doesn't implement an SMP barrier, which is required
> by the logic.
>
> Add a proper smp_rmb() [...]
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 09b26673b63f..2914a8f0aa85 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -130,6 +130,7 @@ int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t nr,
 #ifdef CONFIG_MEMCG_KMEM
 LIST_HEAD(slab_root_caches);
+static DEFINE_SPINLOCK(memcg_kmem_wq_lock);
 void slab_init_memcg_params(struct kmem_cache *s) [...]
Johannes noticed that reading the memcg kmem_cache pointer in
cache_from_memcg_idx() is performed using the READ_ONCE() macro,
which doesn't implement an SMP barrier, which is required
by the logic.
Add a proper smp_rmb() to be paired with the smp_wmb() in
memcg_create_kmem_cache().
The same applies [...]
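The pairing being described, as a minimal self-contained sketch
(illustrative, not the actual mm code):

        static struct kmem_cache *published_cache;

        static void publisher(struct kmem_cache *s)
        {
                /* all initialization of *s happens before this point */
                smp_wmb();              /* order init before publication */
                WRITE_ONCE(published_cache, s);
        }

        static struct kmem_cache *consumer(void)
        {
                struct kmem_cache *s = READ_ONCE(published_cache);

                smp_rmb();              /* pairs with smp_wmb() above */
                return s;               /* *s is now safe to read */
        }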
On Tue, May 21, 2019 at 01:07:33PM -0700, Roman Gushchin wrote:
> +	arr = rcu_dereference(cachep->memcg_params.memcg_caches);
> +
> +	/*
> +	 * Make sure we will access the up-to-date value. The code updating
> +	 * memcg_caches issues a write barrier to match this (see
> +	 * memcg_create_kmem_cache()). [...]
On Tue, May 21, 2019 at 01:07:29PM -0700, Roman Gushchin wrote:
> Initialize the kmem_cache->memcg_params.memcg pointer in
> memcg_link_cache() rather than in init_memcg_params().
>
> Once the kmem_cache holds a reference to the memory cgroup,
> it will simplify the refcounting.
On Tue, May 28, 2019 at 01:37:50PM -0400, Waiman Long wrote:
> On 5/28/19 1:08 PM, Vladimir Davydov wrote:
> >>	static void flush_memcg_workqueue(struct kmem_cache *s)
> >>	{
> >> +	/*
> >> +	 * memcg_params.dying is synchronized using slab_mutex AND [...]
On Tue, May 21, 2019 at 01:07:30PM -0700, Roman Gushchin wrote:
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 6e00bdf8618d..4e5b4292a763 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -866,11 +859,12 @@ static void flush_memcg_workqueue(struct kmem_cache *s) [...]
Hello Roman,
On Tue, May 21, 2019 at 01:07:33PM -0700, Roman Gushchin wrote:
> This commit makes several important changes in the lifecycle
> of a non-root kmem_cache, which also affect the lifecycle
> of a memory cgroup.
>
> Currently each charged slab page has a page->mem_cgroup [...]
Use the preferred KMEM_CACHE helper for brevity.
Signed-off-by: Peng Wang
---
block/blk-core.c | 3 +--
block/blk-ioc.c | 3 +--
2 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index 1bf83a0df0f6..841bf0b12755 100644
--- a/block/blk-core.c
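For reference, a sketch of what such a conversion looks like
(illustrative struct, not the actual blk-core diff):

        struct blk_example {
                int a;
        };

        static struct kmem_cache *example_cachep;

        static int __init example_init(void)
        {
                /*
                 * Open-coded equivalent, for comparison:
                 * kmem_cache_create("blk_example",
                 *                   sizeof(struct blk_example),
                 *                   __alignof__(struct blk_example),
                 *                   0, NULL);
                 */
                example_cachep = KMEM_CACHE(blk_example, 0);
                return example_cachep ? 0 : -ENOMEM;
        }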