On Thu, Oct 25, 2012 at 11:06:52AM -0700, Tejun Heo wrote:
> Hello, Glauber.
>
> On Thu, Oct 25, 2012 at 03:05:22PM +0400, Glauber Costa wrote:
> > > Is there any rmb() pair?
> > > As far as I know, without rmb(), wmb() doesn't guarantee anything.
> > >
> >
> > There should be. But it seems I missed it. Speaking of which, I should
Hello, Glauber.
On Thu, Oct 25, 2012 at 03:05:22PM +0400, Glauber Costa wrote:
> > Is there any rmb() pair?
> > As far as I know, without rmb(), wmb() doesn't guarantee anything.
> >
>
> There should be. But it seems I missed it. Speaking of which, I should
You probably can use read_barrier_depends().
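The pairing being discussed is the usual pointer-publication pattern: the writer fully initializes an object, issues the write barrier, and only then stores the pointer; a reader that merely dereferences the published pointer needs a data-dependency barrier such as read_barrier_depends() rather than a full rmb() before touching the fields. Below is a minimal userspace sketch of the same idea using C11 atomics, where the release store stands in for smp_wmb() plus the pointer assignment and the consume load stands in for the dependent read; the struct and function names are illustrative, not taken from the patch.

#include <stdatomic.h>
#include <stdlib.h>

struct payload {
	int value;			/* initialized before publication */
};

static _Atomic(struct payload *) published;	/* NULL until the writer stores it */

/* Writer side: fully initialize the object, then publish the pointer.
 * The release store plays the role of smp_wmb() followed by the plain
 * pointer assignment in the kernel pattern. */
void publish(int value)
{
	struct payload *p = malloc(sizeof(*p));

	if (!p)
		return;
	p->value = value;
	atomic_store_explicit(&published, p, memory_order_release);
}

/* Reader side: a consume (data-dependency) load plays the role of reading
 * the pointer and then issuing read_barrier_depends(); any dereference
 * through the returned pointer is guaranteed to see the writer's init. */
int read_value(void)
{
	struct payload *p = atomic_load_explicit(&published, memory_order_consume);

	return p ? p->value : -1;
}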
On 10/24/2012 10:10 PM, JoonSoo Kim wrote:
> 2012/10/19 Glauber Costa :
>> @@ -2930,9 +2937,188 @@ int memcg_register_cache(struct mem_cgroup *memcg,
>> struct kmem_cache *s)
>>
>> void memcg_release_cache(struct kmem_cache *s)
>> {
>> + struct kmem_cache *root;
>> + int id = memcg_css_id(s->memcg_params->memcg);
2012/10/19 Glauber Costa :
> @@ -2930,9 +2937,188 @@ int memcg_register_cache(struct mem_cgroup *memcg,
> struct kmem_cache *s)
>
> void memcg_release_cache(struct kmem_cache *s)
> {
> + struct kmem_cache *root;
> + int id = memcg_css_id(s->memcg_params->memcg);
> +
> + if (s->
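The hunk above is cut off right after the owning memcg's id is looked up; the rest of memcg_release_cache() is not shown in this excerpt. For orientation only, here is a hedged sketch of the bookkeeping such a release path implies, using made-up field and helper names (root_cache, owner_id, children[]) rather than the patch's actual ones: a per-memcg child cache unhooks itself from its root cache's per-group array, indexed by that id, before it is destroyed.

#include <stddef.h>

#define MAX_GROUPS 16

struct cache;

struct cache_params {
	struct cache *root_cache;		/* NULL when this is a root cache */
	int owner_id;				/* id of the owning group */
	struct cache *children[MAX_GROUPS];	/* only meaningful on a root cache */
};

struct cache {
	struct cache_params params;
};

/* Illustrative release path: a child cache clears its slot in the root
 * cache's per-group array so later lookups by owner_id return NULL. */
void cache_release(struct cache *s)
{
	struct cache *root = s->params.root_cache;

	if (root)
		root->params.children[s->params.owner_id] = NULL;
}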
The page allocator is able to bind a page to a memcg when it is
allocated. But for the caches, we'd like to have as many objects as
possible in a page belonging to the same cache.
This is done in this patch by calling memcg_kmem_get_cache in the
beginning of every allocation function. This routine
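As a rough illustration of where that hook sits (this is not the slab code itself; memcg_kmem_get_cache() is the routine named above, while obj_cache, group_cache_of() and the group_id parameter are invented for the example), the redirection happens once, at the very top of the allocation function, and everything after it operates on whichever cache came back:

#include <stddef.h>
#include <stdlib.h>

#define MAX_GROUPS 16

struct obj_cache {
	size_t object_size;
	struct obj_cache *per_group[MAX_GROUPS];	/* hypothetical per-group copies */
};

/* Hypothetical stand-in for memcg_kmem_get_cache(): hand back the caller's
 * group-local copy of the cache if one exists, otherwise the original. */
static struct obj_cache *group_cache_of(struct obj_cache *s, int group_id)
{
	struct obj_cache *g = s->per_group[group_id];

	return g ? g : s;
}

/* The pattern described above: pick the group's cache first, then run the
 * normal allocation path against it (malloc() stands in for the slab fast
 * path here). */
void *cache_alloc(struct obj_cache *s, int group_id)
{
	s = group_cache_of(s, group_id);
	return malloc(s->object_size);
}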