On Thu, Jan 03, 2019 at 06:07:35PM +0100, Michal Hocko wrote:
> > > On Wed 02-01-19 13:06:19, Qian Cai wrote:
> > > [...]
> > >> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
> > >> index f9d9dc250428..9e1aa3b7df75 100644
> > >> --- a/mm/kmemleak.c
> > >> +++ b/mm/kmemleak.c
> > >> @@ -576,6 +576,16 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
> > >>          struct rb_node **link, *rb_parent;
> > >>  
> > >>          object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp));
> > >> +#ifdef CONFIG_PREEMPT_COUNT
> > >> +        if (!object) {
> > >> +                /* last-ditch effort in a low-memory situation */
> > >> +                if (irqs_disabled() || is_idle_task(current) || in_atomic())
> > >> +                        gfp = GFP_ATOMIC;
> > >> +                else
> > >> +                        gfp = gfp_kmemleak_mask(gfp) | __GFP_DIRECT_RECLAIM;
> > >> +                object = kmem_cache_alloc(object_cache, gfp);
> > >> +        }
> > >> +#endif
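
To spell out what the retry above does, here it is factored into a
helper purely for illustration (the helper name and placement are
mine, not part of the patch):

static gfp_t kmemleak_retry_gfp(gfp_t gfp)
{
	/*
	 * If we cannot sleep (IRQs disabled, running in the idle
	 * task, or inside an atomic section), fall back to
	 * GFP_ATOMIC so the retry may dip into the atomic reserves.
	 */
	if (irqs_disabled() || is_idle_task(current) || in_atomic())
		return GFP_ATOMIC;

	/*
	 * Otherwise allow direct reclaim so the second attempt can
	 * make progress in a low-memory situation instead of
	 * failing immediately.
	 */
	return gfp_kmemleak_mask(gfp) | __GFP_DIRECT_RECLAIM;
}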
[...]
> I will not object to this workaround, but I strongly believe that
> kmemleak should rethink its metadata allocation strategy to make it
> truly robust.

This would indeed be nice, and it was discussed last year. I just
haven't got around to trying anything yet:

https://marc.info/?l=linux-mm&m=152812489819532
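
For the sake of discussion, one direction that keeps coming up is a
small preallocated pool that create_object() could fall back on when
the slab allocation fails, so that tracking never depends on the page
allocator at the worst possible moment. A minimal sketch of the idea
(all names and the pool size below are illustrative, not taken from
the thread above):

#define KMEMLEAK_POOL_SIZE	1024

static struct kmemleak_object obj_pool[KMEMLEAK_POOL_SIZE];
static LIST_HEAD(pool_free_list);
static DEFINE_SPINLOCK(pool_lock);

/*
 * Take an object from the emergency pool, or return NULL if it is
 * empty. The free list would be populated from obj_pool[] at boot.
 */
static struct kmemleak_object *pool_alloc(void)
{
	struct kmemleak_object *object = NULL;
	unsigned long flags;

	spin_lock_irqsave(&pool_lock, flags);
	if (!list_empty(&pool_free_list)) {
		object = list_first_entry(&pool_free_list,
					  struct kmemleak_object,
					  object_list);
		list_del(&object->object_list);
	}
	spin_unlock_irqrestore(&pool_lock, flags);

	return object;
}

The pool would have to be refilled when objects are freed and sized
generously enough to ride out bursts of allocation failures, which is
where the open questions start.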

-- 
Catalin
