> > > > > > :    * The preload is done in non-atomic context, thus it allows us
> > > > > > :    * to use more permissive allocation masks to be more stable under
> > > > > > :    * low memory condition and high memory pressure.
> > > > > > :    *
> > > > > > :    * Even if it fails we do not really care about that. Just proceed
> > > > > > :    * as it is. "overflow" path will refill the cache we allocate from.
> > > > > > :    */
> > > > > > :   if (!this_cpu_read(ne_fit_preload_node)) {
> > > > > > 
Readability nit: local `pva' should be defined here, rather than
having function-wide scope.
> > > > > > 
:           pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
> > > > > > 
Why doesn't this honour gfp_mask?  If it's not a bug, please add a
comment explaining this.
> > > > > > 
But there is a comment, if I understand you correctly:
> > > > 
> > > > <snip>
> > > > * Even if it fails we do not really care about that. Just proceed
> > > > * as it is. "overflow" path will refill the cache we allocate from.
> > > > <snip>
> > > 
> > > My point is that the alloc_vmap_area() caller passed us a gfp_t but
> > > this code ignores it, as does adjust_va_to_fit_type().  These *look*
> > > like potential bugs.  If not, they should be commented so they don't
> > > look like bugs any more ;)
> > > 
I got it, there was a misunderstanding on my side :) I agree.
> > 
In the first case I should have used and respected the passed "gfp_mask",
> > like below:
> > 
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index f48cd0711478..880b6e8cdeae 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -1113,7 +1113,8 @@ static struct vmap_area *alloc_vmap_area(unsigned 
> > long size,
> >                  * Just proceed as it is. If needed "overflow" path
> >                  * will refill the cache we allocate from.
> >                  */
-               pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
> > +               pva = kmem_cache_alloc_node(vmap_area_cachep,
> > +                               gfp_mask & GFP_RECLAIM_MASK, node);
> >  
> >         spin_lock(&vmap_area_lock);
> > 
It should be sent as a separate patch, I think.
> 
> Yes. I do not think this would make any real difference because that
> battle is lost long ago. vmalloc is simply not gfp mask friendly. There
> are places like page table allocation which are hardcoded GFP_KERNEL so
> GFP_NOWAIT semantic is not going to work, really. The above makes sense
> from a pure aesthetic POV, though, I would say.
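As a rough illustration of that point (not the exact call chain), the
page-table allocations done while mapping the area use a fixed mask,
roughly:

<snip>
	/* page-table page for the kernel mapping; mask is not taken from the caller */
	pte_t *pte = (pte_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
<snip>

so a GFP_NOWAIT or GFP_NOFS request can still block or recurse into
the filesystem there, whatever the preload path does.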
I agree. Then I will create a patch.
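
For reference, with both review points folded in, the preload block would
look roughly like below. It is only a sketch, the per-cpu publish step is
abbreviated and the final patch may differ:

<snip>
	if (!this_cpu_read(ne_fit_preload_node)) {
		/* Block-scoped, per the readability nit. */
		struct vmap_area *pva;

		/*
		 * Honour the caller's flags; only the reclaim related
		 * bits matter for a slab allocation here.
		 */
		pva = kmem_cache_alloc_node(vmap_area_cachep,
				gfp_mask & GFP_RECLAIM_MASK, node);

		/*
		 * Even if it fails we do not really care about that.
		 * The "overflow" path will refill the cache we allocate from.
		 */
		if (pva && this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva))
			kmem_cache_free(vmap_area_cachep, pva);
	}

	spin_lock(&vmap_area_lock);
<snip>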

Thank you!

--
Vlad Rezki
