On Mon 19-08-24 19:56:53, Yafang Shao wrote:
> On Mon, Aug 19, 2024 at 6:10 PM Barry Song <21cn...@gmail.com> wrote:
> >
> > On Mon, Aug 19, 2024 at 9:46 PM Yafang Shao <laoar.s...@gmail.com> wrote:
> > >
> > > On Mon, Aug 19, 2024 at 5:39 PM Barry Song <21cn...@gmail.com> wrote:
> > > >
> > > > > On Mon, Aug 19, 2024 at 9:25 PM Yafang Shao <laoar.s...@gmail.com> wrote:
> > > > >
> > > > > On Mon, Aug 19, 2024 at 3:50 PM Michal Hocko <mho...@suse.com> wrote:
> > > > > >
> > > > > > On Sun 18-08-24 10:55:09, Yafang Shao wrote:
> > > > > > > On Sat, Aug 17, 2024 at 2:25 PM Barry Song <21cn...@gmail.com> wrote:
> > > > > > > >
> > > > > > > > From: Barry Song <v-songbao...@oppo.com>
> > > > > > > >
> > > > > > > > When users allocate memory with the __GFP_NOFAIL flag, they might
> > > > > > > > incorrectly use it alongside GFP_ATOMIC, GFP_NOWAIT, etc.  This
> > > > > > > > kind of non-blockable __GFP_NOFAIL is not supported and is
> > > > > > > > pointless.  If we attempt and still fail to allocate memory for
> > > > > > > > these users, we have two choices:
> > > > > > > >
> > > > > > > >     1. We could busy-loop and hope that some other direct
> > > > > > > >     reclamation or kswapd rescues the current process. However,
> > > > > > > >     this is unreliable and could ultimately lead to hard or soft
> > > > > > > >     lockups,
> > > > > > >
> > > > > > > That can occur even if we set both __GFP_NOFAIL and
> > > > > > > __GFP_DIRECT_RECLAIM, right?
> > > > > >
> > > > > > No, it cannot! With __GFP_DIRECT_RECLAIM the allocator might take a
> > > > > > long time to satisfy the allocation, but it will reclaim to get the
> > > > > > memory, it will sleep if necessary and it will trigger the OOM killer
> > > > > > if there is no other option. __GFP_DIRECT_RECLAIM is a completely
> > > > > > different story from the case without it, where _no_ sleeping is
> > > > > > allowed and therefore only a busy loop waiting for the allocation to
> > > > > > proceed is possible.
> > > > >
> > > > > That could be a livelock.
> > > > > From the user's perspective, there's no noticeable difference between
> > > > > a livelock, soft lockup, or hard lockup.
> > > >
> > > > This is certainly different. A lockup occurs when tasks can't be
> > > > scheduled, causing the entire system to stop functioning.
> > >
> > > When a livelock occurs, your only options are to migrate your
> > > applications to other servers or reboot the system—there’s no other
> > > resolution (except for using oomd, which is difficult for users
> > > without cgroup2 or swap).
> > >
> > > So, there's effectively no difference.
> >
> > Could you express your options more clearly? I am guessing at two
> > possibilities:
> > 1. entirely drop __GFP_NOFAIL and require all users who are
> > using __GFP_NOFAIL to add error handlers instead?
> 
> When the system is unstable—such as after reaching the maximum retries
> without successfully allocating pages—simply failing the operation
> might be the better option.

It seems you are failing to understand the __GFP_NOFAIL semantics and you
are circling around that. So let me repeat them for you here. Make sure
you understand before going forward with the discussion. Feel free to ask
if something is not clear, but please do not continue with what-if kinds
of questions.

__GFP_NOFAIL means that the caller has no way to deal with an allocation
failure. The allocator simply cannot fail the request, even if it takes
ages to succeed! To put it simply, if you have code like

        while (!(ptr = alloc()));
or
        BUG_ON(!(ptr = alloc()));

then you had better use __GFP_NOFAIL rather than open-code the endless
loop or the BUG_ON on failure.
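
To make that concrete (just a sketch; ptr and size stand for whatever the
caller has at hand), the open-coded loop above collapses into a single
blockable call:

        /* may sleep; the allocator retries internally and never returns NULL */
        ptr = kmalloc(size, GFP_KERNEL | __GFP_NOFAIL);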

Our (page, vmalloc, kmalloc) allocators do support that mode for
allocations that are allowed to sleep. But those allocators have never
supported, and are unlikely to ever support, atomic non-failing allocations.
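
Spelled out as a flag combination (again just a sketch, not a reference to
any particular caller), the unsupported case looks like:

        /* no sleeping allowed, hence no direct reclaim: the nofail guarantee cannot be provided */
        ptr = kmalloc(size, GFP_ATOMIC | __GFP_NOFAIL);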

More clear?
-- 
Michal Hocko
SUSE Labs
