On Wed, Oct 07, 2020 at 12:02:34PM +0200, Michal Hocko wrote:
> On Wed 07-10-20 00:25:29, Uladzislau Rezki wrote:
> > On Mon, Oct 05, 2020 at 05:41:00PM +0200, Michal Hocko wrote:
> > > On Mon 05-10-20 17:08:01, Uladzislau Rezki wrote:
> > > > On Fri, Oct 02, 2020 at 11:05:07AM +0200, Michal Hocko
On Wed 07-10-20 00:25:29, Uladzislau Rezki wrote:
> On Mon, Oct 05, 2020 at 05:41:00PM +0200, Michal Hocko wrote:
> > On Mon 05-10-20 17:08:01, Uladzislau Rezki wrote:
> > > On Fri, Oct 02, 2020 at 11:05:07AM +0200, Michal Hocko wrote:
> > > > On Fri 02-10-20 09:50:14, Mel Gorman wrote:
> > > > > O
On Mon, Oct 05, 2020 at 05:41:00PM +0200, Michal Hocko wrote:
> On Mon 05-10-20 17:08:01, Uladzislau Rezki wrote:
> > On Fri, Oct 02, 2020 at 11:05:07AM +0200, Michal Hocko wrote:
> > > On Fri 02-10-20 09:50:14, Mel Gorman wrote:
> > > > On Fri, Oct 02, 2020 at 09:11:23AM +0200, Michal Hocko wrote:
On Tue, Oct 06, 2020 at 11:03:34AM +0100, Mel Gorman wrote:
> On Fri, Oct 02, 2020 at 07:41:20AM -0700, Paul E. McKenney wrote:
> > On Fri, Oct 02, 2020 at 11:19:52AM +0100, Mel Gorman wrote:
> > > On Fri, Oct 02, 2020 at 11:58:58AM +0200, Peter Zijlstra wrote:
> > > > > It's enabled by default by
On Fri, Oct 02, 2020 at 07:41:20AM -0700, Paul E. McKenney wrote:
> On Fri, Oct 02, 2020 at 11:19:52AM +0100, Mel Gorman wrote:
> > On Fri, Oct 02, 2020 at 11:58:58AM +0200, Peter Zijlstra wrote:
> > > > It's enabled by default by enough distros that adding too many checks
> > > > is potentially pa
On Mon 05-10-20 17:08:01, Uladzislau Rezki wrote:
> On Fri, Oct 02, 2020 at 11:05:07AM +0200, Michal Hocko wrote:
> > On Fri 02-10-20 09:50:14, Mel Gorman wrote:
> > > On Fri, Oct 02, 2020 at 09:11:23AM +0200, Michal Hocko wrote:
> > > > On Thu 01-10-20 21:26:26, Uladzislau Rezki wrote:
> > > > > >
On Fri, Oct 02, 2020 at 11:05:07AM +0200, Michal Hocko wrote:
> On Fri 02-10-20 09:50:14, Mel Gorman wrote:
> > On Fri, Oct 02, 2020 at 09:11:23AM +0200, Michal Hocko wrote:
> > > On Thu 01-10-20 21:26:26, Uladzislau Rezki wrote:
> > > > >
> > > > > No, I meant going back to idea of new gfp flag,
On Fri, Oct 02, 2020 at 09:06:24AM +0100, Mel Gorman wrote:
> On Thu, Oct 01, 2020 at 09:26:26PM +0200, Uladzislau Rezki wrote:
> > >
> > > No, I meant going back to idea of new gfp flag, but adjust the
> > > implementation in
> > > the allocator (different from what you posted in previous versio
On Fri, Oct 02, 2020 at 09:11:23AM +0200, Michal Hocko wrote:
> On Thu 01-10-20 21:26:26, Uladzislau Rezki wrote:
> > >
> > > No, I meant going back to idea of new gfp flag, but adjust the
> > > implementation in
> > > the allocator (different from what you posted in previous version) so
> > > t
On Fri, Oct 02, 2020 at 11:19:52AM +0100, Mel Gorman wrote:
> On Fri, Oct 02, 2020 at 11:58:58AM +0200, Peter Zijlstra wrote:
> > > It's enabled by default by enough distros that adding too many checks
> > > is potentially painful. Granted it would be missed by most benchmarking
> > > which tend to
On Fri, Oct 02, 2020 at 11:58:58AM +0200, Peter Zijlstra wrote:
> > It's enabled by default by enough distros that adding too many checks
> > is potentially painful. Granted it would be missed by most benchmarking
> > which tend to control allocations from userspace but a lot of performance
> > pro
On Fri, Oct 02, 2020 at 10:45:02AM +0100, Mel Gorman wrote:
> On Fri, Oct 02, 2020 at 11:07:29AM +0200, Peter Zijlstra wrote:
> > On Fri, Oct 02, 2020 at 09:50:14AM +0100, Mel Gorman wrote:
> > > On Fri, Oct 02, 2020 at 09:11:23AM +0200, Michal Hocko wrote:
> >
> > > > > +#define ___GFP_NO_LOCKS
On Fri, Oct 02, 2020 at 11:07:29AM +0200, Peter Zijlstra wrote:
> On Fri, Oct 02, 2020 at 09:50:14AM +0100, Mel Gorman wrote:
> > On Fri, Oct 02, 2020 at 09:11:23AM +0200, Michal Hocko wrote:
>
> > > > +#define ___GFP_NO_LOCKS0x80u
> > >
> > > Even if a new gfp flag gains a su
On Fri, Oct 02, 2020 at 09:50:14AM +0100, Mel Gorman wrote:
> On Fri, Oct 02, 2020 at 09:11:23AM +0200, Michal Hocko wrote:
> > > +#define ___GFP_NO_LOCKS0x80u
> >
> > Even if a new gfp flag gains a sufficient traction and support I am
> > _strongly_ opposed against consuming
On Fri 02-10-20 09:50:14, Mel Gorman wrote:
> On Fri, Oct 02, 2020 at 09:11:23AM +0200, Michal Hocko wrote:
> > On Thu 01-10-20 21:26:26, Uladzislau Rezki wrote:
> > > >
> > > > No, I meant going back to idea of new gfp flag, but adjust the
> > > > implementation in
> > > > the allocator (differe
On Fri, Oct 02, 2020 at 09:11:23AM +0200, Michal Hocko wrote:
> On Thu 01-10-20 21:26:26, Uladzislau Rezki wrote:
> > >
> > > No, I meant going back to idea of new gfp flag, but adjust the
> > > implementation in
> > > the allocator (different from what you posted in previous version) so
> > > t
On Thu, Oct 01, 2020 at 09:26:26PM +0200, Uladzislau Rezki wrote:
> >
> > No, I meant going back to idea of new gfp flag, but adjust the
> > implementation in
> > the allocator (different from what you posted in previous version) so that
> > it
> > only looks at the flag after it tries to alloca
On Thu 01-10-20 21:26:26, Uladzislau Rezki wrote:
> >
> > No, I meant going back to idea of new gfp flag, but adjust the
> > implementation in
> > the allocator (different from what you posted in previous version) so that
> > it
> > only looks at the flag after it tries to allocate from pcplist
On Wed, Sep 30, 2020 at 12:35:57PM +0200, Michal Hocko wrote:
> On Wed 30-09-20 00:07:42, Uladzislau Rezki wrote:
> [...]
> >
> > bool is_pcp_cache_empty(gfp_t gfp)
> > {
> > struct per_cpu_pages *pcp;
> > struct zoneref *ref;
> > unsigned long flags;
> > bool empty;
> >
> > r
>
> No, I meant going back to idea of new gfp flag, but adjust the implementation
> in
> the allocator (different from what you posted in previous version) so that it
> only looks at the flag after it tries to allocate from pcplist and finds out
> it's empty. So, no inventing of new page allocato
On Wed, Sep 30, 2020 at 06:46:00PM +0200, Michal Hocko wrote:
> On Wed 30-09-20 15:39:54, Uladzislau Rezki wrote:
> > On Wed, Sep 30, 2020 at 02:44:13PM +0200, Michal Hocko wrote:
> > > On Wed 30-09-20 14:35:35, Uladzislau Rezki wrote:
> > > > On Wed, Sep 30, 2020 at 11:27:32AM +0200, Michal Hocko
On Wed, Sep 30, 2020 at 1:22 PM Michal Hocko wrote:
> > > > I think documenting is useful.
> > > >
> > > > Could it be more explicit in what the issue is? Something like:
> > > >
> > > > * Even with GFP_ATOMIC, calls to the allocator can sleep on PREEMPT_RT
> > > > systems. Therefore, the current
On Wed 30-09-20 13:03:29, Joel Fernandes wrote:
> On Wed, Sep 30, 2020 at 12:48 PM Michal Hocko wrote:
> >
> > On Wed 30-09-20 11:25:17, Joel Fernandes wrote:
> > > On Fri, Sep 25, 2020 at 05:47:41PM +0200, Michal Hocko wrote:
> > > > On Fri 25-09-20 17:31:29, Uladzislau Rezki wrote:
> > > > > > >
On Wed, Sep 30, 2020 at 12:48 PM Michal Hocko wrote:
>
> On Wed 30-09-20 11:25:17, Joel Fernandes wrote:
> > On Fri, Sep 25, 2020 at 05:47:41PM +0200, Michal Hocko wrote:
> > > On Fri 25-09-20 17:31:29, Uladzislau Rezki wrote:
> > > > > > > >
> > > > > > > > All good points!
> > > > > > > >
> > >
On Wed 30-09-20 11:25:17, Joel Fernandes wrote:
> On Fri, Sep 25, 2020 at 05:47:41PM +0200, Michal Hocko wrote:
> > On Fri 25-09-20 17:31:29, Uladzislau Rezki wrote:
> > > > > > >
> > > > > > > All good points!
> > > > > > >
> > > > > > > On the other hand, duplicating a portion of the allocator
On Wed 30-09-20 15:39:54, Uladzislau Rezki wrote:
> On Wed, Sep 30, 2020 at 02:44:13PM +0200, Michal Hocko wrote:
> > On Wed 30-09-20 14:35:35, Uladzislau Rezki wrote:
> > > On Wed, Sep 30, 2020 at 11:27:32AM +0200, Michal Hocko wrote:
> > > > On Tue 29-09-20 18:25:14, Uladzislau Rezki wrote:
> > >
On Wed, Sep 30, 2020 at 04:39:53PM +0200, Vlastimil Babka wrote:
> On 9/30/20 12:07 AM, Uladzislau Rezki wrote:
> > On Tue, Sep 29, 2020 at 12:15:34PM +0200, Vlastimil Babka wrote:
> >> On 9/18/20 9:48 PM, Uladzislau Rezki (Sony) wrote:
> >>
> >> After reading all the threads and mulling over this
On Fri, Sep 25, 2020 at 05:47:41PM +0200, Michal Hocko wrote:
> On Fri 25-09-20 17:31:29, Uladzislau Rezki wrote:
> > > > > >
> > > > > > All good points!
> > > > > >
> > > > > > On the other hand, duplicating a portion of the allocator
> > > > > > functionality
> > > > > > within RCU increases
On 9/30/20 12:07 AM, Uladzislau Rezki wrote:
> On Tue, Sep 29, 2020 at 12:15:34PM +0200, Vlastimil Babka wrote:
>> On 9/18/20 9:48 PM, Uladzislau Rezki (Sony) wrote:
>>
>> After reading all the threads and mulling over this, I am going to deflect
>> from
>> Mel and Michal and not oppose the idea
On Wed, Sep 30, 2020 at 02:44:13PM +0200, Michal Hocko wrote:
> On Wed 30-09-20 14:35:35, Uladzislau Rezki wrote:
> > On Wed, Sep 30, 2020 at 11:27:32AM +0200, Michal Hocko wrote:
> > > On Tue 29-09-20 18:25:14, Uladzislau Rezki wrote:
> > > > > > I look at it in scope of GFP_ATOMIC/GFP_NOWAIT issu
On Wed 30-09-20 14:35:35, Uladzislau Rezki wrote:
> On Wed, Sep 30, 2020 at 11:27:32AM +0200, Michal Hocko wrote:
> > On Tue 29-09-20 18:25:14, Uladzislau Rezki wrote:
> > > > > I look at it in scope of GFP_ATOMIC/GFP_NOWAIT issues, i.e. inability
> > > > > to provide a memory service for contexts
On Wed, Sep 30, 2020 at 11:27:32AM +0200, Michal Hocko wrote:
> On Tue 29-09-20 18:25:14, Uladzislau Rezki wrote:
> > > > I look at it in scope of GFP_ATOMIC/GFP_NOWAIT issues, i.e. inability
> > > > to provide a memory service for contexts which are not allowed to
> > > > sleep, RCU is part of the
On Wed 30-09-20 00:07:42, Uladzislau Rezki wrote:
[...]
>
> bool is_pcp_cache_empty(gfp_t gfp)
> {
> struct per_cpu_pages *pcp;
> struct zoneref *ref;
> unsigned long flags;
> bool empty;
>
> ref = first_zones_zonelist(node_zonelist(
> numa_node_id(), gfp), gfp_zon
On Tue 29-09-20 18:25:14, Uladzislau Rezki wrote:
> > > I look at it in scope of GFP_ATOMIC/GFP_NOWAIT issues, i.e. inability
> > > to provide a memory service for contexts which are not allowed to
> > > sleep, RCU is part of them. Both flags used to provide such ability
> > > before but not anymor
On Tue, Sep 29, 2020 at 12:15:34PM +0200, Vlastimil Babka wrote:
> On 9/18/20 9:48 PM, Uladzislau Rezki (Sony) wrote:
> > Some background and kfree_rcu()
> > ===
> > The pointers to be freed are stored in the per-cpu array to improve
> > performance, to enable an easier-
> > I look at it in scope of GFP_ATOMIC/GFP_NOWAIT issues, i.e. inability
> > to provide a memory service for contexts which are not allowed to
> > sleep, RCU is part of them. Both flags used to provide such ability
> > before but not anymore.
> >
> > Do you agree with it?
>
> Yes this sucks. But
On 9/18/20 9:48 PM, Uladzislau Rezki (Sony) wrote:
> Some background and kfree_rcu()
> ===
> The pointers to be freed are stored in the per-cpu array to improve
> performance, to enable an easier-to-use API, to accommodate vmalloc
> memmory and to support a single argume
On Fri, Sep 25, 2020 at 10:26:18AM +0200, Peter Zijlstra wrote:
> On Thu, Sep 24, 2020 at 08:38:34AM -0700, Paul E. McKenney wrote:
> > On Thu, Sep 24, 2020 at 01:19:07PM +0200, Peter Zijlstra wrote:
> > > On Thu, Sep 24, 2020 at 10:16:14AM +0200, Uladzislau Rezki wrote:
> > > > The key point is "e
On Fri, Sep 25, 2020 at 05:17:12PM +0100, Mel Gorman wrote:
> On Fri, Sep 25, 2020 at 05:31:29PM +0200, Uladzislau Rezki wrote:
> > > > > >
> > > > > > All good points!
> > > > > >
> > > > > > On the other hand, duplicating a portion of the allocator
> > > > > > functionality
> > > > > > within
On Fri, Sep 25, 2020 at 05:31:29PM +0200, Uladzislau Rezki wrote:
> > > > >
> > > > > All good points!
> > > > >
> > > > > On the other hand, duplicating a portion of the allocator
> > > > > functionality
> > > > > within RCU increases the amount of reserved memory, and needlessly
> > > > > mos
On Fri 25-09-20 17:31:29, Uladzislau Rezki wrote:
> > > > >
> > > > > All good points!
> > > > >
> > > > > On the other hand, duplicating a portion of the allocator
> > > > > functionality
> > > > > within RCU increases the amount of reserved memory, and needlessly
> > > > > most
> > > > > of t
> > > >
> > > > All good points!
> > > >
> > > > On the other hand, duplicating a portion of the allocator functionality
> > > > within RCU increases the amount of reserved memory, and needlessly most
> > > > of the time.
> > > >
> > >
> > > But it's very similar to what mempools are for.
> > >
On Fri, Sep 25, 2020 at 10:15:52AM +0200, Peter Zijlstra wrote:
> On Thu, Sep 24, 2020 at 05:21:12PM +0200, Uladzislau Rezki wrote:
> > On Thu, Sep 24, 2020 at 01:19:07PM +0200, Peter Zijlstra wrote:
> > > On Thu, Sep 24, 2020 at 10:16:14AM +0200, Uladzislau Rezki wrote:
> > > > The key point is "e
On Thu, Sep 24, 2020 at 08:38:34AM -0700, Paul E. McKenney wrote:
> On Thu, Sep 24, 2020 at 01:19:07PM +0200, Peter Zijlstra wrote:
> > On Thu, Sep 24, 2020 at 10:16:14AM +0200, Uladzislau Rezki wrote:
> > > The key point is "enough". We need pages to make a) fast progress b)
> > > support
> > > s
On Thu, Sep 24, 2020 at 05:21:12PM +0200, Uladzislau Rezki wrote:
> On Thu, Sep 24, 2020 at 01:19:07PM +0200, Peter Zijlstra wrote:
> > On Thu, Sep 24, 2020 at 10:16:14AM +0200, Uladzislau Rezki wrote:
> > > The key point is "enough". We need pages to make a) fast progress b)
> > > support
> > > s
On Thu 24-09-20 10:16:14, Uladzislau Rezki wrote:
> > On Wed, Sep 23, 2020 at 08:41:05AM -0700, Paul E. McKenney wrote:
> > > > Fundamentally, this is simply shifting the problem from RCU to the page
> > > > allocator because of the locking arrangements and hazard of acquiring
> > > > zone
> > > >
On Thu, Sep 24, 2020 at 01:19:07PM +0200, Peter Zijlstra wrote:
> On Thu, Sep 24, 2020 at 10:16:14AM +0200, Uladzislau Rezki wrote:
> > The key point is "enough". We need pages to make a) fast progress b) support
> > single argument of kvfree_rcu(one_arg). Not vice versa. That "enough"
> > depends
On Thu, Sep 24, 2020 at 01:19:07PM +0200, Peter Zijlstra wrote:
> On Thu, Sep 24, 2020 at 10:16:14AM +0200, Uladzislau Rezki wrote:
> > The key point is "enough". We need pages to make a) fast progress b) support
> > single argument of kvfree_rcu(one_arg). Not vice versa. That "enough"
> > depends
On Thu, Sep 24, 2020 at 01:16:32PM +0200, Peter Zijlstra wrote:
> On Thu, Sep 24, 2020 at 10:16:14AM +0200, Uladzislau Rezki wrote:
> > Other option is if we had unconditionally enabled PREEMPT_COUNT config.
> > It would be easy to identify a context type and invoke a page allocator
> > if a contex
On Thu, Sep 24, 2020 at 10:16:14AM +0200, Uladzislau Rezki wrote:
> The key point is "enough". We need pages to make a) fast progress b) support
> single argument of kvfree_rcu(one_arg). Not vice versa. That "enough" depends
> on scheduler latency and vague pre-allocated number of pages, it might
>
On Thu, Sep 24, 2020 at 10:16:14AM +0200, Uladzislau Rezki wrote:
> Other option is if we had unconditionally enabled PREEMPT_COUNT config.
> It would be easy to identify a context type and invoke a page allocator
> if a context is preemtale. But as of now preemptable() is "half" working.
> Thomas
> On Wed, Sep 23, 2020 at 08:41:05AM -0700, Paul E. McKenney wrote:
> > > Fundamentally, this is simply shifting the problem from RCU to the page
> > > allocator because of the locking arrangements and hazard of acquiring zone
> > > lock is a raw spinlock is held on RT. It does not even make the ti
On Wed, Sep 23, 2020 at 08:41:05AM -0700, Paul E. McKenney wrote:
> > Fundamentally, this is simply shifting the problem from RCU to the page
> > allocator because of the locking arrangements and hazard of acquiring zone
> > lock is a raw spinlock is held on RT. It does not even make the timing
> >
On Wed, Sep 23, 2020 at 11:37:06AM +0100, Mel Gorman wrote:
> On Tue, Sep 22, 2020 at 03:12:57PM +0200, Uladzislau Rezki wrote:
> > > > > > Yes, I do well remember that you are unhappy with this approach.
> > > > > > Unfortunately, thus far, there is no solution that makes all
> > > > > > develope
> > > Other approaches under consideration include making CONFIG_PREEMPT_COUNT
> > > unconditional and thus allowing call_rcu() and kvfree_rcu() to determine
> > > whether direct calls to the allocator are safe (some guy named Linus
> > > doesn't like this one),
> >
> > I assume that the primary a
On Tue, Sep 22, 2020 at 03:12:57PM +0200, Uladzislau Rezki wrote:
> > > > > Yes, I do well remember that you are unhappy with this approach.
> > > > > Unfortunately, thus far, there is no solution that makes all
> > > > > developers
> > > > > happy. You might be glad to hear that we are also look
On Tue, Sep 22, 2020 at 09:50:02AM +0200, Michal Hocko wrote:
> [Cc Mel - the thread starts
> http://lkml.kernel.org/r/20200918194817.48921-1-ure...@gmail.com]
>
> On Mon 21-09-20 21:48:19, Uladzislau Rezki wrote:
> > Hello, Michal.
> >
> > > >
> > > > Yes, I do well remember that you are unhapp
On Tue, Sep 22, 2020 at 10:03:06AM +0200, Michal Hocko wrote:
> On Mon 21-09-20 20:35:53, Paul E. McKenney wrote:
> > On Mon, Sep 21, 2020 at 06:03:18PM +0200, Michal Hocko wrote:
> > > On Mon 21-09-20 08:45:58, Paul E. McKenney wrote:
> > > > On Mon, Sep 21, 2020 at 09:47:16AM +0200, Michal Hocko
On Tue 22-09-20 15:12:57, Uladzislau Rezki wrote:
[...]
> > Mimicing a similar implementation shouldn't be all that hard
> > and you will get your own pool which doesn't affect other page allocator
> > users as much as a bonus.
> >
> I see your point Michal. As i mentioned before, it is important
> > > > Yes, I do well remember that you are unhappy with this approach.
> > > > Unfortunately, thus far, there is no solution that makes all developers
> > > > happy. You might be glad to hear that we are also looking into other
> > > > solutions, each of which makes some other developers unhappy
On Mon 21-09-20 20:35:53, Paul E. McKenney wrote:
> On Mon, Sep 21, 2020 at 06:03:18PM +0200, Michal Hocko wrote:
> > On Mon 21-09-20 08:45:58, Paul E. McKenney wrote:
> > > On Mon, Sep 21, 2020 at 09:47:16AM +0200, Michal Hocko wrote:
> > > > On Fri 18-09-20 21:48:15, Uladzislau Rezki (Sony) wrote
[Cc Mel - the thread starts
http://lkml.kernel.org/r/20200918194817.48921-1-ure...@gmail.com]
On Mon 21-09-20 21:48:19, Uladzislau Rezki wrote:
> Hello, Michal.
>
> > >
> > > Yes, I do well remember that you are unhappy with this approach.
> > > Unfortunately, thus far, there is no solution that
On Mon, Sep 21, 2020 at 06:03:18PM +0200, Michal Hocko wrote:
> On Mon 21-09-20 08:45:58, Paul E. McKenney wrote:
> > On Mon, Sep 21, 2020 at 09:47:16AM +0200, Michal Hocko wrote:
> > > On Fri 18-09-20 21:48:15, Uladzislau Rezki (Sony) wrote:
> > > [...]
> > > > Proposal
> > > >
> > > > In
Hello, Michal.
> >
> > Yes, I do well remember that you are unhappy with this approach.
> > Unfortunately, thus far, there is no solution that makes all developers
> > happy. You might be glad to hear that we are also looking into other
> > solutions, each of which makes some other developers unh
On Mon 21-09-20 08:45:58, Paul E. McKenney wrote:
> On Mon, Sep 21, 2020 at 09:47:16AM +0200, Michal Hocko wrote:
> > On Fri 18-09-20 21:48:15, Uladzislau Rezki (Sony) wrote:
> > [...]
> > > Proposal
> > >
> > > Introduce a lock-free function that obtain a page from the per-cpu-lists
> > >
On Mon, Sep 21, 2020 at 09:47:16AM +0200, Michal Hocko wrote:
> On Fri 18-09-20 21:48:15, Uladzislau Rezki (Sony) wrote:
> [...]
> > Proposal
> >
> > Introduce a lock-free function that obtain a page from the per-cpu-lists
> > on current CPU. It returns NULL rather than acquiring any non-r
On Fri 18-09-20 21:48:15, Uladzislau Rezki (Sony) wrote:
[...]
> Proposal
>
> Introduce a lock-free function that obtain a page from the per-cpu-lists
> on current CPU. It returns NULL rather than acquiring any non-raw spinlock.
I was not happy about this solution when we have discussed t
Some background and kfree_rcu()
===
The pointers to be freed are stored in the per-cpu array to improve
performance, to enable an easier-to-use API, to accommodate vmalloc
memmory and to support a single argument of the kfree_rcu() when only
a pointer is passed. More det
68 matches
Mail list logo