On 07/11/2017 08:05 AM, Joel Fernandes wrote:
> ftrace can fail to allocate the per-CPU ring buffer on systems with a
> large number of CPUs coupled with large amounts of memory sitting in the
> page cache. Currently the ring buffer allocation doesn't retry in the VM
> implementation even if direct-reclaim made some progress but still wasn't
> able to find a free page. On retrying I see that the allocations almost
> always succeed. The retry doesn't happen because __GFP_NORETRY is used in
> the tracer to prevent the case where we might OOM, however if we drop
> __GFP_NORETRY, we risk destabilizing the system if the OOM killer is
> triggered. To prevent this situation, use the __GFP_RETRY_MAYFAIL flag
> introduced recently [1].
>
> Tested that the following still succeeds without destabilizing a system
> with 1GB memory.
> echo 300000 > /sys/kernel/debug/tracing/buffer_size_kb
>
> [1] https://marc.info/?l=linux-mm&m=149820805124906&w=2
>
> Cc: Alexander Duyck <alexander.h.du...@intel.com>
> Cc: Mel Gorman <mgor...@suse.de>
> Cc: Hao Lee <haolee.sw...@gmail.com>
> Cc: Vladimir Davydov <vdavydov....@gmail.com>
> Cc: Johannes Weiner <han...@cmpxchg.org>
> Cc: Joonsoo Kim <iamjoonsoo....@lge.com>
> Cc: Michal Hocko <mho...@kernel.org>
> Cc: Tim Murray <timmur...@google.com>
> Cc: Ingo Molnar <mi...@redhat.com>
> Cc: Steven Rostedt <rost...@goodmis.org>
> Cc: sta...@vger.kernel.org
Not stable, as Michal mentioned.

Acked-by: Vlastimil Babka <vba...@suse.cz>

> Signed-off-by: Joel Fernandes <joe...@google.com>
> ---
>  kernel/trace/ring_buffer.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index 4ae268e687fe..529cc50d7243 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -1136,12 +1136,12 @@ static int __rb_allocate_pages(long nr_pages, struct list_head *pages, int cpu)
>  	for (i = 0; i < nr_pages; i++) {
>  		struct page *page;
>  		/*
> -		 * __GFP_NORETRY flag makes sure that the allocation fails
> -		 * gracefully without invoking oom-killer and the system is
> -		 * not destabilized.
> +		 * __GFP_RETRY_MAYFAIL flag makes sure that the allocation fails
> +		 * gracefully without invoking oom-killer and the system is not
> +		 * destabilized.
>  		 */
>  		bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),
> -				     GFP_KERNEL | __GFP_NORETRY,
> +				     GFP_KERNEL | __GFP_RETRY_MAYFAIL,
>  				     cpu_to_node(cpu));
>  		if (!bpage)
>  			goto free_pages;
> @@ -1149,7 +1149,7 @@ static int __rb_allocate_pages(long nr_pages, struct list_head *pages, int cpu)
>  		list_add(&bpage->list, pages);
>
>  		page = alloc_pages_node(cpu_to_node(cpu),
> -					GFP_KERNEL | __GFP_NORETRY, 0);
> +					GFP_KERNEL | __GFP_RETRY_MAYFAIL, 0);
>  		if (!page)
>  			goto free_pages;
>  		bpage->page = page_address(page);
>
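For readers comparing the two flags, here is a minimal sketch of the pattern
the patch relies on, written as a hypothetical caller rather than the actual
ring_buffer.c code. __GFP_RETRY_MAYFAIL lets direct reclaim keep retrying as
long as it makes progress, but a sustained failure still returns NULL instead
of waking the OOM killer, so the caller's failure path runs:

#include <linux/gfp.h>
#include <linux/printk.h>
#include <linux/slab.h>

/*
 * Hypothetical helper, not part of this patch: allocate a buffer that
 * is nice to have but not worth OOM-killing anything over.
 */
static void *alloc_optional_buffer(size_t size, int node)
{
	void *buf;

	/*
	 * GFP_KERNEL | __GFP_RETRY_MAYFAIL: reclaim retries while it is
	 * making progress, but on sustained failure we get NULL back
	 * rather than an OOM kill, so failure must be handled here.
	 */
	buf = kzalloc_node(size, GFP_KERNEL | __GFP_RETRY_MAYFAIL, node);
	if (!buf)
		pr_warn("optional buffer allocation failed; continuing without it\n");

	return buf;
}

With __GFP_NORETRY the same call gives up after the first round of reclaim
fails; with plain GFP_KERNEL it can invoke the OOM killer. The new flag sits
between the two, which is exactly the behavior the ring buffer wants.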