On Tue, 4 Dec 2018 11:12:43 +0000
Will Deacon <will.dea...@arm.com> wrote:

> > diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> > index 8ef9fc226037..42e89397778b 100644
> > --- a/kernel/trace/ftrace.c
> > +++ b/kernel/trace/ftrace.c
> > @@ -2393,11 +2393,14 @@ void __weak ftrace_replace_code(int enable)
> >  {
> >     struct dyn_ftrace *rec;
> >     struct ftrace_page *pg;
> > +   bool schedulable;
> >     int failed;
> >  
> >     if (unlikely(ftrace_disabled))
> >             return;
> >  
> > +   schedulable = !irqs_disabled() & !preempt_count();  
> 
> Looks suspiciously like a bitwise preemptible() to me!

Ah, thanks. Yeah, that should have been &&. But what did you expect?
I didn't even compile this ;-)
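
For reference, the fixed line would be something like the below; as Will
points out, it is essentially an open-coded preemptible():

	/* logical &&, not bitwise &; true only when neither interrupts
	 * are disabled nor preemption is counted off, i.e. we are in a
	 * context where cond_resched() is safe. */
	schedulable = !irqs_disabled() && !preempt_count();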

> 
> > +
> >     do_for_each_ftrace_rec(pg, rec) {
> >  
> >             if (rec->flags & FTRACE_FL_DISABLED)
> > @@ -2409,6 +2412,8 @@ void __weak ftrace_replace_code(int enable)
> >                     /* Stop processing */
> >                     return;
> >             }
> > +           if (schedulable)
> > +                   cond_resched();
> >     } while_for_each_ftrace_rec();
> >  }  
> 
> If this solves the problem in core code, them I'm all for it. Otherwise, I
> was thinking of rolling our own ftrace_replace_code() for arm64, but that's
> going to involve a fair amount of duplication.
> 

If it does, then I'll add it. Or take a patch for it ;-) 

My main concern is that this can be called from a non-schedulable context.
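
As a sketch of what the check is guarding against (assuming the default
arch_ftrace_update_code() path, which goes through stop_machine()):

	/*
	 * When ftrace_replace_code() is reached via the stop_machine()
	 * callback, preemption and interrupts are disabled, so
	 * schedulable evaluates to false and the voluntary reschedule is
	 * skipped.  Only when an architecture calls this from normal
	 * task context does cond_resched() actually run.
	 */
	if (schedulable)
		cond_resched();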

-- Steve
