On Wed, 26 Aug 2020 17:22:39 +0900
Masami Hiramatsu <[email protected]> wrote:

> On Wed, 26 Aug 2020 07:07:09 +0000
> "[email protected]" <[email protected]> wrote:
> 
> > 
> > > -----Original Message-----
> > > From: [email protected] <[email protected]>
> > > Sent: Tuesday, August 25, 2020 8:09 PM
> > > To: Masami Hiramatsu <[email protected]>
> > > Cc: Eddy Wu (RD-TW) <[email protected]>; 
> > > [email protected]; [email protected]; David S. Miller
> > > <[email protected]>
> > > Subject: Re: x86/kprobes: kretprobe fails to trigger if kprobe at
> > > function entry is not optimized (triggered by int3 breakpoint)
> > >
> > > Surely we can do a lockless list for this. We have llist_add() and
> > > llist_del_first() to make a lockless LIFO/stack.
> > >
> > 
> > llist operations require an atomic cmpxchg; for arches that don't have
> > CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG, the in_nmi() check might still be needed.
> > (HAVE_KRETPROBES && !CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG): arc, arm, csky,
> > mips
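
For reference, the lockless LIFO pattern with llist that is being discussed
here looks roughly like this (just a sketch; the struct and field names are
made up for illustration):

#include <linux/llist.h>

/* Made-up example type, only to show the llist LIFO pattern. */
struct my_instance {
	struct llist_node node;
	unsigned long data;
};

static LLIST_HEAD(free_instances);

/*
 * Push: llist_add() is a cmpxchg loop, so it is lockless, but it is only
 * NMI-safe on archs with CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG.
 */
static void put_instance(struct my_instance *ri)
{
	llist_add(&ri->node, &free_instances);
}

/*
 * Pop: llist_del_first() also relies on cmpxchg, and concurrent consumers
 * of the same list still have to be serialized against each other.
 */
static struct my_instance *get_instance(void)
{
	struct llist_node *n = llist_del_first(&free_instances);

	return n ? llist_entry(n, struct my_instance, node) : NULL;
}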
> 
> Good catch. In those cases, we can add an in_nmi() check in arch-dependent code.

Oops, the in_nmi() check is needed in pre_kretprobe_handler(), which has no
arch-dependent code. Hmm, so we still need a weak function to check it...
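
Something like the following could work, as a rough sketch only (the hook name
arch_kretprobe_nmi_usable() is made up, free_instances is assumed to have been
converted to a struct llist_head, and the handler body is heavily simplified):

#include <linux/hardirq.h>
#include <linux/kprobes.h>
#include <linux/llist.h>

/* Generic weak default: assume the llist cmpxchg is usable as-is. */
bool __weak arch_kretprobe_nmi_usable(void)
{
	return true;
}

/*
 * An arch without CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG (arc, arm, csky, mips)
 * would provide a strong definition instead:
 *
 *	bool arch_kretprobe_nmi_usable(void)
 *	{
 *		return !in_nmi();
 *	}
 */

static int pre_kretprobe_handler(struct kprobe *p, struct pt_regs *regs)
{
	struct kretprobe *rp = container_of(p, struct kretprobe, kp);
	struct llist_node *node;

	/* Skip when popping from the lockless list is not safe here. */
	if (!arch_kretprobe_nmi_usable())
		return 0;

	/* Pop a free kretprobe_instance from the lockless LIFO. */
	node = llist_del_first(&rp->free_instances);
	if (!node)
		return 0;	/* maxactive exhausted */

	/* ... hook the return address with the popped instance ... */
	return 0;
}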

Thanks,

-- 
Masami Hiramatsu <[email protected]>
