Re: [PATCH 3/3] early kprobes: x86: don't try to recover ftraced instruction before ftrace get ready.
On 2015/3/5 9:53, Wang Nan wrote:
>>> ...
>>> /* kprobe is available. */
>>> KP_STG_AVAIL,
>>
>> KP_STG_EARLY is better, isn't it? :)
>
> Sure. I will change it.

Sorry, I remembered the reason why I called it KP_STG_AVAIL: what if
.config turns off CONFIG_EARLY_KPROBES? In fact, in my WIP v5 series the
progress values are introduced earlier than early kprobes, so at that
point a name like KP_STG_EARLY would not make sense yet.

What do you think?

Thank you.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Re: [PATCH 3/3] early kprobes: x86: don't try to recover ftraced instruction before ftrace get ready.
Thanks for your reply. I have some inline comments, please see below.

By the way, as you suggested that I fold my patches up: will you review
my v4 series, or do you want to start looking at early kprobes after I
merge the small patch pieces together? Then I can decide whether to post
a v5 series with only patch merging and cleanups before receiving your
comments.

On 2015/3/4 23:26, Masami Hiramatsu wrote:
> Hi Wang,
>
> (2015/03/04 20:22), Wang Nan wrote:
>> Hi Masami,
>>
>> Following your advice, I adjusted the early kprobe patches and added a
>> kprobes_init_stage variable to indicate the initialization progress of
>> kprobes. It has the following available values:
>>
>> typedef enum {
>> 	/* kprobe initialization is error. */
>> 	KP_STG_ERR = -1,
>
> Eventually, all the negative values should be handled as an error,
> then we can store an encoded error reason or location on it.
> Anyway, at this point we don't need it.
>
>> 	/* kprobe is not ready. */
>> 	KP_STG_NONE,
>
> Please put the comment on the same line, as below.
>
> KP_STG_ERR = -1,	/* kprobe initialization is error. */
> KP_STG_NONE,		/* kprobe is not ready. */
> ...
>
>> 	/* kprobe is available. */
>> 	KP_STG_AVAIL,
>
> KP_STG_EARLY is better, isn't it? :)

Sure. I will change it.

>> 	/* Ftrace initialize is ready */
>> 	KP_STG_FTRACE_READY,
>> 	/* All resources are ready */
>> 	KP_STG_FULL,
>> 	/*
>> 	 * Since memory system is ready before ftrace, after ftrace inited we
>> 	 * are able to alloc normal kprobes.
>> 	 */
>> #define KP_STG_NORMAL KP_STG_FTRACE_READY
>
> No need to use a macro, you can directly define an enum symbol:
> KP_STG_NORMAL = KP_STG_FTRACE_READY
>
> BTW, what's the difference between FULL and NORMAL?

Please see patches 4/34 and 7/34. Since populate_kprobe_blacklist() is
the only part which cannot be called before the memory system is ready,
I leave it at its original place (init_kprobes()) and introduce a
within_kprobe_blacklist_early() which is slow but doesn't require
populate_kprobe_blacklist().

KP_STG_FULL now indicates whether populate_kprobe_blacklist() has run.
When the init progress reaches KP_STG_NORMAL, there's no need to use
statically allocated slots any more, but within_kprobe_blacklist_early()
is still required.

In fact, it is possible to make the memory system notify kprobes the way
ftrace does, and rename KP_STG_NORMAL to something like
KP_STG_MEMORY_READY. populate_kprobe_blacklist() could then run when
notified by the memory system, and init_kprobes() could be trimmed away
entirely. However, if we want to do something on early kprobes before
the whole system has booted (currently I don't), we still need
init_kprobes() and a KP_STG_FULL init stage.

>> } kprobes_init_stage_t;
>>
>> And kprobes_is_early becomes:
>>
>> static inline bool kprobes_is_early(void)
>> {
>> 	return (kprobes_init_stage < KP_STG_NORMAL);
>> }
>>
>> A helper function is added to make progress:
>>
>> static void kprobes_update_init_stage(kprobes_init_stage_t s)
>> {
>> 	BUG_ON((kprobes_init_stage == KP_STG_ERR) ||
>> 	       ((s <= kprobes_init_stage) && (s != KP_STG_ERR)));
>> 	kprobes_init_stage = s;
>> }
>
> Good! This serializes the initialization stage :)
>
>> The original kprobes_initialized, kprobes_blacklist_initialized and
>> kprobes_on_ftrace_initialized are all removed, replaced by:
>>
>> kprobes_initialized           --> (kprobes_init_stage >= KP_STG_AVAIL)
>> kprobes_blacklist_initialized --> (kprobes_init_stage >= KP_STG_FULL)
>> kprobes_on_ftrace_initialized --> (kprobes_init_stage >= KP_STG_FTRACE_READY)
>>
>> respectively.
>
> Yes, that is what I want.
>
>> The following patch is extracted from my WIP v5 series and will be
>> distributed into separate patches.
>>
>> (Please note that it is edited manually and cannot be applied directly.)
>>
>> Do you have any further suggestions?
>
> Sorry, I have still not reviewed your series yet, but it seems that your
> patches are chopped into smaller pieces. I recommend you fold them up to
> better granularity, like:
> - Each patch can be compiled without warnings.
> - If you introduce new functions, they should be called from somewhere in
>   the same patch (no orphaned functions, except for module APIs).

That may mix arch-specific and arch-independent code together, but I'll
try it in the v5 series.

> - Bugfixes, cleanups, and enhancements should be separated.
>
> And now I'm considering that early kprobes should be started as an
> independent series from ftrace. For example, we can configure
> early_kprobe only when KPROBES_ON_FTRACE=n. That will make it simpler
> and we can focus on what we basically need to do.

Yes, this is what I did in the v4 series. The patch 'early kprobes: add
CONFIG_EARLY_KPROBES option.' only enables it for ARM. After solving
KPROBES_ON_FTRACE, 'early kprobes: enable early kprobes for x86.'
enables it for x86.

Thank you.

> Thank you,
>
>> Thank you!
>>
>> ---
>> diff --git a
Re: Re: [PATCH 3/3] early kprobes: x86: don't try to recover ftraced instruction before ftrace get ready.
Hi Wang,

(2015/03/04 20:22), Wang Nan wrote:
> Hi Masami,
>
> Following your advice, I adjusted the early kprobe patches and added a
> kprobes_init_stage variable to indicate the initialization progress of
> kprobes. It has the following available values:
>
> typedef enum {
> 	/* kprobe initialization is error. */
> 	KP_STG_ERR = -1,

Eventually, all the negative values should be handled as an error,
then we can store an encoded error reason or location on it.
Anyway, at this point we don't need it.

> 	/* kprobe is not ready. */
> 	KP_STG_NONE,

Please put the comment on the same line, as below.

KP_STG_ERR = -1,	/* kprobe initialization is error. */
KP_STG_NONE,		/* kprobe is not ready. */
...

> 	/* kprobe is available. */
> 	KP_STG_AVAIL,

KP_STG_EARLY is better, isn't it? :)

> 	/* Ftrace initialize is ready */
> 	KP_STG_FTRACE_READY,
> 	/* All resources are ready */
> 	KP_STG_FULL,
> 	/*
> 	 * Since memory system is ready before ftrace, after ftrace inited we
> 	 * are able to alloc normal kprobes.
> 	 */
> #define KP_STG_NORMAL KP_STG_FTRACE_READY

No need to use a macro, you can directly define an enum symbol:
KP_STG_NORMAL = KP_STG_FTRACE_READY

BTW, what's the difference between FULL and NORMAL?

> } kprobes_init_stage_t;
>
> And kprobes_is_early becomes:
>
> static inline bool kprobes_is_early(void)
> {
> 	return (kprobes_init_stage < KP_STG_NORMAL);
> }
>
> A helper function is added to make progress:
>
> static void kprobes_update_init_stage(kprobes_init_stage_t s)
> {
> 	BUG_ON((kprobes_init_stage == KP_STG_ERR) ||
> 	       ((s <= kprobes_init_stage) && (s != KP_STG_ERR)));
> 	kprobes_init_stage = s;
> }

Good! This serializes the initialization stage :)

> The original kprobes_initialized, kprobes_blacklist_initialized and
> kprobes_on_ftrace_initialized are all removed, replaced by:
>
> kprobes_initialized           --> (kprobes_init_stage >= KP_STG_AVAIL)
> kprobes_blacklist_initialized --> (kprobes_init_stage >= KP_STG_FULL)
> kprobes_on_ftrace_initialized --> (kprobes_init_stage >= KP_STG_FTRACE_READY)
>
> respectively.

Yes, that is what I want.

> The following patch is extracted from my WIP v5 series and will be
> distributed into separate patches.
>
> (Please note that it is edited manually and cannot be applied directly.)
>
> Do you have any further suggestions?

Sorry, I have still not reviewed your series yet, but it seems that your
patches are chopped into smaller pieces. I recommend you fold them up to
better granularity, like:
- Each patch can be compiled without warnings.
- If you introduce new functions, they should be called from somewhere in
  the same patch (no orphaned functions, except for module APIs).
- Bugfixes, cleanups, and enhancements should be separated.

And now I'm considering that early kprobes should be started as an
independent series from ftrace. For example, we can configure early_kprobe
only when KPROBES_ON_FTRACE=n. That will make it simpler and we can focus
on what we basically need to do.

Thank you,

> Thank you!
>
> ---
> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
> index 87beb64..f8b7dcb 100644
> --- a/arch/x86/kernel/kprobes/core.c
> +++ b/arch/x86/kernel/kprobes/core.c
> @@ -234,6 +234,20 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
> 	 */
> 	if (WARN_ON(faddr && faddr != addr))
> 		return 0UL;
> +
> +	/*
> +	 * If ftrace is not ready yet, pretend this is not an ftrace
> +	 * location, because currently the target instruction has not
> +	 * been replaced by a NOP yet. When ftrace trying to convert
> +	 * it to NOP, kprobe should be notified and the kprobe data
> +	 * should be fixed at that time.
> +	 *
> +	 * Since it is possible that an early kprobe already on that
> +	 * place, don't return addr directly.
> +	 */
> +	if (unlikely(kprobes_init_stage < KP_STG_FTRACE_READY))
> +		faddr = 0UL;
> +
> 	/*
> 	 * Use the current code if it is not modified by Kprobe
> 	 * and it cannot be modified by ftrace.
> diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
> index 2d78bbb..04b97ec 100644
> --- a/include/linux/kprobes.h
> +++ b/include/linux/kprobes.h
> @@ -50,7 +50,31 @@
>  #define KPROBE_REENTER		0x0004
>  #define KPROBE_HIT_SSDONE	0x0008
>
> -extern bool kprobes_is_early(void);
> +/* Initialization state of kprobe */
> +typedef enum {
> +	/* kprobe initialization is error. */
> +	KP_STG_ERR = -1,
> +	/* kprobe is not ready. */
> +	KP_STG_NONE,
> +	/* kprobe is available. */
> +	KP_STG_AVAIL,
> +	/* Ftrace initialize is ready */
> +	KP_STG_FTRACE_READY,
> +	/* All resources are ready */
> +	KP_STG_FULL,
> +/*
> + * Since memory system is ready before ftrace, after ftrace inited we
> + * are able to alloc normal kprobes.
> + */
Re: [PATCH 3/3] early kprobes: x86: don't try to recover ftraced instruction before ftrace get ready.
Hi Masami,

Following your advice, I adjusted the early kprobe patches and added a
kprobes_init_stage variable to indicate the initialization progress of
kprobes. It has the following available values:

typedef enum {
	/* kprobe initialization is error. */
	KP_STG_ERR = -1,
	/* kprobe is not ready. */
	KP_STG_NONE,
	/* kprobe is available. */
	KP_STG_AVAIL,
	/* Ftrace initialize is ready */
	KP_STG_FTRACE_READY,
	/* All resources are ready */
	KP_STG_FULL,
	/*
	 * Since memory system is ready before ftrace, after ftrace inited we
	 * are able to alloc normal kprobes.
	 */
#define KP_STG_NORMAL KP_STG_FTRACE_READY
} kprobes_init_stage_t;

And kprobes_is_early becomes:

static inline bool kprobes_is_early(void)
{
	return (kprobes_init_stage < KP_STG_NORMAL);
}

A helper function is added to make progress:

static void kprobes_update_init_stage(kprobes_init_stage_t s)
{
	BUG_ON((kprobes_init_stage == KP_STG_ERR) ||
	       ((s <= kprobes_init_stage) && (s != KP_STG_ERR)));
	kprobes_init_stage = s;
}

The original kprobes_initialized, kprobes_blacklist_initialized and
kprobes_on_ftrace_initialized are all removed, replaced by:

kprobes_initialized           --> (kprobes_init_stage >= KP_STG_AVAIL)
kprobes_blacklist_initialized --> (kprobes_init_stage >= KP_STG_FULL)
kprobes_on_ftrace_initialized --> (kprobes_init_stage >= KP_STG_FTRACE_READY)

respectively.

The following patch is extracted from my WIP v5 series and will be
distributed into separate patches.

(Please note that it is edited manually and cannot be applied directly.)

Do you have any further suggestions?

Thank you!
---
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 87beb64..f8b7dcb 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -234,6 +234,20 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
	 */
	if (WARN_ON(faddr && faddr != addr))
		return 0UL;
+
+	/*
+	 * If ftrace is not ready yet, pretend this is not an ftrace
+	 * location, because currently the target instruction has not
+	 * been replaced by a NOP yet. When ftrace trying to convert
+	 * it to NOP, kprobe should be notified and the kprobe data
+	 * should be fixed at that time.
+	 *
+	 * Since it is possible that an early kprobe already on that
+	 * place, don't return addr directly.
+	 */
+	if (unlikely(kprobes_init_stage < KP_STG_FTRACE_READY))
+		faddr = 0UL;
+
	/*
	 * Use the current code if it is not modified by Kprobe
	 * and it cannot be modified by ftrace.
diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 2d78bbb..04b97ec 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -50,7 +50,31 @@
 #define KPROBE_REENTER		0x0004
 #define KPROBE_HIT_SSDONE	0x0008

-extern bool kprobes_is_early(void);
+/* Initialization state of kprobe */
+typedef enum {
+	/* kprobe initialization is error. */
+	KP_STG_ERR = -1,
+	/* kprobe is not ready. */
+	KP_STG_NONE,
+	/* kprobe is available. */
+	KP_STG_AVAIL,
+	/* Ftrace initialize is ready */
+	KP_STG_FTRACE_READY,
+	/* All resources are ready */
+	KP_STG_FULL,
+/*
+ * Since memory system is ready before ftrace, after ftrace inited we
+ * are able to alloc normal kprobes.
+ */
+#define KP_STG_NORMAL KP_STG_FTRACE_READY
+} kprobes_init_stage_t;
+
+extern kprobes_init_stage_t kprobes_init_stage;
+
+static inline bool kprobes_is_early(void)
+{
+	return (kprobes_init_stage < KP_STG_NORMAL);
+}
 #else /* CONFIG_KPROBES */
 typedef int kprobe_opcode_t;
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 1ec8e6e..f4d9fca 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -67,17 +67,14 @@
	addr = ((kprobe_opcode_t *)(kallsyms_lookup_name(name)))
 #endif

-static int kprobes_initialized;
-static int kprobes_blacklist_initialized;
-#if defined(CONFIG_KPROBES_ON_FTRACE) && defined(CONFIG_EARLY_KPROBES)
-static bool kprobes_on_ftrace_initialized __read_mostly = false;
-#else
-# define kprobes_on_ftrace_initialized false
-#endif
+kprobes_init_stage_t kprobes_init_stage __read_mostly = KP_STG_NONE;
+#define kprobes_initialized (kprobes_init_stage >= KP_STG_AVAIL)

-bool kprobes_is_early(void)
+static void kprobes_update_init_stage(kprobes_init_stage_t s)
 {
-	return !(kprobes_initialized && kprobes_blacklist_initialized);
+	BUG_ON((kprobes_init_stage == KP_STG_ERR) ||
+	       ((s <= kprobes_init_stage) && (s != KP_STG_ERR)));
+	kprobes_init_stage = s;
 }

 static struct hlist_head kprobe_table[KPROBE_TABLE_SIZE];
@@ -1416,7 +1415,7 @@ static bool within_kprobe_blacklist(unsigned long addr)
		return true;
 #endif
-	if (!kprobes_blacklist_initialized)
+	if (kprob
Re: [PATCH 3/3] early kprobes: x86: don't try to recover ftraced instruction before ftrace get ready.
(2015/03/04 13:39), Wang Nan wrote:
> On 2015/3/4 11:36, Masami Hiramatsu wrote:
>> (2015/03/04 11:24), Wang Nan wrote:
>>> On 2015/3/4 1:06, Petr Mladek wrote:
>>>> On Tue 2015-03-03 13:09:05, Wang Nan wrote:
>>>>> Before ftrace converts an instruction to a nop, if an early kprobe is
>>>>> registered then unregistered, without this patch its first bytes will
>>>>> be replaced by the head of a NOP, which may confuse ftrace.
>>>>>
>>>>> Actually, since we have a patch which converts the ftrace entry to a
>>>>> nop when probing, this problem should never be triggered. Provide it
>>>>> for safety.
>>>>>
>>>>> Signed-off-by: Wang Nan
>>>>> ---
>>>>>  arch/x86/kernel/kprobes/core.c | 3 +++
>>>>>  1 file changed, 3 insertions(+)
>>>>>
>>>>> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
>>>>> index 87beb64..c7d304d 100644
>>>>> --- a/arch/x86/kernel/kprobes/core.c
>>>>> +++ b/arch/x86/kernel/kprobes/core.c
>>>>> @@ -225,6 +225,9 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
>>>>>  	struct kprobe *kp;
>>>>>  	unsigned long faddr;
>>>>>
>>>>> +	if (!kprobes_on_ftrace_initialized)
>>>>> +		return addr;
>>>>
>>>> This is not correct. The function has to return a buffer with the
>>>> original code also when it is modified by normal kprobes. If it is a
>>>> normal kprobe, it reads the current code and replaces the first byte
>>>> (INT3 instruction) with the saved kp->opcode.
>>>>
>>>>> +
>>>>>  	kp = get_kprobe((void *)addr);
>>>>>  	faddr = ftrace_location(addr);
>>>>
>>>> IMHO, the proper fix might be to replace the above line with
>>>>
>>>> 	if (kprobes_on_ftrace_initialized)
>>>> 		faddr = ftrace_location(addr);
>>>> 	else
>>>> 		faddr = 0UL;
>>>>
>>>> In other words, it might pretend that it is not an ftrace location
>>>> when ftrace is not ready yet.
>>>
>>> Thanks for your reply. I'll follow your suggestion in my next version.
>>> I changed it as follows to enable the checking.
>>>
>>> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
>>> index 4e3d5a9..3241677 100644
>>> --- a/arch/x86/kernel/kprobes/core.c
>>> +++ b/arch/x86/kernel/kprobes/core.c
>>> @@ -234,6 +234,20 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
>>> 	 */
>>> 	if (WARN_ON(faddr && faddr != addr))
>>> 		return 0UL;
>>> +
>>> +	/*
>>> +	 * If ftrace is not ready yet, pretend this is not an ftrace
>>> +	 * location, because currently the target instruction has not
>>> +	 * been replaced by a NOP yet. When ftrace trying to convert
>>> +	 * it to NOP, kprobe should be notified and the kprobe data
>>> +	 * should be fixed at that time.
>>> +	 *
>>> +	 * Since it is possible that an early kprobe already on that
>>> +	 * place, don't return addr directly.
>>> +	 */
>>> +	if (unlikely(!kprobes_on_ftrace_initialized))
>>> +		faddr = 0UL;
>>> +
>>> 	/*
>>> 	 * Use the current code if it is not modified by Kprobe
>>> 	 * and it cannot be modified by ftrace
>>
>> This is better, but I don't think we need bool XXX_initialized flags
>> for each subfunction. Those should be serialized.
>>
>> Thank you,
>
> For this specific case, calling __recover_probed_insn() is mandatory for
> can_boost(). However, we can disallow early kprobes to be unregistered
> before ftrace is ready, and let ftrace fix all inconsistency by calling
> kprobe_on_ftrace_get_old_insn(). That will make things simpler, and
> constrain the scope of kprobes_on_ftrace_initialized to kernel/kprobes.c.

What I meant was consolidating those XXX_initialized flags into a
kprobes_init_stage flag and an enum kprobes_stage which has
{ KP_STG_NONE, KP_STG_EARLY, KP_STG_NORMAL } etc. This can serialize the
staging phases and avoid unexpected initialized-flag combinations.

> The cost is being unable to do the smoke test for early kprobes because
> it will remove all kprobes before returning. I think it should be
> acceptable. What do you think?

Ah, I see. We should change it to remove only the kprobes which the
smoke test inserted. Or sort the test order so it runs after ftrace is
initialized. I guess the latter is better, since at the point the smoke
test is executed, all the kprobe-events features should be available.

Thank you,

> Thank you.

--
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu...@hitachi.com
Re: [PATCH 3/3] early kprobes: x86: don't try to recover ftraced instruction before ftrace get ready.
On 2015/3/4 11:36, Masami Hiramatsu wrote:
> (2015/03/04 11:24), Wang Nan wrote:
>> On 2015/3/4 1:06, Petr Mladek wrote:
>>> On Tue 2015-03-03 13:09:05, Wang Nan wrote:
>>>> Before ftrace converts an instruction to a nop, if an early kprobe is
>>>> registered then unregistered, without this patch its first bytes will
>>>> be replaced by the head of a NOP, which may confuse ftrace.
>>>>
>>>> Actually, since we have a patch which converts the ftrace entry to a
>>>> nop when probing, this problem should never be triggered. Provide it
>>>> for safety.
>>>>
>>>> Signed-off-by: Wang Nan
>>>> ---
>>>>  arch/x86/kernel/kprobes/core.c | 3 +++
>>>>  1 file changed, 3 insertions(+)
>>>>
>>>> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
>>>> index 87beb64..c7d304d 100644
>>>> --- a/arch/x86/kernel/kprobes/core.c
>>>> +++ b/arch/x86/kernel/kprobes/core.c
>>>> @@ -225,6 +225,9 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
>>>>  	struct kprobe *kp;
>>>>  	unsigned long faddr;
>>>>
>>>> +	if (!kprobes_on_ftrace_initialized)
>>>> +		return addr;
>>>
>>> This is not correct. The function has to return a buffer with the
>>> original code also when it is modified by normal kprobes. If it is a
>>> normal kprobe, it reads the current code and replaces the first byte
>>> (INT3 instruction) with the saved kp->opcode.
>>>
>>>> +
>>>>  	kp = get_kprobe((void *)addr);
>>>>  	faddr = ftrace_location(addr);
>>>
>>> IMHO, the proper fix might be to replace the above line with
>>>
>>> 	if (kprobes_on_ftrace_initialized)
>>> 		faddr = ftrace_location(addr);
>>> 	else
>>> 		faddr = 0UL;
>>>
>>> In other words, it might pretend that it is not an ftrace location
>>> when ftrace is not ready yet.
>>
>> Thanks for your reply. I'll follow your suggestion in my next version.
>> I changed it as follows to enable the checking.
>>
>> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
>> index 4e3d5a9..3241677 100644
>> --- a/arch/x86/kernel/kprobes/core.c
>> +++ b/arch/x86/kernel/kprobes/core.c
>> @@ -234,6 +234,20 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
>> 	 */
>> 	if (WARN_ON(faddr && faddr != addr))
>> 		return 0UL;
>> +
>> +	/*
>> +	 * If ftrace is not ready yet, pretend this is not an ftrace
>> +	 * location, because currently the target instruction has not
>> +	 * been replaced by a NOP yet. When ftrace trying to convert
>> +	 * it to NOP, kprobe should be notified and the kprobe data
>> +	 * should be fixed at that time.
>> +	 *
>> +	 * Since it is possible that an early kprobe already on that
>> +	 * place, don't return addr directly.
>> +	 */
>> +	if (unlikely(!kprobes_on_ftrace_initialized))
>> +		faddr = 0UL;
>> +
>> 	/*
>> 	 * Use the current code if it is not modified by Kprobe
>> 	 * and it cannot be modified by ftrace
>
> This is better, but I don't think we need bool XXX_initialized flags
> for each subfunction. Those should be serialized.
>
> Thank you,

For this specific case, calling __recover_probed_insn() is mandatory for
can_boost(). However, we can disallow early kprobes to be unregistered
before ftrace is ready, and let ftrace fix all inconsistency by calling
kprobe_on_ftrace_get_old_insn(). That will make things simpler, and
constrain the scope of kprobes_on_ftrace_initialized to kernel/kprobes.c.
The cost is being unable to do the smoke test for early kprobes because
it will remove all kprobes before returning. I think it should be
acceptable. What do you think?

Thank you.
Re: Re: [PATCH 3/3] early kprobes: x86: don't try to recover ftraced instruction before ftrace get ready.
(2015/03/04 11:24), Wang Nan wrote:
> On 2015/3/4 1:06, Petr Mladek wrote:
>> On Tue 2015-03-03 13:09:05, Wang Nan wrote:
>>> Before ftrace converts an instruction to a nop, if an early kprobe is
>>> registered then unregistered, without this patch its first bytes will
>>> be replaced by the head of a NOP, which may confuse ftrace.
>>>
>>> Actually, since we have a patch which converts the ftrace entry to a
>>> nop when probing, this problem should never be triggered. Provide it
>>> for safety.
>>>
>>> Signed-off-by: Wang Nan
>>> ---
>>>  arch/x86/kernel/kprobes/core.c | 3 +++
>>>  1 file changed, 3 insertions(+)
>>>
>>> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
>>> index 87beb64..c7d304d 100644
>>> --- a/arch/x86/kernel/kprobes/core.c
>>> +++ b/arch/x86/kernel/kprobes/core.c
>>> @@ -225,6 +225,9 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
>>>  	struct kprobe *kp;
>>>  	unsigned long faddr;
>>>
>>> +	if (!kprobes_on_ftrace_initialized)
>>> +		return addr;
>>
>> This is not correct. The function has to return a buffer with the
>> original code also when it is modified by normal kprobes. If it is a
>> normal kprobe, it reads the current code and replaces the first byte
>> (INT3 instruction) with the saved kp->opcode.
>>
>>> +
>>>  	kp = get_kprobe((void *)addr);
>>>  	faddr = ftrace_location(addr);
>>
>> IMHO, the proper fix might be to replace the above line with
>>
>> 	if (kprobes_on_ftrace_initialized)
>> 		faddr = ftrace_location(addr);
>> 	else
>> 		faddr = 0UL;
>>
>> In other words, it might pretend that it is not an ftrace location
>> when ftrace is not ready yet.
>
> Thanks for your reply. I'll follow your suggestion in my next version.
> I changed it as follows to enable the checking.
>
> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
> index 4e3d5a9..3241677 100644
> --- a/arch/x86/kernel/kprobes/core.c
> +++ b/arch/x86/kernel/kprobes/core.c
> @@ -234,6 +234,20 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
> 	 */
> 	if (WARN_ON(faddr && faddr != addr))
> 		return 0UL;
> +
> +	/*
> +	 * If ftrace is not ready yet, pretend this is not an ftrace
> +	 * location, because currently the target instruction has not
> +	 * been replaced by a NOP yet. When ftrace trying to convert
> +	 * it to NOP, kprobe should be notified and the kprobe data
> +	 * should be fixed at that time.
> +	 *
> +	 * Since it is possible that an early kprobe already on that
> +	 * place, don't return addr directly.
> +	 */
> +	if (unlikely(!kprobes_on_ftrace_initialized))
> +		faddr = 0UL;
> +
> 	/*
> 	 * Use the current code if it is not modified by Kprobe
> 	 * and it cannot be modified by ftrace

This is better, but I don't think we need bool XXX_initialized flags
for each subfunction. Those should be serialized.

Thank you,

--
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Research Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu...@hitachi.com
Re: [PATCH 3/3] early kprobes: x86: don't try to recover ftraced instruction before ftrace get ready.
On 2015/3/4 1:06, Petr Mladek wrote:
> On Tue 2015-03-03 13:09:05, Wang Nan wrote:
>> Before ftrace converts an instruction to a nop, if an early kprobe is
>> registered then unregistered, without this patch its first bytes will
>> be replaced by the head of a NOP, which may confuse ftrace.
>>
>> Actually, since we have a patch which converts the ftrace entry to a
>> nop when probing, this problem should never be triggered. Provide it
>> for safety.
>>
>> Signed-off-by: Wang Nan
>> ---
>>  arch/x86/kernel/kprobes/core.c | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
>> index 87beb64..c7d304d 100644
>> --- a/arch/x86/kernel/kprobes/core.c
>> +++ b/arch/x86/kernel/kprobes/core.c
>> @@ -225,6 +225,9 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
>>  	struct kprobe *kp;
>>  	unsigned long faddr;
>>
>> +	if (!kprobes_on_ftrace_initialized)
>> +		return addr;
>
> This is not correct. The function has to return a buffer with the
> original code also when it is modified by normal kprobes. If it is a
> normal kprobe, it reads the current code and replaces the first byte
> (INT3 instruction) with the saved kp->opcode.
>
>> +
>>  	kp = get_kprobe((void *)addr);
>>  	faddr = ftrace_location(addr);
>
> IMHO, the proper fix might be to replace the above line with
>
> 	if (kprobes_on_ftrace_initialized)
> 		faddr = ftrace_location(addr);
> 	else
> 		faddr = 0UL;
>
> In other words, it might pretend that it is not an ftrace location
> when ftrace is not ready yet.

Thanks for your reply. I'll follow your suggestion in my next version.
I changed it as follows to enable the checking.

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 4e3d5a9..3241677 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -234,6 +234,20 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
	 */
	if (WARN_ON(faddr && faddr != addr))
		return 0UL;
+
+	/*
+	 * If ftrace is not ready yet, pretend this is not an ftrace
+	 * location, because currently the target instruction has not
+	 * been replaced by a NOP yet. When ftrace trying to convert
+	 * it to NOP, kprobe should be notified and the kprobe data
+	 * should be fixed at that time.
+	 *
+	 * Since it is possible that an early kprobe already on that
+	 * place, don't return addr directly.
+	 */
+	if (unlikely(!kprobes_on_ftrace_initialized))
+		faddr = 0UL;
+
	/*
	 * Use the current code if it is not modified by Kprobe
	 * and it cannot be modified by ftrace

> Or is the code modified in another special way when it is an ftrace
> location but ftrace has not been initialized yet?
>
> Best Regards,
> Petr
Re: [PATCH 3/3] early kprobes: x86: don't try to recover ftraced instruction before ftrace get ready.
On Tue 2015-03-03 13:09:05, Wang Nan wrote:
> Before ftrace converts an instruction to a nop, if an early kprobe is
> registered then unregistered, without this patch its first bytes will
> be replaced by the head of a NOP, which may confuse ftrace.
>
> Actually, since we have a patch which converts the ftrace entry to a
> nop when probing, this problem should never be triggered. Provide it
> for safety.
>
> Signed-off-by: Wang Nan
> ---
>  arch/x86/kernel/kprobes/core.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
> index 87beb64..c7d304d 100644
> --- a/arch/x86/kernel/kprobes/core.c
> +++ b/arch/x86/kernel/kprobes/core.c
> @@ -225,6 +225,9 @@ __recover_probed_insn(kprobe_opcode_t *buf, unsigned long addr)
>  	struct kprobe *kp;
>  	unsigned long faddr;
>
> +	if (!kprobes_on_ftrace_initialized)
> +		return addr;

This is not correct. The function has to return a buffer with the
original code also when it is modified by normal kprobes. If it is a
normal kprobe, it reads the current code and replaces the first byte
(INT3 instruction) with the saved kp->opcode.

> +
>  	kp = get_kprobe((void *)addr);
>  	faddr = ftrace_location(addr);

IMHO, the proper fix might be to replace the above line with

	if (kprobes_on_ftrace_initialized)
		faddr = ftrace_location(addr);
	else
		faddr = 0UL;

In other words, it might pretend that it is not an ftrace location
when ftrace is not ready yet.

Or is the code modified in another special way when it is an ftrace
location but ftrace has not been initialized yet?

Best Regards,
Petr