Hi Steve,

> "Joel Fernandes (Google)" <[email protected]> wrote:
[..]
> Also could you let me know what is the correct behavior of the filters
> after a module being traced is unloaded, are the filters supposed to
> be setup again after the module is unloaded, before the module can be
> loaded again?
Actually I learnt that it doesn't get preserved, so I wrote a patch to fix
that. Let me know what you think, thanks. (It's only lightly tested - I
checked that the filters are preserved and will do more testing tomorrow.)

---8<---
From: Joel Fernandes <[email protected]>
Subject: [PATCH RFC] ftrace: Preserve filters across loading/unloading of module

When a module is removed, the filters associated with it are cleared.
However, it doesn't make sense to clear them in the following workflow:

1. set up filters (:mod:<mod-name>)
2. load module
3. unload module
4. load module

In step 4, the filters would be gone. Since the recently added mechanism
to do step 1 before step 2 (deferred execution of the module command
string) is already present, we can reuse the same mechanism to preserve
the filters. This patch does that and avoids losing the filters and
having to set them up again.

Cc: Steven Rostedt <[email protected]>
Signed-off-by: Joel Fernandes <[email protected]>
---
 kernel/trace/ftrace.c | 25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 6abfafd7f173..4979fd2ef466 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -5693,7 +5693,9 @@ static int referenced_filters(struct dyn_ftrace *rec)
 }
 
 static void
-clear_mod_from_hash(struct ftrace_page *pg, struct ftrace_hash *hash)
+deactivate_mod_from_hash(struct module *mod, struct ftrace_page *pg,
+			 struct trace_array *tr, int enable,
+			 struct ftrace_hash *hash)
 {
 	struct ftrace_func_entry *entry;
 	struct dyn_ftrace *rec;
@@ -5712,11 +5714,22 @@ clear_mod_from_hash(struct ftrace_page *pg, struct ftrace_hash *hash)
 		 */
 		if (entry)
 			entry->ip = 0;
+
+		/*
+		 * The hash entry has been cleared; save the function as a
+		 * module command string for the next time the module loads.
+		 */
+		if (entry) {
+			char func[KSYM_SYMBOL_LEN];
+			kallsyms_lookup(rec->ip, NULL, NULL, NULL, func);
+			ftrace_add_mod(tr, func, mod->name, enable);
+		}
 	}
 }
 
 /* Clear any records from hashs */
-static void clear_mod_from_hashes(struct ftrace_page *pg)
+static void deactivate_mod_from_hashes(struct module *mod,
+				       struct ftrace_page *pg)
 {
 	struct trace_array *tr;
 
@@ -5725,8 +5738,10 @@ static void clear_mod_from_hashes(struct ftrace_page *pg)
 		if (!tr->ops || !tr->ops->func_hash)
 			continue;
 		mutex_lock(&tr->ops->func_hash->regex_lock);
-		clear_mod_from_hash(pg, tr->ops->func_hash->filter_hash);
-		clear_mod_from_hash(pg, tr->ops->func_hash->notrace_hash);
+		deactivate_mod_from_hash(mod, pg, tr, 1,
+					 tr->ops->func_hash->filter_hash);
+		deactivate_mod_from_hash(mod, pg, tr, 0,
+					 tr->ops->func_hash->notrace_hash);
 		mutex_unlock(&tr->ops->func_hash->regex_lock);
 	}
 	mutex_unlock(&trace_types_lock);
@@ -5778,7 +5793,7 @@ void ftrace_release_mod(struct module *mod)
 	for (pg = tmp_page; pg; pg = tmp_page) {
 
 		/* Needs to be called outside of ftrace_lock */
-		clear_mod_from_hashes(pg);
+		deactivate_mod_from_hashes(mod, pg);
 
 		order = get_count_order(pg->size / ENTRIES_PER_PAGE);
 		free_pages((unsigned long)pg->records, order);
-- 
2.14.2.920.gcf0c67979c-goog
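
For reference, the workflow from the changelog can be exercised with
something like the following sketch; the module name e1000e and the
tracefs mount point are only examples, not part of the patch:

  # assuming tracefs is mounted at /sys/kernel/tracing and e1000e is not loaded
  cd /sys/kernel/tracing
  echo ':mod:e1000e' > set_ftrace_filter   # step 1: filter on the module's functions
  modprobe e1000e                          # step 2: load module, filters take effect
  rmmod e1000e                             # step 3: unload module
  modprobe e1000e                          # step 4: load module again
  cat set_ftrace_filter                    # with this patch the :mod: filters should survive

Without the patch, step 3 empties the filter list, so after step 4 the
:mod: command has to be written again before anything is traced.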

