Message ID | 157918590192.29301.6909688694265698678.stgit@devnote2
---|---
State | RFC
Series | tracing: kprobes: Introduce async unregistration
Hi Masami,
I love your patch! Perhaps something to improve:
[auto build test WARNING on next-20200115]
[cannot apply to tip/perf/core ia64/next tip/x86/core linus/master v5.5-rc6 v5.5-rc5 v5.5-rc4 v5.5]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest using the '--base' option to specify
the base tree in git format-patch; please see https://stackoverflow.com/a/37406982]
url: https://github.com/0day-ci/linux/commits/Masami-Hiramatsu/tracing-kprobes-Introduce-async-unregistration/20200117-191143
base: 5b483a1a0ea1ab19a5734051c9692c2cfabe1327
reproduce:
# apt-get install sparse
# sparse version: v0.6.1-153-g47b6dfef-dirty
make ARCH=x86_64 allmodconfig
make C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__'
If you fix the issue, kindly add the following tag
Reported-by: kbuild test robot <lkp@intel.com>
sparse warnings: (new ones prefixed by >>)
>> kernel/trace/trace_kprobe.c:331:10: sparse: sparse: symbol 'trace_kprobe_disabled_finished' was not declared. Should it be static?
Please review and possibly fold the follow-up patch.
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org Intel Corporation
On Mon, 27 Jan 2020 23:02:43 +0800
kbuild test robot <lkp@intel.com> wrote:

> Fixes: 3c794bf25a2b ("tracing/kprobe: Use call_rcu to defer freeing event_file_link")
> Signed-off-by: kbuild test robot <lkp@intel.com>
> ---
>  trace_kprobe.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> index 1a5882bb77471..fba738aa458af 100644
> --- a/kernel/trace/trace_kprobe.c
> +++ b/kernel/trace/trace_kprobe.c
> @@ -328,7 +328,7 @@ static inline int __enable_trace_kprobe(struct trace_kprobe *tk)
>  	return ret;
>  }
>
> -atomic_t trace_kprobe_disabled_finished;
> +static atomic_t trace_kprobe_disabled_finished;
>
>  static void trace_kprobe_disabled_handlers_finish(void)
>  {

Oops, right. I forgot the static. I'll update it.

Thanks,
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index cbdc4f4e64c7..906af1ffe2b2 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -328,10 +328,25 @@ static inline int __enable_trace_kprobe(struct trace_kprobe *tk)
 	return ret;
 }
 
+atomic_t trace_kprobe_disabled_finished;
+
+static void trace_kprobe_disabled_handlers_finish(void)
+{
+	if (atomic_read(&trace_kprobe_disabled_finished))
+		synchronize_rcu();
+}
+
+static void trace_kprobe_disabled_finish_cb(struct rcu_head *head)
+{
+	atomic_dec(&trace_kprobe_disabled_finished);
+	kfree(head);
+}
+
 static void __disable_trace_kprobe(struct trace_probe *tp)
 {
 	struct trace_probe *pos;
 	struct trace_kprobe *tk;
+	struct rcu_head *head;
 
 	list_for_each_entry(pos, trace_probe_probe_list(tp), list) {
 		tk = container_of(pos, struct trace_kprobe, tp);
@@ -342,6 +357,13 @@ static void __disable_trace_kprobe(struct trace_probe *tp)
 		else
 			disable_kprobe(&tk->rp.kp);
 	}
+
+	/* Handler exit gatekeeper */
+	head = kzalloc(sizeof(*head), GFP_KERNEL);
+	if (WARN_ON(!head))
+		return;
+	atomic_inc(&trace_kprobe_disabled_finished);
+	call_rcu(head, trace_kprobe_disabled_finish_cb);
 }
 
 /*
@@ -422,13 +444,11 @@ static int disable_trace_kprobe(struct trace_event_call *call,
 
  out:
 	if (file)
-		/*
-		 * Synchronization is done in below function. For perf event,
-		 * file == NULL and perf_trace_event_unreg() calls
-		 * tracepoint_synchronize_unregister() to ensure synchronize
-		 * event. We don't need to care about it.
-		 */
 		trace_probe_remove_file(tp, file);
+	/*
+	 * We have no RCU synchronization here. Caller must wait for the
+	 * completion of disabling.
+	 */
 
 	return 0;
 }
 
@@ -542,6 +562,9 @@ static int unregister_trace_kprobe(struct trace_kprobe *tk)
 	if (trace_probe_is_enabled(&tk->tp))
 		return -EBUSY;
 
+	/* Make sure all disabled trace_kprobe handlers finished */
+	trace_kprobe_disabled_handlers_finish();
+
 	/* Will fail if probe is being used by ftrace or perf */
 	if (unregister_kprobe_event(tk))
 		return -EBUSY;
 
diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
index 905b10af5d5c..b18df8e1b2d6 100644
--- a/kernel/trace/trace_probe.c
+++ b/kernel/trace/trace_probe.c
@@ -1067,6 +1067,13 @@ struct event_file_link *trace_probe_get_file_link(struct trace_probe *tp,
 	return NULL;
 }
 
+static void event_file_link_free_cb(struct rcu_head *head)
+{
+	struct event_file_link *link = container_of(head, typeof(*link), rcu);
+
+	kfree(link);
+}
+
 int trace_probe_remove_file(struct trace_probe *tp,
 			    struct trace_event_file *file)
 {
@@ -1077,8 +1084,7 @@ int trace_probe_remove_file(struct trace_probe *tp,
 		return -ENOENT;
 
 	list_del_rcu(&link->list);
-	synchronize_rcu();
-	kfree(link);
+	call_rcu(&link->rcu, event_file_link_free_cb);
 
 	if (list_empty(&tp->event->files))
 		trace_probe_clear_flag(tp, TP_FLAG_TRACE);
 
diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
index 4ee703728aec..71ac01a50815 100644
--- a/kernel/trace/trace_probe.h
+++ b/kernel/trace/trace_probe.h
@@ -243,6 +243,7 @@ struct trace_probe {
 struct event_file_link {
 	struct trace_event_file	*file;
 	struct list_head	list;
+	struct rcu_head		rcu;
 };
 
 static inline bool trace_probe_test_flag(struct trace_probe *tp,
Use call_rcu() to defer freeing the event_file_link data structure. This
removes the RCU synchronization from the per-probe event disabling
operation.

Since unregistering a kprobe event requires all handlers to be disabled
and finished, this also introduces a gatekeeper to ensure that. If any
disabled event has not yet finished, the unregister process synchronizes
RCU once (IOW, it may sleep a while).

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 kernel/trace/trace_kprobe.c |   35 +++++++++++++++++++++++++++++------
 kernel/trace/trace_probe.c  |   10 ++++++++--
 kernel/trace/trace_probe.h  |    1 +
 3 files changed, 38 insertions(+), 8 deletions(-)