From patchwork Sat Jul 11 01:26:35 2020
X-Patchwork-Submitter: Song Liu
X-Patchwork-Id: 1327204
From: Song Liu
Subject: [PATCH bpf-next 1/5] bpf: block bpf_get_[stack|stackid] on perf_event with PEBS entries
Date: Fri, 10 Jul 2020 18:26:35 -0700
Message-ID: <20200711012639.3429622-2-songliubraving@fb.com>
In-Reply-To: <20200711012639.3429622-1-songliubraving@fb.com>
References: <20200711012639.3429622-1-songliubraving@fb.com>
Calling get_perf_callchain() on perf_events from PEBS entries may cause
unwinder errors. To fix this issue, the callchain is fetched early. Such
perf_events are marked with __PERF_SAMPLE_CALLCHAIN_EARLY.

Similarly, calling bpf_get_[stack|stackid] on perf_events from PEBS entries
may also cause unwinder errors. To fix this, block bpf_get_[stack|stackid]
on these perf_events. Unfortunately, the BPF verifier cannot tell whether
the program will be attached to a perf_event with PEBS entries. Therefore,
block such programs during ioctl(PERF_EVENT_IOC_SET_BPF).

Signed-off-by: Song Liu
---
 include/linux/filter.h | 3 ++-
 kernel/bpf/verifier.c  | 3 +++
 kernel/events/core.c   | 10 ++++++++++
 3 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 2593777236037..fb34dc40f039b 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -534,7 +534,8 @@ struct bpf_prog {
 				is_func:1, /* program is a bpf function */
 				kprobe_override:1, /* Do we override a kprobe? */
 				has_callchain_buf:1, /* callchain buffer allocated? */
-				enforce_expected_attach_type:1; /* Enforce expected_attach_type checking at attach time */
+				enforce_expected_attach_type:1, /* Enforce expected_attach_type checking at attach time */
+				call_get_perf_callchain:1; /* Do we call helpers that use get_perf_callchain()? */
 	enum bpf_prog_type	type;		/* Type of BPF program */
 	enum bpf_attach_type	expected_attach_type; /* For some prog types */
 	u32			len;		/* Number of filter blocks */
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b608185e1ffd5..1e11b0f6fba31 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4884,6 +4884,9 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
 		env->prog->has_callchain_buf = true;
 	}
 
+	if (func_id == BPF_FUNC_get_stackid || func_id == BPF_FUNC_get_stack)
+		env->prog->call_get_perf_callchain = true;
+
 	if (changes_data)
 		clear_all_pkt_pointers(env);
 	return 0;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 856d98c36f562..f2f575a286bb4 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -9544,6 +9544,16 @@ static int perf_event_set_bpf_handler(struct perf_event *event, u32 prog_fd)
 	if (IS_ERR(prog))
 		return PTR_ERR(prog);
 
+	if ((event->attr.sample_type & __PERF_SAMPLE_CALLCHAIN_EARLY) &&
+	    prog->call_get_perf_callchain) {
+		/*
+		 * The perf_event calls get_perf_callchain() early, so the
+		 * attached BPF program shouldn't call get_perf_callchain() again.
+		 */
+		bpf_prog_put(prog);
+		return -EINVAL;
+	}
+
 	event->prog = prog;
 	event->orig_overflow_handler = READ_ONCE(event->overflow_handler);
 	WRITE_ONCE(event->overflow_handler, bpf_overflow_handler);
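For context, a minimal user-space sketch of the behavior this patch introduces
follows. It is not part of the series: the event configuration, error handling,
and the assumption that prog_fd refers to an already loaded BPF_PROG_TYPE_PERF_EVENT
program that calls bpf_get_stackid() are all illustrative (the real test is in
patch 4). Whether a given event ends up marked __PERF_SAMPLE_CALLCHAIN_EARLY
depends on the PMU; precise_ip > 0 together with PERF_SAMPLE_CALLCHAIN is the
typical PEBS case.

	/* Hypothetical sketch, not from the patch: attach via PERF_EVENT_IOC_SET_BPF
	 * and expect -EINVAL once this change is applied. prog_fd is assumed to be a
	 * loaded perf_event BPF program that calls bpf_get_stackid().
	 */
	#include <linux/perf_event.h>
	#include <sys/ioctl.h>
	#include <sys/syscall.h>
	#include <unistd.h>
	#include <errno.h>
	#include <stdio.h>

	static int try_attach_to_pebs(int prog_fd)
	{
		struct perf_event_attr attr = {
			.type = PERF_TYPE_HARDWARE,
			.config = PERF_COUNT_HW_CPU_CYCLES,
			.size = sizeof(attr),
			.precise_ip = 2,	/* request PEBS */
			.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_CALLCHAIN,
			.sample_period = 100000,
		};
		int pmu_fd, err = 0;

		pmu_fd = syscall(__NR_perf_event_open, &attr, -1 /* pid */,
				 0 /* cpu */, -1 /* group fd */, 0 /* flags */);
		if (pmu_fd < 0)
			return -errno;

		/* With this patch, the ioctl is expected to fail when the event
		 * fetches its callchain early and prog->call_get_perf_callchain
		 * is set.
		 */
		if (ioctl(pmu_fd, PERF_EVENT_IOC_SET_BPF, prog_fd) < 0) {
			err = -errno;
			fprintf(stderr, "SET_BPF rejected: %d (expected -EINVAL)\n", err);
		}

		close(pmu_fd);
		return err;
	}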
From patchwork Sat Jul 11 01:26:36 2020
X-Patchwork-Submitter: Song Liu
X-Patchwork-Id: 1327205
From: Song Liu
Subject: [PATCH bpf-next 2/5] bpf: add callchain to bpf_perf_event_data
Date: Fri, 10 Jul 2020 18:26:36 -0700
Message-ID: <20200711012639.3429622-3-songliubraving@fb.com>
In-Reply-To: <20200711012639.3429622-1-songliubraving@fb.com>
References: <20200711012639.3429622-1-songliubraving@fb.com>

If the callchain is available, the BPF program can use
bpf_probe_read_kernel() to fetch the callchain, or pass it to a BPF
helper.

Signed-off-by: Song Liu
---
 include/linux/perf_event.h                |  5 -----
 include/linux/trace_events.h              |  5 +++++
 include/uapi/linux/bpf_perf_event.h       |  7 ++++++
 kernel/bpf/btf.c                          |  5 +++++
 kernel/trace/bpf_trace.c                  | 27 +++++++++++++++++++++++
 tools/include/uapi/linux/bpf_perf_event.h |  8 +++++++
 6 files changed, 52 insertions(+), 5 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 00ab5efa38334..3a68c999f50d1 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -59,11 +59,6 @@ struct perf_guest_info_callbacks {
 #include
 #include
 
-struct perf_callchain_entry {
-	__u64				nr;
-	__u64				ip[]; /* /proc/sys/kernel/perf_event_max_stack */
-};
-
 struct perf_callchain_entry_ctx {
 	struct perf_callchain_entry *entry;
 	u32				max_stack;
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 5c69433540494..8e1e88f40eef9 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -631,6 +631,7 @@ void bpf_put_raw_tracepoint(struct bpf_raw_event_map *btp);
 int bpf_get_perf_event_info(const struct perf_event *event, u32 *prog_id,
 			    u32 *fd_type, const char **buf, u64 *probe_offset,
 			    u64 *probe_addr);
+int bpf_trace_init_btf_ids(struct btf *btf);
 #else
 static inline unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
 {
@@ -672,6 +673,10 @@ static inline int bpf_get_perf_event_info(const struct perf_event *event,
 {
 	return -EOPNOTSUPP;
 }
+static inline int bpf_trace_init_btf_ids(struct btf *btf)
+{
+	return -EOPNOTSUPP;
+}
 #endif
diff --git a/include/uapi/linux/bpf_perf_event.h b/include/uapi/linux/bpf_perf_event.h
index eb1b9d21250c6..40f4df80ab4fa 100644
--- a/include/uapi/linux/bpf_perf_event.h
+++ b/include/uapi/linux/bpf_perf_event.h
@@ -9,11 +9,18 @@
 #define _UAPI__LINUX_BPF_PERF_EVENT_H__
 
 #include
+#include
+
+struct perf_callchain_entry {
+	__u64 nr;
+	__u64 ip[]; /* /proc/sys/kernel/perf_event_max_stack */
+};
 
 struct bpf_perf_event_data {
 	bpf_user_pt_regs_t regs;
 	__u64 sample_period;
 	__u64 addr;
+	__bpf_md_ptr(struct perf_callchain_entry *, callchain);
 };
 
 #endif /* _UAPI__LINUX_BPF_PERF_EVENT_H__ */
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 4c3007f428b16..cb122e14dba38 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 #include
 
 /* BTF (BPF Type Format) is the meta data format which describes
@@ -3673,6 +3674,10 @@ struct btf *btf_parse_vmlinux(void)
 	if (err < 0)
 		goto errout;
 
+	err = bpf_trace_init_btf_ids(btf);
+	if (err < 0)
+		goto errout;
+
 	bpf_struct_ops_init(btf, log);
 	init_btf_sock_ids(btf);
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index e0b7775039ab9..c014846c2723c 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -31,6 +32,20 @@ struct bpf_trace_module {
 static LIST_HEAD(bpf_trace_modules);
 static DEFINE_MUTEX(bpf_module_mutex);
 
+static u32 perf_callchain_entry_btf_id;
+
+int bpf_trace_init_btf_ids(struct btf *btf)
+{
+	s32 type_id;
+
+	type_id = btf_find_by_name_kind(btf, "perf_callchain_entry",
+					BTF_KIND_STRUCT);
+	if (type_id < 0)
+		return -EINVAL;
+	perf_callchain_entry_btf_id = type_id;
+	return 0;
+}
+
 static struct bpf_raw_event_map *bpf_get_raw_tracepoint_module(const char *name)
 {
 	struct bpf_raw_event_map *btp, *ret = NULL;
@@ -1650,6 +1665,10 @@ static bool pe_prog_is_valid_access(int off, int size, enum bpf_access_type type
 		if (!bpf_ctx_narrow_access_ok(off, size, size_u64))
 			return false;
 		break;
+	case bpf_ctx_range(struct bpf_perf_event_data, callchain):
+		info->reg_type = PTR_TO_BTF_ID;
+		info->btf_id = perf_callchain_entry_btf_id;
+		break;
 	default:
 		if (size != sizeof(long))
 			return false;
@@ -1682,6 +1701,14 @@ static u32 pe_prog_convert_ctx_access(enum bpf_access_type type,
 				      bpf_target_off(struct perf_sample_data, addr, 8,
 						     target_size));
 		break;
+	case offsetof(struct bpf_perf_event_data, callchain):
+		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct bpf_perf_event_data_kern,
+						       data), si->dst_reg, si->src_reg,
+				      offsetof(struct bpf_perf_event_data_kern, data));
+		*insn++ = BPF_LDX_MEM(BPF_DW, si->dst_reg, si->dst_reg,
+				      bpf_target_off(struct perf_sample_data, callchain,
+						     8, target_size));
+		break;
 	default:
 		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct bpf_perf_event_data_kern,
 						       regs), si->dst_reg, si->src_reg,
diff --git a/tools/include/uapi/linux/bpf_perf_event.h b/tools/include/uapi/linux/bpf_perf_event.h
index 8f95303f9d807..40f4df80ab4fa 100644
--- a/tools/include/uapi/linux/bpf_perf_event.h
+++ b/tools/include/uapi/linux/bpf_perf_event.h
@@ -9,10 +9,18 @@
 #define _UAPI__LINUX_BPF_PERF_EVENT_H__
 
 #include
+#include
+
+struct perf_callchain_entry {
+	__u64 nr;
+	__u64 ip[]; /* /proc/sys/kernel/perf_event_max_stack */
+};
 
 struct bpf_perf_event_data {
 	bpf_user_pt_regs_t regs;
 	__u64 sample_period;
+	__u64 addr;
+	__bpf_md_ptr(struct perf_callchain_entry *, callchain);
 };
 
 #endif /* _UAPI__LINUX_BPF_PERF_EVENT_H__ */
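As a rough illustration of the use case described in the commit message, the
sketch below reads the new callchain field from a perf_event BPF program with
bpf_probe_read_kernel(). It is not part of the series; the section name, global
variables, and MAX_COPY depth are assumptions, and it presumes vmlinux.h and
helper definitions generated from a kernel with this patch applied.

	// Hedged sketch, not from the patch set: copy the first few frames of
	// ctx->callchain into BPF-owned storage with bpf_probe_read_kernel().
	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>

	#define MAX_COPY 16	/* assumption: copy only the first 16 frames;
				 * the callchain buffer is sized by
				 * kernel.perf_event_max_stack (127 by default) */

	__u64 last_nr;
	__u64 last_ips[MAX_COPY];

	SEC("perf_event")
	int read_callchain(struct bpf_perf_event_data *ctx)
	{
		struct perf_callchain_entry *cc = ctx->callchain;

		if (!cc)
			return 0;

		/* cc points at kernel memory (PTR_TO_BTF_ID); copy it out safely. */
		bpf_probe_read_kernel(&last_nr, sizeof(last_nr), &cc->nr);
		bpf_probe_read_kernel(last_ips, sizeof(last_ips), cc->ip);
		return 0;
	}

	char LICENSE[] SEC("license") = "GPL";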
From patchwork Sat Jul 11 01:26:37 2020
X-Patchwork-Submitter: Song Liu
X-Patchwork-Id: 1327207
From: Song Liu
Subject: [PATCH bpf-next 3/5] bpf: introduce bpf_get_callchain_stackid
Date: Fri, 10 Jul 2020 18:26:37 -0700
Message-ID: <20200711012639.3429622-4-songliubraving@fb.com>
In-Reply-To: <20200711012639.3429622-1-songliubraving@fb.com>
References: <20200711012639.3429622-1-songliubraving@fb.com>

This helper is only used by BPF programs attached to a perf_event. If the
perf_event has PEBS entries, calling get_perf_callchain() from a BPF
program may cause unwinder errors. bpf_get_callchain_stackid() serves as
an alternative to bpf_get_stackid() for these BPF programs.
Signed-off-by: Song Liu
---
 include/linux/bpf.h            |  1 +
 include/uapi/linux/bpf.h       | 43 +++++++++++++++++++++++
 kernel/bpf/stackmap.c          | 63 ++++++++++++++++++++++++++--------
 kernel/bpf/verifier.c          |  4 ++-
 kernel/trace/bpf_trace.c       |  2 ++
 scripts/bpf_helpers_doc.py     |  2 ++
 tools/include/uapi/linux/bpf.h | 43 +++++++++++++++++++++++
 7 files changed, 142 insertions(+), 16 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 0cd7f6884c5cd..45cf12acb0e26 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1628,6 +1628,7 @@ extern const struct bpf_func_proto bpf_get_current_comm_proto;
 extern const struct bpf_func_proto bpf_get_stackid_proto;
 extern const struct bpf_func_proto bpf_get_stack_proto;
 extern const struct bpf_func_proto bpf_get_task_stack_proto;
+extern const struct bpf_func_proto bpf_get_callchain_stackid_proto;
 extern const struct bpf_func_proto bpf_sock_map_update_proto;
 extern const struct bpf_func_proto bpf_sock_hash_update_proto;
 extern const struct bpf_func_proto bpf_get_current_cgroup_id_proto;
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 548a749aebb3e..a808accfbd457 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -3319,6 +3319,48 @@ union bpf_attr {
 *		A non-negative value equal to or less than *size* on success,
 *		or a negative error in case of failure.
 *
+ * long bpf_get_callchain_stackid(struct perf_callchain_entry *callchain, struct bpf_map *map, u64 flags)
+ *	Description
+ *		Walk a user or a kernel stack and return its id. To achieve
+ *		this, the helper needs *callchain*, which is a pointer to a
+ *		valid perf_callchain_entry, and a pointer to a *map* of type
+ *		**BPF_MAP_TYPE_STACK_TRACE**.
+ *
+ *		The last argument, *flags*, holds the number of stack frames to
+ *		skip (from 0 to 255), masked with
+ *		**BPF_F_SKIP_FIELD_MASK**. The next bits can be used to set
+ *		a combination of the following flags:
+ *
+ *		**BPF_F_USER_STACK**
+ *			Collect a user space stack instead of a kernel stack.
+ *		**BPF_F_FAST_STACK_CMP**
+ *			Compare stacks by hash only.
+ *		**BPF_F_REUSE_STACKID**
+ *			If two different stacks hash into the same *stackid*,
+ *			discard the old one.
+ *
+ *		The stack id retrieved is a 32 bit long integer handle which
+ *		can be further combined with other data (including other stack
+ *		ids) and used as a key into maps. This can be useful for
+ *		generating a variety of graphs (such as flame graphs or off-cpu
+ *		graphs).
+ *
+ *		For walking a stack, this helper is an improvement over
+ *		**bpf_probe_read**\ (), which can be used with unrolled loops
+ *		but is not efficient and consumes a lot of eBPF instructions.
+ *		Instead, **bpf_get_callchain_stackid**\ () can collect up to
+ *		**PERF_MAX_STACK_DEPTH** both kernel and user frames. Note that
+ *		this limit can be controlled with the **sysctl** program, and
+ *		that it should be manually increased in order to profile long
+ *		user stacks (such as stacks for Java programs). To do so, use:
+ *
+ *		::
+ *
+ *			# sysctl kernel.perf_event_max_stack=
+ *	Return
+ *		The positive or null stack id on success, or a negative error
+ *		in case of failure.
+ *
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -3463,6 +3505,7 @@ union bpf_attr {
 	FN(skc_to_tcp_request_sock),	\
 	FN(skc_to_udp6_sock),		\
 	FN(get_task_stack),		\
+	FN(get_callchain_stackid),	\
 /* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index a6c361ed7937b..28acc610f7f94 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -386,11 +386,10 @@ get_callchain_entry_for_task(struct task_struct *task, u32 init_nr)
 #endif
 }
 
-BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
-	   u64, flags)
+static long __bpf_get_stackid(struct bpf_map *map, struct perf_callchain_entry *trace,
+			      u64 flags)
 {
 	struct bpf_stack_map *smap = container_of(map, struct bpf_stack_map, map);
-	struct perf_callchain_entry *trace;
 	struct stack_map_bucket *bucket, *new_bucket, *old_bucket;
 	u32 max_depth = map->value_size / stack_map_data_size(map);
 	/* stack_map_alloc() checks that max_depth <= sysctl_perf_event_max_stack */
@@ -398,21 +397,9 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
 	u32 skip = flags & BPF_F_SKIP_FIELD_MASK;
 	u32 hash, id, trace_nr, trace_len;
 	bool user = flags & BPF_F_USER_STACK;
-	bool kernel = !user;
 	u64 *ips;
 	bool hash_matches;
 
-	if (unlikely(flags & ~(BPF_F_SKIP_FIELD_MASK | BPF_F_USER_STACK |
-			       BPF_F_FAST_STACK_CMP | BPF_F_REUSE_STACKID)))
-		return -EINVAL;
-
-	trace = get_perf_callchain(regs, init_nr, kernel, user,
-				   sysctl_perf_event_max_stack, false, false);
-
-	if (unlikely(!trace))
-		/* couldn't fetch the stack trace */
-		return -EFAULT;
-
 	/* get_perf_callchain() guarantees that trace->nr >= init_nr
 	 * and trace-nr <= sysctl_perf_event_max_stack, so trace_nr <= max_depth
 	 */
@@ -477,6 +464,30 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
 	return id;
 }
 
+BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
+	   u64, flags)
+{
+	u32 max_depth = map->value_size / stack_map_data_size(map);
+	/* stack_map_alloc() checks that max_depth <= sysctl_perf_event_max_stack */
+	u32 init_nr = sysctl_perf_event_max_stack - max_depth;
+	bool user = flags & BPF_F_USER_STACK;
+	struct perf_callchain_entry *trace;
+	bool kernel = !user;
+
+	if (unlikely(flags & ~(BPF_F_SKIP_FIELD_MASK | BPF_F_USER_STACK |
+			       BPF_F_FAST_STACK_CMP | BPF_F_REUSE_STACKID)))
+		return -EINVAL;
+
+	trace = get_perf_callchain(regs, init_nr, kernel, user,
+				   sysctl_perf_event_max_stack, false, false);
+
+	if (unlikely(!trace))
+		/* couldn't fetch the stack trace */
+		return -EFAULT;
+
+	return __bpf_get_stackid(map, trace, flags);
+}
+
 const struct bpf_func_proto bpf_get_stackid_proto = {
 	.func		= bpf_get_stackid,
 	.gpl_only	= true,
@@ -486,6 +497,28 @@ const struct bpf_func_proto bpf_get_stackid_proto = {
 	.arg3_type	= ARG_ANYTHING,
 };
 
+BPF_CALL_3(bpf_get_callchain_stackid, struct perf_callchain_entry *, callchain,
+	   struct bpf_map *, map, u64, flags)
+{
+	if (unlikely(flags & ~(BPF_F_SKIP_FIELD_MASK | BPF_F_USER_STACK |
+			       BPF_F_FAST_STACK_CMP | BPF_F_REUSE_STACKID)))
+		return -EINVAL;
+	if (!callchain)
+		return -EFAULT;
+	return __bpf_get_stackid(map, callchain, flags);
+}
+
+static int bpf_get_callchain_stackid_btf_ids[5];
+const struct bpf_func_proto bpf_get_callchain_stackid_proto = {
+	.func		= bpf_get_callchain_stackid,
+	.gpl_only	= false,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_BTF_ID,
+	.arg2_type	= ARG_CONST_MAP_PTR,
+	.arg3_type	= ARG_ANYTHING,
+	.btf_id		= bpf_get_callchain_stackid_btf_ids,
+};
+
 static long __bpf_get_stack(struct pt_regs *regs, struct task_struct *task,
			     void *buf, u32 size, u64 flags)
 {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1e11b0f6fba31..07be75550ca93 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4094,7 +4094,8 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
 			goto error;
 		break;
 	case BPF_MAP_TYPE_STACK_TRACE:
-		if (func_id != BPF_FUNC_get_stackid)
+		if (func_id != BPF_FUNC_get_stackid &&
+		    func_id != BPF_FUNC_get_callchain_stackid)
 			goto error;
 		break;
 	case BPF_MAP_TYPE_CGROUP_ARRAY:
@@ -4187,6 +4188,7 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
 			goto error;
 		break;
 	case BPF_FUNC_get_stackid:
+	case BPF_FUNC_get_callchain_stackid:
 		if (map->map_type != BPF_MAP_TYPE_STACK_TRACE)
 			goto error;
 		break;
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index c014846c2723c..7a504f734a025 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1396,6 +1396,8 @@ pe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_perf_prog_read_value_proto;
 	case BPF_FUNC_read_branch_records:
 		return &bpf_read_branch_records_proto;
+	case BPF_FUNC_get_callchain_stackid:
+		return &bpf_get_callchain_stackid_proto;
 	default:
 		return bpf_tracing_func_proto(func_id, prog);
 	}
diff --git a/scripts/bpf_helpers_doc.py b/scripts/bpf_helpers_doc.py
index 6843376733df8..1b99e3618e492 100755
--- a/scripts/bpf_helpers_doc.py
+++ b/scripts/bpf_helpers_doc.py
@@ -427,6 +427,7 @@ class PrinterHelpers(Printer):
             'struct tcp_request_sock',
             'struct udp6_sock',
             'struct task_struct',
+            'struct perf_callchain_entry',
 
             'struct __sk_buff',
             'struct sk_msg_md',
@@ -470,6 +471,7 @@ class PrinterHelpers(Printer):
             'struct tcp_request_sock',
             'struct udp6_sock',
             'struct task_struct',
+            'struct perf_callchain_entry',
     }
     mapped_types = {
             'u8': '__u8',
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 548a749aebb3e..a808accfbd457 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -3319,6 +3319,48 @@ union bpf_attr {
 *		A non-negative value equal to or less than *size* on success,
 *		or a negative error in case of failure.
 *
+ * long bpf_get_callchain_stackid(struct perf_callchain_entry *callchain, struct bpf_map *map, u64 flags)
+ *	Description
+ *		Walk a user or a kernel stack and return its id. To achieve
+ *		this, the helper needs *callchain*, which is a pointer to a
+ *		valid perf_callchain_entry, and a pointer to a *map* of type
+ *		**BPF_MAP_TYPE_STACK_TRACE**.
+ *
+ *		The last argument, *flags*, holds the number of stack frames to
+ *		skip (from 0 to 255), masked with
+ *		**BPF_F_SKIP_FIELD_MASK**. The next bits can be used to set
+ *		a combination of the following flags:
+ *
+ *		**BPF_F_USER_STACK**
+ *			Collect a user space stack instead of a kernel stack.
+ *		**BPF_F_FAST_STACK_CMP**
+ *			Compare stacks by hash only.
+ *		**BPF_F_REUSE_STACKID**
+ *			If two different stacks hash into the same *stackid*,
+ *			discard the old one.
+ *
+ *		The stack id retrieved is a 32 bit long integer handle which
+ *		can be further combined with other data (including other stack
+ *		ids) and used as a key into maps. This can be useful for
+ *		generating a variety of graphs (such as flame graphs or off-cpu
+ *		graphs).
+ *
+ *		For walking a stack, this helper is an improvement over
+ *		**bpf_probe_read**\ (), which can be used with unrolled loops
+ *		but is not efficient and consumes a lot of eBPF instructions.
+ *		Instead, **bpf_get_callchain_stackid**\ () can collect up to
+ *		**PERF_MAX_STACK_DEPTH** both kernel and user frames. Note that
+ *		this limit can be controlled with the **sysctl** program, and
+ *		that it should be manually increased in order to profile long
+ *		user stacks (such as stacks for Java programs). To do so, use:
+ *
+ *		::
+ *
+ *			# sysctl kernel.perf_event_max_stack=
+ *	Return
+ *		The positive or null stack id on success, or a negative error
+ *		in case of failure.
+ *
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -3463,6 +3505,7 @@ union bpf_attr {
 	FN(skc_to_tcp_request_sock),	\
 	FN(skc_to_udp6_sock),		\
 	FN(get_task_stack),		\
+	FN(get_callchain_stackid),	\
 /* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
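As the helper documentation notes, the returned id can be used as a key into
maps. A hedged user-space sketch of that flow follows; it is not part of the
series. The map fd, the assumption that a BPF program has already stored ids
produced by bpf_get_callchain_stackid(), and the PERF_MAX_STACK_DEPTH value
are illustrative.

	/* Hypothetical sketch: resolve a stack id stored by the BPF side back
	 * into instruction pointers by looking it up in the
	 * BPF_MAP_TYPE_STACK_TRACE map with libbpf's bpf_map_lookup_elem().
	 */
	#include <stdio.h>
	#include <stdint.h>
	#include <bpf/bpf.h>

	#define PERF_MAX_STACK_DEPTH 127	/* assumption: matches the map's value_size */

	static void print_stack(int stackmap_fd, uint32_t stackid)
	{
		uint64_t ips[PERF_MAX_STACK_DEPTH] = {};
		int i;

		if (bpf_map_lookup_elem(stackmap_fd, &stackid, ips)) {
			perror("bpf_map_lookup_elem");
			return;
		}
		for (i = 0; i < PERF_MAX_STACK_DEPTH && ips[i]; i++)
			printf("  #%d %#llx\n", i, (unsigned long long)ips[i]);
	}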
From patchwork Sat Jul 11 01:26:38 2020
X-Patchwork-Submitter: Song Liu
X-Patchwork-Id: 1327206
From: Song Liu
Subject: [PATCH bpf-next 4/5] selftests/bpf: add get_stackid_cannot_attach
Date: Fri, 10 Jul 2020 18:26:38 -0700
Message-ID: <20200711012639.3429622-5-songliubraving@fb.com>
In-Reply-To: <20200711012639.3429622-1-songliubraving@fb.com>
References: <20200711012639.3429622-1-songliubraving@fb.com>

This test confirms that a BPF program that calls bpf_get_stackid() cannot
be attached to a perf_event with PEBS entries.

Signed-off-by: Song Liu
---
 .../prog_tests/get_stackid_cannot_attach.c | 57 +++++++++++++++++++
 1 file changed, 57 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/get_stackid_cannot_attach.c

diff --git a/tools/testing/selftests/bpf/prog_tests/get_stackid_cannot_attach.c b/tools/testing/selftests/bpf/prog_tests/get_stackid_cannot_attach.c
new file mode 100644
index 0000000000000..ae943c502b62b
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/get_stackid_cannot_attach.c
@@ -0,0 +1,57 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2020 Facebook
+#include
+#include "test_stacktrace_build_id.skel.h"
+
+void test_get_stackid_cannot_attach(void)
+{
+	struct perf_event_attr attr = {
+		/* .type = PERF_TYPE_SOFTWARE, */
+		.type = PERF_TYPE_HARDWARE,
+		.config = PERF_COUNT_HW_CPU_CYCLES,
+		.precise_ip = 2,
+		.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_BRANCH_STACK |
+			PERF_SAMPLE_CALLCHAIN,
+		.branch_sample_type = PERF_SAMPLE_BRANCH_USER |
+			PERF_SAMPLE_BRANCH_NO_FLAGS |
+			PERF_SAMPLE_BRANCH_NO_CYCLES |
+			PERF_SAMPLE_BRANCH_CALL_STACK,
+		.sample_period = 5000,
+		.size = sizeof(struct perf_event_attr),
+	};
+	struct test_stacktrace_build_id *skel;
+	__u32 duration = 0;
+	int pmu_fd, err;
+
+	skel = test_stacktrace_build_id__open();
+	if (CHECK(!skel, "skel_open", "skeleton open failed\n"))
+		return;
+
+	/* override program type */
+	bpf_program__set_perf_event(skel->progs.oncpu);
+
+	err = test_stacktrace_build_id__load(skel);
+	if (CHECK(err, "skel_load", "skeleton load failed: %d\n", err))
+		goto cleanup;
+
+	pmu_fd = syscall(__NR_perf_event_open, &attr, -1 /* pid */,
+			 0 /* cpu 0 */, -1 /* group id */,
+			 0 /* flags */);
+	if (pmu_fd < 0 && errno == ENOENT) {
+		printf("%s:SKIP:no PERF_COUNT_HW_CPU_CYCLES\n", __func__);
+		test__skip();
+		goto cleanup;
+	}
+	if (CHECK(pmu_fd < 0, "perf_event_open", "err %d errno %d\n",
+		  pmu_fd, errno))
+		goto cleanup;
+
+	skel->links.oncpu = bpf_program__attach_perf_event(skel->progs.oncpu,
+							   pmu_fd);
+	CHECK(!IS_ERR(skel->links.oncpu), "attach_perf_event",
+	      "should have failed\n");
+	close(pmu_fd);
+
+cleanup:
+	test_stacktrace_build_id__destroy(skel);
+}
From patchwork Sat Jul 11 01:26:39 2020
X-Patchwork-Submitter: Song Liu
X-Patchwork-Id: 1327203
From: Song Liu
Subject: [PATCH bpf-next 5/5] selftests/bpf: add callchain_stackid
Date: Fri, 10 Jul 2020 18:26:39 -0700
Message-ID: <20200711012639.3429622-6-songliubraving@fb.com>
In-Reply-To: <20200711012639.3429622-1-songliubraving@fb.com>
References: <20200711012639.3429622-1-songliubraving@fb.com>

This tests the new helper function bpf_get_callchain_stackid(), which is
the alternative to bpf_get_stackid() for perf_events with PEBS entries.
Signed-off-by: Song Liu
---
 .../bpf/prog_tests/callchain_stackid.c      | 61 +++++++++++++++++++
 .../selftests/bpf/progs/callchain_stackid.c | 37 +++++++++++
 2 files changed, 98 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/callchain_stackid.c
 create mode 100644 tools/testing/selftests/bpf/progs/callchain_stackid.c

diff --git a/tools/testing/selftests/bpf/prog_tests/callchain_stackid.c b/tools/testing/selftests/bpf/prog_tests/callchain_stackid.c
new file mode 100644
index 0000000000000..ebe6251324a1a
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/callchain_stackid.c
@@ -0,0 +1,61 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2020 Facebook
+#include
+#include "callchain_stackid.skel.h"
+
+void test_callchain_stackid(void)
+{
+	struct perf_event_attr attr = {
+		/* .type = PERF_TYPE_SOFTWARE, */
+		.type = PERF_TYPE_HARDWARE,
+		.config = PERF_COUNT_HW_CPU_CYCLES,
+		.precise_ip = 2,
+		.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_BRANCH_STACK |
+			PERF_SAMPLE_CALLCHAIN,
+		.branch_sample_type = PERF_SAMPLE_BRANCH_USER |
+			PERF_SAMPLE_BRANCH_NO_FLAGS |
+			PERF_SAMPLE_BRANCH_NO_CYCLES |
+			PERF_SAMPLE_BRANCH_CALL_STACK,
+		.sample_period = 5000,
+		.size = sizeof(struct perf_event_attr),
+	};
+	struct callchain_stackid *skel;
+	__u32 duration = 0;
+	int pmu_fd, err;
+
+	skel = callchain_stackid__open();
+
+	if (CHECK(!skel, "skel_open", "skeleton open failed\n"))
+		return;
+
+	/* override program type */
+	bpf_program__set_perf_event(skel->progs.oncpu);
+
+	err = callchain_stackid__load(skel);
+	if (CHECK(err, "skel_load", "skeleton load failed: %d\n", err))
+		goto cleanup;
+
+	pmu_fd = syscall(__NR_perf_event_open, &attr, -1 /* pid */,
+			 0 /* cpu 0 */, -1 /* group id */,
+			 0 /* flags */);
+	if (pmu_fd < 0) {
+		printf("%s:SKIP:cpu doesn't support the event\n", __func__);
+		test__skip();
+		goto cleanup;
+	}
+
+	skel->links.oncpu = bpf_program__attach_perf_event(skel->progs.oncpu,
+							   pmu_fd);
+	if (CHECK(IS_ERR(skel->links.oncpu), "attach_perf_event",
+		  "err %ld\n", PTR_ERR(skel->links.oncpu))) {
+		close(pmu_fd);
+		goto cleanup;
+	}
+	usleep(500000);
+
+	CHECK(skel->data->total_val == 1, "get_callchain_stack", "failed\n");
+	close(pmu_fd);
+
+cleanup:
+	callchain_stackid__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/callchain_stackid.c b/tools/testing/selftests/bpf/progs/callchain_stackid.c
new file mode 100644
index 0000000000000..aab2c736a0a45
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/callchain_stackid.c
@@ -0,0 +1,37 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2020 Facebook
+#include "vmlinux.h"
+#include
+
+#ifndef PERF_MAX_STACK_DEPTH
+#define PERF_MAX_STACK_DEPTH	127
+#endif
+
+#ifndef BPF_F_USER_STACK
+#define BPF_F_USER_STACK	(1ULL << 8)
+#endif
+
+typedef __u64 stack_trace_t[PERF_MAX_STACK_DEPTH];
+struct {
+	__uint(type, BPF_MAP_TYPE_STACK_TRACE);
+	__uint(max_entries, 16384);
+	__uint(key_size, sizeof(__u32));
+	__uint(value_size, sizeof(stack_trace_t));
+} stackmap SEC(".maps");
+
+long total_val = 1;
+
+SEC("perf_event")
+int oncpu(struct bpf_perf_event_data *ctx)
+{
+	long val;
+
+	val = bpf_get_callchain_stackid(ctx->callchain, &stackmap, 0);
+
+	if (val > 0)
+		total_val += val;
+
+	return 0;
+}
+
+char LICENSE[] SEC("license") = "GPL";
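A hedged variation on the selftest program above: the same helper with a
non-zero flags argument, skipping two frames and collecting the user stack,
per the helper documentation in patch 3. This is not part of the series; the
map and section names are assumptions, and it presumes helper declarations
(bpf_helper_defs.h / vmlinux.h) regenerated from a tree with this series
applied.

	// Hypothetical sketch, not from the series: bpf_get_callchain_stackid()
	// with a flags value that skips two frames and walks the user stack.
	#include "vmlinux.h"
	#include <bpf/bpf_helpers.h>

	#ifndef BPF_F_USER_STACK
	#define BPF_F_USER_STACK	(1ULL << 8)
	#endif
	#ifndef BPF_F_FAST_STACK_CMP
	#define BPF_F_FAST_STACK_CMP	(1ULL << 9)
	#endif

	struct {
		__uint(type, BPF_MAP_TYPE_STACK_TRACE);
		__uint(max_entries, 16384);
		__uint(key_size, sizeof(__u32));
		__uint(value_size, 127 * sizeof(__u64));
	} user_stackmap SEC(".maps");

	long last_user_stackid;

	SEC("perf_event")
	int oncpu_user(struct bpf_perf_event_data *ctx)
	{
		/* Low 8 bits of flags = frames to skip (here 2); BPF_F_USER_STACK
		 * selects the user stack; BPF_F_FAST_STACK_CMP compares by hash only.
		 */
		long id = bpf_get_callchain_stackid(ctx->callchain, &user_stackmap,
						    2 | BPF_F_USER_STACK | BPF_F_FAST_STACK_CMP);

		if (id >= 0)
			last_user_stackid = id;
		return 0;
	}

	char LICENSE[] SEC("license") = "GPL";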