From patchwork Thu Aug 27 00:06:19 2020
X-Patchwork-Submitter: Yonghong Song
X-Patchwork-Id: 1352178
X-Patchwork-Delegate: bpf@iogearbox.net
From: Yonghong Song
To: ,
Cc: Alexei Starovoitov, Daniel Borkmann,
Subject: [PATCH bpf-next 1/5] bpf: make bpf_link_info.iter similar to bpf_iter_link_info
Date: Wed, 26 Aug 2020 17:06:19 -0700
Message-ID: <20200827000619.2711883-1-yhs@fb.com>
In-Reply-To: <20200827000618.2711826-1-yhs@fb.com>
References: <20200827000618.2711826-1-yhs@fb.com>
X-Mailing-List: bpf@vger.kernel.org
bpf_link_info.iter is used by link_query to return bpf_iter_link_info
to user space. Fields may differ, e.g., map_fd vs. map_id, so we cannot
reuse the exact structure. But we can make them similar, e.g.,

  struct bpf_link_info {
     /* common fields */
     union {
        struct { ... } raw_tracepoint;
        struct { ... } tracing;
        ...
        struct {
            /* common fields for iter */
            union {
                struct {
                    __u32 map_id;
                } map;
                /* other structs for other targets */
            };
        };
     };
  };

so the structure is extensible the same way as bpf_iter_link_info.

Fixes: 6b0a249a301e ("bpf: Implement link_query for bpf iterators")
Signed-off-by: Yonghong Song
Acked-by: Andrii Nakryiko
---
 include/uapi/linux/bpf.h       | 6 ++++--
 tools/include/uapi/linux/bpf.h | 6 ++++--
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 0388bc0200b0..ef7af384f5ee 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -4251,8 +4251,10 @@ struct bpf_link_info {
 			__aligned_u64 target_name; /* in/out: target_name buffer ptr */
 			__u32 target_name_len;	   /* in/out: target_name buffer len */
 			union {
-				__u32 map_id;
-			} map;
+				struct {
+					__u32 map_id;
+				} map;
+			};
 		} iter;
 		struct {
 			__u32 netns_ino;
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 0388bc0200b0..ef7af384f5ee 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -4251,8 +4251,10 @@ struct bpf_link_info {
 			__aligned_u64 target_name; /* in/out: target_name buffer ptr */
 			__u32 target_name_len;	   /* in/out: target_name buffer len */
 			union {
-				__u32 map_id;
-			} map;
+				struct {
+					__u32 map_id;
+				} map;
+			};
 		} iter;
 		struct {
 			__u32 netns_ino;
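[Editor's sketch, not part of the patch: a minimal user-space consumer of
the layout above. It assumes an iterator link fd obtained elsewhere, e.g.
via bpf_link_get_fd_by_id(); the helper name query_iter_link() is
illustrative only and error handling is elided.]

  #include <stdio.h>
  #include <string.h>
  #include <bpf/bpf.h>          /* bpf_obj_get_info_by_fd() */

  static void query_iter_link(int link_fd)
  {
          struct bpf_link_info info;
          __u32 len = sizeof(info);
          char name[32] = {};

          memset(&info, 0, sizeof(info));
          /* target_name is an in/out buffer, as the uapi comments state */
          info.iter.target_name = (__u64)(unsigned long)name;
          info.iter.target_name_len = sizeof(name);
          if (bpf_obj_get_info_by_fd(link_fd, &info, &len))
                  return;

          /* map iterators keep the old layout: map_id is still reachable
           * as info.iter.map.map_id through the new anonymous union
           */
          printf("target %s map_id %u\n", name, info.iter.map.map_id);
  }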
From patchwork Thu Aug 27 00:06:20 2020
X-Patchwork-Submitter: Yonghong Song
X-Patchwork-Id: 1352180
X-Patchwork-Delegate: bpf@iogearbox.net
From: Yonghong Song
To: ,
Cc: Alexei Starovoitov, Daniel Borkmann,
Subject: [PATCH bpf-next 2/5] bpf: add main_thread_only customization for task/task_file iterators
Date: Wed, 26 Aug 2020 17:06:20 -0700
Message-ID: <20200827000620.2711963-1-yhs@fb.com>
In-Reply-To: <20200827000618.2711826-1-yhs@fb.com>
References: <20200827000618.2711826-1-yhs@fb.com>
X-Mailing-List: netdev@vger.kernel.org

Currently, the task and task_file iterators traverse all tasks by
default. For task_file, this means all files from all tasks are
visited. But for a user process, the file table is shared by all
threads of that process, so traversing only one thread per process is
enough to see all files. This can save a lot of CPU time if a process
has a large number of threads and each thread has many open files.

This patch implements a customization for the task/task_file
iterators, permitting traversal of only those kernel tasks whose pid
equals their tgid. These include some kernel threads as well as the
main threads of user processes. This solves the potential performance
issue above for task_file. The customization may be useful for the
task iterator too, if traversing only main threads is enough (a
user-space attach sketch follows the diff below).
Signed-off-by: Yonghong Song
---
 include/linux/bpf.h            |  3 ++-
 include/uapi/linux/bpf.h       |  5 ++++
 kernel/bpf/task_iter.c         | 46 +++++++++++++++++++++++-----------
 tools/include/uapi/linux/bpf.h |  5 ++++
 4 files changed, 43 insertions(+), 16 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index a6131d95e31e..058eb9b0ba78 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1220,7 +1220,8 @@ int bpf_obj_get_user(const char __user *pathname, int flags);
 	int __init bpf_iter_ ## target(args) { return 0; }
 
 struct bpf_iter_aux_info {
-	struct bpf_map *map;
+	struct bpf_map *map; /* for iterator traversing map elements */
+	bool main_thread_only; /* for task/task_file iterator */
 };
 
 typedef int (*bpf_iter_attach_target_t)(struct bpf_prog *prog,
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index ef7af384f5ee..af5c600bf673 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -85,6 +85,11 @@ union bpf_iter_link_info {
 	struct {
 		__u32	map_fd;
 	} map;
+
+	struct {
+		__u32	main_thread_only:1;
+		__u32	:31;
+	} task;
 };
 
 /* BPF syscall commands, see bpf(2) man-page for details. */
diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c
index 232df29793e9..362bf2dda63a 100644
--- a/kernel/bpf/task_iter.c
+++ b/kernel/bpf/task_iter.c
@@ -11,19 +11,22 @@
 
 struct bpf_iter_seq_task_common {
 	struct pid_namespace *ns;
+	bool main_thread_only;
 };
 
 struct bpf_iter_seq_task_info {
 	/* The first field must be struct bpf_iter_seq_task_common.
-	 * this is assumed by {init, fini}_seq_pidns() callback functions.
+	 * this is assumed by {init, fini}_seq_task_common() callback functions.
 	 */
 	struct bpf_iter_seq_task_common common;
 	u32 tid;
 };
 
-static struct task_struct *task_seq_get_next(struct pid_namespace *ns,
-					     u32 *tid)
+static struct task_struct *task_seq_get_next(
+	struct bpf_iter_seq_task_common *task_common, u32 *tid)
 {
+	bool main_thread_only = task_common->main_thread_only;
+	struct pid_namespace *ns = task_common->ns;
 	struct task_struct *task = NULL;
 	struct pid *pid;
 
@@ -31,7 +34,10 @@ static struct task_struct *task_seq_get_next(struct pid_namespace *ns,
 retry:
 	pid = idr_get_next(&ns->idr, tid);
 	if (pid) {
-		task = get_pid_task(pid, PIDTYPE_PID);
+		if (main_thread_only)
+			task = get_pid_task(pid, PIDTYPE_TGID);
+		else
+			task = get_pid_task(pid, PIDTYPE_PID);
 		if (!task) {
 			++*tid;
 			goto retry;
@@ -47,7 +53,7 @@ static void *task_seq_start(struct seq_file *seq, loff_t *pos)
 	struct bpf_iter_seq_task_info *info = seq->private;
 	struct task_struct *task;
 
-	task = task_seq_get_next(info->common.ns, &info->tid);
+	task = task_seq_get_next(&info->common, &info->tid);
 	if (!task)
 		return NULL;
 
@@ -64,7 +70,7 @@ static void *task_seq_next(struct seq_file *seq, void *v, loff_t *pos)
 	++*pos;
 	++info->tid;
 	put_task_struct((struct task_struct *)v);
-	task = task_seq_get_next(info->common.ns, &info->tid);
+	task = task_seq_get_next(&info->common, &info->tid);
 	if (!task)
 		return NULL;
 
@@ -118,7 +124,7 @@ static const struct seq_operations task_seq_ops = {
 
 struct bpf_iter_seq_task_file_info {
 	/* The first field must be struct bpf_iter_seq_task_common.
-	 * this is assumed by {init, fini}_seq_pidns() callback functions.
+	 * this is assumed by {init, fini}_seq_task_common() callback functions.
 	 */
 	struct bpf_iter_seq_task_common common;
 	struct task_struct *task;
@@ -131,7 +137,6 @@ static struct file *
 task_file_seq_get_next(struct bpf_iter_seq_task_file_info *info,
 		       struct task_struct **task,
 		       struct files_struct **fstruct)
 {
-	struct pid_namespace *ns = info->common.ns;
 	u32 curr_tid = info->tid, max_fds;
 	struct files_struct *curr_files;
 	struct task_struct *curr_task;
@@ -147,7 +152,7 @@ task_file_seq_get_next(struct bpf_iter_seq_task_file_info *info,
 		curr_files = *fstruct;
 		curr_fd = info->fd;
 	} else {
-		curr_task = task_seq_get_next(ns, &curr_tid);
+		curr_task = task_seq_get_next(&info->common, &curr_tid);
 		if (!curr_task)
 			return NULL;
 
@@ -293,15 +298,16 @@ static void task_file_seq_stop(struct seq_file *seq, void *v)
 	}
 }
 
-static int init_seq_pidns(void *priv_data, struct bpf_iter_aux_info *aux)
+static int init_seq_task_common(void *priv_data, struct bpf_iter_aux_info *aux)
 {
 	struct bpf_iter_seq_task_common *common = priv_data;
 
 	common->ns = get_pid_ns(task_active_pid_ns(current));
+	common->main_thread_only = aux->main_thread_only;
 	return 0;
 }
 
-static void fini_seq_pidns(void *priv_data)
+static void fini_seq_task_common(void *priv_data)
 {
 	struct bpf_iter_seq_task_common *common = priv_data;
 
@@ -315,19 +321,28 @@ static const struct seq_operations task_file_seq_ops = {
 	.show	= task_file_seq_show,
 };
 
+static int bpf_iter_attach_task(struct bpf_prog *prog,
+				union bpf_iter_link_info *linfo,
+				struct bpf_iter_aux_info *aux)
+{
+	aux->main_thread_only = linfo->task.main_thread_only;
+	return 0;
+}
+
 BTF_ID_LIST(btf_task_file_ids)
 BTF_ID(struct, task_struct)
 BTF_ID(struct, file)
 
 static const struct bpf_iter_seq_info task_seq_info = {
 	.seq_ops		= &task_seq_ops,
-	.init_seq_private	= init_seq_pidns,
-	.fini_seq_private	= fini_seq_pidns,
+	.init_seq_private	= init_seq_task_common,
+	.fini_seq_private	= fini_seq_task_common,
 	.seq_priv_size		= sizeof(struct bpf_iter_seq_task_info),
 };
 
 static struct bpf_iter_reg task_reg_info = {
 	.target			= "task",
+	.attach_target		= bpf_iter_attach_task,
 	.ctx_arg_info_size	= 1,
 	.ctx_arg_info		= {
 		{ offsetof(struct bpf_iter__task, task),
@@ -338,13 +353,14 @@ static struct bpf_iter_reg task_reg_info = {
 
 static const struct bpf_iter_seq_info task_file_seq_info = {
 	.seq_ops		= &task_file_seq_ops,
-	.init_seq_private	= init_seq_pidns,
-	.fini_seq_private	= fini_seq_pidns,
+	.init_seq_private	= init_seq_task_common,
+	.fini_seq_private	= fini_seq_task_common,
 	.seq_priv_size		= sizeof(struct bpf_iter_seq_task_file_info),
 };
 
 static struct bpf_iter_reg task_file_reg_info = {
 	.target			= "task_file",
+	.attach_target		= bpf_iter_attach_task,
 	.ctx_arg_info_size	= 2,
 	.ctx_arg_info		= {
 		{ offsetof(struct bpf_iter__task_file, task),
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index ef7af384f5ee..af5c600bf673 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -85,6 +85,11 @@ union bpf_iter_link_info {
 	struct {
 		__u32	map_fd;
 	} map;
+
+	struct {
+		__u32	main_thread_only:1;
+		__u32	:31;
+	} task;
 };
 
 /* BPF syscall commands, see bpf(2) man-page for details. */
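[Editor's sketch, not part of the patch: requesting main-thread-only
traversal from user space through the new union field, mirroring what
the selftest in patch 5 of this series does. The helper name
attach_main_thread_only() is illustrative; prog is assumed to come from
a loaded object with an iter/task or iter/task_file program, and error
handling is elided.]

  #include <string.h>
  #include <linux/bpf.h>        /* union bpf_iter_link_info */
  #include <bpf/libbpf.h>

  static struct bpf_link *attach_main_thread_only(struct bpf_program *prog)
  {
          DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
          union bpf_iter_link_info linfo;

          memset(&linfo, 0, sizeof(linfo));
          linfo.task.main_thread_only = 1;  /* the bitfield added above */
          opts.link_info = &linfo;
          opts.link_info_len = sizeof(linfo);
          /* the kernel copies this into bpf_iter_aux_info via
           * bpf_iter_attach_task() at link creation time
           */
          return bpf_program__attach_iter(prog, &opts);
  }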
From patchwork Thu Aug 27 00:06:21 2020
X-Patchwork-Submitter: Yonghong Song
X-Patchwork-Id: 1352181
X-Patchwork-Delegate: bpf@iogearbox.net
From: Yonghong Song
To: ,
Cc: Alexei Starovoitov, Daniel Borkmann,
Subject: [PATCH bpf-next 3/5] bpf: add link_query support for newly added main_thread_only info
Date: Wed, 26 Aug 2020 17:06:21 -0700
Message-ID: <20200827000621.2712111-1-yhs@fb.com>
In-Reply-To: <20200827000618.2711826-1-yhs@fb.com>
References: <20200827000618.2711826-1-yhs@fb.com>
X-Mailing-List: bpf@vger.kernel.org
Add link_query support for the newly added main_thread_only
information of the task/task_file iterators, exposed both through
bpf_link_info and through fdinfo (a user-space query sketch follows
the diff below).

Signed-off-by: Yonghong Song
---
 include/uapi/linux/bpf.h       |  5 +++++
 kernel/bpf/task_iter.c         | 17 +++++++++++++++++
 tools/include/uapi/linux/bpf.h |  5 +++++
 3 files changed, 27 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index af5c600bf673..595bdc4c9431 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -4259,6 +4259,11 @@ struct bpf_link_info {
 				struct {
 					__u32 map_id;
 				} map;
+
+				struct {
+					__u32 main_thread_only:1;
+					__u32 :31;
+				} task;
 			};
 		} iter;
 		struct {
diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c
index 362bf2dda63a..7636abe05f27 100644
--- a/kernel/bpf/task_iter.c
+++ b/kernel/bpf/task_iter.c
@@ -329,6 +329,19 @@ static int bpf_iter_attach_task(struct bpf_prog *prog,
 	return 0;
 }
 
+static void bpf_iter_task_show_fdinfo(const struct bpf_iter_aux_info *aux,
+				      struct seq_file *seq)
+{
+	seq_printf(seq, "main_thread_only:\t%u\n", aux->main_thread_only);
+}
+
+static int bpf_iter_task_fill_link_info(const struct bpf_iter_aux_info *aux,
+					struct bpf_link_info *info)
+{
+	info->iter.task.main_thread_only = aux->main_thread_only;
+	return 0;
+}
+
 BTF_ID_LIST(btf_task_file_ids)
 BTF_ID(struct, task_struct)
 BTF_ID(struct, file)
@@ -343,6 +356,8 @@ static const struct bpf_iter_seq_info task_seq_info = {
 static struct bpf_iter_reg task_reg_info = {
 	.target			= "task",
 	.attach_target		= bpf_iter_attach_task,
+	.show_fdinfo		= bpf_iter_task_show_fdinfo,
+	.fill_link_info		= bpf_iter_task_fill_link_info,
 	.ctx_arg_info_size	= 1,
 	.ctx_arg_info		= {
 		{ offsetof(struct bpf_iter__task, task),
@@ -361,6 +376,8 @@ static const struct bpf_iter_seq_info task_file_seq_info = {
 static struct bpf_iter_reg task_file_reg_info = {
 	.target			= "task_file",
 	.attach_target		= bpf_iter_attach_task,
+	.show_fdinfo		= bpf_iter_task_show_fdinfo,
+	.fill_link_info		= bpf_iter_task_fill_link_info,
 	.ctx_arg_info_size	= 2,
 	.ctx_arg_info		= {
 		{ offsetof(struct bpf_iter__task_file, task),
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index af5c600bf673..595bdc4c9431 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -4259,6 +4259,11 @@ struct bpf_link_info {
 				struct {
 					__u32 map_id;
 				} map;
+
+				struct {
+					__u32 main_thread_only:1;
+					__u32 :31;
+				} task;
 			};
 		} iter;
 		struct {
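[Editor's sketch, not part of the patch: reading the new field back via
link_query. The helper name print_main_thread_only() is illustrative and
error handling is elided; the same value also appears in
/proc/<pid>/fdinfo/<link-fd> via bpf_iter_task_show_fdinfo() above.]

  #include <stdio.h>
  #include <string.h>
  #include <bpf/bpf.h>

  static void print_main_thread_only(int link_fd)
  {
          struct bpf_link_info info;
          __u32 len = sizeof(info);

          memset(&info, 0, sizeof(info));
          if (bpf_obj_get_info_by_fd(link_fd, &info, &len))
                  return;
          /* meaningful for task/task_file iterator links after this patch */
          printf("main_thread_only %u\n", info.iter.task.main_thread_only);
  }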
From patchwork Thu Aug 27 00:06:22 2020
X-Patchwork-Submitter: Yonghong Song
X-Patchwork-Id: 1352185
X-Patchwork-Delegate: bpf@iogearbox.net
From: Yonghong Song
To: ,
Cc: Alexei Starovoitov, Daniel Borkmann,
Subject: [PATCH bpf-next 4/5] bpftool: support optional 'task main_thread_only' argument
Date: Wed, 26 Aug 2020 17:06:22 -0700
Message-ID: <20200827000622.2712178-1-yhs@fb.com>
In-Reply-To: <20200827000618.2711826-1-yhs@fb.com>
References: <20200827000618.2711826-1-yhs@fb.com>
X-Mailing-List: bpf@vger.kernel.org

For task and task_file bpf iterators, the optional argument
'task main_thread_only' signals the kernel to iterate through only the
main thread of each process. link_query will also print the
main_thread_only value for task/task_file iterators.

This patch also changes bpftool to print an error message and exit if
an unsupported additional argument is given.

  $ ./bpftool iter pin ./bpf_iter_task.o /sys/fs/bpf/p1 task main_thread_only
  $ ./bpftool iter pin ./bpf_iter_task_file.o /sys/fs/bpf/p2 task main_thread_only
  $ ./bpftool iter pin ./bpf_iter_task_file.o /sys/fs/bpf/p3
  $ ./bpftool link show
  1: iter  prog 6  target_name bpf_map
  2: iter  prog 7  target_name bpf_prog
  3: iter  prog 12  target_name task  main_thread_only 1
  5: iter  prog 23  target_name task_file  main_thread_only 1
  6: iter  prog 28  target_name task_file  main_thread_only 0
  $ cat /sys/fs/bpf/p2
      tgid      gid       fd      file
  ...
      1716     1716      255 ffffffffa2e95ec0
      1756     1756        0 ffffffffa2e95ec0
      1756     1756        1 ffffffffa2e95ec0
      1756     1756        2 ffffffffa2e95ec0
      1756     1756        3 ffffffffa2e20a80
      1756     1756        4 ffffffffa2e19ba0
      1756     1756        5 ffffffffa2e16460
      1756     1756        6 ffffffffa2e16460
      1756     1756        7 ffffffffa2e16460
      1756     1756        8 ffffffffa2e16260
      1761     1761        0 ffffffffa2e95ec0
  ...
  $ ls /proc/1756/task/
  1756  1757  1758  1759  1760

In the above task_file iterator output, the process with pid 1756 has
5 threads, and only the thread with pid == 1756 is processed by the
bpf program.

Signed-off-by: Yonghong Song
---
 .../bpftool/Documentation/bpftool-iter.rst | 17 +++++++++--
 tools/bpf/bpftool/bash-completion/bpftool  |  9 +++++-
 tools/bpf/bpftool/iter.c                   | 28 ++++++++++++++++---
 tools/bpf/bpftool/link.c                   | 12 ++++++++
 4 files changed, 59 insertions(+), 7 deletions(-)

diff --git a/tools/bpf/bpftool/Documentation/bpftool-iter.rst b/tools/bpf/bpftool/Documentation/bpftool-iter.rst
index 070ffacb42b5..d9aac12c76da 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-iter.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-iter.rst
@@ -17,15 +17,16 @@ SYNOPSIS
 ITER COMMANDS
 ===================
 
-|	**bpftool** **iter pin** *OBJ* *PATH* [**map** *MAP*]
+|	**bpftool** **iter pin** *OBJ* *PATH* [**map** *MAP* | **task** *TASK_OPT*]
 |	**bpftool** **iter help**
 |
 |	*OBJ* := /a/file/of/bpf_iter_target.o
 |	*MAP* := { **id** *MAP_ID* | **pinned** *FILE* }
+|	*TASK_OPT* := { **main_thread_only** }
 
 DESCRIPTION
 ===========
-	 **bpftool iter pin** *OBJ* *PATH* [**map** *MAP*]
+	 **bpftool iter pin** *OBJ* *PATH* [**map** *MAP* | **task** *TASK_OPT*]
 		  A bpf iterator combines a kernel iterating of
 		  particular kernel data (e.g., tasks, bpf_maps, etc.)
 		  and a bpf program called for each kernel data object
@@ -44,6 +45,11 @@ DESCRIPTION
 		  with each map element, do checking, filtering, aggregation,
 		  etc. without copying data to user space.
 
+		  The task or task_file bpf iterator can have an optional
+		  parameter *TASK_OPT*. The currently supported value is
+		  **main_thread_only**, which iterates only the main
+		  thread of each process.
+
 		  User can then *cat PATH* to see the bpf iterator output.
 	 **bpftool iter help**
@@ -78,6 +84,13 @@ EXAMPLES
    Create a file-based bpf iterator from bpf_iter_hashmap.o and map with
    id 20, and pin it to /sys/fs/bpf/my_hashmap
 
+**# bpftool iter pin bpf_iter_task.o /sys/fs/bpf/my_task task main_thread_only**
+
+::
+
+   Create a file-based bpf iterator from bpf_iter_task.o which iterates main
+   threads of processes only, and pin it to /sys/fs/bpf/my_task
+
 SEE ALSO
 ========
 	**bpf**\ (2),
diff --git a/tools/bpf/bpftool/bash-completion/bpftool b/tools/bpf/bpftool/bash-completion/bpftool
index 7b68e3c0a5fb..84d538de71e1 100644
--- a/tools/bpf/bpftool/bash-completion/bpftool
+++ b/tools/bpf/bpftool/bash-completion/bpftool
@@ -613,6 +613,7 @@ _bpftool()
             esac
             ;;
         iter)
+            local TARGET_TYPE='map task'
             case $command in
                 pin)
                     case $prev in
@@ -628,9 +629,15 @@ _bpftool()
                         pinned)
                             _filedir
                             ;;
-                        *)
+                        task)
+                            _bpftool_one_of_list 'main_thread_only'
+                            ;;
+                        map)
                             _bpftool_one_of_list $MAP_TYPE
                             ;;
+                        *)
+                            _bpftool_one_of_list $TARGET_TYPE
+                            ;;
                     esac
                     return 0
                     ;;
diff --git a/tools/bpf/bpftool/iter.c b/tools/bpf/bpftool/iter.c
index 3b1aad7535dd..a4c789ea43f1 100644
--- a/tools/bpf/bpftool/iter.c
+++ b/tools/bpf/bpftool/iter.c
@@ -26,6 +26,7 @@ static int do_pin(int argc, char **argv)
 
 	/* optional arguments */
 	if (argc) {
+		memset(&linfo, 0, sizeof(linfo));
 		if (is_prefix(*argv, "map")) {
 			NEXT_ARG();
 
@@ -38,11 +39,29 @@ static int do_pin(int argc, char **argv)
 			if (map_fd < 0)
 				return -1;
 
-			memset(&linfo, 0, sizeof(linfo));
 			linfo.map.map_fd = map_fd;
-			iter_opts.link_info = &linfo;
-			iter_opts.link_info_len = sizeof(linfo);
+		} else if (is_prefix(*argv, "task")) {
+			NEXT_ARG();
+
+			if (!REQ_ARGS(1)) {
+				p_err("incorrect task spec");
+				return -1;
+			}
+
+			if (strcmp(*argv, "main_thread_only") != 0) {
+				p_err("incorrect task spec");
+				return -1;
+			}
+
+			linfo.task.main_thread_only = true;
+		} else {
+			p_err("expected no more arguments, 'map' or 'task', got: '%s'?",
+			      *argv);
+			return -1;
 		}
+
+		iter_opts.link_info = &linfo;
+		iter_opts.link_info_len = sizeof(linfo);
 	}
 
 	obj = bpf_object__open(objfile);
@@ -95,9 +114,10 @@ static int do_pin(int argc, char **argv)
 static int do_help(int argc, char **argv)
 {
 	fprintf(stderr,
-		"Usage: %1$s %2$s pin OBJ PATH [map MAP]\n"
+		"Usage: %1$s %2$s pin OBJ PATH [map MAP | task TASK_OPT]\n"
 		"       %1$s %2$s help\n"
 		"       " HELP_SPEC_MAP "\n"
+		"       TASK_OPT := { main_thread_only }\n"
 		"",
 		bin_name, "iter");
diff --git a/tools/bpf/bpftool/link.c b/tools/bpf/bpftool/link.c
index e77e1525d20a..a159d5680c74 100644
--- a/tools/bpf/bpftool/link.c
+++ b/tools/bpf/bpftool/link.c
@@ -83,6 +83,12 @@ static bool is_iter_map_target(const char *target_name)
 	       strcmp(target_name, "bpf_sk_storage_map") == 0;
 }
 
+static bool is_iter_task_target(const char *target_name)
+{
+	return strcmp(target_name, "task") == 0 ||
+	       strcmp(target_name, "task_file") == 0;
+}
+
 static void show_iter_json(struct bpf_link_info *info, json_writer_t *wtr)
 {
 	const char *target_name = u64_to_ptr(info->iter.target_name);
@@ -91,6 +97,9 @@ static void show_iter_json(struct bpf_link_info *info, json_writer_t *wtr)
 
 	if (is_iter_map_target(target_name))
 		jsonw_uint_field(wtr, "map_id", info->iter.map.map_id);
+	else if (is_iter_task_target(target_name))
+		jsonw_uint_field(wtr, "main_thread_only",
+				 info->iter.task.main_thread_only);
 }
 
 static int get_prog_info(int prog_id, struct bpf_prog_info *info)
@@ -202,6 +211,9 @@ static void show_iter_plain(struct bpf_link_info *info)
 
 	if (is_iter_map_target(target_name))
 		printf("map_id %u  ", info->iter.map.map_id);
+	else if (is_iter_task_target(target_name))
printf("main_thread_only %u ", + info->iter.task.main_thread_only); } static int show_link_close_plain(int fd, struct bpf_link_info *info) From patchwork Thu Aug 27 00:06:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yonghong Song X-Patchwork-Id: 1352183 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=reject dis=none) header.from=fb.com Authentication-Results: ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=fb.com header.i=@fb.com header.a=rsa-sha256 header.s=facebook header.b=e0tfi6EI; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4BcNLk2Yc3z9sTg for ; Thu, 27 Aug 2020 10:06:38 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726938AbgH0AGh (ORCPT ); Wed, 26 Aug 2020 20:06:37 -0400 Received: from mx0b-00082601.pphosted.com ([67.231.153.30]:39988 "EHLO mx0b-00082601.pphosted.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726876AbgH0AGb (ORCPT ); Wed, 26 Aug 2020 20:06:31 -0400 Received: from pps.filterd (m0148460.ppops.net [127.0.0.1]) by mx0a-00082601.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 07R051Cg000821 for ; Wed, 26 Aug 2020 17:06:31 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=fb.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=facebook; bh=dste37kOoRMfRa0kpK6WbpiioQNAoXy9XpJkYx45WO4=; b=e0tfi6EIkmu6m8Lu23OD3FuZ6P0Y0m2urc2GhNLHNm6LBRyEsUVXUPhELQcGRW4NPH6a KxJ5E5CItjdhT9dwfUXu1av7KEMWOyVafrno4VS7zuhuOAxsbjcngj2An+oQfnvzlov+ svg+OsK+QNzPNFhaw13GoYH9nulXuehKOuk= Received: from mail.thefacebook.com ([163.114.132.120]) by mx0a-00082601.pphosted.com with ESMTP id 335up629pe-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128 verify=NOT) for ; Wed, 26 Aug 2020 17:06:31 -0700 Received: from intmgw002.03.ash8.facebook.com (2620:10d:c085:208::f) by mail.thefacebook.com (2620:10d:c085:11d::7) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.1979.3; Wed, 26 Aug 2020 17:06:29 -0700 Received: by devbig003.ftw2.facebook.com (Postfix, from userid 128203) id 4498E37052E0; Wed, 26 Aug 2020 17:06:24 -0700 (PDT) Smtp-Origin-Hostprefix: devbig From: Yonghong Song Smtp-Origin-Hostname: devbig003.ftw2.facebook.com To: , CC: Alexei Starovoitov , Daniel Borkmann , Smtp-Origin-Cluster: ftw2c04 Subject: [PATCH bpf-next 5/5] selftests/bpf: test task_file iterator with main_thread_only Date: Wed, 26 Aug 2020 17:06:24 -0700 Message-ID: <20200827000624.2712244-1-yhs@fb.com> X-Mailer: git-send-email 2.24.1 In-Reply-To: <20200827000618.2711826-1-yhs@fb.com> References: <20200827000618.2711826-1-yhs@fb.com> MIME-Version: 1.0 X-FB-Internal: Safe X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235,18.0.687 definitions=2020-08-26_14:2020-08-26,2020-08-26 signatures=0 X-Proofpoint-Spam-Details: rule=fb_default_notspam policy=fb_default score=0 mlxscore=0 priorityscore=1501 mlxlogscore=999 malwarescore=0 spamscore=0 clxscore=1015 impostorscore=0 suspectscore=8 
Modify the existing bpf_iter_task_file program to check whether all
accessed files come from the main thread or not.

  $ ./test_progs -n 4
  ...
  #4/7 task_file:OK
  ...
  #4 bpf_iter:OK
  Summary: 1/24 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Yonghong Song
---
 .../selftests/bpf/prog_tests/bpf_iter.c       | 50 ++++++++++++++-----
 .../selftests/bpf/progs/bpf_iter_task_file.c  |  9 +++-
 2 files changed, 46 insertions(+), 13 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
index 7375d9a6d242..235580801372 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
@@ -37,13 +37,14 @@ static void test_btf_id_or_null(void)
 	}
 }
 
-static void do_dummy_read(struct bpf_program *prog)
+static void do_dummy_read(struct bpf_program *prog,
+			  struct bpf_iter_attach_opts *opts)
 {
 	struct bpf_link *link;
 	char buf[16] = {};
 	int iter_fd, len;
 
-	link = bpf_program__attach_iter(prog, NULL);
+	link = bpf_program__attach_iter(prog, opts);
 	if (CHECK(IS_ERR(link), "attach_iter", "attach_iter failed\n"))
 		return;
 
@@ -71,7 +72,7 @@ static void test_ipv6_route(void)
 		  "skeleton open_and_load failed\n"))
 		return;
 
-	do_dummy_read(skel->progs.dump_ipv6_route);
+	do_dummy_read(skel->progs.dump_ipv6_route, NULL);
 
 	bpf_iter_ipv6_route__destroy(skel);
 }
@@ -85,7 +86,7 @@ static void test_netlink(void)
 		  "skeleton open_and_load failed\n"))
 		return;
 
-	do_dummy_read(skel->progs.dump_netlink);
+	do_dummy_read(skel->progs.dump_netlink, NULL);
 
 	bpf_iter_netlink__destroy(skel);
 }
@@ -99,7 +100,7 @@ static void test_bpf_map(void)
 		  "skeleton open_and_load failed\n"))
 		return;
 
-	do_dummy_read(skel->progs.dump_bpf_map);
+	do_dummy_read(skel->progs.dump_bpf_map, NULL);
 
 	bpf_iter_bpf_map__destroy(skel);
 }
@@ -113,7 +114,7 @@ static void test_task(void)
 		  "skeleton open_and_load failed\n"))
 		return;
 
-	do_dummy_read(skel->progs.dump_task);
+	do_dummy_read(skel->progs.dump_task, NULL);
 
 	bpf_iter_task__destroy(skel);
 }
@@ -127,22 +128,47 @@ static void test_task_stack(void)
 		  "skeleton open_and_load failed\n"))
 		return;
 
-	do_dummy_read(skel->progs.dump_task_stack);
+	do_dummy_read(skel->progs.dump_task_stack, NULL);
 
 	bpf_iter_task_stack__destroy(skel);
 }
 
+static void *do_nothing(void *arg)
+{
+	pthread_exit(arg);
+}
+
 static void test_task_file(void)
 {
+	DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts);
 	struct bpf_iter_task_file *skel;
+	union bpf_iter_link_info linfo;
+	pthread_t thread_id;
+	void *ret;
 
 	skel = bpf_iter_task_file__open_and_load();
 	if (CHECK(!skel, "bpf_iter_task_file__open_and_load",
 		  "skeleton open_and_load failed\n"))
 		return;
 
-	do_dummy_read(skel->progs.dump_task_file);
+	if (CHECK(pthread_create(&thread_id, NULL, &do_nothing, NULL),
+		  "pthread_create", "pthread_create failed\n"))
+		goto done;
+
+	memset(&linfo, 0, sizeof(linfo));
+	linfo.task.main_thread_only = true;
+	opts.link_info = &linfo;
+	opts.link_info_len = sizeof(linfo);
+	do_dummy_read(skel->progs.dump_task_file, &opts);
+
+	if (CHECK(pthread_join(thread_id, &ret) || ret != NULL,
+		  "pthread_join", "pthread_join failed\n"))
+		goto done;
+
+	CHECK(skel->bss->count != 0, "",
+	      "invalid non main-thread file visit %d\n", skel->bss->count);
+
+done:
 	bpf_iter_task_file__destroy(skel);
 }
@@ -155,7 +181,7 @@ static void test_tcp4(void)
 		  "skeleton open_and_load failed\n"))
 		return;
 
-	do_dummy_read(skel->progs.dump_tcp4);
+	do_dummy_read(skel->progs.dump_tcp4, NULL);
 
 	bpf_iter_tcp4__destroy(skel);
 }
@@ -169,7 +195,7 @@ static void test_tcp6(void)
 		  "skeleton open_and_load failed\n"))
 		return;
 
-	do_dummy_read(skel->progs.dump_tcp6);
+	do_dummy_read(skel->progs.dump_tcp6, NULL);
 
 	bpf_iter_tcp6__destroy(skel);
 }
@@ -183,7 +209,7 @@ static void test_udp4(void)
 		  "skeleton open_and_load failed\n"))
 		return;
 
-	do_dummy_read(skel->progs.dump_udp4);
+	do_dummy_read(skel->progs.dump_udp4, NULL);
 
 	bpf_iter_udp4__destroy(skel);
 }
@@ -197,7 +223,7 @@ static void test_udp6(void)
 		  "skeleton open_and_load failed\n"))
 		return;
 
-	do_dummy_read(skel->progs.dump_udp6);
+	do_dummy_read(skel->progs.dump_udp6, NULL);
 
 	bpf_iter_udp6__destroy(skel);
 }
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_task_file.c b/tools/testing/selftests/bpf/progs/bpf_iter_task_file.c
index 8b787baa2654..880f6c9db8fb 100644
--- a/tools/testing/selftests/bpf/progs/bpf_iter_task_file.c
+++ b/tools/testing/selftests/bpf/progs/bpf_iter_task_file.c
@@ -6,6 +6,8 @@
 
 char _license[] SEC("license") = "GPL";
 
+int count = 0;
+
 SEC("iter/task_file")
 int dump_task_file(struct bpf_iter__task_file *ctx)
 {
@@ -17,8 +19,13 @@ int dump_task_file(struct bpf_iter__task_file *ctx)
 	if (task == (void *)0 || file == (void *)0)
 		return 0;
 
-	if (ctx->meta->seq_num == 0)
+	if (ctx->meta->seq_num == 0) {
+		count = 0; /* reset the counter for this seq_file session */
 		BPF_SEQ_PRINTF(seq, "    tgid      gid       fd      file\n");
+	}
+
+	if (task->tgid != task->pid)
+		count++;
 
 	BPF_SEQ_PRINTF(seq, "%8d %8d %8d %lx\n", task->tgid, task->pid, fd,
 		       (long)file->f_op);