From patchwork Sat Dec 24 02:22:30 2016
X-Patchwork-Submitter: Andy Lutomirski
X-Patchwork-Id: 708554
X-Patchwork-Delegate: davem@davemloft.net
From: Andy Lutomirski
To: Daniel Borkmann, Netdev, LKML, Linux Crypto Mailing List
Cc: "Jason A. Donenfeld", Hannes Frederic Sowa, Alexei Starovoitov,
    Eric Dumazet, Eric Biggers, Tom Herbert, "David S. Miller",
    Andy Lutomirski, Alexei Starovoitov
Subject: [RFC PATCH 4.10 4/6] bpf: Avoid copying the entire BPF program when hashing it
Date: Fri, 23 Dec 2016 18:22:30 -0800
Message-Id: <5ceef8771e35980e4e249d042075cd80c729f332.1482545792.git.luto@kernel.org>

The sha256 helpers can consume a message incrementally, so there's no
need to allocate a buffer to store the whole blob to be hashed.

This may be a slight slowdown for very long messages because gcc can't
inline the sha256_update() calls.  For reasonably-sized programs,
however, this should be a considerable speedup, as vmalloc() is quite
slow.

Cc: Daniel Borkmann
Cc: Alexei Starovoitov
Signed-off-by: Andy Lutomirski
---
 kernel/bpf/core.c | 34 ++++++++++++++--------------------
 1 file changed, 14 insertions(+), 20 deletions(-)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 911993863799..1c2931f505af 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -149,43 +149,37 @@ void __bpf_prog_free(struct bpf_prog *fp)
 int bpf_prog_calc_digest(struct bpf_prog *fp)
 {
 	struct sha256_state sha;
-	u32 i, psize;
-	struct bpf_insn *dst;
+	u32 i;
 	bool was_ld_map;
-	u8 *raw;
-
-	psize = bpf_prog_insn_size(fp);
-	raw = vmalloc(psize);
-	if (!raw)
-		return -ENOMEM;
 
 	sha256_init(&sha);
 
 	/* We need to take out the map fd for the digest calculation
 	 * since they are unstable from user space side.
 	 */
-	dst = (void *)raw;
 	for (i = 0, was_ld_map = false; i < fp->len; i++) {
-		dst[i] = fp->insnsi[i];
+		struct bpf_insn insn = fp->insnsi[i];
+
 		if (!was_ld_map &&
-		    dst[i].code == (BPF_LD | BPF_IMM | BPF_DW) &&
-		    dst[i].src_reg == BPF_PSEUDO_MAP_FD) {
+		    insn.code == (BPF_LD | BPF_IMM | BPF_DW) &&
+		    insn.src_reg == BPF_PSEUDO_MAP_FD) {
 			was_ld_map = true;
-			dst[i].imm = 0;
+			insn.imm = 0;
 		} else if (was_ld_map &&
-			   dst[i].code == 0 &&
-			   dst[i].dst_reg == 0 &&
-			   dst[i].src_reg == 0 &&
-			   dst[i].off == 0) {
+			   insn.code == 0 &&
+			   insn.dst_reg == 0 &&
+			   insn.src_reg == 0 &&
+			   insn.off == 0) {
 			was_ld_map = false;
-			dst[i].imm = 0;
+			insn.imm = 0;
 		} else {
 			was_ld_map = false;
 		}
+
+		sha256_update(&sha, (const u8 *)&insn, sizeof(insn));
 	}
 
-	sha256_finup(&sha, raw, psize, fp->digest);
-	vfree(raw);
+	sha256_final(&sha, fp->digest);
 	return 0;
 }
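
For anyone who wants to convince themselves that streaming one patched
instruction at a time into the hash yields the same digest as the old
copy-patch-hash-in-one-shot approach, here is a minimal userspace
sketch.  It is not part of the patch: it substitutes OpenSSL's SHA256
one-shot and streaming calls for the kernel's sha256_init() /
sha256_update() / sha256_final() helpers, and uses a simplified 8-byte
stand-in for struct bpf_insn; both are assumptions made so the demo
builds outside the kernel.

/* demo.c -- illustrative only; build with: gcc demo.c -lcrypto */
#include <openssl/sha.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Simplified, hypothetical 8-byte stand-in for struct bpf_insn. */
struct insn {
	uint8_t  code;
	uint8_t  regs;	/* dst_reg/src_reg packed; packing simplified */
	int16_t  off;
	int32_t  imm;
};

int main(void)
{
	/* Pretend insn 0 loads a map fd whose value is unstable. */
	struct insn prog[3] = {
		{ .code = 0x18, .imm = 42 },	/* fd must be masked */
		{ .code = 0xb7, .imm = 1 },
		{ .code = 0x95 },		/* exit */
	};
	unsigned char one_shot[SHA256_DIGEST_LENGTH];
	unsigned char streamed[SHA256_DIGEST_LENGTH];
	SHA256_CTX ctx;
	size_t i;

	/* Old approach: copy the whole program, patch the copy,
	 * then hash the copy in one call. */
	struct insn *copy = malloc(sizeof(prog));
	if (!copy)
		return 1;
	memcpy(copy, prog, sizeof(prog));
	copy[0].imm = 0;			/* mask the unstable fd */
	SHA256((const unsigned char *)copy, sizeof(prog), one_shot);
	free(copy);

	/* New approach: patch an 8-byte stack copy of each insn and
	 * stream it into the hash -- no large temporary buffer. */
	SHA256_Init(&ctx);
	for (i = 0; i < 3; i++) {
		struct insn tmp = prog[i];
		if (i == 0)
			tmp.imm = 0;		/* same masking, per insn */
		SHA256_Update(&ctx, &tmp, sizeof(tmp));
	}
	SHA256_Final(streamed, &ctx);

	printf("digests %s\n",
	       memcmp(one_shot, streamed, sizeof(one_shot)) ?
	       "differ (bug)" : "match");
	return 0;
}

The equivalence holds because struct bpf_insn is a fixed 8 bytes with
no padding, so hashing each patched stack copy in order feeds the hash
exactly the byte stream the old code assembled in the vmalloc()'d
buffer.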