From patchwork Tue Jan 10 23:24:40 2017
X-Patchwork-Submitter: Andy Lutomirski
X-Patchwork-Id: 713490
X-Patchwork-Delegate: davem@davemloft.net
From: Andy Lutomirski
To: Daniel Borkmann, Netdev, LKML, Linux Crypto Mailing List
Cc: "Jason A. Donenfeld", Hannes Frederic Sowa, Alexei Starovoitov,
    Eric Dumazet, Eric Biggers, Tom Herbert, "David S. Miller",
    Andy Lutomirski, Ard Biesheuvel, Herbert Xu
Subject: [PATCH v2 2/8] crypto/sha256: Export a sha256_{init,update,final}_direct() API
Date: Tue, 10 Jan 2017 15:24:40 -0800
Message-Id: <8c59a4dd8b7ba4f2e5a6461132bbd16c83ff7c1f.1484090585.git.luto@kernel.org>
X-Mailer: git-send-email 2.9.3
X-Mailing-List: netdev@vger.kernel.org

This provides a very simple interface that kernel code can use to do
synchronous, unaccelerated, virtual-address-based SHA256 hashing
without needing to create a crypto context.

Subsequent patches will make this work without building the crypto
core and will use it to avoid making BPF-based tracing depend on
crypto.

Cc: Ard Biesheuvel
Cc: Herbert Xu
Signed-off-by: Andy Lutomirski
---
 crypto/sha256_generic.c      | 31 ++++++++++++++++++++++++++-----
 include/crypto/sha.h         | 24 ++++++++++++++++++++++++
 include/crypto/sha256_base.h | 13 -------------
 3 files changed, 50 insertions(+), 18 deletions(-)

diff --git a/crypto/sha256_generic.c b/crypto/sha256_generic.c
index 8f9c47e1a96e..573e114382f9 100644
--- a/crypto/sha256_generic.c
+++ b/crypto/sha256_generic.c
@@ -240,24 +240,45 @@ static void sha256_generic_block_fn(struct sha256_state *sst, u8 const *src,
 	}
 }
 
+void sha256_update_direct(struct sha256_state *sctx, const u8 *data,
+			  unsigned int len)
+{
+	__sha256_base_do_update(sctx, data, len, sha256_generic_block_fn);
+}
+EXPORT_SYMBOL(sha256_update_direct);
+
 int crypto_sha256_update(struct shash_desc *desc, const u8 *data,
 			 unsigned int len)
 {
-	return sha256_base_do_update(desc, data, len, sha256_generic_block_fn);
+	sha256_update_direct(shash_desc_ctx(desc), data, len);
+	return 0;
 }
 EXPORT_SYMBOL(crypto_sha256_update);
 
 static int sha256_final(struct shash_desc *desc, u8 *out)
 {
-	sha256_base_do_finalize(desc, sha256_generic_block_fn);
-	return sha256_base_finish(desc, out);
+	__sha256_final_direct(shash_desc_ctx(desc),
+			      crypto_shash_digestsize(desc->tfm), out);
+	return 0;
 }
 
+void __sha256_final_direct(struct sha256_state *sctx, unsigned int digest_size,
+			   u8 *out)
+{
+	sha256_do_finalize_direct(sctx, sha256_generic_block_fn);
+	__sha256_base_finish(sctx, digest_size, out);
+}
+EXPORT_SYMBOL(__sha256_final_direct);
+
 int crypto_sha256_finup(struct shash_desc *desc, const u8 *data,
 			unsigned int len, u8 *hash)
 {
-	sha256_base_do_update(desc, data, len, sha256_generic_block_fn);
-	return sha256_final(desc, hash);
+	struct sha256_state *sctx = shash_desc_ctx(desc);
+	unsigned int digest_size = crypto_shash_digestsize(desc->tfm);
+
+	sha256_update_direct(sctx, data, len);
+	__sha256_final_direct(sctx, digest_size, hash);
+	return 0;
 }
 EXPORT_SYMBOL(crypto_sha256_finup);
 
diff --git a/include/crypto/sha.h b/include/crypto/sha.h
index c94d3eb1cefd..0df1a0e42c95 100644
--- a/include/crypto/sha.h
+++ b/include/crypto/sha.h
@@ -88,6 +88,30 @@ struct sha512_state {
 	u8 buf[SHA512_BLOCK_SIZE];
 };
 
+static inline void sha256_init_direct(struct sha256_state *sctx)
+{
+	sctx->state[0] = SHA256_H0;
+	sctx->state[1] = SHA256_H1;
+	sctx->state[2] = SHA256_H2;
+	sctx->state[3] = SHA256_H3;
+	sctx->state[4] = SHA256_H4;
+	sctx->state[5] = SHA256_H5;
+	sctx->state[6] = SHA256_H6;
+	sctx->state[7] = SHA256_H7;
+	sctx->count = 0;
+}
+
+extern void sha256_update_direct(struct sha256_state *sctx, const u8 *data,
+				 unsigned int len);
+
+extern void __sha256_final_direct(struct sha256_state *sctx,
+				  unsigned int digest_size, u8 *out);
+
+static inline void sha256_final_direct(struct sha256_state *sctx, u8 *out)
+{
+	__sha256_final_direct(sctx, SHA256_DIGEST_SIZE, out);
+}
+
 struct shash_desc;
 
 extern int crypto_sha1_update(struct shash_desc *desc, const u8 *data,

diff --git a/include/crypto/sha256_base.h b/include/crypto/sha256_base.h
index fc77b8e099a7..9bbe73ce458f 100644
--- a/include/crypto/sha256_base.h
+++ b/include/crypto/sha256_base.h
@@ -37,19 +37,6 @@ static inline int sha224_base_init(struct shash_desc *desc)
 	return 0;
 }
 
-static inline void sha256_init_direct(struct sha256_state *sctx)
-{
-	sctx->state[0] = SHA256_H0;
-	sctx->state[1] = SHA256_H1;
-	sctx->state[2] = SHA256_H2;
-	sctx->state[3] = SHA256_H3;
-	sctx->state[4] = SHA256_H4;
-	sctx->state[5] = SHA256_H5;
-	sctx->state[6] = SHA256_H6;
-	sctx->state[7] = SHA256_H7;
-	sctx->count = 0;
-}
-
 static inline int sha256_base_init(struct shash_desc *desc)
 {
 	sha256_init_direct(shash_desc_ctx(desc));