From patchwork Wed Jun 6 14:20:51 2018
X-Patchwork-Submitter: Juerg Haefliger
X-Patchwork-Id: 925880
From: Juerg Haefliger
To: kernel-team@lists.ubuntu.com
Subject: [SRU][Xenial][PATCH 4/5] x86/usercopy: Replace open coded stac/clac with __uaccess_{begin, end}
Date: Wed, 6 Jun 2018 16:20:51 +0200
Message-Id: <20180606142052.32684-5-juergh@canonical.com>
In-Reply-To: <20180606142052.32684-1-juergh@canonical.com>
References: <20180606142052.32684-1-juergh@canonical.com>
From: Dan Williams

BugLink: https://bugs.launchpad.net/bugs/1775137

In preparation for converting some __uaccess_begin() instances to
__uaccess_begin_nospec(), make sure all 'from user' uaccess paths are
using the _begin(), _end() helpers rather than open-coded stac() and
clac().

No functional changes.

Suggested-by: Ingo Molnar
Signed-off-by: Dan Williams
Signed-off-by: Thomas Gleixner
Cc: linux-arch@vger.kernel.org
Cc: Tom Lendacky
Cc: Kees Cook
Cc: kernel-hardening@lists.openwall.com
Cc: gregkh@linuxfoundation.org
Cc: Al Viro
Cc: torvalds@linux-foundation.org
Cc: alan@linux.intel.com
Link: https://lkml.kernel.org/r/151727416438.33451.17309465232057176966.stgit@dwillia2-desk3.amr.corp.intel.com
(backported from commit b5c4ae4f35325d520b230bab6eb3310613b72ac1)
[juergh:
 - Replaced some more clac/stac with __uaccess_begin/end.
 - Adjusted context.]
Signed-off-by: Juerg Haefliger
---
 arch/x86/lib/usercopy_32.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/x86/lib/usercopy_32.c b/arch/x86/lib/usercopy_32.c
index 91d93b95bd86..5755942f5eb2 100644
--- a/arch/x86/lib/usercopy_32.c
+++ b/arch/x86/lib/usercopy_32.c
@@ -570,12 +570,12 @@ do { \
 unsigned long __copy_to_user_ll(void __user *to, const void *from,
 				unsigned long n)
 {
-	stac();
+	__uaccess_begin();
 	if (movsl_is_ok(to, from, n))
 		__copy_user(to, from, n);
 	else
 		n = __copy_user_intel(to, from, n);
-	clac();
+	__uaccess_end();
 	return n;
 }
 EXPORT_SYMBOL(__copy_to_user_ll);
@@ -583,12 +583,12 @@ EXPORT_SYMBOL(__copy_to_user_ll);
 unsigned long __copy_from_user_ll(void *to, const void __user *from,
 				unsigned long n)
 {
-	stac();
+	__uaccess_begin();
 	if (movsl_is_ok(to, from, n))
 		__copy_user_zeroing(to, from, n);
 	else
 		n = __copy_user_zeroing_intel(to, from, n);
-	clac();
+	__uaccess_end();
 	return n;
 }
 EXPORT_SYMBOL(__copy_from_user_ll);
@@ -596,13 +596,13 @@ EXPORT_SYMBOL(__copy_from_user_ll);
 unsigned long __copy_from_user_ll_nozero(void *to, const void __user *from,
 					 unsigned long n)
 {
-	stac();
+	__uaccess_begin();
 	if (movsl_is_ok(to, from, n))
 		__copy_user(to, from, n);
 	else
 		n = __copy_user_intel((void __user *)to,
 				      (const void *)from, n);
-	clac();
+	__uaccess_end();
 	return n;
 }
 EXPORT_SYMBOL(__copy_from_user_ll_nozero);
@@ -610,7 +610,7 @@ EXPORT_SYMBOL(__copy_from_user_ll_nozero);
 unsigned long __copy_from_user_ll_nocache(void *to, const void __user *from,
 					unsigned long n)
 {
-	stac();
+	__uaccess_begin();
 #ifdef CONFIG_X86_INTEL_USERCOPY
 	if (n > 64 && cpu_has_xmm2)
 		n = __copy_user_zeroing_intel_nocache(to, from, n);
@@ -619,7 +619,7 @@ unsigned long __copy_from_user_ll_nocache(void *to, const void __user *from,
 #else
 	__copy_user_zeroing(to, from, n);
 #endif
-	clac();
+	__uaccess_end();
 	return n;
 }
 EXPORT_SYMBOL(__copy_from_user_ll_nocache);
@@ -627,7 +627,7 @@ EXPORT_SYMBOL(__copy_from_user_ll_nocache);
 unsigned long __copy_from_user_ll_nocache_nozero(void *to, const void __user *from,
 					unsigned long n)
 {
-	stac();
+	__uaccess_begin();
 #ifdef CONFIG_X86_INTEL_USERCOPY
 	if (n > 64 && cpu_has_xmm2)
 		n = __copy_user_intel_nocache(to, from, n);
@@ -636,7 +636,7 @@ unsigned long __copy_from_user_ll_nocache_nozero(void *to, const void __user *fr
 #else
 	__copy_user(to, from, n);
 #endif
-	clac();
+	__uaccess_end();
 	return n;
 }
 EXPORT_SYMBOL(__copy_from_user_ll_nocache_nozero);
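
For context, a simplified sketch (not the verbatim kernel source) of the
helpers this patch switches to, as defined in
arch/x86/include/asm/uaccess.h around this era:

/*
 * Plain begin/end just toggle EFLAGS.AC so that SMAP permits kernel
 * access to user pages for the duration of the copy.
 */
#define __uaccess_begin()	stac()
#define __uaccess_end()		clac()

/*
 * The Spectre v1 series introduces this variant, which additionally
 * stops speculative execution from racing ahead of the preceding
 * access_ok() bounds check:
 */
#define __uaccess_begin_nospec()	\
({					\
	stac();				\
	barrier_nospec();		\
})

Once every 'from user' path goes through the helpers instead of
open-coded stac()/clac(), the follow-up conversion to
__uaccess_begin_nospec() becomes a mechanical substitution at each call
site rather than a re-audit of hand-rolled stac/clac sequences.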