From patchwork Wed Jun 6 14:20:49 2018
X-Patchwork-Submitter: Juerg Haefliger
X-Patchwork-Id: 925878
From: Juerg Haefliger
To: kernel-team@lists.ubuntu.com
Subject: [SRU][Xenial][PATCH 2/5] x86: fix SMAP in 32-bit environments
Date: Wed, 6 Jun 2018 16:20:49 +0200
Message-Id: <20180606142052.32684-3-juergh@canonical.com>
In-Reply-To: <20180606142052.32684-1-juergh@canonical.com>
References: <20180606142052.32684-1-juergh@canonical.com>
From: Linus Torvalds

BugLink: https://bugs.launchpad.net/bugs/1775137

In commit 11f1a4b9755f ("x86: reorganize SMAP handling in user space
accesses") I changed how the stac/clac instructions were generated
around the user space accesses, which then made it possible to do
batched accesses efficiently for user string copies etc.

However, in doing so, I completely spaced out, and didn't even think
about the 32-bit case.  And nobody really even seemed to notice, because
SMAP doesn't even exist until modern Skylake processors, and you'd have
to be crazy to run 32-bit kernels on a modern CPU.

Which brings us to Andy Lutomirski.  He actually tested the 32-bit
kernel on new hardware, and noticed that it doesn't work.  My bad.

The trivial fix is to add the required uaccess begin/end markers around
the raw accesses in <asm/uaccess_32.h>.

I feel a bit bad about this patch, just because that header file really
should be cleaned up to avoid all the duplicated code in it, and this
commit just expands on the problem.  But this just fixes the bug without
any bigger cleanup surgery.

Reported-and-tested-by: Andy Lutomirski
Signed-off-by: Linus Torvalds
(cherry picked from commit de9e478b9d49f3a0214310d921450cf5bb4a21e6)
Signed-off-by: Juerg Haefliger
---
 arch/x86/include/asm/uaccess_32.h | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index f5dcb5204dcd..3fe0eac59462 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -48,20 +48,28 @@ __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
 
 		switch (n) {
 		case 1:
+			__uaccess_begin();
 			__put_user_size(*(u8 *)from, (u8 __user *)to,
 					1, ret, 1);
+			__uaccess_end();
 			return ret;
 		case 2:
+			__uaccess_begin();
 			__put_user_size(*(u16 *)from, (u16 __user *)to,
 					2, ret, 2);
+			__uaccess_end();
 			return ret;
 		case 4:
+			__uaccess_begin();
 			__put_user_size(*(u32 *)from, (u32 __user *)to,
 					4, ret, 4);
+			__uaccess_end();
 			return ret;
 		case 8:
+			__uaccess_begin();
 			__put_user_size(*(u64 *)from, (u64 __user *)to,
 					8, ret, 8);
+			__uaccess_end();
 			return ret;
 		}
 	}
@@ -103,13 +111,19 @@ __copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
 
 		switch (n) {
 		case 1:
+			__uaccess_begin();
 			__get_user_size(*(u8 *)to, from, 1, ret, 1);
+			__uaccess_end();
 			return ret;
 		case 2:
+			__uaccess_begin();
 			__get_user_size(*(u16 *)to, from, 2, ret, 2);
+			__uaccess_end();
 			return ret;
 		case 4:
+			__uaccess_begin();
 			__get_user_size(*(u32 *)to, from, 4, ret, 4);
+			__uaccess_end();
 			return ret;
 		}
 	}
@@ -148,13 +162,19 @@ __copy_from_user(void *to, const void __user *from, unsigned long n)
 
 		switch (n) {
 		case 1:
+			__uaccess_begin();
 			__get_user_size(*(u8 *)to, from, 1, ret, 1);
+			__uaccess_end();
 			return ret;
 		case 2:
+			__uaccess_begin();
 			__get_user_size(*(u16 *)to, from, 2, ret, 2);
+			__uaccess_end();
 			return ret;
 		case 4:
+			__uaccess_begin();
 			__get_user_size(*(u32 *)to, from, 4, ret, 4);
+			__uaccess_end();
 			return ret;
 		}
 	}
@@ -170,13 +190,19 @@ static __always_inline unsigned long __copy_from_user_nocache(void *to,
 
 		switch (n) {
 		case 1:
+			__uaccess_begin();
 			__get_user_size(*(u8 *)to, from, 1, ret, 1);
+			__uaccess_end();
 			return ret;
 		case 2:
+			__uaccess_begin();
 			__get_user_size(*(u16 *)to, from, 2, ret, 2);
+			__uaccess_end();
 			return ret;
 		case 4:
+			__uaccess_begin();
 			__get_user_size(*(u32 *)to, from, 4, ret, 4);
+			__uaccess_end();
 			return ret;
 		}
 	}
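
For reference, __uaccess_begin() and __uaccess_end() are thin wrappers around
the SMAP instructions: on CPUs with X86_FEATURE_SMAP they execute STAC and CLAC
to set and clear EFLAGS.AC, briefly permitting supervisor-mode access to user
pages around the raw access and then forbidding it again. The snippet below is
a condensed sketch of that pairing, not the actual definitions in asm/uaccess.h
and asm/smap.h, which go through ALTERNATIVE() so the instructions turn into
no-ops on CPUs without SMAP.

	/*
	 * Condensed sketch of the markers used above -- not the real kernel
	 * headers.  The actual definitions are patched via ALTERNATIVE()
	 * keyed on X86_FEATURE_SMAP so that older CPUs execute no-ops.
	 */
	static __always_inline void __uaccess_begin(void)
	{
		asm volatile("stac" ::: "memory"); /* set EFLAGS.AC: allow user access */
	}

	static __always_inline void __uaccess_end(void)
	{
		asm volatile("clac" ::: "memory"); /* clear EFLAGS.AC: forbid user access */
	}

With that pairing in mind, the hunks above simply restore the invariant that
every raw __get_user_size()/__put_user_size() call in the 32-bit constant-size
fast paths runs between a begin/end pair, as the paths reworked by commit
11f1a4b9755f already do.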