From patchwork Wed Aug 22 06:40:21 2018
X-Patchwork-Submitter: Juerg Haefliger
X-Patchwork-Id: 960798
From: Juerg Haefliger <juergh@canonical.com>
To: kernel-team@lists.ubuntu.com
Cc: juergh@canonical.com
Subject: [SRU][Trusty][PATCH 7/7] UBUNTU: SAUCE: x86/fremap: Invert the offset when converting to/from a PTE
Date: Wed, 22 Aug 2018 08:40:21 +0200
Message-Id: <20180822064021.17216-8-juergh@canonical.com>
In-Reply-To: <20180822064021.17216-1-juergh@canonical.com>
References: <20180822064021.17216-1-juergh@canonical.com>

The 3.13 kernel still uses non-emulated code for the remap_file_pages
syscall, which makes use of macros to convert between page offsets and
PTEs.
These macros need to invert the offset for L1TF protection. Without
this, the page table entries of a remapped and swapped-out file page
look like this:

  [28865.660359] virtual user addr: 00007fe49ea9b000
  [28865.660360] page: ffffeddf927aa6c0
  [28865.660361] pgd: ffff8802605267f8 (8000000260229067) | USR RW NX    | pgd
  [28865.660365] pud: ffff880260229c90 (000000025f9d1067) | USR RW PAT x | pud 1G
  [28865.660368] pmd: ffff88025f9d17a8 (00000002602d8067) | USR RW x     | pmd 2M
  [28865.660371] pte: ffff8802602d84d8 (00000000001f4040) | ro x         | pte 4K
                                              ^^^^^^ non-inverted offset

With this commit, they look like this:

  [ 2564.508511] virtual user addr: 00007f728c787000
  [ 2564.508514] page: ffffedddca31e1c0
  [ 2564.508518] pgd: ffff8802603207f0 (800000026036b067) | USR RW NX    | pgd
  [ 2564.508531] pud: ffff88026036be50 (0000000260ee6067) | USR RW x     | pud 1G
  [ 2564.508543] pmd: ffff880260ee6318 (0000000260360067) | USR RW x     | pmd 2M
  [ 2564.508554] pte: ffff880260360c38 (00003fffffe0b040) | ro x         | pte 4K
                                        ^^^^^^ inverted offset

Also make sure that the number of bits for the maximum offset of a
remap is limited to 1 bit less than the number of actual physical bits,
so that the highest bit can be inverted by the conversion macros.
CVE-2018-3620
CVE-2018-3646

Signed-off-by: Juerg Haefliger <juergh@canonical.com>
---
 arch/x86/include/asm/pgtable_64.h | 22 ++++++++++++++++++----
 mm/fremap.c                       |  6 ++++++
 2 files changed, 24 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
index a9b88fe94bfa..7342c233e9ca 100644
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -154,10 +154,24 @@ static inline int pgd_large(pgd_t pgd) { return 0; }
 /* PUD - Level3 access */
 
 /* PMD - Level 2 access */
-#define pte_to_pgoff(pte) ((pte_val((pte)) & PHYSICAL_PAGE_MASK) >> PAGE_SHIFT)
-#define pgoff_to_pte(off) ((pte_t) { .pte = ((off) << PAGE_SHIFT) |	\
-					_PAGE_FILE })
-#define PTE_FILE_MAX_BITS __PHYSICAL_MASK_SHIFT
+#define pte_to_pgoff(pte) ((~pte_val((pte)) & PHYSICAL_PAGE_MASK) >> PAGE_SHIFT)
+#define pgoff_to_pte(off) ((pte_t) { .pte =				\
+		((~(off) & (PHYSICAL_PAGE_MASK >> PAGE_SHIFT))		\
+		 << PAGE_SHIFT) | _PAGE_FILE })
+/*
+ * Set the highest allowed nonlinear pgoff to 1 bit less than
+ * x86_phys_bits to guarantee the inversion of the highest bit
+ * in the pgoff_to_pte conversion. The lowest x86_phys_bits is
+ * 36, so x86 implementations with 36 bits will find themselves
+ * unable to keep using remap_file_pages() with file offsets
+ * above 128TiB (calculated as 1 << (36 - 1 + PAGE_SHIFT)). More
+ * recent CPUs will retain much higher max file offset limits.
+ */
+#ifdef PTE_FILE_MAX_BITS
+#error "Huh? PTE_FILE_MAX_BITS shouldn't be defined here"
+#endif
+#define L1TF_PTE_FILE_MAX_BITS min(__PHYSICAL_MASK_SHIFT,		\
+				   boot_cpu_data.x86_phys_bits - 1)
 
 /* PTE - Level 1 access. */
diff --git a/mm/fremap.c b/mm/fremap.c
index fd94a867cda0..9959bad4ec55 100644
--- a/mm/fremap.c
+++ b/mm/fremap.c
@@ -153,10 +153,16 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
 		return err;
 
 	/* Can we represent this offset inside this architecture's pte's? */
+#ifdef L1TF_PTE_FILE_MAX_BITS
+	if (L1TF_PTE_FILE_MAX_BITS < BITS_PER_LONG &&
+	    (pgoff + (size >> PAGE_SHIFT) >= (1UL << L1TF_PTE_FILE_MAX_BITS)))
+		return err;
+#else
 #if PTE_FILE_MAX_BITS < BITS_PER_LONG
 	if (pgoff + (size >> PAGE_SHIFT) >= (1UL << PTE_FILE_MAX_BITS))
 		return err;
 #endif
+#endif /* L1TF_PTE_FILE_MAX_BITS */
 
 	/* We need down_write() to change vma->vm_flags. */
 	down_read(&mm->mmap_sem);