From patchwork Mon Nov 29 16:15:39 2021
X-Patchwork-Submitter: Thadeu Lima de Souza Cascardo
X-Patchwork-Id: 1561272
From: Thadeu Lima de Souza Cascardo
To: kernel-team@lists.ubuntu.com
Subject: [SRU OEM-5.10/OEM-5.14 1/1] hugetlbfs: flush TLBs correctly after huge_pmd_unshare
Date: Mon, 29 Nov 2021 13:15:39 -0300
Message-Id: <20211129161539.1308219-2-cascardo@canonical.com>
In-Reply-To: <20211129161539.1308219-1-cascardo@canonical.com>
References: <20211129161539.1308219-1-cascardo@canonical.com>

From: Nadav Amit

When a call from __unmap_hugepage_range() to huge_pmd_unshare() succeeds,
a TLB flush is missing. This TLB flush must be performed before releasing
the i_mmap_rwsem, in order to prevent an unshared PMDs page from being
released and reused before the TLB flush takes place.
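
To see the ordering constraint concretely, here is a minimal user-space C
model (illustrative only, not kernel code; struct pmd_page, tlb_entry and
the two ordering functions are invented stand-ins for the shared PMDs
page, a cached TLB entry, and the flush):

    #include <stdlib.h>

    struct pmd_page { int data; };      /* stands in for the shared PMDs page */
    static struct pmd_page *tlb_entry;  /* stands in for a cached translation */

    static void tlb_flush(void) { tlb_entry = NULL; }  /* drop cached entry */

    static void buggy_order(struct pmd_page *page)
    {
        free(page);     /* page released first: it can be reused while */
                        /* tlb_entry still refers to it                */
        tlb_flush();    /* the flush arrives too late                  */
    }

    static void fixed_order(struct pmd_page *page)
    {
        tlb_flush();    /* flush while the page is still held...      */
        free(page);     /* ...then release it: no window for reuse    */
    }

    int main(void)
    {
        struct pmd_page *p = malloc(sizeof(*p));
        tlb_entry = p;      /* a translation to the page is cached  */
        fixed_order(p);     /* the ordering this patch enforces     */
        (void)buggy_order;  /* kept only to contrast the two orders */
        return 0;
    }

In buggy_order() there is a window in which the freed page can be handed
out to another user while the stale translation still points at it; the
patch below closes exactly that window for unshared PMDs pages.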
Arguably, a comprehensive solution would use the mmu_gather interface to
batch the TLB flushes and the PMDs page release, however it is not an easy
solution: (1) try_to_unmap_one() and try_to_migrate_one() also call
huge_pmd_unshare() and they cannot use the mmu_gather interface; and
(2) deferring the release of the page reference for the PMDs page until
after i_mmap_rwsem is dropped can confuse huge_pmd_unshare() into thinking
PMDs are shared when they are not.

Fix __unmap_hugepage_range() by adding the missing TLB flush, and forcing
a flush when unshare is successful.

Fixes: 24669e58477e ("hugetlb: use mmu_gather instead of a temporary linked list for accumulating pages") # 3.6
Signed-off-by: Nadav Amit
Reviewed-by: Mike Kravetz
Cc: Aneesh Kumar K.V
Cc: KAMEZAWA Hiroyuki
Cc: Andrew Morton
Signed-off-by: Linus Torvalds
(cherry picked from commit a4a118f2eead1d6c49e00765de89878288d4b890)
CVE-2021-4002
Signed-off-by: Thadeu Lima de Souza Cascardo
Acked-by: Krzysztof Kozlowski
---
 mm/hugetlb.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 32a2ea7c487a..8c899481c8b7 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3915,6 +3915,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	struct hstate *h = hstate_vma(vma);
 	unsigned long sz = huge_page_size(h);
 	struct mmu_notifier_range range;
+	bool force_flush = false;
 
 	WARN_ON(!is_vm_hugetlb_page(vma));
 	BUG_ON(start & ~huge_page_mask(h));
@@ -3943,10 +3944,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		ptl = huge_pte_lock(h, mm, ptep);
 		if (huge_pmd_unshare(mm, vma, &address, ptep)) {
 			spin_unlock(ptl);
-			/*
-			 * We just unmapped a page of PMDs by clearing a PUD.
-			 * The caller's TLB flush range should cover this area.
-			 */
+			tlb_flush_pmd_range(tlb, address & PUD_MASK, PUD_SIZE);
+			force_flush = true;
 			continue;
 		}
 
@@ -4003,6 +4002,22 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	}
 	mmu_notifier_invalidate_range_end(&range);
 	tlb_end_vma(tlb, vma);
+
+	/*
+	 * If we unshared PMDs, the TLB flush was not recorded in mmu_gather. We
+	 * could defer the flush until now, since by holding i_mmap_rwsem we
+	 * guaranteed that the last reference would not be dropped. But we must
+	 * do the flushing before we return, as otherwise i_mmap_rwsem will be
+	 * dropped and the last reference to the shared PMDs page might be
+	 * dropped as well.
+	 *
+	 * In theory we could defer the freeing of the PMD pages as well, but
+	 * huge_pmd_unshare() relies on the exact page_count for the PMD page to
+	 * detect sharing, so we cannot defer the release of the page either.
+	 * Instead, do flush now.
+	 */
+	if (force_flush)
+		tlb_flush_mmu_tlbonly(tlb);
 }
 
 void __unmap_hugepage_range_final(struct mmu_gather *tlb,
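
For readers unfamiliar with the batching pattern the fix builds on, the
following user-space sketch models it under invented names (struct gather,
gather_note_range(), gather_flush()); it is not the kernel's mmu_gather
API, only an illustration of how recording a wider flush range plus a
force_flush flag ensures the flush happens before the structure protecting
the page's lifetime is released:

    #include <stdbool.h>
    #include <stdio.h>

    struct gather {                 /* models mmu_gather's flush bookkeeping */
        unsigned long start, end;
        bool have_range;
    };

    /* widen the pending flush range; models tlb_flush_pmd_range() */
    static void gather_note_range(struct gather *g, unsigned long start,
                                  unsigned long size)
    {
        if (!g->have_range || start < g->start)
            g->start = start;
        if (!g->have_range || start + size > g->end)
            g->end = start + size;
        g->have_range = true;
    }

    /* flush the pending range now, without freeing any pages;
     * models tlb_flush_mmu_tlbonly() */
    static void gather_flush(struct gather *g)
    {
        if (!g->have_range)
            return;
        printf("flush [%#lx, %#lx)\n", g->start, g->end);
        g->have_range = false;
    }

    int main(void)
    {
        struct gather g = { 0 };
        bool force_flush = false;

        gather_note_range(&g, 0x200000UL, 0x1000UL);        /* ordinary page */
        gather_note_range(&g, 0x40000000UL, 0x40000000UL);  /* unshared PUD range */
        force_flush = true;   /* an unshare succeeded */

        if (force_flush)      /* flush before the lock would be dropped */
            gather_flush(&g);
        return 0;
    }

This mirrors the two halves of the patch: tlb_flush_pmd_range() records the
PUD-sized range covered by the unshared PMDs page, and tlb_flush_mmu_tlbonly()
flushes the TLB without releasing the pages batched in the mmu_gather, so the
flush is guaranteed to precede the drop of i_mmap_rwsem.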