From patchwork Mon Apr 30 17:24:59 2012
X-Patchwork-Submitter: Steve Conklin
X-Patchwork-Id: 155930
Message-ID: <4F9ECAEB.1050706@canonical.com>
Date: Mon, 30 Apr 2012 12:24:59 -0500
From: Steve Conklin
To: kernel-team@lists.ubuntu.com
Subject: [CVE-2012-2121] precise/oneiric/natty KVM: unmap pages from the iommu when slots are removed

This is a backport from upstream. There are changes due to a change in the
parameters passed to one of the functions, but it is a straightforward change.

From 2739eaeac4a9bf777810b2837e528dbdbdb3d5cc Mon Sep 17 00:00:00 2001
From: Alex Williamson
Date: Wed, 11 Apr 2012 09:51:49 -0600
Subject: [PATCH] KVM: unmap pages from the iommu when slots are removed

CVE-2012-2121

BugLink: http://bugs.launchpad.net/bugs/987569

We've been adding new mappings, but not destroying old mappings. This can
lead to a page leak as pages are pinned using get_user_pages, but only
unpinned with put_page if they still exist in the memslots list on vm
shutdown. A memslot that is destroyed while an iommu domain is enabled for
the guest will therefore result in an elevated page reference count that
is never cleared.

Additionally, without this fix, the iommu is only programmed with the
first translation for a gpa. This can result in peer-to-peer errors if a
mapping is destroyed and replaced by a new mapping at the same gpa as the
iommu will still be pointing to the original, pinned memory address.
Signed-off-by: Alex Williamson
Signed-off-by: Marcelo Tosatti

Backported with minor changes from upstream commit 32f6daad4651a748a58a3ab6da0611862175722f

Signed-off-by: Steve Conklin
---
 include/linux/kvm_host.h |    6 ++++++
 virt/kvm/iommu.c         |    8 ++++++--
 virt/kvm/kvm_main.c      |    5 +++--
 3 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 7e8a29f..6136821 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -562,6 +562,7 @@ void kvm_free_irq_source_id(struct kvm *kvm, int irq_source_id);
 
 #ifdef CONFIG_IOMMU_API
 int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot);
+void kvm_iommu_unmap_pages(struct kvm *kvm, struct kvm_memory_slot *slot);
 int kvm_iommu_map_guest(struct kvm *kvm);
 int kvm_iommu_unmap_guest(struct kvm *kvm);
 int kvm_assign_device(struct kvm *kvm,
@@ -575,6 +576,11 @@ static inline int kvm_iommu_map_pages(struct kvm *kvm,
 	return 0;
 }
 
+static inline void kvm_iommu_unmap_pages(struct kvm *kvm,
+					 struct kvm_memory_slot *slot)
+{
+}
+
 static inline int kvm_iommu_map_guest(struct kvm *kvm)
 {
 	return -ENODEV;
diff --git a/virt/kvm/iommu.c b/virt/kvm/iommu.c
index a195c07..981f55a 100644
--- a/virt/kvm/iommu.c
+++ b/virt/kvm/iommu.c
@@ -309,6 +309,11 @@ static void kvm_iommu_put_pages(struct kvm *kvm,
 	}
 }
 
+void kvm_iommu_unmap_pages(struct kvm *kvm, struct kvm_memory_slot *slot)
+{
+	kvm_iommu_put_pages(kvm, slot->base_gfn, slot->npages);
+}
+
 static int kvm_iommu_unmap_memslots(struct kvm *kvm)
 {
 	int i, idx;
@@ -318,8 +323,7 @@ static int kvm_iommu_unmap_memslots(struct kvm *kvm)
 	slots = kvm_memslots(kvm);
 
 	for (i = 0; i < slots->nmemslots; i++) {
-		kvm_iommu_put_pages(kvm, slots->memslots[i].base_gfn,
-				    slots->memslots[i].npages);
+		kvm_iommu_unmap_pages(kvm, &slots->memslots[i]);
 	}
 
 	srcu_read_unlock(&kvm->srcu, idx);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 5007b7d..63e01de 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -802,12 +802,13 @@ skip_lpage:
 	if (r)
 		goto out_free;
 
-	/* map the pages in iommu page table */
+	/* map/unmap the pages in iommu page table */
 	if (npages) {
 		r = kvm_iommu_map_pages(kvm, &new);
 		if (r)
 			goto out_free;
-	}
+	} else
+		kvm_iommu_unmap_pages(kvm, &old);
 
 	r = -ENOMEM;
 	slots = kzalloc(sizeof(struct kvm_memslots), GFP_KERNEL);
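
For readers unfamiliar with the pinning lifecycle the commit message describes,
below is a minimal, self-contained toy model in C of the reference-count leak.
It is not kernel code: model_page, model_slot, pin_page, unpin_page, map_slot
and the two destroy helpers are hypothetical stand-ins for get_user_pages/
put_page and the memslot/iommu paths this patch touches.

	#include <stdio.h>

	struct model_page { int refcount; };
	struct model_slot { struct model_page *page; int mapped; };

	/* stand-ins for get_user_pages() / put_page() */
	static void pin_page(struct model_page *p)   { p->refcount++; }
	static void unpin_page(struct model_page *p) { p->refcount--; }

	/* mapping a slot into the iommu pins its page(s) */
	static void map_slot(struct model_slot *s)
	{
		pin_page(s->page);
		s->mapped = 1;
	}

	/* pre-fix behaviour: the slot goes away, but nothing unpins the page */
	static void destroy_slot_buggy(struct model_slot *s)
	{
		s->mapped = 0;
	}

	/* post-fix behaviour: destroying a mapped slot also unmaps and unpins,
	 * as kvm_iommu_unmap_pages() now does for real memslots */
	static void destroy_slot_fixed(struct model_slot *s)
	{
		if (s->mapped)
			unpin_page(s->page);
		s->mapped = 0;
	}

	int main(void)
	{
		struct model_page page_a = { 0 }, page_b = { 0 };
		struct model_slot slot_a = { &page_a, 0 }, slot_b = { &page_b, 0 };

		map_slot(&slot_a);
		destroy_slot_buggy(&slot_a);
		printf("buggy destroy: refcount=%d (reference leaked)\n", page_a.refcount);

		map_slot(&slot_b);
		destroy_slot_fixed(&slot_b);
		printf("fixed destroy: refcount=%d\n", page_b.refcount);
		return 0;
	}

In this sketch the buggy destroy path leaves the refcount at 1, which is the
leak described above; the fixed path returns it to 0, mirroring what the new
kvm_iommu_unmap_pages() call in kvm_main.c achieves when a memslot is removed.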