From patchwork Fri Jun 29 16:32:03 2012
X-Patchwork-Submitter: Herton Ronaldo Krzesinski
X-Patchwork-Id: 168178
From: Herton Ronaldo Krzesinski
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 3/4] KVM: Don't destroy vcpu in case vcpu_setup fails
Date: Fri, 29 Jun 2012 13:32:03 -0300
Message-Id: <1340987524-22261-4-git-send-email-herton.krzesinski@canonical.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1340987524-22261-1-git-send-email-herton.krzesinski@canonical.com>
References: <1340987524-22261-1-git-send-email-herton.krzesinski@canonical.com>
List-Id: Kernel team discussions

From: Glauber Costa

CVE-2012-1601

BugLink: http://bugs.launchpad.net/bugs/971685

One of kvm_arch_vcpu_setup()'s responsibilities is MMU initialization.
However, if kvm_arch_vcpu_reset() fails, we bail out before ever getting
the chance to initialize the MMU, and kvm_arch_vcpu_destroy() will then
attempt to destroy an MMU that was never set up, triggering a bug.
Keeping track of whether or not the MMU is initialized would
unnecessarily complicate things. Instead, simply return on failure,
making sure any needed uninitialization is done before we return.
Signed-off-by: Glauber Costa
Signed-off-by: Avi Kivity
(cherry picked from commit 7d8fece678c1abc2ca3e1ceda2277c3538a9161c upstream)
Signed-off-by: Herton Ronaldo Krzesinski
---
 .../binary-custom.d/openvz/src/virt/kvm/kvm_main.c |    5 ++---
 debian/binary-custom.d/xen/src/virt/kvm/kvm_main.c |    5 ++---
 virt/kvm/kvm_main.c                                |    5 ++---
 3 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/debian/binary-custom.d/openvz/src/virt/kvm/kvm_main.c b/debian/binary-custom.d/openvz/src/virt/kvm/kvm_main.c
index a1794b6..07adef4 100644
--- a/debian/binary-custom.d/openvz/src/virt/kvm/kvm_main.c
+++ b/debian/binary-custom.d/openvz/src/virt/kvm/kvm_main.c
@@ -799,12 +799,11 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, int n)
 
 	r = kvm_arch_vcpu_setup(vcpu);
 	if (r)
-		goto vcpu_destroy;
+		return r;
 
 	mutex_lock(&kvm->lock);
 	if (kvm->vcpus[n]) {
 		r = -EEXIST;
-		mutex_unlock(&kvm->lock);
 		goto vcpu_destroy;
 	}
 	kvm->vcpus[n] = vcpu;
@@ -819,8 +818,8 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, int n)
 unlink:
 	mutex_lock(&kvm->lock);
 	kvm->vcpus[n] = NULL;
-	mutex_unlock(&kvm->lock);
 vcpu_destroy:
+	mutex_unlock(&kvm->lock);
 	kvm_arch_vcpu_destroy(vcpu);
 	return r;
 }
diff --git a/debian/binary-custom.d/xen/src/virt/kvm/kvm_main.c b/debian/binary-custom.d/xen/src/virt/kvm/kvm_main.c
index a1794b6..07adef4 100644
--- a/debian/binary-custom.d/xen/src/virt/kvm/kvm_main.c
+++ b/debian/binary-custom.d/xen/src/virt/kvm/kvm_main.c
@@ -799,12 +799,11 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, int n)
 
 	r = kvm_arch_vcpu_setup(vcpu);
 	if (r)
-		goto vcpu_destroy;
+		return r;
 
 	mutex_lock(&kvm->lock);
 	if (kvm->vcpus[n]) {
 		r = -EEXIST;
-		mutex_unlock(&kvm->lock);
 		goto vcpu_destroy;
 	}
 	kvm->vcpus[n] = vcpu;
@@ -819,8 +818,8 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, int n)
 unlink:
	mutex_lock(&kvm->lock);
 	kvm->vcpus[n] = NULL;
-	mutex_unlock(&kvm->lock);
 vcpu_destroy:
+	mutex_unlock(&kvm->lock);
 	kvm_arch_vcpu_destroy(vcpu);
 	return r;
 }
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a1794b6..07adef4 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -799,12 +799,11 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, int n)
 
 	r = kvm_arch_vcpu_setup(vcpu);
 	if (r)
-		goto vcpu_destroy;
+		return r;
 
 	mutex_lock(&kvm->lock);
 	if (kvm->vcpus[n]) {
 		r = -EEXIST;
-		mutex_unlock(&kvm->lock);
 		goto vcpu_destroy;
 	}
 	kvm->vcpus[n] = vcpu;
@@ -819,8 +818,8 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, int n)
 unlink:
 	mutex_lock(&kvm->lock);
 	kvm->vcpus[n] = NULL;
-	mutex_unlock(&kvm->lock);
 vcpu_destroy:
+	mutex_unlock(&kvm->lock);
 	kvm_arch_vcpu_destroy(vcpu);
 	return r;
 }
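
For readers who want to see the resulting error-handling shape in
isolation, here is a minimal, self-contained userspace sketch of the
pattern this patch leaves behind in kvm_vm_ioctl_create_vcpu(): a setup
failure cleans up locally and returns instead of jumping to the destroy
path, and the single unlock is folded into the vcpu_destroy label so
every goto reaches it with the lock still held. The names used below
(create_vcpu, destroy_vcpu, MAX_VCPUS, the pthread mutex) are
illustrative stand-ins, not the kernel's APIs, and the fd/debugfs steps
of the real function are elided.

/*
 * Userspace model of the post-patch error handling (illustration only,
 * not the kernel code).
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_VCPUS 4

struct vcpu {
	int id;
};

static struct vcpu *vcpus[MAX_VCPUS];
static pthread_mutex_t kvm_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for kvm_arch_vcpu_setup(): fails for a negative id. */
static int setup_vcpu(struct vcpu *v)
{
	return v->id >= 0 ? 0 : -1;
}

/* Stand-in for kvm_arch_vcpu_destroy(): tears down a fully set-up vcpu. */
static void destroy_vcpu(struct vcpu *v)
{
	free(v);
}

static int create_vcpu(int n)
{
	struct vcpu *v;
	int r;

	if (n < 0 || n >= MAX_VCPUS)
		return -1;

	v = calloc(1, sizeof(*v));
	if (!v)
		return -1;
	v->id = n;

	r = setup_vcpu(v);
	if (r) {
		/*
		 * Setup failed before the vcpu was fully initialized:
		 * do the needed local cleanup and return, rather than
		 * funnelling through the "destroy a fully set-up vcpu"
		 * path below.
		 */
		free(v);
		return r;
	}

	pthread_mutex_lock(&kvm_lock);
	if (vcpus[n]) {
		r = -1;
		goto vcpu_destroy;	/* unlock now happens at the label */
	}
	vcpus[n] = v;
	pthread_mutex_unlock(&kvm_lock);
	return 0;

vcpu_destroy:
	pthread_mutex_unlock(&kvm_lock);
	destroy_vcpu(v);
	return r;
}

int main(void)
{
	printf("first create_vcpu(1):  %d\n", create_vcpu(1));	/* 0 */
	printf("second create_vcpu(1): %d\n", create_vcpu(1));	/* -1: slot taken */
	return 0;
}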