From patchwork Wed Jul 11 17:39:35 2012
X-Patchwork-Submitter: Herton Ronaldo Krzesinski
X-Patchwork-Id: 170495
Date: Wed, 11 Jul 2012 14:39:35 -0300
From: Herton Ronaldo Krzesinski
To: Tim Gardner
Cc: kernel-team@lists.ubuntu.com
Subject: Re: [PATCH] KVM: fix backport of 3e51570 on hardy
Message-ID: <20120711173934.GB3162@herton-Z68MA-D2H-B3>
In-Reply-To: <4FFDAF7B.8090305@canonical.com>
References: <1342022973-6783-1-git-send-email-herton.krzesinski@canonical.com>
 <4FFDAF7B.8090305@canonical.com>

> Isn't this needed in the other custom binary files?
>
> ./virt/kvm/kvm_main.c
> ./debian/binary-custom.d/xen/src/virt/kvm/kvm_main.c
> ./debian/binary-custom.d/openvz/src/virt/kvm/kvm_main.c

Updated patch (a small standalone sketch of the failure-path pattern is
appended after the diff):

From: Herton Ronaldo Krzesinski
Subject: [PATCH] KVM: fix backport of 3e51570 on hardy

CVE-2012-1601

BugLink: http://bugs.launchpad.net/bugs/971685

Sasha Levin reported that our backport of 3e51570 ("KVM: Ensure all vcpus
are consistent with in-kernel irqchip settings") has a bug, and suggested
possible fixes. We increment kvm->online_vcpus but do not decrement it when
create_vcpu_fd fails, so if that call fails and the VM is not destroyed
afterwards, the counter is left out of sync. The upstream change does not
have this problem because there the increment is done after create_vcpu_fd
is called. The solution chosen here is to simply decrement the counter on
the failure path.
Reported-by: Sasha Levin
Signed-off-by: Herton Ronaldo Krzesinski
---
 .../binary-custom.d/openvz/src/virt/kvm/kvm_main.c | 1 +
 debian/binary-custom.d/xen/src/virt/kvm/kvm_main.c | 1 +
 virt/kvm/kvm_main.c                                | 1 +
 3 files changed, 3 insertions(+)

diff --git a/debian/binary-custom.d/openvz/src/virt/kvm/kvm_main.c b/debian/binary-custom.d/openvz/src/virt/kvm/kvm_main.c
index d9a8ae0..61c18ba 100644
--- a/debian/binary-custom.d/openvz/src/virt/kvm/kvm_main.c
+++ b/debian/binary-custom.d/openvz/src/virt/kvm/kvm_main.c
@@ -823,6 +823,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, int n)
 unlink:
 	mutex_lock(&kvm->lock);
 	kvm->vcpus[n] = NULL;
+	atomic_dec(&kvm->online_vcpus);
 vcpu_destroy:
 	mutex_unlock(&kvm->lock);
 	kvm_arch_vcpu_destroy(vcpu);
diff --git a/debian/binary-custom.d/xen/src/virt/kvm/kvm_main.c b/debian/binary-custom.d/xen/src/virt/kvm/kvm_main.c
index d9a8ae0..61c18ba 100644
--- a/debian/binary-custom.d/xen/src/virt/kvm/kvm_main.c
+++ b/debian/binary-custom.d/xen/src/virt/kvm/kvm_main.c
@@ -823,6 +823,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, int n)
 unlink:
 	mutex_lock(&kvm->lock);
 	kvm->vcpus[n] = NULL;
+	atomic_dec(&kvm->online_vcpus);
 vcpu_destroy:
 	mutex_unlock(&kvm->lock);
 	kvm_arch_vcpu_destroy(vcpu);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d9a8ae0..61c18ba 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -823,6 +823,7 @@ static int kvm_vm_ioctl_create_vcpu(struct kvm *kvm, int n)
 unlink:
 	mutex_lock(&kvm->lock);
 	kvm->vcpus[n] = NULL;
+	atomic_dec(&kvm->online_vcpus);
 vcpu_destroy:
 	mutex_unlock(&kvm->lock);
 	kvm_arch_vcpu_destroy(vcpu);
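
For completeness, here is a minimal standalone C sketch of the pattern the
fix restores: the counter is bumped before a step that can still fail, so
the failure path has to undo the increment. This is an illustration only,
not the hardy kvm_main.c code; fake_create_vcpu_fd() and the plain int
counter are made up for the example (the kernel uses atomic_inc()/
atomic_dec() on kvm->online_vcpus).

#include <stdio.h>

static int online_vcpus;	/* stands in for kvm->online_vcpus */

/* Pretend fd creation that can fail, mimicking the create_vcpu_fd step. */
static int fake_create_vcpu_fd(int should_fail)
{
	return should_fail ? -1 : 3;
}

static int create_vcpu(int should_fail)
{
	online_vcpus++;			/* incremented early ... */

	int fd = fake_create_vcpu_fd(should_fail);
	if (fd < 0) {
		online_vcpus--;		/* ... so undo it on failure */
		return -1;
	}
	return fd;
}

int main(void)
{
	create_vcpu(0);			/* success: counter becomes 1 */
	create_vcpu(1);			/* failure: counter must stay at 1 */
	printf("online_vcpus = %d (expected 1)\n", online_vcpus);
	return 0;
}

Without the decrement on the failure path, the second call would leave the
counter at 2 even though only one vcpu exists, which is the inconsistency
the patch addresses.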