From patchwork Mon Mar 25 13:55:52 2019
From: Juerg Haefliger
To: kernel-team@lists.ubuntu.com
Subject: [SRU][Cosmic][PATCH 1/4] kvm: svm: Ensure an IBPB on all affected CPUs when freeing a vmcb
Date: Mon, 25 Mar 2019 14:55:52 +0100
Message-Id: <20190325135555.23768-2-juergh@canonical.com>
X-Mailer: git-send-email 2.19.1
In-Reply-To: <20190325145014.52534dce@gollum>
References: <20190325145014.52534dce@gollum>

From: Jim Mattson

Previously, we only called indirect_branch_prediction_barrier on the
logical CPU that freed a vmcb. This function should be called on all
logical CPUs that last loaded the vmcb in question.

Fixes: 15d45071523d ("KVM/x86: Add IBPB support")
Reported-by: Neel Natu
Signed-off-by: Jim Mattson
Reviewed-by: Konrad Rzeszutek Wilk
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini

CVE-2017-5715

(cherry picked from commit fd65d3142f734bc4376053c8d75670041903134d)
Signed-off-by: Juerg Haefliger
Acked-by: Tyler Hicks
---
 arch/x86/kvm/svm.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 3e59a187fe30..75d5f180ffa5 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2188,21 +2188,31 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 	return ERR_PTR(err);
 }
 
+static void svm_clear_current_vmcb(struct vmcb *vmcb)
+{
+	int i;
+
+	for_each_online_cpu(i)
+		cmpxchg(&per_cpu(svm_data, i)->current_vmcb, vmcb, NULL);
+}
+
 static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
 
+	/*
+	 * The vmcb page can be recycled, causing a false negative in
+	 * svm_vcpu_load(). So, ensure that no logical CPU has this
+	 * vmcb page recorded as its current vmcb.
+	 */
+	svm_clear_current_vmcb(svm->vmcb);
+
 	__free_page(pfn_to_page(__sme_clr(svm->vmcb_pa) >> PAGE_SHIFT));
 	__free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);
 	__free_page(virt_to_page(svm->nested.hsave));
 	__free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
 	kvm_vcpu_uninit(vcpu);
 	kmem_cache_free(kvm_vcpu_cache, svm);
-	/*
-	 * The vmcb page can be recycled, causing a false negative in
-	 * svm_vcpu_load(). So do a full IBPB now.
-	 */
-	indirect_branch_prediction_barrier();
 }
 
 static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
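
For reviewers who want to see the race in isolation: svm_vcpu_load()
issues the IBPB only when the per-CPU current_vmcb pointer changes, so a
freed-and-recycled vmcb page can alias the stale pointer and silently
skip the barrier. The stand-alone user-space sketch below models that
flow; it is illustrative only, not kernel code. NR_CPUS_SIM,
cpu_current_vmcb, fake_ibpb, load_vmcb and clear_current_vmcb are
made-up names, and GCC's __sync_val_compare_and_swap stands in for the
kernel's cmpxchg().

	/*
	 * Sketch (not kernel code) of the false negative this patch closes.
	 * An array stands in for the per-CPU svm_data.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	#define NR_CPUS_SIM 4

	/* Models per_cpu(svm_data, i)->current_vmcb. */
	static void *cpu_current_vmcb[NR_CPUS_SIM];

	static void fake_ibpb(int cpu)
	{
		printf("cpu %d: IBPB issued\n", cpu);
	}

	/* Models svm_vcpu_load(): barrier only when the vmcb changes. */
	static void load_vmcb(int cpu, void *vmcb)
	{
		if (cpu_current_vmcb[cpu] != vmcb) {
			cpu_current_vmcb[cpu] = vmcb;
			fake_ibpb(cpu);
		}
	}

	/* Models svm_clear_current_vmcb(): drop stale pointers everywhere. */
	static void clear_current_vmcb(void *vmcb)
	{
		for (int i = 0; i < NR_CPUS_SIM; i++)
			__sync_val_compare_and_swap(&cpu_current_vmcb[i],
						    vmcb, NULL);
	}

	int main(void)
	{
		void *vmcb = malloc(4096);

		load_vmcb(0, vmcb);	/* first load: barrier fires */

		/*
		 * Without the clear below, a recycled allocation can return
		 * the same address, the comparison in load_vmcb() falsely
		 * matches, and no barrier is issued -- the bug being fixed.
		 */
		clear_current_vmcb(vmcb);
		free(vmcb);

		void *recycled = malloc(4096);	/* may alias the old page */
		load_vmcb(0, recycled);		/* barrier fires again */

		free(recycled);
		return 0;
	}

Built with gcc and run, the sketch prints the IBPB line twice: once for
the initial load and once for the recycled page, which is the behavior
the patch guarantees regardless of which logical CPU last loaded the
freed vmcb.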