From patchwork Tue Sep 24 07:48:34 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Juerg Haefliger
X-Patchwork-Id: 1166401
From: Juerg Haefliger
To: kernel-team@lists.ubuntu.com
Subject: [SRU][Xenial][CVE-2019-14821][PATCH] KVM: coalesced_mmio: add bounds checking
Date: Tue, 24 Sep 2019 09:48:34 +0200
Message-Id: <20190924074834.14995-1-juergh@canonical.com>
X-Mailer: git-send-email 2.20.1
List-Id: Kernel team discussions
Sender: "kernel-team"

From: Matt Delco

commit b60fe990c6b07ef6d4df67bc0530c7c90a62623a upstream.

The first/last indexes are typically shared with a user app.
The app can change the 'last' index that the kernel uses
to store the next result.
This change sanity checks the index
before using it for writing to a potentially arbitrary address.

This fixes CVE-2019-14821.

Cc: stable@vger.kernel.org
Fixes: 5f94c1741bdc ("KVM: Add coalesced MMIO support (common part)")
Signed-off-by: Matt Delco
Signed-off-by: Jim Mattson
Reported-by: syzbot+983c866c3dd6efa3662a@syzkaller.appspotmail.com
[Use READ_ONCE. - Paolo]
Signed-off-by: Paolo Bonzini
Signed-off-by: Greg Kroah-Hartman

CVE-2019-14821

(cherry picked from commit ae41539657ce0a4e9f4588e89e5e19a8b8f11928 linux-4.4.y)
Signed-off-by: Juerg Haefliger
Acked-by: Sultan Alsawaf
Acked-by: Tyler Hicks
---
 virt/kvm/coalesced_mmio.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
index 571c1ce37d15..5c1efb869df2 100644
--- a/virt/kvm/coalesced_mmio.c
+++ b/virt/kvm/coalesced_mmio.c
@@ -39,7 +39,7 @@ static int coalesced_mmio_in_range(struct kvm_coalesced_mmio_dev *dev,
 	return 1;
 }
 
-static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev)
+static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev, u32 last)
 {
 	struct kvm_coalesced_mmio_ring *ring;
 	unsigned avail;
@@ -51,7 +51,7 @@ static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev)
 	 * there is always one unused entry in the buffer
 	 */
 	ring = dev->kvm->coalesced_mmio_ring;
-	avail = (ring->first - ring->last - 1) % KVM_COALESCED_MMIO_MAX;
+	avail = (ring->first - last - 1) % KVM_COALESCED_MMIO_MAX;
 	if (avail == 0) {
 		/* full */
 		return 0;
@@ -66,24 +66,27 @@ static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
 {
 	struct kvm_coalesced_mmio_dev *dev = to_mmio(this);
 	struct kvm_coalesced_mmio_ring *ring = dev->kvm->coalesced_mmio_ring;
+	__u32 insert;
 
 	if (!coalesced_mmio_in_range(dev, addr, len))
 		return -EOPNOTSUPP;
 
 	spin_lock(&dev->kvm->ring_lock);
 
-	if (!coalesced_mmio_has_room(dev)) {
+	insert = READ_ONCE(ring->last);
+	if (!coalesced_mmio_has_room(dev, insert) ||
+	    insert >= KVM_COALESCED_MMIO_MAX) {
 		spin_unlock(&dev->kvm->ring_lock);
 		return -EOPNOTSUPP;
 	}
 
 	/* copy data in first free entry of the ring */
 
-	ring->coalesced_mmio[ring->last].phys_addr = addr;
-	ring->coalesced_mmio[ring->last].len = len;
-	memcpy(ring->coalesced_mmio[ring->last].data, val, len);
+	ring->coalesced_mmio[insert].phys_addr = addr;
+	ring->coalesced_mmio[insert].len = len;
+	memcpy(ring->coalesced_mmio[insert].data, val, len);
 	smp_wmb();
-	ring->last = (ring->last + 1) % KVM_COALESCED_MMIO_MAX;
+	ring->last = (insert + 1) % KVM_COALESCED_MMIO_MAX;
 	spin_unlock(&dev->kvm->ring_lock);
 	return 0;
 }
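For reviewers less familiar with the coalesced MMIO ring: the hazard the patch closes is that 'last' lives in a page shared with userspace, so a hostile app can set it out of range between the kernel's room check and the array write. The sketch below is purely illustrative user-space C (RING_MAX, struct ring, and ring_push are invented names, not the KVM API); it mimics the patched pattern of snapshotting the shared index once, bounds-checking the snapshot, and deriving the new index from the snapshot rather than re-reading shared memory.

```c
#include <stdint.h>
#include <string.h>

#define RING_MAX 8  /* stands in for KVM_COALESCED_MMIO_MAX */

struct entry { uint64_t phys_addr; uint32_t len; uint8_t data[8]; };

struct ring {
	/* Both indexes are conceptually shared with an untrusted app. */
	volatile uint32_t first;  /* consumer index */
	volatile uint32_t last;   /* producer index */
	struct entry slots[RING_MAX];
};

/* Mimics the patched coalesced_mmio_write(): read 'last' exactly once
 * (the volatile read stands in for READ_ONCE), validate the snapshot,
 * then use only the snapshot for indexing and the final advance. */
int ring_push(struct ring *r, uint64_t addr, const void *val, uint32_t len)
{
	uint32_t insert = r->last;  /* single snapshot of the shared index */
	/* Unsigned wrap-around makes this work for any first/insert pair;
	 * one slot is always kept unused so avail == 0 means "full". */
	uint32_t avail = (r->first - insert - 1) % RING_MAX;

	if (avail == 0 || insert >= RING_MAX)
		return -1;  /* full, or app-corrupted out-of-bounds index */
	if (len > sizeof(r->slots[0].data))
		return -1;

	r->slots[insert].phys_addr = addr;
	r->slots[insert].len = len;
	memcpy(r->slots[insert].data, val, len);
	r->last = (insert + 1) % RING_MAX;  /* derived from the snapshot */
	return 0;
}
```

Without the `insert >= RING_MAX` check, an app that writes a huge value into `last` would steer the three stores at an attacker-chosen offset past `slots[]`, which is exactly the out-of-bounds write CVE-2019-14821 describes.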