From patchwork Tue Jun  4 18:18:40 2013
X-Patchwork-Submitter: Corey Bryant
X-Patchwork-Id: 248795
From: Corey Bryant
To: qemu-devel@nongnu.org
Cc: kwolf@redhat.com, aliguori@us.ibm.com, stefanb@linux.vnet.ibm.com,
    Corey Bryant, mdroth@linux.vnet.ibm.com, jschopp@linux.vnet.ibm.com,
    stefanha@redhat.com
Date: Tue, 4 Jun 2013 14:18:40 -0400
Message-Id: <1370369921-14925-2-git-send-email-coreyb@linux.vnet.ibm.com>
In-Reply-To: <1370369921-14925-1-git-send-email-coreyb@linux.vnet.ibm.com>
References: <1370369921-14925-1-git-send-email-coreyb@linux.vnet.ibm.com>
Subject: [Qemu-devel] [PATCH 1/2] nvram: Add TPM NVRAM implementation

Provides a TPM NVRAM implementation that enables
storing of TPM NVRAM data in a persistent image file. The block
driver is used to read/write the drive image. This will enable, for
example, an encrypted QCOW2 image to be used to store sensitive keys.

This patch provides APIs that a TPM backend can use to read and write
data.

Signed-off-by: Corey Bryant
---
 hw/tpm/Makefile.objs |    1 +
 hw/tpm/tpm_nvram.c   |  399 ++++++++++++++++++++++++++++++++++++++++++++++++++
 hw/tpm/tpm_nvram.h   |   25 +++
 3 files changed, 425 insertions(+), 0 deletions(-)
 create mode 100644 hw/tpm/tpm_nvram.c
 create mode 100644 hw/tpm/tpm_nvram.h

diff --git a/hw/tpm/Makefile.objs b/hw/tpm/Makefile.objs
index 99f5983..49faef4 100644
--- a/hw/tpm/Makefile.objs
+++ b/hw/tpm/Makefile.objs
@@ -1,2 +1,3 @@
 common-obj-$(CONFIG_TPM_TIS) += tpm_tis.o
+common-obj-$(CONFIG_TPM_TIS) += tpm_nvram.o
 common-obj-$(CONFIG_TPM_PASSTHROUGH) += tpm_passthrough.o
diff --git a/hw/tpm/tpm_nvram.c b/hw/tpm/tpm_nvram.c
new file mode 100644
index 0000000..95ff396
--- /dev/null
+++ b/hw/tpm/tpm_nvram.c
@@ -0,0 +1,399 @@
+/*
+ * TPM NVRAM - enables storage of persistent NVRAM data on an image file
+ *
+ * Copyright (C) 2013 IBM Corporation
+ *
+ * Authors:
+ *  Stefan Berger
+ *  Corey Bryant
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "tpm_nvram.h"
+#include "block/block_int.h"
+#include "qemu/thread.h"
+#include "sysemu/sysemu.h"
+
+/* #define TPM_NVRAM_DEBUG */
+
+#ifdef TPM_NVRAM_DEBUG
+#define DPRINTF(fmt, ...) \
+    do { fprintf(stderr, fmt, ## __VA_ARGS__); } while (0)
+#else
+#define DPRINTF(fmt, ...) \
+    do { } while (0)
+#endif
+
+/* Round a value up to the next SIZE */
+#define ROUNDUP(VAL, SIZE) \
+    (((VAL)+(SIZE)-1) & ~((SIZE)-1))
+
+/* Get the number of sectors required to contain SIZE bytes */
+#define NUM_SECTORS(SIZE) \
+    (ROUNDUP(SIZE, BDRV_SECTOR_SIZE) / BDRV_SECTOR_SIZE)
+
+/* Read/write request data */
+typedef struct TPMNvramRWRequest {
+    BlockDriverState *bdrv;
+    bool is_write;
+    uint64_t sector_num;
+    int num_sectors;
+    uint8_t **blob_r;
+    uint8_t *blob_w;
+    uint32_t size;
+    QEMUIOVector *qiov;
+    bool done;
+    int rc;
+
+    QemuMutex completion_mutex;
+    QemuCond completion;
+
+    QSIMPLEQ_ENTRY(TPMNvramRWRequest) list;
+} TPMNvramRWRequest;
+
+/* Mutex protected queue of read/write requests */
+static QemuMutex tpm_nvram_rwrequests_mutex;
+static QSIMPLEQ_HEAD(, TPMNvramRWRequest) tpm_nvram_rwrequests =
+    QSIMPLEQ_HEAD_INITIALIZER(tpm_nvram_rwrequests);
+
+static QEMUBH *tpm_nvram_bh;
+
+/*
+ * Increase the drive size if it's too small to store the blob
+ */
+static int tpm_nvram_adjust_size(BlockDriverState *bdrv, uint64_t sector_num,
+                                 int num_sectors)
+{
+    int rc = 0;
+    int64_t drive_size, required_size;
+
+    drive_size = bdrv_getlength(bdrv);
+    if (drive_size < 0) {
+        DPRINTF("%s: Unable to determine TPM NVRAM drive size\n", __func__);
+        rc = drive_size;
+        goto err_exit;
+    }
+
+    required_size = (sector_num + num_sectors) * BDRV_SECTOR_SIZE;
+
+    if (drive_size < required_size) {
+        rc = bdrv_truncate(bdrv, required_size);
+        if (rc < 0) {
+            DPRINTF("%s: TPM NVRAM drive too small\n", __func__);
+        }
+    }
+
+err_exit:
+    return rc;
+}
+
+/*
+ * Coroutine that reads a blob from the drive asynchronously
+ */
+static void coroutine_fn tpm_nvram_co_read(void *opaque)
+{
+    TPMNvramRWRequest *rwr = opaque;
+
+    rwr->rc = bdrv_co_readv(rwr->bdrv,
+                            rwr->sector_num,
+                            rwr->num_sectors,
+                            rwr->qiov);
+    rwr->done = true;
+}
+
+/*
+ * Coroutine that writes a blob to the drive asynchronously
+ */
+static void coroutine_fn tpm_nvram_co_write(void *opaque)
+{
+    TPMNvramRWRequest *rwr = opaque;
+
+    rwr->rc = bdrv_co_writev(rwr->bdrv,
+                             rwr->sector_num,
+                             rwr->num_sectors,
+                             rwr->qiov);
+    rwr->done = true;
+}
+
+/*
+ * Prepare for and enter a coroutine to read a blob from the drive
+ */
+static void tpm_nvram_do_co_read(TPMNvramRWRequest *rwr)
+{
+    Coroutine *co;
+    size_t buf_len = rwr->num_sectors * BDRV_SECTOR_SIZE;
+    uint8_t *buf = g_malloc(buf_len);
+
+    memset(buf, 0x0, buf_len);
+
+    struct iovec iov = {
+        .iov_base = (void *)buf,
+        .iov_len = rwr->size,
+    };
+
+    qemu_iovec_init_external(rwr->qiov, &iov, 1);
+
+    co = qemu_coroutine_create(tpm_nvram_co_read);
+    qemu_coroutine_enter(co, rwr);
+
+    while (!rwr->done) {
+        qemu_aio_wait();
+    }
+
+    if (rwr->rc == 0) {
+        rwr->rc = rwr->num_sectors;
+        *rwr->blob_r = g_malloc(rwr->size);
+        memcpy(*rwr->blob_r, buf, rwr->size);
+    } else {
+        *rwr->blob_r = NULL;
+    }
+
+    g_free(buf);
+}
+
+/*
+ * Prepare for and enter a coroutine to write a blob to the drive
+ */
+static void tpm_nvram_do_co_write(TPMNvramRWRequest *rwr)
+{
+    Coroutine *co;
+    size_t buf_len = rwr->num_sectors * BDRV_SECTOR_SIZE;
+    uint8_t *buf = g_malloc(buf_len);
+
+    memset(buf, 0x0, buf_len);
+    memcpy(buf, rwr->blob_w, rwr->size);
+
+    struct iovec iov = {
+        .iov_base = (void *)buf,
+        .iov_len = rwr->size,
+    };
+
+    qemu_iovec_init_external(rwr->qiov, &iov, 1);
+
+    rwr->rc = tpm_nvram_adjust_size(rwr->bdrv, rwr->sector_num,
+                                    rwr->num_sectors);
+    if (rwr->rc < 0) {
+        goto err_exit;
+    }
+
+    co = qemu_coroutine_create(tpm_nvram_co_write);
+    qemu_coroutine_enter(co, rwr);
+
+    while (!rwr->done) {
+        qemu_aio_wait();
+    }
+
+    if (rwr->rc == 0) {
+        rwr->rc = rwr->num_sectors;
+    }
+
+err_exit:
+    g_free(buf);
+}
+
+/*
+ * Initialization for read requests
+ */
+static TPMNvramRWRequest *tpm_nvram_rwrequest_init_read(BlockDriverState *bdrv,
+                                                        int64_t sector_num,
+                                                        uint8_t **blob,
+                                                        uint32_t size)
+{
+    TPMNvramRWRequest *rwr;
+
+    rwr = g_new0(TPMNvramRWRequest, 1);
+    rwr->bdrv = bdrv;
+    rwr->is_write = false;
+    rwr->sector_num = sector_num;
+    rwr->num_sectors = NUM_SECTORS(size);
+    rwr->blob_r = blob;
+    rwr->size = size;
+    rwr->qiov = g_new0(QEMUIOVector, 1);
+    rwr->done = false;
+
+    return rwr;
+}
+
+/*
+ * Initialization for write requests
+ */
+static TPMNvramRWRequest *tpm_nvram_rwrequest_init_write(BlockDriverState *bdrv,
+                                                         int64_t sector_num,
+                                                         uint8_t *blob,
+                                                         uint32_t size)
+{
+    TPMNvramRWRequest *rwr;
+
+    rwr = g_new0(TPMNvramRWRequest, 1);
+    rwr->bdrv = bdrv;
+    rwr->is_write = true;
+    rwr->sector_num = sector_num;
+    rwr->num_sectors = NUM_SECTORS(size);
+    rwr->blob_w = blob;
+    rwr->size = size;
+    rwr->qiov = g_new0(QEMUIOVector, 1);
+    rwr->done = false;
+
+    return rwr;
+}
+
+/*
+ * Free read/write request memory
+ */
+static void tpm_nvram_rwrequest_free(TPMNvramRWRequest *rwr)
+{
+    g_free(rwr->qiov);
+    g_free(rwr);
+}
+
+/*
+ * Execute a read or write of TPM NVRAM blob data
+ */
+static void tpm_nvram_rwrequest_exec(TPMNvramRWRequest *rwr)
+{
+    if (rwr->is_write) {
+        tpm_nvram_do_co_write(rwr);
+    } else {
+        tpm_nvram_do_co_read(rwr);
+    }
+
+    qemu_mutex_lock(&rwr->completion_mutex);
+    qemu_cond_signal(&rwr->completion);
+    qemu_mutex_unlock(&rwr->completion_mutex);
+}
+
+/*
+ * Bottom-half callback that is invoked by QEMU's main thread to
+ * process TPM NVRAM read/write requests.
+ */
+static void tpm_nvram_rwrequest_callback(void *opaque)
+{
+    TPMNvramRWRequest *rwr, *next;
+
+    qemu_mutex_lock(&tpm_nvram_rwrequests_mutex);
+
+    QSIMPLEQ_FOREACH_SAFE(rwr, &tpm_nvram_rwrequests, list, next) {
+        QSIMPLEQ_REMOVE(&tpm_nvram_rwrequests, rwr, TPMNvramRWRequest, list);
+
+        qemu_mutex_unlock(&tpm_nvram_rwrequests_mutex);
+        tpm_nvram_rwrequest_exec(rwr);
+        qemu_mutex_lock(&tpm_nvram_rwrequests_mutex);
+    }
+
+    qemu_mutex_unlock(&tpm_nvram_rwrequests_mutex);
+}
+
+/*
+ * Schedule a bottom-half to read or write a blob to the TPM NVRAM drive
+ */
+static void tpm_nvram_rwrequest_schedule(TPMNvramRWRequest *rwr)
+{
+    qemu_mutex_lock(&tpm_nvram_rwrequests_mutex);
+    QSIMPLEQ_INSERT_TAIL(&tpm_nvram_rwrequests, rwr, list);
+    qemu_mutex_unlock(&tpm_nvram_rwrequests_mutex);
+
+    qemu_bh_schedule(tpm_nvram_bh);
+
+    /* Wait for completion of the read/write request */
+    qemu_mutex_lock(&rwr->completion_mutex);
+    qemu_cond_wait(&rwr->completion, &rwr->completion_mutex);
+    qemu_mutex_unlock(&rwr->completion_mutex);
+}
+
+/*
+ * Initialize a TPM NVRAM drive
+ */
+int tpm_nvram_init(BlockDriverState *bdrv)
+{
+    qemu_mutex_init(&tpm_nvram_rwrequests_mutex);
+    tpm_nvram_bh = qemu_bh_new(tpm_nvram_rwrequest_callback, NULL);
+
+    if (bdrv_is_read_only(bdrv)) {
+        DPRINTF("%s: TPM NVRAM drive '%s' is read-only\n", __func__,
+                bdrv->filename);
+        return -EPERM;
+    }
+
+    bdrv_lock_medium(bdrv, true);
+
+    DPRINTF("%s: TPM NVRAM drive '%s' initialized successfully\n", __func__,
+            bdrv->filename);
+
+    return 0;
+}
+
+/*
+ * Read a TPM NVRAM blob from the drive. On success, returns the
+ * number of sectors used by this blob.
+ */
+int tpm_nvram_read(BlockDriverState *bdrv, int64_t sector_num,
+                   uint8_t **blob, uint32_t size)
+{
+    int rc;
+    TPMNvramRWRequest *rwr;
+
+    *blob = NULL;
+
+    if (!bdrv) {
+        return -EPERM;
+    }
+
+    if (sector_num < 0) {
+        return -EINVAL;
+    }
+
+    rwr = tpm_nvram_rwrequest_init_read(bdrv, sector_num, blob, size);
+    tpm_nvram_rwrequest_schedule(rwr);
+    rc = rwr->rc;
+
+#ifdef TPM_NVRAM_DEBUG
+    if (rc < 0) {
+        DPRINTF("%s: TPM NVRAM read failed\n", __func__);
+    } else {
+        DPRINTF("%s: TPM NVRAM read successful: sector_num=%"PRIu64", "
+                "size=%"PRIu32", num_sectors=%d\n", __func__,
+                rwr->sector_num, rwr->size, rwr->num_sectors);
+    }
+#endif
+
+    tpm_nvram_rwrequest_free(rwr);
+    return rc;
+}
+
+/*
+ * Write a TPM NVRAM blob to the drive. On success, returns the
+ * number of sectors used by this blob.
+ */
+int tpm_nvram_write(BlockDriverState *bdrv, int64_t sector_num,
+                    uint8_t *blob, uint32_t size)
+{
+    int rc;
+    TPMNvramRWRequest *rwr;
+
+    if (!bdrv) {
+        return -EPERM;
+    }
+
+    if (sector_num < 0 || !blob) {
+        return -EINVAL;
+    }
+
+    rwr = tpm_nvram_rwrequest_init_write(bdrv, sector_num, blob, size);
+    tpm_nvram_rwrequest_schedule(rwr);
+    rc = rwr->rc;
+
+#ifdef TPM_NVRAM_DEBUG
+    if (rc < 0) {
+        DPRINTF("%s: TPM NVRAM write failed\n", __func__);
+    } else {
+        DPRINTF("%s: TPM NVRAM write successful: sector_num=%"PRIu64", "
+                "size=%"PRIu32", num_sectors=%d\n", __func__,
+                rwr->sector_num, rwr->size, rwr->num_sectors);
+    }
+#endif
+
+    tpm_nvram_rwrequest_free(rwr);
+    return rc;
+}
diff --git a/hw/tpm/tpm_nvram.h b/hw/tpm/tpm_nvram.h
new file mode 100644
index 0000000..fceb4d0
--- /dev/null
+++ b/hw/tpm/tpm_nvram.h
@@ -0,0 +1,25 @@
+/*
+ * TPM NVRAM - enables storage of persistent NVRAM data on an image file
+ *
+ * Copyright (C) 2013 IBM Corporation
+ *
+ * Authors:
+ *  Stefan Berger
+ *  Corey Bryant
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef TPM_TPM_NVRAM_H
+#define TPM_TPM_NVRAM_H
+
+#include "block/block.h"
+
+int tpm_nvram_init(BlockDriverState *bdrv);
+int tpm_nvram_read(BlockDriverState *bdrv, int64_t sector_num,
+                   uint8_t **blob, uint32_t size);
+int tpm_nvram_write(BlockDriverState *bdrv, int64_t sector_num,
+                    uint8_t *blob, uint32_t size);
+
+#endif
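
For context, here is a rough sketch of how a TPM backend might consume
these APIs. It is not part of the patch: the backend function names, the
sector layout, and the error handling are hypothetical, and it assumes
the caller has already obtained the BlockDriverState of the NVRAM drive
from its configuration. Note that tpm_nvram_read() and tpm_nvram_write()
queue a request, schedule a bottom half, and then block on a condition
variable until QEMU's main thread has completed the block I/O, so a
backend would call them from its own thread rather than from the main
loop.

#include "tpm_nvram.h"

/* Hypothetical layout: the permanent-state blob lives at sector 0 */
#define MY_TPM_PERM_STATE_SECTOR 0

/* One-time setup: fails with -EPERM if the drive is read-only */
static int my_tpm_backend_nvram_setup(BlockDriverState *bdrv)
{
    return tpm_nvram_init(bdrv);
}

/* Persist a blob; tpm_nvram_write() grows the drive if needed and
 * returns the number of sectors used on success */
static int my_tpm_backend_store_blob(BlockDriverState *bdrv,
                                     uint8_t *blob, uint32_t size)
{
    int rc = tpm_nvram_write(bdrv, MY_TPM_PERM_STATE_SECTOR, blob, size);

    return rc < 0 ? rc : 0;
}

/* Load a blob of known size; on success tpm_nvram_read() allocates
 * *blob with g_malloc() and the caller must g_free() it */
static int my_tpm_backend_load_blob(BlockDriverState *bdrv,
                                    uint8_t **blob, uint32_t size)
{
    int rc = tpm_nvram_read(bdrv, MY_TPM_PERM_STATE_SECTOR, blob, size);

    return rc < 0 ? rc : 0;
}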