From patchwork Sun Jun 12 15:38:41 2016
From: Lin Ma <lma@suse.com>
To: kwolf@redhat.com, qemu-devel@nongnu.org
Date: Sun, 12 Jun 2016 23:38:41 +0800
Message-Id: <1465745921-22733-1-git-send-email-lma@suse.com>
Subject: [Qemu-devel] [PATCH] Show all of snapshot info on every block device in output of 'info snapshots'
Currently, the output of 'info snapshots' only shows the fully available
snapshots. In my opinion there are two disadvantages:

1. It is opaque: it hides some snapshot information from users. It is not
convenient if users want to see all of the snapshot information on every
block device via the monitor.

2. It uses the snapshot id to determine whether a snapshot is 'fully
available', which produces incorrect output in some scenarios. For
instance:

(qemu) info block
drive_image1 (#block113): /opt/vms/SLES12-SP1-JeOS-x86_64-GM/disk0.qcow2 (qcow2)
    Cache mode:       writeback
drive_image2 (#block349): /opt/vms/SLES12-SP1-JeOS-x86_64-GM/disk1.qcow2 (qcow2)
    Cache mode:       writeback
(qemu)
(qemu) info snapshots
There is no snapshot available.
(qemu)
(qemu) snapshot_blkdev_internal drive_image1 snap1
(qemu)
(qemu) info snapshots
There is no suitable snapshot available
(qemu)
(qemu) savevm checkpoint-1
(qemu)
(qemu) info snapshots
ID        TAG                 VM SIZE                DATE       VM CLOCK
1         snap1                     0 2016-05-22 16:57:31   00:01:30.567
(qemu)

$ qemu-img snapshot -l disk0.qcow2
Snapshot list:
ID        TAG                 VM SIZE                DATE       VM CLOCK
1         snap1                     0 2016-05-22 16:57:31   00:01:30.567
2         checkpoint-1           165M 2016-05-22 16:58:07   00:02:06.813

$ qemu-img snapshot -l disk1.qcow2
Snapshot list:
ID        TAG                 VM SIZE                DATE       VM CLOCK
1         checkpoint-1              0 2016-05-22 16:58:07   00:02:06.813

This patch uses the snapshot name instead of the snapshot id to determine
whether a snapshot is 'fully available' and, following Kevin's suggestion,
makes the output more detailed/accurate:

(qemu) info snapshots
List of snapshots present on all disks:
ID        TAG                 VM SIZE                DATE       VM CLOCK
--        checkpoint-1           165M 2016-05-22 16:58:07   00:02:06.813

List of partial (non-loadable) snapshots on 'drive_image1':
ID        TAG                 VM SIZE                DATE       VM CLOCK
1         snap1                     0 2016-05-22 16:57:31   00:01:30.567

Signed-off-by: Lin Ma <lma@suse.com>
---
 migration/savevm.c | 77 +++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 74 insertions(+), 3 deletions(-)

diff --git a/migration/savevm.c b/migration/savevm.c
index 6c21231..8444c62 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2153,12 +2153,28 @@ void hmp_delvm(Monitor *mon, const QDict *qdict)
 void hmp_info_snapshots(Monitor *mon, const QDict *qdict)
 {
     BlockDriverState *bs, *bs1;
+    BdrvNextIterator it1;
     QEMUSnapshotInfo *sn_tab, *sn;
-    int nb_sns, i;
+    bool no_snapshot = true;
+    int nb_sns, nb_sns_tmp, i;
     int total;
     int *available_snapshots;
     AioContext *aio_context;
 
+    typedef struct SnapshotEntry {
+        QEMUSnapshotInfo *sn;
+        QTAILQ_ENTRY(SnapshotEntry) next;
+    } SnapshotEntry;
+
+    typedef struct ImageEntry {
+        char *imagename;
+        QTAILQ_ENTRY(ImageEntry) next;
+        QTAILQ_HEAD(, SnapshotEntry) snapshots;
+    } ImageEntry;
+
+    ImageEntry *image_entry;
+    SnapshotEntry *snapshot_entry;
+
     bs = bdrv_all_find_vmstate_bs();
     if (!bs) {
         monitor_printf(mon, "No available block device supports snapshots\n");
@@ -2175,7 +2191,34 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict)
         return;
     }
 
-    if (nb_sns == 0) {
+    QTAILQ_HEAD(image_list, ImageEntry) image_list =
+        QTAILQ_HEAD_INITIALIZER(image_list);
+
+    for (bs1 = bdrv_first(&it1); bs1; bs1 = bdrv_next(&it1)) {
+        AioContext *ctx = bdrv_get_aio_context(bs1);
+
+        aio_context_acquire(ctx);
+        nb_sns_tmp = 0;
+        if (bdrv_can_snapshot(bs1)) {
+            nb_sns_tmp = bdrv_snapshot_list(bs1, &sn);
+            if (nb_sns_tmp > 0) {
+                no_snapshot = false;
+                ImageEntry *ie = g_malloc0(sizeof(*ie));
+                ie->imagename = g_strdup(bdrv_get_device_name(bs1));
+                QTAILQ_INIT(&ie->snapshots);
+                QTAILQ_INSERT_TAIL(&image_list, ie, next);
+                int x;
+                for (x = 0; x < nb_sns_tmp; x++) {
+                    SnapshotEntry *se = g_malloc0(sizeof(*se));
+                    se->sn = &sn[x];
+                    QTAILQ_INSERT_TAIL(&ie->snapshots, se, next);
+                }
+            }
+        }
+        aio_context_release(ctx);
+    }
+
+    if (no_snapshot) {
         monitor_printf(mon, "There is no snapshot available.\n");
         return;
     }
@@ -2183,17 +2226,28 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict)
     available_snapshots = g_new0(int, nb_sns);
     total = 0;
     for (i = 0; i < nb_sns; i++) {
-        if (bdrv_all_find_snapshot(sn_tab[i].id_str, &bs1) == 0) {
+        if (bdrv_all_find_snapshot(sn_tab[i].name, &bs1) == 0) {
             available_snapshots[total] = i;
             total++;
         }
     }
 
+    monitor_printf(mon, "List of snapshots present on all disks:\n");
+
     if (total > 0) {
         bdrv_snapshot_dump((fprintf_function)monitor_printf, mon, NULL);
         monitor_printf(mon, "\n");
         for (i = 0; i < total; i++) {
             sn = &sn_tab[available_snapshots[i]];
+            QTAILQ_FOREACH(image_entry, &image_list, next) {
+                QTAILQ_FOREACH(snapshot_entry, &image_entry->snapshots, next) {
+                    if (!strcmp(sn->name, snapshot_entry->sn->name)) {
+                        QTAILQ_REMOVE(&image_entry->snapshots, snapshot_entry,
+                                      next);
+                    }
+                }
+            }
+            pstrcpy(sn->id_str, sizeof(sn->id_str), "--");
             bdrv_snapshot_dump((fprintf_function)monitor_printf, mon, sn);
             monitor_printf(mon, "\n");
         }
@@ -2201,6 +2255,23 @@ void hmp_info_snapshots(Monitor *mon, const QDict *qdict)
         monitor_printf(mon, "There is no suitable snapshot available\n");
     }
 
+    monitor_printf(mon, "\n");
+    QTAILQ_FOREACH(image_entry, &image_list, next) {
+        if (QTAILQ_EMPTY(&image_entry->snapshots)) {
+            continue;
+        }
+        monitor_printf(mon, "List of partial (non-loadable) snapshots on '%s':",
+                       image_entry->imagename);
+        monitor_printf(mon, "\n");
+        bdrv_snapshot_dump((fprintf_function)monitor_printf, mon, NULL);
+        monitor_printf(mon, "\n");
+        QTAILQ_FOREACH(snapshot_entry, &image_entry->snapshots, next) {
+            bdrv_snapshot_dump((fprintf_function)monitor_printf, mon,
+                               snapshot_entry->sn);
+            monitor_printf(mon, "\n");
+        }
+    }
+
     g_free(sn_tab);
     g_free(available_snapshots);