From patchwork Tue Oct 22 16:59:24 2013
X-Patchwork-Submitter: mrhines@linux.vnet.ibm.com
X-Patchwork-Id: 285470
From: "Michael R. Hines" <mrhines@linux.vnet.ibm.com>
To: qemu-devel@nongnu.org
Cc: aliguori@us.ibm.com, quintela@redhat.com, owasserm@redhat.com,
    onom@us.ibm.com, abali@us.ibm.com, mrhines@us.ibm.com,
    gokul@us.ibm.com, pbonzini@redhat.com, chegu_vinod@hp.com
Date: Tue, 22 Oct 2013 12:59:24 -0400
Message-Id: <1382461164-8854-1-git-send-email-mrhines@linux.vnet.ibm.com>
Subject: [Qemu-devel] [PATCH] rdma: rename 'x-rdma' => 'rdma'
Hines" As far as we can tell, all known bugs have been fixed, there as been very good participation in testing and running. 1. Parallel RDMA migrations are working 2. IPv6 migration is working 3. Libvirt patches are ready 4. virt-test is working Any objections to removing the experimental tag? There is one remaining bug: qemu-system-i386 does not compile with RDMA: I have very zero access to 32-bit hardware using RDMA, so this hasn't been much of a priority. It seems safer to *not* submit non-testable patch rather than submit submit a fix just for the sake of compiling =) Signed-off-by: Michael R. Hines --- docs/rdma.txt | 12 ++++++------ migration-rdma.c | 2 +- migration.c | 6 +++--- qapi-schema.json | 4 ++-- 4 files changed, 12 insertions(+), 12 deletions(-) diff --git a/docs/rdma.txt b/docs/rdma.txt index 8d1e003..2c2beba 100644 --- a/docs/rdma.txt +++ b/docs/rdma.txt @@ -66,7 +66,7 @@ bulk-phase round of the migration and can be enabled for extremely high-performance RDMA hardware using the following command: QEMU Monitor Command: -$ migrate_set_capability x-rdma-pin-all on # disabled by default +$ migrate_set_capability rdma-pin-all on # disabled by default Performing this action will cause all 8GB to be pinned, so if that's not what you want, then please ignore this step altogether. @@ -93,12 +93,12 @@ $ migrate_set_speed 40g # or whatever is the MAX of your RDMA device Next, on the destination machine, add the following to the QEMU command line: -qemu ..... -incoming x-rdma:host:port +qemu ..... -incoming rdma:host:port Finally, perform the actual migration on the source machine: QEMU Monitor Command: -$ migrate -d x-rdma:host:port +$ migrate -d rdma:host:port PERFORMANCE =========== @@ -120,8 +120,8 @@ For example, in the same 8GB RAM example with all 8GB of memory in active use and the VM itself is completely idle using the same 40 gbps infiniband link: -1. x-rdma-pin-all disabled total time: approximately 7.5 seconds @ 9.5 Gbps -2. x-rdma-pin-all enabled total time: approximately 4 seconds @ 26 Gbps +1. rdma-pin-all disabled total time: approximately 7.5 seconds @ 9.5 Gbps +2. rdma-pin-all enabled total time: approximately 4 seconds @ 26 Gbps These numbers would of course scale up to whatever size virtual machine you have to migrate using RDMA. @@ -407,7 +407,7 @@ socket is broken during a non-RDMA based migration. TODO: ===== -1. 'migrate x-rdma:host:port' and '-incoming x-rdma' options will be +1. 'migrate rdma:host:port' and '-incoming rdma' options will be renamed to 'rdma' after the experimental phase of this work has completed upstream. 2. 
diff --git a/migration-rdma.c b/migration-rdma.c
index f94f3b4..eeb4302 100644
--- a/migration-rdma.c
+++ b/migration-rdma.c
@@ -3412,7 +3412,7 @@ void rdma_start_outgoing_migration(void *opaque,
     }
 
     ret = qemu_rdma_source_init(rdma, &local_err,
-        s->enabled_capabilities[MIGRATION_CAPABILITY_X_RDMA_PIN_ALL]);
+        s->enabled_capabilities[MIGRATION_CAPABILITY_RDMA_PIN_ALL]);
 
     if (ret) {
         goto err;
diff --git a/migration.c b/migration.c
index b4f8462..d9c7a62 100644
--- a/migration.c
+++ b/migration.c
@@ -81,7 +81,7 @@ void qemu_start_incoming_migration(const char *uri, Error **errp)
     if (strstart(uri, "tcp:", &p))
         tcp_start_incoming_migration(p, errp);
 #ifdef CONFIG_RDMA
-    else if (strstart(uri, "x-rdma:", &p))
+    else if (strstart(uri, "rdma:", &p))
         rdma_start_incoming_migration(p, errp);
 #endif
 #if !defined(WIN32)
@@ -423,7 +423,7 @@ void qmp_migrate(const char *uri, bool has_blk, bool blk,
     if (strstart(uri, "tcp:", &p)) {
         tcp_start_outgoing_migration(s, p, &local_err);
 #ifdef CONFIG_RDMA
-    } else if (strstart(uri, "x-rdma:", &p)) {
+    } else if (strstart(uri, "rdma:", &p)) {
         rdma_start_outgoing_migration(s, p, &local_err);
 #endif
 #if !defined(WIN32)
@@ -501,7 +501,7 @@ bool migrate_rdma_pin_all(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_X_RDMA_PIN_ALL];
+    return s->enabled_capabilities[MIGRATION_CAPABILITY_RDMA_PIN_ALL];
 }
 
 bool migrate_auto_converge(void)
diff --git a/qapi-schema.json b/qapi-schema.json
index 145eca8..4bae4b1 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -615,7 +615,7 @@
 #          This feature allows us to minimize migration traffic for certain work
 #          loads, by sending compressed difference of the pages
 #
-# @x-rdma-pin-all: Controls whether or not the entire VM memory footprint is
+# @rdma-pin-all: Controls whether or not the entire VM memory footprint is
 #          mlock()'d on demand or all at once. Refer to docs/rdma.txt for usage.
 #          Disabled by default. Experimental: may (or may not) be renamed after
 #          further testing is complete. (since 1.6)
@@ -632,7 +632,7 @@
 # Since: 1.2
 ##
 { 'enum': 'MigrationCapability',
-  'data': ['xbzrle', 'x-rdma-pin-all', 'auto-converge', 'zero-blocks'] }
+  'data': ['xbzrle', 'rdma-pin-all', 'auto-converge', 'zero-blocks'] }
 
 ##
 # @MigrationCapabilityStatus
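For anyone who wants to sanity-check the migration.c hunks in
isolation, here is a minimal, self-contained sketch of the URI dispatch
after the rename. The strstart() below is a stand-in reimplementation
of QEMU's helper from util/cutils.c (match a prefix, optionally return
a pointer past it), and the printf() calls are placeholders for the
real tcp_start_*/rdma_start_* entry points:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for QEMU's strstart() from util/cutils.c: returns true when
 * @str begins with @prefix and, if @ptr is non-NULL, sets *ptr to the
 * first character after the prefix. */
static bool strstart(const char *str, const char *prefix, const char **ptr)
{
    size_t len = strlen(prefix);

    if (strncmp(str, prefix, len) != 0) {
        return false;
    }
    if (ptr) {
        *ptr = str + len;
    }
    return true;
}

int main(void)
{
    /* Placeholder URI; after this patch the scheme is plain "rdma:". */
    const char *uri = "rdma:host:port";
    const char *p;

    if (strstart(uri, "tcp:", &p)) {
        printf("TCP migration to '%s'\n", p);    /* tcp_start_outgoing_migration() */
    } else if (strstart(uri, "rdma:", &p)) {
        printf("RDMA migration to '%s'\n", p);   /* rdma_start_outgoing_migration() */
    } else {
        printf("unknown transport '%s'\n", uri); /* old "x-rdma:" lands here */
    }
    return 0;
}

Compiled standalone, "rdma:host:port" routes to the RDMA branch while
the old "x-rdma:" spelling falls through to the unknown-transport case,
which is exactly what stale scripts will see after this rename.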