From patchwork Tue Mar 27 09:10:33 2018
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 892033
From: guangrong.xiao@gmail.com
X-Google-Original-From: xiaoguangrong@tencent.com
To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com
Date: Tue, 27 Mar 2018 17:10:33 +0800
Message-Id: <20180327091043.30220-1-xiaoguangrong@tencent.com>
X-Mailer: git-send-email 2.14.3
Subject: [Qemu-devel] [PATCH v2 00/10] migration: improve and cleanup compression
Cc: kvm@vger.kernel.org, Xiao Guangrong, qemu-devel@nongnu.org, peterx@redhat.com, dgilbert@redhat.com, wei.w.wang@intel.com, jiang.biao2@zte.com.cn

From: Xiao Guangrong

Changelog in v2:

Thanks to the reviews from Dave, Peter, Wei and Jiang Biao, the changes
in this version are:
1) include the performance numbers in the cover letter
2) add some comments to explain how z_stream->opaque is used in the
   patchset
3) allocate an internal per-thread buffer to store the data to be
   compressed
4) add a new patch that moves some code to ram_save_host_page() so that
   'goto' can be omitted gracefully
5) split the optimization of compression and decompression into two
   separate patches
6) refine and correct code styles

This is the first part of our work to improve compression and make it
more useful in production.

The first patch resolves the problem that the migration thread spends
too much CPU to compress memory whenever it jumps to a new block, which
leaves the network badly underutilized.

The second patch fixes the performance issue that too many VM-exits
happen during live migration when compression is used. It is caused by
huge amounts of memory being returned to the kernel frequently, because
memory is allocated and freed for every single call to compress2().

The remaining patches clean the code up considerably.

Performance numbers:

We have tested it on my desktop, i7-4790 + 16G, by locally live
migrating a VM which has 8 vCPUs + 6G memory, with max-bandwidth
limited to 350.
During the migration, a workload with 8 threads repeatedly writes to the
whole 6G memory in the VM. Before this patchset, its bandwidth is
~25 mbps; after applying, the bandwidth is ~50 mbps.

We also collected the perf data for patches 2 and 3 on our production
hosts. Before the patchset:

+  57.88%  kqemu  [kernel.kallsyms]  [k] queued_spin_lock_slowpath
+  10.55%  kqemu  [kernel.kallsyms]  [k] __lock_acquire
+   4.83%  kqemu  [kernel.kallsyms]  [k] flush_tlb_func_common
-   1.16%  kqemu  [kernel.kallsyms]  [k] lock_acquire
   - lock_acquire
      - 15.68% _raw_spin_lock
         + 29.42% __schedule
         + 29.14% perf_event_context_sched_out
         + 23.60% tdp_page_fault
         + 10.54% do_anonymous_page
         +  2.07% kvm_mmu_notifier_invalidate_range_start
         +  1.83% zap_pte_range
         +  1.44% kvm_mmu_notifier_invalidate_range_end

After applying our work:

+  51.92%  kqemu  [kernel.kallsyms]  [k] queued_spin_lock_slowpath
+  14.82%  kqemu  [kernel.kallsyms]  [k] __lock_acquire
+   1.47%  kqemu  [kernel.kallsyms]  [k] mark_lock.clone.0
+   1.46%  kqemu  [kernel.kallsyms]  [k] native_sched_clock
+   1.31%  kqemu  [kernel.kallsyms]  [k] lock_acquire
+   1.24%  kqemu  libc-2.12.so       [.] __memset_sse2
-  14.82%  kqemu  [kernel.kallsyms]  [k] __lock_acquire
   - __lock_acquire
      - 99.75% lock_acquire
         - 18.38% _raw_spin_lock
            + 39.62% tdp_page_fault
            + 31.32% __schedule
            + 27.53% perf_event_context_sched_out
            +  0.58% hrtimer_interrupt

We can see that the TLB-flush and mmu-lock contention have gone.
Xiao Guangrong (10):
  migration: stop compressing page in migration thread
  migration: stop compression to allocate and free memory frequently
  migration: stop decompression to allocate and free memory frequently
  migration: detect compression and decompression errors
  migration: introduce control_save_page()
  migration: move some code to ram_save_host_page()
  migration: move calling control_save_page to the common place
  migration: move calling save_zero_page to the common place
  migration: introduce save_normal_page()
  migration: remove ram_save_compressed_page()

 migration/qemu-file.c |  43 ++++-
 migration/qemu-file.h |   6 +-
 migration/ram.c       | 479 ++++++++++++++++++++++++++++++--------------------
 3 files changed, 322 insertions(+), 206 deletions(-)