From patchwork Thu Jul 19 12:15:18 2018
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 946242
From: guangrong.xiao@gmail.com
X-Google-Original-From: xiaoguangrong@tencent.com
To: pbonzini@redhat.com, mst@redhat.com, mtosatti@redhat.com
Cc: kvm@vger.kernel.org, Xiao Guangrong <xiaoguangrong@tencent.com>,
 qemu-devel@nongnu.org, peterx@redhat.com, dgilbert@redhat.com,
 wei.w.wang@intel.com, jiang.biao2@zte.com.cn
Date: Thu, 19 Jul 2018 20:15:18 +0800
Message-Id: <20180719121520.30026-7-xiaoguangrong@tencent.com>
X-Mailer: git-send-email 2.14.4
In-Reply-To: <20180719121520.30026-1-xiaoguangrong@tencent.com>
References: <20180719121520.30026-1-xiaoguangrong@tencent.com>
Subject: [Qemu-devel] [PATCH v2 6/8] migration: move handle of zero page to the thread

From: Xiao Guangrong <xiaoguangrong@tencent.com>

Detecting a zero page is not light work; move it into the compression
threads to speed up the main migration thread.

Signed-off-by: Xiao Guangrong <xiaoguangrong@tencent.com>
---
 migration/ram.c | 112 +++++++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 78 insertions(+), 34 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 5aa624b3b9..e1909502da 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -351,6 +351,7 @@ CompressionStats compression_counters;
 struct CompressParam {
     bool done;
     bool quit;
+    bool zero_page;
     QEMUFile *file;
     QemuMutex mutex;
     QemuCond cond;
@@ -392,7 +393,7 @@ static QemuThread *decompress_threads;
 static QemuMutex decomp_done_lock;
 static QemuCond decomp_done_cond;
 
-static void do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block,
+static bool do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block,
                                  ram_addr_t offset, uint8_t *source_buf);
 
 static void *do_data_compress(void *opaque)
@@ -400,6 +401,7 @@ static void *do_data_compress(void *opaque)
     CompressParam *param = opaque;
     RAMBlock *block;
     ram_addr_t offset;
+    bool zero_page;
 
     qemu_mutex_lock(&param->mutex);
     while (!param->quit) {
@@ -409,11 +411,12 @@
             param->block = NULL;
             qemu_mutex_unlock(&param->mutex);
 
-            do_compress_ram_page(param->file, &param->stream, block, offset,
-                                 param->originbuf);
+            zero_page = do_compress_ram_page(param->file, &param->stream,
+                                             block, offset, param->originbuf);
 
             qemu_mutex_lock(&comp_done_lock);
             param->done = true;
+            param->zero_page = zero_page;
             qemu_cond_signal(&comp_done_cond);
             qemu_mutex_unlock(&comp_done_lock);
 
@@ -1871,13 +1874,19 @@ static int ram_save_multifd_page(RAMState *rs, RAMBlock *block,
     return 1;
 }
 
-static void do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block,
+static bool do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block,
                                  ram_addr_t offset, uint8_t *source_buf)
 {
     RAMState *rs = ram_state;
     uint8_t *p = block->host + (offset & TARGET_PAGE_MASK);
+    bool zero_page = false;
     int ret;
 
+    if (save_zero_page_to_file(rs, f, block, offset)) {
+        zero_page = true;
+        goto exit;
+    }
+
     save_page_header(rs, f, block, offset | RAM_SAVE_FLAG_COMPRESS_PAGE);
 
     /*
@@ -1890,10 +1899,12 @@ static void do_compress_ram_page(QEMUFile *f, z_stream *stream, RAMBlock *block,
     if (ret < 0) {
         qemu_file_set_error(migrate_get_current()->to_dst_file, ret);
         error_report("compressed data failed!");
-        return;
+        return false;
     }
 
+exit:
     ram_release_pages(block->idstr, offset & TARGET_PAGE_MASK, 1);
+    return zero_page;
 }
 
 static void flush_compressed_data(RAMState *rs)
@@ -1917,10 +1928,20 @@ static void flush_compressed_data(RAMState *rs)
         qemu_mutex_lock(&comp_param[idx].mutex);
         if (!comp_param[idx].quit) {
             len = qemu_put_qemu_file(rs->f, comp_param[idx].file);
-            /* 8 means a header with RAM_SAVE_FLAG_CONTINUE. */
-            compression_counters.reduced_size += TARGET_PAGE_SIZE - len + 8;
-            compression_counters.pages++;
             ram_counters.transferred += len;
+
+            /*
+             * it's safe to fetch zero_page without holding comp_done_lock
+             * as there is no further request submitted to the thread,
+             * i.e., the thread should be waiting for a request at this point.
+             */
+            if (comp_param[idx].zero_page) {
+                ram_counters.duplicate++;
+            } else {
+                /* 8 means a header with RAM_SAVE_FLAG_CONTINUE. */
+                compression_counters.reduced_size += TARGET_PAGE_SIZE - len + 8;
+                compression_counters.pages++;
+            }
         }
         qemu_mutex_unlock(&comp_param[idx].mutex);
     }
@@ -1950,12 +1971,16 @@ retry:
             set_compress_params(&comp_param[idx], block, offset);
             qemu_cond_signal(&comp_param[idx].cond);
             qemu_mutex_unlock(&comp_param[idx].mutex);
-            pages = 1;
-            /* 8 means a header with RAM_SAVE_FLAG_CONTINUE. */
-            compression_counters.reduced_size += TARGET_PAGE_SIZE -
-                                                 bytes_xmit + 8;
-            compression_counters.pages++;
             ram_counters.transferred += bytes_xmit;
+            pages = 1;
+            if (comp_param[idx].zero_page) {
+                ram_counters.duplicate++;
+            } else {
+                /* 8 means a header with RAM_SAVE_FLAG_CONTINUE. */
+                compression_counters.reduced_size += TARGET_PAGE_SIZE -
+                                                     bytes_xmit + 8;
+                compression_counters.pages++;
+            }
             break;
         }
     }
@@ -2229,6 +2254,40 @@ static bool save_page_use_compression(RAMState *rs)
     return false;
 }
 
+/*
+ * Try to compress the page before posting it out; return true if the page
+ * has been properly handled by compression, otherwise false so that other
+ * paths can handle it.
+ */
+static bool save_compress_page(RAMState *rs, RAMBlock *block, ram_addr_t offset)
+{
+    if (!save_page_use_compression(rs)) {
+        return false;
+    }
+
+    /*
+     * When starting the process of a new block, the first page of
+     * the block should be sent out before other pages in the same
+     * block, and all the pages in the last block should have been
+     * sent out; keeping this order is important, because the 'cont'
+     * flag is used to avoid resending the block name.
+     *
+     * We post the first page as a normal page as compression will
+     * take much CPU resource.
+     */
+    if (block != rs->last_sent_block) {
+        flush_compressed_data(rs);
+        return false;
+    }
+
+    if (compress_page_with_multi_thread(rs, block, offset) > 0) {
+        return true;
+    }
+
+    compression_counters.busy++;
+    return false;
+}
+
 /**
  * ram_save_target_page: save one target page
  *
@@ -2249,15 +2308,8 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
         return res;
     }
 
-    /*
-     * When starting the process of a new block, the first page of
-     * the block should be sent out before other pages in the same
-     * block, and all the pages in last block should have been sent
-     * out, keeping this order is important, because the 'cont' flag
-     * is used to avoid resending the block name.
-     */
-    if (block != rs->last_sent_block && save_page_use_compression(rs)) {
-        flush_compressed_data(rs);
+    if (save_compress_page(rs, block, offset)) {
+        return 1;
     }
 
     res = save_zero_page(rs, block, offset);
@@ -2275,18 +2327,10 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
     }
 
     /*
-     * Make sure the first page is sent out before other pages.
-     *
-     * we post it as normal page as compression will take much
-     * CPU resource.
-     */
-    if (block == rs->last_sent_block && save_page_use_compression(rs)) {
-        res = compress_page_with_multi_thread(rs, block, offset);
-        if (res > 0) {
-            return res;
-        }
-        compression_counters.busy++;
-    } else if (migrate_use_multifd()) {
+     * do not use multifd for compression as the first page in the new
+     * block should be posted out before sending the compressed page
+     */
+    if (!save_page_use_compression(rs) && migrate_use_multifd()) {
         return ram_save_multifd_page(rs, block, offset);
     }
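
For context on the commit message's claim: declaring a page zero requires
reading every byte of it, so the check costs a full page scan per candidate
page on whichever thread runs it. The function below is a minimal,
word-at-a-time sketch of such a check, not QEMU's implementation (QEMU's
actual check is the vectorised buffer_is_zero() from util/bufferiszero.c,
reached here through save_zero_page_to_file()); it assumes len is a
multiple of sizeof(uint64_t), which any page size satisfies.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Illustrative zero-page check: every byte must be fetched before the
 * page can be declared zero, which is the per-page cost this patch
 * moves off the main migration thread.
 */
static bool page_is_zero(const void *buf, size_t len)
{
    const uint64_t *p = buf;
    size_t i;

    for (i = 0; i < len / sizeof(*p); i++) {
        if (p[i]) {
            return false;   /* first non-zero word ends the scan early */
        }
    }
    return true;            /* whole page scanned: it is all zeroes */
}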
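
The new comment in flush_compressed_data() is worth restating:
comp_param[idx].zero_page may be read under only the per-thread mutex
because, by the time the flush runs, every submitted request has completed
and the worker is blocked waiting for the next one, so nothing can be
concurrently writing the field. A condensed sketch of that pattern, with
hypothetical names (worker_t, fetch_result()) standing in for CompressParam
and its users:

#include <pthread.h>
#include <stdbool.h>

/* Hypothetical, condensed analogue of CompressParam's synchronisation. */
typedef struct {
    pthread_mutex_t mutex;  /* per-worker lock, like comp_param[idx].mutex */
    pthread_cond_t cond;    /* wakes the worker when a request arrives     */
    bool quit;              /* set at teardown                             */
    bool zero_page;         /* result; written by the worker only while it
                             * is servicing a request                      */
} worker_t;

/*
 * Called only after all outstanding requests have been drained: the
 * worker is parked in pthread_cond_wait() on 'cond' and cannot be
 * writing zero_page, so the per-worker mutex alone (with no global
 * done-lock) is enough to read the result safely.
 */
static bool fetch_result(worker_t *w)
{
    bool zero;

    pthread_mutex_lock(&w->mutex);
    zero = w->zero_page;
    pthread_mutex_unlock(&w->mutex);
    return zero;
}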
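
The ordering rule that save_compress_page() preserves comes from the stream
format: a page header carrying the 'cont' flag (RAM_SAVE_FLAG_CONTINUE)
means "same RAMBlock as the previous page" and omits the block name.
Compressed pages are buffered per worker and flushed later, so the first
page of a new block is posted as a normal page on the main stream, which
guarantees the destination learns the block name before any continuation
page arrives. A hypothetical sketch of such a header encoding follows; the
flag value and helpers are illustrative, not QEMU's save_page_header().

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SKETCH_FLAG_CONTINUE 0x20u  /* stand-in for RAM_SAVE_FLAG_CONTINUE */

static const char *last_sent_block; /* block named by the previous header */

/*
 * Illustrative page header: the block name travels only with the first
 * page of a block; later pages set the continue flag and omit it.
 */
static void put_page_header(FILE *f, const char *block_name, uint64_t offset)
{
    if (last_sent_block && strcmp(block_name, last_sent_block) == 0) {
        offset |= SKETCH_FLAG_CONTINUE;     /* same block: name omitted */
        fwrite(&offset, sizeof(offset), 1, f);
    } else {
        uint8_t len = (uint8_t)strlen(block_name);

        fwrite(&offset, sizeof(offset), 1, f);
        fwrite(&len, sizeof(len), 1, f);    /* name length, then name */
        fwrite(block_name, len, 1, f);
        last_sent_block = block_name;
    }
}

/*
 * If a compressed (buffered) page were allowed to open a new block, the
 * name-carrying header could reach the destination after continuation
 * pages that depend on it; posting the first page directly avoids that.
 */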